If you’re just starting to learn about the nuances between Artificial Intelligence, Machine Learning, and Deep Learning, this AI Dictionary, along with our other resources, can help you get started.
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. The broad term “AI” tends to mean something different every decade. As of the late 2010s, AI has mostly been used as a synonym for Deep Learning.
Also known as: systematic error
Errors in estimated models that do not go away even with infinite data. See also Variance.
A box denoting the location and extent of an object in an image, defined by four numbers: either the locations of two corners, or a center location and a size.
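The two four-number encodings can be sketched as a pair of conversion functions. This is a minimal illustration; the function names and the (x1, y1, x2, y2) corner convention are assumptions, not any particular library's API.

```python
def corners_to_center(x1, y1, x2, y2):
    """Convert (top-left, bottom-right) corners to (center, size)."""
    cx = (x1 + x2) / 2
    cy = (y1 + y2) / 2
    return cx, cy, x2 - x1, y2 - y1

def center_to_corners(cx, cy, w, h):
    """Convert (center, size) back to two corners."""
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
```

The two formats carry the same information, so converting one way and then back recovers the original box.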
An annotation type for images or text that assigns each entry to one of a finite set of groups.
Annotation that refers to two or more entities of the same type.
Computer Vision is one branch of Artificial Intelligence as it applies to images and videos. See also Perception (Robotics).
An annotation type for a 3-dimensional box, like a cube but with independently varying length, width, and height.
Artificial Neural Networks as they are used in the 2010s. “Deep” because they are structured as a long series of non-linear functions composed with one another. These are Machine Learning models with very high capacity; well-known subtypes include Convolutional Neural Networks (CNNs / convnets) and Recurrent Neural Networks (RNNs). They are able to fit massive datasets without saturating, using large amounts of computational power from SIMD processors such as GPUs.
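The “long series of non-linear functions composed with one another” can be shown in miniature. This is a toy sketch, not a trained model: the weights are arbitrary illustrative values, and a real deep network would have many more layers and learned parameters.

```python
import math

def layer(x, w, b):
    # one "layer": an affine transform followed by a non-linearity (tanh)
    return math.tanh(w * x + b)

def tiny_deep_net(x):
    # three layers composed together; depth = length of this chain
    h1 = layer(x, w=1.5, b=0.1)
    h2 = layer(h1, w=-2.0, b=0.3)
    return layer(h2, w=0.8, b=-0.2)
```

Because tanh is bounded, every layer's output stays in (-1, 1), while the composition of several such layers can represent far more complex functions than any single layer.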
A detection model tells you whether an object exists at any given location in a picture. Often combined with non-max suppression to eliminate duplicate detections.
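The duplicate-elimination step can be sketched as greedy non-max suppression: keep the highest-scoring box, then drop any remaining box that overlaps it too much. This is a minimal illustration; the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring boxes, dropping overlapping duplicates."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

Two nearly identical boxes on the same object collapse to the single higher-scoring one, while boxes on distinct objects are all kept.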
Rules notable because they are useful and practical, though with no guarantee of optimality or rigor.
LiDAR, short for Light Detection and Ranging, is a remote sensing method that uses light in the form of a pulsed laser to measure distance to a target. The output takes the form of a set of 4D points consisting of 3D position and reflectivity.
For more on LiDAR Annotation, take a look at our Sensor Fusion Annotation page.
Programming of computer functions by using representative data instead of explicitly coding the steps of an algorithm.
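As a toy illustration of programming with data rather than hand-coded steps, the sketch below fits the slope and intercept of a line from example (input, output) pairs by least squares. The function name and the example data are illustrative only.

```python
def fit_line(xs, ys):
    """Fit y = slope * x + intercept to example pairs by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Representative data generated by y = 2x + 1; the parameters are never
# written into the code, they are recovered from the examples.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
slope, intercept = fit_line(xs, ys)
```

The same fitting code works for any other set of examples; only the data changes, not the program.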
A subfield of Artificial Intelligence (AI) that processes text in order to extract abstract syntactic and semantic content.
An approach, popularized in the 1980s, for constructing functions from simple non-linear modular parts, inspired by how biological neurons function. As of the 2010s, hundreds of layers are used, and the field is known as “Deep Learning.”
Conversion of images of printed, handwritten or typed text to machine-native text.
Processing of raw data from sensors (cameras / lidar / radar) into a higher-level representation (entities / terrain / surfaces / volumes) for use by other parts of a robotic control system.
An annotation type in which every pixel in an image is classified and labeled with the type of object the pixel belongs to. Contrast with Instance Segmentation.
See more on our Semantic Segmentation Annotation page.
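A semantic-segmentation label is simply a per-pixel class map. The tiny mask below is a made-up illustration with arbitrary class ids (0 = background, 1 = road, 2 = car); note that the two "car" pixels per row are not distinguished as separate car instances, which is exactly what Instance Segmentation would add.

```python
from collections import Counter

# A 3x4 "image" where every pixel carries a class id.
mask = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
]

# How many pixels belong to each class.
counts = Counter(px for row in mask for px in row)
```

Every pixel gets exactly one label, so the class counts always sum to the number of pixels in the image.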
Data used by Machine Learning algorithms to fit function parameters. In traditional software development, developers explicitly specify the instructions a computer follows to produce outputs from inputs. With machine learning (ML), instead of explicit instructions, we provide the ML model with examples (pairs of inputs and outputs), and the model learns to produce the same outputs from the corresponding inputs. These examples are called training data.
Techniques for using a model trained on one problem for a different problem.
In contrast with Supervised Learning, where models learn how to produce outputs given inputs, there is no output for unsupervised learning. Given only inputs, unsupervised learning methods learn how the different dimensions of the input relate to each other, revealing the underlying structure in the inputs. Unsupervised learning methods include Principal Component Analysis (PCA) and Generative Adversarial Networks (GANs).
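The structure-finding idea can be sketched with PCA: with no labels at all, it finds the directions along which the inputs vary most, via an eigendecomposition of the covariance matrix. This is a minimal sketch, assuming NumPy; the example data lying on the line y = 2x is made up for illustration.

```python
import numpy as np

def pca(X, n_components):
    """Return the top principal directions of the rows of X (no labels used)."""
    Xc = X - X.mean(axis=0)                  # center each input dimension
    cov = np.cov(Xc, rowvar=False)           # covariance between dimensions
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # sort directions by variance, descending
    return eigvecs[:, order[:n_components]]

# Unlabeled inputs that happen to lie on the line y = 2x.
X = np.array([[i, 2.0 * i] for i in range(10)])
top = pca(X, 1)  # recovers a direction proportional to (1, 2)
```

PCA is given only the inputs, yet it recovers the hidden one-dimensional structure in the data.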
Also known as: random error
Errors in estimated models that go away with infinite data. See Bias.