3D Sensor Fusion

The advanced annotation platform for 3D sensor data.

Why Scale

Accelerate the Development of AI Applications

Scale 3D Sensor Fusion is the industry standard-setting, advanced annotation platform for 3D sensor data. It supports inputs from any LiDAR, RADAR, and camera, and machine learning teams trust it to accelerate and ensure the successful deployment of accurate perception models.

  • ML-Powered Data Labeling

    The Scale AI Platform is built by machine learning engineers for machine learning engineers. Scale 3D Sensor Fusion leverages machine learning to power pre-labeling and tooling that produce large volumes of high-quality training data.

  • Automated Quality Pipeline

    Tech-enabled quality assurance systems are built into the product to reduce costly human review without sacrificing quality. Customers also get transparency into quality through a QA tool and dashboard metrics.

  • Highly Trained Workforce

    A large, globally distributed workforce of highly trained labelers, with years of experience annotating 3D sensor data in the most challenging scenarios, lets customers scale up multiple large data pipelines without growing their internal teams.

Use Cases

Computer Vision

Develop highly accurate perception models to locate and identify various objects, understand relationships between objects, predict behavior and more.

Detection & Tracking

Develop 3D detection and tracking models with cuboid or segmentation annotation. The Dependent Tasks API can also be used to label some parts of 3D Sensor Fusion tasks with cuboids (e.g. vehicles) and others with segmentation (e.g. vegetation) to leverage the benefits of both annotation types.
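
A minimal sketch of this pattern, assuming a configured scaleapi Node client: a cuboid task is created first, then a segmentation task over the same scene. The createLidarSegmentationTask method and the dependent_on parameter are illustrative assumptions, not confirmed API names.

// Sketch: cuboids for vehicles, then a dependent segmentation task for vegetation.
// createLidarSegmentationTask and dependent_on are assumed names for illustration.
client.createLidarAnnotationTask({
  instruction: 'Draw cuboids around all vehicles in each frame.',
  labels: ['car', 'truck', 'bus'],
  meters_per_unit: 2.3,
  max_distance_meters: 30
}, (err, cuboidTask) => {
  if (err) return console.error(err);
  // Segment the remaining classes in a follow-up task on the same scene.
  client.createLidarSegmentationTask({
    instruction: 'Segment all vegetation.',
    labels: ['vegetation'],
    dependent_on: cuboidTask.task_id
  }, (err, segmentationTask) => {
    if (err) return console.error(err);
    console.log('Created dependent task', segmentationTask.task_id);
  });
});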

Prediction & Planning

Develop prediction and planning models using 3D attributes. Cuboid attributes for behavior and intent include gaze detection and turn signals. We also offer linked instance IDs that track the same object across long tasks, and linking of related objects (e.g. a vehicle and its trailer).
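
A minimal sketch of requesting behavior and intent attributes on cuboids, assuming a configured scaleapi Node client. The annotation_attributes shape shown here is illustrative, not the exact 3D Sensor Fusion schema.

// Sketch: cuboid task with assumed intent attributes (turn signal, gaze direction).
client.createLidarAnnotationTask({
  instruction: 'Label all vehicles and pedestrians, including intent attributes.',
  labels: ['car', 'pedestrian'],
  meters_per_unit: 2.3,
  max_distance_meters: 30,
  // annotation_attributes is shown as an assumed schema for illustration.
  annotation_attributes: {
    turn_signal: {
      description: 'Which turn signal is active on the vehicle?',
      choices: ['none', 'left', 'right', 'hazards']
    },
    gaze_direction: {
      description: 'Where is the pedestrian looking?',
      choices: ['toward_ego', 'away_from_ego', 'unclear']
    }
  }
}, (err, task) => {
  if (err) return console.error(err);
  console.log('Created task', task.task_id);
});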

Lane & Boundary Detection

Estimate the geometric structure of lanes and boundaries using 3D Sensor Fusion segmentation annotation, with LiDAR intensity as an optional input for greater accuracy. Talk To Us if you need line & spline or polygon annotation in 3D.
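
A minimal sketch of a lane and boundary segmentation task, assuming a configured scaleapi Node client. The createLidarSegmentationTask method and the with_intensity flag are illustrative assumptions, not confirmed parameter names.

// Sketch: segmentation task for lane and boundary geometry, with LiDAR intensity
// requested as an extra input. with_intensity is an assumed flag for illustration.
client.createLidarSegmentationTask({
  instruction: 'Segment lane markings and road boundaries.',
  labels: ['lane_marking', 'road_boundary', 'road_surface'],
  with_intensity: true
}, (err, task) => {
  if (err) return console.error(err);
  console.log('Created task', task.task_id);
});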

How It Works

Easy to Start, Optimize and Scale

Build models you can trust while maximizing operational efficiency and reducing the cost of ML projects.

"Label all cars, pedestrians, and cyclists in each frame."
// Assumes the scaleapi Node client; the API key is a placeholder.
const scaleapi = require('scaleapi');
const client = scaleapi.ScaleClient('YOUR_SCALE_API_KEY');

client.createLidarAnnotationTask({
  instruction: 'Label all cars, pedestrians, and cyclists in each frame.',
  labels: ['car', 'pedestrian', 'cyclist'],
  meters_per_unit: 2.3,
  max_distance_meters: 30
}, (err, task) => {
  if (err) return console.error(err);
  // do something with the created task, e.g. store its ID for later retrieval
  console.log('Created task', task.task_id);
});

Get Labeled Data Today