Lidar Annotation Overview

Data Types and the Frame Objects

The input to our sensor fusion application is a series of lidar points, radar points, and camera images that will be rendered and labeled. Because of the size of the data involved, we require that it be JSON-encoded (or protobuf-encoded) and accessible via a URL passed in through the task request. In other words, to annotate a point cloud frame, format the data in one of our accepted formats, upload the data as a file, and then send a request to the Scale API, similar to the way we would process image files.

Below are our definitions for the various object types in the JSON format, and for an entire point cloud frame. The protobuf format is largely identical, and can be downloaded here; the difference is that camera intrinsic parameters are encoded as a oneof within the CameraImage message type, so no camera_model field is needed.
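As a sketch of this flow, the following Python snippet builds a task request payload referencing JSON-encoded frame files by URL. The frame URLs, labels, and instruction text are made-up placeholders, and the endpoint in the comment is an assumption; only the attachments / attachment_type convention follows this document.

```python
import json

# Illustrative sketch only: URLs and labels below are placeholders.
frame_urls = [
    "https://example.com/frames/frame-0.json",  # one JSON-encoded frame per URL
    "https://example.com/frames/frame-1.json",
]

payload = {
    "instruction": "Label all vehicles and pedestrians.",
    "attachments": frame_urls,       # the frame data, uploaded ahead of time
    "attachment_type": "json",       # or "protobuf" for protobuf-encoded frames
    "labels": ["car", "pedestrian"],
}

# The task request itself would then be POSTed to the Scale API with an
# HTTP client, e.g. (endpoint and auth scheme are assumptions):
# requests.post("https://api.scale.com/v1/task/lidarannotation",
#               json=payload, auth=(API_KEY, ""))
print(json.dumps(payload, indent=2))
```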

Definition: Vector2

Vector2 objects are used to represent 2D positions, and are JSON objects with two properties.

Lidar Segmentation Overview

Start From Completed LIDAR Task

Instead of creating Lidar Segmentation tasks from scratch, we can bring already completed work from a Lidar Annotation task into the Lidar Segmentation task. This will persist the cuboids that you got from the annotation step. Furthermore, you can specify which subset of frames should be included as the source material.

The following differences from the previous approach will take effect:

  • A new parameter named lidar_task contains the identifier of the LiDAR Annotation task to be used as the source. The lidar_task must be in a completed state.

  • You don't need to add the attachments and attachment_type parameters as the Frame objects will be taken from the source LiDAR Annotation task.

  • A new optional parameter, lidar_task_frames, allows you to specify an array of frame indexes selecting which subset of frames to use from the LiDAR Annotation task. If omitted, all frames are used.

    • For example, if we start from a completed LiDAR Annotation task with five frames and want to use all frames except the last one, the parameter would look like lidar_task_frames: [0, 1, 2, 3].

  • The labels parameter must be a superset of the set used on the original LiDAR Annotation task.
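Putting the bullets above together, a payload for such a task might look like the following Python sketch. The task id and label names are placeholders; only the parameter names follow the text above.

```python
# Placeholder values throughout; only the parameter names follow the docs above.
source_labels = ["car", "pedestrian"]  # labels used on the original annotation task

segmentation_payload = {
    "lidar_task": "task_abc123",        # source LiDAR Annotation task (must be completed)
    "lidar_task_frames": [0, 1, 2, 3],  # all frames of a five-frame task except the last
    "labels": source_labels + ["bicycle"],  # must be a superset of the original labels
    # Note: no "attachments" or "attachment_type" keys; the Frame objects
    # are taken from the source task.
}

# Sanity check: the new label set is a superset of the original one.
assert set(source_labels) <= set(segmentation_payload["labels"])
```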


Vector2 properties:

  • x: x value
  • y: y value

Example Vector2 object

{
  "x": 1,
  "y": 2
}

2D / 3D Lidar Linking Overview

Inherited Lidar Attributes

Object attributes set in a lidarannotation task can be inherited by the corresponding objects in lidarlinking tasks created from that lidarannotation task.

Option 1:

Inherited lidar attributes can be enabled by using the copy_all_lidar_task_attributes flag when creating the lidarlinking task.

If using this option, you can NOT define the same attributes in the lidarlinking task, because they will be copied over automatically from the lidarannotation task.
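For example, a linking-task payload using this flag might include the following. This is a sketch only: the lidar_task field and task id are assumptions, not confirmed by this document.

```python
# Sketch of Option 1; values are placeholders.
linking_payload = {
    "lidar_task": "task_abc123",              # assumed reference to the source task
    "copy_all_lidar_task_attributes": True,
    # Deliberately no "annotation_attributes" duplicating the lidar task's
    # attributes: they are copied over automatically.
}
```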

Option 2:

Inherited lidar attributes can also be enabled by setting copy_from_lidar_task: true on one or more annotation_attributes defined in the lidar linking task.

These attributes will be copied from the lidarannotation task to the lidarlinking task and cannot be modified in annotations derived from the lidarannotation task.

In both cases, these attributes can still be set freely on any new annotations added in the linking task.

Example with inherited lidar attributes

{ // ... in the linking task payload
  "annotation_attributes": {
    "Size": {
      "copy_from_lidar_task": true,
      "type": "category",
      "description": "An attribute that was set in the original lidar task. Note that the attribute name must match the original attribute name. If this attribute is copied from an annotation existing in the lidar task, its value cannot be changed.",
      "choices": [ "Large", "Small" ]
    },
    "Shape": {
      "type": "category",
      "description": "This is a new attribute that is specific to the 2d task",
      "choices": [ "Parallelogram", "Square", "Rhombus" ]
    }
  }
}
