Scale
Lidar Annotation Overview
Data Types and the Frame Objects
The input to our sensor fusion application is a series of lidar points, radar points, and camera images that will be rendered and labeled. Because of the size of the objects involved, we require that the data be JSON-encoded (or protobuf-encoded) and accessible via a URL passed in through the task request. In short, to annotate a point cloud frame: format the data in one of our accepted formats, upload the data as a file, and then send a request to the Scale API, similar to the way we process image files.

Below are the definitions of our various object types for the JSON format, and for an entire point cloud frame. The protobuf format is largely identical, and can be downloaded here; the difference is that camera intrinsic parameters are encoded as a
Definition:

| Key | Type | Description |
| --- | --- | --- |
| x | float | x value |
| y | float | y value |

Example Vector2 object:

```json
{
  "x": 1,
  "y": 2
}
```
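As a small illustration (our own sketch, not part of any official Scale client), a `Vector2` can be mirrored directly in code and round-tripped through the JSON form above:

```python
# Illustrative container matching the Vector2 JSON definition above.
# This dataclass is our own sketch, not an official Scale type.
import json
from dataclasses import dataclass, asdict

@dataclass
class Vector2:
    x: float  # x value
    y: float  # y value

# Round-trip the example object through its JSON encoding.
v = Vector2(x=1, y=2)
encoded = json.dumps(asdict(v))           # '{"x": 1, "y": 2}'
decoded = Vector2(**json.loads(encoded))
print(decoded)
```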
Lidar Segmentation Overview
Start From Completed LIDAR Task
Instead of creating Lidar Segmentation tasks from scratch, we can bring already-completed work from a Lidar Annotation task into the Lidar Segmentation task. This persists the cuboids produced in the annotation step. Furthermore, you can specify which subset of frames should be included as the source material.
The following differences from the previous approach will take effect:

- A new parameter named `lidar_task` will contain the identifier of the LiDAR Annotation task to be used as the source. The `lidar_task` needs to be in a `completed` state.
- You don't need to add the `attachments` and `attachment_type` parameters, as the `Frame` objects will be taken from the source LiDAR Annotation task.
- A new optional parameter, `lidar_task_frames`, allows you to specify an array of frame indexes to select which subset of frames you want to use from the LiDAR Annotation task. If omitted, all frames will be used. For example, assuming we start from a completed LiDAR Annotation task with five frames and we want to use all frames except the last one, the parameter would look like `lidar_task_frames: [0, 1, 2, 3]`.
- The `labels` parameter needs to be a superset of the set used on the original LiDAR Annotation task.
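The points above can be sketched as a request body. This is a hypothetical illustration only: the helper function, task ID, and label names are made up, and the exact endpoint and remaining task fields come from the Scale API reference.

```python
# Hypothetical sketch of the request body for a Lidar Segmentation task
# created from a completed LiDAR Annotation task. Field names follow the
# parameters described above; the task ID and labels are placeholders.
import json

def build_segmentation_payload(lidar_task_id, labels, lidar_task_frames=None):
    payload = {
        # Source LiDAR Annotation task; must be in a `completed` state.
        "lidar_task": lidar_task_id,
        # Must be a superset of the labels on the original task.
        "labels": labels,
        # `attachments` / `attachment_type` are deliberately omitted:
        # the Frame objects come from the source task.
    }
    if lidar_task_frames is not None:
        # Subset of frame indexes; omit to use all frames.
        payload["lidar_task_frames"] = lidar_task_frames
    return payload

# Five-frame source task, using all frames except the last one:
body = build_segmentation_payload(
    "576ba74eec471ff9b01557cc",
    labels=["car", "pedestrian", "cyclist"],
    lidar_task_frames=[0, 1, 2, 3],
)
print(json.dumps(body, indent=2))
```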
2D / 3D Lidar Linking Overview
Inherited Lidar Attributes
Object attributes set in a LiDAR Annotation task can be inherited by the annotations in the 2D/3D Linking task. There are two options:
Option 1:
Inherited lidar attributes can be enabled by using the `copy_from_lidar_task` flag on an attribute in the linking task's `annotation_attributes` (shown in the example below). If using this option, you can NOT set the same attributes' values in the linking task: a value copied from an annotation in the lidar task cannot be changed.
Option 2:
Setting the attribute values only on the lidar task. These attributes will be copied from the lidar task annotations over to the corresponding linked annotations.
In both cases, the attributes can still be set on any new annotations added in the linking task.
Example with inherited lidar attributes
```json
{
  // ... in the linking task payload
  "annotation_attributes": {
    "Size": {
      "copy_from_lidar_task": true,
      "type": "category",
      "description": "An attribute that was set in the original lidar task. Note that the attribute name must match the original attribute name. If this attribute is copied from an annotation existing in the lidar task, its value cannot be changed.",
      "choices": [ "Large", "Small" ]
    },
    "Shape": {
      "type": "category",
      "description": "This is a new attribute that is specific to the 2d task",
      "choices": [ "Parallelogram", "Square", "Rhombus" ]
    }
  }
}
```
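To make the inheritance rule concrete, here is a small sketch that composes such an `annotation_attributes` block, marking which attributes are copied from the lidar task. The helper function is our own illustration, not part of any official Scale SDK.

```python
# Illustrative helper for composing the `annotation_attributes` block of a
# linking task payload. Hypothetical helper, not part of an official SDK.
import json

def category_attribute(description, choices, copy_from_lidar_task=False):
    attr = {"type": "category", "description": description, "choices": choices}
    if copy_from_lidar_task:
        # The attribute name must match the name in the original lidar task;
        # copied values cannot be changed in the linking task.
        attr["copy_from_lidar_task"] = True
    return attr

annotation_attributes = {
    # Inherited from the original lidar task:
    "Size": category_attribute(
        "Attribute set in the original lidar task.",
        ["Large", "Small"],
        copy_from_lidar_task=True,
    ),
    # New attribute specific to the 2d task:
    "Shape": category_attribute(
        "New attribute specific to the 2d task.",
        ["Parallelogram", "Square", "Rhombus"],
    ),
}
print(json.dumps({"annotation_attributes": annotation_attributes}, indent=2))
```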