Image Annotation Overview

This is the recommended task type for annotating images with vector geometric shapes. The available geometries are box, polygon, line, point, cuboid, and ellipse.

This endpoint creates an imageannotation task. Given an image, Scale will annotate the image with the geometries you specify.

The required parameters for this task are attachment and geometries.

Body Params

project (string)

The name of the project to associate this task with.


batch (string)

The name of the batch to associate this task with. Note that if a batch is specified, you need not specify the project, as the task will automatically be associated with the batch's project. For Scale Rapid projects, specifying a batch is required. See the Batches section for more details.


instruction (string)

A markdown-enabled string or iframe-embedded Google Doc explaining how to do the task. You can use markdown to show example images, give structure to your instructions, and more. See our instruction best practices for more details. For Scale Rapid projects, DO NOT set this field unless you specifically want to override the project-level instructions.


callback_url (string)

The full URL (including the scheme http:// or https://) or email address of the callback that will be used when the task is completed.


attachment (string, required)

A URL to the image you'd like to be annotated.


context_attachments (array of objects)

An array of objects in the form {"attachment": "<link to actual attachment>"} to show to taskers as a reference. Context images themselves cannot be labeled; they are displayed to taskers in the UI alongside the task. You cannot use the task's attachment URL as a context attachment's URL.


geometries (object, required)

This object is used to define which objects need to be annotated and which annotation geometries (box, polygon, line, point, cuboid, or ellipse) should be used for each annotation. Further description of each geometry can be found in its respective section below.


annotation_attributes (object)

This field is used to add additional attributes that you would like to capture per annotation. See Annotation Attributes for more details about annotation attributes.


links (object)

Use this field to define links between annotations. See Links for more details about links.


hypothesis (object)

Editable annotations that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.


layer (object)

Read-only annotations to be pre-drawn on the task. See the Layers section for more details.


base_annotations (object)

Editable annotations, with the option to be "locked", that a task should be initialized with. This is useful when you've run a model to prelabel the task and want annotators to refine those prelabels. Must contain the annotations field, which has the same format as the annotations field in the response.


can_add_base_annotations (boolean)

Whether or not new annotations can be added to the task if base_annotations are used. If set to true, new annotations can be added to the task in addition to base_annotations. If set to false, new annotations will not be able to be added to the task.


can_edit_base_annotations (boolean)

Whether or not base_annotations can be edited in the task. If set to true, base_annotations can be edited by the tasker (position of annotation, attributes, etc). If set to false, all aspects of base_annotations will be locked.


can_edit_base_annotation_labels (boolean)

Whether or not base_annotations labels can be edited in the task. If set to true, the label of base_annotations can be edited by the tasker. If set to false, the label will be locked.


can_delete_base_annotations (boolean)

Whether or not base_annotations can be removed from the task. If set to true, base_annotations can be deleted from the task. If set to false, base_annotations cannot be deleted from the task.
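
For example, a minimal sketch of a payload fragment that combines base_annotations with these flags (the box values here are illustrative):

{
  ...
  "base_annotations": {
    "annotations": [
      {
        "type": "box",
        "label": "car",
        "left": 100,
        "top": 50,
        "width": 80,
        "height": 40
      }
    ]
  },
  "can_add_base_annotations": true,
  "can_edit_base_annotations": true,
  "can_edit_base_annotation_labels": false,
  "can_delete_base_annotations": false,
  ...
}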


image_metadata (object)

This field accepts specified image metadata; supported fields include:
- date_time - displays the date and time the image was taken
- resolution - configures the units of the ruler tools; resolution_ratio holds the number of resolution_units corresponding to one pixel. For example, with {resolution_ratio: 3, resolution_unit: 'm'}, one pixel in the image corresponds to three meters in the real world.
- location - the real-world location where this image was captured, in the standard geographic coordinate system, e.g. {lat: 37.77, long: -122.43}
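
Putting the supported fields together, a minimal image_metadata sketch (the date_time value shown is an assumed format; the resolution and location shapes follow the examples above):

{
  ...
  "image_metadata": {
    "date_time": "2023-05-01 12:30:00",
    "resolution": { "resolution_ratio": 3, "resolution_unit": "m" },
    "location": { "lat": 37.77, "long": -122.43 }
  },
  ...
}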


metadata (object)

A set of key/value pairs that you can attach to a task object. It can be useful for storing additional information about the task in a structured format. Max 10KB. See the Metadata section for more detail.


padding (integer)

The amount of padding in pixels added to all sides of the image. Overridden by paddingX and paddingY if they are set.

paddingX (integer)

The amount of padding in pixels added to the left and right of the image. Overrides padding if set.


paddingY (integer)

The amount of padding in pixels added to the top and bottom of the image. Overrides padding if set.


priority (integer)

A value of 10, 20, or 30 that defines the priority of a task within a project. The higher the number, the higher the priority.


unique_id (string)

An arbitrary ID that you can assign to a task and then query for later. This ID must be unique across all projects under your account, otherwise the task submission will be rejected. See Avoiding Duplicate Tasks for more details.


clear_unique_id_on_error (boolean)

If set to true and the task errors out after being submitted, the unique_id on the task will be unset. This parameter allows workflows where you can re-submit the same unique_id to recover from errors automatically.


tags (array of strings)

Arbitrary labels that you can assign to a task. At most 5 tags are allowed per task. You can query tasks with specific tags through the task retrieval API.


Request

POST /v1/task/imageannotation
import requests

url = "https://api.scale.com/v1/task/imageannotation"

# Scale authenticates requests with HTTP Basic auth: the API key is
# the username and the password is left empty.
api_key = "YOUR_SCALE_API_KEY"

payload = {
    "instruction": "**Instructions:** Please label all the things",
    "attachment": "https://i.imgur.com/iDZcXfS.png",
    "geometries": {
        # Labels and constraints below are illustrative; see the
        # geometry sections that follow for every available option,
        # including cuboids with camera intrinsics.
        "box": {
            "objects_to_annotate": ["vehicle", "pedestrian"],
            "min_height": 5,
            "min_width": 5,
            "can_rotate": False,
            "integer_pixels": False
        },
        "polygon": {
            "objects_to_annotate": ["road"],
            "min_vertices": 3,
            "max_vertices": 15
        },
        "line": {
            "objects_to_annotate": ["lane_marking"],
            "min_vertices": 2,
            "max_vertices": 15
        }
    },
    "priority": 10
}
headers = {
    "accept": "application/json",
    "content-type": "application/json"
}

response = requests.post(url, json=payload, headers=headers, auth=(api_key, ""))

print(response.text)

Response

{
  "task_id": "string",
  "created_at": "string",
  "type": "imageannotation",
  "status": "pending",
  "instruction": "string",
  "is_test": false,
  "urgency": "standard",
  "metadata": {},
  "project": "string",
  "callback_url": "string",
  "updated_at": "string",
  "work_started": false,
  "params": {
    "attachment_type": "image",
    "attachment": "http://i.imgur.com/3Cpje3l.jpg",
    "geometries": {
      "box": {
        "objects_to_annotate": [
          "string"
        ],
        "min_height": 5,
        "min_width": 5
      },
      "polygon": {
        "objects_to_annotate": [
          "string"
        ]
      },
      "point": {
        "objects_to_annotate": [
          "string"
        ]
      }
    },
    "annotation_attributes": {
      "additionalProp": {
        "type": "category",
        "description": "string",
        "choice": "string"
      }
    }
  }
}
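
After creating a task, you will typically store the returned task_id and use it to check on the task later. A minimal sketch, continuing from the request above and using the standard task retrieval endpoint (GET /v1/task/{task_id}); api_key is a placeholder for your Scale API key:

task_id = response.json()["task_id"]

# Retrieve the task later; once its status is "completed", the
# annotations are available under the task's "response" field.
task = requests.get(
    f"https://api.scale.com/v1/task/{task_id}",
    auth=(api_key, ""),  # HTTP Basic auth: API key as username, empty password
).json()
print(task["status"])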

Boxes

Given a box entry in params.geometries, Scale will annotate your image or video with boxes and return the position and dimensions of the boxes.

Request Parameters

objects_to_annotate (array of strings or objects)

A list of strings or LabelDescription objects.


min_height (integer)

The minimum height in pixels of the bounding boxes you'd like to be made.


min_width (integer)

The minimum width in pixels of the bounding boxes you'd like to be made.


can_rotate (boolean)

Allows a tasker to rotate the bounding box.


integer_pixels (boolean)

Response fields denoting box location and size (top, left, width, height) will be returned as integers instead of floats. This does not work with rotated boxes.


Response Fields

uuid (string): A computer-generated unique identifier for this annotation. In video annotation tasks, this can be used to track the same object across frames.

type (string): String indicating the geometry type: box.

label (string): The label of this annotation, chosen from the objects_to_annotate array for its geometry. In video annotation tasks, any annotation objects with the same uuid will have the same label across all frames.

attributes (object): See the Annotation Attributes section for more details about the attributes response field.

left (float): The distance, in pixels, between the left border of the bounding box and the left border of the image.

top (float): The distance, in pixels, between the top border of the bounding box and the top border of the image.

width (float): The width, in pixels, of the bounding box.

height (float): The height, in pixels, of the bounding box.

If can_rotate was set to true, the following fields will supersede the above fields:

rotation (float): The clockwise rotation, in radians.

vertices (array of {x, y} objects): The vertices of the rotated bounding box.

left (float): The distance, in pixels, between the left border of the unrotated bounding box and the left border of the image.

top (float): The distance, in pixels, between the top border of the unrotated bounding box and the top border of the image.

Example Box Request

{
  "geometries": {
    "box": {
      "objects_to_annotate": [
        "traffic_sign",
        {
          "choice": "vehicle",
          "subchoices": [
          	"Car",
            {
              "choice": "truck_suv",
              "display": "truck or SUV"
            }
          ]
        },
        "pedestrian"
      ],
      "min_height": 5,
      "min_width": 5,
      "can_rotate": false
    },
    ...
  },
  ...
}

Example Box Response

{
  "response": {
    "annotations": [
      {
        "type": "box",
        "label": "pedestrian",
        "attributes": {
            "moving": "yes"
        },
        "left": 2,
        "top": 4,
        "width": 3,
        "height": 5,
        "uuid": "65ec1f52-5902-4b39-bea9-ab6b4d58ef42"
      },
      {
        "type": "box",
        "label": "car",
        "attributes": {
            "moving": "yes"
        },
        "left": 7,
        "top": 5,
        "width": 14,
        "height": 5,
        "uuid": "0a6cd019-a014-4c67-bd49-c269ba08028a"
      },
      { ... },
      { ... }
    ]
  },
  "task_id": "5774cc78b01249ab09f089dd",
  "task": {
    // populated task for convenience
    ...
  }
}

Example Rotated Box Response

{
  "response": {
    "annotations" : [ 
      {
        "label" : "car",
        "attributes" : {},
        "uuid" : "122a4270-f9b2-4f66-a9ca-2e06f0de66e5",
        "width" : 121.878523862864,
        "height" : 71.6961921895555,
        "rotation" : 1.2440145049532,
        "left" : 613.440037825633,
        "top" : 199.208745812549,
        "type" : "box",
        "vertices" : [ 
          {
            "x" : 688.769014855216,
            "y" : 165.835344251165
          }, 
          {
            "x" : 727.891633787782,
            "y" : 281.264089660824
          }, 
          {
             "x" : 659.989584658913,
            "y" : 304.27833956349
          }, 
          {
            "x" : 620.866965726348,
            "y" : 188.84959415383
          }
        ]
      },
      { ... },
      { ... }
    ]
  },
  "task_id": "5774cc78b01249ab09f089dd",
  "task": {
    // populated task for convenience
    ...
  }
}
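
As a sanity check on the rotated-box schema, the side lengths of the quadrilateral formed by vertices should match the returned width and height. A minimal sketch in Python:

import math

def side_lengths(vertices):
    # Distances between consecutive vertices, wrapping around from
    # the last vertex back to the first.
    n = len(vertices)
    return [
        math.dist(
            (vertices[i]["x"], vertices[i]["y"]),
            (vertices[(i + 1) % n]["x"], vertices[(i + 1) % n]["y"]),
        )
        for i in range(n)
    ]

# For the example annotation above, this returns approximately
# [121.9, 71.7, 121.9, 71.7], i.e. [width, height, width, height].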

Polygons

Given a polygon entry in params.geometries, Scale will annotate your image or video with polygons and return the vertices of the polygons.

Request Parameters

objects_to_annotate (array of strings or objects)

A list of strings or LabelDescription objects.


min_vertices (integer)

The minimum number of vertices in a valid polygon annotation for your request.


max_vertices (integer)

The maximum number of vertices in a valid polygon annotation for your request. Must be at least min_vertices.


Response Fields

uuid (string): A computer-generated unique identifier for this annotation. In video annotation tasks, this can be used to track the same object across frames.

type (string): String indicating the geometry type: polygon.

label (string): The label of this annotation, chosen from the objects_to_annotate array for its geometry. In video annotation tasks, any annotation objects with the same uuid will have the same label across all frames.

attributes (object): See the Annotation Attributes section for more details about the attributes response field.

vertices (array): An array of vertex objects describing the vertices of the polygon, listed in the order they were annotated. In other words, the point order will be either clockwise or counter-clockwise for each annotation.

Definition: Vertex

x (number): The distance, in pixels, between the vertex and the left border of the image.

y (number): The distance, in pixels, between the vertex and the top border of the image.

Example Polygon Request

{
  "geometries": {
    "polygon": {
      "objects_to_annotate": [
        "traffic_sign",
        {
          "choice": "vehicle",
          "subchoices": [
          	"Car",
            {
              "choice": "truck_suv",
              "display": "truck or SUV"
            }
          ]
        },
        "pedestrian"
      ],
      "min_vertices": 4,
      "max_vertices": 15
    },
    ...
  },
  ...
}

Example Polygon Response

{
  "response": {
    "annotations": [
      {
        "type": "polygon",
        "label": "car",
        "vertices": [
            {
                "x": 123,
                "y": 10
            },
            {
                "x": 140,
                "y": 49
            },
            {
                "x": 67,
                "y": 34
            }
        ],
        "uuid": "65ec1f52-5902-4b39-bea9-ab6b4d58ef42"
      },
      { ... },
      { ... }
    ]
  },
  "task_id": "5774cc78b01249ab09f089dd",
  "task": {
    // task inlined for convenience
    ...
  }
}
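
Because the vertices are returned in a consistent clockwise or counter-clockwise order, you can compute a polygon's area in square pixels directly with the shoelace formula. A minimal sketch:

def polygon_area(vertices):
    # Shoelace formula; taking the absolute value makes the result
    # independent of clockwise vs. counter-clockwise vertex order.
    area = 0.0
    n = len(vertices)
    for i in range(n):
        j = (i + 1) % n
        area += vertices[i]["x"] * vertices[j]["y"]
        area -= vertices[j]["x"] * vertices[i]["y"]
    return abs(area) / 2.0

# polygon_area(annotation["vertices"]) -> 1296.0 for the example above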

Lines

Given a line entry in params.geometries, Scale will annotate your image or video with polylines (segmented lines) and return the vertices of the lines.

Request Parameters

objects_to_annotate (array of strings or objects)

A list of strings or LabelDescription objects.


min_vertices (integer)

The minimum number of vertices in a valid line annotation for your request.


max_vertices (integer)

The maximum number of vertices in a valid line annotation for your request. Must be at least min_vertices.


Response Fields

uuid (string): A computer-generated unique identifier for this annotation. In video annotation tasks, this can be used to track the same object across frames.

type (string): String indicating the geometry type: line.

label (string): The label of this annotation, chosen from the objects_to_annotate array for its geometry. In video annotation tasks, any annotation objects with the same uuid will have the same label across all frames.

attributes (object): See the Annotation Attributes section for more details about the attributes response field.

vertices (array): An array of vertex objects describing the vertices of the line, listed in the order they were annotated.

Definition: Vertex

x (number): The distance, in pixels, between the vertex and the left border of the image.

y (number): The distance, in pixels, between the vertex and the top border of the image.

Example Line Request

{
  "geometries": {
    "line": {
      "objects_to_annotate": [
        "unmarked_lane",
        {
          "choice": "marked lanes",
          "subchoices": [
            "solid",
            {
              "choice": "dashed",
              "display": "dashed or dotted"
            }
          ]
        },
        "shoulder"
      ],
      "min_vertices": 2,
      "max_vertices": 15
    },
    ...
  },
  ...
}

Example Line Response

{
  "response": {
    "annotations": [
      {
        "type": "line",
        "label": "solid line",
        "vertices": [
            {
                "x": 123,
                "y": 10
            },
            {
                "x": 140,
                "y": 49
            },
            {
                "x": 67,
                "y": 34
            }
        ],
        "uuid": "65ec1f52-5902-4b39-bea9-ab6b4d58ef42"
      },
      { ... },
      { ... }
    ]
  },
  "task_id": "5774cc78b01249ab09f089dd",
  "task": {
    // populated task for convenience
    ...
  }
}
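
Since a line's vertices are returned in the order they were annotated, the total length of the polyline is the sum of its consecutive segment lengths. A minimal sketch:

import math

def polyline_length(vertices):
    # Sum of segment lengths between consecutive vertices; a line is
    # open, so unlike a polygon there is no wrap-around segment.
    return sum(
        math.dist(
            (vertices[i]["x"], vertices[i]["y"]),
            (vertices[i + 1]["x"], vertices[i + 1]["y"]),
        )
        for i in range(len(vertices) - 1)
    )

# polyline_length(annotation["vertices"]) -> about 117.1 for the example above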

Ellipses

Given an ellipse entry in params.geometries, Scale will annotate your image or video with ellipses and return the extremal points of the ellipses. The ellipses may be rotated relative to the X and Y axes.

Request Parameters

objects_to_annotate (array of strings or objects)

A list of strings or LabelDescription objects.


Response Fields

uuid (string): A computer-generated unique identifier for this annotation. In video annotation tasks, this can be used to track the same object across frames.

type (string): String indicating the geometry type: ellipse.

label (string): The label of this annotation, chosen from the objects_to_annotate array for its geometry. In video annotation tasks, any annotation objects with the same uuid will have the same label across all frames.

attributes (object): See the Annotation Attributes section for more details about the attributes response field.

vertices (array): A list of Vertex objects of length 4 describing the extremal vertices of the ellipse.

Example Ellipse Request

{
    ...
    "geometries": {
        "ellipse": {
            "objects_to_annotate": ["wheel"]
        }
    },
    "annotation_attributes": {
        "position": {
            "type": "category",
            "description": "What is the position of this wheel?",
            "choices": [
              "front_left",
              "front_right",
              "back_left",
              "back_right",
            ]
        }
    },
    ...
}

Example Ellipses Response

{
  "response": {
    "annotations": [
      {
        "type": "ellipse",
        "label": "wheel",
        "attributes": {
            "position": "front_left"
        },
        "vertices": [
            {
                "x": 123,
                "y": 92
            },
            {
                "x": 173,
                "y": 113
            },
            {
                "x": 123,
                "y": 134
            },
            {
                "x": 73,
                "y": 113
            }
        ],
        "uuid": "65ec1f52-5902-4b39-bea9-ab6b4d58ef42"
      },
      { ... },
      { ... }
    ]
  },
  "task_id": "5774cc78b01249ab09f089dd",
  "task": {
    // task inlined for convenience
    ...
  }
}
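
The four extremal vertices are the endpoints of the ellipse's two axes, so you can recover the center and semi-axis lengths from them. A minimal sketch, assuming (as in the example above) that vertices 0/2 and 1/3 are opposite endpoints of the same axis:

import math

def ellipse_parameters(vertices):
    # The center is the mean of the four extremal points.
    cx = sum(v["x"] for v in vertices) / 4.0
    cy = sum(v["y"] for v in vertices) / 4.0
    # Semi-axis lengths are the distances from the center to one
    # endpoint of each axis.
    semi_a = math.dist((vertices[0]["x"], vertices[0]["y"]), (cx, cy))
    semi_b = math.dist((vertices[1]["x"], vertices[1]["y"]), (cx, cy))
    return {"center": (cx, cy), "semi_axes": (semi_a, semi_b)}

# For the example above: center (123.0, 113.0), semi-axes (21.0, 50.0).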

Cuboids

Given a cuboid entry in params.geometries, Scale will annotate your image or video with perspective cuboids and return the vertices of the cuboids. If camera intrinsics and extrinsics are provided as well, Scale will return scale-invariant 3D coordinates with respect to the camera, i.e., assuming the camera is at the origin. See https://scale.com/blog/3d-cuboids-annotations for a detailed explanation of how we can augment 2D cuboid responses.

Request Parameters

objects_to_annotate (array of strings or objects)

A list of strings or LabelDescription objects.


min_height (integer)

The minimum height in pixels of the cuboids you'd like to be made.


min_width (integer)

The minimum width in pixels of the cuboids you'd like to be made.


camera_intrinsics (object)

An object that defines camera intrinsics, in format {fx: number, fy: number, cx: number, cy: number, scalefactor: number, skew: number} (skew defaults to 0, scalefactor defaults to 1). scalefactor is used if the image sent is of different dimensions from the original photo (if the attachment is half the original, set scalefactor to 2) to correct the focal lengths and offsets. Use in conjunction with camera_rotation_quaternion and camera_height to get perspective-corrected cuboids and 3d points.


camera_rotation_quaternion (object)

An object that defines the rotation of the camera in relation to the world, expressed as a quaternion in the format {w: number, x: number, y: number, z: number}. Note that the z-axis of the camera frame represents the camera's optical axis. Use in conjunction with camera_intrinsics and camera_height to get perspective-corrected cuboids and 3d points.


camera_height (integer)

The height of the camera above the ground, in meters. Use in conjunction with camera_rotation_quaternion and camera_intrinsics to get perspective-corrected cuboids and 3d points.
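
For intuition, these three parameters describe a standard pinhole camera model. A hedged sketch of how such a model projects a camera-frame 3D point to pixel coordinates (this is our reading of the parameters above, not Scale's exact internal math):

def project_to_pixels(point_3d, intrinsics):
    # Standard pinhole projection; z is the optical axis, per the
    # camera_rotation_quaternion note above. scalefactor rescales the
    # focal lengths and offsets to match the attachment's dimensions.
    s = intrinsics.get("scalefactor", 1)
    fx, fy = intrinsics["fx"] / s, intrinsics["fy"] / s
    cx, cy = intrinsics["cx"] / s, intrinsics["cy"] / s
    skew = intrinsics.get("skew", 0) / s
    x, y, z = point_3d["x"], point_3d["y"], point_3d["z"]
    return {
        "x": (fx * x + skew * y) / z + cx,
        "y": fy * y / z + cy,
    }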


Response Fields

uuid (string): A computer-generated unique identifier for this annotation. In video annotation tasks, this can be used to track the same object across frames.

type (string): String indicating the geometry type: cuboid.

label (string): The label of this annotation, chosen from the objects_to_annotate array for its geometry. In video annotation tasks, any annotation objects with the same uuid will have the same label across all frames.

attributes (object): See the Annotation Attributes section for more details about the attributes response field.

vertices (array of Vertex objects): A list of Vertex objects defining all visible vertices of the cuboid. See the Vertex definition below for more details.

edges (array of Edge objects): A list of Edge objects defining the edges of the cuboid. See the Edge definition below for more details.

points_2d (array of {x, y} coordinate objects): If camera_rotation_quaternion, camera_intrinsics, and camera_height were provided, contains projected 2D coordinates of all 8 vertices of the cuboid after perspective correction. See the diagram below for the order in which the points are returned.

points_3d (array of {x, y, z} coordinate objects): If camera_rotation_quaternion, camera_intrinsics, and camera_height were provided, contains 3D coordinates (arbitrarily scaled, relative to the camera location) of all 8 vertices of the cuboid after perspective correction. See the diagram below for the order in which the points are returned.

Definition: Vertex

x (number): The distance, in pixels, between the vertex and the left border of the image.

y (number): The distance, in pixels, between the vertex and the top border of the image.

type (string): Always vertex.

description (string): An enum describing the position of the vertex, which is one of: face-topleft, face-bottomleft, face-topright, face-bottomright, side-topcorner, side-bottomcorner.

Definition: Edge

x1 (number): The distance, in pixels, between the first vertex of the edge and the left border of the image.

y1 (number): The distance, in pixels, between the first vertex of the edge and the top border of the image.

x2 (number): The distance, in pixels, between the second vertex of the edge and the left border of the image.

y2 (number): The distance, in pixels, between the second vertex of the edge and the top border of the image.

type (string): Always edge.

description (string): An enum describing the position of the edge, which is one of: face-top, face-bottom, face-left, face-right, side-top, side-bottom.

Example Cuboid Request

{
  ...
  "geometries": {
    "cuboid": {
      "objects_to_annotate": [
        "car"
      ],
      "min_height": 10,
      "min_width": 10,
      "camera_intrinsics": {
        "fx": 986.778503418,
        "fy": 984.4254150391,
        "cx": 961.078918457,
        "cy": 586.9694824219,
        "skew": 0,
        "scale_factor": 1
      },
      "camera_rotation_quaternion": {
        "w": 0.0197866653,
        "x": 0.0181939654,
        "y": 0.6981190587,
        "z": -0.715476937
      },
      "camera_height": -0.2993970777
    }
  },
  ...
}

Points on the cuboid are returned in this order for both points_2d and points_3d:

       3-------2
      /|      /|
     / |     / |
    0-------1  |
    |  7----|--6
    | /     | /
    4-------5
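
Under that ordering, the 12 edges of the cuboid can be drawn by connecting fixed index pairs. A minimal sketch (the face groupings are our reading of the diagram above):

# 0-1-2-3 form the top face and 4-5-6-7 the bottom face; the
# remaining four edges connect the two faces vertically.
CUBOID_EDGES = [
    (0, 1), (1, 2), (2, 3), (3, 0),  # top face
    (4, 5), (5, 6), (6, 7), (7, 4),  # bottom face
    (0, 4), (1, 5), (2, 6), (3, 7),  # vertical edges
]

def cuboid_segments(points_2d):
    # Pairs of {x, y} points, one pair per edge, ready for drawing.
    return [(points_2d[i], points_2d[j]) for i, j in CUBOID_EDGES]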

Example Cuboid Response

{
  ...,
  "response": {
    "annotations": [
      {
        "label": "car",
        "vertices": [
          {
            "description": "face-topleft",
            "y": 270,
            "x": 293,
            "type": "vertex"
          },
          {
            "description": "face-bottomleft",
            "y": 437,
            "x": 293,
            "type": "vertex"
          },
          {
            "description": "face-topright",
            "y": 270,
            "x": 471,
            "type": "vertex"
          },
          {
            "description": "face-bottomright",
            "y": 437,
            "x": 471,
            "type": "vertex"
          },
          {
            "description": "side-topcorner",
            "y": 286,
            "x": 607,
            "type": "vertex"
          },
          {
            "description": "side-bottomcorner",
            "y": 373,
            "x": 607,
            "type": "vertex"
          }
        ],
        "edges": [
          {
            "description": "face-top",
            "x1": 293,
            "y1": 270,
            "x2": 471,
            "y2": 270,
            "type": "edge"
          },
          {
            "description": "face-right",
            "x1": 471,
            "y1": 270,
            "x2": 471,
            "y2": 437,
            "type": "edge"
          },
          {
            "description": "face-bottom",
            "x1": 471,
            "y1": 437,
            "x2": 293,
            "y2": 437,
            "type": "edge"
          },
          {
            "description": "face-left",
            "x1": 293,
            "y1": 437,
            "x2": 293,
            "y2": 270,
            "type": "edge"
          },
          {
            "description": "side-top",
            "x1": 471,
            "y1": 270,
            "x2": 607,
            "y2": 286,
            "type": "edge"
          },
          {
            "description": "side-bottom",
            "x1": 471,
            "y1": 437,
            "x2": 607,
            "y2": 373,
            "type": "edge"
          }
        ],
        "points_2d": [
          {
            "y": 270,
            "x": 293
          },
          {
            "y": 437,
            "x": 293
          },
          {
            "y": 270,
            "x": 471
          },
          {
            "y": 437,
            "x": 471
          },
          {
            "y": 286,
            "x": 607
          },
          {
            "y": 373,
            "x": 607
          },
          {
            "y": 373,
            "x": 607
          },
          {
            "y": 373,
            "x": 607
          }
        ],
        "points_3d": [
          {
            "z": 0,
            "y": 270,
            "x": 293
          },
          {
            "z": 0,
            "y": 437,
            "x": 293
          },
          {
            "z": 0,
            "y": 270,
            "x": 471
          },
          {
            "z": 0,
            "y": 437,
            "x": 471
          },
          {
            "z": 0,
            "y": 286,
            "x": 607
          },
          {
            "z": 0,
            "y": 373,
            "x": 607
          },
          {
            "z": 0,
            "y": 373,
            "x": 607
          },
          {
            "z": 0,
            "y": 373,
            "x": 607
          }
        ]
      }
    ]
  },
  ...
}

Image Response Format

The response field, which is part of the callback POST request and permanently stored as part of the task object, will contain an annotations field (and a global_attributes field, if Global Attributes were specified in the task creation request).

The annotations field will contain an array of Annotation objects. The schema of each Annotation object depends on the Geometry of the Annotation. See the Boxes, Polygons, Lines, Points, Cuboids, and Ellipses sections for descriptions of the schemas.
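
Because the annotations array mixes geometries, a consumer typically branches on each annotation's type field. A minimal sketch:

def handle_annotations(task_response):
    # Each annotation's "type" names its geometry, so dispatch on it.
    for ann in task_response["annotations"]:
        if ann["type"] == "box":
            print(ann["label"], ann["left"], ann["top"], ann["width"], ann["height"])
        elif ann["type"] in ("polygon", "line", "ellipse"):
            print(ann["label"], len(ann["vertices"]), "vertices")
        elif ann["type"] == "cuboid":
            print(ann["label"], len(ann["vertices"]), "visible vertices")
        else:
            print(ann["label"], ann["type"])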

Example response

{
  "response": {
    "annotations": [
      {
        "type": "box",
        "label": "small vehicle",
        "attributes": {
          "moving": "yes"
        },
        "left": 2,
        "top": 4,
        "width": 3,
        "height": 5,
        "uuid": "65ec1f52-5902-4b39-bea9-ab6b4d58ef42"
      },
      {
        "type": "box",
        "label": "large vehicle",
        "attributes": {
          "moving": "yes"
        },
        "left": 7,
        "top": 5,
        "width": 14,
        "height": 5,
        "uuid": "0a6cd019-a014-4c67-bd49-c269ba08028a"
      },
      {
        "type": "polygon",
        "label": "car",
        "vertices": [
          {
            "x": 123,
            "y": 10
          },
          {
            "x": 140,
            "y": 49
          },
          {
            "x": 67,
            "y": 34
          }
        ],
        "uuid": "65ec1f52-5902-4b39-bea9-ab6b4d58ef43"
      },
      { ... },
      { ... }
    ],
    "global_attributes": {
      "driving": "Yes",
      "night": "No"
    }
  },
  "task_id": "5774cc78b01249ab09f089dd",
  "task": {
    // populated task for convenience
    ...
  }
}

Image Annotation Hypothesis

When creating an imageannotation task, you can provide prelabels in the hypothesis field so that workers don't have to start from scratch when annotating the image.

To add prelabels to a task using a hypothesis, provide them in the hypothesis field of the payload when creating the task. The schema of the hypothesis object must match the schema of the task response.

  1. Verify the task response field schema for the desired task type.

  2. Review your project taxonomy (label names, attribute conditions, annotation types, etc).

  3. Generate pre-labels that are formatted to match the aforementioned schema and taxonomy.

  4. Create a task, including a hypothesis field that contains the pre-labels at the same top level as other task fields such as project and instruction.

The hypothesis format largely mirrors Scale's task response format. For this task type, the annotations array is mandatory inside the hypothesis object for simple annotations.

Note: UUIDs are not mandatory. If you want to use a particular UUID to track an annotation, you can add it to the hypothesis; if not, Scale will generate one for you.

For Image Annotation, you can also add Global Attributes in the hypothesis object, at the same level as annotations, in the global_attributes field.

task_payload_with_hypothesis

{
 ...
 "attachment": "https://example.com/attachment.png",
 "hypothesis": {
   "annotations":  [
     {
       "label": "car",
       "left": 90,
       "top": 66,
       "height": 94,
       "width": 96,
       "type": "box"
     }
   ]
 },
 ...
}

task_taxonomy

{
 "geometries": {
   "box": {
     "objects_to_annotate": [
       "car"
     ],
     "min_height": 10,
     "min_width": 10
   }
 },
 "annotation_attributes": {}
}

scale_task_response

{
 "links": [],
 "annotations": [
   {
     "label": "car",
     "uuid": "xfb506ca-d742-4e75-bb52-0725f099b238",
     "left": 115,
     "top": 68,
     "height": 97,
     "width": 69,
     "type": "box"
   }
 ],
 "global_attributes": {}
}
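
In practice, the hypothesis is often generated from model output. A hedged sketch of converting detections to the prelabel format above, assuming a hypothetical input of {"label", "bbox"} records with corner-format [x1, y1, x2, y2] boxes:

def detections_to_hypothesis(detections):
    # `detections` is an assumed format: [{"label": str, "bbox": [x1, y1, x2, y2]}].
    annotations = []
    for det in detections:
        x1, y1, x2, y2 = det["bbox"]
        annotations.append({
            "type": "box",
            "label": det["label"],
            "left": x1,
            "top": y1,
            "width": x2 - x1,
            "height": y2 - y1,
            # "uuid" is optional; Scale generates one if omitted.
        })
    return {"annotations": annotations}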

Video Annotation Overview

Note: Scale VideoAnnotation has been deprecated in favor of Video V2 (/task/videoplayback).

Note: Scale Video is only available for our Enterprise customers. If you want to learn more, please contact our sales team.

This endpoint creates a videoannotation task. Given a series of images sampled from a video (which we will refer to as "frames"), Scale will annotate each frame with the Geometries (box, polygon, line, point, cuboid, and ellipse) you specify.

The required parameter for this task is geometries.

You can optionally provide additional markdown-enabled or Google Doc-based instructions via the instruction parameter.

You may also optionally specify events_to_annotate, a list of strings describing the events to annotate in the video.

If the request is successful, Scale will return the generated task object, at which point you should store the task_id to have a permanent reference to the task.

Label Nesting and Options

There are often annotation tasks that have too many label choices for a tasker to efficiently sort through them all at once, or times when you want to show one version of a label name to a tasker, but would like another version in the response.

In those cases, you can use LabelDescription objects to support nested labels, where labels may have subcategories within them, as well as to set display values for a label.

When declaring objects_to_annotate in your task parameters, we accept a mixed array of strings and the more complex LabelDescription objects.


Definition: LabelDescription

A simple example is illustrated in the example JSON below, where each entry in objects_to_annotate can simply be a string, a nested label with choices and subchoices, or a nested label whose subchoices are themselves LabelDescription objects with a display value.

While there may be a large number of total labels, using subchoices, a tasker can first categorize an object as a road, pedestrian, or vehicle, and then, based on that choice, further select the specific type of pedestrian or vehicle.

Nested labels may be specified both for the object labels (the objects_to_annotate array parameter) and in the choices array of a categorical annotation attribute. In both cases, you specify a nested label by using a LabelDescription object instead of a string.

For example, for an objects_to_annotate array of ["Vehicle", "Pedestrian"], you could instead add a nested label by passing an array like ["Vehicle", {"choice": "Pedestrian", "subchoices": ["Animal", "Adult", "Child"]}]. Then, if a tasker selected "Pedestrian" for an annotation, they would be further prompted to choose one of the corresponding subchoices for that annotation.

The LabelDescription object has the following structure:

choice (string, required): The name of the label. This should be singular and descriptive (ex: car, background, pole). When both a choice and subchoices are defined, the choice will not be selectable; it will only be used for UX navigation. Only the "leaf" nodes will be returned in Scale's response.

subchoices (Array<LabelDescription | string>): Optional. Descriptions of the sub-labels to be shown under this parent label. The array can be a mix of LabelDescription objects and strings.

instance_label (boolean, default false): Optional. For segmentation-based tasks: whether this label should be segmented on a per-instance basis. For example, if you set instance_label to true, each individual car would get a separate mask in the image, allowing you to distinguish between them.

display (string, defaults to choice): Optional. The value shown to a tasker for a given label. Visually overrides the choice field in the user experience, but does not affect the task response or conditionality.

LabelDescription Example

objects_to_annotate = [
  "Road",
  {
    "choice": "Vehicle",
    "subchoices": ["Car", "Truck", "Train", "Motorcycle"]
  },
  {
    "choice": "Pedestrian",
    "subchoices": [
      "Animal", 
      {"choice": "Ped_HeightOverMeter", "display": "Adult" }, 
      {"choice": "Ped_HeightUnderMeter", "display": "Child" }, 
    ]
  }
]
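
Since only leaf choices appear in Scale's response, it can be useful to flatten a nested objects_to_annotate array into the set of labels you may actually receive. A minimal sketch:

def leaf_labels(objects_to_annotate):
    # Recursively collect the selectable "leaf" choices from a mixed
    # array of strings and LabelDescription objects.
    leaves = []
    for entry in objects_to_annotate:
        if isinstance(entry, str):
            leaves.append(entry)
        elif entry.get("subchoices"):
            leaves.extend(leaf_labels(entry["subchoices"]))
        else:
            leaves.append(entry["choice"])
    return leaves

# For the example above, this returns ["Road", "Car", "Truck", "Train",
# "Motorcycle", "Animal", "Ped_HeightOverMeter", "Ped_HeightUnderMeter"].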