A full suite of data annotation types to build high quality 2D training data for computer vision models.
Our video annotation endpoint takes a series of images from a video (which we refer to as “frames”) and annotates each frame with bounding boxes.
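To make the shape of such a request concrete, here is a minimal sketch of assembling a video annotation task body. The field names (`attachments`, `objects_to_annotate`, `callback_url`) and the helper itself are illustrative assumptions, not the documented API schema; consult the API reference for the real parameters.

```python
# Illustrative sketch only: field names below ("attachments",
# "objects_to_annotate", etc.) are assumptions for illustration,
# not the documented video annotation request schema.

def build_video_task(frame_urls, labels, callback_url):
    """Assemble a request body for a hypothetical video annotation task."""
    return {
        "callback_url": callback_url,         # where results are POSTed back
        "attachments": list(frame_urls),      # one URL per video frame
        "attachment_type": "image",           # each frame is a still image
        "objects_to_annotate": list(labels),  # classes to box in each frame
    }

payload = build_video_task(
    frame_urls=["https://example.com/frames/0.jpg",
                "https://example.com/frames/1.jpg"],
    labels=["car", "pedestrian"],
    callback_url="https://example.com/callback",
)
print(len(payload["attachments"]))  # → 2
```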
Semantic Segmentation assigns each pixel of an image to a class of selected objects, making it easier to distinguish between objects. We offer both full pixel-wise masks and reduced rates for label-only annotations.
Our tightly drawn bounding boxes can cover parts of a classified object even when those parts are occluded, and each box can carry optional labels.
Customizable labels and conditional attributes for irregularly shaped and coarse object detection
Auto-adjusted 3D perception in 2D images. The annotated image will include perspective cuboids with vertices in the image. If a task is submitted with camera extrinsics and intrinsics, the resulting annotation will include scale-invariant 3-dimensional coordinates with respect to the camera (assuming the camera is at the origin).
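The relationship between those 3D camera-frame coordinates and the 2D vertices can be sketched with the standard pinhole projection. The intrinsic values below (`fx`, `fy`, `cx`, `cy`) are made-up numbers for illustration; real tasks supply real calibration.

```python
# Minimal pinhole-camera sketch: project a 3D point expressed in the
# camera frame to 2D pixel coordinates using intrinsics (fx, fy, cx, cy).
# The calibration numbers are invented for illustration.

def project(point_3d, fx, fy, cx, cy):
    """Project a camera-frame 3D point (X, Y, Z) to pixel (u, v)."""
    x, y, z = point_3d
    u = fx * x / z + cx  # scale by focal length, shift by principal point
    v = fy * y / z + cy
    return u, v

# A cuboid vertex 2 m right, 1 m down, 10 m ahead of the camera:
u, v = project((2.0, 1.0, 10.0), fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
print(u, v)  # → 840.0 460.0
```

Inverting this projection for every cuboid vertex is why the camera extrinsics and intrinsics are needed to recover 3D coordinates from the 2D annotation.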
Most often used for precise lane annotation, the annotated image includes the vertices of the segmented lines, polylines, and/or splines.
Most of Scale's Image Annotation endpoints support per-annotation attributes and label hierarchies, allowing you to get richer labeled data. For more on this, take a look at our blog post or our documentation.
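As a rough sketch of what a label hierarchy with per-annotation attributes looks like, consider the structure below. The taxonomy, attribute names, and helper function are hypothetical illustrations, not the documented request format.

```python
# Hypothetical sketch of a label hierarchy and per-annotation attributes;
# the exact schema here is illustrative, not the documented API format.

taxonomy = {
    "vehicle": ["car", "truck", "bus"],    # parent label -> child labels
    "person": ["pedestrian", "cyclist"],
}

annotation = {
    "label": "car",
    "attributes": {"occluded": True, "truncated": False},
}

def parent_of(label, taxonomy):
    """Walk the hierarchy to find a child label's parent, if any."""
    for parent, children in taxonomy.items():
        if label in children:
            return parent
    return None

print(parent_of(annotation["label"], taxonomy))  # → vehicle
```

Hierarchies like this let a model be trained at either the coarse level ("vehicle") or the fine level ("car"), and per-annotation attributes such as occlusion flags travel with each individual box rather than with the whole image.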