
Metadata to Enrich Image Annotations

October 15, 2018

At Scale, we pride ourselves on developing superior tooling and implementing forward-looking solutions to the problems encountered by our customers on the cutting edge of computer vision, including self-driving cars, AR/VR, and drones. For example, our 2D image annotation endpoints now support both per-annotation attributes and label hierarchies, allowing you to get richer labeled data out of Scale.

This post will briefly outline the differences between the two.


Per-Annotation Attributes



Defining per-annotation attributes for a task helps you extract additional

structured information about individual annotations, beyond what a single

selected label can provide.


The attribute interface can also adapt dynamically to previously selected attribute choices. For example, workers are asked “Which direction is the car heading?” only if they answer “No” to “Is the car currently parked?” If they answer “Yes,” the interface skips the direction question entirely.

[Screenshot: dashboard selection]
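As a rough illustration, here is a minimal sketch of how a task with conditional per-annotation attributes might be created in Python using the requests library. The overall request shape follows our task creation API, but the specific attribute names (“parked”, “heading”), choice values, and the exact format of the condition field are illustrative assumptions; consult the Annotation Attributes documentation for the precise schema.

```python
import requests

# Minimal sketch of a 2D annotation task with per-annotation attributes.
# The "heading" attribute is only requested when "parked" is answered "No".
# Attribute names, choices, and the condition format below are illustrative;
# see the Annotation Attributes documentation for the exact schema.
payload = {
    "callback_url": "https://example.com/callback",        # your callback endpoint
    "instruction": "Draw a box around every car.",
    "attachment_type": "image",
    "attachment": "https://example.com/images/street.jpg",  # placeholder image URL
    "objects_to_annotate": ["car"],
    "annotation_attributes": {
        "parked": {
            "type": "category",
            "description": "Is the car currently parked?",
            "choices": ["Yes", "No"],
        },
        "heading": {
            "type": "category",
            "description": "Which direction is the car heading?",
            "choices": ["Away from camera", "Toward camera", "Left", "Right"],
            # Only ask this question when "parked" was answered "No"
            # (illustrative condition shape).
            "conditions": {"parked": "No"},
        },
    },
}

response = requests.post(
    "https://api.scale.com/v1/task/annotation",
    json=payload,
    auth=("YOUR_SCALE_API_KEY", ""),  # API key as the basic-auth username
)
print(response.json())
```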


Label Hierarchies



Nesting labels can also help you extract additional structured information from your data. By default, 2D annotation tasks sent to Scale are limited to 7 flat labels to reduce the cognitive load on our human labelers and keep their accuracy high. Using subchoices, however, a worker can first categorize an object as a pedestrian or a vehicle and then, based on that choice, select the specific type of pedestrian or vehicle.

[Screenshot: dashboard selection]
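The sketch below shows how such a label hierarchy might be expressed when creating a task. Here objects_to_annotate carries objects with subchoices rather than plain label strings; the specific labels and the choice/subchoices field names are illustrative assumptions, so consult the Nested Labels documentation for the precise schema.

```python
import requests

# Minimal sketch of nested labels: workers first pick "pedestrian" or
# "vehicle", then refine that selection with a subchoice. Label names and
# the "choice"/"subchoices" field names are illustrative; see the Nested
# Labels documentation for the exact schema.
payload = {
    "callback_url": "https://example.com/callback",
    "instruction": "Draw a box around every pedestrian and vehicle.",
    "attachment_type": "image",
    "attachment": "https://example.com/images/intersection.jpg",  # placeholder image URL
    "objects_to_annotate": [
        {
            "choice": "pedestrian",
            "subchoices": ["adult", "child", "cyclist"],
        },
        {
            "choice": "vehicle",
            "subchoices": ["sedan", "truck", "bus", "motorcycle"],
        },
    ],
}

response = requests.post(
    "https://api.scale.com/v1/task/annotation",
    json=payload,
    auth=("YOUR_SCALE_API_KEY", ""),  # API key as the basic-auth username
)
print(response.json())
```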



Ready to get started? Take a look at our Annotation Attributes documentation to learn how to include attributes in your tasks and how to use attribute conditions to dynamically request specific attributes, or see our Nested Labels documentation to learn how to specify large lists of labels on annotation tasks.

