Using Calibration Feedback

If you chose to create a Calibration batch, it can help you understand whether you have a healthy project, one where annotators are able to understand your instructions and label your data correctly. This is most helpful if you are launching a project for the first time with a team of annotators who are unfamiliar with your project.

Auditing Calibration Results

First, you want to see how well your annotators did with the set of instructions you provided. To view the annotated results, go to the Batches page in the left-hand navigation bar and click into the calibration batch you want to see results from.

From here, you can audit the results that have come back:

  • Audit all: Click the "Start Calibrating" button at the top of the page to open an audit modal where you can continuously audit all tasks that have come in.
  • Audit one at a time: Opening an individual task and selecting the "Audit task" button takes you to the audit view for that specific task only. When you are finished, you will have to open another task's audit page to get to the next task.

Understanding Calibration Score

Calibration Score is a function of audit results (determined by you) and annotator confidence (self-reported by annotators).

It is recommended that your Calibration Score be above 80% before moving on to a Production Batch, but this is ultimately at your discretion.

Inputs that go into calculating the Calibration Score include the following (an illustrative sketch appears after the figure below):

  • The number of rejections you made while auditing the completed tasks that were returned
  • How confident the Taskers self-reported feeling after reading the instructions and completing your task

Here, a score of 42% indicates that improvements should be made before progressing to a Production Batch
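
The exact formula is not exposed in the product, but as a rough mental model you can think of the score as combining your audit acceptance rate with your annotators' average self-reported confidence. The snippet below is a purely hypothetical sketch of such a combination; the function name, the equal weighting, and the input shapes are assumptions made for illustration, not the platform's actual calculation.

```python
# Illustrative sketch only: the real Calibration Score formula is not
# documented here. This hypothetical version simply averages the audit
# acceptance rate with the annotators' self-reported confidence.

def calibration_score(total_audited: int, rejections: int,
                      self_reported_confidences: list[float]) -> float:
    """Return a score between 0 and 1 from audit results and confidence.

    total_audited: number of completed tasks you audited
    rejections: number of those tasks you rejected
    self_reported_confidences: per-annotator confidence values in [0, 1]
    """
    if total_audited == 0 or not self_reported_confidences:
        raise ValueError("Need at least one audited task and one confidence value")

    acceptance_rate = 1 - rejections / total_audited
    avg_confidence = sum(self_reported_confidences) / len(self_reported_confidences)

    # Equal weighting is an assumption made for illustration.
    return 0.5 * acceptance_rate + 0.5 * avg_confidence


# Example: 10 audited tasks, 4 rejections, confidences of 60% and 70%
# yields 0.625, i.e. well below the recommended 80% threshold.
print(round(calibration_score(10, 4, [0.6, 0.7]), 3))
```

In this illustrative version, either rejecting more tasks or receiving lower confidence ratings pulls the score down, which matches the two inputs listed above.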

Implementing Calibration Feedback

Read Labeler Feedback
For each Calibration Batch, you will receive feedback on each task based on how easy or confusing the Tasker found it to read the instructions and complete the task. Commonly, the feedback points to the need for more context, explanations, and examples of properly and improperly labeled tasks.

You can see the feedback that your annotators left for you in two places:

  • Batches page, when you click into a calibration batch
  • Instructions page, where you will see the feedback from your latest calibration batch

From the Batches page: Feedback appears on the right side of each task when you click into an individual Calibration batch.

Each task can be expanded to see the complete feedback and all actions that can be taken

From the Instructions page: To edit the instructions and make things clearer for your annotators, go to Definition > Instructions, where you can view the feedback and Calibration Score next to your instruction workspace.

Feedback from the most recent Calibration Batch is situated next to the workspace

Iterate on Instructions
Once you have read through your labelers' feedback, go back into your project definition and edit your instructions and taxonomy to address the points of confusion.

In general, you should aim to write instructions in a way that shows how one should do the task as opposed to just describing it.

Note that this is an iterative process. You may have to create more than one Calibration Batch to properly fine-tune your instructions.

`Show examples` allows you to see all examples and add newly labeled examples

You can include examples as `Well labeled` or `Poorly labeled`

Studio Tip: Show how to do the task instead of just describing it

🐶 Example Image Annotation Project

Dogs & their coat color

Don't: Describe the task

Label dogs and their coat colors [solid, merle, brindle, harlequin, ticked, spotted, roan, tricolor]

✔️ Do: Show how to do the task

Describe the overall goal of labeling dogs and their coat colors:

"The goal of this project is to properly distinguish between dog coat colors across a variety of breeds. We would like to have each dog in the dataset labeled and also have their coat coloring marked."

Give an overview of how to label dogs:

"Draw a bounding polygon around the dogs, making sure to include all legs, ears, and tails"

→ Include examples of well drawn and poorly drawn polygons.

Give an overview of coat colors:

"Here are multiple examples of all of the coat colors across different breeds..."

→ Include various examples for coat colors.
