Prepare for Launch

To launch a production batch, you need:

  1. A calibrated project (See more about Calibration Batches)
  2. Quality tasks, of which there are two kinds:
    1. Training tasks: A subset of audited tasks that Taskers will complete before attempting live tasks from your production batch. These tasks make up the training course that all Taskers must complete (while meeting a certain quality bar) in order to onboard onto your project.
    2. Evaluation tasks: A subset of audited tasks that will help track quality of the Taskers.
      These are tasks we serve randomly to Taskers after they’ve onboarded onto your project. To the Tasker, an Evaluation task looks like any other task on the project; because we already know the correct labels, we can measure how well the Tasker performed. This lets us confirm that Taskers keep meeting the quality bar for the entire time they work on the project. Taskers who drop below the quality threshold are automatically taken off the project (see the sketch just after this list).
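
To make that mechanism concrete, here is a minimal sketch of the quality-tracking loop in Python. Everything in it (the names, the `EvaluationResult` shape, the 0.8 threshold) is an assumption for illustration, not the platform's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical quality bar; real thresholds are project-specific.
QUALITY_THRESHOLD = 0.8

@dataclass
class EvaluationResult:
    tasker_id: str
    correct: bool  # did the Tasker's response match the known label?

def taskers_below_threshold(results: list[EvaluationResult]) -> set[str]:
    """Return Taskers whose evaluation-task accuracy falls below the bar."""
    attempts: dict[str, int] = {}
    hits: dict[str, int] = {}
    for r in results:
        attempts[r.tasker_id] = attempts.get(r.tasker_id, 0) + 1
        hits[r.tasker_id] = hits.get(r.tasker_id, 0) + int(r.correct)
    return {t for t, n in attempts.items() if hits[t] / n < QUALITY_THRESHOLD}
```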

Quality Tasks: Training vs. Evaluation

To ensure the quality of your labels, you'll need to decide which of your audited tasks should become Training tasks and which should become Evaluation tasks.

If you think a task would be a good one for all Taskers to complete before moving on to live Production Batch tasks, make it a Training task. Think about your Training tasks as a set - make sure they cover a good breadth of your dataset's variability. These tasks should generally be easier, since they will be a Tasker's first encounter with your data.

If you think a task would be a good one for measuring the quality of your Production Batch tasks, make it an Evaluation task. Think about your Evaluation tasks as a set, too - make sure they also cover a good breadth of your dataset's variability. These tasks should generally be harder, since they will be randomly served to Taskers to gauge quality and accuracy. Note that because they tend to be harder, your overall Production Batch quality should be higher than your Evaluation task quality.
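
One way to read the easy-versus-hard guidance above is as a ranking heuristic. The rough sketch below assumes each audited task carries a `difficulty` score assigned during auditing; that field, and the helper itself, are hypothetical and only illustrate the idea. In practice you would also check that both subsets span your dataset's variability, not just its difficulty range.

```python
def split_quality_tasks(audited_tasks, n_training=5, n_evaluation=30):
    """Take the easiest audited tasks as Training and the hardest as Evaluation."""
    if len(audited_tasks) < n_training + n_evaluation:
        raise ValueError("Not enough audited tasks to build both subsets")
    ranked = sorted(audited_tasks, key=lambda t: t["difficulty"])
    training = ranked[:n_training]        # easier: a gentler first encounter
    evaluation = ranked[-n_evaluation:]   # harder: a stricter quality gauge
    return training, evaluation
```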

Creating Quality Tasks

You can create a Quality Task from any audited task. For instance, after auditing each task in your Calibration Batch, you can choose to turn it into a Quality Task.

It is important that you generate a diverse set of quality tasks. For example, for a 3-class categorization problem, you would want an equal balance across all 3 classes.
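
As a sketch of that balancing step, the hypothetical helper below draws the same number of audited tasks from each class. The `label` field, the function, and its arguments are illustrative assumptions, not part of the product.

```python
import random
from collections import defaultdict

def balanced_sample(audited_tasks, per_class):
    """Sample `per_class` audited tasks from each label class."""
    by_class = defaultdict(list)
    for task in audited_tasks:
        by_class[task["label"]].append(task)
    sample = []
    for label, tasks in by_class.items():
        if len(tasks) < per_class:
            raise ValueError(f"Not enough audited tasks for class {label!r}")
        sample.extend(random.sample(tasks, per_class))
    return sample
```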

Selecting `Create Quality Task` in the lower right corner will prompt you to choose the type

You can decide which type of task it should be.

All the quality tasks you've created (both training & evaluation tasks) can be found under Quality Lab in the upper navigation of each project.

Evaluation tasks are automatically split into Initial and Review phases based on the changes you made in the audit. If you Rejected the attempted annotation and then made the appropriate corrections, that Evaluation task becomes a Review Phase Evaluation task.

  • Initial Phase Evaluation Tasks measure a Tasker's ability to complete an annotation task from start to finish.
  • Review Phase Evaluation Tasks measure a Tasker's ability to take another Tasker's completed work and make corrections as needed.
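
A minimal sketch of that splitting rule, assuming hypothetical `rejected` and `corrections_made` fields on the audit record (the real logic and field names may differ):

```python
def evaluation_phase(audit) -> str:
    """Decide which phase an audited task measures, per the rule above."""
    if audit["rejected"] and audit["corrections_made"]:
        return "review"   # correcting another Tasker's completed work
    return "initial"      # completing an annotation from start to finish
```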

Recommendations for Quality Tasks

It is recommended that you create:

  • 5 training tasks
  • 30 evaluation tasks

Once you have determined that your quality task subsets represent your full dataset well, you're ready to launch your Production Batch!
