Launch a batch

The final step is to launch a batch of tasks for labeling. There are three types of batches:

Self-label batch

To test your taxonomy setup or to experience labeling on the Rapid platform, you can create a self-label batch: a batch of data that is labeled by you or a member of your team.
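If you create batches programmatically rather than through the dashboard, the sketch below shows how a self-label batch might be created with the Scale Python client. The `self_label_batch` flag and the project and batch names are assumptions for illustration; check the API reference for the exact parameters available on your project.

```python
# Minimal sketch: creating a self-label batch with the Scale Python client.
# The self_label_batch flag and the names below are illustrative assumptions;
# verify parameter names against the current API reference.
import scaleapi

client = scaleapi.ScaleClient("YOUR_API_KEY")

batch = client.create_batch(
    project="my_project",           # hypothetical project name
    batch_name="self-label-test",   # hypothetical batch name
    self_label_batch=True,          # assumed flag: tasks are labeled by you or your team
)
print(batch.name)
```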

Calibration batch

It is important to be able to iterate on your taxonomy setup and instructions. To do this, you can create a calibration batch: a smaller set of tasks that you send to the Scale workforce for labeling. After only a few hours, you will generally receive the fully delivered batch along with labeler-written feedback on your instructions, both of which you can use to create quality tasks. We pride ourselves on our quick turnaround time, which facilitates fast experimentation and iteration.
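As a rough sketch, assuming the Scale Python client, a calibration batch can be created the same way with a calibration flag set at creation time; the `calibration_batch` parameter and the identifiers below are illustrative, so confirm them against the API reference.

```python
# Minimal sketch: creating and finalizing a calibration batch.
# The calibration_batch flag is an assumed parameter; confirm against the API docs.
import scaleapi

client = scaleapi.ScaleClient("YOUR_API_KEY")

batch = client.create_batch(
    project="my_project",          # hypothetical project name
    batch_name="calibration-001",  # hypothetical batch name
    calibration_batch=True,        # assumed flag: marks this as a calibration batch
)

# ... create your tasks against this batch, then submit it for labeling.
client.finalize_batch(batch_name=batch.name)
```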

Note that calibration batch tasks go through fewer quality controls, so low calibration batch quality does not necessarily reflect production batch quality. These quality controls include instruction-reading checks (e.g., labelers must spend a certain amount of time on the instructions before continuing) and management of a special pool of trusted labelers. The purpose of launching calibration batches is to iterate on and improve your taxonomy and instructions while building your suite of quality tasks.

You can use your Calibration Score to gauge how well labelers understand your instructions and how accurately they label your data. We generally recommend achieving a Calibration Score of at least 80% before proceeding to production. You can learn more about your Calibration Score and how to improve it here.

To read more about calibration batches and the ideal calibration workflow, see here.

Production batch

After launching a few calibration batches, iterating on your taxonomy and instructions, and building your quality task suite, you will be ready to scale to production volumes. You can launch production batches, which are larger sets of data, to the Scale workforce for labeling.

When labelers first onboard onto your project, they must read through your instructions and complete your training tasks to check their understanding. Then, before they touch your production data, we serve them a few of your evaluation tasks that they must perform well on in order to proceed; those who do not are screened off your project. Labelers do not know that we are checking their knowledge on the backend. Those who pass this diagnostic can continue labeling on your project.

As they label your production data, we periodically check their performance by serving evaluation tasks. Those who do not perform well may be demoted from reviewers to attempters or screened off your project; similarly, those who do well may be promoted from attempters to reviewers. To read more about training and evaluation tasks, which we refer to together as quality tasks, see here.
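For completeness, here is a hedged sketch of launching a production batch with the Scale Python client: create a batch without the calibration or self-label flags, attach your tasks to it, and finalize it to submit it for labeling. The task type, attachment, and field names are illustrative assumptions; your project's task payload will differ.

```python
# Minimal sketch: a production batch is a regular batch (no calibration/self-label flags).
# The task type and fields below are illustrative; use the payload your project expects.
import scaleapi
from scaleapi.tasks import TaskType

client = scaleapi.ScaleClient("YOUR_API_KEY")

batch = client.create_batch(
    project="my_project",          # hypothetical project name
    batch_name="production-001",   # hypothetical batch name
)

# Attach tasks to the batch (example: an image annotation task; fields are assumptions).
client.create_task(
    TaskType.ImageAnnotation,
    project="my_project",
    batch="production-001",
    attachment="https://example.com/image.jpg",
)

# Submit the batch for labeling once all tasks are added.
client.finalize_batch(batch_name="production-001")
```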
