Navigate to the labeler management section in your navigation pane to review your labelers’ performance. The labeler management dashboard shows all the high-level metrics on your project’s performance, such as the number of claimed tasks, total throughput, and average accuracy. You can also see performance broken down by individual labeler.
Remaining Tasks in Queue
These charts show how many remaining tasks at each stage of the pipeline have yet to be worked on. Claimed attempts & reviews are the tasks that have been assigned to your labelers. Every labeler will have at minimum 1 claimed task (the next task they will see in their queue when they open it up), but you can also go into batches and manually assign additional tasks to specific labelers. Claimed tasks are only those currently assigned to labelers, so any finished tasks will no longer show up.
These graphs are split into attempts and reviews. Using these graphs, you can see whether you have a healthy balance between outstanding attempts & outstanding reviews. For example, if you have a lot of unclaimed reviews but not a lot of unclaimed attempts, you might want to promote more annotators to be reviewers to help go through the review queue faster.
If you do not have any layers of review specified, then you do not need to worry about the reviews graph.
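The dashboard computes these queue counts for you, but the balance check described above can be sketched in a few lines. This is an illustrative example only; the record fields (`stage`, `claimed`) are hypothetical and not part of any real export format:

```python
# Hypothetical task records; the dashboard derives the same counts internally.
tasks = [
    {"id": 1, "stage": "attempt", "claimed": False},
    {"id": 2, "stage": "attempt", "claimed": True},
    {"id": 3, "stage": "review", "claimed": False},
    {"id": 4, "stage": "review", "claimed": False},
]

# Unclaimed tasks are the backlog at each stage.
unclaimed_attempts = sum(
    1 for t in tasks if t["stage"] == "attempt" and not t["claimed"]
)
unclaimed_reviews = sum(
    1 for t in tasks if t["stage"] == "review" and not t["claimed"]
)

# A review backlog far larger than the attempt backlog suggests
# promoting more annotators to reviewers.
if unclaimed_reviews > 2 * unclaimed_attempts:
    print("Consider promoting more annotators to reviewers")
```

The 2x threshold here is an arbitrary placeholder; what counts as an unhealthy imbalance depends on your project's review ratio.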
Throughput shows the total number of submissions over a period of time that you specify by choosing different date ranges. You can even see how your submissions are split across attempts & reviews.
Monitor task throughput to get a sense for how many tasks your labeling team can get through during a day, week, month etc. You can also use this information to determine if you need more workers to achieve your throughput goals.
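The throughput chart is essentially a count of submissions grouped by date and type. As a rough sketch (with made-up submission records, not a real export format):

```python
from collections import Counter
from datetime import date

# Hypothetical submission log: (submission date, submission type)
submissions = [
    (date(2024, 5, 1), "attempt"),
    (date(2024, 5, 1), "review"),
    (date(2024, 5, 2), "attempt"),
]

# Daily throughput split by attempts vs. reviews, as in the chart
daily_by_type = Counter((d, kind) for d, kind in submissions)

# Total daily throughput across both types
total_per_day = Counter(d for d, _ in submissions)
```

Averaging `total_per_day` over a week or month gives the kind of baseline you would use to judge whether your current workforce can hit a throughput goal.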
Evaluation Task Accuracy shows the average score across all evaluation tasks in the date range that you specify. You can see how task accuracy has changed over time, and make adjustments to your workforce or your quality tasks as needed.
Using the labeler insights table, you can see specific metrics on how an individual annotator is performing. You'll see a couple of metrics for every annotator:
The default time range for metrics is the last day. However, you can easily change the time frame to suit what you're looking for.
Choose the time range you want to see metrics for.
You can also set filters to show annotators that meet certain criteria. Filters can be based on Role, Status, Completed Tasks, Efficiency, and Evaluation Task Accuracy. For example, you might filter for attempters with very slow completion times to decide whether to remove slow performers from your project, or to pull them aside and see what's going on.
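That kind of filter is just a conjunction of conditions over the table's columns. A minimal sketch, using hypothetical annotator records and an arbitrary slowness cutoff:

```python
# Hypothetical records mirroring the labeler insights table's columns.
annotators = [
    {"email": "a@example.com", "role": "attempter", "avg_minutes_per_task": 14.0},
    {"email": "b@example.com", "role": "attempter", "avg_minutes_per_task": 4.5},
    {"email": "c@example.com", "role": "reviewer", "avg_minutes_per_task": 3.0},
]

SLOW_THRESHOLD = 10.0  # minutes per task; choose a cutoff that fits your project

# Filter: role is attempter AND efficiency is below the cutoff
slow_attempters = [
    a["email"]
    for a in annotators
    if a["role"] == "attempter" and a["avg_minutes_per_task"] > SLOW_THRESHOLD
]
```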
The labeler table is also where you can change the roles of your annotators. Simply select the annotators who you want to apply a change to, and change their settings.
You can deep dive into any of your labelers’ performance by clicking on their email. This will pull up the labeler's task log of all the tasks they have completed on your project.
View completed tasks or accuracy scores by toggling between “Completed Tasks” and “Benchmarks.”
For regular tasks (completed tasks), accuracy is based on how much a review or an audit changed compared to the original submission. Attempts are scored against the review, while reviews are scored against your audits.
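One plausible way to think about "how much a review changed": the fraction of labels the reviewer left unchanged. This is a hedged sketch only; the platform's actual scoring formula may weight changes differently, and the label structure here is invented for illustration:

```python
def attempt_accuracy(attempt_labels: dict, review_labels: dict) -> float:
    """Fraction of labels identical between the attempt and its review.

    Hypothetical scoring: the real dashboard's formula may differ.
    """
    keys = set(attempt_labels) | set(review_labels)
    if not keys:
        return 1.0  # nothing to change means nothing was changed
    unchanged = sum(1 for k in keys if attempt_labels.get(k) == review_labels.get(k))
    return unchanged / len(keys)

# The reviewer changed one of three labels, so accuracy is 2/3.
attempt = {"box_1": "car", "box_2": "truck", "box_3": "bus"}
review = {"box_1": "car", "box_2": "van", "box_3": "bus"}
```

The same shape applies one level up: to score a review, compare it against the audit instead.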