There are two main quality indicators that can readily be improved once you understand the underlying issues. Building a well-performing project is a highly iterative process, so these scores are meant to help you keep a pulse on what can be improved:
- Evaluation accuracy by task, found in Quality Lab > Evaluation Tasks
- Overall evaluation accuracy, based on audit results
The issues queue is found in Quality Lab and serves as a centralized location where the Rapid platform automatically surfaces quality-impacting issues for you to resolve. Each issue is assigned one of three severity levels:
- Blocking issues: these are extremely important to resolve and usually involve errors in project setup that fundamentally undermine key features. For instance, a lack of review-stage evaluation tasks would make it impossible to control reviewer quality, which would significantly impact every task.
- Severe issues: these are highly likely to affect quality in a wide variety of situations. For instance, quality tasks with an outdated taxonomy may no longer appear identical to production tasks, allowing observant Taskers to recognize them and provide inaccurate quality signals.
- Regular issues: resolving these is still likely to improve quality on your project, even though they do not block tasks or open quality vulnerabilities. For instance, adding concepts and difficulties allows evaluation tasks to be served in a more balanced way, broadening each Tasker's domain knowledge.
Within the issues queue, you can browse a sorted list of issues and resolve them on a case-by-case basis.
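To make the idea of balanced serving by concept and difficulty concrete, here is a minimal sketch. Everything in it is an assumption for illustration only: the task fields, the bucketing by (concept, difficulty), and the round-robin strategy are not Rapid's actual data model or serving logic.

```python
import random
from collections import defaultdict

# Hypothetical evaluation tasks tagged with a concept and a difficulty.
tasks = [
    {"id": 1, "concept": "geometry", "difficulty": "easy"},
    {"id": 2, "concept": "geometry", "difficulty": "hard"},
    {"id": 3, "concept": "algebra", "difficulty": "easy"},
    {"id": 4, "concept": "algebra", "difficulty": "hard"},
    {"id": 5, "concept": "algebra", "difficulty": "easy"},
]

def balanced_sample(tasks, n, seed=0):
    """Serve n tasks by cycling over (concept, difficulty) buckets,
    so no single bucket dominates what a Tasker sees."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for t in tasks:
        buckets[(t["concept"], t["difficulty"])].append(t)
    order = sorted(buckets)  # deterministic rotation over buckets
    served = []
    i = 0
    while len(served) < n:
        bucket = buckets[order[i % len(order)]]
        served.append(rng.choice(bucket))
        i += 1
    return served

# With 4 buckets and 4 draws, each (concept, difficulty) pair is served once.
sample = balanced_sample(tasks, 4)
```

Without the concept and difficulty tags, the server would have no signal to balance on, which is why adding them is surfaced as an issue.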
Improving Calibration Score
Calibration Scores are reported at the batch level; each one can be accessed through Batches > Calibration Batch.
A Calibration Score only appears after you have finished a full audit of that Calibration Batch.
Improving Evaluation Score
You can find an overall picture of your project's accuracy in Metrics.
Keep in mind that while Evaluation Task Accuracies are intended to represent your project as a whole, they are only a summary of the tasks you selected as Evaluation tasks.
Maintaining a healthy set of evaluation tasks is therefore essential for getting high-quality data.
See more: Examples of various Evaluation Task curves and what they might indicate
You can also see individual task accuracies in the Quality Lab view.
Diving into an evaluation task type brings up each task, along with its average accuracy and number of completions.
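As a rough illustration of how per-task accuracies might roll up into an overall figure, here is a sketch under the assumption that the aggregate is a completion-weighted average. The field names and the formula are assumptions for illustration; the platform's exact aggregation may differ.

```python
# Hypothetical per-task stats: average accuracy and completion count.
task_stats = [
    {"task": "eval-1", "accuracy": 0.90, "completions": 50},
    {"task": "eval-2", "accuracy": 0.60, "completions": 10},
    {"task": "eval-3", "accuracy": 0.80, "completions": 40},
]

def overall_accuracy(stats):
    """Completion-weighted average: tasks completed more often
    contribute proportionally more to the overall score."""
    total = sum(s["completions"] for s in stats)
    return sum(s["accuracy"] * s["completions"] for s in stats) / total

acc = overall_accuracy(task_stats)  # ≈ 0.83
```

This weighting is why a rarely-served task with poor accuracy moves the overall number far less than a heavily-served one, and why individual task views matter alongside the aggregate.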