Product

Nucleus Simplified—Dataset Management for Everyone

August 17, 2022

Nucleus is the centralized, scalable, and collaborative control center for your machine learning (ML) model’s training data. Nucleus helps you update missing or erroneous labels, and identify model failure cases so that you can direct your data curation efforts with the express goal of training better models.

With simple integrations for model testing, validation, and deployment, Nucleus can serve as your one-stop shop for continuously iterating on your dataset for superior model quality, even in the face of new edge cases, model drift, and unexpected changes in environmental variables. Nucleus helps you tackle the long tail, the surest path to training models that can address new scenarios and eliminate failure modes.

Read on to learn the essentials—how Nucleus can help you manage your dataset efficiently.

The basics:

Addressing model failures:

  • Where is my model currently failing and how can I fix these issues with the next round of labeling data?

Once you’ve uploaded ground truth labels and at least one set of predictions for your dataset, you can click into the “Objects” view in Nucleus to identify False Positives, as shown in the video below. You can also identify other common failure modes, such as False Negatives or low Intersection-over-Union (IoU) scores, for image data.

[Video: identifying False Positives in the Objects view]
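
If you prefer to work from code, the same starting point (ground truth plus one set of predictions) can be set up with the Nucleus Python SDK. The sketch below is illustrative: the dataset name, file path, labels, and box coordinates are made up, and the exact method names should be checked against the current scale-nucleus docs.

```python
import nucleus
from nucleus import BoxAnnotation, BoxPrediction, DatasetItem

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

# Create a dataset and register one image (path and IDs are illustrative).
dataset = client.create_dataset("vehicle-detection")
dataset.append([DatasetItem(image_location="s3://my-bucket/img_001.jpg",
                            reference_id="img_001")])

# Upload a ground truth box for that image.
dataset.annotate([BoxAnnotation(label="car", x=10, y=20, width=100, height=50,
                                reference_id="img_001")])

# Register a model and upload one of its predictions, including confidence.
model = client.create_model(name="my-detector", reference_id="my-detector-v1")
dataset.upload_predictions(model, [
    BoxPrediction(label="car", x=12, y=18, width=98, height=55,
                  confidence=0.87, reference_id="img_001"),
])
```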

Addressing label quality issues:

  • How can I easily check and improve the quality of existing labels?

With your ground truth labels already in Nucleus, along with one set of model predictions, the IoU slider and the “sort by” drop-down menu let you sort classifications by confidence from low to high. You can then label or fix the lowest-confidence images with Rapid or, if you’re an enterprise customer, with your own labeling pipeline.

[Video: sorting labels by confidence to find quality issues]
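
This low-to-high confidence triage is also easy to reproduce on exported predictions; the records below are hypothetical stand-ins for an export.

```python
# Hypothetical exported classifications: (image reference_id, label, confidence).
predictions = [
    ("img_001", "car", 0.98),
    ("img_002", "truck", 0.41),
    ("img_003", "bus", 0.67),
]

# Lowest confidence first: these are the items most worth re-labeling.
for ref_id, label, confidence in sorted(predictions, key=lambda p: p[2]):
    print(f"{confidence:.2f}  {ref_id}  {label}")
```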

Breaking down your data in new ways:

Finding images without labels:

  • How can I find images with certain properties for which no ground truth label is available?

If no label exists for the concept you have in mind, you can search your dataset with simple, natural-language text. Using CLIP embeddings under the hood, Natural Language Search lets you find images with English phrases instead of the structured query language you use for labels and metadata.

[Video: searching a dataset with natural language]
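
Nucleus runs this search for you, but the underlying idea is easy to see with an open-source CLIP model, here via sentence-transformers. This is a rough stand-in for intuition, not Nucleus’s internal implementation; the file names and query are made up.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP embeds images and text into one shared space, so a plain-English
# query can be compared directly against image embeddings.
model = SentenceTransformer("clip-ViT-B-32")

image_paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # illustrative
image_embeddings = model.encode([Image.open(p) for p in image_paths])
query_embedding = model.encode(["a pedestrian crossing the street at night"])

# Rank images by cosine similarity to the text query.
scores = util.cos_sim(query_embedding, image_embeddings)[0]
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:.3f}  {path}")
```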

Finding similar images:

  • I’ve identified an example image: how can I find all similar images in my dataset?

If no label is present for a class of images, but you’ve found a sample image, you can use Object or Image Autotag to build up a Slice of your dataset consisting of similar images. A step-by-step process shows you how to provide positive and negative examples, over multiple rounds, to fine-tune your search. When you query your Autotag, you can set a threshold to broaden or narrow the scope of the results.

[Video: building an Autotag from a sample image]
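
Conceptually, refining an Autotag amounts to scoring every image against a direction in embedding space built from your positive and negative examples, then thresholding the scores. A minimal sketch of that intuition, using random vectors in place of real image embeddings:

```python
import numpy as np

# Stand-in embeddings: in practice these would come from an image model such
# as CLIP; random unit vectors keep the sketch runnable on its own.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))          # one row per dataset image
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

positive_idx = [3, 17, 42]   # images you marked as good examples
negative_idx = [5, 99]       # images you marked as bad examples

# Score every image along the direction from negative to positive examples,
# the same intuition behind refining an Autotag over multiple rounds.
direction = embeddings[positive_idx].mean(axis=0) - embeddings[negative_idx].mean(axis=0)
direction /= np.linalg.norm(direction)
scores = embeddings @ direction

threshold = 0.1   # raise to narrow the match set, lower to broaden it
matches = np.nonzero(scores > threshold)[0]
print(f"{len(matches)} images above threshold")
```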

Debug your models:

Find classification failures:

  • How can I find examples of classes in which my model and my labels disagree?

In Objects mode, once you select a model (whether you uploaded its predictions or generated them from a model in our Model Zoo), you can find a Confusion Matrix on the Charts page. This grid uses color to highlight cases in which the labels and the model disagree on a classification. You can click any cell to filter your entire dataset down to just those examples and further explore where your model fails and where it succeeds.

[Video: exploring the Confusion Matrix]
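
The same view is straightforward to rebuild outside the UI from exported labels and predictions, for example with scikit-learn; the class names below are invented.

```python
from sklearn.metrics import confusion_matrix

# Ground truth vs. model predictions for a handful of objects (illustrative).
labels      = ["car", "car", "truck", "bus", "truck", "car"]
predictions = ["car", "truck", "truck", "bus", "car", "car"]

classes = ["bus", "car", "truck"]
# Rows are ground truth, columns are predictions; off-diagonal cells are
# disagreements, the cells you would click on in Nucleus.
print(confusion_matrix(labels, predictions, labels=classes))

# Equivalent of clicking one cell: objects labeled "truck" but predicted "car".
cell = [i for i, (l, p) in enumerate(zip(labels, predictions))
        if l == "truck" and p == "car"]
print(cell)
```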

Sorting and filtering both failures and successes:

  • How can I find the best and worst classification examples?

In Objects mode, once you select a model (whether you uploaded its predictions or generated them from a model in our Model Zoo), you can sort all of your data objects by confidence or Intersection-over-Union (IoU), or bound your results with minimum and maximum thresholds for these scores. You can also filter by specific types of confusion, such as False Positives and False Negatives.

[Video: sorting and filtering by confidence and IoU]
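
For reference, IoU is simply the area of intersection over the area of union between a predicted box and its ground-truth box; a small self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    intersection = inter_w * inter_h
    union = aw * ah + bw * bh - intersection
    return intersection / union if union > 0 else 0.0

# A prediction that overlaps its ground-truth box fairly well scores ~0.89.
print(iou((10, 20, 100, 50), (12, 18, 98, 55)))
```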

Avoiding regressions:

  • How can I ensure I'm not breaking anything that worked with a previous model or dataset?

As you improve and expand your dataset, then train new models, you’ll want to ensure that your new models don’t fail on cases that your old models solved successfully. Through Scale Validate, Nucleus helps you track metrics such as mean average precision (mAP), precision, and recall on specific scenarios in your dataset. You can also define your own metrics to track over time. This process ensures that every label addition and dataset modification yields improvements rather than regressions. You can think of it as continuous integration (CI) or regression detection for your models.

[Video: tracking metrics with Scale Validate]
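
The CI analogy can be made concrete: a regression check compares per-scenario scores against a baseline and fails when any scenario drops past a tolerance. A sketch with made-up mAP numbers (in Nucleus, Scale Validate tracks these scores for you):

```python
# Hypothetical per-scenario mAP for a baseline model and a candidate model.
baseline  = {"night": 0.71, "rain": 0.64, "highway": 0.88}
candidate = {"night": 0.74, "rain": 0.58, "highway": 0.89}

TOLERANCE = 0.02  # allowed drop before we call it a regression

regressions = {
    scenario: (baseline[scenario], candidate[scenario])
    for scenario in baseline
    if candidate[scenario] < baseline[scenario] - TOLERANCE
}

# Gate the model promotion like a failing unit test; with these numbers,
# the "rain" scenario trips the check.
if regressions:
    raise SystemExit(f"Model regressed on: {regressions}")
print("No regressions; safe to promote the new model.")
```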


Ready to try Nucleus? Click here to set up your first project.

