The Learning Curve: Humans-in-the-Loop Don't Perform Gradient Descent

Recorded Aug 28th @ 12:30 AM | Online

About The Webinar

Join us for a Tech Talk!

Data labeling at scale is a complicated endeavor involving operations, machine learning, and game theory. In this talk we’ll focus on the game theory and introduce a model of human-in-the-loop labeling as an optimization problem. Drawing on our (Scale AI’s) experience operating tens of thousands of labelers on high-complexity tasks, we will discuss how the problems the model predicts map to the results we observe in practice.
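To give a flavor of what "labeling as an optimization problem" can mean, here is a minimal sketch (our own illustration, not Scale AI's actual model): labelers are treated as noisy Bernoulli observers, tasks are resolved by majority vote, and a fixed budget of labeling passes is allocated across tasks to minimize expected error. The function names, accuracy model, and budget constraint below are all assumptions made for this example.

```python
# Hypothetical sketch: human-in-the-loop labeling posed as an optimization problem.
# Assumptions (not from the talk): each labeling pass is an independent
# Bernoulli(accuracy) observation, tasks are aggregated by majority vote,
# and a fixed budget of passes is split across tasks of varying difficulty.

from itertools import product
from math import comb


def majority_error(accuracy: float, passes: int) -> float:
    """Probability that a majority vote over `passes` labelers is wrong (ties count as wrong)."""
    return sum(
        comb(passes, k) * accuracy**k * (1 - accuracy) ** (passes - k)
        for k in range(passes + 1)
        if k <= passes / 2
    )


def best_allocation(task_accuracies, budget, max_passes=5):
    """Brute-force search over odd per-task pass counts that minimizes
    total expected error while staying within the pass budget."""
    options = range(1, max_passes + 1, 2)  # 1, 3, 5 passes per task
    best = None
    for alloc in product(options, repeat=len(task_accuracies)):
        if sum(alloc) > budget:
            continue
        err = sum(majority_error(a, n) for a, n in zip(task_accuracies, alloc))
        if best is None or err < best[0]:
            best = (err, alloc)
    return best


if __name__ == "__main__":
    # Three tasks: one easy (labelers 95% accurate on it), two hard (70%).
    accuracies = [0.95, 0.70, 0.70]
    expected_error, passes_per_task = best_allocation(accuracies, budget=7)
    print(passes_per_task, round(expected_error, 3))  # extra passes go to the harder tasks
```

Even this toy version shows the kind of trade-off the talk's framing suggests: spending review passes where labelers disagree most buys more accuracy per unit of cost than spreading them uniformly.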

We welcome all Product Managers, Machine Learning Engineers, Software Engineers, Technical Program Managers, Research Scientists, and Applied Scientists to this technical discussion about optimizing human-in-the-loop labeling.

Agenda

  • Intro to Human-in-the-Loop Labeling
  • Similarities to training an ML Model
  • Data augmentations for training labelers
  • The right loss function
  • Q&A

Presented By

Akshat Bubna

Akshat is currently an engineering manager at Scale AI, where he leads a team building software and systems to label text, audio, and computer vision ground-truth data for ML teams with incredible accuracy. He previously worked at D. E. Shaw and at database and fintech startups, and studied computer science and mathematics at MIT. A former competitive programmer, he was the first competitor from India to receive a gold medal at the IOI.