Scale’s Role In Building a Safer Internet

April 25, 2025

Training AI models to behave responsibly in the real world means preparing them for the full range of online content — including the challenging parts. It’s not easy work, but it’s necessary. At Scale, we believe that building AI systems that avoid harmful, abusive, or dangerous behavior is one of the most important challenges of our time. And we’re proud to support the people who make this possible.

AI is becoming a bigger part of people’s daily lives, unlocking new possibilities across work, learning, and creativity. As these systems grow more capable, it’s important to ensure they’re built to navigate complex situations safely and responsibly. This is why testing models against edge cases is so critical. It helps AI systems better understand context, avoid harmful pitfalls, and make more thoughtful decisions. With the right guardrails in place, we can help AI reach its full potential while keeping people safe.

At Scale, we help customers do this safely and responsibly by providing high-quality evaluation and alignment data. Outlier, a Scale product, plays a key role in advancing GenAI through specialized human expertise. Among other tasks, Outlier contributors help AI systems recognize and respond appropriately to harmful prompts. This is a relatively small but important part of their overall contributions to model safety and performance.

Because this work can involve sensitive material, we’ve structured our contributor programs around foundational trust and safety pillars:

  • Transparency: Contributors receive advance notice about the nature of their tasks, including warnings when content may involve sensitive topics like violence or abuse.

  • Flexibility: Participation is always optional. Contributors can opt out of projects at any time, with no impact on their ability to continue working on the platform in the future.

  • Wellness resources: We offer weekly wellness sessions and dedicated support programs for contributors who want guidance or need help processing difficult material.

  • Community guidelines and clear task standards: We provide detailed specifications and guardrails to ensure tasks are completed accurately, safely, and within scope.

This approach gives contributors autonomy, sets them up for success, and helps ensure the work is handled ethically and responsibly.

“While tasking on Outlier, some projects involve reviewing sensitive content to help train safer AI systems. We receive advance notice about the nature of these tasks and always have the option to opt out entirely. We're also encouraged to skip any task we're uncomfortable with—no questions asked. Health and wellness resources are available to support us, though I’ve never felt the need to use them.” – Scott O'Neil, Outlier Contributor

Scale’s customers are the world’s most innovative model developers, enterprises, and public sector entities looking to train, deploy, and evaluate AI in their organizations. These forward-thinking organizations understand that safety isn’t optional. For many of our customers, evaluating edge-case scenarios is an essential step in aligning models with human values. Their trust in Scale reflects our shared commitment to building AI that is both powerful and responsible.

We’re proud of the work we’ve done, the people we support, and the values we uphold. The path to safer AI is challenging, and as AI continues to evolve, so will our approach to trust and safety. We’ll keep raising the bar on transparency, contributor care, and responsible training practices.
