Company Updates & Technology Articles
June 5, 2025
In this episode, we break down a critical component of AI governance: red teaming. We explore how traditional safety approaches fall short in enterprise contexts, why agentic systems raise the stakes, and what it takes to build a red teaming program that scales with your AI maturity.
As advanced AI rapidly evolves, red teaming needs an updated approach. Scale researchers propose a shift toward testing AI systems, not just models, in real-world contexts, with a focus on product safety and realistic threats.
May 29, 2025
In the latest episode of Human in the Loop, we discuss a key pillar of AI governance: guardrails. We cover what works and what doesn't when implementing guardrails in enterprises, and how to best wield them as a tool in your governance toolbox.
May 15, 2025
At Scale, we believe great people are the foundation of great AI. Today, we’re excited to welcome a group of new team members from Papercup, a London-based company that built an impressive AI dubbing platform.
In the latest episode of Human in the Loop, we look ahead at what the next generation of agents will need to work more effectively in an enterprise context.
May 9, 2025
LLMs are writing short fiction, but how good are they really? Sparked by a viral AI-generated story, this analysis dives into how an unreleased version of ChatGPT, Google's Gemini, and Anthropic's Claude tackle the challenging task of creating metafiction about AI and grief. Discover their unique approaches to self-awareness, philosophical depth, and the critical challenge of conveying genuine emotional texture in storytelling. A revealing look at the current state and future potential of AI in literature.
May 8, 2025
Welcome to Human in the Loop with Scale AI. We're kicking off with an episode on the current AI agent landscape and what it takes for enterprises to move beyond demos to real, reliable agentic systems.
May 1, 2025
Responding to Dario Amodei's urgent call for more resources devoted to AI interpretability, we agree on its importance while stressing the indispensable role of evaluations. Discover why understanding AI's internals and rigorously measuring its behavior are both necessary to ensure a future where AI is safe, steerable, and aligned with human values.