Scale AI Research
Scale AI’s mission is to accelerate the development of AI applications. By advancing research, we aim to create AI systems capable of solving complex, human-level problems.
Revisiting the Superficial Alignment Hypothesis
Learning Goal-Conditioned Representations for Language Reward Models
Planning In Natural Language Improves LLM Search For Code Generation
A Careful Examination of Large Language Model Performance on Grade School Arithmetic
LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet
Refusal-Trained LLMs Are Easily Jailbroken As Browser Agents
Frontier AI Research
SEAL (Safety, Evaluations, and Alignment Lab) is our research initiative to improve model capabilities through challenging private evaluations and novel safety research.