
As AI becomes more capable and widely adopted around the world, we need systems people can trust. At Scale AI, we believe that trust isn't granted; it's earned through testing, transparency, and collaboration. Building it requires a coordinated global effort grounded in shared standards and rigorous evaluation.
Today at the 2025 Seoul Forum on AI Safety & Security, we announced a new strategic collaboration with the Korea AI Safety Institute (Korea AISI) to strengthen international cooperation on AI safety. The Memorandum of Understanding (MoU), signed at the event, brings together Scale's technical expertise in model evaluation with Korea AISI's leadership in AI research and governance.
This work aligns closely with Scale's mission to build reliable AI systems for the world's most important decisions. It also advances our effort to bridge the gap between frontier AI researchers and global policymakers by generating the data and insights needed to inform evaluations and governance for large language models.
The MoU cements our long-standing partnership and builds on our ongoing collaboration around joint research, LLM evaluations, and red teaming initiatives aimed at identifying vulnerabilities and potential adversarial exploits in AI systems.
Scale AI's work with frontier model developers, governments, and enterprises is rooted in one principle: reliable AI starts with reliable data and rigorous evaluation.
Korea AISI shares that commitment. As Korea’s national institute for AI safety, Korea AISI has become one of the world’s leading centers for responsible AI research. They bring a deep understanding of AI safety research and policy, specifically tailored to the unique regulatory and societal context of Korea, while contributing to global standards. By combining Scale’s technical depth and Korea AISI’s policy leadership, we are creating a bridge between the people developing advanced AI systems and the people guiding how they’re governed.
Areas of Collaboration
The MoU establishes a framework for collaboration on AI safety and security research built on mutual data sharing, joint technical work, and engagement on policy development. The collaboration will prioritize technical work across three key areas:
Joint Research, Benchmarking, and Red Teaming for Frontier AI: We will collaborate with Korea AISI on joint research related to AI interpretability, resilience, and governance. Together, we'll run red teaming initiatives to identify vulnerabilities and adversarial exploits, develop shared threat models and mitigation strategies, and build new benchmarks to test and strengthen frontier AI systems.
Advancing AI Safety Standards: We’re combining Scale AI’s practical experience in data and model evaluation with Korea AISI’s policy expertise to help develop globally recognized AI safety standards. Our goal is to create actionable, measurable criteria that can be adopted globally to ensure AI systems are robust, fair, and transparent.
Informing Policy with Data-Driven Insights: We'll bridge the gap between cutting-edge AI research and practical policymaking. Beyond technical research, the partnership will also include policy collaboration to inform the implementation of the AI Basic Act of the Republic of Korea and Korea's future approach to domestic AI policy, and to contribute to shaping global AI governance frameworks.
The Future of AI is Collaborative
No single company or country can ensure AI safety alone. Models are trained across borders, tested across languages, and deployed everywhere. What happens in one lab can affect millions of people around the world.
We’re proud to join Korea AISI in this important work to advance global AI safety and security through collaboration, shared research, and transparency. Scale is excited to continue to build partnerships like this to shape how the world studies, tests, and governs advanced AI systems, combining deep technical research with forward-looking policy work. Secure and reliable AI is a shared responsibility, and we’re committed to building it together.