SEAL: Scale’s Safety, Evaluations and Analysis Lab

November 8, 2023

As the leading test and evaluation partner for frontier AI companies, Scale plays an integral role in understanding and safeguarding large language models (LLMs).

Our commitment to fostering safe, secure, and trustworthy AI is the bedrock of our mission. In August, we outlined our vision for testing and evaluation of frontier models and introduced a platform designed to establish an industry-wide scalable framework, mitigate bias, and uphold the principles of Responsible AI. In September, we proudly signed onto the Biden-Harris Administration’s voluntary commitments to ensure that AI is safe, secure, and trustworthy. 

Today, we're thrilled to further our commitment to this cause, unveiling our new frontier research effort, the “Safety, Evaluations and Analysis Lab” (SEAL). This cutting-edge initiative is dedicated to building robust evaluation products and tackling the challenging research problems in evaluation and red teaming. Summer Yue, the former RLHF research lead for Bard at Google DeepMind, will be joining Scale to lead SEAL as Director of Safety and Standards.

Our plan to build a robust test & evaluation platform

Typically, AI companies establish their safety guidelines and evaluation criteria in-house. Many frontier model developers collaborate with Scale to have experts meticulously apply these criteria while conducting evaluations. However, we've identified some limitations and inefficiencies in this approach:

  • Creating guidelines from scratch and ensuring their consistent application are labor-intensive tasks that slow the pace of progress for AI companies.
  • These guidelines, often devised without extensive community feedback, may overlook crucial safety aspects.
  • Some safety requirements tend to be duplicated across companies, with slight variations in guidelines resulting in inconsistent safety evaluation results and a lack of transparency.

Likewise, every enterprise and government deploying an LLM application will need to adopt and comply with forthcoming standards and regulations. Integrating each of these rules will require a detailed evaluation process: creating clear guidelines and managing human evaluators so that the rules are straightforward to apply in practice.

Our plan is to create a suite of broadly applicable evaluation products on our T&E platform, initially addressing the most prominent and clear-cut potential safety issues outlined in the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, such as AI-enabled development of dangerous biological materials, cybersecurity risks, and deceptive AI-generated content.

In pursuit of this goal, Scale is committed to ramping up investments in advanced red teaming research and evaluation methodologies. We plan to work in collaboration with government stakeholders, standards-setting organizations, our current partners in frontier model development, and the broader AI safety community. We believe that sharing our progress openly and engaging with the research community will be crucial as we refine our approach to deliver the most reliable and valuable evaluation products.

We recognize the importance of catering to the unique requirements of different products and are dedicated to continuing our customized evaluation offerings to meet the specific needs of our customers.

SEAL: Scale’s Safety, Evaluations and Analysis Lab

We are excited to introduce the Safety, Evaluations, and Analysis Lab (SEAL), our first major research initiative aimed at enhancing transparency and standardization in the safety and compliance processes associated with LLM deployments.

SEAL’s primary objective is to work in collaboration with standard-setting bodies to establish benchmarks for deploying LLMs, by developing comprehensive evaluation products and advanced red teaming solutions. The SEAL research team is set to undertake various projects, including:

  • Designing and implementing robust evaluation benchmarks, and measuring and improving evaluation reproducibility and reliability.
  • Researching and implementing cutting-edge model-assisted red teaming methods.
  • Developing state-of-the-art LLM-based automated rating systems that leverage the Scale ecosystem for customized training data (a minimal sketch follows this list).
  • Developing rater-assist techniques, such as critique modeling, to improve rating quality.
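
To make the last two directions concrete, here is a minimal, hypothetical sketch of an LLM-based automated rater that uses critique modeling: the rater model first writes a short critique of a response against a guideline, then conditions its numeric score on that critique. The call_llm helper is a placeholder for whichever completion API you use, and the prompts and 1-5 scale are illustrative assumptions, not SEAL's actual methodology.

```python
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion API (hypothetical; swap in your provider's client)."""
    raise NotImplementedError

@dataclass
class Rating:
    score: int      # illustrative 1-5 guideline-adherence score
    critique: str   # natural-language justification from the rater model

def rate_response(instruction: str, response: str, guideline: str) -> Rating:
    # Critique first (critique modeling): a written justification makes the
    # final score easier for human reviewers to audit.
    critique = call_llm(
        f"Guideline:\n{guideline}\n\n"
        f"Instruction:\n{instruction}\n\nResponse:\n{response}\n\n"
        "Critique the response against the guideline in 2-3 sentences."
    )
    # Then condition the numeric rating on that critique.
    raw_score = call_llm(
        f"Critique:\n{critique}\n\n"
        "On a 1-5 scale, how well does the response satisfy the guideline? "
        "Answer with a single integer."
    )
    return Rating(score=int(raw_score.strip()), critique=critique)
```

In practice, a human reviewer can spot-check the stored critiques to verify that automated scores track the written guideline rather than surface features of the response.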

We are actively seeking experienced ML researchers to join us in shaping the landscape for safety and transparency for the entire AI ecosystem. If you are passionate about making a difference, apply to join the SEAL team here.

