Scale AI Policy Framework: Key Elements to Ensure American Leadership in AI

November 8, 2023

This past month has perhaps been the most consequential for AI governance to date. President Biden unveiled his Administration’s long-awaited Executive Order (EO), the Office of Management and Budget (OMB) released its Federal Agency AI Implementation Memo, the G7 released its AI Guiding Principles, and global leaders attended the first-ever Global AI Safety Summit in the UK. Scale is pleased to support all of these efforts, which build on two previous Executive Orders, bipartisan pieces of legislation, months of Congressional hearings, Insight Forums, and other events that have started to shape the United States’ approach to AI governance. 


As Vice President Kamala Harris said during her remarks at the EO event on October 30, “Let us be clear: When it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyze global action and build global consensus in a way that no other country can.” This vision can only be realized if the United States’ governance framework maximizes innovation while putting in place the proper guardrails. The EO and OMB Implementation Memo are foundational steps that advance that vision in four key ways:


  • First, these efforts recognize the critical role that thorough red teaming and use-case-specific test and evaluation will play in safely deploying AI. Scale has long supported these practices and has worked across industry to pioneer techniques for red teaming and test and evaluation. We recently released our methodology, which uses red teaming to expose unintended vulnerabilities, followed by a risk-based approach to test and evaluation to ensure AI is safe to deploy. 


  • A risk-based approach matches the level of risk associated with the anticipated use to an appropriately rigorous test and evaluation process. As the OMB Memo points out, certain uses like autocorrection of text pose a very different risk than AI being used for medical diagnosis, and their safety assessments should be treated accordingly. Paired together, these two techniques provide the confidence necessary to rely on AI for specific tasks. 


  • The EO highlights the necessity of consensus-based standards to put in place a repeatable and robust process for test and evaluation. Specifically, the National Institute of Standards and Technology (NIST) is tasked with launching new working groups, including industry experts, to write those standards. Once completed, these standards will be critical references in future rulemakings and legislation, and will ultimately underpin the deployment of safe AI systems.  


  • Second, the EO and OMB Memo recognize that data is a critical asset for the government and direct agencies to prioritize AI-ready data. This is important because AI-ready data is the key to unlocking the power of AI. The government recently released a report cataloging more than 700 different government AI use cases, but the data infrastructure to support them must be in place for them to succeed. AI is only as good as the data that powers it, and without the proper data infrastructure, the government’s ability to efficiently adopt AI will be limited. The Department of Defense has long recognized the value of AI-ready data and has called for it within its own data strategy, and the OMB Memo calls data a “National Asset.” 


It is critical that agencies and Congress both prioritize AI-ready data strategies and devote adequate resources to carrying those strategies out. 


  • Third, the EO and OMB Memo encourage agencies to adopt generative AI for day-to-day work in ways that are safe and responsible. Generative AI applications like ChatGPT have taken the world by storm over the past year, and there is no shortage of use cases that would enhance the government’s daily work. For example, a large language model at the Department of Energy could drastically reduce the time it takes to understand the electrical grid’s readiness for next-generation transportation technologies. Encouraging agencies to embrace the technology responsibly is a step in the right direction, and when paired with test and evaluation, it will help the government address data overload, one of the most pressing challenges the government faces. 


  • Fourth, the EO establishes “Chief AI Officers” at every agency. As the use case inventory referenced above shows, there is no shortage of potential AI use cases at each agency, and this new position creates a leader responsible for ensuring that each agency pursues them. Beyond identifying use cases, this person will also have the critical tasks of leading efforts on AI-ready data, building out a team that works throughout the agency to train staff on the benefits of AI, and identifying the funding necessary to carry out these actions.


This is a critical time for the future of U.S. leadership in AI, and the release of these documents is just the starting point. Scale calls on the Administration and Congress to continue prioritizing AI over the coming months, because there will be no shortage of work needed across the U.S. government. This includes implementing these new requirements, kicking off rulemakings, and developing the underlying standards that will serve as their foundation. 


Lastly, Congress still has an important role to play. Scale looks forward to continuing to work with bipartisan leadership in the House and Senate to craft legislation that codifies provisions from these documents, funds them where necessary, and fills the gaps that still exist.
