by Max Fenkell

Artificial intelligence is no longer a subject of theoretical policy debate; it is now a test of governance. Since the launch of ChatGPT in November 2022, the United States has moved rapidly from studying AI’s capabilities and risks to enabling its growth. During this period, early restraint helped avoid premature regulation and preserve U.S. leadership.
Over the past year, Congress and the Trump Administration have begun using the full range of policy tools available to them—accelerating data center permitting, directing investment through the Departments of Energy and War, and enabling access to the American AI technology stack abroad. These actions mark a clear shift from experimentation to execution. But they do not yet amount to a governing strategy capable of sustaining U.S. leadership in AI.
The central question facing policymakers is no longer whether AI will reshape the economy and national security. It is whether the federal government can govern, deploy, operationalize, and scale AI systems to maintain leadership.
If 2023 and 2024 were about understanding AI, and 2025 was about enabling growth, then 2026 must be the year of governance and implementation. Securing U.S. leadership in the AI era will require progress on three priorities.
AI Governance Should Modernize Existing Law, Not Replace It
Challenge: Effective AI governance must be grounded in how AI systems are used, not in the technology itself. Use-based regulation is the only approach that preserves innovation while placing guardrails where real risks arise. Treating all AI systems as interchangeable regardless of context, function, or impact creates regulatory uncertainty without improving reliability, accountability, or outcomes.
In practice, AI is deployed in two fundamentally different ways: the broad release of general-purpose models, and the use-case-specific application of AI systems within defined operational and regulatory environments. Both are essential to the AI ecosystem, but they present materially different risk profiles and therefore require different forms of oversight.
For use-case-specific AI applications, the appropriate starting point is the existing regulatory system. U.S. law already governs outcomes and conduct across sectors such as financial services, housing, healthcare, and consumer protection. In many cases, the law itself does not need to change. What is missing is clear, authoritative guidance on how existing requirements apply, and can be complied with, when AI systems are involved.
That guidance should modernize compliance rather than invent new regulatory regimes. In practice, this may mean replacing static practices, such as generalized employee training or paper-based controls, with obligations to work with third parties to test, evaluate, and red-team AI systems against real-world risks. The underlying legal standards remain the same; what changes is how compliance is demonstrated in an AI-enabled environment.
In parallel, governance of the broad deployment of general-purpose models must be clearly scoped and disciplined. Overly expansive, technology-based regulation at the model level risks slowing adoption and investment without addressing downstream harms, which overwhelmingly arise at the point of use.
Recommendation: Congress and the Administration should direct federal agencies to adopt a use-based approach to AI governance and issue clear, authoritative guidance clarifying how existing laws apply to AI-enabled systems and how companies can comply with them through third-party testing. Agencies should distinguish among areas where current frameworks are sufficient, where guidance is needed, and where regulatory gaps exist.
U.S. AI Leadership Depends on Government-Wide Implementation
Challenge: Much of the federal AI agenda has focused on governance frameworks and access to technology. That work is necessary but insufficient. Winning on AI requires adoption, integration, and sustained operationalization across government departments and agencies, and the federal government is not yet positioned to deliver that at scale.
Recent efforts to make large language models available to government users are a positive first step. These tools can improve back-office productivity and day-to-day efficiency. However, access alone will not resolve the government’s most persistent operational bottlenecks. The hardest problems—permitting delays, acquisition backlogs, compliance workflows, and benefits administration—require use-case-specific AI systems designed to operate within complex, regulated processes.
Permitting reform illustrates the challenge. Despite wide recognition of the need to modernize the permitting process, progress has been incremental. Historically, agencies faced a trade-off between speed and completeness. Today, that trade-off is no longer inevitable: meaningful improvement will require AI applications that support the entire permitting workflow, from intake and review to interagency coordination and decision-making. Even then, deployment remains difficult.
Legacy policies, fragmented data systems, and unclear operational ownership continue to block implementation across agencies. Data remains siloed, infrastructure uneven, and responsibility for AI deployment diffused. As a result, promising pilots stall and successful proofs of concept fail to scale.
True AI implementation in government will require a different foundation: routine data sharing across agencies, government-wide AI-ready data and infrastructure, and clear accountability for outcomes. Today, those foundations do not exist.
Recommendation: Congress should pass legislation or the Administration should take action to establish a Chief AI Officer interagency working group, led by the White House, with responsibility for removing—not merely identifying—barriers to AI implementation across the federal government. The group should drive actions on data sharing, AI-ready infrastructure, and operational ownership for priority use cases, with defined deliverables and timelines.
Advancing U.S. Leadership through AI Exports and Standards
Challenge: We are entering a decisive period. Over the next year, foundational standards governing how AI systems are built, evaluated, and deployed worldwide will take shape. If the United States does not lead that process, others will. China has already begun to export not only AI technologies, but also the standards and governance models that accompany them. The strategic consequences of losing that contest would extend far beyond the technology sector.
This risk is not theoretical. In the race to shape global 5G standards, China moved quickly, aligned industrial policy with standards bodies, and embedded its technology across global markets. The United States responded too slowly and without sufficient coordination—and continues to bear the consequences.
The United States must establish itself as the global standard-setter for artificial intelligence. Domestic adoption and sound governance are necessary, but insufficient. Global leadership will be determined by which country’s technology, standards, and operating norms are adopted by allies and partners at scale.
The Trump Administration’s Promoting the Export of the American AI Technology Stack Executive Order reflects a necessary shift in approach. It recognizes that we must assert leadership, not assume it is ours for the taking. To secure durable leadership, we must go beyond exporting the technology stack: U.S. technical standards must become the global default.
The Department of Commerce’s Center for AI Standards and Innovation (CAISI) is central to this effort. Its ability to coordinate across industry, government, and a growing network of international AI safety and security institutions positions it as a critical node in the global standards ecosystem. But without sufficient resourcing and empowerment, CAISI will not be able to deliver.
Recommendation: Congress and the Administration should align trade policy, export promotion, and international standards engagement to ensure that American AI technologies and technical standards become the default among U.S. allies. Failure to coordinate these efforts risks ceding long-term leadership to competitors that move faster and are not aligned with our values.
Conclusion
The United States now faces a different kind of AI race. The defining question is no longer who can build the most powerful models. It is who can deploy AI into real government workflows, measure its performance, and make trusted standards the global default. Preserving leadership will require governing AI through existing law, deploying it effectively across federal institutions, and ensuring that American technologies and standards shape global adoption. These priorities are mutually reinforcing: governance without implementation will stall, implementation without standards will fragment, and standards without credibility will fail to scale. In the AI era, leadership will belong to the countries that can move from policy to practice, and the United States must prove it can do both.