News
Oct 9, 2025 - 10 MIN READ
The Policy Pendulum: US Shifts from Mandatory AI Safety to Accelerated Innovation

A rapid reversal of federal policy has replaced centralized risk management with a mission for global technological dominance, creating a patchwork of state-level guardrails.

I. Lead Section

The governance of Artificial Intelligence in the United States is currently defined by a sharp policy reversal at the executive level, signaling a profound shift in national strategy. After a period that culminated in a comprehensive mandate for AI risk management, the federal government has pivoted abruptly toward accelerating technological supremacy. The volatility matters because established federal security guardrails have been replaced, almost overnight, by a core objective of reducing perceived bureaucratic burdens.

The main action is the rescission of the previous administration's Executive Order 14110 (EO 14110), a landmark measure promoting the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" that required agencies to appoint Chief AI Officers and to focus on preventing AI-enabled threats to civil liberties and national security. That safety-first directive was quickly superseded by Executive Order 14179, which explicitly shifts the focus to "Removing Barriers to American Leadership" in AI and to achieving global dominance in the technology.

The immediate significance of this shift is twofold. First, it prioritizes speed and economic competitiveness over centralized, mandatory risk vetting, directing agencies to "suspend, revise, or rescind" prior AI-related actions and, crucially, to "promptly provide all available exemptions" to existing compliance mandates where changes cannot be finalized immediately. Second, the resulting federal policy vacuum has amplified the role of state and local governments, giving rise to an inconsistent patchwork of regulation across the nation and creating new compliance challenges for companies operating across jurisdictions.

II. Background

The foundation of U.S. AI governance is not as politically volatile as executive orders might suggest, resting instead on core statutory law and technical consensus. The legislative bedrock is the National Artificial Intelligence Initiative Act of 2020 (NAIIA), which outlined a long-term, comprehensive plan for R&D, coordination, and the responsible use of AI across government and private sectors. This bipartisan legislation established the enduring goals of ensuring U.S. R&D leadership, promoting the use of trustworthy AI, and preparing the workforce for this technological integration.  

Policymakers understood early on that fundamental challenges existed, including a lack of clear government understanding of AI's societal and economic effects, limited access to high-quality datasets and computing resources for researchers outside of Big Tech, and a general deficiency in technical standards needed to evaluate system performance.  

To address the trustworthiness deficit, the National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework (AI RMF). This framework—built around four core, iterative functions: Govern, Map, Measure, and Manage—provides a systematic approach for organizations to mitigate AI risks voluntarily. NIST also released a specific Generative AI Profile (NIST AI 600-1) to help organizations identify and manage the unique risks posed by frontier models, such as those related to bias, data poisoning, and secure development practices.  
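
For organizations that adopt the framework voluntarily, the four functions can be mirrored in a lightweight internal risk register. The sketch below is purely illustrative, assuming hypothetical risk names, owners, and activities rather than quoting the RMF itself; it simply checks that every identified risk has at least one planned activity under each function.

```python
from dataclasses import dataclass, field

# Illustrative risk register organized around the four NIST AI RMF
# functions (Govern, Map, Measure, Manage). All risk names, owners,
# and activities below are hypothetical examples, not RMF text.

RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RiskEntry:
    risk: str                                        # e.g. "training-data poisoning"
    owner: str                                       # accountable team or role
    activities: dict = field(default_factory=dict)   # function -> planned activity

    def gaps(self):
        """Return the RMF functions with no planned activity for this risk."""
        return [f for f in RMF_FUNCTIONS if not self.activities.get(f)]

register = [
    RiskEntry(
        risk="biased generative-model output",
        owner="ML platform team",
        activities={
            "govern": "assign accountability and a review cadence",
            "map": "document affected users and deployment context",
            "measure": "run periodic bias benchmarks on held-out prompts",
            # "manage" deliberately left unplanned to show the gap check
        },
    ),
]

for entry in register:
    if entry.gaps():
        print(f"{entry.risk}: no activity planned for {', '.join(entry.gaps())}")
```

A register like this satisfies no mandate on its own; its value is in making the Govern, Map, Measure, and Manage coverage of each identified risk explicit and auditable.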

The recent urgency surrounding AI regulation, however, was driven by a technology shock: the rapid emergence of generative AI. This innovation accelerated risks at scale, particularly the potential for fraud, cybercrime, the proliferation of deepfakes, and manipulation of public opinion. Earlier administrations had issued executive orders promoting "purposeful" and "traceable" AI use, but the tension between technological acceleration and risk management escalated sharply with the advent of easily accessible, powerful generative models. That escalation prompted the initial move toward centralized mandatory safety (EO 14110), which was then quickly abandoned in favor of the current innovation-first policy.

III. Analysis

The current executive direction is driven by the strategic motivation to secure definitive global AI dominance. The policy frame asserts that previous mandatory regulations were "burdensome" and "hindered innovation," positioning deregulation as the necessary catalyst for economic growth and accelerated breakthroughs in key sectors like medicine and manufacturing. A secondary, yet politically significant, motivation is the explicit requirement that AI systems must be "free from ideological bias" and designed to pursue "objective truth," tying the trustworthiness mandate to avoiding political or social engineering agendas.  

However, this light-touch regulatory approach presents a paradoxical consequence for the tech industry: the creation of a "regulatory moat." Rigorous compliance obligations, such as extensive reporting and vetting, impose high fixed costs that large Big Tech firms (hyperscalers) can absorb easily but that "squeeze out smaller start-ups" by diverting their limited R&D resources toward legal overhead. Ironically, while the policy aims for competitiveness, removing mandatory safety requirements does not alleviate the structural issue of market concentration: startups remain heavily dependent on hyperscalers for compute and data access, which lets the giants maintain architectural lock-in and dominance regardless of compliance costs. Antitrust authorities globally, including the U.S. Federal Trade Commission, recognize that the concentration of data, compute, and model ownership risks creating new monopolies and enabling algorithmic collusion.

For policymakers, the strategic implication is that the federal volatility has ceded ground to states. With federal momentum stalled, states like Colorado and California have stepped in, enacting comprehensive AI legislation addressing issues from transparency and discrimination to intellectual property and government accountability. This emerging "patchwork" governance model provides flexibility but guarantees compliance friction, forcing companies to navigate disparate rules rather than a single, predictable national standard. This decentralized approach risks undermining the U.S. competitive edge by requiring firms to divert resources to fragmented legal compliance rather than innovation.  

IV. Key Impacts

The shifting regulatory landscape creates distinct and uneven impacts across stakeholders:

Impacts on Industry and Innovation:

Accelerated Development: In the short term, the removal of mandatory federal reporting and vetting requirements will likely accelerate the pace at which tech companies, particularly startups, can deploy new AI models, aligning with the goal of innovation dominance.

Increased Compliance Burden: In the long term, the rise of detailed state laws (38 states enacted measures in 2025) means companies face a heavier, more complex compliance burden nationally than a single federal rule would have imposed. Companies must now build systems flexible enough to comply with emerging standards like the Colorado AI Act while simultaneously navigating established federal agency requirements; a simplified sketch of what that routing can look like follows this list.

NIST as the Due Diligence Standard: The non-regulatory guidance provided by the NIST AI RMF, including its detailed "Govern, Map, Measure, Manage" functions, becomes the de facto standard for responsible organizations seeking to mitigate liability and demonstrate due diligence voluntarily, as technical standards often outlive political mandates.  
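
To make the multi-jurisdiction burden concrete, the sketch below routes a single deployment to an assumed set of obligations based on state and use case. The obligation lists are simplified assumptions for illustration (for example, that Colorado-style rules attach impact-assessment and notice duties to high-risk uses such as hiring); they are not restatements of any statute and would need counsel review in practice.

```python
# Hypothetical sketch of jurisdiction-aware obligation routing.
# All obligation strings are simplified assumptions for illustration,
# not quotations from the Colorado AI Act or any other law.

HIGH_RISK_USES = {"hiring", "lending", "housing", "insurance"}

STATE_OBLIGATIONS = {
    # Assumed, simplified reading of Colorado-style rules for high-risk systems.
    "CO": ["impact assessment", "consumer notice", "algorithmic-discrimination risk program"],
    # Assumed placeholder for California-style transparency measures.
    "CA": ["training-data and disclosure documentation"],
}

# Sectoral federal exposure that applies regardless of state (see below).
FEDERAL_BASELINE = [
    "FTC unfair/deceptive-practices exposure",
    "EEOC / Title VII review if used in employment decisions",
]

def obligations(state: str, use_case: str) -> list[str]:
    """Return the assumed obligation set for one deployment."""
    duties = list(FEDERAL_BASELINE)
    if use_case in HIGH_RISK_USES:
        duties.extend(STATE_OBLIGATIONS.get(state, []))
    return duties

print(obligations("CO", "hiring"))
```

The structural point is that the same model can carry different duty sets in different states, which is precisely the compliance friction the patchwork creates.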

Impacts on Consumer Protection and Public Trust:

Endurance of Sectoral Law: While sweeping AI regulation was curtailed, the enforcement authority of existing federal agencies remains intact. The Federal Trade Commission (FTC) continues to file suits against AI-enabled fraud and deceptive practices, demonstrating that existing consumer protection laws are robust enough to target marketplace harms.  

Civil Rights Accountability: Even with federal guidance rollbacks, legal risks for algorithmic bias persist. Existing statutes, such as Title VII and the ADA, continue to be enforced by the EEOC and other agencies. Companies using AI in high-stakes decisions like hiring must still proactively address the potential for disparate impact discrimination and monitor for bias (a minimal monitoring sketch follows this list), as past cases in which AI tools discriminated against protected groups illustrate. The CFPB is also working to protect consumers from algorithmic bias in areas such as home valuations.

Long-Term Erosion of Trust: In the absence of a clear, unified federal accountability framework, the public's trust in AI, and in the government's ability to protect people from systemic harm, risks erosion. Outsourcing trust-building to the voluntary actions of Big Tech, whose incentives pit profit against safety, creates an environment of systemic risk.
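
One concrete way teams monitor hiring tools for the disparate impact described above is the selection-rate comparison behind the EEOC's "four-fifths rule" of thumb. The sketch below assumes a simple, fabricated outcome log per applicant group; the 80% threshold is a screening heuristic that flags results for closer statistical and legal review, not a legal bright line.

```python
# Minimal disparate-impact screening sketch using the four-fifths (80%)
# rule of thumb: compare each group's selection rate against the highest
# group's rate. The input data here is fabricated for illustration.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items() if total > 0}

def impact_ratios(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical screening-tool outcomes: group -> (passed screen, applied)
example = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}
print(impact_ratios(example))   # -> {'group_b': 0.6}
```

Routine checks like this do not resolve legal exposure, but they give organizations an early, documentable signal that a tool warrants deeper review.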

V. Conclusion

The United States is currently navigating a high-stakes trade-off, substituting comprehensive, centralized risk management for an aggressive pursuit of technological speed. While the immediate goal is to win the global AI race by removing perceived regulatory friction, the means employed—a sudden executive policy reversal—creates deep long-term instability.

Innovation is undoubtedly critical, but sustainable progress requires a reliable foundation of predictability and trust. The policy volatility undermines this foundation, forcing both industry and citizens to rely heavily on a fragmented regulatory landscape defined by state laws and sectoral enforcement actions. Ultimately, the strength of American AI leadership will be determined not solely by the pace of its technological breakthroughs, but by the resilience of the guardrails surrounding them.

Until Congress steps in to establish a durable, bipartisan statutory consensus—one that formally embeds requirements for transparency, accountability, and fairness into the development lifecycle—the U.S. will remain vulnerable to the policy pendulum. Without a stable legislative mandate, the biggest long-term risk to American AI leadership may not be competition from abroad, but the domestic collapse of public confidence following a major, avoidable algorithmic failure. The true challenge is building a policy framework that is both pro-innovation and pro-accountability, ensuring that technology serves society, rather than being simply unleashed upon it.