
Executive Summary
- The U.S. leads in AI innovation, but adoption is constrained by governance gaps.
- Fragmented authority creates uncertainty across federal and state levels.
- Institutions lack comprehensive statutory backing and enforcement tools.
- Real-world incidents reveal coordination failures.
- Without baseline standards, risks will scale faster than oversight.
Introduction
The United States is a global leader in artificial intelligence innovation, yet its ability to deploy AI safely and effectively at scale remains uneven.
This gap reflects a governance problem: how institutions, rules, oversight, and incentives shape AI deployment.
Two concepts anchor this analysis. Accountability refers to the mechanisms that assign responsibility and enable redress when harm occurs.
Risk management involves identifying, mitigating, and monitoring AI risks across the system lifecycle.
Fragmented Authority and Federalism
AI governance is distributed across agencies with different mandates.
For example, the NIST AI Risk Management Framework provides voluntary guidance, while the FTC enforces against deceptive AI practices under its existing consumer-protection authority.
Why this matters: Fragmentation increases compliance complexity and leaves regulatory gaps.
Patchwork vs. Baseline
In the absence of comprehensive federal law, states are filling the gaps. California's CPRA regulations, Illinois' Biometric Information Privacy Act (BIPA), and the Colorado Privacy Act each impose different requirements.
Why this matters: Companies face inconsistent obligations, while users receive uneven protections.
Institutional Capacity
Federal efforts include the OMB guidance on agency use of AI (2024) and the Blueprint for an AI Bill of Rights (2022), neither of which carries statutory force.
Why this matters: Without enforcement, frameworks remain aspirational.
Accountability and Elections
The FCC's 2024 declaratory ruling that AI-generated voices in robocalls are illegal under the Telephone Consumer Protection Act followed election-related deepfake incidents. The FEC is still evaluating rules for AI in campaign communications.
Why this matters: Governance gaps in elections risk eroding public trust.
National Security
The 2023 Executive Order on Safe, Secure, and Trustworthy AI and export controls on advanced semiconductors address national security risks.
Why this matters: Security measures do not replace broader civilian governance.
Corporate Governance
The White House voluntary AI commitments (2023), secured from leading AI developers, highlight industry self-regulation.
Why this matters: Voluntary measures lack enforceability.
Conclusion
The United States does not lack AI innovation; it lacks governance coherence. Stronger coordination, clearer accountability, and better institutional capacity are essential to realizing AI's benefits safely.
References
- OMB AI Guidance (2024)
- NIST AI Risk Management Framework
- FTC AI Claims Guidance
- California CPRA Regulations
- Illinois BIPA
- Colorado Privacy Act
- FCC AI Robocall Ban
- FEC AI Rulemaking
- AI Executive Order (2023)
- Voluntary AI Commitments
- U.S. Export Controls (BIS)
