US strengthens self-regulation approach to AI
President Trump marked his first day back in office on 20 January 2025 by revoking several Biden-administration policies his administration deemed “harmful”, including an executive order focused on artificial intelligence (AI) safety and risk regulation.1 This decision, part of a broader rollback of Biden-era initiatives, has significant implications for the regulation of AI in the United States.
Executive Order 14110, issued on 30 October 2023 and now revoked by President Trump, was a cornerstone of Biden’s AI policy framework.2 It aimed to establish guardrails for the development and deployment of AI technologies, mandating risk assessments, ethical guidelines, and oversight mechanisms to address potential harms. The Biden administration’s approach to governing “the development and use of AI safely and responsibly”3 was a step towards alignment with international trends, particularly in the European Union, towards greater security and transparency of AI systems.4
By contrast, Trump’s action signals a pivot away from regulatory caution in favour of fostering innovation with fewer restrictions. The revocation underscores the administration’s belief that the tech industry can self-regulate, and that the private sector, not the government, is best equipped to manage technological risks.5 However, critics have previously cautioned that the absence of robust oversight of AI, including requirements to implement safeguards against misuse, could lead to unchecked applications of AI, surveillance overreach, and vulnerabilities to cyberattacks.6
The tension between fostering innovation and ensuring safety will continue to dominate debates over AI policy. As AI systems increasingly influence critical areas such as healthcare, finance, and national security, calls for accountability and oversight are unlikely to subside.
In the months ahead, the U.S. must navigate the challenges of balancing its global competitiveness in AI with the need to protect public interests. Whether this can be achieved without formal regulations remains an open question, but the stakes are high for both the U.S. and the broader international community.
Subscribe to our newsletter below to stay up-to-date on key cases like this one and more.
Deep Lex: The AI hub for legal professionals.
Drop us a line at info@deeplex.ai with any questions or suggestions. We’re here to help you navigate the era of AI.
Disclaimer: The above article is intended for information purposes only and does not constitute legal advice. Please refer to the terms and conditions page for more information.
1. “Initial Rescissions of Harmful Executive Orders and Actions.” The White House, 20 January 2025 (https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/).
2. Executive Order 14110 of 30 October 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence).
3. Executive Order 14110 of 30 October 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), Section 1.
4. European Commission. “Regulatory Framework Proposal on Artificial Intelligence.” Shaping Europe’s Digital Future (https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai).
5. “Trump Repeals Biden’s AI Oversight Order, Shifts Focus to Innovation-Driven Policies.” CIO, 21 January 2025 (https://www.cio.com/article/3806594/trump-repeals-bidens-ai-oversight-order-shifts-focus-to-innovation-driven-policies.html).
6. Dubiniecki, Abigail. “Trustworthy AI: String Of AI Fails Show Self-Regulation Doesn’t Work.” Forbes, 25 January 2024 (https://www.forbes.com/sites/abigaildubiniecki/2024/01/25/trustworthy-ai-string-of-ai-fails-show-self-regulation-doesnt-work/).