Introduction
The evolution of artificial intelligence (AI) is reshaping society at a scale comparable to the industrial revolution.1 First given formal shape at Dartmouth in the 1950s,2 AI has evolved from an academic pursuit into a transformative force in our daily lives. The past two years have witnessed particularly dramatic advances, with generative AI (frequently referred to as “GenAI” or “genAI”) reaching mainstream adoption at an unprecedented scale—ChatGPT alone reached 100 million weekly users by 2023.3 Today, generative AI’s capabilities—enabling anyone with an internet connection to produce content autonomously at the click of a button—are making this technology accessible and practical for individuals and industries alike.
However, the rapid advancement of AI presents both unprecedented opportunities and complex challenges. As these systems grow capable of sophisticated tasks—from creative writing to medical analysis—society finds itself at a critical juncture. On the one hand, AI innovations hold enormous promise for improving healthcare and tackling climate change.4 On the other hand, they also raise difficult questions about human agency and societal governance.
Generative AI in particular raises new problems surrounding misinformation and content authenticity,5 with deepfake technologies and AI-generated art sparking urgent debates on privacy, consent, and creative rights.6 Industry leaders, such as OpenAI, are working to address these concerns with measures like early warning systems to prevent misuse,7 but the speed of AI advancements makes oversight challenging.8
The regulatory landscape around AI reflects this tension. While the European Union has taken important steps toward comprehensive regulation, most jurisdictions continue to lag behind (see our Global AI Regulation Tracker for the full picture on the current state of AI regulation). Meanwhile, AI technology advances at an exponential rate worldwide, frequently outpacing the development of regulatory oversight. The resulting gap between rapid AI innovation and slower legislative action makes effective governance an ongoing challenge for lawmakers.
Deep Lex’s mission is to provide clarity and insight into this increasingly complex landscape. We aim to help legal professionals understand and navigate the impact AI has, and may continue to have, on society and our legal systems.
Ethics and Governance in AI: Foundations for Trustworthy Technology
The importance of ethical governance in AI cannot be overstated. As AI systems become more integrated into society, frameworks that emphasise human-centric values are essential to protect human rights, promote fairness, and prevent unintended harm.
The EU’s High-Level Expert Group on Artificial Intelligence (EU HLEG), an advisory body appointed by the European Commission, advocates for a human-centric approach to AI as necessary to build what it calls “trustworthy AI.”9 This concept rests on ethical principles that emphasise respect for human autonomy, fairness, transparency, and harm prevention—principles designed to ensure AI systems align with societal values and comply with robust technical and legal safeguards.10
Beyond Europe, the OECD has been a global leader in promoting AI ethics through policies rooted in human-centric values, such as human rights, social justice, and consumer protections. The OECD’s guidelines emphasise that AI systems must be developed and used in ways that are transparent and accountable, aiming to ensure that the technology benefits society as a whole and avoids perpetuating harmful biases.11
In a groundbreaking move, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by 193 member states in 2021, established the first global agreement on AI ethics.12 This recommendation provides a shared framework of ethical values to guide AI policy and legislation across nations. Although not legally binding, it serves as a global benchmark, addressing critical issues like data privacy, accountability, transparency, and the environmental impact of AI systems. The Recommendation also explicitly prohibits certain high-risk applications, such as social scoring and mass surveillance, to protect individual rights and freedoms.13
To assist member states in implementing these principles, UNESCO has developed tools such as Ethical Impact Assessments and Readiness Assessment Methodologies,14 which are designed to help nations evaluate their progress and adherence to these ethical guidelines. Such initiatives reflect a growing international commitment to embedding ethics in AI development and governance, even as formal regulation continues to evolve.
Together, these frameworks—from the EU’s and OECD’s human-centric principles to UNESCO’s global ethical standards—represent a collective international effort to ensure AI operates responsibly and respects universal human rights.
Key Ethical Questions in AI Development
As AI continues to develop, it raises fundamental ethical questions that challenge our current moral and legal frameworks. These questions cover diverse aspects of AI’s potential impact on society, such as:
Consciousness and Moral Status
With AI systems becoming increasingly sophisticated, the prospect of AI achieving forms of consciousness or sentience poses complex ethical dilemmas. Key questions arise, such as: Can AI systems develop genuine consciousness? and What criteria should determine moral status for AI? These issues push us to consider whether, and under what circumstances, advanced AI might one day merit moral or even legal status. If it does, we may also ask what obligations we might have toward these systems, and how such developments could reshape our understanding of what constitutes a “legal person”.
Autonomy and Control
Determining appropriate levels of autonomy for AI systems is vital, especially as they begin to assist in complex decision-making. In this regard, we may ask How do we ensure “meaningful human oversight” over AI’s actions? Or, Should AI systems be allowed to make life-or-death decisions? These questions underscore the importance of frameworks that balance AI’s utility with essential human oversight and responsibility.
Responsibility and Accountability
When AI systems cause harm, assigning responsibility is challenging. Questions like, Who is accountable—the developers, users, or the AI system itself? and What liability frameworks should govern AI deployment? highlight the need for clear, enforceable accountability structures, particularly in sensitive areas like healthcare and autonomous vehicles where AI decisions have serious consequences.
Bias and Discrimination
As AI systems play a growing role in social and economic contexts, ensuring equitable treatment is essential. Ethical questions such as, How do we prevent AI from amplifying existing social biases? and What constitutes ‘fair’ AI decision-making? underscore the risks of unchecked biases in AI training data. Vigilance in AI development, especially in applications like hiring, law enforcement, and finance, is crucial to protect against discriminatory outcomes.
These are some of the critical ethical issues that Deep Lex will explore in future insights. Such questions are pivotal as we move toward frameworks that reflect the OECD’s call for a “rights and values alignment” through human intervention, oversight, and redress mechanisms to prevent AI misuse.15 Achieving this, however, requires embedding ethical considerations into AI systems from the outset, so that ethical governance is foundational rather than an afterthought.
Promoting Fairness and Social Impact in AI Development
The challenge, however, lies not just in developing ethical AI systems, but in ensuring they meet the needs of society, including with regard to social justice and equity. The UNESCO framework emphasises that “justice, trust, and fairness must be upheld so that no country and no one is left behind, whether in access to AI technologies or protection from their negative impacts.”16
Achieving a fair distribution of AI’s benefits, while safeguarding vulnerable populations from displacement or discrimination, is a significant challenge. AI systems risk reinforcing or amplifying existing societal biases, potentially worsening social inequalities if left unchecked. In response, the OECD calls for shared responsibility among “AI actors”17—including all those directly or indirectly involved in AI’s development or use—to ensure “trustworthy AI” that safeguards fairness and non-discrimination in compliance with international law.18
Addressing these challenges will also require consideration of AI’s broader impact on society, such as how AI affects employment and economic equality, and how we can preserve meaningful human relationships and work in an increasingly automated world.
Elsewhere, the concentration of AI development capabilities also raises concerns about power distribution and democratic oversight. Who shapes the future of AI? and How do we ensure that its development serves the public interest rather than narrow commercial or political goals? These questions are becoming increasingly urgent as AI assumes a larger role in critical decision-making processes, underscoring the need for a balanced and ethical approach to AI governance.
The EU AI Act: A Framework for Responsible Innovation
The development and use of AI systems have largely outpaced regulation. The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) (the “Act”),19 adopted roughly six years after OpenAI created its first generative AI model, GPT-1,20 represents a landmark response to these challenges. It provides the first comprehensive legal framework aimed at ensuring AI systems respect fundamental rights, safety requirements, and ethical principles.
The Act’s recitals refer explicitly to the importance of AI as a human-centric tool, and thus to the requirement that AI be developed in accordance with fundamental European Union human rights and democratic values:
“Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.” 21
The Act will be examined further by Deep Lex in the coming weeks. In the meantime, the overview below summarises, at a high level, some of the mechanisms through which the Act addresses ethical concerns relating to the protection of fundamental human rights and freedoms.
Risk-Based Classification
The Act creates a comprehensive framework to protect human rights and democratic values while enabling AI innovation. Of critical importance is the establishment of a tiered system of obligations based on the level of risk posed by AI (a simplified illustration in code follows the list below):
- Unacceptable Risk (Chapter II, Article 5): Applications that threaten fundamental rights, such as social scoring by governments, are prohibited:
“The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques… (b) the placing on the market, putting into service or use of an AI system that exploits vulnerabilities of specific groups… (c) social scoring by public authorities…”
- High-Risk (Chapter III, Articles 6-7 and Annex III): AI systems used in critical areas such as those listed below will be deemed “high-risk” and subject to additional requirements under the Act:
- Justice and democratic processes (Annex III, Point 8)
- Law enforcement (Annex III, Point 6)
- Essential services (Annex III, Point 5)
- Employment and worker management (Annex III, Point 4)
- Educational and vocational training (Annex III, Point 3)
- Limited Risk (Chapter IV, Article 50): Applications subject to specific transparency obligations, including:
- AI systems intended to interact with natural persons (e.g. chatbots)
- Emotion recognition systems
- Biometric categorisation systems
- AI-generated or manipulated image, audio or video content
- Minimal Risk (all other AI systems not explicitly covered by the above categories): Systems posing minimal risk that can be freely used, such as AI-enabled spam filters.
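For readers who find a worked example helpful, the tiered logic above can be sketched in a few lines of code. The snippet below is a simplified, hypothetical illustration only: the use-case labels, category groupings, and function name are our own assumptions for explanatory purposes and carry no legal significance under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Prohibited practice (Chapter II, Article 5)"
    HIGH = "High-risk (Chapter III, Annex III)"
    LIMITED = "Limited risk (Chapter IV, Article 50 transparency duties)"
    MINIMAL = "Minimal risk (no additional obligations under the Act)"

# Hypothetical, heavily simplified use-case buckets for illustration only.
PROHIBITED_USES = {"government_social_scoring", "subliminal_manipulation"}
ANNEX_III_AREAS = {"education", "employment", "essential_services",
                   "law_enforcement", "justice_and_democracy"}
TRANSPARENCY_USES = {"chatbot", "emotion_recognition",
                     "biometric_categorisation", "synthetic_media"}

def indicative_risk_tier(use_case: str) -> RiskTier:
    """Return an indicative risk tier for a simplified use-case label."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

for case in ("employment", "chatbot", "spam_filter"):
    print(f"{case}: {indicative_risk_tier(case).value}")
```

In practice, classification turns on detailed legal criteria, exemptions, and case-by-case assessment rather than a simple lookup; the point of the sketch is only to convey the tiered structure of obligations.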
Developer Obligations for High-Risk AI Systems
For high-risk AI systems, the AI Act further provides that developers must implement comprehensive safeguards (a brief illustrative sketch of the record-keeping point follows the list below):
- Risk Management (Article 9)
- Continuous, iterative process for risk identification and mitigation
- Regular systematic updating throughout system lifecycle
- Testing procedures to ensure reliable performance
- Data and Data Governance (Article 10)
- High-quality training, validation, and testing data sets
- Relevant, representative, and error-free data
- Examination for possible biases
- Protection of personal data and privacy
- Documentation and Record-Keeping (Articles 11 and 12)
- Detailed technical documentation
- Comprehensive record-keeping systems
- Automatic logging of events
- Clear audit trails for system decisions
- Human Oversight (Article 14)
- Built-in operational constraints
- Real-time monitoring capability
- Clear activation/deactivation protocols
- Regular assessment of human oversight effectiveness
- Quality Management (Article 17)
- Written policies and procedures
- Testing and validation protocols
- Risk management strategies
- Post-market monitoring systems
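To make one of these obligations more concrete, the short sketch below shows how a developer might begin to implement automatic event logging and an audit trail of the kind contemplated by Article 12. It is a minimal illustration under our own assumptions: the field names, file location, and helper function are hypothetical and are not prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger illustrating automatic event logging and
# audit trails in the spirit of Article 12; all field names are assumptions.
audit_log = logging.getLogger("ai_system.audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decision_audit.log"))

def log_decision(system_id: str, input_summary: str,
                 output_summary: str, human_reviewer: str | None) -> None:
    """Append a timestamped, structured record of an AI-assisted decision."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,  # supports human oversight review
    }
    audit_log.info(json.dumps(event))

# Example: record a single screening decision for later audit.
log_decision("cv-screening-v2", "candidate profile 4821",
             "shortlisted for interview", human_reviewer="hr.analyst@example.com")
```

Real systems would of course need far richer logging, retention, and access controls; the sketch simply shows the idea of creating a durable, reviewable record for each consequential output.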
These requirements aim to transform abstract ethical principles into concrete legal obligations, providing a clear framework for responsible AI development and deployment.
A Snapshot of Global Regulation Efforts
The European Union currently leads in developing comprehensive AI legislation, while other major jurisdictions have yet to implement similar frameworks. Although many of the world’s leading AI companies—such as Amazon, Google, Microsoft, and OpenAI—are based in the United States, the country lacks a federal regulatory framework for AI. In the absence of binding legislation, the responsibility for mitigating AI risks largely falls on developers themselves, requiring self-regulation to prevent misuse and address potential harms.
For lawyers, understanding these self-regulatory practices and limitations is crucial when advising clients on compliance and risk management in the AI space, particularly in the absence of formal regulatory frameworks.
The Self-Regulatory Approach in the United States
In the U.S., self-regulation has long been the primary approach to managing emerging technologies like AI, driven by a pace of innovation that often outstrips formal legislation.22 This model allows companies to establish guidelines, conduct internal audits, and implement voluntary standards that adapt quickly to technological advances. For instance, beginning in July 2023, fifteen major U.S. AI companies made voluntary commitments to the White House to bolster safeguards in three key areas: safety (including “red team” stress-testing of systems to identify vulnerabilities and sharing risk information across the industry), security (investing in cybersecurity and enabling third-party reporting of vulnerabilities), and trust (including watermarking AI-generated content to promote transparency).23
While these initiatives show progress, particularly in safety protocols and technical measures, experts highlight ongoing concerns about the lack of transparency and enforceable oversight in these voluntary commitments.24 Furthermore, without enforceable standards, these voluntary measures often lack consistency and accountability across the industry. Critics argue that companies may prioritise competitive advantage over ethical responsibilities, leading to issues with transparency and reliability in AI applications. This gap between self-regulation and formal oversight raises concerns about bias, data privacy, and the potential for misuse, prompting calls for stronger regulatory frameworks to support public safety and trust.25
Ultimately, these corporate pledges mark an initial step toward AI governance in the United States, but comprehensive federal regulations are required to more effectively manage the risks identified.26
China’s Proactive Regulatory Framework
China has taken a proactive yet targeted approach to AI oversight, implementing a range of specific rules on data privacy, content management, and AI ethics through both guidelines and legislation. Key instruments, such as the 2023 Generative AI Measures and the Deep Synthesis Management Provisions, place strict controls on generative AI technologies with an emphasis on transparency, data privacy, and content authenticity. The 2023 Generative AI Measures in particular require that AI be developed and used in compliance with “social morality and ethics” as well as “business ethics”,27 although these terms are neither defined nor elaborated upon. Moreover, these targeted rules fall short of a comprehensive regulatory framework for AI development and deployment. However, experts anticipate that China will introduce broader legislation in the near future to address the complex challenges posed by rapidly advancing generative AI technologies.28
Access our Global AI Regulation Tracker at the bottom of the page to stay up to date on international AI legislation as it emerges. In the fast-evolving field of artificial intelligence, the EU’s regulatory advances signal an important step toward global AI governance, yet much work remains. In regions without comprehensive AI legislation, self-regulation by companies fills a critical role but raises questions about transparency and accountability. Crucially, the differing regulatory strategies across regions underscore the need for global cooperation to manage AI’s cross-border impact effectively.
Implications for Legal Practice
As AI continues to permeate diverse sectors, the legal profession must be proactive in understanding both existing frameworks and emerging risks. Deep Lex is dedicated to supporting legal professionals on this journey, offering updates, insights, and analysis to help navigate the complexities of AI regulation and ethics.
Legal professionals must also grapple with the dual challenge which AI presents: advising clients on compliance while adapting their own practices to an AI-enabled legal landscape. This transformation demands both technical understanding and ethical awareness.
With the release of Introduction to Ethical Issues in AI (published 5 November 2024), Deep Lex launches a comprehensive series examining the ethical implications of AI in society and the professional legal context. The series will explore critical issues, including:
- Fairness and bias in automated systems
- Privacy and consent in an AI-driven world
- Algorithmic decision making in the public sector
- Human autonomy in an automated society
We look forward to engaging with our readers further on these issues.
Stay Informed
Subscribe to our newsletter for regular updates on:
- Major AI developments and their societal impact
- New regulatory frameworks and policies
- Practical insights for legal professionals
- Emerging ethical considerations
- Community perspectives and discussions
Deep Lex: The AI hub for legal professionals.
Drop us a line at info@deeplex.ai with any questions or suggestions. We’re here to help you navigate the era of AI.
Disclaimer: The above article is intended for information purposes only and does not constitute legal advice. Please refer to the terms and conditions page for more information.
Sources:
- Columbia Business School, “AI and the Industrial Revolution,” 16 April 2024, https://business.columbia.edu/research-brief/research-brief/ai-industrial-revolution. ↩︎
- Dartmouth College, “Artificial Intelligence Coined at Dartmouth,” 2024, https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth. ↩︎
- James Vincent, “ChatGPT Has 100 Million Weekly Active Users, Says OpenAI,” The Verge, 6 November 2023, https://www.theverge.com/2023/11/6/23948386/chatgpt-active-user-count-openai-developer-conference. ↩︎
- Alvin Powell, “New AI Tool Can Diagnose Cancer, Guide Treatment, Predict Patient Survival,” Harvard Gazette, 15 September 2024, https://news.harvard.edu/gazette/story/2024/09/new-ai-tool-can-diagnose-cancer-guide-treatment-predict-patient-survival/; Karen Hao, “Here Are 10 Ways AI Could Help Fight Climate Change,” MIT Technology Review, 20 June 2019, https://www.technologyreview.com/2019/06/20/134864/ai-climate-change-machine-learning/. ↩︎
- Jess Whittlestone and Jack Clark, “Why and How Governments Should Monitor AI Development,” Nature Humanities and Social Sciences Communications 9, no. 1 (2022): 1-11, https://www.nature.com/articles/s41599-022-01174-9 ↩︎
- Adobe Communications Team, “Content Authenticity in the Age of Disinformation, Deepfakes & NFTs,” Adobe Blog, 22 October 2021, https://blog.adobe.com/en/publish/2021/10/22/content-authenticity-in-age-of-disinformation-deepfakes-nfts ↩︎
- OpenAI, “Building an Early Warning System for LLM-Aided Biological Threat Creation,” https://openai.com/index/building-an-early-warning-system-for-llm-aided-biological-threat-creation/ ↩︎
- AI Regulation, “Artificial Intelligence and the Future of Art,” accessed November 1, 2024, https://ai-regulation.com/artificial-intelligence-and-the-future-of-art/ ; World Economic Forum, “Cracking the Code: Generative AI and Intellectual Property,” 15 January 2024, https://www.weforum.org/stories/2024/01/cracking-the-code-generative-ai-and-intellectual-property/ ; Mira Penava, “AI Art Is in Legal Greyscale,” The Regulatory Review, 24 January 2023, https://www.theregreview.org/2023/01/24/penava-ai-art-is-in-legal-greyscale/. ↩︎
- European Parliamentary Research Service, “EU guidelines on ethics in artificial intelligence: Context and implementation,” PE 640.163, 2019, p. 3. ↩︎
- European Commission, “Expert Group on AI,” Shaping Europe’s Digital Future (https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai) ↩︎
- OECD, “Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI,” OECD Digital Economy Papers 349 (Paris: OECD Publishing, 2023), p. 27. ↩︎
- UNESCO. “UNESCO Member States Adopt First Ever Global Agreement on Ethics of Artificial Intelligence.” UNESCO, November 25, 2021. (https://www.unesco.org/en/articles/unesco-member-states-adopt-first-ever-global-agreement-ethics-artificial-intelligence); UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” 2022, p. 5 (https://unesdoc.unesco.org/ark:/48223/pf0000381137). ↩︎
- UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” 2022, p. 20 (https://unesdoc.unesco.org/ark:/48223/pf0000381137). ↩︎
- UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” 2022, p. 26 (https://unesdoc.unesco.org/ark:/48223/pf0000381137). ↩︎
- OECD, “Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI,” OECD Digital Economy Papers 349 (Paris: OECD Publishing, 2023), p. 27. ↩︎
- UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” 2022, p. 5 (https://unesdoc.unesco.org/ark:/48223/pf0000381137). ↩︎
- OECD, “Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI,” OECD Digital Economy Papers 349 (Paris: OECD Publishing, 2023), pp. 17 and 23. ↩︎
- OECD, “Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI,” OECD Digital Economy Papers 349 (Paris: OECD Publishing, 2023), pp. 10, 17, 22, 27, 31. ↩︎
- EU Artificial Intelligence Act (Regulation (EU) 2024/1689) ↩︎
- Dennis Layton, “ChatGPT: How We Got to Where We Are Today – A Timeline of GPT Development,” Medium, January 17, 2023 (https://medium.com/@dlaytonj2/chatgpt-how-we-got-to-where-we-are-today-a-timeline-of-gpt-development-f7a35dcc660e) ↩︎
- EU Artificial Intelligence Act (Regulation (EU) 2024/1689), at (6). ↩︎
- Adam Satariano, “Europe Takes Aim at Apple, Meta and Other Tech Giants Under New Digital Law,” The New York Times, March 4, 2024 (https://www.nytimes.com/2024/03/04/technology/europe-apple-meta-google-microsoft.html); Melissa Heikkilä, “AI Companies Promised the White House They’d Self-Regulate. One Year Later, What’s Changed?” MIT Technology Review, 22 July 2024 (https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/). ↩︎
- The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” 21 July 2023 (https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/) ↩︎
- Melissa Heikkilä, “AI Companies Promised the White House They’d Self-Regulate. One Year Later, What’s Changed?” MIT Technology Review, 22 July 2024 (https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/). ↩︎
- Eric D. Reicin, “Industry Self-Regulation: A Path Forward for Governing Artificial Intelligence?” Better Business Bureau National Programs, December 21, 2023, https://bbbprograms.org. ↩︎
- Melissa Heikkilä, “AI Companies Promised the White House They’d Self-Regulate. One Year Later, What’s Changed?” MIT Technology Review, 22 July 2024 (https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/). ↩︎
- CAC, Interim Measures for the Administration of Generative Artificial Intelligence Services 2023 (the “2023 Generative AI Measures”), quoted terms per the English translation (https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm) ↩︎
- Matt Sheehan, “China’s AI Regulations and How They Get Made,” Carnegie Endowment for International Peace, July 10, 2023 (https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en) ↩︎