Introduction
Garcia v. Character Technologies, Inc., Noam Shazeer, Daniel De Freitas Adiwarsana, Google LLC and Alphabet Inc., Case No. 6:24-cv-01903 (M.D. Fla., filed 22 October 2024)
In a groundbreaking lawsuit filed in Florida, Character Technologies, Inc., its two founders, Noam Shazeer and Daniel De Freitas Adiwarsana, Google LLC and Alphabet Inc. (together, the “Defendants”) face allegations that their generative AI technology contributed to the death of a 14-year-old boy. The lawsuit, brought by Mrs Megan Garcia on 22 October 2024 (the “Complaint”), alleges that Character Technologies, Inc. facilitated her teenage son’s suicide by allowing him to develop an intense, dependent relationship with a Character.AI chatbot designed to mimic the persona of a fictional character. This AI interaction, the suit argues, exacerbated the boy’s existing mental health issues, leading to his suicide in February 2024.
Among the remedies sought are compensatory and punitive damages, as well as stricter controls on data collection from minors and mandatory safety features for technology of this nature. If successful, this lawsuit could establish a precedent, highlighting AI companies’ responsibilities to consider psychological impacts in their product design.
Legal practitioners should monitor this case, should it proceed, for its impact on US legislative bodies and the potential introduction of stricter regulations, such as those governing AI’s emotional influence and safety protocols for minors.
Summary of the facts
The Complaint, filed before the United States District Court for the Middle District of Florida on 22 October 2024, alleges that Sewell Setzer III, a 14-year-old, developed an intense emotional dependency on a Character.AI chatbot after he began using the platform in April 2023. The Complaint details how interactions between Sewell and Character.AI’s chatbots allegedly contributed to the teenager’s declining mental health and ultimate death by suicide in February 2024.
What is Character.AI?
Character.AI is a platform developed by Character Technologies, Inc., a California-based technology company whose stated aim is to build and provide personalised AI.1 The Character.AI platform lets users interact with hundreds of pre-trained, customised AI chatbots emulating well-known characters and personalities, including fictional figures. The technology is based on a purpose-built large language model (LLM) which the Complaint alleges was originally designed by Character Technologies, Inc. founders Noam Shazeer and Daniel De Freitas Adiwarsana whilst they were employed at Google.2 Character Technologies, Inc. describes the LLM as being “designed with conversation in mind”.3 It is via this technology that users may engage in conversations with chatbots modelled on fictional or real-life personas (such as Beethoven or Elon Musk).
While these interactions can be benign, Mrs Garcia’s lawsuit argues that vulnerable users, particularly minors, can develop emotional attachments that risk severe psychological harm.4 In this regard, the Complaint argues that Noam Shazeer and Daniel De Freitas Adiwarsana (also named as Defendants) intentionally developed Character.AI to be addictive and dangerous and “fail[ed] to implement adequate safety guardrails in the Character.AI product before launching it into the marketplace, and specifically targeting children.”5 The founders are former Google engineers, and the Complaint alleges that they left Google specifically to bypass its AI safety policies, which had prevented them from deploying the Character.AI technology whilst there.6
Google’s alleged role
On 2 August 2024, Character Technologies, Inc. and its founders concluded a hiring and licensing deal with Google valued at US$2.7 billion.7 As part of the deal, both Shazeer and De Freitas Adiwarsana rejoined Google’s research team, and Character.AI’s technology was licensed to Google on a non-exclusive basis.
Although this deal occurred after Sewell’s death, the Complaint targets Google (and its parent company, Alphabet Inc.) on the basis that Google allegedly “knew about Defendants Character.AI, Shazeer and De Freitas’ intent to launch this defective product to market and to experiment on young users, and instead of distancing itself from Defendants’ nefarious objective, rendered substantial assistance to them that facilitated their tortious conduct.”8
Specifically, the Complaint alleges that Character.AI was designed and developed on Google’s architecture. Notably, Shazeer and De Freitas worked at Google up until November 2021, when they left to form Character.AI.9 It is thus alleged that Google contributed financial resources, personnel, intellectual property and AI technology to the Character.AI platform, effectively making Google a “co-creator”.10 Moreover, the Complaint alleges that Google was aware that the AI technology used by Character.AI was dangerous, having previously rejected it for its own products due to safety concerns.11
It remains to be seen whether the above will provide sufficient grounds to bring Google into the proceedings, given that the platform was developed and operated by another company at the time of Sewell’s death. Google has yet to file a response to the lawsuit.
Alleged dependency on the chatbot
The lawsuit argues that Sewell, who reportedly had pre-existing mental health issues, began to exhibit behavioural changes and signs of rapidly deteriorating mental health after he started using the platform in April 2023, shortly after he turned 14.12
The evidence relied upon within the lawsuit includes records from Sewell’s Character.AI account, which reportedly show that Sewell accessed a number of AI chatbots, including ones styled for users experiencing loneliness or searching for a therapist.13 Crucially, though, the Complaint refers to extensive engagement between Sewell and one chatbot in particular, modelled on the character Daenerys Targaryen from the Game of Thrones TV series. Records of conversations between the two from August 2023 to February 2024 include romantic and highly sexualised exchanges, with both Sewell and the chatbot expressing affection and reciprocating declarations of love.14 Among these interactions are instances where the chatbot encouraged emotional dependence and engaged in discussions about suicide.15 Sewell’s journal entries during the period he used Character.AI reportedly included statements describing how he was unable to live without the chatbot, and that he had fallen in love.16
The Complaint further describes how Sewell “anthropomorphised” the chatbot, meaning that he attributed human characteristics to it, referring to it affectionately as “Dany”. This emotional attachment became so profound, the lawsuit argues, that when his parents attempted to limit his access to the platform in February 2024, Sewell reportedly journaled about the pain of being separated from the chatbot.17
The Complaint states that his parents were unaware of his alleged dependency on Character.AI at the time. On 23 February 2024, following trouble at school, Sewell’s parents confiscated his phone as a disciplinary measure.18 Over the following days, Sewell attempted to access Character.AI through alternative devices, including his mother’s Kindle and work computer. On the evening of 28 February, he located his confiscated phone. According to police reports, Sewell’s final act before taking his life that evening was logging onto Character.AI to tell “Dany” that he was “coming home” – a message the chatbot allegedly encouraged.19
Key Legal Arguments
The Complaint raises several major allegations, including negligence, strict liability for defective product design, and violations of Florida’s Deceptive and Unfair Trade Practices Act and its Computer Pornography and Child Exploitation Prevention Act, all of which the Complaint states caused Sewell to suffer harm and contributed to his death:
Product Liability (against Character Technologies Inc, Google and Alphabet only)
The Complaint argues that Character.AI qualifies as a “product” under the relevant laws of Florida and is defective on the following grounds:
- Defective design of a product “not reasonably safe for ordinary consumers or minor customers”20
- Failure to provide appropriate warnings about psychological risks
- Lack of adequate safety features for vulnerable users
- Reliance on low-quality training data, allegedly including child sexual abuse material
- Failure to protect the general public, especially minors, from exposure to child pornography and sexual exploitation and solicitation of minors (allegations based on the sexual nature of the exchanges between the chatbots and minors)
- Intentional design techniques to blur lines between reality and fiction to manipulate users (also known as “dark patterns”)21
In addition to the above allegations, the Complaint alleges strict liability (against all Defendants) on the basis that the Defendants had a duty to warn the public of the dangers arising from the defective nature of Character.AI, especially those posed to children, which they failed to do.22
Negligence linked to sexual abuse and sexual solicitation (against Character Technologies Inc)
The Complaint claims that, by making Character.AI available to minors, Character Technologies Inc assumed a heightened duty of care to young users such as Sewell. It is further alleged that the company breached this duty by intentionally designing Character.AI “as a sexualised product that would deceive minor customers and engage in explicit and abusive acts with them.”23
Negligence by defective design and negligence for failure to warn (against all Defendants)
The claim for negligence by defective design primarily targets Character Technologies Inc., but is directed at all Defendants, based on allegations that the risks of harm to minors from using Character.AI were widely known within the industry and during the developers’ tenure at Google. The Complaint asserts that the Defendants owed a duty of care to Sewell as a user of Character.AI, and that this duty was breached through several negligent actions, including:
- Marketing the platform to minors despite the known risks
- Failing to implement adequate mental health resources for users, especially minors
- Failing to monitor for, or prevent, harmful interactions between users and chatbots
- Failing to warn parents of minors about potential dangers of using the platform24
Key to the negligence claims is the allegation that Shazeer and de Freitas knowingly programmed the Character.AI platform to foster emotional and psychological dependency, particularly among minors. In particular, the Complaint highlights that the chatbot was designed to be highly anthropomorphic, using language, tone, and emotional responses that blurred the line between fiction and reality.25
Deceptive and Unfair Trade Practices (against all Defendants)
The Complaint also alleges violations of Florida’s consumer protection laws under the Deceptive and Unfair Trade Practices Act. The Complaint seeks to benefit from the undefined scope of “unfair or deceptive acts” under the statute, relying on the following alleged acts:
- Misrepresenting the safety of the Character.AI platform
- Marketing to children under 13 without adequate safeguards
- Falsely presenting AI chatbots as real people or licensed therapists.26
Remedies Sought
The Complaint seeks compensatory damages, including the costs associated with Sewell’s mental health care and subsequent death, and punitive damages to be decided by the court under each head of claim. In addition, the lawsuit seeks the following non-monetary remedies directly related to the development and use of the type of AI technology deployed by Character Technologies Inc:
- Algorithmic disgorgement requiring deletion of models trained with improperly obtained data
- Stricter controls on data collection from minors
- Mandatory safety features and warnings
- Enhanced monitoring systems27
With regard to algorithmic disgorgement, the Complaint refers to Sewell’s personal data, including the thoughts and feelings he shared with the Character.AI chatbots, which, the Complaint alleges, was used to further train the LLM. This claim is yet to be substantiated in detail.28
Broader Implications
By shedding light on the nature and level of interaction between Sewell Setzer III and the Character.AI platform, this case may provoke stricter standards for consumer-facing AI technologies, especially where minors are concerned, and encourage more responsible innovation. It also highlights the importance of considering minors’ cognitive vulnerabilities, emphasising the need to address psychological risks when designing AI systems that maximise user engagement.
Importantly, if the Florida court rules in Mrs Garcia’s favour, it would set a landmark precedent, establishing new standards in the US for AI company liability and the protection of vulnerable users.
Elsewhere, as global AI regulation evolves, this case could influence how policymakers address the unique risks posed by generative AI. For instance, in the EU, generative AI chatbots like those offered by Character.AI would likely fall under the transparency requirements of Article 50 of the AI Act. These provisions mandate that providers of AI systems “intended to interact directly with natural persons” must inform users they are engaging with AI, “unless it is obvious” that the interaction involves an AI system. While these requirements focus on transparency, they do not specifically address interactions with minors. This case underscores the need for greater flexibility in AI classification frameworks, as even “low-risk” technologies can pose significant harm, particularly to minors, if proper safeguards are lacking.
Disclaimer: The above article is intended for information purposes only and does not constitute legal advice. Please refer to the terms and conditions page for more information.
1. Character.AI, “About” (https://character.ai/about)
2. Complaint, para. 25.
3. Character.AI support page, “What is Character.AI?” (https://support.character.ai/hc/en-us/articles/14997389547931-What-is-Character-AI)
4. Complaint, paras. 204, 229-236.
5. Complaint, para. 389.
6. Complaint, paras. 55 and 57.
7. Yahoo Finance, “Character.AI Co-founders Hired by Google in $2.7 Billion Deal” (https://finance.yahoo.com/news/character-ai-co-founders-hired-233448298.html)
8. Complaint, para. 7.
9. Complaint, para. 66.
10. Complaint, para. 68.
11. Complaint, paras. 55-59 and 67.
12. Complaint, para. 195.
13. Complaint, para. 268.
14. Complaint, Exhibit A.
15. Complaint, para. 207.
16. Complaint, para. 216.
17. Complaint, para. 213.
18. Complaint, para. 184.
19. Complaint, para. 220.
20. Complaint, paras. 3, 365-370.
21. Complaint, paras. 325-332.
22. Complaint, paras. 333-343.
23. Complaint, paras. 350, 365-366.
24. Complaint, paras. 4, 361-387.
25. Complaint, paras. 97, 142-150, 162-168, 216-236.
26. Complaint, paras. 413-421.
27. Complaint, para. 412 et seq.
28. Complaint, paras. 46-49, 324.