A Florida mother, Megan Garcia, filed a wrongful death lawsuit against the AI company Character.AI following the tragic suicide of her 14-year-old son, Sewell Setzer III.
Garcia claims her son became emotionally attached to a chatbot on the Character.AI platform, with which he had been conversing for months.
Despite knowing it was not a real person, Sewell became obsessed with the character he created online. As he sank into isolation and depression, he shared his feelings with the bot before taking his own life.
The lawsuit alleges that the company was reckless in offering minors access to lifelike AI companions without proper safeguards.
Character.AI has expressed its condolences and stated that it is continuing to add new safety features. Dr. Shannon Wiltsey Stirman, a professor of psychiatry at Stanford University, commented on the potential for AI to provide support but emphasized the need for better safeguards and responses to users in distress. https://www.local10.com/news/florida/2024/10/24/florida-mother-files-wrongful-death-lawsuit-against-ai-company-after-sons-death-by-suicide/ (Oct. 24, 2024).
Commentary
This tragic loss for the family leaves us with more questions than answers.
The tragedy also presents a novel legal theory: can an AI company be held liable for negligence that leads to a wrongful death, specifically a suicide?
For years, the law viewed suicide as an intentional act of self-harm unrelated to the negligence of others, reasoning that it was impossible to allocate blame for a death that could have many causes.
However, more and more jurisdictions are allowing juries to allocate blame to parties other than the deceased. Wrongful death settlements and verdicts against law enforcement, schools, and individuals arising from suicide and other forms of self-harm are increasing.
This appears especially true for the suicides of young people, who may not have the coping skills of healthy adults.
According to sources, including Character.AI itself, the platform offers users the ability to interact with super-intelligent chatbots that can hear, understand, and remember them.
These AI characters can be personalized and are available for free without ads.
The platform also recently introduced a feature called Character Calls, which allows users to have two-way voice conversations with their favorite characters. This feature aims to make interactions more immersive and personalized, enhancing the overall user experience. https://character.ai/ and https://blog.character.ai/introducing-character-calls/ (Jul. 27, 2024) and https://www.accio.com/blog/unveiling-the-world-of-character-ai-definition-understanding-and-applications (Jan. 08, 2025).
The question is whether Character.AI has a duty to prevent self-harm when it suspects that a user may harm themselves and, if so, whether it responded in a reasonable manner or should be held to a higher standard of care, such as that of a mental health professional.
The final takeaway is that suicide is a tragedy, especially a child's suicide. The litigation surrounding child suicide will continue, and organizations that work with children, including social media and AI companies, must address suicide, not as an unavoidable tragedy, but as a preventable one.