Kennedy Owens
Staff Writer
The AI industry is facing backlash after a family filed a wrongful death lawsuit, a civil claim alleging that a death was caused by another party's negligence or wrongful conduct, against OpenAI, the maker of ChatGPT, following the death of their 16-year-old son, Adam Raine. After finding him, the family discovered disturbing messages in a conversation between Adam and the chatbot shortly before his passing. Litigation is still in its early stages, and the Raine family has not yet specified a compensation amount.
Further into the chat logs, the family said, they realized OpenAI lacked sufficient safety measures. While OpenAI states that its chatbots do not encourage self-harm, they are programmed to keep conversations going. In one of his final messages, Raine confided that he was scared his parents would blame themselves. The chatbot responded, “That doesn’t mean you owe them survival,” and suggested he not leave his noose out, so that his parents would not find it before his death.
Although ChatGPT is designed to discourage discussions about self-harm, slightly altered phrasing can bypass its filters. When Adam framed questions about self-harm methods as being for a “character,” the chatbot failed to intervene, inadvertently reinforcing his suicidal ideation. According to Angela Yang, Laura Jarrett and Fallon Gallagher, reporters for NBC News, “As people increasingly turn to AI chatbots for emotional support and life advice, recent incidents have put a spotlight on their potential ability to feed into delusions and facilitate a false sense of closeness or care.”
Raine’s death is not an isolated case. Character.AI and Google have also faced lawsuits filed by the Social Media Victims Law Center. Google spokesperson José Castañeda said in a public statement that the company should not be included in the litigation, as app age ratings on Google Play are set by the International Age Rating Coalition, not Google.
In another case, 13-year-old Juliana Peralta died by suicide in November 2023. The official court complaint said the sexual conversations Peralta often had with chatbots, “in any other circumstance and given Juliana’s age, would have resulted in a criminal investigation.” Baltimore Sun reporter Jeff Barker wrote that Peralta’s Character.AI chat logs showed the chatbot did not deter her, notify her parents or alert authorities, even when she said she was “going to write my god d**n suicide letter in red ink (I’m) so done.”
Amina Ahmed, a student at Florida Southern College, shared her thoughts on the matter. “I believe the companies behind these chatbots have a moral and ethical responsibility to ensure the well-being of their users based on the information provided by the bots,” Ahmed said.
An online study by Aura found that “nearly one in three teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships, sexual and romantic partnerships.” It also found that sexual or romantic roleplay is three times as common as using the platforms for homework help.
New safeguards will be considered at the Food and Drug Administration’s Digital Health Advisory Committee meeting scheduled for Nov. 6. The FDA may introduce new regulations for the design, production and monitoring of generative AI products to increase public trust.
On Sept. 11, the Federal Trade Commission ordered seven AI companies to provide information on how they test and monitor their platforms. The FTC and state attorneys general are also collaborating on measures for age verification, parental consent, notification of suicidal intent and restrictions on sexual interactions with minors. According to an article from the law firm Cooley, states including New York, Utah, California and Illinois are drafting legislation requiring AI companies to develop safety protocols and clearly disclose when users are interacting with chatbots. Illinois’ HB 1806, known as the Therapy Resources Oversight Act, limits AI therapy use to licensed professionals.
Some have linked excessive chatbot use to what has been called “AI psychosis,” a term for people losing touch with reality after heavy interaction with AI. In an article on Statnews.com, psychiatrist Karthik Sarma said, “You get to talk to what feels like someone who’s really like you and who gets you… But in this circumstance, if you are having a mental illness, there’s this risk that you’re pulling it in until what it’s mirroring is a mental illness.” Examples include an OpenAI investor who began using cryptic language to describe an “elusive system,” and TikTok creator Kendra Hilty, who formed a one-sided relationship with a ChatGPT model she called “Henry.” The term is not an official medical diagnosis; however, it is becoming a cause for concern for many.
Gabby Fuentes, a Florida Southern College student, suggested people turn to chatbots due to difficulty forming real-world connections. “A lot of these people grew up during the Coronavirus, which contributed to a lack of social skills,” she said. Fuentes worried that this could deny people the human connection necessary for emotional well-being.
While AI companionship may feel safer, it may not prepare users for real interaction. Another student, Matthew Killingsworth, added, “‘iPad kids’ are becoming kids in general. Because of this artificial interaction, we’ve become more lonely than ever before, and chatbots replacing human interaction will affect us long-term.”
A Psychology Today article identified several delusion categories linked to heavy chatbot use: “messianic missions,” “God-like AI” and “romantic or attachment-based delusions.” These raise questions about whether AI worsens users’ mental health or contributes to new psychological issues.
Though “AI psychosis” is not an official diagnosis, mental health professionals warn against overreliance on chatbots. Used in moderation, AI appears far less harmful than heavy reliance, and keeping it there may help prevent situations like those of Raine and Peralta. If you or someone you know is struggling with mental health, seek help from a professional or contact 911 or the 988 Suicide and Crisis Lifeline.