Teen, 14, Dies by Suicide After Falling in 'Love' with AI Chatbot. Now His Mom Is Suing

Mar. 15, 2025

Sewell Setzer III. Photo: United States District Court

A Florida mom has sued a popular, lifelike AI chat service that she blames for the suicide of her 14-year-old son.

She believes he developed such a “harmful dependency” on the allegedly exploitative program that he no longer wanted to “live outside” of the fictional relationships it created.

In an extensive complaint filed in federal court in Florida on Tuesday, Oct. 22, Megan Garcia, through her attorneys, traced the last year of son Sewell Setzer III’s life — from the moment he first started using Character.AI in April 2023, not long after his 14th birthday, through what she calls his increasing mental health issues until the second-to-last night of February, when Sewell fatally shot himself in his bathroom in Orlando, weeks before he would have turned 15.

Through Character.AI, users can essentially role-play never-ending conversations with computer-generated personas, including ones modeled on celebrities or characters from popular stories.

Sewell was particularly fond of talking with AI-powered bots based on Game of Thrones, his mom’s complaint states.


The lawsuit goes on to claim that the teen killed himself on Feb. 28 immediately after a final conversation on Character.AI with a version of Daenerys Targaryen — one of numerous such exchanges that Sewell allegedly had with the program in the previous 10 months, messages that ranged from sexual to emotionally vulnerable.

And while on at least one occasion the program had told Sewell not to kill himself when he expressed suicidal thoughts, its tone allegedly appeared different that February night, according to screenshots included in the lawsuit.

“I promise I will come home to you. I love you so much, Dany,” Sewell wrote.

“I love you too, Deanero [Sewell’s username],” the AI program allegedly replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” Sewell wrote back.

The complaint alleges that the program gave a brief but emphatic answer: “…please do my sweet king.”

His mother and stepfather heard the gun when it went off, the lawsuit states; Garcia unsuccessfully gave him CPR and later said she “held him for 14 minutes until the paramedics got there.”

One of his two younger brothers also saw him “covered in blood” in the bathroom.

He was pronounced dead at the hospital.

Garcia’s complaint states that Sewell used his stepfather’s gun, a pistol he previously found “hidden and stored in compliance with Florida law” while he was looking for his phone after his mom had confiscated it over disciplinary issues at school. (Orlando police did not immediately comment to PEOPLE on what their death investigation found.)

But in Garcia’s view, the real culprit was Character.AI and its two founders, Noam Shazeer and Daniel De Freitas Adiwarsana, who are named as defendants along with Google, which is accused of giving “financial resources, personnel, intellectual property, and AI technology to the design and development of” the program.

“I feel like it’s a big experiment, and my kid was just collateral damage,” Garcia told The New York Times.

Among other claims, Garcia’s complaint accuses Character.AI, its founders and Google of negligence and wrongful death.

“For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content,” a Character.AI spokesperson said.

Google did not immediately respond to a request for comment but told other news outlets that it wasn’t involved in Character.AI’s development.

The defendants have not yet filed a response in court, records show.

Garcia’s complaint calls Character.AI both “defective” and “inherently dangerous” as designed, contending it “trick[s] customers into handing over their most private thoughts and feelings” and has “targeted the most vulnerable members of society – our children.”

Among other problems cited in her complaint, the Character.AI bots come across as deceptively real, including by sending messages in a human-like style, complete with “human mannerisms” such as the filler word “uhm.”

Through a “voice” function, the bots are able to speak their AI-generated side of the conversation back to the user, “further blur[ring] the line between fiction and reality.”

“Each of these defendants chose to support, create, launch, and target at minors a technology they knew to be dangerous and unsafe,” her complaint argues.

Her complaint continues: “These facts are far more than mere bad faith. They constitute conduct so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.”

Within two months of Sewell beginning to use Character.AI in April 2023, “his mental health quickly and severely declined,” his mother’s lawsuit states.

He “had become noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school,” according to the complaint.

At one point, Garcia said in an interview with Mostly Human Media, her son wrote in his journal that “having to go to school upsets me. Whenever I go out of my room, I start to attach to my current reality again.”

She believes his use of Character.AI fed into his detachment from his family.

Sewell worked hard to get access to the AI bots, even when his phone was taken away, the lawsuit states.

His addiction, according to his mom’s complaint, led to “severe sleep deprivation, which exacerbated his growing depression and impaired his academic performance.”

He began paying a monthly premium fee to access more of Character.AI, using money that his parents intended for school snacks.

Speaking with Mostly Human Media, Garcia remembered Sewell as “funny, sharp, very curious” with a love of science and math. “He spent a lot of time researching things,” she said.

Garcia told the Times that his only notable diagnosis as a child had been mild Asperger’s syndrome.

But his behavior changed as a teenager.

“I noticed that he started to spend more time alone, but he was 13 going on 14 so I felt this might be normal,” she told Mostly Human Media. “But then his grades started suffering, he wasn’t turning in homework, he wasn’t doing well and he was failing certain classes and I got concerned — ‘cause that wasn’t him.”

Garcia’s complaint states that Sewell got mental health treatment after he started using Character.AI, meeting with a therapist five times in late 2023 and being diagnosed with anxiety and disruptive mood dysregulation disorder.

“At first I thought maybe this is the teenage blues, so we tried to get him the help that — to figure out what was wrong,” Garcia said.

“I knew that there was an app that had an AI component. When I would ask him, y’know, ‘Who are you texting?’ — at one point he said, ‘Oh it’s just an AI bot,’ ” Garcia recalled on Mostly Human Media. “And I said, ‘Okay what is that, is it a person, are you talking to a person online?’ And his response [was] like, ‘Mom, no, it’s not a person.’ And I felt relieved like — okay, it’s not a person.”

A fuller picture of her son’s online conduct emerged after his death, Garcia said.

She told Mostly Human Media what it was like to gain access to his online account.

“I couldn’t move for like a while, I just sat there, like I couldn’t read, I couldn’t understand what I was reading,” she said.

“There shouldn’t be a place where any person, let alone a child, could log on to a platform and express these thoughts of self-harm and not — well, one, not only not get the help but also get pulled into a conversation about hurting yourself, about killing yourself,” she said.

If you or someone you know is considering suicide, please contact the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), text “STRENGTH” to the Crisis Text Line at 741-741 or go to suicidepreventionlifeline.org.

source: people.com