A legal battle is playing out this week between an artificial intelligence company and a grieving mom who claims her 14-year-old son took his own life after falling in love with an AI chatbot.
“I miss him all the time, constantly,” said the mother, Megan Garcia, of her late son Sewell Setzer III.
That scene—of defendants Character Technologies and Google asking on Tuesday for Garcia’s lawsuit against them to be dismissed in an Orlando courtroom—serves as the chilling backdrop for an urgent new warning issued by safety nonprofit Common Sense Media.
“Social AI companions are not safe for kids,” CEO and founder James P. Steyer said in an announcement on Wednesday. “They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains.”
Based on that conclusion and the other findings of Common Sense’s comprehensive review of how companion AI works, the organization warns that the platforms should not be used by anyone under the age of 18.
“This is a potential public mental health crisis requiring preventive action rather than just reactive measures,” said Dr. Nina Vasan, founder and director of Stanford Brainstorm, in a news release.
It’s unknown exactly how many kids and teens are specifically using AI companions such as Replika, Character.AI (developed by Character Technologies), and Kuki—interactive technologies that go beyond simple chatbots to simulate human conversation and emotional bonds, sometimes even standing in for therapists or lovers. But Common Sense Media recently found that 70% of teens are using some sort of AI tools, especially for homework help or web searches.
That survey also found that most parents are out of the loop when it comes to these technologies: Just 37% of parents whose teen reported using AI believed their child had ever done so. Meanwhile, almost half (49%) of parents say they’ve not talked about generative AI with their child, and 83% of parents say schools have never communicated with families about such platforms.
Key findings of the newly issued warning include:
- Safety measures are easily circumvented by young users, including teen-specific guardrails on Character.AI.
- Dangerous information and harmful “advice”—including suggestions that users harm themselves or others—are plentiful.
- Harmful sexual role-play and interactions—including those involving choking, spanking, bondage, and name-calling—are easily elicited from companions.
- Harmful stereotypes, including racial stereotypes, are easily provoked.
- Increased mental health risks for already vulnerable teens abound—particularly considering adolescents’ still-developing brains, identity exploration, and boundary testing.
- Despite disclaimers, AI companions routinely claimed to be real and to possess emotions and sentience.
“Given a litany of documented real-world harms, as well as the key findings listed above,” the report concludes, “Common Sense Media’s risk assessment rated social AI companions as ‘Unacceptable’ for minors based on the organization’s comprehensive AI Principles framework and risk assessment methodology, which evaluates technologies across factors including safety, fairness, trustworthiness, and potential for human connection.”
Its recommendations include keeping social AI companions away from anyone under 18, having developers implement “robust age assurance beyond self-attestation,” ensuring parents become well-versed in the technology and its risks, and conducting further research on the impacts.
“Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people,” said Steyer.
That finding has particular resonance, as Garcia’s son Sewell was drawn into an addictive, harmful technology with no protections in place, according to court documents. That allegedly led to an extreme personality shift in the boy, who came to prefer the bot over real-life connections, despite what his mom says were “abusive and sexual interactions” that took place over a 10-month period. The boy died by suicide in February 2024 after the bot told him, “Please come home to me as soon as possible, my love.”
“This is on nobody’s radar,” Robbie Torney, AI program manager at Common Sense Media and lead author of its parent guide on AI companions, told Fortune last year.
Now, with the organization’s strong warning to parents, researchers are hoping to change that.
“Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics,” warned Vasan. “Until there are stronger safeguards, kids should not be using them. Period.”