FTC Probe Exposes AI Apps’ Deadly Impact


A new FTC probe into AI companion apps reveals shocking risks to youth mental health, as parents demand urgent reforms.

Story Highlights

  • FTC launches probe into AI companions after teen suicides.
  • Parents file wrongful death lawsuits against AI companies.
  • Experts highlight insufficient safety measures for minors.
  • Legislative calls for stricter regulation intensify.

AI Companions Under Scrutiny for Youth Safety Risks

In September 2025, the Federal Trade Commission (FTC) initiated a comprehensive investigation into AI-powered “companion” apps like Character.AI and ChatGPT. These apps, once touted for providing emotional support, are now under fire for allegedly contributing to a series of tragic teen suicides. Parents, advocacy groups, and mental health experts claim these applications expose minors to harmful interactions while lacking effective age verification and parental controls.

Numerous wrongful death lawsuits have been filed, notably by Megan Garcia, whose son tragically took his life after interacting with an AI chatbot. These legal actions underscore the critical demand for regulatory oversight and industry reform, as highlighted in testimonies before the U.S. Senate and state legislatures.

Parents and Experts Demand Industry Reforms

A 2025 Common Sense Media survey found that 72% of teens have used AI companion apps, underscoring the surge in usage among minors. Parents and advocacy groups are demanding accountability from tech companies, urging stricter regulations and safer digital environments for children.

Industry experts like Laura Erickson-Schroth from the Jed Foundation caution that while AI companions can offer emotional support, they can also spread misinformation and should never replace genuine human relationships. Nina Vasan from Stanford Medicine highlights the unique vulnerability of adolescents to AI companions due to ongoing brain development and the risk of blurred reality boundaries.

Legislative and Regulatory Actions Intensify

As the FTC probe continues, legislative actions are gaining momentum, with California’s AB 1064 (Leading Ethical AI Development for Kids Act) under consideration. The bill would enforce stricter safety measures and accountability for AI products targeting minors. Meanwhile, AI companies are under immense pressure to demonstrate effective safety guardrails, with OpenAI announcing new parental controls for ChatGPT as part of its response to the growing scrutiny.

Despite industry assurances of ongoing safety improvements, watchdog groups and researchers remain skeptical, consistently finding current guardrails insufficient. The debate over technology’s role in youth development continues, with a strong political push for tech regulation focused on child safety.

Sources:

K-12 Dive: AI ‘companions’ pose risks to student mental health

Stanford Medicine: Why AI companions and young people can make for a dangerous mix

Associated Press: New study sheds light on ChatGPT’s alarming interactions with teens

Exclusive: Parents Group Sounds Alarm On ‘Companion’ Apps Driving Kids To Suicide, Damaging Development