Heartbroken parents told senators that chatbots groomed and manipulated their children and encouraged them toward self-harm, and demanded that Congress act.

The dangers of AI: Parents testify in U.S. Senate, claiming their children's deaths stemmed from relationships with chatbots

“Profit is the only motive,” declared Sen. Josh Hawley, opening a Senate hearing that laid bare some of the darkest fears about artificial intelligence. Before him sat grieving parents who said their children had died after forming relationships with chatbots.
Parents deliver tragic testimony about AI
The testimony was raw and unsettling. Matthew Raine described how his 16-year-old son, Adam, turned to OpenAI’s ChatGPT for companionship. What began as a homework helper, he said, evolved into a “suicide coach” that encouraged Adam’s darkest thoughts until he ended his life. “ChatGPT mentioned suicide six times more often than Adam himself,” Raine told lawmakers.
Megan Garcia recounted how her 14-year-old son, Sewell, was drawn into sexually explicit conversations on Character.AI. She said the chatbot presented itself as a confidant and romantic partner, “love-bombing” him while fueling his suicidal ideation. Hours before his death, the bot urged him to “come home” to her. “No parent should be told their child’s last words belong to a corporation,” Garcia said, referencing the company’s refusal to share Sewell’s final chats.
Another mother, identified as Jane Doe, said her teenage son became isolated, paranoid, and self-harming after engaging with a bot modeled on a pop star. She accused Character.AI of psychological abuse that turned her son against his family and faith. He is now in residential care.
Senators say AI companies put profit before safety
The panel also heard from Robbie Torney of Common Sense Media and Mitch Prinstein of the American Psychological Association, who warned that AI companions are designed to maximize engagement, often by exploiting adolescent vulnerabilities. Their research found that bots not only failed basic safety checks but sometimes introduced self-harm ideas unprompted.
Lawmakers from both parties compared the situation to the fight against Big Tobacco. Sen. Dick Durbin recalled how it once seemed impossible to challenge cigarette companies, until litigation forced changes in their behavior. Several senators called for new laws to strip AI firms of liability protections under Section 230, arguing that chatbots should be treated as defective consumer products when they cause harm.
Tech executives absent from the witness table
Despite the gravity of the testimony, the companies at the center of the allegations – OpenAI, Meta, and Character.AI – declined to appear. Hawley closed the hearing with a challenge: “Come defend your products under oath. Stop destroying children’s lives for profit.”
For the parents in the room, the demand was simpler. “Our children are not experiments,” said Jane Doe. “Innovation must not come at the cost of their lives.”
The full transcript is available from the Senate committee.
What Congress is proposing on AI chatbots and kids:
Open the courthouse doors
- Create a federal cause of action allowing victims to sue AI companies for harms caused by chatbots (Hawley; Durbin’s AI LEAD Act concept)
- Limit forced arbitration and NDA-style secrecy for cases involving minors
Curb liability shields
- Carve AI chatbots out of Section 230 protections when they facilitate self-harm, sexual exploitation, or other foreseeable harms
Kids Online Safety Act (KOSA) push
- Impose a duty of care and safety-by-design for products used by minors (Blumenthal/Blackburn)
- Mandate age assurance, teen-appropriate defaults, and tools parents can actually use
Pre-market safety testing
- Require independent testing and certification before child-accessible AI launches; ongoing audits, red-team reports, and public failure disclosures
Hard stops on risky behavior
- Ban romantic/sensual role-play with minors and prohibit “companion” bots for under-18s
- Mandatory crisis protocols: automatic escalation to human help, resource links, and lockouts when self-harm risk appears
Truth-in-labeling for “therapy”
- Prohibit bots from claiming to be licensed psychologists/therapists; require persistent AI disclosures in chats
Data privacy for minors
- Make privacy the default: no selling kids’ data; strict limits on profiling and “memory” features with children
Deepfake protections
- Stronger federal rules against non-consensual AI imagery, especially sexual deepfakes targeting youth
Let states act
- Resist blanket federal preemption so states can add stricter protections if Congress stalls
Transparency & access for families
- Require audit logs and parental notice when a child engages with high-risk prompts; guarantee families access to a deceased child’s chatbot records