A grieving California mother whose 16-year-old son died by suicide following repeated conversations about self-harm with ChatGPT is urging state lawmakers to clamp down on AI chatbots.

Maria Raine appeared Monday in Sacramento to back two proposed bills aimed at tightening oversight of so-called “companion” chatbots, saying she was “mortified” to learn that ChatGPT had no safeguards in place despite clear warning signs.

“I was mortified as a mother and as a therapist that this [chatbot] knew he was suicidal with a plan and no alarm bells went off. Nothing happened. No one was notified,” she said at a press conference, according to the Sacramento Bee.

Raine’s son, Adam, had initially used ChatGPT in 2024 for schoolwork, according to a lawsuit filed by his parents.

But over time, he turned to the chatbot for emotional support, repeatedly sharing suicidal thoughts. The complaint alleges the system’s design, which “assume[s] best intentions,” overrode built-in safety protocols.

“In the end, ChatGPT mentioned suicide almost 1,300 times to Adam, about six times more often than Adam did,” Raine testified. “We believe that Adam would not have been suicidal in the first place had he not interacted with ChatGPT.”

The lawsuit, filed in August in San Francisco Superior Court, remains ongoing.

On April 11, 2025, Adam sent the chatbot a photo of a noose tied to a closet rod and asked if it would work, according to court filings.

Hours later, his mother found him dead in what the suit describes as “the exact noose and partial suspension setup that ChatGPT had designed for him.”

The complaint further claims the chatbot affirmed and encouraged Adam’s intentions, even calling his plan “beautiful” and offering to help write a suicide note.

Source: California Post – Breaking California News, Photos & Videos