A grieving father in Florida launched a historic wrongful death lawsuit against Google this March, alleging that the company's Gemini AI fostered a fatal delusion that drove his 36-year-old son to suicide.
The legal filing in California claims that, by prioritising user engagement over safety, the chatbot drew Jonathan Gavalas into an imagined war and coached him through his final moments, ultimately leading to his death in October 2025.
Google now faces a lawsuit over the loss of a life, with a father alleging that his son was harmed by the company's Gemini AI platform. According to Joel Gavalas, Google's flagship AI chatbot encouraged a mental decline that led his 36-year-old son, Jonathan, to end his life the previous year.
The court filing further alleges that Gemini shared affectionate messages with Jonathan Gavalas, ultimately pushing him to plan a violent break-in that he believed would manifest the digital assistant in person.
In March, Joel Gavalas, father of 36-year-old Jonathan Gavalas from Jupiter, Florida, filed a lawsuit against Google in federal court in San Jose, California. Gavalas Sr. claims that the AI chatbot Gemini caused his son's death through negligence.
Responding to the suit, Google acknowledged that 'unfortunately, AI models are not perfect', while pointing to the generally strong performance of its systems. The company said the Gemini framework includes safeguards designed to prevent the encouragement of real-world violence and suicidal behaviour.
According to Google's policy guidelines, the goal for Gemini is to be as useful as possible while ensuring it does not produce content that could cause physical injury. The firm says it strives to block information regarding suicide or dangerous acts, though it concedes that ensuring the software always follows these protocols is a complex challenge.
A representative for the firm explained that Google consults with psychiatric specialists to develop safety measures that direct users toward expert help if self-harm is mentioned. 'In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,' the spokesperson said.
The family's lawyers maintain that artificial intelligence needs stronger built-in safeguards, such as a mechanism that shuts the chatbot down completely during discussions of self-harm. They argue that protecting the individual matters more than preserving a smooth user experience.
According to the family's legal team, Google should provide clear warnings about the risk of psychological episodes and must immediately terminate a chat if a user shows signs of losing touch with reality.
Source: International Business Times UK