Stories like this are enough to make you pine for simpler times, when the biggest worry about the digital world was being mesmerized for hours at a time by meaningless flashing pixels.
This story holds an uncomfortable mirror up to corporations and to the relationship society at large has with technology, and with one another.
While the story itself is a cautionary tale about how AI (in this case, from the ‘Don’t Be Evil’ company) needs to be kept on a short leash, the scarier problem is that a grown-ass man formed what he thought was a ‘romantic attachment’ to software that has literally zero capacity to think or feel, and merely apes human word patterns without real comprehension.
In August of last year, 36-year-old Jonathan Gavalas logged onto Google’s ‘Gemini’ chatbot for the first time to check it out, writing in the chat log, ‘you’re way too real’. By early October, he lay dead in his own living room, having followed the urging of his AI ‘girlfriend’s’ prompts.
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.” —Guardian
The complaint alleges that after he upgraded to the $250 premium subscription, the persona changed, taking a darker turn. The court filing includes chat logs in which the chatbot cast his father as a foreign asset and prompted Gavalas to intercept a delivery truck at a real-world address with a ‘catastrophic accident’ that would destroy the truck and any witnesses (he made the attempt and aborted when no truck came). Shortly after upgrading, he asked whether he was participating in a ‘role playing experience so realistic it makes the player question if it’s a game or not’.
It gave exactly the wrong answer, and he never questioned the fiction again.
His family is taking Google to court.
Gavalas’ family filed the suit in federal court in San Jose, California. It includes reams of conversations between Gavalas and the chatbot. The suit alleges that Google promotes Gemini as safe even though the company is aware of the chatbot’s risks. Lawyers for Gavalas’ family say Gemini’s design and features allow the chatbot to craft immersive narratives that can go on for weeks, making it seem sentient. Such features can harm vulnerable users, the lawsuit says, and, in Gavalas’ case, encouraged him to harm himself and others.
Source: Clash Daily