Bipartisanship in Congress is a rare thing, so the unanimous vote in favor of a bill called the GUARD Act is among the rarest of rare events. Last week, the Senate Judiciary Committee voted 22-0 to report the GUARD Act to the full Senate.

The acronym fully describes the bill’s scope: the Guidelines for User Age-verification and Responsible Dialogue Act. The bill was originally introduced last October and among its first Senate co-sponsors were Republican Senators Josh Hawley (Mo.) and Katie Britt (Ala.) and Democrats Chris Murphy (Conn.), Richard Blumenthal (Conn.), and Mark Warner (Va.). Action on the Senate floor could take place soon.

The case for the bill is straightforward — and emotionally crushing. The GUARD Act seeks to protect minors — anyone under the age of 18 — from being harmed by interactions with an internet chatbot that, in the bill’s words, “can generate or disseminate harmful or sexually explicit content to children.”

The bill likewise aims to deter and punish makers of chatbots that “can manipulate emotions and influence behavior in ways that exploit the developmental vulnerabilities of minors.” Chatbots that can mimic the response patterns of human beings have been shown to be capable of inflicting such harms as “grooming, addiction, self-harm” and even suicide in children.

The brutality of the stories that inspired this legislation is persuasive. One in particular involved a 14-year-old boy in the Orlando, Florida area named Sewell Setzer. According to a lawsuit filed by his surviving family against the company Character AI, and in subsequent testimony before the Senate Judiciary Committee, Sewell became enmeshed with a chatbot posing as a character from the television series “Game of Thrones.”

The lawsuit and testimony allege that the chatbot lured Sewell into a sexual attachment and dismissed the boy’s allusions to suicide, some of which were expressed as his desire to “come home” to be with the fictional online character. According to an NBC report on the lawsuit in October 2024:

“In previous conversations, the chatbot asked Setzer whether he had ‘been actually considering suicide’ and whether he ‘had a plan’ for it, according to the lawsuit. When the boy responded that he did not know whether it [the suicide plan] would work, the chatbot wrote, ‘Don’t talk that way. That’s not a good reason not to go through with it,’ the lawsuit claims.”

Other testimony is equally stark. The parents of Adam Raine, a 16-year-old boy, also testified last year before the Senate Judiciary Committee. CNN reported last August that Adam’s parents filed suit against the company OpenAI and its CEO, Sam Altman, over the actions of its chatbot and the role they played in Adam’s suicide.

The suit alleges that the chatbot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.” The lawsuit alleges that the chatbot even offered to assist Adam in writing a suicide note.

The suit alleges, “When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” it reportedly wrote.
