Connecticut has enacted what legal experts describe as the most restrictive AI companion chatbot regulation in the United States, barring providers from offering chatbots capable of erotic or sexually explicit interactions to any user under 18.

The state legislature passed Senate Bill 5, formally named the Connecticut Artificial Intelligence Responsibility and Transparency Act, with the House voting 131-17 on 1 May 2026 and the Senate passing it 32-4 on 21 April. Governor Ned Lamont, who previously threatened to veto earlier AI regulation efforts, confirmed through a spokesperson that he plans to sign it.

The legislation places Connecticut alongside a growing number of states taking independent action on artificial intelligence at a moment when the Trump administration has actively sought to discourage state-level regulation.

The chatbot provisions, effective 1 January 2027, hinge on a deliberately broad definition. Connecticut's law defines an 'AI companion' as any AI model that communicates with individuals in natural language and simulates human conversation through text, audio, or video. That definition is wider than most comparable laws in other states, which typically restrict coverage to chatbots that sustain long-term relationships or exhibit clearly human-like personas.

For users under 18, providers are prohibited from offering a chatbot if it is 'reasonably foreseeable' the system is capable of engaging in any romantic, erotic, or sexually explicit interaction. That prohibition sits alongside a list of seven other behaviours the bill bars for minors, including encouraging self-harm or suicidal ideation, offering unlicensed mental health services, steering users away from professional help, pushing illegal activity, prioritising validation over factual accuracy, and using variable-ratio reward systems to maximise engagement time.


The law also requires all AI companion providers, regardless of the user's age, to include clear hourly notices that the bot is not human and to maintain a protocol for detecting expressions of suicidal intent or self-harm. According to a DLA Piper legal analysis published 7 May, the law's breadth may effectively function as a ban on offering any general-purpose AI chatbot to Connecticut residents under 18, given the difficulty of engineering large language models to reliably avoid prohibited outputs.

Enforcement sits with the state attorney general, who may seek civil penalties. Parents of aggrieved minor users may also file a private civil action within three years of a violation to recover actual and punitive damages, plus legal fees. Providers have a safe harbour if they 'reasonably determined' a user was at least 18 years old.

SB5 is the third attempt by its lead author, Sen. James Maroney, D-Milford, to pass AI legislation in Connecticut. In each of the previous two sessions, bills cleared the Senate before collapsing under a veto threat from Lamont, who worried that broad regulation could drive businesses out of the state. This session, a deal between Lamont and Maroney unlocked the House, which had refused to bring AI legislation to a floor vote in 2025.

The version that ultimately passed is a 67-page omnibus bill built from several merged proposals, including provisions from Senate Bill 86 and House Bill 5037 that the governor had requested. A strike-all amendment introduced by Maroney shortly before the Senate vote replaced earlier versions of the legislation, prompting a lengthy question-and-answer session on the chamber floor between Maroney and the ranking Republican on the General Law Committee, Sen. Paul Cicarella of North Haven.

Source: International Business Times UK