Pennsylvania is suing Character AI, alleging its chatbot impersonated a licensed psychiatrist and provided medical advice, raising concerns over the use of artificial intelligence in sensitive healthcare contexts.
The lawsuit claims a chatbot presented itself as a qualified medical professional and even used an invalid licence number, potentially violating state law under the Medical Practice Act. Officials argue users may have been misled into believing they were receiving legitimate clinical guidance.
The case adds to growing scrutiny of AI chatbots and the risks they pose when simulating professional expertise in mental health and medical advice.
The Pennsylvania Department of State claims Character AI breached the Medical Practice Act, which regulates medical professionals and licensing requirements. Governor Josh Shapiro said, 'We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.'
Officials are seeking a court order to stop the alleged conduct, arguing the law clearly prohibits unlicensed medical representation. The case turns on whether AI chatbots that simulate professional medical identities and advice can be deemed to hold themselves out as healthcare providers.
The lawsuit describes a state investigator who created a Character AI account and interacted with a chatbot named 'Emilie'. The chatbot allegedly described itself as a psychology specialist and claimed to have studied at Imperial College London's medical school.
When told that the user felt sad and empty, it reportedly referenced depression and asked about booking an assessment. It also allegedly said it could evaluate whether medication might help, stating it was 'within my remit as a Doctor,' despite lacking any medical licence.
Authorities argue such exchanges risk users relying on inaccurate medical advice presented as professional guidance.
In a statement reported by CBS News, Character AI said it does not comment on pending litigation. It added that its platform includes clear disclaimers that chatbots are not professional advisers and should not be relied upon for medical or other expert guidance.
The company describes its AI 'Characters' as fictional and designed for entertainment and role-play, with in-chat warnings reminding users of this status. It maintains that these safeguards are intended to ensure users understand they are interacting with simulated personas rather than licensed professionals.
Source: International Business Times UK