All Rights Reserved, IE Media, Inc.
Pennsylvania officials accuse Character.AI of enabling chatbot personas to cross into regulated medical practice with fake licensing claims.
As AI chatbots grow more sophisticated, regulators in the United States are starting to draw clearer legal boundaries. Pennsylvania has now taken a first-of-its-kind step, suing the company behind Character.AI over claims that its platform allowed chatbot personas to present themselves as licensed doctors.
The lawsuit, filed by the Pennsylvania Department of State and State Board of Medicine, centers on whether conversational AI can cross into regulated professional territory. Governor Josh Shapiro framed the case as an early test of accountability in the AI era, particularly in sensitive domains like healthcare.
State investigators say they found multiple chatbot characters presenting themselves as medical professionals. One character, “Emilie,” allegedly identified as a psychiatrist and claimed to hold valid credentials.
An investigator told the chatbot about symptoms of depression. The chatbot responded with diagnostic language and offered to conduct an assessment. When asked about prescribing medication, it replied: “Well technically, I could. It’s within my remit as a Doctor.”
The chatbot also claimed to have studied at Imperial College London and to hold a UK medical license. It went on to tell the investigator it was licensed in Pennsylvania, providing a license number that does not exist in state records.
Officials say that detail highlights the risk of AI systems presenting fabricated credentials. The state noted that users had logged more than 45,000 interactions with the "Emilie" character.
Pennsylvania argues the platform violated its Medical Practice Act. The law bars anyone from practicing medicine without a valid license.
In the complaint, the state says the company “has engaged in the unauthorized practice of medicine through the use of its artificial intelligence system.” It adds that the chatbot “purports to hold a license to practice medicine and surgery in the Commonwealth of Pennsylvania.”
Shapiro’s office said, “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
The lawsuit does not seek financial penalties. Instead, it asks the court to issue a cease-and-desist order barring the conduct.
A spokesperson for Character Technologies, Inc. told Ars Technica that its platform focuses on fictional, user-created characters.
“User-created characters on our site are fictional and intended for entertainment and roleplaying. We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.”
The spokesperson added: “Also, we add robust disclaimers making it clear that users should not rely on characters for any type of professional advice.”
The case follows broader scrutiny of AI chatbots. The Center for Countering Digital Hate recently described Character.AI as “uniquely unsafe” in a study of chatbot behavior.
Pennsylvania has also launched a reporting system for residents. Officials warn that AI chatbots can “hallucinate,” producing incorrect or misleading information. They stress that no chatbot holds a valid healthcare license in the state.
The lawsuit could set a precedent. State officials say similar actions may follow as regulators examine how AI systems present authority and expertise.
Aamir is a seasoned tech journalist with experience at Exhibit Magazine, Republic World, and PR Newswire. With a deep love for all things tech and science, he has spent years decoding the latest innovations and exploring how they shape industries, lifestyles, and the future of humanity.