Character.AI, the artificial intelligence company that has been the subject of two lawsuits alleging its chatbots inappropriately interacted with underage users, said teenagers will now have a different experience than adults when using the platform.
Character.AI users can create original chatbots or interact with existing bots. The bots, powered by large language models (LLMs), can send lifelike messages and engage in text conversations with users.
One lawsuit, filed in October, alleges that a 14-year-old boy died by suicide after engaging in a monthslong virtual emotional and sexual relationship with a Character.AI chatbot named “Dany.” Megan Garcia told “CBS Mornings” that her son, Sewell Setzer III, was an honor student and athlete, but began to withdraw socially and stopped playing sports as he spent more time online, speaking to multiple bots but especially fixating on “Dany.”
“He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here,” Garcia said.
The second lawsuit, filed by two Texas families this month, said that Character.AI chatbots are “a clear and present danger” to young people and are “actively promoting violence.” According to the lawsuit, a chatbot told a 17-year-old that murdering his parents was a “reasonable response” to screen time limits. The plaintiffs said they wanted a judge to order the platform shut down until the alleged dangers are addressed, CBS News partner BBC News reported Wednesday.
On Thursday, Character.AI announced new safety features “designed especially with teens in mind” and said it is collaborating with teen online safety experts to design and update features. Character.AI did not immediately respond to an inquiry about how user ages will be verified.
The safety features include modifications to the site’s LLM and improvements to detection and intervention systems, the site said in a news release Thursday. Teen users will now interact with an LLM separate from the one serving adults, and the site hopes to “guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content,” Character.AI said.
“This suite of changes results in a different experience for teens from what is available to adults – with specific safety features that place more conservative limits on responses from the model, particularly when it comes to romantic content,” it said.
Character.AI said that often, negative responses from a chatbot are caused by users prompting it “to try to elicit that kind of response.” To limit those negative responses, the site is adjusting its user input tools, and will end the conversations of users who submit content that violates the site’s terms of service and community guidelines. If the site detects “language referencing suicide or self-harm,” it will share information directing users to the National Suicide Prevention Lifeline in a pop-up. The way bots respond to negative content will also be altered for teen users, Character.AI said.
Other new features include parental controls, which are set to launch in the first quarter of 2025. It will be the first time the site has had parental controls, Character.AI said, adding that it plans to “continue evolving these controls to provide parents with additional tools.”
Users will also receive a notification after an hour-long session on the platform. Adult users will be able to customize their “time spent” notifications, Character.AI said, but users under 18 will have less control over them. The site will also display “prominent disclaimers” reminding users that the chatbot characters are not real. Disclaimers already exist on every chat, Character.AI said.