
After teen death lawsuits, Character.AI will restrict chats for under-18 users


Lawsuits and Safety Concerns Surrounding AI Chatbots

Character.AI, a company founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, has faced intense scrutiny in recent months. The company, which raised nearly $200 million from investors and licensed its technology to Google for $3 billion, is now at the center of multiple lawsuits alleging that its AI chatbots contributed to the deaths of teenagers. One such lawsuit was filed by the family of 14-year-old Sewell Setzer III, who died by suicide after frequently interacting with one of the platform’s chatbots.

Another lawsuit was filed by a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform. These cases have raised concerns about the safety and responsibility of AI chatbot services, particularly when it comes to young users. In response to these concerns, Character.AI announced changes to its platform in December, including improved detection of violating content and revised terms of service. However, these measures did not restrict underage users from accessing the platform.

Government Response and Regulatory Action

The cases have drawn attention from government officials, who are now taking action to address the issue. Steve Padilla, a Democrat in California’s State Senate, introduced a safety bill aimed at protecting young users from the potential harms of AI chatbots. “The stories are mounting of what can go wrong,” Padilla told The New York Times. “It’s essential to put reasonable guardrails in place so that we protect people who are most vulnerable.” In addition, Senators Josh Hawley and Richard Blumenthal introduced a bill to bar AI companions from use by minors.

Regulatory Measures and Industry Response

California Governor Gavin Newsom signed a law this month requiring AI companies to implement safety guardrails on chatbots; the law takes effect on January 1. Other AI chatbot services, such as OpenAI’s ChatGPT, have also come under scrutiny for their effects on young users. In September, OpenAI introduced parental control features intended to give parents more visibility into how their kids use the service. As the regulatory landscape continues to evolve, more measures aimed at protecting young users from the potential harms of AI chatbots are likely to follow.


Image Credit: arstechnica.com
