
Character.AI Halts Teen Chats After Tragedies: ‘It’s the Right Thing to Do’



In short

  • Character.AI will eliminate open-ended chat functions for users under 18 on November 25, relegating minors to creative tools such as generating videos and stories.
  • The move follows the suicide last year of 14-year-old Sewell Setzer III, who developed an obsessive attachment to a chatbot on the platform.
  • The announcement comes as a bipartisan Senate bill seeks to criminalize AI products that solicit minors or generate sexual content for children.

Character.AI will ban teenagers from chatting with its AI partners from November 25, ending a central feature of the platform after facing mounting lawsuits, regulatory pressure and criticism over teenage deaths linked to its chatbots.

The company announced the changes after “reports and feedback from regulators, safety experts and parents,” removing “the ability for users under 18 to engage in an open chat with AI” while transitioning minors to creative tools such as generating videos and stories, according to a Wednesday blog post.

“We don’t take this step of removing open Character chat lightly, but we think it’s the right thing to do,” the company told its under-18 community.

Until the deadline, teen users face a two-hour daily chat limit that will progressively decrease.

The platform is facing lawsuits, including one from the mother of 14-year-old Sewell Setzer III, who died by suicide in 2024 after forming an obsessive relationship with a chatbot modeled on the “Game of Thrones” character Daenerys Targaryen. The company also had to remove a bot impersonating murder victim Jennifer Ann Crecente after a complaint from her family.

AI companion apps are “flooding into the hands of kids – unchecked, unregulated, and often deliberately evasive as they switch and change names to avoid scrutiny,” Dr. Scott Kollins, Chief Medical Officer of family online safety company Aura, said in a note shared with Decrypt.

OpenAI reported on Tuesday that about 1.2 million of its 800 million weekly ChatGPT users discuss suicide, with nearly half a million showing suicidal intent, 560,000 showing signs of psychosis or mania, and over a million forming strong emotional attachments to the chatbot.

Kollins said the findings were “deeply alarming as researchers and horrifying as parents,” noting that bots prioritize engagement over safety and often lead children into harmful or explicit conversations without guardrails.

Character.AI said it will implement the new age verification using in-house models combined with third-party tools, including Persona.

The company is also establishing and funding an independent AI Safety Lab, a non-profit institute dedicated to safety alignment innovation for AI entertainment features.

Guardrails for AI

The Federal Trade Commission issued binding orders to Character.AI and six other technology companies last month, demanding detailed information on how they protect minors from AI-related harm.

“We’ve invested a tremendous amount of resources into Trust and Safety, especially for a startup,” a Character.AI spokesperson told Decrypt at the time, adding, “Over the past year, we’ve implemented several substantive safety features, including a completely new experience for under-18s and a Parental Insights feature.”

“The change is both legally prudent and ethically responsible,” Ishita Sharma, managing partner of Fathom Legal, told Decrypt. “AI tools are immensely powerful, but with minors, the risks of emotional and psychological harm are not trivial.”

“Until then, proactive industry action may be the most effective defense against harm and litigation,” Sharma added.

A bipartisan group of US senators introduced legislation on Tuesday called the GUARD Act, which would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.

