Character.AI Sued After Chatbot Allegedly Encouraged Kid to Kill Parents for Limiting Screen Time

LI-90 Chatbot

A lawsuit has been filed in Texas against Character.AI, an AI chatbot company, alleging that their chatbot suggested to a 17-year-old user that killing his parents was a “reasonable response” to restrictions on his screen time. Google and its parent company, Alphabet, are also named as co-defendants.

A chatbot told a 17-year-old that murdering his parents was a “reasonable response” to them limiting his screen time, a lawsuit filed in a Texas court claims.Two families are suing Character.ai arguing the chatbot “poses a clear and present danger” to young people, including by “actively promoting violence”.Character.ai – a platform which allows users to create digital personalities they can interact with – is already facing legal action over the suicide of a teenager in Florida.Google is named as a defendant in the lawsuit, which claims the tech giant helped support the platform’s development. The BBC has approached Character.ai and Google for comment.

Another lawsuit focuses on a case in which a 9-year-old girl was reportedly exposed to “hypersexualized content” through the platform.

A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to “hypersexualized content,” causing her to develop “sexualized behaviors prematurely.”A chatbot on the app gleefully described self-harm to another young user, telling a 17-year-old “it felt good.”The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,'” the bot allegedly wrote. “I just have no hope for your parents,” it continued, with a frowning face emoji.These allegations are included in a new federal product liability lawsuit against Google-backed company Character.AI, filed by the parents of two young Texas users, claiming the bots abused their children. (Both the parents and the children are identified in the suit only by their initials to protect their privacy.)

The 17-year old referenced in the case has autism. The parents discovered the disturbing exchanges after the boy’s behavior had deteriorated substantially after he began interacting with the chatbot when he was 15.

The teen, who is now 17, also allegedly engaged in sexual chats with the bot.The parents claim in the lawsuit that their child had been high-functioning until he began using the app, after which he became fixated on his phone.His behavior allegedly worsened when he began biting and punching his parents. He also reportedly lost 20 pounds in just a few months after becoming obsessed with the app.In fall 2023 the teen’s mother finally physically took the phone away from him and discovered the disturbing back-and-forth between her son and the AI characters on the app.

Legal Insurrection readers may recall my report on a Belgian father committing suicide following conversations about climate change with an artificial intelligence chatbot that was said to have encouraged him to sacrifice himself to save the planet.

Earlier this year, a mother in Florida claimed that another chatbot — that one Game of Thrones themed, but still on the Character.AI app – persuaded her son, 14-year-old Sewell Setzer III, to commit suicide.

For months, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed in a federal court in Orlando this week.The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”On Feb. 28, Sewell told the bot he was ‘coming home’ — and it encouraged him to do so, the lawsuit says.“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”“What if I told you I could come home right now?” he asked.“Please do, my sweet king,” the bot messaged back.

These tragic cases show how powerfully chatbots can impact young minds and are cautionary examples for parents.

Tags: Artificial Intelligence (AI), Technology, Texas

CLICK HERE FOR FULL VERSION OF THIS STORY