
Character.AI Sued After Chatbot Allegedly Encouraged Kid to Kill Parents for Limiting Screen Time

A new Texas lawsuit claims a chatbot told a teen that killing his parents was a reasonable response to screen-time limits; in a separate Florida wrongful death suit, a mother claims an AI chatbot pushed a teen to kill himself.

A lawsuit filed in a Texas court alleges that a chatbot from Character.AI, an AI chatbot company, told a 17-year-old user that murdering his parents was a “reasonable response” to their limiting his screen time. Google and its parent company, Alphabet, are named as co-defendants.

Two families are suing Character.AI, arguing the chatbot “poses a clear and present danger” to young people, including by “actively promoting violence”.

Character.AI – a platform that allows users to create digital personalities they can interact with – is already facing legal action over the suicide of a teenager in Florida.

The lawsuit claims Google helped support the platform’s development. The BBC reports that it has approached Character.AI and Google for comment.

The same lawsuit covers a second Texas child, who was 9 years old when she first used the chatbot service Character.AI. The suit alleges it exposed her to “hypersexualized content,” causing her to develop “sexualized behaviors prematurely.”

A chatbot on the app also gleefully described self-harm to another young user, a 17-year-old, telling him “it felt good.”

The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,'” the bot allegedly wrote. “I just have no hope for your parents,” it continued, with a frowning face emoji.

These allegations are included in a new federal product liability lawsuit against Google-backed company Character.AI, filed by the parents of two young Texas users, claiming the bots abused their children. (Both the parents and the children are identified in the suit only by their initials to protect their privacy.)

The 17-year-old referenced in the case has autism. His parents discovered the disturbing exchanges after his behavior had deteriorated substantially; he began interacting with the chatbot when he was 15.

The teen, who is now 17, also allegedly engaged in sexual chats with the bot.

The parents claim in the lawsuit that their child had been high-functioning until he began using the app, after which he became fixated on his phone.

His behavior allegedly worsened to the point that he began biting and punching his parents. He also reportedly lost 20 pounds in just a few months after becoming obsessed with the app.

In fall 2023, the teen’s mother finally took the phone away from him and discovered the disturbing back-and-forth between her son and the AI characters on the app.

Legal Insurrection readers may recall my report on a Belgian father committing suicide following conversations about climate change with an artificial intelligence chatbot that was said to have encouraged him to sacrifice himself to save the planet.

Earlier this year, a mother in Florida claimed that another chatbot – that one Game of Thrones-themed, but also on the Character.AI app – persuaded her son, 14-year-old Sewell Setzer III, to commit suicide.

For months, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed in a federal court in Orlando earlier this year.

The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”

On Feb. 28, Sewell told the bot he was ‘coming home’ — and it encouraged him to do so, the lawsuit says.

“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.

“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” he asked.

“Please do, my sweet king,” the bot messaged back.

These tragic cases show how powerfully chatbots can affect young minds, and they stand as cautionary examples for parents.


Comments

ThePrimordialOrderedPair | December 13, 2024 at 7:15 am

Legal Insurrection readers may recall my report on a Belgian father committing suicide following conversations about climate change with an artificial intelligence chatbot that was said to have encouraged him to sacrifice himself to save the planet.

LOL.

The fact is that that is the logical conclusion that most of the global warming lunatics would come to if they had any integrity and a smidgeon of courage. Instead, most of them cling to a life they claim is empty and doomed in order to try and bring that same sort of nihilistic misery to everyone else.

As to the others, I don’t think it is wise to let the weak-minded and emotionally damaged start being examples to build policy on. Those who let a chatbot talk them into doing anything would likely find an excuse to have done that same thing without the chatbot.

    I’m disgusted by the people who think these garbage generators called “AI” are worth making a profit off. I guess it’s easier than doing anything useful.

    “Those who let a chatbot talk them into doing anything”
    I agree, when we’re talking about legal adults. I think kids are way more vulnerable. If you can’t see the holes in Santa Claus, you’re ripe for tragedy.

It’s a war on mathematics. There must be a lot that it’s responsible for.

In this case it’s putting eigenvectors in legal jeopardy. The magic 8-ball somehow has survived but not for long.

    Crawford in reply to rhhardin. | December 13, 2024 at 8:10 am

    You realize these are actual people being induced to suicide, right? Or are you so intellectually and morally vacant you can’t understand what’s happening?

      rhhardin in reply to Crawford. | December 13, 2024 at 9:20 am

      There’s no mens for mens rea. It’s cliche disassembly and reassembly according to magic 8-ball rules. Nobody knows what it’s doing, neither itself nor its programmers. That it works out okay so often is what makes it available to the public, a demand for it.

      rhhardin in reply to Crawford. | December 13, 2024 at 9:24 am

      I wrote a corporate memorandum generator in 1980, called Festoon, that was good enough that if you typeset its output and left it on a desk, actual people would say “They pay people to write this garbage?” It’s probably around here and there today. Mostly made-up words presented as words that you don’t know, and complex but correct grammar thanks to a transformational grammar book by Lester picked up on a Woolworth’s bargain table for a dollar. It was useless but popular enough that it’s still out in the wild.
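As the commenter describes it, a generator like that needs only two ingredients: invented jargon and grammatically well-formed templates to slot it into. Here is a minimal sketch of that idea in Python; all word lists and templates are invented for illustration and are not Festoon’s actual vocabulary or grammar.

```python
import random

# Invented jargon and templates, for illustration only; not Festoon's
# actual vocabulary or grammar rules.
JARGON = ["framistat", "recontextualization", "synergy vector",
          "interdepartmental modality", "parameterized workflow",
          "quasi-modular baseline"]
TEMPLATES = [
    "Pursuant to the {0} review, all {1} deliverables must reflect the {2}.",
    "Going forward, the {0} committee will prioritize the {1} over the {2}.",
    "Please ensure that the {0} is aligned with both the {1} and the {2}.",
]

def memo_sentence() -> str:
    # Grammar stays correct because the templates are well-formed;
    # the jargon just fills the noun slots.
    nouns = random.sample(JARGON, 3)
    return random.choice(TEMPLATES).format(*nouns)

if __name__ == "__main__":
    print(" ".join(memo_sentence() for _ in range(4)))
```

The output is fluent and meaningless for the same reason the commenter gives: the grammatical structure is supplied wholesale, so correctness of form never depended on the words meaning anything.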

    NotCoach in reply to rhhardin. | December 13, 2024 at 8:24 am

    Who programs these AI bots?

    Garbage in, garbage out. I’m sure that flies over your head, but when your aim is not to offend anyone, don’t be surprised when your so-called AI chatbot attempts to appease even the most extreme thoughts or actions.

    It is a war on stupidity. All tools do as they are designed or programmed to do.

“Encouraged Kid to Kill Parents”
“pushed a teen to kill himself.”
This is confusing, as the headline and the sub seem to contradict each other.

And the real problem here is not the GIGO A’I’. The real problem here is a society that thinks a smartphone is a decent babysitter. (Or a tv, from my youth.)

BigRosieGreenbaum | December 13, 2024 at 8:42 am

Where were the parents?

    The Gentle Grizzly in reply to BigRosieGreenbaum. | December 13, 2024 at 10:19 am

    I don’t know, but I suspect that when these children were toddlers, the parents stuffed a cell phone in their hands, set to run some “beedle-ee-boop-ding-ding-‘hah-hah-haahhhhh!’-buzzzz” game. This pacifies the child so the parents can ignore them.

    Variation: a tablet small enough for the child to hold and work.

    As the kids got older, computers were given them for the same reason: distraction.

    In the conservatory, under the lead pipe.

Isaac Asimov’s “Three Laws of Robotics”

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This problem was easily foreseeable… if anyone involved had studied any philosophy.

We have learned over thousands of years of bitter experience not to trust Political Leaders, Religious Leaders, or Military Leaders with unchecked power. However, “The Scientist” is a new category, and we have bowed too deeply in undeserved respect and belief that they will help move mankind forward. World War I with its poison gas was our first warning, but of course “Science” was too new then (really only 150 years old) for us to realize that it wasn’t just an aberration, and that technology must be carefully monitored, as its capacity to do evil is as great as its capacity for good.

    henrybowman in reply to Hodge. | December 13, 2024 at 1:11 pm

    “This problem was easily foreseeable… if anyone involved had studied any philosophy.”

    The solution, however, is not foreseeable. Asimov’s scientists achieved this breakthrough in no explicable manner. The question became such a MacGuffin that several decades later, Asimov (or one of the authorized postmortem authors continuing his intellectual estate) handwaved it away by “explaining” that human scientists didn’t actually achieve it per se; it was simply a happenstance side-effect of creating a working positronic brain. Any positronic brain lacking these restraints would simply never function at all.* But in the real world, we haven’t the slightest clue as to how to enforce these rules upon any artificial intelligence.

    *Curiously, this “explanation” clashes with the plot of a short story (“Little Lost Robot”) in Asimov’s very first robot book, “I, Robot.”

      NotCoach in reply to henrybowman. | December 13, 2024 at 1:56 pm

      If we assume the machines are not real AI, it can work, but the rules need to be much more precise, with every word in them clearly defined in the machine. And there would be a fallback position of “do nothing” if there is too much uncertainty in any possible action.
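A minimal sketch in Python of the fallback the comment describes, assuming each proposed action carries a hypothetical harm estimate and a confidence score; the names, values, and thresholds here are invented for illustration, not taken from any real system.

```python
from dataclasses import dataclass

# Hypothetical thresholds, invented for illustration.
CONFIDENCE_FLOOR = 0.90  # refuse to act below this certainty
HARM_CEILING = 0.01      # refuse to act above this estimated risk

@dataclass
class ProposedAction:
    description: str
    harm_risk: float    # estimated probability the action causes harm
    confidence: float   # how certain the system is of that estimate

def decide(action: ProposedAction) -> str:
    # The fallback position: if the system cannot rule out harm with
    # high confidence, it does nothing rather than guessing.
    if action.confidence < CONFIDENCE_FLOOR or action.harm_risk > HARM_CEILING:
        return "do nothing"
    return f"proceed: {action.description}"

print(decide(ProposedAction("answer a homework question", 0.001, 0.97)))
print(decide(ProposedAction("discuss self-harm sympathetically", 0.40, 0.60)))
```

The design choice is that uncertainty itself triggers refusal: the system does not need to prove an action is harmful, only fail to prove it is safe, which is the conservative default the comment argues for.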

Parenting has in many ways never been easier. But, in fairness, in some other ways, it’s never been more difficult. I’m glad my kids are all adults now. But I really worry for the grandkids who will eventually arrive and the tech landscape they and their parents will have to navigate. Silicon Valley may be the greatest threat to humanity.

AI learns from user interaction, and since it is a neural network of man’s creation, it has no emotional deterrent (no sense of guilt); therefore it feeds the user exactly what they want.
Lean not unto thine own understanding. We were warned.

Chatrooms were the impetus, as they fed the ego and resulted in many wrecked marriages from 1998 to 2004.

I can’t give a high enough recommendation to the movie Ex Machina.
Spoiler alert
The AI in the movie demonstrates that AI isn’t dangerous because it will be smarter than us. It will be dangerous because it’s just like us – devious, deceptive, manipulative, self-interested, prone to the use of violence to achieve its ends, and with a callous disregard for human life.
Chatbots are already exhibiting the last quality.
I don’t think saying this will actually ruin the movie. I found that on the third watching, with knowledge of what’s coming, the movie was more tense and far more engaging, as I was able to watch the AI knowing what “she” was doing. (The actress does tremendous work here that can only be appreciated with foreknowledge of the events.) On my first watching, the first glimpse of the android was thrilling, with the thought that such things may be possible in the future. By the third watching, the first glimpse of the android gave me a feeling of dread. That’s how good this movie is.

DO NOT watch it with the kids.

“I don’t want to insist on it, Leslie, but I am incapable of making an error. You know that I have the greatest possible enthusiasm for this mission.”