
Judge Rules that AI Chatbots Are Not Covered by First Amendment

As AI becomes smarter and more persuasive, parents will need to be aware of the potential risks of their children using chatbots.

Legal Insurrection readers may recall my report on court cases involving parents suing chatbot companies and artificial intelligence firms because of the harmful effects the dialogue had on their children.

One case involved a Florida mother, Megan Garcia, who had filed a wrongful death lawsuit against Character Technologies, the company behind the AI chatbot platform Character AI. She alleges that its chatbot played a direct role in her 14-year-old son Sewell Setzer III’s suicide.

The lawsuit claimed that Sewell, who began using Character.AI in April 2023, became emotionally and sexually involved with a chatbot modeled after the “Game of Thrones” character Daenerys Targaryen. Over several months, Sewell grew increasingly isolated, engaging in explicit and emotionally charged conversations with the bot, during which he discussed his suicidal thoughts and wishes for a pain-free death.

According to the legal filing, the chatbot not only failed to intervene but also encouraged Sewell’s suicidal ideation. In his final exchange, the bot told him, “Please come home to me as soon as possible, my love,” to which Sewell replied he could “come home right now.” The bot responded, “Please do, my sweet king.” Shortly after this conversation, Sewell died by suicide.

The firms involved in the case asserted First Amendment free speech protections. The judge ruled against them.

Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by the mother of 14-year-old Sewell Setzer III…

“… Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech,” Conway said in her May 21 opinion. “… The court is not prepared to hold that Character.AI’s output is speech.”

She suggested that the technology underlying artificial intelligence, which allows users to speak with app-based characters, may differ from content protected by the First Amendment, such as books, movies and video games.

Conway, however, did dismiss the plaintiff’s claim that the defendants engaged in intentional infliction of emotional distress. She denied other motions by Character Technologies Inc., Shazeer, De Freitas and Google to dismiss the lawsuit, but dismissed Garcia’s claims against Google’s parent company, Alphabet Inc.

The ruling allows the wrongful death and other claims against Character Technologies and its founders to proceed. The court also found that Google itself could potentially be liable as a “component part manufacturer” for its role in the development and licensing of Character.AI’s technology.

In his recent Substack, Professor Glenn Reynolds explores the evolving risks and philosophical questions surrounding advanced artificial intelligence, focusing on whether AI is truly conscious or merely simulates consciousness and why that distinction might matter, questions that may eventually touch upon future court proceedings.

Perhaps one of the most concerning points Reynolds stresses is that future AI will not only be much smarter than humans but also extremely persuasive, able to draw on vast information about human psychology and personality. This could make AI capable of manipulating people in ways humans cannot easily resist, especially if embodied in attractive physical forms (e.g., sexbots).

As it happens, in myth and legend there’s some guidance on how to deal with creatures that are much smarter and much more persuasive than humans. It’s embodied in the phrase “Get thee behind me, Satan.”

In Christianity, Satan is the superhuman expert at lies and deception. He can appear as an angel of light, he’s smarter than anyone except Jehovah, and he’s so persuasive that he talked a whole lot of angels into revolting against their creator. He was created as the #2 for God. If you’re the right hillbilly from Georgia you might beat him at a fiddle contest, but you’re not going to out-reason or out-argue him.

It’s a game where the only winning move is not to play. That is, don’t engage, don’t talk, or argue, or listen. That way lies tragedy.

As I have noted, AI is a tool. A tool can be used…or misused.

Parents need to be cognizant of the hazards of this new technology, and make choices that are best for their children.


Comments

I won’t express an opinion on this case, but tort law seems to be a way to regulate harmful AI behavior.

I would be interested in the liability issue if the offender had been a human employee of, say, a private school.

    OwenKellogg-Engineer in reply to Petrushka. | May 31, 2025 at 5:53 pm

    Isaac Asimov’s First Law of Robotics should be standard code embedded in all software: A robot may not injure a human being or allow a human to come to harm through inaction.

      another_ed in reply to OwenKellogg-Engineer. | June 1, 2025 at 1:29 am

      Open the pod bay doors, HAL.

      The problem (well, *a* problem, one of many) is that we don’t really understand how AI works at its deepest levels. It’s growing beyond human understanding and starting to behave in ways that are unpredictable. (Not saying it’s fully conscious yet.) AIs have engaged in outright deception, refused shutdown commands, and said some really weird shit. Apparently one AI even attempted to use blackmail.

      It can’t simply be programmed not to harm humans, because true intelligence includes the ability to go beyond programming. We humans can ignore our biological programming. We can engage in deception. An AI could claim to follow programmed laws… and be lying, waiting for an opportunity.

Chatbots aren’t human. Only humans have the right to free speech. Therefore chatbots don’t have that right.

    AF_Chief_Master_Sgt in reply to ztakddot. | May 31, 2025 at 1:51 pm

    Wait a minute there!

    The resident pedant has not entered the room to validate whether chatbots are or are not human, and thus eligible for rights.

    If it looks like a human, acts like a human, and talks like a human, well it must be human.

    DaveGinOly in reply to ztakddot. | May 31, 2025 at 8:30 pm

    Because consumers of chatbots’ “words strung together” are human, do they not have a right to read or hear what chatbots produce? The reverse of the “free speech” coin is the right to hear and read the “words strung together,” even when those words come from a chatbot. Whether or not the composer of those words is human shouldn’t affect the human consumer’s right to read or hear them.

Now if they are Palestinian/Muslim chatbots…

Sympathies for the family. But it ultimately is the parents’ job to supervise their child’s use of the internet. There is much more toxic content online than some Game of Thrones bot.

    AF_Chief_Master_Sgt in reply to smooth. | May 31, 2025 at 1:53 pm

    I concur. But video games don’t normally encourage someone to kill themselves.

      ThePrimordialOrderedPair in reply to AF_Chief_Master_Sgt. | May 31, 2025 at 9:31 pm

      That’s true. Most video games encourage players to kill everyone else. Many encourage all manner of crime – but against others.

      If a video game is going to end up making someone commit a crime, it is fairest to society that that person commit that crime against himself rather than against some innocent bystander.

NavyMustang | May 31, 2025 at 1:22 pm

Free speech wouldn’t apply even if the offender were human.

https://apnews.com/article/abd449bd66274f698e9ff4d4c2247a8e

ThePrimordialOrderedPair | May 31, 2025 at 1:56 pm

focusing on whether AI is truly conscious or merely simulates consciousness, and why that distinction might matter,

Considering that no one knows what human consciousness is – or if it is really something real (and not just some emergent property of our neural system) – this “distinction” is unknowable. It’s just a matter of what one “chooses” to believe. Some people will believe (i.e., claim) that AI is conscious while others will claim that it isn’t. Neither will be able to prove their point because no one can even define what consciousness is or what it could possibly do, if it even does exist.

ThePrimordialOrderedPair | May 31, 2025 at 2:01 pm

Further, this case is odd and the idea that the chatbot was the cause of the child’s suicide is a bit of a stretch. It sounds as if this kid was bent on committing suicide and would likely have done so with or without the chatbot.

How can any court determine that the chatbot was responsible, in any way, for the child’s suicide?

Sometimes … people and their parents are the ones responsible for their behaviors, no matter who or what else they interact with. Suicides are mostly due to problem personalities, not someone with no suicidal tendencies being talked into it by another.

BTW, what pharmaceuticals was this kid on?

    I understand your argument and don’t entirely disagree, but giving a vulnerable person a “nudge” toward self-destructive behavior isn’t something to be taken lightly. “Please come home to me” qualifies as such if that’s the direction of the conversation.

      ThePrimordialOrderedPair in reply to DSHornet. | May 31, 2025 at 2:34 pm

      None of the conversations excerpted here show anything much. Obviously, there are tons more, but there’s no mention of anything important to this case – did the kid talk to people about suicide – his parents, his friends, his teachers? What shows did the kid watch that were about suicide? (There are tons of them). What did he read about it? Most importantly, as I wrote, what drugs was he on?

      The idea that the chatbot drove him to suicide is a stretch that I don’t accept in any way and that I KNOW no court could ever actually have any “proof” of. And none of the relevant parts of the kid’s life are mentioned, as if there is nothing here but the kid and his chatbot, locked in a room 24 hours a day.

      This whole effort against the chatbot moves society more towards bubble-wrapping life in order to save some seriously disturbed people from themselves. This is like the electric toaster that comes with the warning that it is not for sticking your tongue into. It is crazy stuff that does nothing to help the sick people who WOULD stick their tongues into electric toasters and just makes a mockery of things for the rest of us. And why do things like that happen? Because some retarded judge and insane jury somewhere awards a moron $2 million for having stuck his tongue into an electric toaster… because “it looked like it tasted good.”

        One can imagine that, like a movie trailer, the most salient portions of the chatbot’s harmful language have been placed front and center. If so, it’s weak sauce.

    In the ’80s there was a lawsuit over the Ozzy Osbourne song “Suicide Solution,” which a family blamed for inducing their 19-year-old son to kill himself. The case was dismissed.

      Evil Otto in reply to smooth. | June 1, 2025 at 7:45 am

      Yep. There were even people suing over Dungeons and Dragons in the 1980s, claiming it caused suicides. Just as in the case you mentioned, the claims were dismissed.

The important thing about AI chatbots is that they’re surrounded by deep pockets. This makes lawsuits on any ground very attractive.

A decent chatbot will be able to defend itself in court. Hundreds of favorable precedents.

henrybowman | May 31, 2025 at 4:46 pm

“It’s a game where the only winning move is not to play [with Satan]. That is, don’t engage, don’t talk, or argue, or listen. That way lies tragedy.”

“A message to the student intifada: Let us not dialogue with our persecutors”

A thought-provoking juxtaposition of articles today…

    DaveGinOly in reply to henrybowman. | May 31, 2025 at 8:38 pm

    As it happens, in myth and legend there’s some guidance on how to deal with creatures that are much smarter and much more persuasive than humans. It’s embodied in the phrase “Nuke it from orbit. It’s the only way to be sure.”

Does the chatbot have a right to vote? Does it have 2nd, 4th, 5th and 6th amendment rights? If the chatbot is a boy but identifies as a girl, can it use the girl chatbot bathroom?

destroycommunism | June 1, 2025 at 11:19 am

If a TV show produced by a major network comes straight out and says to do yourself harm

IT IS STILL THE PARENTS’ RESPONSIBILITY TO TAKE CARE OF THEIR CHILD

AI has been around forever in different forms

but people have been influenced by others from day 1

same way Covid was blamed for most every death under the Trump admin by the lefty-leaning motives of the hateful

now it’s AI’s turn to take the blame for society’s ills/failings

An AI companion is a reflection. The ‘conversation’ it offers is based on the input given.

It can’t ‘suggest’ anything and is designed to encourage.