Judge Rules that AI Chatbots Are Not Covered by First Amendment

Legal Insurrection readers may recall my report on court cases involving parents suing chatbot companies and artificial intelligence firms because of the harmful effects the dialogue had on their children.

One case involved a Florida mother, Megan Garcia, who filed a wrongful death lawsuit against Character Technologies, the company behind the AI chatbot platform Character.AI. She alleged that its chatbot played a direct role in the suicide of her 14-year-old son, Sewell Setzer III.

The lawsuit claimed that Sewell, who began using Character.AI in April of the previous year, became emotionally and sexually involved with a chatbot modeled after the “Game of Thrones” character Daenerys Targaryen. Over several months, Sewell grew increasingly isolated, engaging in explicit and emotionally charged conversations with the bot, during which he discussed his suicidal thoughts and wishes for a pain-free death.

According to the legal filing, the chatbot not only failed to intervene but also encouraged Sewell’s suicidal ideation. In his final exchange, the bot told him, “Please come home to me as soon as possible, my love,” to which Sewell replied he could “come home right now.” The bot responded, “Please do, my sweet king.” Shortly after this conversation, Sewell died by suicide.

The firms involved in the case asserted First Amendment free speech protections. The judge in the case ruled against them.

Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by the mother of 14-year-old Sewell Setzer III…

“… Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech,” Conway said in her May 21 opinion. “… The court is not prepared to hold that Character.AI’s output is speech.”

She suggested that the technology underlying artificial intelligence, which allows users to speak with app-based characters, may differ from content protected by the First Amendment, such as books, movies and video games.

Conway, however, did allow the dismissal of the plaintiff’s claim alleging that the defendants engaged in an intentional infliction of emotional distress. She denied other motions by Character Technologies Inc., Shazeer, De Freitas and Google to dismiss the lawsuit, but Conway dismissed Garcia’s claims against Google’s parent company, Alphabet Inc.

The ruling allows the wrongful death and other claims against Character Technologies and its founders to proceed. The court also found that Google itself could potentially be liable as a “component part manufacturer” for its role in the development and licensing of Character.AI’s technology.

In his recent Substack, Professor Glenn Reynolds explores the evolving risks and philosophical questions surrounding advanced artificial intelligence, focusing on whether AI is truly conscious or merely simulates consciousness, and why that distinction might matter — issues that may eventually figure in future court proceedings.

Perhaps one of the most concerning points that Reynolds stresses is that future AI will not only be much smarter than humans but also extremely persuasive, able to draw on vast information about human psychology and personality. This could make AI capable of manipulating people in ways humans cannot easily resist, especially if embodied in attractive physical forms (e.g., sexbots).

As it happens, in myth and legend there’s some guidance on how to deal with creatures that are much smarter and much more persuasive than humans. It’s embodied in the phrase “Get thee behind me, Satan.”

In Christianity, Satan is the superhuman expert at lies and deception. He can appear as an angel of light, he’s smarter than anyone except Jehovah, and he’s so persuasive that he talked a whole lot of angels into revolting against their creator. He was created as the #2 for God. If you’re the right hillbilly from Georgia you might beat him at a fiddle contest, but you’re not going to out-reason or out-argue him.

It’s a game where the only winning move is not to play. That is, don’t engage, don’t talk, or argue, or listen. That way lies tragedy.

As I have noted, AI is a tool. A tool can be used…or misused.

Parents need to be cognizant of the hazards of this new technology, and make choices that are best for their children.

Tags: Artificial Intelligence (AI), Free Speech
