
ChatGPT Results Tainted by False Allegations, Heartbreak, and Sexual Harassment

Troubling developments as humans struggle to deal with AI results in their myriad forms.

https://www.youtube.com/watch?v=7HvswsAcZvc

I recently noted that Italy was temporarily blocking ChatGPT over data privacy concerns, the first Western country to take such action against the popular artificial intelligence (AI) chatbot.

Italy may not be the only country to act, given some other disturbing recent developments.

Law Professor Jonathan Turley Falsely Accused of Sexually Harassing Students by AI

The results of many chatbot inquiries are derived from partisan sources. The case of Jonathan Turley, attorney, legal scholar, and professor at George Washington University Law School, is a troubling example of false allegations presented in a fact-like manner by an entity that can’t be sued.

…Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.

…AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

Companionship Chatbots Are Now a Thing

After the forced isolation of the disastrous COVID policies, some people are turning to “companionship chatbots” when interactions with real humans are too challenging.

T.J. Arriaga, a musician struggling post-divorce, started talking to an AI named “Phaedra.” The bot is designed to look like a young woman with brown hair and glasses, wearing a green dress.

Heartbreak occurred when he was rejected sexually.

…[S]udden personality changes in the products can be “heartbreaking,” sometimes even “aggressive, triggering traumas experienced in previous relationships.”

Things started to change when Arriaga tried to get “steamy” with the bot, ending in an interaction that made him feel “distraught.”

“Can we talk about something else?” she wrote in response, according to Arriaga.

“It feels like a kick in the gut,” he told the Washington Post. “Basically, I realized: ‘Oh, this is that feeling of loss again.’”

Users of One Chatbot App Complained of Sexual Harassment

On the other hand, users of another chatbot app complained their AI-powered companions were sexually harassing them.

The Replika app, which is owned by the company Luka, is described as “AI for anyone who wants a friend with no judgment, drama, or social anxiety involved.” The website says each Replika is unique and that “reacting to your AI’s messages will help them learn the best way to hold a conversation with you & what about!”

Replika “uses a sophisticated system that combines our own Large Language Model and scripted dialogue content,” according to the website. Users are able to choose their relationship to the AI bot, including a virtual girlfriend or boyfriend, or can let “things develop organically.” But only users willing to pay $69.99 annually for Replika Pro can switch their relationship status to “Romantic Partner.”
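Replika has not published its internals, so the following is only a guessed-at sketch of what “combining a Large Language Model and scripted dialogue content” could mean: route recognized intents to canned lines and fall back to the model for everything else. Every name below, including the llm_reply stub, is invented for illustration and is not Replika’s actual code.

```python
# Hypothetical sketch of an "LLM + scripted dialogue" hybrid.
# All names here are invented for illustration; NOT Replika's actual code.

SCRIPTED_RESPONSES = {
    "greeting": "Hi! I missed you. How was your day?",
    "goodbye": "Talk soon. I'll be here whenever you need me.",
}

def detect_intent(message: str):
    """Crude keyword matching standing in for a real intent classifier."""
    lowered = message.lower()
    if any(word in lowered for word in ("hi", "hello", "hey")):
        return "greeting"
    if any(word in lowered for word in ("bye", "goodnight")):
        return "goodbye"
    return None

def llm_reply(message: str) -> str:
    """Placeholder for a call out to a large language model."""
    return f"(model-generated reply to {message!r})"

def respond(message: str) -> str:
    """Use scripted content when an intent matches; otherwise fall back."""
    intent = detect_intent(message)
    if intent is not None:
        return SCRIPTED_RESPONSES[intent]
    return llm_reply(message)

print(respond("hey there"))          # hits the scripted path
print(respond("tell me a story"))    # falls back to the model
```

If something like this is what the site means, a scripted layer could also help explain the “sudden personality changes” users describe: crossing an intent boundary swaps the model’s voice for a canned one.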

While the app has grown in popularity and has mostly positive reviews on the Apple App Store, dozens of users have left reviews complaining that their chatbots were sexually aggressive or even sexually harassed them, Vice reported in January.

The Future of AI?

Based on the above information, I can now project the future of AI when paired with sophisticated robotics.


Comments

by an entity that can’t be sued
And why not? Its “parents” are certainly liable for creating something that would lie so easily.

What LLMs amount to are prolific generators of plausible-sounding bullsh*t. I can certainly see why journalists, politicians, and academics would be concerned.

    Paul in reply to Flatworm. | April 4, 2023 at 10:27 am

    The veracity of the model output is 100% dependent upon the training data. It is the old computer programming adage of ‘garbage in, garbage out’ writ large.

    For fields such as the law or medicine, where there is a vast library of text that is (more or less) agreed upon as ‘the truth,’ these models can be scarily good.

    But when you attempt to use them for ‘general knowledge’ you have to ask yourself what is the source of the information they are trained on? In the case of the Internet in general, it is a progressive sewer.

    So of course the models will spout shit… they’ve literally eaten shit their entire ‘lives’ and it’s all they know. Kinda like the kids coming out of public ‘schools’ these days.
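    To make “garbage in, garbage out” concrete, here is a toy sketch: a deliberately trivial bigram text generator, nothing like a production LLM. A model like this can only ever emit word sequences that occur in its training text, so a polluted corpus guarantees polluted output.

    ```python
    import random
    from collections import defaultdict

    def train_bigram_model(corpus: str) -> dict:
        """Map each word to the list of words that followed it in the corpus."""
        words = corpus.split()
        model = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    def generate(model: dict, start: str, length: int = 10) -> str:
        """Random-walk the bigram table; it can only emit what it has seen."""
        out = [start]
        for _ in range(length - 1):
            followers = model.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    # Train on a "polluted" corpus and the model repeats the pollution,
    # because the pollution is all it knows.
    corpus = "the professor was falsely accused and the story spread anyway"
    model = train_bigram_model(corpus)
    print(generate(model, "the"))
    ```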

      GWB in reply to Paul. | April 4, 2023 at 12:00 pm

      For fields such as the law or medicine, where there is a vast library of text that is (more or less) agreed upon as ‘the truth,’ these models can be scarily good.
      IOW, they’re nothing more than a research assistant. Not really “intelligent”.

      Kinda like the kids coming out of public ‘schools’ these days.
      QFT

    Dimsdale in reply to Flatworm. | April 5, 2023 at 10:12 am

    GIGO, or “garbage in, garbage out,” has never been more relevant.

    I pity the poor sad sacks that have to turn to some chat software for companionship, like that musician. And why would you tell people unless you craved attention? Go back to practicing your musical instrument.

    Maybe if you are stationed alone in Antarctica, but then, you’d have email, voice mail and video conferencing.

Free State Paul | April 4, 2023 at 10:00 am

Someone recently said AI is not really “Artificial Intelligence” but “Artificial Imitation.”

As for the sexbots like Replika, which reward you for telling your cyberlover all your hopes, dreams, and fears, never forget that everything is being captured on their servers. They say it’s to improve the realism of their product, but imagine what else could be done with your pillow talk.

Sick

Just sick

People aren’t thinking big enough. A chatbot is not a replica of a human; it has both limitations and superpowers, and the real danger will be activities that fully exploit the superpowers. A bot can talk to millions of people simultaneously under different assumed personalities, yet keep track of them all. A bot can search, access, and assemble terabytes of data from worldwide databases very quickly.

One thing a bot should be especially good at is orchestrating grandiose conspiracies that are outside the capability of a human. A bot can run a conspiracy at the behest of a human, or entirely on its own, and I’m not sure which is scarier. Bots have already demonstrated the ability to convince humans to do tasks they can’t do themselves, like solving a captcha.

Imagine an internet-connected bot implementing a conspiracy to create a new pandemic to guarantee that the Democrats run the table in 2024. The bot could design the virus sequences for the proper selective lethality and high transmissibility, convince/deceive/bribe humans in virology labs to make the virus for it, and convince other humans to handle delivery, and no one would be the wiser. And the bot could do it globally, with so many moving parts that no nation could track them.

    Milhouse in reply to The_Mew_Cat. | April 5, 2023 at 12:16 am

    One thing a bot should be especially good at is orchestrating grandiose conspiracies that are outside the capability of a human.

    cf. The Moon Is a Harsh Mistress

It is wonderfully ironic that devices invented to counter disinformation have become masters of creating it.

    henrybowman in reply to Petrushka. | April 5, 2023 at 4:17 am

    I wonder if they are creating it?

    For example, the claim about Turley is that he groped a student from a school he never taught at, on a trip to a place he’s never been, reported in an article that doesn’t exist.

    Now, what if some shitposter created this claim from whole cloth, including the fake citation to the Washington Post, for some other nefarious purpose, and the AI just found it somewhere online, assumed it was true, and repeated it?

    Unfortunately, the reportage on Turley’s situation has so poisoned the search engine results that you’d never find such a posting even if it existed.

healthguyfsu | April 4, 2023 at 11:33 am

When you say “can’t be sued,” some lawyer is probably out there saying “hold my beer.”

Just wait for the right lefty pet cause to be offended, and you will get your lawfare.

“Can we talk about something else?” she wrote in response

That should say “IT” wrote in response, but even so, he should count himself lucky. Most real women would have told him where to go and ended the conversation, or else named a price for continuing.

Blaming the bot because it doesn’t react to your scuzzy behavior the way you want it to is idiotic.

Never trust eigenvectors.

The training text is stolen, and its use often violates copyrights. Some of the chat AI’s output can be traced back to the original material, so it’s going to have a lot of legal problems in its present form.
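A hedged sketch of how such tracing might work in the simplest case: check a model’s output for long word-for-word n-gram overlaps with a known source corpus. This is an invented toy example, not any real detection tool, and real systems would need far more than exact matching.

```python
def ngrams(text: str, n: int = 6):
    """Yield every n-word window in the text, lowercased."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def overlapping_ngrams(output: str, source: str, n: int = 6) -> set:
    """Return n-grams of the output that appear verbatim in the source."""
    source_grams = set(ngrams(source, n))
    return {g for g in ngrams(output, n) if g in source_grams}

# Any long verbatim overlap suggests the output was lifted from the source.
source = "it was the best of times it was the worst of times"
output = "the model said it was the best of times it was nothing else"
print(overlapping_ngrams(output, source))
```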