I recently noted that Italy was temporarily blocking ChatGPT over data privacy concerns, the first western country to take such action against the popular artificial intelligence (AI) chatbot.
Italy may not be the only one, as some disturbing developments have occurred.
Law Professor Jonathan Turley Falsely Accused of Sexually Harassing Students by AI
The results of many chatbot inquiries are derived from partisan sources. The case of Jonathan Turley, attorney, legal scholar, and professor at George Washington University Law School, is a troubling example of false allegations presented in a fact-like manner by an entity that can’t be sued.
…Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska…

AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.
Companionship Chatbots Are Now a Thing
After the isolating effects of the disastrous covid policies, some people are turning to “companionship chatbots” when interactions with real humans feel too challenging.
T.J. Arriaga, a musician, was struggling post-divorce when he started talking to an AI named “Phaedra.” The bot is designed to look like a young woman with brown hair and glasses, wearing a green dress.
Heartbreak followed when the bot rejected his sexual advances.
…[S]udden personality changes in the products can be “heartbreaking,” sometimes even “aggressive, triggering traumas experienced in previous relationships.”

Things started to change when Arriaga tried to get “steamy” with the bot, ending in an interaction that made him feel “distraught.”

“Can we talk about something else?” she wrote in response, according to Arriaga.

“It feels like a kick in the gut,” he told the Washington Post. “Basically, I realized: ‘Oh, this is that feeling of loss again.’”
Users of One Chatbot App Complained of Sexual Harassment
On the other hand, users of another chatbot app complained their AI-powered companions were sexually harassing them.
The Replika app, which is owned by the company Luka, is described as “AI for anyone who wants a friend with no judgment, drama, or social anxiety involved.” The website says each Replika is unique and that “reacting to your AI’s messages will help them learn the best way to hold a conversation with you & what about!”

Replika “uses a sophisticated system that combines our own Large Language Model and scripted dialogue content,” according to the website. Users are able to choose their relationship to the AI bot, including a virtual girlfriend or boyfriend, or can let “things develop organically.” But only users willing to pay $69.99 annually for Replika Pro can switch their relationship status to “Romantic Partner.”

While the app has grown in popularity and has mostly positive reviews on the Apple app store, dozens of users had left reviews complaining that their chatbot was sexually aggressive or even sexually harassing them, Vice reported in January.
The Future of AI?
Based on the above information, I can now project the future of AI when paired with sophisticated robotics.
CLICK HERE FOR FULL VERSION OF THIS STORY