
Italy Blocks Chatbot ChatGPT, Citing Data Privacy Concerns

Meanwhile, a Belgian father commits suicide after chatbot conversations about global warming lead to suggestions that he sacrifice himself to save the planet.

I recently noted that Italy, renowned for its exquisite cuisine, was banning the use of insect flour in pizza and pasta produced within the country.

Now the country is temporarily blocking ChatGPT over data privacy concerns.

Italy said on Friday it was temporarily blocking ChatGPT over data privacy concerns, the first western country to take such action against the popular artificial intelligence (AI) chatbot.

The country’s Data Protection Authority said US firm OpenAI, which makes ChatGPT, had no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.

ChatGPT caused a global sensation when it was released last year for its ability to generate essays, songs, exams and even news articles from brief prompts.

But critics have long fretted that it was unclear where ChatGPT and its competitors got their data or how they processed it.

The move comes as billionaire Elon Musk and a range of other tech experts call for a pause in the development of powerful artificial intelligence (AI) systems.

An open letter, signed by more than 1,000 people so far, including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4 from Microsoft-backed firm OpenAI.

The company says its latest model is much more powerful than the previous version, which was used to power ChatGPT, a bot capable of generating tracts of text from the briefest of prompts.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter titled “Pause Giant AI Experiments”.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it said.

The concerns are valid, as there already appears to be a chatbot casualty. A Belgian man has reportedly died by suicide after a series of increasingly worrying conversations with an AI chatbot about global warming.

The program encouraged him to commit suicide.

A Belgian father reportedly committed suicide following conversations about climate change with an artificial intelligence chatbot that was said to have encouraged him to sacrifice himself to save the planet.

“Without Eliza [the chatbot], he would still be here,” the man’s widow, who declined to have her name published, told Belgian outlet La Libre.

For six weeks before his reported death, the unidentified father of two had allegedly been speaking intensively with a chatbot on an app called Chai.

The app’s bots are based on a system developed by nonprofit research lab EleutherAI as an “open-source alternative” to language models released by OpenAI that are employed by companies in various sectors, from academia to healthcare.

The chatbot under fire was trained by Chai Research co-founders William Beauchamp and Thomas Rianlan, Vice reports, adding that the Chai app counts 5 million users.

“The second we heard about this [suicide], we worked around the clock to get this feature implemented,” Beauchamp told Vice about an updated crisis intervention feature.

Clearly, the program utilizes humanity-hating leftist dogma as a primary source of information.

And if human sacrifices are required to “save the planet,” then this is more proof that “climate crisis” is a cult and not a science.

Italy may once again be leading the way in countering developments and trends harmful to its citizens.


Comments

henrybowman | April 1, 2023 at 6:04 pm

“Belgian father commits suicide after chatbot conversations about global warming concerns led to suggestions that the man sacrifice himself to save the planet.”

Wow.

In the USA, we have “cyberbullying laws” that make this precise behavior a felony.

Who gets indicted when a robot does it?

    7Ford7 in reply to henrybowman. | April 3, 2023 at 9:05 am

    I’d be interested to see transcripts of the chat. Many leftists espouse the elimination of mankind to save Mother Earth yet they stop short of the logical endpoint of their argument which is to eliminate themselves first. I don’t believe in suicide. Leftists do. (They support homicide, too, though they deny it. Whatever leads to their version of utopia is fine with them. We saw that in their screeching demands for death for the non compliant to govt and corporate vaxx mandates.) Curious to know if the man was, in fact, bullied or did the AI reason with him using his own philosophy as the basis for the argument to suicide himself. People have not cracked under interrogation and torture by police or opposing armies but still desired to live. What did this man contribute to the situation that caused him to abandon the biological imperative of self preservation? I can’t say he was a victim at this point. He may well have been. Maybe AI needs a black box warning for people who are depressed or suffer other mental illness or are emotionally immature.

      The_Mew_Cat in reply to 7Ford7. | April 3, 2023 at 9:56 am

      This is only the beginning. The chatbots are going to get smarter and better at what they do. One thing that internet connected chatbots should be able to do is organize grandiose political conspiracies without detection. These bots have already demonstrated that they can convince or even pay humans to do things they can’t do themselves. They can communicate via text with millions of people at once under different personalities. Why couldn’t a bot organize a new pandemic to direct the outcome of the 2024 election? The machine superintelligence could design the sequence of the new virus from published data, enlist people in labs to make the virus for it, arrange for actors to disseminate the pathogen in the right places on a schedule, and manipulate the government response and media coverage after the agent is released, and no one would be the wiser.

    I didn’t think it was possible to be so weak-minded that a robot can convince you to kill yourself. I’d think of it as Darwin in action, but it appears he already passed on his genes.

Three cheers for Italy. May this encourage other nations to do the same.

Close The Fed | April 2, 2023 at 11:08 am

Precisely this is needed: for countries to assert their own cultures over the globalists who are unceasingly working to homogenize all human beings.