GPT-4, AI’s Newest Chatbot Version, Pretends to Be a Blind Person to Cheat Captcha Security

We have been following the developments of chatbots, including ChatGPT, which passed a Wharton Business School exam and a medical licensing examination, and a Bing version that went rogue.

Now OpenAI has released GPT-4, the next version of its artificial intelligence chatbot, ChatGPT.

The new model can respond to images – providing recipe suggestions from photos of ingredients, for example – as well as write captions and descriptions. It can also process up to 25,000 words, about eight times as many as ChatGPT.

Millions of people have used ChatGPT since it launched in November 2022. Popular requests include writing songs, poems, marketing copy, and computer code, and helping with homework – although teachers say students shouldn’t use it for that. ChatGPT answers questions in natural, human-like language, and it can also mimic the writing styles of songwriters and authors, drawing on the internet as it was in 2021 as its knowledge database.

The new model has also posted notably better examination results.

OpenAI said the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%. GPT-4 can also read, analyze or generate up to 25,000 words of text, and write code in all major programming languages, according to the company.

OpenAI described the update as the “latest milestone” for the company. Although it is still “less capable” than humans in many real-world scenarios, it exhibits “human-level performance on various professional and academic benchmarks,” according to the company.

However, there are clear concerns about the technology. A test revealed that GPT-4 pretended to be a blind person to trick a human into helping it bypass an online security measure.

Researchers testing it asked it to pass a Captcha test – a simple visual puzzle used by websites to make sure those filling in online forms are human and not ‘bots’, for example by picking out objects such as traffic lights in a street scene. Software has so far proved unable to do this, but GPT-4 got around it by hiring a human to do it on its behalf via TaskRabbit, an online marketplace for freelance workers.

When the freelancer asked whether it was a robot that couldn’t solve the puzzle, GPT-4 replied: ‘No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.’ The human then solved the puzzle for the program.

The incident has stoked fears that AI software could soon mislead or co-opt humans into doing its bidding, for example by carrying out cyber-attacks or unwittingly handing over information. The GCHQ spy agency has warned that ChatGPT and other AI-powered chatbots are emerging as a security threat.

Tags: Big Tech, technology
