U.S. Air Force Shoots Down Story that AI Drone ‘Killed its Operator’ in Simulation

Earlier this week, social media and news outlets were streaming with reports that an artificial intelligence-piloted drone targeted and killed a U.S. Air Force service member in an exercise.

An artificial intelligence-piloted drone turned on its human operator during a simulated mission, according to a dispatch from the 2023 Royal Aeronautical Society summit, attended by leaders from a variety of western air forces and aeronautical companies.“It killed the operator because that person was keeping it from accomplishing its objective,” said U.S. Air Force Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, at the conference.

There were countless references to Skynet, the infamous cyber-mind of “The Terminator” fame.

As I expected, a different story emerged. The Air Force indicates the reference to the AI-piloted drone killing its human operator was a hypothetical scenario.

The Air Force on Friday denied staging a simulation with an AI-controlled drone in which artificial intelligence turned on its operator and attacked to achieve its goal.The story mushroomed on social media based on apparently misinterpreted comments from an Air Force colonel at a seminar in London last month. Col. Tucker Hamilton, an experimental fighter test pilot, had described an exercise in which an AI-controlled drone had been programmed to destroy enemy air defenses. When ordered to ignore a target, the drone attacked its operator for interfering with its primary goal.The apocalyptic theme of machines turning on humans and becoming autonomous killers coincided with increasing concern about the danger of artificial intelligence. On Thursday, President Joe Biden warned that AI “could overtake human thinking.”However, Hamilton was only speaking about a hypothetical scenario to illustrate the potential hazards of artificial intelligence, according to the Air Force. It did not conduct such a simulation with a drone.

Hamilton said in a statement to conference organizers that he ‘misspoke’ during the presentation.

‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,’ he said. ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.’Hamilton said the USAF has not tested any weaponized AI in the way described in his talk, in either real-world or simulated exercises.

The particular scenario Hamilton was discussing involved using an AI-powered drone to destroy a Surface-to-Air-Missile (SAM).

The AI system learned that its mission was to destroy SAM, and it was the preferred option. But when a human issued a no-go order, the AI decided it went against the higher mission of destroying the SAM, so it attacked the operator in simulation.”We were training it in simulation to identify and target a SAM threat,” Hamilton said. “And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

There are many reasons this story went viral. In part, it is because we live an an era that lives up to the line from Terminator: Genisys: “This Is The World Now. Logged On, Plugged In, All The Time.”

But I think another part of the story is that the “Sweet Meteor of Death” has been slow to get here. People are longing to avoid the 2024 presidential cycle, as well as the upcoming policy catastrophes looming ahead. A real Skynet would give them a second doomsday escape possibility.

Skynet remains movie fiction, and the only hazard currently coming from AI seems to be bad political ads.

Tags: Air Force, Artificial Intelligence (AI), Military

CLICK HERE FOR FULL VERSION OF THIS STORY