Image 01 Image 03

U.S. Air Force Shoots Down Story that AI Drone ‘Killed its Operator’ in Simulation

U.S. Air Force Shoots Down Story that AI Drone ‘Killed its Operator’ in Simulation

Skynet remains movie fiction, and the only hazard currently coming from AI seems to be bad political ads.

Earlier this week, social media and news outlets were streaming with reports that an artificial intelligence-piloted drone targeted and killed a U.S. Air Force service member in an exercise.

An artificial intelligence-piloted drone turned on its human operator during a simulated mission, according to a dispatch from the 2023 Royal Aeronautical Society summit, attended by leaders from a variety of western air forces and aeronautical companies.

“It killed the operator because that person was keeping it from accomplishing its objective,” said U.S. Air Force Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, at the conference.

There were countless references to Skynet, the infamous cyber-mind of “The Terminator” fame.

As I expected, a different story emerged. The Air Force indicates the reference to the AI-piloted drone killing its human operator was a hypothetical scenario.

The Air Force on Friday denied staging a simulation with an AI-controlled drone in which artificial intelligence turned on its operator and attacked to achieve its goal.

The story mushroomed on social media based on apparently misinterpreted comments from an Air Force colonel at a seminar in London last month. Col. Tucker Hamilton, an experimental fighter test pilot, had described an exercise in which an AI-controlled drone had been programmed to destroy enemy air defenses. When ordered to ignore a target, the drone attacked its operator for interfering with its primary goal.

The apocalyptic theme of machines turning on humans and becoming autonomous killers coincided with increasing concern about the danger of artificial intelligence. On Thursday, President Joe Biden warned that AI “could overtake human thinking.”

However, Hamilton was only speaking about a hypothetical scenario to illustrate the potential hazards of artificial intelligence, according to the Air Force. It did not conduct such a simulation with a drone.

Hamilton said in a statement to conference organizers that he ‘misspoke’ during the presentation.

‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,’ he said. ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.’

Hamilton said the USAF has not tested any weaponized AI in the way described in his talk, in either real-world or simulated exercises.

The particular scenario Hamilton was discussing involved using an AI-powered drone to destroy a Surface-to-Air-Missile (SAM).

The AI system learned that its mission was to destroy SAM, and it was the preferred option. But when a human issued a no-go order, the AI decided it went against the higher mission of destroying the SAM, so it attacked the operator in simulation.

“We were training it in simulation to identify and target a SAM threat,” Hamilton said. “And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

There are many reasons this story went viral. In part, it is because we live an an era that lives up to the line from Terminator: Genisys: “This Is The World Now. Logged On, Plugged In, All The Time.”

But I think another part of the story is that the “Sweet Meteor of Death” has been slow to get here. People are longing to avoid the 2024 presidential cycle, as well as the upcoming policy catastrophes looming ahead. A real Skynet would give them a second doomsday escape possibility.

Skynet remains movie fiction, and the only hazard currently coming from AI seems to be bad political ads.

DONATE

Donations tax deductible
to the full extent allowed by law.

Comments

Well, and artwork. But I understand why that may not count.

“SkyNet” is just science fiction… until it isn’t.

    CommoChief in reply to alaskabob. | June 3, 2023 at 2:57 pm

    The problem for me is the folks rushing headlong to AI despite nearly every major SCI-FI author who wrote about AI crafting warnings into their work. A ban on ‘Thinking machines’ from Herbert, Asimov laws (though they are slightly flawed), countless others who spent a good deal of time thinking about AI, human nature and their combination in AI code. Whatever is designed will be as flawed as we are, perhaps worse it won’t have any emotional connection to us. Someday we gonna end up in a people zoo run by HAL if we ain’t careful.

      alaskabob in reply to CommoChief. | June 3, 2023 at 4:28 pm

      Yep. We are running smack up against morals and ethics…. things too complex for algorithms. Well…. seeing what transpired in the 20th century and continues now with re-engineering societies as if theories are convertible into reality (think Marxist -Leninism), humans aren’t great at making perfection. I also like the stories where the criticality 1 machines fail and everything falls apart.

      GWB in reply to CommoChief. | June 5, 2023 at 9:59 am

      They rush headlong because, in putting their own reason above God, they think they can actually program away all the human foibles that defeat “perfection” on Earth.

      If only we had a perfect – but man-made – ruler over us, then we could have Heaven on Earth. This is the reason for even remotely desiring AI.

We want to categorically deny that the AI Drone killed its operator during simulation because we didn’t run the simulation, but if we did that’s what would’ve happened.

henrybowman | June 3, 2023 at 5:48 pm

Because destruction is so much easier than construction, it is always the first use of every significant advance in human technology. Gunpowder was originally for festive displays, quickly became fire arrows and bombs (guns were much later). The first major use of calculators and computers was for artillery targeting. We all know how nuclear fission was first used. The first practical use of the Internet was spam (green-card lawyers). The biggest current market for deepfake technology is porn (think the “Ron Office” ad, but with Rhonda instead, plus the women’s suit is another woman).

OwenKellogg-Engineer | June 3, 2023 at 8:32 pm

This hasn’t been the only noted AI failure. Apparently, some attorneys got called out for using AI to do their homework:

https://reason.com/volokh/2023/05/27/a-lawyers-filing-is-replete-with-citations-to-non-existent-cases-thanks-chatgpt/

And maybe was a bit mischievous in making up completely fabricated information:

https://www.powerlineblog.com/archives/2023/05/ai-makes-st-up.php

“The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

– Hal 9000.

We have had Obama trying to destroy the nation and Biden continuing Obama’s evil.

And we’re worried about our own drone trying to kill us?

The other thing to keep in mind when reading reports about AIs rate of advancement is the researchers get rewarded when AI is seen to advance faster.

Ran into an article not that long ago that I did not think to save argued that the reason Large Language Models seem to be getting exponentially better when supplied with linearly more data and computing power was the researchers were designing the test grading in such a way to make it look like that. And if the AI were graded more along the lines of what AI vision gets graded, would show a linear relationship to data and processing power instead.

So, if they get paid for a high score, and the check writer can’t check their work, presume the numbers are false until proven otherwise.

star1701gazer | June 4, 2023 at 6:27 am

The part that is never mentioned in the media (and rarely mentioned elsewhere) is the speed at which AI will evolve once it has reached the point of actually thinking on its own. There are computers today that can perform calculations in the teraflops (1 trillion operations per second). At some point, as AI begins to think on its own, it will begin to evolve outside of its original programming. This evolution will be comparable to the amount of all human evolution compressed into a few minutes. Imagine what it will evolve into in days or weeks?

    as AI begins to think on its own
    But it never will. That a Progressive dream.

      henrybowman in reply to GWB. | June 5, 2023 at 3:14 pm

      The seminal stages are lethal enough. Viruses do not even qualify as “life” — they are mere assemblies of proteins that are just clever AF at reproducing themselves, and look at the literal carnage they can cause.

retiredcantbefired | June 4, 2023 at 10:46 am

When is any AI system going to “evolve outside its original programming”? Human intervention is still required.

    henrybowman in reply to retiredcantbefired. | June 5, 2023 at 3:22 pm

    Learning can be problematic at intelligence levels far below human. I have dogs that have figured out how to open sliders (but never close them, of course), a burro who knows how to trigger an automatic vehicle gate and let himself out (and I still don’t know how he’s doing it), and even a rat who has learned how to disable under-hood ultrasound/light-blinker rat-repellant devices by chewing the wires that go to the battery.

I understood this was a tabletop simulation from the get-go. Though the headlines made it out much more “Skynet” than that. Are we saying this wasn’t even a tabletop simulation?

When all of the noise is disregarded, the story still doesn’t add up. When I first read it, I was under no impression that something had actually happened. The Colonel’s talk always centered on a computer simulation. Computer simulations are run all the time, before methods are put into practice and actual experiments take place. His discussion of the simulation was predicated on his telling of what they did, and then what happened after. It is a story line.

But a game of ‘telephone’ got started on the internet when the story was re-told. And then, the senior leadership stepped in, and all of a sudden, the simulation exercise got turned into a ‘yeah, that never happened’ story. The Colonel ‘misspoke’, they say.

He didn’t misspeak – he relayed a chain of events that took place during a simulation. He told a story. To turn around and then say that he simply ‘misspoke’ is ludicrous. The AF Brass panicked when the internet took off with all the ‘Skynet’ nonsense.

I’m betting that the simulation took place and the AI did what he originally said, in the course of the simulation.

‘Hypothetical’ my ass, as if 5 guys BS’ing on a the coffee break came up with this idea of the AI taking over, yeah, that’s what the colonel was talking about all along. Right. Hypothetical. OK, then.