
Pentagon Moving Toward Use of AI-Controlled Killer Drones


It appears the genius new plan at Biden’s Department of Defense is to use AI-controlled drones, as they cannot find humans who wish to serve under this administration.

Legal Insurrection readers may recall a story I did in June in which the US Air Force claimed that an officer’s reference to a drone controlled by artificial intelligence (AI) killing its human operator was merely a hypothetical scenario.

The Air Force on Friday denied staging a simulation with an AI-controlled drone in which artificial intelligence turned on its operator and attacked to achieve its goal.

The story mushroomed on social media based on apparently misinterpreted comments from an Air Force colonel at a seminar in London last month. Col. Tucker Hamilton, an experimental fighter test pilot, had described an exercise in which an AI-controlled drone had been programmed to destroy enemy air defenses. When ordered to ignore a target, the drone attacked its operator for interfering with its primary goal.

While drones killing their human operators remain hypothetical, The New York Times is reporting that the deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, and other countries want restraints on their development.

…[T]he United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions. The United States, Russia, Australia, Israel and others have all argued that no new international law is needed for now, while China wants to define any legal limit so narrowly that it would have little practical effect, arms control advocates say.

The result has been to tie the debate up in a procedural knot with little chance of progress on a legally binding mandate anytime soon.

This past weekend, I noted that the US Air Force and Army recently sent letters pleading with former service members discharged due to their unwillingness to receive the COVID-19 vaccine to return to duty. Only 43 of the over 8,000 discharged did so, and the new recruitment numbers are down significantly.

It appears the genius new plan at Biden’s Department of Defense is to use AI-controlled drones, as they cannot find humans who wish to serve under this administration.

…[T]he Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many,” Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy – including on weaponized systems.

…Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

“The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough,” said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon’s portfolio boasts more than 800 AI-related unclassified projects, many still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.

I do not wish to see other countries, or the United Nations, decide what happens with our military. However, can AI be relied on to discern between all the essential inputs 100% of the time?

I have my doubts.

A recent industrial accident in South Korea illustrates the stakes:

Police in the southern county of Goseong said the man died of head and chest injuries Tuesday evening after he was snatched and pressed against a conveyor belt by the machine’s robotic arms.

Police did not identify the man but said he was an employee of a company that installs industrial robots and was sent to the plant to examine whether the machine was working properly.

I would be more comfortable with this program if I trusted our government “experts” more than I now do.


Comments

Skynet is pleased.

Remember when HAL turned on the astronauts?

Lucifer Morningstar | November 27, 2023 at 9:19 am

Great Maker, you’d think that the military would have realized by now that giving autonomous AI entities drones, guns, and bombs never, I repeat, never ends up well for humanity. But I guess stupid is as stupid does.

    Yes, in the movies. This is real life, and this has never been done before.

      In stories, not just movies.
      And what does “never been done before” have to do with it, when the stories seldom rely simply on technology, but on human nature, for their results? See, it’s the idiotic hubris that they can create some non-human slave that will think so perfectly that the human element (aka ‘mistakes’ and ‘stupidity’) just disappears from the loop that’s actually at issue.

      Those stories are reminding you that that can’t happen. To believe it can requires you to accept Progressivism: the denial of human nature.

        NotCoach in reply to GWB. | November 27, 2023 at 5:37 pm

        “AI” is not human nature. Therefore we have nothing to go on for how this will work out, but one thing we can say definitively is that it will not end like any of these stories. True AI does not exist.

As with every other aspect of the Brandon Administration, this has “bad idea” written all over it.

Just like the John Mayall song: “Full Speed Ahead to Destruction”:
https://youtu.be/VaAHWN2-NvQ?si=j5gNcnArLVGsbVjd

The irrational fear of “AI” (in quotes because true AI does not currently exist) is sometimes maddening. There is not any danger of programmed machines turning on us. The dangers are whether or not they can be made safe. Humans also make errors that result in the loss of life. A robot mistaking a technician for a package, from the example you provide, is a failure of safety, and programming, ultimately a human error.

What we really need to ask ourselves is whether or not technology is advanced enough for these machines to work autonomously without separation between man and machine. Autonomous machines have actually been with us for decades. I personally build robotic welding cells, which are autonomous machines, but the robot is not allowed to operate while a person is in the danger zone. We are now trying to eliminate the safety zone with things like self-driving cars, and this is an ongoing experiment. It is only logical that autonomous military drones will be explored and developed.

You know, human pilots mistake friendlies for enemies all the time. Can an autonomous drone do better, or will it be worse?

    There are lots of folks with irrational fear of AI.
    My fear is not the AI itself. My fear is all the people who will turn and follow it, believing it’s some mechanical god that can do no wrong. And it will be neither flawless, nor a god, nor even really intelligent.

    And, the reason I pound on keeping a human in the loop is because humans have insight and wisdom and hairs on the back of their neck and guts. “I know the sensors say that’s a terrorist down there, but something tells me this isn’t right.” Sometimes those instincts are wrong (despite Tucker’s claim to the contrary), but I’d rather have someone with accountability making that call, than a machine with a really complicated checklist.

    ss396 in reply to NotCoach. | November 28, 2023 at 1:09 pm

    Yes, humans make mistakes all the time. Still, humans have judgment – which is something “AI” will never have. Human judgment entails much, much more than can or ever will be programmed into the “AI” datasets, thus making “AI” perpetually inferior to humans. That gap will never be bridged.

    Current doctrine calls for a human somewhere in the life-death decision process; something that grew out of nuclear warfare as it became more automated. This doctrine should not be abandoned.

In the Phoenix area, there are AI cars, without any driver, that are picking up passengers.

This is scary.

Just give all of our enemies free Teslas with operation limited to autopilot.

Echo Papa 607.

Once again, the tech-fanboi idiots in the Air Force want to put computers in where people used to be, removing humanity from the decision loop. It’s been an obsession with them since at least the 1980s.

The Army has some of the same folks, but they only work in procurement there, and not in the warfighting branches, it seems.

I would not trust AI, as all computers can be hacked, whether from the outside or, in the case of AI, from the inside. We need people to monitor and handle defense systems and other systems instead of AI.

Go to any nighttime drone show and now imagine each one of those drones is armed with a grenade, and they are inbound to where your squad is dug in for the night.

Not a lot has changed in the last 100k years, except that whoever is more efficient at killing their enemies will win.

There is a reason they say “War is hell.”


“It appears the genius new plan at Biden’s Department of Defense is to use AI-controlled drones, as they cannot find humans who wish to serve under this administration.”

We need Colossus… to do the work that Americans won’t do.