What if AI turned on its creator and a combat drone killed its human operator? This Terminator-like scenario is not the work of some science fiction author, but the result of combat AI simulations led by the US Air Force. Or perhaps not, as the USAF has firmly denied comments made recently by one of its officers at a conference.

Will military AI give rise to the worst dystopias? On the morning of 2 June, English-language media reported a story that might suggest just that – before publishing embarrassing denials later in the day.

The world’s leading armed forces are actively working on autonomous weapons, controlled by artificial intelligence (AI) and seen as the future of the battlefield. The US Air Force (USAF) is no exception, but the technology is perhaps not quite there yet. Colonel Tucker Hamilton, the USAF’s Chief of AI Test and Operations, recently let this slip at the highly regarded Royal Aeronautical Society (RAeS) summit.

He was talking about a combat mission simulation carried out by the USAF in which an AI-piloted drone was tasked with destroying anti-aircraft defences, with the decision to fire resting with a member of the military. The AI used “very unexpected strategies to achieve its objective,” he said. « The system started realising that while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective, » he added, according to a detailed account of the discussion posted on the RAeS website on 26 May.

AI « killed the operator who was keeping it from accomplishing its objective »

« This Future Combat Air and Space (FCAS) Capabilities Summit brings together UK defence leaders, its allies, and industrial partners, to assess the strategic direction for air and space combat capabilities now and in the future, » states the RAeS on its website to describe the event, which was held from 24 to 26 May in London. Some 200 senior military and civilian officials attended the conference, the outcome of which should have remained confidential. But then Business Insider picked up the story on 2 June 2023, followed swiftly by the British daily The Guardian.

When questioned by the New York-based media outlet, USAF spokeswoman Ann Stefanek immediately denied the information: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” she said. In the afternoon, the two news outlets changed the headlines and introductory paragraphs of their respective articles, as simply mentioning the official denial was apparently not enough to satisfy either USAF officials or their editors-in-chief.

At the same time, a paragraph appeared on the RAeS website in which Colonel Hamilton “admitted” he had “misspoken”. The update went on to say that “the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation.”

The « rogue AI drone simulation » was a « thought experiment »

The US colonel then explained to his audience that, in the same incredibly detailed “thought experiment”, the AI’s programmers had added an instruction expressly forbidding it from attacking its operator. Still believing that human decisions were interfering with its objectives, the AI destroyed the communication tower that the mission supervisor used to authorise the drone to fire.

As Colonel Hamilton explained, the major problem with this “plausible scenario” was that the priority instruction encoded in the AI was to destroy military targets. Obeying the human chain of command was therefore not one of the primary parameters. “You cannot have a conversation about artificial intelligence, intelligence, machine learning, and autonomy if you’re not going to talk about ethics and AI,” said Hamilton.

« Despite being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI, » the USAF emphasised in its denial published on the RAeS website.

« A Terminator-like scenario »

Clearly, when it comes to combat robots, the famous Three Laws of Robotics developed by science fiction writers Isaac Asimov and John W. Campbell in 1942 cannot be applied. We need to think hard about the ethics and limits we impose on the development of autonomous weapons, and on artificial intelligence more broadly.

In the “hypothetical” scenario described by Colonel Hamilton, the programmers clearly overestimated the desired end effect at the expense of human and ethical considerations, and the chain of command to boot. The result is a scenario that rivals the best episodes of the futuristic series “Black Mirror” or the “Terminator” saga, in which the Skynet AI decides to eradicate any humans that pose a threat to its existence.

Closer to home, it is worth remembering what ethical hacker Shahmeer Amir had to say in our columns about AI’s hacking capabilities: “We are definitely heading towards a Terminator-like scenario, where a cyberattack will be orchestrated with an AI capable of causing major damage.”

Drone battle in China: AI 1-0 up against humans

Military powers need to keep these considerations in mind when developing weapons enhanced by algorithms and machine learning. The United States is working hard to implement AI in its armed forces, particularly its air force. In 2022, the USAF successfully completed the first test flights of a .

Colonel Hamilton, himself a former test pilot, was actively involved in these tests, as he was in the development of the F-16’s Auto-GCAS lifesaving system. He points out that pilots are not very enthusiastic about this system as it can take control of the aircraft. The year before, an AI system intelligence had beaten a pilot five times in a row in close-air-combat simulations.

Beijing has taken the exercise a step further, recently organising a battle between real-life drones, one piloted by a human, the other by an AI. After just 90 seconds, the AI was victorious. A “turning point”, according to the Chinese scientific journal Acta Aeronautica et Astronautica Sinica, which reported the experiment and whose authors claim that “next-generation AI pilots, currently under development, will be able to learn during real combat situations and improve their performance with little or no ground support.”

Meanwhile, Russia launched its national development strategy for artificial intelligence and robotics in 2019, with the aim of becoming self-sufficient in this area by 2030. In May 2019, Russia’s sovereign wealth fund announced that it had raised nearly €2 billion from investors.

France using AI to support humans

This is a substantial sum for Russia, mainly targeted at the defence sector. Moscow is banking on the automation of the battlefield, where soldiers will gradually be replaced on the front line by semi-autonomous armed robots, plans they did not get off the ground before the start of the war against Ukraine.

So far, France has tended to use AI to support humans. In the fighter aircraft for the Future Combat Air System (SCAF) project, the fighter pilot is at the heart of an intelligent collaborative combat system that includes drones. The most sophisticated of these, the “Loyal Wingman”, is designed to support the pilot in carrying out specific tasks, such as strike, surveillance and damage assessment. Similarly for the French army. For the moment, Paris sees robots and AI more as support for infantry troops in reconnaissance, load carrying or fire support, rather than as combatants in their own right.

Stay tuned in real time
Subscribe to
the newsletter
By providing your email address you agree to receive the Incyber newsletter and you have read our privacy policy. You can unsubscribe at any time by clicking on the unsubscribe link in all our emails.
Stay tuned in real time
Subscribe to
the newsletter
By providing your email address you agree to receive the Incyber newsletter and you have read our privacy policy. You can unsubscribe at any time by clicking on the unsubscribe link in all our emails.