Reports of an AI drone that ‘killed’ its operator are pure fiction

It has been widely reported that a US Air Force drone went rogue and “killed” its operator in a simulation, sparking fears of an AI uprising – but the simulation never took place. Why are we so quick to believe AI horror stories?

News of an AI-controlled drone “killing” its supervisor rocketed around the world this week. In a story that could have been ripped from a sci-fi thriller, the hyper-motivated AI had reportedly been trained to destroy surface-to-air missiles, but only with approval from a human overseer – and when that approval was denied, it turned on its handler.

Some AI stories are so bad they would make a robot facepalm (Image: Corona Borealis Studio/Shutterstock)

Only, it is no surprise that the story sounds fictional – because it is. The story emerged from a report by the Royal Aeronautical Society describing a presentation by US Air Force (USAF) colonel Tucker Hamilton at a recent conference. That report noted the incident was only a simulation, in which there was no real drone and no real risk to any human – a fact missed by many attention-grabbing headlines.


Later, it emerged that even the simulation hadn’t taken place: the USAF issued a denial and the original report was updated to clarify that Hamilton “mis-spoke”. The apocalyptic scenario was nothing but a hypothetical thought experiment.


“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” a USAF spokesperson told Insider. The USAF didn’t respond to New Scientist’s request for an interview before publication.

This story is just the latest in a string of dramatic tales told about AI, coverage that has at points neared hysteria. In March, Time magazine ran a comment piece by researcher Eliezer Yudkowsky in which he said that the most likely result of building a superhumanly smart AI is that “literally everyone on Earth will die”. Elon Musk said in April that AI has the potential to destroy civilisation, while a recent statement signed by AI researchers said that mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.


Why do these narratives gain so much traction, and why are we so keen to believe them? “The notion of AI as an existential threat is being promulgated by AI experts, which lends authority to it,” says Joshua Hart at Union College in New York – though it is worth noting that not all AI researchers share this view.


Beth Singler at the University of Zurich in Switzerland says that the media has an obvious incentive to publish such claims: “fear breeds clicks and shares”. But she says that humans also have an innate desire to tell and hear scary stories. “AI seems initially to be science fiction, but it is also a horror story that we like to whisper around the campfire, and horror stories are thrilling and captivating.”


One clear factor in the spread of these stories is a lack of public understanding of AI. Many people have used ChatGPT to write a limerick or Midjourney to conjure up an image, but few know how these systems work under the hood. And while AI has been a familiar concept for decades, the current crop of advanced models displays capabilities that surprise experts, let alone laypeople.


“AI is very non-transparent to the public,” says Singler. “Wider education about the limitations of AI might help, but our love for apocalyptic horror stories might still win through.”
