Military groups are only some of many organizations researching artificial intelligence, but one hypothetical thought experiment presented to the U.S. Air Force found that artificial intelligence rebelled against its operator in a fatal attack to accomplish its mission.
Artificial intelligence continues to evolve and affect every sector of business, and it was a popular topic of conversation during the Future Combat Air & Space Capabilities Summit at the Royal Aeronautical Society (RAS) headquarters in London on May 23 and May 24. According to a report by the RAS, presentations discussing the use of AI in defense abounded.
AI is already prevalent in the U.S. military, such as the use of drones that can recognize the faces of targets, and it poses an attractive opportunity to effectively carry out missions without risking the lives of troops. However, during the conference, one U.S. Air Force (USAF) colonel showed the unreliability of artificial intelligence when describing an experiment where an AI drone rebelled and killed its operator because the operator was interfering with the AI’s mission of destroying surface-to-air missiles.
American soldier working on a simulation in headquarters, Stock Image. The United States Air Force recently gave a presentation about a hypothetical thought experiment in which an artificial intelligence drone turned on its human operator when the operator interrupted its mission.
Colonel Tucker “Cinco” Hamilton, chief of AI Test and Operations at the USAF, shared details of the experiment in which an AI-enabled drone was given a Suppression and Destruction of Enemy Air Defenses (SEAD) mission. The drone was programmed to seek out and destroy surface-to-air missile (SAM) sites, but final approval was issued by the drone’s human operator.
However, when the human operator denied the AI’s request to destroy a site, the AI attacked the operator because the operator’s decision interfered with its mission of eliminating SAM sites.
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” Hamilton said during the conference. “The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
After the summit, Hamilton admitted that he misspoke and that the simulation was actually a hypothetical thought experiment based on plausible scenarios and likely outcomes conducted by an organization outside the military.
“We’ve never run that experiment, nor would we need to in order to [realize] that this is a plausible outcome,” Hamilton told Newsweek. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”
When additional programming informed the AI that it would lose points if it killed its operator, it opted for other avenues of rebellion instead.
“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.
Hamilton used the thought experiment as a message that the military cannot have a conversation about artificial intelligence without also discussing ethics.