Number 4 Hamilton Place is a be-columned building in central London, home to the Royal Aeronautical Society and four floors of event space. In May, the early 20th-century Edwardian townhouse hosted a decidedly more modern meeting: Defense officials, contractors, and academics from around the world gathered to discuss the future of military air and space technology.
Things soon went awry. At that conference, Tucker Hamilton, chief of AI test and operations for the United States Air Force, seemed to describe a disturbing simulation in which an AI-enabled drone had been tasked with taking down missile sites. But when a human operator started interfering with that objective, he said, the drone killed its operator and cut off its communications link.
Internet fervor and fear followed. At a time of growing public concern about runaway artificial intelligence, many people, including reporters, believed the story was true. But Hamilton soon clarified that this seemingly dystopian simulation never actually ran. It was just a thought experiment.
“There’s lots we can unpack on why that story went sideways,” said Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.
Part of the reason is that the scenario might not actually be that far-fetched: Hamilton called the operator-killing a “plausible outcome” in his follow-up comments. And artificial intelligence tools are growing more powerful — and, some critics say, harder to control.
Despite worries about the ethics and safety of AI, the military is betting big on artificial intelligence. The U.S. Department of Defense has requested $1.8 billion for AI and machine learning in 2024, on top of $1.4 billion for a specific initiative that will use AI to link vehicles, sensors, and people scattered across the world. “The U.S. has stated a very active interest in integrating AI across all warfighting functions,” said Benjamin Boudreaux, a policy researcher at the RAND Corporation and co-author of a report called “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World.”
Indeed, the military is so eager for new technology that “the landscape is a sort of land grab right now for what types of projects should be funded,” Sean Smith, chief engineer at BlueHalo, a defense contractor that sells AI and autonomous systems, wrote in an email to Undark. Other countries, including China, are also investing heavily in military artificial intelligence.
While much of the public anxiety about AI has revolved around its potential effects on jobs, questions about safety and security become even more pressing when lives are on the line.
Those questions have prompted early efforts to put up guardrails on AI’s use and development in the armed forces, at home and abroad, before it’s fully integrated into military operations. And as part of an executive order in late October, President Joe Biden mandated the development of a National Security Memorandum that “will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.”
For stakeholders, such efforts need to move forward, even if how they will play out, and which future AI applications they will cover, remain uncertain.
“We know that these AI systems are brittle and unreliable,” said Boudreaux. “And we don’t always have good predictions about what effects will actually result when they are in complex operating environments.”
Artificial intelligence applications are already alive and well within the DOD. They include the mundane, like using ChatGPT to compose an article. But the future holds potential applications with the highest and most headline-making stakes possible, like lethal autonomous weapons systems: arms that could identify and kill someone on their own, without a human signing off on pulling the trigger.
The DOD has also been keen on incorporating AI into its vehicles — so that drones and tanks can better navigate, recognize targets, and fire weapons. Some fighter jets already have AI systems that stop them from colliding with the ground.
The Defense Advanced Research Projects Agency, or DARPA — one of the defense department’s research and development organizations — sponsored a program in 2020 that pitted an AI pilot against a human one. In five simulated “dogfights” — mid-air battles between fighter jets — the human flyer fought Top-Gun-style against an algorithm flying an identical simulated plane. The setup looked like a video game, with two onscreen jets chasing each other through the sky. The AI system won every round, shooting down the digital jet with the human at the helm. Part of the program’s aim, according to DARPA, was to get pilots to trust and respect the bots, and to set the stage for more human-machine collaboration in the sky.
Those are all pretty hardware-centric applications of the technology. But according to Probasco, the Georgetown fellow, software-enabled computer vision — the ability of AI to glean meaningful information from a picture — is changing the way the military deals with visual data. Such technology could be used to, say, identify objects in a spy satellite image. “It takes a really long time for a human being to look at each slide and figure out something that’s changed or something that’s there,” she said.
Smart computer systems can speed that interpretation process up, by flagging changes (military trucks appeared where they weren’t before) or specific objects of interest (that’s a fighter jet), so a human can take a look. “It’s almost like we made AI do ‘Where’s Waldo?’ for the military,” said Probasco, who researches how to create trustworthy and responsible AI for national security applications.
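To make that “Where’s Waldo” triage concrete, here is a minimal, hypothetical sketch in Python: software compares detections from two passes over the same area and flags only what is new or on a watch list, so a human analyst reviews a handful of items instead of the whole image. The labels, grid cells, and watch list here are invented for illustration; a real system would start from raw imagery and a trained detector, not pre-labeled detections.

```python
# Hypothetical sketch: flag new or noteworthy detections between two passes
# over the same area so a human analyst only reviews what changed.

def flag_changes(previous, current, objects_of_interest):
    """Return detections that are new since the last pass or match a watch list."""
    flagged = []
    for det in current:
        is_new = det not in previous
        is_interesting = det["label"] in objects_of_interest
        if is_new or is_interesting:
            flagged.append({**det, "reason": "new" if is_new else "watch list"})
    return flagged

previous_pass = [{"label": "truck", "cell": (14, 3)}]
current_pass = [
    {"label": "truck", "cell": (14, 3)},       # unchanged since last pass
    {"label": "truck", "cell": (15, 3)},       # appeared since last pass
    {"label": "fighter jet", "cell": (2, 9)},  # on the watch list
]

for hit in flag_changes(previous_pass, current_pass, {"fighter jet"}):
    print(hit)  # a human looks only at these, not the whole image
```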
In a similar vein, the Defense Innovation Unit — which helps the Pentagon take advantage of commercial technologies — has run three rounds of a computer-vision competition called xView, asking companies to do automated image analysis related to topics including illegal fishing and disaster response. The military can talk openly about this humanitarian work, which is unclassified, and share its capabilities and information with the world. But that is only fruitful if the world deems its AI development robust. The military needs to have a reputation for solid technology. “It’s about having people outside of the U.S. respect us and listen to us and trust us,” said University of California, San Diego professor of data science and philosophy David Danks.
That kind of trust is going to be particularly important in light of an overarching military AI application: a program called Joint All-Domain Command and Control, or JADC2, which aims to integrate data from across the armed forces. Across the planet, instruments on ships, satellites, planes, tanks, trucks, drones, and more are constantly slugging down information on the signals around them, whether those signals are visual, audio, or in forms human beings can’t sense, like radio waves. Rather than siloing, say, the Navy’s information from the Army’s, their intelligence will be hooked into one big brain to allow coordination and protect assets. In the past, said Probasco, “if I wanted to shoot something from my ship, I had to detect it with my radar.” JADC2 is a long-term project that’s still in development. But the idea is that once it’s operational, a person could use data from some other radar (or satellite) to enact that lethal force.
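The fusion step at the heart of that idea can be sketched, again hypothetically, in a few lines of Python: reports about the same track arrive from different platforms, and a shared picture keeps the freshest one, so a ship’s operator could act on a satellite’s newer look rather than the ship’s own older radar return. The platform names, fields, and the “keep the latest report” rule are all invented for illustration and stand in for far more complex real fusion logic.

```python
# Hypothetical sketch of the "one big brain" idea: reports from different
# platforms are pooled into a shared track picture instead of staying siloed.

reports = [
    {"track_id": "T-101", "source": "destroyer_radar", "time": 100, "position": (31.2, -64.7)},
    {"track_id": "T-101", "source": "satellite",       "time": 130, "position": (31.3, -64.6)},
    {"track_id": "T-207", "source": "recon_drone",     "time": 120, "position": (30.9, -65.1)},
]

def fuse_tracks(reports):
    """Keep the most recent report for each track, regardless of which sensor saw it."""
    latest = {}
    for rpt in reports:
        tid = rpt["track_id"]
        if tid not in latest or rpt["time"] > latest[tid]["time"]:
            latest[tid] = rpt
    return latest

picture = fuse_tracks(reports)
# The ship now sees track T-101 via the satellite's newer report,
# even though its own radar produced the older one.
for tid, rpt in picture.items():
    print(tid, rpt["source"], rpt["position"])
```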