Last month, the San Francisco Board of Supervisors voted in favor of allowing that city’s police department to deploy robots equipped with the capacity to kill, should a situation—in the estimation of police officers—call for lethal force. With that decision, the board appeared to have delivered the city to a dystopian future. The vote garnered a loudly negative response from the public, and this week the supervisors reversed course and sent the policy back to committee. But the fact that the decision initially passed—and may yet pass in some form—should not have been surprising. Police departments around the country have been acquiring robotic devices for decades. Most are used for what have become routine policing activities, such as surveillance and bomb disposal. But some can be outfitted with other capabilities, such as the ability to fire 12-gauge shotgun rounds, and in Dallas, in 2016, the police used a bomb-disposal robot to detonate an explosive device, in order to kill a suspected sniper who had shot twelve officers, killing five. The San Francisco Police Department has seventeen robots, twelve of which are functional, and among that number are bomb-disposal units that can be repurposed to deliver an explosive device. (They, too, can be outfitted to fire 12-gauge rounds.)
Since 1997, when the National Defense Authorization Act sanctioned the transfer of surplus Department of Defense matériel to local police departments, ostensibly to shore up their defenses for the war on drugs, law-enforcement agencies around the country have been stockpiling the weapons and equipment of war. An earlier program that enabled police departments to buy military surplus at a discounted rate was given a boost after 9/11, when grants from the Department of Homeland Security enabled local forces to purchase armored personnel carriers, tactical gear, sound cannons, drones, and other accoutrements of modern warfare. According to the Law Enforcement Support Office, which oversees the weapon transfers, more than seven billion dollars’ worth of equipment has been transferred to more than eight thousand police departments since the program began.
The federal policies have contributed to the militarization of local police departments. The risk is that, once departments are in possession of military-grade weapons, these arms may be deployed even when they are disproportionate to the situation at hand. “If you receive a present at Christmastime, a new toy, you want to be able to use it. Well, this is kind of the same thing for a number of law-enforcement agencies,” Sabrina Karim, an assistant professor of government at Cornell University, told “Marketplace” ’s David Brancaccio, in 2020. But, as a 2021 report from the American Civil Liberties Union observed, there is essentially no oversight of how police forces can use these weapons—and the stockpile includes “more than 60,000 military grade rifles, 1,500 combat-ready trucks and tanks, 500 unmanned ground vehicles (functionally land drones), and dozens of military aircraft, machine gun parts, bayonets, and even an inert rocket launcher.” The report notes that, when departments were asked to supply reasons for requesting an armored personnel carrier, they tended to give vague answers, such as “for active shooters” and “for high-risk operations.” The S.F.P.D.’s rationale for turning a standard-issue bomb-disposal robot into a killer robot—because there is an outside chance that it may someday be useful—is similarly loose. Such reasoning could, in turn, broaden the justifications for deploying lethal force. As Elizabeth Joh, a professor of law at the University of California, Davis, put it, “We went right from ‘Should this be a tool that the police have?’ to ‘What are the circumstances in which police robots can kill suspects?’ ”
The reason that the San Francisco Board of Supervisors was discussing the issue in the first place is that, last year, the state legislature passed Assembly Bill 481. The new law requires every police department in California to inventory its military arsenal and to develop a “use policy” for it, in collaboration with local legislators. In October, across the Bay, Oakland lawmakers had debated and, after pushback from the public, rejected a proposal to allow their police department to arm robots with guns. (Oakland will, however, soon vote on whether to allow officers to use robots to discharge pepper spray.)
During a hearing at the San Francisco board before the initial vote, Aaron Peskin, one of the members, explained that, in his conversations with the S.F.P.D., officers initially did not foresee a circumstance in which its robots would be deployed to kill. So he included a line to that effect in the proposed policy. Upon reviewing the policy, though, the S.F.P.D. changed its position. As the assistant chief David Lazar told the board, members of the department did not want to completely foreclose the possibility. When asked to describe a situation in which a robot might be required to kill a suspect, Lazar mentioned the 2017 mass shooting in Las Vegas, when a sniper on the thirty-second floor of the Mandalay Bay Resort and Casino began shooting people attending a music festival hundreds of yards away. But that shooting, which left fifty-nine people dead and hundreds more wounded, was over in about ten minutes. In such a scenario, arming a terrestrial robot and delivering it to the scene is likely to slow down, rather than expedite, a police response. (In that case, too, the shooter killed himself before police could reach him.)
“Something like ninety-nine per cent of law enforcement is not those worst-case scenarios,” John Lindsay-Poland, of the American Friends Service Committee, a Quaker social-justice organization, told me. “But when they have this equipment, it tends to get used in some of the less-worst cases.” (The A.F.S.C. has been monitoring public hearings on Assembly Bill 481, and urging citizens to question the police on how they plan to use their military equipment.) In theory, A.B. 481, which requires municipalities to revisit their military-equipment-use policies every year, should help mitigate that tendency. But, instead, it may further normalize the use of military weapons, by making them seem commonplace.
The proposed San Francisco policy does suggest that the police should attempt to de-escalate a situation before considering the deployment of a lethal robot, and only the deputy chief of special operations, assistant chief of operations, or chief can authorize it for that use. But there is nothing in the policy that lays out consequences if those procedures are not followed. For the time being, lethal police robots do not function without a human operator. But, as technology continues to advance, we are likely to see an increasing number of law-enforcement robots powered by artificial intelligence. Earlier this year, the Department of Homeland Security announced that it was testing “robot dogs,” outfitted with sensors and cameras, to patrol the southern border. A company called Knightscope, whose mission is “to make the United States of America the safest country in the world,” has been selling robots to police forces (and to private companies) across the country. Some models, equipped with facial-recognition technology, are meant to alert law enforcement if they see “persons of interest.” But a number of studies have shown that facial recognition is markedly less accurate at identifying people with dark skin, and could lead to false arrests. Joy Buolamwini, an M.I.T. Media Lab researcher and the co-author of a major study that demonstrated facial recognition’s “race problem,” told Boston magazine that this technological shortcoming put persons of color “at higher risk of being misidentified as a criminal suspect.”