San Francisco has reversed its killer robot plan

A week is a long time in politics—particularly when considering whether it’s okay to grant robots the right to kill humans on the streets of San Francisco.

In late November, the city’s board of supervisors gave local police the right to kill a criminal suspect using a tele-operated robot, should they believe that not acting would endanger members of the public or the police. The justification for the so-called “killer robots plan” was that it could prevent atrocities like the 2017 Mandalay Bay shooting in Las Vegas, which killed 60 people and injured more than 860, from happening in San Francisco.

Yet little more than a week on, those same legislators have rolled back their decision, sending the plan back to a committee for further review.

The reversal is in part thanks to the huge public outcry and lobbying that resulted from the initial approval. Concerns were raised that removing humans from key matters relating to life and death was a step too far. On December 5, a protest took place outside San Francisco City Hall, while at least one supervisor who initially approved the decision later said they regretted their choice.

“Despite my own deep concerns with the policy, I voted for it after additional guardrails were added,” Gordon Mar, a supervisor in San Francisco’s Fourth District, tweeted. “I regret it. I’ve grown increasingly uncomfortable with our vote & the precedent it sets for other cities without as strong a commitment to police accountability. I do not think making state violence more remote, distanced, & less human is a step forward.”

The question being posed by supervisors in San Francisco is fundamentally about the value of a life, says Jonathan Aitken, senior university teacher in robotics at the University of Sheffield in the UK. “The action to apply lethal force always has deep consideration, both in police and military operations,” he says. Those deciding whether or not to pursue an action that could take a life need important contextual information to make that judgment in a considered manner—context that can be lacking through remote operation. “Small details and elements are crucial, and the spatial separation removes these,” Aitken says. “Not because the operator may not consider them, but because they may not be contained within the data presented to the operator. This can lead to mistakes.” And mistakes, when it comes to lethal force, can literally mean the difference between life and death.

“There are a whole lot of reasons why it’s a bad idea to arm robots,” says Peter Asaro, an associate professor at The New School in New York who researches the automation of policing. He believes the decision is part of a broader movement to militarize the police. “You can conceive of a potential use case where it’s useful in the extreme, such as hostage situations, but there’s all kinds of mission creep,” he says. “That’s detrimental to the public, and particularly communities of color and poor communities.”

Asaro also downplays the suggestion that guns on the robots could be replaced with bombs, saying that the use of bombs in a civilian context could never be justified. (Some police forces in the United States do currently use bomb-wielding robots to intervene; in 2016, Dallas Police used a bomb-carrying bot to kill a suspect in what experts called an “unprecedented” moment.)

The introduction of killer robots would also actively harm police forces’ ability to interact with the community in other ways, says Asaro. “There aren’t a sufficient number of applications where these things are going to be useful,” he says. Meanwhile, other areas where robots are important—such as in passing telephones and other items in hostage negotiations—would be tarnished with the suspicion that a phone-carrying robot could in fact be a gun-toting one.