Killer Robots: Mary Wareham at Human Rights Watch
Episode 1 • 1st September 2020 • Stay in Command • John Rodsted


Shownotes

SafeGround presents the series ‘Stay in Command’

Episode 1: Mary Wareham on the Killer Robot Campaign

2020-Sep-1

Welcome to SafeGround, the small organisation with big ideas working in disarmament, human security, climate change and refugees. I’m John Rodsted.

Thank you for tuning in to our series Stay in Command where we talk about lethal autonomous weapons, the Australian context and why we mustn’t delegate decision making from man to machines.  

Today we speak with Mary Wareham, the advocacy director of the arms division at Human Rights Watch. A native of Wellington, New Zealand, she has been working in the disarmament sector for many years and is based in Washington DC. She is also the International Coordinator of the Campaign to Stop Killer Robots and joins us from Washington now. Welcome Mary.

You’ve had an extraordinary career working on the most important treaties since the 1990s. The list of work is the success story of recent disarmament driven by civil society. The big two would have to be the treaty banning Anti-Personnel Landmines in 1997 and the treaty banning Cluster Munitions in 2008.

In both treaties, the work of civil society drove the process and forced governments to account and ultimately change. The landmines campaign was awarded the highest international accolade, the 1997 Nobel Peace Prize.

Today we don’t look back to celebrate the past, but look to the future and her work to ban killer robots.

Killer Robots - sounds like a cheap sci-fi movie [00:02:52]

John Rodsted: Killer robots. Sounds like a cheap sci-fi movie or a scene from the Terminator. What in fact are they?

Mary Wareham: Well, the Campaign to Stop Killer Robots is not so concerned about the sentient, walking, talking, you know, Terminator-like killer robot. We're more grounded in reality. And what we've seen is that a small number of military powers, most notably China, Israel, South Korea, Russia, and the United States, are investing very heavily now in military applications of artificial intelligence and developing air-, land- and sea-based autonomous weapon systems.

We've been quite careful to call for a preemptive ban on fully autonomous weapons, which means the focus is on future weapons systems, not the existing ones that are out there today. But it helps to look at those existing systems, especially the extent of human control over the critical functions of selecting a target and then firing on it.

More and more, we see sensors being used to detect targets, and increasingly they're not controlled by humans. We have cameras that are now employing facial recognition technology, there are heat sensors to detect body heat, motion sensors which can detect how you walk, your gait, and of course sensors for radars. And we're all carrying around a great, you know, tracking device in our pockets, which is called a mobile phone, using GPS technology. So it's a combination of different technologies, but I think it's a bigger reflection of how our own lives are becoming much more subject to computer processing. And there are big technological developments that raise fundamental questions for humanity when you try and incorporate artificial intelligence into a weapon system, to the point that you no longer have that meaningful human control.

Meaningful Human Control? [00:04:43] 

John Rodsted: Can you explain a bit about meaningful human control for us? And what's the difference with an autonomous weapon which is using artificial intelligence? Can you flesh that out a bit more for us, please?

Mary Wareham: Wow. I mean, what is artificial intelligence? There's still not any agreed definition. So what we tend to talk about more in this campaign is autonomy, how autonomy is incorporated into weapons systems. And when we talk about human control over the use of force, we prefer to use the term control, as opposed to judgement or intervention, which imply a weaker role for the human.

We also like this modifier, meaningful, because that ensures that the control is substantive. But of course there are other descriptions for that. We put out a paper a few months ago detailing how we believe the concept of meaningful human control can be distilled into an international treaty.

And it can be done in several different ways, because it can apply to the decision-making, the technological, and the operational components. The decision-making components of meaningful human control are about ensuring the human operator has the information and the ability to make decisions about the use of force, and ensuring that this is done in compliance with legal rules and ethical principles. The human operator of the weapons system has to understand the operational environment, how the autonomous system functions, and what it might identify as a target. And there needs to be sufficient time for deliberation.

Technological components are the embedded features of a weapon system that would help to enhance meaningful human control. This is about predictability and reliability. It's about the ability of the system to transfer or relay relevant information to the human operator. It's also about the ability of the operator or the human to intervene after the system has been activated.

This is what we would call a human on the loop, as opposed to a human out of the loop. And then finally the operational components that can make human control more meaningful. And this is about limiting when and where a weapon system can operate and what it can target. There's a whole bunch of factors that need to be considered.

And this includes, not least, how the force is applied, the duration of the system's operating period, the nature and the size of the area in which it's operating, and the types of targets it may be attacking, whether anti-personnel ones attacking people, or anti-materiel. And I think it's also interesting to look at the mobility or stationary nature of an autonomous weapon system, and whether there's anything particularly problematic in that. So what we're trying to do is determine the acceptable level of meaningful human control over the use of force. It's not a short answer, because it's one that requires a negotiation. And in order to agree on it, there will have to be an international treaty negotiated.

Who would benefit from an arsenal of killer robots? [00:07:45] 

John Rodsted: So if there were arsenals of killer robots, who would benefit from that? What sort of scale of military would benefit from an arsenal of killer robots?

Mary Wareham: We hear a lot about short-term gain for long-term pain when it comes to autonomous weapon systems. We hear the United States and other countries talk about how they would use autonomous weapon systems responsibly and in compliance with the laws of war, et cetera. But even countries like the United States acknowledge that once these kinds of weapons systems get into the wrong hands, they could definitely be misused, and not just to kill one or two people, but potentially to commit genocide.

If it comes down to it, it's possible to make a case for any weapon system, but for autonomous weapon systems, I guess some of the attractions are, yes, you could have fewer soldiers on the battlefield, and you would have fewer soldiers dying because they're not on the battlefield. But when we hear these arguments, I always look at it from my perspective as a human rights activist and researcher, which is that you never have a clean battlefield. There are always civilians who end up in there, especially if warfare is being fought in towns and cities, as it is these days.

An Arms Race? [00:09:01] 

John Rodsted: So effectively, investing in killer robots could trigger a new arms race. If, for instance, some of the big superpowers put a lot of money into developing, manufacturing, and stockpiling large quantities of these weapons, with which they could then swarm a battlefield, that would spur the opposition to do the same, and they just keep spending money and stockpiling more and more weapons.

So it becomes a major arms race, economically, and eventually it has to be triggered by some form of conflict.

Mary Wareham: Yes, the potential for an arms race is very strong, and it's one of the biggest defenses that we hear in Washington DC. If you talk to the think tanks and defense contractors, they'll talk about how responsible the US is and how irresponsible China is, and if China is investing in this tech, then we need to as well. It's a self-perpetuating circle, which Russia is also involved in, and it's part of the reason why we've got a preventative campaign here, trying to take action before it's too late. One of the big attractions for me working on this concern is that it took hundreds of thousands of people to be maimed or killed by landmines before we managed to create the treaty banning those weapons. Since then it has had a remarkable impact in reducing the numbers of human casualties. But this is an opportunity to act in a preemptive way, a preventative way, when it comes to fully autonomous weapons. And we don't have to accept this narrative of the arms race; it's definitely one that the developing world does not want to accept, because they look past the arms race and they look at the destabilizing consequences, both regionally and internationally.

Who would make money from them? [00:10:43] 

John Rodsted: So who would make the money out of such technologies if they were in fact developed?

Mary Wareham: You look at who's making these investments, and it's the regular big-name arms manufacturers, from Northrop Grumman to Lockheed Martin, and the rest of it. Some of this is in state-owned production facilities; we believe that to be the case in China and Russia, where things are quite tightly controlled.

In terms of making money, I guess it's off the really big, major platforms, such as the very large autonomous fighter aircraft. There is money to be made in that, for sure. But we're also concerned with establishing this principle of human control over the use of force, meaningful human control, so that everybody can understand it, so that it can apply to the biggest military power right down to the non-state armed group who's thinking about putting an infrared sensor on their explosive device to get it to detonate, which would make it an anti-personnel landmine. So in effect, we've dealt with the dumb end of this concern through the prohibition on anti-personnel landmines. And yes, we're talking about bigger platforms, but all sorts of different types of platforms.

And this is why we have to come back to this notion of human control, because that's the one common defining point in all of them, or absent from all of them.

Can AI technology be fooled? [00:12:03] 

John Rodsted: So it's driven by artificial intelligence. Can the technology that is proposed at present be fooled?

Mary Wareham: Yes. I mean, there was a glossary of terms in a Pentagon directive a few years ago that was quite revealing, because it talked about all of the things that could go wrong: hacking, spoofing. What happens when your enemy gains control of the system and uses it against you? If they copy it, if they try to develop it themselves? We see that already happening today with the armed drones that Iran and other countries are deploying. So this is what could happen.

What role do universities play in killer robots? [00:12:39] 

John Rodsted: So universities and research facilities are major players in the development of any technology. Where do these institutions come into the story?

Mary Wareham: We talk about an arms race in artificial intelligence, but really it's more like a talent quest, trying to find the best programmers, the people who are at university learning these skills. There's quite an effort underway here in the United States, and I think in Australia and elsewhere, by certain arms manufacturers and defense contractors, but also by militaries themselves, to set up these university centers of excellence in artificial intelligence.

But they do this in quite a tight-knit way, working with funding in some cases from defense contractors or from the government itself, and this is where students especially, and faculty, have to wake up. I've had a number of engineering, robotics, and other students studying artificial intelligence contacting me, worried about their university's relationship, in the United States, with the Pentagon, but also worried about defense manufacturers coming on campus and trying to get them involved in this work. And now they're also concerned about the technology companies themselves, because some of them are now doing contracts with the defense sector.

And so this is, yeah, what I would call the military-industrial complex. And when it extends to universities, it becomes the military-industrial-academic complex, an overused term which I never really believed in until I started working on this issue and realized just the scale of what we're confronting here. It is gigantic.

What dialogue have you had with serving or past military? [00:14:19] 

John Rodsted: So, what sort of dialogue have you managed to have with either serving military or past military about this? Because I would imagine it's a complicated issue for them being cut out of the decision making process.

Mary Wareham: We've had a lot of discussions with country delegations and the military attachés and defense officials that participate in them. And I remember one with the United States way back in 2013, when we were just getting to meet each other, and we were asking them a lot of questions about this DoD directive on autonomy in weapons systems. And I remember them saying to the civil society group that I was there with, you know, you think that we're a monolith here at the Pentagon. We're not, and this policy had months of debate going into it. It was a debate between the boots-on-the-ground guys who go to Afghanistan, who understand the importance of community engagement and not kind of hiding behind their desks. There were fights with the military lawyers and their interpretation of international humanitarian law, there were fights with the acquisition people and the techs who want to develop the latest and greatest devices, and then with all of the policy hacks. And I can see that, for sure.

It's easier for veterans to speak out on this issue than serving military, but in my conversations a lot of serving military have whispered in my ear that they think the campaign is on the right track here. I remember a German general before an event that we did, who was saying, you need to help us to get the chemical weapons convention for autonomous weapon systems. He's like, I want to live in, you know, a rules-based international order, which is what the Germans love to say. But it's true. They want to live in a world that has climate rules, that has trade rules, that has arms rules. And this is where the killer robots treaty comes into it.

What is the advantage of having autonomous weapons? [00:16:14] 

John Rodsted: So from a techie-developer-cum-military perspective, what's their proposed advantage of having an autonomous weapon? What do they think is so good about it?

Mary Wareham: It's hard to get people to say good things about fully autonomous weapon systems. We see governments basically denying that they are currently producing or developing them, saying that they've got no plans to. We're kind of like, well, if that's your view right now, then what's your problem with a preemptive ban? We should be able to move forward without a doubt.

We see some of the bigger defense contractors adopting the language of human control that we use in the campaign. I think it was one of the really big ones who made quite a slick film about 'the human is everywhere'. This is what's happened since the campaign was launched a few years ago: a lot of the content that was originally on the web has been pulled down by defense manufacturers, but also, I think, by militaries who are afraid that if we see the words 'full autonomy' as an ultimate objective, or the word 'autonomous' without an explanation of the human control, then we're going to start asking questions. And I think the campaign is now having such an impact that it's no longer just us asking those questions; it's the media who are scrutinizing this. And I think it is starting at the university and student level as well.

What disadvantages are there with autonomous weapons? [00:17:40] 

John Rodsted: I'm guessing one of the great disadvantages of having autonomous weaponry is that it could be hacked? If you can make it, you can break it, and there's always some clever mind out there who can get onto the inside and, I would be guessing, turn the weapon system back on you. Have you got any comments on that?

Mary Wareham: I mean, we've seen the tactics that the Taliban and other non-state armed groups have taken to in Afghanistan and elsewhere to evade armed drones; they've created all sorts of shelters to try and not be seen from the sky above. And I think they will continue to innovate when it comes to how you respond to such technology.

I guess this is a good example of why the developing world is so furious about killer robots: they see these weapon systems being rolled out by rich military powers, and they don't have similar means to do that. But they know that they are most likely the ones who are going to be the victims of such weapon systems. That's especially what we hear from people in the Middle East and North Africa, but also from Africa itself and across Asia. Most countries are quite opposed to this notion, less so the bigger military powers.

John Rodsted: And I guess that creates a situation where, from an implementing military's perspective, the only people they can identify on the battlefield are their own, which then turns every other living creature down there into the enemy, whether that is a civilian or opposing military. Have you got some thoughts on that?

Mary Wareham: I'd really like to hear some military perspectives on this; I hate to try and speak for the military on it. And I hate the way in which so many of those conversations that I've had have been kind of off to the side and not done in a public way. I think one of the most abhorrent things that I hear, though, from militaries is this notion that you're crossing the moral Rubicon if you go this far in terms of outsourcing killing to machines. It's been a trend that has been happening for a while, this ever greater distance from the battlefield. We see that here in the US, and that's already exacerbating a lot of things. So there's...
