Matilda Byrne on Australia and Killer Robots
Episode • 24 September 2020 • Stay in Command • John Rodsted
Duration: 00:37:24


Shownotes

Matilda Byrne on Australia's position on Killer Robots

John Rodsted: Welcome to SafeGround, the small organisation with big ideas working in disarmament, human security, climate change and refugees. I'm John Rodsted.

Thank you for tuning in to our series Stay in Command, where we talk about lethal autonomous weapons, the Australian context and why we must not delegate decision making from humans to machines.

Matilda Byrne is the national coordinator of the Australian Campaign to Stop Killer Robots. The Campaign to Stop Killer Robots is an international effort to preemptively create a binding treaty that would bring restrictions and a ban on a class of weapons systems that would operate without meaningful human control - lethal autonomous weapons, or killer robots. She holds a master's degree in international relations and is presently working on a PhD on international security and global governance. Welcome Tilly.

Matilda Byrne: Thank you for having me!

John Rodsted: Killer robots! Can you tell me what they are and why you want them banned?

Matilda Byrne: Killer robots, or lethal autonomous weapon systems, are essentially weapons that use artificial intelligence. So the selecting of targets and the decision to deploy lethal force is all done by AI algorithms - there's no human that oversees or intervenes or controls the targeting of people and then the decision to kill those people as targets. As for why we would like to ban these weapons, there's a whole host of different concerns: moral, ethical, legal, security concerns. For me, one of the most compelling is this idea of delegating the decision making over life to a machine - that as humanity, we are not prepared to have this decision made solely by an algorithm, and that a human has to control this question of life and death of another human being.

Is Australia for or against killer robots? [00:03:20]

John Rodsted: So, where does Australia sit on this subject? Is Australia for killer robots or against them?

Matilda Byrne: Australia, regrettably, has this position where they say it's premature to support a ban. They've been saying this for years now. Essentially, what this means is that Australia would like to have the option to potentially develop lethal autonomous weapons in the future. Beyond that, they have suggested many times in public forums - at the United Nations and in their own reports - that these weapons could potentially also be desirable, and so we need to research more, we want to look at developments in this direction and see how it could be really positive for our military. Obviously this is an incredibly disappointing position, especially because there's been no attempt by the Australian government or defence to engage with the idea of human control and actually to maintain human control in the decision making.

John Rodsted: There are strong diplomatic efforts from civil society to get a ban on these weapons before they are developed and deployed - in short, a treaty. Is this movement gaining any traction? And if so, with whom?

Matilda Byrne: Yes, it definitely is. We've been seeing growing momentum towards these calls for a ban. First, you have the different governments of the world. There is a grouping of 120 different countries called the Non-Aligned Movement who have declared their support for a ban. In addition, there are also 30 different countries who have explicitly stated that they support a ban in the talks at the specific forum that deals with this issue of lethal autonomous weapons.

And as well as that, you've mentioned the civil society movement. We have a lot of tech workers that are speaking up about having a ban and why that's really important for their work - people in software, AI design, robotics, et cetera. There's also a lot of academics across different areas: morality, ethics, philosophy, international security. They, I would say, are the main sort of people, in addition to the coordinated non-government organisations of the world that are working as part of the Campaign to Stop Killer Robots.

Is Australia creating killer robots? [00:05:36]

John Rodsted: Australia has large research and development facilities in many universities, and they do exceptional work in software and engineering along with medical advances. Are we working on creating killer robots, or at least the software and the technology?

Matilda Byrne: The short answer is probably. What we know is that in a lot of our universities, there's a lot of research done in partnership with the department of defence as well as defence industry. In a lot of those programs, there's a lot happening at the moment in autonomy: autonomous capabilities, autonomous systems, the kind of sensors that you would need for these weapons. Because the department of defence hasn't explicitly stated that we are, in fact, creating lethal autonomous weapon systems, it's impossible to know for sure the extent to which university research is being incorporated into such weapons. But what we do know is that the capabilities are there and that it would be very easy through these programs for those to be used for these weapons if that was the direction the Australian government decided to take.

How would killer robots benefit Australian universities? [00:06:44]

John Rodsted: So if a university gets involved in research and development, how would it actually benefit the university?

Matilda Byrne: I think one of the large incentives for universities to be involved with these programs is money - they receive funding from the government, pretty simply. A couple of the other things are more around reputation and marketing for the university: they're involved in 'cutting edge' and 'innovative' programs, language like this, which is true. And it's not an issue in and of itself for a university to do great, groundbreaking research in AI and software and things like this. What's important is that they have policies in place that say: as a university, we oppose lethal autonomous weapon systems and do not want our research contributing to the development of these weapons.

An ethics issue [00:07:35]

John Rodsted: So with these sorts of technological advances, it really turns into an ethics issue - drawing the line between whether a certain technology or algorithm is used effectively for good, or for weapon systems. So it does turn into ethics.

Matilda Byrne: Yeah, that's exactly right. To put it really simply, just because something can be developed, it doesn't mean that it should be. And I think you could retrospectively apply this to a lot of other weapons - the creation of the atomic bomb, or Agent Orange, which we saw had devastating impacts. Having learned from the past, we can then ask ourselves: what's the onus on us at present to prevent the development of something that would be abhorrent? I think that there is an onus, and that it is really important to take these ethical dimensions into consideration.

John Rodsted: So what do some of the developers think about their technology potentially being used to kill masses of people?

Matilda Byrne: I suppose in terms of developers, you could put them in three categories. First, you have the people that are developing in these programs with defence and looking at lethal autonomous weapon systems. I'm sure from their perspective, they're not thinking about how what they're doing could cause mass civilian casualties; they're thinking about how they're contributing to the national security of Australia, things like this. But it is really problematic when there are then no controls or real consideration and reflection within those programs as to what exactly it is that they are doing and what the repercussions are.

Then you have developers in the sector that are just unaware that this is something that's taking place. They're a really important group: they go about developing whatever it is they're doing - sensors, algorithms - unaware that in the future, perhaps, this work could be used for a lethal autonomous weapons system.

And then of course you have the people that are aware that this is a real concern and that are really troubled by the prospect. They face really tough decisions - things like having to turn down a project that could be really positive for, say, agriculture, because it looks at targeting and eliminating pests in the native Australian environment, which they feel uncomfortable doing because they know that that system could be repurposed and turned into a lethal autonomous weapon in the absence of any real regulation.

John Rodsted: So regulation really is a key factor in controlling and keeping a cap on these technologies?

Matilda Byrne: Yes, that's right. It's a key point in terms of delineating what is acceptable and what's not.

How much money? [00:10:13]

John Rodsted: Have you got any idea what kind of money is floating about within Australia at present developing various components or platforms for autonomous weapons?

Matilda Byrne: It's actually an alarmingly high amount of money. The main area where we know that autonomous weapons or autonomous systems development is happening is Trusted Autonomous Systems - which is quite an ironic name, 'trusted' systems. This is a defence cooperative research centre, which means it's a partnership between the department of defence, research institutions like universities, and arms manufacturers, or the defence industry. Trusted Autonomous Systems was the first research centre like this to be launched, and it was awarded $50 million for its first seven years of operation. That's an area where we know a lot of the development is happening around autonomy for defence. In addition, just at the beginning of this year in January, the Royal Australian Air Force announced $40 million for a project with Boeing to make an autonomous combat aircraft - so that one project for these prototypes was $40 million.

We know, since the release of the defence strategic update, that there's also an $11 billion investment in land vehicles and autonomy specifically, to be made over the next 10 years. And lastly, and sort of most problematic of all of these - it's less money, $9 million - there's a project that Australia says is to research how we embed ethics into killer robots, which is a very bizarre and problematic concept. The fact that this is something Australia sees as good or important to do, instead of just drawing a line and saying we accept that fully autonomous weapons or lethal autonomous weapons will never be lawful, is I think quite appalling.

Why do defence want them? [00:12:08]

John Rodsted: defenceWhy would the Australian defence force want these weapons systems?

Matilda Byrne: There are a few reasons why lethal autonomous weapons could be desirable. One of the main ones is response time - this idea that they'll be much faster to make decisions. Some of the other things are around longevity: if you have a person that's having to make decisions, there's fatigue and things, whereas these machines could just go and go and go.

There have also been arguments by the military that they'll be good for precision. Which I think is also a bit of a flawed idea when we think about how they do their targeting, and we know that they will not be successful in targeting actual military targets correctly, and that there's huge room for error where they could falsely or wrongly engage civilians instead.

But one of the huge ones is that idea of response time - that it's beyond human endurance to do certain things. I think, though, on that point, what it really means is that we're prepared to have all of these machines that just escalate the pace of warfare. Because if we don't need a human to react, then machines can go much faster, which will ultimately cause more devastation and severe impacts.

Can they escalate conflicts? [00:13:27]

John Rodsted: One of the points you made there was about how it would escalate a conflict, because it would be response versus response and things would keep going faster and faster. And one of the roles of a commander is to take into account all sorts of things in a changing battlefield and try to de-escalate a conflict, because that's part of a command responsibility.

And I think an analogy to this would be the Russian colonel back in the early eighties who held off doing a nuclear strike on America when their instrumentation, to all intents and purposes, showed that a full nuclear strike was heading to Russia. What's his name? Stanislav Petrov. He wouldn't launch the counter attack because he just believed something was wrong with the system, and he was proved to be right. If it had been left to a machine, it would have been a full nuclear response on America, and that would have been world war three. It was one person in that loop who stopped the reaction. So, yes, the concept of escalation or de-escalation is a very important point to consider.

So could you paint me a picture of a battle using autonomous weapons? What would they do and how would they do it?

Matilda Byrne: I think the thing about fully autonomous weapons, or having these killer robots in battle, is that it's a lot more insidious than what we might imagine, which is ultimately having these little robots driving around an area at war and firing at each other. It's much closer to what we see at present in the context of urban warfare, where you have drones circulating around - except these are ones that are able to strike, and you're able to have more of them go into areas. Initially, I think it's going to look not totally dissimilar to how warfare looks now, but with a lot less accountability, and a lot fewer humans actually having to make these hard decisions, evaluating the current context and making thoughtful choices. Instead, it's going to be these robots flying around going, "Oh yeah! That fits my parameters, so I'm going to fire", without looking at things like collateral damage, or whether this is really worth it for the strategic gains - all of these really essential evaluations that commanders currently have to take into account in order to maintain international humanitarian law.

Is there any human control? [00:15:47]

John Rodsted: So where's the point of human command and control in the targeting of autonomous weapons? Is it a set-and-forget technology, or is there a point at which a human can intervene to call things off?

Matilda Byrne: What's incredibly concerning, in particular about the Australian position, is some of the remarks they've made recently when pressed on this idea of human involvement. One of the things that the Chief of the Defence Force, General Campbell, has stated is that there's never one answer for where a human would be involved. We're one of the only countries in the world that has stated something like this, if not the only one. And I think what that means is we're trying to leave the door open and say, well, maybe it's at the very beginning when we choose who the target is, or maybe it's a little bit later - or, you know, we just don't know. We're not committing to where the human is going to be involved, or whether there will be any human control over targeting, selecting and choosing to deploy lethal force.

What can go wrong? [00:16:49]

John Rodsted: So what could go wrong with autonomous weapons?

Matilda Byrne: One is machine error, which I think you touched on - that's definitely a huge concern. As well as that, there's a great risk of hacking and the security of these systems, which is very troubling, because the more these machines are capable of, the more negative the ramifications are if they are hacked. There are other concerns around whether they could be used as a tool of oppression - for committing genocide or other atrocities. Because it isn't hard to set a certain set of parameters for the targets into these systems - all people in this one kilometre radius, or whatever - and just send them off: 'okay, go'. And these robots don't have a conscience. It's not like military personnel turning around and saying, no, we're actually not comfortable firing on a hundred thousand people gathered in this square protesting against the government. It's just a tool that's free of any sort of human conscience or decision making, and so it's very, very problematic. I guess that's not so much an instance of it going wrong, but of it being used for nefarious reasons that we hadn't necessarily thought about when we're thinking about just utilising these systems in warfare.

Can killer robots be used for civil oppression? [00:18:07]

John Rodsted: I suppose it brings you to the point of how the crossover into civil oppression would happen with autonomous weaponry. If you chose to use that against, for instance, the riots that are happening in various parts of the world at the moment, what would that look like? If people chose to use autonomous weapons against those civilians?

Matilda Byrne: Exactly. And I think the risk that these systems could be used for domestic policing is really alarming. The reality with these kinds of systems, and the way the technology works, is that if it is developed in one area, then it's easy to change how it's used. But if it's never developed at all - because there is a ban in place, for instance - then it's much harder for people to conjure up these systems separately.

John Rodsted: So you take away the industrial manufacturing component, which can give you the ability to create masses of well-produced machinery, and it turns it into more of an ad-hoc method. So you won't get the saturation point.

Matilda Byrne: Right. Exactly.

Can killer robots follow international laws of war? [00:19:05]

John Rodsted: Battlefields are rapidly changing and confusing places - hence the term 'the fog of war'. Much of how orders are given and followed depends on ethics, international humanitarian law, rules of war and engagement, the Geneva conventions, et cetera. Could autonomous weapons be programmed to perfectly navigate such a space?

Matilda Byrne: The simple answer to that question is no. I want to break down one element of those parts of international law that you touched on, which is international humanitarian law. And even just two key elements of that, which is the principles of distinction and...
