In today's ‘Stay in Command’ episode, Major General Mike Smith (Ret.) and John Rodsted from SafeGround explore the issues surrounding the development of Lethal Autonomous Weapons with Artificial Intelligence. The mechanics, ethics and application of this new technology paint a disturbing picture of a world where machines decide who will live and who will die. Today we will talk about leadership, both civil and military, and the complexities of command responsibility in regard to Lethal Autonomous Weapons.
Major General Michael Smith (Ret.) has spent his life leading others. He graduated from the Royal Military College, Duntroon, as Dux of his year in 1971 and has since commanded everything from a Platoon to a Brigade. His 34 years in the Australian Army placed him in some complicated situations.
If you have questions or concerns please contact us via info@safeground.org.au
If you want to know more, look for the Australia Campaign to Stop Killer Robots on Facebook, Twitter and Instagram, or use the hashtag #AusBanKillerRobots.
Become part of the movement so we Stay in Command!
For access to this and other episodes along with the full transcription and relevant links and information head to safeground.org.au/podcasts.
John Rodsted with Mike Smith
A Commander’s View on Lethal Autonomous Weapons - Interview with Major General Mike Smith (Ret.) [00:00:00]
John Rodsted: [00:00:25] And today we're speaking with Mike Smith as part of the 'Stay in Command' series. 'Stay in Command' explores the issues surrounding the development of lethal autonomous weapons and artificial intelligence. The mechanics, ethics, and application of this new technology paint a disturbing picture of a world where machines decide who will live and who will die.
Mike spent 34 years in the Australian Army and retired as a Major General. He graduated from the Royal Military College Duntroon in 1971 as Dux of his year and has had a distinguished military career as an infantry officer, commanding at all levels from Platoon to Brigade.
He served as Australia's Defence Advisor in Cambodia in 1994, and throughout 1999 was Director General for East Timor. He was then appointed as the first Deputy Force Commander of the United Nations' Transitional Administration in East Timor (UNTAET) in 2000 and 2001. In recognition of this, he was promoted to an Officer in the Order of Australia.
After the army, Mike became the CEO of the Australian refugee agency Austcare from 2002 until 2008. He then became the founding Executive Director of the Australian Government's Civil-Military Centre from 2008 until late 2011. He then served with the United Nations Support Mission in Libya for 12 months as the Director of Security Sector Reform. He's the immediate Past President of the United Nations Association of Australia and is the current Chair of the Gallipoli Scholarship Fund and a Non-Executive Director of the Institute for Economics and Peace.
Mike holds a master's degree in International Relations from the Australian National University, a Bachelor of Arts in History from the University of New South Wales, and is a Fellow of the Australian College of Defence and Strategic Studies. He's also a graduate of the Cranlana Leadership Program and the Company Director's Course of the University of New England.
Today we'll talk about leadership, both civilian and military, and the complexities of command responsibility in regard to lethal autonomous weapons. Welcome to SafeGround, Mike Smith.
Mike Smith: [00:02:28] John Rodsted! Lovely to be here with you.
John Rodsted: [00:02:31] Mike, 'The buck stops here'. This was a sign that sat on President Harry S. Truman's desk. Someone at a high level is ultimately responsible, and here he is in front of you. What do you see as the role and responsibility of a commander?
Mike Smith: [00:02:46] Well, books have been written about this, John, and, let me try and be as succinct as I can. Basically a good commander needs to demonstrate leadership. And in doing that, they need to make sure that what they do is always legal. That's always a good start because if a commander doesn't abide by the laws and in particular, in conflict, the laws of armed conflict, then they are perpetrating crimes or potentially perpetrating crimes against humanity.
So a good leader needs to provide fearless advice to his or her superiors. And at the same time, a good leader needs to set the example and to motivate their subordinates - both by his or her actions and by doing the right thing. But a commander needs to do a few other things beyond having those personal traits that we all know about.
A good commander must provide the proper training and acquire the resources necessary for their men and women to do the job that they are set to do. And a good commander must always know the capabilities of those under his or her command. I found, personally, that one of the best traits of a good commander is the ability to be a good listener and to always encourage subordinates to honestly tell you what they think. A poor commander only ever wants 'yes-men and women'. A good commander wants to hear different points of view.
John Rodsted: [00:04:35] So a really key point to that is that you've got empathy. You've got empathy with the people that are within your command. You can see it from their perspective.
Mike Smith: [00:04:45] I think empathy and respect are key to being a good commander. And of course, not everybody will always agree with a commander's decision, but if everyone respects the commander, they say, well, I didn't agree with it, but I respect it. And I trust the commander that what that commander is telling us to do is the right thing to do.
John Rodsted: [00:05:10] I guess that brings us to the point that when you're a military commander, it's a complicated environment. You're in an operational role, you're in a dangerous environment, and as you said, you've got to operate legally. You need to have the respect of the people under you. If you're operating, say, in a combative environment, you're making decisions that can be life and death for your own troops, but also for civilians, prisoners, refugees, opposition combatants, and all of them within the legal framework.
So, can you guide me through a little bit more about how the decision making would take shape under these conditions and how you'd have to adapt?
Mike Smith: [00:05:48] Well, I think the most important thing is that if you have good doctrine, then everybody understands what the right and the wrong is and how to do things. And a good commander always ensures that people understand what the doctrine is and that they abide by it. And good commanders are always inventive, and they use their initiative and they encourage their subordinates to use their initiative. In fact, they expect them to. But to do so lawfully all the time, within the rules and regulations, not to go outside of them.
John Rodsted: [00:06:28] But I suppose then, if you get into a life and death situation, as in combat, is it almost an oxymoron to think that wars have limits? Because the business of fighting a war is achieving your objectives, and people are going to get killed as part of that. Staying within a legal framework, does that not get stretched? Or how do you see that?
Mike Smith: [00:06:49] Well, of course, it gets stretched. It can get stretched, but a lot of work has gone into the laws of armed conflict, into international humanitarian law. So there are boundaries. Now there will always be grey areas. There's no question about that because, in the heat of battle, instantaneous decisions have to be made. But, generally speaking, I think that it has to stay within those limits. And there might be some mistakes made, but if those mistakes are war crimes, if they are targeting innocent civilians, those sorts of things, then a commander must be held accountable and responsible for breaking those laws of armed conflict.
John Rodsted: [00:07:35] So staying within what is a legal framework is an essential part of being a military commander, achieving your goals, but staying within the legal framework, that is your umbrella?
Mike Smith: [00:07:45] Absolutely. And to go outside that means that you're just really acting like a terrorist, aren't you? You don't abide by the laws of armed conflict. So, some people sometimes say 'that's like fighting with one hand tied behind your back'. But I've never subscribed to that view because, if you are a soldier, a sailor, or an airman, and you are representing your state, you abide by the rules of your nation-state. And in Australia's case, we abide by the laws of armed conflict and they are irrefutable.
John Rodsted: [00:08:23] That brings us to the point that what you're provisioned with to achieve your goals, what is in your arsenal, what is available to an Air Force, a Navy, an Army, et cetera, become tools that are legally acceptable to that nation for their commanders to use in the field. Would that be sort of correct?
Mike Smith: [00:08:41] Yeah, absolutely. Absolutely. And the whole nature of warfare is that the way combat occurs is constantly changing, largely because of technology.
John Rodsted: [00:08:54] Hmm. Historically there have been times when weapons systems have been acceptable within a military framework and have got somewhat out of control. And I imagine a couple of the good examples would be the use of anti-personnel landmines, cluster munitions, and the elephant always in the room would be nuclear.
And they've all been addressed with international treaties that have brought about their removal and restriction. I imagine at the time when they were employed they were all legal, but then the flavour of international humanitarian law and international treaties turned against those.
Then things become, I suppose, a little bit more complicated when you have to look in hindsight at a weapon system that's been removed, but it doesn't change things in the field at the time. So there are weapons that have been used, and then have become unacceptable internationally, and treaties have been formed to deal with those. Landmines, cluster bombs, nuclear weapons would be some of those. I guess there's another series of weapons that have also been dealt with by treaties. One would be poison gas after World War One. The other weapons system that was beaten before it was used in combat was blinding laser weapons, and the protocol was created in 1995 in the CCW. So that was a good example of beating a weapons system before it was deployed. It sort of brings us to the thorny issue that's on the table at the moment, which is about lethal autonomous weapons or 'killer robots'.
There's quite a bit of international research and development in the various forms of these systems. Here I need to draw the important division between killer robots and drones: as the systems are now, drones have an operator who makes the final decision to strike or not to strike. With killer robots the machine makes the final decision and the choice to kill. The machine is in command with no human in that loop.
Mike, from your command perspective, how would you feel about handing over the role of decision-maker, to kill or not to kill, to a machine?
Mike Smith: [00:10:53] Well, I feel very uncomfortable about it. And of course, the distinction you make is between lethal autonomous weapons and drones, and not only drones but a whole range of weapons systems that use artificial intelligence. You're quite right in saying the difference is that one decision is made by a robot - by an algorithm - and the other is made by a human. And the difficulty that's happening with lethal autonomous weapons, as I see it, is that this distinction is becoming increasingly blurred. It's becoming a really grey area. So that, for example, there are autonomous weapons systems that are lethal, which even Australia has. And I'm thinking here about anti-missile defence systems onboard ships, and that sort of thing, that just come into play automatically if the ship, or if an area, is threatened. These, I think, can be justified in the sense that they are not targeting humans. They are really defending against an incoming missile or an incoming threat which is itself not human.
But then we get to the situation where we say, well, if that can happen in that situation, why don't we program these weapons so that we don't have to be there at all? And they become offensive, and they attack humans. And that's where I think the line has to be drawn. So I guess in terms of lethal autonomous weapons, I see that a human being must be responsible for targeting and must be held accountable should things go wrong, and humans be killed, as a consequence of their use. When I say humans, I'm talking about non-combatants.
John Rodsted: [00:12:49] So trying to limit the destruction to the combatants on a battlefield and keeping the civilians out of that equation, if at all possible?
Mike Smith: [00:12:56] Yeah, absolutely. And saying that there are limits to the extent to which we will allow machines to make the decision to make a strike.
John Rodsted: [00:13:09] If there was a movement towards the deployment and use of lethal autonomous weapons within the militaries of the world, do you think that could become a bit of a slippery slope that would reduce the threshold to go to war, which would make it easier for governments or militaries to choose to go to conflict, as opposed to trying to preserve life on their own side? Do you think the presence of autonomous weapons would do that?
Mike Smith: [00:13:34] They could. I think that we're entering uncharted waters here. It's a little bit like when poison gas was used on the battlefield because it existed. It was only when people saw the consequences of it that they said, 'Hey, this is just too much. We've got to ban it.' And they did, successfully. When I look at things like that, I have great hope; the same as, you know, after all of those landmines were used and they caused havoc, they were then banned. Cluster munitions is another one where I think that some progress has been made, but not as much as I would like to see. So lethal autonomous weapons are very much in that category, where there needs to be limits on how they can be used. And this is why I really hope that Australia plays a big role in the United Nations, in the CCW Convention, in trying to define those rules. One thing is clear, John, and that is that technology is not going to stop. These things are going to keep being invented.
Algorithms are going to keep being developed. And I just read the other day that a robotic F-16 defeated a human-flown F-16 aircraft five times in a row. So machines can definitely do this stuff. There's no question about it, but the question is, what is the purpose of those machines?
Now, does that make it a slippery slope to go into conflict? Because you've got these? I would like to think that it would be more about, well, how this enables us to defend ourselves better. This enables us to deter conflict better, to prevent atrocities occurring because it can be done accountably. But it comes down to what control we will keep over the use of these autonomous weapons systems.
John Rodsted: [00:15:37] And what you're really saying is at some point there needs to be a human in the loop that can override what the machine is doing so it still has some form of meaningful human control?
Mike Smith: [00:15:47] Yeah. You can't take a robot to the International Criminal Court, can you? So a human being has to be responsible at all times. That's what makes the human race what we are. We have to be accountable for our actions, and just creating machines to go and do this sort of thing for us is hardly an excuse for atrocities when they occur.
John Rodsted: [00:16:12] I read that same report about the F-16 simulator in dogfights with a manned aircraft. And one of the things that struck me was that the robotic F-16 would go on a head-on attack at the other aircraft and close to within 100 meters, which is effectively suicidal. And people from the Top Gun school were saying you would never close in a head-on attack like that, because the chances of surviving are fairly slim.
It brings in the issue that machines are prepared to be suicidal because they're just machines, whereas humans still wish to preserve their own life, or generally do. So that certainly puts an advantage towards the machine, doesn't it? If it's prepared to be destroyed in the execution of its role?
Mike Smith: [00:16:55] Oh, totally. And of course, it's a lot cheaper. Now, of course, there have been many precedents where humans have been prepared to go on suicide-type missions and not cared about their own safety. But if armies, navies and air forces were encouraging their humans to do that, then those armies, navies and air forces wouldn't last very long, would they? So, if you can send machines in to do it and they're cheap, you can say, 'well, that's all right, we'll just make more machines.' And this is when I think it becomes extremely dangerous. Particularly if those machines are going in to kill human beings, not other machines.
John Rodsted: [00:17:38] And it takes us into that world of asymmetric warfare. Let's take the scenario of a large, powerful, industrial nation that has the ability to build lots of these weapons and stockpile them through the years of peace, and simply through the might of money be able to swarm and overpower its opposition.
Then it becomes a case where right and wrong rests in the hands of wealth, as opposed to any ideological issue. So that would just turn the situation into, I suppose, capitalism wins?
Mike Smith: [00:18:12] I don't quite see it that way, because technology is transforming at such a rapid rate, that there's no point in stockpiling...