Artificial Intelligence on the Modern Battlefield - Exploring Interdisciplinary Efforts with Colonel Chris Korpela and Professor Hitoshi Nasu
Episode 67th August 2023 • Inside West Point: Ideas That Impact • United States Military Academy at West Point


Shownotes

Join the conversation as we delve into the future of autonomous weapons, bringing together two experts from different fields to explore the potential benefits, risks, and ethical considerations surrounding this rapidly advancing technology. 

 

Professor Hitoshi Nasu, Professor of Law and expert on international security law, the law of armed conflict, and the law of weaponry, discusses incorporating the law of armed conflict into technological developments and the necessity of bringing a well-rounded discussion to the integration of new technology on the battlefield. Recently retired Colonel Christopher Korpela, a seasoned military practitioner and researcher in robotics and autonomous systems, discusses the complexities behind incorporating autonomous technologies into military operations. He highlights improved efficiency and reduced risk to human life as some of the key benefits. 

 

The conversation addresses misconceptions, emphasizes the need for public support, and highlights the value of interdisciplinary collaboration in refining perspectives and finding innovative solutions for the complexities of autonomous weapons. Explore these technologies' potential and lawful operation and join us in envisioning a safer and more secure world. 

 

Chapter Summaries: 

 

(0:00:02) - Start of the episode and introduction to the topic of AI and Robotics in modern warfare. 

(0:02:00) - Introduction to the guests, Korpela and Nasu, and their collaboration. 

(0:13:05) - Discussion on the differences between, and challenges of, autonomy and artificial intelligence in warfare, including the DoD's efforts to define a lethal autonomous weapons system. 

(0:17:05) - Discussion on the ethical and moral arguments against using technology in warfare. 

(0:26:20) - Discussion on the technical parameters for autonomous military systems, including the potential inclusion of legal parameters within the technology. 

(0:30:00) - Insight into the potential to team autonomous systems with soldiers. 

(0:31:17) – Discussion on how this work is developing discussions with external partners and cadets in the classroom.

(0:35:00) - Discussion on how autonomous systems could replace legacy systems and shift command responsibility. 

(0:37:00) - Discussion on the future changes autonomous systems may bring in the chain of command responsibility. 

(0:39:00) - Speculation on the potential changes autonomous systems may bring to the battlefield. 

(0:41:12) - Conclusion of the episode and final thoughts on the future of AI in warfare. 

 

Loved this episode? Remember to rate, review, follow, and share this podcast with others.     

 

 


 


 

    

 

This episode does not imply Federal endorsement.    



Transcripts


Dean: Today, I'm thrilled to host Colonel Chris Korpela and Professor Hitoshi Nasu for this episode, where we discuss the role of artificial intelligence on the modern battlefield. Their remarkable and different backgrounds coming together to work on this important topic is a great example of the interdisciplinary efforts that are happening all over our academy.

Welcome, gentlemen. Thanks for being here. Thank you.

Colonel Christopher Korpela: Thank you, sir.

Professor Hitoshi Nasu: Thank you, sir.

Let me start with Colonel Chris Korpela. As a researcher, he has coordinated research projects and grants across the United States Department of Defense, academia, and industry in the fields of robotics, autonomous systems, and artificial intelligence. He's an Apgar Award winner, which is West Point's highest teaching award, and an Andrew Carnegie Fellow. Colonel Korpela has testified at the United Nations as part of the United States delegation to the Group of Governmental Experts on lethal autonomous weapons systems in Geneva, Switzerland.

He is a member of the Institute for [...]

His first textbook, titled Aerial Manipulation, was released in print in September 2017. Of course, and most impressively, he's my West Point classmate from the great class of 1996. That's impressive, actually. Let me turn to Professor Hitoshi Nasu. Professor Nasu joined the West Point Department of Law in September 2021, and he is a member of the Lieber Institute for Law and Land Warfare and the Robotics Research Center.

Professor Nasu brings a wealth of expertise in the field of international law. With many years of research and teaching experience in Australia, Japan, and the United Kingdom, he has extensive publications on a range of military legal affairs, including a world class body of scholarship on advanced military technologies and the law of armed conflict.

He conducts interdisciplinary work with [...]

So with that as our background, I think this raises the first question: how did a robotics expert and an international law expert link up to collaborate on research related to artificial intelligence on the modern battlefield? Chris?

Colonel Christopher Korpela: So since [...]

Dean: So Hitoshi, how'd you get linked up with this effort?

Professor Hitoshi Nasu: I was teaching law and doing some research [...]

And the chance presented to me to work at West Point on this exact project, on AI and the law of armed conflict, was too good an opportunity for me to miss. So I jumped on it, and now I'm working with Colonel Korpela on various different issues.

Dean: Well, it's to our great benefit that you did decide to join this effort. How's the collaboration going between you two? It's almost like marriage counseling right here. So Chris, how's the collaboration going?

Colonel Christopher Korpela: Yeah, so we've been able to really get thought pieces out in terms of blogs, and we're working on a paper right now on rules of engagement. And then really trying to stay engaged with the DoD AI experts.

and our international partners, when [...]

Dean: Yeah. How about the collaboration from your perspective?

Professor Hitoshi Nasu: I think it's going well. It's really the occasional frank conversation we have over the coffee break.

It makes a huge difference. In fact, over the coffee break we often come up with really interesting ideas we want to pursue in papers or blog posts. So I think it's going well.

Dean: I'd like to highlight your background Chris, is very STEM heavy, obviously Hitoshi, you're a lawyer. Coming together in a, in a place where you drink coffee and bat ideas off each other is, is awesome for me to hear as the Dean, but what happens at these coffee breaks that, that makes this collaboration so successful?

[...] with us, because it is a high [...]

Dean: So let me talk to you individually. How is your research on artificial intelligence and robotics going in your respective fields? Chris, how's it going? What are you doing?

Colonel Christopher Korpela: My role, my job really, is to ensure that we are tied in with the Department of Defense scientific research community, right, ensuring that we're not stovepiped, that our efforts from the cadets and the faculty are supporting real DoD efforts, right?

And then of course we can be funded through these DoD sponsors, and usually I bring in research faculty to help the cadets and the civilian and military faculty that we have. So there are a number of efforts, whether it's threat recognition, intelligence, surveillance, and reconnaissance, or what's called SLAM, simultaneous localization and mapping.

right? So there's semantic [...]

Dean: So is there an example you can give of where this is happening, contemporaneous with the ongoing conflicts in the world or any type of security situation?

Colonel Christopher Korpela: So, a few examples. One is the Army Artificial Intelligence Integration Center out in Pittsburgh.

They're nested within Carnegie Mellon University, and we have an effort with them called auto-adjust fire. We can set up a small UAS, an unmanned aerial system, commonly known as a drone, to detect where artillery rounds are impacting and then provide corrections back to the firing point.

Just like a human forward observer is going to observe those rounds, can we have the drone do that and then provide those corrections back in a more accurate and expedient manner?
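(For readers who want a concrete sense of what "providing corrections back to the firing point" involves, here is a minimal sketch of an observer-style correction computed from an observed impact point. The coordinate frame, function name, and sign conventions are illustrative assumptions, not the Army AI2C auto-adjust-fire implementation.)

import math

def adjust_fire_correction(target, impact, observer):
    """Compute a simple observer-style correction (add/drop and left/right, in meters)
    from an observed impact point back to the intended target.

    target, impact, observer: (x, y) grid coordinates in meters.
    Illustrative sketch only, not a fielded fire-direction system.
    """
    # The observer-target direction defines the reference line for the correction.
    ot = math.atan2(target[1] - observer[1], target[0] - observer[0])

    # Miss vector from impact to target, expressed in the observer-target frame.
    dx, dy = target[0] - impact[0], target[1] - impact[1]
    add_drop = dx * math.cos(ot) + dy * math.sin(ot)      # positive = add range, negative = drop
    left_right = -dx * math.sin(ot) + dy * math.cos(ot)   # sign convention: one side positive, the other negative

    return {"range_m": round(add_drop), "deflection_m": round(left_right)}

# Example: rounds impacting short of the target along the observer-target line.
print(adjust_fire_correction(target=(1000, 2000), impact=(900, 1960), observer=(0, 0)))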

Dean: Is this something we're seeing right now, in Ukraine for example?

Colonel Christopher Korpela: Absolutely. We've all read the article about the 15-year-old operating his commercial drone and calling for fire.

Right. So there are a number of examples we see in Ukraine right now where drones are being used to call in direct fire, and very successfully.

Dean: Yeah. Let me switch to you, Hitoshi. So give me a little bit of your research background.

Professor Hitoshi Nasu: Sure. Yes. So my research has been more about how various military applications of new technologies have had an impact on the law of armed conflict and interacted with that body of law and its operation in the context of military operations. In the past I dealt with, for example, legal issues arising from military applications of nanotechnology and invisibility technology.

Dean: Hold up, are those even real things? Invisibility technology? I mean, give me some details on invisibility.

Professor Hitoshi Nasu: Absolutely. So [...]

So it's like Harry Potter's invisibility cloak; science is actually working on it.

Dean: Wow. Okay. And then nanotechnology, tell me a little bit. When you say nano, how small are we talking?

Professor Hitoshi Nasu: Sure. Nanotechnology itself doesn't create anything small. It is an engineering technology that manipulates matter at a very small, nano-sized scale.

and those nano air vehicles [...]

So this is really an innovative way of using nanotechnology with unmanned vehicles.

Dean: So in this convergence between your two fields and disciplines, you bring not just the legal applications but also some of the theoretical thoughts you've discussed, from invisibility to nano, into the operator's or practitioner's world.

And the application as you plug into the Army, is that how this has worked? Are you basically taking a holistic approach to some of these new technologies and how they can be applied in a military setting?

Colonel Christopher Korpela: Yeah, I mean, we can look at the umbrella of emerging technologies and how they're going to change the way that soldiers operate.

It's gonna be much more [...]

So 20, 22 years ago, if we sat there then and thought about where we'd be in 2022, we probably expected to be much further along than we really are, right? So the speed has been slower, but with these rapid advances, the speed of change is going to be faster than it has been.

Dean: So you think it's accelerating?

Colonel Christopher Korpela: It's gonna be accelerating. Absolutely.

Dean: So let me pivot here a bit to artificial intelligence and autonomy. I read your article; you have a great blog post titled "Stop the Killer Robot Debate: Why We Need Artificial Intelligence in Future Battlefields."

Historically, there have been attempts to ban weapons. So, famously, in [...]

That obviously also failed with the advent of World War I. And then again, leading up to World War II, there was a very strong movement to ban the use of airplanes in any type of armed conflict. And obviously that didn't work either. 

And for me it seems as if this is a losing effort by those who would like to absolutely and preemptively ban the use of autonomous weapons or the use of artificial intelligence, because technology always finds its place onto the battle space. But I just give that as a little bit of background; when I read your article, it immediately made me think about that.

[...] killer robots? I might, and I might be [...]

Colonel Christopher Korpela: The advances in technology, in terms of autonomy, in terms of artificial intelligence and machine learning, are not inherently bad. I think that any time they use terms like "the weaponization of AI," right? That's a common phrase. And there's so much more to AI and ML than just...

Dean: ML being Machine Learning?

Colonel Christopher Korpela: Machine learning. Yep, exactly. So artificial intelligence, machine learning, commonly we'll refer to them as AI and ML, and there's nothing that is inherently bad.

Right? Are there concerns? Absolutely: in terms of bias, in terms of adversarial techniques, in terms of reliability and trust, right? That's of course why the DoD came out with the ethical principles for artificial intelligence, to get ahead of these concerns, right?

be reliable, that a soldier [...]

So that's one small piece; there are many others. And I can go into topics like distinction, which is one that we often look at: how can we identify uniform patterns, right, to try to identify our friends, our coalition partners, potential enemy combatants, and civilians, and leverage those tools to do that?

Dean: Yeah. So let me ask you this, Chris. You've had a lot of engagement, not just on the technical side, but in engaging with the international community. First, a preliminary question: can you, for those listening, discuss the distinction between artificial intelligence and autonomy?

I know the two are oftentimes conflated, so can you do that?

Colonel Christopher Korpela: Right. So with DoD Directive [...]

So the two are, we'll say, distinct. Autonomy really is the ability to operate without further human input, right? So if I want a robot to come into this room and localize, right, and map that room, that robot or that agent can operate without instructions from a human.

Just like when we sat down at this table: we immediately made a map of this room, we immediately understand exactly where we are, our position and location in this room. Trying to get a robot to do that, we can say that that's autonomy. And then of course there are the various levels of autonomy, most of those applied to the self-driving car industry, right?

They use fiducial markers, signs, [...]

How can a machine mimic human intelligence, right? We'll stick with narrow AI, right? General AI, that's maybe another discussion. But in most of our applications with narrow AI and machine learning, we're using graphics processors, right?

Thousands of cores, hundreds and thousands of processing cores, to implement a neural network to process data and then infer what those new images are. It could be video, it could be still images, and we're trying to detect or correctly label: is that an apple?

There's also many [...]
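(As a rough illustration of the kind of narrow-AI inference Colonel Korpela describes, a pretrained network labeling a still image, here is a minimal sketch using an off-the-shelf classifier. The model choice, the file name, and the example output are illustrative assumptions, not the systems used at West Point.)

# A minimal sketch of narrow-AI image classification with a pretrained network.
# The model (ResNet-18 on ImageNet) and the sample image path are illustrative only.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()    # inference mode

preprocess = weights.transforms()                  # resize/normalize as the model expects
image = Image.open("apple.jpg")                    # hypothetical still image
batch = preprocess(image).unsqueeze(0)             # add a batch dimension

with torch.no_grad():                              # no gradients needed for inference
    logits = model(batch)
    probs = torch.softmax(logits, dim=1)

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()] # e.g. a variety of apple for an apple photo
print(f"{label}: {top_prob.item():.2%}")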

Dean: So let me ask you, Hitoshi, back to this idea about historical attempts to prohibit technology from getting onto the battle space. There have been ethical and moral arguments made against using technology in warfare; in the past, that hasn't worked, obviously. Express positive law attempts to prohibit through international agreements, that hasn't worked.

There have even been some discussions under customary law, using the Martens Clause, to try to prohibit artificial intelligence use in the battle space. All of those seem to be losing arguments. And so do you see it as even pragmatic or realistic that there could be a stoppage of this type of technology being used in battle?

No. In the contemporary battle space?

Professor Hitoshi Nasu: [...] There was a similar debate about the military use of drones.

And many critics at that time warned about the reckless use of drones for lethal targeting, because they argued that soldiers would not experience the same emotional reaction as they otherwise would when they used lethal force against a person in front of them. But a decade later, no one questions the legality of using drones for military purposes.

And indeed, there is even scientific evidence to suggest that human operators experience the same emotional reaction even when they are remotely piloting and controlling those drones for lethal targeting. So those critics have now been proven wrong, because their argument was based on speculation.

[...]

They will run out of control, they will start killing everyone that fits certain descriptions, and so on. They're causing the public to fear this type of weapon. But all of these allegations are based on mere speculation. So what we are very concerned about is this kind of one-sided view being presented to the public in the mainstream media.

[...] military decision-making. That's [...]

Dean: Have you seen the movie Terminator? Well, yes, I did. I mean, that's what has informed me in this debate: the Terminator.

Professor Hitoshi Nasu: A good example, if I can give you one, sir, is the Microsoft workers who protested against the company about its proposed involvement in the development of a piece of military equipment.

For the Department of Defense, personally, I didn't see any problem or ethical issues with this particular piece of equipment; I think it was the augmented reality goggles called HoloLens. But this ill-informed view had a very negative impact on the defense industry as a whole.

Because we need their cooperation and involvement to make sure that our soldiers will be ready to fight the next war.

Dean: Let me ask you, either of you: if you go [...]

What's the consequence if there's a significant percentage of the population, or of experts, who say we're gonna stop this, we're gonna put an absolute ban on it? If you follow what I said, it's irrelevant. So why engage in the debate?

Professor Hitoshi Nasu: It's the public support we need, sir.

In countries where the democratic voice of the people doesn't matter, they can go ahead and use all sorts of technologies to their advantage. But we in the United States, we value democracy and the public's voice about what we do and how we operate. Without the support of those engineers, those scientists and technologists, we cannot really go ahead and produce the really cutting-edge technologies

we need to fight the next fight and win the next war.

Colonel Christopher Korpela: [...] at the table, right? We need [...]

No commander wants to employ a weapon system that cannot be controlled, right? There's no utility in that, right? And so we're trying to convey those principles that we all abide by, right? Commander's authority, commander's responsibility, based on their training and our doctrine, right?

Those are all things that others may not necessarily understand, and we want to try to help inform them. And again, it is of course about not engaging in warfare if we don't have to, right?

If we have the ability to defeat the adversary and they know that, then they will typically not engage with us. And yeah, we just need to bring everyone to the table.

Dean: So when I listen to this, there's a pragmatic component. There's really three parts.

[...] potential misperceptions as well [...]

And then those provisions will be followed by law-abiding states and nations. But those provisions can also be weaponized against those law-abiding states by those who are not as interested in complying with international law and their international legal obligations. Which means that this debate really is an important one, so that the law develops in such a way that there's enough legal space for the military practitioner, from the United States or any democratic nation, to be able to operate successfully in the contemporary battle space.

Do you agree with that?

[...] weapons. It's likely to cause [...]

Dean: What's the most valid criticism you've heard so far on the use of autonomous weapons or the implementation of AI in military operations?

Professor Hitoshi Nasu: AI has a variety of different applications across the spectrum of military operations.

So there is always military decision-making, the long process of military decision-making, behind it. So we have to understand the context in which any AI-based system is going to be used in warfighting.

Dean: Do you think there are any valid concerns or criticisms behind the campaign to stop killer robots, as they call it?

Colonel Christopher Korpela: [...]

The machine determining who is engaged and who is not engaged, that is not the intent of anyone that I collaborate with. There's no intent there, right, to transfer agency to a machine. Is it a tool that can help us distinguish between combatants and non-combatants?

Absolutely. Is it a tool that helps us make better decisions and deliver the appropriate effects? Absolutely. I understand the concerns, okay, whether it's swarming capabilities or race conditions. Are there chances algorithms can go wrong and there's going to be a mass casualty event? There are those possibilities.

But again, it just goes back to testing and evaluation and verification of these systems, and a constant process of ensuring that they are abiding by the DoD ethical principles.

Dean: How do you see your joint work helping to harness the potential of these technologies in the interest of national security? I'll ask you first, Hitoshi.

Professor Hitoshi Nasu: So, what we are currently working on is to find and set technical parameters that can be programmed into autonomous weapon systems to ensure that their operation will always and fully comply with the legal and policy requirements.

It requires legal input to [...]

Our project is not about transplanting the law of armed conflict rule books into autonomous platforms. It's not as simple or easy as that. What we are trying to do is develop a series of technical parameters that can be programmed into a system architecture that regulates autonomous functions.

And this idea actually came up when Colonel Korpela and I were having a conversation; again, it was just an informal conversation between us about AI and the law of armed conflict. We both knew that rules of engagement are always used to execute military missions in compliance with legal requirements and policy demands.

[...] of autonomous systems can be [...]

And I thought, well, that's an interesting and fascinating thought, and it was worthwhile pursuing as a potential big research project.

Dean: And so when you're talking rules of engagement, oftentimes the rules of engagement include both policy and law, and policy shifts all the time. How do you account for the policy shifts if you're actually implementing the ROE into the machines themselves?

[...] needs to be put in place that's [...]
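(To make the idea of ROE-style technical parameters concrete, here is a minimal sketch of how legal and policy constraints might be expressed as machine-checkable parameters that gate an autonomous function, with policy shifts handled by changing parameter values rather than the control software. The field names, thresholds, and rules are illustrative assumptions, not the authors' actual system architecture.)

from dataclasses import dataclass

@dataclass
class RoeParameters:
    """Illustrative ROE-style parameters; values are policy-adjustable, not fixed in code."""
    min_classification_confidence: float   # required confidence that a track is a lawful target
    allow_lethal_engagement: bool          # policy switch: may the system apply lethal force at all?
    require_human_confirmation: bool       # if True, escalate the decision to an operator
    restricted_zones: tuple                # areas where engagement is prohibited

def engagement_authorized(roe: RoeParameters, confidence: float, zone: str) -> str:
    """Gate an autonomous engagement decision against the programmed parameters."""
    if zone in roe.restricted_zones:
        return "HOLD: restricted zone"
    if confidence < roe.min_classification_confidence:
        return "HOLD: identification below threshold"
    if not roe.allow_lethal_engagement:
        return "NON-LETHAL ONLY: lethal engagement not authorized by current ROE"
    if roe.require_human_confirmation:
        return "REFER: human confirmation required"
    return "AUTHORIZED"

# A policy shift is handled by updating the parameters, not by rewriting the control software.
current_roe = RoeParameters(0.95, False, True, ("hospital_district",))
print(engagement_authorized(current_roe, confidence=0.97, zone="sector_4"))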

Dean: So let me ask you, Chris. I don't disagree that it's not a legal constraint but a technical parameter to try to enforce the principle of distinction in a machine.

Yet I do find it difficult, because the principle of distinction very clearly lays out in international law that states have an obligation to distinguish between civilians and combatants, and between military objectives and civilian objects, and only to target those things that are military objectives.

Yet the principle of distinction has been increasingly blurred in the last 20 or 30 years, making it extraordinarily difficult to distinguish between a civilian and a combatant, or between a civilian that's directly participating in hostilities and a civilian that's not. It's been so difficult that our own human actors have a very difficult time discerning between the two.

So to say that a machine, even [...]

Colonel Christopher Korpela: We believe so. And again, we see this as a teaming aspect, right? The agent is not necessarily operating on its own; it's implementing a rules of engagement card much like a soldier would at a checkpoint, right?

The agent has these technical parameters that can help it.

Dean: When you say agent, what do you mean?

Colonel Christopher Korpela: So a system, an autonomous system. Let's say it's a teammate to the soldier, right? Maybe they're both manning the checkpoint and they're going through their rules of engagement checklist, right?

Someone's approaching the checkpoint. And again, with an autonomous system, it can take more risk than a soldier would, right? It doesn't necessarily have to use a lethal mechanism. And maybe in this rules of engagement construct, we do not allow the autonomous system to engage lethally, right?

[...] engagement that lead up to a [...]

Dean: So you could basically program in escalation of force measures.

Colonel Christopher Korpela: Absolutely.

Dean: And that would seem advantageous compared to a human actor, because you eliminate emotion.

Colonel Christopher Korpela: Emotion and fear and, uncertainty and fatigue and a number of other things.
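(A minimal sketch of what a programmed escalation-of-force ladder for a checkpoint teammate might look like. The step names and the rule that lethal force is always referred to the human soldier are illustrative assumptions drawn from the conversation, not doctrine or a fielded system.)

# A minimal sketch of an escalation-of-force ladder for a checkpoint teammate system.
# The step names and trigger conditions are illustrative assumptions, not doctrine.
ESCALATION_LADDER = [
    "audible warning",         # hail the approaching person or vehicle
    "visual signal",           # lights, signs, spotlight
    "physical barrier",        # deploy an obstacle or block the lane
    "non-lethal effect",       # e.g., a disabling measure against a vehicle
    "refer to human operator", # lethal force stays with the soldier, never the machine
]

def next_measure(step: int) -> str:
    """Return the escalation measure for a given step; the ladder never authorizes lethal force."""
    return ESCALATION_LADDER[min(step, len(ESCALATION_LADDER) - 1)]

for step in range(6):
    print(step, next_measure(step))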

Dean: Chris, how is this work generating discussions with cadets in the classroom, or with internal partners across the academy, or external actors beyond? Of course, I'm primarily focused on developing the cadets in the classroom and helping them be those sophisticated thinkers we need.

Colonel Christopher Korpela: So yeah, we bring these discussions into our traditional engineering courses. We've been working with PY201, the philosophy course that the sophomores take, and of course Hitoshi has talked with his students.

[...] platoon leaders, but as company [...]

Dean: Do you see it that way, Hitoshi? Do you see that today's cadets are gonna have to be comfortable with this type of technology when they execute military operations?

Professor Hitoshi Nasu: [...] Armenia and Azerbaijan in [...]

It's quite clear. These are the essential tools for the military to fight and win future battles. So they have to feel comfortable, they have to be confident, and, most importantly, commanders must have trust in the ability of those devices and systems to operate lawfully and in the way they expect them to behave.

Dean: How do you, Chris, think [...]

There's an expectation that they make those decisions. There's that art of command. And it seems that these machines may be pulling some of that away from the commander, and responsibility might actually lie at a much earlier stage, in either development or acquisition or whatever.

Is that true or false? Do you see it that way?

Colonel Christopher Korpela: We want commanders to feel the benefit of having autonomous systems, right? Whether it's a robotic combat vehicle, or short-range recon, a small drone or multiple drones, that are replacing a lot of the legacy systems that we've seen over the last 20 years.

[...] research and development community [...]

If it takes three operators to operate the drone, then that's probably not the direction we need to be going.

Dean: But do you see this as complementing commanders and their authorities versus replacing them?

Colonel Christopher Korpela: There should not be a concern about them losing something here. Okay, well, we have these autonomous systems that are potentially replacing soldiers, right?

It's still increasingly complex and increasingly technical, but they should see it as, okay, hey, this is my formation and this is my team to execute the mission, really how they do it now.

It's just different equipment and different capabilities.

Dean: [...] which incentivizes the use of [...]

Do you see the same with this, that the use of autonomous weapons would dehumanize and disconnect the human element from warfare such that it could actually potentially increase violence?

Professor Hitoshi Nasu: Well, the drone is already in use in various combat situations. But ultimately the decision has been made at a very high level in political circles.

They are the ones who make decisions, but I do not think that the ability to project force remotely would change their calculation so much. In essence, they will be looking at the actual outcome, what's going to happen to the country, to the populations, and ultimately they want to influence the public view of the potential adversaries or the enemy state so that they can engage and create a favorable situation through the use of military force.

That essence never would [...]

Dean: So where do you see us in 20 years? Chris?

Colonel Christopher Korpela: I'll be retired, is my plan, but...

Dean: No, you're gonna be recalled.

Colonel Christopher Korpela: Recalled back to duty. I'll be 85 in 20 years. No. So, yeah, we see these ideas about the singularity, right? I think it's what, seven years from now? And I don't think we're anywhere near there.

Where machine intelligence exceeds our own. But as I did mention, over the last 20 years, sure, we've seen technical improvements. Of course, the tank that we were on is not much different now than it was 20 years ago.

[...] to deter aggressors, to deter [...]

And again, we wanna develop these systems to be reliable, to achieve greater proportionality, to achieve better distinction, less human suffering, right? I mean, 20 years from now, there's a good chance that it will be illegal to fire a traditional artillery round, because once it leaves the tube, it's never coming back.

Right? There's no way to recall it, to render it inert, right? So sensor-fuzed munitions, which we see now, right, which are being debated as an autonomous weapon, are achieving a smaller and more distinct effect. So we're gonna see that more, and I think 20 years from now it could be illegal to use a round like that, just like it would be for not using a precision-guided munition from an aircraft, right?

That's not happening anymore. I would say ground forces using traditional artillery rounds that are not sensor-fuzed, that's gonna be a war crime.

Dean: That's interesting, and I [...]

Professor Hitoshi Nasu: I think this is the way to increase the humanitarian benefit.

I think in 10 years' time, I don't know how many people will still see lethal autonomous weapons as an issue and seriously consider banning that type of weapon. Because this is the path we will be going down, and this is the direction we are heading with the advancement of technologies, military necessity, and humanitarian considerations.

Dean: [...] innovation ecosystem. And [...]

I'll ask each of you. So Chris, how has your collaboration with Hitoshi and the Lieber Institute helped refine or change how you have viewed this problem?

Colonel Christopher Korpela: Yeah, it's really important for all of these disciplines, the STEM disciplines and the humanities disciplines, to work together to answer these problems.

Hitoshi is gonna see it through a different lens than I do. I see it, of course, from the technical lens, maybe not necessarily considering aspects of international humanitarian law or LOAC, or the rules of engagement, that he has seen, and he's bringing that expertise into the discussion.

So it's really important to engage outside of our own disciplines and work with others. Yeah.

Dean: How about you, Hitoshi, about your work with Chris? How has it changed how you have viewed this topic?

Professor Hitoshi Nasu: [...]

And it's great, because whenever I need some technical input that needs to be confirmed for a particular legal argument or analysis, I can simply ring him up, I can simply email him and ask him to check: do you think this is correct? Is this right? And he can immediately respond to my questions. That's a real opportunity.

That's a very precious opportunity for me, and I'm sure for many legal academics out there, and we have that luxury of collaboration here.

Dean: That's the point, which is to [...]

And that's the only way we will solve the problems. All right.

Gentlemen, I'd like to thank both of you for participating. I appreciate it. It's been a lot of fun. Thank you.
