The Marvel of Risk, With General Stan McChrystal
Episode 2 • 18th August 2021 • Tech Transforms • Carolyn Ford


Shownotes

Risk taking is unavoidable when it comes to modernization. Bestselling author and retired four-star U.S. general Stan McChrystal outlines 10 control factors to help citizens and agencies alike take smarter risks. Carolyn and Mark also get some early insight into Stan's upcoming book Risk: A User's Guide.

Episode Table of Contents

  • [01:44] Stan McChrystal of the Team of Teams
  • [10:54] A Story of a Defensive System by Stan McChrystal
  • [18:26] Stan McChrystal Talks About Inertia
  • [25:57] Why Stan McChrystal Doesn’t Want to Worry About External Threats
  • [35:40] Artificial Intelligence Based

Episode Links and Resources

Stan McChrystal of the Team of Teams

Carolyn: We are joined this morning by retired four-star U.S. Army General Stanley McChrystal. We've had the pleasure of talking to General McChrystal on a few occasions. Good morning, General McChrystal.

Stan: Call me Stan, and it's an honor to be with you again.

Carolyn: Let me give our audience, for those that have been living under a rock, just a few more of your credentials. Stan is a former commander of the U.S. and International Security Assistance Forces, ISAF Afghanistan. He's the former commander of the nation's premier military counter-terrorism force, JSOC. He is best known for developing and implementing a comprehensive counter-insurgency strategy in Afghanistan, and for creating a cohesive counter-terrorism organization that revolutionized the inter-agency operating culture. Is it fair to say, Stan, that's the basis for your book Team of Teams?

Stan: It was certainly the foundation of it, and then our study beyond that.

Carolyn: Honestly, Team of Teams, just a little plug here, is the best book on leadership that I have read. If you haven't read that one, do that. But we're here to talk about a new book that will be out this October that is also sure to be a bestseller. Mark and I got to have a sneak peek. We got to read an early copy of the manuscript. We're here to talk about your new book, Risk: A User's Guide. We’d like you to talk to us a bit about the 10 risk control factors, and the four measures that are the foundation for your new book Risk.

Calculating Risks

Stan: We decided to take on risk as a subject because, through my career, there had been processes to follow for calculating risk and acting on it, ways to measure the threats or risks to your organization. But it never connected with how we actually did it. Now, certainly there are some financial firms that use financial models that theoretically do this. But if you look at so many things in our lives, there's one way we talk about risk, and then there's another way we actually respond to it.

I wanted to understand what the disconnect was. Of course, that had been my experience as well. In most cases in my career, we had done checklists, matrices, and calculations to come out with a risk score. But the reality was most of our reaction was intuitive. And so we decided to study risk. What we came away with was the idea that each organization, and each individual actually, but organizations particularly, have something which I'll call a risk immune system.

It's a system consisting of 10 factors, such as communication, diversity, bias, and timing. Those things interact together to determine the ability of the system to respond to and prevent threats from undermining the organization. It's very much akin to the human immune system. If you're not familiar with it, the human immune system is a marvel. We face about 10,000 pathogens a day that come to our body, any one of which could harm us or kill us, but we don't think much about it.

The Theory Behind Vaccines

Stan: We don't have to, because the body has got a process in which our immune system detects all the threats that come, assesses each one, responds, kills off the ones that need to be killed, and then learns from the process. The miracle is it gets smarter, and that's the theory behind vaccines. You build up an immunity so that the next time it's very easy to fight off that known threat.

The human immune system is this marvel that we sort of go through most of our lives taking for granted until it's compromised, like with HIV/AIDS or another assault on our system. When it's weakened, suddenly we fall prey to threats that otherwise wouldn't be a problem for us. Really, nobody ever died of HIV/AIDS itself. What they died from was other lesser things that the body was unable to combat.

Now we come back to organizations. It was interesting: we started to write this book and do the research, and COVID arrived. It became almost a perfect example of what we were talking about. If we think about COVID-19, someone says it was a black swan event. Who knew that a pandemic was coming? The answer was, everybody knew. We'd been through it before many times in world history, particularly the Spanish flu.

Most notably in 1918, and in a smaller sense many times since. In fact, in 2019, just a few months before COVID-19 arrived, the Department of Health and Human Services ran a series of exercises. They were called Crimson Contagion, and they were based on a scenario of a viral threat: a pathogen coming out of China, coming to the United States, going around the world, and wreaking havoc.

Stan McChrystal Talks About COVID-19

Stan: The lesson from that set of exercises was that the United States had not done enough preparation, had not stockpiled enough supplies, had not worked enough of the processes, and therefore paid a heavy cost. That was only a few months before COVID-19 arrived. The interesting thing about COVID-19 is we knew the threat was inevitable. You don't know the exact strain of the pathogen, but you know that kind of assault is inevitable.

The second thing is you know exactly what to do about it. Public health is not a new science; we knew the basics that we had to do. We had to stockpile things and take certain steps. Of course, we pulled a rabbit out of a hat in terms of developing vaccines faster than any time in history. But except for that, the world's response to COVID-19 has been very weak. It's been very weak because of weaknesses in many parts of the risk immune system.

I would argue that society's ability to communicate effectively, to make decisions on time, to overcome the inertia of inaction, and to have the kind of leadership that emerges and brings all capabilities together all failed us. We literally stumbled on every one of them with COVID-19. It's a tremendous but sad example of how important these factors are, because this wasn't a scientific failure. In fact, COVID-19 is a scientific triumph. It is a societal and governance failure in our analysis.

Mark: Or leadership failure, if you look at the 10 dimensions of control. How did you come up with the 10 dimensions of control?

The Antidote Stan McChrystal Concocted

Stan: We decided to take a look at those things which were important factors, and which were distinct or different enough to be categorized. You probably could have had 12, or you probably could have had 8 if you'd put some together. There's some overlap between bias and diversity; they're a little bit akin to each other. If you have biases, the antidote is diversity, different perspectives.

They could be linked to communication and narrative, but they're also distinct in themselves. We wanted readers to understand that these factors are all things which should be consciously addressed by an organization trying both to be sure it has a healthy risk immune system and to improve or strengthen it.

Carolyn: One of the risk control factors is technology, which I want to focus on since this is Tech Transforms. You start that chapter on technology with a quote that I loved. You said, "Technology raises a new question: who or what is in control?" This is something that I think about almost every day and have since I was a kid watching Star Trek. And this got my head spinning about how we ensure that technology is an advantage instead of a disadvantage.

Who's in control? Is it helping the agencies that are using the technology within the government? Is it helping the citizens and warfighters that those agencies are serving? Can you talk about that? Who is in control when it comes to technology?

Stan: The answer is it should be us. But if we go back in our history, in the book we refer to the movie Fail Safe, an early 1960s movie. If anyone hasn't seen it, you ought to watch it.

A Story of a Defensive System by Stan McChrystal

Stan: It's a story of a defensive system implemented by the United States based on technology. It essentially allows the United States to strike the Soviet Union without being stopped. It's got a defensive aspect to it, so it can analyze whether there's a threat coming and then launch a counter-threat.

Of course, it malfunctions, signaling that there is an attack, and then it launches a counterstrike. Humans are not able to recall the counterstrike. So in the desperately tragic final scene of the movie, the president of the United States works and deals with Soviet leaders. After the United States bombs Russia, which we cannot stop our planes from doing, we bomb New York City ourselves as a tit for tat to prevent further war.

Now to that question you ask, who's in charge? The answer is, because of the dependence upon very highly technical devices, they can get ahead of us. If we fast forward 60 years, we've got artificial intelligence and things like hypervelocity missiles. We've now got response systems where you have to let the machine respond based upon inputs from its collection. There's no time to put a human in the loop.

We always say we'll put a human in the loop if it's got anything to do with lethal effects. But you can't do that and make it work. The reality is you either depend upon the machine, or you have a much slower human system, which probably is not fast enough to deal with some of these threats. We're building threats that make us dependent upon technology-based responses.

Human’s Last Touch

Stan: At a certain point, the human's last touch of this thing is when we craft the system and if we get it wrong, or if someone spoofs the system or corrupts it in some way, there's tremendous vulnerability.

Mark: I think of the movie WarGames in the '80s, where they had the WOPR. They were simulating nuclear attacks, ominously.

Stan: It's terrifying. We also refer in the book to some things that are more mundane, but they're pretty important. For example, most companies have implemented automated voice or automated telephone systems. You call your favorite company and they say, if your problem is X, dial one, or press one. You sit through this thing and you get more and more frustrated. And you want somebody to fix your problem.

It's much cheaper for the firm to do, but how many times have you taken your business elsewhere? How many times do you just say, "I give, I want to talk to someone who will accept my problem and fix it." That's a hidden cost or a hidden risk that technology gives us that we're not even sure we can measure.

Mark: It seemed to me, reading through the book, that you laid out the 10 dimensions of control across different use cases: human analysis, decision-making, et cetera. The question I've got, as it relates to technology and AI, is whether there's a possibility of taking the risk immune system and the 10 dimensions of control and applying them to artificial intelligence, so that AI can assist humans in this process and move faster. Even in the book, in a couple of examples, things start moving so fast. I wonder if we're too slow in that game.

Stan McChrystal Reveals What AI Could Do

Stan: I think we are. The first thing AI could do for a system like that is tell us what we're not doing. If you think about it, a problem comes, a fire starts in your kitchen, and you're worried about getting your kids out first. An AI system could pull all the factors together and it could remind you. It could say, "Wait a minute, you got this wrong. You haven't done this." Et cetera.

As conditions change, AI with the right detection out there could bring that information in, so that it could widen the aperture of the organization or individual making the decision and potentially respond more effectively. The problem is we as leaders have to understand AI much better than we do right now. We are going to get instructions from artificial intelligence in the future.

We're not going to have the time to dissect them or, through our own processes, compete with them. We are either going to have to accept it or not. It will say, do this. It's going to be almost an act of faith, because artificial intelligence can bring so many data sources together, draw conclusions, and make a recommendation. We can't compete with that, so we're going to have to, at a certain pace, take it as an article of faith. That means we really need to understand what our data sources are and how the system works.

Carolyn: It takes us down a very scary rabbit hole too, where we've seen arrogance, laziness, I don't know how you want to label it.

Brains in the Footlocker

Carolyn: But for whatever reasons, we just put our brains in the footlocker to quote one of my favorite quotes from your father. We want somebody else to do the thinking for us, we want the technology to do the thinking for us. Which is why we've heard many stories of people driving off the pier into the ocean because the GPS told them to. I think the idea of, can we get AI to the point of making all these decisions for us? It's a scary thought to me. It's the stuff that science fiction has been built on for the last 50 years.

Stan: No, you're exactly right. My wife and I were driving just last weekend. And my pet peeve about most of the GPS systems is they drill in and they show you a very small area. You just turn right or turn left. With my background, from the military, I want to see the big map, I want to see where I am. I want to see the route it's chosen, I want to do that the whole way. It automatically doesn't want to do that, it just wants to tell you what to do. They may be right, and they may not be right. That's the issue.

Carolyn: My dad, before we would go on any trip, he'd pull out the map and he'd make me look at it. I'm like, "I've got GPS. I don't need to do this." And you know what? Technology is awesome. It gets me where I want to go until it doesn't. Single sign-on is awesome until it doesn't work.

Stan McChrystal Talks About Inertia

Carolyn: I want to shift gears a little bit and talk about inertia. This is something that you brought up multiple times in the book. You said, "In my physics class at West Point, we learned that, in the most basic terms, inertia tells us that absent external forces, things will keep doing whatever they're doing.

That's true not only if what they're doing is brilliant and successful, but also if what they're up to is silly and destined for failure." To use the military axiom, never interrupt your enemy when they're making a mistake. But shouldn't we interrupt ourselves? Do you think that we are allowing tech to move us forward from momentum rather than deciding our own direction?

Stan: Potentially, it can. Of course, we proved it in history; we didn't need tech to do that, we could make all those mistakes on our own. The problem is that tech can reinforce it. We build processes, we do things a certain way, and we do them to be more efficient with technology. To a certain degree, an operator of a machine or a computer does certain things, gets certain guidance or responses, and we think we're not smart enough to say, no, we shouldn't do that right now.

Often, we give people instructions in their position: no, this is what you follow, this is the process. If the system says two plus two equals five, you use five and move on. The problem is really sociological; it's our leadership and the human side, but it is aided and abetted by technology. It's easier to go faster, go further, and get way off track.

A Small Problem in a High-Speed System

Stan: There's a great story that was on the news recently about a high-frequency trading company. It was some years back when it happened, but one of their algorithms got off, and in the course of 40 minutes they lost billions of dollars. It was because of a small problem in a very high-speed system, jumping the track and boom. That can happen in our economy. It can happen in almost anything.

Carolyn: Back to what you were talking about with just leveraging tech. One of the things you also say in the book, and I assume this was early 2003-ish, when you were first reorganizing the way JSOC operated: you said the tech and the hardware let you have communications, which is another one of the risk factors, all over the world and across teams.

You said that technology was just as important as food and ammo. As I read the book, I kept thinking all of these things are interconnected; they rely on each other. Even as we're developing tech, it would be prudent to apply this model to the development of the technology itself.

Stan: I'd throw out a couple of ideas. What would have happened a year and a half ago, as COVID-19 sent us home and dispersed us, if we didn't have the level of technology that we enjoyed at that point? Let's say we didn't have the internet, let's say we didn't have telephones. A society this large and this interconnected would literally have stopped. Our Achilles' heel at that point was our ability to communicate, because it built our confidence.

Stan McChrystal Communicates the First Step

Stan: If we weren't being communicated with all the time, most of us would have panicked in some way. Society would likely have done that as well. We learned something very interesting in JSOC, and I'm not sure we're the first people to learn it. It became really clear to me, that as we were dispersed and we had to implement a lot more technology than ever, to
