“Don’t Automate Chaos: Why Most AI Transformations Fail”
Alternatives (more/less provocative):
“Rocket Boosters on Paper Planes: The AI Implementation Trap”
“AI Isn’t the Problem—Your System Is”
“Agentic AI, Real Risk: How to Avoid Scaling Dysfunction”
“The 80% AI Failure Rate: What Leaders Keep Missing”
“AI Transformation ≠ IT Project: The Systems Approach”
Episode summary (listing copy)
Companies are spending thousands — even millions — on AI. And then… confusion. Worse outcomes. More complexity. More opacity. Sometimes, real reputational or legal blowback.
In this episode of How to Build a Growth System, Colin and Chris unpack why so many AI rollouts are failing to deliver measurable value — and why the “race to AI” is pushing organisations into a dangerous pattern: automating broken systems.
Drawing on widely reported failure rates (including claims that ~80% of organisations see no measurable positive impact), they argue the core issue isn’t the model, the vendor, or whether GenAI “works.” It’s that leaders are treating AI like just another tool rollout, when it’s actually a business transformation problem.
The conversation explores:
Why AI often becomes “a rocket booster on a paper aeroplane”
How agentic AI can amplify risk when goals, rules, and context are unclear
Real-world cautionary tales (including public failures like AI drive-thru ordering and misguided regulatory chatbots)
The systemic causes behind bad outcomes: broken processes, contradictory information environments, weak governance, and unclear ownership
Why “move fast and break things” becomes far more dangerous with autonomous systems
The missing ingredient: systems education at the executive level
And crucially, they outline what to do instead: treat AI as a transformation programme, understand and redesign the underlying system first, and only then layer intelligent automation on top — with governance that enables speed through clarity, not just legal risk mitigation.
The takeaway is simple: AI can be a force multiplier — but only for organisations with foundations solid enough to multiply what works, not what’s broken.
Transcript
Colin (:
So Chris, I, and presumably everyone else, have been seeing this everywhere: companies rushing to implement AI, which is a fairly broad-brush term at this point, spending thousands, hundreds of thousands, millions, and then... nothing. Or not nothing, but outputs and outcomes that they're confused about, I guess, or worse.
Essentially what they've done is make things more expensive, complicated and opaque. I think in the research for this we looked at McKinsey's data on this, and it said that over 80% of organisations have seen no measurable impact from their AI investments. That's effectively billions just going up in smoke. So what's going on?
Chris (:
Well, here's the uncomfortable truth, I think, that nobody really wants to hear right now, because ultimately it slows down your headlong rush to beat your competitors to AI, which is what most companies are doing with their AI investments. And I think particularly if I think about agentic AI as being the current flavour of the month, and perhaps rightly so,
it's that they're doing the sort of business equivalent of putting a rocket booster on their paper aeroplane, you know. They're automating chaos. And when you automate chaos, you just get faster, more expensive chaos. And I'm not actually sure I agree with the McKinsey data. I think when they say no measurable impact from their AI investments, what they actually mean is no measurable positive impact from
their AI investments. And I think that's quite a scary thing to consider. As we're rushing like lemmings to the cliff, we need to stop and take stock, and I hope that's what we're going to talk about today.
Colin (:
Yeah, I was going to say.
Colin (:
So we're, very contrary to the theme of this episode, very AI positive, let's say. I was AI curious and now I'm AI positive. But everyone's talking about this massive opportunity, right? PwC, I think it was, said AI could add, I don't know, 16 trillion or 15 trillion dollars to the global economy by 2030, which is a...
a date that's looming sort of alarmingly close. Are we saying that that's not real? Like, do we take it with a grain of salt? What are we saying?
Chris (:
Well, I think we've had a bit of a theme recently, haven't we, of throwing around, you know, trillion dollar figures on this podcast. And actually, the fact that we've been doing that, and when we've dug under the surface of a couple of them, I think we've considered them perhaps to be conservative. I wonder if $16 trillion is conservative too. But so yeah, I think it's real. But there is a massive asterisk attached to that number.
Because that number assumes through fairly simplistic extrapolation of some fairly unreliable data, I would imagine, it assumes that companies will actually redesign their businesses to leverage AI properly.
And instead, what we're seeing at the moment is organisations trying to layer sophisticated technology, as I say, particularly this kind of agentic AI that can make autonomous decisions, that can actually take action within the rest of your tech stack, and they are layering that on top of fundamentally broken systems. And when I say systems, of course, on the Growth System podcast, I'm not talking necessarily about their Salesforce being broken. I'm talking about their human operating system, the way
that people, processes, data and tech interact together. And ultimately what they're doing is giving this brilliant, genius new employee that is the agentic AI access to every system in the company, and the ability to make decisions at human speed and to take action just as fast. And they're doing it with no real clear understanding of, you know,
what they're asking it to do, or how they're asking it to do it. They're not really providing the agents with any context. They are, as we sort of prefaced at the top, just automating the chaos that exists and delivering it faster. And that, unless it changes, is not going to deliver 16 trillion dollars of positive impact to the global economy.
Colin (:
Yeah, so this is... I'm curious to see what number they come up with when we get closer to 2030, or how the landscape changes. At the moment, it feels like we're in this sort of Wild West scenario where one minute we're hearing that GPT-5 is going to change the world and the AI race is dead, and the next we're hearing that 80% of...
I think it's an 80% failure rate in AI implementations in business, or I think the RAND Corporation said that AI projects fail at double the rate of normal IT projects. Like, how bad is this really? What's the sort of scale of this problem?
Chris (:
Hmm.
Chris (:
I think if you take the data at face value, it's catastrophic. I mean, you often get this with the big research houses that are pushing stuff out, but actually I think the signals here are really quite strong. When you've got McKinsey and Gartner and BCG and RAND, as you just said, all really converging on pretty similar numbers, as far as I can tell, of kind of
three quarters to, you know, 80%, 75 to 80% of AI transformations failing to deliver sort of meaningful value, I think you've really got to sit up and take notice. And I think when you particularly look at GenAI deployments, they are spectacularly failing to meet ROI targets. And
you know, I think as we were trying to dive into...
The problem is that so much of this is happening on the edge of the organisation. There was a stat, actually, in one of the studies, I can't remember which, but it stuck with me: that only 1% of executives describe their GenAI rollouts as mature. Okay, maybe there's an inevitability about that, with a new technology not having matured yet, but I think really what they're saying is everyone's in POC phase. We're loving POCs around the whole business.
We're just scaling stuff on top of things that already exist, and funnily enough, the ROI is not there. And why would it be?
Colin (:
Yeah, using it and getting value from it are different things. It seems like everyone's using it. Something like 78% or so of organisations have deployed AI in at least one business function, but that's not the same as success or maturity. It's like saying you bought a gym membership, which isn't the same as being fit, as I can testify to. But why
is the failure rate so high? These obviously aren't stupid companies, and they're not stupid people. They're smart people, big budgets, best-of-breed technology stacks. What is going wrong? I have a feeling that you're going to tell us it's a system problem.
Chris (:
I don't know if it is, actually, for once. I think the problem is that...
Well, I think there are quite a lot of problems going on here. But, you know, the 78% of organisations that have deployed AI in at least one business function: what does that mean, for a start? Like, they bought someone a ChatGPT subscription? Is that an AI deployment these days? You know, I think the problem is that we are rushing so fast as a business community, and I won't even level that accusation at any one industry, vertical or company size. Everyone's just rushing headlong to wanting to be
doing this thing. And I think the failure rate is so high because no one knows what they're doing. We always say: smart execs making silly decisions. I think this is a really significant example of that, because I don't think the education is there. I don't think... if this is a systems problem, it's not actually a systems problem. I think it's a systems education problem. I think that most executives
don't or can't, and are not equipped, have not been equipped by their companies, by their boards, or, you know, by the time they have available in the day, to really harness this thing that's coming down the tracks. Because this is not a new bit of SaaS, and I think people
are deploying AI like they deploy a new tool, one of the many hundreds that they have in the business. And they're not considering the deployment to be a system re-engineering, a systems thinking problem. A lot of the research, I think, focuses on, I don't know, problems having been misunderstood or miscommunicated, or they haven't got enough data, or
they're focusing on technology over problem solving, or they're automating broken processes, as we alluded to. But I think what's really underlying all of that is that the executives sponsoring these programmes and the people responsible for delivering them are not transforming their businesses. They are not re-architecting the way that their businesses create value in the age of AI.
They are layering AI on top of the stuff they're already doing, and that will not work. So I think that is why the failure rate is where it is: we're not even trying to fix the right problem.
Colin (:
Yeah, that's a good point. I like that framing of it as a systems education problem. And I guess that's what we're here for, right? So, we're talking in quite general terms at the moment, and I always love to dive into real-world examples. Quite often they come from the B2C world; there seem to be more colourful examples there. But have we got a good example of what this looks like in practice? What did you find in the research?
Chris (:
There are a few, aren't there? I mean, McDonald's. Everyone knows McDonald's. They partnered with IBM. And IBM, I think, have a bit of a legacy rap, but they have been in the AI game with Watson as long as anybody, doing some really cool stuff. And they partnered with them to create an AI-powered drive-through system, whatever that means. And I think that was probably the problem.
But you know, it sounds... I was about to say it sounds smart. Does it sound smart? I mean, okay, let's go with sounds smart. Sounds jazzy. Their idea was that they would reduce labour costs, they would improve order accuracy, and they'd be able to scale it across thousands of locations. And here's what actually happened, at least so far as I've read: the system, for some reason,
started adding precisely 260 chicken nuggets to many of the orders, which, you know, is a challenge, I'd say, but one you might try and take on. And the problem was it couldn't handle ambient noise. It couldn't really handle different accents. Well, yeah, I think it wouldn't work in Scotland, that's for sure.
Colin (:
Even my kids couldn't eat that many.
Colin (:
I'd be student.
Chris (:
And sometimes it even struggled with basic menu modifications. And actually the problem was that,
because this is McDonald's, and because it's in the drive-through, because it's in the public domain, and because, to grossly overgeneralise in a probably horribly unfair way, the sort of people driving through a McDonald's drive-through and trying to mess with the AI probably also have a high correlation with TikTok and Instagram users who'd love the video of that, it wasn't really the 260 chicken nuggets they had to cook that was causing the problem. It was really the fact that
they suddenly have this real reputational issue that manifests itself in a really horrible way.
Colin (:
Yeah, God, how frustrating... So essentially they haven't just automated chaos; they've sort of created chaos by layering what was clearly not a very mature technology on top of... I've worked the drive-through before, many years ago, and it's, to be fair, quite a slick process generally. I'm not sure it was the best choice of
area of the business to implement AI in. I've heard about even worse cases, to my mind. New York City: did they not launch an AI chatbot, a Microsoft-powered chatbot, to help small businesses navigate the incredibly complex regulatory landscape? That actually sounds like a great idea in principle compared to, say, improving the drive-through
Chris (:
Hahaha
Chris (:
Mmm.
Colin (:
process in the McDonald's.
Chris (:
I did hear about this one, and I'd actually really love to know what happened under the skin with this one, because, like you say, you know, indexing and interpreting information and surfacing it is pretty much the bullseye GenAI use case. You know, that should have been a really, really successful value-add thing. But as I read,
it was giving out some really quite, you know, Dickensian advice about how people could treat their tenants and what you could do with your workers and your tips. And I just don't know how that happened.
And it strikes me, actually, that it's either a failure of the regulations it was having to interpret, that they were in such horrendous legalese that they were genuinely open to interpretation, and the chatbot was probably doing quite a good job, just with unintended consequences; or actually there was something nefarious going on there. Because I just didn't get that one at all.
But, as ever in America, it resulted in some litigation, which is of course how it comes into the public domain. So yeah, some scary stuff going on, that's for sure.
Colin (:
Yeah, and I mean, it ended up creating legal liability and additional complexity, when really the whole point was to cut through all that. And it gets me thinking about why all these smart people haven't seen it coming. Organisations like New York City Council, which is, I guess, larger than most companies, or McDonald's: they've got smart people thinking about this. So, like, what
is happening here?
Chris (:
I think it's really what we said right at the start, which is that the project became: make the existing process happen faster and more cheaply. And as we discussed on the metrics episode, a couple of episodes ago, early in the season,
when you set the wrong target, or when you set a target like "make it faster, make it cheaper", you know, achieving that target becomes the end, becomes the goal. And when you do that, you get to the point where achieving that
goal starts to shape the behaviours of the entire project and of the people working within that project, and the stated aim becomes just to do that: let's do it quicker, do it faster, do it for the price that we quoted, because it almost certainly is a consultancy or a technology provider doing it. And what they didn't do, in both of those cases, is actually look at the underlying
system, or the underlying data, or the underlying processes that existed. So in the case of New York City, as I said, I suspect what was going on underneath is that the regulatory environment was a whole maze of contradictions. And, without getting too technical, you know, retrieval augmented generation typically relies on vector databases, which are
Colin (:
Good point,
Chris (:
all about chunked-up bits of data, and actually what they're not that good at doing is relating different bits of information to each other. You know, if this regulation says one thing, it will have an impact on this regulation over here.
And actually, unless that reference is explicitly made within the chunk of data that's been retrieved, it won't go and look at the rest and try to create a relationship. So I think that's, yes, potentially a limitation of the technology, but it's a limitation of the technology that should have been understood by the team. And if you wanted to do that job "properly", in inverted commas, you would have gone to the regulatory environment itself and got rid of the contradictions before you then automated on top of it. But that's probably not a realistic way of managing that particular problem. McDonald's,
I think the edge cases were probably quite obvious, you know, what the issues could be there, and whilst you can't change people's accents, you probably can change more things than they did in the underlying process design.
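Chris's retrieval point can be sketched in a few lines. This is a deliberately naive, hypothetical illustration (the regulation IDs, chunk texts and bag-of-words scoring are all invented, not the actual NYC system): top-k similarity search returns the chunks that look most like the question, with no notion that one chunk supersedes another.

```python
# Hypothetical sketch of naive RAG retrieval: chunks are ranked purely by
# word overlap with the query; relationships between chunks are invisible.
from collections import Counter
import math

def bow(text: str) -> Counter:
    # Bag-of-words term counts (a stand-in for a real embedding).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two invented regulation chunks: the second supersedes the first, but that
# relationship lives only in a cross-reference, not in shared vocabulary.
chunks = {
    "reg-101": "employers may deduct processing fees from worker tips",
    "reg-207": "reg-101 is superseded and tip deductions are now prohibited",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    ranked = sorted(chunks, key=lambda cid: cosine(bow(query), bow(chunks[cid])),
                    reverse=True)
    return ranked[:k]

top = retrieve("can employers deduct fees from tips")
# Top-1 retrieval surfaces the outdated rule; the superseding chunk scores
# near zero because it shares almost no words with the question.
```

The chunk that prohibits the deduction never surfaces unless the pipeline explicitly follows the cross-reference (or a knowledge graph is built over the chunks), which is the limitation Chris describes.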
Colin (:
I don't know, I think I should work on changing my accent, to be honest. It strikes me as well, and this is something that doesn't quite keep me up at night, but it worries me. So all this... the New York City case is a classic. I think that really is automating chaos, right? That's a really good illustrative example of what we're talking about today. But we don't even have a handle on that stuff, and already the AI world has sort of
Chris (:
Hahaha
Colin (:
exponentially moved on to adding agentic AI into the mix: systems that can plan and act and learn across your entire tech stack with minimal supervision. So if you're adding that kind of power to a system that doesn't understand your actual business goals, what happens? Obviously it comes with extraordinary potential and power,
and we really do have to start completely reimagining how we engineer the design of an organisation with the advent of agentic AI. But it strikes me that it's also extraordinary risk if the underlying systems are chaos. I would argue probably exponentially more than what we've seen so far with AI.
Chris (:
Yeah, absolutely. I mean, the potential is massive. You know, we talked about the sort of 16 trillion from PwC; there's a load of other data out there, three to four trillion or something from generative AI alone. But because we have this phenomenal potential, this phenomenal power, as you say, particularly when you put agentic AI into the mix, we've also got
huge risk. And I think that that is what, rightly, is probably slowing some of this adoption down. Because unless you reconceptualise how you're actually automating, what you're automating and why you're doing it, how you are giving systems context, how you're feeding them with information, how you are
limiting their potential for misuse, and how you are making their outputs explainable (you know, so we have auditability and compliance-based components there, particularly in regulated industries), how are we getting from the potential that is sort of latent within every business, of "we could just do this better and faster with fewer people", which is great,
to actually having the confidence in the business and with the shareholders to say: we're going to do this slower, but we're going to do it better, because we're going to re-engineer the processes that underpin this stuff, so that we can create real exponential, compounding value from this investment rather than just being first to market. And I think it'll be interesting. My perception is that some cultures are better at that than others.
You know, dare I say it, I think US business culture is very action-orientated. It's "let's just go do". And I think that's what we're seeing: the reason that they're leading the market, and the reason that we are seeing some of these slightly unfortunate case studies emerging, is that it's a business culture that is rushing headlong into doing this.
Chris (:
I wonder whether in five years' time, when we start seeing technology businesses in the DACH region doing more of their AI rollouts, we'll be seeing lower numbers but more sustainable ones: higher success rates, less froth, but more real value. So as you say, as we get to 2030, it'll be interesting to see how those numbers change and whether we see any regional differences in them.
Colin (:
It gets me thinking... I agree with you about American business culture, but then it kind of goes back to what we were talking about a couple of weeks ago about moving fast and breaking things. And it strikes me that in this world, as we move into agentic AI, where we have systems that can perceive their environment, reason about it, make decisions, and actually take autonomous action, actually operate our business
software, for example, which kind of sounds like science fiction but is happening now, that move fast and break things mentality amplifies the risk. Then again, I've also seen some really exciting, if you get geeky about this stuff like we do, stories starting to come out, like green shoots illustrating the potential. I think there was a
Microsoft and Fujitsu case where agentic AI was able to reduce proposal production time by something like 67%. So it's not just filling in templates, which you could probably use ChatGPT to imperfectly do. It's autonomously conducting the market research, analysing the competitor data, creating customised proposals, and effectively making judgment calls and making decisions that humans can't
make at scale. I think the potential of that, especially if you move forward with that move fast and break things mentality, is mind-boggling. But I guess, as a cautious person, the risk is also kind of terrifying. Like, what missteps might we make?
Chris (:
Well, I think that when we think about agentic AI, the opportunity, as we said, is massive and it is real. And there are loads of really, really positive and useful and interesting agentic use cases that people are getting out into production. And I think that the, you know, the
way that we de-risk that, I guess, is that when we look at what they can do versus traditional automation, that should give us the sort of steerage in terms of how we're then going to de-risk the situation. So you think about AI agents and agentic systems,
which are really just a mechanism for automation on top of GenAI capabilities (they're kind of the same thing, but with the ability to take action): they are reasonably good at perceiving and understanding context. So data being contextual, the sort of business environment, the situation, the "what happened when I did this before". They're quite good at that.
You know, they create multi-step plans. So they have gateways; they perceive context at each stage so they can adapt as situations change. And of course they can orchestrate tools across the entire tech stack, whether that's calling APIs, or triggering workflows in other systems, or even handing off to other agents, which is quite an interesting development in the sort of virtual workforce.
And they learn from outcomes and they adjust strategies. So great, that all sounds like features; that's the features on the box. But of course, if you don't give them good context...
Chris (:
They don't know the environment in which they're operating. If you give them unfettered access to your entire tech stack, and they can decide what they're going to do and change things in context; if you don't define the sort of last mile of the automation and you let the agent make the decisions, then you're introducing so many vectors for chaos. So, you know, it's not just process design:
it's really good automation and technology architecture principles and architecture patterns that need to be deployed and adapted to the world of AI. And if we don't do that, yeah, I think we will continue to see, you know, embarrassingly negative case studies coming out of this rollout and adoption phase that we're in right now.
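One of the architecture patterns Chris is gesturing at, defining the "last mile" rather than leaving it to the agent, can be sketched as an explicit policy gate between the agent and its tools. Everything here is hypothetical (the `ActionGate` class, tool names and refund cap are invented for illustration): the agent proposes actions; the gate allowlists tools, caps blast radius, and keeps an audit trail so outputs stay explainable.

```python
# Hypothetical sketch: a policy gate that sits between an agent and its tools.
# The agent can only propose actions; the gate decides, and logs everything.
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    allowed_tools: set[str]   # explicit allowlist, not "whatever is in the stack"
    max_refund: float         # one example of a blast-radius cap
    audit_log: list[str] = field(default_factory=list)

    def authorise(self, tool: str, amount: float = 0.0) -> bool:
        ok = tool in self.allowed_tools and amount <= self.max_refund
        # Every decision is recorded, so behaviour is auditable after the fact.
        self.audit_log.append(f"{tool}({amount}) -> {'ALLOWED' if ok else 'BLOCKED'}")
        return ok

gate = ActionGate(allowed_tools={"lookup_order", "issue_refund"}, max_refund=50.0)

gate.authorise("lookup_order")             # routine read: allowed
gate.authorise("issue_refund", 25.0)       # within the cap: allowed
gate.authorise("issue_refund", 5000.0)     # over the cap: blocked, escalate to a human
gate.authorise("delete_customer_records")  # not allowlisted: blocked outright
```

The design point is that which tools, with what limits, logged where, is decided by the system designer up front, not discovered by the agent at runtime.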
Colin (:
Yeah, we're going to continue to see stories about optimising for the wrong thing and then exponentially multiplying chaos, like an even worse version of the New York City story. And isn't this something that systems thinkers are warned about by the likes of Donella Meadows: that the most powerful lever for change in a system is
the paradigm, the fundamental goals and worldview that frame everything. It's kind of what you were alluding to, about the context, understanding the unwritten rules and the culture. If the agent doesn't understand that, if it has the wrong paradigm, then you're going to be efficiently optimised into potentially a disaster, right?
Chris (:
Yeah, and that's so true. And it's such a good point, and maybe, you know, we're skipping to our usual end-of-the-podcast "what can I do tomorrow?" early here: start shifting the paradigm within the organisation of how we conceive automation, and agentic automation particularly. Until we do that, you know, the vectors for risk are still significant.
Colin (:
Yeah, I guess... and you've kind of hinted there, you've brought up the spectre of the end of the podcast, and I'm afraid, as usual Chris, we are once again overrunning. Shocking. This could be added, alongside mentioning alignment, to the How to Build a Growth System drinking game.
Chris (:
You surprise me.
Chris (:
Yeah
Colin (:
If we could spend a couple of minutes, though, and I stress a couple of minutes, talking about just trying to make this concrete. Clearly we're saying that you should apply systems thinking, systems theory, take a step back and think about the system as a whole, and apply that to your AI implementation before you throw any more good money after bad, effectively. But what does that actually mean
in practice? Like, can we dive into that? Can we make it concrete for the listeners?
Chris (:
Okay, let's do it. Let's talk about alignment. Organisations, as we know, are complex adaptive systems. They are human systems where performance emerges from the structure, the way that the people, processes, data and technology are connected together. And we know that their goals, metrics,
rules of engagement, information flows, and the feedback loops that we talked about last week are the things that actually drive behaviour. So when you automate, and I will use the term "automate" when we're talking about agentic AI, because really that's what we're talking about, intelligent automation,
before you understand those dynamics, you really should not be considering doing any process automation, because the systemic impact of that automation can be, will be, significant. You hope positively significant. And if we don't build our automation, agentic automation particularly,
to work within that context, and view it in the sort of new paradigm of how we create value within the organisation, then really what we're doing is just snapshotting the dysfunction that was entrained in the system and making sure that we replicate it forevermore. So what we need to do is not just take... and we've talked about this so many times when talking about conventional automation, you know:
if the process has, you know, A, B, C, D steps in it, and C and D are currently manual, done by a human being, don't just connect C and D; work out how you can get from A to D in a different way. And I think that's what, in the context of system structure, we really need to be striving for to harness that 16 trillion dollar opportunity.
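Chris's A-to-D point can be made concrete with a toy pipeline (the step names and order structure are invented for illustration): automating the existing process preserves every legacy step, while redesign removes the step that only ever existed to bridge two systems, before any automation is layered on.

```python
# Hypothetical sketch: four legacy process steps, where "rekeying" exists
# only because two systems don't talk to each other.
def receive_order(o): return {**o, "received": True}
def rekey_into_erp(o): return {**o, "rekeyed": True}   # bridging step, pure waste
def credit_check(o): return {**o, "approved": True}
def fulfil(o): return {**o, "fulfilled": True}

# "Automating chaos": every legacy step survives, just executed by software,
# so the wasteful rekeying step is now performed faster, forevermore.
def automated_legacy(order):
    return fulfil(credit_check(rekey_into_erp(receive_order(order))))

# Redesigned A-to-D: the bridging step is removed from the process itself
# before automation is applied, so there is less to automate and less to break.
def redesigned(order):
    return fulfil(credit_check(receive_order(order)))
```

The difference is invisible in the output ("order fulfilled" either way) but not in the system: one flow has permanently entrained the dysfunction, the other has removed it.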
Colin (:
Indeed.
I'm tempted to dive really deeply into this topic and into the theory piece, but unfortunately I think what we've once again done is generate a bunch of topics for future episodes, where we can dive more deeply into this. So, moving on, it strikes me that
it's not just the AI revolution, or the agentic revolution, that's happening, but kind of lagging behind it is really a governance revolution. Or maybe that's not so much what's happening as what needs to happen.
Chris (:
Yeah, I think that's a really, really interesting point, because I think the recognition is there that the mechanisms for governance within organisations need to change. And I think that is what initially caused a lot of reticence to really dive into the application of AI, particularly into the centre of the value-creation processes within the organisation.
But I don't know, I feel like there has been a sort of softening of that position. I feel like we have almost seen a sort of tacit recognition that actually we don't know the answer to that question, so we're just going to go do it anyway. And I think that's quite worrying, because I think the governance lens that has been applied
has been one of legal risk mitigation rather than true internal AI governance. And I think, to a degree, perhaps necessarily so, we've sort of abdicated that responsibility to the technology providers. And I think we are starting to see that catch up. You know, our particular favoured automation, and now agentic AI,
provider, Workato, they've really baked that into the overall product design of how they're going to market, and it's one of the reasons that we really like them. But where people are going and doing bespoke stuff, I think that's really, really tricky. And going back to the point I made earlier, I don't think executives even know what question to be asking. I'm not sure we've really conceptualised how we sort of
embed that governance. I don't think we're asking the right questions yet, and I think where bespoke stuff is happening, we are still missing that layer quite substantially.
Colin (:
Yeah, I really agree with you about that: most of what I've seen about governance tends to be about legal risk mitigation. And I think governance is still also kind of, not exactly a dirty word, but I guess salespeople will think about it as part of the whole "sales prevention" landscape. Lots more process, tick some boxes, stop us from getting sued.
That is the kind of general perception of governance, if people are quite honest.
Chris (:
Yeah, yeah, absolutely.
Colin (:
More process, less speed, is how it's viewed.
Chris (:
Yeah, there are some, you know, I guess there are some glimmers there in the market. I'm not sure really what the adoption rate is like, but I think there is a new ISO standard around AI management. Not something that I've read deeply into, but yeah, I think we're on the cusp of change in that area. And I think that over the next, I don't know,
three years, which I think probably puts us at what, five years into the adoption curve, we'll start seeing more maturity in these areas, and we'll start seeing fragmentation of the players in the space. They will start to stratify more
into areas, I think, where we've got definable, sort of discernible categories emerging. I think we'll see some of the smaller players start to fall away, and I think we'll start seeing some consolidation in the space. And I think that growing up of the market will probably bring with it a more reliable framework for governance that creates more confidence within the market.
Colin (:
Confidence and clarity. And presumably, if we get the clarity right with the governance, that should enable speed rather than act as a brake.
Chris (:
Yeah, yes, I think so.
Colin (:
So Chris, we've covered a lot of ground here and, once again, come up with a bunch of ideas that we could probably do whole episodes about. Let me try to rein us in; as always, we're overrunning, so let me try to synthesize a bit. Companies aren't failing at AI because the technology doesn't work, or because ChatGPT doesn't know how many Rs are in the word strawberry. It's because they're automating broken systems. And even the most powerful AI, especially the agentic AI we've just been talking about, becomes more dangerous the more power and autonomy it has. So the solution isn't necessarily to slow down, but to understand your systems first, how your business actually works, fix the foundations, and then amplify what works rather than what doesn't, unlike in the case of New York City, our negative example. Is that about right?
Chris (:
Yeah, I think that's exactly right. And here's what I'd really like to emphasize, if we're in wrap-up mode: this isn't theoretical. In fact, document what you're doing now, work out where the issues are, do the to-be from the as-is. This is pretty standard stuff, and the same rules apply to AI and agentic AI adoption as to any other transformation project. And that is the problem: lots of companies aren't seeing this as a business transformation project. They're seeing it as an IT project, and they are resolutely not the same thing. When you do see it as a transformation project, the numbers are startlingly positive, and you really can start to see how you get to that giant 16 trillion number, because you're looking at potentially 45% cost savings and 60% higher revenue growth if you aggregate some of the numbers from people like Harvard Business Review and Boston Consulting Group. The real numbers from projects that have gone well, when you isolate those as a control group, are pretty startling.
And the difference between success and failure, as you say, isn't the technology that you choose. It isn't the LLM that you pick. It isn't even the agentic orchestration tool that you decide to use. It's whether you actually take the time to understand the system you have, to identify the issues and opportunities latent within it, to re-architect the way it operates, and only then to layer AI and agentic automation on top. If you do that, then you will win, and I think the data is pretty clear about that.
Colin (:
I can already hear the groaning from the people whose situation is just far too unique, and if you only understood a bit more about it, you'd see that they don't have time for all this analysis. Their competitors are moving so fast, and the board's putting pressure on to see AI initiatives right now, so they can't really do it this way.
Chris (:
Yeah, I mean, I'm laughing and, you know, a tear is falling at the same time, because yes, that's exactly what we're hearing. That's exactly what everyone's saying. That is what's driving this. But I guess what I would say is that that urgency is exactly what's driving the 80% failure rate. Do you want to be the first CEO to fail, or do you want to be the CEO who succeeds, even if that comes later than when your competitors started their adoption?
When we look at things in the fullness of time, we want to be the company that won, not the one that was first to fail. And that's a bit of a game of brinksmanship, isn't it? What if the others do succeed? Well, the numbers are in your favor for going a bit slower, I think is what we can absolutely say. If projects have an 80% failure rate, then you have a 20% chance of succeeding, and I don't think many of us would take those odds, certainly not if our lives depended on it, and the life of our business may. So when we see the problem and the scenario for what it is, which is the biggest opportunity for transformation in the world of business probably since the advent of the internet, then it pays to spend a little time to get it right. Doing what we've just described, to a greater or lesser extent depending on the magnitude of your problem, will hugely increase the odds of you getting it right and being one of the 20% that do succeed.
Colin (:
So Chris, for this particular topic, it's all quite new, and as you alluded to earlier, there's an education and understanding gap. So we've devoted a lot of time this episode to really understanding the problem. But at the same time, it wouldn't be How to Build a Growth System if we didn't actually talk about what we could do about it.
So although we've got a bit less time than normal to do this (apologies, my monitor's decided it's going to turn itself off shortly), what is the first thing that someone should do after listening to this episode? We've made it all a bit big and scary. Maybe we can zoom in on some practical steps.
Chris (:
It's a good question. It's always really tempting to answer it by saying: start small, do it in one area, map it out, implement a process, show value. And that's not a bad answer. But I think we first have to frame the problem that we're trying to solve, because for once the start-small-and-build answer is actually what most companies are doing. They're mucking around at the edges with little processes to show that they're doing something, doing the AI equivalent of virtue signaling, call it AI signaling or innovation signaling. And actually, if that's all you want to do, great, do that. I think that makes sense; it's a rational answer. But if what we want to do is actually be the first to transform our businesses, to transform the way that we're creating value in our organizations, then you probably can't start at the edge and expect that to work.
So what I would say is that the first thing we need to do as leaders is to educate ourselves, to really understand what is going on here and what we are endeavoring to do to our business. Understand the technology, understand the implementation processes, but more than anything, understand the way that your leadership team is going to interact with these problems of changing the business. Because as senior leaders, we're really there to steward the organization through change, to help guide it, to bring our people along. And as we said earlier, from the Donella Meadows research, to really effect change we need to change the paradigm through which we're viewing the problem.
Chris (:
to one of necessary transformation, to one of opportunity that comes through reimagination. So the first thing I would do tomorrow as a leader, as I say, is get myself educated and equipped. Then I would start to equip my senior leadership team to cascade this paradigm shift through the organization. And only then would I start building a roadmap. Change starts from within, and change starts with the people, and with the C-suite, because as a complex human system, the humans are the bits that need to be listened to and amplified. We need to really think about that component before we think about technology as a project.
And whilst that might sound slow, I think it's the way we will truly change our organizations, in the same way that the computer changed us from doing double-entry bookkeeping and having secretaries carry memos around, and the advent of email changed the way we communicate across business and opened up global markets. When we look back in the fullness of history at those inflection points in business, this is another one. So don't treat it like a technology implementation that you've just got to do quickly.
Colin (:
Yeah, paradigm shift indeed. So all this can really be a force multiplier, but only in systems that are legible, stable, governable, and well designed.
Chris (:
Yeah, absolutely. And you know, let's start with people first.
Colin (:
Yeah, I like that. I like that. It's a good way to end, actually: start with people first. Once again, Chris, with much of the topic left to discuss (look forward to the Medium article on this as well), we do have to call time on the episode. All that's left to say is that How to Build a Growth System is, as always, brought to you by RevSpace.
RevSpace is a growth systems consultancy that connects B2B organizations like yours with the future of growth. We offer account-based growth managed services and go-to-market engineering projects. Please don't forget to follow and rate the podcast; it really helps us to bring the content to a wider audience, and we'd really appreciate a moment of your time to tell us what you think, what steps you took, and what you discovered in your growth system. That's all we've got time for this week.