Beena Ammanath is a global thought leader in AI ethics and an award-winning senior technology executive with extensive experience in AI and digital transformation.
In this podcast we talk to Beena about the evolution of AI, and draw some interesting conclusions on its maturity, the challenges of adopting AI in companies, and what the future could hold for AI.
"Will AI cure cancer in the next 5 years?"
"Yes, with human help".
Beena is an extremely knowledgeable business leader, and author of a fantastic and groundbreaking book, Trustworthy AI.
Hey everyone.
Speaker:Welcome back to the Tech Seeking Human Podcast.
Speaker:It's been a while, Claire.
Speaker:Um, where have you been?
Speaker:Well, not for you, for me.
Speaker:What, what have you been doing?
Speaker:Uh, I've been working, yeah.
Speaker:Do you know about work? I do.
Speaker:Well, I did take a little bit of a break, um, and invested time in recording an
Speaker:album, which I hope to release soon.
Speaker:Oh, see the plug?
Speaker:Yep.
Speaker:Um, you know, I don't think it's gonna make me any money,
Speaker:so I probably will be working.
Speaker:Are you looking forward to being back on the podcast?
Speaker:I am.
Speaker:I didn't know I'd left.
Speaker:Well, so you sort of had a bit of a hiatus.
Speaker:I, I don't know if it was, I did.
Speaker:I don't know.
Speaker:I think, um, you know, looking for a job, getting a job, starting
Speaker:the job, it's pretty full on.
Speaker:Yeah.
Speaker:I don't know.
Speaker:You, you, you needed to get some better guests.
Speaker:So
Speaker:, is that why you're back?
Speaker:So Claire's come back to the podcast.
Speaker:That's really bad.
Speaker:I trust all the guests.
Speaker:Claire's come back to the podcast because now we have an AI thought
Speaker:leader, general manager of a Deloitte practice focused around AI.
Speaker:And because she's a female tech leader who's super intelligent,
Speaker:you decide to turn up.
Speaker:Yeah, I was just like, oh, I'll join that one.
Speaker:You pick and choose.
Speaker:Yeah, that's good.
Speaker:Why not?
Speaker:I guess if you can. Well, you know, I'm not being paid, so... Well, we don't
Speaker:make money off this podcast for anyone that wants to donate sponsorship or
Speaker:episodes, we're really open to it.
Speaker:The whole reason why we started this podcast though was so
Speaker:we can learn from people.
Speaker:Yeah, exactly.
Speaker:And um, it makes me read books.
Speaker:I read this book
Speaker:from Beena, and um, it's a business guide for
Speaker:navigating trust and ethics in AI.
Speaker:It reads a little bit like Gene Kim's The Phoenix Project, where she uses a
Speaker:fictional company to, yeah, I read it.
Speaker:Well, that's why I do podcasts, so I can read books by
Speaker:the people we have on, as we do podcasts.
Speaker:That's why I do it.
Speaker:Um, otherwise I have no reason and they sign books for me.
Speaker:And then I have signed copies.
Speaker:So when she's an international megastar, yeah, I have a
Speaker:signed copy, and I have lots of books over there that I've been reading.
Speaker:It forces me to do it.
Speaker:You've got them on
Speaker:display.
Speaker:I don't think they were here
Speaker:last time.
Speaker:No, no.
Speaker:I've, so you can't see it.
Speaker:But I have like all the people that we interviewed on our podcast
Speaker:over there, Gene Kim and Max Tegmark
Speaker:and Hannah Fry and Ellen Broad.
Speaker:That's, that's a plug for all the other episodes you probably missed
Speaker:and should go back and watch.
Speaker:But today, Beena is gonna take us on a journey around AI and tell us a
Speaker:little bit about like how companies are leveraging it, why they're leveraging.
Speaker:What could possibly go
Speaker:wrong?
Speaker:AI and then they're not getting outcomes.
Speaker:Yep.
Speaker:Yeah, like I think that's pretty topical.
Speaker:I heard a statistic that's like 75% of AI projects fail, which is huge.
Speaker:If at first you don't succeed, you gotta pick yourself up and try again.
Speaker:Yep.
Speaker:Get those people on board.
Speaker:So should we get Beena on and we'll talk about... Yes.
Speaker:What are we supposed to do?
Speaker:No, she'll tell us.
Speaker:I'm ready.
Speaker:She will.
Speaker:Yep.
Speaker:All right.
Speaker:Let's get her.
Speaker:This is Beena Ammanath on trustworthy AI on the Tech Seeking Human Podcast.
Speaker:Hope you enjoy it.
Speaker:Beena.
Speaker:Welcome to the podcast.
Speaker:How are you?
Speaker:I'm great.
Speaker:Thank you so much for having me on your show, Dave and Claire.
Speaker:And actually for anyone who doesn't know the behind the scenes that goes on in a
Speaker:podcast, we spent, what do you reckon?
Speaker:Beena, 15 to 20 minutes trying to get audio working for our particular software.
Speaker:Do you reckon they could have developed a trustworthy AI that can
Speaker:get an audio sync to work with these software programs so we don't waste
Speaker:so much time doing these podcasts?
Speaker:Absolutely.
Speaker:I, I think, you know, we are going to get there, you know, but it's
Speaker:also a reality check of where things are in the real world.
Speaker:Right?
Speaker:You know, there's nobody taking all our jobs yet.
Speaker:There's nobody, you know, trying to, um, replace all work that humans do.
Speaker:Uh, I think we still need humans.
Speaker:And, uh, given that we spent almost half an hour going
Speaker:back and forth trying to fix.
Speaker:I, I think, I think we are okay for now.
Speaker:But doesn't the media tell us... the media tells us that the AI is coming.
Speaker:We have autonomous vehicles according to what people believe.
Speaker:They think the Teslas are driving themselves even though
Speaker:we know they're not.
Speaker:Are we, are we marketing it ahead of ourselves?
Speaker:Yeah.
Speaker:You know, we live in this, um, hype cycle, right?
Speaker:Everything is, uh, on click bait and you know, how many clicks can you get?
Speaker:Um, unfortunately that era is driving a lot of the headlines, and that's why
Speaker:you see more of those extreme scenarios.
Speaker:It's either all good or it's all bad, right?
Speaker:So it's the best possible scenario and the worst possible scenario that you hear.
Speaker:But the reality is, in the real world, you know, we are very much in the
Speaker:phase of solving the boring problems; that's what AI can do today.
Speaker:Um, now, you know, there are advances happening in certain parts,
Speaker:obviously, uh, but it's not, it's not across the board, across all
Speaker:industries, across all possible jobs.
Speaker:It really depends on the industry and the use case.
Speaker:Which, which industries?
Speaker:So, so there's this middle ground that we're not hearing about.
Speaker:Right.
Speaker:And I think this was a question you were sort of thinking about. Both
Speaker:Claire and I have worked in tech for a long time, and so we probably hear, well,
Speaker:every company in tech is leveraging AI.
Speaker:Absolutely.
Speaker:Well,
Speaker:yeah, exactly.
Speaker:And I'd love to hear Beena's thoughts in terms of, you know, the leaders
Speaker:in, um, AI and what type of...
Speaker:Yeah.
Speaker:Yeah.
Speaker:Actually, let me take a step back.
Speaker:So we did a survey earlier this year to answer that question exactly, and we
Speaker:surveyed about, uh, 2,600 executives across the globe to find out,
Speaker:you know, what they were seeing and what was going on in their organization.
Speaker:And you'll be surprised, but 94% of business leaders surveyed
Speaker:agree that AI is critical to success
Speaker:over the next five years, and 79% of them say they have fully
Speaker:deployed three or more AI applications, and that number is actually growing.
Speaker:This is an annual survey we do, so last year it was 62%.
Speaker:This year it is 79%.
Speaker:The real challenge is coming in, you know, how much outcome or how much
Speaker:output are they seeing by deploying, you know, so much AI? And that's where,
Speaker:going to my earlier point, right, a lot of it is still in the exploratory phase.
Speaker:We are still figuring it out; there is no fixed playbook. Where we
Speaker:are seeing the most advances from an industry perspective is
Speaker:naturally the data-intensive ones, right?
Speaker:Uh, healthcare, uh, life sciences, financial sector, and so on.
Speaker:Um, but you know, there are still a lot of challenges around scaling
Speaker:the AI, even though all business leaders are keen to get started.
Speaker:And Dave and Claire, you know, one thing I would like to clarify is,
Speaker:you know, there's also this notion that it's one size fits all, right?
Speaker:Whereas I like to think of it as the companies that existed before the
Speaker:internet era versus the, you know, the newer big tech companies, so to speak.
Speaker:You know, obviously, how you use AI within the post-internet era
Speaker:companies versus the pre-internet ones,
Speaker:there is a huge difference.
Speaker:And that's where, when I say life sciences, healthcare, FSI, those
Speaker:are more of those legacy non digitally native industries where
Speaker:we are seeing a lot of traction.
Speaker:But there's obviously a lot of traction happening in the big tech
Speaker:industry where, you know, AI is embedded within their products and
Speaker:is driving more revenue models.
Speaker:And so we have to make a distinction of, you know, the company's, uh, you
Speaker:know, origin, where they are in their digital journey,
Speaker:to see how much AI they're using.
Speaker:That's a really good point, cuz as that number increased, you said 79%
Speaker:of organizations have deployed an AI.
Speaker:I started thinking about.
Speaker:Now are they answering it in terms of, well, we bought a product that
Speaker:has embedded AI in that product, therefore we are leveraging an AI model.
Speaker:Like, does that skew it? And do you have a sense for, like,
Speaker:what percentage of people... Cuz that's easier, because you
Speaker:didn't develop the AI, right?
Speaker:And you've bought the off the shelf product that gives you an AI versus
Speaker:you are collecting data and training models and deploying AI and then using
Speaker:more of a broad based AI platform to get your results or an algorithm.
Speaker:Do you have a sense of like what percentage is in that ballpark where
Speaker:they're a little bit more advanced and they're customizing their own algorithm?
Speaker:Yeah, we, we haven't seen that level of detail.
Speaker:It is almost always... AI cannot be just off the shelf
Speaker:without, you know, the organization having their own data, uh, unless it is
Speaker:a very, you know, narrow specific use case, you have to have the data available,
Speaker:your organizational data available to be able to train the solution that you buy.
Speaker:So if you're not a digitally native company, you probably don't have
Speaker:the historical data needed to deploy the AI within your organization.
Speaker:Um, so we, we are seeing scenarios where companies are also struggling
Speaker:in terms of just, you know, having an AI-ready culture, right, having, uh,
Speaker:the talent that's needed, even if you buy the AI solution, uh, it, the, the
Speaker:kind of maintenance that needs to be done, uh, you know, the operations,
Speaker:the scaling it out, uh, training employees on how to use the solution.
Speaker:Those are all things that any organization has to do, irrespective of
Speaker:whether they buy the AI solution or whether they build the AI solution.
Speaker:So it really depends on where they are in their
Speaker:cultural readiness for adopting AI.
Speaker:Yeah.
Speaker:And I, we were listening just before you, you put up a video of,
Speaker:um, who was talking in that video?
Speaker:That was, uh, Dr.
Speaker:Catriona Wallace.
Speaker:Wallace.
Speaker:Okay.
Speaker:She's an AI expert here locally, uh, in Australia.
Speaker:And she was just talking about the power of
Speaker:AI and how big it is, with no rules, no regulations yet.
Speaker:Yeah.
Speaker:You know, it's similar to
Speaker:when fire was discovered, and it's a big analogy, you know, and I
Speaker:thought, wow, you know, it's putting that into perspective a little bit
Speaker:and yeah, like where are we going with regulation in terms of ai?
Speaker:So fire is a great, you know, comparison.
Speaker:I usually use the comparison of, you know, uh, the auto engine, right, when
Speaker:the first car was invented, when the first engine was created, right.
Speaker:And you're, you're seeing, where the engine has been created,
Speaker:we've put a body on it, right?
Speaker:It can be any kind of model, but it's the, you know, you are able to get from
Speaker:point A to point B faster, even though that engine is not fully developed.
Speaker:And in this case, I'm
Speaker:talking about AI as that engine, right?
Speaker:And we are able to drive it on whatever roads we have, but the reality is
Speaker:the speed limits are not defined.
Speaker:The lanes are not drawn up, you know, we don't have seat belts, right?
Speaker:So we are in that era where while the engine is still being developed
Speaker:in the labs, it's being used in real world because it is helping us
Speaker:get from point A to point B faster.
Speaker:And we are driving on roads that are yet to be defined.
Speaker:So it's this interesting era where our generation actually has the opportunity
Speaker:to define what those speed limits should be, what those, uh, you know, whether we
Speaker:need seat belts and it, you know, it's never going to be a one size fits all.
Speaker:We think about AI regulations as this one thing, whereas it is
Speaker:going to be as nuanced, even more nuanced than speed limits, right?
Speaker:Speed limits in Australia versus Germany
Speaker:versus India, extremely different.
Speaker:So how can we expect there to be one regulation for ai?
Speaker:It is just, it is just this interesting phase where everything is coming at
Speaker:us and we have this once in a lifetime opportunity to actually figure this out.
Speaker:I mean, how cool is
Speaker:that?
Speaker:It's so, it's such a good point that you raised and then I read
Speaker:your book, um, Trustworthy AI.
Speaker:So if anyone hasn't already picked up a copy, they definitely should.
Speaker:And you use that analogy in the book and I'm glad you mentioned it cuz it really
Speaker:made me visually understand where we are at with something that's so abstract.
Speaker:Like AI is really abstract, it's not a thing.
Speaker:You can't really understand what it is.
Speaker:It's an intelligence of some sort and you are gonna define it.
Speaker:And you did do a good job of that in the book, but the analogy of the car.
Speaker:And when Henry Ford invented this car, and then we just started driving
Speaker:and people started running into each other and running people over.
Speaker:And we had all sorts of problems.
Speaker:And eventually, to your point, we had to create seat belts and stop signs and we
Speaker:had to have roads with lanes and we had to have speed limits and speed cameras.
Speaker:And we're still trying to figure it out.
Speaker:And actually, the most shocking statistic that I read in your book,
Speaker:and I know this isn't real, I mean it, it's interesting, all the same.
Speaker:It said one in 107.
Speaker:I was trying to find it.
Speaker:A one in 107 likelihood of being killed by a car
Speaker:in the US. One in 107.
Speaker:I remember reading it, I can't find it right now off the top of my head, but
Speaker:do you remember putting that in there?
Speaker:The motorbike?
Speaker:Is it? No, it was a car.
Speaker:Car and the, and the argument was an autonomous vehicle
Speaker:has one in 10,000 likelihood.
Speaker:You know, maybe cuz there's not that many of 'em on the road, but clearly
Speaker:they're developed to stop humans from hitting people.
Speaker:It's augmenting our intelligence.
Speaker:It's pretty amazing.
Speaker:Yeah.
Speaker:I guess there's two things, and I'm trying to get to the question,
Speaker:which is really difficult for me.
Speaker:The first part of the question is amazing analogy.
Speaker:Of the car and where are we at with AI today?
Speaker:Like, are we still painting the, the road and putting the lines on the road?
Speaker:And the second part is just the augmentation of the intelligence and
Speaker:the analogy then to deaths and the consequences of getting it wrong.
Speaker:That was sort of a question.
Speaker:Yeah,
Speaker:yeah, yeah.
Speaker:You know, I'll, I'll just say the car analogy.
Speaker:The difference between that time and this time is, you
Speaker:know, there was no social media.
Speaker:So it didn't get amplified to these extremes where, you know, everybody was,
Speaker:you know, getting hyped up or worried.
Speaker:And, you know, there were a few folks.
Speaker:The experts from each field came together, figured it out, and moved along, right?
Speaker:Every country figured it out for their own, and maybe that's how we need to
Speaker:think about AI and regulations as well.
Speaker:You need to ignore the noise to a large extent, to be able to make progress,
Speaker:because you and I both know, as techies, that there is not going
Speaker:to be just one regulation, just like AI.
Speaker:The type of AI you use is based on the use case, right?
Speaker:The regulation, the rules,
Speaker:the best practices are going to be based on the use case, right?
Speaker:And, uh, uh, so that's, that's part one.
Speaker:Part two, to your point, uh, is, you know, just because
Speaker:it's a machine that's doing it,
Speaker:our expectation for the margin of error is very different than for a human.
Speaker:Right?
Speaker:We have, uh, you know, a higher tolerance for the
Speaker:margin of error when humans are performing that exact same task,
Speaker:whereas with machines, there is an expectation that it would be much, much
Speaker:lower, you know. And, um, and it could be the case, uh, some time in the future, but
Speaker:you also have to train that AI as much, you know, to make it robust enough to
Speaker:be able to be better that than humans.
Speaker:Uh, in terms of the error rate, uh, you know, um,
Speaker:it's hard because we are in this fuzzy, evolving phase at this point.
Speaker:It's hard to, you know, focus on just one area and solve for it
Speaker:and try to think of the extremes.
Speaker:Let me, let me use the example of bias, right?
Speaker:That's the one thing you hear every time you bring up AI and ethics.
Speaker:You know, bias is that common theme that you hear, and the reality
Speaker:is, you know, bias may not be relevant in certain AI use cases.
Speaker:It really depends on the use case.
Speaker:Uh, bias... if you are trying to predict,
Speaker:you know, uh, machine failure, it probably is not going to be a big
Speaker:factor, because unless it's touching human data,
Speaker:bias doesn't even come into play.
Speaker:And a lot of work that's done in the industrial space with
Speaker:AI is without human data.
Speaker:So bias doesn't come into play.
Speaker:And, um, but when bias does come into play, you know,
Speaker:going back to that question of margin of error, let me use an example.
Speaker:So bias and, um, facial recognition.
Speaker:We've all seen the headlines, uh, every time, right when, uh, facial recognition
Speaker:is used in a law enforcement scenario.
Speaker:If it is biased and it is, uh, flagging innocent people as criminals,
Speaker:that's a terrible thing, right?
Speaker:Bias in that scenario is intolerable.
Speaker:But right here, you know, I know of companies using that exact
Speaker:same technology in the exact same location to identify human trafficking
Speaker:victims and kidnapping victims.
Speaker:Right.
Speaker:It's the exact same technology.
Speaker:It is tagging a potential kidnapping victim or a human trafficking victim.
Speaker:Now the question is, you know, is the algorithm performing better
Speaker:than human eyesight alone? If the algorithm is helping you rescue 60%
Speaker:more, or even 30% more, than you could
Speaker:without it, is it still worth using the algorithm?
Speaker:So you have to be able to define
Speaker:the metrics, the margins that are acceptable.
Speaker:And it comes down to those stakeholders to then say, yes,
Speaker:it's helping us rescue, you know, not a hundred percent, but 60%, which is
Speaker:better than not rescuing them at all.
Speaker:Right?
Speaker:So, you know, maybe it should be used in that scenario, right?
Speaker:So that's why I think this topic really needs to be solved, you
Speaker:know, from the use case perspective for us to make real progress.
Speaker:And, and also it's about the AI working with people like not on its own.
Speaker:Yes.
Speaker:Yep.
Speaker:So, and that comes back to also in organizations where you're
Speaker:deploying your AI in your company.
Speaker:And if you just do it in a siloed, sort of, um, area of the business, then
Speaker:it'll probably fail, because you need end-to-end buy-in for productivity to happen.
Speaker:That comes back to culture.
Speaker:Yeah.
Speaker:There is, you know, you cannot have just a bunch of techies, data scientists or
Speaker:data geeks going and building it in a silo without, uh, you know, thinking
Speaker:about whether the culture is ready for acceptance.
Speaker:Because you can build the best AI with the best accuracy rate, but if nobody
Speaker:is using it, it's a failed project.
Speaker:And, uh, you know, the, the reality is there is a lot of hype on this topic.
Speaker:There's a lot of fear
Speaker:of, you know, AI taking over jobs.
Speaker:And so, you know, people who don't understand, who are not AI fluent,
Speaker:AI savvy, worry about it, right?
Speaker:If they use that software, maybe their jobs will be gone, right?
Speaker:So, you know, it needs to go hand in hand with driving
Speaker:that AI fluency, AI education.
Speaker:You need to bring your entire organization along the journey.
Speaker:It, this cannot be just an IT project, right?
Speaker:For an organization to
Speaker:succeed, you know, there has to be executive buy-in, but there also
Speaker:has to be buy-in from every employee who feels comfortable on, you know, the
Speaker:company's AI strategy, how, you know, how it is going to help them do their jobs.
Speaker:That's the level of, um, you know, that's the level of detail that needs
Speaker:to go into the thinking that needs to go into, to make it successful.
Speaker:And, and that would have a lot to do with trust, because I used to,
Speaker:um, I used to rally around a particular statistic that said something
Speaker:like 75% of, um, of AI projects fail.
Speaker:And I dunno, I don't know what the number is now, but I guess, um, I'd
Speaker:love to hear from you that, how, how do you make it successful then?
Speaker:Is it about breaking it down and taking people along for the ride and, and
Speaker:presumably, like, in these large organizations.
Speaker:People have been through COVID, they've been through working from home, they've
Speaker:had a lot of change, and all of a sudden you hear about recessions and inflation,
Speaker:and then you've got senior leadership going, oh yeah, and by the way, we're
Speaker:gonna start deploying an AI as well.
Speaker:Now, like the, the trust level must be pretty low.
Speaker:Like what's, what are your tips for people and, and, and how do
Speaker:you make the project successful?
Speaker:How do you flip that 75% around from failure to success?
Speaker:So if leadership isn't bought in, right,
Speaker:That's a big challenge because as, as I, as we spoke about earlier,
Speaker:there needs to be a culture change.
Speaker:There needs, every employee needs to be bought in.
Speaker:So, you know, starting with the leadership buy-in, that's, that's usually number one.
Speaker:And then number two is really on managing AI-related risk.
Speaker:Just as you know, you and I as general citizens read about
Speaker:the ethics risks or bias in AI.
Speaker:You know, there's also a hesitancy on, you know, we don't know what kind of
Speaker:additional risks we might be taking and we don't know how to mitigate it.
Speaker:So, you know, there's a fear of, you know, taking on risky projects, which,
Speaker:you know, doesn't really help AI reach its full potential.
Speaker:And then the focus on, you know, uh, how do you maintain it, how
Speaker:do you support it post launch?
Speaker:AI is not just build it once and deploy it and, you know,
Speaker:it's going to stay consistent.
Speaker:It's not like traditional software.
Speaker:It is going to evolve and change as the machine learning model gets more robust
Speaker:and is fed different kinds of data.
Speaker:So having a clear plan on how do you support it post launch, I think that's,
Speaker:that's one of the reasons as well.
Speaker:And in terms of, you know, driving that, uh, learning and AI fluency.
Speaker:Let me give you an example of something that I've experienced as
Speaker:well, uh, is, you know, like one of the things I highly encourage is,
Speaker:uh, you know, make sure everybody in your organization understands ai.
Speaker:When they read a headline, they know what it means. When
Speaker:they are trying to use an AI solution,
Speaker:they know, they're reassured
Speaker:It's not gonna take away their job.
Speaker:Right.
Speaker:Uh, it's going to help make their jobs easier.
Speaker:Like we talk a lot about, you know, how AI is going to take the boring,
Speaker:rudimentary tasks and automate them so that you can focus more on the creative part.
Speaker:But think of it from, you know, say an x-ray machine operator, right?
Speaker:Where we are using AI to do that
Speaker:analysis of the x-ray and provide recommendations.
Speaker:Now it's helping speed up the process from a patient perspective, and it's gonna free
Speaker:up time for the x-ray machine operator.
Speaker:But the question that comes to this person's mind is: it's not like humans
Speaker:are suddenly going to break more bones and are going to need more x-rays.
Speaker:So what are they supposed to do with that free time?
Speaker:And that's where really leadership needs to come in and say, yes, it is going to
Speaker:free up time and here are some of the other cool things you could be doing.
Speaker:Right?
Speaker:Because when you look at it from a worker's perspective, freeing up
Speaker:time is not as reassuring, because then, you know, the job is reduced.
Speaker:We are used to this mindset, right?
Speaker:So I think that it is much more nuanced
Speaker:than just saying, yes, you know, humans working with machines and freeing
Speaker:up time from boring tasks.
Speaker:But what, what happens in that free time?
Speaker:And it's like continuous learning, like companies reinvesting in education so they
Speaker:can predict sort of particular industries that might need to evolve a little bit.
Speaker:Like I don't think it's any on any company's agenda to go, well, let's
Speaker:just get an AI in and sack everyone.
Speaker:I think if you ask any CIO or CTO or CEO.
Speaker:Um, and I should have just said, executive team, they would say
Speaker:retaining staff and hiring really good people is really important.
Speaker:And I think this was in your report that there was some statistic
Speaker:about the percentage of people that really want to embrace AI.
Speaker:Like they actually find, they're not scared of it.
Speaker:They actually think this could be really good for them in the future.
Speaker:But yeah,
Speaker:it's an education process, isn't it?
Speaker:And and it's such a good point.
Speaker:Yeah.
Speaker:I never really thought about that in terms of,
Speaker:you know, freeing up time, that it could create a lot of anxiety around people's roles.
Speaker:And if it's not explained what the journey is gonna be, okay.
Speaker:Once that AI is put in place, you know, of course, you know, every, every role
Speaker:is different and, and some are more administrative and what, what does happen.
Speaker:So I think it's, it's
Speaker:bigger.
Speaker:There's such a bigger picture at play with that.
Speaker:And then there's the skill shortages.
Speaker:So Yes.
Speaker:You know, it's, it's all coming at us in a way.
Speaker:. Yes.
Speaker:But I look at it as an opportunity, like how cool is it for us, this generation
Speaker:to figure out this, this really complex,
Speaker:knotty, you know, problem and, you know, get, get to a solution, right?
Speaker:Not one solution, but solutions across the board.
Speaker:How
Speaker:big is this?
Speaker:Can I, can I put a number on it?
Speaker:Where are we as an industry around AI, with one being we're
Speaker:really only in its infancy
Speaker:and we've got a long, long way to go;
Speaker:five being
Speaker:we're, we're sort of, we're doing okay, we're about halfway there;
Speaker:and nine, like, yeah, no, it's pretty saturated,
Speaker:we're doing pretty well.
Speaker:Where do you think we are in terms of just the, the evolution of AI and
Speaker:the potential of where we're going?
Speaker:Oh, I think we are very much in the infancy.
Speaker:There is still... because remember, that car engine is still being developed.
Speaker:Not all the kinks have been figured out.
Speaker:Like there is really cool new stuff coming at us, whether it's, you know,
Speaker:the work that, the research work that's going on with generative AI and
Speaker:large language models, that's going to
Speaker:Change a lot of things as well, and we've just started it, right?
Speaker:So we, I, if, if I had to put a number, I would say we are between two and three
Speaker:in terms of the potential of AI impacting all industries and the kind of use cases
Speaker:that you'll see coming at us.
Speaker:Just even with AI, it's still early, but think of the potential when AI and,
Speaker:you know, NFTs and crypto and the metaverse, you know, all of those come together, right?
Speaker:When these emerging technologies blend together, those scenarios
Speaker:are going to be even more powerful.
Speaker:So we are very much in the infancy.
Speaker:Deloitte has built a huge practice around ai.
Speaker:So like, and whenever I travel through the airport, I see ads and they're
Speaker:focused on AI now and the possibilities.
Speaker:So obviously the company as well is really well positioned to, um, to, to
Speaker:take, you know, take companies by the hand and lead them towards a, a future.
Speaker:Yeah, the,
Speaker:the, the challenge is because it's so early on, there is no
Speaker:one single playbook, right?
Speaker:We hear about there being no regulations, but there's really no best practices
Speaker:playbook, you know, for every possible industry, every use case, and it's,
Speaker:it's kind of a phase where you have to collaborate, you have to learn
Speaker:from the ecosystem to grow your own.
Speaker:And the ecosystem includes your, uh, academic partners, your, uh,
Speaker:alliance partners, your research teams.
Speaker:So you have to be able to tap into the ecosystem to really
Speaker:thrive in this age of ai.
Speaker:So give us something uplifting, Beena. Like, give us a, give us a story that'll
Speaker:get us excited and make us want to go out and become data scientists and mess
Speaker:with algorithms and deploy AI.
Speaker:You know, I'm very excited about, you know, what we'll see in the
Speaker:healthcare space. For too long,
Speaker:you know, it's been one size fits all.
Speaker:It's been more focused on, you know, uh, on providing medication
Speaker:on a one-size-fits-all basis.
Speaker:But now we realize there is an opportunity to do more personalized medicine, right?
Speaker:And, uh, and also get more predictive.
Speaker:Like, I see such uplifting news on
Speaker:AI being able to predict certain kinds of illnesses based on how you
Speaker:walk or how you speak, and, you know, I think there is so much power in there.
Speaker:Uh, I also am very excited about, you know, AI's potential in education.
Speaker:That's another area where, if you think about it, you know, our
Speaker:ancestors, and the ancestors before that, you know, all have had this one
Speaker:education model of one teacher to many students.
Speaker:I studied in a school where we had 50 students in one class.
Speaker:My son goes to school where there's 30 students in one class.
Speaker:So it's one to many, right?
Speaker:I think, uh, what I've also learned is that education, and how
Speaker:you learn, is, you know, extremely individual.
Speaker:There are six different ways you learn.
Speaker:So imagine the power of AI being able to provide that
Speaker:personalized education, and then you connect it with the power
Speaker:of 5G networks and the things we are doing with IoT and edge.
Speaker:You know, we can truly take education to the remotest parts of
Speaker:the world and really uplift an entire population, because I do believe
Speaker:that there is a cure for every ailment
Speaker:in, you know, some human brain that we just need to tap into via education.
Speaker:So I am personally very excited about, you know, the power
Speaker:of AI in healthcare and in education.
Speaker:Oh, that's awesome.
Speaker:Because I'd like my kids to learn through an AI that isn't YouTube and TikTok.
Speaker:I think that would be really beneficial for us.
Speaker:But, but to your point, like one to 30, I think we made big inroads
Speaker:during Covid, because even my kids now have sort of specialist teachers.
Speaker:But I think personalization, they wanna learn differently.
Speaker:They wanna learn different things.
Speaker:And why do we have the same course that we've always had throughout
Speaker:time? And that's just going to be, yeah,
Speaker:really big.
Speaker:I think personalization in terms of everything that we do.
Speaker:And that's, we have high expectations of the AI then? Yes,
Speaker:because you just, you expect it.
Speaker:So
Speaker:yeah.
Speaker:Couple of rapid fire questions for you to wrap up.
Speaker:You gave an example earlier about an AI that profiled facial recognition
Speaker:and could help, uh, reduce kidnapping and, and other such things.
Speaker:Is it okay for the government to deploy an AI throughout
Speaker:society and profile everyone?
Speaker:It's a bit controversial.
Speaker:It, it
Speaker:depends on the use case.
Speaker:That's a pretty, pretty good political answer.
Speaker:Um, what do you, what's your experience?
Speaker:Okay.
Speaker:Okay.
Speaker:Here's another one then.
Speaker:Um, will AI cure cancer within the next five years?
Speaker:With human help, yes.
Speaker:Uh, dinner party guests.
Speaker:You got three dinner party guests that you're inviting over, dead or alive?
Speaker:Who's coming to your dinner party?
Speaker:Ada
Speaker:Lovelace, Einstein.
Speaker:Oh yeah.
Speaker:And Thomas Edison.
Speaker:Oh,
Speaker:who's the first one?
Speaker:I don't know. Lovelace?
Speaker:Can you?
Speaker:She is the first data scientist ever.
Speaker:She wrote the first algorithm.
Speaker:She's a British woman and, uh, Lovelace.
Speaker:So you have three very smart people coming to your dinner.
Speaker:I, so I think I'd be a little intimidated being at that dinner party.
Speaker:Hey, Beena, last question.
Speaker:What advice would you give to your, to your 18-year-old self?
Speaker:Uh, I would say everything is going to be alright.
Speaker:You know, I studied AI and it was all in textbooks.
Speaker:I never imagined it would become real in my own lifetime.
Speaker:Even personalized marketing was considered, you know,
Speaker:an extreme case scenario.
Speaker:We didn't have access to data or cloud or you know, it just, there, there was
Speaker:no way anything could have been done.
Speaker:But, you know, I'm an optimist and I'm
Speaker:just so happy to be alive and watch this technology evolve during my own lifetime.
Speaker:I'm so excited.
Speaker:So I would say, you know, it's, uh, you, you are on the right track.
Speaker:Oh, that's awesome.
Speaker:It's so wonderful to follow you on LinkedIn and see your passion for AI
Speaker:and all the reports that Deloitte is putting out, really, really good reading
Speaker:for anyone.
Speaker:Yeah.
Speaker:For anyone that wants to, um, dive in. And your book, um,
Speaker:which I should also mention.
Speaker:So I really enjoyed the book.
Speaker:It's a practical guide to all the different types of AI that can be
Speaker:deployed and the types of guardrails that we should put around it.
Speaker:And I could talk to you for hours, um, but, you know, you probably
Speaker:would prefer to talk to Einstein
Speaker:And Lovelace.
Speaker:Beena, it's been wonderful having you on our podcast.
Speaker:Thank you so much.
Speaker:Um, sorry about the AI in the software that couldn't quite get
Speaker:our audio to work, but, um, that's an audio issue that you've gotta look into.
Speaker:Like all good things,
Speaker:we resorted to Zoom, and it just seemed to work when we did Zoom. So it's
Speaker:been wonderful having you on the podcast.
Speaker:And um, and yeah, look forward to following you more on LinkedIn.
Speaker:Thank you so much for
Speaker:having me.
Speaker:Take care.
Speaker:Thanks,
Speaker:Beena.
Speaker:A really cool chat.
Speaker:I really, I mean, aside from using up, you know, we had 50
Speaker:minutes to spend with Beena.
Speaker:Roughly.
Speaker:We used 20 minutes on getting the audio to work.
Speaker:Yeah.
Speaker:So we only had a 30-minute window. Um, she was very gracious about it.
Speaker:She was very good.
Speaker:What are the key takeaways?
Speaker:Key takeaway
Speaker:is that she was super positive, so very optimistic.
Speaker:Mm.
Speaker:Uh, always looked at the education side, the healthcare
Speaker:side I thought was really good.
Speaker:You know, the takeaway from
Speaker:it?
Speaker:It's like AI for good, right?
Speaker:Yeah.
Speaker:Like not just for corporate, not for profiling CVs and resumes and
Speaker:data profiling, but what is AI gonna do for the betterment of society?
Speaker:Betterment.
Speaker:Betterment.
Speaker:Is that a word?
Speaker:I don't know.
Speaker:. Anyway, we'll go with it.
Speaker:Yep.
Speaker:Betterment.
Speaker:My key takeaway is that 93% of organizations have deployed AI,
Speaker:which doesn't surprise me.
Speaker:I wonder what the 7% are doing.
Speaker:But it's a big, it's a big number.
Speaker:And, and even for executives, and that was, it was from their report.
Speaker:Um, and uh, some executives might not even know that it's been deployed,
Speaker:but it's interesting that they do know, which shows the maturity
Speaker:now of where we're at with AI.
Speaker:And yet the results also show that the majority of AI projects fail.
Speaker:So even though I was still quoting that stat, the outcomes are low.
Speaker:The outcomes are low.
Speaker:Yeah.
Speaker:So you gotta lower expectations.
Speaker:So as Beena was saying, we're in a hype cycle.
Speaker:It's a little bit like going to the gym and expecting to
Speaker:get a result within four weeks.
Speaker:If you just do Peloton for four weeks every single day, you're
Speaker:gonna see a difference compared to doing it for two years.
Speaker:It's continuous development and continuous evolution every day.
Speaker:You love it when I bring it back to Peloton, don't you?
Speaker:But it's actually true.
Speaker:Like it's sort of like fitness.
Speaker:If you can do fitness continually and you deploy AI, you retrain, you relearn, you
Speaker:educate, you get positive affirmations.
Speaker:I, no one wants to be preached at.
Speaker:Let's just keep it.
Speaker:No, I was going back to the AI at that point.
Speaker:I was thinking about how they would do that within an organization.
Speaker:So I think it's, it's interesting.
Speaker:We're at the infancy.
Speaker:We're really early on.
Speaker:Yeah.
Speaker:And she said that.
Speaker:Yeah, like on
Speaker:a scale of one to 10,
Speaker:two to three.
Speaker:Yeah.
Speaker:We should have kept
Speaker:that
Speaker:a secret so everyone would watch it.
Speaker:This is the end.
Speaker:Yep.
Speaker:So it doesn't matter.
Speaker:So good to have you back.
Speaker:It's been really great.
Speaker:Clearly you've not missed a step since you've been gone.
Speaker:Um, but you know, this is why we do the podcast.
Speaker:It's fun.
Speaker:Well, he's got the full-time job. Uh, I hope you enjoyed the podcast.
Speaker:Um, it's great having Claire back. And, uh, um, for those wondering, what actually is
Speaker:being produced in the end is a series of,
Speaker:um, takes that we had to go through just to finish it.
Speaker:So, you know, she behaves herself really well during the podcast and then we
Speaker:get to the outro and it's loose Cabo.
Speaker:So, um, you know, you got places to be, gotta go, gotta go.
Speaker:So, hey, thanks for joining us.
Speaker:Don't forget to subscribe if you made it this far, um, on the various channels.
Speaker:Share with your friends too, cuz um, and if you've got guests that we
Speaker:should have on the show, we would love to hear from you, because my ability
Speaker:to recruit people is a little lazy.
Speaker:Yeah.
Speaker:All different kinds of tech, all different types of tech.
Speaker:Yeah.
Speaker:Have a great week.
Speaker:See you later.