Behind the Buzzwords in Tech with Sam Charrington
Episode 7 • 3rd March 2022 • The Business Integrity School • University of Arkansas: Sam M. Walton College of Business
Duration: 00:34:48


Shownotes

Sam Charrington, software engineer, entrepreneur and TWIML podcast host joins Cindy Moehring and sheds light on what lies behind the tech buzzwords.

The conversation covers how machine learning works, the contextual and inherent risks that exist, the need for diversity in tech, reimagining the link between labor and livelihood, and startups to watch.

Learn more about the Business Integrity Leadership Initiative by visiting our website at https://walton.uark.edu/business-integrity/     

Links from the episode:

https://twimlai.com/      

https://twimlai.com/ethics-bias-and-ai-twiml-episode-playlist/     

Transcripts

Cindy M.:

Hi, everyone. I'm Cindy Moehring, the founder and Executive Chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business, and this is the BIS, the Business Integrity School podcast. Here we talk about applying ethics, integrity and courageous leadership in business, education and, most importantly, your life today. I've had nearly 30 years of real-world experience as a senior executive. So if you're looking for practical tips from a business pro who's been there, then this is the podcast for you. Welcome. Let's get started.

Cindy M.:

Hi, everybody, and welcome back to another episode of the BIS, The Business Integrity School. I'm Cindy Moehring, the founder and executive chair, and we have with us a very special guest today, Sam Charrington. Hi, Sam, how are you?

Sam Charrington:

Hi, Cindy, I am wonderful and great to be speaking with you.

Cindy M.:

Wonderful. Well, I can't wait to jump into this conversation and let the students and the rest of our audience get to know you a bit. You have had a very interesting career path so far. You've gone from being an engineer, starting at AT&T. You've worked at, you know, smaller companies like Plumtree Software, Tsunami Research and Appistry. And then you found your way into being an entrepreneur, a founder, and the host of your own "This Week in Machine Learning & AI" podcast series, which we'll talk about in a minute, because it's extensive. Tell us a little bit about yourself. How did you go from being a software engineer to entrepreneur and where you are today?

Sam Charrington:

Yeah, absolutely. I've always been excited about technology. And as I started my career, I was particularly interested in emerging and transformational technologies. And my career has really given me the opportunity to span a bunch of those. At AT&T, I was part of the wave of folks that were helping businesses get online with the internet. We're talking about pipes, you know, just getting them fundamentally connected.

Cindy M.:

Right

Sam Charrington:

And then I went to a startup that was helping folks really leverage the web, particularly for finding information within their organizations. And then the next company I joined was all about making cloud computing real for organizations, and so I rode that wave. And at my current company, TWIML, we're focused on AI and helping inform and educate and create community around making AI real for folks and making the community of AI practitioners more diverse and accessible.

Cindy M.:

Yeah, I love that. And I love the education focus, because there's a lot of hype. People talk about AI and machine learning and the metaverse and, you know, artificial intelligence and augmented reality and VR, and if you can throw the terms around, then you sound like you're in the know. But the fact of the matter is, a lot of people really don't understand it, and that makes it kind of hard to trust the technology if you don't have a good basis of understanding even what it is. And you can get all tied up in ethical issues around it. And that's what I really want to spend some time talking with you about today. And you can help educate the audience a little bit from what you've seen. But when you step back and think about just responsible artificial intelligence or machine learning, what does that even mean to you? What are some of the dangers of AI from an ethical perspective, if you will?

Sam Charrington:

Yeah, I think your point about buzzwords resonates strongly with me, and a big focus of my work is helping folks get through the buzzwords and create understanding. And when we talk about AI, we're already battling the history of use of the term in popular culture and movies like, you know, the Terminator and things like that. And I think, you know, oftentimes when we talk about the dangers of AI, we kind of fall back to those tropes as well. And I think it's important to maybe ground out on why we even care about AI in the first place and what the promise of it is, and

Cindy M.:

Yeah, let's start there.

Sam Charrington:

I think of it as fundamentally AI is an opportunity to free humans from monotonous intellectual tasks, right? There are all these things that we do as quote-unquote knowledge workers in our day where wouldn't it be great if the computer just knew what I was trying to do and could do it? And beyond that, AI offers the opportunity to help us deal with this, you know, deluge of information that we're all presented with every day. It offers the opportunity to help our organizations run more efficiently and in a more optimal way. And it is really showing promise at pushing the frontiers of our understanding of the world and allowing us to advance science and medicine. And so there's a ton of promise in artificial intelligence, and I think that's why people are so excited about it. But the flip side is that there's also a lot of potential risk and danger with AI as a technology. And I tend to think of those risks as really springing from the inherent aspects of AI, meaning how AI works, and the contextual aspects of AI, meaning how it's used and the environment in which it's created.

Cindy M.:

Ah, so let's peel back the onion on that a little bit. So the contextual part was the second point. But the first point was... what was that, again?

Sam Charrington:

How it's created, how we make it.

Cindy M.:

How it's created

Sam Charrington:

Yeah

Cindy M.:

Right. So what are some of the dangers that are inherent in just how it's created? And have you seen companies make advancements, given that it's all the hype and everybody's using it, in terms of getting better at creating it to avoid some of the risks?

Sam Charrington:

So, there's been a ton of advancement in the way that we create AI, and it may be worth spending a couple minutes just talking about, you know, how we create AI. What does that really mean? And as technical as it can be when you're trying to do it, it's, you know, fairly simple to explain, I think. The most promising form of AI today is this thing we call machine learning. And machine learning is an algorithmic approach to building predictive software using data. And a simple example of this is, hey, let's say we've got a bunch of labeled pictures of dogs and cats, and we run them through this algorithm. And ultimately, that helps us create a piece of software that, when given a picture we don't know the contents of, can tell us if it has a dog or a cat in it. And we call that process training a machine learning model, or training a model.
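For readers who want to see what "training a model" looks like in code, here is a minimal sketch using scikit-learn. The numeric "features" stand in for whatever representation you might extract from real dog and cat pictures; the names and the toy labeling rule are illustrative assumptions, not anything described in the episode.

```python
# Minimal sketch of "training a model" from labeled examples (assumed setup,
# not the episode's actual code). Features stand in for image representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 example "pictures", 8 features each
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # 1 = dog, 0 = cat (toy labeling rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)   # this step is "training"
print("accuracy on unseen pictures:", model.score(X_test, y_test))
print("prediction for one new picture:", model.predict(X_test[:1]))
```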

Cindy M.:

Got it.

Sam Charrington:

And, you know, this is a silly example, but it's a powerful idea, right? Because the same concept applies to creating a fraud prediction model: we take a bunch of legitimate and fraudulent credit card transactions, we train a model, and now we can predict fraud. And we can do it in a way that is more reflective of the world that we live in today and, you know, the kind of fraud that's happening in any given moment,

Cindy M.:

Yeah

Sam Charrington:

relative to some rules that maybe some fraud analysts created, you know, six months ago.
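The fraud example follows the same recipe: label past transactions, train, predict on new ones. A hedged sketch is below; the transaction fields and the labeling rule are made-up stand-ins, and real fraud data would be far more imbalanced and feature-rich.

```python
# Hedged sketch of a fraud prediction model trained on labeled transactions.
# Field names and the toy fraud rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.exponential(50, n),    # transaction amount
    rng.integers(0, 24, n),    # hour of day
    rng.poisson(3, n),         # transactions in the last hour
])
y = ((X[:, 0] > 200) & (rng.random(n) < 0.6)).astype(int)  # 1 = fraud (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = RandomForestClassifier(class_weight="balanced", random_state=1)
clf.fit(X_tr, y_tr)
print("transactions flagged as fraud:", int(clf.predict(X_te).sum()), "of", len(y_te))
```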

Cindy M.:

Right.

Sam Charrington:

Another example might be if we've got sensor readings from a machine in a plant, both when that machine is working well and when it's about to break. We can use that to train a predictive maintenance model, and that can allow us to ensure that our plants are always operating by fixing problems before they happen.
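Predictive maintenance can be sketched the same way: sensor readings labeled with whether the machine later failed, used to score today's readings. The sensor names, the failure rule, and the risk threshold below are all illustrative assumptions.

```python
# Hedged sketch of a predictive maintenance model built from sensor readings.
# Sensor names and the toy "about to fail" rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 2000
vibration = rng.normal(1.0, 0.3, n)      # historical vibration readings
temperature = rng.normal(60.0, 5.0, n)   # historical temperature readings
about_to_fail = ((vibration > 1.3) & (temperature > 65)).astype(int)

X = np.column_stack([vibration, temperature])
model = GradientBoostingClassifier(random_state=2).fit(X, about_to_fail)

# Score today's readings and schedule maintenance before the machine breaks.
todays_reading = np.array([[1.45, 67.0]])
risk = model.predict_proba(todays_reading)[0, 1]
print(f"failure risk: {risk:.2f} ->", "schedule maintenance" if risk > 0.5 else "ok")
```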

Cindy M.:

Yep, yep

Sam Charrington:

Or to give a third example, and pay attention with this one. You know, what if we have a set of resumes from all of our past job applicants that we hired, versus resumes of those that we rejected? So maybe we train a model, and that model is now a candidate success predictor, and we use it to automate our recruiting. And that would be great, right? We'd save all of our recruiters a bunch of time, we'd save money flying folks around for interviews, heck, we'd even be saving the planet, because we'd be reducing the CO2 emissions from all these flights that folks are taking. What's not to love there?

Cindy M.:

Yeah let's talk about what's not to love there.

Sam Charrington:

Exactly. You know, if you're listening, you probably picked up that that last one is a trick example. And in practice, it's very problematic because of these inherent and contextual risks that I mentioned earlier. So when I think about these inherent risks, there's a bunch of things that we talk about in the context of AI. There's this idea of transparency, and that is that the most powerful models, what we call deep learning models, are very complex, and we don't really understand how they work, why they work, or why they make the kinds of decisions that they make.

Cindy M.:

Yeah.

Sam Charrington:

And in a business setting, you know, at the very least if a candidate is rejected or accepted, you'd like to know why.

Cindy M.:

Why, exactly.

Sam Charrington:

And the models can't really tell us that; it's not an inherent property of the models to do that. Although transparency in AI is an active area of research, and

Cindy M.:

Right

Sam Charrington:

We're building these capabilities for different types of models all the time.
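One concrete flavor of that research is post-hoc explanation: asking a trained model which inputs its decisions actually depend on. The sketch below uses permutation importance from scikit-learn, which measures how much performance drops when one feature is shuffled. It is just one illustrative technique; the episode does not name a specific method.

```python
# Hedged sketch of one post-hoc transparency technique: permutation importance.
# It estimates how much predictions depend on each feature by shuffling it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = RandomForestClassifier(random_state=3).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=3)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")  # features 0 and 2 dominate
```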

Cindy M.:

Yeah.

Sam Charrington:

Maybe even more illustrative of the danger is the idea that these complex models are very susceptible to picking up what we call spurious correlations. The idea there is simple. Going back to our dogs-and-cats model: your pictures of dogs are all taken outside on the grass, and your pictures of cats, well, cats don't want to go outside, they're in their corner, you know, on their carpety climby things. And you think that your model, hey, it's learned what a dog is and what a cat is. But you give it a picture of a cat outside on the grass, and all of a sudden it thinks it's a dog. Why is that? Well, it didn't really learn anything about cats or dogs; it learned that, hey, this kind of green grassy thing is associated with what you said was a dog. And so it's kind of an inherent aspect of the way these models work that they pick up patterns, and they kind of amplify those patterns and use them as part of their predictions.
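That grass shortcut can be reproduced in a few lines. In the hedged sketch below, a background feature is perfectly correlated with the label in the training set; when that correlation is flipped at test time (cats on grass, dogs indoors), accuracy collapses. Everything here is synthetic and illustrative.

```python
# Hedged sketch of a spurious correlation: the model leans on "grass in the
# background" instead of the animal, and fails when that shortcut breaks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
animal_signal = rng.normal(size=n)                           # weak "true" signal
is_dog = (animal_signal + rng.normal(0, 2.0, n) > 0).astype(int)
grass = is_dog.copy()                      # training set: every dog photo has grass

X_train = np.column_stack([animal_signal, grass])
model = LogisticRegression().fit(X_train, is_dog)

# Test set: cats photographed outside on grass, dogs photographed indoors.
X_test = np.column_stack([animal_signal, 1 - is_dog])
print("training accuracy:", model.score(X_train, is_dog))
print("accuracy when the grass shortcut breaks:", model.score(X_test, is_dog))
```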

Cindy M.:

Right, and that amplification of patterns can get really difficult. So you're talking about dogs and cats; the other area I'm thinking about, where it's obviously going to be much higher risk, is, you know, self-driving cars. When you think about safety, and whether or not the model is trained to recognize a human form when, for example, it's not in a crosswalk, right. And if it isn't, then, I mean, that's all kinds of issues, with the car not recognizing, "Oh, that's a human form," and the car could hit the person if they're not in a crosswalk. Just like the model isn't going to pick up a cat if it's never seen pictures of cats outside. And so some of these are going to be high-risk issues, and others are going to be a little bit lower risk. But the point still gets back to what data you are feeding into the model, right?

Sam Charrington:

That's right. That data is key to the way that these models are created, and we talk a lot about bias in AI and models. And bias is a little bit of an overloaded term in the sense that, as I explained around these spurious correlations, machine learning models inherently work by picking up these biases in the data and using them to make predictions. So we can never really eliminate bias; that's not the goal. The problem is when there are biases that we don't understand, or aren't thinking about, or aren't looking for, and they influence the prediction. So going back to the resume example, we may not have thought of this or recognized it, but, you know, what if all of our historical data shows that traditional male names make it through the screening and get into the interview process, while female or ethnic names are just rejected?

Cindy M.:

Right.

Sam Charrington:

You know, that is a bias that's going to be picked up by the model and encoded and propagated into the way it makes decisions.
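One basic guardrail for a screening model like this is to audit its decisions before deployment, for example by comparing selection rates across demographic groups (a demographic-parity check). The group labels, threshold, and injected disparity in the sketch below are purely illustrative.

```python
# Hedged sketch of a bias audit: compare a screening model's selection rates
# across groups. Group names, scores, and threshold are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
candidates = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000, p=[0.6, 0.4]),
    "model_score": rng.random(1000),
})
# Simulate a model that systematically scores group B lower.
candidates.loc[candidates["group"] == "B", "model_score"] -= 0.15
candidates["selected"] = candidates["model_score"] > 0.5

rates = candidates.groupby("group")["selected"].mean()
print(rates)
print("selection-rate ratio (B vs A):", round(rates["B"] / rates["A"], 2))
# A ratio well below 1.0 flags a disparity to investigate before using the model.
```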

Cindy M.:

Yeah, so HR data is high risk, probably right back up there with safety, you know, different than understanding is it a dog or a cat, or, you know, a good use, which is, you know, reducing fraud. So it's all over the spectrum in terms of how machine learning can actually be used. But when you're focusing back on the data at the beginning, and obviously trying to think about different situations upfront, are there some patterns or processes around governance for, you know, groups that are working on that, that can shield it a bit or make it better? And maybe another way to ask this is, do you view, and you're an engineer, so maybe you have a different opinion than me, but do you view the creation of artificial intelligence, algorithms, machine learning, do you view all of that as just an engineering project? Or is it broader?

Sam Charrington:

Well, I think there are several dimensions to that question. I think one of the things that we've seen, independent of the questions about responsible AI, is that the teams working on machine learning most effectively are increasingly diverse in terms of their roles. Or, to not overload "diverse," increasingly interdisciplinary.

Cindy M.:

Exactly

Sam Charrington:

So whereas historically, or, you know, five years ago, seven years ago, someone working on machine learning would be a data scientist, and that data scientist was kind of a unicorn person that was expected to have, you know, all of the knowledge of machine learning and data analytics, as well as all of the knowledge of the business problems that they're trying to solve. Now what's much more common is to see, again, these interdisciplinary teams that consist of data scientists that know how to create the models, data engineers that know how to pull the data from corporate systems and make it available to the data scientists, machine learning engineers that know how to take these statistical models and get them into real applications, product people that know how to map the requirements of the users and the actual applications into a problem that's well formulated for an algorithmic solution, and user experience people that put the human at the center of the problem, as opposed to, you know, kind of the classic technology hammer looking for nails.

Cindy M.:

Yeah, yeah

Sam Charrington:

So increasingly, the cast of characters involved in successful machine learning efforts in the real world is very interdisciplinary. Now, I think, with regards to

Cindy M.:

I think legal and governance are usually at that table too these days, particularly for the high-risk ones. You know, and for the audience to understand, I think product owners, user experience people, and any of your governance or kind of risk management personnel are going to be outside of the engineering discipline, right? So working in an interdisciplinary team, you also have the issue of sort of Mars talking to Venus, because, you know, they may all be talking English, or whatever language they may be using, but still talking past each other.

Sam Charrington:

That's right.

Cindy M.:

So I don't want to go past that point too far before we stop and recognize that sometimes that can be as difficult as trying to have a conversation with somebody who's speaking a foreign language, and you have to have an interpreter in the room. Right? So that interdisciplinary model is super important, but not as easy as you just made it sound, I think, right? Sometimes that's really hard.

Sam Charrington:

That's right. Developing shared language across different teams is a key challenge, and a difficult one to overcome, and one that presents itself in all different areas of these kinds of systems. Even at the level of, you know, does a customer in your system mean the same thing as a customer in my system? And are the fields the same? There's that kind of language and taxonomy issue. But then when we're talking at a governance level, you know, what does it mean for a system to be fair in the context of my organization and the work that we're doing and our users? The first thing an organization needs to do when it is embarking on a responsible AI effort is really define what that means in the context of that organization. What are the principles that are going to govern the way the organization pursues artificial intelligence, in light of issues like fairness, bias, transparency, equity, etc.?

Cindy M.:

You know, I love that point. And in fact, I was just talking with a company recently about this, how do we get started? Right?

Sam Charrington:

Yeah

Cindy M.:

And I think, I mean, you're deep into it, but there are a whole host of companies out there that are still just trying to figure out how to get started. They may have some algorithms working within their organization, but they don't really have a very mature model. And I love what you just said, it's so important. It starts, I think, with really sitting down and deciding kind of strategically, what is going to be our framework? What are going to be our values? What is our, you know, touchstone when we're thinking about how we want to be perceived with our development and use of AI? And how does that marry up with our company's values, so that they're aligned and these interdisciplinary teams have at least something that can bring them back to talking the same language and communicating well, right?

Sam Charrington:

That's right. That's right. All of those things are important. I think the one thing that is also key, that I didn't hear you mention, is really having that strong executive support and buy-in.

Cindy M.:

And that. From the top

Sam Charrington:

Without that, without it coming from the top, you know, these teams don't have teeth, right?

Cindy M.:

Right, exactly.

Sam Charrington:

They don't have the ability to really have an impact in the business. Some great examples I've seen are, you know, Microsoft has a committee called the Aether Committee. I forget the specific acronym, but it is a committee consisting of both researchers working kind of in the trenches of their responsible AI efforts, and also some of the company's senior leadership. And when issues come up, like, "Okay, we've developed a technology that allows you to take text and create kind of spoken voice using anyone's voice that you train it on; you know, how should we make that available? Should we make that available?", it goes to that committee, and that committee explores the issues surrounding the use of technology like that. And because that committee has teeth, it's sanctioned by the company's CEO, they can say, "Yeah, we're not going to make this available generally; it may be available to specific sets of customers for very specific use cases." Well, we've seen other examples where that kind of support from the top wasn't there, and those ethics teams didn't have teeth, and they pushed back against the machine and the machine chewed them up, unfortunately.

Cindy M.:

Yeah, yeah. So it really is about getting that support from the top, working in an interdisciplinary manner, really against kind of values and principles that are established, supported by the top, and tied back into, you know, the company's mission and values, and then making sure you're looking out... you know, then you get into all the other issues that we've been talking about more deeply: should we or should we not? Can we? And, you know, how is the algorithm actually going to work? And what's the data? And there's so much going on with it. You know, I've also heard, and I'd love your opinion on this, some companies will say, you know, we just need to start small. Let's do a little project here, a little use case there, and build on that. And others, I think, are like, "No, you know, speed, we've got to go fast. It's all about disruption. And we've got to, you know, get that product out there as fast as we can," which does cause some missteps. What do you think? Given how quickly the world is changing, do companies need to go big, bigger, faster, better? Or is it wiser to start smaller and go incrementally?

Sam Charrington:

It's funny that you asked this question. When I talk to AI leaders within organizations, this is their fundamental challenge, and I think of it as AI portfolio management. In order for them to recruit, and to justify their, you know, big budgets, right, AI people are expensive, they have to pursue these moonshots to some degree or another. Some companies, you know, they're gigantic moonshots. Others, they're just, you know, projects that inspire the organization because of the possibilities that they would unlock if realized.

Cindy M.:

Right, right, right.

Sam Charrington:

At the same time, those types of efforts take many, many years, and the organizations need to continue to justify their existence. And more so than just that, there's a clamoring within the typical enterprise to put AI to use, to really go after these low-hanging-fruit use cases like predictive maintenance, which I mentioned earlier. If you've got a significant physical plant, and you've got downtime with machines, or you're sending parts in advance that sit on shelves because they don't get used, or you're taking good parts out of machines, all of that is a tremendous source of expense. And it's been demonstrated that predictive maintenance can address that and reduce cost dramatically. So if you're an organization that has a physical plant, and you're responsible for AI, you can bet your, you know, your bottom dollar that people are knocking on your door saying, hey, I need this.

Cindy M.:

Right, right

Sam Charrington:

And so this idea of portfolio management, of managing these kinds of low-hanging-fruit use cases while still keeping kind of a big picture, is a key responsibility, I think, for senior AI leadership. That said, when taking on any project, even a low-hanging-fruit project, the best advice is always to start small, start the small things small, and try to simplify it as much as possible and build from there. And when you reach the goals that you set up front, before you even started, you know that you can stop, as opposed to throwing technology at these problems because the technology is exciting.

Cindy M.:

Yeah, yeah. You know, one of the things we haven't even touched on, sort of the elephant in the room, I think, with this conversation, and it's a conversation for another day, but it's worth mentioning at least here, is that another responsibility of leadership, when they are thinking about how to make their companies more efficient and more effective in the use of technology, is: what are you going to do with the manual work that now is being done by machine, and the humans who were doing that manual work? So there's a whole other, you know, side of this equation, which will have to be a podcast episode for another day, about retraining and retooling and reskilling and recredentialing, and having that vision of "Yeah, I'm an AI leader, and I'm going to help my company, you know, advance in terms of efficiency and effectiveness. But I also have a responsibility to figure out what we're going to do with these groups of workers that were doing the work that now can be done more efficiently by a machine."

Sam Charrington:

And I agree, that's a key, an important question. My hope is that, over the long term, AI is a spark that allows us to really, in a fundamental way, reevaluate the relationship between labor and livelihood, and really creates an opportunity for us as humans to spend our time and energy, you know, contributing in our kind of best and highest way. But that's not at all to say that along the way it's not going to be a rocky road. And there's certainly going to have to be a lot of retraining and upskilling, and things like that.

Cindy M.:

Yeah, I think that's part of, you know, we talked about hype at the beginning of the episode with the terminology. I think a hype fear that's out there is, you know, machines are just going to put us all out of jobs. And I think that issue needs to be addressed, probably more head on, for those who may feel a bit fearful: understand that now it's more about retooling and, you know, helping you live your life out in a way that's going to be your best and highest use and purpose. So yeah. All right, a couple more questions before we go. Um, you have this incredible "This Week in Machine Learning" podcast, which has, gosh, I don't know, at this point probably over 600 episodes, because I know it was 590-something a little while ago as I was listening. I did not listen to all of them, but I've listened to many. Have you discovered any, let's say, startups along the way, in your journey of educating the field more, that you want to call out, that you think people should kind of keep their eye on for responsible and ethical use of AI and, you know, kind of using it in the right way?

Sam Charrington:

Sure thing. So a couple of things on the podcast. The podcast started out as "This Week in Machine Learning & AI," and it actually started as kind of this news roundup. Like, I would end the week with 100 tabs of all these cool articles about AI that I wanted to learn about and share, and "This Week in Machine Learning & AI" was, you know, my roundup of the most interesting five or seven stories of that week. About three years ago, it was long past the time that I was doing the news stuff; you know, I'd switched to interviews shortly after that start. But we also started doing conferences and publishing ebooks and stuff like that. So TWIML became kind of the umbrella brand for our efforts to inform and educate and create community, and the podcast is called The TWIML AI Podcast. So if you want to look for it on any of your podcast readers or catchers, that's how to find it. And yeah, we should be around 600 episodes now, over 10 million downloads. And every year we do a roundup; we do a podcast with kind of friends of the show where we talk about, you know, what are the trends over the past year in different fields, and this year on topics like computer vision and natural language processing and reinforcement learning, which is an exciting technology in AI. And so if someone wants to catch an episode, those are interesting ones to look for, as well as we've got a responsible AI, or bias, fairness and ethics in AI, I think it's called, playlist, which is a dozen or so interviews that I've done with leaders in the field. So that's where I would point folks. And with regard to startups, I think my first thought echoes a conversation I had, an interview with a woman named Abeba Birhane, where we spent a lot of that conversation talking about this concept of tech solutionism, and cautioning against it. That, you know, tech isn't necessarily the solution to responsible AI; it's, you know, a human-centric approach. That said, I think there's some interesting activity happening in a field called model observability. And the key idea there is that in the early days of AI, all the energy was focused on: let's create this model. Can we create this model? We've got some data; if we put it through this model training pipeline, can we produce a model that can predict accurately? The key metric was accuracy.

Cindy M.:

Right

Sam Charrington:

And as our use of these models has matured, we've realized that that's just one part of a much bigger workflow if you're going to actually use that model productively. And one of the things that organizations that are kind of mature in their use of these models have realized is that you can't just throw the model over the wall to development or to IT or whatever, and say, okay, you know,

Cindy M.:

Go do this!

Sam Charrington:

system, right? That model, you know, needs constant care and feeding. And, you know, that's called monitoring the model, or model observability. And there are some interesting companies, startups, in that space trying to make it easier for organizations to understand these models and how they work. I did a roundup post on my blog about them, but it's companies like Arthur AI and Fiddler Labs and Truera and WhyLabs. But I think that, to me, it's the concept and the space that's most exciting, as opposed to any of the individual companies: this idea that putting a machine learning model into production is an ongoing investment, not a kind of one-and-done thing.
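For a sense of what that monitoring, or model observability, can look like in practice, here is a hedged sketch of one common check: comparing the distribution of production model scores against what was seen at training time with a two-sample Kolmogorov-Smirnov test. The data and the alert threshold are illustrative, and real observability tools track many more signals than this.

```python
# Hedged sketch of a model observability check: has the distribution of model
# scores drifted in production? Data and the alert threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
training_scores = rng.normal(0.30, 0.10, 5000)    # scores seen during validation
production_scores = rng.normal(0.45, 0.12, 5000)  # scores observed this week

statistic, p_value = ks_2samp(training_scores, production_scores)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.3g}")
if p_value < 0.01:
    print("Score distribution has drifted; investigate and consider retraining.")
```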

Cindy M.:

Love it. I'm so glad you mentioned that, because it truly is a lifecycle. And I think that's a great place to end the conversation: it's not just creating it, it's not just using it, you actually do need to then monitor it and tune it appropriately, or even have a kill switch if it gets out of control, right, and be able to pull it out of production if you need to. Oh, that's great. Well, I love TWIML; I highlighted it in my newsletter last month as an additional great resource for places to go, and, you know, with, I do think, over 600 episodes now, there is something there for everyone. And your ebooks and the playlist in particular, to go deeper on this particular topic of responsible and ethical AI, are fabulous. So I love what you're doing. Sam, thank you so much for taking time to educate us a little bit more and share with us what you're doing. I appreciate it very much.

Sam Charrington:

Thanks so much, Cindy. It was a ton of fun.

Cindy M.:

Great. All right, talk soon. Bye bye.

Sam Charrington:

Talk soon.
