Today we’re sharing another insightful presentation from our most recent Innovative Executives League Summit, where Maya Mikhailov, Chief Executive Officer and founder of SAVVI AI, discusses machine learning as a powerful toolkit of solutions. Comparing efficiency with and without AI, she highlights how the proper tool makes the difference and cuts through assumptions. Is ChatGPT the AI tool that makes Amazon such a success? No. It is Amazon's recommendation engine built on billions of data points. Looking beyond the hype of select functionalities of machine learning, AI applications abound.
In this episode, Maya introduces AI’s key practical uses, as she currently views the technology: decision automation, classification and prediction, large language models, and writing documentation and code. She emphasizes how natural language makes a query more accessible than programmatic language and shares example after example of increasing efficiency. Maya’s presentation offers insight into where AI technologies are gaining traction (delinquencies) and continuing to grow in popularity (writing content).
Maya dives into the importance of guardrails, building trust, and maintaining transparency when utilizing machine learning. She shares where AI is having massive success (summarizing data) and the problems that might emerge from AI reliance (“code bloat”). Maya discusses how when AI is wrong, it is still learning. Employing the right AI tool is essential for strategy and meeting goals.
Maya Mikhailov is the Chief Executive Officer and founder of SAVVI AI. She co-founded GPShopper, which Synchrony acquired in 2017. At Synchrony, Maya served as SVP and General Manager of the Direct-to-Consumer group (FinTech AI). She has been a speaker at CES, Money20/20, and CTIA and featured in Bloomberg, CNBC, Forbes, Business Insider, and other outlets. Maya served as an adjunct professor at New York University, lecturing on digital and mobile technology. She earned a bachelor's degree in international management at American University.
If you'd like to receive new episodes as they're published, please subscribe to Innovation and the Digital Enterprise in Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts. If you enjoyed this episode, please consider leaving a review in Apple Podcasts. It really helps others find the show.
Patrick:
Hello fellow innovators. This is Patrick Emmons. Today we're sharing an insightful presentation from one of our Innovative Executive League Summit speakers, Maya Mikhailov. If you're unfamiliar with the Innovative Executives League, it's an invite-only community of innovators, entrepreneurs, and intrapreneurs with a growth mindset and a passion for innovation. I founded the organization about five years ago to increase the network of innovation here in the Chicagoland area and also on the national scale. At the November summit, Maya spoke to the audience about finding value in machine learning, moving past the hype and to practical solutions.
She exited Synchrony Financial, and she's been a speaker at CES, Money20/20, and CTIA, and has been featured in outlets such as Bloomberg, CNBC, Forbes, Business Insider, and others. She's also served as an adjunct professor at NYU lecturing on digital and mobile technology. I know you'll enjoy this episode.
Maya:
So back when I started machine learning, you'll never believe this, I started machine learning with sneaker drops. That's right. If you wanted a pair of Yeezys, you had to go through our algorithms. Now, that also got me the attention of several banks, one of which I exited to ... Again, Mark, I'm sorry. I should have read your book first. I should have. I will next time, I promise.
I exited to Synchrony Financial, where I stood up and built a new division that built AI-enabled products for banking and credit services. So that was a really exciting experience. I've also taught at NYU for over six years in digital strategy and technology.
And basically, I'm here to talk to you about AI. But before I do that, can I get a really honest show of hands? How many of you guys are kind of sick of hearing about AI? You're kind of sick of it. Be honest. Be honest. I think if we did an eyes closed, there'd be a lot more hands up.
Here's the problem. So recently I went to a conference on financial innovation, not the kind of innovation that gets you in trouble with the SEC, the kind of innovation that's legal. And recently, as I was setting up my presentation for AI, I walk into the room and I hear this behind me, "If I hear another startup talk about AI, I might puke."
So you can imagine that is not the way you want to start your presentation talking about AI. And when I turned around and I asked that bank executive, "What is it about this that makes you so sick of it?" She gave me a really honest answer. She goes, "First of all, I think it's a bunch of nonsense. All I see is startups waving their hands, giving me a lot of hand waving hyperbole that AI is going to change my business, change the world, replace half my team, and I think it's all a bunch of BS."
"I have seen very few case studies. I've seen very few action items. I want to know what my business can actually do, and none of this in the future, your cars will be run by AI, in the future. I don't want to talk about that. I want to talk about what I can do right now." And that's a really, really fair point and a fair point that I hear echoed over and over again from business executives.
And so the first thing we're going to talk about today is something really, really simple. When people talk about AI, they often talk about it as if it's like Harry Potter magic, we're going to sprinkle some AI on that. We're going to do a little spell here, and boom, your business transforms. And they talk about it with mysticism, but what is it exactly? You know what it is? It's this. That's all it is.
AI is basically any computer program that can learn. It's really, at the end of the day, not really mystic and it's been around for a really, really long time. Machine learning might be older than I am, but we don't really talk about that. We like to talk about, oh, it's this new thing that just came out with OpenAI and then they just invented AI. That's not at all what happened.
But how does AI really change our business processes? Let's start with what we do right now because this is really important to think about how AI is changing things. What we do right now is we collect a lot of data. Somebody at your organization is looking at that data in Tableau, in Excel, in whatever they're using, and they make some assumptions.
They make some guesses about, oh gosh, this process looks a little inefficient because this number is going down. Or, you know what? When users come to our website, if they're already returning visitors, they don't like to click on our banners because we're showing them the same information they already know.
So they make some guesses and they make some if-then statements. How about this? When people come to our website, if they're a returning visitor, we show them this content. If they're a new visitor, we show them that content. Then they take that logic and they encode it in software. And then they wait, they collect data, and they see if their guess was right. It's pretty simple.
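The hand-coded approach Maya describes can be sketched in a few lines. This is a hedged illustration, not any real system: the banner names and the `returning` field are invented, and the point is just that a human's guess gets frozen into if-then logic that only changes when a developer changes it.

```python
# A human's guess, encoded as a fixed rule by a developer.
# The rule only changes when someone rewrites this function.
def choose_banner(visitor):
    """visitor is a dict of known attributes, e.g. {"returning": True}."""
    if visitor.get("returning"):
        return "new-arrivals-banner"   # returning visitors already know the intro content
    return "welcome-banner"            # new visitors get the introductory pitch

print(choose_banner({"returning": True}))   # new-arrivals-banner
print(choose_banner({"returning": False}))  # welcome-banner
```

After shipping this, the team waits, collects click data, and checks whether the guess was right.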
Here's where AI is really changing things. We are no longer just tacticians executing if-then statements. We are now strategists. We are saying, "Hey, I don't have to think about what banner to put on my website or which truck to roll out of which facility. What I'd like to think about is what are my business goals? Are my business goals to increase traffic? Are my business goals to lower ACH return rates? Are my business goals to deliver packages faster but cheaper?"
Then we tell a machine programmatically, "Hey, by the way, these are my business goals and this is the data that I'm collecting about my business. Can you help me figure out how to get to that goal?" And that's literally what machine learning learns to do. It executes tactics. It trains and it finds patterns. And through the patterns and the data that you're already collecting, it can deploy thousands, tens of thousands of if-then statements and if-then statements that even learn and evolve over time.
So no longer are we waiting for our development team to encode new logic, the machine is getting smarter based on the data that it's seeing. That kind of changes the game because we're no longer guessing and executing code. We're now thinking about strategy and that's really important to know.
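The shift Maya describes, from encoding guesses to deriving rules from data, can be shown in miniature. This toy sketch (segments, banners, and the click log are all invented, and real systems use far richer models than a click-rate table) picks the historically best banner per visitor segment, and the "rule" updates automatically as new log rows arrive:

```python
from collections import defaultdict

def learn_banner_policy(logs):
    """Derive a per-segment banner choice from observed (segment, banner, clicked) logs."""
    # segment -> banner -> [clicks, impressions]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for segment, banner, clicked in logs:
        stats[segment][banner][0] += clicked
        stats[segment][banner][1] += 1
    # For each segment, pick the banner with the best observed click-through rate.
    return {
        segment: max(banners, key=lambda b: banners[b][0] / banners[b][1])
        for segment, banners in stats.items()
    }

logs = [
    ("returning", "welcome", 0), ("returning", "new-arrivals", 1),
    ("new", "welcome", 1), ("new", "new-arrivals", 0),
]
print(learn_banner_policy(logs))  # {'returning': 'new-arrivals', 'new': 'welcome'}
```

No developer re-encodes logic here: feed the function more data and the policy it emits can change on its own, which is the "machine getting smarter based on the data it's seeing" in the simplest possible form.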
But wait a minute, if this machine can create, is it alive? Is it one of us? No, it is not alive. It is just a different form of pattern recognition. Sentience is still a Hollywood reality, not a machine learning reality, and this is really important to know. Generative AI: how many of you have seen Westworld, the show?
Okay, I hope I'm not doing a spoiler alert. But in the end of the last season of Westworld, we discover that the human being actually has a little earpiece where basically the machine is telling them all the answers. And when some people talk about generative AI, they pretend that it's the Westworld algorithm that's going to tell you all the answers of what to do right in your business. It is not that. I can't stress that enough.
But it is really cool, don't get me wrong. It can do things. It can write content. It writes really great content. It's helped a lot of marketing teams. It's helped a lot of teams that used to slave over documentation and can now do it cheaper and faster. It can make great cat art. I mean, if you're in the business of creating cat art, it might replace you because it's really good at it, and it can make videos.
Now, we can have an ethical discussion, certainly about trust and whether some of these videos are causing people to believe things they shouldn't. But let's say AI makes videos because it certainly does. It also hallucinates sometimes. And it's weird that we even use the word hallucination because hallucination is a very human word to describe what the machine is actually doing. The machine is just guessing.
When it writes a paragraph for you, it is guessing the probabilistically most acceptable next word, next sentence, next paragraph. It's guessing. So when we say it's hallucinating, it's not like it's on an acid trip making things up because it suddenly saw The Doors play in concert. No. What it's actually doing is taking a guess, but probabilistically, that guess was wrong.
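"Guessing the next word" can be shown with a toy bigram model. This is a drastic simplification of a real LLM (the corpus is a made-up ten-word string, and real models use learned representations, not raw counts), but the core mechanic is the same: pick the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequently observed word after `word`."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # 'cat' -- follows 'the' twice; 'mat' and 'fish' once each
```

A "hallucination" in this picture is just a case where the most probable guess happens to be factually wrong: there is no intent, only statistics.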
Here's a great example. I talked to a hedge fund recently. They were using generative AI to read environmental studies that companies issue. Public companies issue environmental studies all the time: I'm Pepsi, I've lowered carbon dioxide by X, I've done this, I've done this. They were using it to summarize reports, and then they were spot checking with their analysts.
Their analysts found that sometimes generative AI, because it was trained on the data of these reports, would not exactly be accurate about what Pepsi was doing. It was just accurate to the reports in general. So if Pepsi planted 1,000 trees, it would say, well, Pepsi planted 10,000 trees, because on average, the reports it was trained on planted 10,000 trees. That's a hallucination, but remember, it has no intent. It's not making up things.
But there's all sorts of other things you can do with AI beyond just chatbots and cat art. And that's what we're going to talk about today, the practical realities. Please take a picture of this slide because if you remember nothing about what I talk about today, literally nothing, not even my name, take a picture of this slide and show it to everybody who starts talking to you about AI.
Because AI is not one thing, it is a toolkit composed of many different types of tools. And some of those tools are good for some things and some of those tools are good for others. Just as if I try to screw in a screw with a hammer, it can kind of get the job done, but not really well. So, when you think about AI, it's not just generative. That's one tool. It's also classification systems. It's also prediction systems, decisioning automation, natural language processing, computer vision. This isn't even all the tools. I've just selected some that are most popular.
So next time someone tells you AI is going to take care of that for me, you can say, "Cool, what kind of AI? What kind of AI are you using?" Because it's not one thing, and when you start thinking about it as a toolkit, you start thinking about the value that each of these tools can bring in different situations. They're optimized to solve different problems. They can increase efficiency, they can increase revenue opportunities, but you have to have the right tool for the right job. So let's talk about some of these tools.
Decision automation, recommendation, classification, prediction, chat, these are the tools I'm going to cover today. I'm going to talk about practical use cases. I'm going to talk about the boring shit, what works, what doesn't, what people are actually doing. Because let me tell you, those billions of dollars that Amazon has made never involved a chatbot. It's using a combination of these tools.
So the first one is decision automation. So what does it mean when an AI produces decision automation? This means that an AI is choosing between a set of options based on the data it's seeing. It's almost like a multiple choice test. Let's say you had 17 facilities that you can ship from. Those are your 17 options. The AI can ingest the data and say, well, for this given package, given your strategy, remember you're now the strategist, given your strategy of trying to lower cost while delivering things on time, which of these 17 options is the right option?
But unlike a human, it can do this on tens of thousands of rows of data almost instantaneously. It can plow through data, find the patterns to make a decision that meets your goals. It's like a really hyper intern that's hopped up on Red Bull and Twizzlers who wants nothing better than to make you happy all day long, 24 hours a day, without a vacation, without breathing.
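A decision-automation policy like the one Maya describes can be caricatured as a scoring function over a fixed set of options. Everything here is invented for illustration (the facility names, costs, and weights), and a real system would learn the scoring from historical outcome data rather than hard-code it, but the "multiple choice test against a business goal" shape is the same:

```python
# Three of the "17 facilities" a package could ship from (illustrative data).
facilities = [
    {"id": "A", "cost": 12.0, "days": 2},
    {"id": "B", "cost": 8.5,  "days": 5},
    {"id": "C", "cost": 10.0, "days": 3},
]

def choose_facility(options, deadline_days, cost_weight=1.0, lateness_penalty=50.0):
    """Pick the option that best serves the goal: cheap, but on time."""
    def score(facility):
        days_late = max(0, facility["days"] - deadline_days)
        return cost_weight * facility["cost"] + lateness_penalty * days_late
    return min(options, key=score)["id"]

# Goal: deliver within 3 days at lowest cost. A is on time but pricey,
# B is cheap but late, C is on time and cheaper than A.
print(choose_facility(facilities, deadline_days=3))  # C
```

The machine's advantage is doing this scoring across tens of thousands of rows instantly, and, in a learned system, tuning the scoring itself as outcome data accumulates.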
So here's some practical use cases where decision automation as an AI tool has helped companies. First of all, it increases inventory efficiency. McKinsey did a study showing that warehouses that employed decision automation were able to increase inventory efficiency by 35%. It can pick the best processing facility. It can determine the best email campaign. If you have 10 different email options, it can determine the best one to send. It can even do debt collection optimization. I hate to say this in this economy, but we're getting a lot of requests for debt collection optimization and charge-off predictions.
Recommendations, Netflix is awesome at recommendations. Netflix understands the signals that you as a consumer are giving and suggests content for recommendations. Think about how you can use that in your business. Often these are revenue driving use cases, recommending next best action. Looking at your data and saying, "Wow, I have a bunch of consumers and I have a bunch of data about these consumers, and we're introducing a new product. Who's most likely to respond to this product? What are the signals?" Next best product, next best insurance bundle, next best action for a call center.
Recommendations are basically making hundreds of millions, if not a billion dollars for Amazon every single year. It's a simple, optimized recommendation engine. What makes their and Netflix's recommendation engines maybe better than any other one you've seen? Billions and billions of data points. The engine gets better the more data you feed it.
Classifications. Okay, everybody has probably seen classification problems: you can use visual classification, you can use audio classification, you can use image classification. This is really transforming manufacturing right now, especially from a quality control perspective: being able to look at products on a line millions of times a day and detect a slight anomaly, detect variances in color, detect variances in size, and lower costs via classification AI.
You can also use it for customer prospecting. I want to classify my customers by certain behaviors, certain outcomes that we hope to achieve. And we have one customer who's using this for loan underwriting. They're looking at the data of their customers, the data coming through their bank accounts, and they're using it to paint a 360 degree picture of a customer.
Before, we relied ... The traditional way of using finances is send me your W-2s, I will assess how much income you actually make. And then based on that, I'll offer you a line of credit. The modern way that SoFi, Ally, all the banks are doing is they're saying, actually, what I want to do is a cash flow analysis.
That means I know your W-2 is probably not a full picture of who you are. You might be getting some money from a sympathetic aunt. You might have an Etsy side hustle going on. You might be Ubering on weekends. I want to get a better perspective of who you are based on the money coming in and out of your bank account because that's more reflective of your cash position and how much money I want to lend you.
And finally, predictions. Predictions are almost the simplest way to use machine learning because it's so good at predicting. It's so good at predicting what the next possible outcome will be given a set of data points. And you can predict next month's revenue, predict, again, delinquency. Sorry, I keep mentioning delinquency, but it's so popular right now in the financial services community. It's almost disturbing. And predicting sales volume.
Anywhere you're poring over an Excel spreadsheet or a Tableau dashboard and asking what's going to happen next month, you can use AI practically to help you make that prediction faster and more accurately. Why? Because AI, even if it makes a wrong decision ... This is really important. Even when AI is doing things wrong, it's learning that it did something wrong, and it's adjusting the algorithm and the strategy. And that's just one of the most important parts about AI in general.
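The spreadsheet-style forecasting Maya mentions can be sketched as a one-variable trend fit. This is a minimal, stdlib-only illustration (the revenue numbers are invented, and production systems use far more sophisticated models): fit a straight line to monthly revenue and extrapolate one month ahead. Retraining on each month's actuals is the "learning from being wrong" she describes.

```python
def forecast_next(series):
    """Fit a least-squares line to the series and extrapolate one step ahead."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # predicted value at the next time step

revenue = [100, 110, 120, 130]   # a perfectly linear toy series
print(forecast_next(revenue))    # 140.0
```

When next month's actual comes in, you append it to the series and refit, so a wrong forecast automatically pulls the next one back toward reality.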
And finally, chat ... Oh, okay, I'll talk about generative AI a little bit. I know everybody wants to hear about it, but let's talk about large language models. I have been talking to business leaders all around the country since ChatGPT came out, and there's been a weird schism that occurs. Everyone's really excited about generative, really excited. Oh yeah, I have a generative project going. My CEO told me we have a generative project going. We definitely have a generative. The board wants to know what we're doing in generative. Yeah, we're doing something in AI.
I've seen a lot of lab projects, a lot. What I fail to see over and over again is a lot of scalable production use cases. I'm just going to be honest. And here's some of the problem with that, and Mark touched on this as well. Why don't people trust generative? They don't productionize generative in some cases because A, they don't know how it works. And the CFPB just issued guidance about a week ago that said, if you can't explain your model when they come calling, be prepared for fines and be prepared one day to testify in front of Congress and a very angry Elizabeth Warren.
Nobody wants this. Banks do not want to testify in front of Elizabeth Warren. That's their nightmare. When bankers wake up in a sweat at 3:00 AM, it's because Elizabeth Warren came to them in a dream. The problem is, right now, the lab projects are really cool. They're keeping the boards placated. They're keeping the executives placated, but they're not moving beyond innovation labs because we don't have that trust, because we don't have guardrails.
And this is something really important to understand about AI. When you think about practical AI projects, you don't have to let it loose and just say it'll figure it out. Just like you wouldn't let a drunk teenager let loose with your car. It'll figure it out. He'll figure it out. She'll figure it out. No, they won't figure it out. You can set business rules and guardrails.
And part of building trust in an organization is saying to that organization, yes, we're using AI, but here is the box in which it's allowed to play. Here are the guardrails we've put around it. Oh, and by the way, we have transparency. Here is what this means. Here's how it's working. Here's the data it's taking into consideration. Here's how we trained it. Here's how we're preventing it from doing something we don't like.
Where I'm seeing chat LLMs and ChatGPTs do really, really well is first summarizing data. If you're in a business where you're reading a lot of reports and you need it to summarize something, you can fine tune it to summarize, and you can start setting up guardrails to make sure that it's not hallucinating too much. It's great for summarizing data. It's great for providing an alternative query to a question.
So how many of you are familiar with SQL queries? Yeah. Okay. So nobody likes SQL queries. You can write table ... You join tables wrong. Things go bad. In fact, I was banned at my last company from ever writing another SQL query again, because I took down our entire server infrastructure because I wrote an open-ended query, and it locked up all the machines. And then after that, they never let me do anything again. It's like, "I just wanted to know this." And they're like, "No, don't touch that anymore."
Natural language is a great interface, where before we relied on programmatic language that not everybody knew and not everybody knew how to use well, this one included, to query data. So where I'm seeing ChatGPT-type competitors or LLMs succeed is when they're trained on a data set and you're using them to query that data in natural language. I want to know what my sales are next month, or this month. I want to know who the best salesperson in the Southwest region was. That's a great use case.
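The natural-language-to-SQL pattern, including the guardrails Maya keeps coming back to, can be sketched as a thin wrapper around a model call. Everything below is hypothetical: `fake_llm` is a canned stand-in for any chat-completion API, and the table whitelist and SELECT-only check are illustrative guardrails, not a complete safety story.

```python
ALLOWED_TABLES = {"sales", "reps"}  # guardrail: the box the model is allowed to play in

def nl_to_sql(question, llm):
    """Turn a natural-language question into SQL via an LLM, then vet the output."""
    prompt = f"Translate to SQL over tables {sorted(ALLOWED_TABLES)}: {question}"
    sql = llm(prompt)
    # Guardrail: refuse anything that isn't a plain read-only query.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("only SELECT statements are allowed")
    return sql

# A canned fake model, standing in for a real LLM call.
fake_llm = lambda prompt: "SELECT rep, SUM(amount) FROM sales GROUP BY rep"
print(nl_to_sql("Who was the best salesperson?", fake_llm))
```

The design point is that trust comes from the wrapper, not the model: the business rules around the generated SQL are what make it safe to hand non-programmers a query interface.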
Writing documentation, I mean, nobody likes writing documentation. There's a reason perhaps you've heard this use case. So Samsung had this hyper secret chip that they were developing, and their developers and engineers so hated writing documentation that they started shoving everything into ChatGPT until Samsung realized they were leaking super proprietary IP into ChatGPT. And then of course, they took ChatGPT away from everybody because nobody writes documentation.
And you know what LLMs seem? An easy button. Who wouldn't press the easy button? I hate writing, boom, boom, give me the answer. Here's some specs, you can figure that out, right? And code writing. Code writing is a great use case for large language models because code is just another language and it's a language that it's learned quite well.
Now, the flip side of that is if you're a developer and you think you're so hot, because now you don't have to write a lot of code, if you're a bad developer, you really have no way of assessing if the code that's being output is any good or not. And then I talked to one senior leader who's petrified of using ChatGPT with their developers because he brought up a really good point I hadn't thought of, code bloat.
It's now so easy to write new code that they're not fixing old code. They'd just rather rewrite that whole function from scratch. And then all of a sudden he's like, "In three years, I think there's going to be new startups that just specialize in cleaning up code bloat from companies who rewrote their entire stacks rather than just fix the work they had." So that was a really interesting phenomenon.
Patrick:
Thanks for listening to this week's episode. We really appreciate everyone taking the time to join us today. If you'd like to receive new episodes as they're published, you can subscribe by visiting our website at dragonspears.com/podcast or find us on Apple Podcasts, Spotify, or wherever you get your podcasts. This episode was sponsored by DragonSpears and produced by Dante32.