This episode is sponsored by Lockton.
AI dominates every conversation in the automotive industry, but very few companies know how to make it truly useful. That focus on real value is what led MIT research scientist Dr. Bryan Reimer to write How to Make AI Useful.
The idea began casually over dinner in Lisbon, when someone asked him what he really thought about AI. Bryan didn’t dive into predictions about machines taking over. He focused on something more practical: how AI only matters when it’s built with people in mind.
He breaks AI down into three realities: the excitement of what it could do, the fear that follows when we realize what it might do, and the long, steady work required to make it truly valuable.
AI can automate the basics and even create new content, but its real strength is amplifying human skill, not replacing it. The goal isn’t an autopilot workforce. It’s a copilot.
That means the fear that AI will take jobs is misplaced. AI changes work; it doesn’t erase it. Just as assisted driving has changed how we drive, rather than removing the driver, AI will shift roles and demand new skills.
Bryan points out that layoffs blamed on AI are often just business decisions wearing a convenient mask. The real question is how companies use AI to make work better rather than cheaper.
To do that, leaders in automotive need to unlearn old habits. Years of rigid processes, slow decision-making, and fear of change make it hard for AI to deliver value.
He argues that useful AI requires trust and transparency. It’s hard for any organization to move forward when fear, hidden approvals, and layers of bureaucracy control decisions. If employees can’t be trusted to make decisions, AI won’t save them. The real challenge is cultural, not technical.
Bryan expands the conversation globally. Japan is embracing robotics as companions, while Europe is focusing heavily on privacy. Culture shapes how AI grows, and automotive companies need to pay attention to what consumers value, not just what tech can do.
He connects this to China as well. China’s speed is not about dumping features into cars. It’s about building products people can afford and use. If Western brands only chase faster or cheaper without real value, they will lose.
AI becomes useful when companies start small, test real-world problems, and continually improve the tool until it actually helps people do their work. That progress may cost more in the beginning, but better safety features, more accurate data, and enhanced customer experiences rarely come from shortcuts. The goal is not to replace people. It’s to build technology that helps them perform at a higher level.
Featured guest: Dr. Bryan Reimer
What he does: Dr. Bryan Reimer is a Research Scientist at the MIT Center for Transportation & Logistics and a key member of the MIT AgeLab. His work focuses on how drivers behave in an increasingly automated world, using a combination of psychology, big data, and real-world testing to study attention, distraction, and human interaction with vehicle technology. He leads three major academic-industry consortia that are developing new tools to measure driver attention, evaluate how people use advanced driving systems, and improve in-vehicle information design, thereby guiding automakers and policymakers toward safer, human-centered mobility solutions.
Episode Highlights:
[03:04] Lisbon, Wine, and a Big Question: A casual dinner in Portugal, fueled by a few glasses of wine, led to a book built around a simple idea: AI only matters when it helps real people, not just shows off technology.
[05:13] The Wow, Whoa, and Grow: AI starts with excitement, triggers hesitation when its power becomes real, and only becomes useful when organizations move past fear and begin building systems that support people, policy, and long-term value.
[09:55] Fear vs. Reality: Layoff headlines make AI sound like a job killer, yet its real impact is changing how work is done, not removing it, and companies often use AI as an excuse while human skills and responsibilities continue to grow alongside the technology.
[11:50] From Notes to Action: AI note-taking creates efficiency, but the real shift comes when companies unlearn old processes and use AI to turn meeting outputs into work plans that assign tasks, drive follow-through, and reshape how the work actually gets done.
[15:04] Unlearning to Compete: To meet China’s pace and build vehicles people can actually afford and use, the industry must rethink old development cycles and focus on AI that supports drivers rather than chasing fully automated cars.
[19:31] Different Cultures, Different AI: Japan embraces robotics as companions, Europe prioritizes privacy, and the U.S. remains cautious, showing how each culture adapts AI in its own way and must shape policies that reflect human needs, not just technology trends.
[21:03] Technology Moves Fast. Institutions Don’t.: Austin’s Law explains why automated driving and AI can advance quickly while governments, policies, and organizations move slowly, creating delays driven by fear, inconsistent rules, and low trust within the systems trying to adopt new technology.
[24:39] Trust Before Technology: Layers of approvals, hidden decisions, and bureaucratic red tape break trust inside automotive companies, and without a culture that empowers people to act, AI has nowhere to grow and no one who believes in it.
[27:59] Fix Culture, Then Code: AI can’t succeed in a blame-driven industry, because once decisions are written into software, companies must own them, learn from them, and evolve like the pharmaceutical model that improves systems over time instead of pointing fingers.
[30:14] Copilot, Not Cost-Cutting: AI isn’t a cheap layoff tool, it creates value when leaders plan for lifecycle costs, learn through small pilots, and use it as a decision-support copilot instead of dumping out low-value work.
[35:08] AI Plus People: AI can speed up translation work, but the real value comes from pairing it with human expertise, where the best results may cost more yet deliver a higher-quality experience that’s worth it.
[38:31] Mindset Over Machines: Real progress happens when leaders stop fearing the technology or spending blindly on it, and instead redesign their processes with a practical, consumer-focused mindset that keeps core values intact while evolving how work gets done.
Top Quotes:
[10:28] Bryan: “I don't think AI, like any other technological revolution, is going to shake all the jobs. I think what it is going to do is change the nature of work. It's automation by a different language. Automation doesn't replace work; it changes the nature of it.”
[23:03] Bryan: “Technology is evolving much faster than the institutional changes to support that, and that fear is a limiting factor. And that's where the fear and hype of AI become so challenging: we need to find a middle ground that allows us to build and evolve these technologies forward faster and more efficiently, while managing the overhype. Automation's going to change the nature of work, and we're going to get rid of all employees due to the fear of, "What am I going to do if technology takes over?" And I think a lot of that comes down to balancing trust, trust in institutions, trust in organizations, trust in my colleagues.”
[37:59] Bryan: “We've got to think about what the value proposition for that is and how we deploy AI and other technologies. If we keep chasing better, faster, cheaper, and that's the sole output, I could tell you that the Chinese will win with that. Our strength is going to become how we strategically focus on each of those elements in a more optimal system. And that's exactly how I think Detroit and other western and legacy automakers are going to have to reinvent, driving the mobility experience to compete with a growing potential tsunami of Chinese cars across the world.”
[Transcript]
Stay true to yourself, be you, and lead with Gravitas, the hallmark of authentic leadership. Let's dive in.
This episode is brought to you by Lockton. Rising benefit costs aren't inevitable for you or your employees when you break through the status quo. Independence matters, it means Lockton can bring you creative, tailored solutions that truly serve your business and your people. At Lockton, clients, associates, and communities come first, not margins and not mediocrity. Meet the moment with Lockton.
In the auto industry, we've seen technology reshape everything from how we design and build cars to how we think about safety, supply chain, people, and just about everything. But with AI, it hits a little different, I think. It feels faster, bigger. You can't turn on the news or look at any articles without hearing something about AI. But how do we cut through the hype and make sure that AI really helps us, really has a benefit to our business and our lives?
My guest today is Dr. Bryan Reimer. He is a research scientist at the MIT Center for Transportation and Logistics. He is a founding member of the MIT AgeLab and the founder of the Advanced Vehicle Technology Consortium, which brings together major automakers, suppliers, and policymakers to study how humans and automation work together behind the wheel. Now, his new book, How to Make AI Useful, is helping us move beyond the buzzwords to understand what it really takes to make AI practical and impactful. This episode is all about what it really takes to turn AI from a buzzword into a meaningful tool. Bryan Reimer, welcome to the show.
[00:03:02] Jan Griffiths: It's great to have you. Wow, this book, talk about timely. When did you start writing it?
[00:04:21] Jan Griffiths: And you said it, bringing the human into the equation. I feel like in the auto industry, we're not very good at that. We're great at identifying all the latest shiny objects, and we wanna jump on board with that latest technology, but bringing the human in. Ooh, that's a little bit more tricky now.
[00:05:12] Jan Griffiths: You're so right. Now, with your book, I love the way you put it into three different sections, and that is: Wow, Whoa, and Grow.
Bryan: You know, it can separate our email. It can provide me with calendar suggestions. It can do lots of different things in scheduling, but these are really simple problems that can be broken down into fairly black and white answers. And that'll change over time. That will evolve. We know that. AI is beginning to create things, whether that's using all my LinkedIn posts to feed an algorithm to create new content, or creating new poetry or painting pictures. You see creativity beginning to be an area where AI excels. In essence, it's really good at filling up a blank page with something to start from.
But the real value, as we see in the wow factor, is AI's power in amplifying human skill. You know, AI can really help us be better. So, AI is an assistant. Even when it creates something new and fills the blank page for me, it's my job to assist that AI, interact with that AI, and create something that is better than the machine in and of itself.
We begin to see the potential value of the wow, but when we do, we begin to say, "Hmm, do I really want this?" Okay, the guardrails begin to go up. Does society really want things to be automated for us? And this is the whoa: "Oh my God, what can it do? What can it allow us to do?" And this is where the debate between what AI can do, and whether we're really going to let AI do what it can do, really begins to shine.
So, you know, an interview with David Dixon, who leads e-learning at MIT, was really uncomfortable. He's worried that we are not going to let AI change education as much as it should. You know, the humans in the system love to throw up roadblocks, 'cause we're a little scared of the unknown.
And, obviously, we've seen many movies that tell us things can go wrong. In the case of a lot of applications, it's very much the human who evolves over decades, if not generations, versus the AI or the technology that evolves much faster. You know, we're not really equipped to change as fast as technology is asking us to. So, "I'm gonna stay with my old pen and paper" becomes an easier answer for an individual who's not willing to trust technology.
So lots of red flags going up. I love this one, since we both sit and work in automotive a lot: in the state of Massachusetts, there's a ballot initiative right now trying to ban automated vehicles in the Commonwealth. Not necessarily something that's really in tune with the future, where automation will take hold and will win over the long haul. It just may be a century or more. Hopefully, a little faster.
So, when we begin to think about pulling back the reins and saying "Whoa" here, we also can think about how we grow this technology. And the book really begins to frame that in two components. One is the tactical. We have a problem. We need to fix this problem. In essence, filling the potholes.
So, whether we're talking about AI today, automated driving, healthcare, or finance, many different topics out there, it's all about tactically solving the component of the problem I'm dealing with today. It's not taking the step back and saying strategically, "How do we want this system to function optimally over the long haul?" And that's really the paving part, taking that step back.
And for AI, or many other technologies like automated driving or the seamless reinvention of healthcare, to really take shape, we're gonna have to sit back and really look at paving new infrastructures that integrate together the human, the technology, and the policy needed for success over the long haul.
Paving is a very hard aspect of saying, "Okay, we gotta leave the tactical side aside," and stepping up: what do we want this to look like? So, when we look at AI in particular, Jan, how do we get it to seamlessly evolve into the background of life, much like electricity, the personal computer, or smartphones have, supporting us in ways that we could never have dreamed of? But to get there is gonna take some reinvention of the policies and the behaviors that we have, to make it work.
[00:10:23] Bryan: No, I didn't hear him laughing at it.
[00:10:28] Bryan: Look, I don't think AI, like any other technological revolution, is going to shake all the jobs. I think what it is going to do is change the nature of work. It's automation by a different language. Automation doesn't replace work; it changes the nature of it. We in Detroit here can think about, hey, assisted driving. Assisted driving is changing driving. I still have a role to play. Even when we think about automated driving, level four systems, we're thinking about systems that are gonna change the nature of how humans are involved with them. Driverless is not humanless, and I would argue that the human capital, and the cost of that human capital, is only going to go up as we begin to truly embrace that the cost of maintaining these AI-enabled systems probably exceeds the cost of building them in and of themselves.
So I think we're gonna find lots of new ways that labor is going to be involved. That human skill is going to evolve. But at the end of the day, we as humans have to be willing to change what we do a little bit to integrate with the automation and the AI. And we're fearful of that. I don't think the layoff headlines are all about AI. I think there's lots of components under that. But AI is a great excuse. AI is exploding, and it makes it a little more palatable for Amazon to announce a 30,000-person layoff. It's a good way to blend that into the scene.
[00:12:23] Bryan: Yeah. And I think, look, you're summarizing notes much like you had a note taker or an administrative support person sitting in your meetings to do that in the past. Sure, that's a great piece of efficiency. But how do we thread the outcomes of that seamlessly into a work plan to ensure that these are actionable and that the follow-through exists?
So right now, there are, you know, wonderful AI summaries sitting in my inbox, your inbox, and everybody else's inbox. But that's not seamlessly integrating into the background of life. Taking that and automatically developing the work plan, the product structure, the meeting outputs that are required to operationalize it: that's down the road yet. And I think that's where we're gonna get to, you know, at some point.
You know, AI isn't magic. It doesn't solve everything instantly for us. It's a mind shift that's gonna require unlearning as much of history as it's gonna require relearning a new history. So, how we met before, how we took action items out of a meeting: we need to unlearn that. That's how we had to do things on pen and paper.
Now, the meeting synthesizer could create a whole new project plan based upon what we discussed and agreed to in that meeting. And we're all gonna be held to the accountability of: on Monday, we're gonna have X done; on Tuesday, we're gonna have Y done, and so forth.
Jan Griffiths: We operate in silos. We know that Silicon Valley-type tech culture, for lack of a better term, operates differently. They don't have all that, I hate to call it baggage, but they don't have all that background. And it can be a cure and a curse. I mean, it's good in one sense because we know what we're doing. We know, in legacy auto land, we know how to build a car. We know.
And in Silicon Valley, they know how to build technology, how to build a smartphone. But the two have to come together. So, we have to unlearn a lot of those processes and procedures that hold us back, that prevent us from moving forward in terms of technology in this industry. We talk a lot about China. We talk about the existential threat of China. China's speed. I can't tell you how many times I've heard this term, China's speed, lately. But that's gonna require us to unlearn a lot of the ways that we do business. Is that right?
Bryan: But our views on system safety need to shift around. We cannot continue to take three to five or more years to develop a platform that China's evolving in 18 months. So the tsunami of Chinese cars may be kept off of US roads for now, but I believe strongly it's coming. Is it five years? Is it 10 years? Is it 20? I don't know, but I can tell you that we need to reinvent what automotive looks like, and we need to reinvent that around the consumer in particular, to me.
bile. If you look back at the:
I think we need to look at the experience, and the experience that the human has in the vehicle, and begin to reshape how we support the human operator to perform better. And as soon as we move from automating to supporting humans, we begin to fuse the elements of machine control together with human expertise. You know, machines are really good at the black and white, and maybe a little bit on the edges. But we as humans make instinctual decisions based upon very imperfect information. Sergio Marchionne's talk from about a decade ago, 'Confessions of a Capital Junkie,' was a decade-old forewarning to the auto industry to move beyond inefficient capital spending and begin to focus on the consumer, and on what product characteristics differentiate for the consumer. How do we use AI to enhance my experience in the vehicle?
I'll give you a really simple example of how AI can begin to fuse in the logic in a vehicle. We all talk about driver distraction all the time, and I don't really think distraction's the fundamental component we should be talking about. It's really driver attention. I'm about to get off an exit of a busy highway. My kids are calling me. They need a little reminder: hey, dad's trying to get off an exit. We'll connect you in a moment. Put it on pause. Let me get off the exit. When I get off the exit, the phone call is then connected. Again, this is just a very simple aspect of how we can use AI and a little automation to help pace information, while being transparent to the consumer wherever possible, in this case, my kids. Me, I know my calls are being delayed for a second, not suppressed. Just delayed for a second or two.
You know, triaging information. You know, when I'm running outta gas, the car's telling me what the prices are at the nearest gas stations and giving me the options: This gas station's 10 cents cheaper, but it'll take you three minutes to get to. This one's 5 cents cheaper, and it'll take one minute to get to. Leaving me to make that executive decision: okay, how much time do I wanna waste for how much money do I wanna save? So these are just little nuggets of how AI can fit into the background seamwork of the automotive experience. But we have to begin to recognize that for the vast majority of miles traveled, humans are going to be at the helm for the foreseeable future.
[00:19:42] Bryan: So look, it's not a one-size-fits-all bucket for how different regions of the US or cultures around the world are going to adapt and change to the advent of technology. You know, consumers in Japan have been enamored with robotics in ways that we have not yet fully embraced here in the US. Some work of my colleagues within the AgeLab looks at robots on the caregiving side as sympathetic pets. A lot more manageable than a real cat or dog.
Anthropomorphizing robots is an area that Japanese culture has embraced and studied for years. We in the US are a little more cautious with technology and robotics. And you can shift over to Europe, where the European AI and privacy acts are really curtailing some of that information flow in different ways.
So each culture is looking at: okay, how much am I willing to let the technology dominate? And then, I talked earlier about moving from that tactical blocking phase, where we are here, to the seamless strategic understanding of how we repave policy. That's exactly it. Different cultures need different things, but we all need to move to a more optimal spend of human capital.
The sparse resources and economics that we have to move each and every one of us into a better state of play where we can enhance our lives and those around us using the technologies at hand.
This episode is sponsored by UHY.
see what's really changed in:
[00:22:18] Bryan: So look, we have always known that we can spend money, create change, create technology, create a new gee-whiz, but humans move slower. Policy and frameworks and governments move even slower. So it is very difficult for policy to keep pace with the technology. I think this is really highlighted in the auto industry around automated driving today.
We see some phenomenal pilots in different parts of the country. A lot of us would believe Waymo is leading that pilot. But we see no national framework really guiding this technology ubiquitously coming outta DC. We see 50 states still moving in 50 different shades of directions. And this is because technology is evolving much faster than the institutional changes to support that, and that fear is a limiting factor. And that's where the fear and hype of AI get so challenging: we need to find a middle ground that allows us to build and evolve these technologies forward faster and more efficiently, while managing the overhype.
Automation's going to change the nature of work, and we're gonna get rid of all employees due to the fear of, "What am I gonna do if technology takes over?" And I think a lot of that comes down to balancing trust: trust in institutions, trust in organizations, trust in my colleagues. And trust is, quite frankly, quite low for many of us at this point in time. You know, we started this conversation a little bit ago: okay, AI layoffs. You know, my trust in that isn't any different than your trust in it. I think it's a headline versus a reality.
So leadership is really about observing, seeing reality clearly while imagining what's next without worshiping the wave. The wave of technology is exploding now. I gotta lead and figure out how I'm going to make my organization work, driving return on investment for my organization. I just can't keep dumping capital after topic, after topic, after topic.
We can go back and stay with automation for a moment. We keep seeing in the headlines, lots of new efforts in automation in Detroit. I don't think any of these are going any differently than the last few. We still have to figure out how we're gonna make money with automated driving.
Jan Griffiths: Here's a really simple example: getting approval for somebody to come on a podcast, right? Two different major, major tier ones. One, there's a conversation within their comms team. They get the information, they make the decision. Took about, I dunno, 24 to 48 hours. The other has to go to the president of a division for approval. And that was, I dunno, four weeks ago? Still haven't heard anything.
That's a very, very simple example, but that kind of behavior manifests itself everywhere, in every procedure, in every process in automotive. And a lot of it comes down to lack of trust. Maybe somebody screwed up at some point in time, and there's a very good reason why it has to go through so many different approval levels. Who knows? But we gotta get over ourselves, and we gotta get over that.
I wanna talk to you about trust, and quote specifically from the book, in the playing-around-in-the-sandbox phase. You say the Transforming Transportation Advisory Committee, which you served on as Vice Chair of the AI subcommittee, suggested to the US Department of Transportation four areas of recommendations that you believe apply to government, industry, and other organizations.
Number one, prepare for AI. Number two, foster a trustworthy culture for AI. Let's just go with fostering a trustworthy culture first. Let's do that first and then add the AI bit. I think that's an area we really struggle with. Do you see that?
Bryan: So I think when we think about AI, when we think about mobility, when we think about our business practices in general, if we're not willing to trust our organizations, what is the culture that we're bringing up, and can that culture be successful in helping us chart our ways forward?
The amount of bureaucratic red tape in our governments, in our businesses, in our academic and social policies is greater than ever. So that is all inefficiency. And that red tape is not going to allow AI to move forward as seamlessly and ubiquitously as we'd like, to bring efficiencies that allow us to do other things that would be more productive. It begins with a fear of the unknown.
Who's approving? Who should approve? And we've all been through these conversations over and over, time and time again. You know, if you don't have employees working in your organization that can be trusted to make decisions, maybe they're the wrong employees in the wrong spot.
[00:28:08] Bryan: Yeah. I think that's a fair one, because you're gonna have to deem successful something that you're codifying into code. I mean, at the end of the day, AI is code, and you're gonna have to make a decision that this code works for my culture. And we have to do that successfully. We have to move beyond the blame-based culture that a lot of our automotive decisions are based upon. It's blame the consumer. They're driving, they made a mistake. No, unfortunately, once you codify it in code, you're now talking about a development decision that was very clearly laid out in a piece of software code. It is easier to reconstruct when it is laid out in code.
So we need to stop the blame, and we need to focus on proactive solutions that can help society. So, okay, we made a decision to deploy a system that did X, and we should have done Y. Learning and evolving together is the right answer, and that does require some reform to the legal system as well. I do like the pharmaceutical model, which is not applied in automotive enough, where you did the best you could with the information at hand to provide a pharmaceutical product or a medical device that enhances our capabilities to solve a problem. This drug seems to be efficacious based upon all of our clinical work. You know what? It isn't all the time. But we made a strong data-driven decision to evolve, and we trust that institution, or have for the last several decades, if not a hundred years, to make decisions that are clinical in nature. And I think that needs to infuse a little more into automotive.
Not to say we want the same timelines that are required to bring a drug to market, but the same concept of: we have to prove this works based upon what we know. And if we do that reasonably, you know what? We have the ability to come back, whether that's through over-the-air updates or other software approaches, to fix things that we didn't understand at the time. Because as humans, we are learning, and we're learning faster and faster, and we need to correct those mistakes as we learn them. And that's the unlearning part.
[00:30:34] Bryan: You know, and that's a really good question, Jan. I think a lot of that comes back to how we report to Wall Street. We report to Wall Street that profits are declining, so we're cutting workforce to cut costs. That's a simple answer. Wall Street understands that.
We need to be speaking much more in the efficiencies in our organizations, and we need to be talking about how we're making our organizations more efficient using a combination of human labor and machine intelligence. And the fact that by combining those two together, I am producing a product that is more efficient to design, develop, and deploy over the lifespan.
And that lifespan piece is a piece that I just think is so underappreciated in both automotive and technology. We often think about the investment needed to get a product to market. And when you're thinking about AI-enabled technologies, you know, getting something to market is probably only a fraction of the cost it's gonna take to enable it over the lifespan of the product. So, we think about developing a feature. Now, we need to be thinking about the lifecycle management of that feature. And that's often left as an afterthought. And that's why, in safety-centric environments like the auto industry, it is so difficult to maintain vehicles over the 20-plus-year lifecycle in which we're still driving 'em on the road.
So, Jan, when I'm thinking of leadership, I think what's really important, if we want to be successful in an era where some MIT research has suggested up to 95% of AI pilots are failing, is to start small, scale smart, pilot narrow, repeat, and find what works to drive value to the consumer before we begin to invest hundreds of thousands, if not millions, of dollars into mission-critical technology.
Treating AI much more as a copilot, not as an autopilot. Using it to assist decision making, not replace it. I think we need to very quickly begin to build up the guardrails to ensure that our organizations are leveraging these tools to advance, not to create more documentation and more work slop. To me, work slop is the abuse of AI and automation to produce low-value output. It needs to be something that is culturally unacceptable in organizations. So, it's about enabling my organization the privilege and the freedom to learn faster.
So I told the C-suite at an auto supplier the other day: you know what, AI is changing so quickly, you as an organization need to figure out how you're gonna learn and fit AI into your organization faster for success. You can't look to the academic market that traditionally was used for learning to learn something that is evolving literally day by day. But what's important is that you provide your team the trust to explore, and I would go with the contract to report back what you learn and how it works or how it fails. That's something I want reported back throughout my organization, so I create the institutional knowledge and I evolve and learn.
In essence, it's investing in your own workforce's ability to create new knowledge, as long as they're willing to share that back with the organization at large. It's using AI much more as an equalizer across your organization, where different staff levels can use it for different reasons and move faster to produce a better outcome.
I believe strongly that human skill coupled with machine intelligence is what's really gonna produce the best bang for the buck over the long haul. As much as we really want Jetsons-like science fiction, it's really about collaboration. The copilot, not the autopilot.
So as leaders focus on AI in their organizations, I think it's much more about treating AI as a copilot, not an autopilot. Today, and maybe this will change tomorrow, it's effectively something that can help my employees as a decision support tool, as an editor, as a colleague in the middle of the night offering suggestions on a document. It is not an autopilot to do things in and of itself. Automation has not solved self-driving, nor is it going to solve dealer delivery, call centers, and every other use we'd like to apply automation to. Human expertise is often going to be needed in the gray areas and in between for the foreseeable future. Perhaps, someday, Jetsons-like science fiction will occur.
So it's this mindset of don't fear the AI. Find a way to integrate it into your work, to make it work for you to get the result that you want. It's not gonna be the answer to everything, but it's about how you adopt it into your mindset, into your culture, into your working routines. That's where we've gotta start, right?
I think this really, you know, carries over to assisted driving features, okay? Assisted driving features, level two driving assist, are not necessarily a cheap feature to add to a car. They improve the experience and, if developed and deployed correctly, probably help on the safety side as well. The end result may be a system that costs a little more but is more comfortable, convenient, and a little safer along the way.
So we need to be thinking about what that benefit to the consumer is. If you're translating something into Mandarin and you want it to get listened to, and the tone comes out a little stronger by using an iterative process that involves AI, you're winning. So it's not always better, faster, cheaper, the Chinese way; sometimes it's better, sometimes it's faster, and sometimes, yeah, it's cheaper.
And we gotta think about what the value proposition for that is and how we deploy AI and other technologies. If we keep chasing better, faster, cheaper as the sole output, I can tell you, the Chinese will win with that. Our strength is going to come from how we strategically focus on each of those elements in a more optimal system. And that's exactly how I think Detroit and other Western and legacy automakers are gonna have to reinvent the mobility experience to compete with a growing potential tsunami of Chinese cars across the world.
[00:38:47] Bryan: And how do we seamlessly integrate that into the background of society so it is successful? You know, if I put up all the firewalls and say, "I don't want this, I'm scared of it," that's not gonna help me. And if I release the floodgates of capital and throw a few hundred billion dollars at it, that's not gonna solve anything long term either.
So I think it's very much about a mindset shift at the leadership level in organizations. We need to redefine and optimize every business process that we can for the outcome that makes sense. If the outcome is a consumer-focused tool or car system, it's one direction. If it's automating something simple, it's a different direction. But it's about taking apart what the Chinese have become so good at, better, faster, cheaper, and saying, "Okay, how do we optimize each of those components for this application?"
And if we can do that, I think we'll figure out how to take AI and make it truly useful for Western culture, supporting the mindset shift of keeping some of our core values intact as we transform into a technology-infused society. If we don't, we're not gonna compete with the folks who just integrate better, faster, cheaper more effectively than we do.
[00:40:29] Bryan: Thank you for having me.