AI, Trust, and the Human Shift: What Automotive Leaders Must Do Next
Episode 172 • 11th December 2025 • The Automotive Leaders Podcast • Jan Griffiths


Shownotes

Register NOW for the UHY 2026 Annual Automotive Supplier Outlook - click here

Sometimes a conversation hits so deeply that it demands a part two, and that’s exactly what happened after our episode with MIT’s Dr. Bryan Reimer. The response was immediate, and the very first message came from CADIA CEO Cheryl Thompson, who had been quietly diving deep into AI for months. Her reaction captured what so many leaders are feeling right now: excitement, overwhelm, fear, and possibility all at once.

This episode brings Cheryl and Bryan together to talk about what AI is really doing inside companies — not the hype, but the human impact. The emotional truth? AI is forcing us to look hard at our culture, our trust levels, and our willingness to unlearn the habits that hold us back. That’s where transformation starts.

Cheryl shares how AI has changed the way she works, creates, leads, and even manages her daily life. But she’s honest about the trap many leaders fall into: using AI to produce more… instead of stepping back to breathe, think, and lead. Bryan brings the research lens, grounding the conversation in what AI can do, what it can’t, and how leaders must shift from delegation to collaboration if they want AI to be truly useful.

Together they unpack psychological safety, generational differences, the rise of agentic AI, and the cultural tension AI exposes inside legacy automotive. And they remind us that AI will not replace leaders — but leaders who use AI well will absolutely outpace those who don’t.

This isn’t a conversation about technology. It’s a conversation about courage, trust, and the future of leadership in an industry that desperately needs to move faster while staying true to its values.

Themes Discussed in This Episode

  • How trust and culture determine whether AI succeeds or stalls
  • Why leaders must collaborate with AI instead of delegating blindly
  • What the Wow, Whoa, Grow framework reveals about human behavior
  • How generational differences shape AI adoption and comfort levels
  • Why AI in automotive demands unlearning old processes, not just adding tools
  • The risk of locking down AI too tightly — and the risk of letting it run wild
  • How small businesses and startups are using AI to outrun traditional OEMs

Watch the Full Video on YouTube - click here

This episode is sponsored by Lockton, click here to learn more

Featured Guests

Cheryl Thompson, CEO, CADIA

Cheryl leads CADIA: Culture Evolved, where she equips organizations to build equitable, high-performing cultures. A former manufacturing engineering leader in the automotive industry, Cheryl is known for her human-centered approach to leadership, her commitment to psychological safety, and her skillful integration of AI into learning and development. She helps teams work smarter, remove friction, and accelerate change by pairing technology with deep emotional awareness.

Dr. Bryan Reimer, Research Scientist, MIT

Dr. Bryan Reimer is a Research Scientist at the MIT Center for Transportation & Logistics and a founding member of the MIT AgeLab. His work examines how humans and automation interact in real-world conditions, including driving, attention, decision-making, and safety. He leads three major academic–industry consortia focused on human-centered vehicle technology and is the author of How to Make AI Useful, a practical guide for leaders navigating AI’s cultural and operational impact.

About Your Host – Jan Griffiths

Jan Griffiths is a champion for culture transformation and the host of the Automotive Leaders Podcast. A former automotive executive with a rebellious spirit, Jan is known for challenging outdated norms and inspiring leaders to ditch command and control. She brings honesty, energy, and courage to every conversation, proving that authentic, human-centered leadership is the future of the automotive industry.


Episode Highlights

  • [02:35] Cheryl’s AI “wow” moment: Enthusiasm turns into overload, forcing her to reset and take the lead back from the tool.
  • [04:06] Bryan on LLMs: Useful copilots, not autopilots — and only one part of a much larger AI ecosystem.
  • [07:18] Human in the Loop: Cheryl and Bryan break down why AI must be viewed as an opinion, not a fact.
  • [11:14] Next-level use cases: Cheryl explains how to move beyond meeting summaries into real business transformation.
  • [14:00] Leaders must stop throwing AI to IT: AI adoption requires business alignment, courage, and clarity.
  • [16:33] Culture and unlearning: Why legacy processes slow AI more than technology does.
  • [20:52] Generational differences: Gen X trusts AI most; boomers the least; Gen Z remains skeptical.
  • [23:03] The collaboration equation: Neural activity drops when we delegate to AI — but rises when we collaborate with it.
  • [32:18] Capturing knowledge before it walks out the door: AI as a tool for organizational memory.
  • [34:29] Final advice: Leaders must experiment, question, and use AI to learn faster than the pace of change.

Top Quotes

  1. “AI won’t replace us, but leaders who use it well will outrun those who don’t.” — Cheryl Thompson
  2. “Large language models are opinions. You have to decide whether you trust that electronic opinion.” — Bryan Reimer
  3. “The future belongs to those who ask how AI becomes useful, not those who sit on the sidelines.” — Bryan Reimer
  4. “Most people are using maybe one percent of AI’s potential. The opportunity is enormous.” — Cheryl Thompson

Jan Griffiths

  1. “You cannot codify a bad culture. You have to fix the human issues first.”
  2. “Leaders today can’t throw AI over the wall to IT. This is a business responsibility.”

Send us your feedback or questions — we'd love to hear from you. Email Jan at Jan@Gravitasdetroit.com.

Transcripts

[Transcript]


Stay true to yourself, be you and lead with gravitas, the hallmark of authentic leadership. Let's dive in.

This episode is brought to you by Lockton.

AI still remains top of mind for everybody, and for sure everybody in the automotive industry. In our last episode, we had the pleasure of interviewing Dr. Bryan Reimer, who is an MIT research scientist and author of the brand-new book, How to Make AI Useful. That episode was so successful, and I've received so much positive feedback, that we had to bring him back onto the mic again.

And the first person that responded was Cheryl Thompson, CEO of CADIA.

[00:02:13] Bryan: Jan, happy to be here.


[00:02:18] Cheryl Thompson: Thank you. It's good to be back.


[00:02:35] Cheryl Thompson: Jan, I have gone down the AI rabbit hole. I have been in weekly classes since April, and on top of those classes I have been building things. And so when I heard the podcast and listened to what Bryan said about trust and culture, that resonated so much, because we really do have to have trust when we think about using these tools. We have to have trust in the tools, we have to have trust in the people to use [00:03:00] them, and we have to have the culture for that. So that takes a certain type of leader, and that takes that culture of psychological safety. But the one thing that really got me was the Wow, Whoa, Grow framework, and I'll just tell you how I relate to it. I see all these cool things I can do with AI. I use it in every element of my business. So that is the big wow. But what has happened to me is I have been creating like crazy. All of that free time that I was supposed to free up? Yeah, I'm just creating more work. So I had to say, whoa. And now I do like a little meditation and center myself, so I'm leading AI and it's not leading me.

So I think I'm in the grow phase right now, really learning to use the tool.


[00:04:06] Bryan: First and foremost, I think we need to remember that large language models, embodied through ChatGPT, Gemini, and Copilot, are just one attribute of AI. They're the hot fad right now. They're an incredibly useful tool, but they are just an element of the broader landscape of AI, a tool in the quiver, per se. One of the things that I think is intriguing about these tools is that they are incredibly useful for a lot of everyday activities. They are very much, if used correctly, a phenomenal copilot to work and collaborate with, to provide and refine product. They are not, however, an autopilot to do a whole bunch of stuff automatically for you.

…practices into one's lives.

[00:05:35] Jan Griffiths: When we talk about AI, the first thing that comes to mind is ChatGPT, of course. But then when we start to talk about more advanced AI systems and we start to talk about agentic AI, can you help us understand that, Bryan?


[00:07:18] Cheryl Thompson: I agree with that a hundred percent. I have found really good success in using it as an assistant. It will lie to you, right? It…


[00:07:28] Bryan: Yeah, I think that's important. Look, AI is an opinion, like you and I. It's trained on a bunch of our opinions. We hallucinate, so why wouldn't you expect it to hallucinate?


[00:07:40] Bryan: It's regression to the mean at best. So as soon as we begin to treat AI as an opinion, okay, it's an opinion that we need to weigh and say, okay, do I trust this electronic opinion? Well, I may trust your and Jan's opinion a little more because I know you a little better, but the electronic opinion provides value. At the end of the day, it's our ability to [00:08:00] synthesize that electronic opinion that's the amplifier.


[00:08:06] Bryan: We call it the H factor in the book.


And in my custom instructions, I will say, do not fabricate. And it still will. And I'll have to say, you know, whenever it comes out with anything that has data, I'll say, where did you get that from? And try to validate it and it'll say, oh, you caught me.


And the [00:09:00] other thing I noticed that it did the other day: I've started creating my own GPTs for very specific projects and tasks, and I was doing one for social media. And I said, okay, I want you to create some LinkedIn posts, right? And I gave it very specific criteria, 'cause it knows my voice now.

I mean, let's face it, there's enough transcripts out there, right?


[00:09:20] Jan Griffiths: Give it all my transcripts. Yeah, it could probably speak Welsh by the end of the day. But it knows my voice, it knows my tone. It's got me pretty good, but it's the structure in the LinkedIn posts…

…it structured. I think I said…

And I said, why did you ignore the rule that I gave you? Right? And then it said, oh, sorry, sorry, Jan. Yeah, you are right. I was focusing on those three other things you told [00:10:00] me. What do you mean you're focusing on? You're supposed to do what I tell you to do.


[00:10:07] Cheryl Thompson: That's true.


[00:10:14] Bryan: This is why large language models are great assistants in many ways, but they're not gonna solve safety-critical problems.


[00:10:21] Bryan: And the value is what they can produce, but nobody understands why they produce what they produce or what they're gonna produce next. So you have an artificial system that does some amazing things at times, but no one knows when it's gonna fail and why it's gonna fail. This is exactly why we need to understand where this can be an assistant to us and why it's not gonna be a replacement for us. And look, I think large language models are a fad. Much like deep neural networks were gonna solve everything five years ago, large language models are gonna roll on out to whatever's next. And if you can guess what that is, well, you probably shouldn't be listening to [00:11:00] the podcast. You should be doing a little betting with a retirement portfolio on Wall Street.


[00:11:14] Cheryl Thompson: I think learning how you can use the tool as an assistant, like we've been talking about. Like, what are the use cases? And they have to be personal to what you do every day. The workshops that I do, I've powered them with AI, so I'll give them some prompts. Let's say I'm doing a workshop on self-care. I give them a prompt and it'll say, you are Henry Cloud, the author of Boundaries, right? And it is assisting them in setting those boundaries, so it's something that they can relate to. Or I will teach them how to summarize a transcript, which is pretty basic, but also taking a large technical document, written for a technical person, and summarizing it for a non-technical person. You know, I came from manufacturing engineering and I [00:12:00] wasn't extremely technical. I would've loved to have something like that, so I…


[00:12:04] Cheryl Thompson: and understand the information and then give me an analogy, so it's just real simple to understand. And I'm thinking, if I was in corporate now… I can remember I had a boss, and he would ask the same questions, and I would always think, why don't people pick up on his questions?

Why aren't they preparing that presentation with that in mind already? I would be running that filter: before I presented anything to him, I would have his voice in there and…


[00:12:30] Cheryl Thompson: Right?


[00:12:31] Cheryl Thompson: If I was a project manager, I would put my project plan in there and say, identify the risks. You know, I just try to get real close to who's using it and what the use cases are, and come up with some ideas so that they can start to see how applicable it is. Or I will help them use things at home, like meal planning, like for Thanksgiving.


[00:12:53] Cheryl Thompson: I had my ChatGPT right there with me. The Lions game was on. The guys were watching the game. I wasn't sure when it was [00:13:00] gonna end, and I had said something like, yeah, these jokers are watching the Lions game, I've gotta figure out all the timing. And ChatGPT came back and said, okay, this is what we're gonna do when the game's done, and it said "those jokers" right back to me.

It's like, when those jokers come in and wash their hands, that's when you put the biscuits in, right? So it's just like, how do you use it day to day? How do you use it to plan a travel itinerary? It can be so useful. So trying to get people to see the use cases, it's personal for them. And when I go into companies, I think Copilot enterprise just got released at the beginning of the year, and so I'm seeing IT departments start to put their policies together, put those guardrails in place. And so I'm encouraged, because I am seeing them come to the table and say, okay, here's what you can do, here are the guardrails, the guidelines, and all of that. So they're giving them some structure. But I think people still need a little help with the use cases.


You've gotta agree, in my mind, as a leadership team: all right, what makes sense for our first use case for the implementation of AI? Where are we seeing maybe a lot of transactions, a lot of humans doing a lot of mundane work, maybe a lot of errors? Start to identify criteria, start to identify our project, and then work together on that project.

What it does mean is that you can't just throw AI over the wall to IT.

[00:15:01] Bryan: More importantly, Jan, I think leaders need to be open to learning from individuals in the organization about how this can help the organization, how this can build it. It's not about leading and telling everybody and dictating, this is the way it's gotta happen. No, we're dealing with technologies, whether it's AI, automation, or a bunch of other technical trends, where nobody is a technical expert. Well, team, what can we use this for? What should we be using it for? Can you enlighten me? Okay, if I was to do A, what would that do? If we were to do B, what would that do? You know, talking through the scenario planning and moving. And, Cheryl, you'd appreciate this in the book, How to Make AI Useful: moving from the tactical aspect of just trying to play whack-a-mole with the potholes to a more strategic application of, how does this threaten our organization? Leaders have to be willing [00:16:00] to listen, and to unlearn as much as they relearn in many senses. The ways that they've been dictating and the ways that they've been charging the business are not necessarily the paths forward.

There's a great article out in Business Insider with a history professor saying AI didn't break college; it exposed how broken it already was, talking about how AI's application in the learning process is filling holes that were already there. And I think it's a really interesting piece on why and how the education system needs to really think about changing because of the advent of AI over the last few years.


How can we use this? Like, what are the things that are just a pain that we would like to automate or make easier? And really go after those things [00:17:00] first, you know, not going after the things that people love to do, but the things that people dread doing. Let's see what we can do with AI to help us there. And you know, I am 59 and so I am not a tech person, and so I think if I can do it, anybody can do it. For me, just thinking about all the things I dreamt about doing in my business two, three years ago, I'm able to do now because I have this copilot. I'll tell ChatGPT or Claude, talk to me like I'm 10 years old and walk through this with me step by step.

And now AI is built into all of these tools. Notion, which I love using to organize everything, has AI built in. Zapier, a tool to automate things, has AI in it; you can use a copilot within it. So now I don't have to worry about downloading my recording from Zoom and uploading it to Vimeo. It just happens, and I get an email, right? I would've never been able to do that six months ago.


Sign up. The link is in the show notes, and I'll see you there.


And I think that it's the same employees that we wanted to hire before, but they're empowered by new technology. [00:19:00] And so I think, Cheryl, folks like you who are saying, okay, I need to change to leverage these tools to create more, faster, better material, you know where it's going.

I mean, look, I went through a presentation the other day and said, how would an audience of X, Y, Z interpret this presentation? And it had some great suggestions in there.


[00:19:20] Bryan: Me listening to the electronic assistant and saying, hmm, that's a good idea. I'm not agreeing with that one. Hmm, a great idea.


[00:19:29] Jan Griffiths: Yeah.


You know, an OEM where it's just bureaucracy and red tape galore. And now, being a very small business owner, meeting other business owners and seeing how fast they're able to move with AI, these small businesses and startups, they're gonna lap the bigger businesses, [00:20:00] right?


And we have to remember that this is a copilot, not an autopilot.

And other things are gonna change in ways that we can't predict. I mean, OpenAI is changing the foundations of ChatGPT many times a year at this point. That means even if ChatGPT version X, Y, Z provided a response, it couldn't replicate it, nor would you expect the next version to replicate it. All it is is just great suggestive evidence and direction.


[00:20:52] Cheryl Thompson: Well, it's interesting. I just attended a talk at the Women in Manufacturing Conference, and there was a person who spoke on generations [00:21:00] in the workplace, right? We've got some traditionalists still left, we've got the boomers, we've got the Xers, we've got millennials, and then we've got the Zs.

And she had a slide up there: how much does each generation trust AI? And I thought it was going to be Gen Z, you know, out of the…


[00:21:16] Cheryl Thompson: It was not. Gen X trusts it the most. And I think about that. You know, I'm from the generation that has seen the most change. I was just talking to somebody the other day about running around with a tape recorder in my car.

Right.


[00:21:28] Cheryl Thompson: Listening to the radio, trying to hit record when my song came on so I could make my own little playlist. And so I find this stuff so fascinating. And then I think my personality is wired a little bit for this as well. But the Gen Zs, they have come into a world where, with social media and everything that's out there, there's so much misinformation, so there's already a little bit of that mistrust. And that's what the expert was saying. And then the boomers trusted it the least.


[00:22:12] Cheryl Thompson: A friend of mine, her mom is 84, and she just hooked her mom up with a ChatGPT account, and she said, mom, if you hear that noise in the furnace, talk to Chad about it. And then her mom sent her an image that she created with Nano Banana. I mean, look at her go. 84!


[00:22:40] Cheryl Thompson: Oh no.


I mean, she'll play with it, but no, she says you're not using your brain to think. [00:23:00] So, MIT research scientist, what do you gotta say about that?


The question is, where do we want that atrophy to be, and what do we want to use the accelerated capabilities for? So I think this is a good societal question, and I think your daughter's saying, hmm, I wanna learn everything the old-fashioned way. You know, you can, but you're gonna miss many opportunities to learn that you would've had if you had embraced support tools in some aspects of what you're doing. Quite frankly, grammar check and spell check have been around since the dawn of word processing. We are talking about grammar check and spell check on steroids, and that alone is a huge augmenter of time and of the written work you output.


So today I had to have a conversation, and I had our Enneagrams, our Kolbe types, you know, all our personality types in the back end. And so it was able to coach me on how to have this discussion. And it was so helpful because, my Gallup, my StrengthsFinder, I am an achiever, but I'm also a harmonizer, and she is an Enneagram Eight, which is like very straightforward, say it like it is, but she also likes her freedom.

…of before. So for me it's a…

[00:26:01] Jan Griffiths: When we talk about that, Bryan, I like what you said: we have to decide, are we gonna delegate or are we gonna collaborate? Now, agentic AI, isn't that delegating it?


[00:27:14] Jan Griffiths: Hmm.


[00:27:23] Jan Griffiths: Hmm. That's interesting.


[00:27:34] Jan Griffiths: Well, making business decisions. My background is a lot in purchasing and supply chain, and I was at a conference a few weeks ago where we were talking about the evolution of the procurement function and the integration of AI. And it starts with augmenting basic tasks, transactional things, taking care of those. Okay, we got that.

But then it moves to a point…

[00:28:31] Bryan: And look, you're talking about where the hope is with agentic AI, and I think there's a real possibility that's the case. But over-automation is ripe with failure at times, and so I think we will see more and more of a balance. Agents may go out there and negotiate, but at the end of the day the human process may actually produce better negotiations, 'cause it's emotional. And the stronger negotiator is not [00:29:00] necessarily the stronger AI system; it's the emotional component that's in there. And we'll find some hybrid of the two.

We work with folks that we begin to build trust in over time, and the bots negotiating for us don't have that critical element. Larry, who I've been talking to and working with for 20 years, has always come through. Today I need to make sure I negotiate for a shipment; I pay well, I procure well, but I'm guaranteed a delivery, 'cause I gotta have something in the just-in-time system tomorrow. That's different than trying to book something six months out, seven months out. Okay, you know that stuff's gonna be hard to codify.

And hard to codify successfully, producing the outcomes that we want and expect. Remember, we can create outcomes; they just may not meet our expectations.

So we are the failure line: we're the failure line in delegation, and we're the failure line in receiving. So if we just automate it all away, it'd work, but it's not necessarily the effective solution we're looking for.


[00:29:57] Cheryl Thompson: Well, I was just thinking about Tesla. [00:30:00] I think, you know, I've read about Tesla and how they're using AI, and from what I've read, they have built AI into how work gets done, right? And they don't have all the legacy like the traditional OEMs do. They've got flatter structures. They've got the AI people sitting with the engineers. So I think that would be really interesting to see. You know, there are some things about the culture we would not want, right? I've also heard there's a lack of psychological safety and just this always-on culture. So I think we need to be learning, taking the good and leaving the bad. I think we do need to be talking about it more. I think we need to work on our culture, because I see two extremes right now. I see some managers lock everything down, don't use it, I mean, some sites are completely blocked, and then some just open it up with that laissez-faire approach, and people are left wondering, what's safe, how do I use these tools? So I think we've got a ways to go, and there's an opportunity.


[00:31:22] Cheryl Thompson: A hundred percent agree


[00:31:32] Bryan: Yeah, I…


[00:31:35] Bryan: Not only is Tesla using these tools, our competitors in China are as well. We are looking at cultures that are embracing and accelerating with tools. Now, I'm not gonna say they save time and provide efficiencies at all points. We need to understand where they help, where they don't, and where the collaboration can produce a better product, and we need to begin to think about how we invest strategically and then [00:32:00] accelerate smartly.


[00:32:18] Bryan: You can go through 50 years of documentation and potentially find something with search that


[00:32:25] Bryan: I can't find using any traditional method.


[00:32:46] Jan Griffiths: Yeah, I like what you said: learn faster than the pace of change. And that aligns perfectly, Bryan, with your thought process about it being the human that will actually slow down the rate [00:33:00] of advancement of AI.


[00:33:02] Bryan: Will we let AI change us effectively? That's an important question, a centerpiece of my recent book. Look, we're sitting here talking and I'm laughing in some sense, 'cause you know, I'm sitting here, I have Outlook on my computer, but MIT has not allowed us to adopt Copilot into our email boxes yet. And I can't find anything. I would be blessed to have Copilot searching my inbox. The amount of time that would free up in searching for information. I know it's there; traditional search approaches just don't index it well.


And then I learned that I can automate that. So now I've set up a workflow: every Friday at five, it'll run by itself. So…


[00:33:58] Jan Griffiths: Oh.


[00:34:10] Jan Griffiths: All right. In closing today, I wanna hear one piece of advice from both of you to leaders out there in the auto industry who are perhaps, I would say, at the stage where they're comfortable playing around with ChatGPT, but they wanna take it further with their team. What do they do? Bryan, let's start with you.


[00:34:51] Jan Griffiths: There it is. Cheryl?


[00:35:15] Jan Griffiths: There it is. Perfect. Bryan, Cheryl, thank you so much for joining me today.


[00:35:22] Jan Griffiths: Thank you for listening to the Automotive Leaders Podcast. Click the listen link in the show notes to subscribe for free on your platform of choice, and don't forget to download the 21 Traits of Authentic Leadership PDF by clicking on the link below. And remember: stay true to yourself, be you, and lead with gravitas, the hallmark of authentic [00:36:00] leadership.
