AI Risk Is a Design Problem: 3 Questions Every CTO Should Ask Before Their Next AI Release ft. Jill Stover Heinze
Episode 27 • 20th March 2026 • The CTO Compass • Mark Wormgoor
Duration: 00:41:42


Shownotes

Most AI strategies fail before anything ships, not because of the tech, but because leaders never test assumptions against reality. In this episode, CTOs will learn how to ground AI strategy in real user behavior, reduce risk early, and avoid costly AI failures before they scale.

Jill Stover Heinze breaks down how generative AI changes the risk model, why non-deterministic systems demand new leadership thinking, and how CTOs can turn governance, user research, and risk into a competitive advantage instead of a bottleneck.

Key Takeaways

  1. Why most AI strategies fail before build and how to validate ideas against real user behavior
  2. How generative AI changes risk and why non-deterministic systems require new leadership thinking
  3. How to use risk and governance as a design tool instead of a compliance exercise
  4. Why product-market fit still kills AI initiatives and how to avoid building the wrong thing
  5. What to ask your board and teams to move fast without falling into AI hype and costly mistakes

About Jill

Jill Heinze helps product leaders make smarter AI decisions through strategic intelligence and ground truth research. As founder of Saddle-Stitch Consulting, she brings 20 years of user research and competitive intelligence experience to help organizations navigate AI uncertainty, revealing what competitors miss and avoiding expensive mistakes before they happen. She serves as Responsible AI Program Director for The American College of Financial Services and hosts Responsible Tech Talks on LinkedIn Live.

Chapters

00:00 The Ground Truth

04:38 AI's Hidden Consequences (NIST)

12:47 Ad

13:19 Approaching AI as a CTO

18:45 FOMO in the Corporate World

22:57 Keeping Up with AI

28:42 Ad

28:53 Effectively Using AI

37:27 Talk to Your People!

Where to find Jill

  1. Website: https://www.saddlestitchconsulting.com
  2. LinkedIn: https://www.linkedin.com/in/jill-stover-heinze/
  3. Instagram: https://www.instagram.com/jill_saddlestitchconsult/
  4. YouTube: https://www.youtube.com/@JillHeinze-SaddleStitchConsult
  5. Facebook: https://www.facebook.com/profile.php?id=61581363390571
  6. TikTok: https://www.tiktok.com/@jill_saddlestitchconsult

Transcripts

Jill:

But I think generative AI put it in a new light in that it is a non-deterministic technology, meaning we don't know what the outputs are going to be. When you introduce a non-deterministic technology into workflows that traditionally we've got, you know, bounded pretty tightly, that to me changes the entire risk conversation. We take data, which is an abstraction of real life. We, you know, put it into software, which is an abstraction. We slap an interface on it. And so over time, we get further and further removed from the actual people, the actual things that are happening on the ground, and the pace of generative AI makes that happen faster, quicker, and more accessible to more people. And that is where I think there's just a difference in degree there that really raised the flags for me and inspired me to really lean into this more heavily.

Mark:

Welcome to the CTO Compass podcast. I'm your host, Mark Wormgoor, tech strategist and executive coach. In every episode, we meet tech leaders from startup CTOs to enterprise CIOs to explore what it means to lead in tech today. They share their stories and their lessons so that you can navigate your own journey in tech. Let's dive into today's episode. Most AI strategies fail before a single line of code is written. And it's not because the models aren't performing.

I mean, they're getting better all the time. It's because no one really tested the idea against the ground truth. And that's what we're going to talk about: how real users behave, what the competitors miss, where the real risks are. Today, I'm joined by Jill Stover Heinze, and she's the founder of Saddle Stitch Consulting. She helps leadership teams test their AI strategy against real-world user behaviors, what people actually do, competitive blind spots, and governance risks, before any of those assumptions go south and turn into really expensive mistakes, or yet another failed AI pilot, of course.

So, Jill, before we get into all of those details, tell me a bit about your background and how did you end up here?

Jill:

Yeah, thanks so much, Mark, for having me. Well, my background is probably a little bit unique for many of your listeners, I'm going to assume. I started out my career as an academic librarian, of all things, so that's a little bit unusual, I expect. But it was actually through working in academic libraries that I just found myself in the position to become a UX librarian and then shortly after managing a web team with developers, designers, researchers, the whole kind of a cross-functional team from beginning to end.

So that gave me the bug for technology and for building with teams, which I have taken with me as I have become a research director working alongside delivery teams. And most recently, prior to starting my own consultancy, I worked on generative AI projects at a consultancy for pretty large, complex, high-risk clients.

So that was very eye-opening, and I think the inspiration for a lot of the things we're going to talk about today.

Mark:

I'd love to talk about the work that you did there. Before we get there: the ground truth. What do you actually mean by those words when it comes to AI or product strategy?

Jill:

Yeah, it's really deceptively simple, but it's the thing that I think most of the time we forget when we are in build mode. And that is simply understanding what the reality is, what the facts are, not only the facts about the use case that we are building, which is very important, but for technology leaders in particular, I'm thinking about what are the teams contending with on the front lines? What are the leadership ambitions that maybe aren't making their way down through the delivery process? A lot of disconnects happen. And I have found that leaders are sometimes really unaware that their teams are even struggling with these questions about: What are we building with Gen AI? What are we allowed to do? Maybe there's, you know, a little concern that people don't feel they have the venue to express or the process to really reconcile. And so that is kind of a hidden drag on teams. And I think it comes from just people not knowing what is happening at that level.

And then more broadly, outside of the organization, which is something I've spent many years doing research in and kind of pulling into our product ideation and so forth.

Mark:

I think in my world, this is something that I've seen even before Gen AI. I mean, in the tech world, we often have technology people that are somewhat or more disconnected from what actually happens in the business. I've always fought for those teams just understanding what happens in the business, going there with the frontline people and just working with them, seeing a day in the business life of those people. You just talked about the large businesses you worked with around this. Tell me more.

Jill:

Yeah, so in the earlier days when ChatGPT was just coming onto the scene, which was the turning point for many of us in tech, we worked with clients who wanted to spin up, you know, Gen AI proofs of concept and really just sort of figure out what the technology is capable of and do some experimentation. So that's where I got pulled in to our Data and AI Research Group as a research director, specifically to help guide teams to think about the greater context of what it is that we are building. I had, at the time, maybe not the most popular take in the organization, because I was the one kind of waving my hands like, we've got to think about, you know, the external unintended consequences that we could be, you know, bringing to life through these projects. And so that was perceived, you know, in some ways as a drag, right? It's like, we're going to get stuck in this kind of endless loop of questions about should we or shouldn't we and so forth, which makes total sense. But I came up with a framework, a way to use the NIST AI Risk Management Framework,

to really translate it and boil it down to questions that teams could ask themselves about their particular Gen AI use case and then ideate on what could go wrong, which again is a little bit of a bummer for delivery teams who are really excited about the technology and just want to see what they can get to work. But we would flip that around to say, okay, well, knowing what we know, now we are grounded in these risks and what could happen for this use case, for this group of users. Now we're really empowered to say, hey, we can do something about this, because we can put in the mitigations, we can plan our architecture properly.

So I would guide teams through that for our clients. And it was always eye-opening for the technologists, who were like, wow, I just didn't think about that, because we're moving ahead and we're figuring out these new, you know, techniques and that kind of thing.

So it didn't really slow things down, but we were able to anticipate things earlier. So we were building from, like, firmer footing earlier on.

Mark:

So, and of course it's so nice when it goes well, but give me some examples of where people missed some of those assumptions at the beginning, where you were involved, and how that played out.

Jill:

Yeah. So I think this kind of goes back to how I got involved in the data and AI teams beforehand, because this was not part of the organizational structure. It was a little bit atypical, which I thought was interesting, and really kind of a cool endorsement from leadership to say, no, we need to infuse these teams with some kind of human-centered design practice.

So that was pretty cool. But one of the impetuses for doing that was seeing that we had delivery teams who were getting close to shipping for financial institutions with tools that had not been red-teamed. And this was early days. None of this actually got out, mind you. We actually tended to it. But the point is we got a lot further along than I think leadership was comfortable with before they said, okay, we need to kind of rethink our processes and our assumptions.

Like, we assumed that software engineers would know about some of these AI engineering techniques and principles and practices, but really we maybe hadn't readied the organization, and we were moving quickly, this kind of thing. So it created an opening for something I had been advocating for many months prior, which is: we need some kind of governance function, some kind of way to insinuate some good best practices for risk mitigation into the delivery process.

Mark:

This NIST framework, or at least your framework that's based on NIST, what does that look like? How complex is it? How simple is it? What are kind of the questions that are in there?

Jill:

Yeah, it's not overly complicated. For people who are in human-centered design, a lot of it might sound very familiar. But if you don't have kind of that background, you might be surprised by some of the techniques.

So the very first thing that we do is we look at our use case. What is it that we are aiming to build? Usually we have some general idea. We have some idea of the tech stack or maybe the Gen AI methods that we want to use.

So we kind of start grounding ourselves in, like, you know, not the abstract, but the actual real thing we're looking to do. We identify the users or the stakeholders in this. And usually we'd pick, like, one or two who are kind of the most likely to be impacted. But then we spend some time doing something, and this goes back to, like, old-school user research design, which is empathy mapping. We take a look at our users and we think about: what is their day-to-day life? What are the concerns that they're contending with? What are they observing? What's their environment like? Which, again, we can talk about in the Gen AI context why that's particularly important. But going through that exercise, you learn all of these ahas, like, you know, maybe it is that people are excited about adopting this technology at the same time they're afraid of it displacing them, just to use, like, a common example we're finding in research.

So starting there, we can then move over to the NIST framework. And I pulled out the trustworthy characteristics that NIST has identified for what characterizes a trustworthy AI system.

So there's quite a list of those things, depending on how you count, like 11, 12, something like that. There's quite a few. But we spend some time in each of those categories, like explainability, reliability, and we ideate what could go wrong with our use case, knowing our users, in each of these categories.

So the idea being we don't anchor on one particular risk, like hallucination, but we really try to cover the spread of relevant risks. Then we ideate around, you know, how could we solve for these things? These then become requirements and project constraints that people can actually work with and pull forward.

So we take it from kind of the abstract, but really get down to, like, the technical architecture, and we map the risks through that whole kind of chain of thought.
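[Editor's note: as an illustration of the exercise Jill describes, here is a minimal sketch of how a team might encode the walk across the NIST AI RMF trustworthy characteristics, capturing risks and mitigations as requirements. The characteristic names come from the NIST framework itself; the data structures and the example use case are hypothetical, not Jill's actual tooling.]

from dataclasses import dataclass, field

# The trustworthy characteristics from the NIST AI Risk Management
# Framework; several are compound, which is why counts like "11 or 12"
# come up depending on how you split them.
NIST_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

@dataclass
class Risk:
    characteristic: str   # which NIST category the risk falls under
    description: str      # what could go wrong for this user group
    mitigation: str = ""  # filled in during the second ideation pass

@dataclass
class UseCase:
    name: str
    primary_users: list[str]   # the one or two most-impacted groups
    empathy_notes: list[str]   # day-to-day context from empathy mapping
    risks: list[Risk] = field(default_factory=list)

    def ideation_prompts(self) -> list[str]:
        # One "what could go wrong?" prompt per characteristic, so the
        # team covers the spread of risks instead of anchoring on a
        # single one like hallucination.
        return [
            f"For '{self.name}' and users {self.primary_users}: "
            f"what could go wrong with respect to being {c}?"
            for c in NIST_CHARACTERISTICS
        ]

    def requirements(self) -> list[str]:
        # Mitigations become concrete constraints teams can pull
        # forward into the technical architecture.
        return [f"[{r.characteristic}] {r.mitigation}"
                for r in self.risks if r.mitigation]

# Hypothetical example use case.
uc = UseCase(
    name="Gen AI customer-support assistant",
    primary_users=["frontline support agents"],
    empathy_notes=["excited to adopt, yet worried about displacement"],
)
uc.risks.append(Risk(
    characteristic="explainable and interpretable",
    description="agents can't tell why the assistant suggested an answer",
    mitigation="surface retrieved sources alongside every suggestion",
))
for prompt in uc.ideation_prompts():
    print(prompt)
print(uc.requirements())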

Mark:

And that sounds, I mean... I love that approach. I think we should have been doing this in tech projects for the last 20, 30 years. And I know it's missed in most projects.

So I love that approach, and I'm a big fan. Specific to AI projects, what are the things that usually get missed or overlooked by not doing this?

Jill:

Yeah, this is a good question, because I'm with you. I keep thinking, why didn't I have this as part of my just regular process? We know that algorithmic harms have been long known just in, like, our traditional machine learning, recommendation engines. It's not like that bias didn't exist or that we in the technology community were unaware. But I think generative AI put it in a new light, at least for me, and I suspect this will resonate with folks, in that it is a non-deterministic technology, meaning we don't know what the outputs are going to be.

So there are ways, as you know, to constrain that, to ensure that the outputs themselves are more deterministic, et cetera. But when you introduce a non-deterministic technology into workflows that traditionally we've bounded pretty tightly, that to me changes the entire risk conversation. You overlay that as well with the kind of macroeconomic push to put AI into everything and do it quickly. And don't think too hard about it. Just do it. That's kind of been the mantra for quite some time. And the concern that I have with that is that we're maybe not organizationally prepared, but we're introducing ever more abstractions onto the ground reality, which we have to do to make anything, technology in particular; that's part of what we do. We take data, which is an abstraction of real life. We put it into software, which is an abstraction. We slap an interface on it. And so over time, we get further and further removed from the actual people, the actual things that are happening on the ground. And the pace of generative AI makes that happen quicker and more accessible to more people. And that is where I think there's just a difference in degree there that really raised the flags for me and inspired me to really lean into this more heavily.

Mark:

Before we jump back in, here is something that I've learned from over 30 years of working in technology. The hardest part of leadership, it's not the technology, and it's not even the people or the teams. It's often that you're at it alone, by yourself. There's no one in the room that fully gets what you're dealing with. There's no one that you can trust to discuss your decision with. If that sounds familiar, find me on LinkedIn, Mark Wormgoor, and tell me what's on your mind. There's no pitch, just a discussion with somebody who's sat in that chair as well. Let's get back to it.

So if I'm a CTO and I'm listening to this, and I'm probably being asked to do exactly what you just said, to deploy AI or to just build it into the next project, and it likely needs to be done in 30, 60, 90 days. What should I do? What should I ask myself? What should I ask my board, maybe? Who should I talk to? What should I really pay attention to?

Jill:

Yeah, I have a take on risk, which is I think of it as kind of a design problem to be solved. So I would get very clear on what your risk profile is and what your individual organizational risk tolerance is and make sure that is well understood throughout the organization.

So that way, teams from leadership down to delivery can really understand what they're doing in the context of the overall risk profile. And it's going to look different for different functional areas and different industries. We have different levels of regulation, different maturity levels in terms of data governance, and all of this.

So that will shape how those decisions are made. But it's often this unspoken, maybe inconvenient, thing, because again, risk kind of opens up a lot of cans of worms. But if we constrain our thinking to what is most relevant in the moment, I think it is manageable. And not only that, it's this jumping-off point to be creative and innovative and understand the problem space better than most.

So it's time well spent; it's just redirecting that effort to the front end versus the back end, where, you know, you are at greater risk of reputational harm, consumer harm, those kinds of things.

Mark:

And I think a lot of people, at least including me, have some negative connotation with risk because of the famous ERM, enterprise risk management, models that we all need to mandatorily fill out, and then nothing happens with them. A project manager comes by and says, we have to do a risk log, and then nothing ever happens with any of those risks. How do you make sure it's practical and actually used, instead of a paper exercise?

Jill:

Yeah, it's a bigger question than I have the answer to. I'm working through it along with clients and, you know, leaders in the space, because it is an unsolved problem. I think it's an example of where we have these people systems and business processes that haven't caught up to where we are with the technology. And in fact, we need more agile ways of structuring governance that are continuous.

So you hear in governance circles, from our risk managers, from legal, there's a really big appetite to work across functions and to work with developers and engineers to better understand the technology, so that we are right-sizing our governance, we are keeping the conversation current, we're understanding data flows and risks when we bring on vendor products, and so forth. I think there's going to be a greater demand for this cross-functional sort of collaboration, and there's really a strong appetite for that. But in the near term, I just go back to constraining it to what is most relevant and timely, versus the idea that we have to sort of boil the ocean and then kind of build up from there.

So as you discover more latent risks, as you discover use cases for how employees are using technology that might not be quite in alignment, start socializing those and documenting those, and then gradually kind of build up, versus a top-down exercise of, let's just lay out a lot of guardrails and protocols that people can't necessarily absorb, because it doesn't necessarily ring true to the things that they're working with on a day-to-day basis.


Mark:

Just for fun, what are some of the most expensive assumptions that you've seen organizations make around AI, where they really had to either redo everything or it just turned into a giant failure?

Jill:

Yeah, this is going to probably resonate with you as a long-term kind of concern. But I think it's really understanding product-market fit. It goes back to the fundamental: what needs are we solving for? Are we really addressing the needs? I have worked with clients in very large international organizations who have tried to spin up new product lines using what they thought was a sound, and reasonably so, research process. In this particular example I'm thinking of, they had an advisory group of what they thought was the target market for this product, and they had them do this participatory design exercise to really come up with a product that would be appealing. What they didn't recognize is that as soon as we got those mocks in front of the potential users, there was no appetite whatsoever. They actually recoiled at it, because that advisory group had built a product for kind of their juniors.

So this was something they anticipated another group would find valuable, not themselves. But that assumption really didn't get exposed until later. Fortunately, we were fairly early on, and we could pivot the target market. But that stuff happens all the time, where we assume there's a need because, you know, no one else is maybe doing this thing. Hey, we have generative AI tools where we can spin up, you know, a prototype in no time and, you know, float it out there. But are we actually, like, learning? Are we doing it in a methodical enough way that we can, through a process of elimination and testing hypotheses, move forward? Or are we just spinning things up because now we can move, ostensibly or on the surface, more quickly, but underneath we really haven't made traction in kind of focus and fit.

Mark:

And it feels like the FOMO that we all know from Instagram and TikTok has moved into corporate life. It's in boardrooms. We all just have to go faster and faster because we're going to lose to the competition. What is the right approach here?

I mean, we should go fast because of course we don't want to lose to the competition, but FOMO maybe isn't the right answer. What is?

Jill:

Yeah, it comes down to intentionality. I don't have, like, you know, a concise technical answer to that question. But if you can experiment with intention, like you have a well-defined problem space, and you have an idea that has a reasonable relationship to that problem space that you can test, and you actually design the thing so that, if it is going to work, we should be able to see X, Y, and Z when we put it out to market, I think that gets you ahead a lot more quickly. In addition, if you take this risk-based kind of approach and welcome risk as a bit of a design constraint and a requirements-gathering exercise, I have found that we will actually get insights from people earlier on that create space for competitive differentiation. There's an example that I like to share. It's very kind of tiny, but you can imagine how this could be the source for a bigger kind of shift. For this large company, we were mapping out student workflows, how they actually go through using different tools to do their work. We entertained the idea of using AI, some form of, I don't know if it was generative at the time, machine learning, it doesn't matter, but some form of AI to kind of test their knowledge throughout.

Well, this student base was medical students, who are self-testing all of the time. And they said pretty clearly in testing, like, please do not introduce that tool, because we are under so much stress; this is just another layer of it that we don't need. And I thought that was not only insightful as to how we would construe that project, but imagine if we rethought the idea and said, how can we make a tool that helps educate students but gives them this moral support, helps them to de-stress, maybe insinuates, like, positive, you know, reinforcement versus sort of this negative reinforcement? But that was because we took the time to talk to, you know, five, ten folks and walk them through the concept.

So I think you can get a lot of differentiation out of doing these things, and on the surface they might feel a little cumbersome at the outset, but they don't have to be executed in a cumbersome way.

Mark:

So what do we do with competitor stress, right? Our competitors, apparently, or at least in the press releases that they put out, have already solved all of the problems that we're still looking at. How do we ignore those?

Jill:

Right. But well...

You know, gosh, how to even begin with that one? There is so much hype in the space. And, you know, as I'm working with, especially, those in smaller organizations who are just trying to grapple with, like, kind of basic use of some of this Gen AI and what would be appropriate for what use case, it's really kind of having a bit of a paralyzing effect. It's almost like it's so much, and the pressure is so high, that I think understanding those friction points is a good place to start.

So when I go into an organization and we're talking about governance, it's really more about, like, what is holding you back right now? Is it education? Is it really skill using the tools, or understanding how to build in, you know, appropriate monitoring or metrics? Pinpointing the idea or the friction point earlier is really kind of the way forward to move more quickly. But it is not telling people, you must adopt this today and just move forward, because, I mean, you're kind of laughing a little bit, because I think it just reads as, like, deer in the headlights, kind of a paralyzing mandate that people get.

So the human aspect of that is: mentally, emotionally, we cannot quite keep pace, and neither can our organizational systems. So rather than moving everything all at once, what are the key things keeping us from moving forward in the way we want to? And do people, do we have a shared understanding about what that is?

Mark:

And I think what you just said is one of the biggest questions that I have with what's happening in the world today. So I'd love to get your take on that. It's going so fast.

I mean, I'm very technical, right? And I read a lot and I experiment a lot and I play with a lot, and I sort of feel like I'm able to keep up. I can't imagine how other people can even keep up with all of this. Going from all the next models to image generation to agents to teams of agents, it's going so incredibly fast. How do we get people along? How do we take people along without pushing it on them, or getting that deer-in-the-headlights view that you just spoke about?

Jill:

Yeah, it's a great question. And I think one idea I would propose on that is to maybe think of it less as how can we as individual performers keep up with everything, and more as how can we collectively, as an organization, knowledge-share and help support others in keeping up. One of the things that I have found to be true is that you're right: there's just a natural limit to what we can absorb.

So we have to be a bit selective. Now, fortunately, AI and other tools give us the ability to really home in on the things that we need to know. And that might change over time. But I think we do have to tune some of it out. The technology itself, there's so much hype about, for example, artificial general intelligence. I am one of those who just isn't quite sold on these kinds of grand visions for how intelligent, quote unquote, these systems can become. But, like, the day-to-day is you've got people who are maybe over-relying on chatbots to make decisions or, you know, not quite sure how to deal with hallucinations.

So let's focus on those problems. There's lots of research, to your point, on these topics, as these aren't new techniques entirely.

I mean, some of the applications, of course, are new. So there's some selectivity, but it presupposes some kind of goal or agenda that leadership, our CTOs and their peers really need to establish for the organization.

So I would say think collectively versus individually. And one thing that I have done, and I will bring it back to the individual here, one of the great things about generative AI that I have really enjoyed is the ability to build, and to use that process as a way to kind of interrogate my own assumptions.

So I have built GPTs that encapsulate my thought process as I evaluate an AI use case and then run it through there with some levers and selections to help just sort of depict the use case. And then what it feeds back to me isn't an answer to a question, but it's a methodology for what conversations need to be had, what other information are we lacking?

So can we use these tools to help us be more strategic and selective? That might be another avenue to explore. But we're certainly all figuring it out together, I think it's fair to say.
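[Editor's note: a minimal sketch of the kind of GPT Jill describes, one that hands back a methodology rather than an answer. It uses the OpenAI Python SDK; the model name, prompt wording, and "levers" are illustrative assumptions, not her actual tool.]

# Requires: pip install openai, plus an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You evaluate proposed AI use cases. Never return a
go/no-go verdict. Instead return:
1. The assumptions embedded in the use case that need testing.
2. Which stakeholders should be consulted, and about what.
3. What information is still missing before a responsible decision."""

def interrogate_use_case(description: str, risk_tolerance: str) -> str:
    # Feed in a use-case description plus a lever (here, just the
    # organization's risk tolerance) and get back the conversations
    # and gaps to work through, not an answer.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Use case: {description}\n"
                f"Organizational risk tolerance: {risk_tolerance}"
            )},
        ],
    )
    return response.choices[0].message.content

print(interrogate_use_case(
    "Chatbot that drafts responses to customer complaints",
    "low: regulated financial services",
))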

Mark:

And I love that. So for some of the challenges and the risks that we have with AI, at least part of the answer, of course, is a lot of the things that you just said, having the right models and the right thought process, but AI is part of that solution as well.

Jill:

Yeah, absolutely. Very nice.

Mark:

So, of all of the industries that we have, because you already talked about financial services, and financial services is very different from medical students: which use cases or which industries really make you the most nervous right now when it comes to Gen AI?

Jill:

To me personally, it isn't even a matter of industry, what makes me nervous. What makes me more nervous, and kind of the dividing line that I put, and even in terms of regulation this is the dividing line: is it external facing, consumer facing, or is it internal, kind of, you know, for your workforce development? In the cases where it is consumer facing, I am deeply concerned that we are not creative enough about understanding how people will use these tools in their real lived context and anticipating the risk that we're introducing into those systems.

So I'm not trying to be alarmist. I'm not trying to suggest we don't experiment and try, but I am seeing evidence every day about how, say, chatbots are being used for mental health therapy purposes. I don't think it would have taken a whole lot of creative thinking to understand that when you personalize a technology and make it behave and sound like a person who empathizes with you, that people might come to rely on it for kind of these therapeutic needs.

So the question then as a designer and a technology provider would be, what are the signals that we would be looking for to say that this is going off the rails? What are the interventions? I don't think that that's happening. And I think that's, again, this push to move fast. But I really want folks to question, what are we moving quickly toward and at whose expense are we doing that? That's kind of my biggest area of concern. But yeah, internal has its own kinds of constraints or considerations. But I think it's once you put it out to the public, you own that. And maybe the regulation hasn't caught up yet, but it will. And have you done the work? Have you done the groundwork, honestly, and documented that to prove that you are being thoughtful about the implications?
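[Editor's note: to make the point about signals and interventions concrete, here is a hypothetical sketch of watching for off-label therapeutic use of a general-purpose chatbot. The signal phrases and threshold are invented for illustration; a real system would need a validated classifier and a proper human review and escalation process.]

# Hypothetical monitoring sketch: flag sessions drifting into
# mental-health-support territory so a human can review and an
# intervention (resources, handoff) can be triggered.

THERAPY_SIGNALS = [  # invented keyword signals, illustration only
    "i feel hopeless", "no one to talk to", "you're the only one",
    "can't cope", "want to hurt myself",
]

def session_risk_score(user_messages: list[str]) -> float:
    # Fraction of user messages containing at least one signal phrase.
    if not user_messages:
        return 0.0
    hits = sum(
        any(sig in msg.lower() for sig in THERAPY_SIGNALS)
        for msg in user_messages
    )
    return hits / len(user_messages)

def intervene(session_id: str, score: float, threshold: float = 0.2) -> None:
    # Placeholder interventions: surface support resources in-product
    # and queue the session for human review.
    if score >= threshold:
        print(f"[{session_id}] score={score:.2f}: show support resources, "
              "escalate to human review")

messages = ["how do I reset my password", "honestly I feel hopeless lately"]
intervene("session-42", session_risk_score(messages))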

Mark:

And indeed, internal use, or even personal use. I mean, if I build something for me, I don't care if it works or if it doesn't. I'll read it, and, I mean, I would assess it and judge it; it doesn't matter. For consumers, that's quite different. If you put something out there, especially if that's for minors, but indeed, like you said, as therapists... yeah, that doesn't...

It sounds like it could go wrong very quickly. And it already has, and we've seen it in the news. A quick one before we continue. If you're getting something out of this conversation, please hit the subscribe button below. That way, other tech leaders can find us as well. I would really appreciate it. Let's get back to it.

So then if we take it back to responsible AI, getting it right, being compliant, I think you've said that could be a strategic advantage. So how do we use that?

Jill:

Yeah, I think we use it to probe a bit deeper. I think we use it as an opportunity to get to know our user base and our stakeholder base. And I think then we take those examples of the interventions that we have introduced, or the insights that we have gained about how we can better not just protect people but advance them in solving their problem, that's ultimately what we're trying to do, and by really tending to that early on, we will have more fully formed ideas about how to do that. Then we translate them back to our product and marketing teams and other leadership, to be able to use that and leverage that as a competitive differentiator, assuming that that's what resonates with the market. It's all very use-case dependent. But what I am suggesting is that in the process of doing this sort of due diligence work, you are creating something unique. People aren't doing this.

Like, I will just say that the move is to just go quickly and kind of stay at the surface level, at least at this point in time. Generative solutions, they're everywhere. They're proliferating. What really makes you different is being grounded, and exposing that kind of work that you've done in some form or fashion, be it a feature, be it a, you know, kind of promotional effort. But I think there is opportunity there that can get overlooked.

Mark:

Just by doing the user research, figuring out what they actually want, what they actually need, and doing the due diligence before starting. Before actually building and releasing.

Jill:

And it's hard; it's not an easy thing to do. So I'm not going to say that you do the user research, you figure it out, you launch. It doesn't kind of work in that way. You're going to learn as you go. This is, in some ways, a new application of technology. And so I don't presume that we have all the answers, but if we formulate the right questions, then we can kind of track to being able to learn on a continuous basis, because we know what we're looking for. We know the signals that really matter and what is noise, because we've decided, you know, we're homing in on a particular problem space and a particular solution set. And if we really believe we have a good solution, we should be able to have an opinion as to how we'll know when we are hitting the mark.

Mark:

And it's scary that there are businesses that are doing user research now with AI... but let's not go there.

Jill:

No, that's another abstraction that, again, I'm like, why can't we just talk to people who will be impacted? I've done user research, gosh, for over 10 years now. I have never, ever failed to learn something from a conversation. And I'm not talking to 100 people. I'm talking to, you know, 5, 10, 15 people.

You know, it's pretty incredible ROI.

Mark:

And can you give an example of that ROI where you did the user research and where it actually gave you real insights that helped you make a product so much better or solution so much better?

Jill:

Yeah. I'm thinking about, gosh, we did, there's so many examples, but I keep coming back to the one where we had to pivot our target market because it was a very salient example when you really have to make that dramatic of a swing.

So once we understood who the actual target market was, we could pivot. That is kind of the ROI, and it's easy to brush off. But that was one of the more stark and bigger-scale examples of where we headed that off before it materialized. And that's really what we're, you know, here to do at the end of the day.

Mark:

My CFO used to call that blue dollars. They don't exist, right? The cost that we avoided, the cost that we never spent. I mean, it happens, and then it's out of the books and everybody forgets. And if it does happen, of course, it's green dollars, money actually spent, and everybody cares. But cost avoidance is never seen as a great benefit. It is one, but it's never really recognized as such.

So having listened to all of this, if I'm a CTO or a CAO, how do I start? How do I have that conversation with my board, with the people that are demanding that we put everything out in 30, 60, 90 days? What do I tell them?

Jill:

Yeah, I think, you know, first of all, you ask good questions. You don't tell them. You ask them good questions. What are we most worried about? What are our biggest risk unknowns?

And then how do we get clarity on those things quickly? I think that is a great place to start.

And then as you go, it's a matter of like capturing that in some form or fashion. I'm not talking about copious documentation, but I am talking about progressively building on what we know.

So we're not relearning the same things over and over. So I think that is an excellent way. And don't assume that folks are on board, or have a full understanding about the direction that we're going, or that they are adequately prepared to meet kind of the challenges that you have laid out before them. Because honestly, what I am hearing is a lot of rumbling on the front lines of, like, gosh, we really feel like we need more guidance here, or we need some kind of governance. And executive leadership's like, no, we're moving forward. We're delivering.

So I don't mean to characterize this as everyone is doing this; I'm saying what I am hearing. And we need to, not just as CTOs but as leadership teams, figure out how we can have more close connection points. You had a previous guest talking about siloization. It's never been a great thing, but I think the way that AI permeates an industry, the way that it scales quickly, and, particularly when we get into agentic AI, the way the risks are going to be less obvious, really requires people who are not only tending to their units but are communicating across, because the technology is going to stitch those things together. And we need to be really anticipating that.

So not a small task, but it is about not assuming and about collecting that ground information.

Mark:

Yeah. And that's probably, I think, for us in technology, I'm from the tech silo, I've been there for 30 years, the biggest change that always should have happened but never happened, and probably needs to happen now because there's no other way to move forward: to move out of the silo. Because, I mean, tech, AI, Gen AI, agents, it's everywhere. It's in every single business unit and it's cross-company, and we just have to go out there. I think it's no longer tech as a silo. It just can't be anymore. It's got to change.

Jill:

You're hearing more about forward-deployed engineers, too. So folks who are getting closer to the problem space, closer to the client-facing issues. I think maybe that is, I don't know what you think. I'm curious if that is kind of part of this push to de-siloize. But yeah, it's a thing I've observed happening.

Mark:

I love that idea. And of course, I mean, in the tech silo, the way that we used to talk about this is shadow IT, right? Every business unit is doing their own IT, where we as tech set the standards and you can only follow our way. Our way is the highway. You can't do anything else on your own. Right.

So I think, yeah, that's something we're going to have to get over very quickly. I mean, shadow IT is there for a reason, at least that's what I believe. I love the idea of forward-deployed engineers, but I think that's on the operational level. I think it just has to happen across all levels in the organization, across the middle management, across the executive layer. It has to happen everywhere, or it's not going to get done right. And it's not just tech, right?

I mean, it's between sales and marketing, it's delivery, it's just across all of the silos. Yeah. I think somehow the silos are going to have to die after, I don't know, 50, maybe 100 years in business and corporate life.

Jill:

I actually welcome that. I've always worked on cross-functional teams and led those kinds of teams. And I've always been the better for it, because you really do have to challenge yourself to put yourself in someone else's shoes, to anticipate things in a way that maybe you're not naturally inclined to do. I think it's very healthy.

So if nothing else comes out of our Gen AI moment, desiloization, I think, would be good. But the amount of innovation pressure it puts on our human systems is considerable. Like I say, the folks in risk management and compliance who I'm talking to, they're very keen to agile-ify a lot of the, you know, the checks that used to be very periodic. This needs to become more regular.

So that will be a technical and a personal, sort of people challenge.

Mark:

I would love risk to be more agile. In most corporates that I've seen on the inside, I think that's a very welcome change. But yeah, let's see if that happens over time. It would be incredible if that could happen.

Jill:

That's right. We'll see.

Mark:

So if tomorrow you started a new project, you were put next to a CTO for, let's say, 90 days, what would you recommend, or ask them to do, like, in the first 30 days? Where do they start?

Jill:

Yeah, I think it would come back to, and I do this: I would, you know, kind of start with some inventories of what's going on. I interview team members.

So sometimes it's uncomfortable to have conversations yourself about what's really going on on the front lines, which is why people hire folks to come in and have those conversations. But that is absolutely where I start. I talk to the leadership. I talk to the folks kind of executing, and then really call out where our friction points and gaps are.

And then we start problem-solving from there. And I don't like to prescribe the exact remedy for that, but we do have regulatory expectations. We have expectations from our board. We usually do have some kind of constraints as to what the deliverables will be. But at the end of the day, I'm hoping that we get to a place of kind of a better shared understanding of where we truly are and what is landing, what is not.

And then we kind of have a more surgical, agile, prioritized approach for how we tackle this kind of massive task that we have in front of us here, this cultural and technological change.

Mark:

And if I'm a CTO and I don't have the benefit of having you right next to me, what are the two, three questions that I should ask myself to figure out if I'm doing the right thing, if I'm on the right track, or if maybe I need to adjust?

Jill:

Yeah. Where do I want to see the organization go? Where are we going? What is driving that? And is that a shared understanding across the organization? And if not, your management teams can perhaps be the folks who can help communicate up. I did this when we started our governance function at my consultancy, and I got that off the ground, and we did it just in that way. We had our executive group convene and establish a charter, all that fun stuff. But at the same time, we involved our functional leadership and we talked to the frontline staff.

And then really clearly say, okay, this policy: we have a use policy in place that says employees shall not produce biased outputs from, you know, using these GPT tools. Well, unfortunately, that is exactly what you're going to get. And by the way, do people understand what bias means in this context? I think you do not want harmful bias, like, escaping from, you know, these systems and getting itself into production. But we should be intentional about that. Everything's biased. But that's an example of where an intent sounds good on the surface, and if you don't interrogate it and actually marry it up with what is meaningful for the folks who have to grapple with these things, then you're going to, you know, constantly be missing the mark.

Mark:

I love that. Okay.

Yeah, I think that was my final question. Where can people find you, or where can they learn more?

Jill:

I spend a lot of time on LinkedIn, at Jill Stover Heinze. So that's always a good place. But I'm on pretty much all the social media channels. I do an approximately biweekly chat with technologists on the front lines who are trying to do good with technology. It's called Responsible Tech Talks. I post those to YouTube. I'm on TikTok and Instagram and Facebook.

So if you can't find me, I don't know, just Google me and I'm sure to pop up somewhere. But yeah, I really appreciate the time, Mark, today. This was a really fun conversation.

Mark:

And I'll make sure, I think I have almost all of the links that you just talked about. So I'll make sure that we put them in the show notes. And it's been incredible having you on. Thank you so much, Jill.

Jill:

Thank you, Mark. Take care.

Mark:

As we wrap up another episode of The CTO Compass, thank you for taking the time to invest in you. The speed at which tech and AI develop is increasing, demanding a new era of leaders in tech. Leaders that can juggle team and culture, code and infra, cyber and compliance, all whilst working closely with board members and stakeholders. We're here to help you learn from others, set your own goals, and navigate your own journey. And until next time: keep learning, keep pushing, and never stop growing.
