The Librarian Who Wants You to Slow Down—Why 95% of AI Implementations Fail
Episode 102 • 21st January 2026 • Designing Successful Startups • Jothy Rosenberg


Shownotes

Jill Heinze

Bio

Jill Heinze helps startups turn AI risks into competitive advantages. A former academic librarian turned AI strategist, she founded Saddle Stitch Consulting after spending two decades in market research, competitive intelligence, and digital strategy. As a former Research Director at a major consultancy, she built responsible AI frameworks for Fortune 500 clients and founded the firm's first executive AI governance committee. Her superpower? Using research methods to uncover real risks before they become expensive problems. She hosts Responsible Tech Talks on LinkedIn Live and serves as Responsible AI Program Director for the American College's Cary M. Maguire Center for Ethics in Financial Services.

Intro

Jill Heinze, founder of Saddle Stitch Consulting and a distinguished UX researcher, explores the critical intersection of technology and human experience. She emphasizes that the rapid advancement of AI should not eclipse our responsibility to consider its impact on individuals and society. She also highlights a staggering statistic: 95% of organizations fail to achieve a return on investment from their AI initiatives, a failure she attributes to a poor understanding of user needs and of the inherent risks of technology implementation. We delve into why founders should engage with their teams and audit AI usage within their organizations to mitigate potential pitfalls. Ultimately, Jill advocates a shift in how we approach technology, urging us to prioritize thoughtful engagement over reckless acceleration.

The dialogue unfolds a critical examination of the intersection between technology and human experience in the context of artificial intelligence. Jill articulates a central concern: the hasty deployment of AI technologies, without a thorough understanding of their implications, poses substantial risks not only to organizations but to the individuals who interact with these technologies every day. Drawing on her background as a librarian and UX researcher, she underscores the necessity of grounding technology in real human experience rather than abstraction, and she urges founders to prioritize the well-being of users and to engage deeply with the communities affected by their innovations. In a landscape where 95% of organizations reportedly fail to realize a return on investment from AI initiatives, Jill's perspective is a valuable guide for integrating AI responsibly. She emphasizes asking critical questions about the potential repercussions of new technologies and advocates a shift in mindset from mere risk management to recognizing the opportunities within those risks.

The episode is a call for a more human-centric approach to technology, a reminder that the decisions we make in AI will shape our collective future. Jill shares her journey from librarian to UX director and now to founder, along with the lessons learned along the way. She reflects on the inherent unpredictability of technology and the necessity of embedding a human perspective in AI governance. Technology does not exist in isolation; it interacts with human behaviors, expectations, and societal norms. Her experiences illuminate the challenges facing organizations that move too quickly without considering the ethical implications of their advances, and her assertion that "moving fast and breaking things" is not always commendable resonates throughout the episode as she encourages a more deliberate approach to innovation.

The conversation offers a guide for founders eager to harness AI's potential while remaining acutely aware of their responsibilities toward users and the broader community. Jill argues for obtaining "ground truth" by engaging directly with users and understanding their needs before deploying AI solutions: technology should be informed by the realities of human interaction rather than by abstract models or hypothetical scenarios. Her insights challenge the prevalent startup culture of rapid iteration, urging founders to weigh the ethical implications of their technological choices. By the episode's conclusion, listeners are left with a potent reminder that the future of AI is not about innovation for its own sake, but about fostering a landscape where technology serves humanity rather than undermining it.

Takeaways:

  1. A comprehensive understanding of technology necessitates recognizing its profound impact on human lives.
  2. The prevailing notion of moving swiftly in startup environments can lead to detrimental consequences if not managed carefully.
  3. Organizations should conduct audits of AI tools being used by employees to mitigate risks associated with proprietary data exposure.
  4. Effective governance in AI must be grounded in the lived experiences of users, ensuring relevance and applicability.
  5. Startups must prioritize honest conversations regarding their AI implementations to avoid falling into the 95% that do not achieve ROI.
  6. It is imperative to establish a connection between technology and its real-world implications to foster responsible innovation.

Transcripts

Jill Heinze:

Hello.

Jothy Rosenberg:

Please meet today's guest, Jill Heinze.

Jill Heinze:

I just have always been keenly aware that technology doesn't exist in a bubble. It impacts human beings, impacts their day-to-day lives.

And also, as our example about the timeline being blown shows, it doesn't go as we planned. It just never does. So how are we exercising care with people? By thinking about our technology, the risks that we're putting out there.

Jothy Rosenberg:

Move fast and break things. It's practically the startup founder's creed.

But what if what you're breaking isn't just your product, but your customers' trust, their data, or your company's entire reputation? My guest today has a message that every founder building with AI needs to hear: moving fast toward a cliff isn't a badge of honor.

Jill Heinze is the founder of Saddle Stitch Consulting and a UX researcher who has spent years helping organizations, from Fortune 500 companies to scrappy startups, think critically about the technology they're putting into the world. She's got a background you wouldn't expect.

She started as a librarian, became a UX director, managing developers and designers, and now helps teams identify AI risks before they become disasters.

In this episode, Jill shares why 95% of organizations aren't getting ROI on their AI implementations, how to flip risks into opportunities, and the one question every founder should be asking their team right now. Let's get into it. Hello Jill and welcome to the podcast.

Jill Heinze:

Yeah. Hi, Jothy, it's so great to be here. Thanks so much.

Jothy Rosenberg:

I'm thrilled. And you introduced me to another person, a woman that you know in Charlottesville. Oops, I gave away where you are.

Jill Heinze:

Don't tell it.

Jothy Rosenberg:

And so we've got a little, you know, once there are two people on the podcast from the same small town, we ought to open an office there. I mean, it's just like. Yeah, yeah.

Jill Heinze:

Ooh, I would welcome that. It's a deceptively small but entrepreneurially minded town, Charlottesville, Virginia.

Jothy Rosenberg:

Well, you've half answered my first question, which is where are you originally from? And we already just know that you live now in Charlottesville, Virginia. But where are you originally from?

Jill Heinze:

I'm originally from, I believe, your neck of the woods, if I'm not mistaken. I'm from Toledo, Ohio, so northwestern part of the state, not far from Detroit.

Jothy Rosenberg:

I am from Detroit, actually a suburb to the northwest of Detroit, and spent a lot of years there and also spent a lot of years in the western part of the state going to college.

Jill Heinze:

Yeah, it's beautiful. Beautiful Michigan, Northern Michigan, Western. Yeah, I love it. But don't tell my Ohio State alums, my fellow alums, that I said that.

Jothy Rosenberg:

So before we get into what you're currently working on and what you started, why don't you just give us a little bit of your journey, the startups and other companies that you were at, and where the experiences shaped the way you're doing this company.

Jill Heinze:

Yeah, great. Yeah. Where to start? I think I will go back to my librarian days.

So one thing that might be interesting to listeners is that I had intended to be an academic librarian. I went to library school for it. I was going to be in libraries for my career, or so I thought. And it's a great profession.

I often get people saying, man, if I could be anything, I would be a librarian. So I think we have a lot of closet aspiring librarians out there, and it is a great profession.

But through libraries you learn a lot about public service, you learn a lot about data privacy, organizing information. They're actually little tech hubs if you think about it.

We associate them with books, but in fact, you know, everything's digitized for the most part. So yeah, that was kind of my origin story. But I like doing research, I like making change.

I like using the information that I discover and not just passing it off to people, but actually advising, you know, organizations on what to do with it. So I left academic libraries and did market research and competitive intelligence for companies that work for large financial institutions.

So I was in the loyalty space for a while.

I also helped banks assess their marketing strategies and so forth, but wound up back in libraries to be, and this will be an interesting title for you, a UX librarian, which at the time I had no idea what that meant. But I ended up at the University of Virginia Library, which is how I came to Charlottesville.

And that is where I learned tech, really the nuts and bolts of how do you manage a website, how do you think about your tech stack and how that translates to different touch points. So I managed a team of 10 folks, developers, designers, researchers, and we were in charge of the library's digital presences.

So that was eye-opening for me, because I could see how one takes a team of talented folks, does the user research, does all of this thoughtful work, and actually makes a change. Like, people come to the library's website, thousands of people; they come through the doors and see your signage.

That kind of stuff was really moving for me.

So that's how I got into product. And then my final leg on the journey, before I started my own company, was working for a digital consultancy here in Charlottesville, formerly called Willow Tree, where we worked with big Fortune 500 brands.

I was a research director there and got into AI when ChatGPT came on the scene. So that's a long backstory, but that's how I got where I am.

Jothy Rosenberg:

Well, it was a lot shorter than the actual journey you took, so.

Jill Heinze:

That's true. That's true.

Jothy Rosenberg:

Was the size of the staff, the 10 you mentioned, was that the entire staff or was that a subset of the full complement of people that worked at the library?

Jill Heinze:

Oh, yeah, that was a subset. So our team didn't have accountability for all of our digital touchpoints. It was a subset of touchpoints.

So it was the website, digital and physical signage, some archival repositories. But we had about 200 people in the library, a good number of whom were also IT staff. Not under me, but they would manage things like, you know, more of our traditional IT infrastructure, our catalog and things of that nature.

So pretty large organization, I'd say, as far as academic libraries go.

Jothy Rosenberg:

I am stunned, actually, that the staff would be that big. I mean, I went to grad school at Duke.

I was in the library there a fair amount. I never would have imagined that there were that many staff people there.

And I imagine Duke's roughly the same size, maybe a little bit smaller than UVA.

Jill Heinze:

I think that's about right. I did a library internship at Duke, so I'm very familiar with it. Well, I was back in the day, very familiar with it. I worked on the Lilly Library side of things.

So, in fact, there are two kind of campuses: the first-year campus and then where the other folks are. So they have a very robust library system.

Jothy Rosenberg:

Yeah, well, it's a wealthy institution.

Jill Heinze:

This is true.

Jothy Rosenberg:

They definitely are. Okay, so before we move one more step: were there any mistakes that were made that you learned from during those various experiences? Some were startups and some were, you know, at the library.

And I'm just always interested, because people learn so well not only from their own mistakes, but from other people's. So I thought if you had some of that, you could talk about it.

Jill Heinze:

Yeah. Oh, of course. Yeah. A path like that does take some winding detours, for sure.

And I think something that comes to mind is on the technology side of things. So back when I was managing this team, which, by the way, I was really fortunate and grateful that someone gave me a chance to do that.

I hadn't had management experience before. This was really kind of a leap of faith to entrust these folks to me. So that in and of itself was a huge learning opportunity.

How people, once you become their manager, interpret what you say in a different way; what they share and how they share it with you; it just transforms kind of overnight. Which isn't to say I made a mistake, but it was eye-opening in that respect.

But what I learned about the technology came mostly from my developers, who were very generous in helping to educate me on very fundamental things about web development. Like, you know, what's a DOM? What is the DOM? I don't know.

Somebody explained it to me, and this was at a time when we were re-envisioning our entire tech stack, you know, from the ground up.

And in order to manage that transition out of, at the time, WordPress into using web components and things like that, I really had to understand the technology and the workflows of the different folks and how they intersect with one another, because we're all one team and we need to communicate.

My misstep, and this will be relatable, was being overly optimistic about timelines. I had thought, oh, we're going to be able to migrate this site to this new stack in probably a few months. I think it was maybe double that, if not more.

Jothy Rosenberg:

You know what, you're the only person in the world who's ever missed their schedule.

Jill Heinze:

Am I? Man, what a, what a.

Jothy Rosenberg:

The only question that a manager of a development project has to ask is: should I double, triple, or quadruple the number I'm getting from the overly optimistic development team?

Jill Heinze:

I love that. I wish I had had that insight. But this was learned the hard way for me.

Jothy Rosenberg:

It's funny, because what you end up doing is there's actually a number, a factor, that's different for each individual developer. And so if you're the first-line manager of a group of developers, then you need to apply that factor.

Jill Heinze:

Okay.

Jothy Rosenberg:

Hey, on Joe I do a doubling; on Sally, you know, she's actually right on the dot every time. And then when that bubbles up, the next-level manager has to also add a factor.

You know, I mean, you'd think that you could make this more of a science, but it's just so hard, when you have a fairly large team, to get it all to come together in a schedule that you can really believe in.
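A minimal sketch of the estimate-padding arithmetic Jothy describes, with hypothetical developer names, calibration factors, and estimates:

```python
# Sketch: scale each developer's raw estimate by a personal calibration
# factor learned from past schedule misses, then pad the roll-up with a
# next-level manager factor. Names and numbers are hypothetical.

calibration = {"Joe": 2.0, "Sally": 1.0, "Pat": 1.5}  # per-developer multipliers
raw_estimates = {"Joe": 4, "Sally": 3, "Pat": 6}      # raw estimates in weeks

MANAGER_FACTOR = 1.25  # next-level pad for integration surprises

def adjusted_schedule(raw, factors, manager_factor):
    """Scale each estimate by its owner's factor, then pad the total.

    Assumes the pieces happen sequentially; use max() instead of sum()
    for fully parallel work.
    """
    corrected = (weeks * factors[dev] for dev, weeks in raw.items())
    return sum(corrected) * manager_factor

print(f"Adjusted: {adjusted_schedule(raw_estimates, calibration, MANAGER_FACTOR):.1f} weeks")
```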

Jill Heinze:

And that is maybe another underlying lesson we could all take note of. I think about it a lot with AI, but I think in that example, humility is a part of this.

There's a lot of variability and unpredictability in the world, and technology does not make us immune from that just because it, I don't know, maybe gives the air of concreteness, and it's code and it's abstracted and all of this. But in fact, we're people, we operate in real-life situations, and that's a very dynamic space.

And so too should our processes reflect that or our expectations, I think.

Jothy Rosenberg:

Okay, so now could you connect the dots between when you were a librarian, helping people find information and see how various pieces of information are connected so they can do whatever they're doing in the library, and what you're doing now, which is helping organizations identify AI risks before they become disasters? And you can talk about, you know, the name of your startup and everything. So what is that startup, and how did those dots get connected?

Jill Heinze:

Yeah, that's sort of interesting to reflect on, I will say, because I did skip the last part of my journey, which is where I am right now. I have founded a consultancy called Saddle Stitch Consulting, which does reflect a bit of my library background.

A saddle stitch is a very inexpensive book binding. And to me it just represents bringing things together, bringing people and technology together, bringing ideas together, et cetera.

So that's the genesis of the name. I think how I translate my library background into what I do today is, I really care about people. Like, you work in a library.

You see folks of all stripes, you know, that come in; they want and need access to information that I believe everyone should have. So there's this sense of serving the greater good in an equitable way.

And I just think that is something that I carry with me. I've seen people interact with bad technology quite a bit in libraries too.

Databases that don't make sense, trying to find call numbers and match that to the real world, and that's its own feat, right? So I think I just have always been keenly aware that technology doesn't exist in a bubble.

It impacts human beings, it impacts their day-to-day lives. And also, as our example about the timeline being blown shows, it doesn't go as we planned. It just never does.

So how are we exercising care with people?

By thinking about our technology, the risks that we're putting out there as we put technology in the world, and then just being proactive about being accountable for that. And I think that's really what motivated me. And then ChatGPT coming on the market was the big aha, the wake-up call.

As I think maybe it was for many people, when I thought, oh gosh, we really need some education, and to reconcile with the power of this new technology that can do really great things and really scary things. But we were just on the cusp of figuring that out.

Jothy Rosenberg:

Sorry for the interruption, but in addition to the podcast, you might also be interested in the online program I've created for startup founders called Who Says You Can't Start Up? In it, I've tried to capture everything I've learned in the course of founding and running nine startups over 37 years.

It's four courses, each one about 15 video lessons, plus over 130 downloadable resources across all four courses. Each course individually is only $375. The QR code will take you where you can learn more.

Now back to the podcast. What you explained to me earlier was that most people think of governance applied to AI like they would think about governance of anything else, that is, as risk: you've got compliance to think about, you've got checklists. But you have a different approach.

Jill Heinze:

Yeah, yeah, it's dry stuff. Governance, exactly.

That's what people think about, you know, lengthy PDFs with thou-shalts and thou-shalt-nots, and it's just overwhelming. And most people kind of don't pay attention to it, to be honest. And so I look at that disconnect as a design problem.

I'm a UX researcher, right? So everything's a design problem.

But if you want to enact a change in your organization, which governance is a tool for, it needs to be humane, meaning it needs to relate to the real problems people have on the front lines. It needs to speak the language that they speak. It needs to make sense for their workflows.

And when I was working on governance within a large organization, I could see where the statements that one made on paper, once you go and try to true that up with people who are doing the work, just didn't marry up at all. And so not only did that piece of the governance policy fall flat; I think everything else, the credibility of it, was undermined, because clearly people are writing this who don't really know.

So my job, I felt, was to be that bridge and think about, you know, the user perspective, the frontline managerial perspective, the executive perspective, and make all of those things come together in a way that makes sense.

Jothy Rosenberg:

And why do you think that early-stage startups need this?

You know, there's a limited number of things that startups can assimilate and do, either because they don't have the people resources to take on another project, or they don't have the money. But why should startups prioritize this?

Jill Heinze:

Yeah. Whether or not AI is going to live up to the hype that it has today, we don't know. I do think the technologies are going to stay, because they've just proven pretty darn useful.

And I know we've talked in the past about how you use it for your podcasting, for example. It's really helped your workflow. You're a person of one who can now do things that maybe a team of two or three, you know, would need to do.

So one, I don't think it's going anywhere. So that's a technology we need to engage with. I think for startups, you know, this doesn't have to be onerous or process-oriented.

I think it can be conversations with staff that go something like, hey, what AI tools are you currently using? How are you using them? To what end? And why do you feel the need to inject that here?

And the reason it's important even for a startup to ask those questions is that we know people are bringing their own technology to work.

They may be exposing proprietary data into those systems unknowingly, because again, this is new technology and they're trying to figure out how it works and what the rules of the road are. So if that's happening, and your IP is getting out into the world and getting trained into models in the future, that's gone.

That IP is gone. Those models cannot unlearn what they know.

And so even simple steps like that, just figuring out what people are using and why, and whether there are vulnerabilities you're unknowingly exposing yourself to, I think that's really worthwhile for startups.

Then, of course, as you scale, I think having that kind of reflexive knowledge of how to keep a pulse on what people are using and the use cases, that all serves good governance going forward.

Jothy Rosenberg:

So one of the things that you're dealing with when you talk about AI and startups is risks, in all kinds of situations. With my companies, you're trying to list out a set of risks. Sometimes you have to do it in a contract.

And you're basically stating, I'm making these representations and these warranties. And those are also related to risks, things that you're saying haven't happened or won't happen.

But I've always found it really difficult and I wonder if it's partly because, like a lot of startup founders, I'm an eternal optimist. If you weren't an eternal optimist, you couldn't do startups.

Jill Heinze:

Yes.

Jothy Rosenberg:

Because the number of things that could go wrong is, is almost infinite and you're plowing ahead anyway.

And so I feel like people who can identify a pretty comprehensive set of risks have a superpower that I don't have, and I'm glad to have them around.

Although, don't you think that if you're the kind of person who's always thinking about risks, you're kind of living in a world of negativity?

Jill Heinze:

Oh, yeah. I think there's a real, to use the word risk, risk of being too negative with your risk management. And that stopped me.

Well, I won't say stopped me, but it did take me a while to gain traction in institutions where I was internal, in trying to really point people toward understanding these risks in a more grounded way. And I found a couple of unlocks for that.

One of the things that you're saying, and I think this is very true: if we look at risks as negatives, it will carry that heaviness and you'll be a downer. People won't invite you to their parties or happy hours. So I like to think of them differently.

This makes me a little bit of an outlier, but they're really opportunities once you flip the script on them. So when I would work with development teams, it helped me to have a framework.

So I'm not ideating in the ether, where I'm just coming up with risk after risk and have no idea which one is more likely than another to occur. It just feels like this morass of ick.

But actually, I would use the NIST AI Risk Management Framework. I'd look at the trustworthy characteristics they outline there, and I'd marry that up to use cases.

So then I would sit with the team and we would ideate: here's the people who will be affected. Here is the tech stack we're going to use.

We know kind of the vulnerabilities or downsides of those things, and we know these trust characteristics we're aiming for. So we would spend time thinking, okay, how can this go completely off the rails?

And it's really generative because honestly, teams don't think like that. Like you say, we're optimists, we're building stuff, we're figuring out problems, and that's all good.

But at the end of that session, we then say, okay, well, how are we going to fix it? Like, what are we going to do from a technical perspective, from a UX perspective?

Or maybe we find things like, whoa, we are not comfortable doing this at this time. This is outside of our risk tolerance. And that's okay. But I think what's not okay is not having thought about it.

Because if you care about people at the other end, you need to spend some time. You have a responsibility when you put technology into the world.
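A minimal sketch of the risk-ideation exercise Jill describes. The characteristic list follows the NIST AI Risk Management Framework's trustworthy-AI characteristics; the use case, fields, and prompts below are hypothetical:

```python
# Sketch of the ideation exercise: walk a use case against the NIST AI RMF
# trustworthy-AI characteristics and ask how each one could fail. The use
# case, affected people, and tech stack are hypothetical examples.

TRUSTWORTHY_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

session = {
    "use_case": "retail support chatbot",
    "people_affected": ["customers", "support agents"],
    "tech_stack": ["hosted LLM API", "order database"],
    "risks": [],        # filled in during ideation: how can this go off the rails?
    "mitigations": [],  # technical and UX fixes, or "outside our risk tolerance"
}

# Generate one ideation prompt per characteristic for the team to work through.
for trait in TRUSTWORTHY_CHARACTERISTICS:
    session["risks"].append(
        f"For {session['use_case']}: how could this system fail to be {trait}?"
    )

for prompt in session["risks"]:
    print(prompt)
```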

Jothy Rosenberg:

So there are probably startup founders listening to or watching this podcast, and I would presume most of them are thinking about how they should put AI into their product. And what would you say is the first thing they should do?

And what is the first mistake, I guess, or the biggest mistake that they need to be thinking about upfront and avoid?

Jill Heinze:

Yeah, very first thing: get the ground truth. So what do I mean by that? I'm a researcher. If you're looking at a problem space, sit with the people who are engaging with it.

I mean, metaphorically or literally, depending on the problem. But get out there, get to the environment you're looking to impact and the people who are in that environment.

One of the things I find about AI, and technology in general, is that we are dealing in abstractions. And sometimes they're helpful. Sometimes you need to abstract a problem to make it manageable.

But I think if you layer abstraction on top of abstraction, and now with AI, heck, I don't have to talk to people. I can make up a persona and talk to my chatbot and pretend I did the work of talking to people.

It's just very tempting to get far away from the grounded truth. And then the problem with that is that you run into that MIT study that found like 95% of orgs are not getting ROI on their AI implementations.

And some of the reasons for that are that they're focusing on the wrong use cases.

Like, they looked at marketing and they looked at business development as this low-hanging fruit, but they neglected the less sexy, more mundane back-end systems where AI could actually generate more efficiency. And I feel as though that is a fault of the lack of that ground truth, of really understanding where the friction points are.

So that would be my first advice is really get familiar with the problem and talk to people.

Jothy Rosenberg:

You used the term harmfulness. What does that mean to you?

Jill Heinze:

Ooh, that's a good one.

So in the example I used previously about governance, one of the terms I had to negotiate when we were rolling out our use policy: there was a statement that said employees shall not use GenAI to produce biased outputs. Right. And that to me was like, whoa, that's not how this tech works. Every output is biased, for sure.

And there are harmful biases, which is what the NIST standard seeks to address. And then there are biases that we need to be mindful of, but that might be acceptable for our use case.

So when I think about harms and harmful bias, it is very use case dependent.

It's going to look different if you're talking about an AI implementation for a, you know, women's healthcare clinic versus a retail chatbot or something of that nature. The risk profiles are just different.

But ultimately it's the folks who are using or engaging with the technology that will be the best to enlighten you about the potential harms it could carry. The people impacted, you know, most harmfully are often those who are marginalized in general, whose voices are not in the room, so to speak.

So those are the ones I think we need to more intentionally seek out and simply ask them. Show a prototype, show, you know, a concept and start to just engage in what that's like for them. And I think that goes a long way.

But there's not one answer to what harm looks like.

Jothy Rosenberg:

Hi, the podcast you are listening to is a companion to my recent book, Tech Startup Toolkit: How to Launch Strong and Exit Big. This is the book I wish I'd had as I was founding and running eight startups over 35 years.

I tell the unvarnished truth about what went right and especially about what went wrong. You can get it from all the usual booksellers. I hope you like it. It's a true labor of love.

Now back to the show. So in the context of that, and also the whole idea of understanding the risks and managing the risks, that's the side of things that I think you really are trying to get people to think about. But what would you say to most startup founders I know, including myself, who tend to operate in a world where you move fast and break things, and then figure out what happened, fix it, and move fast and break things again? I think you probably have something strong to say to those kinds of people.

Jill Heinze:

Yes, I'm holding back, but I can say that: stop doing that. Stop doing that.

Jothy Rosenberg:

I can see that.

Jill Heinze:

I don't know, maybe a caveat here.

I think there are times when moving quickly can help you get good feedback and get those insights that we were just talking about, about harm, from talking to people on the front lines. That's good feedback to get early and often. I think you can move quickly there; the risks are fairly low in those situations.

So I think the first thing people need to ask themselves is: what are you moving fast toward? Are you moving fast toward a very risky, potentially harmful scenario, or don't you even know, because you didn't take time to think about that?

So moving fast toward a cliff isn't a badge of honor. It's not a good thing to do. You need to look ahead. And also, breaking things. I again ask: what are you breaking?

And whose is it? You know, it's easy if it's not you, but what if it's somebody else who's bearing the brunt of that?

And let me just give you an example from the news that I read recently. So I think this was telling. Now this is related to Google Gemini.

So maybe a little bigger than most of our listeners' organizations, but of course, they released Gemini into the wild. There was a solar company, they install, I guess, solar panels, and they're based in Minnesota.

And they started noticing, hey, our clientele is just dropping off. What happened to all of these contracts?

They start talking to folks and learn that Gemini was hallucinating a news report that said they had negotiated with the state attorney general because they had deceptive sales practices. Okay, great. Gemini, you know, is out there in the wild, doing, you know, cool things.

But what redress do those people have, whose company is fundamentally undermined and the trust that they've built for decades is eroded, because of a hallucination? These are the kinds of things that are easily predictable. Like, we know that these things hallucinate.

What's Google doing about it, you know, or what can one do? I mean, that's a hard thing to take back.

So I just think the forethought is really important, and more so for startups, because it's really reputation-based. You're carving out your unique value proposition, you're building your identity along with the tech.

Jothy Rosenberg:

Well, you asked a question as you were talking about this. You know, what is it you're moving fast towards?

Well, what I'm always advising startup founders to do is to make sure they're carefully tracking their burn rate and how much cash they have in the bank, and therefore their runway. And their runway has to take them to the next big important event, which frequently is a financing.
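A minimal sketch of the runway arithmetic Jothy describes, with hypothetical numbers, runway in months being cash on hand divided by net monthly burn:

```python
# Sketch of the runway arithmetic (hypothetical numbers):
# runway in months = cash in the bank / net monthly burn.

cash_in_bank = 900_000      # dollars on hand today
monthly_net_burn = 75_000   # dollars burned per month, net of any revenue
months_to_next_event = 10   # e.g., time needed to get to a Series A raise

runway_months = cash_in_bank / monthly_net_burn
print(f"Runway: {runway_months:.1f} months")  # Runway: 12.0 months

# The runway has to take you to the next big important event.
assert runway_months > months_to_next_event, "Not enough runway to reach the raise"
```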

Jill Heinze:

Mm.

Jothy Rosenberg:

If you're not a consultancy like you are, which essentially, you know, is self-funding, then you need to be very careful that you're not going to run out of cash.

So these are the founders that are driving very fast. And one of the biggest things that you've got to hit is product-market fit, because it's an absolute barrier between the earliest founding stage and actually being able to raise a serious institutional round of financing, called a Series A, that's going to enable you to grow. And proving product-market fit includes having a minimum viable product that, if you're in the B2B space, at least three customers will pay for. So that's the undeniable thing that is pushing especially the CEO to move very fast.

And I'm sure this is, you know, the mantra for your company: you can't stop that, you can't change that. That's a given. And what you have to try to do, and I'm projecting and you can answer me with what's really true, is figure out how to allow them to continue to move fast. And yes, it'll break things occasionally.

And when I say things, I usually am talking about their product. It turns out that they can't get more customers because it's not really a minimum viable product, and they have to go back and try again. Or they start selling, they hired a salesperson, and then they find out, we actually don't have product-market fit, and you have to fire all your salespeople and retrench. And this happens all the time. And then you come along and you're saying, I want to inject some caution about AI.

Okay, so that's thing one. And related to that: do you see them being able to go faster with the use of AI, so that they can take some more time to be careful?

Jill Heinze:

Oh yeah, absolutely. Yeah. This is a great conversation. So yes, you have, let's say, a bucket of time and a bucket of money, your runway, if you will.

And then it's a matter of allocation within that. And so one of the questions is: what can we do to learn, in the fastest and cheapest way possible, what gets us to that product-market fit?

And AI is a great tool in that, because now we can prototype stuff at a fidelity that, you know, gosh, I used to draw on paper, like wireframes and stuff like that, and still there are uses for that. But now we can really iterate very quickly and get input very quickly, when it is cheap to change direction and pivot and find that fit.

I can give an example that might be of interest, from an organization I was working with. This is a large publisher.

They had what was kind of like an internal little lab startup, and they wanted to rethink one of their products, which was print-based, and think about it in a digital light. So they did things that on paper look fantastic.

Like, they had an advisory group that helped co-design a new app that could help make these publications accessible, et cetera, et cetera. The first time I put their concept in front of the target user group, everyone was like, I will never use this thing.

I'm like, what are you talking about? We did the right things. We co-designed with our stakeholders.

Well, those stakeholders were designing not for themselves but for a different audience. But the assumption that didn't get interrogated was that they were designing it for themselves.

So when we put that in front of people other than those in the advisory group, it was just completely a misfit. We had to go a little bit downstream in the student base to find, oh, this is the part of the learning where this makes sense.

So I say all that because, had one not asked that question, had one not gotten that feedback, you are talking about a multi-million-dollar investment in a technology solution that, I feel, would have had no chance of landing as intended. It took me nothing to put up a PowerPoint with, here's a mock of the idea.

I mean, I won't say nothing. That's my job. I do research. But that's the idea: learn the right things quickly, but define what it is that you want to learn.

Jothy Rosenberg:

That's a great story. That's really a great story.

I only have one final question, and it's slightly off from what we've been talking about, but I want to talk about the word grit. The word grit is defined by words like resilience and fortitude, drive, determination, stick-to-itiveness, and a really important one, courage.

I can tell you from personal experience, and I've known a lot of founders: there's not one who doesn't have a lot of grit.

And I think that has to be applied to you as well. So the question is, where did your grit come from? What's your grit story?

Jill Heinze:

Oh, wow. Yeah. And gosh, I love you and your podcast, because it is a jolt of grit when I need it. I don't know that I have it naturally; hey, I'm a risk-averse person.

You might have gathered that.

And I'm starting out on my own in a way that's deeply personal, because it's me, you know; I am the brand. And so failures and successes hit different when you're the person. And I'm sure a lot of people relate to that.

But the grit that I do think I'm proud of myself for having stuck to was that, you know, I left a comfortable full-time job and I don't have mountains of runway.

But I believe we need to care about human beings in our technology landscape, and that I can help, because I know enough about technology and how teams work to bridge that chasm and help make the future better for my kids, who are definitely going to be impacted by this technology, and other people's kids. Really, we're creating the world that we are going to be living in through all of these little decisions we make, whether we're pausing or we are breaking things; those are decisions we need to own. And I want to be a force for that. So whenever I get discouraged, and I do, I try to just go back to that: why am I doing this?

I really care about it. And so I think that's maybe my courage and grit. Otherwise, if I didn't have that sense of mission, I don't know that I could do this.

Jothy Rosenberg:

I should add one more word to my list of words that I use to help define grit, and that is passion, which is what you're talking about.

Jill Heinze:

Yeah, I think that's it. Yeah.

Jothy Rosenberg:

And you clearly are exhibiting courage by not taking the easy path, by going out on your own and fighting to make sure it works, to be successful. If that's not grit, I don't know what is.

Jill Heinze:

Oh, I really appreciate you saying that. I think I need to hear that from time to time. So thank you so much for that.

Jothy Rosenberg:

Well, thank you for doing this. It was a wonderful conversation and I really appreciate you coming on the show.

Jill Heinze:

Very much the same. Thank you so much. This was really a privilege for me. I appreciate it.

Jothy Rosenberg:

And now for your toolkit takeaways. Toolkit number one: get the ground truth before you build anything with AI. Stop abstracting and start talking to real people.

Sit with the humans who will actually use or be impacted by your technology. AI makes it tempting to skip this step, to generate fake personas and pretend you did the work. Don't.

That's how you end up in the 95% of organizations getting no ROI on their AI investments. Toolkit number two: know what you're breaking and whose it is. Before you move fast and break things, ask yourself two questions.

What exactly am I moving fast towards? And what am I breaking, and who bears the cost if it goes wrong? If you can't answer these questions, you're not being bold, you're being reckless.

Moving fast towards a cliff isn't something to brag about. Toolkit number three: audit your team's AI usage today. Your employees are already using AI tools, probably ones you don't know about.

They may be exposing your proprietary data into systems that will retrain on it. That IP is gone forever. These models cannot unlearn what they know. So ask your team: what AI tools are you using? How and why?

Simple questions that could save your startup.

Now go have a 15-minute conversation with your team about what AI tools they're actually using, before your proprietary data ends up training someone else's model. And that is our show with Jill. The show notes contain useful resources and links.

Please follow and rate us at podchaser.com, Designing Successful Startups. Also, please share and like us on your social media channels. This is Jothy Rosenberg saying TTFN, ta-ta for now.
