Kraftful: Yana Welinder
Episode 59 • 24th June 2025 • Behind The Product • SEP
Duration: 00:37:37


Transcripts

Yana Welinder:

This concept of having heard that users want a faster horse and then kind of coming up with the idea that it's actually a car that's strictly in the human space at the moment. That's really what humans can do. Being creative around user insights, that's really that AI human collaboration, where that line is today.

Zac Darnell:

Welcome to Behind the Product, a podcast by SEP, where we believe it takes more than a great idea to make a great product.

We've been around for over 30 years building software that matters more and we've set out to explore the people, practices and philosophies to try and capture what's behind great software products. So join us on this journey of conversation with the folks that bring ideas to life. Hey, everybody, welcome.

Welcome back to another episode of Behind the Product. I'm your host Zac Darnell, and we've been on a little bit of a hiatus, but we're going to come back to our normal programming.

So on this episode I am joined by Yana Welinder, co-founder and CEO of Kraftful, which is an AI-powered sidekick for product teams.

We chat about how AI is changing the game in product management from surfacing insights you didn't know you needed to helping teams move faster without losing their creative spark.

Yana shares how Kraftful is helping PMs turn their user feedback into product gold and what it really takes to build trust when AI is part of the team. Hope you enjoy Yana.

Really quick just to kind of level set anybody kind of listening and just for Chris and I, tell us the 30 second pitch around Kraftful.com, like, what do you guys focus on? What's the platform for?

Yana Welinder:

Kraftful is a copilot for product teams. We help now over 50,000 product teams listen to users across lots of different platforms.

Call transcripts, support tickets, app reviews, survey data, and then we use large language models to turn all of that data into product insights, PRDs, user stories and make it really easy to act on all that data and essentially automate that process end to end. So you really never have to think about including users in the process. They're really just involved as you're building products.

Zac Darnell:

Oh, wow. I didn't realize. 50,000 teams.

Yana Welinder:

50,000 teams.

Zac Darnell:

That's awesome.

Yana Welinder:

Thank you. Yeah, it's been a while. 20 months, which is really how fast, how fast we've gone.

Zac Darnell:

Incredible. Oh my gosh.

Well, you know, obviously AI has kind of blown up in the last couple of years, and so there's great opportunity there. So maybe to get us started, I want to double-click into this role of AI in product development. It's a really hot topic.

A lot of people have been talking about it. A lot of people are still trying to figure out like what does that actually mean? How do you actually apply AI in product development?

Whether it's AI as a product, AI as a tool in the process, AI as a function in your overall product suite. So there's like so many different ways that people are trying to apply these things.

So you know, to think about Kraftful specifically to get us started. I'm curious about your thoughts at the industry at large.

You know, what role do you think AI is playing in the product management space, and how does Kraftful specifically integrate these insights into the way that you serve your customers?

Yana Welinder:

As AI is transforming how we approach product management, we help turn complex, overwhelming volumes of user feedback into clear and actionable insights. At Kraftful, we think of ourselves as sort of a strategic partner for product teams.

We enable them to prioritize the most valuable ideas and tackle user pain points effectively.

Our approach is to leverage large language models to connect all feedback sources and then apply a very complex, multi-step analysis to analyze all of these different sources in different ways to give teams product insights that may normally take days or even weeks, but we're providing all of that in moments.

And then we also make all of those insights very actionable by turning them into PRDs, user stories, and acceptance criteria based on all that data, and instantly syncing those user stories into the backlog so that you can automatically share all of that context with the team, which I think is really, really important.

Zac Darnell:

So your system is automatically pre-filling stories and insights into whatever ALM tool a team might be using. So basically, rather than the team having to create something from scratch from a conversation, they can edit from a starting point.

Yana Welinder:

Exactly.

So Kraftful writes a user story, writes acceptance criteria using everything that user said, and then it also has a link back to everything that user said, so that when the development team starts working on it and they realize that there's some optionality, where they would normally go back to the product manager and say, do you want me to do X or Y?

They can actually go back and see what did the users actually say that ultimately led to this feature that I'm working on and maybe that will answer the question without even having to do that back and forth.

Zac Darnell:

I love that, because something I feel like I talk about a lot, to the point that I'm constantly repeating myself, is this idea that it's so much easier to edit than it is to create.

So if I'm asking for help, whether it's from a customer of SEP or a friend, giving people a starting point or a frame of reference that they can just edit from, even if it's wrong, is so much easier than asking somebody an open-ended question that they have to answer from a blank page.

Yana Welinder:

Absolutely.

Zac Darnell:

I would rather make the wrong assumption and say, hey, I think you said this and have somebody say, eh, you didn't get that right. Because how often have we all been in the kind of I'll know it when I see it period.

When you're in that mode, you got to fill in the puzzle pieces for somebody to help them articulate what they mean.

Yana Welinder:

First of all, that's actually how Kraftful is built at every step of the way. So you get product insights, they're editable. You get user stories, they're editable. You get PRDs, they're editable.

And so we kind of think of it as AI is doing a first stab and then it's really up to the team to then collaborate around all of this to make it theirs and tell the story to other team members. Right.

Part of it is learning what users said, and part of it is then telling the story to your team about why they're working on certain things. So we kind of have this curation really be part of the process.

Zac Darnell:

You're touching on something that might make some people a little uncomfortable, this human-versus-computer thing. And I think some people feel fearful to some degree, like, oh, ChatGPT is going to take my job.

Where have you seen, so far in this journey over the last couple of years, again keeping inside the scope of product development, where you feel like AI shouldn't be involved?

You know, where's human intuition still essential? As much as we want to make data driven decisions. Right.

Chris Shinkle:

That's good.

Zac Darnell:

But there's a creative side of this job.

I mean I think building software, whether you're an engineer, designer, product manager, business leader, there's both art and science in what we all do. How do we balance that?

Yana Welinder:

I think that we are sort of on this timeline of AI developing. And so there's the question of like what can AI do today and what should it be involved in today?

And then there's a separate, very different question around what AI will be able to do in the future and what it should be tasked with doing in the future.

…large language models since early…

But where we are today, I think there's a very clear distinction between the analysis that AI can do based on existing data, and it can do a little bit of a leap between, for example, what users are requesting or what they're complaining about, to kind of predict what the solution could potentially be. But it's very limited.

Usually, the AI can come up with success metrics in our PRDs, for example; they're based off of everything it's read in all the user input. And then it can kind of formulate pretty good success metrics just based on those pain points.

So that's an example of the kind of things that AI can do.

But this concept of sort of having heard that users want a faster horse and then kind of coming up with the idea that it's actually a car, that's strictly in the human space at the moment, that's really what humans can do. Being creative around user insights, that's really that AI human collaboration, where that line is today.

Chris Shinkle:

Hey, Yana, I was... I'm kind of curious: in order to pull out these customer insights, you're obviously connecting to lots of different sources of information.

Looks like on your website there's a huge number of different connections. Curious, I guess: have there been any surprising insights?

Like, have there been things that teams have connected to and maybe didn't expect to get as much insight from as they did? They weren't maybe a typical source of user insights to drive product development or features or those things.

Yana Welinder:

One thing that's surprising to me personally, which is also kind of separate from what's been surprising to users, is just how much of an integrations play I've found myself in with Kraftful, because before Kraftful, I was head of product at IFTTT. So I led a product that was all just an integrations platform, and now I'm sort of rebuilding that, but with AI at the core. But yeah, I've definitely had lots of really interesting sources of feedback.

Folks sometimes come in and are kind of skeptical about what type of insights they will be able to pull out from particular types of sources, sometimes even sources that you would expect to be rich with data, like survey data.

And just once they're able to see it pull together in the way we pull it together, which is, for example, we extract feature requests from each user comment and then we group them together.

And then we go one step further to understand beyond that feature request what are kind of some deeper level requests that people are asking from that entire group is asking. And then we pull it back and link it to the original mention so that people can see that.

And I think just being able to see it like that, prioritized based on how frequently everything came up, and being able to click through, even though they may have analyzed their survey data and seen that it's generally there, just being able to see it in one glance.

I think that usually folks find that surprising, not to mention being able to get that same type of insights from support tickets, which product managers rarely see, or sales call transcripts, which again is not something that folks see very frequently.

So for everything that's on the product side, the user interviews, survey data, also app reviews, sometimes the surprise element has been more around just being able to see the insights in a different light, but then also being able to access insights that have been completely out of their reach previously. That's been really cool to see.
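The workflow described here, distilling each comment into a concise request, grouping the matches, ranking by frequency, and keeping backlinks to the verbatim mentions, can be sketched roughly like this. The keyword matching is just a stand-in for the LLM extraction step, and all names here are illustrative assumptions, not Kraftful's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    source: str   # e.g. "survey", "app_review", "call_transcript"
    text: str     # the user's verbatim comment

@dataclass
class FeatureRequest:
    summary: str                                   # concise articulation of the request
    mentions: list = field(default_factory=list)   # backlinks to verbatim comments

def extract_request(comment: str) -> str:
    """Stand-in for the LLM step that distills a comment to its essence."""
    lowered = comment.lower()
    if "notif" in lowered:
        return "improve notifications"
    if "dark" in lowered:
        return "add dark mode"
    return "other"

def group_feedback(mentions):
    """Group mentions by distilled request, ranked by frequency of mentions."""
    groups = {}
    for m in mentions:
        key = extract_request(m.text)
        groups.setdefault(key, FeatureRequest(summary=key)).mentions.append(m)
    return sorted(groups.values(), key=lambda fr: len(fr.mentions), reverse=True)

feedback = [
    Mention("app_review", "I never get notified about replies"),
    Mention("survey", "Notifications are unreliable"),
    Mention("call_transcript", "Would love a dark mode"),
]
for fr in group_feedback(feedback):
    print(fr.summary, len(fr.mentions))
```

A real pipeline would replace `extract_request` with a model call and cluster semantically similar requests rather than exact keys, but the backlink structure, each group keeping its original mentions, is what lets a reader click through to the verbatim comments.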

Chris Shinkle:

Well, does it sort of link things?

So if somebody leaves a review in the app store, then there's a call that comes into the support center about something similar, and then somebody files a help ticket or whatever, does that link? Does the AI do some linking there and sort of pull all those insights together?

Yana Welinder:

So because it reformulates the item, it doesn't necessarily take verbatim exactly how the user said it. It tries to do the most concise articulation of a feature or a complaint or a category.

So there's a few different types of insights we pull out. Because it does that, those then become very comparable across the call transcript, the app review, the survey.

So you can see them side by side being very similar suddenly because it's taken to its very essence, what's the request? And then you can click through and see verbatim exactly what people said. It does a little bit of both.

But yeah, to answer your question, yes, it can tell that it's the same thing.

Zac Darnell:

Chris, you're kind of hitting on something I'm curious about. You've got 50,000 customers using this, and you talk about all these integrations and how important that was.

How does Kraftful's engine decide what's most critical inside of all that feedback and all those data sources? And does it factor in things like sentiment analysis and its impact against business goals or product objectives, OKRs, or whatever model companies use?

Yana Welinder:

We make it really easy for you, particularly on the off-the-shelf side of things.

We do a little bit more mapping for our enterprise customers, but we try to make it easy for you to see those different variables and then be able to weigh them based on how you would want to do it. But to give you an example: you can look at the data from a frequency-of-mentions perspective. Right? That's the default.

Then we have another way to look at it from sentiment perspectives.

You can see what are the trends that came up in the data, what's the sentiment associated with that trend and then what's the mood of users as they're talking about this point.

So that's another way to see it: maybe there's something that didn't come up nearly as many times, but you can sense that there's a certain urgency there because people are just very, very upset about it. Right. So that gives you a different way of prioritizing it.

And then a third way to prioritize it is if you connect your data to your HubSpot or your Salesforce CRM.

You can pull up deal size associated with certain feedback and then be able to prioritize it based off of okay, this actually unlocks quite a bit of revenue and I want to prioritize it for that reason. Right. So you have a few different levers to pull from depending on what you're trying to do at any point in time.
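As a rough illustration of weighing those three levers, frequency of mentions, sentiment urgency, and CRM deal size, here is a hypothetical scoring sketch. The field names, scales, and weights are invented for the example, not how Kraftful actually scores things:

```python
def priority_score(item, weights):
    """Weighted blend of the three levers: mention frequency, sentiment urgency,
    and deal size pulled from a CRM."""
    return (weights["frequency"] * item["mention_count"]
            + weights["urgency"] * item["negative_sentiment"]      # 0..1, how upset users are
            + weights["revenue"] * item["deal_size_usd"] / 10_000)

items = [
    {"name": "export to CSV", "mention_count": 40, "negative_sentiment": 0.2, "deal_size_usd": 5_000},
    {"name": "SSO login",     "mention_count": 5,  "negative_sentiment": 0.9, "deal_size_usd": 120_000},
]

# Frequency-only weighting: the most-mentioned request wins.
freq_first = {"frequency": 1.0, "urgency": 0.0, "revenue": 0.0}
# Urgency- and revenue-leaning weighting: the rarely mentioned, high-value request surfaces.
revenue_first = {"frequency": 1.0, "urgency": 10.0, "revenue": 5.0}

by_freq = sorted(items, key=lambda i: priority_score(i, freq_first), reverse=True)
by_revenue = sorted(items, key=lambda i: priority_score(i, revenue_first), reverse=True)
print(by_freq[0]["name"], by_revenue[0]["name"])  # prints: export to CSV SSO login
```

The point of the sketch is the swap: the same items rank differently depending on which lever you pull, which is what lets a team reprioritize for a revenue conversation versus a user-pain conversation.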

Zac Darnell:

That's got to be hard. Usually at scale, I find that products are trying to streamline the way that customers use them.

A lot of times, you know, kind of being a little general and hand wavy here.

And at the same time, you want some level of flexibility to be able to slice things appropriately for your use case, your conditions, the uniqueness that makes you a business. That's going to be a really hard thing to balance inside of this. So it's really cool to hear the way that you guys are thinking about that right now.

Yana Welinder:

You know, we've really thought about, and I think this is actually important, how I used to do this as a product manager when I was sitting with a spreadsheet and just trying to analyze the data and pull things out.

And if I had all the time in the world to do that, thinking about how I would set it up so that I can get everything summarized and distilled in various ways, and then slice and dice it based off of, say, now I'm in a conversation with an exec and they're asking different questions and I may want to look at it from that perspective. Right.

But that's sort of how Kraftful was built, really thinking about: if you had a product manager sitting in the box doing all of this, but it's actually a large language model, what are the subtasks we would split it into? And then we have those reasoning points.

Chris Shinkle:

I'm guessing you're using your own product, that you're customers of Kraftful yourselves. Well, that's probably a safe assumption. Maybe.

What are some insights that you drew that led to a new feature that you've integrated?

Is there a story there you could share about something you learned surprising and then turned into maybe a new feature and sort of the impact on the business there?

Yana Welinder:

We've definitely been saved by the fact that this is the tool we're building, because you can imagine, having grown so rapidly and with such a small team, it's been a blessing to be able to use our product this way. And there's a few things there. So one piece about how we built Kraftful is that we started with a really, really small product.

So I tried to kind of figure out what's the kind of the bare bone product we could launch with.

…about it today, we launched early…

And we didn't have this whole workflow back then, but we did have the workflow that took you from Insights from user feedback to feature requests and complaints.

And for some of these insights, we didn't make them into PRDs, we didn't make them into user stories; we didn't have any of that second piece of the equation. And we also didn't have all of these different integrations. There are all these different things that we didn't have.

But we tried to kind of figure out what's the smallest thing that we can launch? Because now's the time to launch it. There's big things happening. We should make sure to get it out right now.

We shouldn't sit on it for a year, and then we can kind of build in public and build on it. And one interesting thing: we had a chat.

There was this ability to ask follow up questions in addition to the insights that we pulled out to take that same body of feedback and ask follow up questions. And that chat was different from ChatGPT. It had a certain kind of product personality.

We had done some things with it to make sure that it didn't hallucinate quite the same way that ChatGPT may hallucinate about data. And we had done some things to make sure that you could pull in more data than you could at the time with ChatGPT.

And one thing we discovered from then talking with customers using our product was that a lot of folks used this chat not to ask follow-up questions, but to take the insights that we had pulled out for them and turn them into PRDs and user stories. So that was kind of the first insight that then led to us building features that would complete that cycle.

The other thing that we learned that was sort of interesting was about features that we did have in the product. At the time, we had features that automatically turned feedback into feature requests, and no one used chat for those.

You could foreseeably use chat in the same kind of way to ask questions about the data, but for the things that we had built into the UI, people did not want to use chat.

So the two big takeaways we had were: hey, people want to be able to write PRDs and user stories with their insights. Cool, we should build that. And also, we should build it because people really don't want to use chat.

They want to have something that does things for them automatically and has done all of that kind of prompt optimization on the backend and really thought deeply about how to do that in the best possible way.

Then they really just want to be able to push a button and say: take all the mentions of notifications in these 10 call transcripts and turn that into a PRD about notifications.

Zac Darnell:

Yeah, that would not have been an assumption I would have made early on.

Yana Welinder:

So it's really helpful to then be able to like observe customers and talk to them and understand that. Okay, that's really, really helpful.

Zac Darnell:

One of the things that I hear a lot, again whether it's from a customer, a friend in the industry, or somebody I meet at a conference, is this idea around privacy and regulation. That's still important and squishy. A few years ago GDPR hit the scene, and that impacted the way that some software products operate.

And the regulations around AI are probably coming. No one really knows what that's going to look like or shape up to look like. But privacy is really important.

Do you have a read or at least just yourself? The way that you think about Kraftful, are there things that you're keeping an eye on?

Are there things that you expect to happen or just generally, how do you think about the future regulations around the use of AI and what should people be preparing for or thinking like, what do you, you know, read your crystal ball for?

Chris Shinkle:

Me?

Yana Welinder:

Yeah, yeah.

Zac Darnell:

You know, what does that look like from your perspective?

Yana Welinder:

But I've actually thought about this a lot, not even in the context of Kraftful. I've been in academia and actually wrote a lot on the policy implications of AI, but over a decade ago.

I probably wrote the earliest piece on the policy implications of face recognition technology, published it in the Harvard Journal of Law and Technology, and it was cited in all the cases that came after that. So this is...

Zac Darnell:

You have an opinion?

Yana Welinder:

I have an opinion.

And you know, what I think will happen, and I hope doesn't happen, is that we will get overbroad regulation that will not focus on the application of AI in certain harmful ways, but will be very broad and potentially overreaching and have unintended consequences. That is what tends to happen, and what I guess, you know, I would make a case for.

And what I did back when I was thinking about the regulation of AI, but in a very different context, is this idea that you should really look at, what are the potential harmful effects of the technology and how do we make sure that we avoid those specific effects without overregulating and causing potential negative impacts in ways that you didn't really intend to.

Zac Darnell:

You don't want to stifle innovation. Right. You want to allow it to keep growing and take certain precautions.

Yana Welinder:

Exactly. And then I think it is up to product builders to really deeply think about how to build trust with their users and customers. Right.

In our context, it's that we are analyzing very sensitive user data. So we need to make sure that we design our process with that in mind.

We have specific agreements in place with the companies that provide the large language models we use, which automatically delete all the data instantly upon analysis. We have built our product with SOC 2 and various security measures from the ground up and really thought deeply about that.

But I think it goes beyond just kind of checking regulatory boxes. It's about how do we build trust with users.

At INDUSTRY, where we originally met and talked about this, I talked about how we build our products with these two Kraftful design principles, which are to keep users in the know and keep users in control.

And I think that actually goes a long way toward building user trust: to show our customers and users how the AI is making decisions on their behalf, in an intuitive way, as part of the product, so that they can easily see that. And I mentioned the fact that you can click through and see how data was analyzed and what it ultimately became; that's one way.

And then the fact that you can then edit anything and you're completely in control of the output.

This concept of the user controlling the data is kind of more around helping people build trust in the system, understanding how it works, and feeling like they control it.

It's not just doing crazy things for them and polluting their JIRA with tickets that then cause people to be mad at them for having connected the system. They completely control it, right? They can control it end to end.

Chris Shinkle:

What are some of the challenges growing so fast and obviously being super successful with what you're doing? What are some of the immediate challenges you're facing as an organization or business?

Yana Welinder:

So many. So, so many.

One is that we've really mindfully tried to use the latest technology as fast as we can, because that really has given us an edge in our analysis. And so we're always adopting every model the moment it comes out.

When I talk with my peers, other founders building with large language models, I find that we are usually the ones that have the latest models in production first, always.

And that means we are always then in conversations with these companies and giving them feedback on, hey, this thing doesn't quite work the way you thought it would, you know, and so we kind of have to be that guinea pig.

But I think that that's the only way to do it in this rapidly evolving space, because that means that we have the best analysis because we're leveraging everything fast.

The other thing, which is less about AI and more about just what we're doing, is that feedback coming from different sources is very different, and that actually is a very, very big problem to solve. Even our customers within the same categories may have different types of feedback.

But even for one customer, their survey data is so different from their app reviews, so different from their call transcripts, so different from their support tickets, and being able to analyze each of those in a meaningful way actually is a really big challenge to solve. You can think of it like if you have a customer call, you may have a sales rep or a product manager that's also talking during that conversation.

But of course you don't want to have what they said be interpreted as a feature request. But oftentimes what they said is kind of providing context on what the customer said.

And so you need to analyze everything, but still have an analysis of the intent of the different individuals in the conversation, to then be able to see, for everything that was said, whether a feature request is attributable to the actual customer or was just mentioned in the conversation. Right. So those kinds of things.

It's a lot of different puzzle pieces that we've had to figure out to make sure that our product lives up to the promise.
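A toy version of that attribution step might look like the following, where a keyword check stands in for the LLM's intent analysis, and the speaker names and labels are made up for the example:

```python
def attribute_requests(transcript, customer_speakers):
    """Walk a call transcript and attribute feature requests only to customer
    speakers; rep/PM lines are kept as context rather than being turned into
    requests of their own."""
    requests, context = [], []
    for speaker, line in transcript:
        wants_something = "need" in line.lower() or "wish" in line.lower()
        if speaker in customer_speakers and wants_something:
            requests.append({"speaker": speaker, "request": line, "context": list(context)})
        else:
            context.append((speaker, line))  # still analyzed, just not attributed
    return requests

call = [
    ("Sam (sales rep)", "Other customers ask about an API. Is that relevant for you?"),
    ("Dana (customer)", "Yes, we need an API to pull reports automatically."),
    ("Sam (sales rep)", "Got it, I'll note that down."),
]
reqs = attribute_requests(call, customer_speakers={"Dana (customer)"})
print(len(reqs))  # 1: only Dana's line becomes a request, with Sam's question as context
```

The key design point from the conversation survives even in this toy form: the rep's question is not discarded, it travels along as context for the customer's request, so it never gets misread as a feature request itself.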

Chris Shinkle:

If I go back to the first challenge you expressed there, in terms of integrating new models, it sounds like there's an underlying hypothesis that a newer model is always going to be better or provide better analysis. And I'm sure a new model is better in some ways. But I've also seen articles, you know, that ChatGPT got worse at math over time.

How do you evaluate or know if a new model is providing better analysis for customer data? What's that process before it goes into production?

Yana Welinder:

They're not necessarily always better at everything.

We have built our solution to be very modular, where we can swap out any large language model at any point in time for any piece of our analysis. In practice, we really just leverage Azure and OpenAI, because those have been the best at doing the things that we needed to do.

But we could foreseeably at any point in time switch it out for Gemini or Claude or any one of those.

And as soon as a new model hits the market, or even before it does, if there's any way to get early access (a lot of what we've shipped has been with early-access, not even production-ready models), we run evals to understand: could this model do this step in our process better than the model we use today?

And there's so many different steps, and we use a combination of models for different steps in that process. A model can become better at some things while it's not better at something else.

And so that's kind of why we end up having a combination of things.
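One way to picture that modular, eval-gated swapping is a per-step model registry like this sketch. The step name, the toy "models", and the exact-match scorer are all placeholders for illustration, not Kraftful's actual setup:

```python
def run_eval(model, eval_cases, score_fn):
    """Average score of a model over labeled cases for one pipeline step."""
    return sum(score_fn(model(c["input"]), c["expected"]) for c in eval_cases) / len(eval_cases)

def maybe_swap(registry, step, candidate, eval_cases, score_fn):
    """Swap the candidate model in for one step only if it wins that step's eval."""
    if run_eval(candidate, eval_cases, score_fn) > run_eval(registry[step], eval_cases, score_fn):
        registry[step] = candidate
    return registry

# Toy "models": callables that distill a raw comment into a request string.
old_model = lambda text: text.split(",")[0]           # forgets to strip whitespace
new_model = lambda text: text.split(",")[0].strip()   # the newer model handles it

cases = [{"input": " dark mode,please", "expected": "dark mode"}]
exact = lambda out, want: 1.0 if out == want else 0.0

registry = {"extract_requests": old_model}
registry = maybe_swap(registry, "extract_requests", new_model, cases, exact)
print(registry["extract_requests"] is new_model)  # True: the candidate won this step's eval
```

Because each step has its own eval set and its own slot in the registry, a new model can win one step and lose another, which matches the point that newer models aren't necessarily better at everything.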

Chris Shinkle:

Sounds like you have probably objective measures for maybe the different types of information coming in and you can sort of do some evals or comparison. I imagine that was challenging to put all that together to come up with those objective measures to do that.

And I also imagine that provides a little bit of a moat around what you're doing versus people trying to come into the space.

Yana Welinder:

It's very, very challenging when you're dealing with qualitative feedback.

We try to get an objective measure: given this body of feedback, what would a reasonable product manager say the feature requests in this text were?

And then there's probably going to be some subjective assessment on top of that, where people are like, oh, I want my feature requests to be more general, or I want them to be more specific.

And so over time, we actually built out our products to do all those things so that you can click through and start with the general and get some more specific features. That's been kind of an interesting puzzle as well, where we've gotten feedback.

Oh, it's too general, it's too specific, it's too in the weeds, it's too something, you know. So figuring out all of that has been a challenge too, in its own right.

Chris Shinkle:

I'm guessing, as a founder and CEO, it is a challenging spot to be in, to have such high and fast growth on the user side of things, and just the scaling of what it takes to scale a startup or business, and all of those challenges. More revenue, more customers: always a good thing.

Like there are challenges that come on that side and then there are challenges on this side where you're using these large language models that are evolving and changing so rapidly. It's like you're in the middle of these high growth, high changing things.

Like, you'd just imagine there's some tension there that feels, I'm imagining, very taut, like a rubber band that's stretched to the limit and could snap at any time.

Yana Welinder:

You know, it's actually snapped so many times.

At some point in the spring of…

And at the time we were analyzing more data on behalf of our customers than could be analyzed in a day, even having received the absolute maximum rate-limit extensions that you could possibly get. And so there was a moment in time where everything was just breaking because we had too much volume and had to figure it out.

That's when we started building our kind of modular approach, because we ended up splitting our data between OpenAI and Azure, using the same models but across two APIs. So that was sort of the first time we started building out that infrastructure. But yeah, it definitely was an interesting time.

We ended up showing this funny notification in our product, a pop-up with a Taylor Swift GIF where she has all of these kittens running everywhere. We were trying to figure out a good way to tell our users and customers that we love the fact that there are so many of them wanting to use our product all at once, but that it actually is breaking it at this moment, which is why they're seeing things take longer than necessary. So we tried to keep it funny, hoping these folks would come back again when things were a little calmer.

Zac Darnell:

I think it's admirable to think about how you've been able to navigate two sides of a ditch, both sides being high growth and high volatility. Chris, that's a great point that you're making.

I thought we could maybe end on the topic of product management because really that's like functionally who you serve. I know it's more than that, it's teams.

But if we talk about the product manager role in general: we touched a little bit on the human versus the machine, balancing the art and the science, but how do you see the role of product managers evolving with tools like Kraftful and others? Is it maybe more about amplifying existing skills today, and completely changing the role in the future? What do you think?

Actually Chris, I'm kind of curious about yours and Yana's perspective on this.

Yana Welinder:

The positive side of it is that AI frees up product to be more creative, to do more high-impact work.

But with that, I do believe the kind of traditional product management role is actually going away. I think it will be replaced by another critical role: product managers becoming product builders.

They will spend less time on project management, which will be entirely handled by AI. It already is in some products like ours. And then they will be much more strategic.

They will be focused on strategic building and they will also be building themselves.

That doesn't mean they will need to learn how to code, or, because a lot of products will probably be AI products, that they will need to learn how to train models.

I think it's more that they will need to know how to build product assisted by AI solutions. I expect product managers to be much more in the weeds, actually building and designing product, creating the code that is the product, more so than they do today. That's my take.

Zac Darnell:

Yeah, I love it. Hot takes. Chris, do you have thoughts around this? I know that you spent a lot of time thinking about it.

Chris Shinkle:

I agree with that. The role is definitely changing.

If I think about product managers today, most of them, probably all of them, haven't grown up with this technology, and they've probably built successful products without requiring the use of AI.

In some cases, you may say AI isn't providing something that's necessary, just an alternative to tools and skills they already have for managing to some degree. And so I think it's the mental shift of: when do I reach for this, when do I intentionally start to pull this into my workflow, and how do I stop myself from just going and doing the thing I already know how to do?

Take project management tasks. I know how to do project management, I've done all of this. It's stopping and starting to think, oh, wait a minute, this is a time to reach for these new tools and new technologies. And I think the ones who are able to do that are the ones who are going to build super successful, interesting products like Kraftful.

And for the ones who struggle, I don't think you're going to see that kind of success. Their ability to grow and change is just going to be limited and impacted by it.

And if you're not careful, I think you can get caught not even recognizing that the pace of the world around you is speeding up. It's like the old, you know, Geoffrey Moore kind of escape velocity or what got you here is not going to get you the next thing.

And what you did to get to where you are today as a successful product manager, those skills may not serve you well moving forward. And so I agree.

I just, I don't know what that looks like today, but I certainly think it's changing.

And even in myself, I have to acknowledge that there's a challenge to make some of these mental shifts. You see somebody else do something and you're like, oh, wow, I never even thought about trying it there, right? But once you see it, it sort of opens that mindset. And so I think you have to position yourself to be open. I need conversations like this.

And elevating these conversations and seeing what big, successful companies like Kraftful are doing, I think, is going to help move things along.

I think it's going to provide interesting perspective and learning opportunities, and so meeting people like Yana, who's willing to come and share with our audience, with other.

Zac Darnell:

People, give some hot takes.

Chris Shinkle:

What's happening, I think is just going to further expand and change the role of product management.

Zac Darnell:

I love that.

So, Chris, you made a great point that a lot of folks that are probably in product management roles or product roles in general, they didn't grow up with this stuff.

Yana, how are you advising younger entrepreneurs, younger product managers, product leaders who are maybe not quite in the role yet, or have been in the role for a little while, or haven't really made the leap into understanding this space well? How do they equip themselves? Where do they go to learn? What are you advising those folks?

Yana Welinder:

So many things are changing, and then some things are still the same. And so I would say that there are some core fundamentals of product management that even this future world of product builders will need.

Things like empathy, curiosity, user centricity, those things are still as important. Then I think it is actually flexibility.

And I think that's sort of what Chris mentioned: building a skill around being flexible in adapting to new technology. I think of this not only when I advise product managers; I think of this as a parent.

I'm raising a child that's going to be an adult in this very different world. What are the skills I try to encourage?

And it's a lot around being focused on creating things, being flexible, and trying to adapt to new technology as fast as possible, because things are just going to keep moving faster and faster. Right. That's what I would tell product managers coming into the role.

Zac Darnell:

What I hear you saying is almost like looking at it through the lens of a parent. You know, unstructured play: go explore, go try some stuff, figure it out, see what works and what doesn't.

But really it's that curiosity you talked about, which will never go away for successful product managers. So how do you get curious while building that skill of flexibility, and go try all of these things to understand the nuances?

I love that.

Chris Shinkle:

I do think it's going to require some play and experimentation, in new and different ways. One of the things I think AI promises is the ability to explore multiple options for longer into the process.

Yana even said earlier they were trying to decide what's the minimum they could release to first get the product out there.

I mean, if you go back to the old lean concepts of set-based design, companies were actually pursuing multiple paths of development for much longer and then discontinuing them, and that was with physical products, right? Not even digital. And so I keep thinking about the ability to do that.

You know, Chip and Dan Heath, in their book Decisive, talked about multitracking, considering multiple options and avoiding "whether or not" decisions. I just think AI has got to enable some of that. But man, you have to be mindful of it and you have to come to grips with it, right?

As Marty Cagan said, all of our ideas are not necessarily the best ideas.

And if you don't believe that, you're going to be less likely to explore some of these other paths, and I think you're probably then missing out on some of the benefits or promises that AI offers. You might think: why would I? I already have good customer feedback, good sources of it, we're really in tune with our customers, so why would I pull in Kraftful? But you don't even know where people might be talking about your product, or where the next insight is going to come from.

And so I think letting go of some of those things, that's kind of the hard part.

But when your competitors are reaching for Kraftful, getting new insights, and delivering killer feature after killer feature while you're just trying to keep up with them, as Deming would say, survival is not mandatory. I think companies like Kraftful, and people who are being really mindful and changing the industry, are going to push people to change.

Zac Darnell:

Yana and Chris, I appreciate you guys so much for sharing your experiences and your story. It was great to meet you at Industry, so thank you so much for your time and for spending a little of it with us.

Yana Welinder:

Thank you so much for having me. Yeah, I really, really enjoyed it.
