Minimum Viable Testing for Startup Founders with Gagan Biyani
Episode 16 • 10th December 2021 • Forcing Function Hour • Chris Sparks


Shownotes

​Gagan Biyani is the co-founder of Maven, which empowers experts to offer cohort-based courses directly to their audience. Gagan is a serial entrepreneur; he previously co-founded Udemy and Sprig, a food delivery company. Gagan is an investor and advisor to over thirty companies.

Gagan joins Chris to share how founders can accelerate their path to product-market fit. Gagan's extensive startup experience taught him that creating a "minimum viable product," or MVP—widely considered startup dogma—leads teams astray by over-building and over-developing too early in the process. If you are building your company around outdated paradigms like Lean Startup, you're going to have a bad time.

Instead, through Minimum Viable Testing, teams can save time, avoid hiring developers too early, and arrive at their eventual first product with far greater accuracy.

For the video, transcript, and show notes, visit forcingfunctionhour.com/gagan-biyani.

Transcripts

Chris:

Welcome to Forcing Function Hour, a conversation series exploring the boundaries of peak performance. Join me, Chris Sparks, as I interview elite performers to reveal principles, systems, and strategies for achieving a competitive edge in business. If you are an executive or investor ready to take yourself to the next level, download my workbook at experimentwithoutlimits.com. For all episodes and show notes, go to forcingfunctionhour.com.

So, it is my honor and privilege to introduce our guest for today, Gagan Biyani. Gagan is absolutely one of the most thoughtful and incisive founders, but especially humans that I know. You guys are in for a total treat today. Gagan is a serial entrepreneur. Gagan previously co-founded Udemy and Sprig, and he was also one of the earliest employees at Lyft. Today Gagan is an investor and advisor to over thirty companies, so as far as startups go he's pretty much seen it all at this point. Most notably today, Gagan is the co-founder of Maven. Maven empowers experts to offer cohort-based courses directly to their audience, and I'm excited to pre-announce that Forcing Function is going to be hosting one of our courses on Gagan's platform (Maven) next year, a partnership we're very excited about. So, really cool to have him here.

Now, today for you all, we're going to be discussing Gagan's framework for validating startup ideas. He calls this framework "Minimum Viable Testing." Now as we said, Gagan's extensive startup experience has taught him a few things, most notably that creating a minimum viable product, or MVP, widely considered startup dogma in many circles, actually leads teams astray, and it causes them to overbuild or overdevelop far too early in the process. As I like to say, if you're building your company around outdated paradigms like Lean Startup, you're going to have a bad time. Now, this is gonna be a conversation around acquiring product-market fit, so if you have that in mind pay attention, because you will learn how this path to product-market fit can be greatly accelerated. Thanks for joining us, Gagan. Always a pleasure to see you. Very excited to dig in today.

Gagan:

Great to see you too, Chris. Thanks for having me.

Chris:

Let's set the stage, why don't we. So, MVPs, minimum viable products. I mean, everyone has heard this term before. It's just basically assumed that, "Hey, you want to create a company, you start by making an MVP." How did this become the dominant paradigm, and most importantly, what is amiss?

Gagan:

One of the great things about startups and technology, in general, is that we're always learning and improving and coming up with new ways to figure out how to build success. There's so much risk in startups. Ninety-plus percent fail. Depending on how much funding you have, you can argue it's ninety-nine percent of startups fail. So the minimum viable product framework was created because people realized that originally, when people were creating startups (and this sounds insane today), they would say, "I have a brilliant idea of the future, I'm gonna spend the next two to five years building it, and then I'm gonna sell the product two to five years later after I've spent five, ten, fifty million dollars, and then I will figure out whether or not I'm successful."

And the problem that ended up occurring was that so many of these companies would raise a bunch of money, build a product, and then crickets. Customers didn't really want it. And it's so hard to build a product that people really want that a framework was developed called the minimum viable product, and the MVP framework is really just about before you build the full version of your product, build a Mini-Me. Build a small version of your product and test it out with users and see what they think, and instead of spending two or five years and tens of millions of dollars, you could do an MVP in three months or maybe even two weeks and see what people think about the first version of your product before you build out the whole thing.

And in my mind, this was totally groundbreaking. I grew up in a post-MVP world, so in my first company, we already knew what an MVP was, and were trying to follow that format. But over the next decade or so I've realized that there's another thing that MVPs miss, which is that MVPs are not about discovery, they're about evaluation. And so when you put an MVP out there, you're essentially like, "Hey, I have an idea that's fully formed. I'm gonna try it out and see how people respond." And I think that actually there's a step before MVPs, or a step in lieu of MVPs, which I'm calling MVTs. "Minimum Viable Testing."

And it's the idea that if you build an MVP, you are likely going to say, "Hey—" Let's say I'm building Lyft, as an example. Well, an MVP of Lyft would be a situation in which—I have to build a lot to build an MVP of Lyft. I have to be able to order a ride, I have to be able to get matched with a driver, the driver needs to be able to know where to come. There's a GPS associated with that, and I have to know where both the driver and the rider are. And I probably need some sort of central dispatch system and an interface by which you can do this.

Pretty soon, you're building an MVP that's like three months, or a month or two in development, and it's a lot of moving parts. And what I realized was I don't think that's the best way to start a company. In fact, I think what you want to do instead is pressure test specific assumptions through specific targeted tests to better understand and discover the shape of the problem before you go and build your first MVP. And so we at my last company Sprig, and now Maven, use this MVT framework where when we're starting out and thinking about companies, we don't test the entire system, we test little parts of the system, and then learn from each of these tests, and that helps us inform what the system's gonna be. Then we build a minimum viable product, a very, very, very basic version of this system, and then we build on top of that minimum viable product. This allows us to build a minimum viable product that is more in line with what we actually think the long-term vision is, because we do these MVTs first and really understand whether or not the market wants what we need.

Chris:

Cool. So let me know if I'm getting this right. We all know that most startups fail, and by extension most ideas are going to fail, so it makes sense that we should want to test as many ideas as we can before committing to building one of those ideas, because the building is very resource-intensive, especially if someone is non-technical, and that there are cheaper ways to test our ideas by, let's say, pulling these ideas apart. You identify them as parts of a system, that there are core assumptions to each idea, each of which must be true in order to make that idea worth building. Is this a good paraphrase?

Gagan:

Yeah. And specifically, it's that there are assumptions that you're making in your MVP logic. So you create an idea for a business, you have a list of five or ten assumptions that you're making about the future, about what customers want, about what's viable, what you can build, how much it would cost, you're making assumptions about these things. And if you're right about these assumptions, then you end up with a big business. So in the case of Lyft, some basic assumptions were there would be drivers who would want to pick up passengers for a living outside of the taxi industry. Passengers would want to just like click a button and get a ride to them. That you could actually deliver rides to them in a cost-effective manner. That there are a significant number of people who want to use a product like this.

So these are all different versions of assumptions that you make, and if you're right about these assumptions you end up with a multi-billion-dollar business, as Lyft ended up being. And if you're wrong about these assumptions, then you end up in the toilet. And so what I am advocating is you take each one of these assumptions and you try to create a test against one of these assumptions, and slowly build in more and more assumptions into each test until you get to the point where you're like, "All right, I've done a bunch of testing. Let me throw all these tests out, and now let me build from scratch the first version of this product."

Chris:

That's so good, yeah. I want to underline this concept of compounding probabilities, where a lot of times we take each of these assumptions one at a time, but if your goal is to build a large scalable company, you might have these five to ten assumptions, each of which needs to be a 'yes'. And so that's five to ten possible failure modes that you need to address, or at least de-risk or increase your conviction on, before it even makes sense to get started. And it's important that a lot of these assumptions are implicit, that without a good framework like MVT, you might not even realize that they are assumptions that you have. They need to be surfaced. Let's try to generalize this a little bit. Let's say that I'm looking to start a company. How do I reveal these implicit assumptions that I'm using to choose, "Hey, this is the idea I want to pursue"?
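
To make the compounding-probabilities point concrete, here is a minimal sketch with made-up numbers (not from the episode): even if each individual assumption looks likely to hold, the joint probability that all of them hold drops quickly.

```python
# Illustrative sketch: if a startup idea rests on several independent assumptions,
# the chance that all of them hold is the product of the individual probabilities.
assumption_confidence = [0.80, 0.80, 0.70, 0.90, 0.75]  # hypothetical per-assumption odds

joint = 1.0
for p in assumption_confidence:
    joint *= p

print(f"Chance every assumption holds: {joint:.0%}")  # roughly 30%, even with optimistic inputs
```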

Gagan:

And this is one of the first tests of an entrepreneur, is are you so dogmatic about your idea that you see that there's no chance it won't work, or do you have enough awareness to know what the risks are? And so honestly the first version of writing down your assumptions is just doing it yourself. Sitting in front of a whiteboard or in front of a piece of paper or Evernote or whatever you do, and saying, "What—If this were to succeed, what do I need to believe? If this were to fail, what could go wrong?" And just write those things down. I usually write five to ten different answers. Sometimes I get to fifteen. And I'm someone who's exceedingly paranoid about an idea failing, and so I'm always doing this logic of what are the chances this might fail and what would cause it to fail, et cetera. If it is difficult for you, then the next step is to bring in team members. Potential co-founders, partners, investors, friends, anyone who's willing to be honest with you about your idea. And you want to come in with as low of an ego as possible, because your whole framing of this is, "Hey, I don't know what I don't know, please help me figure out what I don't know, and help me decide what the risks are and what I'm doing."

And so you ask your friends the same questions as you ask yourself. "If this were to succeed, what needs to be right? What do I need to be right about? And if this were to fail, what would I need to be wrong about?" After you list those assumptions, you then rank order them with the group's help by the importance and impact of these assumptions. So the ones that are gonna be most likely to either make or break your business should go at the top. And then the ones that are less likely to make or break your business should go at the bottom. And that's a riskiness measure that then allows you to focus your time. You want to focus your time on the ones at the top of the list.
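
A minimal sketch of the listing and rank-ordering step Gagan describes, assuming a simple impact-times-uncertainty score; the scoring scheme and the example assumptions are hypothetical, not from the episode.

```python
# Hypothetical sketch: score each assumption by how make-or-break it is (impact)
# and how unsure you are about it (uncertainty), then tackle the riskiest first.
assumptions = [
    {"claim": "Enough people actually want this", "impact": 5, "uncertainty": 4},
    {"claim": "We can deliver it at a viable cost", "impact": 4, "uncertainty": 4},
    {"claim": "We can reach the first 1,000 customers", "impact": 4, "uncertainty": 3},
    {"claim": "A small team can build the core tech", "impact": 3, "uncertainty": 2},
]

ranked = sorted(assumptions, key=lambda a: a["impact"] * a["uncertainty"], reverse=True)
for a in ranked:
    print(f'{a["impact"] * a["uncertainty"]:>2}  {a["claim"]}')
```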

Chris:

Funny you bring up paranoia. It immediately brought to mind the Andy Grove quote of, "Only the paranoid survive," or, "Once you've eliminated all the ways to fail there's no choice but to succeed." All of these overnight successes—You know, there's clearly a lot of luck involved, but there was also a lot of pruning of the undesirable paths along the way, removing the obvious known failure modes. This is I think a really interesting point, is obviously we wanna start with ourselves, our own knowledge. A lot of times we're starting with some domain expertise, we're starting a company 'cause we felt the pain point ourselves or we have a strong connection with the target customer, but we don't know what we don't know, where we're stuck inside boxes of our own creation, and sometimes we need other people to point out the obvious to us. Maybe we're a little bit too close to the problem, we're a little bit hubristic in our ability to solve all the world's problems. And how do we choose the people to ask for this feedback, and how do we have a conversation that allows them to be honest with us? Right?

It's very easy to imagine scenarios where we ask for feedback and they say, "Yes, awesome, go ahead and do it," and nothing comes out of it. Or the opposite, they're like, "Oh, that'll never work, why are you wasting your time?" What have you found works for having an honest conversation for surfacing these assumptions?

Gagan:

The most important thing is to be open to it and to bring the energy and intention of being open to it into the room. So if you're someone who's really hard to give feedback to, people know that intuitively. They can tell that you're bracing for impact, or that you are ready to jab back, or whatever it is, if it makes you afraid or sad or angry to hear something that you aren't doing well, people will just inherently tell you less stuff. And so that's an important thing, to change your intention internally. From there you wanna find people who are willing to break through that intention, either because they have a personal relationship with you that is trusted and they feel safe, or because they're just personality-wise the type of people who are pretty honest. And those are both good enough characteristics. You don't need both, you just need one or the other. You wanna have the right people in the room.

And then in the conversation, I find that if you just ask the questions with the right intention, you'll usually get good answers. But that's in part because it is a natural trait for me to want to listen and be open to feedback, and for people who are not as open, I would surmise that they need to do a little bit more work. And so what's the work on top of the two things I mentioned? The third recommendation I make is to be explicit and be upfront and say, "Hey, I'm really excited about this idea. I know, however, that if that excitement overtakes me, I could spend the next two or four years of my life working on this thing and then look back and say, 'Ah, I wish I would've known X.' And our goal today is to make sure we surface X before it happens, so at the very least, even if I'm stubborn enough to just move forward with this anyways, we've all done our part to say, 'We know what the challenges might be.' And I would really appreciate you taking the time and the courage to provide me with feedback on what those challenges might be." And then you go into the session.

That's a sample speech. I mean, you could give any version of that, but what you want to do is be explicit about the problem, which is that you might not get people feedback—People might not be honest with you. You want to ask, give them permission to be honest, and then to give them a reason as to why that honesty is so important. I just did that in that example, and there are other ways you could do that. I think that that's a pretty productive way to start this conversation.

Chris:

That's so gold, and it expands and generalizes to all conversations: stating intentions, being clear about those intentions, being explicit about any necessary norms, clearing the air—It brings to mind, I was recently doing reference calls, and adding the simple line of, "Anything that you say today is between us only." What do you know—I mean, that might be implicit, but by saying it explicitly it gives them some permission to open up more. So, yeah, it's really hard to over-communicate in these cases, especially when it comes from a clear and intentional place. And trying more generally not to be the type of person who braces for impact, someone who flinches away from feedback, but to see bugs as opportunities, that this is already good and this is an opportunity for this to become even better, that we are here together to explore solutions together. This is a collaborative effort.

And it also brought to mind when you were talking about—So we've had this conversation before around the types of people who we want to cultivate around us, and sharing this interest and having people who are open to feedback and very willing to give feedback being very important to us. My wife Marianna calls this "the spinach test," is how long can you walk around with spinach in your teeth or your fly unzipped before someone points it out to you, and if you're in the wrong room you could be walking around quite a long time, so it's a good way to suss out who's going to tell you the thing that you need to hear but that you might not like them to tell you. That that's really like a decent evaluation of a friend, is, "Does this friend care about you enough that they're willing to anger you and risk your opinion of them in order to help you improve?"

So yeah, a lot of this happens in the conversation itself, in the framing, but it's also a bit of a lifestyle in the types of people and conversations that you have.

Gagan:

Completely agreed.

Chris:

Let's bring this to the next level. We've started to get out of our own heads and to reveal the critical assumptions behind our idea, behind our business. You've seen so many businesses, I know you've been an advisor to a bunch, I know this process is really instrumental to the founding of Maven. What are those key critical most risky assumptions that you see come up most often?

Gagan:

First one is the classic Paul Graham quote, which is, "Build something people want." Most of the time people are building something that not enough people want, and that's a core assumption that people make, and funny enough it's almost never at the top of people's lists. When someone's pitching a new idea, and you ask them, "What do you think the riskiest assumption is," it's amazing how often people will skip over things like, "Hey, people might not actually want my idea." I think it's the most personal, but it's also the number one killer of startups. It is literally the motto for Y Combinator, which is the greatest startup investor of all time. And I would say that that's the number one thing. Another example is the viability of an idea, particularly from a financial and business model perspective. There's usually some area of the business that's really hard from a viability perspective. It's very rare to have a business that is just obviously viable from day one. So an example of something that could be hard is that the unit economics are tight. So you're going to have to sell something for fifteen dollars, and it's gonna cost you thirteen. Well, that's not a lot of margin. How do you make sure that there's enough margin there and you're right about your assumptions on costs and profits?

Another example that's completely different from that is you've got an idea that is technically very difficult. There's a lot of technology involved. And you have this idea that you could for example deliver someone a car in less than five minutes, but the technology associated with actually routing a car to you and having enough density is another type of viability that it's not obvious that it's possible, it's not obvious that you could build an algorithm that would correctly deliver you a car or more accurately—Or, a more powerful question is whether or not you could deliver a search result that is actually useful to people. Right? Google, as an example. And then buried within that is also whether or not you can get to critical mass enough to have enough cars on the road to make it work.

So these are all various versions of viability that are worth doing some work on and getting some sense of. And some of them just require you to do some spreadsheet math. It's not always that the test is an actual real-world test. Which is the other problem with an MVP; it only talks about real-world tests. An MVT, in that sense, can really be anything. It can be you writing things down in a spreadsheet, it can be you asking people questions or running a survey or whatever.
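
As one example of this "spreadsheet math" kind of MVT, here is a minimal unit-economics check. The fifteen-dollar price and thirteen-dollar cost come from the example above; the fixed-cost figure is a made-up placeholder.

```python
# Sketch of a spreadsheet-style viability test: margin per unit and break-even volume.
price = 15.00
variable_cost = 13.00          # from the example above: a thin two-dollar margin
monthly_fixed_costs = 20_000   # hypothetical placeholder for salaries, rent, tooling

margin_per_unit = price - variable_cost
margin_pct = margin_per_unit / price
breakeven_units_per_month = monthly_fixed_costs / margin_per_unit

print(f"Margin per unit: ${margin_per_unit:.2f} ({margin_pct:.0%})")
print(f"Break-even volume: {breakeven_units_per_month:,.0f} units per month")
```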

Another area is market size. So there's often a challenge here: whether people want it is one thing, how many people want it is another thing. And how many people will want it after you've faced reality, which is that whatever you are envisioning is not gonna be as good as you thought. So you know, you ask someone, "Hey, would you like to have an internet radio station where it always knows what song you want next?" And literally everybody would say, "Sure, that would be great. Would you like free money? The answer is yes, of course." However, there's a viability issue that completely changes this value proposition and makes the market size very different. It's not how many people would want the perfect song to be next, it's how many people are going to be interested enough and trust enough to just sort of let another person decide algorithmically, and whether that algorithm is gonna be good enough to give you the next song, and how many music listeners (like eventually happened with Spotify versus Pandora) will say, "Hey, actually, I just kinda want to listen to what I wanna listen to."

And an algorithm is really not gonna do a great job here. So there are a lot of different angles on market size that make it complicated. Then there's go-to-market strategy. One of the biggest challenges people have is they have a great idea, but they have no way of figuring out who the first customers are going to be, who's gonna adopt it, and how to get them to adopt it at scale. And that is often a big question mark for people: how am I going to position this, who's going to want it? Specifically, who's gonna be the first people to get it, and how do I get the first thousand, ten thousand, hundred thousand of those people to use the product?

Chris:

It feels like a commonality here is trying to understand what the customer wants, which is especially difficult when the customer doesn't necessarily know what they want and what the customer wants might not even be technologically feasible. So let's start with our original question, how do you know if you are trying to make something that people want?

Gagan:

Great question. This is the most artistic part of startups, I think, and it's not easy because as you say it is frequently the case that people say they want something, and then they actually don't want it once they realize all the compromises involved in it. This is the classic political problem. You have people who vote in favor of propositions that don't have any money attached or funding attached to them, or they don't think about the funding source, and then there's a trade off in the political decision and you realize, "Wait a second, I was only sold the positive side of this thing, but nobody told me the negative." And this is really common with products, where someone's like, "Yeah." Like the Pandora example, "Like, of course I'd like the next song." But actually, hey, that means you have to trust an algorithm to decide what song is next for you. And like, are you going to do that? And how do I position that in such a way? Then you ask the question, "Okay, well are you going to do that?" And most people would be like, "No, I don't want an algorithm to do it."

And then, what? You think Tim Westergren just gave up at that point? No. So then you go to the opposite side, which is like, "Hey, wait a second. You don't know that you want a car. You think you need a faster horse, but actually you need a car. And people don't know this." So you have the maddening other side of the problem, which is that people will say they don't want something and then they will use it in droves. How many people would have said they wanted Twitter before Twitter existed, or how many people said they wanted Facebook before Facebook existed? Very few.

So, how do you figure this out? Couple ideas. One is to be an extremely good active listener. So there's a ton of books out there and articles and stuff about active listening. I'm not gonna go into super detail, but ultimately you need to be able to hear what people are saying, and what is the question behind the question, as my co-founder Wes always says. Or what is the answer behind the answer, which is my reframe of that. You ask someone, "Hey, how do you get from point A to point B, and what makes that decision?" And someone says, "Well, I take the bus." And you're like, "Okay, well, why do you take the bus?"

This is where people don't know why they do things, oftentimes. They do things for unclear reasons. We all do, for what it's worth. I do lots of things for unclear reasons. So this is not a pejorative, "Hey, I'm better than everyone else." It's actually everyone should just realize almost everything you do is completely random. Like, "Why do I still have this water bottle here in front of me instead of—" Like, this is a scratched-up, shitty water bottle that I just randomly drink out of. Why aren't I drinking out of a glass?

I didn't actually think about that decision very much. Every once in a while it passes through my head, I don't actually consciously know why I drink out of a water bottle instead of a glass at my desk. It doesn't make any sense. So we do lots of things that don't make sense. And what you have to do is understand what is the motivation and how can I get someone to do the thing that I want them to do? And so the question behind the question is a very important trick in understanding that. When someone gives an answer, there's often an underlying reason for why they do it, and you have to tap into that underlying reason and not tap into what they explicitly say is their reason. So I picked this package because it was blue. "Okay, let's make all packages blue from now on." Well, actually it's a lot more nuanced than that. It turns out that I like this shade of blue and if you change this shade it doesn't work, and if actually everyone starts to use blue then all of a sudden I need red, because it pops, and so for a while red packages will be super successful, and then you'll have to go back to yellow packages.

And so it's all context and complicated. So that's one thing. The second is context. You have to understand the context of the world you live in and the competitors and what are the other alternative options, and in what way is what you're doing better? And the rule many people say is it needs to be 10x better than the previous option. I think that 10x better is a very crude answer, but it's a good one. Is your solution 10x better? If you want to build a startup, it needs to be 10x better. If you want to build a business, you can just be twenty percent better and actually build a business.

A third way is to force people to make decisions and watch them. Just be an observer of their behavior. And you can observe in two different ways. You can observe by like literally watching them. One of my favorite things to do—You know, I love people watching. Go to an airport or you go to a grocery store or whatever and you literally just like watch people as they buy things and you look at the people who are making decisions and you start to notice, "Oh, what are the things they picked up? What do they put in their cart? What do they put back on the shelves?" And you pick up a lot of details there. And you can just be a general observer of humans and your customer. I like to think of this like, "The best way is to become friends with your customer, not just be a researcher of them."

So you know, as you know, Chris, I'm very good friends with a lot of course instructors, and have been. And it wasn't totally intentional. I just like course instructors. It just so happens that the people who are my customers are also people I tend to like. But it's been super useful, going back to the time I was living in Austin, to be friends with people like you and David Perell, because I could just learn about your challenges and learn how you think, and I feel like I know how David thinks about his business in some ways better than he does, sometimes. And of course he knows his own stuff better sometimes too, but so that's another area. Really understanding and immersing yourself with the people that you're targeting and observing their behavior.

And then also you can just ask them to tell you facts. So you can question people and ask them to tell you, "Like, oh, when you were making this decision, who did you consult?" That's a fact. They'll tell you. "I consulted these people." They might miss a few things, but they're going to tell you. "What were you thinking when you went through this?" "Who else did you tell about this idea, about this business product that you use?" You could ask a series of questions that get you information that you can then use to glean whether or not people want what you're building. So it's an art, and there are lots of different tools. Eventually you need to pick. You need to come up with a tapestry of knowledge and decipher that tapestry and come up with an idea that can fit.

Chris:

And there's two different interpretations of that, psychologically. The realization that startups are an art, not a science. The wrong way to take it is, "Oh, this is something that's unknowable, I can't just read a book and follow the steps. Okay, I guess I can't do it." While the opportunity is, "Because there's no right answer, you can make it up, and in fact everyone is making it up, and perhaps you might even get better at making it up than everyone else." That this is a learnable skill, that this ambiguity is always an opportunity. And at this meta-point around—We talk a lot about this at Forcing Function, that we like to think that we're rational, but a lot of the things that we do aren't strictly rational. Those habits grow by accretion. We do things generally because that's what we've always done. "Okay, well why do you take that route to work?" "I don't know, that's always the route that I took." "Well, why did you take that route in the first place?" Like, "I don't know, that's just the way that I went."

And you'll find this pattern in other people, but most importantly if you find it in others you find it within yourself. So it's always a good thing to realize and to, you know, without getting too paranoid with it start to gently, with a light touch, start to question everything, because you know very little.

I really loved this line you had, that people don't realize the compromises necessary to get what they want. Man, isn't that just one of the rubs, is we had these grand dreams but we haven't reconciled all the sacrifices we'll need to make to get there. Everyone wants to be the rock star without traveling around the country in the crappy van eating pizza and drinking beer, right? I have this pet theory right now that free will is pretty limited, but I'll try not to open up that can of worms. Essentially just saying that if all behavior is contextual, customer and otherwise, do the work necessary to understand the context under which customers are making their purchasing decision. They probably won't understand it, and you might have to observe them and see what else is going on. Even if it's not causation, you might get some correlation there. Just be a shadow, be a friend, listen. And just this general perspective of, "I am going to go about life assuming that this person is an expert, even if they don't know they're an expert." I just see what they do and assume that there's some interesting things to be learned by observation. I mean, beyond just a business context, I think that's just a more fun way to live.

And one more final one that I'll drop on you. This is a really fun exercise to do when someone is talking to you, especially in a business context: every time they say something, ask yourself, "Is that a fact or an opinion?" A lot of times we just absorb things as if they're facts, but if we take this lens we realize that most things we're absorbing as facts are actually opinions. So when we're doing this analysis, especially when it comes to customers, we don't want opinions, but if we ask things that are factual, that can be validated, we can actually build something on that foundation.

Gagan:

I'd say that you want both the opinions and the facts, because the opinions give you an understanding of the soft side of how decisions get made, and then the facts give you information that allow you to understand the context. And so they're both really important. I generally agree with everything you just said, though. There's a lot of nuance here, and it's not systematic. At least not yet in my mind. But if you can get better and better over time at seeing reality and seeing people's reasoning and understanding who they are, you'll get better at building great products, because you'll understand humans. And you only need to understand them in one specific context. You only need to understand them in the specific context that you're building a business in. Usually that's a very, very small, narrow area of their lives. So if you can get a lot of empathy there, you'll be successful.

Chris:

So for fun, I know you're deep into cohort-based courses now, that this is the context, both the client and the customer, that you spend your days trying to understand, this reality that you're trying to create. Tell me what your empathy has taught you. Who is taking these courses, and most importantly, who are the people who need to be leading them?

Gagan:

At Maven, we have a vision that there are a number of experts out there who aren't teaching online today and whose knowledge the world would benefit from. And so that's the first thesis of the business. And to some extent we've proven this by enabling a bunch of instructors (we have about a hundred at this point) to teach online courses and to see the number of students who come in and say, "Wow, I've never taken a course like this before, this is life-changing," et cetera.

More specifically, the customer that we have on the instructor side that we decided to go to market with is the instructor who has a big following, usually on email or Twitter, and is usually a knowledge influencer. So they know some subject really well, or they talk about subjects. Sometimes they have multiple subjects they talk about. And they're known online for something. And there's a number of things I've learned about this group of people. One is that Twitter is very much a club, and people who become successful in Instagram or Twitter or YouTube very much like to be a part of an elite club, and they view that as part of their identity. And so making people feel like they're part of an elite club is something that we didn't really focus on for a while at Maven, but something that I think we'll start to focus on more, because it's something that people care about.

Another thing is that creators are fundamentally on a much shorter cycle than almost any other buyer out there. You put out a tweet and within an hour or two you know whether you have a hit on your hands. So you're used to this really fast feedback cycle. You maybe tweet three to five times a day. You write an email, you know whether or not your email resonated within a few hours. It's like an incredibly fast feedback cycle. This sometimes affects creators' ability to see longer-term decisions and visions. And so you'll find that creators can sometimes be very transactional in nature, and view short-term over long-term, but creators are also incredibly smart. So all you have to do to get that to change is to point it out. And as soon as you say, "Hey, look. You've been doing this for like two to five years, and you've killed it. Look at your career over this time. And over the course of five years, have you realized that actually each individual tweet doesn't matter, what matters is the accumulation of all of them?"

So what we've benefited from is because of the empathy that I have and because of the credibility, because I'm also on Twitter and have—Well, my email newsletter is basically nonexistent, but I do have a Twitter audience, and because of that I have the empathy and also the credibility to share, "Hey, look. If you invest in building a course, you may find it to be really tough in the first three weeks, three months. But it's an asset that you can scale over time." And that actually the amount of time you put in goes down dramatically over time, and you're able to get returns from that that are going to go up over time.

And that's an example of something I've learned: we have to really clearly articulate the vision for what happens if you teach a course, because many people only see what's in front of them, and they're not online learning experts. That's my job. Their job is to be experts in their field. Design, decision-making, raising capital, whatever it is. And so I think I've learned a lot about how to communicate with these folks, but also honestly I'm still learning a lot, and I'm still at the point where I'm trying to get better at understanding what is going to motivate creators, and then more importantly the compromise here, which is a thing you and I talked about a lot today, Chris, which is what can you actually deliver to creators that they want? What is the intersection between what we can deliver and what people want? That's where the rubber meets the road.

Chris:

I have one more question for you before I'm going to hand it off to the Q&A. This might feel like a slight diversion, but I think it's very related. You talk about vision a couple times, where that's how these ideas start, that's what they mold into, is a vision of what the future is going to look like and potentially helping to nudge towards that desired vision, or having a vision of where things are going to be and trying to mold oneself, one's place in the world, to fit that vision. What is your vision, specifically for online education, let's say the next ten years? Where do you think things are heading, and how has that affected how you position yourself and Maven?

Gagan:

Our vision is that online learning over the next ten, twenty years will slowly become more and more of a viable alternative to in-person education, and that the way that that will happen is by providing new supply of educational content that is not easily accessible in person. That when you provide this new supply, there will be an explosion in innovation, that by getting more people to just go and teach online courses and teach them in a cohort-based fashion that there will be thousands, tens of thousands, hundreds of thousands of courses out there, each of which will make some minor innovations and changes, and will sort of function like an evolutionary curve of continually providing better and better learning outcomes for students who are taking it.

I think that in ten or twenty years, you'll find that there will be a world in which it will be increasingly possible for people to say, "Hey, instead of getting an MBA, I'm gonna take five or ten online courses over the next two years in this critical moment in my life, and I'm gonna learn more from those courses than I would from the MBA." You can imagine a world in which eventually that trickles down to college, and eventually that trickles down to below college, that trickles down to K through 12, where the alternatives, the free market, the laissez-faire internet will provide enough alternatives to provide pressure to the existing education system. And that's really our vision, is to empower new entrepreneurs to start online education businesses, and by empowering them to therefore provide a real active alternative to the existing system.

Chris:

Beautiful vision. I'm very honored to be a part of it and help give a platform. Got some really good questions in the Q&A, I want to make sure I get to these. And always apologies if I mispronounce names.

First question comes from Sourav Chatterjee. Gagan, you're probably a better pronouncer than me.

Gagan:

Sourav.

Chris:

Sourav. Sourav asked, "When creating a new category, or disrupting an entrenched way of doing business, how do you differentiate genuine critique from defensive negativity when you're seeking feedback?"

Gagan:

I would generally err towards all feedback is good feedback, so it doesn't really matter if someone's being defensively negative or not. Just listen to what they have to say anyways, and at that point you have to decide whether or not you agree with it or whether it's a real risk, but more often than not it is a real risk. So, as an example, if we go to online learning: for about a decade, roughly 2007 to 2017 or so (I think it changed in the last couple of years), if you told someone in learning itself that you were starting an online learning company, which I've told many professors and teachers, they would almost all say, "Well, it can never be as good as in-person learning."

And what's interesting is you could take that as defensive negativity from the incumbents. They're just being negative about your future vision, and it's all downside from there. Or you could take it as an opportunity and say, "Well, I don't agree with the ultimate premise that it's impossible for online learning to be better, but it's true that today we are way far away from that, and more importantly, customers don't see online learning as being as good as in-person." And so we'll have to do a bunch of things as a company to educate the public on why this is effective, to educate the public on why it's enjoyable, and we'll have to carve out a niche that in-person education is bad at, because if we go directly after the mainstream in-person education markets, we'll get completely slaughtered by a product that's already very advanced. And so there is a lesson in that defensive negativity despite the fact that it's kind of a ridiculous comment. I think anyone who works on the internet knows that any incumbent saying, "Hey, you'll never disrupt us," has always been wrong. I mean, basically everything gets disrupted. The Fortune 10 and Fortune 20 change every fifty years or something like that.

I think that it's just useful to take it as input but not let it define your decision-making, and instead talk to the customers. Incumbents might tell you they don't want something, and customers might tell you they don't want something, but watch what they do and watch where they gravitate towards and find the customers who are gravitating towards new and different, and then figure out why, and then put your product effort towards wherever the new and different customers are mostly adopting today.

Chris:

Made me think of this idea of empowerment as embedded scale. I was able to think of all these examples of marketing strategy essentially being empowering others to use a platform to unlock unused or unrecognized capacity. Whether, you know, you have Lyft, people with cars or spare time, or creators with knowledge and an audience waiting to be unlocked, or you have Airbnb, I have a spare room or an airbed. If you can find a way to empower people to recognize and leverage and utilize what they already have but haven't realized is an asset, you're gonna be in a pretty good position. And that's probably a good assumption or a question to be teasing apart, is who am I going to empower here?

Gagan:

Yeah, I love that.

Chris:

Cool. I have a question from David Nebinski. David asks, "At what point do we know if we have enough information to know that the test is truly a test? Has there been a time when you made a decision where you thought you had enough information, but you later learned that you did not?"

Gagan:

Sure. This is like a classic question I get. It's like, how do you know if your customers are telling you the truth, or how do you know if your idea is actually good, and it's what I would call a question that tries to make an artistic decision a scientific decision. And how I think about this is actually you don't know, and you can never know whether or not your test is good enough. You have to use your mental faculty, though, to try. And that's actually where I think people make the biggest mistake. It's like, "Hey, because this is nebulous, I'm not even gonna try. Because I don't know whether or not the vaccine is going to give me a hundred percent certainty, I'm not gonna try to figure out if it should be ninety-five percent or ninety percent or ninety-eight percent and how that might affect my life." And because I don't know for sure that if I don't step outside that I'm not gonna get COVID, as a crude analogy, this focus on certainty actually blinds you to possibility and probabilities.

And so the question you have to ask yourself is not, "Hey, I'm doing this test, how do I know if it's good or bad," it's actually, "How likely am I to be right about this?" And spend some time thinking about it. So if you run a test—Let's say you run a test—We ran a test when we first started Maven where we ran a course. So we just—I taught a course myself, and I created it from scratch and taught it, and I wanted to see like, "Okay, what is a cohort-based course like?"

And now there are a thousand arguments you could make against this test. You could say, "Well, you're a well-known entrepreneur who has a following online. You're not the same as the average Joe instructor or average Jill instructor." You could say, "Well, you're teaching these courses, you're not building technology at all." You know, there's all sorts of counter-arguments. On the flip side, you could all of a sudden say, "Hey, there's no point in doing the test," or, "I'm gonna do this test, but I'm not gonna be sure if it's right." That's not how I look at it. How I look at it is, "This test was focused on very specific learning goals. Do I feel like I achieved these goals, and how much do I feel like I achieved them, and how much can I rip apart what I've already done?" And in that nuance, in the nuance of how much, not whether but how much, I can then start to assess whether or not I'm happy with the learnings that I got from this test or not.

And absolutely, to the second part of your question, which was whether there are times that you've done a test and then ended up being wrong: more often it's that I didn't do the test in the first place. It's the things that I forgot. But there are times where I've done tests and they've been incorrect. I mean, an example was my second company, a food delivery company where we built sort of an on-demand restaurant. We were testing our product at a certain price point, and we didn't realize that we simply could not sell it at fifteen dollars; we needed to make it eighteen to twenty dollars. And so a lot of our tests on cost ended up being wrong. We thought we could hit a certain price point, and it was really difficult. More difficult than we expected. So we learned the hard way that even if you do the tests, you can still be wrong. And honestly you just kinda have to live with the reality that no matter what, you're gonna get to a point where you're like, "I can't test this any further."

And honestly if you get to the point where you can't test it any further, you probably tested it way too much. In reality, you probably want to test it just enough to feel confident that you know the assumptions, and then you need to go do it, and as you're doing it you have to remember your risky assumptions, and watch the market respond to your product and say, "Oh, actually that risky assumption wasn't a big deal." You know, at Maven one big thing as an example that wasn't a big deal was a lot of people thought it was gonna be hard for creators to become good instructors. Turns out we don't worry about that at all today. Most of our creators have super good reviews from their students on their courses. The cohort-based format is good enough, and our teaching of how to teach a good cohort-based course is good enough that most courses are getting high reviews. But the problem that we did identify that is hard is creators were gonna have trouble understanding the ROI of course creation.

The ROI has been in our minds very clear, so we've been right about the ROI, but it's been much harder than we expected to convince creators to sort of stick with it through two to four cohorts to get to the point where the system works and it runs itself and it becomes a big revenue generator for them. That's been something that's been more difficult. So we've done tests on those things, we've pitched instructors, we've got them on, did first courses, but you just learn over time a lot more than you can learn always in your tests. So you want to do tests enough to understand, "Hey, is this worth the next two years of my life to go and figure it out?" Then you spend the next two years and you ask yourself, "Are we at a point where I can spend the next two years of my life and figure it out?"

And you're just sort of gradually getting closer to the point where at some point you're like, "Oh, this actually worked. Great."

Chris:

Brilliant. I knew you were going to blow some minds today, and you did not disappoint. I want to peel back, because there's a lot to unpack there. First, you made a really good meta-point that tests are useful no matter what happens. Right? If it turns out you didn't need to worry about that at all, great, you can sleep better at night. Or it might even increase your conviction on the idea that you're working on so you can move faster, you can take more risks. And with all of this self-sabotage that I'm sure every founder can relate to, where everyone is looking to you to go, "Rah, rah, lead the charge, everything we're doing is great," when behind the scenes you're questioning everything, the more that you have validated your idea, the more that you can push through this classic trough of sorrow, because you know you've done the hard work necessary to take this risk. It's risk-adjusted, and you're continually just bumping up against reality, and the sooner that you find out what reality is telling you, the sooner you can pivot. You're paying attention.

You had this beautiful, beautiful line that I want to read back that said, "Over-focus on certainty causes blindness to opportunity." There's so much to unpack there, and because we can't measure something exactly we just decide it's not important to know at all, which is just a little bit insane, if you think about it. There's this book that really, really drove this point home to me that I highly recommend to anyone here listening. It's called, "How To Measure Anything," by Douglas Hubbard. And he proves, as the title promises, that you can indeed measure anything. You might not be able to put a number to it, exactly, but you can put a range to it, right? You can have a confidence interval. Or you can say, "Hey, I have this big nebulous thing. Well, I can break off some bricks from this nebulous thing and measure pieces of that, and at least reduce my uncertainty." But the fact that you don't know something doesn't mean that that thing is not worth knowing.

So yeah, continuing to bump up against reality, but in a determined, systematic way, of course.

Time for one more question. This last question's coming in from Cody Anderson. Cody wants to know, "Why do you think that entrepreneurial online education is going to be better or more innovative than online education offered by the traditional institutions?"

Gagan:

First of all, there's no reason why it needs to be better, it just needs to be good in some situations. So first of all, there might be situations in which the existing institutions still do a better job than what we do, and luckily, you know, there's multi-trillion dollars worth of spend, and everyone can sort of be happy here if we all deliver some value. So that would be my first answer. But I actually don't fully believe that answer. I mean I believe that that is true, that it could be either, but I actually believe that online will be much better than the institutions, and honestly the reason is just from my having gone to a top-tier university and having watched a lot of what top-tier universities do and just feeling like the quality is so bad that it's impossible for us not to beat it. I mean we're not talking about competing—I mean, the education that you get, even if you go to a Harvard or a Stanford, is honestly like on an average course fairly disappointing. I think if you ask any student who went to Stanford or Harvard, they'd say, "Yeah, I had some great courses and I had some terrible courses."

And I think one of the biggest challenges with the institutions is that they have created enough ossification in the way that they deal with quality control, the way that they deal with cost management, the way that they deal with curriculum design, that essentially their system sort of guarantees a lot of failures. I think that the institution has sort of given itself up to forces like tenure, forces like elitism, forces like increasing administrative costs over time, because there's essentially a never-ending willingness from banks to lend against the top-tier schools. And so you have structural challenges across the board that I think make it very easy to imagine that a reset might be better. So I really believe both from observation and from just theory that the average online course now that you can take, like if you take David Perell's "Write of Passage," or if you take a course from Chris Sparks, you're gonna get a better experience than the average course you take even at top-tier schools. And so of course it's gonna be better than the average state school or community college or something like that.

And that's in my mind almost at the point of observational fact. It's one of the things I can be more certain of, because I've taken these courses, I've taken all of these types of courses, and I know lots of people who've taken lots of different types of courses, and universally that's what customers say. They say that these courses are life-changing, and you rarely hear people say that about a college course. They will definitely say that about college, but not about a college course.

Chris:

We have so much room to cover, but I wanna be respectful of your time, and just thank you so much for being here. Anyone listening, I highly encourage you to see what Maven is doing, either as a student or as a teacher. There's no reason that you can't be both. I'm certainly very much a student these days. That's at maven.com. Gagan's website is gaganbiyani.com. If you found this concept interesting, definitely, definitely go to the source. Gagan has an amazing article that he's written in the First Round Review. We'll make sure that that link is in the show notes. Gagan, is there anything else that you want to share today, any other resources or places to send people before we wrap?

Gagan:

No, this is great. Thanks so much, Chris, for having me, and I really appreciate everyone listening in.

Tasha:

Thank you for listening to the Forcing Function Hour. At Forcing Function, we teach performance architecture. We work with a select group of twelve executives and investors to teach them how to multiply their output, perform at their peak, and design a life of freedom and purpose. Make sure to subscribe to Forcing Function Hour for more great episodes, or go to forcingfunctionhour.com to sign up for our newsletter, so you can join us live.
