Member Conversation: Dan Von Kohorn - Agentic AI
Episode 3 • 26th February 2025 • Club CI • Cognitive Investments

Shownotes

Welcome to the first installment of Member Conversations, a biweekly series where we talk with members and fellows of Club CI. Kicking things off is Dan Von Kohorn, a longtime investor at the frontier of early-stage startups.

In this episode, Dan and I dive into one of the biggest questions in venture capital right now: How will AI reshape industries, and which businesses are first in line for disruption?

We talk about:

  • Agentic AI – Not just chatbots, but AI that does things for you.
  • The Low-Hanging Fruit – Why customer service is about to look very different.
  • The Rising Cost of Attention – How AI will protect (or exploit) our focus.
  • The Dark Side – Criminal networks using AI for scams.

Dan also shares what he's seeing in the investment landscape: which AI startups have real staying power, how they’re being valued, and what might make them vulnerable to commoditization.

Timestamps:

(00:04) - Introduction to the Series

(02:20) - The Rise of Agentic AI in Early Stage Venture

(11:21) - The Future of AI in Customer Service

(16:10) - The Future of AI in Personal Management

(28:32) - The Impact of Language Models on Industry

(36:57) - The Economics of AI Startups

(45:10) - Opportunities in Generative AI

Transcripts

Speaker B:

Hey everyone, it's Rob.

Speaker B:

And I'm very happy to introduce the first of what is going to be a bi weekly series of interviews with members of the club and fellows of the club.

Speaker B:

The first is with my friend and growing collaborator at Bespoke, Dan Von Kohorn.

Speaker B:

Dan is co-founder, along with his partner Jeff Rosen, of a firm called Broom Ventures.

Speaker B:

They're based in Boston and Broom Ventures focuses on very early stage startup investing.

Speaker B:

So they just finished raising their second fund and they're beginning to put that to work.

Speaker B:

But essentially their approach is to remain small, to become involved very, very early.

Speaker B:

Oftentimes they're writing the first check for a founder or a team with a new idea, and they work very closely with them, providing support and mentorship, really, as they say, sweeping the floors, to help them get off the ground, find product-market fit, all these sorts of things.

Speaker B:

So Dan is really knowledgeable in the areas of early stage business formation.

Speaker B:

and where disruption is happening in those areas, specifically within software.

Speaker B:

So I'm really super pleased to introduce you to him and his ideas and what he's thinking about.

Speaker B:

This is sort of an informal conversation that we have about stuff going on in his area, and I think this will be something of a regular series, because Dan is someone that we at Bespoke are working with more and more, and

Speaker C:

it looks like Broom is going to

Speaker B:

be a major partner for us as we begin doing more of this investing with our clients ourselves.

Speaker B:

So without further ado, I will get right into it and here is our conversation recorded on January 22.

Speaker B:

I should say there's a little bit of a delay, but here is the conversation between myself and Dan Von Kohorn.

Speaker B:

Thanks.

Speaker C:

All right, Dan, we're here for the first conversation of what I hope will be quite a few of them.

Speaker C:

Where should we begin?

Speaker C:

What is on your mind in

Speaker C:

the world of early stage venture right now, or what are you thinking about?

Speaker A:

I've been in this market for a long time and the zeitgeist today is agentic AI.

Speaker A:

A lot of people are thinking about it and contemplating business models that use agentic AI.

Speaker A:

The world has not yet deeply adopted simple AI.

Speaker A:

To leap forward to agentic AI makes a lot of sense for some companies, but there's still a lot of use cases that are yet to be served even with simple applications of AI.

Speaker C:

So, by agentic AI, you mean AI that does stuff for you, not just an LLM that communicates with you?

Speaker A:

Yeah, it usually involves a couple of different steps and a couple of different AIs.

Speaker A:

These are AIs that might interact back and forth with each other.

Speaker A:

Maybe one that writes code and another one that tests code, or one that creates a plan and then delegates work to multiple other AIs to achieve goals.

Speaker A:

And then the orchestration layer of the AI will evaluate the internal components that are being achieved, make sure that they satisfy some criteria, and then allow the system to move on to solve more complex, longer-term tasks.

Speaker A:

An orchestration layer might also trigger the underlying AIs to work on some particular use case, even if there's no human that's triggering it directly.

Speaker A:

So these things might be triggered by conditions in the world, or they might be triggered by time, like a cron job.

Speaker A:

But the idea is that these kinds of systems can handle a lot more complexity and be more autonomous, get more done, be more reliable, because they have multiple layers of intelligence that evaluate themselves and proceed with longer-term planning and tasks.
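
To make that pattern a little more concrete, here is a minimal Python sketch of a planner/worker/reviewer orchestration loop like the one Dan describes. It is only an illustration; the `call_llm` helper is a hypothetical stand-in for whatever model API would actually be used.

```python
def call_llm(role_prompt: str, task: str) -> str:
    """Placeholder for a real model call (swap in any chat API here)."""
    raise NotImplementedError

def run_agent(goal: str, max_rounds: int = 5) -> list[str]:
    # Planning step: one model decomposes the goal into discrete tasks.
    plan = call_llm("You are a planner. List the steps, one task per line.", goal)
    tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    results = []
    for task in tasks:
        for _ in range(max_rounds):
            # Delegation step: a worker model attempts the task.
            draft = call_llm("You are a worker. Complete the task.", task)
            # Evaluation step: a reviewer model checks the draft against criteria.
            verdict = call_llm(
                "You are a reviewer. Reply PASS or FAIL, then a reason.",
                f"Task: {task}\nDraft: {draft}",
            )
            if verdict.strip().upper().startswith("PASS"):
                results.append(draft)
                break
            # Otherwise loop and retry -- the orchestrator never gets impatient.
    return results
```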

Speaker C:

On the sort of orchestration or the managing layer.

Speaker C:

I was listening to an interesting conversation on this subject recently and you know, they were talking about sort of the declining returns to intelligence and how just adding more intelligence to something doesn't solve all problems.

Speaker C:

Meaning like the exponential super, you know, singularity cases that some people make.

Speaker C:

When you think about that at like the orchestration layer, which is essentially like managing other AIs.

Speaker A:

Right.

Speaker C:

Do we have any sense of where some issues might crop up when you're working with an AI rather than a human, if that makes sense?

Speaker C:

You know what I mean?

Speaker C:

Yeah.

Speaker A:

The AIs are actually quite good at considering alternatives that they've never seen before.

Speaker A:

They're quite good at noticing small issues that a human might overlook.

Speaker A:

They have infinite patience to go back and forth with their component AIs to continue to process until the job is done.

Speaker A:

Right.

Speaker A:

They don't just give up and accept an answer that's not quite right.

Speaker A:

So I think it works fairly well.

Speaker A:

The marginal benefits of increasing intelligence I think are still positive.

Speaker A:

There's still gains to be made with improved intelligence of each of the individual LLM models.

Speaker A:

But I think there's also maybe even more value to be had in correctly building the context.

Speaker A:

So, you know, I'm seeing a lot of progress with state of the art models that can, right out of the box, solve problems with reasoning and sound very human, very intelligent.

Speaker A:

But the real gains, I think at least in the next several months, the next sort of iteration of AI applications I think will come from improving the content that these LLMs are given inside the context window.

Speaker A:

And that is like when you prompt an LLM rather than just prompting it cold, giving it some appropriate related context.

Speaker A:

What are we talking about here?

Speaker A:

What's the background data?

Speaker A:

You know, what is the frame of reference within which to think?

Speaker A:

How should I respond to this particular situation, getting all that background?

Speaker A:

We humans have this very naturally because we have multiple senses and we have long term memory and we can be aware of our situation and respond appropriately in context.

Speaker A:

And that's very important.

Speaker A:

The way that you perceive someone else's intelligence, whether it's a human or an AI, depends largely on how they respond to you in the context where you're asking a question or interacting.

Speaker A:

And that's coming.

Speaker A:

I think for AI, that's going to be a big deal.

Speaker A:

And so even a less state of the art model, a small model, these models can appear quite intelligent and be quite intelligent if you give them the right context.

Speaker A:

So I think there's a lot of gains to be made with that.

Speaker C:

And how does that change the importance of someone's skill in manipulating the AI themselves?

Speaker C:

Like when LLMs first kind of hit the scene and there were, you know, ChatGPT and consumer applications you can use.

Speaker C:

I saw a lot of people selling or training others on prompting methods and you know, okay, you have your question, but then you set your mood and you put your restriction, you know, different sort of ways of formulating that.

Speaker C:

Is that going to go away as an important layer between you and the AI?

Speaker C:

Like when you say that improving the context that gets inputted, is that basically tools that kind of do that for you or act as that intermediary, or what exactly are we talking about here?

Speaker A:

Yeah, prompt engineering is part of it.

Speaker A:

And as these LLMs are being built, new ones have better context window management systems.

Speaker A:

These systems pull in external data, like from a RAG, a retrieval-augmented generation system, or get external data from tools.

Speaker A:

But they might also have multi-step prompting that first evaluates the original prompt, then modifies it to add in additional context that is appropriate, then pulls in the RAG content and anything else that might be available from various memory systems, and then builds the final prompt that goes to the language model.

Speaker A:

So that kind of multi-step prompt building, context window building, can add a lot of important context and information that the language model would then use for the final output.
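
As a rough illustration of that multi-step context building, here is a small Python sketch. The `llm`, `retriever`, and `memory` objects, and their `search` and `lookup` methods, are hypothetical stand-ins rather than any specific framework's API.

```python
# Rough sketch of multi-step context building: analyze the raw prompt, pull
# related passages from a retrieval (RAG) index and a memory store, then
# assemble the final context window that actually goes to the language model.

def build_context(user_prompt: str, llm, retriever, memory) -> str:
    # Step 1: have a model restate the request and note what background would help.
    analysis = llm(
        "Restate this request and list the background information needed:\n"
        + user_prompt
    )

    # Step 2: retrieve supporting passages from external data (the RAG step).
    passages = retriever.search(analysis, top_k=5)

    # Step 3: pull anything relevant from longer-term memory of this user.
    remembered = memory.lookup(user_prompt)

    # Step 4: build the final prompt the model will actually see.
    return "\n\n".join(
        ["Background documents:", *passages,
         "Relevant history:", *remembered,
         "User request:", user_prompt]
    )
```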

Speaker C:

I see.

Speaker C:

And is that where a lot of the value add in new startups is going to be?

Speaker C:

Is sort of feeding things to more of a standard model in different ways?

Speaker C:

Or how do you see that playing out?

Speaker A:

Yeah.

Speaker A:

There's so many opportunities to implement this stuff.

Speaker A:

There's all kinds of ways to make things smarter and better and improve the state of the art.

Speaker A:

But really I think it's just an integration game at this point.

Speaker A:

The industry is so slow to adopt, not because they don't want to; it's just that maybe my expectations are that, you know, once it's invented, it should be implemented immediately.

Speaker A:

But the reality is that these things take quite a bit of time.

Speaker A:

There's time to understand what the tools are capable of, there's time to build and test, there's time to adjust, and it just takes time.

Speaker A:

So there are a lot of business opportunities that don't really require a lot of innovation, they just require integration.

Speaker A:

And so adapting legacy business models to language models just means building on top of new sets of tools that have different cost structures.

Speaker A:

And so the economy is going to adjust with the cost structure just collapsing for various kinds of work.

Speaker A:

And that's going to add a lot to productivity.

Speaker A:

There's companies that are going to be able to do a lot more with a lot less.

Speaker A:

And as certain types of activities become approximately free, those things can happen dramatically more and be built into processes.

Speaker A:

And I think we'll discover that there are a huge set of new opportunities that are created because various types of activities that used to take a lot of time and money now don't.

Speaker C:

And just to put some tangible imagery around this, what is an example of what you think will be an early application of using sort of improved multi-step prompting or RAG in conjunction with one of these models? Like, where are we going to see this happening first?

Speaker C:

Because it's most conducive to actually adopting and using it.

Speaker A:

I'm pretty excited about the use case in customer service.

Speaker A:

It's sort of a perfect example of multi-step prompting and supplementing the context with RAG.

Speaker A:

In customer service, this is happening already.

Speaker A:

There are companies that are supplementing their customer service with AI agents that interact by voice or text.

Speaker A:

And these systems will take in the original prompt which includes the customer's question, but they will supplement that prompt with information about the user and their account.

Speaker A:

Supplement the prompt with lots of background information about the company's policies and the situation that the customer is asking about.

Speaker A:

All of that content can be pulled out of the database using tools or MCP, which is the Model Context Protocol that Anthropic recently released.

Speaker A:

And these tools can give the language model a much deeper context with which to support the customer. And then additional tools are available to the LLM to be able to actually handle the request, to be able to write to the database, make changes, interact with other APIs.

Speaker A:

So that allows the customer service agent to understand what the user is asking for, understand the context, understand the account, and use the APIs like a regular customer service agent would.
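
A minimal sketch of that kind of tool-using support agent, in Python. The `get_account`, `get_policy`, and `update_account` functions are hypothetical stand-ins for whatever database or MCP-exposed tools a real deployment would wire in, and the JSON "action" convention is just an illustration, not a real protocol.

```python
import json

def get_account(customer_id: str) -> dict:
    """Hypothetical read tool: pull the customer's record from a database or MCP server."""
    raise NotImplementedError

def get_policy(topic: str) -> str:
    """Hypothetical read tool: fetch the relevant company policy text."""
    raise NotImplementedError

def update_account(customer_id: str, fields: dict) -> None:
    """Hypothetical write tool: apply an account change through an API."""
    raise NotImplementedError

def handle_ticket(question: str, customer_id: str, llm) -> str:
    # Enrich the prompt with the customer's account and the relevant policy.
    account = get_account(customer_id)
    policy = get_policy(question)

    prompt = (
        "You are a polite support agent. If an account change is needed, reply "
        'with JSON like {"action": "update", "fields": {...}}; otherwise reply in plain text.\n'
        f"Account: {account}\nPolicy: {policy}\nCustomer question: {question}"
    )
    reply = llm(prompt)

    # If the model requested a change, carry it out through the write tool.
    try:
        request = json.loads(reply)
        if request.get("action") == "update":
            update_account(customer_id, request["fields"])
            return "Done, your account has been updated."
    except (json.JSONDecodeError, AttributeError, KeyError):
        pass  # plain-text answer, nothing to execute
    return reply
```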

Speaker A:

And I think the potential there is that customer service could be 24/7, it can have all the major languages, it will have infinite patience.

Speaker A:

You can yell at it and be mean to it, and it will still be professional and polite and get the job done.

Speaker A:

There are a lot of jobs that are currently very difficult jobs where people yell at you and complain at you and it's stressful and a lot of that work can be free.

Speaker A:

And so I have no doubt that there will be lots of other things for people to do, but humans need not do that kind of work anymore.

Speaker C:

What about on the sort of demand side in the customer service?

Speaker C:

So if this is one of the early adoption areas and say six or seven years from now, pretty much every large company, you know you're going to be speaking with an AI agent.

Speaker C:

When you call with a question, does that make it easier to build agents that work for you as the customer?

Speaker C:

So, like, if you know there's going to be a certain protocol of hey, here's my problem, here's what I need.

Speaker C:

Do you expect that there'll be agents on this side where you say, okay, agent, go off to American Express and here's my problem, and you go deal with it?

Speaker A:

I do, yeah, I think that's right.

Speaker A:

There will be agents who are vying for your time, like advertising agents who are seeking engagement, who are trying to get your attention to do various things or sell to you.

Speaker A:

And also there will be companion AIs that represent you and they act as your defense layer against incoming noise.

Speaker A:

They serve your needs in shopping and scheduling and a wide variety of other applications.

Speaker A:

But you'll be able to interact with the world through an intermediary who will again, have infinite patience and know your preferences, understand your contact list and know who has access to you and who shouldn't.

Speaker A:

I think there's a very strong potential for us to have more focused time and a defensive barrier against the bombardment of social media and the bombardment of advertising.

Speaker A:

All of this will be optional and people will be able to determine on their own how much of this or how little of this they want to use.

Speaker A:

But I think the potential is there for that to be extremely helpful for people.

Speaker A:

And again, lead to productivity increases and also improvements in lifestyle.

Speaker A:

We are just at the beginning of humanity's integration with information and I'm not sure that we've been doing such a great job so far.

Speaker A:

Social media and other online, sort of addictive, news feeds have a lot of influence over the population, and it's not clear that we as a species are good at managing this level of information flow and choices that we have.

Speaker A:

So I think that there are people who are responding poorly to it and humanity will figure that out too.

Speaker A:

And AI can help because it's not necessarily the case that we're all interacting the way that we want to with social media.

Speaker A:

And this might provide, you know, a protective layer that allows us to still get what we want, but do it through a window that is managed and protected in the ways that we ask it to manage and protect.

Speaker A:

A moderation layer, maybe.

Speaker C:

It sort of ties into something I've talked with Nerif Carlisle about, this notion of using compute to help shape and protect your attention, which is kind of what the Make Time Flow system is all about.

Speaker C:

And like today it doesn't really integrate AI or agents like that.

Speaker C:

Still, it's almost a prototype of that, because, I mean, the whole thing is just about blocking your time and finding deep work time, which is essentially: get away from the flow of information, get away from your computer and just focus on one thing, which is remarkably refreshing and feels wonderful.

Speaker C:

But then you have to block out time to like, you've seen how terrible I am at email.

Speaker C:

I'm super rude because I just never... I'm almost just giving up on it at this point.

Speaker C:

But you know, you alternate between these deep work periods which are so great and these like really intensive one on one interactions which when you have them are incredible.

Speaker C:

And then the rest of the time it's like you're rushing like an idiot to, you know, do the messages and the slack and the email.

Speaker C:

And even if you block it in, okay, here's my batch time, it's still.

Speaker C:

You're just jumping right in the flood and there's so much crap in your inbox and every time you go on LinkedIn there's 47 people messaging you.

Speaker C:

Hey, just thought I'd check.

Speaker C:

It's like, oh, so to have that protector is.

Speaker C:

Yeah, it seems like the logical next step to just be like, hey agent, go find a time that works for me and Dan so we can get together and really connect, and, you know, don't bother me with anything else.

Speaker A:

Right.

Speaker A:

It's so valuable that it's inevitable that that will be something that people want and then use.

Speaker A:

Not everybody, but many people will.

Speaker C:

And how far away are we from this?

Speaker C:

Like, what are the technical bottlenecks not only to building it, but also to implement and reach customers with it?

Speaker C:

Like are you seeing startups trying to build these tools today?

Speaker C:

Like where are we in terms of actual boots on the ground stuff getting built and put out there in the world?

Speaker A:

The things that are being built today are point solutions for existing problems.

Speaker A:

Largely there are some more forward looking, sort of visionary applications of AI that are being built and experimented with, but most of what's happening now is modernization of existing workflows.

Speaker A:

And so those are the kinds of things that are pretty easy to fund.

Speaker A:

They make obvious sense, the demand is known.

Speaker A:

They may be subject to a lot of disruption because, you know, not just one organization sees the value proposition there.

Speaker A:

It's like a coming wave of competition and change.

Speaker A:

And it may be that the margins for that type of business disappear completely, because there's no moat and it's now free to get it done, and so that can just collapse.

Speaker A:

But those are the kinds of businesses I'm seeing mostly. Now, the idea of an AI that sort of manages you, protects you, acts as your interface across different applications and communications systems.

Speaker A:

There's really no barrier to getting that done today.

Speaker A:

But it's a lot of work because it requires integration with all of these tools.

Speaker A:

It may be that something like that is best positioned sort of inside your browser or inside your operating system, so that it automatically travels with you from application to application and account to account, sees where you're logged in, what you're doing, and how people are interfacing with you, sees all the different websites where you're interacting, on LinkedIn or Reddit or wherever you are, and helps to be your interface there.

Speaker A:

So there are companies that are working on things like that, but mostly they're building smaller, simpler applications that have more direct, short-term value propositions and revenue models.

Speaker C:

Yeah, I mean it seems like that would be the low hanging fruit for.

Speaker A:

Yeah, it'll go that way for a while.

Speaker A:

But I'm sure there's a lot of people who are working on these more complex ideas and you know, they'll keep it in the garage until it's ready to go.

Speaker C:

Well, the thing that strikes me is like it just feels like such a fundamentally different kind of business than most of the sort of web or software oriented business that we've seen for the last 20 years or so.

Speaker C:

In the sense that it's inherently very personal in a way that doesn't seem to work well for like a SaaS kind of solution.

Speaker C:

Like I don't want to be plugged into the Salesforce version of the Protector because I just wouldn't trust it.

Speaker C:

And they would have such an incentive to let things in or to not act in the best interests of the user.

Speaker C:

I mean, just look at how all the AI companies are now starting to talk about advertising models and stuff like that.

Speaker C:

Like that didn't take too long.

Speaker C:

So how do you bridge that gap?

Speaker C:

Or do you need to have a company like that to get buy in?

Speaker C:

Will people trust, you know, the Google Protector, or do you even need a new kind of revenue model where you just buy these things off the shelf like an old-school personal computer: give me the hardware, give me the software, and leave me alone and disconnect me?

Speaker A:

I don't know.

Speaker A:

Predicting how trust works is pretty tough.

Speaker A:

Consumer behavior is fickle.

Speaker A:

I think, you know, brands manage their public image pretty closely.

Speaker A:

Google and Apple are in a good position to do this kind of thing, but I don't know if consumers will trust them.

Speaker A:

It may be easier to trust Google and Apple with this.

Speaker A:

If it's built into the browser or built into the operating system.

Speaker A:

That just might be enough of a low friction entry point that people will adopt it.

Speaker A:

And as they adopt it then other people will observe that it's safe or valuable and they'll do it too.

Speaker A:

So that very well may be the case.

Speaker A:

It is such a strong advantage to have that distribution already built in that I think it has a pretty good likelihood of winning.

Speaker A:

But it may also be the case that somebody new comes along with something that is open source or otherwise trustable and has fine-grained controls for users to determine the level of advertising interactivity and the level of information sharing, that gives users enough control and builds enough trust that it overcomes the inherent advantages of built-in distribution.

Speaker A:

So it could go either way.

Speaker A:

I really don't know in advance which scenario is going to play out, but I think open source does add a lot of credibility, and people trust it for good reason, because they can see the underlying code.

Speaker A:

These closed source models often have functionality built in that is not necessarily purely for the benefit of the user.

Speaker A:

And sometimes it's because there's something proprietary, something very valuable and secret and good that a company wants to protect as part of their competitive footing.

Speaker A:

But mostly, open source is important for building trust like that.

Speaker A:

You see it with encryption protocols and other things where trust is really paramount.

Speaker C:

Just going back to your comments on which areas you think are ripest to adopt these point solutions, do you think we're going to see certain areas of the economy just like change much more rapidly than others?

Speaker C:

And what are potential ramifications of that?

Speaker C:

Kind of, if you go back to the early days of electrification and look at manufacturing.

Speaker C:

They were the first real sector to adopt electricity.

Speaker C:

They retrofitted lots of factories from the old, you know, steam-engine kind of centralized shaft with the pistons and stuff, to actual electrical factories, and grew per capita GDP within manufacturing at like 12, 14, 15% a year, even when the overall economy was in a real slump.

Speaker A:

Like water mills and, and the textile industry.

Speaker A:

I think that's a really important way of thinking.

Speaker A:

I kind of liken LLMs to.

Speaker A:

I heard somebody describe it this way.

Speaker A:

This is not my original idea, but it's like we've discovered a new continent of people.

Speaker A:

And these people are relatively intelligent.

Speaker A:

They work online, they're willing to work 24/7 for, you know, a penny an hour.

Speaker A:

And they're eager, they speak every language, they never get mad, they follow instructions perfectly.

Speaker A:

They have the ability to read every PDF, to understand all of it, to have perfect recall from their memory.

Speaker A:

And there's infinitely many of them.

Speaker A:

And so the labor markets are going to have to adjust to that I think.

Speaker A:

You know, we've been seeing in other industries how more traditional machine learning has impacted things.

Speaker A:

Financial forecasting has made financial markets much more efficient.

Speaker A:

And now the inefficiencies are really just perception and social media.

Speaker A:

But the fundamentals are relatively efficiently priced.

Speaker A:

Assessing the creditworthiness of a company, making forecasts about engineering projects, these kinds of things have adjusted to machine learning models.

Speaker A:

All of this has changed pretty dramatically over 20, 30 years with computation, and now just.

Speaker C:

As an aside on that, one of the things that we've really noticed at Off Wall Street doing the short side is that many more of these short setups are cropping up because some algorithm has latched onto something in the trailing data that's not representative of what's actually happening in the business in the real world.

Speaker C:

So it puts up these mispricings based on sort of what machine learning algorithms have to work with, which is the financial backward looking data.

Speaker C:

And occasionally you have these weird discrepancies where they're looking at this very cyclical company and saying, oh, revenues and margins have been going like this, and that's a really good trend and that's going to continue, when really in the background it's starting to roll over.

Speaker C:

So, like the underground drummers in The Matrix, I guess we have to adapt and find the little crevices in between to thrive.

Speaker A:

Yeah, that's a signal that machine learning has really taken over that industry where you're noticing places where the machines are making mistakes and getting it wrong.

Speaker A:

They're too backward looking and not necessarily considering the out-of-context forecast, the thing that's happening in the future that makes it distinct from the past.

Speaker A:

Yeah, so you know, we've been seeing this happen across many industries with more traditional machine learning, numerical machine learning.

Speaker A:

And now in the last several years, language models have really opened up.

Speaker A:

And so I think the next transition is going to have to do with that introduction of language models and voice interaction being able to interact with a computer using natural language.

Speaker A:

It used to be that only coders could interact, you know, at a deep level and programmatically with a computer, but now really anyone will be able to do that.

Speaker A:

And it's going to be a new paradigm in terms of humans using computers.

Speaker A:

It's going to be a new paradigm in terms of getting intellectual work done by computers that is, you know, flexible and handles cases that it hasn't seen before.

Speaker A:

So yeah, I think the cost lens is a useful one for predicting what happens to different industries, the amount of work that's getting done that can be put onto LLMs.

Speaker A:

That's a major driver of which industries adopt this first and how big of an impact it has.

Speaker C:

So it sounds like from an investment standpoint the way to think about where we are today is like early Internet days.

Speaker C:

There's certain very obvious applications that are somewhat linear from what we're doing today.

Speaker C:

Where, oh, you have a lot of humans that are expensive and they're talking and looking at things and looking up things, you can replace them.

Speaker C:

Like the customer service example, that's going to be where a lot of the activity and low hanging fruit is plucked.

Speaker C:

And then sort of at some point there will be a second wave that more internalizes what LLMs can actually do in a more creative way.

Speaker C:

It's so funny, because it's so hard to tell the future, and, you know, you have these stories from the past and you try to apply mental models from them.

Speaker C:

Like, I know what I, as an investor and an analyst, am going to get wrong: I'm going to underestimate what it really means to have essentially not zero, but very, very low marginal cost intelligence that you can add.

Speaker C:

And agents. Just the same way everyone underestimated what it really would mean to have, you know, zero marginal cost to distribute software.

Speaker C:

And like the early Internet applications were like, oh, we have a newspaper, let's put a newspaper online.

Speaker C:

You know, like these very linear things.

Speaker C:

And then it became more, you know, version two was really starting to get to SaaS and all of the Internet companies that basically just took physical stuff and tied it together.

Speaker C:

So like Uber and you know, every real estate, like basically.

Speaker C:

And now, this is sort of a separate area, but we're thinking about how the truly rock-bottom marginal costs of that are really being used in really innovative ways for things like drug discovery and R&D and all that new wave stuff.

Speaker C:

But like this is the same, I could see it playing out the same way where it's like, okay, oh, let's replace the call center agents.

Speaker C:

Like, yeah, obviously, exactly.

Speaker A:

Yeah, that happens first because it's replacing a direct known cost.

Speaker A:

The ROI is extremely high right away, so, you know, it's easy to get approval to put that in the budget.

Speaker A:

You can test it and evaluate its performance.

Speaker A:

So that's like the first wave and those are the things that happen first.

Speaker A:

And that's still basically where we are.

Speaker A:

And I think we're not even very deep into that process because there's still so many processes, so many business applications that are yet to be implemented just with relatively straightforward AI applications.

Speaker A:

And then there are the things that we know are coming but we can't see yet.

Speaker A:

These are the applications that will naturally arise because of the essentially zero cost intelligence.

Speaker A:

And it's hard to know what, what those are.

Speaker A:

But I think representatives, like having almost a fiduciary advisor or a companion who rides along with you in your journey online and beyond, who advocates for you, gets work and projects done for you, supports you in all of the ways that you want to be supported.

Speaker A:

I think that is a much more valuable but less direct inevitability because of what language models are.

Speaker A:

They don't even necessarily have to be embodied.

Speaker A:

Like, it's possible that in the further future these things are embodied into robotic systems and they can help you around the house and do things for you as a physical entity as well.

Speaker A:

But just the software side of it is really quite valuable, and I suspect that will come, you know, sometime in the next five or 10 years.

Speaker C:

Yeah, it seems like for good and for ill.

Speaker C:

The thing that I know I'm underestimating is what are the potential applications of just having attention and agency applied with no limit.

Speaker C:

Like we're so used to a world, we're just so ingrained to be in a world where like people get bored, they get tired.

Speaker C:

Like you're not getting bombarded every second by something.

Speaker C:

Or you know, on the positive side, like you only have so much mental capacity for your kids, the teacher only has so much mental capacity for the student.

Speaker C:

You know, no one has mental capacity to spare for people in Africa in very poor countries.

Speaker C:

So on the positive side, I can see just these incredible applications where just to have someone who's there, who never gets tired, who never gets bored, to shape minds and attention and your day to day interactions in such a positive way.

Speaker C:

And then on the negative side, like constantly, like literally every second being bombarded by threats and scams.

Speaker C:

Like I was reading over the weekend about this Chinese gangster named Broken Tooth.

Speaker C:

Have you heard of this story?

Speaker A:

No.

Speaker C:

So there's this Chinese gangster named Broken Tooth who is in Myanmar now, that's his main base.

Speaker C:

He's most famous for running these pig butchering scams.

Speaker C:

And essentially, talk about customer service and high human capital costs, what they do is they just kidnap people from villages in China, you know, sort of on the southwestern border, and in Myanmar itself.

Speaker C:

And they essentially enslave them and make them run these scams where they're messaging with people in the US and Europe and trying to befriend them.

Speaker C:

And ultimately the scam is, hey, would you invest in this thing?

Speaker C:

And it sounds so stupid on its face, but it works enough to justify the cost of kidnapping people and bringing them together and all the risk costs associated with that.

Speaker A:

Spin up an LLM that pretends it's your friend and tries to convince you to send Bitcoin or whatever.

Speaker C:

Yeah, well, that's what I mean. Like, imagine, Broken Tooth must be thrilled.

Speaker C:

Like imagine his cost structure is collapsing right now.

Speaker C:

Like he can have almost infinite, not infinite, but obviously there is some marginal cost in the electricity and the inference and stuff.

Speaker C:

But relative to.

Speaker A:

But if it makes sense at kidnapping costs, it definitely makes sense at LLM costs.

Speaker A:

Yeah, that's right.

Speaker C:

So, you know, I don't know where I'm going with this, but I, I can see.

Speaker A:

Yeah, well, that means that there's going to be a lot more of it, and it means that the defense mechanisms that we all need are also going to have to step up.

Speaker A:

And so this is sort of yet another indication that there will be pressure to have some kind of defense layer, some representative where, you know, you don't necessarily have to see all of your text messages.

Speaker A:

These things can be filtered out in advance, before they even get to your attention, so that you don't have to deal with the onslaught of AI-generated spam and scams.
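
As a toy sketch of that kind of filtering layer, assuming a generic `llm` callable and simple message dictionaries with `sender` and `body` keys (both hypothetical, not any particular vendor's API):

```python
# Toy sketch of a personal 'defense layer': score each incoming message with a
# language model and only surface the ones worth the user's attention.

def triage_messages(messages: list[dict], llm) -> list[dict]:
    surfaced = []
    for msg in messages:
        verdict = llm(
            "Label this message SCAM, SPAM, or LEGIT. Reply with one word.\n"
            f"From: {msg['sender']}\nBody: {msg['body']}"
        )
        if verdict.strip().upper().startswith("LEGIT"):
            surfaced.append(msg)  # reaches the user
        # SCAM/SPAM messages are quarantined before they ever demand attention.
    return surfaced
```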

Speaker C:

One question I had was about the kind of first wave of real AI startups and all of these point solutions and sort of picking the low-hanging fruit.

Speaker C:

To what extent are those economics going to be disappointingly bad because everyone is so used to SaaS economics and very, very sticky customers and sort of this land and expand model where you know you can afford a big customer acquisition cost because you know you're amortizing that over a very long time period.

Speaker C:

And do you think that investors are sort of underwriting the new startups based on the old model?

Speaker C:

I know that's hard to evaluate, but how much disappointment could be in store for some of these guys? Like, could you get to a billion dollars of ARR with one of these point solutions?

Speaker C:

Is that realistic or you're just going to constantly be undercut by new entrants with very, very low costs and customers that aren't nearly as sticky?

Speaker A:

You know, it's been this way for 30 years that the startup world is one where the winners can win really big.

Speaker A:

But most companies don't win.

Speaker A:

The reason for this is that there are fairly high fixed costs relative to marginal costs.

Speaker A:

And so it costs a bunch to build enough to serve the first customer, but then it doesn't cost anything to serve the second customer.

Speaker A:

And so once you build it and it's working correctly, then you can really serve as many customers as you want.

Speaker A:

And so the companies that succeed end up with a huge volume of users.

Speaker A:

And rather than a broadly fragmented market with lots of winners, you end up with a real concentration of success.

Speaker A:

And so people will lose money for a long time in order to buy market share and capture users and build a big user base and try to get economies of scale so that they can be the winner.

Speaker A:

But it costs a lot to do that.

Speaker A:

And while it's happening, you, you often will see companies lose quite a bit of money.

Speaker A:

This is like the box model where they're losing a ton of money all the time, but they're building up a lifetime customer value that they think will eventually pay off.

Speaker A:

And so their contribution margins from new customers are negative.

Speaker A:

They can get a small contribution margin from existing customers, where they're growing the application's usability with that customer, and then really the renewals of existing customers that are more mature.

Speaker A:

That's where they have really strong positive contribution margin.
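
A tiny worked example of that logic, with invented numbers purely to illustrate the lifetime-value-versus-acquisition-cost tradeoff:

```python
# Toy illustration (all numbers invented) of why a negative contribution margin
# on new customers can still pay off if renewals are profitable and churn is low.

def lifetime_value(monthly_margin: float, monthly_churn: float) -> float:
    # Expected lifetime in months is roughly 1 / churn, so LTV ~= margin / churn.
    return monthly_margin / monthly_churn

cac = 1200.0                                                   # cost to acquire one customer
ltv = lifetime_value(monthly_margin=80.0, monthly_churn=0.03)  # ~2,667 over the customer's life
print(f"LTV = {ltv:.0f}, LTV/CAC = {ltv / cac:.1f}")           # ratio > 1: worth losing money up front
```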

Speaker A:

So I think this is all still true now.

Speaker A:

It's not a new phenomenon; language models aren't different from SaaS models in that sense.

Speaker A:

There's just going to be a continuation of this very strong concentration of success and the winners should be really big.

Speaker A:

And most companies will never get to the point where they have sufficient user base to overcome the fixed costs.

Speaker C:

And historically, I know it's impossible to say, but how much of that sort of concentration is due to kind of the fixed costs of actually building the product versus the fixed costs of building sort of brand awareness and awareness among customers.

Speaker C:

So like, you know, you read these interviews with the CIO of a big enterprise, and they're like, yeah, we considered four different solutions because those are the ones I had heard of, or they were in Gartner or whatever.

Speaker C:

Like how much of it is that?

Speaker C:

The, you know, self-reinforcing exclusionary factor, versus, well, these guys have the most money so they built the best product, and it's just going to kind of go on and on from there.

Speaker A:

It depends.

Speaker A:

There's plenty of room for companies to build a good reputation that is organic.

Speaker A:

So these things happen all the time.

Speaker A:

OpenAI is sort of an example of this.

Speaker A:

They built a better mousetrap and they got to a giant user base faster than anyone had in the past.

Speaker A:

So that can happen and I think will continue to happen.

Speaker A:

But CAC, or the cost of customer acquisition, will continue to play a role.

Speaker A:

And if you have to spend a lot to acquire a customer, that can be a pretty strong disadvantage.

Speaker A:

So companies that already have big distribution in place can take advantage of that.

Speaker A:

So large companies like Google and Apple, who have user accounts with large populations, can roll these things out and have a very strong advantage in capturing new markets with new products before a fast, high-quality innovator without that distribution can catch up.

Speaker A:

So yeah, it's an uneven playing field for sure.

Speaker C:

I think when Miles and I wrote the piece on Gen AI and how it could change the cost structure of early stage startups and stuff, which I sent to you a ways back, one of the things we sort of speculated about, and this was me speculating, so I don't want to tar Miles with this brush in case it's stupid.

Speaker C:

Well, one of the things I was speculating about was that customer acquisition costs could go up quite a bit if you have a lot of new companies that aren't spending as much on engineering or kind of early-stage product development and can allocate more of their capital raise to customer acquisition, with a lot of them going through many of the same channels and stuff.

Speaker C:

Does that pass the sniff test in your real world experience?

Speaker C:

And what are you seeing in terms of any trends on the customer acquisition front?

Speaker C:

Is it harder, is it easier to get new customers for startup companies than it was three or four years ago?

Speaker A:

It's easier to target.

Speaker A:

So if you have a niche that's well defined and you're going after them, it's getting increasingly easier over time to find and target those people specifically rather than painting with a broad brush.

Speaker A:

It's more expensive to go after a narrowly targeted audience than it is to go after a broad audience on a per person basis.

Speaker A:

But companies can experiment with that and identify where the best ROI comes from.

Speaker A:

So I think it's getting more efficient over time to find your audience and advertise to them.

Speaker A:

But I think the audiences are also getting more suspicious over time and less responsive to advertising.

Speaker A:

So I'm not sure what's going to be the dominant factor going forward.

Speaker A:

There's always going to be companies that are overconfident and willing to overspend on customer acquisition without sort of rationally grounding that spending in terms of the real long term value.

Speaker A:

So it's dangerous to overspend on customer acquisition.

Speaker A:

I much prefer to see product market fit where the customers are coming to you and where they're organically talking to each other and driving lower cost customer acquisition for the company.

Speaker C:

One more super quick question just to kind of round it out, just zooming back out again.

Speaker C:

When you think about sort of the opportunity set today in the early-stage part of the world, specifically with regard to Gen AI and sort of point solutions, is that where you're spending a lot of your time right now, looking at those opportunities?

Speaker C:

Is this something where the wave has already passed, as far as companies that are going to be seizing those opportunities and raising money? Like, where is your focus?

Speaker C:

Just kind of putting this in the context of Bespoke and people who have long-term investment horizons.

Speaker C:

Where is the focus?

Speaker A:

I'm seeing a lot of opportunities in generative AI and spend a lot of time talking to founders in this space.

Speaker A:

I'm really glad to be in an investment firm that is sector agnostic and stage specific.

Speaker A:

There are opportunities across sectors.

Speaker A:

We're investing fairly heavily into healthcare and that includes both the sort of customer interfacing applications where language models are helpful, but it also includes machine learning applications where research is facilitating more advanced drug discovery and therapeutic techniques.

Speaker A:

In fintech, these kinds of applications are driving down costs and creating new types of services.

Speaker A:

In insurance and forecasting, organizational operations are streamlining dramatically with new language models that can help organizations to manage themselves.

Speaker A:

So there's a wide variety of applications, and we're seeing opportunities, particularly at the smallest, earliest stages, that are very inefficiently evaluated by the investment universe prior to the Series A.

Speaker A:

There's just not enough capital, and not enough sophisticated capital, that can distinguish between the great opportunities and the simple applications of language models that are inevitably going to disappear with relatively free, copious competition.

Speaker C:

Well, that's probably a good place to leave it, and we both have got to go.

Speaker A:

Yeah, I'd love to do this more.

Speaker A:

I feel like we could keep talking for hours.

Speaker C:

I know.

Speaker C:

I think we will definitely have the opportunity, but I know you gotta run, so.

Speaker C:

All right man, I'll talk to you later.
