IL44: From AI Hype to Transformative AGI ft. Aubrie Pagano
31st December 2025 • Top Traders Unplugged • Niels Kaastrup-Larsen


Shownotes

In this Ideas Lab episode, Kevin Coldiron speaks with venture capitalist and former founder Aubrie Pagano about what stands between today’s AI hype and a truly transformative AGI economy. Rather than treating AI as destiny, Aubrie maps the frictions that hold it back: hard power limits, fragile industrial data, and agents that still cannot coordinate with humans or each other. She explains why we may be a full capital cycle or two away from real AGI and why that delay is precisely where the best opportunities lie. The conversation then widens into the “Aquarius Economy,” a possible future in which human agency, not algorithms, becomes the scarcest and most valuable asset.

-----

50 YEARS OF TREND FOLLOWING BOOK AND BEHIND-THE-SCENES VIDEO FOR ACCREDITED INVESTORS - CLICK HERE

-----


Follow Niels on Twitter, LinkedIn, YouTube or via the TTU website.

IT’S TRUE – most CIOs read 50+ books each year – get your FREE copy of the Ultimate Guide to the Best Investment Books ever written here.

And you can get a free copy of my latest book “Ten Reasons to Add Trend Following to Your Portfolio” here.

Learn more about the Trend Barometer here.

Send your questions to info@toptradersunplugged.com

And please share this episode with a like-minded friend and leave an honest Rating & Review on iTunes or Spotify so more people can discover the podcast.

Follow Kevin on Substack & read his book.

Follow Aubrie on LinkedIn.

Episode Timestamps:

00:00 - Why agent coordination is still clunky and unreliable

00:38 - Intro to Top Traders Unplugged and performance risk framing

01:34 - Kevin sets up the Ideas Lab and today’s focus on AGI

02:59 - Aubrie’s background as founder, researcher and VC shaping her lens

06:04 - Why she wrote about AGI and the Aquarius Economy

07:21 - AGI as the last cycle built on labor scarcity

10:41 - The blockers framework and why AGI may be a cycle or two away

12:58 - Blocker 1: energy constraints and the race for firm power

15:17 - Where investment opportunities emerge in power and grid resilience

17:43 - Blocker 2: foundational industry data gaps and Moravec’s paradox

22:00 - Skilled trades as a bottleneck and a long-term opportunity

24:34 - Blocker 3: agent-to-human coordination and walled gardens

29:04 - Early agent marketplaces and trust mechanisms

34:41 - The Aquarius Economy as a framework for future societal structure

42:09 - Outlier groups, human agency and implications for investors



Copyright © 2025 – CMC AG – All Rights Reserved

----

PLUS: Whenever you're ready... here are 3 ways I can help you in your investment journey:

1. eBooks that cover key topics that you need to know about

In my eBooks, I put together some key discoveries and things I have learnt during the more than 3 decades I have worked in the Trend Following industry, which I hope you will find useful. Click Here

2. Daily Trend Barometer and Market Score

One of the things I’m really proud of is the fact that I have managed to publish the Trend Barometer and Market Score each day for more than a decade... as these tools are really good at describing the environment for trend following managers, as well as giving insights into the general positioning of a trend following strategy! Click Here

3. Other Resources that can help you

And if you are hungry for more useful resources from the trend following world...check out some precious resources that I have found over the years to be really valuable. Click Here


Transcripts

Aubrie:

That all needs to get worked out for these systems to be really smooth. Because right now it's very clunky to create trust and memory between systems. Then you put that on crack when it's like agent to agent, right? Because the agents, if there's not a human in the loop, it becomes that much more incumbent to basically have like the USB-C for AI and that just doesn't really exist yet.

Intro:

Imagine spending an hour with the world's greatest traders. Imagine learning from their experiences, their successes and their failures. Imagine no more. Welcome to Top Traders Unplugged, the place where you can learn from the best hedge fund managers in the world so you can take your manager due diligence or investment career to the next level.

Before we begin today's conversation, remember to keep two things in mind. All the discussion we'll have about investment performance is about the past, and past performance does not guarantee or even imply anything about future performance. Also, understand that there's a significant risk of financial loss with all investment strategies, and you need to request and understand the specific risks from the investment manager about their product before you make investment decisions.

Here's your host, veteran hedge fund manager, Niels Kaastrup-Larsen.

Niels:

For me, the best part of my podcasting journey has been the opportunity to speak to a huge range of extraordinary people from all around the world. In this series, I have invited one of them, namely Kevin Coldiron, to host a series of in-depth conversations to help uncover and explain new ideas to make you a better investor. In the series, Kevin will be speaking to authors of new books and research papers to better understand the global economy and the dynamics that shape it so that we can all successfully navigate the challenges within it. And with that, please welcome Kevin Coldiron.

Kevin:

Okay, thanks Niels, and welcome everyone to the Ideas Lab podcast series. Our guest today is Aubrie Pagano. Aubrie is a general partner at Alpaca VC, which is an early-stage venture capital firm. And before joining Alpaca, she was an entrepreneur. She built and then later sold the online apparel company Bow and Drape.

Aubrie joins us today to talk about a white paper she just published about the transition to artificial general intelligence, how it will and how it won't transform society. And also given her day job, what investment opportunities lie in the near and distant future.

So, I think it's a topic we're all wrestling with right now. Very timely. Aubrie, really excited to have you on the show today. Thanks for lending us your time and welcome.

Aubrie:

Yeah, I'm so excited to be here and talking about this Kevin, it's honestly what I've been talking about and thinking about for the last six months. So, it's fun to have more forums to do it.

Kevin:

All right. Well, you know, I've been wanting to do a show focused on AI for a while and I've been struggling to find kind of the right guest. So, when I read your paper, I thought, ah, this is it. And I think the reason is that you are coming at this from a perspective similar, I think, to most people listening to the show. You're not an AI insider, you don't build AI models, you don't study them as a researcher, but you do need to understand their impact in order to thrive personally and also professionally. And I think that's kind of what we're all trying to do one way or another.

So, perhaps could you start off by just telling us about your professional background, your experience as an entrepreneur and then how that led to, you know, where you are now?

Aubrie:

Yeah, of course. I will give you all my context. So, first, I've lived through both sides of technology and culture for a long time. As you mentioned, I built and exited a digitally native brand called Bow and Drape, which was the first mass customization company for apparel. So, think of it like a Build-A-Bear for women's clothing.

And so, we scaled that into over 800 department stores, we had shop-in-shops, and so just navigated a lot around the real world in terms of supply chain, manufacturing, real estate, retail, and consumer. And so, combine that with my experience: I have a really cross-disciplinary background, and kind of education as well, which is, I think, why I like writing so much.

So, I had studied history and literature and had training in kind of primary research. I also spent years consulting, before I ran a business, doing primary and secondary research at Fidelity Investments during the great financial crisis. And so, I just come at this from a very entrepreneurial and kind of consulting lens.

And then if you layer on that, you know, the last five years I've been investing after I exited my business, really in the foundational industries and consumer that I ended up building in. So, I have been investing behind supply chain, manufacturing, energy and so have built a track record and whisper networks (which we'll talk about later) where we started to have these conversations.

And so, while I'm not an AI expert, I'm obviously now in venture, seeing the front lines of all of this and really started to think about, okay, how do we think about this for culture? You know, that's really where we're investing. We're investing in the real world and how it runs and then investing behind culture. And so, I come from that background, from outside AI looking in.

Kevin:

So, your paper is titled Our Transition to AGI and the Aquarius Economy. And that kind of reflects two parts, obviously. One is kind of an analysis of the current and future impact of AI. And then the second is kind of more speculative. You're kind of imagining the contours of the future economic system, which you call the Aquarius Economy. And then you sort of work backwards to think about, okay, what might that mean for our lives and also for investment.

So, I thought let's start at the beginning, the first part. You say the transition to AGI is the last economic cycle built upon labor scarcity. And the transition is organized by blockers: things that stand in the way of AGI reaching its full economic potential. And that's kind of what caught my interest, because it's the blockers that create the investment opportunities. These are the problems that, if someone can solve them, carry a huge economic reward. So, let's maybe use that framework.

Can you start by telling us what you mean by the last cycle built on labor scarcity and how that's kind of directing AGI investment right now?

Aubrie:

Yeah. So, when we set out to think about the impacts of AI and AGI (Artificial General Intelligence) on culture and humanity really, we wanted to first zoom out and really say, okay, we don't actually know when this is going to happen. We know that there's a lot of change right now, and we know that the facts on the ground are astonishing, and contradictory, and kind of incoherent, and really existential.

So, what we think AGI has the promise of doing is really automating away some jobs, but making some jobs so cheap to do that, as you come down the cost curve of inference and compute, labor itself becomes infinitely accessible and no longer scarce. Whereas, over the prior industrial revolutions, we were sort of changing the nature of work, but it kind of had a finite cost in the sense that it was very human-centric, and technology was used as a tool, not as a replacement.


So, it's like we have this fact set on the ground that's like, okay, I think for most people it's confusing. It's like we're saying it's going to change the world, but we also are hearing a headline that there's going to be a bubble. And ChatGPT is cool, but are we really adopting AGI? And so, we said, okay, we don't actually know when this economic cycle for AGI will come to fruition.

It may happen tomorrow, which I think a lot of tech accelerationists are saying. But actually, as we dug into the data, and given, again, my background, which is much more in kind of the real world and the movement of goods globally, I don't actually think this is around the corner. I think we might be a capital cycle, at least, away from this.

And so, when we started to think about that framework of what are the blockers to AGI, it's sort of like: what are the blockers to the cost of labor and the cost of inference being so cheap and abundant that it becomes almost like a utility, that it becomes like the Internet, which I think is the premise of the promise of AI. And so that was the framework that we investigated, and where we saw that there were pretty substantial coordination problems that exist right now, before we achieve that.

Kevin:

I gotcha. So, let me see if I can reflect that back to you. As you were talking, I was thinking about when we had Philipp Carlsson-Szlezak on the show, the chief economist at BCG, and he was talking about the potential impact of AI on productivity and economic growth. And you know, you have some of these statistics in your paper too, but they're all over the place: AI is going to increase productivity by a little bit or a lot. But he said, look, hey, for AI to impact productivity, it has to replace labor at scale. And what that does is it raises real wages. And then people with higher real wages spend on other stuff, and it's the other stuff that creates the new jobs. That's kind of been the historical cycle of how technology gets embedded into the economy.

And so, that reminded me of what you were talking about. So, you're really saying, well, in some sense, what are the blockers for AI to get so cheap and ubiquitous, that it replaces labor at scale? Is that right? Have I kind of like reframed it in a reasonable way?

Aubrie:

Yeah, that's right. Like, what do we have to unlock for it to get so cheap at scale that it replaces labor? And I think where we take, I guess, a more pessimistic view is that we actually think, through that cheapness of labor, our comparative advantage, in terms of opportunity cost for humans, erodes a little bit.

So, I think in what you just quoted, if I heard it right, it was, oh, well, you know, humans are going to go work elsewhere and they're going to get higher real wages. We think that's true in parts, but then we also think that creates a lot of peril in the short and medium term because it's not actually clear where, you know, the tens of millions of knowledge workers go in the short to medium term. But overall, yes, it's like, how do we even get to the point where that's a problem? We think there are some real investment opportunities in the short-term.

Kevin:

So, let's talk about some of those blockers. The first one, you've already mentioned it, and it's something that's popping up again and again in the headlines: energy abundance. So, sort of summarize what the issue is. Why is that a blocker? And then, you know, putting your VC hat on, what are some of the opportunities for the companies that can unblock it?

Aubrie:

Yeah, of course, and again, the way I've tried to write the paper is to be very approachable to a wider swath of people. So, the shorthand of it is it takes a lot of energy for all of this compute to happen, for AI to run, for these servers to run. And so even if you bet that compute gets more efficient, that we have more efficient chips, which we think in some senses will happen, there's good evidence to say already that compute costs have gone down.

Even on that sort of linear curve that it's been on, it cannot scale without massive amounts of stable power. We just don't have enough power in the US to do it.

You know, there are a bunch of stats. Goldman Sachs has said that data centers will require 50% more global power within the next three years. Anthropic alone said that it's going to need like 50 gigawatts of new power over the next two years. Just to give scale, that's like 4x the peak demand of New York City. You know, in Northern Virginia there are data centers with a seven-year wait time. There are just so many examples of this. And so, obviously, this is a massive problem and blocker to us achieving what we think AGI can achieve. And so, we see a bunch of opportunities there to invest.

Kevin:

What would be an example of an opportunity? Are you talking about ways to bring power online faster or optimize the existing power that we have? How do you…?

Aubrie:

Yeah, it's like, D, all of the above. It's a wide open net, it feels like. You know, we need more firm power. So, I think nuclear and geothermal are the ones that are most interesting to us because they're energy dense by square foot. And we think of them as sustainable (not everybody does but we do). So, we just need more power online. We need better grid resiliency. The grid can't handle the existing loads. There is better demand response that needs to happen.

So, the idea of how we're utilizing the grid during peak times, potentially even peak shaving and, like, selling energy back to the grid when people aren't using it. There's a lot of efficiency within the system that hasn't been solved yet. And then there's even some compute optimization tech that needs to happen.

So, it's just, you know, we need a lot more energy either through the grid and through the utilities, and then outside of them and kind of behind the meter. And so, we've invested, it's one of the pieces that we've already started to invest behind but we actually don't think it's near getting solved. So, we think, over the next years there's a lot more that can be done there.

Kevin:

What about putting power stations on the moon? I've seen Elon Musk put that…

Aubrie:

Yeah, we've seen a lot…

Kevin:

Is that the ultimate sign we're in a bubble, or is that just me not being able to think creatively enough?

Aubrie:

Part of me thinks that that's almost like billionaires realizing that we're in a potentially post-growth environment where it's like, yeah, we literally can't bring enough power online fast enough, and so it's like, what are the wackadoo ways we can do it? You know, I don't know if that's the most cost efficient way to get power online, but it is a potential literal moonshot way.

And so, yeah, I think it more points to the fact that it's like people are going to throw spaghetti at a wall to figure out the fastest, quickest way to get there. Because I think without that, you're just not going to see AI advance quick enough. Because right now the models are not efficient enough to do as much as they promise to without more power.

Kevin:

Gotcha. Okay. Okay, so that's blocker number one. And the second one was fascinating to me. You call it ‘foundational industry resilience’. So, can you explain exactly what that means and why that's a potential blocker?

Aubrie:

Yeah, of course. Probably the example that I've used the most is what's called Moravec's Paradox, which is this idea that, in the real world, in the real economy, we talk a lot about all the things that we think AI and robots can automate. We see the fancy demos of all these robots cleaning dishes and, you know, it's very Judy Jetson.

Kevin:

I actually... Sorry to interrupt. I was at a restaurant at this hotel, airport hotel, last week and a robot brought out a birthday cake and sang Happy Birthday to the person next to me.

Aubrie:

Yeah, exactly.

Kevin:

That's like the embarrassing jobs the waiters don't want to do. We'll give them to the robot.

Aubrie:

They're going to be automated first. That's actually probably good utility for them. But yeah, and that's a perfect example, right? It's like something that is an easy, repetitive task that doesn't have a lot of edge cases is something that can be achievable in the short term, like making coffee or bringing out a birthday cake and singing a song. What ends up being true is that the actual human edge cases that happen are incredibly hard for robots to achieve.

Like the example of picking up a blueberry and dealing with the fact that it drops. That's something that, you know, my 14-month-old can handle. That's way harder for a machine to handle. And so, what we actually think is that's kind of one layer of the real world that needs to catch up. Actually, you know, there are edge cases that exist that are very, very challenging to overcome before you can actually automate away the vast majority of these jobs that take up a huge percentage of the economy.

The other piece is that you can't really have good AI and automation without good data, right? They all run on good data. And what's really interesting to us, and what we've seen again as operators and investors, is that these, we call them foundational industries, but we're talking, just to be clear, manufacturing, supply chain, agriculture, real estate, construction, these big physical movers of the economy, they have terrible data.

In manufacturing, 65% of manufacturers don't have usable data. And so, you know, I think it's very cart-before-the-horse to be like, oh yeah, we're going to automate all these factories. Well, in plant nurseries, for example, in agriculture, most nurseries still take inventory on paper. You can't really have robots doing anything when that's the case, because they don't have any data to train on. And so, there's this kind of resilience and coordination problem on the ground, for the real economy, that needs to catch up before we get to these really sophisticated general purpose robotics.

And on top of that, when you actually look at the numbers, there are real blue collar labor shortages that, right now, need to be solved before people are willing to invest - you know, the CapEx behind some of these robotics. It's something like 450,000 workers a month in the US.

Kevin:

Yeah, that was extraordinary. And, you know, I keep coming up against this. I think you say that many manual and skilled roles, plumbers, electricians, welders, etc., are indispensable: maybe not ultimately AI-proof, but certainly for the imaginable future.

And then I had a question. This is a little unfair maybe, but you know, I'm wondering if the kind of ‘investment opportunities’ there are more kind of like a personal kind, in terms of like career choice, rather than a kind of a structural kind of thing. I’m thinking about my kids, they're older now, they're in their 20s, but when they went to high school, the high school was like, hey, 95% of our graduates go on and get a college degree. You know, that was the selling point of the high school, the ‘college prep’ high school literally is what they call themselves.

And that's fine. But I'm wondering, you know, is that actually what you want to be doing now as a high school? I mean, you're a parent of a young child. I mean, do you start thinking, you know, maybe the track that we've all been told is the right track isn't necessarily the case. You know, trades are a more viable option. I don't know, it's a bit of a rambling question.

Aubrie:

No, it's a good one. And I think it's top of mind, right? I think, you know, college enrollment for young men has dropped. People are facing debt there, as knowledge work is the first place where some of these jobs are getting automated away - entry-level knowledge work. A lot of people are saying, well, that's what I went to college for, like, I went for a marketing degree.

And so, I do think it raises the question of what these degrees are good for and where people are going to earn a living. Which, you know, is a question for most people still. And so, one of the areas that we are excited to look into is just this skilled trade enablement. There are lots of layers to that: the education layer, the training and retention layer, the upskilling or reskilling layer. There's just a lot there, alongside all this other stuff around huge opportunities for data normalization and supply chain visibility. I feel like supply chain visibility has been a problem since probably before I was born. But I just think it has to be solved before you get to these really sophisticated visions that everyone has of general purpose robots that are, you know, fixing your plumbing.

Kevin:

Okay, well that's good, that's good. So, you're saying, hey, you know, we need a lot more skilled workers. There’s a skilled worker shortage. You know, the investment opportunity is there, is something that can help with that. And then also, you know, creating the data that is kind of like the first step in this kind of much more advanced automation.

So, the third blocker you talked about was you call it ‘agent and human coordination’. And I guess there's like coordinating between chatbots and people, or agents and people. And there's also coordinating between agents and agents, chatbots and chatbots, to use a simple example.

So, give me an example of what you see in terms of like an agent to human coordination problem and why that's a, from an investor point of view, why that's a blocker to AI.

Aubrie:

Yeah, and this is probably our most amorphous category. It's kind of a catch-all. We have kind of V1 of all these copilots and pilots (if you think of copilots being the tools that humans use, and the pilots being the agents that are doing the actual work), and there's just still a bunch of clunkiness to it, you know, if anybody listening has played around with this stuff. Probably in the agent-to-human space, to your question, agents still lack a lot of… There are a lot of trust issues. There are a lot of emotional resonance issues. Like, people don't feel connected to them, which sounds silly, but it's kind of part of the UX that makes them really hard for people to use. And then there's also a complete lack of interoperability.

So, if you want to go between your ChatGPT and Claude and have them interact, they're totally walled gardens; the memory layer between the two gets lost. So, if I want to transfer something over, I can't. My memory window closes, for example.

And so, as you're trying to build, with these tools, more sophisticated workflows and actually trying to automate away real tasks, you run into this clunkiness that we call coordination, that ends up blocking.

Kevin:

That's a really good point because, you know, I think about a lot of tech, you know, the tech, and this is the tech business model. This may be oversimplifying, but you know, the idea that hey, you rush to get scale, you get a ton of users in your network or whatever, but it's not really in your interest for them to go outside that network. If you're on Facebook, we don't want you doing another social network. If you're on a particular search engine, we don't want to make that… You know, that's part of the business model.

Aubrie:

Totally.

Kevin:

So, it sounds like you're saying (which totally makes sense), that's also part of these kind of agent models, ChatGPT, etc., let's make our agent as good as possible. But we don't really have any interest in making it interoperable.

Aubrie:

100%, and that's between models, and that's also between platforms, to your point. So, internally we use a lot of these tools, both to be efficient and to create alpha, but also just to learn. So, you know, we use Anthropic's MCP, we use Claude like that. We would love to be able to scrape LinkedIn.

We can't do that, it's a closed system.

So, we actually have to do all these interesting workarounds and data dump all of our connections, which they intentionally make very hard for you to do, because that data is those companies' IP and their monetization path. And so, it's made so much worse by AI, because AI is necessarily about opening up those context windows that you would normally search to do work. It just is not solved at all and is, intentionally, in some ways, obtuse.

And so, it's a big problem that's still… We've talked about it a lot, internally, as sort of like being on the back of a wave. Like it's changing so much and daily you see these hundreds of V1 application layers like just becoming obsolete.

Kevin:

Version one application layers, that's what you mean?

Aubrie:

Oh yes, sorry, the V1 of these different applications. So, take ChatGPT: OpenAI just launched a shopping app. I've maybe been pitched 50, you know, shopping agents. All of a sudden those are all made obsolete because OpenAI said, oh, we're not going to coordinate with anyone else, we're just going to build it ourselves.

And so, I just think a lot of these dynamics of what do the agents own? What do these different platforms own? Which data sources and protocols are going to be open to MCPs versus not. That all needs to get worked out for these systems to be really smooth because right now it's very, very clunky to create trust and memory between systems.

Then you put that on crack when it's like agent to agent. Because it's like the agents, if there's not a human in the loop, it becomes that much more incumbent to basically have the USB-C for AI, and that just doesn't really exist yet.

Kevin:

Do you see any examples? I mean you gave a couple examples, I think, in the agent to agent coordination section where you talk about kind of highly personalized agents.

Aubrie:

Yeah.

Kevin:

Can you maybe give an example, or kind of imagine what that might look like?

Aubrie:

Yeah, and there are some interesting ones. We started to look at some companies that are around kind of pitting agents against each other to see which one is… There's one called Yurp, there's one that we are looking at super early called Dialectica where it's basically, as agents become very good at fulfilling tasks and also have knowledge… If you imagine a world where these agents, which already exist, they've identified certain minerals and things that humans haven't before. Like if these agents become so smart that their knowledge actually supersedes humans, how do we trust them? How do we know that it's true? How do we know which one is right and which one to use?

There are interesting marketplaces, for example, that have popped up to kind of pit agents against each other, or agent marketplaces where you can vet out your agent, and people can test it, and use it, and pit it against their agent. That's kind of a new frontier that is still emerging to try to create better coordination and try to separate the signal from noise in terms of the feedback that we are getting from some of these agents. So that's one example of what we've seen.

But I still think it's kind of early days and I do think a lot of this may get solved by some of the bigger players too as they sort of align to what we call in the paper, like this big AI superstructure where OpenAI talks to Facebook and talks to Google. I don't know if they're all going to be closed gardens forever.

Kevin:

So, you think that, at some point, they realize, hey, it's in our economic interest to find, I don't know, an adapter, a plug that we can use to coordinate, to merge (if that’s the right word).

Aubrie:

Yeah, I mean, I think it's like the free market. Like, of course, Gemini wants to own all of the memory system because they already have your Google account, they have your Gmail, they have Android users, but, you know, Apple doesn't necessarily agree. And maybe Meta doesn't agree. So, it's like at some point someone's going to try to make a move.

And so, I actually think, as the free market moves toward these large businesses that need to show growth on top of these mega, mega, mega investments and valuations that have been garnered, it's going to necessitate people playing together, because otherwise the whole system becomes inoperable in reality. Yeah.

Kevin:

Okay, so maybe we can pivot and talk, get a little more, speculative is probably not the right word, but you do say that it's not a typical VC white paper. It's part investment thesis. And we've talked about that, but you say, hey, it's also part science fiction, with a little bit of philosophy in there. So, let's talk about that science fiction and the philosophy.

In the second part of the paper, I think what you were trying to say is, hey, let's think much, much longer-term about what the impact on society and the economy is going to be. You call it the Aquarius economy. And you say, well, we don't really know what the specifics are, but we can start to see the broad contours of that.

And then we can kind of walk backwards and say, well, in that world, what are the specific roles of people in society, and again, what investment opportunities are out there? So maybe, can you tell us a little bit about, first of all, just how you decided to do that? Was that always part of the plan or was it like, hey, there's no other way we can kind of really try to understand the very long-term impact?

Aubrie:

Yeah, that's a great question. And yeah, I can try to walk it back a little. So, as we were doing all this research we just went through, first we were like, okay, everyone's talking about AI and AGI, as we unpack this we're like, we actually don't know that it's here. We think we might be at least a capital cycle, maybe two, away from this really happening. So, if that's the case…

Kevin:

You've said that before, and so let me… I keep interrupting, but that seems quite important to me. So, when you say a capital cycle or two away, do you mean that we go through this investment cycle? It doesn't get us to the kind of full AGI we need. I don't know, another five years with another boom and then another one, is that what you're talking about?

Aubrie:

That's exactly what I mean, yeah, exactly. In the same way that if you look at the clean tech energy capital cycles, we had V1 and then we had V2, and I think we're on V3 of people investing in clean energy, where the promise of full decarbonization hasn't happened. And it's taken a lot of infrastructure, and government intervention, and a lot of deep tech work and research to move the needle. And we see this similarly, where we don't believe that all the investment that's gone in to date is enough.

We believe that we're riding toward a peak. It will retrench a little bit in terms of the markets pricing all this. And then we still think it has the real potential, maybe more than any other technological advancement, to create real value.

So we don't think people are going to just go away from investing in it, but we think there may be a little bit of a boom, retrenchment, reinvestment, reinvigoration that happens before we actually achieve all that we think it can achieve, just given all the blockers we just discussed. So, that's exactly what we think. And we don't actually know how long. We're not here to say, especially in our position; I'm just an investor who likes writing. I don't know if that's 5 years, 10 years, 15 years, 20 years into the future, and it's actually not my job to think about it. And so, that's kind of how we backdoored into this: how do we describe this to people? Because we don't actually know how long it will take society to fully evolve.

If you assume AGI is in some unknown future, and you assume that has real implications for, like, the cost of labor, for the shape of people's work, for the shape of how people spend their time, if that's all unknown, how do you start to talk about it in a framework that's usable? Because that just sounds like a big mystery.

And so, to us, or to me, the way that my brain works, which is how I wrote it, was, I was like, okay, you know, like I said, I was a Hist and Lit major, let's write about it in a kind of narrative format. Let's write about it kind of like a Sci Fi book where it's like, just imagine sometime in the future. It's almost like the Star Wars opening, where it's got like the big text going across the starry screen. Imagine some distant future where AGI has finally arrived.

And I think if you think about it that way, one, it's a little less terrifying because it seems Sci Fi. And then the second is, if it seems so narrative, it allows, I think, for better extrapolation because you're distancing yourself from any current biases you have on how the real world operates now.

So, that's kind of why we did it. We're like, let's just like think of a Sci Fi future and think of a framework for where we think the world might go. And then that becomes kind of like our language, which we use internally. It becomes our language to think about how these shifts in culture in the real world will happen. And so that's how we kind of backed into it.

Kevin:

That's cool. So, tell us about the sketch of that world. I mean, you talk about (I'm not sure if this is the right place to start, so, if it's not, let me know) two hierarchies. There's the ‘techno core’ and there's the ‘hegemony’. So, if that's the right place to start, maybe explain what those two things are.

Aubrie:

Yeah, so the way we think about it is, like I said, sometime in the future AGI is fully achieved. We are calling this the Aquarius Economy, which was a nod to the fact that astrologically and astronomically, we are shifting out of the Piscean age into this Age of Aquarius. So, sort of symbolizing that there's this new era of how the world works. And we think about it in terms of these big superstructures.

So, assume AI fully comes online, assume we've automated away a lot of work. We have these two superstructures. We have the ‘techno core’, which is what we called it, which again, is a little bit of a nod to Sci Fi, to Hyperion, if you guys have read it. And so, the techno core is kind of like the digital superstructure that's running AGI. It's like the powerful AI overlords that are like running the way that AGI goes through our work, it's the Judy Jetson robot at home. It is what is controlling all of our devices and modalities.

And then the other is the hegemony, which is basically like institutionalized humans. So, think of that as like the corporate, political, dominant, elite families and resource owners. And those two, the techno core and the hegemony, have a serious interplay. They sort of rely on each other to continue.

So, it's like the Elon Musks and the Jeff Bezoses, elite families on crack, where they're sort of owning and centralizing the power of AGI and they're running society. So, we see a sort of massive consolidation of power in a controlling techno state, is how I would describe it. So that's how we kind of set the stage.

Kevin:

Okay. And then within that techno state, you say there's some key, what you call, outlier groups. And you actually say, well, you could think of them as much as a kind of state of mind as actual physical groups. But can you identify who those groups are and what their role is in society?

Aubrie:

Yeah, so, the way that we thought about this is, okay, if you assume that society is kind of the hegemony, and the techno core is this superstructure that kind of wraps it, then there are folks who exist within this. The core differentiator (and this is kind of the thesis at the end for us, and why we started to talk about Aquarius) is this idea of what we call human agency, or human emotion. This thing that is uniquely human, this expression of human agency, is what these outlier groups all have in common.

And so, in some way they're sort of breaking out of the norm of the hegemony because they're expressing their human agency, which is really where we see humans' most survivable future: in this extreme expression of it. And where people don't express that, we actually think people will flail a lot, and that will cause some potential opportunities for investment, but also some problems in society.

So, the groups we've identified are, one, we call them the ‘nomads’, which are folks who kind of reject this centralization. They're very much about human connection, off the grid. They are nomadic, obviously, so they're folks who like aren't tied to one place. We almost think of them as kind of the new age hippies where they're thinking about the earth and connectedness to the earth and the rejection of the techno core and being fully sucked into the matrix, essentially.

Kevin:

Why would the techno core allow those people to exist? You know what I mean? Is it just that they're too annoying to get rid of? Or, in some sense, do the people, the uber powerful families and corporations, do they… I suppose they need people to continue to exist and reproduce, otherwise what is their power?

Aubrie:

Totally.

Kevin:

So, that maybe they have some interest in allowing dissent as long as it doesn't get too serious…?

Aubrie:

That's kind of the way we see it. It's dissent, if it's not too serious. And it's sort of like “off the grid.” So, it's sort of like, okay, no harm, no foul. It's like, okay, if I'm nomadic and I'm building, in Palm Desert, a totally off the grid living community. And we are espousing the idea that we are self-sufficient, we are self-organizing, and we are intentionally not connecting into AI. Like, you know, maybe it's just overlooked. So, that's, I guess, a real world example of it.

And again, I don't think these are meant to be taken exactly literally. But, okay, if that were true, we think there's enough people in the world. Even if you look at, you know… I think of Bluesky, and some of these outcroppings of even digital communities where people are like, we want to preserve the right to independence, we want people to preserve their data privacy. There's enough of that ethos that we think it will exist in some form in the future.

But you're right, yeah, if there's a nomad uprising, maybe the techno core will squash them. The second group we talk about are the gurus, which are... And maybe gurus… You know, it's the word we use. Guru kind of has a negative connotation; it's kind of like the Tony Robbins of the world, which I don't know is the best. But it's the people who we think are the individuals with the most authentic kind of human spirit and relational influence in society. So, think of these as artists, healers, athletes, these kind of superhumans in the sense of truly living their most embodied spiritual selves.

And so, we see those people actually elevating in terms of people looking to them for inspiration, their sort of ‘proof of humanity’, in that they're producing things that are authentic and not AI generated, not AI slop. There's going to be a premium for what those people produce because it comes from this very human-centric place, and it's the high art of that.

And so, we think that those folks will be a special outcropping and have kind of a unique place. And then the third that we'd call out, which we think is kind of maybe as big, if not bigger of a group, we call them incels, which is kind of tongue-in-cheek to what people talk about now as incels. But I think incels is also potentially a good word for them because it's really people who are spiritually, and socially, and physically, disconnected from the rest of society.

So, these are folks who are kind of victims of the hegemony, and the techno core, and kind of the technical nihilism that sets in, in some future where they're just really isolated. And we already see, again, for a lot of this stuff, we see tendrils in modern life. But we think, as there's whole generations of people who come up AI native, as AI and AGI and this kind of like elite techno ruling kind of oligarchy takes over, there's real implications for society and for young people especially. And we think that'll cause a lot of peril and some implications that will hopefully create some opportunities to help.

Kevin:

How do you go from that kind of sketch of a world to thinking about investment opportunities? It almost sounds a little bit like, you know, you're going from a very philosophical perspective to something that's very concrete. But I mean, maybe just give one or two examples of how that kind of thinking, at least, could lead to an investment idea. And then, are these investment ideas that one should be thinking about acting on now or is it like, hey, keep it in the back of your mind and wait to see how things develop?

Aubrie:

Yeah, that’s a good question. Yeah, and if I haven't lost anyone yet, if you're following me through, the way that we think about these… Maybe I'll answer your first question first, which is we think about, again, we think about this as like a shared language internally where, as we see opportunities, we're like, oh, that's Aquarius coded; oh, that sounds like a guru platform; oh, that sounds like this is like nomadic financing. Like we've started to use it as this, okay, if we believe that this is our future end state, these opportunities are reflective of that future.

So, that's really how we start to use these. We don't actually think we're there tomorrow, but we even see today opportunities that look and feel and rhyme with that potential future, and monetizing some of the opportunities with these outlier groups.

So, a couple examples. So, one, like we said, we think this hyper digital connection actually leads to hyper isolation for some of these incels. So, some of the things we've already seen are sort of interesting therapy models where it's like a hybrid human plus AI caregiving model. We've seen kind of new third space cooperatives to help people connect in person in a way that's been lost. We've seen things like sensory gyms, for example, where people learn to come in and like touch and feel and be and have contact.

These are things that we see today, in some form or fashion, being pitched to us. And we see that as kind of early innings to… We think some of it may be a little early, but we think some of it is spot on.

Another example is, let's say we think a lot about, we call them whisper networks, right? It's like…

Kevin:

Yeah...

Aubrie:

Oh yeah, go ahead...

Kevin:

That was the next question I was going to ask about, so I just blurted that out. But yeah, because you mentioned whisper networks early on in the conversation. So, I'm curious to hear, you know, what those are and why you think they're so important?

Aubrie:

Yeah, we think that, as we increasingly live these digital lives, there is what is available to us online, and necessarily then to AI, and then there are things that our digital presence cannot capture. And we call these kind of whisper networks.

So, they're networks of people, relationships, connections, that happen through serendipity, that happen through recommendation, that happen through human-to-human contact, that are very hard for AI to infiltrate. You know, maybe over time you use Mira glasses or Meta's glasses and they have a parser that integrates all those real-world encounters. But we actually think those whisper networks, and the way that the real world operates as far as human-to-human connection, are going to have a premium for a while. And so, we think about what that looks like in a future state.

How do you take advantage of the fact that, as more and more of this digital world gets commoditized, these whisper networks become increasingly important? You know, we've already seen platforms like this get pitched to us where it's like a marketplace for introductions.

There's a B2B company that got pitched to us where you can upload your connections, B2B companies looking for those LinkedIn connections can talk to you, and you get a bounty for connecting those companies to the right champion internally. So, that's one way to monetize these whisper networks. But we see other ways to do this, there may be these kind of…

Kevin:

Isn't that, in some sense, like antithetical to what they're trying to do?

Aubrie:

Yes.

Kevin:

Yeah, you're talking about serendipity, human contact. Okay, let's create a digital platform for it. I mean, it's…

Aubrie:

Yeah, in some senses, yeah. It's like, if you play that out, if you become known as the person that's monetizing and shilling your networks, are you that valuable of a connector over time? I'm not sure.

But again, I think it's early innings. That type of platform would not have been as interesting, I think, five years ago. Whereas now, I actually do think, okay, well, it is kind of a saturated market, and if we see all this playing out, the ability to leverage those networks is interesting.

So, the way we think about it is a little different. Something that could be interesting could be an encrypted reputation market, where maybe it's encrypted and anonymous, but there can be whispers about your reputation that show as social proof. Or maybe there's what we call a guru-as-a-service platform where you, as someone expressing your own authentic human knowledge, can rent that out and sort of itemize it, because that's something that's really hard to replicate.

So, things like that. Again, within this framework you can start to build a language around it and think about it. It doesn't solve the underwriting, but for us it helps us to think about the interesting ways that relationships in society and work will really change.

Kevin:

I want to talk, just to wrap up in the last few minutes here, and this is going to be maybe a bit of a downer given what we've just been talking about. But you had a quote in your paper from Ethan Mollick, who's an expert on AI at the University of Pennsylvania. He's written a book about AI and he says, “Assume this is the worst AI you will ever use.” And I have to say that that kind of got me a little bit riled up. I was a little… Why was it bugging me so much?

And I think what I eventually realized is that technically, I'm sure he's right. I'm sure this is the least technically sophisticated AI model you'll ever use. But is it really going to be the worst end user experience? Because we've seen this over and over again with technology that's great at first, attracts a large user base, and then just basically becomes extractive. And I don't mean that in a kind of like political way so much as it's just, yeah, we're going to make money out of this thing now.

And the quality of the actual product goes down. Like, Google search is terrible now compared to what it used to be. A lot of these social networks aren't very good anymore. They certainly don't connect to each other. And then, you know, just the general automated customer service is pushing the work down onto the customer as opposed to the company taking it on. I used to think this was me being an old man ranting. But the more guests I've had on the show, including very well-known economists (we had Diane Coyle talking about this), the time tax is real. It's a real thing. The work is being shifted from companies to their users. So anyways, that's my little mini rant.

But my question is, can that happen with AI applications? Could this be the best ChatGPT you're ever going to use? I don't know. I mean, just playing devil's advocate a little bit, is that something that we have to concern ourselves with?

Aubrie:

Yeah, I think your intuition is probably right. My sense from that quote from Ethan Mollick was that he was really thinking about, if you think about agents on the scale of AGI and what they can do versus like the AI slop you see online, they're going to get better. I feel like that was the intent of that quote and I somewhat believe that that's true.

And we've already kind of experienced that. Like, even if you look at Nano Banana and Gemini's photo rendering, just the model updates have been immense in terms of how photorealistic they are. So, part of me feels like, okay, Ethan, you have a point. But to your point, the second thing I would say is, yes, how could it get worse? Because the amount of AI slop that we see online is just so, so bad already. It's like, part of me… the enshittification…

Kevin:

What do you mean by AI slop?

Aubrie:

Oh yeah. So, AI generated content that is being posted all over. Because people have access to this tool, they are just generating, generating, generating. There are a couple of stats I've seen that have said that AI content has recently surpassed human content on social media networks.

So, whether it's your ChatGPT LinkedIn posts, whether it's your video of random anthropomorphized vegetables eating things as if they were humans, or Moo Deng videos of hippos, I've seen so much random stuff. Any cat video online now, I'm like, oh, that's AI. There's just so much of it. That's what I mean by slop.

It's just content for content’s sake being put out that is recursive, and non-original, and just volume. It has kind of infested these social networks to the point where now you hear some of these social networks are putting controls on it.

TikTok is putting labels on things and is actually tamping down on AI content because it's actually hurting engagement. So it's like the de-enshittification of these platforms is almost happening through the users themselves. And so that was another point I was going to make, where I was like, yeah, the slop fatigue, I actually think, is creating a real moment.

But to your point, that is a different point than, okay, if we assume that we have all these tools now for free, and we get to use them, and we get to generate AI slop, what happens when the literally gajillions of dollars OpenAI has invested need to start turning profitable after they do their trillion dollar IPO?

You know, there's going to be a point, which is when the enshittification happens. It's like when Google needed to monetize, when Meta needed to monetize to prove value to its shareholders: they start to turn on their users and make the platform work for advertisers. And so, there's that looming in the future too, you know.

And so, part of me thinks we're still a long way off because these platforms are so subsidized, they raised so much money that they can, like Uber, they can give us discounted rides for a long time.

Kevin:

Well, hey, that's a good place to wrap up. This is really fascinating, and thought provoking, and I appreciate you sharing your work and taking the time to talk to us. So, thanks so much.

Aubrie:

Thank you, Kevin. I'm honored to be on here, and thanks for giving me a place to share it.

Kevin:

Okay, so, you can get a copy of Aubrie's white paper on her Substack, which is called I'd Buy It, and also on the Alpaca VC website. So go out, get a copy. It's a fun read, challenging, and I think you can tell from the conversation that a lot of these topics are not yet being discussed enough in mainstream media. So, for all of us here at Top Traders Unplugged, thanks for listening and we'll see you next time.

Ending:

Thanks for listening to Top Traders Unplugged. If you feel you learned something of value from today's episode, the best way to stay updated is to go on over to iTunes and subscribe to the show so that you'll be sure to get all the new episodes as they're released. We have some amazing guests lined up for you, and to ensure our show continues to grow, please leave us an honest rating and review in iTunes. It only takes a minute and it's the best way to show us you love the podcast. We'll see you next time on Top Traders Unplugged.
