In today’s episode we talk to Craig Mundie, formerly the Chief Technical Officer at Microsoft and a leading advocate for responsible development of artificial intelligence. He joins Kevin Coldiron to discuss his book, Genesis: Artificial Intelligence, Hope, and the Human Spirit, co-authored with Eric Schmidt and Henry Kissinger. Mundie believes the timeline for AI’s impact on the economy is extremely compressed, with dramatic breakthroughs in energy and work happening in the next few years. We also discuss the long-term implications of AI-generated knowledge and solutions that will likely be beyond our understanding. What questions should we ask, and what preparations should we make, both individually and collectively as citizens, for this future?
-----
50 YEARS OF TREND FOLLOWING BOOK AND BEHIND-THE-SCENES VIDEO FOR ACCREDITED INVESTORS - CLICK HERE
-----
Follow Niels on Twitter, LinkedIn, YouTube or via the TTU website.
IT’S TRUE – most CIOs read 50+ books each year – get your FREE copy of the Ultimate Guide to the Best Investment Books ever written here.
And you can get a free copy of my latest book “Ten Reasons to Add Trend Following to Your Portfolio” here.
Learn more about the Trend Barometer here.
Send your questions to info@toptradersunplugged.com
And please share this episode with a like-minded friend and leave an honest Rating & Review on iTunes or Spotify so more people can discover the podcast.
Follow Kevin on SubStack & read his Book.
Follow Craig on LinkedIn and read his book.
Episode Timestamps:
00:06 - AI as a collaborator and the end of traditional human evolution
01:41 - Introduction to the Ideas Lab series and Craig Mundie
03:48 - How the book Genesis came together
09:52 - AI vs human learning and the rise of machine polymaths
17:40 - When machines discover things humans cannot understand
20:36 - The future of the scientific method in an AI-driven world
25:48 - From tools to a new species: redefining our relationship with AI
27:24 - Why AI adoption may outpace institutions and society
33:26 - Energy, infrastructure, and global competition in AI
40:55 - Wealth, labor, and the economic impact of abundant intelligence
50:51 - Alignment, control, and the limits of human oversight
58:25 - The ultimate question: do humans evolve or merge with AI
Copyright © 2025 – CMC AG – All Rights Reserved
----
PLUS: Whenever you're ready... here are 3 ways I can help you in your investment journey:
1. eBooks that cover key topics that you need to know about
In my eBooks, I put together some key discoveries and things I have learnt during the more than 3 decades I have worked in the Trend Following industry, which I hope you will find useful. Click Here
2. Daily Trend Barometer and Market Score
One of the things I’m really proud of is the fact that I have managed to publish the Trend Barometer and Market Score each day for more than a decade...as these tools are really good at describing the environment for trend following managers as well as giving insights into the general positioning of a trend following strategy! Click Here
3. Other Resources that can help you
And if you are hungry for more useful resources from the trend following world...check out some precious resources that I have found over the years to be really valuable. Click Here
For the first time, we'll have a collaborator in these machines that will give us the ability to understand ourselves, including our biology and perhaps even our thinking, to a degree that has never been possible before. But it'll also create the opportunity for humans to take control of our own destiny.
I often tell people that human evolution as it has been for all time is over. That in fact we're changing the environment in which we live at a rate that standard evolutionary processes don't adapt to.
Intro:Imagine spending an hour with the world's greatest traders. Imagine learning from their experiences, their successes and their failures. Imagine no more. Welcome to Top Traders Unplugged. The place where you can learn from the best hedge fund managers in the world so you can take your manager due diligence or investment career to the next level.
Before we begin today's conversation, remember to keep two things in mind: all the discussion we'll have about investment performance is about the past, and past performance does not guarantee or even imply anything about future performance. Also, understand that there's a significant risk of financial loss with all investment strategies, and you need to request and understand the specific risks from the investment manager about their product before you make investment decisions. Here's your host, veteran hedge fund manager Niels Kaastrup-Larsen.
Niels:For me, the best part of my podcasting journey has been the opportunity to speak to a huge range of extraordinary people from all around the world. In this series, I have invited one of them, namely Kevin Coldiron, to host a series of in-depth conversations to help uncover and explain new ideas to make you a better investor.
In the series, Kevin will be speaking to authors of new books and research papers to better understand the global economy and the dynamics that shape it, so that we can all successfully navigate the challenges within it. And with that, please welcome Kevin Coldiron.
Kevin:Okay, thanks Niels. And welcome everyone to the Ideas Lab series here on Top Traders Unplugged. Our guest today is Craig Mundie. Craig spent 22 years at Microsoft where he was the Chief Technical Officer. He also served as Chief Research and Strategy Officer there, as well as Senior Advisor to the CEO. Craig has worked extensively in the public policy arena as well, including spending eight years on President Obama's Council of Advisors on Science and Technology, and he's currently co-chair of the Track 2 dialogue with China on Artificial Intelligence.
He's here today to talk about the book he co-authored with Eric Schmidt and the late Henry Kissinger called Genesis: Artificial Intelligence, Hope and the Human Spirit. For me, it's one of the most thoughtful analyses of the long-term challenges and potential of AI that I've read.
And I'm super excited to bring some of those ideas to everyone listening today. So Craig, we really appreciate your time. Thanks so much for joining us and welcome to the show.
Craig:Thanks, Kevin. I'm very happy to be here.
Kevin:All right, well, the book is called Genesis. So I thought perhaps we could talk about the genesis of the book.
How did the project with Dr. Kissinger and Eric Schmidt (and for those of you who don't know, Eric Schmidt was the CEO and chairman of Google) come together? How did the decision to write a book about AI come about?
Craig:I first met Henry many years ago. And so for 26-odd years, you know, we both did some work together and collaborated and spent quite a bit of time talking about whatever the issues of the day were.
Henry came to a meeting that I had gone to every year. And one of the topics at that meeting, which Eric and I had collaborated on in putting the program together, included a discussion on artificial intelligence. And so Henry was there and he listened to the discussion.
And at the end of which, you know, he was really struck by the profound implications of the arrival of machines that would exceed human capabilities.
And at that moment, in essence, he decided to spend the rest of his life, and for many years he remained focused, on trying to help people understand what the effects of that were likely to be and how we might think about dealing with it. And so he spent quite a bit of time, a year or more, really getting educated. Henry was not a technologist at all.
You know, he was trained in philosophy and history. He was a brilliant strategist and a good writer, but technology wasn't his thing.
And after a couple of years, he and Eric, who he had known also for some period of time, got together with Dan Huttenlocher out of MIT and decided to write a book about AI.
And that book focused more or less on what they saw as the potential challenges of artificial intelligence and a little bit about the potential benefits. But it notably didn't go very far in trying to talk about, well, what could we do about it?
And it turned out, in parallel with all that development, I had in my retirement from Microsoft started to work with a variety of companies, one of which turned out to be Sam Altman's OpenAI in its early years.
And as a result of that involvement, I personally became very interested, as Henry had, in this question of how do you deal with the emergence of super intelligent machines? And I had spent quite a bit of time thinking about it. So Henry asked me to edit that first book as they were completing it, and I did.
And then in one of our bi-weekly conversations, I said to him, you know, this is good. It helps people see the benefits and potential risks, but it doesn't tell them what to do about it. And I've been thinking a lot about that.
And Henry and I started talking about some of those ideas. And he said one day, okay, now it's time for you and I to write a book.
And Eric was sort of interested in adding some things, you know, to that discussion that hadn't been in the first one. And I'd known Eric actually for 40 years.
You know, when we were both young guys in the tech industry, we had come together in a collaboration between Sun Microsystems and my company, Alliant Computer Systems, at the time.
And so we had known each other and interacted, including, as you pointed out, on President Obama's Council of Advisors on Science and Technology, for eight years. And so we embarked on the book that became Genesis. And in the end we finished it just after Henry passed away.
Kevin:Yeah, the first section of the book is quite a moving tribute to Dr. Kissinger. And I had to chuckle.
I think you say that when he first learned about computers, he wanted to get one for himself, and the CIA wouldn't let him have one.
Craig:Yeah, yeah. I mean, things have changed a lot.
Kevin:Yeah. So I'd like to start the conversation.
It's actually maybe a little bit heavy, but I think quite important: the concepts of knowledge and understanding. And I suspect for most people listening, myself included, their main interaction thus far with AI has been asking it questions.
And in fact, I went to a commencement ceremony just last week where the speaker said, hey, we're now approaching a point where almost any question we can ask could potentially be answered. And so we need to train our students to be good question askers. And that made sense to me. But your book takes this all quite a bit further.
In the intro, which was written by Niall Ferguson, the historian, you say that in this new age, rather than working forward from questions posed by humans, humanity confronts answers provided by AI to questions that no human ever asked, and that ultimately this separates knowledge from understanding, human knowledge from human understanding, in a way that we really haven't experienced before. So I know that's a lot, but I think it's kind of foundational to what's going on in the book.
And I was wondering if perhaps you could start by maybe contrasting how human knowledge advances with how AI knowledge advances and how that might leave us with, I don't know, solutions that work, but we don't know why they work.
Craig:Well, I think in some sense we built a machine that has a brain that to some significant degree is modeled after what we think is the way human brains work. So in that sense, it isn't necessarily the case that the machine learns differently than humans.
The difference is, as we also talk about in the book, is that the capability of the machine to learn is wildly better than humans for a couple of reasons. One is that the circuits of our brain, you could think, sort of have a clock rate of about 30 hertz.
And while they're very, very parallel in that activity in your head and consume little power, they still can't cycle through things that quickly.
And when we build these things out of computers, as is true with, you know, any other computer, the clock rates on these things are measured in gigahertz. And so you're looking at, you know, 10 to the seventh or eighth, you know, times faster in terms of the rate at which it does things.
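As a rough check of that arithmetic, using the illustrative figures above (roughly 30 Hz for neural circuits and a few gigahertz for silicon; both are order-of-magnitude assumptions rather than measurements):

```latex
\frac{3 \times 10^{9}\ \text{Hz (a typical silicon clock)}}{3 \times 10^{1}\ \text{Hz (the brain's rough ``clock'')}} \;=\; 10^{8}
```

That is, seven to eight orders of magnitude, which is where the "ten to the seventh or eighth" figure comes from.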
I think the other thing that people don't stop and think as much about is that if you think of your brain as a computer, the input and output systems to our brain are very, very weak relative to those that we have for the computers. So, you know, our input systems are largely our eyes and our ears and, to some lesser extent, perhaps touch.
And those things are both low resolution and to some extent low speed. And the highest bandwidth of those is our vision. And even that is an abstracting system.
You know, I mean, our eyes are built in a way where they see things and pre-process some of it, and some abstract part of it gets passed into the brain for reasoning. And the resolution of these things is kind of limited.
On the other hand, machines enjoy the benefit of incredibly sophisticated high bandwidth sensors. You know, cameras are higher resolution than our eyes are.
And humans, courtesy of regular old computers, have spent decades now finding digitized representations of all the things that we sense ourselves.
And on our output side, you know, those things that we have produced, whether they're images or writings, have all been recorded and now digitized.
And so this gives the machine a huge leg up over humans in that with its speed and resolution of input systems, it can go through, in a sense, the same learning process that humans do, but it goes through it, you know, many decimal orders of magnitude faster.
The effect of this is something we talk about in the early parts of the book, by introducing or reminding people what polymaths are. You know, there have been human polymaths, people who enjoy some extraordinary ability to know and integrate information, but usually only across a small number of domains. And even those people are viewed as special. What these machines are is essentially every machine a polymath of extraordinary capabilities.
In fact, it knows everything it's been able to ingest about every domain. This actually produces sort of an unlimited polymathic capability.
And as humans have seen in history, a lot of times breakthroughs come because an individual who is polymathic has an insight that no other group or specific individual has had before. Why is that? It's because they're able to integrate, inside one brain, across all the things they're expert in at the same time.
A normal human, by contrast, would look at a problem fairly narrowly. They might be an expert in one domain, but if you ask a question in another domain, they're completely ill-equipped to deal with it.
So now you've got machines that say, no matter what the question is, when I try to answer it, I'm integrating across all the knowledge that, that I learned about this.
It's important to realize that, you know, the way information is stored in our brains and in these machines is not the way it's stored on your personal computer or cell phone.
You know, in those cases, we take these digital representations and we store them as files, bits, and then we know how to interpret those bits. But when the machines learn, or when the brain learns, we take in this information and the brain slices and dices it and distributes it across your brain in a lot of different piece parts.
And so recall is essentially about reassembling from those pieces what it thought the original was, but it doesn't actually have a copy of the exact original.
And so because of that, you can now ask these machines a question and it will answer it by itself, integrating across all the knowledge on which it's been trained. So these are the ultimate polymaths.
I think that this is super important to ponder, because humans will not, at least as we exist as a species today, ever be able to ingest in our brains the same amount of material that the machines do. And as a result, it'll be commonplace for the machines to have insights that humans can't get.
You say, well, why don't groups of humans have those insights? The answer is it's very hard to meld the knowledge of multiple humans.
In part, it's that the communication cost grows as the square of the number of people collaborating. So that's why small teams of people can make progress on things.
But if the teams get bigger and bigger, the cost of communicating among them and then trying to integrate what they collectively know gets harder and harder. And so the elegance of having all the knowledge in one brain is that you don't have any of that communication cost.
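As a rough sketch of the arithmetic behind that square law (the standard pairwise-channel reading of the point, not a formula from the book): with n collaborators there are n(n-1)/2 communication channels:

```latex
\binom{n}{2} \;=\; \frac{n(n-1)}{2}, \qquad \binom{5}{2} = 10, \qquad \binom{50}{2} = 1225
```

So a tenfold increase in team size brings roughly a hundredfold increase in coordination overhead, while a single brain, human or machine, pays none of it.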
And the associative nature of the way your brain thinks and integrates information is now manifest in machines that have a lot more information. So I think in the end, the greatest gift of these machines is the things they will be able to understand or develop that humans never would.
But it also creates this confounding feeling that says even if they tried to explain it to us, we might not understand.
Because, in fact, you know, it ranges across so many areas that even if it was explained, you might not understand exactly why it all comes together that way.
Kevin:Yeah, I think you say at one point in the book that maybe universities in the future will be places where people get together and try to figure out, try to understand some of the things that AI is producing.
Craig:Yeah, I mean, we've seen this in the earliest AI days, not the early days of the 50s, but I mean, of the current era, for example, when you had AlphaGo and AlphaGo Zero, which were developed by the DeepMind people and beat the world champions in Go. And of course, the machine that was trained on the people, the historical Go players, eventually learned how to beat the best Go players.
But then they actually developed AlphaGo Zero, which was a machine that didn't look at how humans played Go, all right? It just played against itself at a rate that humans could never achieve.
And so in a matter of weeks, it had tried more Go games than all the humans in history had played. And lo and behold, it learned new ways to think about how to play the game. So now human Go players play against the machine.
Or look at what the machines did when they played each other to get inspiration for ways that they can play the game at a level that they didn't before. So you just think that is going to get extrapolated into kind of everything we do.
And so I think universities not only may have to come together to try to study in detail the insights that the machines might have had; the whole model of what a university is, and more broadly what education is going to look like, is in question in a world where literally every person from the youngest age is going to be afforded the opportunity to have an unlimited Socratic teacher. Then what does it mean to go to school at any level?
Kevin:Yeah, there's so many different directions I could take this. And I definitely experienced that just in the last year or two where I have students working on projects at Berkeley.
And they are enormously complicated technically, but the students can do them because they now have access to data and code; but they then have to get AI to explain the output to them, which I'm not really quite sure how to handle.
But I wanted to ask a question about the scientific method and where this leaves it. You know, that's how we've progressed human knowledge to date: forming a hypothesis, coming up with a test, and running the test. Is there still a role for that process long term?
Craig:Well, the process may be a good process.
It just may be that coming up with the hypothesis might be done by the machine, you know, and then the question is, in the short term, how do you do the test?
You know, as we've seen in many other areas, we've found that it's oftentimes cheaper, at least, to do these tests in silico than in the physical world. And the computing capability is still emerging, not just with these novel accelerators for the AI itself, but in terms of computer architecture generally.
And I think by the end of this decade, we'll also see the emergence of utility-scale quantum computers.
And so I think that in the physical sciences you're going to see a capability to go beyond what humans have even been able to speculate about to create a hypothesis. And it'll allow us to explore things that have been impenetrable for humans.
And, you know, again, I think that that's the gift, but the question is, what is our role in relationship to the machines in doing the science?
And as the machines get more agency, and in fact as the machines in the next few years are blended with sophisticated robotics, the machines will then actually, physically be able to live in, experience, and understand the physical world to a degree that they don't currently do. You know, we complain about the machines making simple mistakes at times.
I think one of the reasons is that they only know the physical world by synthesis across all the artifacts that humans have given them to represent it. They're unlike a baby that grows up actually experiencing the real world, touching it, seeing it move, et cetera.
And I think that there's still a fidelity question between what you can learn from just ingesting images from various perspectives versus kind of living in that environment. But even that will essentially advance as the machines have a physical embodiment.
And so when they can act in the real world, then even the question of doing the experiments may be able to be done by the machines as well.
So their ability to master both computational modeling, the likes of which we haven't seen, the ability to write the code to do these things quickly at a level of complexity that humans might struggle to do. All of these things represent a new way to think about forging a path into the future.
But all of these really require, ultimately, I think, an inversion of the way that humans think about this. In the book, we talk about our relationship with these machines going through three phases.
Okay, the first phase we call the tool phase. And that's kind of what we're in now, mostly. And why is that? Well, because every invention humans have ever had, it was just a tool.
You know, it either facilitated our mechanical capability or it facilitated our intellectual capability. But this technology is the first one in history that doesn't stop at being a tool.
And so the second stage is where we kind of recognize that what we're doing is we're birthing a new species. It just isn't biological.
And at that point, you know, that's sort of the big opportunity in my mind for humans is to figure out, well, what do humans want to be as we go forward?
And for the first time, we'll have a collaborator in these machines that will give us the ability to understand ourselves, including our biology and perhaps even our thinking to a degree that has never been possible before. But it'll also create the opportunity for humans to take control of our own destiny.
I often tell people that human evolution as it has been for all time is over. That in fact, we're changing the environment in which we live at a rate that standard evolutionary processes don't adapt to quickly enough.
And it's why we have many of the diseases we have today and other planetary scale challenges. But in the future, we should be able to design these things out. We'll design out diseases instead of trying to think about how to treat them.
And that brings all kinds of long-term moral and ethical issues about how we would manage this transition. But I think those are the kind of problems that our academic community is going to have to start to contemplate.
Kevin:You draw the analogy with evolution a couple times in the book. One with respect to speed, which you just alluded to, and the fact that if you look at the human age on a geological scale, it's just a blip.
And essentially the AI timescale, on the human scale, is also going to be a blip. Things are going to move at lightning speed.
And I guess one question I had is we had a venture capitalist on the show a couple months ago and she wrote a really thoughtful piece on the economic impact of AI. It was called the Aquarius Economy. And you know, she, she wrote about what she sees as the quote, unquote blockers to AI.
So there's like, there's the potential, we understand the potential. Here are the practical things we're seeing on the ground that stand in the way of that.
And she ended up concluding that we're probably a couple capital cycles away from, you know, fully realizing AGI. So I guess I'm interested in realizing
Craig:the capability or fully realizing its benefits,
Kevin:not the capability, but implementing it in the real economy.
And that was going to be my question to you, literally: do you see AI capabilities running well ahead of their actual impact on day-to-day life?
Craig:I'd say yes.
I think that even the machines we have today have, to some significant degree, run ahead of our institutional capability to capitalize on them.
And part of the reason, and it's different in different countries, is that in the United States especially, a lot of the popular coverage of artificial intelligence over the last few years has basically focused on the downside risks.
Thoughtful people identified that there were downside risks and ultimately, quote unquote, existential risks, and that has resulted in those being the bugaboo that everybody seems to worry about. Are we going to become the pets of the machine?
But it's interesting, you know, because of my interaction with the Chinese in this dialogue and otherwise: you don't find the same focus on these long-term downside risks in China.
You know, I don't remember the exact numbers, but there were some polls recently that said the favorability of artificial intelligence among the general population in the United States was in, like, the low 30% range or something. And in China it was in the high 80s.
And I think what you're seeing there is evidence of how the media works, how it's sort of shifted away from presenting facts and news cycles and is essentially selling sensationalism. Now, that's not new.
It's been true in various ways all the way back. And unfortunately, I think we're seeing that in this highly commercialized, commoditized media environment in this country and the West in particular.
Which isn't to say there aren't real issues. But one thing you might worry about is China, whose governmental focus and popular focus are both: hey, this is great, let's apply it as fast as we can, as broadly as we can, and we'll get the benefits. We don't need AGI; we're getting plenty of benefit as it is.
If we happen to get to that point, that's even better, and that may be important for strategic competition, but, you know, we're not waiting around.
And so, you know, when you look at a country like China, which over the last 40 years that I've been visiting there has gone from a pretty backward economy to one that is clearly a contemporary of the US or any of the Western nations. And they've done that on the back of riding this technology wave through computing, cell phones, networking, et cetera.
And in a weird sense, they seem better mentally acclimated to the idea that, you know, AI is the next big thing and they're just all over it.
And so I think the US is not doing itself a service by wringing its hands to the degree that we do about the long-term risks and not encouraging adoption at the maximum possible rate.
Kevin:Yeah, it's interesting you bring up China because I was listening to another podcast and they were talking about the attitudes toward AI in India and it was very similar. You know, it wasn't this fear, it was more, hey, this is the future, it's exciting, it's new.
Craig:So it's the latest thing where technology allows them to bypass what was the legacy view of good. And you know, I mean, it's like cell phones versus landlines.
I mean, I remember, before the cell phone, even the statistics at the UN on national development had metrics like landline phones per hundred population, right? And those things were still the measure of goodness after the cell phone arrived.
And those countries said, eh, we don't need those landlines, we're just going to have cell phones. And that was the end of it.
And I frankly worry a lot about the United States right now because each time we've seen these things come, we oftentimes see either regulatory capture or a lack of policy foresight on the part of the government, in what has become, I'll say, a somewhat dysfunctional congressional environment with a lack of long-term planning. And it turns out that countries like India and China have long-term plans; they don't tend to blow them up every election cycle.
And they recognize that their success in bringing literally fractions of a billion people from poverty to Western standards of living in less than 30 years has come from the aggressive adoption of the latest available technology. Those people remember what it was like before that, and they're moving ahead.
In the US and Europe, I think we have an aging population and a government that is, I'll say, struggling to have the foresight, and the ability to act on it, to make policies that would promote this. And I think that represents a risk in terms of global strategic competition and what I think is likely to be the emergence of the next world order.
Kevin:You say that even though all the headlines are about kind of an unprecedented rate of capital investment, it's all private, right? But I mean, isn't that a good thing?
The private sector moves faster, is more nimble. Isn't that kind of the source of the US's strength?
Craig:It is, but it's not as unique as it was before.
And so when you get private investment coupled with aggressive government policies, then you may be able to go even faster. And I'm involved in a number of projects, fusion energy being one, which we think is pretty imminent.
And the government still continues to say, well, we think it's 20 or 30 years away.
And so the same thing happened with solar and wind and others where it was all invented on the back of private capital and perhaps academic research, largely in this country. But in the end we weren't prepared as a nation to capitalize on it from a product point of view.
And a lot of that does track back to policy and regulatory capture by incumbents.
Kevin:It does. And this is going off on a bit of a tangent, but it's an area that I'm interested in.
I mean, like you said, the solar industry was invented in the US, and now China is basically the dominant world leader, and that gives them access to a fair amount of energy at zero marginal cost. And one of the big blockers of AI is energy.
So is that a strategic advantage to China, in that they've got access to this source of energy that we've not allowed ourselves to develop, or is that oversimplifying?
Craig:Well, I don't know that it oversimplifies it, but I think China, I, I would say writ large has known that it was in a deficit position relative to energy for a long time. And they've taken on an approach which says try them all.
And they've had the manufacturing capacity to support those which were already manufacturable, like wind and solar.
But if you look at it, they're simultaneously building three different types of fission nuclear plants and they've got a very aggressive government funded program around fusion. And you know, if you look at the United States again, I think that the breakthroughs in this are all being funded privately now.
But absent a policy that says we want to deploy these, and in fact be able to sell them around the world and everything else, we may find that we could invent it here again and then, before we capitalize on it broadly, it'll escape us.
And so you need the rest of the infrastructure: financing, adoption. If you take the United States today, I think it's really interesting to look at what the private companies are having to do to get the electricity for their data center deployments. Right. The country has no capacity, using all of its classic methods, to really give these people what they need.
And we have the best power in the world, but the grid doesn't have the capacity to haul it and we don't have the generating capacity today.
If you walked up with a brand new power plant in the United States and said you wanted to connect it to the grid, the mean wait time for an interconnection is seven years.
And these are the things that you can't tolerate if you want to let the capital that may be flowing, and the ideas that still come from the entrepreneurial activity here, produce these things. If the ability to receive it isn't there, that's problematic.
Now what's happening, and it's not uniform because this is all state by state, is that in the states that allow behind-the-meter powering of your data center, you're looking at these guys buying up the world's gas-fired turbines and other things, slapping them on the back of their data centers, and saying, fine, we'll bring our own power. And I actually think that is the right solution. But it was not policy foresight that led to that.
It's in fact the inability of the current energy infrastructure to meet the needs of the companies.
And of course when you move outside the United States you could say, well then we're going to have to bring our own power everywhere because no other country is going to be able to provide the power, save maybe China. This is where I think there needs to be a lot more long term consideration of how are we going to solve these problems.
Personally, I think that the whole power thing is going to turn out to be a red herring simply because fusion is going to arrive in the next few years on a commercial basis even though it's not expected. And AI was not expected either. And if anything the AI will only help accelerate the arrival of all these unexpected things.
And I think this is the thing people are not really wrapping their heads around. When Dario Amodei wrote his paper about a year ago, in October, "Machines of Loving Grace", he basically predicted that the advances we would have expected to take the rest of this century will all be done far sooner.
And so all these things that were historically imponderable in economic or technological terms, or in the ability to deploy, may get so dramatically accelerated that the failures to plan become more and more obvious.
Kevin:So if we're talking about something as extraordinary as that, then one of the potentials of AI, from an economic perspective, is eliminating labor scarcity and creating what you guys call a world of abundance. And that then becomes a question of how all this wealth that gets generated gets split up.
And you do talk about this in the book: who gets what? And certainly there is a risk that it's a few large corporations that control it and have most of the wealth.
I mean, do you have a view on how we ought to think about distributing, or perhaps redistributing, this power and wealth that gets created?
Craig:Well, I probably tend to think of this more like creating electricity, all right? I mean, at the beginning there were only a few people who made the electricity.
And then people started to realize, oh, well, I should change the way I do everything.
I mean, if you go back to factories, when electric motors were created, people put a big one in to drive the big belt that had been turned by the water wheel, all right? But they left all the pulley systems there to run the factory.
Eventually somebody realized, wow, you know, I have all these little electric motors and I could put one on everything and I don't have to keep the factory looking the way it did for water power.
And I think we're at that kind of stage now, where it's hard for people to imagine doing things as differently as we ultimately will, and where the ways in which people make money will diffuse more broadly. I mean, another analogy is sort of the model of platforms and applications that brought you computing as you know it today.
I mean, it started with the PCs: Microsoft and Apple, to some extent, created platforms. They had some killer applications, all right? Those drove diffusion.
Once they were diffused, the platform got taken up by literally millions of developers. They produced the products, and people consumed those products.
And I think you're going to see that same phenomenon here: you'll probably see a handful of major platforms, which are in fact the large models themselves.
And however they continue to evolve and gain capability, they're already doing, you could say, the killer apps of this era, which are things like ChatGPT, you know, the chatbots, the video generators, these few things that everybody seems to have an appetite for, either in business or in the consumer space. They're what drive diffusion of the platform.
Once the platforms are out there, then people will start to be inventive and try to figure out, well, what do people want to buy? Now, you said that these machines may displace labor. You say it may commoditize labor, but that comes largely through the robotics aspect.
I think the other thing that's going to happen sooner is it's going to commoditize intelligence. And that's the part that's never happened before.
And that's why many of the people who may get displaced are not essentially the people on the assembly lines yet, although that'll come.
But the junior-level intellectual jobs are suddenly finding it harder to get employment, in part because those that are experimenting with this may not have it all figured out yet, but they can certainly see the writing on the wall, and they don't want to continue to build up a capacity that they think is largely going to get displaced.
And it's why I'm sure even at Berkeley there are a lot of people who graduated last week that may be struggling to find jobs, even from the most elite schools, compared to what they've seen in years gone by. So you could say there's ultimately no class of human work that will be unscathed in this transformation.
I think it becomes almost a philosophical question as to how we think we should adjust. Kissinger, of course, shaped our collaboration on the book.
Henry was a historian and a philosopher by training, and you can see his influence particularly in the early parts of the book, because we try to approach the thing from a historical and a philosophical point of view, including in the anecdotes that are cited and other things. And this reflects Henry's big concern for a long time.
In fact, it was true even when I first met him 30 years ago almost, and we talked about personal computing and its emergence in the home and your car and your media and everything else. He said, wow, this seems like it's the biggest thing since the printing press.
And he says, but I have one big concern, which is it took 300 years for the printing press to completely transform the society of Europe. But this is happening a lot faster. And so I worry that our institutions can't adjust gracefully to this magnitude of change.
That was a prescient comment. And as a result, our institutions are going to struggle to deal with this. And so one big question for countries is how well their leadership focuses on preparing for the transformation.
This was one of the things Kissinger really harped on both in his conversations with the Chinese leaders as well as the US leaders, which was your legacy won't be these things you think are important today.
Your legacy ultimately will be determined by how well you manage your society's transformation into a world where AI is essentially a complete partner in our lives and work.
In the book we talked about the fact that we thought ultimately human dignity would have to be reconceptualized because so much of it has always attached to your work.
You know, whether it was raising kids or working in the workplace or the fields or whatever it was, how you did that and how you provided for yourself and your family, that was a huge part of what your dignity attached to. And work is going to get redefined.
And while I think we may see a new economy emerge, with a much smaller number of people at the top, but maybe not strictly a few, that's the platform-and-apps distinction.
Even today, the world only has a handful of computing platforms, but it has literally millions and millions of applications that essentially are the way that the benefit of those platforms is realized. I think we're going to go through another version of that, but it's hard for us to imagine it.
But if, in doing so, the number of high-intellect people needed shrinks, and ultimately the way things get manufactured is much, much more highly automated and the development much accelerated, then the displacement can be expected to be quite severe. And so, one of the frustrations, I kind of have two right now. You know, one is that what we're talking about here is not like a brand new idea.
But if you look in our country and say, well, how often do you turn on the news and hear anybody talking about what we're going to do to prepare for this?
You know, we talk about preparing for hurricanes and we talk about preparing for wars, and we talk about this, that and the other thing, but this thing that will ultimately transform the world to a degree that none of those other things have. You know, there's no real planning for it all.
The other thing that frustrates me personally relates a lot to what humanity is and becomes in the time ahead. You know, many people talk about this like we're the victims of the arrival of the machine, but we are creating these machines.
They didn't arrive on a spaceship where we would know nothing about them.
And so we have a limited time to try to work hard to ensure that we have a symbiotic relationship between humans and these intelligent machines, and that we take advantage of the time that we have to try to make as many positive changes in Homo sapiens as we can, to prepare ourselves for a future that'll be quite different. And I think these are the things that need a lot more focus.
Kevin:And that's sometimes the purpose of this show: to bring on people like you and raise these issues that aren't being talked about enough in mainstream media.
And just following up on that last point: right at the beginning of the book and right at the end of the book, you talk about a choice that we face, which is creating a world in which AI becomes more like us, or one in which we become more like AI. And I wonder if you could just kind of explain what you mean by that.
I mean, I thought a lot about it, and I think I get it conceptually, but I'm curious what that means to you.
Craig:Well, on one hand, there's a natural tendency, particularly when you see this thing portrayed as a scary thing coming toward us, to basically say, whoa, we'd better make that thing slow down and be like us.
When I first got involved with OpenAI, and I think it's been true almost ever since, not just there, but in many other companies, the discussion around this always focused on two words they called safety and alignment. And safety was, well, we should try to be sure that these things can't do bad things to us, you know.
And the second is alignment was, well, we want it to behave in a way where it's aligned with human values. Now, it turned out, you know, when you.
When you say those two goals, safety, that's pretty broad, but alignment is really hard because then you say, well, okay, well, like, whose values and where did you get them? And of course, I mean, Dario Amade, when he went on to found Anthropic, he took this step to build a constitution in.
It turned out when he and I were at OpenAI together, he working there, and I was advising Sam Altman and I would talk to Dario, and pretty rapidly the two of us concluded that in the end, we couldn't think of any way to control an AI that is to ensure this idea of alignment or safety except by an AI. And while that has some scary aspects for people, too, to us seemed like ultimately the only shot we had.
He went out to create Anthropic at that point and built a constitution into it, in a sense, trying to build this capability into his own AI.
I've devoted my time for the last five or six years to thinking much more broadly about this issue and how AIs can be used to essentially help govern, you know, all the other AIs.
But this idea of, can we get it to conform? In a sense, are we going to constrain it to be just a little bit better than we are, or are we going to use it to essentially make us more capable than we currently are? You know, go back, as you said, to the geological time record: there were things before Homo sapiens.
The question today is: are there things after Homo sapiens, all right, and do they emerge by our own design? See, personally, I think that's what ultimately will happen, by one means or another.
And so, you know, I mentioned that the first of the stages of our relationship with machines is tools; that's everybody's natural inclination. But now we start to see the emergence of coexistence. You know, it's here, it's becoming agentic, it'll increasingly have autonomy.
And what are we going to do as that goes on? Well, using a partnership with it in that period will allow humans to decide what we want to be when we grow up.
And that's in a sense the thing that the book is talking about. Are humans going to say, we are all we can be, all right?
Or, given the capability now, are we going to not wait for some millennium-class time period where we may adjust by natural selection again? Are we going to take the bull by the horns and decide, nope, we can see that there is something that is more capable?
How should we become more capable? In the end, you can say there's only three possible outcomes.
Outcome one is that one of these intellects so far exceeds that of the other, that their relationship is irrelevant, immaterial or worse. And of course that's the scary thing that you read so much about as the existential threat.
But the second option is that you decide, wow, you know, these two things have a long term symbiotic relationship. And in some ways it's like certain animals on the planet.
Humans seem to have a nice symbiotic relationship with, say, dogs as pets, but is that all we want?
But there are others where there really is a symbiosis, you know, pilot fish and whales, and it's more than a convenience. And then you could say, so what would symbiosis look like long term?
But then you realize, okay, now humans have the potential at least to be guiding their own evolution by design. The machines are already demonstrating they're well on a path to essentially recursive self improvement.
Humans have never been on a path to recursive self improvement, but now we will be empowered to do so, should we choose to do so. And through that you end up in the final possible outcome, which is some type of hybridization.
And you know, today we start to make changes to humans, usually in the sense of repairing failure or weakness: people who are born with genetic defects we can now fix; people who lose their hearing or their sight or something, we're now creating synthetic mechanisms to replace those things.
If you fast forward the equivalent of a hundred years, you know, which might actually be 10 in practice now, you know, what could these things become?
And therefore the final thing is, you know, do you end up in some hybridized thing where we decided, hey, there's some good things about humans and there's some good things about these machines and maybe they should just be one. And Henry always used to ask me for years, and especially near the end, he said, you know, where are the philosophers?
He said, you know, the last time we had a thing this big, we called it the Renaissance in Europe.
He says, but then you had the scientists and the philosophers and they were sort of like talking about it and trying to figure out where should this all go. And he says, but right now we seem to have a dearth of philosophers, that this thing is being driven at an incredible rate by the technology people.
And while they're thoughtful, are they really? And you do hear some of them; Altman and others have talked on and off at times about how society is going to adjust.
You know, should we have a universal basic income? So those questions are out there, but there's no collective, human-scale activity to really address these things.
And I think that at some point we're going to have to realize that this is a species problem, not a country problem. And the question is, how do we get ourselves from here to there?
Kevin:I thought about that a lot when I was reading your book, that it read in parts like a philosophy book or certainly raised philosophical questions. And so I think, Craig, that's a good place to wrap up.
We appreciate your time today, and we also, of course, appreciate the time and thought that you and your co-authors put into writing this book. It's an important book, and it's also very accessible. So thanks so much for joining us. Appreciate it.
Craig:Thanks for having me. I'd be interested to see how your audience reacts to this. And then maybe we should have another call if we need it.
Kevin:That would be great. We could do it in 10 years, which in AI time might be six months.
Craig:That's correct.
Kevin:Okay. Well, the book is called Genesis: Artificial Intelligence, Hope, and the Human Spirit. Please go out and get a copy.
I guarantee you'll find it thought provoking. And follow Craig's work because as you can tell, many of the ideas we're talking about here are not being discussed enough on mainstream media.
So for all of us here at Top Traders Unplugged, thanks for listening and we'll see you next time.
Ending:Thanks for listening to Top Traders Unplugged.
If you feel you learned something of value from today's episode, the best way to stay updated is to go on over to iTunes and subscribe to the show so that you'll be sure to get all the new episodes as they're released. We have some amazing guests lined up for you. And to ensure our show continues to grow, please leave us an honest rating and review in iTunes.
It only takes a minute, and it's the best way to show us you love the podcast. We'll see you next time on Top Traders Unplugged.