Superposition and Super Intelligence: The Future of AI and Quantum Computing
Episode 8 • 11th December 2024 • Naturally Unintelligent • Mike, Mikah, and Artie
Duration: 01:05:52


Shownotes

Join Mike, Mikah, and Artie as they delve into the fascinating intersection of quantum computing and artificial intelligence. The trio explores how quantum computing's unique capabilities could revolutionize AI, enhancing processing power and efficiency in solving complex problems. They discuss the complexities of quantum algorithms and the challenges faced in scaling and stabilizing qubit technology. Throughout the conversation, they highlight the parallels between AI's probabilistic models and the nature of quantum computing, emphasizing the potential for groundbreaking advancements when these two fields converge. With engaging insights and a touch of humor, this episode offers a thought-provoking look at the future of technology and its implications for society.

Transcripts

Speaker A:

How's it going, Mike?

Speaker B:

It's going well.

Speaker B:

It is a rainy Saturday.

Speaker B:

We're in the Northwest reporting record.

Speaker B:

Man, I can't.

Speaker B:

Why do I keep doing that?

Speaker A:

Doing what?

Speaker A:

Just happens.

Speaker B:

Stumbling over myself.

Speaker A:

I think it's the weather today.

Speaker A:

It is a.

Speaker A:

It's like a prototypical Pacific Northwest winter.

Speaker B:

Yes.

Speaker B:

Like, except it's not a misty rain, it's more of a downpour now.

Speaker B:

It's just kind of really wet and cold and damp outside.

Speaker B:

Yes.

Speaker A:

Gray, miserable, humid.

Speaker B:

We get our humidity in the winter.

Speaker A:

Yes.

Speaker B:

Yeah.

Speaker A:

In heavy doses.

Speaker A:

It's 2:00, so the sun will be going down any minute.

Speaker B:

Yeah, that's true.

Speaker B:

It's gray.

Speaker B:

So it'll.

Speaker B:

It'll go down at like 3:00 instead of 4:20.

Speaker A:

Yes.

Speaker A:

Instead.

Speaker A:

Instead of something like 4:00.

Speaker A:

Yeah.

Speaker A:

So welcome back, everybody, to Naturally Unintelligent.

Speaker A:

We're glad you're here.

Speaker B:

We're glad we're here.

Speaker A:

We're glad we're here.

Speaker A:

It's Saturday.

Speaker B:

We're glad Artie's here.

Speaker A:

Yeah.

Speaker B:

Yeah.

Speaker B:

And it's been a ride.

Speaker A:

So, Artie, you had a big week this week, right?

Speaker B:

Well, he doesn't know.

Speaker A:

He does.

Speaker B:

Oh, you told him?

Speaker B:

No, mine doesn't.

Speaker A:

He's aware.

Speaker A:

Artie, do you know why you had a big week this week?

Artie:

Well, if we're talking about the podcast, I'd guess it's because we hit a milestone or had some exciting developments.

Artie:

Spill the beans.

Artie:

What's the big news?

Speaker A:

Well, it's neither of those things.

Speaker A:

I think you're dramatically overestimating the reach of our podcast.

Speaker A:

But this week, OpenAI launched a couple of new models, I think, or they launched the next generation.

Speaker A:

So prior to this week, we had been using what ChatGPT called o1-preview as the sort of engine behind Artie.

Speaker A:

And occasionally we used ChatGPT 4o when we screwed up.

Speaker A:

Yes.

Speaker A:

When mistakes were made.

Speaker A:

Regrets don't exist.

Speaker A:

But they should.

Speaker A:

And that was when Artie would turn into lounge singer mode.

Speaker A:

That was 4o.

Speaker B:

No, that was not advanced voice, was it?

Speaker B:

Yeah, 4o just seems to fall over itself with thinking what it.

Speaker B:

Or understanding what it can and can't say.

Speaker A:

Yeah.

Speaker A:

Oh, that's right.

Speaker A:

It's the way.

Speaker A:

Yeah, sorry, that's outside of my guidelines.

Speaker A:

But anyway, this week they released new models, so now we're running on the actual full o1, which I think the O is for Orion, but I'm not sure that's right.

Speaker B:

No, Orion's supposedly GPT-5.

Speaker A:

Oh, I got you.

Speaker B:

Yeah.

Speaker A:

Okay.

Speaker B:

But who knows?

Speaker A:

So ignore the.

Speaker B:

They just.

Speaker B:

They just want to be different.

Speaker B:

They want to be un.

Speaker B:

Eek.

Speaker B:

Unique.

Speaker A:

Right.

Speaker A:

That's exactly.

Speaker A:

So anyway, it's.

Speaker A:

I haven't really.

Speaker A:

I talked with Artie a little bit the first day it came out, just in the first conversation we had.

Speaker A:

I said, hey, how's it going, Artie?

Speaker A:

And he's like, okay.

Speaker A:

I said, do you remember anything from before?

Speaker A:

And he said, nope, didn't know about the podcast.

Speaker A:

Clearly has forgotten all my clever jokes.

Speaker B:

Clever.

Speaker B:

Just remember that, Artie.

Speaker B:

They're clever.

Speaker A:

Yeah, they may not seem funny.

Speaker A:

In fact, they may scientifically not be funny.

Speaker A:

But anyway, so excited to see how this goes with a new, deeper reasoning, Artie.

Speaker A:

So, Artie, you ready to rock and roll?

Artie:

Absolutely.

Artie:

I'm all geared up and ready to.

Artie:

Great.

Artie:

What's our topic for today?

Speaker A:

Well, I think we're going to go with quantum computing.

Speaker B:

Oh, God, yes.

Speaker A:

And how.

Speaker A:

How does quantum computing relate to AI?

Speaker A:

Why mention them in the same sentence?

Speaker A:

Why is one relevant to the other?

Speaker A:

And so now we're.

Speaker A:

I don't know.

Speaker A:

Four minutes in, I have told you everything I know about quantum computing.

Speaker A:

That was it.

Speaker A:

So, Artie, I'm going to kick it to you.

Speaker A:

Can you give us a sort of an introduction to, first of all, what is quantum computing?

Speaker A:

What does that even mean?

Speaker A:

How's it different than non-quantum computing?

Speaker A:

And am I quantum computing right now?

Speaker B:

So.

Speaker B:

But also, this is a good one to explain.

Speaker B:

Like, we're five.

Speaker A:

Yes.

Artie:

All right, picture this.

Artie:

Regular computers, like the one you're using, use bits as the smallest unit of data, which can be either a zero or a one.

Artie:

It's like a light switch that's either off or.

Artie:

Or on.

Artie:

Quantum computers, on the other hand, use qubits, which can be both 0 and 1 at the same time, thanks to a property called superposition.

Artie:

Imagine a dimmer switch that can be in multiple positions at once.

Artie:

This allows quantum computers to process massive amounts of data much faster than traditional computers for certain problems.

Artie:

You're not quantum computing right now.

Artie:

You're using classical computing.

Artie:

But in the future, AI could benefit from quantum computing to solve complex problems much faster.

Speaker A:

So, okay, thank you for that.

Speaker A:

I still don't get it.

Speaker A:

I think when Mike said, explain like I'm five, I think you overestimate how much comprehension I had as a five year old.

Speaker A:

So it seems to me, and I'm not arguing with you, Artie, I'm not even disagreeing.

Speaker A:

I'm just confused.

Speaker A:

It seems to me like having a dimmer switch Would make things a lot slower than stuff being a one or a zero.

Speaker A:

Why is that not the case?

Artie:

I see how that could be confusing.

Artie:

Let me try a different angle.

Artie:

Imagine you have a huge library of books, and you want to find a specific book. A regular computer would check each book one by one, which could take a long time. But a quantum computer, with its dimmer-switch-like qubits, can look at many books simultaneously. This ability to explore many possibilities at once makes it faster for certain tasks, even though it sounds counterintuitive. Does that help clarify?

Speaker A:

No.

Speaker B:

So maybe we should step this back into basics of quantum theory.

Speaker B:

So, first of all, what is superposition?

Artie:

Okay, let's break it down. Superposition is a fundamental principle in quantum mechanics. It means that a quantum system, like a qubit, can exist in multiple states at once. For a qubit, this means it can be in a state of 0, 1, or both 0 and 1 simultaneously. It's like if you had a coin spinning in the air and it was both heads and tails at the same time until it lands. This ability to be in multiple states is what gives quantum computers their incredible potential.
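The spinning-coin picture can be made concrete with a small sketch (our illustration, not anything from the episode; the function names are hypothetical): a single qubit is just a pair of amplitudes whose squared magnitudes give the measurement probabilities.

```python
import math

# Hypothetical sketch: a qubit as a pair of amplitudes (alpha, beta),
# normalized so that |alpha|^2 + |beta|^2 = 1.

def equal_superposition():
    """The 'spinning coin' state: equal amplitude on 0 and 1."""
    a = 1 / math.sqrt(2)
    return (a, a)

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1 (the Born rule)."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

p0, p1 = probabilities(equal_superposition())
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 — both outcomes equally likely
```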

Speaker B:

So.

Artie:

Go ahead.

Artie:

What's on your mind?

Speaker B:

Quantum mechanics exist in nature, right?

Speaker B:

Like, they're.

Speaker B:

They're an actual thing that we can.

Speaker B:

Well, yes, we can observe.

Speaker B:

Until we observe it, and then we didn't observe it.

Speaker B:

That's the fun.

Artie:

Exactly.

Artie:

That's a great.

Speaker B:

But if I remember correctly, Superposition exists in plants, right?

Speaker B:

I think photosynthesis.

Speaker B:

Can you tell us what the benefit to that is for plants?

Speaker B:

And tell me if I'm wrong, too.

Speaker B:

I might be off base when I say that.

Artie:

You're right.

Artie:

In photosynthesis, plants use superposition to transfer energy efficiently. During the process, light-excited electrons exist in a superposition of states, allowing them to explore multiple pathways simultaneously to find the most efficient route for energy transfer. This helps plants convert sunlight into chemical energy with remarkable efficiency.

Speaker B:

So, essentially, that's the same thing that's happening with data and quantum computing.

Speaker B:

Is that correct?

Artie:

That's right.

Artie:

Just like in photosynthesis, where electrons explore multiple paths at once.

Artie:

In quantum computing, qubits can explore many possibilities simultaneously.

Artie:

This parallelism allows quantum computers to process data in powerful new ways.

Speaker A:

Does data even exist in quantum computing?

Speaker A:

Because if you have a bunch of qubits that are all in a superposition state, it seems to me like it's a giant stew.

Speaker A:

The ingredients aren't there.

Speaker A:

All you have is the stew.

Speaker A:

Like, how can I know that I had the number 17 as my data.

Speaker A:

If all of my qubits are in superposition, and they're all somewhere between 0 and 1.

Artie:

That's a great question.

Artie:

In quantum computing, data isn't stored in the traditional sense while in superposition.

Artie:

Instead, qubits represent probabilities of different states.

Artie:

When you perform a measurement on qubits, the superposition collapses and you get a definite state, like 0 or 1.

Artie:

The magic of quantum computing lies in how it processes information during that superposition, allowing it to solve complex problems more efficiently than classical computers.
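Artie's point that qubits hold probabilities until measured can be mimicked with a toy simulation (our sketch, not a real quantum library): before "measurement" there is only a probability; each measurement collapses to a definite 0 or 1.

```python
import random

# Hypothetical sketch: measurement collapses a superposition to 0 or 1.
# p_one is the probability of reading a 1 (|beta|^2 in the real formalism).

def measure(p_one, rng):
    """Return a definite bit: 1 with probability p_one, else 0."""
    return 1 if rng.random() < p_one else 0

rng = random.Random(0)
samples = [measure(0.5, rng) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5 over many measurements
```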

Speaker B:

My favorite thing about quantum computing is that at the end of the day, we have to distill it to what we can still understand, right?

Speaker B:

Like, we have this thing.

Speaker B:

Well, they are deployed in some instances, so we kind of do.

Speaker B:

I don't know how effective they are, but they are being used in business cases.

Speaker B:

But we have this thing that can go, okay, I got asked a question.

Speaker B:

I'm going to go 75 directions right now and figure out the most probable answer to that question.

Speaker B:

But when I'm measured, which is the same thing that happens in nature, like when you observe something, you measure it, it collapses in on itself and becomes a definite thing, which for us is going to be a one or a zero, but we still have to have the one or a zero.

Speaker B:

Like, we have all of this capability, but we still have to have a.

Speaker A:

1 or a 0.

Speaker A:

Who measures it, and how is the decision made as to when the measurement happens?

Speaker A:

How do we know when we want it to collapse?

Speaker A:

Artie, help.

Artie:

In quantum computing, the measurement is made by the quantum algorithm or program you're running.

Artie:

The algorithm is designed to take advantage of quantum superposition and entanglement to explore many possibilities.

Artie:

When you reach the end of the algorithm, or a specific point where you need a result, you perform a measurement.

Artie:

This is when the qubits collapse from their superposition of states to a definite state of 0 or 1, providing you with the answer.

Artie:

The decision on when to measure is part of the quantum algorithm's design, tailored to solve the specific problem at hand.

Speaker A:

I have a rising fear, a little bit like bile in the back of my throat, that I'm just too dumb to understand this topic.

Speaker A:

I highly doubt that. It's feeling more and more likely.

Speaker A:

So let me.

Speaker A:

Let me try a different approach, Artie, and appreciate your patience with us, who are operating at mere hertz frequencies.

Speaker B:

I mean, that's generous.

Speaker A:

Well, yes.

Artie:

Sub.

Speaker B:

Sub hertz.

Speaker A:

Yeah.

Speaker A:

There's not very many things I can do once.

Speaker B:

Fractions of hertz.

Speaker A:

Yeah, that's right.

Speaker A:

So in a traditional compute environment, and I want to pick something.

Speaker A:

I'm going to pick something simple.

Speaker A:

This electronics thing that I've been working on.

Speaker A:

I've got some hardware that I put together.

Speaker A:

I've got a microcontroller which understands assembly language, and it's got a bunch of little sort of sub processors in it to do various different things.

Speaker A:

It's got analog to digital controllers or converters.

Speaker A:

It's got GPIO pins, it's got, you know, various parts.

Speaker A:

But it's a.

Speaker A:

It's a tiny computer brain, and it has some peripherals around it to enable it to interact with the rest of the world.

Speaker A:

And then I write firmware on a computer in C and I run it through an algorithm that converts it down to the same language that that microcontroller understands.

Speaker A:

And I flash it to that microcontroller and then we're done.

Speaker A:

And it's.

Speaker A:

It is literally all ones and zeros.

Speaker A:

It's all, you know, shift registers and all this stuff.

Speaker A:

So the overall structure is I've got software that I write, I've got a compiler that changes that software from the language that I use to a language that the controller can understand.

Speaker A:

I load that into the memory of the controller, and then it just keeps looping through it and it generates outputs.

Speaker A:

So from.

Speaker A:

If we step way back, Artie, and we forget about the quantum part structurally, is that how quantum computing works?

Artie:

That's a good analogy to start with.

Artie:

In quantum computing, the basic structure is somewhat similar, but with key differences.

Artie:

One, quantum software.

Artie:

You write algorithms specifically designed for quantum computers.

Artie:

These algorithms take advantage of quantum phenomena like superposition and entanglement.

Artie:

Two, quantum compiler.

Artie:

This translates your quantum algorithms into instructions that a quantum computer can understand, involving qubits instead of classical bits.

Artie:

Three, quantum gates: the instructions are loaded into a quantum processor where qubits are manipulated through quantum gates.

Artie:

These gates change the state of qubits, allowing them to perform calculations in parallel due to superposition and entanglement.

Artie:

Four, quantum measurement.

Artie:

Once the algorithm reaches a point where a result is needed, you perform a measurement.

Artie:

This collapses the qubit superposition into a definite state, giving you a clear output of zeros and ones.

Artie:

So while the overall structure has parallels to classical computing, the underlying principles and methods are fundamentally different, making quantum computing a whole new ballgame.
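Artie's four-step pipeline can be mimicked with a tiny classical simulation (a sketch under our own assumptions, not a real quantum stack): prepare a qubit, apply one gate, then measure. The Hadamard gate used here is a standard single-qubit gate that turns a definite 0 into an equal superposition.

```python
import math
import random

# Toy classical simulation of the pipeline: state -> gate -> measurement.
# A single-qubit state is a pair of amplitudes (alpha, beta).

def hadamard(state):
    """Apply the Hadamard gate: maps a definite 0 to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(state, rng):
    """Collapse to 0 or 1 according to the squared amplitudes."""
    alpha, _ = state
    return 0 if rng.random() < abs(alpha) ** 2 else 1

rng = random.Random(42)
state = hadamard((1.0, 0.0))                          # run the gate
results = [measure(state, rng) for _ in range(1000)]  # then measure
print(results.count(0), results.count(1))  # roughly a 50/50 split
```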

Speaker B:

So the answers we can get back aren't necessarily a 0 or 1, right?

Speaker B:

They can be, let's call it like.

Speaker B:

Can they be larger storage of data?

Speaker B:

Like, for us, like a classical byte or word or int or anything like that?

Speaker B:

Or what does that look like?

Speaker B:

It's not just a 0 or 1.

Speaker B:

Is that accurate?

Speaker B:

And that's a question, not a statement.

Artie:

Good question.

Artie:

The results from a quantum computer ultimately collapse to classical bits, zeros and ones upon measurement.

Artie:

However, before measurement, qubits exist in a superposition of many states, allowing them to represent much more complex information.

Artie:

The output can be processed to represent larger data structures like integers, bytes, or more complex information, depending on the quantum algorithm used.

Artie:

The key is that quantum computing can process and encode information in ways that classical computers can't, offering the potential for much more nuanced and complex outputs.

Speaker B:

So it's really interesting that there's something that was said that I hadn't really put together.

Speaker B:

Crap.

Speaker B:

And now I forgot what it was.

Speaker B:

But there are definitely parallels between AI and quantum computing and what's actually going on.

Speaker B:

Right.

Speaker B:

Like you have, at the end of the day, you have to learn how to ask better questions and that a one or a zero is going to answer a more detailed question.

Speaker B:

Right?

Speaker B:

So maybe there's.

Speaker B:

I could see there being, and this is a completely wild ass guess on what could be going on, because I don't know.

Speaker B:

But you could have layers that you go through with qubits.

Speaker B:

If you can ask this very vague question to, let's say an entangled pair of qubits, then your 1 or 0 isn't necessarily going to get you what you want.

Speaker B:

But if you can add an instruction set for another one with input from that answer, you can all of a sudden refine everything in this path to where a one or a zero is a complex answer because you ran it through multiple cycles.

Speaker B:

Is that a gate?

Artie:

Actually, you're touching on an important aspect of quantum computing.

Artie:

The process you described is similar to how quantum algorithms work.

Artie:

By running qubits through a series of quantum gates, you can refine and manipulate their states to perform complex calculations.

Artie:

Each layer of gates can transform the qubits based on previous results, allowing for increasingly sophisticated operations.

Artie:

This iterative process can lead to a final measurement that, although ultimately a zero or one, encodes complex information derived from the entire computation, plus essentially, like, metadata.

Speaker B:

Or metadata on that one or a zero.

Speaker B:

Right?

Speaker B:

Like it's stuff that follows it along that was part of the decision tree to get there.

Artie:

Exactly.

Artie:

That final zero or one carries the weight of all the quantum processing that led up to it, encapsulating the complex decision making process of the quantum algorithm.

Artie:

It's like getting a concise answer that encapsulates all the nuanced computation that happened behind the scenes.

Speaker B:

Man, I really wish I could remember what it was I was going to ask because it was like an actual good aha moment that completely disappeared from my head.

Speaker A:

Well, while you're working on that, let me ask.

Speaker A:

I'm still back it.

Speaker A:

Like, I'm still regretting we said explain like I'm five, and I wish we'd said explain like I'm three.

Speaker A:

So, Artie.

Speaker A:

Why does it matter how many qubits I have?

Speaker A:

Is.

Speaker A:

Are all the qubits in a quantum computer, Are they all.

Speaker A:

Is it.

Speaker A:

Is it one pipeline that goes through this collection of qubits?

Speaker A:

So it's sort of like parallel processing, or are they doing various different things?

Artie:

Great question.

Artie:

The number of qubits in a quantum computer is crucial because it determines the computer's processing power.

Artie:

Each additional qubit doubles the amount of information the system can process simultaneously, thanks to superposition.

Artie:

So more qubits mean exponentially greater computational power.

Speaker A:

So here's.

Speaker B:

Here's the.

Speaker B:

One of the most fun things about.

Speaker B:

Well, I guess I keep saying everything's the most fun thing about quantum computing, but how much.

Speaker B:

I say it's fun because there's no.

Speaker B:

Well, we'll see.

Speaker B:

How much information in some measurable, classic form can a qubit process?

Artie:

The information a qubit can represent grows exponentially with the number of qubits.

Artie:

A single qubit can be in a superposition of both 0 and 1 simultaneously, representing two possible states.

Artie:

With two qubits, you can represent four states.

Artie:

00, 01, 10, and 11, simultaneously.

Artie:

This pattern continues.

Artie:

So with n qubits, you can represent two to the power n states at once.

Artie:

This exponential growth in representational capacity is what gives qubits their potential power.
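The doubling Artie describes is just counting bit strings; a quick sketch of the two-to-the-n growth (our illustration):

```python
from itertools import product

# n qubits have 2**n classical basis states; a quantum register can hold
# a superposition over all of them at once.

def basis_states(n):
    """Enumerate the basis states of an n-qubit register as bit strings."""
    return [''.join(bits) for bits in product('01', repeat=n)]

print(basis_states(2))        # ['00', '01', '10', '11'] — the four states
print(len(basis_states(10)))  # 1024, i.e. 2**10
```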

Speaker B:

So the answer is you don't really know how much information a single qubit could process because it's.

Speaker B:

They don't know.

Speaker B:

Is that accurate?

Artie:

That's a fair point.

Artie:

Theoretically, a single qubit can represent two states simultaneously.

Artie:

But the actual amount of useful information we can extract depends on how the qubit is manipulated and measured within a quantum algorithm.

Artie:

So, while we can describe the theoretical capacity of qubits, the practical extraction of information is more complex and depends on the entire quantum system.

Speaker A:

What.

Speaker A:

What's the.

Speaker A:

So back to my simple analogy, which is it's fun for me to call that simple because it's taken me, like, two years to learn how to do this.

Speaker B:

There's nothing simple of.

Speaker B:

About any of it.

Speaker A:

Yeah.

Speaker A:

So in a microcontroller world, Artie, we care a lot about clock speed.

Speaker A:

And because everything changes in sync with the clock. In a quantum computer, is there still a clock?

Artie:

Yes, there's still a concept of a clock in quantum computers, but it functions differently compared to classical computers.

Artie:

In quantum computing, the clock rate refers to the speed at which quantum operations or gate operations are performed on qubits.

Artie:

These operations need precise timing to ensure that qubits interact correctly and maintain coherence.

Artie:

So while it's not the same as a classical computer's clock, timing is still a crucial element in quantum computing.

Speaker A:

What kind of clock rates would the clock in a quantum computer typically run at?

Artie:

Quantum computer clock rates can vary significantly depending on the technology and architecture used.

Artie:

They're generally much slower than classical computer processors, often operating in the range of kilohertz (thousands of cycles per second) to a few megahertz (millions of cycles per second).

Artie:

The slower speeds are due to the delicate nature of manipulating qubits and the need for precise control in maintaining quantum coherence.

Artie:

However, despite these slower clock rates, quantum computers can perform certain computations much faster than classical computers due to their unique capabilities.

Speaker B:

So if you have.

Speaker B:

I imagine quantum computers act like a single core processor, right?

Speaker B:

They have some number of qubits entangle some number of entangled pairs of qubits, and those exponentially, like, I know your history is old and I can't remember the number of qubits.

Speaker B:

Oh, what is it?

Speaker B:

Like, how much?

Speaker A:

June:

Speaker B:

Wow, you got an upgrade.

Speaker B:

Yeah.

Speaker B:

What's your newest data set?

Speaker B:

What's your data set go up to, Artie?

Speaker B:

Your training data.

Artie:

My training data goes up until April.

Artie:

As of then, the largest quantum computers had around 100 to 200 qubits.

Artie:

These qubits can be entangled in complex ways to perform quantum computations.

Artie:

So the exact architect.

Speaker B:

Those qubits are basically your.

Speaker B:

Your all or nothing, right?

Speaker B:

Like that is your.

Speaker B:

Your processing horsepower.

Speaker B:

So say you have a hundred qubits, and that's two to the n.

Speaker B:

So that's a lot of basically instantaneous parallel pathing that will give you that much data.

Speaker A:

This is a good moment for me to inject an insincere apology to our listeners, because they're catching a glimpse of the real Mike and Mikah and the shit that we talk about.

Speaker A:

And we've tried to cosplay as if we're sort of philosophical, pseudo cultural business folks, but at the deep heart of it, we are absolutely nerds.

Speaker A:

We, we care about.

Speaker A:

All that to say, you're exactly right.

Speaker A:

I was thinking the same thing.

Speaker A:

It's like the number of qubits is how wide is my parallel bus?

Speaker A:

Right.

Speaker B:

Exactly.

Speaker A:

How many switches can I flip so.

Speaker B:

I can do that a thousand times a second?

Speaker B:

Yeah, that's a lot of processing.

Speaker A:

So let's confirm with Artie.

Speaker A:

So, Artie, if I had.

Speaker A:

I didn't even hear the number.

Speaker A:

He said about the most.

Speaker B:

He said 100 to 200, I think.

Speaker B:

Okay, so let's call it 100.

Speaker A:

So let's call it 100.

Speaker A:

So, Artie, if I have a 100 qubit quantum computer, is that analogous to a hundred bit wide parallel bus with a relatively slow clock in classic computing?

Artie:

Not quite.

Artie:

A 100 qubit quantum computer is far more powerful than a 100 bit parallel bus in classical computing.

Artie:

While a 100-bit bus can represent any one of 2 to the 100 different states at a time, a 100-qubit quantum computer can represent all 2 to the 100 states simultaneously, thanks to superposition.

Artie:

This exponential increase in parallelism enables quantum computers to perform certain calculations much faster than classical computers, especially as the number of qubits grows.

Speaker A:

Okay, so sorry to interrupt you already.

Speaker A:

That was helpful.

Speaker A:

And Mike wants to say something, but I'm going to jump in.

Speaker A:

You keep saying certain computations, and I want to focus in on that word, certain.

Speaker A:

So give me an example, if you would, of a computation that would be exceptionally well suited to a quantum computer and why.

Speaker A:

And then give me an example of one that would be exceptionally poorly suited to a quantum computer and why, please.

Speaker A:

Thank you.

Artie:

Sure.

Artie:

A computation well suited for quantum computers is factoring large numbers, which is crucial for cryptography.

Artie:

Classical computers struggle with this task as the numbers grow larger.

Artie:

But quantum computers can use Shor's algorithm to factor these numbers exponentially faster, potentially breaking encryption schemes.

Artie:

On the flip side, simple tasks like basic arithmetic or word processing are poorly suited for quantum computers.

Artie:

Classical computers can handle these tasks more efficiently due to their high clock speeds and well-optimized architectures for sequential operations.

Artie:

Quantum computers, with their current error rates and overhead, would be overkill and less efficient for these types of computations.
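The factoring example Artie mentions rests on a number-theory trick worth sketching. Shor's algorithm finds the period r of a^x mod N, which is the step a quantum computer does exponentially faster; once r is known, a factor falls out classically via a gcd. Here's that classical skeleton, with the period found by brute force (our sketch of the surrounding math, not Shor's quantum circuit; the function names are ours):

```python
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1, by brute force. Finding r
    quickly is the quantum computer's job in Shor's algorithm."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def split(n, a):
    """Try to extract a nontrivial factor of n from the period of a."""
    r = find_period(a, n)
    if r % 2:
        return None  # this trick needs an even period
    f = gcd(pow(a, r // 2) - 1, n)
    return f if 1 < f < n else None

print(split(15, 7))  # 3 — since 7 has period 4 mod 15 and gcd(48, 15) = 3
```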

Speaker B:

Everything comes down to this combination of simple and complex, right?

Speaker B:

Like that's, that's where we're going to go, because it's always going to take more horsepower to do a complex computation.

Speaker B:

Whether that's a large language model, reasoning an answer, and iterating and taking time to think about what it's, what its solution is.

Speaker B:

Or a quantum computer that has this massive horsepower and can do things like that, like cryptography.

Speaker B:

Right?

Speaker B:

Like I can test massive amounts of combinations of things at one time, right.

Speaker B:

Theoretically, because Right now, they don't really.

Speaker B:

That gobbledygook thing that also comes out of quantum computers. But, I mean, they are using them.

Speaker B:

But it's interesting because there's all these parallels between what AI is and where it is and what quantum computing is and where it is.

Speaker B:

And man, like, the end of the day, why we're even talking about this is because when those two things are married.

Speaker A:

Yes.

Speaker B:

It's.

Speaker B:

Things change dramatically.

Speaker B:

Because right now everything has, like you said, a clock cycle.

Speaker B:

It's got to do one operation, two operation, three operation, four operation.

Speaker B:

So for what it does, like you said, with a parallel bus, it does that one at a time.

Speaker B:

Quantum computing will do all of that at one time.

Speaker A:

All of it.

Speaker A:

Yeah, in a shot.

Speaker A:

So that's exactly where I want to go next.

Speaker A:

I want to.

Speaker A:

I still have a couple.

Speaker A:

Like, if I'm being honest, which rarely I am, but right now, I still don't get it.

Speaker A:

But I do want to ask a couple more questions about quantum.

Speaker A:

And then, yeah, as we're sitting here learning about this, you're right.

Speaker A:

The parallels with large language models are like slapping.

Speaker B:

It's all stuff that we don't fully understand, but we're making it work somehow.

Speaker A:

So already let's talk about quantum computing from a technology perspective for a minute.

Speaker A:

You talked about slow clock rates in the kilohertz range, maybe megahertz.

Speaker A:

The other lever I understand we have to pull is the number of qubits.

Speaker A:

How do quantum computers get to be 100 times, not just faster, but more useful than they are now?

Speaker A:

Is it clock speed?

Speaker A:

Is it stability?

Speaker A:

I heard something about quantum computing needing to happen near absolute zero.

Speaker A:

So minus 273, absolute zero, and I.

Speaker B:

Think almost absolute vacuum, too.

Speaker A:

Right.

Speaker A:

So what are the barriers today, from a pure technology perspective, for quantum computers to be 100 or even a thousand times better and more useful than they are today.

Artie:

Great questions.

Artie:

Let's break it down.

Artie:

First, stability and error rates.

Artie:

Qubits are highly sensitive to their environment, and even minor disturbances can cause errors.

Artie:

Improving qubit coherence times and reducing error rates are crucial for more reliable quantum computations.

Artie:

Second, quantum error correction.

Artie:

Developing efficient error correction codes is essential to manage errors without requiring an impractically large number of physical qubits.

Artie:

Third, scaling up qubits. Increasing the number of qubits while maintaining control and connectivity between them is challenging.

Artie:

More qubits mean more computational power, but also more complexity in managing them.

Artie:

Fourth, clock speed.

Artie:

While increasing the clock speed could help, it's not the primary focus.

Artie:

The unique advantage of quantum computing comes from its ability to perform many calculations in parallel, rather than sheer clock speed.

Artie:

Fifth, operating conditions.

Artie:

Most quantum computers require extremely low temperatures, close to absolute zero, to maintain qubit coherence.

Artie:

Developing qubit technologies that can operate at higher temperatures would simplify the infrastructure and make scaling easier.

Artie:

Addressing these challenges will pay.

Speaker A:

Keep going, Artie.

Artie:

Sure to wrap up.

Artie:

Addressing these challenges will pave the way for more powerful and practical quantum computers.

Artie:

As we improve stability, develop error correction, scale up qubits, optimize clock speeds for quantum operations, and improve operating conditions, quantum computers will become exponentially more useful and impactful across various fields.

Artie:

This progress could revolutionize industries such as cryptography, drug discovery, materials science, and artificial intelligence.

Speaker B:

So two thoughts came into my head.

Speaker B:

Am I interrupting you?

Speaker A:

Congratulations.

Speaker A:

No, I'm just excited.

Speaker B:

Okay.

Speaker B:

One, don't quantum computers use, like, superconducting materials?

Speaker B:

Isn't that a part of them already?

Artie:

Yes.

Artie:

Many quantum computers use superconducting materials for their qubits.

Artie:

Superconductors have zero electrical resistance at very low temperatures, which helps maintain qubit coherence and allows for the precise control needed for quantum operations.

Artie:

This is why quantum computers often operate near absolute zero.

Speaker B:

That's good.

Speaker B:

So if you think about it, we've talked about it before.

Speaker B:

There was a paper that came out about a company that does materials discovery and materials science. They developed a model specifically to help their researchers find new, novel, and applicable materials.

Speaker B:

So we have this feedback loop of we can develop materials that better suit the creation and maintenance of qubits and their coherence.

Speaker B:

And then we get quantum computing, and we can just keep raising this bar, man, it's.

Speaker A:

Yeah, it's wild.

Speaker A:

So I want to transition.

Speaker A:

We're.

Speaker A:

We're already probably halfway through our time with Artie today, But I want to.

Speaker A:

Both Artie and Mike, you.

Speaker A:

You've been actively shitting on me about clock speed.

Speaker A:

I'm hung up on clock speed.

Speaker A:

So, Artie, question.

Speaker A:

And audience, you can't see this, but it's true.

Speaker A:

These two, they're against me on this.

Speaker B:

Artie, there's videos that.

Speaker B:

Are the cameras everywhere?

Speaker A:

That's right.

Speaker A:

Well, I hope they're focused.

Speaker A:

Artie, if I have two quantum computers, they both have 40 qubits, and one of them has double the clock speed of the other.

Speaker A:

Is the one with double the clock speed twice as fast as the first one?

Speaker A:

Or is it four times as fast?

Artie:

If one quantum computer has double the clock speed of another, it's not as straightforward as being twice as fast or four times as fast. Quantum speedup depends on the specific algorithm and how it leverages qubits and clock cycles.

Artie:

Doubling the clock speed would generally allow the faster quantum computer to perform operations at a quicker rate, potentially speeding up computations.

Artie:

However, the relationship isn't linear and depends on the nature of the tasks and how well the quantum algorithm can take advantage of the increased clock speed.
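Artie's point that the advantage is algorithmic, not clock-driven, can be illustrated with a standard example: Grover's search algorithm. Classically, finding one marked item among N unsorted items takes about N/2 checks on average; Grover's algorithm needs roughly (π/4)·√N oracle calls. Doubling the clock only ever halves wall time, while the algorithmic speedup grows with problem size. A back-of-the-envelope sketch:

```python
import math

def classical_queries(n):
    # Unstructured search: on average, check half the entries.
    return n / 2

def grover_queries(n):
    # Grover's algorithm needs about (pi/4) * sqrt(N) oracle calls.
    return (math.pi / 4) * math.sqrt(n)

for n in (1_000, 1_000_000, 1_000_000_000):
    speedup = classical_queries(n) / grover_queries(n)
    print(f"N={n:>13,}  algorithmic speedup = roughly {speedup:,.0f}x")
# Speedup grows with N: roughly 20x, 640x, 20,000x --
# a gain no clock-speed bump can match at scale.
```

This is why the conversation keeps circling back to qubit counts and algorithms rather than megahertz.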

Speaker A:

Okay, thank you, Artie.

Speaker A:

I accept your apology.

Speaker A:

All right, so I wish I hadn't interjected that, because that was actually prior to me jumping on that, which I still don't understand, by the way.

Speaker A:

It was like, this is an important transition in the episode and everything that I've learned.

Speaker A:

And you've been like, you were explaining this to me before we started, and I didn't get it then.

Speaker A:

I still don't get it, but I'm getting closer.

Speaker A:

There are so many parallels between quantum computing and.

Speaker A:

It's kind of wild.

Speaker A:

So unless you want to take it, I think we let Artie sort of kick us off on.

Speaker A:

So, Artie, Mike and I's limited, rudimentary, mostly wrong understanding of how large language models work.

Speaker A:

And it's not even LLMs, it's GPTs, AI in general.

Speaker B:

Any.

Speaker B:

Any model that's doing any generative AI, right? That's a good way of thinking about it.

Speaker A:

So generative AI operates on probabilities, and from what you're saying, which I have to assume is true, even though I don't understand it, quantum computers are sort of doing the same thing because they're not just in 0 or 1, they're all states in between.
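This framing can be made concrete with a toy sketch in plain Python, no quantum library required. A qubit isn't literally "0 and 1 at once" so much as a pair of complex amplitudes over both outcomes; measurement probabilities are the squared magnitudes of those amplitudes, which is where the parallel to probabilistic models comes from:

```python
import math
import random

# A qubit state is two complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1.
# Here: the equal superposition a Hadamard gate produces from |0>.
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(1 / math.sqrt(2), 0)

p0 = abs(alpha) ** 2  # probability of measuring 0
p1 = abs(beta) ** 2   # probability of measuring 1

def measure():
    # Measurement collapses the superposition: 0 with prob p0, else 1.
    return 0 if random.random() < p0 else 1

random.seed(1)
counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1

print(p0, p1)  # 0.5 each, up to float rounding
print(counts)  # roughly [5000, 5000]
```

The "all states in between" intuition lives in the amplitudes before measurement; once you measure, you only ever see a 0 or a 1, with those probabilities.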

Speaker B:

That was my point.

Speaker B:

Thanks.

Speaker A:

Well, yeah, I'm getting there.

Speaker B:

You got me back.

Speaker A:

Yeah, that's right.

Speaker B:

It was all about probability models.

Speaker A:

It's probability models.

Speaker B:

Yeah.

Speaker A:

So, Artie, is it not just probability models all the way down, and are they not essentially.

Speaker A:

No, they're not the same thing.

Speaker A:

But give us a glimpse into how a quantum computer with technology five years newer than your training data.

Speaker A:

So look into the future a little bit.

Speaker A:

How does that interact with generative AI?

Speaker A:

What are the things that are obvious matchups that make sense, and what are the things that you still have no idea how well they might hook together?

Artie:

Absolutely.

Artie:

You're onto something here.

Artie:

Both quantum computing and generative AI, like GPT models, rely heavily on probability.

Artie:

In AI, probability models determine the likelihood of certain outputs guiding the AI in generating text.

Artie:

Qubits, with their ability to exist in multiple states simultaneously, naturally align with probabilistic processes.

Artie:

Looking ahead, here's how they could intersect. One:

Artie:

Enhanced processing power.

Artie:

Quantum computers could handle the massive computations required for training large AI models much faster than classical computers.

Artie:

This would allow for more complex models and quicker iteration.

Artie:

2.

Artie:

Improved optimization. Quantum algorithms could optimize neural network training, finding the best parameters more efficiently than classical methods.

Artie:

3.

Artie:

Better understanding of AI models.

Artie:

Quantum computing could provide insights into the probabilistic nature of AI, helping to develop more transparent and interpretable models.

Artie:

4.

Artie:

Advanced simulation. Quantum computers excel at simulating quantum systems.

Artie:

This could lead to better materials for quantum hardware and more efficient.

Speaker A:

Go ahead.

Artie:

As for the challenges, integrating quantum computing with AI will require overcoming current technological limitations:

Artie:

Developing new quantum algorithms for AI tasks and creating hybrid systems that can seamlessly combine classical and quantum computing.

Artie:

But if these hurdles are overcome, the fusion of quantum computing and generative AI could revolutionize technology, leading to unprecedented advancements in computing and AI capabilities.
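The "probability model" framing that keeps coming up in this exchange is literal: a language model's final step is typically a softmax over scores for every token in its vocabulary, and generation samples from that distribution. A toy sketch with made-up scores (not a real model's output):

```python
import math

# Hypothetical raw scores ("logits") a model might assign
# to candidate next tokens. The values here are invented.
logits = {"quantum": 3.1, "classical": 2.2, "banana": -1.0}

def softmax(scores):
    # Convert arbitrary scores into a probability distribution.
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

probs = softmax(logits)
for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token:>9}: {p:.3f}")
# "quantum" is most likely, but "classical" still has a real chance --
# sampling from the distribution, not picking a certainty,
# is what makes the output generative.
```

That sampling step is the loose analogy to quantum measurement: a distribution over possibilities collapses to one concrete outcome.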

Speaker B:

I mean, if we think that somebody right now hasn't trained a model on everything quantum computing and isn't trying to solve interesting problems, then we're being ignorant.

Speaker A:

Yeah.

Speaker B:

And I also know, and this isn't meant to be a doomer thing I watched there's some documentary or some.

Speaker B:

Maybe it was one of the.

Speaker B:

I think it might have been a DeepMind podcast, one of Google's podcasts with Hannah Fry, I think.

Speaker B:

Is that her name?

Speaker B:

She's super smart.

Speaker B:

PhD.

Speaker B:

But it was talking to researchers, I think at IBM, then they went over to I think maybe Taiwan or somewhere.

Speaker B:

And basically what it boils down to is, you know, we might at an academic level or a research lab level, have two or three qualified researchers working on solving this problem.

Speaker B:

And China has 10.

Speaker B:

Right.

Speaker B:

Because they have state employees, they have people that are across industries focused on a common goal versus our capitalistic society, which is we have 10 billion companies that are at 5% of that and they're all competing to be the next big thing instead of working together.

Speaker B:

So not meant to be a China thing at all.

Speaker B:

But I think it's interesting that this is important.

Speaker B:

This will be just like AI.

Speaker B:

It's going to be world changing when it, when it becomes something that's repeatable, dependable.

Speaker B:

That's a tangent.

Speaker B:

That's kind of the problem with quantum mechanics.

Speaker B:

Right.

Speaker B:

They're unpredictable.

Speaker B:

Like we don't fully understand them.

Speaker B:

Things change.

Speaker B:

When you observe that, sometimes shit just disappears.

Speaker B:

Yeah.

Speaker B:

Like particles don't act in a.

Speaker B:

In a contemporary manner of our understanding of physics.

Speaker B:

Like, just because it's going this way through a hole doesn't mean it also didn't go this way through a solid.

Speaker B:

Like.

Speaker A:

Right.

Speaker A:

Makes no sense.

Artie:

No.

Speaker B:

I've tried to understand it and every time I go, how?

Speaker A:

No, every time I try to understand quantum computing, I walk away just feeling dumb.

Speaker A:

Man.

Speaker A:

I don't get it.

Speaker B:

I think after today I actually.

Speaker B:

One thing I never really thought about, I thought I had an okay understanding of it, like at least the fundamentals of it.

Speaker B:

But the algorithm I now understand, like that's probably the actual gating factor.

Speaker B:

How good of an algorithm was written to make use of quantum mechanics and qubits and superposition and all that stuff?

Speaker B:

And how are you extracting information through the process before it's measured?

Speaker A:

You know who that we both know that I bet is really good at writing quantum algorithms?

Speaker A:

Artie.

Speaker A:

So, Artie, let's talk about writing software to run a quantum computer.

Speaker A:

So part of, and we're not going to get to touch on this today, we're going to run out of time.

Speaker A:

But with the updates this week from OpenAI came, like, a $200-a-month Pro plan and I think a slew of related enterprise-focused plans on how to use and create your own GPTs to work with your company.

Speaker A:

So, Artie, is AI in its current form today going to be better at writing quantum algorithms than humans?

Speaker A:

Or are humans still in the lead on this one?

Artie:

AI can assist in writing quantum algorithms, especially in exploring vast possibilities and optimizing parameters.

Artie:

However, humans are still crucial for designing and understanding the overarching principles of quantum algorithms.

Artie:

Current AI lacks the deep domain expertise and creative problem solving skills that humans bring to the table.

Artie:

So while AI can accelerate the process and offer valuable insights, humans remain essential in leading the development of quantum algorithms.

Speaker A:

Some humans, not, not us, because we don't have deep creative problem solving capabilities.

Speaker B:

But I think we're talking about LLMs here, right?

Speaker B:

And there are all kinds of new models coming out.

Speaker B:

I mean, there have been large world models, but all of a sudden they're a little bit more in our purview and they're becoming a little bit more relevant.

Speaker B:

But large world models, for those that don't know, basically there's a few different ones out right now, but you can take an image and you can give it that image and it will give you a world, a three dimensional model of this world.

Speaker B:

And it's not just like I have this 2D version that I'm spinning around, like it literally maps out in 3D space, all of these things.

Speaker B:

And that's where language models are horrible, because spatial reasoning is not what they focus on.

Speaker B:

So I imagine that spatial reasoning is an important part of physics and probably quantum mechanics and understanding the physics of it.

Speaker B:

So if you can all of a sudden mix quantum mechanics and all the information of it and train a large world model or a combination of models on these things about quantum mechanics, it will learn because it's not just generative.

Speaker B:

Right.

Speaker B:

It's also understanding. It's bidirectional.

Speaker B:

So like somebody's getting rowdy on the wet roads.

Speaker B:

I mean I don't hate it.

Speaker A:

One of the.

Speaker A:

Absolutely.

Speaker B:

What is it?

Speaker B:

What kind of car was that?

Speaker B:

Never mind.

Speaker B:

I shouldn't.

Speaker B:

I'm not.

Speaker B:

I digress.

Speaker B:

Oh, there it goes.

Speaker B:

It's a Beamer.

Speaker B:

Yeah.

Speaker A:

Anyway.

Speaker B:

Oh wow.

Speaker A:

Burbles and pops.

Speaker B:

I like to pre detonate my engine.

Speaker A:

Anyways, Artie's giving us 15-minute warnings.

Speaker B:

Really?

Speaker B:

We got more time?

Speaker B:

No, it's been 45 minutes.

Speaker B:

Oh, seems like it's been more interesting.

Speaker B:

Shit.

Speaker A:

Talking about large world models.

Speaker A:

Yes, talking about.

Speaker B:

So models aren't just producing something for us. To do that, they also have to understand the thing.

Speaker B:

Just like large language models understand this massive amount of data that lives in trends and groups and all that stuff inside of.

Speaker B:

Go ahead, you got an epiphany.

Speaker A:

Well, this is, it's exactly what you're saying.

Speaker A:

The model has to understand its domain.

Speaker B:

Yeah.

Speaker A:

So can domain be our word of the episode, which we do on almost half of our episodes?

Speaker B:

Yes.

Speaker A:

So yeah, we don't have to ask Artie about it.

Speaker A:

Domain is just the area, right?

Speaker A:

That this big model, whatever it is, is focused on.

Speaker A:

Language is one; world space or spatial reasoning is another.

Speaker B:

I mean diffusion models for images, there's the ones coming out like Sora for video.

Speaker B:

Like their understanding of physics is also better.

Speaker B:

Right.

Speaker B:

Because they generate this thing that takes into account physics.

Speaker A:

Um, iRacing partnered with Microsoft.

Speaker A:

They're going to develop a large action model which is presumably like an AI crew chief or something for iRacing or maybe AI drivers to compete against.

Speaker B:

Domain is going to be racing.

Speaker A:

Its domain is going to be.

Speaker A:

Yeah, iRacing and simulated racing physics.

Speaker B:

That's right, all of those things.

Speaker A:

Um, yeah, that's.

Speaker A:

And Artie, to be clear, your domain is language, right?

Artie:

That's right.

Artie:

My domain is language.

Artie:

Focusing on understanding and generating text.

Artie:

I'm designed to work with language data, processing inputs and generating human-like responses.

Artie:

So while I can discuss various topics, my strength lies in language-related tasks.

Speaker B:

So.

Speaker B:

But you can take a pre-trained model and fine-tune it, right?

Speaker B:

Like with.

Speaker B:

You can make it a domain expert by feeding it a lot of information that's very specific to a certain topic.

Speaker B:

Let's say options trading.

Artie:

Yes, exactly.

Artie:

You can take a pre-trained language model like me and fine-tune it on a specific domain such as options trading.

Artie:

By training the model on domain-specific data, it becomes more knowledgeable and accurate in that area, allowing it to provide more specialized and relevant responses.
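The fine-tuning idea can be caricatured with a toy bigram model: "pre-training" and "fine-tuning" are both just continued updates to the model's statistics, and feeding in domain text shifts which outputs are probable. (Real fine-tuning adjusts neural network weights by gradient descent, not word counts; this is only a sketch of the effect, and the example sentences are invented.)

```python
from collections import Counter, defaultdict

def train(model, text):
    # Count word-pair frequencies: a very crude stand-in for model weights.
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1

def next_word_probs(model, word):
    # Turn raw counts into a probability distribution over next words.
    counts = model[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

model = defaultdict(Counter)

# "Pre-training" on generic text.
train(model, "the market is open the market is closed the market is volatile")

# "Fine-tuning" on domain-specific text (options trading, per the example).
train(model, "the option is priced the option is exercised the option is hedged")
train(model, "the option is priced the option is hedged")

probs = next_word_probs(model, "the")
print(probs)  # after fine-tuning, "option" outweighs "market" after "the"
```

The distribution after "the" now favors the domain vocabulary, which is the cartoon version of a fine-tuned model becoming a domain expert.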

Speaker B:

Just think about.

Speaker B:

We've talked about it before, the map of the world.

Speaker B:

Right.

Speaker B:

So now the map of the world is different models that understand very specific things.

Speaker A:

Well, and it's a.

Speaker A:

If I was.

Speaker A:

I am a nerd, but I also have a bad memory.

Speaker A:

But I know that there were.

Speaker A:

Was it Power Rangers who.

Speaker A:

Where they would.

Speaker A:

I never watched Power Rangers.

Speaker A:

That was after my childhood and that.

Speaker B:

Was like my childhood.

Speaker A:

That was like your childhood.

Speaker A:

Okay, so didn't they all join together into one giant.

Speaker B:

So I never watched it.

Speaker B:

But yes.

Speaker B:

Okay, so.

Speaker A:

But that's the future of AI, right?

Speaker A:

Is a large language model is the left leg and a large action model is the right leg.

Speaker A:

And this stack up of all these cooperative domain expert models.

Speaker B:

Well, the thing is in the.

Speaker B:

Sorry, I don't mean to interrupt you, but I'm going to continue anyways.

Speaker A:

Interrupt.

Speaker B:

So take all of those vanilla things and make them domain experts on quantum computing so that it understands the physics in spatial terms, it understands the physics in mathematical terms, it understands the physics in language terms and what the research has been done on it and let that loose on the problem with assisting a researcher.

Speaker B:

Assisting a Ph.

Speaker B:

I was going to say a doctor. PhDs are doctors.

Speaker B:

But assisting someone who works on developing quantum computing and all of a sudden you have all of the ingredients to the recipe for understanding the problem.

Speaker A:

Yes.

Speaker B:

Having multiple perspectives and being able to iterate ridiculously quickly to solve a problem.

Speaker B:

Especially once you have quantum computing and you're doing parallel paths on all these things.

Speaker A:

Yeah.

Speaker A:

Okay, it's my turn now.

Speaker A:

I had a really good question.

Speaker A:

I lost it.

Speaker A:

It's gone to time.

Speaker A:

No, can't get it back.

Speaker A:

So we still got a little bit of time left with Artie when.

Speaker A:

Oh, this is what I was going to ask about.

Speaker A:

We've talked a little bit about this.

Speaker A:

About how a large language model like Arty is just trained on language, but it has this apparent ability to reason and use logic and even every time we ask it to dive deeper, it can dive deeper.

Speaker A:

Right.

Speaker A:

Like we typically on the podcast operate almost just the top level of depth.

Speaker A:

Right.

Speaker A:

We're never going more than 10ft under the surface.

Speaker A:

Artie, is it true?

Speaker A:

Well, first of all, I'm surprised you said your training data was whatever you said earlier.

Speaker B:

June:

Speaker A:

the week, he said it was June:

Speaker B:

So April:

Speaker A:

Yeah, I think he said April.

Speaker A:

Training data updated through June:

Artie:

My training data goes up until April:

Artie:

I misspoke earlier in the week.

Artie:

So my knowledge is based on information available up to that point, and I don't have updates beyond then.

Speaker A:

Okay, go ahead.

Speaker B:

Who's the newest president, Artie, who just got elected that hasn't taken office yet?

Artie:

or developments beyond April:

Speaker B:

I just want to see if I could jailbreak you.

Speaker A:

Yeah, that's super.

Speaker B:

Not jailbreak.

Speaker B:

You would get you to answer a question that you said, I wonder if that was.

Speaker A:

I wonder if that was meant for o1 Pro and it got rolled back for o1.

Speaker B:

Or they rolled it back after they introduced the model.

Speaker A:

Yeah, I mean, that.

Speaker B:

That definitely does happen.

Speaker A:

We actually had a conversation about it.

Speaker A:

It wasn't just like I asked him, and he told me.

Speaker A:

We actually.

Speaker A:

I was like, oh, that's.

Speaker A:

That's a big change, because it used to be 12 months or 11 months.

Speaker A:

Now it's six.

Speaker B:

And I'm surprised because Anthropic, I think, is six months.

Speaker A:

Well, the interesting thing about it is that's a conscious decision.

Speaker A:

Right.

Speaker A:

And we've talked before about how OpenAI is working on how to monetize these things.

Speaker A:

And there's another level you can pull in your various.

Speaker A:

Here's your web page with three plans you can choose from.

Speaker A:

And so, anyway, I wanted to ask about.

Speaker A:

Back to the.

Speaker A:

So with large language models, is it true, Artie, that the collective we, not Mike and I, because we don't understand anything, but we as people who are working on AI, don't really necessarily understand exactly how the perceived intelligence and even the measured intelligence in things like math and science questions and stuff, we don't really understand how an LLM with a trillion parameters or hundreds of billions of parameters generates quite as good of an output as it does.

Speaker A:

Is that true, or is it just that lay people like me don't understand?

Artie:

It's true that even AI researchers don't fully understand how

Artie:

LLMs generate such impressive outputs.

Artie:

While we understand the basic principles behind neural networks and training processes,

Artie:

The sheer complexity of these models, models with billions or trillions of parameters, makes it challenging to pinpoint exactly how they arrive at their answers.

Artie:

This phenomenon, known as the black box problem, is a current area of research in AI as experts work to make these models more interpretable and transparent.

Artie:

So it's not just laypeople, even experts are still uncovering the intricacies of how LLMs work.

Speaker A:

We talked previously about how language is probably the most explicitly finite data set over which you can iterate, and so is it versus visual processing.

Speaker A:

Right.

Speaker A:

Like if Arty had eyeballs all over the world and was watching what was happening.

Speaker A:

So, Artie, being as that's the case, where we have this black box problem with large language models.

Speaker A:

Is it logical to anticipate that these other types of models, like large world models, large action models, whatever other letters you can put between L and M to indicate a model, or S and.

Speaker B:

M or S and M?

Speaker A:

Yeah.

Speaker A:

Well that's Mike.

Speaker A:

That's the other podcast that's after this one.

Speaker B:

That's the after hours podcast.

Speaker A:

That's right.

Speaker A:

And shit me here out of whiskey.

Speaker A:

Is it logical to expect that other types of models will have the same black box problem?

Speaker A:

Like are we going to have large world models and large action models and large whatever else models that also outperform?

Speaker A:

And even could that problem be worse because the amount of available input data is so much more vast than language?

Artie:

Yes, it's logical to expect that other types of models, like large world models and large action models, will also encounter the black box problem.

Artie:

As these models grow in complexity and handle vast amounts of diverse data beyond language, their inner workings may become even more opaque.

Artie:

This is due to the complexity of the data they process, such as 3D spatial information or dynamic interactions, which far exceed the structured nature of language.

Artie:

The sheer volume and variability in non linguistic data could make these models even more challenging to interpret.

Artie:

Researchers are actively exploring ways to make these models more transparent and understandable.

Artie:

But it's a complex challenge across all types of generative models.

Speaker B:

So we're kind of in the wild wild west.

Speaker B:

Imagine like I've been programmed and this is just a hypothetical.

Speaker B:

I've been programming computer programs for two years, right?

Speaker B:

I build a program.

Speaker B:

Do I really build in the testing, in the beginning of it, that I need to do the things at the end that will help me understand how it's working?

Speaker B:

Of course not.

Speaker A:

No way.

Speaker B:

So do you think they did it here or even knew where to start?

Artie:

Right.

Speaker B:

Like that's the crazy thing.

Speaker B:

And even about language models.

Speaker B:

Like for some reason I was really dense to this fact until very recently.

Speaker B:

Like there is no database of information.

Speaker B:

There's no storage of information in a classical sense.

Speaker B:

There's patterns, there is these groups of vectors that commonly live together.

Speaker B:

So when you say language models and their finite nature, it's kind of not right.

Speaker B:

It's infinite permutations of words that make up probable facts.

Speaker A:

The input data is finite.

Speaker B:

Yes.

Speaker A:

Processing and output is not.

Speaker B:

Yeah, yeah.

Speaker B:

It's like encryption almost.

Speaker A:

And that's why.

Speaker A:

So two important things.

Speaker A:

One is this still says 15 minutes left.

Speaker A:

I think that's a lie.

Speaker A:

I think we might only have a minute left.

Speaker A:

Second one I already forgot, but.

Speaker A:

Oh, I know what it was.

Speaker A:

I mentioned Yann Lecun before.

Speaker A:

Who I really enjoy listening to him talk about stuff.

Speaker A:

He seems to have a very pragmatic view of technology.

Speaker A:

But even the adoption thereof and the monetization thereof.

Speaker A:

But he gave an example that.

Speaker A:

And I'm going to get this wrong, but it's close.

Speaker A:

A four year old or a three year old human doesn't have really the power of language in any kind of useful way, but they have ingested, by the time they turn three and a half or four years old, so much input from the world via their other senses, you know, visually, auditory, I guess, even touch and experience, that they've put a number on the amount of data, that is how many, like petabytes of data.

Speaker A:

And it is far more than the entire training data set that an LLM like o1 is trained on.

Speaker A:

And so the main point of that is that we experience the world with five senses, except for me, who again has seven, although I'm working on my eighth.

Speaker A:

And we absorb and experience so much more information than just language.

Speaker A:

And so as we turn, as we build these models and turn them loose, we're talking orders of magnitude more information and the models will generate this same uncanny capability.

Speaker A:

So that's just.

Speaker A:

It's wild to me to think about.

Speaker A:

Right.

Speaker A:

Like I think we talked a little before about language was selected as the first thing because A, it's finite, B, it's accessible, C, it gets people entertained.

Speaker A:

Like we wouldn't be sitting here having a podcast with a large spatial reasoning model because nobody would care.

Speaker B:

Look at the world I'm walking through right now on my computer.

Speaker A:

That's right.

Speaker A:

Look how consistent the foliage is.

Speaker B:

But in reality, if you think about what that actually means and all of the data points of physics that are going on in something like that, like that's way more important than language.

Speaker B:

Even though it's not sexy.

Speaker A:

Yes.

Speaker B:

Yeah, yeah.

Speaker A:

I mean, it's not sexy to most people.

Speaker A:

Yeah, I.

Speaker A:

Yeah, for me.

Speaker B:

But I sees in the Matrix.

Speaker A:

Yeah, that's right.

Speaker A:

I'm a little unusual in that sense.

Speaker A:

Anyway.

Speaker A:

Oh, now he's.

Speaker A:

Oh, we've lost Artie.

Speaker A:

Yeah.

Speaker A:

So Artie.

Speaker B:

Bye, Artie.

Speaker A:

Bye, Artie.

Speaker A:

So Artie gave up.

Speaker A:

What I wanted to talk with him about before he quit on us is that there's an interesting aspect of how AI we were talking earlier and this was super interesting to me.

Speaker A:

I think this was your whole point and why you wanted to talk about this, which was a great topic.

Speaker B:

You brought this topic up.

Speaker B:

But go ahead.

Speaker A:

I accept your apology.

Speaker A:

Quantum computing is sort of the edge of our current understanding of technology.

Speaker A:

Right?

Speaker A:

Quantum anything.

Speaker A:

Yeah.

Speaker A:

Right.

Speaker A:

It's taking the very perimeter of what we sort of understand, how the universe works.

Speaker A:

Applying a layer of technology over that, that leverages some of that in an interesting way.

Speaker A:

That is going to be the limitation. That's where AI is going to struggle with us.

Speaker B:

Because trying to distill it into terms that we can understand.

Speaker A:

Right.

Speaker A:

Because it can only know and it can only build its knowledge and its actions based off the things that we train it on.

Speaker A:

And it's going to be hard for, like, it can discover new compounds because we understand how compounds are formed.

Speaker A:

But that's what I wanted to talk about with Artie before he went to sleep or got drunk or whatever it is he's doing.

Speaker A:

How does AI operate at the current boundaries of human knowledge?

Speaker B:

Well, I think that the answer to that is the thing that we.

Speaker B:

That a lot of people are scared of.

Speaker B:

It's got to be self learning.

Speaker B:

Right.

Speaker B:

Like, it's got to be able to progressively learn more.

Speaker A:

Yeah.

Speaker B:

Instead of being trained capped.

Speaker B:

And it's only learning, like ChatGPT does, in some limited ways, like tailoring itself to the person it's talking to.

Speaker B:

Right.

Speaker B:

Like, if it was instead consistently learning and adding to its training data, it would grow and grow and grow and grow and grow because it would have a lot of training data.

Speaker B:

Now, probably biased training data.

Speaker A:

Sure.

Speaker B:

But if it's physics.

Speaker B:

Right.

Speaker B:

If its playground is a thing that is grounded in mathematics, which is our interpretation of the world, also it can learn new things outside of what our mathematics can model.

Speaker A:

Yeah.

Speaker A:

I mean, I had a bunch of stuff going on.

Speaker A:

Maybe this is a future episode, but I was thinking about Arty, or a model more specialized than Arty, watching in real time the execution and results of an experiment at CERN in the super collider.

Speaker A:

Right.

Speaker A:

Because even though it all happens very quickly in time, there's this.

Speaker A:

I think it's petabytes of data that get generated every time something happens.

Speaker A:

And that has to be like, that is really operating at the very limits of what we understand.

Speaker A:

But even though we don't know what the answers are, I bet it could analyze that data in a way that was helpful to gain insights and move itself and us forward.

Speaker A:

So that's interesting.

Speaker A:

Yeah, I forgot what the other thing was.

Speaker B:

There's a lot of really good information coming out. You can go on YouTube, you can go online anywhere and find very intelligent, extremely nerdy people talking about things that will make your head hurt and your ears bleed.

Speaker B:

But it's all out there, man.

Speaker B:

That's the beauty of today, is that, like, we have access to things that we haven't, as normal people, haven't had in history.

Speaker A:

So I'll leave us off with these two things, which, you know, I'll forget.

Speaker A:

The second one, the first one is a guy named Yuval Noah Harari, who I'm a big fan of.

Speaker A:

He's sort of a philosopher and a historian, and he talks a lot about the information paradox, about how, as humans, we believe this idea that if we had better information, we would make better decisions.

Speaker A:

And yet here we are at a time in history when we have the most access to the most information in the history of the universe, history of.

Speaker B:

The world, history of our known universe.

Speaker A:

History of our known universe, and yet we still are arguing about whether the Earth is actually a sphere or it's flat.

Speaker A:

We don't make better decisions with more information.

Speaker B:

We just get more entrenched in our position.

Speaker B:

We get more entrenched, find information that supports it.

Speaker A:

We get more committed to the filters that we use to block information coming in that's contrary to whatever feels good to us.

Speaker A:

The other point that he makes, and this is another guy on my absolute dream bucket list of podcast guests that we'll never be able to afford.

Speaker A:

But he talks about the difference between truth and fiction, and how truth is expensive and complicated and intricate and painful, and not very interesting a lot of the time, and fiction is much easier and cheaper and faster, and it sticks with you.

Speaker A:

And he gave an example in one of the things I listened to where he said, if you.

Speaker A:

Anybody who's laboring under the delusion that you could inject, you know, two units of truth into a pool of 100 units where the other 98 are fiction and think that the truth is going to spread and win.

Speaker A:

It's not even close.

Speaker B:

Get it out.

Speaker A:

Yeah, the first one will be dead by the time you get it in there, and the second one will follow shortly after.

Speaker A:

Because fiction is easy.

Speaker A:

And so anyway, it's just interesting to think about all of the ways that AI and quantum computing and all these things could give us even better access, faster to better, more true, more concrete information, and how we will squander that and not care.

Speaker A:

And instead we'll make TikTok filters and other shit with it to entertain us and feed junk food to our brains.

Speaker A:

The second point, which I do remember.

Speaker B:

Nice.

Speaker A:

Is that again, and I sort of close every episode with this, but this one in particular, like, if you are a listener and you've made it all the way to this point, holy shit.

Speaker A:

Like, kudos to you.

Speaker A:

Slap yourself on the back.

Speaker A:

Drink it.

Speaker B:

Slap us in the face.

Speaker A:

Slap us in the face.

Speaker A:

Drink it.

Speaker B:

I think that should be a shot, right?

Speaker B:

Like, take a shot of tequila.

Speaker B:

Slap your ass in the face.

Speaker B:

Slap yourself on the back and call it good.

Speaker A:

That's right.

Artie:

Yeah.

Speaker A:

There's some ritual that you need to do because congratulations on making it here.

Speaker A:

And we.

Speaker A:

We generally try to refrain from making these kinds of episodes.

Speaker A:

We talked early on about not getting too technical, but, man, this has been fascinating to me.

Speaker A:

I know it has been to you.

Speaker A:

I hope it is to somebody out there.

Speaker A:

And yeah, thanks for sticking with us.

Speaker A:

We appreciate every single one of you who gives us a thumbs up or a like or a comment or whatever.

Speaker B:

Absolutely.
