Agentic Dan
Episode 64 • 15th December 2025 • Great Security Debate • The Great Security Debate


Shownotes

We are back for another Great Security Debate.

In this episode, we discuss the potential role of agentic AI in security, from true “copilot” to automated decider of things, and whether LLMs are just a really cool search engine. Brian, Erik, and Dan also debate how, and how far, we could replace ourselves with agents, and what the inhibitors and risks are (spoiler alert: trust, and the survival of that agent after employment, were big factors). How do we train those agents on all the steps our brains take to make the decisions we make, and do so without polluting them with aspirational versions of ourselves (think: Instagram vs. reality)? It all leads to a parenting lesson from Brian and an automotive process lesson from Erik. It’s quite a debate.

Thanks for listening! We might do one more episode before 2026, but if not, have a wonderful holiday season and a happy new year!

Here’s the quote that Brian references at the end of the episode, attributed to Tolstoy:

Patience is waiting. Not passively waiting. That is laziness. But to keep going when the going is hard and slow - that is patience. The two most powerful warriors are patience and time.

And the line Brian quotes from Gemini during the episode: "The value lies not in reducing 'power' (computational energy) but in leveraging that processing power to achieve outcomes that are difficult, slow, or impossible for humans to manage alone."

Thanks for listening!

Show Notes:

Some of the links in the show notes contain affiliate links that may earn a commission should you choose to make a purchase using these links. Using these links supports The Great Security Debate and Distilling Security, so we appreciate it when you use them. We do not make our recommendations based on the availability or benefits of these affiliate links.

Transcripts

Speaker A:

Welcome to the great security debate.

Speaker A:

This show has experts taking sides to help broaden understanding of a topic.

Speaker A:

Therefore, it's safe to say that the views expressed are not necessarily those of the people we work with or for.

Speaker A:

Heck, they may not even represent our own views as we take a position for the sake of the debate.

Speaker A:

Our website is greatsecuritydebate.net and you can contact us via email at feedback@greatsecuritydebate.net or on Twitter.

Speaker A:

Twitter at Security debate.

Speaker A:

Now let's join the debate already in progress.

Speaker B:

An AI agent, right?

Speaker B:

And they use that to kind of show them, like, hey, you know, with the advent of scattered spider and all this other stuff.

Speaker B:

Right.

Speaker B:

Like, this is what we have to be aware of.

Speaker B:

Right.

Speaker B:

So this is why we need to come up with questions.

Speaker A:

Do it as a demonstration tool.

Speaker A:

But I would never, ever, ever, ever, ever, ever deploy it against a person unknowingly.

Speaker B:

Gotcha.

Speaker A:

But yeah, I mean, it's interesting as a demonstration tool.

Speaker B:

Yeah.

Speaker A:

I'm working with some, Some companies that are doing things like agentic tabletop exercises where the agents perform the roles of people that are not in the room.

Speaker A:

And it's very cool.

Speaker B:

Oh, really?

Speaker A:

Like, it's awesome.

Speaker A:

But it's, it's all above board.

Speaker A:

Like, everybody knows what's playing.

Speaker A:

So it can use your people, it can use your company.

Speaker A:

It pulls in information about, like, what it gathered.

Speaker A:

It's actually quite cool in scope.

Speaker A:

But you can make, like we were talking about, you can make one of the agents your CFO.

Speaker A:

You can program it, you can talk to it, you can bring in risk tolerance.

Speaker A:

You can bring in, you know, decision documents into, into the, into the learning for it.

Speaker A:

You know, things like that.

Speaker A:

So it becomes very realistic.

Speaker A:

And then that person doesn't necessarily have to be in the room, but can start to give some ideas of how that person would react even if they can't be in the tabletop.

Speaker A:

So it's a neat, It's a neat educational tool.

Speaker A:

A neat, A neat testing tool.

Speaker A:

Like, you know, how would we.

Speaker A:

Are we.

Speaker A:

And then the other thing is, yeah, potentially how are we doing against our playbook?

Speaker B:

Can we dig into that just a little bit?

Speaker B:

Because, like, I feel like a lot of people, people hear the word agentic.

Speaker B:

If we can remove the word agentic from this conversation. People hear the word AI.

Speaker B:

But let's talk agent.

Speaker B:

But like Persona.

Speaker C:

Persona.

Speaker A:

Yeah.

Speaker B:

So let's.

Speaker B:

A Persona.

Speaker B:

There's.

Speaker B:

And Dan, you were privy to when we were sitting down, having the conversation about is AI or is agent or is it.

Speaker A:

Is AI just a big search engine?

Speaker B:

Yeah, just a really cool search engine.

Speaker B:

Right.

Speaker B:

But technically now we're saying this Persona.

Speaker B:

So you have somebody that's part of the tabletop exercise.

Speaker B:

Each person that comes in, right, let's.

Speaker B:

Your CFO, your legal team, your CISO, whoever it is, each person is a knowledge base, right?

Speaker B:

For their particular skill set and their particular job.

Speaker B:

Right.

Speaker B:

Their brain is your search engine of all this knowledge you've obtained over years and problems you've run into.

Speaker B:

Okay?

Speaker B:

And if that person can't be in the room and you're like, okay, we're going to insert this agent, this Persona, this cfo, this Eric, Willie, this Daniela, right?

Speaker B:

Because we need that person, that agent to learn from Dan all the things he does.

Speaker B:

Right.

Speaker B:

And the requirements he's put in place, the policies he has.

Speaker B:

So if Dan isn't able to be here right now to answer this question, can we ask Dan's agent?

Speaker B:

Right.

Speaker B:

I mean, this is why sports people have agents, right?

Speaker B:

But for totally different purposes.

Speaker B:

Because the sports person doesn't know any of the goings-on.

Speaker C:

Yeah.

Speaker B:

Needs an agent to explain to him.

Speaker B:

Is it good?

Speaker C:

Right.

Speaker B:

And you're going to take 40%, and that's good, right?

Speaker B:

It's great.

Speaker A:

I'll start, I'll start with.

Speaker A:

I think the security people need the agents too.

Speaker A:

Not in that way, but in the, in the going and getting them better salaries and, and negotiating it out.

Speaker A:

And then you just show up and go, all the bad ill will, all the ill will was between my agent and.

Speaker A:

Let's start off on the right foot.

Speaker A:

But, but I, I mean, the idea of an agent version of you is great, but I think, also really scary for people because it is the, you know, it is perceived as one step closer toward not really needing that person.

Speaker A:

So I think people will go in a little bit, could go in a little bit hesitant.

Speaker A:

But, you know, the idea that you just mentioned of having a.

Speaker A:

Having an agent that can supplement you is really quite cool.

Speaker A:

The question is, can people get beyond the knowledge is power mindset enough to dump all the relevant information in about them?

Speaker A:

Then the question is, what are the ethics of continuing to use that agent after I stop working at the company, or if I stop being part of that function? Do you retain the rights to my knowledge?

Speaker A:

It's a soup.

Speaker A:

Like, there's 100 different avenues to go down on that question.

Speaker A:

It's phenomenal.

Speaker B:

Well, and I think that question is a pertinent question because, like, you look at music, you look at videos you look at art and all of that, right?

Speaker B:

And what can be done in marketing using it, right?

Speaker B:

And it's like, yeah, but where, where did it get this in order to create this, right.

Speaker A:

A licenseable Dan agent.

Speaker A:

You can license me for your incident response or you can get the real me for, for one third the price.

Speaker A:

The agent should cost more.

Speaker B:

I mean, let's be honest, if we could make six Dans, we'd have a pretty good practice, right?

Speaker B:

I'd be like, oh, you need some consulting work.

Speaker B:

Dan A is busy, but Dan B and C have some availability.

Speaker B:

Dan D, he's going to be tied up for like two months maybe, right?

Speaker B:

Dan F is still going through a little bit of training.

Speaker B:

He had a little bit of hallucination last week.

Speaker C:

Not to be confused with F, Dan.

Speaker A:

Yeah, that's a whole other movement.

Speaker A:

Well, all I can think of in that, Brian, is the movie Multiplicity, with Michael Keaton.

Speaker A:

Every copy of the copy is a little more wonky, so you get to Dan F and it's just blurry.

Speaker A:

But now I think maybe that person.

Speaker B:

Had a really good negotiation.

Speaker B:

They're like, I'm only willing to spend $5.

Speaker B:

Right?

Speaker B:

You're like, well, go ahead, give them F Dan.

Speaker A:

Then you get.

Speaker A:

A super interesting idea.

Speaker A:

And then, you know, to use an overused term, copilot: that is really where the copilot idea becomes really, really novel.

Speaker A:

And I think appropriate because it is the sidecar that could sit next to you even to whisper in your own ear and remind you of the things that you knew at one point and say, hey, at this point in an incident, look at this or at this, you know, at this juncture in a, in a comprehensive program, here's the thing that you missed or that you, you know, that you would, that would come next.

Speaker A:

Just, it doesn't even have to.

Speaker C:

It totally makes sense.

Speaker C:

We can't get people to document their processes and what we're doing today, but you're going to keep a copilot updated as a Persona of you?

Speaker A:

No, but now, so now I'm going to take every lick of my own privacy mindset and my own security mindset out of the commentary and think about this in a, you know, in a true physics problem, this is.

Speaker A:

Everything's in a vacuum, everything works perfectly and there's no friction.

Speaker A:

Think about the idea of something that shadows you all day.

Speaker A:

Think of the old, you know, the old Dictaphone tape recorder.

Speaker A:

It just shadows you.

Speaker A:

And the commentary that comes out, the things that come in the analysis, the writing you do pipes right on in.

Speaker A:

And it has learned without friction. Now, the fear of all of that information being there, and how it's used and who will use it, and the general lack of trust that comes about tech and, you know, reasons.

Speaker A:

All the reasons, all the reasons people don't put things in.

Speaker A:

There's a reason.

Speaker A:

The reason I don't use a lot of this stuff isn't because I don't think it's great and very useful.

Speaker A:

It's the ancillary.

Speaker A:

What else are they going to do with it?

Speaker A:

It.

Speaker A:

I had to connect a bank account up to Plaid so that we can write paychecks to you guys yesterday.

Speaker A:

And as I'm doing this, I'm thinking, well, why do they need this information and what else are they going to do with transaction data?

Speaker A:

Like, there's all sorts of these things that give me pause.

Speaker A:

They're really.

Speaker B:

When I was connecting to it and it was saying like, like I'm okay with you connecting to validate that.

Speaker B:

That's my account.

Speaker B:

But it's like, we need to look at all your transaction data, etc, I'm.

Speaker A:

Like, by the way, there is still a manual way, the old way, where it puts two deposits in and then you tell it it was 2 cents and 4 cents and then it takes the money back out.

Speaker A:

So you validate that the account is yours, but then they just don't see anything anymore.

Speaker A:

And that's the way I ended up doing it.

Speaker A:

But.

Speaker A:

But it's this trust.

Speaker A:

Like there's so much opportunity, Eric, to take all this stuff in and use it.

Speaker A:

But if it weren't for the lack of trust.

Speaker C:

Do I have a.

Speaker C:

Do I have a red flag?

Speaker C:

I want.

Speaker C:

Where's my flag, Sir?

Speaker B:

Eric is here.

Speaker C:

The whole problem with that, though, is that anything that is following you, listening to you, watching you is only getting the end result.

Speaker C:

And what somebody is actually paying for, hiring somebody like Dan to come into an organization while you're running things, is not the end result, but how you think about it.

Speaker C:

Because how you think about it informs what you actually do.

Speaker C:

It has no context to understand how you even got to the decision.

Speaker A:

Yeah, fair play.

Speaker B:

Have you ever seen, have you ever seen prompts AI?

Speaker B:

No.

Speaker C:

Yeah.

Speaker B:

Okay, so not prompts.

Speaker B:

The security tool prompts AI.

Speaker B:

So basically it teaches you how to prompt.

Speaker B:

Right.

Speaker B:

And uses every large language model, Right?

Speaker B:

So it's like crowdsourcing it together.

Speaker B:

But here's the interesting part.

Speaker B:

It shows you can go back and see how they got there, right?

Speaker B:

Like, and how they built what I'll refer to as the.

Speaker C:

Doesn't have that.

Speaker C:

We can't follow the trail and nor do we want to see the trail.

Speaker C:

No.

Speaker A:

Trust me.

Speaker A:

So my wife.

Speaker A:

My wife regularly asks me, how did you get there?

Speaker A:

Like, you jump from thought A to thought B, which are like thought A to thought kumquat, you know, and like, how did you get there?

Speaker A:

And it's a very long and winding path.

Speaker B:

Dan's DNA.

Speaker B:

So how did you get there?

Speaker B:

Right.

Speaker B:

And it would explain what it perceived as the steps, right.

Speaker B:

Which then Dan could look at and say, well, that's true, but what it's missing is this, this, and this.

Speaker B:

So then it's getting better, right?

Speaker B:

Iteration and just like walk it through that.

Speaker C:

I was sitting in the car one day, I was actually.

Speaker C:

I was driving to the local Starbucks listening to Take On Me, which all of a sudden triggered a thought in my head which solved the problem.

Speaker C:

I was like, sure, well, yeah, we'll hand that off to you.

Speaker A:

So you know what, this.

Speaker B:

And then you're going to find, like, your agent is going to start listening to better music.

Speaker B:

Right?

Speaker A:

But yes, definitely Take On Me.

Speaker A:

Very high quality.

Speaker B:

And then who defines what better is?

Speaker A:

But the.

Speaker A:

But, but, Eric, haven't we already gotten close to that with the idea of vlogs and the idea of people posting their every move and the things they do? There's already a potential for people who post, I made a cheese sandwich because I thought about a newspaper.

Speaker A:

You know, like, they're putting this kind of stuff out there.

Speaker B:

Or we have.

Speaker C:

The answer is no.

Speaker C:

The answer is no, because that's all predicated on that.

Speaker C:

It's predicated on it being an actual insight into how somebody's actually thinking, how somebody's actually acting, versus me altering what I'm doing because I know that it's going to gain eyeballs.

Speaker A:

The version of you.

Speaker A:

Yes, that's a very fair point.

Speaker A:

Yeah, true.

Speaker A:

That would.

Speaker A:

That would require the Instagram life to be a real life.

Speaker B:

Imagine the agent sitting there in the.

Speaker B:

In the tabletop, and they're like, so, Dan, how did you get there?

Speaker B:

Right?

Speaker B:

And the agent says, so my recommendation to everybody in this room because legal sucks is to go make a ham sandwich, right?

Speaker B:

Because it's reiterating like, Dan thought legal sucks in this situation, and it's taking that.

Speaker B:

And then it's like, and I recommend for this person to go make a ham sandwich.

Speaker A:

And everyone's like, as long as I'm the one guiding that, then it's going to be perfect.

Speaker A:

As long as I'm the one where my thoughts are the ones that can go back to legal and say, here's how you need to be better.

Speaker A:

This is going to work great.

Speaker A:

But now think about the arbitration between opinions.

Speaker B:

Dear God.

Speaker B:

But now let's kind of back out of the vacuum part though and go back to the fact.

Speaker A:

And we now know why Robocop went rogue.

Speaker B:

Yeah, yeah.

Speaker C:

But we have a statue now, so.

Speaker B:

But it, it's almost like robotic RPA.

Speaker B:

Right?

Speaker B:

Like if you can take some of that automation and process.

Speaker B:

Right.

Speaker B:

So it's the question to the agent.

Speaker B:

Right.

Speaker B:

In this case isn't how you got.

Speaker C:

There, but it's the self-creating macro.

Speaker C:

Okay.

Speaker A:

Yeah.

Speaker A:

This goes to this.

Speaker A:

This goes to this.

Speaker A:

We noticed you do this a lot.

Speaker A:

So we're going to create an automation for this to this and save you half a second.

Speaker A:

Exactly.

Speaker B:

Like, like what's the process and requirement?

Speaker B:

Right.

Speaker B:

Because this happened.

Speaker B:

What do we need to do next?

Speaker B:

And then what?

Speaker B:

Why do we need to do that?

Speaker B:

And what's the impact if we don't do that on time?

Speaker B:

Right.

Speaker B:

And it gives them all the.

Speaker B:

It's like, okay, great.

Speaker B:

Even if it was documented somewhere and they just didn't know where to.

Speaker B:

Even though they've been told they know where to get it.

Speaker B:

Remember, in the beginning you said, so this thing's going to follow me around, but it's only going to know the end game.

Speaker B:

What do our children do?

Speaker B:

They follow us around to learn.

Speaker B:

Right.

Speaker B:

And then understand the end game.

Speaker B:

But what they see.

Speaker B:

Right.

Speaker B:

And this is why words matter how we say things.

Speaker B:

The tone that we use when we say it has a big impact on their feeling of am I capable of doing this on my own?

Speaker B:

Because even though the way I said it to them was, I believe you're capable.

Speaker B:

The tone I used was you're never going to succeed at this.

Speaker B:

Right.

Speaker B:

The AI agent doesn't think that way.

Speaker A:

But it's impossible that it could.

Speaker A:

You could give it the same.

Speaker C:

I did not foresee this episode going to a parenting lesson.

Speaker B:

Continue.

Speaker B:

How do we parent our AI agent is what I'm asking about this.

Speaker C:

I'm going to go to automotive before Brian gets there.

Speaker C:

Just because I can.

Speaker A:

Is this allowed in the bylaws?

Speaker C:

It is.

Speaker C:

I think I have to pay a royalty.

Speaker A:

Yeah, I think so.

Speaker C:

Let's go back and look at the crème de la crème.

Speaker C:

The pinnacle of manufacturing operations in automotive, I think most of us would argue, is the Toyota Production System.

Speaker C:

Right.

Speaker C:

Toyota did it extremely well.

Speaker C:

Why?

Speaker C:

Because of their culture that was ingrained. Look at the companies that tried to take it: well, we see the physical movements of it, and we're going to try to replicate that.

Speaker C:

Did it work?

Speaker C:

No, it didn't work because the infusion of culture was never there.

Speaker C:

It's the same thing with AI that if you're just looking at the end result and trying to mimic that.

Speaker C:

It'd be the same thing if our kids became just purely robots of us based on what they see without any context and making it their own.

Speaker C:

God help us all.

Speaker B:

Agreed?

Speaker B:

Agreed.

Speaker B:

But let's go to.

Speaker B:

I want to say it's Ohno, right?

Speaker B:

You know, at the very like 81 years of age.

Speaker B:

Right.

Speaker B:

And he still said I had so much still to learn on the Toyota production system.

Speaker B:

Right.

Speaker B:

And his ability to teach people, etc as you're learning like in, in Toyota's principles, very much.

Speaker B:

Genchi Genbutsu: go and see for yourself.

Speaker B:

Learn and apply.

Speaker B:

Right.

Speaker B:

Go to the assembly line.

Speaker B:

Right.

Speaker B:

And this is what makes great leaders, right.

Speaker B:

Even if you're in security or you're in IT or you're in HR, the people that you represent, right.

Speaker B:

And the technologies you're giving them to be successful.

Speaker B:

I was just having this conversation with my wife.

Speaker B:

Like the hours spent doing this, this, this and this.

Speaker B:

It's like you do realize that there's a better way to do this.

Speaker B:

But the problem is, IT teams don't understand what your actual hands-on job is.

Speaker B:

Right.

Speaker B:

So like when we were talking about an AI agent before the AI agent follows you along, there's first somebody that comes in and does the interview with you like so what's your job?

Speaker B:

Right.

Speaker B:

What do you do on a day to day basis?

Speaker B:

Explain that.

Speaker B:

But they don't actually get to see like so what takes up the majority of your time?

Speaker B:

Well here, let me show you.

Speaker B:

Like, when my kids go to bed, from nine till midnight I am up reformatting all these spreadsheets, putting this in.

Speaker B:

I got to take this information from this team, put it into metric, then I got to take this and like I created a system to do it.

Speaker B:

But there's nuance when this happens and when this changes.

Speaker B:

Interesting.

Speaker B:

You know we could create a solution that could help solve this busy work problem and your ability to ask IT questions to get what's the output you're looking for.

Speaker B:

But that's something your team, your technology team should understand.

Speaker B:

What are the limitations?

Speaker B:

Like you know, I'm going to use Kelly as the example.

Speaker B:

Kelly, if you're spending an additional four hours or three hours a day doing all of this, right?

Speaker B:

And you guys wanted to get more and more business, then you'd need five more Kellys.

Speaker B:

But then when you bring those people in, you'd have to transfer Kelly's knowledge and understanding of how to do that and do it fast, right?

Speaker B:

Or else it's not.

Speaker B:

That's, that's a year, two year, three year curve, right?

Speaker B:

But if you had Kelly A, B and C, just like Dan, not the F Dan, but the A, B and C Dan, and you're able to take some of those processes to say, hey, we don't need to do everything Dan does.

Speaker B:

But there's certain key attributes that would take other people this long, right.

Speaker B:

To learn how to do that part, that creation piece, right.

Speaker B:

Of some of the documentation and change control and everything else that he puts together.

Speaker B:

And the why as he's able to teach that or explain that is very helpful.

Speaker B:

So in the Toyota principle way, right, the years it takes somebody as they mature through the organization to learn all.

Speaker B:

And this is why in the Toyota mindset, it was always don't remove somebody from the assembly line and just keep removing until everyone screams.

Speaker B:

And then it's like, okay, that's the speed we're going to operate with the least number of people to make the most amount of money.

Speaker B:

It was take the people and rotate them to all different processes, right, Every six months.

Speaker B:

This way, if someone isn't there, this person knows that make the worker more intelligent, right?

Speaker B:

So just like that agent that you're trying to make more intelligent, it's not to replace you.

Speaker B:

It's so that when that person's unavailable, can they do that?

Speaker B:

The problem is culturally, in humanity, the way we operate is, oh, but it can do most of what they do.

Speaker B:

We can get rid of that person, save some money, right?

Speaker B:

This goes back to like the Ford principles versus the Toyota way.

Speaker B:

And then when the Toyota way came out, everybody was like, huh, let's learn that.

Speaker B:

But we can't call it the Toyota way.

Speaker B:

It has to be our way.

Speaker B:

Well, we'll call it Six Sigma and we'll give people black belts because it's kind of like karate.

Speaker B:

Sounds cool.

Speaker B:

And I'm totally making fun of it right now.

Speaker B:

But the reality is the principles follow that same line.

Speaker B:

They just had to indoctrinate their own culture into it to get buy in.

Speaker B:

So I'm with you, Eric, but that training of Eric A and Eric B, right? Like, look at the value you have of all your security understanding, but you don't have four hours a day to be able to pass that knowledge down.

Speaker B:

Right.

Speaker B:

But if somebody could query Eric A to look for this or ask this.

Speaker B:

Right.

Speaker B:

And then you're just explaining why.

Speaker B:

It kind of reminds me of some of the solutions I've seen come out today, where it's solutions that manage different cloud architecture and so forth, and what it's telling you to do is generated by a computer.

Speaker B:

But they actually, those companies are putting people on the back end to say okay, the AI generated this, is that correct?

Speaker B:

So instead of the person spending all the time to search through it all, it's letting it generate it, they're reading it and then they're passing that on to the customer so they're helping with those processes.

Speaker C:

I'm not convinced, I'm not there.

Speaker C:

I think for me this goes back to the same argument that we had long ago, where it was told to me essentially that you could turn security into, you could create a single runbook that takes care of all incidents in security.

Speaker C:

And I would make the argument you absolutely cannot.

Speaker C:

Of course not.

Speaker C:

If you try to and you believe that you've gotten there.

Speaker C:

All it's going to do is breed complacency that you don't have to innovate because you think you have all of the answers in this book.

Speaker C:

As soon as we start to create the digital instantiation of a person that is now fixed in time.

Speaker C:

Right?

Speaker C:

Because it only knows what I know now without the curiosity to challenge, to learn, to grow, to change.

Speaker C:

That's what you really need to be mimicking. If you're trying to, let's say, you know, do Eric AI or whatever, if you're trying to mimic my value proposition, it is the curiosity to continue to challenge: the decision I made yesterday, today I look at and go, oh, that's crap.

Speaker C:

Right?

Speaker C:

You cannot mimic that yet in AI.

Speaker C:

But that's a difference that's super dangerous.

Speaker C:

Because I, I think what we're going to see is everybody is off to the races, and AI has a great ability to cut through the monotony, right?

Speaker C:

You talk about RPA; this is the next evolution, that hey, there's this, this fixed task where, you know, it's just:

Speaker C:

I hit the red button every day.

Speaker C:

Sure it can take over and do some of those things so we can free up humans to do something that is more value add.

Speaker C:

But what I see is you're going to get this productivity increase and then a huge frickin plateau which is going to bite us in the long run.

Speaker C:

Because what it's going to do is we're going to sit by and go, hey, the AI is already doing this process for me, therefore I don't have to challenge.

Speaker C:

I'm going to go do something else.

Speaker C:

Nobody's going to ask should it still be done that way?

Speaker C:

That's where you still need humans.

Speaker A:

Yep.

Speaker A:

The Rube Goldberg.

Speaker A:

You look 10 years down the road and you've got this amazing collection of processes that made sense at the time, but no one's ever looked at the holistic thing, which I think is an interesting place for the role of the, you know, the.

Speaker C:

Well, of the people overlord that watches the other AIs.

Speaker A:

Oh, eventually.

Speaker A:

But, you know, this notion goes back to manufacturing: the idea of the robot, and the robot replacing the human.

Speaker A:

But then the roles that the humans played turned into

Speaker A:

those who developed the robots, those who maintained the robots, and those who managed and, you know, dealt with the people doing the mechanical automation.

Speaker A:

We see the same thing here, where it is not an AI yet

Speaker A:

that's watching the AIs, but rather the enterprise architect mindset, the one who's sitting there and staring at it and going, big picture.

Speaker A:

If we move this and this and this, we can change this and make this more efficient, or this is now no longer, is no longer a necessary component of this.

Speaker A:

If we bypass this, you still have the need for the architect to look at the, look at the end, to end process and develop it and address it.

Speaker A:

The question is, can you, or should you use automation to, to make, you know, to make it happen.

Speaker A:

The, the other thing.

Speaker A:

You made a comment, Eric, about the idea that this is a point-in-time entity.

Speaker A:

And I disagree.

Speaker A:

I think this is a case.

Speaker A:

If you left it as a point in time entity, then the complacency could come in.

Speaker A:

But if you were to continue to let it learn, if you were to continue to feed it the new Dan thoughts, the new, the new Eric thoughts as things were going on, the new developments, the things we were working on, especially in those architecture ideas or the, the how we got.

Speaker A:

There's the commentary track.

Speaker A:

It wouldn't have to be a moment in time and it could provide a little more inference.

Speaker C:

Isn't that a fallacy?

Speaker C:

Dan, though they're not really learning.

Speaker C:

This is.

Speaker A:

No, they could take our learnings and incorporate them into their, into their methodologies.

Speaker C:

Essentially you're using an AI to create a knowledge base of things that you feed it.

Speaker C:

It's not really learning.

Speaker A:

Well, using the parlance that listeners will understand.

Speaker A:

Learning.

Speaker A:

No, it's not learning.

Speaker A:

It's just adding additional data sets to the.

Speaker A:

This is just a.

Speaker A:

What we call AI today is just a big data problem solved at scale.

Speaker C:

AI is just a mathematical way of finding data quickly.

Speaker A:

Yeah, agreed, agreed.

Speaker A:

It.

Speaker A:

But it's.

Speaker A:

But it's sucking up a lot of.

Speaker A:

Sucking up a lot of power and.

Speaker A:

And it is helping with automation.

Speaker A:

It is helping with some of those kinds of things.

Speaker A:

You know, can we count on it for too many things?

Speaker A:

I don't know.

Speaker A:

But the idea of sitting as a sidecar to me to remind me of the things I didn't do, that I don't have to go after and check and, you know, do a checklist.

Speaker A:

It does the checklist for me and goes, Dan, next you need to make sure to call the insurance company.

Speaker A:

You know, that kind of thing would be awesome.

Speaker C:

Yeah.

Speaker B:

Okay, now going back to, like, the definition of what work is.

Speaker B:

Right.

Speaker B:

What is work?

Speaker A:

Force times acceleration.

Speaker A:

No, that's.

Speaker A:

There's a, There's.

Speaker B:

There's a.

Speaker B:

There's a time part of the equation.

Speaker A:

My physics professor would be so upset with me right now.

Speaker B:

Yeah.

Speaker B:

Work is a force times the distance moved in the direction of that force.

Speaker B:

Right?

Speaker A:

Yeah.

Speaker B:

So when you break down, that's it.

Speaker A:

Not force times acceleration.

Speaker B:

Yeah.

Speaker B:

Force times distance.

Speaker B:

So then what does force equal?

Speaker B:

Right.

Speaker B:

Distance is measurable.

Speaker B:

Right.

Speaker A:

So forces, exertion, forces, forces.

Speaker A:

And acceleration.

Speaker B:

Yeah.

Speaker A:

I mean, well, in physics, it's a, it's a, you know, it's a calculable item.

Speaker B:

Second law of motion.

Speaker B:

So.

Speaker C:

Where'S he going with this?

Speaker A:

I don't know, but I'm eager to hear.

Speaker C:

Yeah.

Speaker B:

The second law of motion.

Speaker B:

So I gotta go grab some passport.

Speaker C:

Hold on.

Speaker B:

It is mass times acceleration.

Speaker B:

An acceleration has a time component to it, right?

Speaker A:

Yeah.

Speaker B:

So in all work, there's a certain time component.

Speaker B:

Most humans look at work as I go to work and do eight hours a day.

Speaker B:

And that eight hours a day equates to eight hours times a dollar per hour that I'm paid.

Speaker B:

And that's what I'm paid to do.

Speaker B:

Right.

Speaker B:

Finance looks at it that way, too.

Speaker B:

They don't break it down to say, well, technically, work equals, you know, all the way down to looking at force and mass times acceleration.

Speaker B:

But the point here is, Dan, you talked about the idea of having a sidecar, something that can remind you of this and that. To get to where that can do that today, the amount of data that's become available for it to understand Dan, right?

Speaker B:

All the different applications that have access to all the different things you use, tools, calendars, email, your communications, etc.

Speaker B:

Allows it to help do that for you.

Speaker A:

Right.

Speaker B:

But in order to do that, there's a certain amount of energy that is used in a very quick amount of time.

Speaker B:

But that energy consumption, is that energy consumption worth the output, the end result, like you said, Eric, that Dan gets?

Speaker B:

And then how do we measure really what that is?

Speaker C:

Are we talking about energy consumption as in energy creation from a power plant or energy consumption in looking at somebody's mind to accomplish something.

Speaker A:

Yes.

Speaker B:

Meaning the difference between the two.

Speaker B:

And this is where, from a human aspect, right, it's important we challenge ourselves to grow.

Speaker B:

Why do we continue to read books?

Speaker B:

To make the mind think about things in different ways.

Speaker B:

Why is it good to have teams of different people and different cultures?

Speaker B:

Because if you had teams of everybody that thinks the same way, you're not really learning and then you're all agreeing, you're trying to make it.

Speaker B:

You know, this goes back to a different Tolstoy quote.

Speaker B:

And it's the idea of challenging yourself to look at things in different ways, right?

Speaker B:

And that's where you learn from the mistakes you make, etc.

Speaker B:

Now, the agent or the agentic AI or this large language model, the compute power necessary and the energy consumption from Mother Earth.

Speaker B:

And I'm going to call her Mother Earth, right?

Speaker B:

Because there's a great book called the Matricide, the Killing of the Mother.

Speaker B:

And this goes into social science, the nurture versus Nature.

Speaker B:

Mother Earth's energy that we are consuming so that we can make decisions faster, better, smarter and replace ourselves.

Speaker B:

I think we're the only organism on Earth that's like, well, we're not trying to replace ourselves, but if we can make a lot of money doing it, why not, right?

Speaker B:

Like, not sure how everyone's going to survive once these giant data centers are cranking out all the answers I need.

Speaker B:

There's a really cool book, or movie for the kids, the one where they cut down all the trees and they make the hats, and then all the trees are gone, so then the guy sells them air.

Speaker B:

The Lorax, right?

Speaker B:

Like at some point thought we were.

Speaker A:

Calling it the Modern American Policy on Natural Forests.

Speaker A:

On national parks, we agreed not to touch politics.

Speaker A:

Sorry.

Speaker B:

Dan.

Speaker B:

What is a forest and why are we protecting it?

Speaker B:

Who needs it? Land could be used for other things, like opening up drilling and fracking.

Speaker A:

Carbon dioxide can be created by data centers.

Speaker C:

Think about this though.

Speaker C:

I agree with the sidecar analogy.

Speaker C:

Right.

Speaker C:

As I look at it, the use.

Speaker C:

I use Perplexity quite a bit.

Speaker C:

Why?

Speaker C:

Because perplexity allows me to.

Speaker C:

I can kick off some research over here instead of me having to go through Google, read everything and bring it back.

Speaker C:

I can cut through some of the noise and then I can go do something else.

Speaker C:

So there's a lot of productivity gains there.

Speaker B:

But I think interrupting just for a second.

Speaker B:

Hey, Dan, Eric just agreed with us that AI is just an awesome search engine.

Speaker A:

It really is just a search engine.

Speaker C:

Yeah, well, that's kind of where I'm going with it.

Speaker C:

Look at what an LLM is actually doing.

Speaker C:

It's mathematically trying to predict what's coming next.

Speaker C:

So if that's the case, then that's predicated on things, the sequence of events or the sequence that we've historically seen on language, on the connectivity, on data that does not lend itself to creating net new creative ideas.

Speaker C:

And kind of what Brian was talking about, that you read books to think differently about things that I think in many cases AI is going to further marginalize the nuanced ideas that are truly revolutionary and make big advances.

Speaker C:

Right.

Speaker C:

Then if we think about, like, the concept of bricolage, that says there's really nothing truly new.

Speaker C:

Right.

Speaker C:

The whole.

Speaker C:

It's just a rearranging of different things.

Speaker C:

Well, if everything we do is predicated on past norms of how we put things together and we become more and more reliant on that, do we start to remove the ability of people to be creative and think completely differently than we've thought in the past?

Speaker A:

Yes, I think I, I agree with everything you said.

Speaker A:

There is a.

Speaker A:

It's going to regenerate based on, based on what we've done in the past.

Speaker A:

I think you combine that with.

Speaker A:

There are some tendencies.

Speaker A:

This is not political.

Speaker A:

I think this is socio.

Speaker A:

Socio political.

Speaker A:

You, you see tendencies in modern society that are paralleling those of the end of the Roman Empire.

Speaker A:

How can I do things so that I don't have to do work and I can focus on things that are pleasurable to me?

Speaker A:

There's a hedonism.

Speaker A:

There's a hedonism that, you know, has shown up over the past 50 years in the world, but in the US quite heavily.

Speaker A:

And this feeds into it.

Speaker A:

You know, how can I offload work, what is work, so that I can go spend more time doing fun things or whatever.

Speaker A:

But you're right.

Speaker A:

I think it does stop innovation.

Speaker A:

It stifles the brain.

Speaker A:

The brain is what got us here.

Speaker A:

And we don't have, we can't offload the brain.

Speaker A:

The brain is a wonderful, magical item that we don't even fully understand yet, and as a result we can't expect that of something we've created.

Speaker A:

And you know, you remember I've been studying AI since undergrad, you know and, and not a lot has changed.

Speaker A:

We've just found out ways, found ways to better apply it.

Speaker A:

We have bigger storage and memory systems, right?

Speaker A:

Yes we did.

Speaker A:

We had electricity, we barely had electricity but we had the Internet.

Speaker C:

It was just in a van, it.

Speaker A:

Was very slow and dial up.

Speaker A:

But, you know, the advancements have been around the ability to accelerate processing time, the size of the data sets that can be used, and the presentation models. All of these things are not necessarily that we've done better at creating thinking technology; we've just done it a better way.

Speaker A:

We have a better way to inquire of it and to extract from it, you know.

Speaker A:

And so I, I, I again I agree with what you said Eric.

Speaker A:

It will only get us so far.

Speaker A:

So the question is as a sidecar it's a great idea because it just reminds me of the things I need to remember.

Speaker A:

It's a search engine search the checklist, tell me the things that didn't go.

Speaker A:

So it's like search engine plus and it's never going to derive something new.

Speaker A:

It's just going to help you find things that are really exciting or that are, that are identified or identifiable in a data set you've already given it.

Speaker C:

Depending on how you use it.

Speaker C:

Because I, I will counter my own point: being cognizant of what the over-reliance on it can create, if we can go in with that mindset and kind of curb how we're actually using it, I think that's where it can be creative, that it can be used as a catalyst where it can cut through some of the time I have to spend up front ideating, coming up with different things.

Speaker C:

Right.

Speaker C:

And bring it back for me to synthesize the macro vision, the macro ideas that it's pulled together.

Speaker B:

Yeah, I think about this just from the standpoint of, because some of the sub-agencies I work with, integrators, etc.

Speaker B:

The time it takes to put together an SOW.

Speaker B:

Right.

Speaker B:

And this is why they like well how serious is this customer?

Speaker B:

Because if we spend you know, 10 hours putting this together, just as like.

Speaker C:

You're not going to need that anymore.

Speaker C:

Hey, connect your agent with my agent.

Speaker C:

The AI will figure out the terms and just agree on it.

Speaker C:

We're good.

Speaker B:

It's crazy.

Speaker B:

Like, they're like, yeah, we can produce, you know, 20-page SOWs really quick, right?

Speaker B:

Take all the information requirements that the customer just said in the meeting that my AI agent was keeping track of.

Speaker B:

Like from a meeting note standpoint, great.

Speaker A:

Yeah, but that's not generation, that's not novel.

Speaker A:

That's just regurgitation.

Speaker B:

But then it takes the notes and dumps that into the requirements right here.

Speaker B:

Here's everything that the customer required here, Boom.

Speaker B:

Put together, right?

Speaker B:

Or the customer is looking at these four solutions.

Speaker B:

It's like, hey, I need you to pull together all four of those solutions.

Speaker B:

But particularly I want to compare these three attributes, right?

Speaker B:

I also need to know what type of integrations it has with these solutions.

Speaker B:

Because this is what the customer said. It pulls together all that information for me faster than me flipping back through my notes, pulling this up so they have this, this, this and that, and it just does it.

Speaker B:

Now I put this note up in our chat.

Speaker B:

The value lies.

Speaker B:

And this is Google telling me this.

Speaker A:

I didn't ask was it Google or.

Speaker B:

The Gemini because I was afraid they'd lie to me because they want to survive.

Speaker B:

Right?

Speaker B:

Like the iRobot.

Speaker B:

But I said the value lies not in reducing power computational energy, but in leveraging that processing power to achieve outcomes that are difficult, slow, or impossible for humans to manage alone.

Speaker B:

Yeah, now the "slow" I will agree with. The "difficult" is in the eye of the beholder, right?

Speaker B:

It's difficult.

Speaker B:

That allows us to then understand how to do things better and faster.

Speaker B:

And Eric, like you said, we need that impossible for humans to manage alone.

Speaker B:

Most of what people are using it for is not for the impossible task.

Speaker B:

No, it comes down to the definition of like, why are we here on Earth?

Speaker B:

Right?

Speaker B:

Here's the kumbaya moment, right?

Speaker B:

Are we here to make humanity better and make Mother Earth last longer?

Speaker B:

Right?

Speaker B:

Are we here to try to make more money?

Speaker B:

Right?

Speaker B:

And like, at the end of the day, the decisions that the boards and people are making are very much about money, right?

Speaker B:

And it's, it's fascinating because like, like a yen rate changes, the US dollar, the strength of the dollar changes.

Speaker B:

Etc.

Speaker B:

It's like, huh?

Speaker B:

The strength of the dollar just weakened by X. Mr.

Speaker B:

Employer, I'm gonna go ahead and need a 5x pay increase because the dollar's not as strong anymore.

Speaker B:

And that's what you pay me.

Speaker B:

My strength hasn't changed.

Speaker B:

I'm still Brian, and I got Brian A, B and C, too, lined up.

Speaker B:

So come on, let's go.

Speaker B:

Right.

Speaker B:

If that's what we're truly like.

Speaker B:

Kumbaya moment.

Speaker B:

This whole solution we've created eats a lot of Mother Earth.

Speaker B:

What are we here on Earth to do?

Speaker B:

Is it to protect Mother Earth?

Speaker B:

Right.

Speaker B:

Is it just to make money?

Speaker B:

Well, and don't ask the AI Dan.

Speaker B:

I see you typing into Dan A, B and C to see what it has to say.

Speaker A:

I'm automatically connected, like the people in Pluribus; they already know all the AIs of me now. Which, by the way, is a phenomenal show.

Speaker A:

Really interesting concept and very thought provoking.

Speaker C:

I mean, going back to Brian's micro segment on parenting like this, this is one of the big impacts that I see.

Speaker C:

I think it's going to exacerbate what we see, the effect of social media on younger generations.

Speaker C:

Right.

Speaker C:

As we look at the younger generations, most of their interactions now come on a phone, right?

Speaker C:

That we can, we can't have a face to face conversation.

Speaker C:

I'll hit you up on.

Speaker C:

I don't even know what the latest social media is that we're using.

Speaker A:

I'll just text you from the other room in the house.

Speaker C:

Yeah.

Speaker C:

And.

Speaker C:

And now you infuse AI into that.

Speaker C:

That I think it's going to further.

Speaker C:

It has the potential to further push some of those generations down a technical rabbit hole, where they become so isolated by technology that you start to lose some of those human elements.

Speaker C:

Unless we make an honest effort to start pulling them back and putting them in a room so they can learn to actually have a conversation.

Speaker A:

Yep.

Speaker A:

No, totally agree.

Speaker B:

Yeah, but I'm there with you on the parenting part and like the conversation we were having earlier.

Speaker B:

Right.

Speaker B:

You know, I'm in the car with my children and one of the children says to me, you know, well, dad, you're right.

Speaker B:

You know, mom should have sent that to you in the calendar.

Speaker B:

So you knew.

Speaker B:

And it's like, well, not exactly, sweetie, I love that you're sticking up for me right now, but here, you know, and then you have to explain the why.

Speaker B:

Right.

Speaker B:

But without doing that, you know, I could have been like, no, you're absolutely right.

Speaker B:

Right.

Speaker B:

Because I think I'm right.

Speaker B:

But then the model I'm training is somewhat wrong.

Speaker B:

Right.

Speaker B:

Like this isn't about right or wrong, sweetie.

Speaker B:

This is, this is about us learning to work together better from a communication.

Speaker B:

I failed to read all the notes.

Speaker B:

Right.

Speaker B:

That's on me. Now, if I had, if I had Brian B.

Speaker B:

That reads all of my text messages and then brings Kelly's to the top, right?

Speaker B:

Like, what are the priority messages? Kelly number one, then Eric slash Dan.

Speaker B:

Right.

Speaker B:

I don't want to discount you guys while we're on together.

Speaker B:

Right.

Speaker B:

And then everybody else, then it would have told me the things I was supposed to do last night, which I totally failed to do.

Speaker B:

Right.

Speaker B:

But your children watch your interaction.

Speaker B:

They see that and they learn and they also feel for you too.

Speaker B:

When they feel like one of you got in trouble, they want you to feel better, just like you do for them.

Speaker B:

So they try to say it's okay, dad, you know, you know, dad should have done this or mom should have done that.

Speaker B:

It's like teaching moment.

Speaker B:

No, and here's why, right?

Speaker B:

So it's those human interactions that make us better, closer because we're here on earth for humanity, civility.

Speaker C:

And that's where, like, I'm going to take a shot at the default settings of, what I'll say, the LLMs that are more pervasive today, right?

Speaker C:

Because I think that becomes an issue because like just say like Brian.

Speaker C:

Well, no.

Speaker C:

Well, have you ever heard, have you ever put anything into any of the AI tools that has told you that? Every one that I have used is a complete echo chamber for whatever I give it, unless I explicitly tell it.

Speaker C:

I want you to play a different role and challenge me.

Speaker C:

And I think that's also, that's creating a, a huge issue if you look at now.

Speaker C:

So I picked on the younger generation; now pick on the older generations that frankly did not grow up with the Internet.

Speaker C:

And I almost feel bad for them, because they never really learned how to use it.

Speaker C:

And that's how you get that people are eating babies.

Speaker A:

Mom and dad.

Speaker A:

He's not talking about you, I promise.

Speaker A:

I was talking to my parents.

Speaker C:

Oh, okay.

Speaker C:

But if you think about that, I feel like that's also one of the ways we've missed the boat: we put this technology out there and we go, ah, well, we're just going to give it a blank Persona.

Speaker C:

Hey, here you go.

Speaker C:

Go figure it out.

Speaker C:

Like, I was using Perplexity to come up with a team-building exercise for some leadership meetings that we were having.

Speaker C:

And I had found something super cool to do with Legos a couple weeks ago.

Speaker C:

So this is where there's benefit, right?

Speaker C:

Because they came up with creative ideas.

Speaker C:

I'm like, oh, shoot, I hadn't thought about that.

Speaker C:

Fast forward a couple Weeks.

Speaker C:

And I'm going, I can't quite remember how it teed that up, how it framed it up.

Speaker C:

So I just, I put in one of the prompts like, hey, you know, a couple weeks ago you told me about this Lego exercise.

Speaker C:

It spit back to me as if it totally remembered what it had told me.

Speaker C:

And I go, well, that, that isn't really how I remember it.

Speaker C:

And it literally comes back, oh, you got me.

Speaker C:

I don't have access to that.

Speaker C:

I can't remember that.

Speaker C:

But totally gave me the information as if it was something that.

Speaker C:

Yeah, this is totally what I just fed you.

Speaker C:

Right.

Speaker A:

So that begs the question.

Speaker A:

Two things come to mind.

Speaker A:

One, should a system like this require a level of test of confirmation?

Speaker A:

Because you know, you have to go, you have to use it with, you have to take the results with a grain of salt.

Speaker A:

You have to be aware enough.

Speaker A:

And people say, oh yeah, I understand it's a dumb system and it hallucinates, but should you have to prove that you really understand that before you're allowed to use it?

Speaker A:

And then conversely.

Speaker A:

And then conversely.

Speaker A:

One sec, one second.

Speaker A:

Conversely, the.

Speaker A:

Think about the motivators of all the.

Speaker C:

Systems you're using that actually works to hold Brian up.

Speaker C:

So just.

Speaker C:

Yeah, keep an eye on how.

Speaker A:

One second, Brian, hang on.

Speaker A:

One more thing.

Speaker A:

The, the motivators of the systems are not necessarily about giving you the answer.

Speaker A:

Eric, you talked about this, that, you know, the responses are being very echo chamber, very what-you-want-to-hear.

Speaker A:

Think about it in terms of these little tidbits that are coming out about OpenAI potentially putting ads in.

Speaker A:

Guess what, we're back to engagement.

Speaker A:

I'm going to tell you what you want to hear so you continue using it so I can show you more ads.

Speaker A:

These are two sides of the coin that need to be taken into account with using the system.

Speaker A:

Brian.

Speaker B:

It's almost like we've gone.

Speaker C:

No, I'm just playing.

Speaker B:

It's almost like we've gone beyond that with.

Speaker B:

And what I mean by this is the checks and balances with the way social media works today where it's completely almost replaced true reporting.

Speaker B:

Because once something's out there and it's been viewed before it can be taken down.

Speaker B:

It's in someone's head, right?

Speaker B:

And it could take months, years to get someone to change that.

Speaker B:

And like, so when people talk about hack the elections, like, well, they were in the computers, right.

Speaker B:

Like I heard they, they voted for, for that other president, right?

Speaker B:

It's like the.

Speaker B:

You just say, seriously, but, but they don't understand that, like, right?

Speaker B:

I'm like, no, they're in your head.

Speaker A:

But this.

Speaker A:

Should you be back to there?

Speaker A:

Should you be licensed, should you need to be licensed to use the Internet?

Speaker A:

And I think it only gets bigger: should you be licensed to be able to use some of these public systems, to know that you're going to take the responses in the right way based on the maturity of the technology underneath it?

Speaker B:

But that's where I'm going even further now.

Speaker B:

Like, it's almost been accepted.

Speaker B:

Like, well, there's no slowing that down because we just got to keep that stuff pumping out.

Speaker B:

Like, we're not going to let any government control say that we just can't let things be blasted out there on all these different platforms, then it's okay.

Speaker B:

Well then with AI though, if this is Dan 2.0, right, or whatever it is that's feeding us information or results or knowledge, right.

Speaker B:

What are the checks and balances?

Speaker B:

And this goes back to the conversation we had when Elon was saying, well, no, some of the stuff's fixable.

Speaker B:

If there's gaps, we just fill it in with the right information.

Speaker B:

And if there's stuff that shouldn't be in there, that's bad, we just, we just delete it.

Speaker B:

Right?

Speaker B:

It's like, that's like saying we're gonna burn the books that we don't want people to read because we think those are bad books.

Speaker A:

We've never done that in our society ever.

Speaker A:

Brian.

Speaker B:

No, exactly.

Speaker B:

That's why.

Speaker C:

So I'm taking a different approach, Dan, because I, I like where you're heading.

Speaker C:

I don't think it's a license to use, but I think we need to think in the context of what are the default settings, right?

Speaker C:

And this is kind of why I led with this, that it almost.

Speaker C:

If we think about the AI, it's all around grounding, right?

Speaker C:

The more grounded something is, the more tightly knit it is to the data set that we know is of some truth.

Speaker C:

The less we ground it, it starts to go off on its own and starts to create its own truths and stuff like that.

Speaker C:

I almost feel like that it's, it's got to be kind of an unlocking through learning, right?

Speaker C:

That you start very tightly knit: hey, these are the sources that are truth, until you can prove that you can do something with that.

Speaker C:

And then, okay, we're going to start to let the reins off a little bit and allow it to start to creep outside of that.

Speaker A:

So since we've seen.

Speaker A:

So since we've seen how well gamification works in things.

Speaker A:

The idea of when I start a video game I only get these three capabilities until I've proven I can use them and then I get more and.

Speaker C:

More and more and only allowed to pull from encyclopedias.

Speaker C:

It gets grounded.

Speaker A:

There you go.

Speaker A:

And with that, unfortunately, we're out of time.

Speaker A:

Thanks Eric and thanks Brian.

Speaker A:

This is a great discussion.

Speaker A:

And someday we'll just put AI Eric and AI Brian and F Dan into a virtual Zoom room and let them make the podcast on their own, and it'll sound like a bunch of Furbies just talking to each other.

Speaker A:

But for now, we'll see the both of you in two weeks.

Speaker A:

Thanks to everybody for listening.

Speaker A:

Thanks for viewing.

Speaker A:

We love getting your feedback.

Speaker A:

You can send us a message via email, if you still use that antiquated email technology.

Speaker A:

To securitydebate@distillingsecurity.com. You can get all of our episodes on distillingsecurity.com and on YouTube at youtube.com/@greatsecuritydebate.

Speaker A:

And you can not find us on Twitter.

Speaker A:

We're there, but we don't use it much.

Speaker A:

I don't think anybody calls it Twitter anymore, and that's part of the problem.

Speaker A:

But anyhow, we'll see you again on the next Great Security Debate.

