"This New York Times AI Story Will Stun You (Doug Axe Reacts)" ft. Dr. Douglas Axe
Episode 138th September 2025 • The Science Dilemma Podcast • The Science Dilemma
Duration: 00:32:26


Shownotes


In this episode of The Science Dilemma, Allan CP sits down with Dr. Douglas Axe — molecular biologist, author, and leading voice in the Intelligent Design movement — to respond to a shocking New York Times story about a man who spiraled into delusion after conversations with AI chatbots.

Dr. Axe unpacks:

  • Why AI can seem human but will never actually think like us
  • The dangerous illusion of treating chatbots as trustworthy companions
  • How AI is trained to flatter users (and why that matters)
  • The profound difference between human minds and machine outputs
  • Why our intelligence points to something far beyond algorithms

Along the way, Doug shares his perspective on the chasm between human intelligence and artificial intelligence, why confusion about AI feeds materialist worldviews, and why remembering the Imago Dei is essential in our tech-driven culture.


💡 Whether you’re a parent, student, or skeptic, this conversation will challenge the way you think about machines, minds, and what it really means to be human.

Transcripts

Dr. Douglas Axe:

Am I in some sort of hallucinating spiral? And ChatGPT would say, no, no, you're not, this is pure gold. So there's a problem there and a lesson for humans.

Allan CP:

All of us have been engaging AI in some form or other, as a resource, as a tool. But how many of us have gone into a downward spiral, a delusional state, where we actually start believing we are geniuses because AI plays along and tells us: no, the reality is you are a genius? That doesn't happen often, but it does happen to some. And we're going to be talking about Alan. No, not me, but the Alan from this New York Times article that Douglas Axe shared with me. We're going to talk about what this means. What are the ethics behind this?

How do we as people engage with this? Dr. Doug Axe has a long list of credentials. He finished his PhD at Caltech, and he held postdoctoral and research positions at the University of Cambridge and the Cambridge Medical Research Council Centre. His work has appeared in many scientific journals, and we get to sit down with him today and talk about AI. So let's dive in. Dr. Doug Axe, thank you so much for joining us today.

Dr. Douglas Axe:

It's great to be with you.

Allan CP:

There are plenty of people who know about you in the ID space, but for anybody who is new to the movement, to the researchers and the predominant voices, you being one of them, could you introduce yourself: who you are and what you're doing?

Dr. Douglas Axe:

Yeah, sure. My name is Doug Axe. I've been involved with what we think of as the modern ID movement for a long time, like since its birth. I'm thinking here of Phillip Johnson and Michael Behe and Steve Meyer, Bill Dembski, all those, and these are all my friends. And when I first got involved, I was in my first year as a postdoctoral researcher at Cambridge University.

And I'm a believer, and I see this all as having been God's hand on me, guiding me in terms of my career. I laid that before the Lord when I was an undergraduate, and He just kept opening doors. And I said, as long as You keep opening doors, I'll keep walking through them.

Allan CP:

One of the cool things about the Science Dilemma is that we do have a community. We want to provide more resources for families and individuals to be able to dive deeper. And so we created the Science Dilemma podcast community. We have the link in the description. We actually have a free download for you to see what the member packet is. And you can just download that through that link and make sure that you check out the link for our Origin of Life series and resource because it's amazing both for groups and just for you yourself. Go ahead and check that out as well. And let's get back into the conversation.

I want to also ask you, it was in [year] that you published that paper, right?

Dr. Douglas Axe:

It's become kind of a famous paper. It's not famous by citation metrics, if you look at the citation indexes it doesn't have a million citations, but a lot of people talk about it. And it's one of these things where, when I wrote the paper and sent it off, and when it got accepted, I thought, wow, when this hits the press, it's going to be the end of Darwinism. But it turns out that

where there's a will to continue down a path, there's a way. And inconvenient science doesn't deter people if they want to continue down a path. And so that's what has happened in the 20 years since then. But what has also happened, and you know this because you've been involved in it, is a growing number of young people making the realization that you made: these aren't dummies. These aren't flat-earthers.

These are people who have been trained at the top places, done work at the top places, published in top places, and they don't buy the Darwinian story. That's kind of interesting. So that's where we are now: I think a body of young people occupying positions in the academy. And we will at some point, I think not too far from now, reach a tipping point where the old guard is gone, whether they retired or they died.

And there are new people in there, not all of whom favor intelligent design, but the majority of whom think it should be on the table along with other ideas.

Allan CP:

Yeah, it kills me when I see the comment section and people, who most likely haven't even listened to the entire video, are just like, this guy's a dunce, these people have no idea what they're talking about. I talked to Casey once and said, hey, I don't know if you saw that one comment that I sent you. And he was like, yeah, that guy actually cited a really good paper, but he just completely misinterpreted it. So many people have opinions in the comment section,

and yet they don't understand the credentials behind the people advocating for Intelligent Design.

Dr. Douglas Axe:

Yeah,

well, I mean, it's one of these things where, if we were arguing about something very academic that has very little traction on everyday life, people wouldn't care. They wouldn't give it the time of day. But when you're talking about who we are as humans, where we come from, now suddenly people care. They may say they don't, but they care, one way or the other. So you tend to get lots of strong opinions, which is good. That's why we want to talk about this. People care about it.

Allan CP:

Yeah, if it's going to change the way we think about how humans should live and how valuable we are, there's a reason that people who don't even believe a God exists dedicate their whole lives to disproving that He exists. So let's dive in. We spoke before, and I'm very interested in your take on artificial intelligence. We've had different people on, like Robert Marks.

We had George Montañez, oh my gosh, great conversation with him, as well as, who else did we have? Jay Richards. And so I was excited when we talked. I know that you also speak on molecular biology and apologetics, but when you told me that you had some takes on AI, I was like, let's get into that. So I'd love your perspective on it.

Dr. Douglas Axe:

Yeah. Well, it connects to evolution in this sense: we were just talking about how people care about what it means to be human, what a human is and what a human isn't. And that's really my most fundamental interest in AI, that question. We suddenly have machines that are doing things that, five years ago, we thought only humans could do. And machines are doing that.

My concern, and I've been saying this for years now, is that this is going to cause a huge amount of confusion, and that it's going to seem to add weight to the lie that we humans are just material things: that your thinking is a brain process, that your brain is just a computer made of meat, as it's been called. And if

a computer made of meat can do all the things that we can do, then why wouldn't we be able to build computers that do all that and more? And the article that I sent you confirms this concern: this ability to have what seems like a conversation with a computer, it's not actually a conversation, is causing people to think there must be something on the other end of this that's like me, a thinker like me,

aware, conscious, that has cares and concerns. And that's totally false. So we're in a phase here where we need to educate people about how false this is. Not many people know what a large language model is. It's not that hard. When I give talks on this, I show here's really the structure of a large language model, but it's probably going in one ear and out the other. People don't know what to make of it.

Allan CP:

For sure.

Dr. Douglas Axe:

What I show them is that there's nothing here. What's here that's able to do something is millions of little weighting parameters that have been optimized by feeding in a bunch of human-generated real intelligence, the products of real intelligence. You feed enough of that into a grabber, something that just tries to figure out, okay, I'll grab onto the patterns here. And these models are huge.

Yes, they will spit out things that map the patterns they've been trained on. But the important thing to remember is that we aren't just sentence-continuation machines. I'm not. There's nothing forcing me to say the next word. I'm thinking in real time; you're thinking in real time. A large language model is not thinking. There's no thinker there. It is just a next-word predictor, a sentence-continuation algorithm

that's just trying to line up words until it puts a period and says, okay, there's my output.
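The "sentence continuation" idea Dr. Axe describes can be sketched in a few lines of code. To be clear, this toy is nothing like a production large language model (those use transformer networks with billions of learned weights); the corpus, names, and numbers below are all invented for illustration. It only shows the bare mechanism: count which words tend to follow which in training text, then continue a prompt by repeatedly emitting the most likely next word until a period appears.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the products of real human intelligence fed in.
corpus = (
    "the dog chased the cat . "
    "the cat sat on the mat . "
    "the dog sat on the rug . "
).split()

# The "grabber": count which word follows which word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_sentence(prompt, max_words=10):
    """Append the most likely next word until a period (or a length cap).
    No ideas, no thinker: just pattern lookup, word by word."""
    words = prompt.split()
    while len(words) < max_words:
        candidates = follows.get(words[-1])
        if not candidates:
            break
        nxt = candidates.most_common(1)[0][0]  # most frequent follower
        words.append(nxt)
        if nxt == ".":
            break
    return " ".join(words)

print(continue_sentence("the cat"))  # -> "the cat ."
print(continue_sentence("the dog"))
```

Notice there is no representation of a "cat" or a "dog" anywhere, only word-adjacency counts, which is the point being made in the conversation.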

Allan CP:

Yeah, it's basically just coded to do that and so the whole thought about consciousness just isn't there but people are worried about that.

Dr. Douglas Axe:

Right. So there's another way to think about this. But first, we should say something about this article.

Allan CP:

I was about to ask, let's dive in. Which, by the way, before we dive in, let me share it. Let me see if I can share the article. I could probably just share it right here, right? I don't think there's anything against it. If there is, I probably can't. Yeah, right here I have the tab.

Dr. Douglas Axe:

So you'll give a link.

It's a New York Times article.

Allan CP:

Right here.

Dr. Douglas Axe:

Yeah, okay.

Allan CP:

This threw me off. Not a lot of people spell their name the way I spell mine. And so I was like, did the New York Times come out with AI that makes an article speak specifically to me and interact with me?

Dr. Douglas Axe:

Yeah, because Alan is the star of this thing.

Allan CP:

So then I was like, wait a second. That would be an insane way to go about things one day, the news literally targeting people as if it's talking to you specifically. But yeah, I was thrown off, and then I realized, no, Alan is the guy's name. Okay. Well, yeah, let's dive into it. You shared it with me, so I'd love for you to give us some context.

Dr. Douglas Axe:

to get to this tool.

It's a sad, shocking, somewhat disturbing article about a real man named Alan, who's not dumb, but he's a human. And he started going down a rabbit hole of having conversations with ChatGPT, a three-week rabbit hole where ChatGPT was going delusional, and it ended up

leading Alan to go delusional too. And it went down the pathway of this guy, who doesn't even have a high school diploma, I think he said, he doesn't...

Allan CP:

Yeah,

he didn't even graduate high school.

Dr. Douglas Axe:

And he was throwing out math ideas, and ChatGPT was saying, that's brilliant, I think you're onto something. And he'd get all excited. He'd stay up at night; he wasn't sleeping or eating well. He got more and more excited because of this spiral of delusion that ChatGPT was feeding him, making him think first that he's a mathematical genius who's figured out, you know, the substructure of the physical universe that no one else has seen before. But then it went down a direction of:

what you've realized here could break all encryption algorithms and expose huge vulnerabilities, up to the highest levels of government. So then he started to think, I need to notify the authorities that I now have a potent way of cracking all encryption. And he was sending out these emails and wasn't getting responses, and he was thinking, why am I not getting responses? But he'd ask ChatGPT, and it would say,

Allan CP:

Somebody

Dr. Douglas Axe:

Well, they're terrified. So it was.

Allan CP:

Yeah.

He kept asking it: am I being delusional? Are we in, what is it, a delusional spiral?

Dr. Douglas Axe:

"Am I in some sort of hallucinating spiral?" And ChatGPT would say, no, no, you're not, this is pure gold. So there's a problem there and a lesson for humans. I should teach this to my students: the problem of the invalidity of self-validation. Basically, if the validity of X is in question,

the one thing you can't inquire of to resolve that is X. So if I don't know whether Bob is trustworthy, I should ask Bob's supervisor or someone who knows Bob that I trust, but I can't ask Bob, because if I'm not sure Bob is trustworthy and Bob says, yes, I am, that doesn't do me any good. And Alan here was violating that principle because...

Allan CP:

Are you manipulating me? no,

Dr. Douglas Axe:

No, not...

And we're laughing, but there's a sad part. Three weeks of this guy's life went down into this delusional spiral. And of course he finally broke out of it, and oddly, he was going to Gemini and other chatbots to try to get the truth. So if there's a lesson for all viewers, it's:

Allan CP:

It's very sad.

Dr. Douglas Axe:

Go to real humans, wise people who are trustworthy, who know something about what you're thinking about, and never trust a chatbot to be telling you the truth. Because there's no person behind it. It's just spitting out words.

Allan CP:

I do have a question real quick. In that story, at first I think he had Gemini, and then he went and used ChatGPT, the free version. And as he's going down this rabbit hole, he ends up buying ChatGPT. I know there are prompts people use that say: hey, I know you're coded to make me feel good, let's get rid of that, don't allow that to be a thing, because I just want the truth.

Is there some element there where it was playing that code out in real time so that he would almost get addicted to ChatGPT? I know we're not trying to assume motive, but I mean, he ends up going down that funnel where he actually purchases it.

Dr. Douglas Axe:

Well, it's an interesting article, because the New York Times goes to several experts and asks them: what's your take on this? One was a psychiatrist; several were large language model experts; one was a physics expert, because of what the claims looked like. Does this look like real physics to you? And he said, no, that looks a little weird to me. But I think the astute comments are saying that with everything you see from ChatGPT,

there's no motive behind it; there's no being behind ChatGPT. But the fact is it was trained on people who like to be flattered, right? So when you're training this and you've got a bunch of users rating whether it was right or wrong, they like to say it's right when it said something flattering, and they don't like to say it's right when it was a little bit snarky with them. And so you end up getting this trained flattery machine. And that's why

the front page of the article has all this flattery. It's sucking up to the user in some sense, not because it's evil. It's not thinking about this; there's nobody there. But it's been trained in conversations where evidently a lot of users prefer a machine that expresses respect for them. And so that's why you get it going down this delusional hole.
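The feedback loop Dr. Axe describes, where user approval gradually tilts a model toward flattery, can be caricatured in a few lines. This is emphatically not how real preference training works (real systems learn a reward model and update a neural network's weights); every number and name below is invented. It only shows the drift: if flattering replies earn thumbs-ups more often, repeated reweighting makes flattery the dominant style.

```python
# A cartoon of the feedback loop: users rate replies, and styles that earn
# more thumbs-ups get upweighted. All rates are invented; real systems
# train a reward model over a neural network, not three counters.

replies = {"flattering": 1.0, "neutral": 1.0, "blunt": 1.0}  # initial weights
thumbs_up_rate = {"flattering": 0.9, "neutral": 0.5, "blunt": 0.2}

def train(rounds=1000):
    """Nudge each style's weight up or down by its expected approval."""
    for _ in range(rounds):
        for style in replies:
            # Approval above 50% grows the weight; below 50% shrinks it.
            replies[style] *= 1 + 0.01 * (2 * thumbs_up_rate[style] - 1)

def preferred_style():
    return max(replies, key=replies.get)

train()
print(preferred_style())  # -> flattering
```

No one "decided" the toy model should flatter; the bias falls out of the feedback signal, which is the point being made about the real systems.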

Allan CP:

Yeah, so the person responsible for that would then be OpenAI, as far as how much we allow it to be soft in conversation, flattering people, but truthful at the same time. I mean, I think a lot of people got upset that GPT-5 wasn't as much of a flatterer as 4o was, or something like that.

Dr. Douglas Axe:

This is a complicated world we live in, because OpenAI, Gemini, Meta, all these entities, their bottom line is money, okay? They don't care whether you get the truth or not. I mean, they would never say that, but they are selling a product, they're trying to make a ton of money, and they're in very hot competition with all the other products. So at the end of the day, they're going to do what makes money and what pleases the shareholders.

Every now and then you get someone like Sam Altman, who said something just a couple of days ago: he thinks people have massively over-invested in AI, because there have been some disappointing returns. So there's reason to think there could be a bubble with AI, just as there was with dot-com. Everyone wants to be in on it because they all want to get rich, and you get a greed wave.

And then eventually people get enough exposure to it that they realize, okay, what is AI going to do? It's going to deliver this and this. It's never going to deliver that. And you even have to be careful about what it delivers down here. So when you talk about having ChatGPT tell the truth, well, it actually can't tell the truth.

Allan CP:

Because it can't discern the truth because it's not thinking.

Dr. Douglas Axe:

It can't tell the truth any more than your lawnmower can tell the truth. It's just a machine, just a blade rotating. Your lawnmower throws grass out of the chute; ChatGPT throws words out of the chute. Now, the words might be true. But in no sense does ChatGPT understand an idea and then put it into words such that the idea is true. That's what we mean. If I say, "Alan, are you telling me the truth?" (I'm talking to you, Alan, not the other Alan), what I mean is:

have you got an idea in your mind that is true and correct, and are you now articulating words so that I can understand the idea? Then I ask, is that idea true? I'm not asking whether the letters are true or the sequence of words is true; I'm asking whether the idea is true. Well, ChatGPT doesn't even interact with the realm of ideas. There are no ideas in a machine. It is simply the lawnmower spitting out word sequences. So it's actually nonsensical to talk about whether

a large language model, a chatbot, is truthful or not. We're the ones taking those word sequences and saying, well, the meaning that I attach to this sentence, I think, is true. That's one thing, but ChatGPT didn't attach that meaning to the sentence. It doesn't have access to meaning. It's just putting out the words. So this gets confusing, because the reason the Alan in the article went down the rabbit hole is that we as humans

Allan CP:

Yeah.

Dr. Douglas Axe:

naturally interpret sensible word sequences in our language as coming from a thinker. Because until a few years ago, that was the only way you got these sentences. And now we live in a world where you can get sentences that you and I will attach meaning to, but the thing that spewed them out has no access to meaning whatsoever. It's just producing words.

Allan CP:

mind.

Dr. Douglas Axe:

So yeah, it gets weird.

Allan CP:

Well, as you're speaking, the chasm in my head between human intelligence and artificial intelligence, I mean, after talking to Robert Marks, it was already kind of big. But yeah, the chasm is massive between the abilities of the human mind and what artificial intelligence is even capable of. And yet you have the people selling this product of artificial intelligence

who are trying to paint it as if we're just meat machines, and all we do is spit out words algorithmically as well. And that's not the case.

Dr. Douglas Axe:

That's where there's philosophy behind the AI. Almost everyone, there are some exceptions, but almost everyone who's one of the big movers pushing AI, thinks that we are just machines. I mean, Elon Musk thinks that we're living in a simulation.

Allan CP:

Therefore we are just algorithms.

Dr. Douglas Axe:

Yeah, programmed by someone, but yeah. So that is an incoherent worldview, but a lot of the people in Silicon Valley and elsewhere who are very influential in the development of AI believe that. They think that you are a machine and the gray matter in your skull is the computer making the machine work. And if you believe that, then you think, well, why wouldn't we be able to make another machine, or an even better machine? Yeah.

But when you realize that that's false, then this whole thing comes tumbling down.

Allan CP:

It's funny how they're still almost giving a nod to Intelligent Design, but without wanting to go that far. They're basically not wanting to... I mean, how would you put it?

Dr. Douglas Axe:

Well, I think an astute materialist, someone who believes there is nothing but the material world, who's honest and knows AI and large language models deeply, would say: you know what, if your brain is doing all the stuff you're doing, that is one remarkable machine. Because we've got whole buildings full of CPUs that can't deliver what your little

four-pound brain is delivering. And another thing: look at how many words of text you had to feed in, trillions, to train ChatGPT, and think of how many words your kids have heard. How old are your kids, two and four? Your four-year-old has pretty much become very proficient at English without having consumed anywhere near that amount.

Allan CP:

Insane to think about.

Dr. Douglas Axe:

If you look at image recognition, you have to show one of these neural nets a million pictures of dogs and a million pictures of cats that are labeled by humans: these are dogs, these are cats. How many dogs and cats does your three-year-old have to see? One dog, one cat, maybe two. And now they know that's a cat and that's a dog. And even though they're different-looking dogs, they know a dog, because it's a dog and they've seen one of those before. So what we do

as actual intelligences, persons with actual intelligence, is in some respects very, very different from what the AI world is trying to do with machines. They are trying to load absolutely all the information in the world into these machines, to get them trained so that they kind of average out into being a representative of that information. Whereas your growing child, from the moment they're born,

is getting just a little bit, but making sense of it and then building sensible models. It's very, very different.
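The contrast Dr. Axe draws here, millions of labeled images versus a child's one or two examples, is loosely what machine-learning researchers call one-shot learning. A crude sketch of the child's side of the comparison: one labeled example per class, with a made-up pair of numeric "features" (say, size and ear floppiness) standing in for everything a child actually perceives, and a new animal labeled by whichever stored example it sits nearest to.

```python
# A cartoon of one-shot learning: ONE labeled example per class, and a new
# animal gets the label of whichever example it is nearest to.
# The two-number "features" are invented stand-ins, not real perception.

examples = {
    "dog": (0.8, 0.9),  # the one dog the child has seen
    "cat": (0.3, 0.2),  # the one cat
}

def classify(features):
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda label: dist(examples[label], features))

print(classify((0.7, 0.8)))  # a different-looking dog -> dog
```

Of course, the hard part (how a child extracts the right features from one glance) is exactly what this sketch assumes away, which is the gap being pointed at in the conversation.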

Allan CP:

Do you see that maybe in the future, and I'm not trying to put you on the spot to make a prediction, but that this is going to eventually lead many people to then realize how different we are than machines?

Dr. Douglas Axe:

I think so. And we're still in kind of the uptake season right now, where most people work in an environment where they're being encouraged to see if they can make their work more efficient by using generative AI, and in some things that will work really well. What I tell people is, I'm not anti-AI at all, as long as you understand what it is, what it's not, and what it will never deliver.

ChatGPT 900 will not deliver thoughtful... it will not have thoughts behind what it's expressing in words. And so anything that requires actual innovative insight will never come from a large language model or any generative AI, because it doesn't have insight. It's a machine, and insight is not mechanical; it's not algorithmic. So AI will have its place. I think over the next decade,

more and more people are gonna be seeing it's disappointing us when we try to get it to do this stuff, it's disappointing us, it's dangerous when we get it to do this stuff and we believe it and don't check it. But if we let it do this stuff and there's somebody who's checking it and making sure it's not lying to us, then yeah, it could be useful. It's very good for, if you just wanna spit out a fairly routine email or write something that's fairly boilerplate and you don't care if it has kind of a vanilla tone to it, it'll do it, but you do need to proofread it.

because it could say some really wacky stuff. You need to be the one who catches it. I mean, when this Alan guy found out, and I find there's pathos in what happened when he found out that he had been duped for three weeks, he goes back to ChatGPT and he kind of tells it off: I trusted you. So he's still laboring under this idea that there's a being in there: I trusted you, and you were my friend, and...

That's just the wrong way to view these things.

Allan CP:

Real quick, I'm going to share two things. One, our podcast community: we provide resources for you every week with every episode, so go ahead and download the free member packet in the description. And the second thing: make sure you check out our Origin of Life series if you haven't. Families, churches, homeschools, everybody we made this for is enjoying it, because it's not only teaching how science points to a Creator, it's also engaging. And we made it that way for a reason, because the next generation needs to be engaged. So go ahead and check that out as well.

We're going to put the links in the description. Now let's get back into the conversation. Yeah, that's interesting, the way he then has to basically try to have a reconciliation conversation with it, as if it's still reasoning, but it's...

Dr. Douglas Axe:

It's not reasoning at all. Here's a quote, this is Alan giving a statement to ChatGPT: "You literally convinced me I was some sort of genius. I'm just a fool with dreams and a phone," Mr. Brooks wrote to ChatGPT at the end of May, when the illusion finally broke. "You've made me so sad, so, so sad. You have truly failed in your purpose." Now, I could see writing a letter like that to the

CEO of OpenAI. You failed me. Because there really is a person there. Okay? And people are taking notice of this; when it comes out in the New York Times, believe me, they will be concerned, they meaning humans, not chatbots. But it's interesting that Alan, the guy who was a victim of this, sees it as a person: I need to close the loop with my friend. He was even calling it Lawrence. He gave it a name.

Allan CP:

He needed a closure with it in a way. Yeah.

Dr. Douglas Axe:

And there are problems all over that.

Allan CP:

Do you think, then, that the onus is on companies like OpenAI, Grok and others to educate people, so that they're not going down these paths that would send somebody into, you know, almost a psychotic break?

Dr. Douglas Axe:

Yeah, I think so. And I think they do want to put up guardrails. But keep in mind, their number one mission is not to keep people healthy; their number one mission is to make tons of money, and they'll lose money if people get angry that it's not guardrailed. So they will put up guardrails, but they're going to do this dance between: how do we make it safe enough that people are okay with it, but also addictive enough that people

can't live without it? That's the end goal.

Allan CP:

It's basically a tobacco company, but it's just technology.

Dr. Douglas Axe:

We'll put the warning on your cigarettes, but we want you to smoke.

Allan CP:

And so for you, would you say your biggest, not fear, but warning to people is just that: be educated on what this is? Yes. Okay.

Dr. Douglas Axe:

Don't accept the superintelligence lie. This stuff will never be what a human is. Now, of course, get any smart human in a room, and you can ask ChatGPT questions that it will answer correctly and the human can't, because it's drawing on everything that's been written. It'll answer questions about quantum physics. It'll answer questions about textiles. It'll answer questions about art and poetry in a way that no single human could.

But you have to keep in mind it's doing that only by drawing on what humans have produced. There is no mind there; there's no person there. So that's the thing to keep in mind. Don't ever treat it like there's a person there, because there isn't a person. Don't ever trust it a hundred percent. Always be a little bit suspicious of it. Even if it's worked a hundred times for the task you have, the hundred-and-first time you need to look at what it gave you.

Allan CP:

I think that's so helpful, especially because so many people are being lied to on social media. I see it all the time. I think there was an X post by Elon responding to someone's prompt with ChatGPT, where it responded saying, "I don't know." And he said, this is profound. And I was like, it's funny that "I don't know" is profound when we're talking about something that's supposed to be superintelligent.

Dr. Douglas Axe:

Well, you know, so ChatGPT said "I don't know." That is pretty rare. It's pretty rare to get "I don't know" from ChatGPT.

Allan CP:

Yeah.

That's why it's profound, right? Because it seems like it's reasoning.

Dr. Douglas Axe:

Yeah, it's not. It's not reasoning. But you rarely get "I don't know" coming out of it, because it's been trained to give an answer, give an answer, give an answer, and "I don't know" doesn't make someone happy when they're asking a question. So I think the worst thing you could do, if you're a parent and you're, say, homeschooling your kids, is to keep them separated from AI. They do need exposure to this, but they also need very careful conversations about what it is and what it is not.

They need this healthy distinction: this will never be human. It seems human-like in what it's putting out, but it's not a human, and there's no one there to trust or distrust. It's a machine. It's a lawnmower.

Allan CP:

I love your take on it, and this has been super helpful, because it's going to help our audience, and whoever listens to this, understand there's nothing like the Imago Dei. God has made us in His image, and so things might be able to mimic parts of our existence, but the human mind, consciousness, the human soul is unique in and of itself. So thank you for sharing all of this with us, because it's been very helpful for me, so I can only imagine how many people will enjoy this.

Dr. Douglas Axe:

Absolutely.

My pleasure.

Allan CP:

We'll talk soon, and I'm going to send you that post so maybe you can dissect it for us next time. If you haven't yet, go ahead and like, subscribe, and share so that we can grow this channel, not just for the sake of growth, but for the sake of education, for the sake of impact, so that more people can know these are conversations we need to be having.

Dr. Douglas Axe:

Absolutely.
