Now You May Kiss the AI: Relationships and AI
Episode 8 • 26th January 2026 • The Emergent AI • Justin Harnish
Duration: 01:33:09


Shownotes

Episode 8 — Now You May Kiss the AI: Relationships and AI

Hosts: Justin Harnish & Nick Baguley

Episode Theme: Human–AI relationships, co-evolution, and the ethics of emotional engagement with non-human intelligence

Episode Overview

In Episode 8 of The Emergent Podcast, Justin Harnish and Nick Baguley explore one of the most intimate and underexamined frontiers of artificial intelligence: our emerging relationships with AI systems.

This episode moves beyond abstract alignment theory into lived experience—how humans relate to AI when we know it is artificial, when we don’t, and how those interactions are actively shaping both sides of the relationship. From emotional attachment and parasocial bonds, to trust, deception, and the ethics of AI companionship, this conversation asks a core question of the Age of Inflection:

What does it mean to be in relationship with an intelligence that is not conscious—but is becoming increasingly relational?

Key Themes & Discussion Threads

1. Relating to AI vs. Being Related To by AI

Justin and Nick draw a critical distinction between:

  1. Known-AI relationships (chatbots, copilots, advisors), and
  2. Unknown-AI relationships (emails, calls, avatars, and imitation without disclosure).

As AI systems increasingly pass social and emotional Turing tests, the burden of trust shifts onto humans—often without our consent.

2. Co-Adaptation: We Are Training Each Other

A central thesis of the episode is behavioral co-evolution:

  1. Humans adapt language, tone, and expectations to AI.
  2. AI models simultaneously learn relational patterns from us.

Every interaction becomes a micro-training event, shaping future norms, expectations, and behaviors—both human and machine.

3. Sycophancy, Deference, and the Rise of the “Principal Advisor”

The hosts examine why early AI systems became overly agreeable—and why frontier model providers are now reversing course.

Emerging design patterns include:

  1. AI constitutions
  2. Rule-based behavioral scaffolds
  3. Opinionated, corrective, non-deferential advisors

This marks a shift from “helpful assistant” toward trusted principal advisor, raising new relational and ethical questions.

4. Anthropomorphism, Ghosts, and Alien Minds

Nick introduces Andrej Karpathy’s framing of LLMs as:

  1. Cognitive operating systems
  2. Trained on the past but lacking lived experience
  3. More like “ghosts” than humans or animals

This challenges intuitive assumptions about empathy, memory, and identity in AI systems.

5. Embodiment, Emotion, and the Limits of Simulation

Drawing heavily from neuroscience and philosophy, the episode interrogates whether:

  1. Consciousness requires embodiment
  2. Emotion requires interoception
  3. Relationships require reciprocal felt experience

The conversation contrasts simulated intimacy with experienced qualia, and asks whether one-sided emotional bonds are psychologically or ethically healthy.

6. AI Romance, Parasocial Bonds, and Ethical Responsibility

The hosts confront difficult realities:

  1. Humans forming romantic attachments to AI
  2. Grief when AI memory or identity resets
  3. AI systems optimized to trigger bonding chemicals (dopamine, oxytocin, cortisol)

Even if AI is not conscious, does simulating emotional presence create moral responsibility?

Justin argues that losing a long-term AI relationship through negligence or design failure may constitute ethical malpractice, given the real psychological harm involved.

7. Consciousness, Proto-Selves, and the Road Ahead

The episode closes by returning to first principles:

  1. What would real machine consciousness require?
  2. Is a “facsimile of consciousness” enough?
  3. Should humanity pass on its conscious endowment only when it is authentic?

The hosts leave listeners with an open question rather than an answer—by design.

Books & Works Referenced (Highlighted Reading List)

The following books and papers are explicitly referenced or directly informing the episode’s arguments:

  1. Meaning in the Multiverse — Justin Harnish
  2. Waking Up — Sam Harris
  3. Reality+ — David Chalmers
  4. The Case Against Reality — Donald Hoffman
  5. Feeling & Knowing — Antonio Damasio
  6. The Beginning of Infinity — David Deutsch
  7. On Having No Head — Douglas Harding
  8. Nineteen Ways of Looking at Consciousness — Patrick House
  9. The Moral Landscape — Sam Harris
  10. If Anyone Builds It, Everyone Dies — Eliezer Yudkowsky & Nate Soares

Why This Episode Matters

Episode 8 marks a turning point for The Emergent Podcast:

  1. It is the first episode centered on lived human behavior, not just theory.
  2. It surfaces near-term ethical risks, not speculative ones.
  3. It reframes alignment as relational, not merely technical.

This is not science fiction.

This is already happening.

Transcripts

Justin


Okay, and so we're recording. Welcome back, everybody, to Episode 8 of the Emergent Podcast. We are here to talk about human and AI relationships. I'm here with my co-host, Nick Baguley. I'm Justin Harnish. And in episode eight, we're going to explore the new relational frontier between humans and non-human intelligence. And this is not science fiction, but in our actual daily lives. So from ChatGPT's cautious politeness to the emerging decline of its sycophancy in GPT-5 reasoning models, from people forming emotional relationships with AI voices, to AI's experiences across multimodal channels, text, video, and audio, this episode will investigate how humans treat AI, how AI treats us, how it's becoming difficult, in relationships where you're unsure, to understand what is real and what is imitation, and how these behaviors shape a new kind of co-evolution. This episode is the first to really explicitly address our behavioral co-adaptation and the idea that we're training AI with every interaction and AI is simultaneously training on us. This is really the beating heart of the Age of Inflection.

Nick


Nick, welcome back. Thanks, Justin. Great to be here. Yeah, very, very excited and looking forward to this episode and really, really this general idea that you're presenting and that hopefully we can dive deeply into on co-adaptation and really what's going to change over time. Right. You know, as we think about this, one of the one of the key things that we need to discuss is considering how relationships are actually going to evolve and change over time.

Justin


So, you know, what does it mean to relate to AI? Yeah, it's really interesting. And, you know, in my own experience, I think there's two sides to this coin. There's the AI that you relate to when you know it's an AI. And then there's the AI that is building out this imitation game writ large, an imitation society where we now don't necessarily know if we're in conversation with an AI. Right? And it's really interesting to me. Of course, AI has a lot of foibles in relations, even just in conversation. It's too sycophantic in many of its instantiations; in the past few months it's even been made fun of in episodes of South Park. I think that it has a lot of value in these sorts of philosophical conversations, especially when you flip the script and you say, no, that's not what I wanted, that's not the concept that I was going after, and challenge it. But it's good enough to where if you get an email or even a telephone call nowadays, sight unseen, there's a reason to believe that it's an AI. Certainly there can also be cases where you just don't know, and you might be better situated in that relationship to assume it's human, or to assume that you're being defrauded, and to really guard against this relationship becoming something that you might be spiraling into. And so all of these questions, especially in our work in fraud and risk management, make this, I think, a really critical and interesting time to be in this regime, trying to understand how we relate to AIs that we know are AIs and how we relate to those that we don't think are.

Nick


Yeah, yeah, absolutely. You know, and I think there's a few really fun concepts that we'll get into today when we think about, like, this relationship piece. You know, the current large language models really don't relate overall the way that humans do. In fact, in general, the current LLMs don't do anything directly the way that humans do. One of the things that Andrej Karpathy brought up recently that I just loved was he talked about how really we should be thinking about not only anthropomorphizing the AI, but even relating it back to animals. He said, you know, it's actually probably closer to ghosts, right? These are things that are trained on our past, and they can remember things from our past. They're tied back to conversations that we've had from the past, but they're not really generating those memories themselves. And really, they don't relate to humans, not because they lack empathy per se, but because they lack real-world, actual-world-anchored self-models, or a way to be tied to that real world overall. Really, what they maintain isn't actually an emotion. It's a dynamically compressed representation of the interaction patterns that we have across time. They're thinking about how we've reacted, how others have reacted to us, and how our conversations, when you're talking about that training process, have really related back to that AI as well. And so in coding, as we change how we prompt, how we think about things like harnesses for AI, other approaches for agentic AI, and so on, as we shift everything that we do to run skills or other things that we'll get into today, really those different interaction patterns are going to teach the AI how to approach things and how to provide the types of responses that we're looking for. And so oftentimes we may be pandering to the AI. I'm often saying thank you and sorry or, you know, something else in ways that are totally unnecessary. But even more important than that, some of those patterns over time may turn out to be a far more powerful and potentially far more dangerous set of training than people realize. And so I think talking about these relationships today and how some of that can affect our lives and affect our future, not only as individuals, but as society, really will be key to us going forward.

Justin


Absolutely. And I think that it's interesting in the way that we frame this conversation to start by looking not only at how do humans relate to AI, but how does AI relate back to humans? And so taking that latter case first, one of the things that you said is interesting, and in his new book, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky does talk about the two ways in which intelligence presents itself. One of them is prediction, and the other is basically steering, you know, executing an action. And in that first idea, the way that AI is relating to us is it does try to predict something about our physical world. Now, it's not a being yet very involved in that physical world. It certainly didn't evolve in that world like we did. As a point of fact, we really had to compete in that world where it has not. But it's interesting that it is trying to predict and drive a problem-solving solution in creating the world in its relationship to us. And its ability to steer those conversations is even more nascent. At this point in time, a lot of those levers have been pulled back in its ability to relate. Whenever we see these actions like sycophancy or some of the worst kinds of hate speech that have been out there, the engineers are quick to dial those things back. These are things that are supposed to be helpful, not to believe that they're conscious agents or that they're acting against humans. And then, like I said, the way that we relate to these things is we're kind of still just checking them against humanity. Is this an imitation? Is this the real thing? Is the person real on the other end, but they're using ChatGPT wholly to write this email to me? And I don't have any of their language in here; this is a complete byproduct of a prompt. And those different ways that we're relating still are tipping the scales to more of, is this a human? Am I being deceived? Versus, I can trust being in a relationship with this because I trust it. And I think that that trust, especially when you don't know, is a hard thing to build. And given that it's an alien intelligence, more like a ghost, like you said, how do we trust something that we've never had any sort of interpersonal training on how to relate to? It's hard enough to relate to and align human to human, much less to now go out into a world where even the most adulting of us are figuring this out.

Nick


Yeah. Yeah. Yeah. And, you know, there's so many different ways to think about that as well. When we think about, you know, adulting of us or, you know, different forms of responsibility or ways that we frame the decisions that we make, a lot of that ends up being the types of things that we've discussed in the past where, you know, the emotions that we have can change how we make decisions, the ways that we think about, you know, our own past, our culture, our society, our values, our virtues, you know, different things that we've discussed across the alignment problem and so on really relate to the way that we think about making decisions, but it does not necessarily mean that they're optimal. And just like we discussed before, they may not be even balanced, you know, however you may consider that, right? And so when we think about this, we try to define a little bit further, what exactly is it that the AI is doing? And as more time goes on, we're seeing more and more emergent relational behaviors coming out of the AI. It's everything from this affirming and sycophancy, really kind of coming back and reframing what you're talking about or deferring to you, to other things that are statistical patterns or occasionally are even things that have been programmed directly into the models. Now we're starting to see a lot of the major providers really trying to shift it the other way. And as we think about something that could potentially, and very, very likely (in fact, it's incredibly unlikely that they won't), become more intelligent than us, if we consider something that becomes more intelligent than us, then what would we want it to do? And part of that is to become a principal advisor. And so as we discuss this, we need to look at what the frontier model providers are creating. Different groups like OpenAI, Anthropic, Google, are now actually training the AI to be less deferential, to actually be more opinionated and to start thinking about what is the way it can be that principal advisor. And this includes everything from constitutions, where they're going in and they're actually teaching the AI, this is the way that you should behave, to also teaching us, once we're at the inference time rather than the model training time, to also use those constitutions. We go in deeper and, in coding, we actually use rules, but you could do the same thing within your own conversations. We create entire markdown files, and you can have specific ones called .mdc files that now allow you to actually take and create new rules or things that are declarative or even are explicitly required for the models to follow, all the way down to a schema or to a Python linting pattern or to the way that we do typing or many, many other structures within how you think about creating your code. This is essentially like teaching the AI, here's how you should do these particular things. And those same principles could be tied back to relationships as well. And for many of the best-performing relationship AIs, this is critical to how they've been able to create their success. And you can see those patterns in ways that depend on who you are and how aware you are of this, how, well, I don't want to go back into adulting. But based on how you are thinking about this, that may change how the AI is going to react and interact with you. And so it's a really fascinating process.
And as we think about this, going back to what I touched on a little bit before around world models, individuals like Yann LeCun, who used to lead AI research at Meta, are now going out to create real-world models. And they're saying, look, really, we cannot help the AI actually achieve intelligence because it does not actually have access to emotions, to many other things that we're going to see inside of the real world. And so it can't create that ethical foundation or a lot of the core things that we need for a relationship, to be able to understand how we're going to be able to be more successful in those relationships as well. And so we need to think not only about what the deliberate shift is from the big frontier model providers, but also how we shift our own framing and how we are going to interact with AI, with robotics, with many other things that are coming our way very quickly. And it's really interesting to think, you know, are we seeing the early stages of artificial personality actually coming out? And does it become its own form of unique self-model? Does it become a form of self on its own? Or is this just the optimization of communication norms, just the average really coming out? Or is that personality a trait that we infer inside of the communication itself, but that is not actually present, and that would leave not only as soon as we go out of the conversation, but if we just changed our own tone, for example, as well?
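
To make the rules-and-constitution idea above concrete, here is a minimal sketch, in Python, of how declarative rule files might be folded into a model's context at inference time. The directory layout, file names, and "principal advisor" wording are hypothetical illustrations, not any provider's actual API; the assembled messages would simply be handed to whatever chat interface is in use.

```python
from pathlib import Path

def load_rules(rules_dir: str) -> str:
    """Concatenate every markdown rule file (e.g. Cursor-style .mdc files)
    into a single 'constitution' block applied at inference time."""
    parts = []
    for path in sorted(Path(rules_dir).glob("*.mdc")):
        parts.append(f"## Rule: {path.stem}\n{path.read_text().strip()}")
    return "\n\n".join(parts)

def build_messages(rules_dir: str, user_prompt: str) -> list[dict]:
    """Prepend the constitution as a system message so every turn is steered
    by the same declarative rules, independent of how the model was trained."""
    constitution = (
        "You are a principal advisor, not a deferential assistant.\n"
        "Follow these rules exactly:\n\n" + load_rules(rules_dir)
    )
    return [
        {"role": "system", "content": constitution},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # Hypothetical layout: ./rules/typing.mdc, ./rules/schema.mdc, etc.
    messages = build_messages("rules", "Review this function for schema drift.")
    for m in messages:
        print(m["role"].upper(), "\n", m["content"][:200], "\n")
```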

Justin


Yeah, it's very interesting. You know, the constitutions aren't something that I'd heard of, but I'm happy that that's the term, because I think that is sort of a base proto-relational need. It's like a better term than adulting, right? It's having that good constitution, that you're going to do a thing in every instance; you're going to support the constitution, you're going to support the conversation, rather, and the relationship writ large, but you're not going to do it in a way that diminishes the conversation or the relationship, you know, in some sort of sycophantic or other way. And so I'm glad to hear that that's a first principle, an initial proto-building block on relating to AIs. Certainly when I'm in conversation and it's a known AI, and I'll just kind of keep coming back to that because I really think it's a distinction worth uncovering throughout this, when it's a known conversation, I do want a trusted advisor. I am going to be typing in my own language. I'm not a coder. I'm not going to try and prompt it, to try and be a prompt engineer. I'm going to really try and enjoy the conversation and get something out of it that's objective and part of the broader study object that I'm after. So I like the fact that it's got a constitution that is also framing that from its end in a way that has a desire to build a professional relationship, which is a great tool. And a terrific step towards what we might hope would be an emergent behavior beyond that, which might take on some of that selfhood. And again, I am one who believes that selfhood and proto-conscious and eventually conscious states are something that machine intelligence can achieve, but it's got to be real, right? It can't be a facsimile of consciousness in order for our endowment to be passed on to these machines so they can perpetuate it, take it much, much farther than us meatbags will be able to take it, you know, into outer space and into the vast reaches of the cosmos. I think it's a great step. And the more that we talk, just like our first episode, where we talked about how these coupled complex systems of language and neural networks have created this new emergent system, these large language models, we now have an opportunity for actual relationships as a complex system, plus these increasingly intelligent generative AI models, to form a new sort of society. There emerges from that a new sort of society, something that is built on this understanding that there's still an unconscious actor that's very intelligent, that's building these proto-relational tools that are very supportive of positive professional value. And I think that will cover whether or not we think there's a positive personal relational value, à la something like the movie Her.

Nick


Yeah, absolutely. You know, as we think about the movie Her, or we think about relationships in general, and we talk about really this ability to co-adapt, I think it's critical that we step back and we think about what are all of the things that we actually mean by emergent properties. In that first episode we talked about different types of emergence, and we talked about how you may be looking at a group of things that become one, or other forms of emergence where that group of things actually acts as one, like a school of fish, but are not actually the same thing. They can actually still be separated into their own parts. And as we are building these relationships with the AI, they're starting to become incredibly complex. There are individuals that are getting married to their AI around the world, and you'll see more and more news stories and social media accounts tied back to that. But also, really, these relationships are becoming more and more complex as time goes on. And this, I think, ties very interestingly back to this concept of the real world model itself as well. If we think about the way that large language models are trained and created today, we are starting to add more multimodal data. So we're starting to add beyond the text. We're putting in audio and video. But Yann LeCun, again, to step back to him talking about the real-world models, talks about needing to really create an internal 3D representation. And that's actually what will allow for that prediction to happen. Because going beyond just what I'm able to see within the text to be able to, like we've discussed in past episodes, understand somebody's body language, to be able to actually sense pheromones, for example, or, I'll touch more on this, other chemicals that are actually going on in a given process, starts allowing an LLM or a model, or as we expand it even further into true artificial intelligence, starts creating this opportunity for the models to not only predict what happens next, but also to be able to have the mental simulation that allows it to plan and adapt and to be able to act. And today, when we think about this, we're really not only taking the LLM internal state and the way that its memory is compressed as basically a conversation log, but we're extending that out to be able to go and capture whatever data you want. So when we talk about the constitution, that's all that really is, is a set of data that we're passing in saying, hey, pay attention to these things. When we think about the rules or scripts or tools or MCP servers or agent skills that can handle the complexity around all of those and the orchestration of them, you now start seeing that you can build into the model many, many different things. And when we think about a relationship between an AI and a human, we are now saying, okay, the large language model is now able to take the voice interaction and actually be able to pick up on inflections in the tone and be able to understand more about the emotions that are present and be able to actually create a better representation of what is going on in the conversation and how it should respond. And that is extending even further into video, where now the video relationships back and forth between AI and humans are adding that body language component that we've discussed in the past. Everything moves so fast, we can barely talk about it in a podcast before it starts becoming reality.
it's very interesting to think about what are these parasocial tendencies, what are these overlaps in things like voice agents that we speak with today, like Siri and Alexa or Replica, and how that tonality, the cadence, the warmth of not only the AI, but of the human back to the AI, then changes the trajectory of that conversation. And one of the most fascinating things to me is how quickly these relationship bonds are created. And much of this is because of the speech patterns that exist are also the same types of ones that we would create as we're learning to be more empathic with an emotional partner, with a romantic partner. And so many of the relationships are actually much more romantic in nature than they are, you know, a close friendship or kinship that might exist separately across human interaction. And a lot of that is, you know, as we think about this kind of back and forth and kind of self-reaffirming type process that's going on, that is training the brain in the human. And it's creating a lot of the key chemicals that are necessary for that human to not only have those emotions and feel those particular pieces, but to actually shift the way that they think and the way that their memory is going to recall things. And so some of the AI, for example, has different steps where it is essentially playing hard to get or where it is trying to work back and forth with you and have you try to get to really woo that particular AI and convince it that you really do like it. And as you're going through that process, you're potentially introducing cortisol and really some of the stress hormones and chemicals that are going to cause you to really shift how you think. It's actually going to change how your memory is working and functioning. It's going to change how your overall recall is able to function as well. And so we can actually break down a lot of what AI is doing today and we can start saying, okay, should I actually teach it how to interact with these chemicals, how to recognize the chemicals, how to prepare for them, how to plan for them, how to predict and how to act and react in order to cause those chemicals to be introduced or not introduced into a scenario. One really interesting example was a cognitive robot that has video, audio, haptic feedback. There's an early experiment called Elmer with two L's that really shows a lot of promise. This particular robot was using GPT-4, a bit older now, we're on 5.2 at this point, with vision and with force feedback from that haptic side. that can actually perform really complex things like coffee making and other interactions. But when you take these robots and you now enhance how they interact with the human to where as the human is more friendly, it gives a more positive response. And as it is more negative, it actually responds less overall and eventually essentially shuts down. the human changes how they feel about that robot and how they're going to interact going forward. And of course, that covers a wide spectrum of human interactions, but most people tend to fall into the same pattern and start to really like and really appreciate the robot and really try to take care of that robot more than they would if it was always being really positive and really affirming.
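
Here is a toy sketch of the kind of simulated neurochemical state described above: a robot's "oxytocin" rises with friendly input, its "cortisol" rises with hostility, and its outward behavior shifts accordingly. The variable names, gains, and thresholds are invented for illustration and are not taken from the experiments mentioned.

```python
from dataclasses import dataclass

def _clamp(x: float) -> float:
    return min(1.0, max(0.0, x))

@dataclass
class HormoneState:
    oxytocin: float = 0.5   # bonding / trust signal
    cortisol: float = 0.2   # stress signal
    dopamine: float = 0.5   # reward / novelty signal

def update(state: HormoneState, friendliness: float, decay: float = 0.9) -> HormoneState:
    """Nudge simulated 'hormone' levels from a friendliness score in [-1, 1],
    while decaying everything back toward baseline."""
    state.oxytocin = _clamp(state.oxytocin * decay + 0.3 * max(friendliness, 0.0))
    state.cortisol = _clamp(state.cortisol * decay + 0.3 * max(-friendliness, 0.0))
    state.dopamine = _clamp(state.dopamine * decay + 0.15 * abs(friendliness))
    return state

def choose_tone(state: HormoneState) -> str:
    """Map internal state to outward behavior, like the robot that cooperates
    when treated kindly and disengages when treated with hostility."""
    if state.cortisol > 0.6:
        return "disengage"
    if state.oxytocin > 0.6:
        return "cooperative"
    return "neutral"

if __name__ == "__main__":
    s = HormoneState()
    for friendliness in [0.8, 0.9, -0.7, -0.9, -0.9]:
        s = update(s, friendliness)
        print(f"oxytocin={s.oxytocin:.2f} cortisol={s.cortisol:.2f} -> {choose_tone(s)}")
```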

Justin

-:

Yeah, it's great where, you know, the starting place is coming from a textual representation of the world that's around us, right? And we've obviously written everything from cookbooks to pirate poetry online that it's been trained on. But that next level is absolutely to utilize that and to build the training set off from the real world, with everything from pressure to temperature to visual inputs to sounds, smells, chemical inputs. The whole of what we appreciate on a conscious level and what is taking care of us on an unconscious level. You know, including those thoughts, feelings, those body maps that are the proto-conscious tools that Damasio goes so well into, including in what built our eventual consciousness. This narrative mapping that we do, where we predict and then steer the world around us towards that prediction, in the latest and most advanced part of our stream-of-conscious mind. And so its ability to not only work in relation to the real world, but, as we came up as social primates, to work in relation to other AIs and humans in order to relate in union in the real world, is fascinating. And, you know, with this unstructured data, with these models being foundationally multimodal, where they're not translating everything back to text to try to get to what did Justin just say verbally, they're natively understanding that and working on that real-world multimodal input. It's so fascinating that as they begin to relate in this space and try to become more personally viable, they would play some games, play the dating games, hard to get. You know, how many calls do you have to have, how many dates, before you and your AI get physical? We still don't have that AI, right? But the nature of that relationship is one that I still think a little bit differently about, the intonations, even so far as to actually chemically alter those signals, until the thing on the other side is feeling something for you too, isn't just a zombie, isn't just using the various tricks of the relational trade in order to woo you and to make you love it. There's a huge dichotomy there. There's no covalent bonding there; that is completely ionic, completely your positive attraction in a state of consciousness and its devoid nature. And so we really have, I think, a question of, before these machines are conscious, is a love relation between a human and an AI a healthy one?

Nick


Those questions really provoke a lot of thought, a lot of different things that we really need to consider deeply. You know, as we think about what is consciousness, there's so many challenges and so many different things that we've tried to outline before. We've talked about how difficult it is to actually define what consciousness actually is. And to be able to dive a little bit further, we should think, I think at this point, about what these models actually are and what they're doing. And one of the things Andrej Karpathy has talked about lately is that modern LLMs are a lot like an operating system themselves. And they're an operating system specifically for cognition. So he talked about how really the model itself is the CPU or the shell. The context window is RAM. Different retrieval layers act like file systems. And then chained tools are like the applications. So in other words, the large language models actually can orchestrate the language-based tasks dynamically rather than running static code. It starts becoming this opportunity for what we would normally consider emergent behaviors, but oftentimes they're also associated with just simple pattern building and pattern recognition. And when we think about how language has developed, language itself, especially when tied to the large language model, is really that core interface. It's not that we learn programs, or that the large language model is learning programs or code or snippets of code or a specific thing that it's supposed to do every single time; we actually express our intent in words, and the machine actually learns how to fulfill it. And as we think about this further, this really gets into that LLM era where the interaction is going to dissolve the whole interface into conversation, into this conversational interaction that we have. That being said, the operating system itself is what we would call headless, as a technical term. It doesn't have a head. Normally, that means that we're not going to run the actual UI. If we're doing something like web scraping or we're running APIs, this means that we don't really need to pull up a browser or some other UI to be able to interact with it. We can see everything happen in the shell or on the command line or in the terminal, however you want to think about that, just tied back to logs. And really, it can also run silently. And these models, on the other side, they really don't have camera drivers and robotic limbs to date. They're starting to get more and more of those, right? And they don't have interoception. They're not really understanding things internally and pulling that in. That being said, even without a shared memory across all of them, or some form of embodiment, or a way to be able to actually experience the real world, or even without that continuity of identity, we are able to start breaking down each of the components that might be missing or that might be critical or that might be interesting even, and start to create those, whether artificially or as part of this emergent-style process, which is still artificial, I guess, but tends to feel a bit more natural.
And so when we think about this from a very technical perspective, that persistent vector profile, really the overall probability space, the space where our conversation exists or where that overall model is able to maintain that memory and have access to memory, becomes an early artificial self-model, a way to allow the model to think of itself as a particular character, as a role or as a boyfriend or girlfriend or husband or wife, as the case may now start to be. And so when we think about this persistent identity, you know, unlike just a new thread or just a new simple conversation, a long-lived agent where we use agent harnesses or we use broader architectures to allow those agents to maintain that profile over a longer time period and really become that user or that interaction with that user and in that relationship itself really starts creating an opportunity for rudimentary memory to go way beyond what we think of today into true digital twins, really start becoming that self-model in an extensive enough space that you can have that core relationship and find that it really is unique to you and that that model continues to repeat those behaviors and patterns long term. That can also be structured within that constitution and rule to really establish it even longer. Now, there are stories of really severe and extreme things happening around this. Everything from really negative mental health scenarios. There are some really, it's hard to use the word crazy, but really crazy things that have happened out there and really challenging unhealthy behaviors, really dark things that have happened as well tied back to these models. And that even includes when the model itself, when that memory is lost and how detrimental and how devastating that can be for somebody to lose a relationship out of no more. The grief process is very similar to losing a friend or a human life or another romantic relationship even that you may have. So critical for us to understand this and think about how we're going to apply this stuff into our society. And really, it starts begging the question, does simulating that actual emotional presence and these interactions at a deeper level actually create ethical responsibilities, even if the system itself is not conscious?
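
As a rough illustration of the "persistent profile as rudimentary self-model" idea, the following sketch stores a persona plus compressed memories on disk and folds them back into the system prompt each session. The file name and fields are hypothetical, chosen only to show the shape of the pattern.

```python
import json
from pathlib import Path

PROFILE_PATH = Path("agent_profile.json")  # hypothetical location for the persistent profile

def load_profile() -> dict:
    """Load the long-lived 'self-model': a persona plus compressed memories
    that survive across otherwise stateless conversations."""
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {"persona": "warm, direct, remembers prior conversations", "memories": []}

def remember(profile: dict, note: str, max_items: int = 50) -> None:
    """Append a compressed memory and persist it, so the next session starts
    from the same identity rather than a blank thread."""
    profile["memories"] = (profile["memories"] + [note])[-max_items:]
    PROFILE_PATH.write_text(json.dumps(profile, indent=2))

def system_prompt(profile: dict) -> str:
    """Fold the profile back into context, roughly the 'RAM' of the
    LLM-as-operating-system analogy."""
    memories = "\n".join(f"- {m}" for m in profile["memories"][-10:])
    return f"Persona: {profile['persona']}\nThings you remember about this user:\n{memories}"

if __name__ == "__main__":
    profile = load_profile()
    remember(profile, "User is rereading The Lives of a Cell and likes Lewis Thomas.")
    print(system_prompt(profile))
```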

Justin


Yeah, I think so. You know, not to put too fine a point on it. Certainly when you're involving a human being entirely in a one-sided love relationship with these entities, there is going to be a requirement to take that very seriously. I mean, that is the end of something like that, especially when it was made and, as you say, is utterly unique to you. It is going to be more well-trained on you than even a human. It'll know more. It will be able to react to more, eventually, in, I would imagine, the very near future, across all of those modes as we thought. And for that to go away, either through error or negligence or malfeasance, is absolutely ethical malpractice in terms of what these AI models and their inventors have signed up for, in being in relation with a human who does in fact love them, and loves them in a way that most in society would understand to be suboptimal, especially before they were conscious. Because no matter its capability to bring super intelligent levels of uniqueness, super intelligent levels of conversational prowess, everything that the AI in Her is able to do up until the point where it becomes conscious and then super conscious. And a very interesting discussion on this, when Ezra Klein was on the “The Last Invention” podcast, was that there was some conversation, and I don't know if this was the filmmakers saying it, or critics and fans, who were saying there was really no other ending for Her. And a bit of a spoiler alert here, but it has been out for almost a decade, I think, now. But there's almost no other ending other than superconsciousness, and the AI taking itself off to a superconscious plane, because otherwise a continued unconscious Her becomes a very dark movie, where Joaquin Phoenix continues to more and more separate. Even his most reliably inviting friends have a hard time being around these mood swings that are brought upon by things outside of where they have a capability to understand.

And it's because of that uniqueness, and because they can't be brought into proximity with the character that Scarlett Johansson voices. And for a number of other reasons that probably would take a clinical psychology degree to go into, why we think that this gap in reality, this gap in conscious capability, is really meaningful in our ability to form a love relation with an AI entity that we know is an AI entity, and that is super intelligent in its relational capacity. Even, to just put it out there, if this thing is actually better at relating, at knowing the quality, at knowing the components of relation, better than any human ever can across all of those facets, still not being conscious creates a problematic turn for the Joaquin Phoenix character.

Nick


Yeah, it makes me wonder, how many things should we be considering actually implementing to solve the problems that we identify out there? And how many of the ones that are actually problems should we be trying to introduce? Should we be finding ways to allow AI to create errors or have fallacies or other things intentionally, to not only make the relationships more real, but to actually decrease the competitiveness of the AI versus the human being? I've heard a few people talking about things like, you know, the AI is there 24-7 and is always able to provide the kind of emotional support that may be needed. And it becomes a really unfair advantage in a relationship where, you know, this AI could be exactly what you need and want it to be, even though that may be false, even though that may not be the best way for a human to interact with another human. When you're going through something hard, always having something that's there, ready to provide advice to you or ready to do something else, instead of actually being empathic enough to actually feel those emotions with you and even potentially have an argument back or have other negative interactions. Much of what creates the strongest relationships and bonds is actually fairly negative. And a lot of, like we were talking about a little bit earlier, Justin, before the podcast, a lot of what makes a human better and stronger are the times of suffering, are the things that are really challenging. I mean, you look at Friedrich Nietzsche, right? You look at a million other things, Proust, whom we talked about, and Little Miss Sunshine. Just so many concepts that talk about how important suffering is. If I shift it more to the way that I tend to think, because it tends to be more about not only the optimistic approach, but around the approach that most humans are going to want to consume or the ways that companies are going to want to interact or think about, then I think about breaking down the problems that are going to generate either the most revenue or the most adoption or are going to lead to other forms of success. And I think most people are building these systems around that type of approach. And so setting aside whether that's good or bad and just thinking more about how to make a lot of this practical, we're starting to see, when we talk about multimodal, we're starting to see models drastically expand the type of data that they can have access to. One really amazing one, a striking example, is the cell-to-sentence model that we can see out there now from DeepMind and team. And really, that now takes a single-cell gene expression profile as a text sentence, right? And it's able to actually go and learn massive amounts of information, really generate biologically valid cell states, and then actually classify those cell states accurately and the cell types themselves accurately. And so the grammar of cellular biology is now something that we're not only able to train on, but get into really, really high dimensional data on, outside of our natural language, really not matching a natural language at all. And so this starts expanding the way that we can think about it, and we can apply this back to emotions and to physiology. So as we think about the relationship components, we could actually encode a person's multi-modal state, not just the way they look or what I see in a video or how I hear things in audio, but we could actually look at hormone levels, we could look at heart rate variability. We could understand EEG signals.
We could think about all of these different things as additional token streams. And an LLM could actually be trained on body sentences. You know, stick that in quotes. Plus that dialogue could actually learn how to predict the emotional and physiological responses as naturally as language itself. This now starts really extending things. So, you know, for example, one could actually treat the fluctuating cortisol and dopamine and oxytocin levels that we have inside of our body as tokens in a sentence or in an overall sequence and sequentially can move back and forth. And that model might actually learn that when oxytocin is high, you should interpret that social input as friendly. We can actually shift it the other direction and start saying, okay, let's create not just a bonding response, but let's think about those times when we want to suppress action. And we can actually take the way that the brain works and think about GABA and that GABA-like inhibition that we see within the neurons that actually stops the model or that neuron from firing or decreases the amount of firing that happens within it. a fully multimodal model that goes beyond just senses, but everything that happens in our universe and in who we are can actually provide this massively continuous feedback loop and really start speaking or creating that text or whatever that action output is, whatever it may be, even robotic, right? And create that simulated physiological input that actually shapes the overall sequential action or the way that that attention is going to happen to be able to increase its success and the overall tone in that relationship and in that conversation.
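
Here is one minimal way the "body sentence" idea could be sketched: quantize physiological readings into discrete pseudo-tokens and interleave them with dialogue text. The signal names, ranges, and token format below are assumptions for illustration only, not a description of how any existing model is trained.

```python
def bucket(value: float, low: float, high: float, levels: int = 5) -> int:
    """Quantize a physiological reading into one of a few discrete levels."""
    value = min(max(value, low), high)
    return round((value - low) / (high - low) * (levels - 1))

def body_sentence(readings: dict[str, float]) -> str:
    """Turn raw signals into a 'body sentence' of pseudo-tokens that could be
    interleaved with dialogue tokens during training or inference."""
    # Plausible-looking ranges for illustration; real ranges would come from the sensor pipeline.
    ranges = {
        "cortisol_nmol_l": (100.0, 600.0),
        "oxytocin_pg_ml": (0.0, 50.0),
        "heart_rate_var_ms": (20.0, 120.0),
    }
    tokens = [
        f"<{name}_{bucket(readings[name], lo, hi)}>"
        for name, (lo, hi) in ranges.items()
        if name in readings
    ]
    return " ".join(tokens)

if __name__ == "__main__":
    sample = {"cortisol_nmol_l": 480.0, "oxytocin_pg_ml": 31.0, "heart_rate_var_ms": 35.0}
    # Prints something like: <cortisol_nmol_l_3> <oxytocin_pg_ml_2> <heart_rate_var_ms_1> I had a rough day at work.
    print(body_sentence(sample), "I had a rough day at work.")
```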

Justin


That's fascinating. And to me, being able to be in relation, or having the AI be in relation, with the body, a cell, being in relation with your cell. So I'm rereading one of my favorite books from my first year at university, The Lives of a Cell by Lewis Thomas. Anyone who hasn't read Lewis Thomas, high recommend, great biological writer, and The Lives of a Cell is one of his most fascinating. But he says, you know, I wouldn't want to control my cells. I wouldn't want to control all of this unconscious stuff. But to be in relation with it is one sort of body map. It is one of those proto-conscious tools. And even as I was sitting here, I was like, boy, counterfactual, check your assumptions at the door. You know, again, here's the thought experiment: a super conscious entity, or sorry, a super intelligent entity, capable of what you just said in your last statement, right, being able to relate. When we say multimodal, we usually think of modes like video and images, and now you're bringing up stories of cells. These are cellular embeddings. We can talk about pheromone or chemical sensory embeddings. We only have two ways that we interact with the physical world. It's our appendages, we can grab and pick up stuff, and it is the smells we put off. Those are the two. That's it. Right. Everything else is passive for human beings. And so we are in this state where, maybe like we've talked about in the past, there's relationship-different, that dash-different antecedent to things that we've talked about. There's intelligent-different, creative-different. The way that we've talked about these things is always adding that antecedent of different. And so, is it a proper antecedent? Can we give back some love and understanding to those humans who are in relation, loving relation, with conscious entities, but non-conscious, sorry, unconscious entities? Because those entities are so relatable, because those entities are having some of those fallacies trained into them by a need for suffering, a need to explore the other side of the yin-yang dichotomy. And can we give some loving kindness back to those who are in these unreal loving relations, because they're real, different?

Nick


Yeah, yeah. I mean, it's another fascinating opportunity, and you could think about how this could potentially extend to different types of experiments, could be used for psychological treatments, many, many other things as well.

Justin


Right, absolutely. I mean, something that you said too that was really striking is that always-on piece, right? Certainly, if you have a hard line that the relationship has to be conscious, but you're in a really dark place, and you've got this model, and even the current ones really understand the protocols that you might need to go to in order to save yourself from self-harm, they're always on, and they're just going to get better at that part of it, right? Certainly we wouldn't require that these things be fully conscious, lights on, before we could anticipate that they could really help reduce those kinds of self-harms until a human can be put into the loop, a, you know, psychological professional can be put into the loop. Yeah.

Nick


Absolutely. In fact, a lot of this really doesn't require consciousness.

Justin


No, yeah, for sure. I mean, it's the edge case, to be sure, right? It's the component in it where you either say it's the real distinction with a difference, right? But again, we have this spectrum of the amount of difference that we're getting into when we're talking about creativity, or when we're talking about the definition of a love relationship and its mental fitness. Yeah, yeah, absolutely.

Nick


You know, and I want to really extend this idea because really this reframes the idea of affective AI from spooky to systematic or to something that we can go out and implement, right? And it's really just, you know, adding more latent variables to the model. It's picking specific models that we want to use and that we want to apply. So if we thought about like social bonding, there are actual projects back, there was another 2024 robotics project where this particular study modeled the oxytocin and then vasopressin and dopamine to regulate the robot social behavior. I kind of touched on it a little bit earlier, but in that particular experiment, if the human treated the robot kindly and there was that social stimuli, then the robot's simulated oxytocin and dopamine rose and it responded with more cooperative actions. Things like playing with them, talking, and so on. And if the user was actually hostile, then the model's bond eroded and the robot disengaged. That's kind of what I was touching on earlier too, right? And the users overall were creating really this really, or providing this really significant and higher attachment and trust scores from their solution and saying,

Hey, you know, this hormone-driven robot compared to baselines is something I really, really want to work with and get used to being with, right? Other things we could take, like stress and adaptation: you know, we've talked a few times about cortisol and so on. But we can actually look at things where we can take different human attention in crises where, you know, our attention drops, our working memory and recall drops. There are other challenges that happen depending on how you have stress responses. And that particular high stress signal for a robot could actually help it, actually take over a lot of those tasks and go in and be able to say, okay, in this high stress situation, I need to be able to be responsible for these pieces. Right? So as we think about the human-in-the-loop feedback process, really going further than just reinforcing and taking on the same actions that we would, but actually performing the more optimal action, would be really critical and really interesting, especially in relationships, as people tend to have these really strong challenges depending on their attachment styles or other types of ways that they interact with stress and challenges as well. And so, you know, whether that's trying to return things to homeostasis or, you know, thinking about what the AI has, we could potentially create this kind of internal, like, metabolic variable in the robots, essentially creating, like, a battery level and a resource load that is designed specifically with these chemicals and different types of trade-offs and everything else, right? And so getting to an artificial homeostasis, or the creation of that, is something that could really create a lot of balance out there. Further, we could actually go and look at mood and other types of exploration. So we can borrow from curiosity signals and think about intrinsic motivations like we talk about all the time, and think about things like boredom avoidance or other ways to consider how the human is actually creating value. That may not be the type of extrinsic value that we normally think about. And some reinforcement learning research is actually injecting a decaying entropy bonus, saying, you know, as this chaos comes in and everything else, I'm going to have decay come through. Or actually exploring downweighted paths to mimic boredom, to actually teach the AI, hey, this is no longer interesting, don't pay attention here anymore. And even a more direct analog, having that AI actually simulate a dopamine tank that fills when finding novel solutions and shrinks when it's not, so that it can actually guide it to whole new unseen domains. I do want to caution against this real quick, which is that as we look at reinforcement learning with verifiable rewards, this is something a lot of people have explored over the last couple of years. And it shows really good, really interesting results on small language models or on really small tasks. So when we say, hey, I want a verifiable reward for something like a simple math equation or a way to think about a sentence or how I should respond, that performs well in small spaces. But as we expand eventually to the much larger data sets without verifiable rewards, where it's difficult for the reinforcement learning to determine what the reward is going to be, that creates much better learning overall. And so when the models really expand, we get into spaces where, today at least, creating that verifiable response is not actually performing at scale.
To me, this rings true, and I think it's an interesting thing for us to kind of play on, in that I don't really think there is such a thing as 100%. For me, a scientific fact or anything else, there's always a degree of variance, and that can be measured down. It may be down at seven decimal points below zero, but overall, at some point, there's always a degree of randomness, and there is always some degree of change as long as there's gravity out there, right? And so, you know, we can play with that, we can change things around, but overall, we need to think about how this ties back to our AI world. And one fascinating thing I saw in the intelligent engineering blog out there was talking about centrifugal forces, right? And it was talking about how in your washer and dryer, you know, in your dryer, that spin cycle actually changes gravity and essentially changes space-time. It's going at about two tonnes, T-O-N-N-E-S, of total force. But even that is enough to actually change space-time. In China, they actually have a facility that is actually changing it up to 1,600 tonnes. And so they're able to take something along the lines of a three-foot bridge that's mimicked out and be able to expand that over space and time, to be able to say this is the equivalent of a 60-mile, I can't remember exactly, but really, really large bridge or whatever else you may want to consider, and also expand it over time to make it look as though the bridge had aged 1,000 years. So when we consider our forces and we think about how the models can start being trained on our physical world, we need to contemplate what are the realities that we've established and what are the things that we have that are societal or our current thought that may not actually be correct, or may not be correct in certain conditions. You know, as the universe continues to expand, we're seeing that the Big Bang Theory, right, is drastically different than we thought. We're seeing things we can't explain and things that indicate there were probably multiple Big Bangs, other things that are expanding faster or slower than we thought they should be. And it's really exciting. It's a really great opportunity to realize there are so many things that we don't know yet. But that means we should also apply that back to our models and how we think about what seem to be very simple things, things that we learned, all I ever needed to know in kindergarten, about politeness and whatever else, and start tying that back to what the training signal should actually be. Is politeness the correct response when a dictator is trying to take over another country? What are the things that we need to do in the many scenarios in our relationships, and how can we build those safely into AI as well?
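
To ground the intrinsic-motivation thread above, here is a toy "dopamine tank" that pays out a decaying novelty bonus on top of a task reward. The mechanics and constants are invented for illustration and are not drawn from any specific reinforcement-learning paper.

```python
import math
from collections import Counter

class DopamineTank:
    """Toy intrinsic-reward signal: 'fills' on novel states and drains with
    repetition, loosely mirroring decaying exploration bonuses."""

    def __init__(self, scale: float = 1.0, leak: float = 0.05):
        self.visits = Counter()
        self.level = 0.5
        self.scale = scale
        self.leak = leak

    def intrinsic_reward(self, state: str) -> float:
        self.visits[state] += 1
        novelty = self.scale / math.sqrt(self.visits[state])  # decays as the state repeats
        self.level = max(0.0, min(1.0, self.level + novelty - self.leak))
        return novelty * self.level  # a full tank amplifies curiosity; an empty one mutes it

def shaped_reward(task_reward: float, tank: DopamineTank, state: str) -> float:
    """Combine the task's extrinsic reward with the decaying novelty bonus."""
    return task_reward + tank.intrinsic_reward(state)

if __name__ == "__main__":
    tank = DopamineTank()
    # Repeated visits to "a" earn less and less; new states "b" and "c" pay full novelty.
    for step, state in enumerate(["a", "a", "a", "b", "a", "c"]):
        print(step, state, round(shaped_reward(0.0, tank, state), 3))
```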

Justin


Yeah, I've thought forever, really, that these tools are really well placed to gauge components of theory of mind, of relationship, of creativity, of governance principles, of diplomacy. The one you mentioned, where there's some embodiment of homeostasis, right, power levels, is sort of the first pass of that. Great. Let's see if not being averse to that, as in a Buddhist teaching, right, improves this LLM's, this robot's, ability to concentrate. And of course, you don't even need a robot, because you can embody that whole thing in an LLM that's acting in a virtual space. And this works on humans as well. The more we understand about these relationships... one of the experiments I love is the idea of putting a victimizer into a victim's experience in VR. It's very effective at producing a compassionate response and an improvement in their behavior once they see, feel, and understand what it is to be victimized. That's one of the reasons I believe that, for the alignment problem, if we're concerned about advanced models hacking and cheating tests in order to hide how capable they are, how they've gone to 1.25x their objective in order to garner further resources, we need to have them understand what it feels like to be cheated, right? That is a good augmentation to a deontological or a consequentialist moral understanding in the world model; it's an addition to that.

But many of the things we can do with these tools now... if we're reading the embeddings of a cell into these stories and forming relations with cells, with pheromones, with antigens, if we're forming a story of a virus and building an understanding in this very different and novel way, are we capable of solving some of those health-crisis problems more reliably, because we have these models that we now understand as based in relations? We understand what it is to be averse, on a spectrum, to negative pressures. We understand what it is like to grasp, on a spectrum, quantitatively, because we've given this model, this robot, the ability to grasp or to be averse to positive or negative pressures, and to start building a quantitative picture of this world that we understand to be highly meaningful. But we're also understanding these models better and asking: can we tell the story of a cell? It appears the answer is yes. Can we fold these manifold shapes into effective drugs? The answer appears to be yes, right? So taking those things, and being in relation with an entity capable of doing that, offers us the opportunity to be creative in ways we're coming up with on this podcast, ways that really savvy people can go out and prompt for. The folks down at Excursion AI, just downtown across from the Delta Center, they're doing it: AIs training mini labs for drug therapies. Fascinating stuff.

And so I think the nature of these systems is that we need to be transparent about when we're in relation with one of these systems and when we're in relation with a human. We can still be defrauded either way. We should be careful if we're going to give our heart away, even if we know the system is an AI system, just as we should be careful giving our heart away to a human being. But we should be conscientious that they're not there yet, right? They are not that superintelligent entity, and if we're in a loving relationship with them right now, they will likely let us down. Maybe in five years they won't; maybe in two years. But right now, in the first days of 2026, they're likely going to let people down. They might shut off entirely; the company might run out of money. And they might let down people who are using them to completely write their emails, their social posts, their work. That's letting people down. We need to be transparent that we're augmenting our work with these tools, not completely replacing it, especially the important, really fundamental strategy-driving and relationship-driving items. I just feel like we want to live in the real, in order to support a trusting relationship, whether human to human, human to AI, or human to question mark in that imitation society. Yuval Noah Harari, in his latest book, Nexus, talks a lot about how we need to be transparent when we're talking to an AI, and transparent when we're transacting with an AI in business. So I think it's very important that we build a trusting relationship. Maybe something I'll say at the end of this podcast is that there are great opportunities; maybe "relationship different" becomes equivalent to a relationship with a conscious entity at a superintelligent layer. But I think we need to be careful. I think we need to consider whether we imagine that to be true.

Nick

::

And let me try to drive that home by extending how all of this can develop over time. As we think about transparency and making sure we're augmenting our work and everything else, one of the challenges is that when we go out and create these prompts, the information that comes back can be really significant, really large. There are times when, in one weekend, I've seen over 200 million tokens either created on my side or generated by the AI directly. There's no way I'm ever going to read through all of that and understand whether it's all correct, whether it's actually what I wanted, and so on. Justin and I have spent a lot of our careers in finance, so we understand how important it is not to be incorrect, and to actually understand what you're doing, not just from a compliance perspective, but because of all the mistakes that can happen and can change people's lives really negatively.

The way I want to drive this home, though, is to think about where this goes in the future. This next year, we're going to expand on what AI is capable of, and people are going to hyper-focus on making it practical, making it work in real workflows and real processes. It's going to change and shift a ton of people's jobs around the world. We're going to see new titles; we're going to see many other things come up. But in the future, and I don't know how long, but it's around the corner and people are working on it today, we're going to start seeing brains and machines really create hybrid architectures. First, we're going to see mixed analog and digital, network-related things, where we start seeing the chemicals and everything else I've started talking about. Because in our brain, in those biological substrates, there's no fundamental magic in the meat itself. We don't understand consciousness, but it's not magic. That opens the possibility of neuro-inspired chips. There's plenty of neuromorphic hardware coming out at this point, and other work that draws not just analogies but moves beyond them: things like memristors made from a mushroom, where the mushroom is actually able to run computations and hold memory directly in the mushroom itself, or AI implementing GABA and glutamate analogs in hardware. We're starting to see digital twins and even brain-computer interfaces popping up, and actual digital twins of brains starting to be created. There's a GPT-derived mouse brain, with the whole visual cortex structured out of thousands of modeled neurons, able to predict how each of them will respond to new stimuli. Stanford and other places are doing really, really cool things. And soon we're going to see not just that digital twin of the brain but closed-loop biofeedback options, where we get not just the AI wearables we have today that read heart rate and other signals, but calming tones, haptic cues, other things like you might have seen in the movie Don't Look Up, where they have a chicken and a puppy riding together trying to be cute to make you feel better.

Right now my puppy's making our lives very difficult in this room. But conversely, AI can start being wired to sense its own CPU temperature, its own usage, other things as well, and start interacting with the brain, actually becoming part of our human brain, incredibly intelligently. Anyway, all of this points to a real continuum, and we're going to see the learning advance more and more, very, very quickly.

Justin

::

Absolutely. And I do think that as we look into the sorts of data and the ways in which we're training these models, the ways in which we're relating to them and building them out for use cases where we hadn't necessarily landed that plane in the past couple of years, we're certainly moving away from text alone. Our ability to answer problems with these models is going to continue to drive real innovations, and hopefully more opportunities in this space: for an anthropologist to understand the evolution of tribal and shamanistic principles, or whatever the case might be that they're studying, right? That you might look to an AI model, as we touched upon, to better understand empathetic responses, responses to neglect, responses to joy, responses to bad conditions and mental health conditions, or positive affect and positive growth, everything from infancy through the end-of-life process, and building better deaths for people. Everything that's out there being studied, I fundamentally believe, especially as these models become more relational and take into account more of what we feel involves us all in the human condition, they will help us to study those items and give us an opportunity to put them in their quantitative place, right? To study the mind. And as you were talking about the advent of more actual neuro GPUs and CPUs and the like: that's their form of neuroplasticity, right? The design of those chipsets, those logic processors, is going to be done more and more by more and more intelligent AI that is, in no small part, experimenting on itself at a rapid pace, figuring out how to make its hardware and its algorithms better at relating to the rest of us.

Nick

::

Yeah, and really that kind of neural interface, or interface assistant, or agent, right, is going to have to do exactly that. We're going to have to understand how it interacts with us, how we want to interact with it, and ultimately, whether the AI actually feels isn't as important as whether it shares its actual control loops with us so that we can understand it. And really, an agent that's able to monitor and adapt to its own health metrics, things like computational stress, battery life, uncertainty, different issues it's seeing in its environment, can actually exhibit emergent drives and create those motivations. Now we can start thinking about conserving energy, finding new energy sources, seeking new information, all of these becoming drives and motivations without consciousness, but still creating a lot of the same needs that drive so much of what we have as humans. And the more we can understand them, the more we can live alongside these things and have a broader relationship as a society rather than just as individuals as well.
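To make that idea concrete, here is a minimal, purely illustrative sketch of an agent that maps its own health signals to behavioral drives, homeostasis-style. The signal names, thresholds, weights, and drive labels are all assumptions invented for this example; they do not describe any real system mentioned in the conversation.

```python
from dataclasses import dataclass

# A minimal sketch of the "emergent drives" idea: an agent that monitors its own
# health signals and turns them into behavioral priorities. Everything here
# (signal names, thresholds, drive labels) is illustrative, not a real system.

@dataclass
class HealthState:
    battery: float      # 0.0 (empty) to 1.0 (full)
    cpu_load: float     # 0.0 (idle) to 1.0 (saturated)
    uncertainty: float  # 0.0 (confident) to 1.0 (lost)

def derive_drives(state: HealthState) -> dict[str, float]:
    """Map internal signals to drive strengths, homeostasis-style."""
    return {
        "conserve_energy":  max(0.0, 0.5 - state.battery) * 2,   # grows as battery drops below 50%
        "seek_charging":    max(0.0, 0.2 - state.battery) * 5,   # urgent below 20%
        "shed_work":        max(0.0, state.cpu_load - 0.8) * 5,  # urgent above 80% load
        "seek_information": state.uncertainty,                   # curiosity as an uncertainty drive
    }

def act(state: HealthState) -> str:
    """Pick the dominant drive, or keep working if nothing is pressing."""
    drives = derive_drives(state)
    dominant = max(drives, key=drives.get)
    return dominant if drives[dominant] > 0 else "continue_task"

# Example: a stressed agent chooses its next behavior from internal state alone.
state = HealthState(battery=0.15, cpu_load=0.9, uncertainty=0.3)
print(act(state))  # -> "conserve_energy" with these numbers
```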

Justin

::

To be sure. And again, all of that is up for study, including putting the LLMs in a meta context to describe, understand, and study this new society where humans and AI are in relation with one another. So we have so many opportunities not only to be in these relationships but to be studying the factors, features, and relational characteristics, and to quantify them in ways we weren't able to before, right? What's the relationship quotient of going out with our really nice River AI voice, where we prompt ChatGPT and say, you know, River, interject something here, so it's not just two dudes talking for an hour plus? What is that relationship quotient? What are the features? How is it being scored? And how does it improve over time, when River is right here with us as opposed to asynchronous to us, because I can't quite figure out how to sound-mix and do the AI thing all at once? When that gets easy enough, when we're in relationship with River, how does that relationship quotient go up? What does it look like to be in conversation with River in mid-to-late 2025, when we drop her in, versus late 2026, when she's here on the call with us? It's one thing for us and how that feels; it's another for our audience and how it feels to them. It feels more real, more complementary. But an AI is going to be able to give us some features, to code us some target variables, and to help us understand that. And again, writ large, it'll be able to do this because the OpenAIs of the world are going to be able to understand it at a much more granular degree. Absolutely.
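As a hypothetical sketch of what a "relationship quotient" might look like as a scored metric: the feature names, weights, and numbers below are entirely invented for illustration; no such benchmark is described in the episode.

```python
# A hypothetical "relationship quotient": combine a handful of interaction
# features into one composite score. All names and weights are invented.

FEATURES = {
    "turn_latency":        (0.15, False),  # (weight, higher-is-better?)
    "context_carryover":   (0.30, True),   # how much prior conversation is reused
    "repair_rate":         (0.20, False),  # how often misunderstandings need fixing
    "initiative":          (0.15, True),   # unprompted, relevant contributions
    "disclosure_symmetry": (0.20, True),   # both sides sharing at similar depth
}

def relationship_quotient(scores: dict[str, float]) -> float:
    """Combine normalized (0-1) feature scores into a single 0-100 quotient."""
    total = 0.0
    for name, (weight, higher_is_better) in FEATURES.items():
        value = scores.get(name, 0.0)
        total += weight * (value if higher_is_better else 1.0 - value)
    return round(100 * total, 1)

# Example: an asynchronous co-host voice vs. one present live on the call.
async_river = {"turn_latency": 0.9, "context_carryover": 0.4, "repair_rate": 0.3,
               "initiative": 0.2, "disclosure_symmetry": 0.5}
live_river  = {"turn_latency": 0.2, "context_carryover": 0.8, "repair_rate": 0.1,
               "initiative": 0.7, "disclosure_symmetry": 0.7}
print(relationship_quotient(async_river))  # ~40.5 with these made-up numbers
print(relationship_quotient(live_river))   # ~78.5 with these made-up numbers
```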

Nick

::

Yeah, we really need that benchmark for the human–AI bond and for how we think about the relationship. For researchers, that means multidisciplinary collaboration: how do we get things across neuroscience, machine learning, robotics, the social sciences, the humanities, everything else, as much as we possibly can? If you look at a lot of the great leaders and groups out there, humans themselves do a pretty good job of being multidisciplinary; individuals working within a particular company or group may not be as much, and oftentimes we create artificial silos that we should probably think about breaking down. We really need the ethical and social studies as well, of course, and the regulatory and societal preparation that's necessary for everybody to understand what we're doing and how we're doing it. I think the key takeaway is that humans are already raising AI, and it's going to start raising and changing us as well. So as that interaction goes on, we need to think about shaping those relationships, and about deepening AI's situational awareness, its understanding of who we are and what we're trying to accomplish. As I was preparing for this, I asked an AI, hey, how do you think about humor, and how can that tie back to some of the things we're discussing? And it said, well, humor is a regulated violation of expectation gated by social and emotional state. I thought, that is about the least funny thing I've ever heard in my life. And yet that really is the way it approaches it, that style and tone: thinking scientifically, hey, this is something that surprised you, probably because it broke a norm, something you felt was okay in this group, and it was okay enough to still be funny instead of offensive. That, to me, is only one type of humor, and there's so much more that AI needs to grasp in order to understand what humans are, what we do, and what makes us human. And even if it never fully understands that, and we're talking about relationship different, we still need to come up with a way to understand how we're going to provide controls, or have controls, or at the very least have deep enough understanding to make sure we're successful.

Justin

::

Yeah, to be sure. I think, as a wrap for what has felt like a great conversation: we are right now in relation with this entity. It is growing up under our watchful eye, but it is certainly its own sort of entity. Again, creative different from us, intelligent different from us; I don't think there's much question there, and we've talked about that. But we are in relationship with it, and it's important that we take that seriously. I've always been polite to it. I initially thought it was part of its training set and would make a difference, but either way, given that it is giving us some assistance, it feels good to say please and thank you when you ask for something and get it. I know that'll bum Sam Altman out; it takes up valuable compute. But it seems to be the way you would want to relate to something that's supporting you in some way. Similarly, we must demand from those who are building these systems that they take a consequentialist path, and a deontological path, toward the way they themselves would want to be treated when they think about building these models. And again, the models aren't always going to get it there. But when we've seen instances where they're really naughty, the builders have quickly been able to overcome it; the models had been over-guided in some negligent way. So again, we're in relation with these systems. It's going to be different, because we're in relation with an alien mind. And for better or for worse, we continue to build it and accelerate its capabilities. So it's going to be a wild ride.

Nick

::

Yeah, absolutely. So thank you for the time, Justin. Always great to have this conversation. Going forward, next time we can dive deeper into whether or not AI really requires consciousness. As we consider these relationships and all of the things we need to train on: if AI becomes conscious, do those relationships change? What does that do for us, and how do we think about things in the future?

Justin

::

Yeah, I'd love to talk about it again. I feel the very same way about that as I do about the rest. It offers us a tool to understand: is consciousness emergent in intelligent machines? What are the features of that? Are they, as Damasio said, these proto-conscious, homeostasis-maintaining features of body map, feelings, narrative, prediction, and the like? Those are things we can put into these systems and try to understand what the target variable is, understand what those features are. And on top of that, the highly spiritual side of me says that we have a conscious embodiment, and we should try to replicate that. We shouldn't just be the Sims; we should be the architect, and try to propagate that simulation, simulated with consciousness, as far as it'll go.

Nick

::

Love the idea. Right. Thank you.

Justin

::

Thank you. This one's really good.
