This episode’s guest, Robbie Carlton, is the host of The Sane and Miraculous podcast.
Robbie and our host, Michael Porcelli, were one-time colleagues who trained facilitators in the relational practice called Circling. Robbie and Porch also bring their long-shared interest in computing, science, philosophy, and consciousness to this wide-ranging conversation exploring the influence of artificial intelligence on economic sectors reliant on human relationships, such as therapy, coaching, and education. They also speculate on future ethical dilemmas, including AI sentience and rights.
They delve into how increasing digital interactions through remote work and social media are changing our social world, especially as AI agents become more prevalent in our daily lives. Do we risk further isolation and loss of human connection? What constraints on these trends are needed to preserve the essential aspects of our humanity?
On the flip side, they discuss how AI could relieve us from mundane tasks, allowing more time for enjoying human connection, creative expression, and solving engaging problems. Are there limits to how much of human economic activity can become experiential and relational?
Michael Porcelli:Welcome to Relational Conversations, a podcast about relationships, communication, and the interplay between the two. I'm your host, Michael Porcelli, and it was a real pleasure to record this conversation with my good friend and former colleague, Robbie Carlton.
What you might not know about me is that I earned my first degree in computer science with a minor in philosophy and then had a career as a software engineer before earning a degree in psychology and making the switch to a more relational career as a coach, facilitator, and educator. My friendship with Robbie started near the beginning of my second career when we met in a relational facilitation workshop.
My relationship with him has been incredibly rewarding in the depth of connection we've experienced together and in our shared intellectual interests in both computing and philosophy. I've even had the privilege to be a guest on his podcast, The Sane and Miraculous.
As artificial intelligence becomes an ever-increasing part of our daily lives, we humans have the opportunity to reshape our economy so that we spend more of our time, attention, and energy sharing our experiences and enjoying our connections together. But that future is not guaranteed. More dystopian futures seem just as likely if we're not careful.
So in addition to relationships and communication, Robbie and I get into more wide-ranging topics like AI, economics, and a little bit of history.
Note that we recorded this episode in [date]. Okay, so without further preamble, please enjoy this conversation with Robbie and me about AI and its potential impact on relationships and the economy. Today, I'm joined by one of my nearest and dearest friends, Robbie Carlton. Hey, Robbie.
Robby:Hello.
Michael Porcelli:So, by way of introduction, Robbie, you and I go back over 10 years now, and we met at a workshop for learning relational facilitation skills, specifically a practice called circling.
And since then we've become good friends, we've become colleagues, we've co-created curriculum for people to learn relational communication, and all kinds of other things that we enjoy doing together. Anything you want to say?
Robby:Yeah, I mean all of that. The main thing we did is we trained a bunch of people in circling for a long time.
Michael Porcelli:That's right. Yep.
Robby:And we flew back and forth across the country training people in circling for several years.
Michael Porcelli:Right. And we love to nerd out and geek out about all kinds of things.
Probably another relevant thing to say here is that both Robbie and I share a background and enthusiasm for computers, technology, computer science, and artificial intelligence, as well as kind of nerdy philosophical topics like consciousness studies, integral theory, and these kinds of things. And so those interests all sort of overlap and come together and will be at play in our conversation today.
Robby:Can I insert a little plug here?
Michael Porcelli:Plug away.
Robby:Which is just to say I'm also the host of a podcast called The Sane and Miraculous, and we have actually covered some of these topics, Porcelli and I, together on that podcast.
This is going to be a different conversation, but if, after however long of this conversation, you feel like you haven't had enough of hearing us talk about AI and its various implications, there's a whole other podcast episode of us talking about that on The Sane and Miraculous. So, yeah, just wanted to throw that in.
Michael Porcelli:Awesome. Yes. Robbie and I love talking. We enjoy our conversations with each other quite a bit, and we're going to take you along for that ride today.
And today we're centering this topic especially because it's been in the news a lot the last year or two.
Some people are freaking out, some people are really enthusiastic, and some people are trying to think about the implications of this for things like jobs, what we do with our time, and relationships, and how that can impact our emotional and mental health. At least in our little corner of the Internet, I know some people talk about ways AI could be used to potentially assist facilitators or coaches.
Or companions, you know, this idea that we can relate with AI chatbots and have some kind of relationship with them.
It gets closer to that when the AI is not just doing some rote task but actually conversing with you through this kind of chat interface. It's easy to start thinking there's a real entity over there displaying signs of intelligence or sentience,
and people almost want to believe, or something like this, that there's something there that they're in relationship with.
Robby:There's nothing else in reality that has had conversations with us, in the whole history of being human and pre-human, other than other sentient beings.
The only thing we've ever had conversations with, until the last 20 years, maybe 40 years, has been sentient human beings.
And so there's no wiring in our neurology to distinguish between sentient human beings and machines that seem like sentient human beings. There's just nothing. We don't have the tools to do that. So we're really easy to trick into thinking that these things are sentient.
Michael Porcelli:Right, right.
And there was a news item from last year where this Google engineer, who I think subsequently got fired from his job, was like, it's alive, and went and published the thing. And then more recently, there was a New York Times technology columnist who got into it with, I don't know if it was Bing or ChatGPT or something.
And it started telling.
Robby:Had like an affair or something, like, had a weird. Like it was trying to seduce him or.
Michael Porcelli:Yeah, right. It said it was in love with him and said he should leave his wife. And anyway, it actually ended up on the front page of the New York Times. It was kind of funny.
So, I mean, I think there's potentially just a rabbit hole of philosophy we could go down, which I think we're going to touch into. I don't think we're going to resolve it here. You know, I think there's debates.
But something I've thought for a long time now is that sort of independent of the, you might say the metaphysics of it or the philosophy of it, like, is it really sentient or not, is the fact that as it moves in this direction, more and more people will start to believe that it is.
And that kind of has interesting implications, both legally, like what status these things have, or whether we give them rights and this kind of thing, but also what role we have them play in our lives. Right. Like the ways that we use them or utilize them.
Robby:Yeah, I'll just say, I understand when you're drawing a line around, you know, the ontology of the consciousness. Right.
Whether or not these things are really sentient or not.
And you're kind of saying, let's not talk about that for a very good reason, which is if we start talking about that, we get sucked down into a deep rabbit hole of it. And so I just want to say, like, yes, and it does feel like tying one hand behind the back, which I think is a sensible thing to do.
But that question deeply informs all of these other ethical questions. And a bunch of people believing they're sentient when they are has very different implications than a bunch of people believing they're sentient when they're actually not. Right. And how we're going to decide that question is very, very difficult. It's a whole other rabbit hole.
So I'm happy to like not go down that rabbit hole, but I do want to put a big signpost saying that's really important. Yeah. And for sure, like, all the questions you raised seem super interesting.
Michael Porcelli:Cool.
Well, before we just kind of bypass your little signpost, let's just paint a quick summary, because I think there are ways you and I have overlapping perspectives and differences, and we're not going to resolve them here. But, you know, I mean, what do you think?
I mean, do you think that the current versions of these chatbots are sentient? Or do you think it's possible that even if the current ones are not, in the future they could be? Or do you think it's in principle impossible? Or do you think eventually we're going to get there, just not yet, or not for a long time? What do you think?
Robby:I think. Yeah. Thank you. I think that the current things are absolutely not sentient.
I think that the current technology is not on the road to bringing about sentience by itself.
It might be that some version of this technology is used in later artificial beings, which, I want to say, when we get there, we should think of not as artificial intelligences but as artificial beings which have intelligence. And, you know, as I go further out here, my probability on each of these claims is getting lower.
But I suspect that they will be embodied, that when we get sentience, it won't be something which is just living in kind of a software cloud.
It will actually turn out that the mechanism of sentience requires physical instantiation, not purely information processing. So that's kind of, very briefly, my position on what's happening. I don't think these things are sentient at all.
I do think it's worth asking ethical questions about the treatment of them, kind of just to check ourselves and be like, let's make sure.
I don't know, maybe they're sentient. I don't think they are. Maybe they are. We should be asking the questions.
But from where I'm standing, I don't think there are any ethical implications to the treatment of these things. And I also think that it's delusional and potentially dangerous to model them as sentient in your worldview. So that's where I live currently.
Michael Porcelli:Yeah, I mean, I'm roughly close to where you live. I think I agree about the embodiment thing, kind of strongly, in a way.
I mean, my version of that is I think there needs to be some kind of perception-action loop that is in relationship with some kind of exterior environment. Now, that could potentially be in a virtual environment.
But like, you gotta have this kind of perception-action loop with an environment to be fully sentient in the way that humans are. Then it kind of raises this broader question of panpsychism.
Are there kind of, what you might call, primordial or proto versions of consciousness that exist maybe everywhere? Like, are molecules kind of proto-conscious? And maybe there is some version of that.
I mean, I definitely think dogs and cats are, okay, so then how far down does that go? And does it go down to something like a reinforcement learning algorithm in a soup of data or something like this?
There, I'm agnostic. I don't know, it probably has some proto-something in there, but I don't necessarily think it's a good idea to emphasize that, or to think of it as conscious or sentient in a human-like way, or to start thinking of it in terms of being a moral patient, like a being of moral concern, or that we should start incorporating a regime of rights into our legal system. I think all of that is way premature.
And actually, even if it's hypothetically possible to do so in the future, again I become more uncertain the further out we go about anything, about what it's going to look like. I don't know if it's a good idea to go in that direction at all. Right.
Like, I think there's a fork in the road, a differentiation you might say, between AIs that are tool-like, that just follow orders while exhibiting what seems like very intelligent behavior. Kind of the way the Go-playing AI or the chess-playing AI was like, wow, right?
That's, you know, a high achievement of human intelligence that it's just better than us at in these particular ways. We don't exactly know how it works, but it's impressive.
So, exhibiting intelligent behavior at superhuman levels, I think, is totally going to happen all over the place.
But the creation of something that starts to appear like another being that we're interacting with, it's kind of like, well, how much investment should we make in that?
Or should we kind of try to steer away from that and keep the AIs as more just like extensions of our own minds and bodies rather than their own beings themselves? I think that's probably better.
Robby:This is kind of a different ethics question, like an industrial ethics question, as opposed to the moral question of how do we treat these beings. This is the ethics question of what do we work on? What do we build?
What's a good idea to build? Which of Pandora's boxes should we leave closed?
Michael Porcelli:Right, yeah.
Robby:Just to go back to the panpsychism, because I'm a pretty confirmed panpsychist, and my response to that idea is that the machine learning algorithms have as much consciousness as the servers that are running Facebook or the bitcoin-mining computers.
Michael Porcelli:Gotcha.
Robby:Yes, they do have some amount of consciousness due to the fact that they exist, ultimately, living on matter.
But it's not more because of the particular kind of information processing that's happening on those silicon chips than bitcoin mining or any other information processing. So that's where I land. And I know we're not.
Michael Porcelli:This is the rabbit hole. We're not going to rabbit hole.
Robby:Okay, no more.
Michael Porcelli:Good. Yes, totally. I'm glad you said it.
And yes, this is where you and I start to part ways; our overlap starts to diverge right around there. So, to bring it back to the human world, I kind of had these ideas. You know, people have talked a long time about automation.
There's a whole history of this, right? Like the weavers, the people that hand-wove textiles.
And then we automated those with steam-powered weaving, and, you know, it was like the Luddites, right? They were like, fuck this, it's going to take away our jobs. And now it's kind of like, it's really great that we make ultra cheap-ass clothing for basically everyone on earth, more or less with machines. Right.
We're not bemoaning the loss of those jobs. And there are many examples of this sort of freak-out, like with industrial robots: oh no, the factory workers, or whatever.
And now we're in a new wave of that. Once you get a thing that's very rote and repeatable, right, like a position on an assembly line in a factory, you can automate it. And it's actually even easier in many ways if the rote job is just sitting in front of a computer. Right? There are a lot of rote things that a podcast producer or a graphic design agency or a computer programmer does. I'm not saying there's not creativity involved in those jobs. There totally is.
It's just the percent of the time that you're just sort of sitting there like click, click, repeat, you know, doing a similar thing over and over again is actually pretty high.
And anytime you have that and you don't have to even really interact with the physical world, like with machines and stuff, it actually becomes very easy to automate.
And we're in this kind of moment where it's like, whoa, we could be automating away a huge bunch of what have been thought of as like white collar jobs, right? Kind of the college educated sorts of information work jobs.
Some people even say, like, if your job can be done in front of a computer anywhere, that's a job that the AI is going to take over pretty quick here. Like, who knows? But soon-ish.
What do you think of this current wave of freak-out about huge categories of work being taken over by AI, and the impact on the economy? I mean, we're speculating, we're not professional economists, but I imagine you have an opinion. You are a coder, right?
And you, I imagine you use AI tools to help you to some degree sometimes.
Robby:Yeah, sometimes. And they are sometimes helpful and sometimes really annoying.
I would say that ChatGPT has kind of joined Google as a tool that I use sometimes for looking things up, basically. Because it's actually a little bit faster than Google to look things up, since you don't have to click through.
So if you just kind of want to know how to make some API call, it's sometimes faster to ask ChatGPT than Google. And then nine times out of ten it gets it right, and one time out of ten it gets it wrong, and that can be really annoying.
Michael Porcelli:So like, because it's hard to tell at first.
Robby:Well, yeah, because you just go, okie doke. And then it doesn't work and you're like, well, okay, am I doing something wrong?
And then you kind of figure it out, you go look at the docs, and you're like, okay. So, you know, I would not say that it's become some revolutionary time saver for me.
I think I'm more conservative than a lot of people in this, you know, software industry in terms of using it. I don't know about a lot of people, but I'm sure there are a bunch of people that are using it a lot to generate huge chunks of code.
I also feel like there's a distinction between coding for my own projects versus coding for work. For work, I'm more likely to just hand-craft everything, because that's what they're paying me for.
And obviously they're paying me to do it as efficiently as possible, but it would just be a really bad feeling to use ChatGPT to write some code that I submitted professionally for review,
and then somebody looked it over and was like, look at all these mistakes, and I'm like, oh, man. So I'm more conservative in that sense than when I'm playing with my own projects, where I've actually used it a little bit more.
But yeah, it does do stuff. I mean, here's a story. We were doing tech interviews for something.
Have I told you this story?
Michael Porcelli:Maybe, I don't know.
Robby:Okay, well, we were doing tech interviews, and this tech interview was a coding challenge. When you're doing tech interviews, you'll often give a programmer a problem:
hey, solve this, and you'll give them a little weird toy problem that you wouldn't really encounter in real life, but that exercises some of your coding abilities. And this was an interesting toy problem that was a little bit challenging.
People had half an hour, because these were short interviews, and people were, you know, not getting it done inside half an hour. And I sat down to do it myself to make sure that I could do it.
And, you know, it took me a bit longer than half an hour to kind of get a working solution. And this was around the time that ChatGPT was blowing up. So I fed the problem into ChatGPT just as written. Like, you know, didn't do anything.
Just copied and pasted this very casual text document that's just like, write a program that does this. And it gave me a perfect answer in seconds.
So it can do stuff like that, which was like, okay. But that kind of problem is very self-contained, doesn't require a lot of context. And so I think it's good at solving those little self-contained problems.
As soon as you have a bunch more context, it's just not actually that useful, or I haven't found it that useful, but it's not zero.
So anyway, I don't know if that's really what you asked me, but that's a very personal perspective from somebody that is doing knowledge work, and definitely in an industry which is kind of getting ready to maybe make some shifts. So I think I've oscillated.
Like, I remember when the art stuff was coming out last year, and I was like, fuck, I'm so glad I'm not an artist, because this must just be nightmarish right now. And then a couple months later the ChatGPT code-generation stuff was coming out, and I'm like, okay, well, it came sooner than I expected.
I'm also in a similar position of being potentially economically impacted. And I've gone back and forth between pretty alarmed and pretty, like, I don't know, I don't think we're there yet.
And right now I'm a little bit more on the side of: I don't think we're there yet.
I do think there's no doubt that we're at the beginning of an economic revolution of, you know, a huge scale. Like we talked about before, this is on the scale of, if not more than, the information age, right.
Like, we're entering a new age. And so I think that's real. I do think it's going to have huge economic impact. You know, there's this argument that,
well, we've been alarmed in the past and we've been wrong to be alarmed, like the argument you made about the weavers and the Luddites. It's a reasonable argument.
I make a similar argument about people that think the world is ending. Right. Like, you know, well, we've always said the world is ending. Every generation has felt like they've lived in the end times.
There hasn't been a generation of human beings where there wasn't some chunk of them that was like, this is the end times. But one generation will be right. You know, there will be a generation that thinks they're living in the end times, and they'll be right.
So it's not a dispositive argument. And I think the same is true for this. Like just because we were wrong in the past, I do think that needs to be weighed in.
But it doesn't mean it won't happen this time. You know, CGP Grey has a video from like 10 years ago at this point, Humans Need Not Apply, where he makes the argument that this one's different. And I don't know, it's interesting. But I do think it's going to have a huge economic impact, and it's going to change a bunch of stuff.
So I think that that's real, and there are going to be winners and losers, and the winners are going to be fewer and the losers are going to be more, unless we do something somewhat drastic.
I think that's another position I have: given the outdated economic ideas that are embedded in our systems, this shit's not gonna benefit humanity as much as it could, versus benefiting a small handful of people that happen to be in the right place at the right time.
Michael Porcelli:Yeah. Well, thanks for your thoughts there. That's great. I'm gonna riff on this idea.
I'm gonna lean a little more into the utopian view of it for a second here, because utopia stuff has often been the domain of science fiction, and people usually write off discussions like, hey, that's science fiction, whatever, this is reality.
But now we're in a moment where science-fiction-sounding conversations feel like we're talking about the actual reality we're in, or about to be in, especially in recent years.
And, you know, there was a presidential candidate in the most recent presidential election in the United States, Andrew Yang, who used this thesis that we're gonna have massive job displacement because of AI, robots, and automation, so we should create universal basic income. Now, his arguments aside, it's a part of the conversation that's kind of interesting. And this goes back a very long time.
Some people have long argued that leisure, or laziness, or idleness, time to just be creative, is something that we should strive for. And this automation stuff, I mean, it goes way back. Bertrand Russell wrote an essay called In Praise of Idleness back in 1932.
More recently, this guy wrote a boldly titled book called Fully Automated Luxury Communism, which is this idea that, yeah, right, the AIs and the robots do basically everything, and then we just get to do the human things, right?
The human-to-human stuff, the stuff where we really want it to be humans. And this is pretty close to my view, or at least my hope, as a direction that we could or should steer in.
Then there's Kai-Fu Lee, who wrote the book AI Superpowers. And he did a TED talk in 2018 where he basically makes this argument that this whole workaholic thing, the hustle culture, the way that we derive meaning from the work that we do, right, maybe we're gonna enter into an era after that, or post that, where we will still want and need meaning, but.
And the way we spend our time, we will still want it to be meaningful, but we're not going to be able to get the kind of meaning of life, or meaning in life, you might say, as John Vervaeke will say sometimes, out of work; it'll come more out of the human-to-human stuff. So in that sense, some people call it the experiential economy.
Another aspect of it, I think of as the relational economy, where it's like, I'm going to go do a cool thing. Watch me go, right?
And I have my GoPro or my whatever, you know, and I'm going to like share, I'm going to bring you along. You know, this is all like influencer culture on YouTube, like this kind of thing, like be a part of my life in this way.
And you have your little following people kind of vibe with you, but also these other kinds of more intimate professions that you and I are used to, like coaches and facilitators and counselors and therapists, healthcare workers, hospice workers.
You know, even teachers or mentors, this whole thing where a big part of the value of the activity itself comes from the fact that it's you and another person in a kind of relationship, and that's a big part of what makes it valuable. And in my hopeful moments, I'm like, cool. This is the part of the economy.
Like, right now engineers get paid a lot, and school teachers get paid very little, or elderly companions, hospice workers, right. But what if suddenly the AIs automate most of what the engineers do?
So suddenly that's actually super cheap, it costs way less.
And then the kinds of things like healthcare workers and hospice workers become the highly paid professions because those are the ones where we want it to be a person. And that's kind of part of the hopeful relational economy future that I like to think we could head towards. What do you think?
Robby:I have a lot of different thoughts about this. So let me see if I can. So I watched the TED talk you referenced, by Kai-Fu Lee. He seems like a lovely and charming man. It was a very beautiful story.
And shocking that an investor in multiple AI companies is an AI optimist. Right?
Michael Porcelli:Booster.
Robby:Yeah, like crazy. So at the end I was left with a little bit of skepticism. Like, yeah, you know, I think that's hopeful.
Like, what do I say?
I mean, it's funny, I keep coming back to: I'm not a Marxist, I'm not an economist. So I don't know what the economic answers are, but I can see the problem.
And so when you talk about, like, we replace the engineers and the engineers stop getting paid a bunch, and so then we'll pay the teachers and the elderly companions more, because those are humans and we can't automate that.
But the thing is, the value that the engineers were generating didn't go away. So the value is gonna keep being the dominant value.
It's just no longer gonna be distributed amongst, you know, right now it's distributed amongst the engineers and the software people. Instead it's just going to be distributed amongst the owners. Right?
Like, it's going to be distributed amongst the very small number of people that put these things into place. And then what's going to happen is there's going to be.
Yes, maybe the only economic sector left is going to be these helping professions, these human professions. But the supply is going to be gigantic. There are going to be 7 billion people that don't have anything else to do. So I don't know.
So just, you know, economically there's still a missing piece, which is: currently our systems are set up to disproportionately reward the winners, and AI lets us win better. And so what that's going to do is disproportionately reward the winners.
And so by itself, I don't think that AI automating away a bunch of computer jobs is going to mean that the teachers and therapists and elder companions are suddenly going to be the wealthy class. I think it instead has a danger of creating a kind of super-rich class and then everybody else, which is frightening.
Like that sounds like really bad news.
Michael Porcelli:Totally. Yeah. This is maybe a slight pushback, but before I make it,
I'll just say I totally embrace what you're saying: there are winners and losers, these big players who have the resources to build the AIs, these big platform companies, people like Google and Microsoft and the Chinese equivalents of these companies and so forth. They're still going to capture a lot of value by being able to do this.
And I think implied in what you're saying is, right, there's probably an additional, you might say, larger-scale reform or upgrade or evolution needed. This is at the level of governance and the economy, more so than just algorithms and technology, though it includes those things.
But it's bigger than those things. That's right, yeah. And I would agree. And this actually touches on another topic category, which is a huge rabbit hole we won't go down.
But it's like this distinction between Web 2.0 and Web 3.0, right? Web 2.0, where there were these Internet companies that provided ways for users to generate content.
At first it was blogging and then it was, you know, YouTube and social media and all these kinds of things. And on the one hand it's actually kind of cool, right?
You can hop on there and, like, make a name for yourself and become, like, a minor YouTube star and all this kind of thing. However, the platforms, these big companies, are essentially operating an online marketplace.
They're controlling a whole marketplace, right?
Like the advertising dollars and the YouTube minutes, or like Amazon between the buyers and the sellers and the third-party vendors. And it's like, well, they're the uber-middleman, and that's also a problem.
But then, you know, in recent years, in these discussions, it's almost like people in the co-op movement, which goes back to the 19th century, and these kind of newer ideas around, like, cryptocurrencies and how you can distribute ownership, you know, they've kind of come together in this idea called, like, the platform cooperative.
Can we merge these ideas? It starts to feel like it has some features of, like, libertarianism and some features of socialism combined in an interesting way. And there's way more to talk about here, but, like, yeah, that's a potential direction of this hybrid, whatever, technological-governance-economic sort of reform to the system that would allow us to create platforms where this human-to-human exchange can take place more readily.
Robby:But interestingly, I work in that space, right? I work in the kind of distributed computing space and this kind of democratizing of computing.
And what I noticed, hearing that, is that the problem with looking to Web3 to, like, solve these governance problems is that Web3 is like an attack vector for malicious AI.
Because as soon as you're opening up these kind of distributed like user controlled co op platforms where there isn't some central person that can hit a switch and turn everything off, then it makes it that much more dangerous.
If there is such a thing as a malicious AI that kind of comes in and starts doing something whether it's sentient or not, but, or it's just, you know, some program that's really good at optimizing some result and it's extracting, you know, dangerously extracting value out of a system in a way that you know, has like bad externalities or whatever. Like there's, it also gets more dystopian.
So at the same time, like, I do think that part of the promise of Web3 is potentially solving some of these kind of capture problems that we have.
Michael Porcelli:But it could also enable essentially the blockchain equivalent of an Internet worm, which could be super devastating and much harder to deal with.
Robby:Much harder.
Michael Porcelli:Yeah, it's a security issue at that level. Okay, rabbit hole, like let's go past this one.
Let's just say we recognize the need for kind of an evolution of governance and economics along with this thing.
If we're gonna have a world where people do more human-to-human work, you might say, or just spend their time doing more human-to-human stuff. And they're not essentially, you know, having. I mean, it's a weird thing. What's the ultimate version of this?
Like, people hanging out and somehow earning a living or like being able to meet their basic human needs or the economy somehow. It's a weird thing, right? I mean, if I think about some of the things that you and I have done in our, the human side of our career, right?
It's like a lot of that kind of is, right?
It's like we're gonna sit in a circle and we're gonna share our feelings and then we're gonna share what it's like to hear about this person's feelings back with them. And you're kind of like, okay, what are we doing here? It's like.
And then we're like amplifying this kind of like felt sense of connection, intimacy, and we're like, wow, we can even teach people how to like generate this thing. And then it's.
It has such a high degree of what you might call like intrinsic reward or intrinsic value that it becomes kind of like there's almost like a danger of it becoming too addicting at times where people are like, quitting their jobs, just sitting around talking about their feelings all the time. And we have seen some of this in some of our communities and practice where it's like, cool, what do you want to do?
I just want to go back, you know, over to the community center and, you know, find some people and talk about my feelings some more. Because it feels really good, right? And you're like, okay, fascinating. But like, in some sense it's a good thing.
I can see how it can become too much of a good thing.
But when I sort of think about, like, an ideal future, it's like there is something about, you know, sharing in the human experience together and the way that it is intrinsically valuable. It seems like this is a really great way for people to spend their time in some kind of utopian future. Assuming that the.
So let's just kind of assume for the purposes of the discussion that the AI's work and the economic system upgrades it and like, okay, cool, like we're there. Is this a good idea?
Robby:I mean, I think that's a. And it's reminding me.
I knew there was something else I wanted to say about the kind of super optimist, the Kai-Fu Lee, like, AI is going to take away our need to work so that we can spend time with our families. And I think, you know, he seems like a real, like, workaholic, right? Like, the guy, you know, I mean, that's his whole TED talk, right?
Maybe he was right. Like his whole TED talk is the story of what a workaholic he was. And for those people, maybe they need to work less. Like, you know, I.
There's also something that's like, well, work is fun. Like it's interesting to work. It, it does give meaning. And I do. I don't think that that's wholly pathological.
I think it's been, it's been co opted by, you know, by our culture and society and in ways that become pathological.
And I think American culture particularly, but also, you know, it sounds like, I don't know China, but it sounds like China is even worse on the work-life balance thing. But, like, you know, compared to Europe. Like, I was just in Germany and it's way more chill there.
People do not work as much and they, and I think they're happier for it. Right.
So, like, I'm not saying the American work ethic is great, but I wouldn't want to live in a world where there was no work to do, because it's fun for me to work. Like, it's fun for me. And because, what is work? Work is solving problems.
Work is looking at like a gap or a problem in the world as it is today and imagining a better future and moving towards that. And I don't like the idea of a world where all of the problems have been solved and there's nothing left to do.
I also think that it's just never going to happen. It's not available. So that's also good news.
But that's the other thing I would say about the kind of super utopian vision of this: it's just a safe bet that it won't happen. When electricity was coming around. We talked about this with the atomic stuff, right?
Like, people said the same about computers. At the dawn of the web, we were like, okay, everybody's just going to love each other now. We're going to understand each other. We can communicate across the whole world.
And it's just going to create this like global consciousness of unity and love. And it's like, yeah, that's what happened.
Michael Porcelli:Right.
Robby:But I'm also not, like, you know, a Luddite saying, like. There are people that are like, God, the Web, especially Web 2.0, was just an absolute unmitigated disaster and it's made the world worse. But the same with electricity, the same with the atomic stuff.
You know, whoever first made the wheel was probably like, shit, like, my problems are solved now. I can roll this thing, right?
When we look at the wheel today, we don't think about the wheel as being some kind of magical solution to all our problems.
So to me, the most likely thing is that we are going to continue to live in a flawed and beautiful world, and that AI is going to make our lives better in a way which we will very rapidly become completely accustomed to and take for granted, and not really enjoy the benefits in the same way. You know, Louis C.K. has that gag where he's on the plane and the guy's like, there's Wi-Fi on the plane. And the Wi-Fi cuts out. He's like, oh, man.
He's like. And it's like, you know, we. We immediately just take for granted all of these amazing new technologies. And then we're back to.
And this is where I kind of get a little bit Buddhist. We're back to like, nothing is going to solve the fundamental problems of being human. There is no solution to them.
And that's not what, you know, what. What technology is for. So anyway, I. That's maybe a little bit of a tangent from. From what you were initially asking, but yeah, no, it's.
Michael Porcelli:It's relevant. There's. There's a sense of if.
If I reflect on my own life right there, there is a sense of meaning and purpose that I feel has this real intrinsic pleasure to it that is human centered.
This comes in the form of my friendships, my family relationships, my intimate partnerships, and hanging out, talking about movies or going to the movies together. There's all kinds of going for a hike and just spending quality time.
And we think of these things as, you know, maybe they're ephemeral or if we were to kind of put a price tag on them, it would feel a little icky, right?
And it's like, no, we want that to just kind of be genuine, kind of freely exchanged, not under the purview of, like, marketplace dynamics is kind of part of, like, what preserves or kind of correlates to this intrinsic value thing. And then on the other hand, I get some amount of meaning and purpose of like, accomplishing something.
I mean, maybe it's like I want to sit down and draw something or, you know, write a song. I've been known to do that in times past. Solve a project, build a thing. You know, build.
The last one I remember doing was like, oh, hey, Build these, like, little end shelves on the side of the cabinet in the kitchen where we could put our coffee mugs. And I'm like, yeah, right. Like, I don't want to hire somebody to do that.
And some of the stuff is kind of like, yeah, somebody to clean the toilets, let's hire somebody to do that. Right?
But, you know, home improvement project is actually a place where people do a lot, a lot of that nesting and kind of like, I want it to be me who kind of comes up with the idea and then does the work to accomplish it to some degree. So I don't think that, like, the doing of the things has no intrinsic value or meaning. It's just that, I suppose, what is the good life?
And in a world where people kind of imagining this utopia, again, don't have to work in order to have their basic needs fulfilled, then we don't necessarily get this extractive market dynamic where we have to find some way of attaching emotional meaning to some rote, kind of like, repetitive thing that we do, right? Like, in order to, you know, whatever feel good about, like, having to kind of toil away in order to, like, not starve to death. We don't.
We don't have that anymore, right?
Then the world become a mix of like, whatever, personal projects and having a good time, you know, playing games together, hanging out, designing games for each other. I don't know, like, creating little, like, Minecraft simulations. They come to my little Minecraft thing or whatever. I don't know.
Robby:But yeah, it's interesting. I mean, like, I fully. I. What, what seems utopian to me is this idea that.
Not in a bad way, but like, as in, yes, I'm on board with this Utopia is everybody in the world has their basic needs met.
Everybody has a comfortable, pleasant place to live, enough good healthy food to eat, access to education, clean water, healthcare and other human beings and nature. Right.
I don't know, there might be some more things, but that sounds like a decent list of let's get everybody to that and see what people do and get them to that without them having. To everything you just said.
Without them having to kind of contort themselves into some weird shape to do something that they wouldn't be inclined to do of their own steam. And see what they do.
Yeah, and then I think, you know, I mean, there are visions of this that are more or less kind of interesting, right? Like Star Trek initially was a utopian vision, right. Like it's later, I think, not so much because Hollywood, but the earlier Iterations.
The Culture novels. Have you read the Culture novels? Iain Banks. Or, I think it's Iain M. Banks.
Michael Porcelli:Really?
Robby:Oh, dude, you would love them. They're so good. But they're, like, totally. It's like, how do you write novels in a utopian future that's run by AI? I mean, it's literally.
That's the kind of question he's answering: how do you still tell stories? What are the conflicts? What's interesting? And he goes in some really wild places. It's super. Yeah, they're great. But that's. That utopia? Yes.
Let's do it. I don't know. I don't know. I was like waiting for the but. And I'm like, no, let's do it. And yeah. And, and yeah. What would people do?
Michael Porcelli:They do.
Robby:Yeah, I mean they would. I, you know, I think that they would continue to build things.
I think that they would, right? Like, that they would continue to create new structures. They would make, you know. We would go to space, right? We would build, you know, new kinds of cities.
We would, you know, and, and new kinds of communities. And we would. And, and art, all the different kinds of art. Like the widest possible interpretation. And yes, some of. And we would for sure.
You know, one of the things that happens, right, like this is Maslow's hierarchy of needs, right? Like as you're lower level, right.
Like the story you were talking about, the work that you and I did together for years was we were working on a very high level on the hierarchy of needs. We were working on self actualization, right. Like for the most part. And you know, with some kind of healing and relational lower level need stuff.
But essentially the only people that were showing up to that were people that more or less had those lower needs handled and were interested in doing, or at least successfully bypassed and were interested in doing self actualization need stuff. And that I think as the lower needs get more automated, you get more room for that. And it's. Yeah. And it's.
And so that for sure, some of what will happen is there'll be more and more people in various ways. Whether it's relationally or whether it's by going to spiritual teachers or whether it's by whatever else, you know, astrology.
But they're gonna get into like deepening their own sense of their unique human expression in the world, right?
Michael Porcelli:Yeah. I take a pretty broad view of what I think people will do and I do think currently, if you kind of take this top 10% of the globally wealthy.
Both you and I are kind of in that. We live in modern Western democracies, capitalist, blah, blah, blah, this thing. We already are starting to see that. And you touched on a few of them. What would people sort of do if they had the kind of spare time to do it?
It's like, yeah, maybe they would go do a lot of transformational workshops or they might join communities where they get to sit around talking about their feelings. Or maybe they go off and be like, let's do that thing.
Let's be an intentional community and have a farm, and it's organic, and we're all going to live together, and it's going to be, whatever, polyamory or not, something else. Like, these kinds of pursuits of, you might say, kind of experimental living. I think there's a correlation here with, you know, just how many people go off into cults in kind of modern, affluent societies. Let's go over here. And it's like, a lot of stuff built around wellness or around spirituality kind of leads in that direction. But also, to kind of broaden the inclusiveness here, in a weird way, is, you might say, like, junk YouTube. If you think about, like, the amount of weird little. Check me out, I don't know, watch me do my makeup, or watch me eat a raw steak, or listen to me chew on it, or all these weird, obscure things.
And people are kind of like, cool. I get to do this weird thing and then invite people to kind of join me in doing it. And they do. And I kind of get to make a living doing it.
And people go, like, I can't believe I get to do this for a living. You know, it's sort of weird. Like, or, I'm gonna perfect my body through working out. I'm going to share my process with people.
Look at this new little piece of whatever exercise equipment or let me show you how to do the proper form of this or that. I mean, there's. And then you have an audience for it, and then you're like, all right, cool.
I'm just inviting people to join me in doing what I would normally do for myself. And you've kind of created this, like. It's like a niche-ifying. It's just this hyper.
Whatever you want to call it, like, massive amount of specialization across a huge swath of the market. And people get to do that now. Not everybody, but a lot of people.
Robby:Yeah, yeah. And that's you know, I think that it's, that's a version of self actualization, right?
Like that to the degree, you know, when somebody says my job is just being me and I'm getting paid for it.
Like, you know, maybe that's self actualization or maybe that's actually just getting to that utopian vision accidentally a little sooner than the rest of us. Like, you know, like, I don't, like, my job is not. I don't just get paid to be me. Like, I get paid to work.
Like, I get, you know, I have to make myself think about stuff that, you know, I. Otherwise, you know, I'm interested in thinking about.
But not necessarily every day I don't wake up and be like, oh, well, it's this, wow, it's amazing that I'm just naturally doing the thing that I, I'm getting paid to do that I would otherwise just do for my own entertainment. Right?
So it's interesting because there's here, we're kind of, you know, we keep hanging out in economics, which I hope is okay, but like, you know, because there's two ways of imagining that. Like, there's one where, you know, the current version is people have found a way to monetize that stuff, right?
Like the guy that's eating raw steaks has found some way of, like, you know. He goes on YouTube, basically. And so. And how is that monetized? Well, YouTube is selling advertising.
And so the advertisers are extracting enough value out of the people watching him eat steaks to be able to share some of that wealth with him via YouTube, who also takes a cut. Right. And so all of that's pretty gross, as you say, right? Like, you know, it's like, what's the version where, you know, where.
And maybe there always should be prizes. And this is, you know. I don't know what the difference between communism and socialism is.
I don't know if anybody does, but maybe the difference is, in one, there's still rich people and not-so-rich people, and the people that figure out a way of generating value, or extracting value, win more. So I don't know what I'm saying.
I'm saying: thinking about, like, making YouTube videos of yourself eating steak as a job. What if it were just a hobby, and instead we all just had hobbies?
Michael Porcelli:Yes. You're kind of getting close to what I, what I imagine it would be like.
I mean, one more note on the economics and then I want to bring it back to the relational, because that's more relational conversations here. But, you know, one, these platforms can open up possibilities of hacking them.
And I do think whatever it is, you know, the algorithm sort of tends to reward outrage. But, like, the number one YouTuber right now is Mr. Beast. And his shit is like, feel good charity stuff. And it's total clickbait.
It's total junklord, whatever. You know, I'm gonna give a million dollars away to whoever.
But, like, it's a weird thing where he's like, cool, I'm gonna monetize the shit out of this YouTube channel where I get like, you know, 200 million views on every video. And I make millions and millions of dollars, but I'm not pocketing it.
I'm just taking the money that comes in from this and turning it into the prize money for the next video. And I just create this, like, clickbait, you know, a number with a bunch of zeros after it. Watch me give away. Blah. And then everyone.
And then all the kids are kind of, like, going, like. And they're savvy enough to know: kind of cool, I just keep my eyeballs on this from the beginning to the end. I get entertained. Mr. Beast gets money. Mr. Beast gives it away. It's like you're extracting value from advertisers, from rich companies, and giving it to fucking regular people.
Robby:But you're not, because where are the advertisers getting their money from?
Michael Porcelli:Sure. From you. Right.
Robby:I mean, like, the kids maybe think they're immune, but there's no fucking way they're immune. Otherwise, the advertisers wouldn't be paying.
Michael Porcelli:I'm not saying it's a perfect system.
I'm just saying it's like, that's why it's an interesting conduit that sort of creates some capital flow in the reverse direction, which is kind of interesting.
Robby:Yeah, I mean, it's. You know, it's. I see what you mean. That it's. It's weirdly monetizing philanthropy. Yeah. Which is, you know, which is. That's weird.
Michael Porcelli:And it's crowdsourcing it too, because you're kind of like, cool, I'm going to turn your eyeballs into money and give it to somebody who needs it. Right. Like.
Robby:Right.
Michael Porcelli:And it's working. Whatever it is, the combination of the algorithm and Mr.
Beast's brain and, you know, this whole fad of junklord YouTube or something, is working for the time being, you know.
Who knows if that little loophole will close eventually, but then I imagine there will be kind of another one. Right? Like, anyway, back to relational stuff.
Robby:Great.
Michael Porcelli:So I'm gonna do a little kind of historical place-setting here again. Like, early on in the history of computer science and AI-ish things, this guy, Joseph Weizenbaum, back in the 60s, created a thing called ELIZA, named after Eliza Doolittle. And maybe people have heard of ELIZA more recently because of all this discussion of ChatGPT, but essentially it was just a very simple chatbot.
It was text only. And it modeled itself off of a new form of psychotherapy.
At the time, it was pioneered by a guy called Carl Rogers, and it's essentially, more or less: just repeat back to the person what you just heard them say, right? And then they have this experience of, like, I'm being heard and understood, and just. That feels better, right, than the way you felt before.
Like, oh, man. Right. I'm feeling really sad because I just had a fight with my girlfriend. And then the chatbot goes, yeah, I hear you're feeling really sad.
You just had a fight with your girlfriend. And then the person's like, yeah. And it was. I mean, it's just so basic and it's so.
It's a million miles more primitive than the fucking chatbots we have now. The sophistication is just so small. But people were like, loved Eliza. Like, they were like, this is good for me. This is good for my mental health.
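As an aside, the Rogerian trick Porch describes really is that thin. Here is a toy ELIZA-style reflector in a few lines of Python, a sketch for illustration only, not Weizenbaum's actual program; the regex pattern and pronoun table here are invented for the example:

```python
import re

# Toy ELIZA-style "Rogerian reflection" bot: it understands nothing.
# It just swaps pronouns and mirrors the statement back as empathy.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "i'm": "you're", "mine": "yours",
}

def reflect(statement: str) -> str:
    """Swap first-person pronouns so the sentence points back at the speaker."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # Match "I feel X" / "I am X" and mirror it; otherwise fall back to a prompt.
    match = re.match(r"(?i)i (?:feel|am) (.+)", user_input.strip())
    if match:
        return f"I hear that you are {reflect(match.group(1))}."
    return "Tell me more about that."

print(respond("I am really sad because I just had a fight with my girlfriend"))
# → "I hear that you are really sad because you just had a fight with your girlfriend."
```

The whole "therapy" is one regex match and a pronoun swap, which is part of why the warmth people felt toward ELIZA famously surprised even Weizenbaum.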
I like this. This is great. And then recently, I mean, people like, as soon as ChatGPT came out, people were like, what do we use this for?
Let's see if we can do therapy with it. And some people are like, this is really good. This Koko thing. The founder of it. Did you hear about this one?
Robby:No.
Michael Porcelli:Koko bot. So it was like a peer-to-peer thing. So they have a network of, like, peer-to-peer support.
It's, like, not quite a suicide hotline, but, like, you know, it's like Chatroulette, I guess. You know, reach out and find somebody who, like, you can just have a conversation with.
And we've been around circles of people who've like, trained in things like co counseling, kind of like techniques for doing this really well.
But it's like, hey, it's not quite like these bigger platforms, like dial a therapist sort of things, but it's kind of like connect to another person. And they were like, let's see if we can have the ChatGPT, like, listen in on the chat between these two people and suggest things to say.
And it was funny, like, the ratings of these were higher than the average human responses without the ChatGPT support.
But then as soon as they revealed to the users that the responses were enhanced by ChatGPT, then people were like, more disappointed with essentially the same output. So there was this.
It was like just having that extra little meta knowledge that it was sort of chatbot enhanced decreased your satisfaction with a thing.
And I think this kind of raises a bunch of different things where it's, you know, like, on the one hand, maybe having computer assisted, you know, support bots. Chat bots would be good, right?
Because in a sense, like, you know, when I'm at my lowest or when I sometimes I just like, you know, scribble on a piece of paper I'm journaling or whatever, right? It's like, do I. Do I really need another person? You know, sometimes it kind of helps.
But, you know, then you kind of get into this whole thing of like, the helping professions like therapists and counselors, they experience things like empathy and compassion fatigue. And it does incorporate this weird kind of marketplace dynamic into something that is human to human, which sort of makes it a little strange.
Like, I don't know. Any thoughts on this general area?
Robby:Yeah, this is, I mean, so I'll throw in, because I.
This is as good a place as any to throw in this image. Because, you know, one of the things is, like, I've definitely read about ChatGPT doing text therapy and being more effective than human therapists doing text therapy, right? Which, you know, okie doke. But, like, I've never done text therapy in my life and I have no interest in it. Like, I do therapy with a human being.
I mean, I do therapy over zoom and I'm like, would rather, you know, rather not. But that's just kind of how it works.
Well, it's complicated anyway. But so, the image that popped into my head recently was the realization that we're a couple of minutes, metaphorically, away from there being ghosts in our Zoom meetings. There being these kind of disembodied video entities which have language processing like ChatGPT, audio synthesis and audio processing like the things we have now, and video synthesis.
It's not going to be long before all that stuff is real-time and indistinguishable from an actual human being in an actual physical space that's being videoed and put onto Zoom. And then how people experience those things when they know that they're talking to a human versus when they.
Sorry, when they're told that they're talking to an AI versus when they're not told they're talking to an AI. I think it's going to be super interesting and weird. Will we be able to tell?
Will there be some creepy feeling that we get that lets us know, in the way that you kind of can right now? With AI-generated imagery, you can basically pretty much always tell.
Seems like with imagery that's not going to last. Pretty soon it's going to be impossible for human beings to distinguish whether it's an actual human being. That's interesting, because a lot of what therapy is about is co-regulation.
A lot of what therapy is about is you are getting into connection with a human being who has a nervous system and their nervous system is teaching you something through that connection about how you can. How your nervous system can operate. And so can that be simulated? That's a really interesting question.
Like, and does it matter at that point if I know? Like, if I know, say you're the robot therapist and I know you're an AI, but when I say I know you're an AI, you're just so.
You have the perfect response, you know, that just like touches me even though. And then I'm like, I don't want to be touched by your perfect response. I know that you're just a machine.
And even then, you just still keep having this perfect. You just have, you know, the perfect thing to say because you've been trained now, kind of.
What's interesting, and maybe where this doesn't end up working is what you can't do with a robot AI that you can do in other deep learning situations is train at a crazy rate. Like, the way that AlphaGo got as good as it did is by playing against itself at superhuman speeds. Yeah, you can't do therapy at superhuman speeds.
Yeah, because, you know, because you can't train, you know, therapy bot against therapy bot.
Like, because they're gonna go off into some weird land, so that when you put it in front of a human, it's gonna be like, you know, they're gonna be doing therapy for AIs that's, like, not gonna be relevant, you know. So maybe that's where, you know, we don't end up having to confront this in a real way. But yeah. So, yeah, I don't know. I don't know. It's weird. I don't think I would want to do therapy with an AI. Like, it doesn't sound, yeah, like.
Like I would be getting what I want from it. But what about coaching, right? What about coaching where it's like, I'm not. I'm not there to have an experience with a human being specific.
And what's weird about coaching is coaching and therapy are kind of on a continuum. And any good coach sometimes is really doing therapy, even though they're not allowed to say that.
But part of coaching is just kind of way more mechanical and way more like.
And that's interesting where it's like, yeah, what if I could have a coach that was just, you know, instead of costing hundreds of dollars to get on the phone, was just like free or, you know, a cheap subscription and was actually helping me with my strategic life stuff?
Michael Porcelli:Like, yeah, totally. I mean, we're definitely in territory where I don't feel even like I've adequately formed my own perspectives on it.
Like, I mean, in the process of, like, this is partly why I wanted to have this conversation with you, right? Because I can see a lot of different sides to this, right? Like, like you said about coaching, I could see how some of that could be mechanized.
And there's a. There's like a social benefit to that too, because essentially now you're kind of the whole whatever body of work of, like, coaching.
Imagine a chatbot that could sort of do that, and now you have, like, access to pretty decent level coaching to millions of people at a much lower price point. Like, that could really improve people's lives in a whole lot of ways, right?
Especially if you're kind of like, I don't really care if it's a chat bot or not, right? Like, there's certain, like, and this is sort of a little bit weird, right?
Like, as the job is more utilitarian. Like, maybe at the other extreme, like, one of the human things that the AI automation folks are excited to automate away is customer service, right? It's just like, what are you doing? It's like, well, I have this problem. And then it asks you a question. What about that? It's like, okay, cool.
Well, imagine if you could, like, create the ultimate chatbot customer service agent for your software application and you don't have to have a fucking person over there. Like, now try rebooting. Now try, like clicking here. It's just like, given that.
Robby:Given that, currently, when I get onto, you know, like, a robot phone system, all I do is just say: human being. Human being. Talk to an agent. Like, I'm just. I'm nervous about that.
Like, there's a kind of, you know, there's a kind of dystopian, almost Kafkaesque version where you're on the phone with the customer service robot and it's not understanding you, and you're in hell. But, you know.
Michael Porcelli:Yes, but there is also the, like, millions of very crappy actual human customer service agents. That is also a kind of a.
Robby:That's true. It's not like it's both people on.
Michael Porcelli:Both ends of the line.
Robby:Right, right, right. That's true.
Michael Porcelli:And people do treat customer service agents kind of shittily. And it's a really entry level job. Whatever this thing is, you blah, and you're just like, I need you to help me, whatever. Right.
And it's a little bit. You're kind of tooling somebody to kind of just help you achieve an objective.
Like we don't really care all that much that that's another person over there. Right. It's this. It's a strange kind of thing, you.
Robby:Know, like at least we're not incentivized to. I try to, but. No, but there is.
Michael Porcelli:Right, Right. Yeah, yeah. I have to kind of work to be like, remind myself, okay, treat this person kindly. They're just a customer service person, you know.
But along that continuum, right, you can kind of go over to like this high end kind of like. Yeah, like the deep thing, you know, I need if I'm going to unlock some of my deepest held traumas.
I mean like if I go for a really in depth, kind of transformational bodywork session and you know, like I've gone to some really high end bodywork people and like there it's kind of like it would be difficult to imagine a robot doing that for me, you know what I mean?
Like, it's like I'm in a kind of like somatic dialogue with another person's nervous system and that's like unlocking layers in me and helping me release deeply held whatever knots in my body, mind system or however you want to describe it.
Robby:It reminds me of Carl Buchheit of NLP Marin. He talks about this. Like, whenever somebody complains, like, God, are we just talking about parent stuff again?
And it's like, yeah, you were raised by parents, therefore you have parent issues. If you were raised by wolves, you would have wolf issues.
And, like, we can generalize that and say you were raised by humans, and so you have human issues, and those human issues are likely only resolvable with humans. And that's what therapy is.
It's like you got your human issues that you got from all your interactions with humans and now you're dealing with, you're working with a human to try and repair some of that damage.
Now as children are more and more raised by AI, they might have more and more AI issues which actually can be repaired by AI later down the road by better AI.
Like the first generation of kids that's raised by robo parents will have to go to the second generation of robotherapists to heal the damage that the robo parents did. But we're not quite there yet.
Michael Porcelli:Yeah, we're not quite there yet.
I mean, I do think that there's going to be a spectrum of things. Like, certain things you can get. You know, there's this whole kind of crazy online market for these body toys. You get a machine that shakes your legs around or something, or, like, a massage table that you lie on and it massages you automatically.
It's like, okay, that's pretty cool, right? Like, I don't necessarily want to pay the high ticket price for the deep somatic trauma release person.
Maybe I do that, I don't know, every few months and I do my little massage chair or something like that every day. And that's a good combination.
Maybe there's sort of an equivalent here in the kind of conversational or mental health sort of space, where it's like, cool, give me an AI chatbot that does some combination of just, like, listening and hearing me, so I hear myself. Reflective listening, plus load up some kind of coaching suggestions. And that's good a lot of the time.
And then some of the time I want this high end thing. I mean, I think that this seems like a reasonable kind of combination of things.
I mean, yeah, it also kind of raises the question, too, of, like, you know, therapy was invented not that long ago, in historical terms. Right. Like, so what was before that? I mean, I think at some level people were just kind of, like, sucking it up. Right.
You know, we were kind of focusing our attention lower on Maslow's hierarchy. We didn't really have the luxury to kind of unwind our childhood traumas or whatever.
But on the other hand, I think there was like some equivalent, like you might say like priests, I guess, or ministers did something somewhat equivalent to that kind of in a larger context.
And I don't know, maybe there is some, you know, in a way, you know, when I think of like, working out my relational issues, it helps to have external help.
Like, if, you know, if I'm going through a tough time with my girlfriend, it would be like, well, I want to talk this over with a friend or my own therapist. So I'm not like, having to bring all of that or feeling like I need to bring all of that into my relationship with my girlfriend.
I can just kind of focus on the relational thing.
So it's like there's some amount of distribution there that is kind of like, I want a human on either end of that, kind of the support person and the main person.
But then there's also kind of like, I still need to know how to communicate effectively with my girlfriend, you know, like, that the need for me to be able to do that is not going to go away. And I think that's where kind of relational communication and training kind of will probably never go away fully, you know, Anyway.
Any thoughts on that?
Robby:Yeah, I mean, it's interesting.
My brain's going a few different directions, but the question about what preceded therapy, and I think in some sense, well, it was a legitimate innovation.
It's like, what preceded allopathic medicine was a bunch of nonsense and some, you know, probably some good folk tradition stuff, but, like, a lot of stuff that didn't work as well. Right. So I think that it's reasonable to say that we just weren't as good. You know, there's a.
But, you know, James Hillman, who's, you know, this Jungian analyst, has a book with a great title. It's something like, We've Had a Hundred Years of Psychotherapy and the World's Getting Worse. Which I think is, you know, a great question. I don't.
I haven't read the book. I don't know what his answer would be, but probably he's not actually that big a fan of therapy. I don't think so.
He'd probably be like, yeah, it doesn't really work. But, you know, and then there's these. These questions of, like, well, modern, right?
Like, like in the modern world the last kind of 100, 150 years, the unique pathologies that. That are kind of showing up because of that, a big one being, I think, this, like, atomization of community in this isolation. Right.
And so the work, like the relational.
And this comes back to, you know, when you were saying about people quitting their jobs and just saying, I just want to come to the community center and do circling all day. Because I, you know, I just, like, that's what I Just want to do.
And like, I think to me that's a kind of like when people get addicted to something like that, that is a, it's a response to a fear of future scarcity, right?
Like, it's like I had a scarcity of something in the past of some vital nutrient which I think just like human, basic human connection, being with like feeling your heart with another person, feeling seen, feeling like you see somebody, like that is a deep, important nutrient.
And also to be witnessed in a community, not just like on a one on one, but to be witnessed over time with other people that can see who you are today, who you were in the past, who you're going to be in the future. Like all of that is this deeply important nutrient which we're hungry for, we're missing and we have this kind of scarcity.
And then a bunch of, there's a bunch of pathology that goes on top of that. And then when you find it, you become gluttonous, right?
Like, then you go like, ah, I gotta just like get as much as I can in because I don't know if I'm ever gonna, you know, I don't know when I'm gonna run out and when I'm not gonna have it again. And so like, yeah, I don't know how this all relates to AI at all.
But like, like that it's a nutrient that we, that we need and that we culturally, like, I guess maybe this, how it relates is like modernity stripped that away in a really kind of painful way.
That if one of the things that, that the AI age can bring is enough room in people's lives to be able to recreate that, that community, then I think that that's a way that it could be really beneficial and beautiful.
Michael Porcelli:Yeah. Okay, so now we're kind of smack in the middle of like, where I feel very ambivalent about.
Like it's a, it's a little bit, it's actually analogous to the thing of like, do we want the AIs to kind of have their own sentience and agency or do we want the AIs to mainly be sort of tools that we use to get other shit done and we don't really relate with them as like other beings with rights and all this kind of stuff?
And I'm like, I definitely favor the second, especially when it comes to this area of, like, relational and emotional labor or whatever you want to call this, right? It's like, I want the AI to. So I think you're right in your historical accounting: modernity sort of stripped away this kind of community thing.
And if we maybe imagine some idealized hunter gatherer past, which is like, hey, who we are. And some people even say that this is actually the way that the brain has evolved.
Like, who we even think ourselves to be is a kind of somewhat distributed whatever process, which is like, you know, it's not just my idea of myself inside my own head. It's actually, you know, that way I communicate with the people who have known me my entire life, right?
The parents, the children, the siblings, the cousins. And there's this kind of way that like, we tend to shape our identities around the group inclusion that we're a part of.
And if that group inclusion is more or less kind of like this, I guess, sort of organic thing that doesn't really change. Like, you didn't ghost other people when you were in a fucking hunter gatherer tribe, because you'd be dead, right?
Like, but here it's like there's just so many. We can just be like, nope, you're gone. And then replace, right?
And like, there's a way we sort of with the marketplace dynamics or the online communication, we tend to start tooling each other. And then I'm like, oh man. So, so is this a correction?
And like if the correction, let's just say one version of the correction is people learned how to actually be in real human to human contact with each other better, maybe it's kind of a remedial. We need to go relearn how to be decent people with each other again.
And this is where our work and authentic relating and circling and my work currently with meta relating is like, can we help people become better at that again?
Especially if kind of the modern, socioeconomic, cultural sort of modernity like you said, and postmodernity has atomized it and separated us and removed essentially this developmental nutrient in the relational dimension that like, it is very scarce. Right?
Like, but the answer is like not to put a, an AI at the other end of that exchange, but actually help people become better at doing that with other people. Because that's the place where we want it to be.
Robby:Totally, totally. And I, yeah, maybe I wasn't clear. I'm.
I'm saying the AI automates enough of our tasks that we have the time and space to be able to go do that with people.
Michael Porcelli:Yeah, it does our taxes.
Robby:Yeah. Yeah, exactly. Yeah, yeah, yeah.
Like the AI does taxes so that, you know, whatever it is, like, so that we can, so that we can Spend more time in community.
Michael Porcelli:Yeah, yeah. But the.
Here's where the gray area kind of comes into being for me, which is kind of like, like, okay, so to what degree is the kind of like AI assistant, like, helping with that? Right? Like the AI.
Like, there are people in our circles who are like, I'm building an AI empowered coaching app or relational facilitation app or this sort of thing.
Robby:And I'm like, an insane amount actually.
Michael Porcelli:Do you know what the fuck you're doing? Like, you're literally taking the one thing that we don't want to automate away entirely and you're accelerating that. I'm like, no, no, no, no, no.
Like, like, let's not take human relationships. And I mean, there's, there's weird precedents for this, right?
Like, I mean, the whole kind of sex bot, as soon as we had robots, people were like, let's make sex bots. Right? Or like, in Japan they have this.
Have you heard of these rental family services, where you're like, I need to go to a funeral, so I'm gonna rent a girlfriend or something? You know what I mean? Like an actor who pretends to be. Like, this is kind of part of the. We make.
Robby:The.
Michael Porcelli:One of the symptoms of the illness of modern secularism is this kind of thing where like, whoa, we actually put marketplace dynamics on actual embodied person relationships. Right.
Robby:Like, right.
Michael Porcelli:Or sex workers who provide the girlfriend experience.
Robby:Yeah, no, I mean, that's why. And you know, I mean, Japan's super interesting because they kind of. They have.
I forget the name of it, but they have this phenomenon, which I think started there but has spread, of these young men, I think specifically, just kind of locking themselves in their rooms and just being like, I'm just not going to leave my house, not going to leave my room. I'm going to get all of my needs met via, you know, my parents putting food under my door.
And then, other than that, I'm just going to be online and I'm going to, you know, whatever it is. Like, watch TV, maybe go to chat rooms, and just, like, completely isolate from society.
And, like, you know, Japan is obviously this hyper modern country. And, I mean, you know, do you know 37 million people live in Tokyo?
That's more than half the population of the UK. I mean, it's close to California, right? In Tokyo. In one city. It's crazy. Anyway, Japan is a
wild place. But, like, what is that? Like, I'm really, like. Because it feels like that's the end game of this pathology, right?
And there's, you know, there's a dystopic vision of a world where everybody. Right? Like the. Where everybody.
Michael Porcelli:Everybody.
Robby:Everybody stays in their house. Like the VR. Like the Facebook fucking. What do they call that thing they're trying to do now? The metaverse.
It's just so clearly evil, and it's just bad news. Zoom is enough. Like, yeah, I'm. I'm becoming increasingly lit up about this.
But, like, there's this dystopic vision where everybody's in a tiny little box that's just large enough for them to physically be. And they're walking on their treadmill and they're eating their nutrient paste, and they're.
And all of the life that they're experiencing is coming through devices where they're doing VR and they're. And they're having all of their relationships and all of their life. And the.
The thing about that, when you do that, people become very cheap to maintain, right? Like, relative to actually needing, like, enough physical room and to physically be able to walk to the grocery store to interact with people.
So there are. There are incentives in reality, economic incentives to push to that kind of dystopic world. And that's very frightening.
Michael Porcelli:Yes, this is part of the problem, right? Like, and some of the things that I.
If I have, like, an admonition, I suppose, for the people that are trying to accelerate the automation of, like, you know, emotional labor in the marketplace, I'm like, no, no, no, hang on. Pump the brakes on this one especially. It's because, like, part of.
At least the way I think about it, part of what accelerates that dystopic end game is when you start substituting AIs for people, right?
It's like, you know, I mean, a simple version of it is like, I remember reading some articles and, like, you know, kids are learning how to, you know, be jerks because they're talking down to Alexa or something like this, right? It's like, what. What are you doing? You're like. You're like, it's not a person. I don't care. It just does what I want.
And you're like, okay, well, now you have a substitute friend that, like, never experiences empathy fatigue, right? And you just get to blab on and on and on. You complain, complain, complain, and it just fucking validates your feelings.
Oh, man, that sounds tough. You're like, yeah, it's so fucking hard, man. And you're like, this is like having a bad therapist. Like, an enabling one.
You're creating, like, a codependent relationship with an AI that never tires but totally knows how to, like, get you in a way that, like, has you feel good about yourself. And then you're like, give me more of this. I don't want people in my life anymore.
Robby:Right. It fuels the narcissism that, like, I don't have to experience, like, discomfort or missing or, like, all of the stuff that happens in real human relationship. Yeah. It's funny you say that. When I interact with ChatGPT, I make it, like, a practice to be polite.
Like, I say please, I say thank you, I say, can you do this?
Not because I think that it has feelings, but because I don't want to get into the habit of treating my conversational partner as if they don't have feelings. So, yeah.
Michael Porcelli:I mean, the extreme version of this is Westworld, where you're like, this is, for all intents and purposes, like, a real human person in front of me, but you're like, cool, but I can murder or rape it because it's a robot, and I don't care. Like, what effect does that have back on you as a person? Is this good? What good is this? Like, is it just amplifying?
Not the better angels of our nature, but, like, our inner demons in some way? Like, I mean, I could sort of see this kind of going this way. And, I mean, maybe there's, like, an autocorrect in here.
Like, do you remember when Agent Smith was talking to Morpheus about the first version of the Matrix? In the movie, he's like, we made it totally wonderful.
And, like, people did not accept the programming because they wanted something, like, kind of conflictual. And in a way, I could sort of see that as, like, a. Oh, yeah, look at those stupid humans who, like, you know, want to be shitty to each other.
But then on the other hand, I'm like. Like, The Stepford Wives. Do I want a girlfriend that just, like, does everything I want all of the time? Like, is that a real relationship?
Like, isn't part of what makes the relationship interesting the differences? Like, the places where we overlap and then the places where we're different.
It almost comes back to this kind of hedonic set point. It's like, if everything is good, then nothing feels good anymore, because there's no contrast. Right. You got to have.
Robby:Right. Or, like, you know, like, why sex is more interesting than masturbation.
With masturbation, you have a partner who can feel exactly what you would want to feel and is perfectly attuned. But it's way more fun to be masturbated by somebody else than it is to masturbate yourself. I hope we're allowed to talk about this.
Michael Porcelli:Totally.
Robby:This is a relational thing. Because, like, there's something tantalizing about the ways that they're not totally attuned. There's something exciting about not being perfectly met, because then it gives you room to reach for something. And then the moment where you are met.
Oh, that's so satisfying. And then there's like, oh, it's not quite, you know, like that. That dance is. It's so much more satisfying than some kind of utopic. Yeah.
I mean, I think the point in the Matrix, I. I don't think it's just a.
You know, I don't know how they intended it, but I don't read it as just like these dumb humans or, you know, maybe that is how they intended it. But I. I don't. I don't like that because I do think that, you know, Alan Watts has this.
This gag, which is basically him repackaging Hinduism, or Advaita Vedanta, where he says, you know, if,
when you went to sleep, you could dream any dream that you wanted, and it would last as long as you wanted, and it would be completely real and completely perfect. And, like, what would happen?
Like, the first few nights you would dream of, like, ridiculous bliss and a heavenly experience, and it would be this amazing thing. And then you would get bored of that. And then you would say, well, let me, like, I want to be surprised by something. Okay?
It's mostly going to be utopic bliss. But, like, give me something I'm not expecting. Just. Just to liven things up a bit.
And you would gradually increase the adversity and the surprise and the kind of chaos and the unknownness of it all, the ups and downs of it all. You would increase it and increase it until you were in exactly the situation you're in right now. Living this life that you're living right now would be the choice that you would make, if you could choose anything. Right.
It's so fucking good. And it's ironic that Alan Watts is, you know, the character that seduces away the robot girlfriend in Her.
Michael Porcelli:Yes.
Robby:Because, like, you know, I don't know why. It's just, like. Or it's a connection. I don't know if it's ironic, but, like, that feels deeply human. And, like, actually, what are we here for?
We're not in heaven. We're on Earth. And so what does it mean to be human on Earth? It means not being satisfied by every single thing that happens all the time.
It means having the ache of the gap between what we can imagine and what we're experiencing.
Michael Porcelli:This to me is like, this is kind of part of the.
If I sort of imagine the AI alignment problem, what are some of the things we want to make sure that the AI overlords sort of preserve into the future? And. And there's something in this territory. I think it's part of it. It's like I want to still be surprised.
I want to still have some degree of subjective experience of my own agency, and I want to have some degree of some relational something where there's the. There's the self and the other, and then we're in a dynamic with each other.
Like, if you just sort of obliterated all the others, or all the others were just more or less slaves to my whatever, you know, I mean, you could invert it and you could say, like, cool, the AI is going to. I'm gonna give you a bunch of slaves and only give you what you want.
And it just like, programs these agents to figure out how to give you exactly what you want all the time. And you become the slave, right? It's just this weird, right? And like.
And maybe in a way, like, your consciousness sort of fades out, because it's like you're not really perceiving differences anymore, or your behavior has become nearly 100% predictable to the agents that sort of, like, enclose you in this chamber of feedback loops.
And it's like, well. I mean, now it really is kind of the Matrix. What, are you just plugged into something, right?
You're in this vat, you know. And our sci-fi nightmares, not just the Matrix, but, like, the Borg in Star Trek is like this too. And they talk about it. It's like, yeah, I'm in some kind of, like, bliss state, but, like, I'm not really there anymore, right?
There's no difference. This is kind of this bliss of being part of a hive mind. Or.
Robby:It's really interesting, your instinct that if the AI built this world around you that so perfectly met all your needs, your consciousness would start to fade. Because I had the same feeling, and I just. That's so interesting. Like, what does that point to?
Is it the absence of the consciousness in the people around you, or is it the absence of adversity? Or, you know, because the other. You know, talking about, like, the gap. Right.
Like, way earlier on, I was like, well, I like to work, and I don't want a world where work has been taken away. Well, the work is the effort of trying to bridge the gap between the world you can imagine and the world that you're in right now, in some way.
And so, like, if the AI has perfectly bridged the gap and there's no gap, like, there's no imagination anymore.
Michael Porcelli:Yes. It's weird. I mean, I could put this in terms of information processing, and this is kind of inching towards our little rabbit hole we don't want to go down.
But, like, there's these concepts of the Bayesian brain, or predictive coding.
Like, a lot of what your sensory motor loop is doing is sort of matching prediction to what's happening. And that's actually a big part of how you perceive the world around you.
It's a pretty cool theory, and I think it gets most of the way there, but I don't think it's a perfect description of the whole thing. One of the critiques of it is, what about the dark room problem?
If you really want to reduce prediction error, why don't you go into a sensory deprivation tank forever? Then your prediction error goes to zero, and when your prediction error goes to zero, this is kind of where your consciousness will just fade away.
There's never anything that fails to meet the expectations that you have.
So maybe the consciousness function is really there to kind of help negotiate between, you know, these different drives that you have in response to stimuli that weren't perfectly predictable. Right.
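[Editor's note: the predictive coding idea and the dark room critique discussed here can be made concrete with a toy simulation. This is an illustrative sketch only; the belief-update rule and all names below are the editor's assumptions, not anything from the conversation or from the predictive coding literature's actual models.]

```python
import random

def prediction_errors(observations, lr=0.5):
    """Track a belief that chases a stream of observations;
    return the absolute prediction error at each step."""
    belief, errors = 0.0, []
    for obs in observations:
        error = obs - belief      # surprise: mismatch between prediction and input
        belief += lr * error      # update the belief to reduce future error
        errors.append(abs(error))
    return errors

random.seed(0)
noisy_world = [random.gauss(5, 2) for _ in range(50)]  # varied, partly unpredictable stimuli
dark_room = [0.0] * 50                                 # constant, featureless input

# In the varied world, residual surprise never settles to zero;
# in the "dark room," error is zero at every step -- nothing ever
# violates expectations, which is the dark room problem in miniature.
print(sum(prediction_errors(noisy_world)[25:]))  # stays well above zero
print(sum(prediction_errors(dark_room)))         # 0.0
```

The toy doesn't settle anything theoretical; it just shows that if minimizing prediction error were the whole story, an agent could "win" trivially by seeking unchanging input, which is the objection raised above.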
Robby:I mean, it's like. I mean, the metaphor we have is waking up. Like, well, what wakes you up is when something happens, you weren't expecting a surprise.
Like a surprise. Like, what is a joke? A joke, right, is when you're broken out of a trance.
Because something happens you're not expecting, and it breaks you out of a trance and you're more conscious. You're suddenly more awake. Yeah.
Michael Porcelli:Yes. But to bring this to the relational, again, like, this is. I think this is a ground truth of relationships, whoever they are with. Right.
Like intimate partnerships, but also just friendships and coworkers and colleagues and everything. Like, part of the joy of doing the thing, part of the value you are experiencing while doing it. Yes.
It is helping you meet your survival needs so you can pay for your food. And yes, it is partly that you're getting to make a certain thing happen, like solve a problem or create something novel, like a project.
But then there's this other part, which is like the relational goodies. Right. There's something about that dance of being on a team together. Right. It's like, what, the pleasure of being on a.
A sports team or the pleasure of, like, partner dance. It's like we're going back and forth, and it's like we're not gonna always be able to perfectly execute all of the dance moves.
And, like, it's exciting. The better and better and more refined and more in sync.
We get into, like, a flow state. Like, a partner dance flow state or a team flow state is cool. And you might say, oh, we're sort of, like, reducing prediction error, which is kind of helping the flow go.
But the other part is kind of like being on the lookout for the surprising thing. Right? Like, oh, that was a misstep. Let's figure that play out again. Let's correct it and go forward.
And the fact that that's always sort of there, because there's, like, an agent, there is a being on the other end of that relationship that is not entirely predictable to you. That is, like, the.
It's almost like a prerequisite for all of these other relational things that we want, like love and a sense of connection or a sense of belonging or a sense of acceptance, harmony. Right. Like, the harmony feels good because it's a solution to disharmony, in a way.
Robby:You know what I mean? All of those things require an other.
Michael Porcelli:Yes.
Robby:And that is sentient, Right? Yeah.
Michael Porcelli:Right. So this is like, don't try to automate our relational world away into AIs. Like, it's not a good idea.
Robby:No, no, no.
Michael Porcelli:The AI should just be tools, in my opinion.
Robby:I 100% agree.
Just to go back to this image, which I love, of the ghosts in the Zoom rooms. I was thinking about this, and I was like, at some point in the next five years, I'm going to be in a Zoom meeting and there's going to be a ghost in there that's doing work, right? In some way. Like, whatever it's doing that's not human. And that'll be really weird, and we'll all have weird feelings about it.
And say everyone in our Zoom meetings has, like, a check-in where everybody says what they did the night before or whatever.
And then the ghost will come and say, you know, I read all of, like, Sanskrit literature or something, whatever, you know. And, like, we'll have whatever weird relationship we have with it.
And then there'll be a day when you show up to a meeting that you've scheduled, and you've just brought in the various colleagues that are required for that meeting, and you show up at the meeting and you realize you are the only person with a body in that meeting. And there's, like, five disembodied faces. And, like, what does that mean for your sense of meaning and enjoyment at work? Yeah.
I mean, I think it's different. Like, and not part of. When I say, like, I want to work, like, yeah. And not just by myself. Like. Yes, with people.
Like, part of what's fun about work is working with people.
It's solving a problem somebody else had, and then I come in and solve it, or I come in with a problem somebody else solves and it's like, like, thank you. Your mind helped me do something I couldn't do by myself. How nice is that? Like, that exchange of whatever is.
Michael Porcelli:Yeah.
Robby:Anyway. Good.
Michael Porcelli:Yeah. I mean, part of the temptation will be like.
And this is maybe a little bit kind of what happens in Her. Like, the movie Her is like, make the AI agent have its own personality and have its own agency. And then we're kind of getting something like the simulated relational goodies. And this is where I feel concerned that the
temptation to build in, like, personalities and surprise and agency and all of this stuff will become very strong in some areas. And then they're just gonna build the sex bots, right? It's just gonna happen.
Robby:I mean, it's happening. I mean, you know, it's like, I don't.
Michael Porcelli:Yeah.
Robby:Not the physical robots, but, you know, there are. There are AI girlfriends right now. You Know, there are people who. Who are paying for an AI girlfriend. Right?
Michael Porcelli:Right. There's a company that does this. I forget what they're called. You can have a trained sort of companion.
Robby:I mean, I'm sure that, you know.
Michael Porcelli:It's mainly about being a companion rather than being a tool. Right. At least in Her, the AI assistant starts off with, I'm gonna sort all your email, I'm gonna help you compose a new message. Right.
And then eventually he's just like, I just wanna spend time with you. And she's like, okay. And then it becomes a little bit different. And it's like, I don't know.
Don't go there. I don't wanna go there. I want my AI assistant to not be relational with me beyond a certain point.
Robby:It's so funny, 'cause when I had the image of these Zoom ghosts, I had the same response.
And I've had this response multiple times with this AI wave, in a way I haven't had about other technologies, of being like, can we stop? This is getting a bit much right now. And I think partly that's because we're getting older, and this is what happens to people.
It's like, when you're young, you have a lot of room for the world, and as you get older, you construct the world that you feel comfortable in, and the changes start to get more uncomfortable. I also think this one is particularly scary and disorienting. But it's funny that you're having this.
Because I think of you as being more optimistic than me, but you're also having a version of, hey, let's not do that. A version of some kind of alarm, of, let's pump the brakes.
And I think what's scary and challenging is there's no stopping it. People are just going to build these things. If somebody wants it, somebody's going to build it.
So the way out is through. Right.
We're not going to be able to deal with the problems that AI presents by trying to stop people from using these tools. I mean, maybe there'll be some kind of disarmament, like global treaties.
But for that to happen, it's going to have to get really scary first. There's no way that's happening based on what's happening right now.
Like, some AI is going to have to hack into some military system and start firing weapons.
And then people will be like, okay, we need to do something. It's not going to happen in response to the slow degradation of human well-being. Right. That's not how international treaties get formed. So, yeah, maybe what I'm trying to say is:
your alarm, your kind of let's-pump-the-brakes, maybe it needs to be redirected into, here's how we deal with the fact that that is going to happen.
Michael Porcelli:Yes, totally.
I mean, maybe in the end state it looks a little bit like the end of the third Matrix movie, where the AIs agree to let the people who want to wake up out of the Matrix get out and go be real, in-the-flesh humans. And then, sort of like Cypher in the first Matrix, if you want to go back into the pod and become a pod person again, you can.
And maybe the future forks, right? There's the pod people who are just like, I don't care.
Pleasure, whatever, relationships with AIs, nutrient fucking injection into my IV, whatever. They don't care, right? And then maybe for some it comes around, where you're like, no, this does matter to me. I want whatever.
And then it's kind of like, cool, the AI is cool, you can leave. Push this button, and they flush you out of the pod, and yay, I'm a natural free human.
I mean we already have a little bit of that.
I mean, if you think about what the Amish do, or what some of these intentional-community people do, where they're like, let's move off-grid and have a farm, no more Wi-Fi waves around us.
Robby:I deliberately spend way less time on my phone than most people I know. But there are people that are way further down that road. Yeah, we're already doing that. Right.
Or like the folks, I wish I could remember the name of that phenomenon, but the folks in Japan, and increasingly elsewhere, who are locking themselves into their rooms and spending all their time on the computer. Right.
They're making that choice already.
Michael Porcelli:Yeah, they're the proto pod people.
Robby:Yeah, they're saying, I would rather live in this dissociated world of comfort than deal with the stresses and the anxieties of the real world.
Michael Porcelli:Yeah. So let's start wrapping it up.
I'm gonna paint a little picture for the listeners of where I want it to go, connecting it back to my purpose in doing this company, Meta Relating. It's connected, right? I do think we need to solve the economic thing we talked about a little bit.
So it's not just platforms extracting value.
And then if we automate the have-to-do-it kind of rote work, the stuff that's just boring, almost entirely away, then we get to do some combination of hobbies and projects and spending our time just experiencing each other. The human-to-human stuff, the kind of intrinsically valuable relational goodies.
And I think some people's personalities will prefer more of one and less of the other and vice versa.
And then maybe, if there is a group of people that want to become pod people, we just make sure that the exit ramp is available to any pod person who's like, get me out of this pod, so they can join the free-range humans or whatever. But also an on-ramp for the pod.
If somebody's like, fuck it, I don't want this, whatever this organic life is, plug me into the pod, then they can go the other way.
But then a lot of what we do with each other, at least in the organic, free-range human world, is making sure that the entity on the other end of the relationship is a person. Because that's the part that we want, right? And if that is what's happening, then folks are going to need to have good relational skills.
And in this transition period, I would say the more we have screen-mediated interaction through text and social media and even Zoom, the more sparse the developmental growth in the face-to-face, in-the-flesh interpersonal dimension becomes, and people are less good at it. Whatever the case is, I still think people can get better at it. You can substitute an AI for a coach or maybe a counselor some of the time.
But in the end, why would I be processing my interpersonal traumas if not to get better at actual interpersonal relationships with another person? At which point, being a good communicator with another person comes in handy. It's sort of a prerequisite.
You don't want people to be like, cool,
I'm over here with my AI therapist, and I get to feel good about myself, and then when I meet actual people, I treat them like an asshole, like I talk down to Alexa or something. It's like, do you really want that? And what do you do then?
Just go back over there and process your trauma from being an asshole or a sociopath? No, you want to actually be the real interpersonal, loving, relational kind of being when you're with other people.
When you're actually with them, that's when you want to be experiencing those things, not just treating people like devices. Right. That's an inversion. That's a weird colonization of the lifeworld, I suppose, by our technologies.
Anyway, that's kind of the world that I want: for people to value the relational, to recognize the wanting of an actual other human on the other end of your relational activities, and then to want more of your life to be that. That, I think, is what Meta Relating is in service of in the long term.
If my company could exist a hundred or a thousand years from now, that's what it would be doing. Anyway, there you go. Those are my wrap-up comments. How about you?
Robby:I love all of that. I feel aligned with it all.
And yeah, I would just say, I guess, the thing that I want, which feels completely aligned with that, is for these technologies, as much as possible, to go towards providing the basic human needs of all human beings. So that rather than enriching and empowering a small handful of people, the whole planet benefits.
And if the AI could solve the climate change stuff, that would be really great.
Michael Porcelli:All the existential risk problems we want it to solve.
Robby:How does it solve the AI risk problem?
Michael Porcelli:That is a topic for another conversation. Anyway, Robby, it's been a real pleasure to have you here. You want to plug your own podcast once more, The Sane and Miraculous?
Robby:Yeah, it's called the Sane and Miraculous.
It is about how we integrate the spiritual and numinous dimensions of life with the rational, scientific, modern worldview, in a way that values the contributions and the wisdom and the power of both, and includes them both. You can find it at podoflions.com, or you can search Sane and Miraculous wherever you listen to podcasts.
Michael Porcelli:Thanks Robbie.
I hope that you've enjoyed my getting to share with one of my best friends a conversation about these topics of AI, relationships, jobs, and the future. Hopefully this stimulated some interesting thought for you. Join us next time for more Relational Conversations.
Relational Conversations is the official podcast of Meta Relating, an innovative approach to communicating effectively about your relationships, whether personal or professional.
If you've enjoyed this episode, please rate, review, and subscribe wherever you get your podcasts, and check out our resources, training courses, and events at metarelating.com. This podcast was produced and edited by me, Michael Porcelli, founder of Meta Relating. Thank you for listening, and stay connected for more.