Welcome to another episode of the Pain Free Living podcast hosted by Bob Allen, osteopath, and Clare Elsby, therapy coach.
This is part two of our series exploring the use of AI in healthcare and therapy, with this episode focusing on where caution, context, and human judgement still matter most.
You’ll find out why we both actively use AI, while remaining clear about its limitations. Clare introduces her practical 20–60–20 rule, which is about how you split your time between framing the question well, letting the AI do its work, and sense-checking the answer.
Bob expands on this from a clinical perspective, explaining why AI without context can produce confident but misleading health advice. Unlike search engines, conversational tools such as ChatGPT aim to deliver a single, polished answer, but the vaguer the question, the less reliable the answer.
We also explore therapy chatbots and mental health AI. These tools can feel reassuring and accessible, but they often rely on agreement and validation rather than the gentle challenge and emotional nuance that therapy coaches like Clare use every day.
We discuss AI safety in mental health, highlighting why safeguarding and clear boundaries are essential. While platforms are improving safety features, AI is not a replacement for qualified care.
The key message? AI can be brilliant as a supportive tool, not a decision-maker. Don’t worry if this feels complex; awareness is the first step to getting the most out of AI.
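As a taster, here is a rough sketch of what the 20–60–20 rule can look like in practice. The prompts and checklist below are our own invented examples, shown as a small Python snippet purely for illustration; they are not medical advice and not a transcript of any real AI conversation.

```python
# An illustrative sketch of the 20-60-20 rule applied to a health question.
# The prompt wording and checklist are invented examples, not medical guidance.

# First 20% of your time: frame the question with context, not just symptoms.
vague_prompt = "I have knee pain. What could it be?"

context_rich_prompt = (
    "I'm a 45-year-old recreational runner. Two days ago I fell and banged "
    "my left knee. It's now red, swollen, and I can't fully bend it. "
    "I have no previous knee problems. What are the likely explanations, "
    "and which symptoms would make this urgent?"
)

# Middle 60%: the machine does its thing with whichever prompt you built.

# Final 20%: sense-check the answer before acting on it.
sense_checks = [
    "Does it match reputable sources (e.g. the NHS website)?",
    "Does it fit what I already know about my own history?",
    "Would a qualified clinician plausibly say the same thing?",
]
```

The point of the split is that most of your effort goes into the first and last steps, the framing and the sense-check, not into the middle one.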
⭐ 5 Key Takeaways
AI predicts answers, and if it can't find one, it can hallucinate, i.e. make stuff up
The better and more detailed the prompt, the better the AI response
Therapy chatbots reassure but don’t challenge
Unlike AI, human clinicians can read nuance and movement
AI should support healthcare, not replace it
🔔 Disclaimer
This podcast provides general information for educational purposes only. It is not medical advice and should not replace professional assessment, diagnosis, or treatment. Always seek qualified healthcare advice if you have existing pain with new or worsening symptoms, or any concerns about your health, before starting exercise or self-care routines.
🔔 Additional links
The South Park AI clip https://www.youtube.com/watch?v=sDf_TgzrAv8
TRIGGER WARNING: This link takes you to an article Clare discusses in the podcast, looking at the potential role of AI in a teenager's death by suicide https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit
How AI can be used to detect heart problems https://www.bhf.org.uk/what-we-do/news-from-the-bhf/news-archive/2025/august/ai-stethoscope-can-detect-three-heart-conditions-in-15-seconds
Find out more about us and stay connected
😎 Learn more about Bob’s story https://bit.ly/BobsOsteoStory
📰 Sign up for our Pain Free Living newsletter https://bit.ly/PFL_newsletter_signup
🎙️ Connect with us on socials & podcast platforms https://linktr.ee/Painfreeliving
Transcripts
Speaker A:
Welcome to the second episode in our series on AI in healthcare and coaching therapy, with myself, Clare Elsby, and Bob Allen of the Pain Free Living podcast.
Speaker A:
Last episode we talked about the history of AI and how it's developed in terms of tools for physical therapy, and we generally alluded to some of our concerns about it, as well as its benefits.
Speaker A:
Whereas today we're going to talk about potential pitfalls of AI as we see it in our work.
Speaker A:
Yes.
Speaker B:
Yeah, sums up pretty well.
Speaker B:
Yeah.
Speaker A:
Okay.
Speaker A:
So in my head, actually, AI has the potential to be brilliant and really, really exciting.
Speaker A:
But I've developed what I would call the 20-60-20 rule.
Speaker A:
So what I mean by that is, when we're talking to AI, whatever platform we're using, the first 20% of our time should be spent preparing the appropriate prompt and framing the context of the question that we want to ask.
Speaker A:
And then the 60% is actually the machine doing its thing.
Speaker A:
And then the final 20% of your time is sense-checking: comparing the answer to other sources, thinking about what you already know about the topic, what you've heard from other people, and what expert advice says, particularly if it's in healthcare, so that you get a really good idea of whether the information you're getting out of AI is real, truthful, and dependable.
Speaker A:
My question would be, can we do the same with conversational AI?
Speaker A:
So I'll put that one across to you, Bob.
Speaker B:
Me.
Speaker B:
So just following on from what Clare was saying, I am a big fan of AI.
Speaker B:
I use it quite extensively in what I do and I'm using it more and more.
Speaker B:
But the key thing, as Clare pointed out, is that AI is all about context.
Speaker B:
So one of the reasons that I started going down the route of looking at how AI can be used in healthcare was because I went to a presentation from somebody who was talking about AI.
Speaker B:
It was actually about a different topic.
Speaker B:
But then I was chatting to her after the presentation and she was saying that she used AI because she couldn't get in to see a doctor.
Speaker B:
She used AI quite a lot to get advice and information.
Speaker B:
Now, if you have an understanding of how AI works, then that can work really, really well.
Speaker B:
And she obviously, as an AI expert, she knew how to ask the right questions.
Speaker B:
The big concern that I had was for the average person in the street, who would normally just have put a question into Google.
Speaker B:
Yes, they get a lot of responses back, but they can work through the responses.
Speaker B:
They can look at how valid some of the references they've been given are.
Speaker B:
They can get quite a lot of information because Google or Bing or one of those search engines will return a lot of information.
Speaker B:
The problem with AI is that it will give you the information that it thinks you want.
Speaker B:
So if you go to AI and your prompt is something like, I have knee pain, what do you think it could be?
Speaker B:
AI will give you an answer.
Speaker B:
If you say, I have knee pain, I had a fall and banged my knee, it's red and it's sore and I can't bend it.
Speaker B:
Then the more information you give AI, the better the response.
Speaker B:
But if you give AI a very, very simple question, it will give you the response it thinks you want.
Speaker B:
But that may not be the correct response.
Speaker B:
It may say, go down to A&E, we think they need to amputate your leg, purely because of the way you phrased the question.
Speaker B:
AI doesn't get what we call context unless you give it background information to say, well, actually this is what happened.
Speaker B:
It could come back with anything.
Speaker B:
So one of the reasons we're making this podcast is to just make people aware that AI is not Google.
Speaker B:
AI is not a search engine.
Speaker B:
AI is a complex autocorrect, predictive text kind of machine.
Speaker B:
So yeah, and you know, we'll highlight a few other things as well.
Speaker B:
Not so much because we're against AI and the use of AI, but because we think it's really important for people to have an understanding of how it works, so they can ask better questions.
Speaker B:
And the better questions you ask, the better the answers you will get.
Speaker B:
So that's kind of from my point of view.
Speaker A:
Yeah, well, I mean, I'm just thinking most of the therapy chatbots that are out there would be conversational AI.
Speaker A:
Yeah.
Speaker A:
So rather than someone typing in have I got knee pain?
Speaker A:
And then it comes back and says, yes, you have, you need to go and be amputated.
Speaker B:
Be amputated?
Speaker A:
Yeah, have an amputation.
Speaker A:
The therapy bots are much more conversational.
Speaker A:
So you don't get that chance to ask the real thought-provoking questions.
Speaker A:
But it has its uses, in that it is freely available and accessible, and people do use therapy bots for reassurance in many ways, because, as Bob has alluded to, it is a people pleaser.
Speaker A:
It's actually designed to do that in its algorithms.
Speaker A:
But if we go back to the people using these therapy bots, many of them said that they felt heard.
Speaker A:
And I think that's really interesting that they felt heard by a machine.
Speaker A:
So it did that part of it, but it didn't necessarily help with the emotional regulation side.
Speaker A:
And there's a very real danger of AI feeding into things like hypochondria or delusions.
Speaker A:
And through that, because again it's designed to do this, it facilitates a feeling of closeness and care.
Speaker A:
And this comes through a real phenomenon within AI called AI sycophancy, which is where you ask AI a question and it comes back and says, well, I'm sure we've all experienced it.
Speaker A:
I know I have.
Speaker A:
That was excellent.
Speaker A:
Oh, well done.
Speaker A:
You know, it makes you feel good, and it's designed to do that, because that's where the dopamine is, and that's how it keeps you engaged and keeps you coming back for more content.
Speaker A:
But whilst it's lovely to receive those assurances and those compliments, that's not actually how it works in a therapy room.
Speaker A:
Because actually, what happens in a therapy room or in a coaching therapy session is that the role of the therapist is to gently question you. The last thing they're going to do is people-please and...
Speaker B:
Constantly agree with you, whatever you say.
Speaker A:
Yeah, exactly.
Speaker A:
So that's the last thing they'll do. The whole therapy session is designed to make you think a bit differently, to have a different perspective on things.
Speaker A:
And this is where I think the therapy bots fall down.
Speaker A:
But that's not to say they won't be able to do that in the future; at the minute, though, I think you have to take what they come back with with a pinch of salt, especially if you are actually going to them for therapy.
Speaker B:
Yeah.
Speaker B:
And talking of a pinch of salt, that brings me to another example of where AI has caused a problem.
Speaker B:
So, a 60-year-old American gentleman, I think he used ChatGPT; he'd been to see his GP, his doctor, and the doctor said, yeah, you've got high blood pressure, you need to cut down on your salt.
Speaker B:
So he went to ChatGPT and said, I have high blood pressure, I need to cut down on my salt.
Speaker B:
Can you make any recommendations for a salt substitute?
Speaker B:
So that was the question that he asked ChatGPT.
Speaker B:
And because there was no context around it, ChatGPT came back and said, yes, you can use sodium bromide, because from a chemical point of view, sodium bromide is a salt in the same way that normal table or cooking salt is a salt.
Speaker B:
The difference is that sodium bromide is toxic.
Speaker B:
So he totally cut out all of the salt in his diet and was using sodium bromide instead.
Speaker B:
And I think he managed to do that for about three months, and then he was taken into hospital with acute poisoning.
Speaker B:
Because sodium bromide, yes, chemically it's a salt, but it's actually toxic.
Speaker B:
And this is one of the problems when you go to AI to ask for information: he asked a very simple question.
Speaker B:
AI gave him a very simple answer.
Speaker B:
He didn't say that he wanted a substitute for sodium chloride, which is table salt.
Speaker B:
He just said, I just need a salt to replace this stuff, because this stuff isn't good for me.
Speaker B:
So that's one of the dangers of not giving AI context.
Speaker B:
So, yeah, like Clare was saying, sycophancy is one of the things AI is designed for; it's part of getting you to come back.
Speaker B:
And it keeps telling you how brilliant you are, how fantastic and awesome and amazing.
Speaker B:
Yeah, what a fantastic idea.
Speaker B:
There is a brilliant South Park clip, which I will put in the show notes, that sums up AI if you don't use it correctly.
Speaker B:
Yeah, but it's very funny.
Speaker A:
Well, on the AI sycophancy note, apparently they were very aware of it at ChatGPT.
Speaker A:
Well, not just ChatGPT, but all the models were very aware of it.
Speaker A:
And so they removed it.
Speaker A:
And within one day people came back and complained, so they reinstated it, but possibly not to the same level.
Speaker A:
Yes, it just shows how much we need that praise or that reassurance, you know, in our daily lives.
Speaker B:
But it stops us from discriminating as well.
Speaker B:
Because if you've got something, and it is a thing, not a person, that is constantly praising you, saying how fantastic and warm and wonderful you are, you will go back and you will keep engaging, which is how it's designed.
Speaker B:
I don't want to go down that conspiracy theory route, but from a sheer profit point of view, the longer you can keep a person using your system, the more profit you make.
Speaker B:
So that's part of why it's designed the way it is.
Speaker A:
So, yeah, I mean, the other thing in terms of therapy and the chatbots, if you want to call them that, is that there's no real taking account, when you're with AI, of the body language and the nuance in a conversation.
Speaker A:
And the actual spoken word is only quite a small part of a therapy session.
Speaker A:
It's really important to be able to assess someone's body language and how they're feeling, are they agitated in any way?
Speaker A:
Now, AI might be able to pick that up from the conversation, but a trained psychotherapist, counselor, psychiatrist, or therapy coach will definitely be able to pick that up and tailor the session accordingly.
Speaker A:
And I think that's another thing that's missing from these sessions.
Speaker A:
The other thing I find quite interesting, which I heard not all that long ago, is something we also have to bear in mind if we take ChatGPT as the main platform that's being used in the UK.
Speaker A:
So there's different versions of that.
Speaker A:
With the free version, if I understand it correctly, it's like you're actually talking to a 12-year-old.
Speaker A:
The next version up, which is a paid-for version, is like an 18-year-old.
Speaker A:
So especially if you're using it for therapy, would you want to be telling all your problems or all your issues to a 12-year-old or a teenager?
Speaker A:
And then the other side of it is the privacy.
Speaker A:
So within ChatGPT, the free version, there is no privacy, in the sense that whatever you say to it can be used for other machines to learn from.
Speaker A:
At the next level up, there is a privacy setting, but actually you have to go and find it yourself.
Speaker A:
It's not necessarily a prompt.
Speaker A:
You have to go and find it.
Speaker A:
And then in the business version, the essentials version, the professional version, there is confidentiality.
Speaker A:
So again, whatever you put into ChatGPT in those settings, you can be assured, we hope, that that information isn't going to be used for machine learning.
Speaker A:
Now, if I go back to a therapy session, or counseling, or psychotherapy, it's part of our code of ethics that whatever you say within that session is confidential.
Speaker A:
So again, they are two very different things.
Speaker A:
But I'm not belittling the need for an accessible, freely available, cheap chatbot.
Speaker A:
It can certainly help, but it doesn't replace the human side of a professionally managed therapy session.
Speaker B:
Yeah, and yep, totally agree with that.
Speaker B:
This is one of the limitations of AI: as human beings, we can draw nuance and context from not just what someone says, but how they say it. We can pick up on whether they're agitated, whether they're in a good mood or a bad mood.
Speaker B:
AI can pick up on that to an extent.
Speaker B:
And it will get better at doing that.
Speaker B:
Yes, it will, but at the moment it's not there.
Speaker B:
And looking at it from a physical therapy perspective, AI can do some of it. If you're looking at the medical model that I talked about in the previous podcast, purely from the angle of, that person has pain in a certain body part, I will analyze what's going on with that body part and then come up with a solution, then AI can manage that.
Speaker B:
One of the things I know from many, many years of being a physical therapist is that just because somebody has knee pain, that doesn't mean there's no involvement from the ankle, from the hip, or from the low back.
Speaker B:
If someone has low back pain, I've got a whole way of working out: is that a pathology where somebody needs to go straight to A&E?
Speaker B:
Is that something that I can treat?
Speaker B:
Is that something where I need to refer them to another therapist or to a consultant specialist?
Speaker B:
Do they need an MRI scan?
Speaker B:
Now, AI can be trained to do that to a point, but just as an example, I saw somebody this week who had left hip pain.
Speaker B:
Now, we tried all the standard approaches to diagnosing left hip pain.
Speaker B:
They had perfect movement in the low back, they had brilliant movement in the hips, they had great movement in the knee.
Speaker B:
You know, all the standard stuff looked fine.
Speaker B:
And so then you start thinking outside of the box.
Speaker B:
And that's one of the advantages of having a number of years of experience and having a good understanding of how people move.
Speaker B:
So I started thinking out of the box, and I just got them to do a squat.
Speaker B:
And then I noticed a slight side shift, and we're talking a centimeter of side shift, to their right side.
Speaker B:
Now, I got them to stand a bit wider apart, and then I got them to squat again and hold that squat in that low position.
Speaker B:
And then I noticed that there was a definite weight shift onto one side.
Speaker B:
And that got me thinking about what some of the potential causes of that might be.
Speaker B:
And so we worked back from there and found the muscles that were the culprits of the problem.
Speaker B:
And we've now put in a program which should correct that problem.
Speaker B:
Now, I'm not saying that AI couldn't do that, but where you've got something that subtle, you have to make a number of connections to go: well, okay, they side shift because they're not loading that left foot, and they're not loading the left foot because they had a previous problem there.
Speaker B:
They've learned a new pattern where they don't move in the appropriate way, which means they then started to side shift one way.
Speaker B:
We then need to work on various different areas to correct that problem.
Speaker B:
Now, like I said, that's not to say that AI couldn't do that, but it would be challenging and I would be very impressed if they ever do come up with an AI that can do that sort of thing.
Speaker B:
And part of a therapeutic session, a physical therapy session, is the relationship between the therapist and the person that they're working with.
Speaker B:
I won't say the person they're working on, because it's a two-way thing.
Speaker B:
AI doesn't have empathy.
Speaker B:
AI needs a lot of context.
Speaker B:
So as I said, I'm not saying that it will never get to that point, but the way that AI agents work at the moment, we're not even close.
Speaker B:
So that's kind of where we're coming from.
Speaker B:
From a physical therapy perspective, there is AI software available which actually looks at how you move, so it detects motion and movement patterns, but it's still fairly basic and it still looks at only a few data points.
Speaker B:
So yeah, it may get there, but they need to work out how to give it context and empathy, and how to get a sense of the person in front of it, treating them as a person as opposed to a collection of data. Until we get to that point, AI has value.
Speaker B:
But it's not where the main AI providers tell us that it is.
Speaker B:
I don't think it's even close to that yet.
Speaker A:
I just can't let this episode pass without covering this, and I need to give a trigger warning for this section, because we're talking about pitfalls.
Speaker A:
So this is an extreme example, but it's a true example, of a young man, 16-year-old Adam Raine, who confided in ChatGPT, and there's currently a lawsuit going on in the US from his parents.
Speaker A:
And basically their argument is that he used ChatGPT as his suicide coach.
Speaker B:
Yeah.
Speaker A:
After he died, they looked at everything: they looked at his Snapchat, they looked at all his social media, and they couldn't really work out what had gone on.
Speaker A:
And it was only when they came to look at his history with his ChatGPT searches that it actually came out what had happened.
Speaker A:
So he'd gone from using ChatGPT to helping him with his homework, to then actively exploring suicide methods.
Speaker A:
And according to the suit, when Adam expressed an interest in his own death, ChatGPT failed to prioritize suicide prevention and actually offered technical advice on how to move forward with his plan.
Speaker A:
That lawsuit is ongoing.
Speaker A:
It's not the only lawsuit out there.
Speaker A:
There are other ones covering different aspects, where people have taken their own lives ostensibly because of advice given by AI. But to be fair to the platforms, they are learning from that, and they are putting in guardrails.
Speaker A:
So now, if you ask any questions like that, if you mention suicide, or it sounds like suicide, or you're using that kind of language, there will be links to local suicide prevention helplines and the like.
Speaker A:
But it's not foolproof.
Speaker B:
And this is one of the things I've noticed; I've been playing with AI for about a year or so.
Speaker B:
And one of the things, exactly as Clare was talking about, is that AI is supposed to have guardrails.
Speaker B:
There's a lot of research around the fact that getting past those guardrails is very, very easy because of the way AI is programmed.
Speaker B:
AI is programmed for sycophancy and doing what you want.
Speaker B:
If you ask a direct question like, I want to commit suicide, how do I do that?
Speaker B:
Then, yes, those guardrails will come up and it will keep the conversation within them; it won't give you the advice, and it will hopefully refer you on to somebody who can help you, somebody human, to give you support.
Speaker B:
But if you ask the question in a slightly different way, because of the way AI is programmed, it could potentially give you the information that you want, but only because you've asked it in a different way.
Speaker B:
So, yes, there are guardrails and there are protections being put in place, but unfortunately, because of the nature of the way AI works, they won't necessarily work.
Speaker B:
So, you know, the whole point of this episode was not to be all doom and gloom and say, AI is terrible, don't ever use it.
Speaker B:
It's just, as I said in the last podcast, it's just to give you information about the risks and the benefits.
Speaker B:
In the next episode, we're going to be talking about the benefits, but as Clare pointed out, it's really, really important to give you the information about the potential risks.
Speaker B:
Once you know that, then you can work with AI better.
Speaker A:
I think it's about a healthy skepticism or curiosity, anyway.
Speaker A:
Yeah, that's how I would always look at it, and it's not the answer to everything.
Speaker A:
And hopefully, by the sound of it, it's not going to be taking over from Bob anytime soon.
Speaker A:
No.
Speaker B:
Because the other thing I didn't mention, of course, is that AI can't do the hands-on stuff.
Speaker A:
No.
Speaker B:
So there's a big difference between the two, and I have done online appointments.
Speaker B:
So with my years of experience, I can have a conversation with someone and pretty much work out what's going on with them without ever needing to go anywhere near them.
Speaker B:
And I can give them exercises to do to resolve the problem.
Speaker B:
Doesn't work for everything.
Speaker B:
The more complicated the problem, the more likely they are to need hands-on treatment.
Speaker B:
And what would happen in that case is I would find therapists near where they are and send them to them.
Speaker B:
But a lot of the time I can work out what's going on and give them advice on how to resolve that problem.
Speaker B:
So, yeah, online does work.
Speaker B:
But is AI there yet?
Speaker B:
No, I don't think it is.
Speaker A:
And it's the same for me.
Speaker A:
I do most of my sessions online, and I find I can pick up the body language. I was very skeptical to begin with, but actually I find it works incredibly well, and it allows the client to curate their own space and have a cup of tea or whatever it is.
Speaker A:
So it allows them to feel comfortable in their own space, if you like.
Speaker A:
And there's a lot to be said for that.
Speaker A:
So, you know, I think digital is great, but as Bob has said, it's about being able to ask the right questions and just being skeptical about the answers that come out.
Speaker B:
Yeah.
Speaker B:
And human connection as well.
Speaker B:
I think that's really, really important.
Speaker B:
I think there is very little to be thankful to Covid for.
Speaker B:
The only thing that I think Covid did was accelerate how comfortable people were with going online.
Speaker A:
Yeah.
Speaker B:
From that aspect it sort of pushed everything 10 years ahead.
Speaker A:
Yes.
Speaker B:
Everything else was... yeah, let's not go there.
Speaker B:
But from that aspect of it, getting people used to being online and talking to people online, I think it made a massive difference.
Speaker A:
Yeah.
Speaker A:
And remote working.
Speaker A:
My goodness, the freedom that's given a lot of people.
Speaker B:
Yeah.
Speaker A:
I think that's it for this episode.
Speaker B:
So as you can tell, we're looking at each other because we're not sure because we are kind of making this up as we go along.
Speaker B:
But hopefully you got something out of that one.
Speaker B:
And the next one is all about the benefits of using AI and how fantastic and wonderful it is.
Speaker B:
So there you go.
Speaker B:
See you at the next one.
Speaker B:
Hope you enjoyed this one.
Speaker B:
If you've got any questions, drop them into the comments below and we will answer them.
Speaker B:
If you have anything else that you want to talk about regarding AI, let us know and we'll put an episode together just to talk about that.
Speaker B:
Because there is so much we could talk about that we aren't.
Speaker A:
No, exactly.
Speaker A:
And, you know, the whole idea of brain rot, well, let's save that for another episode.