In this Decoding Academia episode, we take a look at a 2025 paper by Daria Ovsyannikova, Victoria Oldemburgo de Mello, and Mickey Inzlicht that asks a question likely to make some people uncomfortable, or even angry: are AI-generated responses perceived as more empathetic than those written by actual humans?
We walk through the design in detail (including why it counts as a genuinely severe test), hand out well-deserved open-science brownie points, and discuss why AI seems to do especially well when responding to negative or distress-laden prompts. Along the way, Chris reflects on his unsettlingly intense relationship with Google’s semi-sentient customer-service agent “Bubbles,” and we ask whether infinite patience, maximal effort, and zero social awkwardness might be doing most of the work here.
This is not a paper about replacing therapists, outsourcing friendship, or mass-producing compassion at scale. It is a careful demonstration that fluent, effortful, emotionally calibrated text is often enough to convince people they are being understood, which might explain some of the appeal of the Gurus.
Source
Ovsyannikova, D., de Mello, V. O., & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology, 3(1), 4.
Decoding Academia 34: Empathetic AIs?
01:40 Introducing the Paper
10:29 Study Methodology
14:21 Chris's meaningful relationship with YouTube AI agent Bubbles
16:23 Open Science Brownie Points
17:50 Empathetic Prompt Engineering: Humans and AIs
21:17 Studies 1 and 2
31:35 Studies 3 and 4
37:00 Study Conclusions
42:27 Severe Hypothesis Testing
45:11 Seeking out Disconfirming Evidence
47:06 Why do AIs do better on negative prompts?
54:48 Final Thoughts