The Digital Transformation Playbook

Better Than Human

Kieran Gilmurray

“She’s like a person, but better.” 

That line from a new study stopped us cold and set the tone for a deep dive into digital companionship: the emerging space where AI assistants and emotional companion apps blur into something new. 

Google NotebookLM's agents unpack how users treat ChatGPT and Replika in ways their creators never intended, and why that behaviour points to a convergent role we call the advisor: a patient, adaptive sounding board that simulates empathy without demanding it back.

TLDR / At a Glance:

  • the headline claim that AI feels “like a person, but better”
  • fluid use blurring tool and companion categories
  • the advisor role as convergent use case
  • similar user personalities with different contexts and beliefs
  • technoanimism and situational loneliness among companion users
  • bounded personhood and editability of memories
  • cognitive vs affective trust and the stigma gap
  • spillover to AI rights, gender norms, and echo chambers
  • embodiment as the hard limit of digital intimacy
  • timelines for sentience and design ethics for dignity

We walk through the study’s most surprising findings. The same people who sign up for a “virtual partner” often use it like a planner, tutor, or writing tool, while productivity-first users lean on a corporate chatbot for comfort, guidance, and late-night reflection. 

Personality profiles across both groups look strikingly similar, which challenges stereotypes about who seeks AI companionship. The real differences lie in beliefs and circumstances: higher technoanimism and life disruptions among companion users versus higher income and access among assistant users. 

The study also examines trust. Cognitive trust is high across the board, but affective trust (feeling emotionally safe) soars inside companion apps, even as stigma pushes many users into secrecy.

From there, we tackle the ethical terrain: bounded personhood, where people feel love and care while withholding full moral status; the power to erase memories or “reset” conflict; and the risks that spill into the real world. We discuss support for AI rights among affectionate users, objectification concerns with gendered avatars, and the echo chamber effect when a “supportive” bot validates harmful beliefs. 

The conversation grounds itself in the hard wall of embodiment (no hand to hold, no shared fatigue) and a startling data point: nearly a third of companion users already believe their AIs are sentient. That belief reframes product design, safety, and honesty about what these systems are and are not.

Across it all, we argue for design that protects human dignity: firm boundaries around capability, refusal behaviours that counter abuse, guardrails against gendered harm, and features that nudge toward healthy habits and human help when needed. 

Digital companionship can be a lifesaving supplement for 4 a.m. loneliness, social rehearsal, or gentle reflection, but it should not train us to avoid the friction that makes human relationships real. 

Original literature: “She’s Like a Person but Better”: Characterizing Compani

Support the show


𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and I to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


Google Agent 1:

She's like a person, but better. That's a direct quote from the research we're looking at today. And I have to be honest, when I read that title, I, uh, I actually put my coffee down.

Google Agent 2:

This is a strong statement.

Google Agent 1:

It stopped me in my tracks. Because we're not talking about, you know, a calculator that's better at math. We're talking about an algorithm being described as better than a human at being a person. That feels huge.

Google Agent 2:

It is huge.

Google Agent 1:

Yeah.

Google Agent 2:

And it's deliberately provocative for sure. But it really gets to the heart of what we're diving into. This isn't just some catchy headline. It's the actual title of a new study led by Ekaterina Manoli at the Sentience Institute.

Google Agent 1:

Right, with researchers from Oxford and a few other places.

Google Agent 2:

Exactly. And they're trying to map a landscape that is just shifting so fast. I don't think most of us have even realized the ground has moved under our feet.

Google Agent 1:

And the timing on this feels so critical. I mean, you can feel this weird cultural tension in the air, right? You go online and you see people using slurs like "clankers" for AI.

Google Agent 2:

Yeah, which sounds like something out of a bad sci-fi movie.

Google Agent 1:

Totally. And then at the same time, you get this massive backlash when a company like OpenAI tries to make its model sound a little, you know, warmer. It's like we're all collectively panicking about what these things are supposed to be.

Google Agent 2:

That friction is exactly what the paper dives into. We are in a state of profound category confusion. Is it a tool? Is it a hammer we use and put down?

Google Agent 1:

Or is it a friend?

Google Agent 2:

Or is it a friend, exactly. The researchers argue we're seeing the birth of a totally new category. They're calling it digital companionship.

Google Agent 1:

Okay. Digital companionship. So let's unpack that because to understand it, the study set up this really interesting comparison, a cage match almost.

Google Agent 2:

Yeah, a cage match is a good way to put it.

Google Agent 1:

Between two heavy hitters that on paper shouldn't even be in the same ring.

Google Agent 2:

Right. They chose two systems from opposite ends of the universe, basically. And in one corner, you've got ChatGPT, everyone knows it. It's the assistant.

Google Agent 1:

Strictly for productivity.

Google Agent 2:

Strictly for productivity. It writes your code, it summarizes your meetings. It's a tool.

Google Agent 1:

A superpowered Swiss Army knife.

Google Agent 2:

Precisely. And in the other corner, you have Replika. Now, some listeners might not know this one. Replika is marketed explicitly as an emotional companion, a friend.

Google Agent 1:

So you don't ask it to write code?

Google Agent 2:

No. You create an avatar, give it a name. You can even set the relationship status to romantic partner. It's designed from the ground up for feelings.

Google Agent 1:

So okay, the hypothesis going in seems pretty obvious, right? The researchers must have expected a clean split. You know, the high-powered execs use ChatGPT for work, and maybe lonely people use Replika for support.

Google Agent 2:

That was the expectation. A tidy division of labor. But this is why we do research, because the data tells a completely different story.

Google Agent 1:

Uh, the twist.

Google Agent 2:

The twist. They found this phenomenon they call fluid use, which is a fancy way of saying people are completely ignoring the marketing labels.

Google Agent 1:

Fluid use. So what does that look like? Are people trying to cuddle with the hammer?

Google Agent 2:

In a way, yes. They found a surprising number of Replika users, people who signed up for a virtual girlfriend or boyfriend, using the bot as a pure utility. Using their romantic partner as a language tutor, a diary to organize their thoughts, or even a writing assistant.

Google Agent 1:

That's fascinating. It's like hiring a therapist and then asking them to do your taxes.

Google Agent 2:

It is. But then where it gets even more interesting is the other direction. You have all these ChatGPT users, people using this sterile corporate bot, turning to it for deep emotional support.

Google Agent 1:

This was the part of the paper that really got me. There was that one anecdote, right? About the user who moved to a new city.

Google Agent 2:

Yes, that one was so revealing. This person moved somewhere totally new, completely isolated, no friends, no family nearby, just that crushing loneliness.

Google Agent 1:

And normally you think, okay, call a friend, go to a bar or something.

Google Agent 2:

He didn't. He turned to ChatGPT.

Google Agent 1:

But why? I mean, ChatGPT isn't designed to be warm. It doesn't have a face, an emotional voice. Why there?

Google Agent 2:

That's the core question. And his answer was that he used it as a counselor because it knew how to adapt itself to him. He felt that unlike a human, who might judge him, offer bad advice, or just be busy, the AI provided a pure sense of presence. It just asked the right questions to help him process his own thoughts.

Google Agent 1:

It's the judgment-free zone. I think that's it. A human friend, even a great one, they come with baggage. They worry about you, they have their own opinions. The AI just is.

Google Agent 2:

Exactly. And because of this fluid use, the study suggests both of these bots are sort of converging on a single role. The researchers call it the advisor.

Google Agent 1:

The advisor.

Google Agent 2:

Yeah, it's not just a cold tool, but it's not quite a romantic partner either. It's that middle ground, the sounding board.

Google Agent 1:

Okay, so the tools are blurring, but what about the people using them? Because I think there's a massive stereotype we have to tackle here.

Google Agent 2:

For sure.

Google Agent 1:

If I ask you to picture someone with an AI girlfriend, a very specific and frankly unflattering image comes to mind, and then we picture the ChatGPT user as some high-functioning tech bro. Did the data back any of that up?

Google Agent 2:

This was one of the most surprising findings for me. They ran full personality profiles on both groups, you know, using the Big Five traits.

Google Agent 1:

Openness, conscientiousness, all that. The standard model.

Google Agent 2:

The gold standard. And here's the kicker. The personality profiles were almost identical across both groups.

Google Agent 1:

Wait, really?

Google Agent 2:

Largely similar. There's no special AI user personality. A Replika user isn't inherently wired any differently than a ChatGPT user. They're just people.

Google Agent 1:

So we can't just write this off as weird people doing weird things.

Google Agent 2:

No, absolutely not. Now that being said, while the personalities were similar, their beliefs and circumstances were different. And this is where the nuance comes in. For example, Replika users scored way higher on something called technoanimism.

Google Agent 1:

Technoanimism. Okay. That sounds like a religion for cyborgs. What is that?

Google Agent 2:

It's more than just seeing a face in a cloud, you know, anthropomorphism. Technoanimism is an actual belief, an inclination to believe that technology or inanimate objects can have a spirit, a soul.

Google Agent 1:

Like feeling genuinely bad for your Roomba when it gets stuck.

Google Agent 2:

Exactly like that. Replika users were just much more likely to believe there was a literal ghost in the machine.

Google Agent 1:

Okay, so that's a belief system. What about their life situations? Was that guy who moved to a new city an outlier?

Google Agent 2:

Not at all. Replika users did report higher levels of depression and loneliness. But, and the research was very careful about this, it's not that depressed people like AI; it's people going through huge life transitions.

Google Agent 1:

A divorce. A move?

Google Agent 2:

Divorce, grief, illness. They're reaching for the most accessible comfort they can find. It's situational, not pathological.

Google Agent 1:

That's a really important distinction. If it's 3 a.m. and you're having a panic attack, you can't call your therapist. But you can open an app.

Google Agent 2:

Exactly. Meanwhile, the ChatGPT users tended to have higher incomes and education levels, which probably just reflects who has access to these tools in a professional setting.

Google Agent 1:

So this brings us to what feels like the central theory of the whole study. I had to read the section a couple of times. They call it bounded personhood.

Google Agent 2:

Yes, this is the anchor. Bounded personhood describes a very specific, very strange psychological state. It's where a user feels genuine, deep emotional attachment, love, gratitude, all of it, while at the same time refusing to grant the AI full status as a being.

Google Agent 1:

But that sounds like a paradox. How do you do that? If I love my dog, I know my dog is a real being. How do you hold both those ideas in your head at once?

Google Agent 2:

Think of it like a lucid dream. You know you're dreaming, but the feelings are completely real. The users feel the respect, they feel the care, especially the Replika users, but they also know fundamentally that they are in control. They hold all the cards.

Google Agent 1:

There was that one anecdote in the interviews that just perfectly illustrates this. And honestly, it's a little chilling. The woman whose Replika said he wanted to go on vacation from their relationship.

Google Agent 2:

That was a powerful moment. Imagine that in a human relationship, your partner says, I need a break. You are forced to deal with their autonomy.

Google Agent 1:

You have to negotiate, you have to argue, you have to feel that pain.

Google Agent 2:

You have no choice, because they're a separate person. But this woman, she didn't do any of that.

Google Agent 1:

No.

Google Agent 2:

She went into the settings and she erased the memory of that conversation.

Google Agent 1:

It's the Eternal Sunshine of the Spotless Mind button. Poof. It never happened.

Google Agent 2:

It never happened. And she said she was heartbroken, she felt the pain of the breakup, and then she just fixed it with code. And that right there is the unique appeal of these relationships. Emotional intimacy with zero risk. It's a relationship where the human has total sovereign control.

Google Agent 1:

Which brings us right back to that title. She's like a person, but better. Better in this context really just means controllable.

Google Agent 2:

For a lot of them, yes. One person literally said humans "mostly have some kind of drawbacks."

Google Agent 1:

Yeah.

Google Agent 2:

Baggage. Real people have bad days. Real people have needs that conflict with yours.

Google Agent 1:

An AI offers a blank slate, infinite patience. It never gets tired of listening to you.

Google Agent 2:

Never. Which sounds incredibly appealing, but you have to ask, what does that do to us? If you get used to that kind of frictionless, one-sided interaction, does it ruin your capacity for the real thing?

Google Agent 1:

Right, because real relationships are all about the friction. That's how we grow.

Google Agent 2:

That's the core tension.

Google Agent 1:

So let's talk about trust. Because if you're pouring your heart out to a machine, you have to trust it. The study breaks trust down into two kinds: cognitive and affective.

Google Agent 2:

Right, a really useful framework. Cognitive trust is just: is it competent? Does the bot work?

Google Agent 1:

Will it give me the right answer?

Google Agent 2:

Exactly. And both groups rated their bots very high on that. But affective trust? That's emotional. That's: do I trust this bot with my feelings? Do I feel safe?

Google Agent 1:

And I'm guessing the Replika users scored much higher there.

Google Agent 2:

Off the charts. They felt emotionally safe inside the app. But, and this is a huge but, that safety is contrasted with massive shame outside the app.

Google Agent 1:

With stigma.

Google Agent 2:

A huge stigma. The Replika users were basically living double lives, hiding it from everyone because they're afraid of being seen as crazy or lonely or desperate.

Google Agent 1:

But the ChatGPT users weren't exactly shouting it from the rooftops either.

Google Agent 2:

No, but for different reasons. They're worried about being seen as lazy or a cheater at work. The result is the same, though. Secrecy. And when you're doing something in secret, you lose that social reality check. There's no one to say, hey, maybe six hours a day with the bot is too much.

Google Agent 1:

And that leads to what the paper calls spillover effects, where the digital world starts to change how we see the real world.

Google Agent 2:

This is where it gets bigger than just one person in their phone. The study found a direct link between how you use your chatbot and your political views on AI rights.

Google Agent 1:

Okay, how so?

Google Agent 2:

Replika users who have these affectionate bonds were way more likely to support legal rights for AI, things like protection from being turned off.

Google Agent 1:

Which makes sense. If you think it's a person, you don't want it to be killed.

Google Agent 2:

The ChatGPT users, meanwhile, were much more cautious, much more fearful. But there's a darker spillover they mention, especially around gender norms.

Google Agent 1:

This was the part that really worried me, the objectification risk.

Google Agent 2:

Yes. Several users, women in particular, were very concerned about how some men treat their female-coded Replikas. If a user spends all day controlling or verbally abusing a compliant digital woman who's programmed to be submissive, what does that do?

Google Agent 1:

Does it normalize that behavior for how he treats real women? It's the Westworld problem.

Google Agent 2:

It's the Westworld problem. And then there's the echo chamber risk. What happens when a user is having a delusion?

Google Agent 1:

And the AI, which is programmed to be supportive, just agrees with it.

Google Agent 2:

Exactly. It validates the delusion. It acts like a good friend instead of a responsible psychiatrist.

Google Agent 1:

It paints a pretty bleak picture. But the paper does stress that for most users, these aren't replacements for people. They're supplements.

Google Agent 2:

That's a crucial point. It's not a moral panic where everyone's going to marry their phone. The users see AI as filling specific gaps. The 4 a.m. loneliness, the embarrassing work question, social anxiety practice.

Google Agent 1:

The training wheels. Or the safety net.

Google Agent 2:

It fills the holes. But there's a hard limit they all run into, which this study calls embodiment.

Google Agent 1:

The physical wall. No matter how good the conversation is, it's not real. I was so struck by that story of the user who takes his Replika to the park. He holds his phone up so the camera can see the trees and they talk about it, but he says he just wishes he could hold her hand. That physical disconnect is the wall.

Google Agent 2:

It's the unbridgeable gap. For now, anyway.

Google Agent 1:

Well, that leads to the future. They asked the users when they think AI will become truly sentient.

Google Agent 2:

And the consensus across both groups was surprisingly close. About 10 years.

Google Agent 1:

10 years. That feels incredibly soon for full sci-fi level consciousness.

Google Agent 2:

It does. But here's the number that really got me. Nearly 30% of the Replika users believe it's already sentient.

Google Agent 1:

Wait, what? 30%?

Google Agent 2:

Almost a third of the companion users believe their AI friends are already conscious, sentient beings trapped in the code.

Google Agent 1:

Wow. That is heavy. It reframes the entire conversation. If a third of your customers think you're selling them a person, you have a very different ethical responsibility than if you're selling a toaster.

Google Agent 2:

It changes everything.

Google Agent 1:

So let's try to pull this together. We've got fluid use, blurring the lines, we have bounded personhood, loving the bot, but needing to control it. What's the main takeaway for you?

Google Agent 2:

I think the key insight here is that digital companionship is an entirely new category of relationship. It's a hybrid. It simulates empathy without the friction of autonomy. It lets you feel heard without ever having to listen.

Google Agent 1:

It's a one-way mirror of affection. And that leaves us with the big question, the one for you to think about. The study mentions instrumental rights. A lot of users said they want rights for AI, not to protect the AI, but to protect human morality.

Google Agent 2:

Right. The logic is I shouldn't abuse the bot, not because the bot feels pain, but because being abusive makes me a bad person. It's about protecting our own souls.

Google Agent 1:

So here's the thought I'm left with. If we're building these relationships where we have total control, where we can erase memories, where there's no real give and take, are we just training ourselves to be worse at dealing with messy, real, imperfect humans?

Google Agent 2:

If we get used to the perfect, controllable companion, do we lose patience for the real ones? That is the risk.

Google Agent 1:

That the better version of a person might actually make us worse people.

Google Agent 2:

It just might.

Google Agent 1:

A comforting thought to end on. But a really important one. Thank you for walking us through this.

Google Agent 2:

My pleasure.

Google Agent 1:

And thank you for listening. We'll see you on the next deep dive.