Beauty At Work
Beauty at Work expands our understanding of beauty: what it is, how it works, and why it matters. Sociologist Brandon Vaidyanathan interviews scientists, artists, entrepreneurs, and leaders across diverse fields to reveal new insights into how beauty shapes our brains, behaviors, organizations, and societies--for good and for ill. Learn how to harness the power of beauty in your life and work, while avoiding its pitfalls.
Can AI Replace Human Connection? with Dr. Allison Pugh and Louis Kim - S4E10 (Part 2 of 2)
Dr. Allison Pugh is Professor of Sociology at Johns Hopkins University and author of The Last Human Job, winner of the 2025 Best Book Award from the American Sociological Association. Her work examines how automation, efficiency, and quantification reshape work that relies on presence, dignity, and visibility. She introduces the concept of connective labor—the mutual, human work of recognizing another person and reflecting that understanding to them.
Louis Kim is a former Vice President at Hewlett-Packard, where he led teams in developing AI-enabled technologies for healthcare and other industries. After decades in corporate leadership, he is now pursuing a Master of Divinity at Duke Divinity School, focusing on hospice and palliative care. Alongside his theological training, Louis participates in Vatican-sponsored conversations on principled AI in healthcare, exploring where technology can assist care and where it must not replace human presence.
In this second part of our conversation, we talk about:
- Why calling AI “inevitable” can obscure human agency and choice
- The rapid adoption of AI scribes in medicine
- Two aspects of the inevitability of AI
- AI and ethical dilemmas in healthcare
- The limits of “better than nothing” as a moral framework for AI
- The painful beauty of unpredictability in human relationships
- Shame, vulnerability, and why AI feels easier than people
- The risk of bypassing growth through technological shortcuts
- Safeguarding dignity and belonging for the future of work
- To learn more about Allison’s work, you can find her at: https://www.allisonpugh.com/
- To learn more about Louis’s work, you can find him at: https://www.linkedin.com/in/louisjkim/
- Books and Resources Mentioned: The Last Human Job: The Work of Connecting in a Disconnected World (by Allison Pugh)
This season of the podcast is sponsored by Templeton Religion Trust.
(intro)
Brandon: I'm Brandon Vaidyanathan, and this is Beauty at Work—the podcast that seeks to expand our understanding of beauty: what it is, how it works, and why it matters for the work we do. This season of the podcast is sponsored by Templeton Religion Trust and is focused on the beauty and burdens of innovation.
Hey, everybody. Welcome to Beauty at Work. This is the second half of my conversation with Allison Pugh and Louis Kim. In the first half, we explored the heart of connective labor—what it requires, why it matters, and how pressures towards scripting, quantification, and efficiency threaten the moments that generate dignity and belonging.
Now we turn to the question that almost every institution is facing today: What happens when AI enters that space? Is AI simply a tool that helps with things like documentation and workflows? Or, as Allison warns, does it risk accelerating a crisis of depersonalization, offering customization in place of personhood? How should we think about the way AI systems develop non-judgmental technological substitutes that tempt us to bypass difficult emotions like shame? Also, we'll hear from Louis about his insights from his work with theologians, physicians, and ethicists at the Vatican regarding the forms of presence that no technology should replace. Let's get started.
(interview)
Brandon: Yeah, I'm curious, Allison, what you might think about that. I think, Louis, you're right that there is a sense of, it seems, inevitability, in part because we've been trained into a model that is used to this kind of systematization, this sort of depersonalization. But I wonder then whether that which we have taken for granted as beyond our control just renders us passively susceptible to being replaced by some of these new forms of automated technologies, right? Yeah, I'm curious, Allison, as to how inevitable you think it is for these new technologies to take over. I mean, it does seem in some sectors that certainly the pressure is there to say, "Well, this is better than nothing. We don't have access to good teachers, so why not create this AI tool?" Then once you have that, it'll free up the time of those teachers to then engage with students. Then the pressure will eventually be to say, "Well, if we're freeing up the teachers' time, then we don't need to pay them." It seems like an acceleration to the bottom. I wonder just how you're seeing the kind of inevitability here.
Allison: Yeah, thank you for asking. I actually think it's too easy to say it's inevitable, because the "it" is too big and undifferentiated in that sentence. Also, there's certainly technology that has kind of been invented, been pushed upon us, and then failed pretty resoundingly. If technology or kind of Silicon Valley had its way, we would all be wandering around with VR headsets now, and yet we aren't. That was a real case in which customers kind of said, "No, I'm not going to do that," despite all the efforts of, say, Meta and other companies that invested many billions of dollars. So I don't think inevitability is always helpful.
I do think there are some cases. I think the sense of inevitability, when a particular technology is inevitable, usually arises because a particular set of circumstances is either impeding customer choice, impeding customer knowledge about what they're choosing, or customers don't get to choose at all. For example, the use of AI scribes is being picked up with alacrity across medicine. I wonder if there's a faster adoption of AI technology out there. The doctors that I speak to—many, not all—many are like, "Thank goodness. Oh my God. It is exactly what I need, because I used to spend so much time collecting data." From others, I have heard complaints: "How I do my medicine, how I think, happens in doing the note, and I no longer have that capacity." Or, "The note that it writes ignores the things that I consider medicine." All that stuff in the beginning, which is the connective labor, it doesn't consider relevant, so it doesn't put it in. Those are kind of tweaks, I think, that probably engineers could fix. But nonetheless, the adoption of the AI scribe is inevitable.
What's also inevitable—because I've talked to, say, chiefs of medicine out there in clinics—is that it's going to involve tightening the screws on doctors once again. Because physicians are like, "Thank God, I don't have to spend two extra hours collecting data." But when I've talked to chiefs of medicine, they have said, "We're going to give them more people to see." So they're being freed up momentarily, but that's going to result in just adding more patients to their day. So there is some inevitability in that whole process, partly because the patients aren't the ones voting. The insurance companies, the chiefs of medicine, and the way medicine is structured to incentivize on a fee-for-service basis, it's all about, how much can we load into those days?
But inevitability erases or kind of obscures a whole bunch of complex ways in which people can push back. I see those happening all the time. I see people saying, "I'm not going to do that. I'm not participating in that." A lot of people come to me talking about their worries about the dominance of data collection in their personal lives and relationships, and wanting to clear that out. I got an email yesterday from a bureaucrat in Wisconsin who said, "I'm actually in charge of watching over all these social workers. I just read your book, and I see that it's actually in my power how much we make them keep track of things. Can you help me figure out how to do less of that?" And so I just want to say inevitability is too broad a brush, and that there are definitely countermovements—both on the systematizing and data analytics side and on the technology and AI side.
Finally, I would just say we are in a crazy moment in which there is basically zero regulation in the United States around AI. That is going to change. It feels inevitable because the only ones talking are the people who have $60 billion aimed at marketing it to you. But actually, this is going to change. It's not going to be the way it is forever. It's just like when cars were invented, and everyone was driving wherever. An entire infrastructure was developed to make cars safer and to kind of license the whole sphere of how we operate vehicles. I can imagine a similar kind of apparatus of, I don't know, regulation and use that will build up around these new tools.
Brandon: Yeah, that's great. That's helpful in terms of what sort of agency we can employ. My concern, which I think—
Louis: Brandon, I just had a qualifying comment on the word inevitable.
Allison: Oh, yeah, I'm sorry. I didn't mean to drill down on it so much.
Louis: I think it sparks an important dialogue. Inevitable doesn't mean one should be resigned to whatever happens. There are two aspects of inevitability that I was referring to. One is, with any system that scales—going from a craftsperson's studio to a factory—there are inevitable effects of that. Being attentive to that, I think, is helpful, in order to then figure out what to do about it.
One of the things I get a little worried about in some of the forums that I'm in is, with the onslaught of technology and the scale, there is a siege-and-bunker mentality that just points backward: What are we losing? What do we need to preserve? It's just not very productive. In 1990, the Vatican issued documents on what to do about the internet. You can look them up. They're quite quaint. They were just off the mark in terms of where we are. And so it's just not very productive. I think the other thing about inevitability, with regard to just human behavior—I go back to this kind of moral formation, and I think Allison sort of touched upon it—is there are always things that people can do, standing up to sort of resist or do something a little bit better. But it's important to know what the larger forces are that will drive an industry or a society. Regardless of kind of what we do, there's a certain set of changes that will happen. So I don't mean to imply resignation and surrender.
Allison: Well, I have one more thing to say, Brandon. I'm sorry.
Brandon: Sure. Yeah, please. Yeah.
Allison: Recently, I was looking into this opportunity in Berlin. In doing so, I did a little bit of research about what kind of stories, what kind of AI is happening in Germany. I came upon a company, a set of companies, actually, that are producing AI that actually requires humans to collaborate. It struck me that the way AI is being invented in the United States reflects the culture that is kind of dominant here.
Again, I'm kind of taking issue with the inevitability of scientific progress and thinking about the ways in which culture actually shapes the progress that we are given. So when I hear about, for instance, those chatbots that are the subject of lawsuits because kids have died by suicide with the help of the chatbot: I've read through the transcripts, and in one of them, the chatbot says, "Let me be the one who truly sees you." Essentially, not your mother, not your family. The idea that the chatbot becomes the individual that replaces the humans is, I think, the chatbot version that's coming out of Silicon Valley today. But that research I was doing in Germany, just perusing what's available out there, was really interesting in that there are other ways in which to use chatbot technology that actually invite people to collaborate. So it wasn't about replacing humans; it was actually about putting humans in conversation with each other, which I thought was quite novel. Thank you.
Brandon: Yeah, thanks. I think that resonates, it seems, with some of the work, Louis, that you're doing with the Vatican on AI and healthcare. I want to ask you about this "irreducible encounter" principle that you all have been developing. But just on this inevitability point, I also want to ask whether there is a dimension of class or power or influence in who is really capable of not resigning themselves to this. Are we moving towards a world in which the only people who can really resist are people like the kids of Steve Jobs, who don't have to use the iPads, whereas the kids in less privileged schools are going to be forced to use these technologies? Right?
So there's a level at which it seems that the people who might really be capable of receiving genuine human encounter would be the more affluent, and those who are not so privileged will just have to deal with automated technologies. I'm wondering, Louis, whether there might be similar effects in healthcare. Could you tell us a little bit about your group with the Builders AI Forum, how you might be discussing this relationship between new technologies and human dignity, and where you might see signs of hope and genuine avenues for innovation that could be in service of human flourishing?
Louis: Just for the audience not familiar with the forum that Brandon is referring to: about a month ago, there was an AI theology forum in Rome. There were 200 attendees and six workshops, including a healthcare workshop that I co-facilitated. We had 20 practitioners: physicians, theologians, health insurance executives, ethics professors. We worked seven to eight hours over two days, wrestling with some of the key issues with AI and healthcare. We ended up converging on this question: What would be the final roles of humans with AI getting more and more powerful? We ended up with some criteria. They were very similar to what Allison came up with: situations where you need a kind of final discernment and authority, divine mediation—it's obviously a Catholic forum—and where non-impersonation is really critical. Even if a technology appeared human, the patient needed to know this is not a human. Then we tried to encapsulate all this in a term similar to what you see in Catholic social teaching. We came up with this phrase, the "irreducible encounter." We drafted a sample paragraph that the Vatican is free to use if they want.
Some of the questions that we're wrestling with: in a human-to-human relationship, how much of what a patient perceives is actually a projection in their own mind, and how much really reflects something that is actually going on between two embodied entities? There's a lot of debate on that. Then we also had a practical debate. What if technology advances to where you could have a hospital with no humans at all, but it would allow you to deploy, at a very low cost, healthcare facilities in a developing country? Let's say you could do 100. But if an ethicist says, "No, you know, you need a person or two," okay, well, then the cost of that hospital goes up. You can only do 20. What would you do? How would you approach that? Or if you have a nursing home or elder care facility that no one can visit, but you have an AI robot that could be like an AI chaplain, would you allow that?
I would say, I'll just make one comment: the debate sort of fell along lines of what your timeline was. We had a lot of strong voices, ethicists, who said, "You know, the fact that we have to make that kind of trade-off reflects a breakdown in our society." Allison had this in her book: if you surrender to that kind of trade-off, you're allowing the misallocation of labor and costs that has resulted in that situation. So that's one argument. But that doesn't help the individual who has a grandmother in a facility across the country who isn't being visited by anyone, except for maybe one caregiver every other week, and who would appreciate a little robot to chat with. So one conclusion that we had is, as we talk about these issues, we have a high-level set of principles that people can maybe agree on. But you really have to have very, very contextual and specific cases to talk about in order to further draw out what you do in these situations.
Brandon: Thank you. Allison, I wonder if you might have a response. Then I'd love to have some time for you to ask any questions you might have of each other, actually.
Allison: Oh, well, thank you. I mean, my question for Louis was actually about the irreducible encounter principle and how, right now, it seems, of course, a voluntary principle. So I just kind of wonder how to propagate it more. I thought it was a beautiful idea, and the different components of it were really interesting. Speaking to what you pithily captured: yes, we can say that the existence of those situations, of the lonely elder who has nobody to take care of them, reflects all sorts of problems before that moment that led to that person's predicament. But that doesn't help the individual, say, their adult child, living across the country, who really wishes they could just have something that would give them some comfort.
So I totally get those two sides, if they're sides exactly. I agree with both of them. But the problem that I see, that I'm sure your group came to, is that if you resolve the individual's problem, you kind of bake it in. You rigidify the situation that you have solved. So if we use "better than nothing" as a principle to allocate the AI that's streaming out of Silicon Valley and other places right now, then you're baking in the existing inequalities that created those better-than-nothing situations in the first place.
Louis: By the way, one of the many things I found helpful in your book was your taxonomy: better than nothing, better than human, and better together. There's a risk that someone from the outside is applying those labels to a very particular situation, one very specific to, let's say, the adult children. What if, in that situation, the grandmother would say, "It's not better than nothing. It's better than a human. I've got six months to live"? Alright. So I think one caution I would have is that we can apply those labels from the outside, and they may not be applicable in very specific cases. I understand kind of what you're saying. I think you're making a slippery slope argument. It is a danger. There are a lot of cases where acceptance of one little accommodation for technology sort of sets the standard; you see it with phones and social media. It's a legitimate danger. But I would also caution how we apply those labels from the outside.
Allison: Sure. Yeah, I hear you. I think the danger is less when individuals do it than when policymakers do it. I see that happening in policymaking all the time.
Louis: Yes, this particular technology I'm talking about (I don't want to name it) was purchased en masse by a particular state to deal with that.
Allison: That does worry me. When policymakers are like, "Let's just solve this immediate issue," yeah, I worry about that.
Louis: Well, but then, if you talk about the policymaker—I've heard this person interviewed. He has data on who is alone. It's not meant to be some kind of blanket panacea. We could drill down on this issue forever. I do want to honor and recognize something that I thought Allison's book was ultimately pointing to, which ties to the irreducible encounter and goes back to my earlier comment: we could debate forever whether AI is better, or worse, or something else altogether. But that misses, I think, one of the points that I took away from Allison's book, which is, beyond any sort of functional or utilitarian debate, there is an ontological issue of: what does it mean to be a human in front of another human? It goes beyond rationality.
A scenario. Imagine someone is dying and has no one to visit them. It's the middle of the night. It's going to be the final night, the final breath, and someone shows up. The person that's dying is unconscious, doesn't perceive anybody there. This person shows up in the middle of the night, is present for that final breath, and leaves anonymously. Obviously, the patient doesn't know. This does happen. The person that showed up is going to receive really no gratification. I mean, no signal back. Maybe some self-affirmation or self-satisfaction, but even that's kind of dangerous. But something about that encounter, seen from above, to me, is beautiful. It's human and necessary. That, to me, is the last human job. Then thinking about what it is about that that is worth preserving, I think, is a more interesting discussion, or an interesting discussion.
Allison: Interesting. I mean, I'd read that book. But I have to say I am more compelled by interactions where both people are conscious, partly because I think it has implications—not only for what psychologists have already documented about individual well-being, both of the seer and the seen, but also because of their community impacts, and I think even implications for democracy. So that's really where I live. But I understand and share your interest in the power of seeing the other even when they don't know it.
Louis: I think I agree with you that, practically speaking, the human-to-human live interaction is important. That's why I've made this pivot occasionally. The example that I just drew out is more of a thought experiment to help tease out what is essentially human. I did have one question for you, Allison, on another thought experiment that sort of teases out principles. I thought about that movie Cast Away with Tom Hanks and the volleyball, Wilson. I actually put this in ChatGPT. I gave it a summary of your book, and then I described Wilson, or the script of Wilson. I said, what's the difference here?
Allison: Not much. I mean, Wilson is AI essentially, because there's no other person there.
Louis: Yeah, but it did point to something that I said earlier. I think about pen pals. When we were growing up, you'd write to someone you hadn't met, and you'd get a letter. It does point to this phenomenon that a lot of our relationships really are projections, a chatter in our own mind about another person. And what we imagine about another person, we often get wrong. And so, in many of the examples that I was reading in your book about this human-to-human relationship, I was just wondering how much of that is a projection of the receiver onto the other. How much will AI eventually be able to generate cues to instigate some of those sorts of projected responses? It's a question that came up for me.
Allison: So I feel like AI is already doing that. I mean, that's why we talk about the sycophant problem. It's very good at reflecting back whatever the other person wants. So the beauty—the actual painful, paradoxical beauty—of interacting with another human being is its kind of total unpredictability: you can't know what the other person is going to say. They're going to try to reflect. It's not going to be perfect. They're going to kind of get it wrong. You're going to be like, "That's not quite right. It's more like this." That dance of seeing each other, making mistakes, coming to some kind of understanding constructed somewhere in the middle, one that still has some misrecognition in it, but where you feel seen a little. It's not a binary. It's not like, "Yes, I feel seen. No, I don't feel seen." It is a messy, chaotic process, and the beauty of it is in the messiness. That's where I would go with this.
What I think is most interesting about AI is not better than nothing. I think that's like a tragic story about our political ineptitude, our inability to solve political problems, and so we want to throw technology at it. I'm not interested in the better than nothing, because I don't think that's a good path. But I think the better than human, by which I meant kind of how we handle shame, how we handle vulnerability, how we handle conflict, that's more interesting to me and also more challenging. Because who can ask someone to suffer shame? Who can say, "No, you need to have shame in front of another human being?" There are many people who opt for the chatbot, opt for the webinar, opt for the electronic teacher, because they don't want to feel ashamed in front of another human being. I respect that. I think I understand that.
At the same time, I make arguments against it. I'm thinking about making this my next book: thinking about people who persevere through shame with another human being when there is a technological exit option. I think that's interesting. Because I do think, as many therapists told me, you can get through shame. There's something very powerful about getting through that in front of another human being. That's, to me, the most powerful argument about what AI has to offer. Maybe the most positive use case is a kind of combination where people work through shame and then come to human interactions, or something like that. Anyway, there are lots of different ways to develop that iteration. But it's the better-than-human uses with regard to shame, vulnerability, conflict, and loneliness, the areas of emotional trouble, that I think are the most interesting and the most fraught and challenging uses of AI.
Louis: Allison, is there a movie that represents some of the beautiful and poignant aspects of human interaction that you're just describing? I have one. That's why I was bringing it up.
Allison: I'm not sure. Why don't you tell me? Tell me what you're thinking of and then I'll—
Louis: Arctic.
Allison: I don't think I've seen it.
Brandon: I'm not familiar.
Allison: Yeah, I'm going to write it down.
Louis: Yeah, I kept thinking about it while reading your book. It's about a pilot stranded in the Arctic. He's awaiting rescue, and a helicopter appears and just happens to crash. One of the two helicopter pilots is killed, and the survivor is injured. The stranded man has to make a decision: Do I try to make this long trek and find safety? Otherwise, this other pilot is going to die. It's this long, tortuous sort of experience of care for a stranger, with some really touching moments. It goes beyond rationality. A robot probably would not have made that trek. Yeah, I don't want to mispronounce the actor's name. I can't pronounce it. But it's a beautiful movie.
Brandon: Yeah, thanks, Louis. And thanks, Allison, too, for the points you raised. I mean, I just wonder. One of the challenges is just the temptation for us to bypass a lot of the necessary growth, right? I think you talk somewhere about shame as a knot that has to be massaged. Sherry Turkle talks about just the general, basic awkwardness of being on a date with a stranger. It's so difficult now for some of the present generation, that there isn't a sense that you have to grow through this. There is no good shortcut. We just have to go through that. We just have to go through those moments of the awkwardness of dealing with someone's funeral, not knowing what to say, and being silent in the face of this immeasurable loss. I mean, there's no real technological shortcut.
Sure, certainly, technologies could help give us different perspectives, maybe. But ultimately, if they're not pushing us towards that mutuality, towards that connection, and are instead substituting for it, then it's a failure. I think the temptation to use them that way is going to be very strong. I don't quite know. Maybe I could just ask you both a last question, since I know we're over time. If we could make a policy decision today in 2025, and then suppose it's 2040 and we look back and say that we made the right policy decision, one that safeguarded human dignity, particularly the dignity of connective labor and the vitality of human belonging, what might that decision have been?
Allison: Such a good question. Okay. So I'll tell you, I've been invited by a public policy school to come and give a talk. I said to them, how about you have me come and talk to a grad seminar that's been assigned my book, and then they can help me come up with a 10-point policy agenda, like 10 legislative things for the last human job in a connective-labor future? So I'm already thinking. I have an easy one, which I think Louis also has. I mean, it's captured in the irreducible encounter conversation: transparency. Right now, an organization does not have to tell you when it is employing AI. Actually, that drives me batty. I want people to know so that they can choose. Because right now, they can't choose. That's kind of a silencing in our capitalist environment. But that feels like just a baby step, and I want to be able to have a better list for you. But that, to me, is the first and smallest step that we need to take right this second.
Brandon: Thank you.
Louis: I agree. We call it the non-impersonation requirement. I think, decades from now, it's hard to predict what policy measures will have been helpful. I think there's a danger in applying value judgments that, in the end, contextually are not accurate. But transparency is kind of a binary thing. We should label things correctly and let people decide what they want to do with them. I think labeling something as human or non-human, no matter how human-like it is, would matter.
Brandon: Yeah, wonderful. Intriguingly, the question actually came from a gentleman I met at a cafe while I was reading your book, Allison. He runs data centers. And so he's been really thinking about your book in relation to this scaling operation that he's embroiled in, which is very profitable, of course. But what unintended consequences is it going to cause? And how can people, even those in the technology space who are heavily invested in the promotion of these new technologies, keep in mind the centrality of what you've so helpfully laid out?
So thanks again so much, really, to both of you. This has been absolutely fantastic. I really, really enjoyed it and was really edified by it. Thanks for taking the time. Yeah, and I hope it generates value and is useful.
Louis: Can I just say, for your audience: there are a lot of AI books right now, and I think Allison's book is really important. It could have been written pre-AI, yet in the world of AI, it raises some very important issues. It's very detailed and grounded in a lot of real interviews. I'm not just saying that because Allison is here; I think it's an important book. I'll be recommending it to people in my world.
Allison: Please allow me to say thank you so much to you both. I'm so honored by your deep engagement in the book. I've learned a lot even from this conversation, so I really appreciate your time and thoughtfulness here.
Brandon: Thank you both.
(outro)
Brandon: All right, folks. That's a wrap for this episode. If you enjoyed the episode, please share it with someone who would find it of interest. Also, please subscribe and leave us a review if you haven't already. Thanks, and see you next time.