AI Made Simple

Tris Papakonstantinou on Why People Distrust AI and What Leaders Can Do

Valeriya Pilkevich Season 1 Episode 9


Most organizations treat AI trust as a compliance exercise. Check the box, file the documentation, avoid the fine. But compliance and trustworthiness are not the same thing. 

In this episode, I'm joined by Tris Papakonstantinou - cognitive scientist, co-founder of the Digital Trust Council, and PhD researcher at UCL - who studies why people trust or distrust AI and what it actually takes to shift those beliefs. 

We discuss: 

  • Why people judge AI on outcomes but judge humans on intentions - and what that means for leaders
  • The difference between regulatory compliance and genuine AI trustworthiness
  • How cognitive offloading is quietly eroding critical thinking and decision-making skills
  • Why the best AI adoption strategies start with framing, transparency, and long-term thinking 

Need help building AI capability in your organization? Book a call. 

SPEAKER_01

Why do people distrust AI, and what does that really mean for leaders rolling it out across their organizations? Welcome to AI Made Simple, the transformation series. I'm Valeriya Pilkevich, and I talk with global leaders, innovators, and practitioners shaping the future of work in the age of AI. In this episode, I'm joined by Tris Papakonstantinou: cognitive scientist, ex-Google DeepMind, co-founder of the Digital Trust Council, and PhD researcher at University College London. We talk about what trustworthy AI means in practical terms and why no credible standard exists yet, the difference between regulatory compliance and genuine trust, how our brains judge AI failures categorically while forgiving human mistakes, the hidden risk of cognitive offloading, and what leaders can do to counteract it. Hi Tris, it's very exciting to have you on this podcast. Hi, great pleasure to be here. Thank you so much for the invite. Tris, you started out studying philosophy and the history of science in Athens, then moved into cognitive science, then into a PhD in London studying how people change their minds, and now you've co-founded a nonprofit that is building trust standards for AI companies. That is not a straight line. Can you walk us through how those dots connected for you, and what made you realize that trust in AI was the problem you wanted to solve?

SPEAKER_00

Yeah, I do realize it's a pretty unconventional path. Growing up, I always had a passion for the intersections of the humanities and technology. I had an interest in philosophy, and I had an interest in maths, physics, and computer science. So I started a program called History and Philosophy of Science, with a concentration in philosophy. There I was exposed to a lot of philosophy of mind and psychology, which is what led me to cognitive science, but also a lot around how technologies are built and how knowledge is created in a society, and I started thinking about things like data justice and AI trust. It was very early days. Back then, during my undergrad, I did an internship in a bioethics committee in Greece, and that was what opened up my curiosity around trustworthy AI. It was right around 2017-2018, when technoethics had started becoming a thing, and I worked on a project there looking at the ethical deployment of neural networks, which was completely new, especially by Greece's standards. That has always been part of my path and my curiosities, and I continued publishing and working in bioethics and technoethics for a while. But my passion was in cognitive science, and at the time I didn't know how the two could intersect. That's what led me to London and to the program I'm currently finishing up, a PhD in cognitive science. Somewhere along the way, keeping up the thread of my interest in bioethics and public policy while working in cognitive science, the dots started connecting for me, as the technological landscape was evolving around me. More recently, I completed an internship at Google DeepMind, which was the perfect marriage of these two worlds: I got to think a lot about socio-technical questions, about how humans interact with AI and what we want from these systems at large, while keeping my cognitive science lens on.

SPEAKER_01

Very good, thank you. For our audience, can you explain what trustworthy AI, or trust in AI, actually means, and why it is important to think about it not just as a tech issue, but as something that affects how your whole organization operates?

SPEAKER_00

Yeah, this is a question I'm hoping our work at the DTC will be able to answer. That is why this organization came into existence in the first place: we really have not defined, as a society, what trustworthy AI means, and what trust means in practical terms, not just as a vague philosophical idea. To me, a trustworthy AI system is about transparency, and about the accountability of the organizations developing and deploying the technology. Transparency means understanding what the system is actually doing technically, inside the black box, as much as we can, but also understanding how we interface with it, and how different participants in an organization are using it, are affected by it, and are driving its development.

SPEAKER_01

So at the DTC, you have an advisory board that includes people from Harvard, Pinkfrost, and Microsoft, and yet you say that no credible, evidence-based standard for trustworthy AI exists today. Why do you think that might be the case?

SPEAKER_00

The real reason, I think, is just time. It's a very nascent technology in its current form, and at the same time it is moving and developing at an unprecedented speed. The field of AI safety, with papers coming out every week, moves so fast that papers from two weeks ago are already old news, already debunked. So the development of these systems, driven by corporate incentives, is rapid, and the research into how these systems even work, which is an unknown, is also rapid. The nature of this technology is such that it is inherently a black box: we don't know exactly how it's doing what it's doing, or what its capabilities are. So the research is behind and moving really fast with corporate incentives behind it, while regulation and standards are moving at a completely different pace and cannot catch up. And we haven't really had the moment, as a society, for experts who understand this technology from different points of view to come together and reach some consensus on what it is we want, and what we mean as a collective when we demand trustworthy AI, trustworthy technology. So this is our mission: starting from defining it, then moving to the more practical side of how we enforce responsible development. Our goal is to make that moment happen and bring people together from different disciplines and organizations to start thinking about what exactly we mean when we talk about trust in AI.

SPEAKER_01

Talking about regulation: in Europe, there is the EU AI Act, and it's pushing organizations to get their governance in order, and, if they develop or use high-risk AI systems, to make sure there is a human in the loop and the system is trustworthy, accountable, and so on. Many organizations are trying to abide by these principles because, obviously, they want to be compliant and don't want to get fined. But what's your perspective? What's the difference between a compliant organization and a trusted one? Why do organizations still need to think about this question of trust?

SPEAKER_00

Yeah, that's an excellent point, and thank you for raising it. There is, as you say, a difference between compliance and trustworthiness. We do want to combine a carrot-and-stick approach, and blanket regulations such as the EU AI Act are necessary to prevent some very obvious, high-risk harms that can come from AI within an organization and in society at large. So that's definitely a good thing, and having the incentive of avoiding a fine is, at a high level, also a good thing. However, we at the DTC see a lot of potential to move, as you say, from compliance to true responsibility, true trustworthiness. That is more than a checkbox exercise. It's an active conversation within an organization, between an organization and a regulatory body, and also between the organization and the market. The philosophy of the DTC, and my own personal philosophy as well, is that we can partly trust market incentives to do this work for us. Organizations are inherently motivated by profit, and we believe that, given how people are responding to this technology, with more and more stories coming out in the news and media about harmful deployments, people are motivated to support organizations that have them in mind, in both short-term and long-term ways. We see people choosing technologies, not only AI but social media and other platforms as well, that prioritize trust. Our bet is that if we can help organizations actively participate in those trust-building exercises, they will be able to signal that to the market, and customers and users will be inherently more motivated to opt for them over a competitor. What makes this different from a regulatory act is that it demands active participation from the organization, and it is an ongoing exercise. It's not a one-off audit, or a one-off "have this documentation available in case we need to be compliant with the EU AI Act." It's about using the natural incentives of the market so organizations not only comply but also drive the conversation forward themselves.

SPEAKER_01

What I hear is that it's more about educating companies, or the leadership within companies, that they are not doing this for the sake of compliance, but for the sake of their customers and their employees. They are building a long-term business with trust in mind.

SPEAKER_00

Exactly. It's about educating the organizations and the leadership, but also the general public, about what the actual harms of this technology are, and also what the benefits are of a technology like this when it's developed responsibly. That is what we want to highlight, and we want the conversation to go both ways.

SPEAKER_01

Your research found something that might surprise some of the leaders listening, about what happens once something goes wrong. In human psychology, people judge humans based on intentions. We say, "But you know, he had the best intentions in mind, even though something went wrong," or "her intention was good, so we won't blame her too hard," right? But with AI, people judge categorically. If, say, a chatbot did something wrong, or an AI tool hallucinated, people distrust AI as a whole. Can you explain why this happens? And, looking at organizations, what does it mean for leaders who are trying to roll out AI? What do they have to keep in mind?

SPEAKER_00

Yeah, thanks for that question. What we found, with the caveat that this research was done a few years ago, well before LLMs really became a thing, so it referred more to self-driving cars or recommendation algorithms, is that in comparable cases where a human did something wrong or an AI did something wrong, people judge humans based on their intentions and AI based on the outcome. I think this comes from our having evolved as a social species: we take an anthropomorphizing view of humans, obviously, while viewing AI as a tool. And the reason I want to caveat this is that I don't think that's entirely the case anymore. It has largely changed: we see AI being much more anthropomorphized now, in its current form, through chatbots and LLMs. Still, people do conceptualize it as a tool to a large degree, especially at the moment of making a judgment on a moral issue. People don't assign intentions to tools, but most people still demand accountability, and that usually moves up to a human, whether that's the leader of the organization that developed the tool, the developer who built it, or some key person who made the decision to deploy it. So that is basically our cognitive perception of how a human operates versus how an AI tool, or any tool, operates. I see that changing, but if you look at the media stories around current chatbot scandals, whether that's the cases of so-called AI psychosis that have saturated the news or various other harms, people are still blaming the company, the company leaders, or whoever deployed that tool in that context and chose to use it. Again, that is our cognitive bias of viewing humans as more complex beings with intentions, and maybe also excusing some behavior by using theory of mind and putting ourselves in that person's shoes, while we can't do the same with tools. So our brains are looking for a human to blame. And that's something for leaders to keep in mind: when they deploy a technology within their organizations, they are going to be accountable. The key decision-makers are the ones who will be held accountable, from a lay-perception point of view but also, as it stands, a legal point of view. No one is going to blame the AI for making the wrong decision; they're going to blame the leader of the organization.

SPEAKER_01

I also see this a lot in my trainings, to be honest. If something goes wrong and AI hallucinates, the reaction is, "It's so bad, why would I even use it? It can't even give me the correct answer." I still see this categorical thinking in trainings. There's either the expectation of perfection, that if I give it a task, it does everything perfectly, or, if it makes mistakes, "why would I even use it?" But with humans, your coworkers also make mistakes sometimes, whether they are employees or interns.

SPEAKER_00

Yeah, as you say, I think we still view it as a tool, as a technology like any other, but it really isn't. It really is special in some ways; there's a lot more flexibility we have to cultivate with it. And that's also an excellent point: if encouraging the use of AI is the outcome leadership wants, then educating the people who use AI within an organization means being able to explain this nuance. Because we see this outcome-based thinking, it's something for leaders, or people such as yourself who train organizations on the use of AI, to keep in mind: it is not a calculator. It is a more complex tool, and we need to treat it with more flexibility. Identify the areas where it's good, identify the areas where it's bad, but there's no reason to throw the baby out with the bathwater.

SPEAKER_01

I want to approach this trust topic from a slightly different direction. When we talk about AI adoption, we see that many employees still resist AI. It's been three years since AI became democratized and accessible to everyone, but there might still be trust issues or fears. Your work suggests that people's beliefs about AI sit inside a web of connected assumptions, and you cannot just swap out one belief without addressing the others. Can you explain to us what that means in the context of trust in AI?

SPEAKER_00

Yeah, it's a really complicated topic in terms of adoption within organizations, and a lot of it has to do with what you mentioned: it doesn't sit in a silo. You're asking people to integrate it into their daily lives, into their daily workflows, and for many people those are workflows, routines, and ways of working they've had for decades. Then you have this disruptor coming in that people already hold preconceptions about, whether that's their level of literacy with technologies like this, their level of understanding of what it is actually doing, or the fear and resistance that comes from seeing their skills being replicated by this technology and fearing they'll be replaced. Obviously, I'm very pro this technology, so for me it's about highlighting the ways it can benefit people: the quicker they are able to adopt this technology and adapt with it, the more likely they are to keep their jobs and to keep their skills valuable in the workforce. But I can completely understand where that resistance comes from. So from the organization's point of view, it's about introducing the tools in a way that keeps an open conversation going with the workforce, rather than just dumping the tools onto them: addressing those fears, and explaining and educating the workforce on how this is going to make their work better rather than replace them, and how they can collaborate with this technology meaningfully to make their lives easier, not harder, to make their jobs quicker and higher quality, with less friction and fewer of the mundane tasks they can now avoid. So introduce the benefits first, while addressing concerns about serious issues such as cognitive de-skilling, or general de-skilling, which this technology does bring, and which organizations are maybe not addressing as much as they should. It's about having an integrated view: okay, this technology is going to replace some of the work you do, but hopefully it's the mundane, repeatable tasks that you can now hand off to AI, so you can spend more time doing the more meaningful work that you have been trained on and gathered expertise in over many years, or not so many years, depending on seniority. It's a complicated topic, and I think we also need better consensus on exactly which workflows are affected most, and on the risks we run of de-skilling our workforce by relying on these tools for things we may not want to trust them with so much. Because, as we say, they're good at some things and not so good at others, and we want humans to step in where they're not so good.

SPEAKER_01

Yeah, I really like the points you mentioned. Basically, what you said is, first, it's the framing, or, in a way, it's like marketing, right? How do you sell AI to your employees? And you have to do it properly, not just "we have a new tool, there is a prompting workshop now, you have to use it, and you have to show me you save 10 hours a week so that I can give you more work."

SPEAKER_00

Yeah, yeah.

SPEAKER_01

It's about marketing it and framing it from the perspective that it's like a cadet, right? A helper, an assistant that can, as you mentioned, take over some of the mundane tasks you maybe don't enjoy doing anyway, so you can do more strategic or creative work, or spend the time on training and upskilling yourself in the areas where you want to grow. So, the framing. But you also mentioned being transparent, and I think that's something leaders especially need to keep in mind: being transparent about how they're going to approach this transformation. Are there going to be reskilling initiatives? Even if there are going to be layoffs, they have to be very transparent about that, about how they're going to approach this whole change. And also, as you mentioned, transparent about the risks this technology brings.

SPEAKER_00

Yeah, as you say, it's about targeting it and making the differentiation: okay, we are introducing this technology, and trying to do it in a bottom-up way as much as possible, rather than dumping it onto people and then demanding additional capacity. I would say leaders need to keep the timeline for this quite broad. They can't dump this tool and then say, "Okay, now you have 10 extra hours a week, what are you doing with them?" If you want to do this, do it over a much longer term. But as you say, it's about framing it in a targeted way: this is what you should use it for, it's a good idea to use it for this, it's not such a good idea to use it for that. So maybe give it a try and let's see where we're at, rather than imposing it on employees who have learned to work without it for potentially decades.

SPEAKER_01

You mentioned the risk of cognitive offloading. Tell us more about why it comes into play. And maybe, if you have a couple of ideas: how can we counteract it? How can each of us individually, but also organizations, roll out and think about AI so that it doesn't erode employees' cognitive capacity but rather enhances it?

SPEAKER_00

Yeah, it's a really difficult issue that I've been thinking a lot about. Largely it comes down to the design of the interface, and the incentives driving that design, more than the actual technology itself, and to the way it's being adopted by users day to day. Most of my concern comes from people using it in a personal capacity rather than a professional one. Initially, people were quite cautious with this technology, perhaps because of hallucinations and the like, but as it became more and more part of our daily lives, we see people, myself included, over-relying on it for lots of different things. We don't even Google things anymore; we just pose a question to the Oracle, and the Oracle answers. That really robs us of capacities we've developed and evolved as a human species: being able to look around our environment, seek information in the appropriate places, integrate information from different sources, and arrive at a consensus, or an assumed consensus, and a decision. We see people offloading even the decision-making for key life events to chatbots, which is not such a good thing. I think we're going to keep seeing this trend of gradual disempowerment, loss of agency, and loss of confidence that comes from not being, or not feeling, able to make a decision on your own, because you now have this really low-cost way to offload both the thinking around a decision and the responsibility for it. That's a real cause for caution. A big connected problem is that these interfaces are designed to optimize for engagement, and I see no benefit in that for society at large; the only benefit is for the corporations developing and deploying these technologies. So this is maybe a point where regulation needs to step in and intervene. We need these chatbots to be designed with a bit more friction, so that humans don't completely lose skills such as writing, seeking and integrating information, and decision-making on critical matters. There is no benefit to society, or to individuals, in losing these skills and the confidence to exercise them, but there is benefit to the corporations, and that is a key point regulation needs to address as soon as possible. At the level of the organization, I don't know if there's much leaders can do other than educate the workforce about these risks. A lot of people fall into these dynamics with their chatbots unconsciously, but if they have this caution in mind beforehand, maybe they'll be less likely to fall in, or they'll introduce some friction themselves and set boundaries with the technology. But really, it's something that needs to be thought of at the design stage, and it's not currently taken into account by any of the key corporations developing the technology.

SPEAKER_01

I like to position it, and I also mention this in my trainings, using a concept from Harvard Business Review: AI as a co-thinker versus AI as a coworker. AI as a coworker is when you use the tool to offload certain tasks, for example writing, report generation, or something else; you are, in a way, automating your job. Using AI as a co-thinker is more about augmenting your job: say, telling AI your decision and letting it spot flaws in that decision, so you test and reflect on your own judgment with AI's help. Both modes can of course coexist, but I think it's important to be aware of them, so that for certain use cases you use AI as a co-thinker rather than a coworker, to support your own judgment rather than completely replace it.

SPEAKER_00

And a lot of it comes down to a personal decision, right? You have to decide which skills you can live without, where using AI benefits you without much downside, and which you can't. For me, with low-level coding tasks, I'm happy to lose that skill and offload it to AI forever. That's fine by me; it might not be fine for someone else. So I make a decision: okay, I can offload that to AI, and I am aware that in the long term it means I'll be worse at it. I'm not so comfortable offloading my decision-making about big life decisions, or, for me personally, my writing; those are not skills I want to lose. So I have to be conscious and set some boundaries around the technology for myself. This is work everyone needs to do personally, and it's about awareness and consciousness more than anything.

SPEAKER_01

Yeah, so it's about bringing responsibility back to ourselves, right? Each of us is ultimately responsible for our own future, and for our own cognitive abilities in that future.

SPEAKER_00

As we should be. And that goes for the use of any technology, same with social media, which is also a very addictive technology that can cost us a lot. So yeah, some of the time it comes down to a personal decision.

SPEAKER_01

If you could change one thing about how organizations approach AI trust or AI in general today, what would that be?

SPEAKER_00

We've seen this wave, and it comes with any disruptive technology, of people and young companies jumping onto it: "Okay, we're going to be AI-driven, we're going to be..." Not everything benefits from AI integration, is what I'm going to say. It's about knowing, again, the targeted ways this can actually help your business. It doesn't mean that suddenly all businesses and all companies need to be AI-centered companies; that market is very quickly becoming oversaturated. I think people need to conceptualize AI more like we did the internet. It's a disruption, a changed landscape. Think about how your company can exist within that landscape and make the best use of this as a tool, but it doesn't need to be centered around it. And then, again, think: how can I deploy this tool responsibly, for the sake of my workforce, my organization, my brand name internally, but also for society at large? What value is this company bringing to an already oversaturated landscape? I would like to see less of people jumping onto the trend with a "how can we ride this wave and make the most of it" mindset, and a bit more long-termist thinking, I guess, is what I'm trying to say.

SPEAKER_01

That's almost a contrarian thought, but I really like it. And I have two quick final questions for you, so just very short answers. What's one AI tool right now that you personally rely on the most?

SPEAKER_00

Honestly, ChatGPT. I'm not very sophisticated.

SPEAKER_01

Okay, very good. And what's the worst piece of AI advice somebody has ever given you?

SPEAKER_00

The worst piece of AI advice? "Don't use AI for writing." It's brilliant for that. I'm not a native speaker; I use AI to edit, and it makes my writing a lot better and clearer for people. I don't let it write things for me, but I do let it edit, and I think that's a really good thing for me and for the people who read my work.

SPEAKER_01

Same for me with German. I always try to scramble something together first, but then I let ChatGPT rewrite it in a more polished way, without mistakes. Yeah, thank you, Tris, it was a very insightful discussion. Thank you. You can find Tris on LinkedIn and learn more about the Digital Trust Council's work on building evidence-based trust standards for AI. All links are in the show notes. If you enjoyed this episode, follow AI Made Simple, the transformation series, for more conversations with researchers and practitioners shaping how AI is actually adopted inside organizations. Thanks for listening.