AI Made Simple
AI Made Simple: The Transformation Series explores how AI is reshaping how organisations work, lead, and scale. Hosted by international AI trainer and speaker Valeriya Pilkevich, the show features conversations with senior leaders, innovators, and practitioners driving real-world AI transformation. Each episode reveals what it really takes to make AI work — from leadership and culture to data, governance, and everyday workflows.
Dr. Elisa Konya-Baumbach on the Psychology of AI Acceptance
AI adoption is failing in most organisations - not because of technology, but because of psychology. Even when AI demonstrably outperforms humans, people resist it. The question is: why?
In this episode, I'm joined by Dr. Elisa Konya-Baumbach - Professor of Consumer Psychology at Bern University of Applied Sciences and Co-Founder of humest - who has spent years researching the irrational human reactions that block AI adoption.
In this conversation, we explore:
- The psychology behind AI resistance - including "uniqueness neglect" and why rational arguments don't work
- Why most AI training programs fail - and how to design literacy programs that actually drive adoption
- How reframing "AI" as "smart technologies" or giving AI assistants names changes employee behaviour
- The four levels of AI acceptance - individual, organisational, societal, and technological - and what to fix first
Need help building AI capability in your organization? Book a call.
Valeriya Pilkevich (00:00)
Why do people resist AI even when they know it works better? Welcome to AI Made Simple, the transformation series. I'm Valeriya Pilkevich and I talk with global leaders, innovators and practitioners who are shaping the future of work in the age of AI. In this episode, I'm joined by Dr. Elisa Konya-Baumbach, Professor of Consumer Psychology at Bern University of Applied Sciences, co-founder of Humest,
and one of the leading researchers on the psychology of AI acceptance. We talk about why employees reject AI tools despite good training, the difference between cognitive trust and affective trust, and why most companies only build one, how reframing AI as smart technologies changes adoption completely, and what organizations get wrong when they design one size fits all AI literacy programs.
Valeriya Pilkevich (00:51)
Hello, Elisa. Thank you for joining me on this podcast.
Elisa (00:54)
Thank you for inviting me.
Valeriya Pilkevich (00:56)
Elisa, you research and consult on something most organizations overlook: the psychology of AI acceptance. Can you tell us what that actually means and what led you to focus on this?
Elisa (01:07)
Well, yeah, what does it mean? It actually means what it says. Meaning that, in my opinion, AI, or technologies, you know, evolving around AI, they are a little bit special in terms of getting accepted or not getting accepted. So the psychology of AI is basically concerned with all of the...
individual human reactions that we have that sometimes are not as rational as we would like them to be. And so I'm exploring the more irrational phenomena around the acceptance or non-acceptance of AI. And that's where the whole psychological lens comes into play.
Valeriya Pilkevich (01:48)
Regarding this irrationality that you mentioned, your research shows something counterintuitive. Even when AI outperforms, like in medical diagnostics, for example, people still resist it. So can you tell us what's actually happening psychologically when someone rejects something they know works better?
Elisa (01:58)
Mm-hmm.
Yeah, that's a very good question. There can be a lot of different mechanisms at play. The short answer is: it depends. So there's not the one answer to this question. But what you just mentioned, like the medical context, it's pretty common that AI performance is actually much better than that of human doctors. However, here,
there is not only the rational decision maker at play, but many more psychological factors that weigh in. In this case, this can be a sense of uniqueness neglect. That's a phenomenon that has been researched, which basically means that humans, individuals, tend to think that AI cannot really...
grasp the whole uniqueness of their case, in this case the health concerns, which of course is a fallacy, right? Because if you have a well-trained AI with enough data and the right data, then of course this AI would have seen many more cases, and also special cases that might be very rare, more
than any doctor could ever see or treat or diagnose in his or her career. But this is a phenomenon that has been seen, and this is one psychological mechanism that can be at play. But there's also a lot of other stuff going on. It's age-related. It doesn't have to be, like, it's not the main driver, but
it can be that people that are just used to, and have been used to, you know, having a personal relationship with their medical care provider, they just feel more comfortable, they feel more, yeah, they trust their medical care provider, like the human doctor, much more than an AI. And then a lot of stuff comes into play in terms of transparency, like how transparent and understandable is this diagnosis or decision.
And then we go into the area of explainable AI. So there are a lot of different boxes actually that I could open right now. But yeah, it really is a complex field where you cannot say that there's only one psychological mechanism at play; it really depends.
Valeriya Pilkevich (04:10)
Mm-hmm.
You distinguish between cognitive trust, which is, I know it works, and affective trust, which is concerned with safety, or, I feel safe when using this technology. And obviously we've seen a lot in companies, when it comes to AI adoption or AI initiatives, that many people close up, because maybe they don't feel safe anymore or they feel like
Elisa (04:23)
Mm-hmm.
Valeriya Pilkevich (04:42)
AI is going to take their jobs, and we have all seen these news headlines and media titles. And from what I see, most AI initiatives focus on technology, and everybody thinks, if the technology is great, we're going to create the best AI model or train it on our best data, or we're going to buy this best tool, the best licenses, right? But what actually happens in the companies when the emotional layer gets ignored?
Elisa (04:53)
Mm-hmm.
Yeah, well, that's actually one of the biggest challenges for companies, because we are usually driven by KPIs. And, you know, having this performance in mind, which kind of makes up this cognitive trust, like we know it works and we know what it does and we can rely on it, basically. Well, this is something that is more easily guaranteed, or that can be worked on very well. But this
Valeriya Pilkevich (05:16)
Mm-hmm.
Elisa (05:34)
emotional layer, as you call it, or this affective side of trust, this is much less tangible, which makes it harder to measure, harder to track, harder to make evident also in...
But in general, as I have found in my research and also my consultancy work, that we as individuals, as human beings, we are always looking for
some kind of connection, some kind of relation, right? And the cognitive trust is only accounting for so much, but this emotional layer, the affective trust, is much more about building this connection, bonding, right? And we are also doing this kind of with an AI counterpart. And you can actually also design AI interfaces or AI systems in a way
that they promote, facilitate, enhance affective trust. But you have to keep in mind that this is actually also a design issue, or an issue of how we want the AI to talk, which conversational style is actually adopted. Do we want to have an empathetic AI? Do we want to have empathetic talking or not?
Do we want to convey emotions or not? To which extent?
And then this actually helps people to gain this affective trust and to then also be much more open to adopting AI in the long term. If companies don't actually have this in mind, what I see a lot is that maybe the technology is good, but people won't use it.
Valeriya Pilkevich (07:04)
I actually want to dig deeper into this humanizing AI part. We've been using technology, and in fact AI, for many, many years now, everyone, right? We open Netflix, it uses AI algorithms, or navigation systems, or translators. But once you say artificial intelligence or
Elisa (07:08)
Mm-hmm.
Valeriya Pilkevich (07:25)
large language models or machine learning, that's where people, especially non-technical people, shut down. And a lot depends on framing or humanizing AI. You also do research on it, and I really like this perspective. Can you tell us more about the research, about the findings? And if you have some practical recommendations for business leaders, for companies, how they can
Elisa (07:31)
Hmm.
Valeriya Pilkevich (07:49)
frame this artificial intelligence, this scary big thing into something that people want to engage with.
Elisa (07:56)
Definitely, yeah. So the aspect that you just mentioned, this kind of humanizing of AI, which in scientific terms is called anthropomorphization, which is kind of a tongue breaker. So this is something that we as humans actually engage in very naturally. If we, for example, have a conversation with a generative AI like ChatGPT, we usually, you know,
Of course not all of us, not in every context maybe, but usually we have the tendency to apply social rules and norms. So probably everybody can relate to that, and probably everybody has said thank you to a generative AI when it completed a task. Which is of course not necessary, because it's not a human, but we are looking for these social cues.
Valeriya Pilkevich (08:40)
Yes, yes.
Elisa (08:48)
which is why we naturally also anthropomorphize. So we are actually giving, in our head, a human image to this AI entity. And you can, of course, play with this and use this in contexts where this might be helpful to, for example, the affective trust.
And you can, for example, then use human-like pictures or names, or you can work with how the AI talks,
being more emotional or more empathetic. And in some contexts this might be very helpful. Maybe in service settings, where people are seeking a human counterpart, this might be helpful. In other contexts we don't need it as much. So this is how you can kind of enhance this social connection and the so-called social presence that is perceived by
us. And yeah, because this is the natural tendency that we have. Actually, in research it's called the so-called CASA paradigm: computers as social actors. So we just perceive, you know, AI to be a social counterpart. Although we know it's not a human being, right, but still we have this reciprocity in terms of the social norms and the actions that we apply.
Yeah, and what does this mean for companies? It really depends on which area this company is in. So there might be, if you, for example, are talking, I mean, we talked about medical before, but if you really talk about mental health, of course, here, empathy and anthropomorphization is much more helpful, also to get people to be...
heard and to share what they actually have in mind. But in organizations in general, what I have also seen, and this might maybe also be a good hint to think about: we have heard so much about AI, and AI is like everywhere, like in the press, in media, and usually the associations that people have with AI are not...
I mean, they can be positive, but usually it's very mixed, right? Because you also hear a lot of, you know, stuff that maybe people might be afraid of. So AI in general is a topic that is polarizing a lot. So people, yeah, can have very extreme reactions to it. And this is why I think it's always good for companies to kind of...
Valeriya Pilkevich (10:49)
Mm-hmm.
Elisa (11:04)
make themselves aware of, like, what's the connotation, what are the associations that we have with AI, with artificial intelligence. And then you can actually do a little exercise internally in the organization and just think about: what would be different if we would actually not call it AI but, for example, smart technologies? Right, because it's basically
saying the same thing, right? It's not something different, but it's connotated totally differently. Because usually with smart technologies, we don't feel threatened, you know, we are not afraid, but it's rather seen as a help, as a tool, which AI actually is too. But, you know, for a lot of people, AI has so much more associated with it, like it has a very heavy weight attached to it, which can
by itself be a barrier to adoption, right? So if we can kind of lift this weight and just make the term per se a little bit more neutral, right, then this might already help in organizations to kind of get it going, like to kind of get this whole adoption and usage process going. And this might be a good entry to that.
Valeriya Pilkevich (12:10)
These are some great insights. And in fact, I noticed it even in my trainings with custom GPTs, for example. I felt like when you call it, you know, custom GPT or Copilot agent or whatever,
people are like, what's in it? But when you put it in a way that, I'm actually a solopreneur, but I have a team of 20 AI assistants, and there is this sales assistant Anna, and she has an image generated with Midjourney which looks like Anna. And then there is this copywriter John, or podcast producer Mark, and they all have images and they all have these names and they all have...
Elisa (12:28)
Mm-hmm.
Mm-hmm.
Valeriya Pilkevich (12:45)
a specific tone of voice and a way they communicate, right? And it completely changes the perspective, and then everybody's like, cool, I also want to have a team of little helpers that's going to take over these little tasks that I have day to day. So I find it really fascinating how this framing can change the whole reaction during the training, for example, and the subsequent adoption as well.
Elisa (13:06)
Absolutely. Yeah, that's also something that I saw. Just seeing it as a colleague and giving it names. Also, especially if you work with teams where there's high resistance and rejection and, you know, skepticism, this can be very helpful: to kind of define a role, a job role, you know, just like a job posting. Basically, yeah, then it's much more like an assistant that helps out instead of, you know, the AI that is taking our jobs.
Valeriya Pilkevich (13:34)
Yeah, definitely. What you also mentioned is that it's not just on the individual level; organizations also have to look at the company level, at how they talk about AI. Do they talk about it as a transformation and something good, or as a threat? But what you also talk about is that it's not just two levels, it's actually three levels. It's individual, then organizational, and then society, societal levels. So maybe the country as well.
Elisa (13:58)
Mm-hmm.
Valeriya Pilkevich (13:59)
Can you walk us through what influences acceptance at each level and where maybe also organizations that are stalled, where should they go and look at what can they influence?
Elisa (14:10)
Yeah, definitely. So you mentioned that there are these three levels, and actually I usually like to think about it more as four levels, because you're absolutely right, there's this individual level, the organizational level and the societal level, which basically goes from a small to a bigger community. But I also think that the technology itself is a dimension that weighs into the acceptance because
Valeriya Pilkevich (14:19)
Mm-hmm.
Elisa (14:34)
We talk about AI very broadly right now, but this can also range a lot. This is also very influential. For each dimension, there's a variety of influential factors that weigh in. I'm happy to give you some examples, but to go through all of them would probably take us a very long time.
Valeriya Pilkevich (14:39)
Yes.
Elisa (14:55)
For example, we just talked about the connotation of how you talk about AI, or what is associated with AI, with the term itself. And this is actually something that is influencing AI perception, and also AI acceptance, on a societal level. So it depends which ecosystem you are actually embedded in. And there can be a very negative
connotation of AI, basically, and this is something that is surrounding us in the cultural embedding that we are in. So this is a factor that comes in at the society level. If we think about the organizational dimension, what can have a pretty big influence is whether AI is actually seen as a strategic imperative:
do companies actually see and think AI is important to implement and to use and to just incorporate into the company's strategy, or not? Of course, that's a big thing. I mean, of course, it's a very dynamic field. And nowadays we can actually almost not imagine that there are companies that don't have AI in their strategy, but there actually are a lot of them. So yeah, that's striking.
And even then, as I said, you know, given that you have this strategic imperative and you say, okay, AI is important for us and we want to use it, then you also actually, as a company, have to allocate the resources, right? So if you have a strategy, you also have to, you know, let actions follow that kind of bring the strategy to life. Otherwise it's just, you know, on paper. And that's what I also see a lot in companies: that, you know, people say, well, we
we think it's important, you know, we had this kickoff and we, you know, we have a new strategy paper and it's in there. But how do you actually embed it in the culture of the organization, of the company? And that's much harder than just writing it down and, you know, having a mission. So for a lot of companies that I have consulted, I actually saw that they would like people to use AI.
But one of the influencing factors here, one of the barriers basically is if the workload is very high in those companies and they don't really have a strategic prioritization in terms of really financial resources and temporal resources allocated to this topic, then it won't fly, right? Because people at the beginning, if you have to learn new skills, you have to readapt, you have to unlearn habits that you had before and now you have to do it in another way.
Elisa (17:26)
which in the end might be much more efficient, but on the way, you know, while you're learning it, it usually takes some more time, because you go by trial and error, you go back and forth, you know, you don't have this efficiency, the productivity increase, right away, right? You have to work on it. You have to kind of get better at prompting, you have to get better at, you know, understanding which cases are good, which way can I use it, how.
So this is a learning process. And if this is not acknowledged as a learning process by the organization and the time is not allocated for that, then this usually is a big barrier on the organizational level. Yeah, then on the individual level, I mean, there is really a lot of stuff going on in terms of, you know, everybody is individually
very different in terms of how we are socialized, what our age and cohort is; even gender makes a difference. And all of this weighs into how we perceive AI; we have different personal tendencies. Some of us are maybe more autonomous and they love to be autonomous, and others love...
security much more, and they like to have it safe, they like stuff that they know, and others are more open to innovations and they are maybe more risk-taking, you know. And this can go on much longer in terms of, like, personality traits and how they actually influence how we perceive AI, and also how open we are to actually engage in using AI.
This also, of course, goes hand in hand with skills. Like, do we have the skills to do it? Do we still have to learn it? Are we eager to learn new skills? Or are we just like, no, I'm actually pretty good with what I've done for the last 10 years, I want to stick to that, right? Yeah, so there are a lot of human barriers that can factor in. Yeah, and last but not least, on the technology dimension: in general, even here, in a company you can also influence
how the technology is seen, right? So if you have a certain AI agent or assistant that you would like to introduce, this is basically basic marketing, actually. But a lot of times I also see, actually for startups that are very techie, you know, they have all of this tech knowledge, but they have their own language, and sometimes you have to translate this into, you know, common-knowledge language.
Meaning that I, as a user of the AI assistant or agent in the organization, in the end I'm not interested in learning a lot about the features, about the technological background of how this works. Maybe it's good to have an understanding of the general stuff, but you don't have to be very technical about it or go through all of the features.
Valeriya Pilkevich (19:49)
Mm-hmm.
Elisa (20:11)
What I'm interested in as the end user is basically: what are the benefits, like, what do I get out of it? What are my benefits? And in communication you can actually do this, you can talk about the benefits rather than the features. It's the same thing, but it's just phrased very differently. So instead of, you know, ruminating about the, I don't know, the technological specificities that basically make something more efficient, you can just
tell people how much time they save. Basically, I don't care how it works in the end, but how much time do I save? For a lot of people it is much more effective to know that and to get communication about it; they can grasp the benefits much better if they are communicated like that.
Valeriya Pilkevich (20:53)
What's in it for me, right? How can I benefit from this training, technology rollout, and so on?
Elisa (20:55)
Absolutely,
Absolutely.
And actually this can also have different levels, because what's in it for me, for some people this means finding an intrinsic motivation, like, I want to learn this because I can actually get better, get faster, get, I don't know, have more time for other stuff that I would love to do, right? But it can also be a motivation in terms of, well, what's in it for me: it is maybe also a skill that you will have to learn in the years to come,
just to stay competitive. And I think that's also a pretty important factor that sometimes we don't have in mind on an individual level: to stay competitive. Because, you know, it's not AI taking your job, but maybe it's a colleague or, you know, another applicant who is using AI more effectively than you who is taking your job, right? So it's kind of a skill that, yeah, at some point might be necessary to have in certain jobs.
I mean, if you want to stay competitive as an organization, it's my opinion that you don't have a choice about whether or not to, you know, use AI; rather, you should think about how can I do it, like, how does it benefit my company.
Valeriya Pilkevich (22:06)
Yeah, or it comes back again to the mindset, right? It's not like, oh, we still wait and see how others do it, right? Or, no, we don't want to do it. But rather, okay, it's there, we have to do it, it's not going anywhere, so how do we approach it, right? Okay, I want to reiterate the points that you mentioned. So you've emphasized that acceptance is deeply individual. And each of us is different. Each of us has different...
Elisa (22:17)
Absolutely. Yeah.
Valeriya Pilkevich (22:31)
fears or motivations; each of us is different, as you mentioned, starting with demographics and psychographics and so on. What I often see in companies when I'm doing AI literacy trainings is that, often because of budget constraints or other constraints, it's one size fits all. Like, there is this series of workshops, two, three workshops, that we sort of have to cover for compliance, to make sure we are AI literate.
Elisa (22:36)
Mmm.
Valeriya Pilkevich (22:55)
And then we do the same workshop, same use cases, for everyone. And maybe you have a practical tip: what should organizations actually consider when designing AI literacy programs, AI trainings? So how do you move from this one-size-fits-all approach to something that actually increases the adoption after the training?
Elisa (23:07)
Mm-hmm.
Yeah, that's a very good question actually, and a question that not a lot of organizations actually have in mind. And I think that's actually crucial to think about: where can we meet everybody where he or she stands right now, right? And I know that's also very complex, but actually, AI is also helping us here to be more personalized, right? So what I would advise such companies to do is,
I mean, of course, you can do this one-size-fits-all approach. It's better than having no upskilling at all. But if you want to have it more sophisticated, or more tailored to the specific levels and needs of employees, what you could do is to kind of have an assessment, which doesn't have to be a very big thing, but just like a self-assessment, like an online questionnaire, before you engage in the training,
that might help you to better evaluate: first of all, is this employee resistant or skeptical towards AI or not? And then also, how much is he or she involved with using AI
regarding the specific job requirements. So there might also be a difference, right? And also maybe, how much is he or she interested in learning more about how it actually works? And all of this could factor into, you know, just having a calibration of who runs through which program. I would actually think about it like a modular approach, having different modules
that you can kind of mix and match to make a more suitable package for each and everyone.
Valeriya Pilkevich (24:50)
I love the idea of using technology. There are many modern learning management systems, right, like Sana, which I have in mind, that allow you to tailor these kinds of experiences, to make them individual. But then also, what I have seen working well is not just having this one AI literacy training; it could be good that it's somewhere there, but also engaging people more, right? Making a challenge or a hackathon, and then
Elisa (24:53)
Yeah.
Valeriya Pilkevich (25:13)
doing maybe some on-site workshops for functional teams, like one day for marketing, one day for sales, and then maybe doing a champions enablement. So it's not, okay, we've done sort of a literacy training and that's it, right, this one set of the same workshops that every employee has to attend; but rather, okay, how do we live it, right? On a day-to-day basis, what else can we do? But with all of it, as you were saying, you need to actually plan for it,
Elisa (25:25)
Mm-hmm.
Mm-hmm.
Valeriya Pilkevich (25:40)
with resources: both time and budget and people. Elisa, after studying all the ways humans resist AI, what gives you optimism about where this is heading?
Elisa (25:42)
Yeah.
Well, you know, actually, I think, I mean, I know my research is focused a lot on the barriers, but rather with the aim to better understand the barriers and then to kind of find solutions to overcome them.
What gives me optimism is that we as human beings are usually pretty good at adapting. Adapting to changing environments, to transformation. Of course, it takes a little bit more time for some of us and a little bit less time for others. But I think just to stay curious and stay open to...
new technologies, because, I mean, AI is not the last thing that will happen to us. There have been a lot of technological innovations that have basically had this radical nature. Just keep in mind that it's, you know, always an awesome opportunity to develop oneself and to grow and to be better. I think that's my...
my optimism, where my optimism comes from.
Valeriya Pilkevich (26:48)
Yeah,
wonderful. So growth mindset, that's what I'm hearing.
Elisa (26:53)
Yeah, and also future skills in terms of, you know, uncertainty:
not embracing uncertainty, but just being able to cope with it, because we don't know how everything will evolve; it's just uncertain times, uncertain environments. So being flexible and making the best out of that, I think that will help a lot. Yeah.
Valeriya Pilkevich (27:12)
Do you have anything else that you would want to mention, maybe something we haven't covered yet?
Elisa (27:19)
Maybe just the one takeaway, like, if someone hears this and just thinks, well, what is my takeaway now? I think the most important thing for me to convey is, for organizations, but also for individuals: just keep in mind that you should think human-centric. Like, the human is always the most important thing. And even if you have the best AI, you know, it doesn't...
help a lot if it's not used by humans. So there's a great quote actually that I would like to quote at this point. It's from Shervin Khodabandeh from Boston Consulting Group. And what he said is: "The single most critical driver of value from AI is not algorithms nor technology. It is the human in the equation." And I think that's basically the most important takeaway from my side.
Valeriya Pilkevich (28:08)
a great quote to end this episode. Thank you so much, Elisa.
Valeriya Pilkevich (28:11)
You can find Dr. Elisa Konya-Baumbach on LinkedIn and learn more about her consultancy, Humest, at humest.da. All links are in the show notes. If you enjoyed this episode, follow AI Made Simple: The Transformation Series for more conversations with business leaders, researchers, and practitioners shaping how AI is actually adopted inside organizations. Thanks for listening.