AI Made Simple

Dr. Wendy Rasmussen on Why Upskilling Alone Won't Fix AI Adoption

Valeriya Pilkevich Season 1 Episode 12

Most organizations invest heavily in AI upskilling - and then wonder why adoption stays flat. The missing piece isn't more training. It's understanding whether your workforce is psychologically ready to change how they work. 

In this episode, I'm joined by Dr. Wendy Rasmussen - licensed clinical psychologist, U.S. Navy veteran, and founder of Alpenglow Insights - who developed a diagnostic that measures the hidden psychological barriers blocking AI adoption across organizations. 

We discuss: 

  • Why "I don't have time" often masks deeper fears about identity and job security
  • The four conditions that must be in place before any AI upskilling can stick
  • How mandatory training programs can actually activate resistance instead of adoption
  • What two types of AI champions your organization needs - and how to deploy them differently 

Connect with Dr. Wendy Rasmussen:
LinkedIn: https://www.linkedin.com/in/wendyrasmussen
Website: https://alpenglowinsights.com/

Connect with Valeriya:
LinkedIn: https://www.linkedin.com/in/valeriya-pilkevich
YouTube: https://www.youtube.com/@aimadesimpletalks
Podcast: https://aimadesimple.buzzsprout.com/

Need help building AI capability in your organization? Book a call. 

SPEAKER_00

What happens when you invest in AI tools and training, but your people aren't psychologically ready for the change you're asking them to make? Welcome to AI Made Simple, the transformation series. I'm Valeriya Pilkevich, and I talk to global leaders, innovators, and practitioners who are shaping the future of work in the age of AI. In this episode, I'm joined by Dr. Wendy Rasmussen, licensed clinical psychologist, US Navy veteran, and founder of Alpenglow Insights, where she helps organizations diagnose the hidden barriers blocking AI adoption. We talk about why upskilling alone doesn't create lasting behavior change, how professional identity threat kicks in long before anyone actually loses their job, what people really mean when they say "I don't have time for AI," and why mandatory training programs can activate resistance rather than the adoption they're designed for. Wendy, it's great to have you on the show. Thank you for taking the time. Yeah, thanks so much for having me this morning. Wendy, you went from serving as a Navy psychologist, running organizational diagnostics in high-stress military environments, to leading clinical strategy at a digital health startup, to now diagnosing why entire workforces resist AI. Can you tell us more about your journey and your current focus?

SPEAKER_01

Yeah, I recognize my background's a bit nonlinear and non-traditional, but the continuous thread through it all is that I'm very curious why smart, capable people get stuck when the environment around them changes. In my training as a therapist, we learned that resistance is data. It's not just people being difficult or obstructive. So when I hear resistance, I just get really curious: what's going on there? What's really driving that? This started in my work in the Navy. Initially I was doing individual therapy and helping people get unstuck. Over time, I started working with larger groups: going in, talking to people, doing surveys, seeing what was going on, and then making recommendations to executive leadership about how they could potentially make things better on the behavioral health front. From there, I went to a digital health startup, where we were trying to build AI-enabled tools for therapists. And therapists have a lot of distrust, for good reasons, about technology. So again: how do we help them get unstuck around it, and what are their concerns? And now we're in this world of AI transformation, and the same things are popping up. People are really uncertain; there's a lot of hype, there's a lot of doom, and what's real? So my job now is to go into an organization and find out what's blocking people from engaging with the AI investment the organization has made.

SPEAKER_00

So it's really all about the change, and it is different with AI now.

SPEAKER_01

Yeah, I would say the thing that is similar to being in the military is that in the military, change is constant and you don't have a lot of control over your environment. You don't have a lot of control over your day-to-day: you have to show up, you have to do certain things. What's happening with the AI transformation is that it's kind of being thrust upon the workforce, right? You had this job that you took maybe a year or two ago, or even longer ago. And now all of a sudden leadership is saying, hey, we're expecting you to completely redesign how you work. To feel like you don't have a lot of control over that is really stressful, and it's really destabilizing for some people. You have this sense of, I don't have a lot of autonomy at my job right now, and then you layer that on top of the fact that the world in general is a very uncertain place right now. We have a lot of economic stress, a lot of uncertainty about where we're heading. Whatever country you're in, I think that's pretty consistent: there's just a lot of uncertainty. And that in itself doesn't give us a really solid foundation to then layer on even more transformation and change.

SPEAKER_00

Definitely. You often use the phrase "AI implementation moves at the speed of technology, but adoption moves at the speed of people." I think it captures very well how things are developing today. When you started consulting companies on AI adoption, what were you seeing that business leaders might have been missing?

SPEAKER_01

Yeah, initially it's just a lot of fear of missing out, or FOMO, and leaders saying: we're under a lot of pressure from our board, we need to just roll out AI. There's also this saying of, I just want to sprinkle AI on everything. And I think that's not really effective if they want a return on that investment. BCG, for example, has said over and over that approximately 70% of AI value comes from people and processes; it's not the technology. Leaders are also under a lot of stress and a lot of uncertainty. Everyone's crunched on time, everyone's crunched on cognitive capacity: how much bandwidth do I have to roll this out? So asking leaders to slow down and make sure they're bringing their people with them can feel like a big ask. But the companies that are doing that well are having a much better experience, I would say, and really focusing on: how do I help my people learn how to use AI to create new value, so that they can hang on to the headcount? It's not just efficiency and cost savings through cutting headcount; it's taking the people we already have, helping them redesign their workflows, and helping them think creatively about what else they could be doing if they're able to augment the work they do day to day.

SPEAKER_00

So, what does bringing people along mean for you? Because sometimes we talk about communication being important right from the top, from the CEO: being transparent about the expectations. Is the new expectation that everybody is using AI? Is the new expectation that you don't get headcount unless you can prove that AI cannot do the work? So beyond communication, what for you are the other parts of bringing people along on the journey?

SPEAKER_01

Yeah. You'll hear me and others working in organizational design and AI transformation talk about psychological safety, because we're asking individual employees to figure out: how else could your job look? We're going to pay for these tools, we're going to pay for this software, but it's on you to figure out, in your own workflows and in your department, what it actually looks like to get to a successful place. And that requires the ability to experiment and learn together. So I would say what's missing is the conditions for behavior change that we already know from psychology.

Psychological safety is the big one, of course. If people don't have the safety to experiment, they're not going to be able to figure out what else their workflows could look like, how these tools could be used to make themselves and the rest of their department more efficient if that's the goal, more creative if that's the goal, or to free up time just to figure out what these new workflow designs could look like.

I think we also need clear skill pathways. These have to be specific to a function, because a clear skill pathway for, say, a software engineer is not the same as one for your sales team. So figure out what those skill pathways could look like. Maybe you're someone who's really good at managing agents: let's figure out a pathway for you to become a leader doing that. But maybe you're someone who actually likes to lead people, so what's a separate pathway for that? You want to make sure you're getting the right people in the right spaces once they become AI-augmented.

You talked about leadership transparency, and I think that's also missing at times, because leaders are worried about saying the wrong thing. They're worried they'll make promises they can't keep. So they also need the safety to be transparent about what potential headcount cuts could look like and what our new metrics are going to look like in this AI-augmented space. We want to make sure everyone's driving towards the same goals. Leadership also needs to be transparent about what your new roles could look like: these are the types of roles we're looking for. I know Zapier has been pushing really hard to be AI-native across the board. They're screening for it in hiring, and they've been very public about that. That kind of transparency, this is the type of company we want to be, is really important, so that the individual worker also knows: oh, this is the direction I should be aiming in as I'm learning how to use these tools.

And then freeing up some protected learning time goes a long way. This isn't just an hour a week or a one-time, one-day workshop. It has to be ongoing: yes, you'll have some individual protected time, but your team also comes together and shares best practices, use cases, and what's working across your department. What happens if you don't have that is that learning time gets pushed into personal time, and it also gets pushed into shadow usage. People may not want to spend their free time on the tools that are provided, so they may go find the tools that work best for them. That's also a risk when you're not protecting learning time.

SPEAKER_00

What I find very interesting, and what I actually observe happening a lot, is that the focus is more on the upskilling, and the upskilling is based on the way we've always worked: we used to do this reporting, or these emails, or this data and these documents, and so on. And that's good. I'm already applauding companies when they do upskilling, structured upskilling, upskilling for different types of people and different stakeholder groups. But fewer, in my experience, are thinking about what the roles will look like in the future, say a year from now, or ten years from now. Do we even need to learn how to draft emails with AI if at some point in the future there will be an agent that does it for us automatically? Why do we need to learn that tool now? So I really like that you mentioned this is something each individual has to think about, but also that the organization has to provide the pathways. And this is unfortunately something I'm seeing not that often. How has your experience been?

SPEAKER_01

Yeah, it's really interesting. There are senior leaders of large enterprises, mostly from what I've seen, who are a few years out from retirement. So there's less motivation, potentially, to learn this whole new technology and figure out how to roll it out across an organization. It's like: okay, we're going to come in, we're going to do the upskilling, and then I'm just going to ride this out until I ride off into retirement, basically. Then you have senior leaders who are actually really interested and invested, and I see this more at mid-market-size companies, because I think the transformation is a little more personal and it's a more manageable size of company. Those leaders want to know how this all works, and they're pursuing individual coaching, they're pursuing their own kind of upskilling. If leadership doesn't understand how this technology works, they can't really have input on shaping what it could look like in the future. So I push really hard on this: it's not just your ICs and your middle managers that need to understand how this works. It needs to be top-down and bottom-up. It's really important that those senior leaders also understand how this works.

SPEAKER_00

Wendy, literally every second guest on this show says the same thing about leadership needing to understand these things first, before they move on to strategy or tools or reskilling or upskilling.

SPEAKER_01

That's what pops up. One of the things my diagnostic looks at is organizational enablement: do I, as an employee, feel like management understands the problem space, and that management is giving me the tools and the time I need to make this happen? One of the key questions I ask is: what do you wish your manager knew about your work with AI? That gives us a lot of good information for finding out when the perception is that management doesn't understand how it works. Because it's like, well, why am I going to invest even my personal time to learn how this works when you don't understand how it works yourself? So yes, it definitely has to be modeled. It also helps to have leaders share: these were my attempts at creating new use cases, here's what didn't work, and here's what I learned. That's also what builds psychological safety, the safety to experiment. It has to be modeled from the top.

SPEAKER_00

We already mentioned your framework. Your AI Readiness and Enablement Diagnostic measures four specific conditions: threat and confidence, bandwidth and overload, perceived usefulness, and organizational enablement. Can you walk us through those? And if a business leader, a CEO, is listening right now and thinking, we rolled out Copilot six months ago and usage is stuck at 25%, which one of these four would you look at first?

SPEAKER_01

Yeah, great question. I'll walk through the four domains first. The first one is psychological or workforce readiness. This looks at things like identity threat, job threat, anxiety around job loss, and confidence: do I actually feel confident enough to even learn this new technology? The second is bandwidth and overload: do I have the cognitive space? I'm already stretched thin in my day-to-day at work, so where am I going to find the time, and do I even have the space in my brain to figure out how this works? Can I absorb all this change? Third is motivation and perceived usefulness: do I actually think AI is going to help me do my job, or does it feel like it's just been imposed on me? And then organizational enablement goes back to that question: what do I wish my manager knew? Do I feel like we have good governance structures in place? Do I trust that we, as a company, are handling this rollout responsibly?

Now, to the scenario: all right, we bought licenses for Copilot, we're getting 25% adoption, it's not looking great, we've done the upskilling, so why isn't anyone using it more? I'll say these are not the only reasons someone might not be using Copilot, but they are the necessary conditions. If someone doesn't have all these pieces, it doesn't matter how good the upskilling and the tool itself are. So I actually wouldn't pick one specifically, because it could be a mix of all four, and it could look different across each person and each department. Your engineers are probably high motivation but potentially low bandwidth, and, depending on the organization, they have a decent amount of psychological safety because they're used to building. Whereas with marketing teams, I think: high motivation, but anxiety related to being replaced by the very tools that are being trained to do their job.

There's also a competence-threat piece. In some organizations, and there's research that has found this, using AI actually makes you look incompetent to your peers, especially if they're not using AI. There's also a really fascinating gender dynamic here. One very large study published in HBR looked at software engineers, and the ones who did not use AI rated their colleagues who did use AI as roughly 9% less competent. And there was an extra penalty if you were female: women using AI were rated about 13% less competent. So the idea is, let's not cater to the non-adopters, right? How do we change the behavior of the non-adopters so they see what else they could be doing if they integrated AI into their role? These are the kinds of things people aren't necessarily thinking about when they roll out upskilling and tools, or they just don't know how to address them. So what I do is give each department its own heat map showing where they score on the four domains, and then I give very actionable recommendations for each department to help get people unstuck.
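
The diagnostic instrument itself is proprietary, so the following is only a rough sketch of what a per-department, four-domain heat map like the one Dr. Rasmussen describes might look like in code. The domain labels follow the episode; the departments, survey items, and 1-5 scores are entirely invented for illustration.

```python
# Illustrative sketch only: not the actual diagnostic. Hypothetical 1-5
# survey responses, grouped by department and readiness domain.
from statistics import mean

DOMAINS = [
    "workforce_readiness",    # identity/job threat, confidence
    "bandwidth_overload",     # cognitive space, time (low = stretched thin)
    "perceived_usefulness",   # motivation, "will AI help me do my job?"
    "org_enablement",         # management understanding, governance, trust
]

# Made-up responses: {department: {domain: [individual scores]}}
responses = {
    "engineering": {
        "workforce_readiness":  [4, 4, 5, 3],
        "bandwidth_overload":   [2, 2, 3, 2],   # high motivation, low bandwidth
        "perceived_usefulness": [5, 4, 4, 5],
        "org_enablement":       [3, 4, 3, 3],
    },
    "marketing": {
        "workforce_readiness":  [2, 3, 2, 2],   # anxiety about replacement
        "bandwidth_overload":   [3, 3, 4, 3],
        "perceived_usefulness": [4, 5, 4, 4],
        "org_enablement":       [3, 2, 3, 3],
    },
}

def heat_map(responses):
    """Average each department's scores per domain and flag the weakest one."""
    for dept, by_domain in responses.items():
        scores = {d: mean(by_domain[d]) for d in DOMAINS}
        weakest = min(scores, key=scores.get)
        row = "  ".join(f"{d}={scores[d]:.1f}" for d in DOMAINS)
        print(f"{dept:<12} {row}  -> focus first on: {weakest}")

heat_map(responses)
```

Even this toy version makes her point concrete: the "look at first" answer differs by department, so the intervention has to differ too.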

SPEAKER_00

Wendy, you talk about the difference between what people say and what is actually happening: things like "I don't see the value," "the tool slows me down," or "I don't have time." Your framework maps each of these surface signals to a different root cause. Can you walk us through a couple of those translations for our listeners?

SPEAKER_01

Yeah, this is where the mixed-methods approach is really helpful. I get quantitative data, but I also get some open text, so I get qualitative data as well, which gives me a little more clarity on what the quantitative data is saying. Take "I don't have time for this." On the surface: okay, bandwidth, let's just free up some time. But that might just be what they're saying to get people off their back, or to push back; it's a very safe answer. It could actually be low perceived control, and this goes back to our need for autonomy to feel like we're in a good place. If someone feels like AI is being imposed on them and they're not choosing to use it, their role identity is potentially being threatened. It's easy for me to say, I don't have time for this. It's less easy for me to say, I don't want to do this because the tech might take my job, or I might get fired or laid off as a result. So the intervention differs. The intervention for "I don't have time" is just: here's two hours of time. The intervention for "I don't feel like I have control over the situation" has to be more supportive of helping them build a sense of autonomy. Let them choose their entry points. Reframe AI as capability expansion, not replacement: we're going to let you actually control parts of this, we're not planning to lay off half your department, so go forth and learn and create new value and figure out what else your job could look like. Then there's "I tried, but it wasn't useful." This one is interesting because on the surface it sounds like the tool isn't great, but it could just be a workflow mismatch: different tools are needed for different workflows. This is where you can do some task-level integration mapping. What that looks like is identifying three high-frequency tasks for each role and then figuring out where AI actually creates immediate value. And again, it's going to look different depending on the department and the function.
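
The episode doesn't spell out how task-level integration mapping is recorded, so here is one minimal way it could be captured, following her recipe of roughly three high-frequency tasks per role with a flag for where AI creates immediate value. The roles, tasks, hour estimates, and notes are hypothetical, not from the episode.

```python
# Hypothetical sketch of task-level integration mapping: for each role,
# list ~3 high-frequency tasks and note where AI creates immediate value.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float   # rough frequency/effort estimate
    ai_value_now: bool      # does AI create immediate value here?
    note: str               # suggested workflow, or why to keep it human

role_map = {
    "sales": [
        Task("draft outreach emails", 5.0, True,  "template + CRM context"),
        Task("call prep research",    4.0, True,  "account summaries"),
        Task("contract negotiation",  3.0, False, "judgment-heavy; keep human"),
    ],
    "software engineering": [
        Task("code review",           6.0, True,  "first-pass review assistant"),
        Task("writing tests",         4.0, True,  "generate, then verify"),
        Task("incident response",     2.0, False, "too high-stakes for now"),
    ],
}

# Summarize where the immediate wins are, per role.
for role, tasks in role_map.items():
    wins = [t for t in tasks if t.ai_value_now]
    hours = sum(t.hours_per_week for t in wins)
    print(f"{role}: {len(wins)} immediate-value tasks, ~{hours:.0f} h/week in scope")
```

The point of a map like this is diagnostic, matching her "workflow mismatch" translation: if someone says "I tried, but it wasn't useful," check whether the tool was pointed at a task where it actually creates value for that role.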

SPEAKER_00

I want to come back to what you call professional identity threat, because you've highlighted it as an often underestimated barrier to AI adoption. Can you break that down? What does it mean? What are the consequences? And what are the ways to counteract it, or to bring these people along on the journey?

SPEAKER_01

Yeah, and listen, I think senior leaders are experiencing this as well, right? The world is changing around them. Again, this goes back to autonomy: I don't have a lot of choice or control, because this is changing everything at work. So what does it mean when I've spent 20 years developing expertise? Maybe I've stayed in the same job, but maybe I've also changed jobs. We do that a lot in the US; the newer way to get promoted is you go to the next company, and when you want another promotion, you go to the next company after that. So I've worked at all these different places, I've built all this expertise, and now, and this is more for knowledge-work roles, you're asking me to train an agent or build an automation that makes the rest of the department more efficient, but that wasn't in my job description. What does that mean for me?

So I've been thinking a lot about the social contract between employer and employee, and what that's going to look like going forward. We saw this come up with the writers' strike in Hollywood: okay, if you're going to use all of our IP and our creative work to be able to write the perfect script, how should we be compensated for that? So compensation is one way to think about it. A proposal I've heard floated is: what if an individual employee arrived at a role with their context already, maybe as a markdown file? There are a lot of different ways to think about this, but this is part of the redesign that has to happen, because it isn't just about leasing time from workers like it used to be. Should we be leasing expertise as well?

In terms of identity threat itself: most roles aren't going to be automated, but that's kind of beside the point, because the threat to identity kicks in before anyone actually loses their job. I think the way we work with this is we just have to talk about it openly. We have to give people space to say, hey, I'm really concerned that the direction this job is going in is not what I signed up for. And have employees be part of that conversation, because the more you involve them, the more empowered they'll be to inform what the new organizational design looks like. So again, any way you can bring people to the table, give them autonomy, and have them be part of the process is really helpful.

SPEAKER_00

I really like the idea of involving people, of making it not "I will redesign your role" or "I will tell you what your role will look like one or five years from now," but rather: let's look at what you're doing right now, let's look at what AI can potentially take over. Tell me, what are the things you hate doing in your role? Let's create an agent or an automation together that will do those parts of your work, maybe 20 or 30% of it. And then, what do you like doing instead? Do you like technology? Do you want to do the agentic operations? Maybe you want to do something more with people, or take this time for training and developing yourself. So I think there are many options. One of the guests here also said it's often a lack of imagination, in a company or in our own heads, and there are different ways you can do it. And I like that you've emphasized multiple times during this conversation that you can bring people with you, and that they can design this future for themselves in the best possible way, as they see it.

SPEAKER_01

Yeah, I've been talking a lot recently about meaning at work and how it's going to evolve for people. From my background working in healthcare: when we look at quality, one of the elements was the provider experience. Did they still have meaning in their work? We find that if they do, it buffers against burnout for healthcare providers. And I think the same idea applies here. If my role is going to evolve, if the expectations of me are going to evolve and my professional identity is going to shift, then how do I find meaning in work? Engaging people in that conversation, like you said: what would you like to do? What are the pieces of this role, this work you've been doing over time, that you really like? What do you not like so much? Another thing we found, looking at who the AI champions or AI change agents are at an organization, is two personas. One is: I really like to dig deep on a single problem that I have, and I'm going to stay up till 2 a.m. trying different tools and mapping it out until I figure out how AI can help me solve this problem. And then you have the people who like solving other people's problems; these are more your community builders. You need both types. The type that goes deep: don't ask them to manage people, ask them to work on problems. So if you're going to have a group of AI champions, it's: okay, work with this team and figure out how to help them solve their problem. And then the propagators, as I like to call them, will take all these different use cases and new knowledge and go out and spread the word and help other people solve problems.

SPEAKER_00

Related to this question about training and upskilling, you've mentioned that mandatory training programs produce compliance. You have your workshop, or even a learning journey of four or six or eight workshops, which is already much better. But how do you then produce real, lasting behavior change?

SPEAKER_01

Yeah, I agree with that statement: mandates produce compliance. And over time, for some organizations, it is working, but it doesn't necessarily create lasting behavior change. This is something we've known for a long time. Harvard research has found that these mandatory programs can actually activate resistance and resentment rather than the behavior they're trying to create. People don't like being forced to do anything, right? So again, I'll go back to autonomy: volition matters. People adopt faster when they get to choose to engage, not when they're forced to. You have to create a few different entry points and then let people decide for themselves which one works for them. Your person who likes to dive deep into a problem is probably more self-motivated and could do some autonomous learning, maybe software-driven coaching. And your people people might learn better in a group setting, where they're engaging with their peers and talking through different problem spaces.

Second, social modeling definitely matters, going back to what leadership is doing and what leadership is modeling. You need people visibly using the tools, and permission to experiment. So as a department head, I'm going to come up and talk about it: there was this workflow, I thought an agent could do it, I stayed up till midnight working on it (maybe don't model that part so much), I really dug in, put in all this work, and at the end I realized this was not the right thing. A simple automation would have solved the problem, and I overcomplicated it. That social modeling really matters.

Protected time: I will repeat myself. If this is valuable to your organization, you have to make the space for people to learn it. And, necessary for any kind of organizational change, psychological safety. People need to be able to talk about how they failed, ask what feel like stupid questions, and produce bad outputs without feeling like their peers or their management are judging them. We also need to think through the productivity curve. If you're rolling out new technology, productivity is naturally going to dip for a while as people figure out how to use it. Everyone should understand that that's a normal part of the process and that managers won't be penalizing people for it.

SPEAKER_00

Thank you. Is there something we haven't discussed that you want to share with the audience? Maybe advice for a CEO who's about to sign off on a major AI transformation investment?

SPEAKER_01

Yeah, I think those last four points are important, but what I see happening is companies throwing a lot of money at upskilling and tools without actually figuring out whether people are ready. I think about the garden at my house. I don't just go out in the spring, throw out seeds, and wish for the best. I go out and till the soil, I put in mulch, and I figure out how to prepare it to get the kind of growth and harvest I'm hoping for. So leaders should not be investing all this money without understanding whether their organization is ready. That's where my diagnostic comes in: see where people are ready to roll and start with pilots there, and see where people need some pre-work. Can we help them get to the point of being ready for pilots, so you're not wasting your AI investment and only finding that out six months down the road when your metrics haven't shifted at all? Wendy, thank you so much. Thanks so much.

SPEAKER_00

You can find Dr. Wendy Rasmussen on LinkedIn and learn more about her AI Readiness and Enablement Diagnostic at alpenglowinsights.com. All links are in the show notes. If you enjoyed this episode, follow AI Made Simple, the transformation series, for more conversations with practitioners and researchers shaping how AI is actually adopted inside organizations. Thanks for listening.