
The SkillsWave Podcast
Welcome to The SkillsWave Podcast, where we explore the challenges and innovations in corporate learning.
In each episode, guests from some of the most innovative businesses and educational institutions from around the world share their unique approaches to corporate learning. They provide specific, actionable insights into how they’re preparing workforces and learners for the future, and the ways they’re addressing the evolution of skills in their industries.
Why the ROI of Corporate Learning is Hard to Measure—And What To Do About It | Tom Whelan
This week we talked with Tom Whelan, director of corporate research at Training Industry. Tom is an organizational scientist and research professional with over 15 years of experience working with companies to improve enterprise and employee outcomes.
In this episode, we dig into some of Tom’s recent research, Redefining the ROI of Corporate Learning. Tom discusses more modern approaches and models to measure the effectiveness of corporate learning and provides advice on how to overcome common challenges—including mining for the right data for different stakeholders and how L&D leaders can boost learner engagement.
Resources we talked about in this episode:
1:23 - "Redefining the ROI of Corporate Learning" - https://www.d2l.com/resources/assets/redefining-the-roi-of-corporate-learning/
1:53 - "A Practitioner Friendly and Scientifically Robust Training Evaluation Approach," Richard Griffin - https://doi.org/10.1108/13665621211250298
3:43 - Kirkpatrick model - https://www.kirkpatrickpartners.com/the-kirkpatrick-model/
4:45 - Raymond Katzell - https://www.psychologicalscience.org/observer/in-appreciation-raymond-a-katzell
8:16 – Kraiger, Ford and Salas - https://psycnet.apa.org/doiLanding?doi=10.1037%2F0021-9010.78.2.311
8:25 - LTEM model - Will Thalheimer - https://www.worklearning.com/ltem/
8:52 - ISO/TS 30437:2023, Human resource management, Learning and development metrics - https://www.iso.org/standard/68714.html
22:47 – What Learners Want: New Strategies for Training Delivery - https://trainingindustry.com/research/content-development/research-what-learners-want-new-strategies-for-training-delivery/
Intro:
Welcome to The SkillsWave Podcast—a podcast for organizations that want to future ready their workforces, hosted by Malika Asthana.
In each episode, guests from some of the most innovative businesses around the world share their unique approaches to learning and development. They provide specific, actionable insights into how they’re preparing their workforces for the future, and the ways they’re addressing skills gaps in their industries.
You're listening to The SkillsWave Podcast.
Malika:
Tom Whelan is an organizational scientist and research professional with over 15 years of experience working with companies to improve enterprise and employee outcomes. He holds a PhD in industrial organizational psychology from North Carolina State University. Tom is currently director of corporate research at Training Industry, an expert resource for learning professionals seeking information about best practices and innovative approaches to developing effective training. Tom, welcome to The SkillsWave Podcast.
Tom:
Thank you for having me. It’s a pleasure to be here.
Malika:
Absolutely. So let’s get right into it with our first question. Tom, why do you think that businesses struggle to measure the return on investment or ROI of their learning initiatives?
Tom:
In the research that we just did and the report that’s going to come out on redefining the ROI of corporate learning, the headline finding we had was that only a third of businesses are tracking ROI. And historically, it’s always been a struggle for a lot of organizations. And I think what’s interesting about it, I mean, everybody’s always been on the struggle bus, but why they got on that bus in the first place, I think, can come from a lot of different places.
So some research I found from about 15 years ago, one of the complaints that people had was they just don’t have the tech, or the learning management system they’re using doesn’t have an evaluation function that can do what they need it to do. Nowadays, that’s largely been taken care of by a lot of platforms.
So it sort of leaves us, I think, with the classic struggles a lot of companies have had. So things like they think it’s too difficult to isolate the impact of training versus the impact of other things. I mean, it hasn’t stopped other departments, so I don’t know why it stops training. There’s issues with do we even have data that are standardized so we could compare across different things. Or that it might just cost too much. Or the stakeholders at the top just don’t care about evaluation data or they don’t know how to interpret it.
So I think classically that’s why a lot of people have shied away from trying to demonstrate ROI. And as we saw in our report data, only a third of them roughly are engaging in it now. Like I said, I think there’s been some technological easements in what it takes some of these organizations to get there. But despite all of that, they’re just still not there. And like I said, I think the reasons why they’re not there can be one of the things I listed or some combination of all the above.
Malika:
It sounds like there’s an infrastructure gap in a lot of cases. But I’m also wondering if there are differences in approach as it relates to frameworks. So one of the things that we talked about a little bit in our preparation was this idea of the Kirkpatrick Model. So I’ll get into that for our listeners who aren’t as familiar.
One of the key tools that training and development leaders have used to evaluate the effectiveness of training programs is what’s called the Kirkpatrick Four-Level Training Evaluation Model. And I know you’ve got opinions on that, so we’ll get to that in a second. But this model consists of four levels, each representing a different aspect of evaluation: reaction, learning, behavior and results. It’s been a cornerstone for many in the field for years, but there are concerns that it doesn’t represent the realities and complexities of training and development today. What’s your take on this?
Tom:
So my take on the Kirkpatrick Model is it’s been very useful. It has guided a lot of conversations for people. But I’m a fan of saying or reminding people at least, that it was something that was a product of the 1950s. Kirkpatrick, depending on who you read, borrowed it from another researcher named Raymond Katzell. And Kirkpatrick himself published something to that effect in the mid-fifties. But the original four levels, as they were, was a series of four articles that he wrote for an industry trade magazine just describing here’s some of the things to pay attention to.
So the model, as it’s called, wasn’t ever really intended to, I think, be a framework, at least not initially. And I think the reason it has been so popular is because of its simplicity. But due to its simplicity, I think it also misses quite a lot. I mean, to gauge the effectiveness of training programs, what does learner satisfaction have to do with that? Whether your employees can recall skills or knowledge at some later date after you’ve given them a training, learner satisfaction doesn’t really have a bearing on that, or at least not directly. And as for anybody that would say, oh, well, they’re correlated, researchers have tested that supposition and found that there’s no correlation between whether people like it or not and getting better organizational impact just because everybody circled the smiley face on the smile sheets. And I think a lot of people know too, that whether they circle a smiley or a frowny face could have just as much to do with the sandwiches that catering had at the training rather than with any of the actual training material itself.
So I think the realities of just what employee learning looks like today just needs different and updated approaches, and I think ones that take into account some of the complexity that just exists in the business world. I mean, it’s an easy knock to make, but one of my favorite knocks against the Kirkpatrick Model is it was created before we put man on the moon. It was created before most people were able to take commercial airplanes. It was created before we all had computers on our desk, before the internet existed.
So it’s weathered a lot of those developments over time, but has it ever really been sufficient? It’s been, I think, the best option that a lot of learning leaders have had available to them, but that doesn’t mean that it’s the best option. It’s just the most convenient one, I think, for people to put their arms around.
Malika:
And I think it’s interesting that you’re saying that too, given your academic background as well as the research that you’ve been doing at Training Industry and before it as well, because you’ve had a sense of the different types of trends that have happened across companies. And there’s some sense that sometimes people are reaching for things because they don’t know where else to turn. So if we’re not recommending Kirkpatrick as the model that leaders should turn to, where else do you recommend they go to?
Tom:
Granted, Kirkpatrick is not the only person that ever tried to throw together one of these models, it’s just the one that gets used all the time. There’s a couple of other popular models that I’m familiar with. There’s one authored by Kraiger, Ford, and Salas in the nineties that some people have found useful. There’s the, I can’t remember what the acronym stands for anymore, but it’s the LTEM model created by Will Thalheimer maybe five years ago or so.
So there’s been no shortage of stuff that’s out there, but none of it really seems to, I think, grab people because it doesn’t have the easy-breezy attractiveness of, oh, it’s just four levels and that’s all you need, and you have everything.
But ISO released, the name is just lots of fun, the 30437 standards for L&D evaluation, I think in June or July of this year. And at least as far as Training Industry is concerned, it’s finally offering up, I think, a different way for people to think about training evaluation and to look at how they approach evaluation in their own organizations, and in a way that tracks, I think, with what more organizations need.
I mean, to begin with, it does away with the concept of levels, because at least why, why do we need it arranged in levels? Just because it was? That doesn’t seem like-
Malika:
It’s almost like linear in steps as opposed to parallel measures.
Tom:
And there’s been plenty of substantiation over time that it isn’t linear and that the levels don’t always… You can have great outcomes in one and terrible outcomes in another, and they don’t necessarily need to coincide with one another. Just because you have good findings at level two doesn’t really tell you a whole lot about level four. I mean, we all cross our fingers and hope it does, but it doesn’t really give us something to sink our teeth into. Whereas the ISO standards, I think it requires people to break a little bit how they think about training evaluation because it starts people right in on thinking about the stakeholders.
So who is this evaluation data for? These metrics that are being collected or reported, who are they being reported to? Because a learner is going to not want to know a whole lot of stuff, but they’re going to want to know something. Certainly not as much as the head of L&D is going to learn or what they are going to want to get out of the metrics. And whoever’s sitting in the C-suite, they don’t care about all the stuff that the head of L&D cares about, and they probably don’t care about anything that the learner cares about either.
So all three of those stakeholders, and there’s more than those three, but all three of those stakeholders have different data needs, or at least what information would be helpful to them in terms of is training worthwhile, is it doing anything, is it good, is it bad, what can be said about it? They’re not all on the same sheet of music in terms of what they care about.
And so I think just right off the bat, that being part and parcel of how ISO is suggesting that organizations approach this, to me is already a win. I mean, if we look at Kirkpatrick, you have level 1, 2, 3, and 4. Who are they for? I mean, I’ve talked to a lot of practitioners and stakeholders, and the common sentiment is there’s nobody in the C-suite that cares about level one or level two. Maybe you can get them to care about level three a little bit. If they care about anything, it’s level four. But if you walk into their office and start talking about level four, do they know what you mean?
So there’s I think an adjustment that the ISO standards allows people to make and gives them a different language to talk about. So getting away from concepts of levels and oh, I have good level two results, it’s fantastic. And that means what? That impacts what? How efficient was that training? It gives you something, but is that something actually worth anything? What can you do with it? Is it useful?
Now, granted, the ISO standards aren’t just four simple levels, so it’s a little bit harder to jump into the pool with them, but I think it updates a lot of… It doesn’t fix everything, but it updates a lot of things that I think were shortcomings of the Kirkpatrick Model and at least the way that the Kirkpatrick Model got applied over time.
Malika:
I want to go back to something you said a little bit ago, which is this idea that we’re not trying to measure how much people liked going to the training. It’s this idea of how effective was it at meeting key performance indicators? Basically, how do you measure success, and is that definition of success consistent and valuable across different parts of the business?
One of the things you talk about in your research is this disparity between businesses who find it challenging to demonstrate ROI in corporate learning, so that’s 46% of survey respondents, and those who consider their corporate learning programs a success, which is 61% of respondents.
What insights do you think we can draw from that inconsistency? And how do you think that businesses should be defining and measuring success in corporate learning instead? What variables should they be looking at?
Tom:
Great question. And I feel like that has been the million-dollar question for a couple of decades.
So to me, the insights that we can pull out of the inconsistency is that… I mean for starters, one of the things that is I think clearly drawn in the research report is we find some of this disparity because there are inconsistencies in how people define success and how people even define what ROI is. There’s a whole section in the report where, I mean, we don’t poke fun at them, but we display… We ask people, how do you define ROI? And we got a range of different answers. Everybody doesn’t come at it the same way.
So I think because we are absent a universal definition of what success could be or what it might look like, I think that’s part of why we see the inconsistency that we saw in the data, that there are people that are wrangling with how do I show return on investment, at the same time that they’re trying to stand up effective training programs.
And I think maybe the other kind of boogeyman hiding in the closet with this is just the idea that with these metrics and with the wide variation in definitions, you end up, I think with a lot of learning leaders that are looking at success in terms of not maybe what should I demonstrate, but what do I have the ability to demonstrate? What can I demonstrate given what I have? So they’re looking to show worth and trying to glue it together with bubblegum and wishes based on what’s accessible to them.
I think it also bears mentioning as well that you can have a hard time demonstrating ROI and still have “effective training.” It’s just then I think it begs the argument, well, if you can’t judge ROI, how are you judging effectiveness? Your measurement is good here, but over here it falls down. Totally plausible, it can happen, but if you have a broader evaluation vision and a strategy for how you’re going to try to pull metrics out of training, ROI should probably be one of the problems that is fixed or at least attacked first versus just some effectiveness metrics.
Again, people attended training. That doesn’t mean they learned. They gave it good ratings. That doesn’t mean they learned. We gave them a test afterwards and they did okay. It’s like, okay, they can perform well on a test, but did they learn? Are they taking it back to work? So I think the disparity is a function of a lot of noise and dirtiness in how these things get defined.
And in terms of what should businesses be measuring instead, again, going back to the ISO standards, it offers suggestions for what businesses could be looking at, but I think what it points a lot of companies to is thinking a lot more broadly about what it is we’re measuring, and I think classifying it appropriately. Because the ISO standards break things down into efficiency, effectiveness and outcome metrics.
And so efficiency metrics are, I think the low hanging fruit stuff that most of us think about like how many people went through a training. Effectiveness metrics, that starts to get into, well, did people like it? How do pre-post test scores look around the training? And then outcomes actually gets into impacts.
And I think when we start to talk about impacts, it always seems like a vague sort of word, but I think if learning leaders start to try to break them down and categorize them in different ways. For instance, you can look at impacts in terms of are they direct? Are they indirect? Are they intentional? Are they unintentional? What was the impact of the time spent for this training, both in terms of L&D personnel and resources? And what’s the cost of employee time to participate in the training? How much do they need to engage with follow-ups or something after the fact?
So there’s all these ways to look at what is the impact, what is the cost of this, that I think if we sort of divorce ourselves from Kirkpatrick and level four ROI being this nebulous thing, and actually get into some details, rather than it being scary or being, I think an obstacle to measurement, it should help clarify things, or at least point learning leaders in a more fruitful direction.
Malika:
I think the piece that you mentioned about how, I love the expression bubblegum and wishes holding things together, it’s great. I think you’ve highlighted actually a lot of skill sets that HR leaders and learning and development leaders also need to develop, which is this idea that we’re not operating in a silo. Our focus is really on aligning to company goals and maybe finding a better way to do more with data analytics or measuring program benefits and recognizing the cost of disruptions that you highlighted. But really making sure that there’s a broader vision for why the training is even taking place, and aligning that to strategy and tactics that can follow through.
I think one of the interesting things about, we think about the word strategy, there’s always the big S strategy, the strategic plan that comes out company-wide and is shared potentially with stakeholders or the board or put onto an intranet website for everyone to look at. But I think sometimes what we neglect to talk about is the fact that every business unit also needs to have a strategy and vision for what they’re trying to accomplish, and align that back to what the broader goal is. Otherwise, you can’t really ensure that you’re rowing all in the same direction.
Tom:
Precisely. Yeah. And to that end, thinking about how we’re evaluating training and what metrics we need in terms of the stakeholders, rather than what level we’re on, exactly to your point, that can get businesses a lot closer to where they need to be and where they’ve wanted to be all along.
Malika:
That’s right. And the training isn’t just coming out for no reasons. I mean, hopefully there is a reason that it’s happening. And I think it can look really different if there’s something that’s safety or compliance motivated, than maybe the metric is around, okay, how many incidents have we avoided, or are we in compliance with all of the regulatory orders and we’re avoiding fines or things like that. And are we preparing our workers as technology is changing so quickly, to be continually safe and compliant with all of those needs?
It gets a little bit different, I imagine when you get to human skills like trying to develop critical thinking or communication. As you mentioned before, it really is about also measuring the practice of those skills in the workplace and trying to recognize that the training doesn’t just end after you leave the physical or virtual classroom. It has to be continually tested. And not tested always in such an explicit way, but having some kind of way to keep the learner engaged and interested in and continuing to use some of that information.
Some of the explicit challenges that businesses shared in your research were around keeping learners engaged with programming and sustaining the impact of that learning. Could you provide some practical advice for organizations looking to enhance their learner engagement and ensure the long-term effectiveness of their corporate learning initiatives?
Tom:
So for organizations looking to enhance learner engagement, one of the pieces of research that we’ve done specifically looked at learners, what drives what they care about in training, and why they decide to engage with training initiatives. One of the things driving engagement oftentimes is just access. And so offering a training in different modalities can help engage a lot of learners. Now, obviously if the content is bad, then you have a different set of issues to deal with.
But to get learners engaged, I mean, if we think about all the different roles and functions in just a normal, standard organization, it doesn’t matter what industry it’s in, you’re probably going to have salespeople, marketing people. You might have some R&D, you might have manufacturing, maybe, maybe not. You’re going to have some administrative. It’s going to be a whole bunch of different jobs.
And for those job roles and the structure of the tasks during the day, some of them might be fine with or might prefer like, yes, take me out of this world and put me in a classroom where I can engage with it there. Others might be like, I don’t want to do that, or I can’t afford the time to go do that. Give it to me in another modality.
So sometimes engagement is more about the willingness of a learner to take the training in a format, more so than there’s some other sort of magic carrot we have to dangle in front of them to get them to want to engage with whatever it is.
But I think in terms of ensuring long-term effectiveness of training, one of the things that organizations need to wrestle with is this uncomfortable truth that learning and satisfaction aren’t always necessarily correlated together. And when they are, they tend to be negatively related, which is not the most fun thing to tell people.
So when people are learning more, they aren’t necessarily always enjoying themselves. If they’re enjoying themselves, they might not be learning that much. And yes, there’s exceptions to this. And you gamify learning, it’s hopefully making something that is perhaps a little stale, a little bit more fun. But in general, that learning and satisfaction relationship kind of works like that. They don’t both rise and fall together.
And I believe some of why in L&D we’re hesitant, I think to engage with that or those sort of facts about learning is that it’s about a better understanding of what it actually means to learn something. So well beyond, oh, this is just checking a box, we did something, it’s organizations contending with the fact that training isn’t always going to be fun and there’s no way around that.
But you also have to market your training to the employees. Employees need to understand the purpose of the training. I think a lot of people will put up with stuff if they know it’s important, if they know there’s a good reason for it, versus if you just push them into a room and go, “Go to this training,” and it’s boring or they don’t like it, then yeah, not only have they not learned anything and you’ve burned however much of their time out of their day, took them away from other work tasks, but now you’ve also annoyed them about corporate training. So next time you’re trying to get them to engage with some sort of training initiative, already their sentiment is going to be negative before they’ve even started.
So I think engaging with learners is a bit of a dance. And same sort of thing with sustainment. I mean, sustainment, I’m always interested sometimes, when learning leaders talk about sustainment, it’s like, what do you mean? What do you mean when you say sustainment? It’s not about a learner has seen information multiple times, or we can substantiate that they’ve engaged in these learning activities and so we can say they’ve sustained something, or we gave them a training on something six months back and vaguely six months later they did something that was maybe based on that training, like aha, sustainment. And it’s like, it’s not really that. You have to substantiate that employees can retrieve information from memory. That is sustainment. And so if you do it three months later, if you do it six months later, that’s what it is. And it’s fairly simple.
And ironically, assessments that gauge sustainment, so that test employee knowledge or employee skill sometime after they’ve taken a training, are the best way to help employees retain information. So it’s a weird paradox. You want to drive sustainment, assess sustainment, and that actually ends up driving sustainment.
And the reason that that is, is because in any job, yeah, we put people in training, we expose them to information and just hope it sticks. When they get to the job though, if they’re using that knowledge, they just have to pull it out of thin air and apply it to something right then and there. I mean, that’s the way we work with things in the real world. I mean, that’s the way you’re driving somewhere you don’t know or you haven’t driven in a couple of years. Maybe you look at a map, maybe you don’t. You’re doing decision making as you’re going through it.
And same kind of thing with a test. If you test an employee on that knowledge, they’re trying to remember, is it left up here, or right? And so you’re bona fide testing what they know. Versus if you just re-expose them to the information, they can go, “Ah, yes,” and nod because it’ll be familiar, but you haven’t substantiated a thing. And again, hilariously, to drive sustainment, just assess sustainment and it’ll kill two birds with one stone in a sense, I guess.
Malika:
What’s that saying? It’s like what you measure is what counts or something like that?
Tom:
Or what gets measured is what gets changed or something to that effect.
Malika:
And I think it’s the same thing with assessment. Sometimes we’re so focused on trying to get trick questions in there at the end of learning and training initiatives, and it can be really demoralizing if you’ve just spent so many hours doing something and you feel, oh gosh, I still haven’t got the answer right. Now, I have to review the whole thing again, something must be wrong.
But if you test it in a way that gets you to critically think, and as you’re saying, retrieve the knowledge, it’s a very different type of test. It doesn’t need to be 20 questions. It could maybe be three questions. Get you to start thinking about the different ways that it can be used in those areas.
And I think it’s a skill to also teach. We haven’t gotten into that at all, but the structure of the training and the way that you deliver the information is so unique for adults that are also managing inboxes that might be going off during their training or trying to figure out how this applies to their job and making those connections as they’re going through it. I think it is a skill that is hard to develop, and plays in a lot to the measurement of the results too.
So we’re seeing a lot of trends when it comes to the future of work and learning. I’m impressed with myself that we’ve made it 30 minutes into a conversation, this is the first time artificial intelligence is coming up. But artificial intelligence, technological advancements, even demographic shifts in the workplace, people are retiring and creating this whole new crop of workers. Needless to say, change is the new constant. And there’s been a lot of talk of the need for lifelong learning to be integrated into a workplace’s identity. I think it’s sometimes easier said than done though. Developing a culture as with any type of change management is very hard.
Do you have any advice to organizations on what steps they can take to create a culture of continuous improvement? And really, when it comes to lifelong learning, how do they instill that mindset that employees should be learning consistently and should be engaged?
Tom:
So if we’re trying to compel employees to think about training as lifelong learning, and engage with it as that, and to think about the training that they take as having a career impact rather than like, oh, this is just something I need for my role, or they are making me do this, whoever the they might be. And so rather than it be something that is sort of imposed on employees, I think to get people to… I’m not going to say most, but plenty of people I think see training that way. I think to shift their mindset, we need to market it to them, quite simply.
So within the organization rather than L&D just being a cost center or something that like, oh, if you’ve done something bad, now you have to go do training, rather than people see the function that way, I think if L&D goes on a bit of a marketing blitz and says like, hey, this is why we’re here. We’re here to help you in your job. We’re here to help you get to the next step in your career. We’re here to help make sure your skills don’t get irrelevant. We’re here to make sure your knowledge doesn’t go out of date. So we are almost a service organization rather than just a cost center.
And so I think if L&D thinks about what they do as a service, I mean, not that it is, and certainly from an administrative standpoint it’s not, but if they think about it that way and try to talk to employees with the same sort of tone, or at least with some of the same messaging of why should you engage with training, surprise, surprise, you might find people respond.
And so if you’re trying to get feedback from learners about what don’t you like about training or just what are their sentiments, what could the company do to improve the learning, if you’re reaching out, people are going to respond.
But I think the reach out has to happen first because I’m sure everybody, well, not a hundred percent everybody, but most people give some sort of post-training surveys, even if it’s just, did you like it, would you do it again? And an NPS score or something like that.
You can do those all day, but without the piece on the front end of why are we doing this, how should you think about this training, without that front piece, employees aren’t going to make that connection themselves, or at least not all the time. Some of them will, but maybe not all the ones that you wish would, or not all the ones that you wish would stay because they’re good, but they see this thing as a weight on their back or one of several weights on their back, rather than possibly the avenue through which they can move on to a different role or change where they are in the company and keep the institutional knowledge that they have on the payroll at least, even if they go remote or whatever the case may be.
So I think if L&D is talking to employees and marketing why the training happens, what training is, what it’s supposed to do, how it happens at company X, whatever it might be, I think if they take those steps, then from the learner viewpoint, continuous improvement or trying to build that culture, I mean, it won’t happen immediately, but I think that’s how you start sowing seeds.
Malika:
And I think it’s that piece of seeing the employee as the consumer, and ultimately the one that’s going to make the training effective or not, which is a new way to think about it, right? There’s so many variables associated with that. But trying to empower them to take charge of their own careers and see the learning and development or the training, whatever term we want to use, as part of that, I think is really, really cool and really important.
I had a leader a few years ago who used to talk about this idea of voice of the customer, as we all know, but in the specific sense that your brand should be something that’s delightful. It should be something that delights people, and that should be the thing they take away from it. And I think it’s so interesting that we started off this conversation by talking about how satisfaction is not the be-all, end-all of metrics, but it is still important to ensuring the success of the training. It’s just one thing that can’t be the only metric that is moving things forward. It’s a qualitative piece that has to complement quantitative measures as well.
Tom:
To piggyback on that, while you were talking, what struck me as well is the extent to which learners are a stakeholder in their own training, in their own learning. The ISO standard explicitly calls learners out as a stakeholder. Or, going back to what we were talking about earlier, where does it do that in Kirkpatrick? In Kirkpatrick, learners are a happy face or sad face at level one.
Malika:
That’s right.
Tom:
And then from levels two on up, they’re a cog. What did the learning do to the employee? Let’s poke them with a stick and see what happens post-test. Rather than treating them as an active agent and a participant in training, as you alluded to.
Malika:
Yeah. I love that description. What a great way to tie our whole conversation together.
We’ll end on a final note about really this culture of continuous improvement as it relates to organizations. So the original question of how can feedback loops and data analysis be leveraged to drive some of those ongoing enhancements to L&D? We’ve made huge headway in terms of thinking about data and data across the organization that can be used to sort of cobble assessments of effectiveness together. How do you think that that’s going to be able to help people not only measure what’s happened in the past, but to make improvements so that it’s effective in the future?
Tom:
So I think what the influx of metrics allows a lot of learning leaders to do, which not that many years ago, they were unable to do, is actually start thinking through logistics. So do I have the resources? I mean, everybody always had some sense of it, but at a very granular level, learning leaders can get to how are our programs doing, how are they doing in different functions?
And I think we’re moving into an age where people are tying this data together by themselves. They’re not wholly dependent on what metrics their LMS has, and if it doesn’t have them, whoops-a-daisy, we’re just not measuring those. I think there’s a lot more data sophistication in L&D departments nowadays than there ever has been.
That said, though, how do organizations best leverage feedback loops and data? I’ve always maintained that from the jump you have to set up those handshakes. Organizations nowadays are just awash in data. I mean, there’s far too much of it for people to really know what to do with. And classically, the problem that a lot of training managers had with substantiating, I mean ROI beside the point, just any outcomes, was I don’t have access to data. Or the stakeholders that have the data, whoever owns it, they won’t let me have it. I can’t get it from them.
Malika:
Or it’s not clean and I don’t know how to clean it up.
Tom:
Yeah. So I think from the jump, if L&D tries to set up handshakes with whoever that is, so more of a, what’s the mutual benefit? I can give you training data if you give me performance data on your employees. Setting up some sort of loop that benefits the other party rather than just thinking about, well, I need all this stuff coming into L&D so I can crunch my numbers. If you scratch the other person’s back, it’s probably a little bit easier to get those numbers out of them.
And I think once you set up a lot of those relationships… And we’re talking about databases, talking to other databases, but people have to tell those to do something. So whoever it is that does that, you don’t necessarily have to go buy them a coffee, but at least be friendly with them so that there’s some way that you can begin to engage with data stakeholders.
Because going back to what I said, when you have all those dots connected, you have the logistics now to actually go attack some things and think about, okay, if we’re going to drive continuous improvement, if we’re going to ask questions of our data that aren’t just monitoring what we have, or being like, I don’t know, how does it look compared to a year ago? Is it better or worse? And just kind of hoping a lot. I think if you have all these feedback loops set up, then you have the ability to move. You have the ability to say, strategically, what are we trying to do? What are we trying to do with this particular training? There’s initiative Y that we want to roll out everywhere. How successful could we be at that? Where do we have shortfalls in resources? Just to have a much better feel for what can be accomplished tactically.
But again, the prereq for all that is you have the logistics in place. If you have all the data together, or you know what data you have available, what’s even possible in the universe to lace together in the organization, from there, once all that’s together, then I think the opportunities for where we can improve things, they’re not necessarily going to reveal themselves, but it’s going to be a much less painful migraine to say, “I want to know something about training activities here at our company,” and then be able to go and actually answer that question in a substantial way, based on data that doesn’t just come from training, that is tied to outcomes from other places. Or if there’s anything there that can be trended, looking at those trends.
I’ve always thought, I know the reality from organization to organization is very different. But there are organization level outcomes that are probably easier to calculate than people first assume they are. Because if they’re thinking like, oh, I have no way to know what our organizational results are. How do I tie anything to level four in Kirkpatrick if I don’t know what it is?
And I’ve always felt like, how hard have you looked? Depending on what company you’re at, do they have an investor relations section on their website? You don’t need to go ask somebody for financial outcomes if you have them already. And so it’s like, look at them across quarters. Marry up where you started certain initiatives or where you changed things. And yes, it’s going to be a dotted line at best that you’re drawing, but that’s how you begin. Or that’s at least how you begin to ask the right sort of questions, at the right sort of scope, to pull some of these sorts of answers out of the data.
Malika:
Right. I like that so much. One of the things I was reflecting on as you were just sharing that answer is this idea that there are so many different skills that you might not typically associate with someone in learning and development, but are very much there, especially these days.
So just to recap that idea: the strategic mindset, the growth mindset that things can be improved, the curiosity element of oh, I have this problem, how can I solve it, problem solving, stakeholder management, communication, data analytics, and just intelligence gathering. It’s so much more of a complex job than the title sometimes gives credit for.
Tom:
Absolutely.
Malika:
I think it’s really great to see that you’re out there talking about measuring the ROI of corporate learning and how it’s so difficult, but how it’s not impossible. So Tom Whelan, thank you so much for your time today.
Tom:
Thank you for having me. It’s been a pleasure.
Outro:
Thanks for listening to The SkillsWave Podcast. Check out skillswave.com, Spotify and Apple Podcasts for links to the resources we discussed in this episode, related content, additional episodes and more.
Thanks for joining us.