
HR Data Labs podcast
The HR Data Labs® podcast is dedicated to Human Resource professionals hearing the latest thoughts of innovators and experts from around the world of business, focusing on HR Process, Technology, Regulations, Data and Analytics. Sometimes we may get passionate or a little carried away, but we are always fun and insightful. Podcast website at http://hrdatalabs.com. HR Data Labs is a registered trademark of David Turetsky. Reg. U.S. Pat. & TM Off.
Martha Curioni - How to Responsibly Integrate AI into HR
Martha Curioni, People Analytics Consultant at Provisio Data Solutions, joins us this episode to discuss some best practices for responsible AI implementation in organizations. She also explains the importance of integrating AI models into existing processes and allowing users to submit feedback on the AI.
[00:00] Introduction
- Welcome, Martha!
- Today’s Topic: How to Responsibly Integrate AI into HR
[05:33] What does “responsible AI implementation” mean?
- Intersection with data security and privacy
- Data governance regarding new AI processes
[14:30] What are the essential steps for responsible AI implementation?
- Redesign HR processes around the newly implemented AI
- Checking that the new AI is accurate and reliable
[26:59] How can organizations avoid training AI models on bad data?
- Building unbiased AI systems
- Implementing user feedback mechanisms
[35:33] Closing
- Thanks for listening!
Quick Quote
“Companies need to ensure that the AI they’re going to be using is implemented in a way that is transparent, minimizes the influence of bias, supports fairness, and empowers employees and managers to make better decisions.”
Resources:
Contact:
Martha's LinkedIn
David's LinkedIn
Dwight's LinkedIn
Podcast Manager: Karissa Harris
Email us!
Production by Affogato Media
The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources. Listen as we explore the impact that compensation strategy, data and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data, technology, and consulting for compensation and beyond. Now here are your hosts, David Turetsky and Dwight Brown.
David Turetsky:Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky, alongside my best friend, co-host, and partner at Salary.com, Dwight Brown. Dwight Brown, how are you?
Dwight Brown:I am wonderful! How you doing, David?
David Turetsky:I'm okay. Well, we just got over some health scares, which is good, because today we're talking to one of the most brilliant people we've actually had on the HR Data Labs podcast. Martha Curioni. Martha, how are you?
Martha Curioni:Hi, thank you for having me back. And I am good. It's been sunny these days, so I'm enjoying the sun while it lasts.
David Turetsky:Yes, yes, we're getting into winter. Well, we're actually getting into fall, which for a lot of us, turns directly into winter with very little lag. But for those of you who don't remember Martha, Martha and Dr Adam McKinnon were on many moons ago, and they were talking to us about how we can use machine learning to fix data problems in HR, and it was one of the most popular episodes. And we're gonna actually have a link back to that episode in the show notes, but we're also going to have a link to the code that Martha had built, and it's on GitHub, so easily accessible and extendable, and we're going to probably speak a little bit about that today, but more so we're going to get into another topic. But Martha, why don't you explain to some of our newer guests who you are?
Martha Curioni:Hi, yeah, so let's see, where do I start? I have an extensive background within the HR space, having started in recruiting and worked my way through the talent and workforce strategies space. And recently, or not that recently anymore, time flies, a few years back I decided to train myself as a data scientist. So that's when I learned how to code and build AI and machine learning models and so forth. And now I am working as a people analytics consultant. I do advanced analysis, I support implementation of people analytics tools, and I look at processes around AI as HR organizations are looking to implement it. So that's kind of where I am today.
David Turetsky:And one of the more interesting things about Martha, Martha, where are you located?
Martha Curioni:I am based in Italy, which you cannot tell by my accent, because I'm originally from California, but I moved to Italy about four years ago.
David Turetsky:Hashtag jealous, one of my favorite places in the world. So Martha, what's one fun thing that no one knows about you?
Martha Curioni:I don't know if I would say no one knows, because of the whole class of people that know. But being in Italy, being an expat and working remotely, there are days where the only other adult I speak to is my spouse, and I love him, but sometimes you need to speak to other adults. So I decided to sign up for a theater class, which is all in Italian.
David Turetsky:Wow!
Martha Curioni:And it happens once a week, and, you know, it definitely brings me out of my comfort zone, even if it were in English. And then it being in Italian takes it to a whole new level. But at least the extrovert in me gets a little bit of social interaction once a week. So I'm enjoying it.
David Turetsky:That's wonderful.
Dwight Brown:That's cool.
David Turetsky:That is really cool.
Dwight Brown:Now, are you fluent in Italian, Martha?
Martha Curioni:In a social setting, yes. When it comes to work, I would say a very good level, I wouldn't say fluent. But in a social setting, yeah, I can have a conversation.
David Turetsky:Well, now you're going to test that boundary!
Martha Curioni:Uh Oh, in the class? Yes. I mean, I thought you were going to have
David Turetsky:In the class! Not here, oh gosh no
Dwight Brown:I was waiting for that too. I'm like, yeah, and?
David Turetsky:No, that's about my limitation on Italian. No, we're good. We're good. So that's really cool. So we're gonna see you win a Tony Award at some point soon?
Martha Curioni:I don't know. Maybe we'll see or whatever the equivalent is in Italy. I don't know. I don't know what kind of awards they have.
David Turetsky:Actually it would be the Tony Award, because that's Italian, right? The Anthony award. Hey, Anthony, how's Martha doing? She's great. She's really great.
Announcer:If you guys can see,
Dwight Brown:oh, who let you out of your cage today?
David Turetsky:Sorry. Hashtag dad humor. So let's transition to the topic now, because this is the reason why we love doing what we do. We're going to talk about a really cool, very, very important topic for today, and that's the responsible implementation of AI in HR. So Martha, let's talk about it. What does it actually mean to implement AI in HR in a responsible way?
Martha Curioni:Yeah, so to start, let's just define what responsible AI is, for anybody that doesn't know or is not familiar with the term. Essentially, it involves the design, the development, and the deployment, or implementation if we want to use that word interchangeably, of AI in a way that helps you minimize the risks and other negative outcomes that can come with using AI. So if we translate that into an HR setting, there are some HR use cases that are lower risk, right? Maybe automating tickets and some of that kind of stuff. But there are many HR use cases, at least all the ones you hear about if you go to an HR technology conference, right, that are things like: who do we hire? Who do we promote? In some cases, who do we fire, if companies are looking to lay off employees? Or how much of a salary increase to give; I've seen people use it to inform salary increase recommendations. So minimizing risk and other negative outcomes, I think we'd all agree, is extra important given these use cases, right? And this is why I think companies really need to take the appropriate steps to ensure that the AI that they're going to be using is implemented in a way that is transparent, that minimizes the influence of bias, supports fairness, and really empowers employees and managers to make better decisions, right? So that, to me, is what responsible AI means. And that doesn't only mean picking a model, or developing a model, that offers these things, right? That's only the design and development side. The deployment side is then taking the additional steps to make sure that people are using the model in the way that it's intended, to be able to ensure that these things are happening. You can't just put the tool in people's hands and trust that they're going to use it the way that they're supposed to. That never happens.
David Turetsky:Is there another aspect to it which also goes to the data that you're going to use to train the model on? To the point I made before, what data are we using? Has it been cleaned? Do we have faith in it? Do we trust it? The decisions that were made using the data, are those things we want to actually be basing our forward-going decisions on? Does that come into it?
Martha Curioni:For sure. You know, that becomes one of the key points, right, in selecting the model, or picking a model, and the data that's going to be used. Some vendors out there maybe train it on their own data and then they want to unleash it on your future decisions. Okay, well, I don't know if that's going to work. Many organizations don't have their data in a place where they can do it with their own data, so there ends up needing to be a lot of data cleaning, a lot of data preparation and so forth. And really understanding your data first is important, I would say, even doing a descriptive analysis before you get to that point. Maybe we use the example of promotions: looking at past promotion decisions, do we see that there are groups that are maybe getting promoted less or more, or what have you, in proportion, obviously, to their share of the overall head count, right? The overall population.
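As a rough illustration of the descriptive check Martha describes, here is a minimal sketch in Python. The employee table, its columns, and the group labels are hypothetical stand-ins for whatever the HRIS actually holds.

```python
import pandas as pd

# Hypothetical HR snapshot: one row per employee, with a demographic group
# label and a flag for whether they were promoted in the last review cycle.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "promoted": [1,   0,   1,   0,   0,   1,   0,   0,   0],
})

# Share of total headcount vs. share of promotions vs. promotion rate, per group.
headcount_share = df["group"].value_counts(normalize=True)
promotion_share = df.loc[df["promoted"] == 1, "group"].value_counts(normalize=True)
promotion_rate  = df.groupby("group")["promoted"].mean()

summary = pd.DataFrame({
    "headcount_share": headcount_share,
    "promotion_share": promotion_share,
    "promotion_rate":  promotion_rate,
}).fillna(0)

# Groups whose promotion share sits far from their headcount share are the ones
# to understand before any of this history is used to train a model.
print(summary.sort_values("promotion_rate"))
```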
David Turetsky:One of the other considerations that I'd ask about is, is there also a potential issue with where the model is located? Meaning, is it on our premises, or is it in the cloud, or is it on the premises of the application provider or the model provider? And the reason I ask is because of the wild, right? Having our data and having our model and having our decisions out in the wild, and who would have access to the data, the decisions, the outcomes. Is that something that comes into this conversation as well, or is that really just kind of a, you know, don't worry about that, David, that's down the road, it's not an issue for right this second?
Martha Curioni:No, I would think it's separate from responsible AI in the way that I'm defining it. But when it comes to AI in general, it's definitely important, right? For example, I don't recommend somebody saying, oh, let me use my personal account with ChatGPT or Claude or what have you, take all this employee data, upload it, and ask it to analyze the data for me, right? Because there are a lot of risks. But that's more the data security and privacy side, as opposed to, to your point, making sure that the data is appropriate, the model does not have biases, and then it's being used as intended.
Dwight Brown:It would seem that part of that data quality aspect of things is just understanding where your data is coming from, where it's pulling from. What are the data sources you can control? What are the data sources you can't control?
Martha Curioni:For sure. And the other thing I would add, and I've gotten on a high horse about this lately, it's something that I bring up anytime I can in a conversation, are the processes that are capturing data. So many times there are processes that are designed, or sometimes just haphazardly come together, and then there's data. And a lot of times the people that are designing the processes don't think about the data implications. It's kind of, here's the process, here's what we're doing, and the data is an afterthought. What that means is, for example, if I want to look at mobility for my organization, for whatever reason, but mobility moves within the company are not captured consistently in a way that allows me to map those, then it makes it almost impossible for me to do that kind of analysis. And you can take that to promotions. If promotions are not captured correctly, were they promoted, or did they apply for another job and that job turned out to be a promotion, right? And then, if you're going to use that to inform future promotion decisions, how are you going to do that if you're not capturing the data consistently?
David Turetsky:Well, that gets to Dwight's favorite topic of data governance, right? And making sure that HR has a good data governance model.
Dwight Brown:And that's exactly it, because it really gets to that data trust factor. I think that's one of the pieces with AI that's a little bit scary, the fact that there's a big aspect of this that is just sort of a black box. You don't know how the data is being put together. Sometimes you don't even know all the data sources that you're dealing with. So it really gets to that data trust factor, and how do you get that? I think that's a key question.
Martha Curioni:So for me, one of the ways to address the trust factor is when you have Explainable AI as part of the interface, right, or the model output. Some models inherently have it. With regression models you can look at a driver analysis, or in other cases you might have to put additional tools on top. There's SHAP, there's LIME, and probably others coming out, to be able to offer that transparency, so that you say, okay, we're recommending David for a promotion, and here is why, here are the reasons that we are recommending him. That way the user can look at those and either agree or disagree, right? Oh no, that's not true about him. Or, yes, that's true, but that's not a factor we want to consider in this case, whatever it might be. That's how you, A, address some of the trust issues, and, B, again, it goes back to the AI shouldn't be making the decision. The human should be making the decision, and by empowering them with that information, that's how you ensure that that happens, so that, again, they're using the AI as intended.
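To make the "here is why" concrete, below is a minimal sketch in Python of a per-candidate driver analysis on a simple regression-style model. The features, the data, and the promotion label are invented for illustration; tools like SHAP or LIME generalize this kind of per-recommendation breakdown to models that are not linear.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: employee features and whether they were promoted.
X = pd.DataFrame({
    "tenure_years":      [1, 3, 5, 2, 7, 4, 6, 8],
    "performance_score": [3, 4, 5, 2, 4, 5, 3, 4],
    "trainings_done":    [2, 5, 8, 1, 6, 7, 3, 9],
})
y = np.array([0, 0, 1, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Per-candidate driver analysis: each feature's contribution to the log-odds of
# "recommend for promotion", relative to the average employee. This is the kind
# of breakdown a reviewer could agree or disagree with, reason by reason.
candidate = 2
contributions = model.coef_[0] * (X.iloc[candidate] - X.mean())

print("P(recommend promotion):",
      round(model.predict_proba(X.iloc[[candidate]])[0, 1], 3))
for feature, value in contributions.sort_values(ascending=False).items():
    print(f"  {feature:>18}: {value:+.3f}")
```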
Announcer:Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now, back to the show.
David Turetsky:Well, why don't we talk about that as part of the second question, which is: for HR organizations that are planning to actually implement some kind of artificial intelligence, what are the most important steps that they have to take to ensure that it's actually going to be implemented responsibly?
Martha Curioni:So the first step is something we've already covered a little bit, which is: first, check your model, right? Don't just trust the vendor, or the data scientists that you hired, to take the steps necessary to make sure that it's a good model. Make sure it's transparent. Make sure the end users can understand the output. Ideally it will have Explainable AI, so it's not that black box that Dwight mentioned. And test the model yourself, right? Run through it, see what recommendations come out. Do you notice bias? Are you seeing bias come through? Do the recommendations make sense? That's how you want to test it before you implement anything. Once you've done that and you say, okay, the model is good, I like the recommendations, then you want to be clear about your goals and objectives. How are we going to be using this model? What are the outcomes that we expect to have? Is it more fair decisions? Is it saving time for managers? Whatever it may be, define those ahead of time, so that over time you can track those measures and decide: is it working? Is it doing what we wanted it to do? And if not, why, and should we keep using it? Because otherwise you're just going to keep using something that maybe is making things worse. The next part, and this one I can't emphasize enough, is you need to redesign your process around the AI. Don't just bolt it on top of an existing process, because if you do that, there's a really big risk that it's not going to be used as intended, or it's not going to get used at all, which is also a shame if it is something that you're hoping can help make better decisions. So work through, from beginning to end, what the new process should be, incorporating AI, incorporating checks and balances, making sure there are points where the users are being prompted, so that they're not just auto-clicking through things and so forth. It actually reminds me of an example I heard. I was listening to a podcast, oh gosh, I can't remember who it was, but NASA, because they've been using automated systems for years, builds in these faults that everybody knows are there, so that you don't go on autopilot, because you know there are going to be random bad things coming up, or things that you shouldn't trust, so that people don't just go on autopilot and do things, right? So can you build a process incorporating something like that? I don't know, this is an idea that came to mind. And with the process then comes proper training, right? You don't give somebody a car without teaching them how to drive it.
David Turetsky:I don't know about that. You haven't driven in the US, maybe for a few years, but
Martha Curioni:Ideally, you would teach them how to drive it. You know the risk?
David Turetsky:Ideally, yes,
Martha Curioni:Yeah, the risks and everything, right? What to do if this happens. And then you run a pilot. One recommendation I have would be to have one group do it with the model and another group do it without, and then compare the outcomes, right? And understand, again, are we achieving the objective that we want to achieve? And then, over the long term, continue to monitor not only the outcome, but also how people are using it, as much as you can.
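A minimal sketch, in Python, of the with/without comparison Martha suggests for a pilot. The outcome metric (hours spent per promotion decision) and the numbers are purely hypothetical; the point is simply to measure the objective you defined up front against a comparison group before scaling out.

```python
import numpy as np
from scipy import stats

# Hypothetical pilot results: hours managers spent per promotion decision,
# for the AI-assisted group vs. the business-as-usual group.
with_model    = np.array([2.1, 1.8, 2.4, 1.6, 2.0, 1.9, 2.2, 1.7])
without_model = np.array([3.0, 2.7, 3.4, 2.9, 3.1, 2.6, 3.3, 2.8])

# A simple two-sample test indicates whether the difference in the outcome you
# chose ahead of time (here: time saved) is plausibly more than noise.
t_stat, p_value = stats.ttest_ind(with_model, without_model, equal_var=False)
print(f"mean with model:    {with_model.mean():.2f} h")
print(f"mean without model: {without_model.mean():.2f} h")
print(f"Welch t-test p-value: {p_value:.4f}")
```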
Dwight Brown:Yeah, one of the things that I think about with this is that the possibility of over-trusting the output is probably something you see more with AI. Because if you think about AI output, for instance, if I go to ChatGPT and put in a query, what it outputs sounds really good, you know? And a lot of times it seems on point. But if it's a topic that I don't know much about, I could be kind of star-struck with all the output and the way that it words things, and it's easy to forget just exactly what the potential pitfalls are with that. And I think that gets to what you're talking about, where there's got to be some education around it, there's got to be some understanding around it that's there up front, otherwise we end up just sort of blindly trusting it.
Martha Curioni:And there are a lot of studies that show this. There's one in particular that comes to mind, I couldn't tell you where to read up on it, but you could probably Google it, where there was a building with an alarm, like a fire alarm or something, and they had a robot that was clearly taking everybody in the wrong direction. People knew what the correct direction was, but they still followed the robot, right? So people tend to get overconfident in the output of AI, because they think, oh well, this is technology, it's been trained, and it should know better than me, even when it is, you know, faulty or what have you. So for sure, you definitely have a lot of that.
David Turetsky:And let me expand on that a little bit. We know and have seen that some of the answers that have been coming out of ChatGPT are actually lies, and that ChatGPT actually doesn't know the answer. It's making guesses which are wrong, and there are kids in school who have been using verbatim the stuff that comes out of ChatGPT, and it's just wrong, because whatever it's pulling from is just not true, or it doesn't have enough answers, so it makes shit up. Pardon my French. And the one thing I want to talk about in the description that you just gave and the six steps is: you're basing your career on this, you're putting your company at risk by developing a model, and you need to make sure that the thing it's doing, it's actually doing correctly. Now you mentioned before, Martha, sorry, I'm going a little all over the place here, but you mentioned before that some of the ways in which AI has been implemented are bots doing a specific task. Like, is this form filled out? No. Send it to the right person, get it filled out, and then it sends it on, either to another bot or to a person, when it has the correct information. And in that way you can actually check it: you know what steps it's trying to follow, you can make sure that it's accurate, and you can QA it. Some of these more interpretive models, some of the more sophisticated models, the steps you mentioned are gonna have to be pretty complex, aren't they? You're gonna have to do a lot of QA work to make sure that the model is actually generating what it's supposed to!
Martha Curioni:Yeah. I mean, that's where Explainable AI comes in, which obviously is not available with all models, right? With a large language model, Explainable AI becomes a lot more difficult; those tend to be a lot more black box. So let's first address the models where you can have an Explainable AI component. With those, you make sure that when people get the recommendation, they're also getting the reasons behind the recommendation. And then maybe in the process there's some kind of step to make sure that they're reading it, where they agree or disagree, or they have to add in some comments, or whatever it may be. I'm also a big fan of human-centered design, right? So you're going to work with the practitioners, the employees, the managers, whoever, to understand from them what is going to be the best way to design it so that it's not annoying to them, because otherwise you end up with practices of people just putting a space in that text box to bypass it, or what have you, while also making sure that you're achieving your objective. So with those types of models it becomes a lot easier to go through those steps of, hey, let's build in some of these checks along the way to make sure that people understand the recommendation, agree, and know that they have every power to disagree with the recommendation. When you get into some of these more black box models, where the Explainable AI is not as accessible, then it becomes, to Dwight's point, a lot more about the education side, right? Helping them understand that it could make mistakes, or it can make things up, or whatever it may be. And maybe that's where the NASA example comes in, where you say, look, we are going to randomly give you fake answers to keep you on your toes, right, and make sure you're checking your sources, or what have you. I don't have all the answers for that. Again, you have to work through the specific use case, your organization, the culture, and so forth. But education is key.
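The "random fake answers" idea borrowed from the NASA example could look something like the sketch below, in Python. The queue structure, field names, probability, and flagging logic are all invented here purely to illustrate the pattern of seeding a review workflow with known audit cases.

```python
import random

def build_review_queue(recommendations, audit_cases, audit_rate=0.1, seed=None):
    """Mix known 'planted' audit cases into a reviewer's queue.

    recommendations: list of dicts produced by the model.
    audit_cases: list of dicts the team already knows the right answer for
                 (some deliberately wrong), used to check reviewers stay alert.
    """
    rng = random.Random(seed)
    queue = []
    for rec in recommendations:
        queue.append({**rec, "is_audit": False})
        # With some probability, slip an audit case in among the real ones.
        if audit_cases and rng.random() < audit_rate:
            queue.append({**rng.choice(audit_cases), "is_audit": True})
    rng.shuffle(queue)
    return queue

# Hypothetical usage: reviewers who wave through the planted bad cases are a
# signal that people are clicking on autopilot rather than reviewing.
queue = build_review_queue(
    recommendations=[{"employee": "A", "action": "promote"},
                     {"employee": "B", "action": "hold"}],
    audit_cases=[{"employee": "ZZ-TEST", "action": "promote"}],  # known-bad case
    audit_rate=0.5,
    seed=42,
)
print(queue)
```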
David Turetsky:It seems like what you've outlined, and I'm not trying to demean it, is a very expensive process. And yes, it should be, because we're building in a new technology. But the six steps you mentioned, there's going to be a real cost involved with not only implementing this, the training, the education, the pilot, even just the technology and the data itself. That's a lot of investment. Or are you thinking that this could be relatively small things, small samples, and it doesn't need to be that expensive?
Martha Curioni:I would say that for the technology, the cost of that and so forth depends on the technology, or maybe you have an in-house team that's building something. But when it comes to redesigning a process, that's obviously a lot of work, as you mentioned, plus training and a pilot. But a pilot, by nature, should be small scale, right? So if you're able to do it small scale, test your assumptions, make sure it works, and tweak the process as you go. Because once you put it into place, inevitably there's always something that doesn't end up working quite as you imagined it would, right? And then you tweak it and so forth before rolling it out to the broader organization, or a business unit within the organization; you can still scale it out slowly. But by doing it in a pilot setting, you definitely minimize the cost. So then maybe you have one person within your team who is responsible for this whole thing, these six steps, the workshops around the process, the workshops around the training, and everything else. And this is why knowing what your goals and objectives are, and measuring against them, is so much more important, because you want to make sure that it's worth the investment, and you want to make sure that you can accurately gauge whether the pilot was successful or not before you roll it out and start to spend more money, or, in some cases, put your organization at more risk, depending on the use case.
David Turetsky:Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast: a free half-hour call with me about any of the topics we cover on the podcast, or whatever is on your mind. Go to Salary.com/hrdlconsulting to schedule your free 30 minute call today. Let's get to question three, which is, to me, one of the things that we've been kind of talking about most of the episode. We all know, and if you've ever listened to this podcast you know, that HR data is far from perfect, and if the AI is trained on that bad data, there are real risks that can make, or inform, the AI to generate bad recommendations. How do those steps that we just outlined in question two help with that challenge?
Martha Curioni:Listen, I think it would be great to have AI that has perfect recommendations, but we all know that that's unlikely, right, because we don't have perfect data, like you said. So my question to the both of you, or even to the listeners, is: can the goal just be to make better decisions than humans are going to make alone? I've done a lot of research in previous roles on DEI, and I really don't trust people to make good decisions if you leave them to their own devices, aka biases, because that's why a lot of times the data is so bad, right? Because in the past they've made decisions with these biases and so forth. The good news is that there is a lot that can be done to address bias or bad data in models. You can clean the data. You can test it for biases in many, many different ways. And let me be clear, it's not just, let me take gender and race and age out of the model; there are other data points that can act as proxies for those. So you take the appropriate steps to address some of those things. Then you add on top the Explainable AI factor, or the transparency factor, and you start to have a model that hopefully can make recommendations that are going to be better than a person making them alone. But again, as I mentioned before, you need to think beyond the model. You also need to think about: are people using it as intended? And that's where redesigning the process and the training really come in, to make sure that the human in the loop is not just a term, but something people are actually doing. Because, let's be honest, people are busy and in many cases just lazy, if they can just take the recommendation. How many people managers are managing way too many people, because so many companies have tried to increase span of control and all of these other things to save costs? And then you put this tool in their hands that makes recommendations, and they're like, hey, now I don't even have to think about this. I can just, you know, right? The model says to promote
Dwight Brown:Go on autopilot.
Martha Curioni:Exactly. So that's where, again, the process, the training and so forth come in, and monitoring its usage to make sure that it's being used appropriately.
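Picking up Martha's earlier point about testing for bias beyond simply dropping protected attributes, here is a minimal sketch in Python of one such check on model output. The data, the group labels, and the 80% threshold (borrowed from the "four-fifths" rule of thumb) are illustrative assumptions, not a complete fairness audit, and proxies can leak protected attributes in even when they are excluded as inputs.

```python
import pandas as pd

# Hypothetical model output: one row per employee, the model's promotion
# recommendation, and a protected attribute that was NOT a model input.
results = pd.DataFrame({
    "gender":      ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "recommended": [1,    0,   0,   0,   1,   1,   0,   1,   1,   0],
})

# Recommendation rate per group, even though gender was excluded from the
# model: proxies (job history, location, hours, etc.) can still carry it in.
rates = results.groupby("gender")["recommended"].mean()
print(rates)

# Rough "four-fifths rule" style check: flag if the lowest group's selection
# rate falls under 80% of the highest group's.
ratio = rates.min() / rates.max()
print(f"selection-rate ratio: {ratio:.2f}", "(review!)" if ratio < 0.8 else "(ok)")
```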
Dwight Brown:And that continuous feedback loop, so that when things are discovered about the data, there's a way to feed that back, and the people who are using the AI to pull the data get a more refined lens the longer they're doing this. Because, you know, I think that helps to bring up the blind spots that might otherwise be missed. It's an overall process that just keeps going; as opposed to there being a defined starting point and a defined end point, there really isn't a defined end point. It's just a loop, much like data analysis with Excel and everything else.
Martha Curioni:Oh, for sure. And that could be part of your process, right? You position it not as, hey, we're putting this step in here to make sure you don't go on autopilot. You position it as, hey, this is how you give us feedback. We recommended that you promote David; you look at the reasons, and if you don't agree, you need to tell us why, so that in the future we can make the model better, right? Or you do agree, and so forth, and that's the feedback loop. And honestly, I don't know that there are too many tools out there now that do this. ChatGPT obviously has the little thumbs up and thumbs down at the bottom and stuff like that. But within the HR space, just thinking of tools that I've used, aside from, like, do you like this job recommendation or not, I don't know that I've seen too many opportunities to give that feedback. So it's definitely something for any HR tech vendors that are listening to think about.
David Turetsky:Well, we've been trained on this a little bit, Martha, on the consumer side. Because if you look at Netflix or other streaming services, they do the thumbs up, thumbs down, you know? Did you like this recommendation? Did you like this series? Did you like this movie? Thumbs up? Yay, okay, well, I'm gonna recommend more movies like this. So we're definitely getting that on the consumer side. And I think that does inform how the feedback loop can help with these recommendations, because at least you're getting that immediate feedback of: did this help you? Did you use this? Did this provide additional information for your recommendation? And in that way we can actually get at least some understanding about whether it was good or not. But it would be better if we actually had a little bit more. Like, are you kidding me? You're promoting David? He's a terrible performer! Why would you do that? Or, yeah, I think David's a bad recommendation because he doesn't have the skills or experience necessary for this. So it would be better if it were more verbose, but at least the thumbs up, thumbs down is something I think we've gotten a little bit more used to.
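A minimal sketch of what capturing that richer feedback could look like inside an HR tool, in Python. The class name, the fields, and the rule that a disagreement must include a reason are hypothetical design choices for illustration, not a description of any existing product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class RecommendationFeedback:
    """One reviewer's response to one AI recommendation."""
    recommendation_id: str
    reviewer: str
    agrees: bool                      # the thumbs up / thumbs down
    reason: Optional[str] = None      # the more verbose "why", when given
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def record_feedback(store: list, fb: RecommendationFeedback) -> None:
    # Require a reason whenever the reviewer disagrees, so the team gets
    # something it can actually audit or retrain against.
    if not fb.agrees and not fb.reason:
        raise ValueError("Please tell us why you disagree with this recommendation.")
    store.append(fb)

feedback_log: list = []
record_feedback(feedback_log, RecommendationFeedback(
    recommendation_id="promo-2024-117",
    reviewer="hrbp.jones",
    agrees=False,
    reason="Candidate lacks the people-leadership experience this role needs.",
))
print(len(feedback_log), "feedback records captured")
```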
Martha Curioni:No, for sure. And I think, to your example, another thing to consider is not just giving it to managers, but making sure the HR business partner has the same model output, so that they can also, as part of the process, hold the managers to account. You know, hey, Dwight wasn't on the recommended-for-promotion list, talk to me about why. Not in a way that's challenging them, right, because you don't want them to feel like, oh, I have to work from the list. But, you know, talk to me about Dwight, what's going on there. Or, to your example, David, why are you suggesting that you promote him? He's a terrible employee based on other things you've said about him before. Which I don't think is true, David, but to extend your example. So when you empower the HR team with the same information, it positions them to be able to have those conversations, to challenge where appropriate, and to, again, help make sure that you achieve your objectives.
David Turetsky:In many cases, these might be self-service tools, and if the manager requests a slate of successors, let's just say for a job, that HR business partner should probably get a console that tells them that the manager did make a request for a slate of successors, so at least they have that understanding and can check. Because otherwise they'd have to find out after the fact, instead of knowing, here are the alerts of things that my managers have requested and here's what the results were, so I can at least be informed, as well as be a good business partner to them making those decisions, and insert myself to be able to provide context for that decision. If that makes sense.
Martha Curioni:Yeah. And you know, you might say, okay, well, where's the HR business partner going to get time to do that? But in their case, instead of them having to dig through the data and try to make that list, all they're doing is validating the list and having some conversations around it, right? They skip that first step.
David Turetsky:So it might be part of the loop, part of the workflow that we defined in the six steps. We could talk about this all day. Do you have a couple more hours so we can continue?
Martha Curioni:Unfortunately, no. I have a few more minutes, but hours? I have to go soon.
David Turetsky:No, I'm just kidding. Yeah, I myself am getting hungry for lunch. So I think what we're going to have to do, Martha, if you don't mind, is come back to this. Because there's going to continue to be an evolution of AI in the world of HR. We've been talking about it for years, but this year especially, and most especially if you hear some of the episodes that we have from the HR Technology show in 2024, pretty much AI was everywhere. And so I think we're going to have to bring you back again, if we can get you back, to talk a little bit more about it. Not just the ethical nature of AI and the implementation of responsible artificial intelligence, but also what happens when it goes bad, or some other outcomes, and kind of the lessons learned from that, if that's okay?
Martha Curioni:Yeah, I'd love that.
David Turetsky:Well, Dwight, thank you very much.
Dwight Brown:Thank you. Thank you for being with us, Martha!
Martha Curioni:Thank you for having me!
David Turetsky:Martha, thank you very much. You're awesome. It's always a pleasure to talk to you. I always learn a ton from you, and that's the reason why we love having you on the HR data labs podcast.
Martha Curioni:Thank you.
David Turetsky:Thank you all for listening. Take care and stay safe.
Announcer:That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.