HRchat Podcast
Listen to the HRchat Podcast by HR Gazette to get insights and tips from HR leaders, influencers and tech experts. Topics covered include HR Tech, HR, AI, Leadership, Talent, Recruitment, Employee Engagement, Recognition, Wellness, DEI, and Company Culture.
Hosted by Bill Banham, Pauline James, and other HR enthusiasts, the HRchat show publishes interviews with influencers, leaders, analysts, and those in the HR trenches 2-4 times each week.
The show is approaching 1,000 episodes, and past guests are from organizations including ADP, SAP, Ceridian, IBM, UPS, Deloitte Consulting LLP, Simon Sinek Inc, NASA, Gartner, SHRM, Government of Canada, Hacking HR, McLean & Company, Microsoft, Shopify, DisruptHR, McKinsey and Co, Virgin Pulse, Salesforce, Make-A-Wish Foundation, and Coca-Cola Beverages Company.
Want to be featured on the show? Learn more here.
Podcast Music Credit: "Funky One" by Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 3.0 (http://creativecommons.org/licenses/by/3.0/)
Transforming Workplace Culture in the Age of AI with Dr. Jessica Kriegel
In this episode, guest hosts Pauline James and David Creelman welcome culture strategist and author Dr. Jessica Kriegel for a conversation on how leaders can use AI thoughtfully to enhance, not replace, the human side of work.
In this lively discussion, Jessica unpacks what’s really happening inside organizations as AI, culture, and change management collide. She explains why this is the “moment for change management,” how leaders can create experiences that shift employee perceptions, and why the illusion of control is the biggest barrier to transformation. Jessica also shares fascinating examples, from manufacturing firms using AI for data insight to companies experimenting with “culture-measuring” chatbots.
Key Themes
- How leaders can move beyond “the action trap” to shift employee perceptions through meaningful experiences
- Real-world AI use cases that reveal both opportunity and risk, from sales analytics to culture analytics
- The fine line between empowerment and surveillance in workplace technology
- Why “surrender” is the surprising secret to stronger leadership and sustainable change
Why HR Professionals Should Listen
For HR leaders navigating the dual revolutions of AI and culture, Jessica Kriegel offers a rare combination of candor, intelligence, and humor. She speaks the language of both strategy and humanity, reminding us that change doesn’t start with systems or data, but with what people believe.
Her upcoming book, Surrender to Lead, co-authored with Joe Terry and due for release in January 2026, expands on these insights. Jessica invites leaders to let go of the illusion of control and focus on what really drives results: their presence, their mindset, and their ability to create meaningful experiences for employees that inspire alignment and action.
Feature Your Brand on the HRchat Podcast
The HRchat show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, talent execs, and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority, and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.
Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.
Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts, and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media, and visit hrgazette.com.
SPEAKER_04:Hello and welcome to the HR Chat Podcast. I'm Pauline James, founder and CEO of Anchor HR and associate editor of the HR Gazette. It's my pleasure to be your host along with David Creelman, CEO of Creelman Research. We're partnering with the HR Chat Podcast on a series to help HR professionals and leaders navigate AI's impact on organizations, jobs, and people.
SPEAKER_02:In this episode, we are speaking with Jessica Kriegel. Jessica is Chief Strategy Officer at Culture Partners in California. She has a strong background in the world of technology, having spent a decade in change management at Oracle. Since then, Jessica has become a recognized expert in organizational culture and strategy. Her work has been featured by CNBC, CNN, The New York Times, the Wall Street Journal, and Forbes, among others. We connected with her as we were eager to hear the use cases for AI that she has seen amongst her clients and their impact, as well as to draw on her change management expertise. We're also happy to share that Jessica has a book coming out in January. It's called Surrender to Lead. So we look forward to that.
SPEAKER_04:Jessica, just to kick us off, can you tell me about the advisory board that you were on where you met David?
SPEAKER_01:Yes, I would love to. We're both on the Workforce Institute, which is a think tank that's part of UKG that focuses on the future of work: what are best practices to get results, what are the needs of employers and employees, and where do those needs meet and where do they diverge. So it's been an absolute pleasure meeting David there.
SPEAKER_04:I see you do a lot of podcasts, keynotes. I'm interested to hear what HR audiences are asking you to speak to right now, what they're most interested in.
SPEAKER_01:Right now, the biggest talk track I have is change management. This is the moment for change management. Interestingly, I wasn't at HR Tech this year, but at HR Tech it was standing room only in the change management session put on by Deloitte. That is the big question mark for people because of the speed of change and the number of changes happening in any organization at one time. And that's a very people-oriented challenge.
SPEAKER_04:With that, is there anything additional you feel they should be focusing on that you'd like to draw their attention to?
SPEAKER_01:Yeah, my opinion, my expertise is around how do you get the hearts and minds of your employees aligned with the organization's objectives. That's really the key in my mind. And with change management, what it typically looks like is a bunch of training and a bunch of PR and a bunch of trying to get people on board with an idea with communication and rousing speeches in town halls, multiple emails and systems that'll uh allow you to implement that change, whatever the change is, right? The change could be, oh, we're going to AI. The change could be we need to do a digital implementation, or the change could simply be we need to grow by X percent. And that's a change from the amount that we've been at up until this point. And a lot of leaders get stuck in the action trap of what do we got to do? What do we got to do? We got to do this, we got to do that, do this, do that. And I think that they lose the underlying motivators that people have that will get them to offer the discretionary effort to get on board with whatever it is they're trying to engage their workforce in. And that's really what do they believe about the change? And I think if more leaders thought about the beliefs that people hold about their work and about the company, they'd be way more effective in driving results than just worrying about what do we got to do.
SPEAKER_04:On that, I'd just like to pause to underline what you've said, or to give you a chance to underline it: what are the practical ways organizations can tap into people's belief systems to bring them along psychologically with change?
SPEAKER_01:I mean, when we work with our clients, the first thing we do is ask them the question what beliefs do your team hold right now that are getting in the way of you achieving your results? And usually when you have that conversation, the same themes bubble up. You don't need a consultant to ask that question. You get people in a room and you ask that question and the truth will arise. The question is, how do you shift those beliefs? So you identify the beliefs that are not aligned with the results you're trying to achieve. And then the more powerful question is what do you want those beliefs to be? And you shift beliefs with experiences. So leaders are really in the experience management business where they have to think about what experiences am I creating and what beliefs are those driving, and how do I need to change those two things.
SPEAKER_04:I like that because you're also shifting their thinking around how you need to show up to really model and support change and meet people where they're at and address their underlying concerns, thoughts, perspectives.
SPEAKER_01:Yeah, I mean, I think a lot of leaders are deluded by the idea that because they have a job title that says manager, they can control the people that report into them. I mean, if it was as simple as I'm gonna tell you what to do and you're gonna do it, we wouldn't have all these leadership books being written, right? It's not that simple. You can't control people, you can only control yourself. And so the question is, you as a leader, how do you need to show up? What experiences do you need to create to get beliefs in line with the actions you need people to take to get the results? That's the voodoo, you know, the jujitsu of leadership. I have a book coming out in January called Surrender to Lead. And it's really about this idea that you have to surrender this illusion that you have any control over anything outside of you and figure out how you need to show up best in order to drive results. And there's a way to get results without waving the white flag of surrender. It's really surrendering the illusion that you have power you don't actually have.
SPEAKER_02:So, Jessica, one of the drivers of change requiring all these change management and mindset management interventions is, of course, AI. And that's what we like to talk about on this podcast. So, to make it concrete, I'm interested in what you're actually seeing in the organizations you work with in terms of what they're doing with AI. So, so let's start with one of the sort of easiest use cases. Have you seen anyone making especially good use of AI for creating content?
SPEAKER_01:Oh, yeah, absolutely. And by creating content, if you're talking about marketing content, that is an output that I've seen lots of marketing teams across industries using. And that, I think, is pretty standard. I mean, I use it, right? I create content, and I use ChatGPT to help me create that content. And that's a well-known, long-used use case of AI. But I also think sometimes the content is internal, right? As a leader, I might be writing an email to someone. That's content that I am using AI to facilitate. So that simple use of language creation, making it easier and faster, I think is pretty widespread at this point.
SPEAKER_02:Have you seen anybody do anything you think, oh, well, that was particularly clever?
SPEAKER_01:Particularly clever? I mean, no. I haven't seen anything that stands out, except maybe that certain kinds of prompt engineering are more sophisticated than others, and that might be quite clever, and that depends on how far you've leaned into AI. But I haven't seen whole organizations embrace it wholeheartedly versus others that don't. It's really at the individual level. Are you embracing AI? And if so, have you done the work to learn the skills necessary to best leverage AI? And there I've seen some people be better than others.
SPEAKER_02:Yeah, and I like that because it gives us a clear focus that the low-hanging fruit is individual use. And if you can get individuals doing it well, there's a lot to be gained there.
SPEAKER_04:Yeah. And leveraging it as a productivity tool, enabling and supporting the organization. And we'll come back to thinking through how we enable organizations.
SPEAKER_00:Thanks for listening to this episode of the HR Chat Podcast. If you enjoy the audio content we produce, you'll love our articles on the HR Gazette. Learn more at hrgazette.com. And now back to the show.
SPEAKER_04:I'd also be interested in hearing about use cases related to leveraging data more effectively, and wondering what you've seen. It doesn't have to be an HR example; I'm curious whether you've seen particularly good use of AI's capabilities to analyze data, to use internal data more effectively.
SPEAKER_01:Yeah, that's where I think the most advances are being made right now. Content creation was the story of last year. This year, I think the story is churning through data. So there's one manufacturing company that we've been working with, in the support functions. Well, I'll tell you, it was the sales team first and then the finance team after that. They have had particular leaders within the organization who have embraced the idea that AI is coming and it's going to help us be more efficient. And what they've leveraged AI for is churning through data, analyzing data. And so it started with the sales team, where they wanted to be able to interpret the pipeline: where leads were coming from, what leads were translating to closed deals. The way they were asking those questions when they began this process was: let's go into Salesforce, let's pull a bunch of data based on what has been entered into the system, and then let's send it to our data analyst to answer these particular questions, right? What they did is they integrated an AI tool. So now the lay person on the team, which means lower in the organization, frontline managers, can go into this chatbot and say, Can you explain to me where the highest close rate lead sources are? And now they can ask questions that previously they probably couldn't ask, because it was the executives who were directing the data analysis work. And it's allowing more visibility and more understanding of data. When this particular manufacturing team implemented that, the finance team saw it and said, Well, wait a minute, let's hook them up to our systems and see what kind of financial analysis we can do. It's important to note here that no jobs got replaced in that situation, right? It wasn't like, oh, we need fewer sales managers now or we need fewer data analysts now, because the data analysts were on the finance team. The finance team still has all those people, because rather than doing the work in spreadsheets, those people are now checking the work that the AI tool did and then interpreting that work for the managers asking the questions. So it's still not replacing jobs in that situation in the clients that I've seen, but it is allowing for more visibility and ease of computation.
SPEAKER_04:Thank you. I just find it so interesting that when organizations are able to pursue more advanced use cases, the expertise becomes more valuable. Rather than displacing people, we have more opportunity to leverage those skills. Building on that example, I'd welcome your thoughts on what the lesson for HR is from the examples you're seeing within operations.
SPEAKER_01:Well, I think HR has been trying to get a seat at the table for the last 30 years, and they've still not quite gotten there because I don't think they can speak the language of the CFO as effectively as they should. They're still focused on engagement scores and things that don't make the board members jump. The board members jump when the CFO says, I got a metric here, that's a problem. They don't jump when HR says, I've got a metric here, that's a problem. And so this ability to look into the analytics and the impact of the analytics on the bottom line will probably get them closer to having that seat at the table. But it also requires, once again, the mindset shift. That's how I would be using it as HR. What I've seen in HR is they're still focused on the people side of it, which, trust me, there's no person that cares more about people than me. I'm a culture expert, this is my thing, right? But I also know that people-focused initiatives are short-term if they aren't tied to results, because ultimately capitalism will win. And so you have to see what works for results and people. And so what I've seen HR people do is, for example, there's one technology company that implemented a tool, you know, these AI Zoom note takers that listen to your calls and then they summarize. That's fine. This particular tool that they implemented actually does psychoanalysis of the culture on your call. So it's listening to the call. And then I read the report of one call. I watched a call and then I read the report afterwards. And the CEO was on the call, and it was a business review call with one particular department, and multiple departments were represented on the call. And the CEO was cursing and saying, Well, I don't know why the F we would care about that, et cetera, and so forth, right? And I knew this CEO, and I knew that it's just a very casual tech company, right? I mean, these are the companies where they wear hoodies and it's just like that. No one on the call, based on my understanding of that particular culture, was feeling like he was being aggressive. It was just vulgar and casual, right? So the AI bot pumped out this report that said, well, this CEO is cursing, and that can be interpreted one of two ways. Either he's being aggressive, or this is a level of psychological safety in which that is a norm of behavior. Based on the fact that this other person cursed later in the call, I believe it's the latter. I mean, it's giving you this opinion of the way that culture is manifesting in the conversation on the Zoom call. That's interesting. Now, what are you gonna do with that data, HR, right? Are you gonna go give the CEO feedback about cursing? Are you gonna report to the board that the CEO cursed and this could mean one of two things? I don't know that that's necessarily gonna move the needle, but we're starting to see ways in which AI is psychoanalyzing people. That's one interesting way. I've seen other problematic ways at other companies that may be getting into legal hot water. That's what I'll be watching.
SPEAKER_02:One of the things you mentioned in our pre-discussion was you've seen organizations using AI for logic. And I wasn't quite sure what you meant by that.
SPEAKER_01:Yeah. Well, this actually dovetails well with the previous thing I was just describing, the psychoanalysis. So there's one organization that basically provides background checks as a service, and they implemented an AI tool to look at the data that comes back from the background check and then provide a psychoanalysis of the person based on their background check results. So typical background checks right now come back and they say this person has a lien on their house and they've been arrested, or whatever the background check data is, right? It's just factual data presented to an employer when you're considering hiring someone. Well, this exploration was around what does this mean for the profile of someone like this? It's like, well, based on the fact that they had a DUI 15 years ago, and their credit was really bad and then turned really good, we actually think this person probably achieved sobriety and therefore is on a different path in life. And so it's a low-risk DUI as compared to someone who had a DUI three years ago and is also still in bankruptcy, right? I mean, that's what the AI tool was doing. It was analyzing someone, which is, first of all, illegal when it comes to deciding who you're gonna hire and fire, to profile people in that way. I was just talking to someone who does a lot of consulting in this space about this. I think you're not allowed to use AI to analyze candidates at all, according to some regulation, but you'll have to fact-check me on that because I don't actually know. But what other use cases are there for background checks? I mean, it could be whether to hire or fire, but it could also just be for non-work-related things, private investigators, for example, looking into people and trying to piece together something they've been hired to figure out. That's totally creepy in my mind, right? And they pulled the plug on moving forward with that because they had the same creep factor come up for them when they did the analysis. So, logic in that case is we're gonna take this data and we're gonna make sense of it. AI is really good as a language model, at creating language around the input prompts. But for logic, I don't know that it's necessarily reliable enough. I have another example: a professional services organization that serves a bunch of different clients. And one of the things that they did was they suggested that they could match their services with their clients' needs using AI, asking AI: okay, here are all the services that we offer, here are the needs that our clients have, can we pair them with each other? That kind of logic is not very easy or effective or reliable right now at the AI level. And so that's what we're still working on figuring out how to do well with AI.
SPEAKER_04:I appreciate the examples and also the complexities that you're pointing to. And we know in the Canadian marketplace, if any part of a decision is related to protected grounds, employers are on the wrong side of employment law. So if you had something predict that someone was suffering from an illness or addiction, then that would absolutely create a ton of risk within the hiring process.
SPEAKER_01:Well, I wonder how insurance companies are using AI for this, right? I mean, because they have to predict risk as part of their business model. I just had um the CHRO of AFLAC, or no, he's the chief strategy officer, but he is the head of HR, which is an interesting title dynamic. Anyway, um, they're in the business of risk. I should have asked him if they're using AI to predict risk with their consumers, you know. I mean, is that illegal? I don't even know.
SPEAKER_04:So really underlining that there are the advances and there are the cautions, and it comes back to the importance of testing, of piloting, of making sure that when we're applying a use case, it's something we have a solid foundation for and we've assessed the risk. Before we move to risk, though, we'd like to dig a bit deeper on the culture side and whether you've seen use cases around measuring culture, supporting culture.
SPEAKER_01:Yeah. So I had a call with a vendor who said that one of the big four consulting firms was their client and was reselling it. And this vendor has a tool that integrates with Outlook, with Slack, with the internal communication systems to help facilitate better communications. So you're writing an email, basically. It gets a profile of everyone in the company based on some psychological assessment that they've taken, right? So let's say ACME Company hires this vendor. Every one of the employees at ACME Company fills out a psychological survey, and it understands: this is how you like to communicate, this is how you like to be communicated with, this is your priority in workplace dynamics. Well, then I'm gonna write an email to Joe at that company. And I write the email, but before I send the email, this AI bot is reading the email, understanding Joe's profile, and making suggestions to me about how I could make my email resonate better with Joe based on his profile and the way he likes to be communicated with. So it'll say, well, Joe really likes to prioritize relationships over tasks, so maybe you want to start with asking how his day was, right? And it'll rewrite the email for you before you send it to Joe, which sounds great. Then Joe receives the email. It's a wonderful email in Joe's mind because it was carefully crafted for his personality type. And then Joe wants to write back: blah, blah, blah, my weekend was great, thank you so much, let me send it back to you. And then the AI bot sees Joe's note to you and says, Oh, you know what, Joe? Jessica actually really prefers when people just get down to business. Why don't you take out the weekend stuff and just make a bullet point of tasks, because that's how she likes to communicate? Joe's like, that sounds good. He sends it off. I think Joe is so great. I love the way Joe communicates. What's really happening, though, is that we're not communicating with each other. We've filtered how we communicate with each other through this AI tool that's changing it to be the way I like to receive communication. And I actually don't know much about Joe, because Joe is not talking to me the way Joe talks. Joe is talking to me the way I like to be talked to, through an AI filter. I haven't had any clients implement it, but I know clients they've listed that did implement it. And I have serious concerns about that. I mean, their argument is it's gonna make culture better because it's gonna reduce friction in communication and everyone's gonna be happier. But come on, is it really communication if it's just robots talking to robots, you know?
SPEAKER_04:That's such a great example of how we can endeavor to improve something and remove friction, and we're actually damaging it, without thinking through the consequences of people not really getting to know each other or understanding their preferences, or getting to know them in the way they want to be known, as well. Are there use cases, even if you haven't seen them yet, that excite you, that could actually move culture forward in a way that would be more helpful? Or ways that you would like HR to consider how to leverage it effectively, rather than in a way that is more getting along through smoke and mirrors?
SPEAKER_01:Yeah. So the Instill AI example I gave, which is that it listens to the calls and then does a psychoanalysis of those calls: I've been excited about that. And the reason I'm excited about that is because at Culture Partners, what we're always doing is trying to share with people that if you really want to get people's hearts and minds aligned with the results you're trying to create, you've got to get at the belief level, right? And what Instill AI would theoretically be able to do is track, measure, and analyze whether those beliefs are in place or not. And if you see people not having those beliefs in place, that's a predictive tool to show you that your change effort is not going to be successful. And it can flag to the leader, here's where your beliefs need to shift. It's like a mirror, right? It also is a self-awareness tool, because it can send a note to the leader who cursed on that call and say, Hey, maybe you want to think about not cursing on that call, which no one on the call is gonna say to that person because they all report into him, right? So the self-awareness thing is the big unlock that we need to create better leaders. We need leaders to be willing to be self-aware. And it's hard to give leaders feedback because they're the leaders. They're in charge of your job, they're in charge of your salary, they're in charge of your security at this company. And so I'm not gonna tell my boss something he doesn't want to hear at my own peril. I'm not that principled. I don't have that much integrity. I want to make sure that I have a job so I can send my daughter to school before I care about speaking truth to power, you know. Sorry, that's just my perception of my priorities. So this AI bot can speak truth to power, though. And I think that's interesting. I'm excited about that.
SPEAKER_04:It's interesting because, as you think of the examples we're working through, so many of them keep coming back to supporting individual productivity, individual growth, and self-awareness, and how that can support the broader organization. Yeah.
SPEAKER_02:Now, uh, Jessica, we've been focusing on some of the interesting things organizations who are leaning into AI are doing. What about those organizations you've seen who are failing to make use of AI? Do you see any commonalities in those organizations that are just very slow on adopting AI?
SPEAKER_01:I think more organizations are in that category than not, because it's still so new. To transform an organization around this brand-new technology requires a lot of work. And so you're seeing it in pockets, based on individual buy-in to the idea. I mean, Fiverr just made this announcement this week that they're gonna go back to being a startup, they're laying off 250 people, and they're gonna be AI-first. That's interesting, right? The CEO of Fiverr, when he made that announcement, got absolutely slaughtered online. He was just inundated with comments and messages: how dare you, I'm never using your service again, this is the worst kind of corporate greed, blah, blah, blah, right? Because the CEO said, we're gonna be AI-first and it requires us to lay people off. He is the 6,000th CEO to make that decision. And yet he's still getting the pushback. Other CEOs, when they've done it, what are they doing? Are they replacing people with AI yet? No, no one is. I have zero clients that have replaced large groups of people with AI efficiency. I think that's going to come with agentic AI, which is just around the corner, but it hasn't happened yet. What you're seeing is people laying off populations of employees to make room in the budget by reducing labor costs so that they can invest in exploring the possibility of AI efficiencies. It's technically replacing people, but not because the efficiencies already exist; it's because ideologically we want them to exist, so let's get rid of these people. Which is really, I think, AI becoming a scapegoat for overhiring and laying people off, even though the LinkedIn comments look bad. The market responds positively to layoffs now, which it didn't 10 years ago. So you make an announcement that you're laying a large group of people off, and the market responds with: great, you're a forward-thinking, efficiency-based leader that's results-oriented, we love this for you, we're positive about your future. 10 years ago, or let's really go 20 years ago, if you announced layoffs, it meant you were at your last straw before the bankruptcy and breaking point of your company, and so you must be about to fail. That's not the perception of it anymore.
SPEAKER_04:Do you have any thoughts on how organizations are governing AI and how they're managing risks? We'd love your thoughts on what you see as ineffective and what you see as effective.
SPEAKER_01:I think the AI equivalent of cybersecurity is really in its nascent stages, but will be very, very popular as more high-profile AI failures become well known. And there have been many of them, right? I'm thinking of one real estate organization that started using AI to analyze what properties they should invest in. The model was built on historical economic data, and so the model assumed that the economic situation was static as it did its analysis. Well, what happened is the market shifted and the housing market shifted, but the AI model was still based on assumptions that we were in a different kind of economy. And so it was popping out a bunch of recommendations that were totally bad, and the company invested millions of dollars in those recommendations without realizing they were bad, because the model hadn't adjusted for a new economic reality. That's a very straightforward case of AI taking a business down a bad rabbit hole because it didn't have safeguards in place. And there are other things that AI can do, the hallucination, for example, right? There are healthcare institutions that have implemented AI for, for example, analyzing radiology. You analyze radiology using AI, and what you saw in one particular case was it started hallucinating things, and basically it was detecting problems 80% more of the time than they actually existed. That's better than if it had detected problems 80% less of the time than they actually existed, but it was just as likely to go in one direction as the other, because, and this is the logic thing, it didn't have the kind of logic that can understand nuance and context the way the people reading the radiology exam before did, right? The way the people analyzing the investment portfolio before did. And so that's where work still needs to be done. And that's what AI quote-unquote cybersecurity looks like. It's different than cybersecurity where you've got scammers coming in and trying to get your data, right? It's like, well, what is AI's error point, its failure point, going to look like? And how can we protect ourselves from that? That will be, I think, something that gets popular next year and the year after.
SPEAKER_04:Have you seen practical ways organizations manage the risks and opportunities?
SPEAKER_01:Yeah, people checking the work. I mean, that's why I don't think you've seen that many layoffs. It's: great, you've popped this out, now we're gonna look at it and see if it's remotely accurate or not. That's why people are saying that AI is not gonna replace people. I think that's maybe true for about four more months, you know, in my opinion. Anyone who says they know what the future of AI is going to be is trying to sell you something. This is just my opinion: it will get better, it will replace people. Agentic AI has so much opportunity. There are these big high-profile cases like Klarna. They laid off a bunch of people, said they were going to replace the organization with agentic AI, and then they went back because it failed miserably. And now they want to be known as the company that will guarantee you'll get a person on the phone. I have one organization, a utility company, that's exploring the idea of replacing their call agents with AI. They bought the tool, they implemented it. It is uncanny. I listened to a recording of a call agent helping a customer, and the customer had no idea they were talking to AI. If I had listened to that recording and didn't know it was AI, I would not have thought it was AI. And yet they're still not pulling the trigger, because they're worried: well, are we sure? We're not quite there yet, but it's just around the corner.
SPEAKER_02:I would just wrap up my thoughts on this: one has to move forward with caution. Explore, and don't necessarily believe what the vendor is telling you. But if you explore, you might find it works extremely well, better than expected, or you might find it's doing things worse than you had thought, and you just need to do the pilots and keep an eye on how it's going.
SPEAKER_01:Well, this is the challenge of capitalism and growth, right? Innovation requires us to fail fast and take risks. And so what risks are you willing to take? These are the decisions that leaders have been making for a long time. Now they're pointing them at AI, right? I mean, personally, and this is irrelevant to your listener because who am I, but personally, I'm a little bit scared of AI, right? And I would love it if everyone just said, you know what, forget it, it's not worth it. But I feel the same way about social media. I feel the same way about technology. I mean, if I had my druthers, we'd go back to being hunters and gatherers, you know? I'd go back to the pre-agricultural age because I think that's probably better for our mental health. But that's not the nature of the world. So here we are. People are trying to make money, they're going to continue to innovate, and that innovation is going to have consequences. And so I'm here to try and help people navigate the reality of the world the way that it is. The reality is no one's going to slow down. As long as they don't have to, they're not going to slow down. And therefore, as an employee, as a person in this world, how can you adapt to deal with that reality? It's again going back to this idea of surrender. You can resist the idea that the future is AI because you think it's unsafe, or you can accept that it's happening and ask: what am I going to do in this new reality?
SPEAKER_04:Any practical tips in that regard? Like how leaders can accept the practical reality, how they support their teams in accepting the practical reality, how they support enablement and the democratization of capabilities in this new realm?
SPEAKER_01:Number one, I have a model in the book called the Shift model that is around this. We don't have to go through the whole Shift model, but the first step is: stop fighting reality. And reality is everything happening outside of you. We spend so much time in our heads wishing our employees were different. If only they were more engaged, if only the leaders created lower objectives for me to hit, if only the administration didn't blank, if only the competitors weren't X. I mean, there's so much of it should be this way, if only it were that way. Think of how much energy and time you spend spinning your wheels on that. That's a lack of surrender, right? We call that below-the-line thinking. So to go to above-the-line thinking is to say, okay, what's going on, and what about that can I control? Let me just put my energy there. And if I just focus on making the personal choice to focus on what I can control to drive results, well, now I've at least focused my energy on something useful instead of spinning my wheels on the way I wish it was. Then once you've identified where you actually can make an impact, you can make decisions about how you can show up to create the right experiences that will shape the right beliefs, that will get the right actions, that will get the results. I mean, I think the number one thing people need to do is get out of the action trap of constantly trying to do something, that endless cycle of activity that feels like progress but is really not moving the needle, and get out of below-the-line thinking where you're pointing fingers, like, well, that's not my job, and it should be this way, and those folks need to be different. All of that is wasted energy. There's something in between that I think is where we need to spend our time.
SPEAKER_04:Thank you, Jessica. This has been a short and interesting conversation for us. For those who would like to get in contact with you, what's the best way to do so?
SPEAKER_01:You can go to my website, jessicakriegel.com. Kriegel is spelled K-R-I-E-G-E-L. Or you can go to our company's website, which is culturepartners.com. And I have a weekly newsletter on LinkedIn if you want to join that. I put out insights every Wednesday; it's called This Week in Culture, and it just tells you what's going on in the world of culture. Thank you.
SPEAKER_03:Thanks for listening to the HR Chat Show. If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media, and visit hrgazette.com.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
HR in Review
HRreview
People and Performance Podcast
Fidello Inc.
A Bit of Optimism
Simon Sinek
Hacking HR
Hacking HR
TalentCulture #WorkTrends
TalentCulture
A Better HR Business
Get More HR Clients
The Wire Podcast
Inquiry Works
Voices of the Learning Network
The Learning Network
HBR IdeaCast
Harvard Business Review
FT News Briefing
Financial Times