The Haylo Effect Podcast

The Human Inside: Balancing AI and People in HR

β€’ Trish Hewitt β€’ Season 2 β€’ Episode 6

What happens when artificial intelligence meets human resources? That's the fascinating question at the heart of my conversation with Amanda Arrowsmith, former People and Transformation Director at the CIPD and a transformation specialist with over three decades of experience in HR.

Amanda brings a refreshing perspective to AI in the workplace – seeing it not as a threat, but as a powerful tool that can enhance human capabilities when used thoughtfully. "AI is a tool, like Excel is a tool," she explains, drawing a practical comparison that grounds our discussion in reality rather than science fiction.

The most exciting possibilities? Personalisation tops Amanda's list. Just as Netflix customises content based on our viewing habits, AI can tailor workplace experiences to individual needs, potentially revolutionising both inclusion and performance. Add to this the automation of repetitive tasks and real-time insights into workforce patterns, and the potential for HR transformation becomes clear.

But Amanda's enthusiasm comes with important caveats. Her three golden rules for HR professionals using AI – people first, transparency, and continuous auditing for bias – frame a thoughtful approach to implementation. The "human inside" must remain central, particularly for high-impact moments like redundancies or disciplinary conversations where empathy and nuance are irreplaceable.

We explore the shifting landscape for entry-level HR roles, the importance of cross-functional governance (which Amanda brilliantly likens to "Avengers assembling"), and the growing need for data literacy among HR professionals. Throughout our conversation, Amanda balances technological optimism with a deep commitment to human-centred practice.

Whether you're an HR professional curious about AI's implications for your role, a leader navigating technological change, or simply interested in how work is evolving, this episode offers valuable insights into maintaining humanity in an increasingly automated world. Listen now to discover how to harness AI's power while keeping people at the heart of HR.

00:00 Introduction and Guest Welcome

00:21 Amanda's HR Journey

01:37 Exciting AI Applications in HR

03:10 AI for Repetitive HR Tasks

04:10 Golden Rules for AI in HR

05:26 Transparency in AI Usage

07:44 Addressing Bias in AI

11:48 Future of AI in HR

15:20 Red Flags in AI Usage

17:48 The Impact of AI on Company Reputation

18:10 Governance and Responsibility in AI Usage

18:50 The Role of IT and HR in AI Implementation

19:09 Ethics and Compliance in AI

19:54 The Avengers Analogy for AI Governance

24:00 Balancing AI and Human Learning in HR

27:12 The Future of Entry-Level HR Roles

29:16 Engaging with AI Tools for Personal and Professional Use

31:31 Concluding Thoughts and Future Discussions

πŸ“©  Want to contact Amanda?
πŸ‘‰πŸΎ https://www.linkedin.com/in/amandaarrowsmith/

πŸ“¬ Stay in the know: https://www.haylohr.com
πŸ“± Follow us:

Twitter/X: @haylohr
TikTok: @trishinhr
Instagram: @haylo_hr

GET IN CONTACT
https://www.haylohr.com/

IMPORTANT INFORMATION: This video is published by Trish Hewitt of Haylo HR. The information in this video is for general guidance only and, although the presenter believes it was correct at the time it was recorded (August 2025), the law may have changed since then. You should always seek your own legal advice. This guidance relates to employment law in England, Scotland and Wales.

Speaker 1:

Right. So welcome back to another episode of our podcast, and I'm delighted to be joined by the amazing Amanda Arrowsmith. Now, today we're going to be talking about AI, but looking at it from an HR perspective and looking at how it's going to impact, or is impacting, the world of work. So, Amanda, welcome to the podcast.

Speaker 2:

Thank you so much. It's so lovely to join you.

Speaker 1:

Oh, fabulous, thank you. So what I usually do is start off by letting people tell us a little bit more about themselves, and obviously you've had an absolutely amazing HR career, so tell us why our listeners should be listening to you.

Speaker 2:

Oh gosh. So I think my curiosity and my genuine interest in all things people will make it interesting, hopefully, for other people to listen to. I don't think I get it right all the time, and I know throughout my career I haven't got it right all the time, but I think that my interest in that and my experience hopefully will add some value. So I've been working for 33 years, I realize now, and I've done the bulk of that in HR. So from the end of the 1990s working for solicitors in a personnel office, back then, through working in public sector, private sector, and then for the last kind of 15 years really focusing on transformation and change. Most recently I was the Chief People Officer at the CIPD, looking after people and transformation. I finished there at the end of June as they move into their next phase, and I'm taking the summer off, but I'm an interim CPO who loves transformation and change and is just endlessly curious, which is why AI is really interesting for me, because I love to find out new things.

Speaker 1:

Oh, fabulous stuff, okay. Well, let's get straight into it then. So, from your vantage point, your perspective, what would you say is the single most exciting thing about AI when it comes to kind of HR and people functions?

Speaker 2:

I think, for me, the most exciting thing, and I don't think we've seen it emerge yet, but I think there's a real use for it, is this potential to personalize. We've seen so much more personalization in our lives. So if you think about just even how you experience TV, you know. My husband and I, we have separate Netflix accounts. If I accidentally go into his account, we watch completely different things. It's personalized. If we're listening to one of our players around the house and it doesn't change to my profile or his profile, I'll go in and it plays me music that's got nothing to do with me, because he's been using my profile. It's that personalization, and I think we're going to see that in work.

Speaker 2:

What I think is that AI can really help us, both in terms of inclusion and performance. So I think that'll be really interesting. I think that's exciting. We've heard lots about the use for repetitive tasks, so potentially using AI to support some of those repetitive tasks. That'd be good; it would free people up to do some of the really important people stuff. And then I think, if we have the right safeguards, it's going to be able to surface some really interesting insights which we might not otherwise see. So perhaps patterns in engagement, attrition, skills gaps, but in real time rather than with that kind of lag. So those are, for me... so it wasn't a single thing, typically I went for three, sorry, but those are the things that I think are the most exciting potentially right now.

Speaker 1:

And those repetitive tasks. When it comes to an HR function, what do you think we can be using AI to cover in terms of repetitive tasks?

Speaker 2:

So I think we've been using some of it already, in terms of accessing and sharing information. You know, we've been doing mail merge for letters for years, haven't we? But checking those and perhaps personalizing those a bit more, I think, would be really useful. There are some interesting organizations who are using AI within their own domains to kind of have an HR chatbot. So instead of having your central services, someone having to phone up and say, where's the expenses policy? Can you tell me the maternity policy? How many days of this have I got left? They could perhaps use a chatbot and do that within AI. So I think those are really useful; those will be interesting. Changes, updates, perhaps things like our data checks. So are we checking the data that we're holding on people? Can we use AI to support that and to automate that in a different way? So some of those things I think could be useful.

Speaker 1:

Oh, okay. And if you had to set three golden rules for how HR can use AI, what would they be?

Speaker 2:

The first one is people first. The IT guys at the CIPD talk about the human inside. So they talk about using AI, but making sure you have your human inside, and I think that's essential for HR because we're still dealing with people. We can use this tool and it'll be really helpful for us, but people first: keep that human inside. The second one is be transparent and explainable. So it's really important, if we're using AI in the workplace, that we are honest and open about when we're doing that, that we let people know when AI is in play, and the understandable reasons for that, so why we're doing it. And then I think there's something around the rules around bias: making sure we're aware of potential bias and continuously auditing, so making sure that we are testing for any unintended discrimination or exclusion and keeping up that regular testing and auditing. So those three: people first; be transparent and explain why; and then be really aware of bias and audit continuously.

Speaker 1:

There are loads of things I want to pick up on in what you just said, but the first one is probably transparency. So I mean, I think I know the answer to this, but it's a podcast, so I'm going to ask you: how transparent do you think organisations should be with their use of AI?

Speaker 2:

So I think where we are at the moment, we're in a position of building trust with AI. So ChatGPT launched at, what, the end of 2022 I think it was, and we really started hearing about it in January '23. So we're two and a half years in. It's not that long really. There is the potential for a lot of mistrust and concern because it's a massive change. You know, it's probably the biggest change since email, or the iPhone and that kind of in-your-pocket computer.

Speaker 2:

So if we're not transparent now about how we're using it, where we're using it and why we're using it, we don't build that trust with our employees and our other people. That, for me, is why the transparency is so important right now. We'll probably get to a point, in the years to come, where the default is that AI is used for certain things. So, you know, you go into your banking app, you get a chatbot that's not a person, that's AI; we know that and everyone knows that's happening now, and we'll probably get to that at some point with employment. But in these formative years, being transparent about when we're using it, why we're using it, the fact that we are auditing it and checking it for bias, I think is essential to build that trust. So that's why I would do it now.

Speaker 1:

Oh no, I agree, and I suppose from my perspective, I think, if you've got nothing to hide, and you shouldn't have anything to hide, really, in business, why wouldn't you just be transparent with your staff? Right, let's just all be adults and talk about what we're doing. And also, they're, you know...

Speaker 2:

They're probably using things in their day-to-day life. They're probably out there seeing things, using things. If we have a culture of openness where we can talk transparently about what we're doing, we might get some of our best ideas from other people. They're not necessarily going to come from the HR team or the IT team. They might come from somewhere else in the organization. So if you have that transparency and you're open, I think that probably allows that more two-way conversation, allows people to perhaps try things in a safe place. I don't know if you think about it that way.

Speaker 1:

Yeah, no, I totally agree. And you talked about bias as well. So I've been reading People Management magazine dutifully, and there was a story in there around a potential case against Workday around bias and discrimination in recruitment with them using AI. Have you got any particular thoughts on how we can try and stop or limit bias when it comes to AI in those sorts of circumstances?

Speaker 2:

So I think the first thing is that continuous auditing. So let's check it. I don't think bias in AI is anything new. So I think, if you think about the applicant tracking systems that have been using some form of AI to discount CVs, that's been happening for years. There was a study in Australia, which is the best empirical study of its kind, where they did over 10,000 applications for jobs with different-sounding names, and it was clear that if you had a Western, Australian-sounding name, you got interviewed for the job. If you had an Asian or an Aboriginal-sounding name, you were less likely to, even if you had the same experience and the same...

Speaker 2:

Now, that's people, that's not just AI. The challenge we have is that AI is trained by people. So there's been some interesting stuff at the moment about women in AI and how women aren't represented in AI, and if you ask it a question, it'll come up with men, or if you ask it something, it's coming from a very male point of view, because AI is being trained predominantly by men. The way I think we can support that internally, and support that as we use it in organizations, is twofold. One, let's get involved in training the AI. Let's make sure we're in those spaces, let's make sure we've got diversity in those spaces, so we're calling out that bias at the time it happens and therefore fixing it. Two, let's not pretend the bias isn't there.

Speaker 2:

Bias is there in everything we do, and replacing a human with a computer isn't going to take that bias away. Humans are biased. Computers are biased because they're built by humans, so that's not going to take it away. We need to face into that, be honest with it. I think this is where the human inside comes in, though: we need to make sure we're checking, we need to make sure it passes the sniff test. If something doesn't feel right, if it feels like there's bias, then make sure we're getting into that and checking that. And that can be challenging, because that means we sometimes have to confront our own biases. That's not a comfortable place to be, is it? You know, you get to a certain point and that's not comfortable, but the better we get at it, the better these tools will become for us. But we need to not ignore it and face into it. I don't think it's a reason not to use the tools, but it's one of those things; it's the safety warning that it comes with.

Speaker 1:

Use it at your own risk; use it, you know, use it knowing these things and knowing how you use it, right? If we kind of leave it to its own devices, knowing that there are biases, that's an issue. Yeah, completely.

Speaker 2:

The Workday one's really interesting because, like I say, that's not new. That bias in recruitment has been there forever. But the bias in recruitment was that the computers, and the AI, might be taking people out of processes because of certain things. That bias has been there because of lots of different things, and we've tried loads of things as organisations for years, haven't we? So we've done blind CVs, we've removed dates from things, we've taken qualifications out. We've done loads of things and there's still bias, because we are humans and we like people that are like us. We naturally go for people that remind us of ourselves, or that we have that affinity with. Where the AI may help is with removing some of that initial bias, but if it's built by a certain group of people, it's going to continue to have that.

Speaker 1:

Oh, I love the phrase sniff test. I don't know why.

Speaker 2:

I don't know if it's a common thing anymore. I don't know if it's like an old phrase, but maybe. It's the milk, isn't it? The milk doesn't pass the sniff test, you don't put it in your tea. I mean, I'm all for the sniff test.

Speaker 1:

It's saved me many a time. Alrighty. So we've kind of talked about this a little bit already, but in the next five years, how do you think AI will be able to change the way that we do things in HR?

Speaker 2:

Yeah, right, what was I thinking? So I think there's something around skills intelligence. So I think there's some really interesting stuff around using AI in workforce planning, so using it to identify kind of what the workforce shifts might be, thinking about the precision you need, what the skills are that you need. So thinking about how you can build that intelligence of your workforce.

Speaker 2:

So back in the day, I remember having knowledge management platforms, or trying to build a knowledge platform, probably with a database that everyone had to go in and put their own information into. I think AI will speed that up, will enable that more, so that you'll be able to interrogate it. You know, we're going to go through this transformation and change, or we're bringing in this new product: what sort of skills might we need for that? And you can use the large language models and the agents in that to help bring this on. So I think that skills intelligence stuff will be interesting, and it will help give us more precision and more data around that. I think recruitment could be faster and more transparent, and we're already seeing the candidate experience improve through human and AI collaboration.

Speaker 2:

So through using agents effectively, people aren't just getting a blanket "I'm sorry, your CV wasn't right this time." What you're starting to see, and I saw something that was really interesting, is there is an AI of a person now (not sacking people, but it could be the same, and I think we're probably going to talk about that) saying: we received your CV, these are the reasons you weren't taken forward, this is what we would suggest you do going forward. So it's taking someone's CV, recognizing perhaps they don't meet the requirements for that job, but instead of sending back a blanket "we're not progressing you at this time", or worse, ghosting them, they're getting a very human-feeling AI avatar saying: thanks for sending your CV, these are the reasons that we didn't take you forward at this stage, this is what we're looking for and this is what you might want to do going forward. And I think that candidate experience is going to be much better as a result, because you know as well as I do, we see it on LinkedIn and other places all the time, there are hundreds and hundreds of people applying for every job. A recruiter will do their best to get back to people, but they may not be able to. So if you can use those tools to make that more human, to work with people, I think that's potentially a really positive thing. Potentially, some policy compliance might be interesting around AI too. So I think we'll move from more reactive to proactive, using that AI to make sure we're supporting that.

Speaker 2:

And then I think the biggest thing for me over the next five years is, and we've been saying it for a while anyway, HR business partners need to have more data literacy. If you're working in HR, you need to have more data literacy. The analytics side has been coming through anyway; that's how we really add value to the business. That's just going to be even more important with AI. So I think, embrace it, don't be afraid of it, but understand that data literacy and access the tools available. If you're a CIPD member, there is a free AI course on the website, so you can access those courses. There are loads of other courses out there, and there are always webinars being touted on LinkedIn and other places. I think it's important that people are upfront about what they don't know and are open to learning.

Speaker 1:

That curious side is going to be key. Fabulous stuff. And you started to talk about something that I do want to talk about, in terms of red flags. So, again, a recent case where we've got a company who used a pre-recorded video to lay people off because they want to use a bit more AI. I know, red flag, right? If I had a flag I'd be waving it. How, as HR professionals, can we try and manage those things that are red flags? I mean, to be honest, I would hope that most people in HR would advise that that's not something that people should be doing. But how do we manage people kind of misusing AI, I guess, in that way?

Speaker 2:

Could you imagine? So, using AI to replace those meaningful human interactions at high-impact moments, I don't know, redundancies, grievances or disciplinary actions... that, to me, makes me physically, you can see it, my hackles just go up. Scary. I think it comes back to that transparency. So if you then deploy that AI without consent or disclosure, that's a real issue. We are, and we will continue to be, the champions of humans within organizations. It's in the name: human resources, human capital, people and change, whatever you want to call it. We're going to be that and we need to continue to do that. And this comes back to the human inside.

Speaker 2:

AI is a tool, like Excel is a tool. It's great that we use it, but it can't replace us, and if we want our people to give us the best of them, then we need to give them the best of us, and that means turning up, being in person, having those hard conversations. I mean, I saw the TikTok of the girl who was videoing the woman telling her that she'd lost her job and her email and everything, her access, was being turned off immediately, and I know it's a very American way and I get that it's an at-will state, but it was just... if I'd known which company it was, and it was a consumer goods company, I would no longer buy from that company, because that's the other thing.

Speaker 2:

There's that PR impact. There's a real risk, isn't there? You would have seen this, and you know: we want to use AI, what for and why? And it's that transparency. Can you explain it? Can you tell why? So yeah, we've just got to keep championing, we've got to keep explaining. These are people we're dealing with, not widgets. And if we want to keep the heart and soul of our organizations, we need to support our people. But also, it's going to get out there. You start doing that, people are going to tell you, you're going to hear which companies it is. It's going to impact your PR. It's not going to go down well.

Speaker 1:

I mean, we're talking about it now, right, and not in a positive way, but hopefully the company will learn from it. Fingers crossed. In terms of governance of kind of using AI tools, I mean, from a very kind of purist perspective, AI is a piece of technology. Should it be that IT are the people who are the guardians of AI governance, or do you think that's a joint responsibility with IT and HR or others in organisations?

Speaker 2:

Yeah, I was thinking about this. So I think IT are necessarily leading on AI because it is a technology, and there are so many questions about what data you can put in, what happens to it, who owns the data you put in, where it sits, what the safety and security is within your organization. So I think necessarily IT need to be up there, but I think it's a joint task force. So I think it's your HR people coming from a human point of view; it's IT for your technological integrity and the technology within there.

Speaker 2:

Depending on the size of your organization will depend on what else you have in there, but I think you need to have your legal people in there for compliance. There are the European rules around AI that are coming in, and it's likely that the UK will replicate those rules. It's really important people are using it, you know, within those rules and within that governance, and that then puts us into risk and ethics. So who's managing your risk and ethics? Have you got a governance structure? Have you got a legal structure? I'm not quite sure; depending on your organization will depend on this. So I don't think it's one team. I think it's a joint governance responsibility. But right now, with emerging tech, I think IT need to be leading on implementation and potential adoption, with HR coming in really strongly on ethics, because at the end of the day, it's people that use AI.

Speaker 1:

I totally agree. I feel like I'm a complete geek, so I'm going to make it very geeky. I feel like IT get to be Captain America, and then the rest of us kind of get to choose a Marvel character that suits us.

Speaker 2:

No, I am totally in. I'm so glad that you went Marvel, not DC, for starters, because that would have been hard for me, but no, I'm totally in. I think there's something about Avengers assembling for these sorts of things, and this is it: you know, it is life-changing, and there is something about bringing those different skills and views together, and if you can get comfortable with having those different skills and views in a room, you're only going to do better. Oh my God, my English, sorry. You're only going to find a better way of implementing that in your organization, because you bring that diversity of view, diversity of viewpoint, but also understanding of what's going to...

Speaker 2:

You know, for an IT person, the latest AI agent, so agentic AI, may be the thing that turns them on, whereas for an HR person it may be: actually, this can be a coach and a mentor, and we can set this up and this can help this person with this product, and they can formulate their questions for something they're challenged with before they have that conversation, because they've got something that they feel is a private conversation. I mean, that's another question. We've heard, haven't we, that OpenAI have said that none of your conversations are private and they are Googleable, if Googleable is a word, but there is something around that.

Speaker 2:

So, but yeah, if you bring those different things together... I like the Avengers analogy. I think that's top.

Speaker 1:

Who's your favourite Marvel character? I'm Natasha Romanoff, obviously.

Speaker 2:

Oh, obviously. Why? Because she is badass, and also I think if she worked somewhere she would be a CPO. She's got the driest sense of humour, but she gets stuff done.

Speaker 1:

Love that. More of a Scarlet Witch kind of girl myself. Oh nice. Yeah, pre the book of whatever it was. Well, yeah, but she kind of gets better, doesn't she, in the comics? I'm sure she comes back and she's not as bad. I can't remember.

Speaker 2:

So the comics are, I think, the really interesting thing. What Marvel had to do, necessarily, is commercialize and make it, you know... but the stuff that's more comic-book realistic, you kind of go, oh no, that's a great character. So you look at some of the things that come in and they're not dark enough, like Kang in, oh, the time-traveller one that was on. Yes, Kang was nowhere near as evil as they are in the comic books. So I think that's the difference there. But yeah, Scarlet Witch is a fantastic character. I mean, maybe I'm Agatha, I don't know, I know she's not... oh, that's a good one. Maybe I'm Agatha at the end of WandaVision.

Speaker 1:

Yeah, good, I like her, I like her a lot. Oh, we could team up, perfect. You see, the next CIPD conference is all like cosplayed up.

Speaker 2:

Oh, Comic Con and CIPD. Woo!

Speaker 1:

Yeah, see, you are the person to suggest that to.

Speaker 2:

I love that idea.

Speaker 1:

I'll leave that to you. Oh, anyway, now I've got to be grown up again and go back to, like, my questions. Okay.

Speaker 2:

Have you used AI to make yourself into a Marvel character yet, though?

Speaker 1:

Oh, do you know what? I made myself into a Simpsons character. Nice. But not a Marvel character. Why, what have you done? Have you done it?

Speaker 2:

I haven't. I just realised that I've made my dog into one, obviously, because that's the kind of thing I do. I asked AI to create pictures of my dog at work, and I've got my dog in, like, goggles and a lab coat, a hard hat and a clipboard and all those sorts of things. But I may be going on to one of the many AI platforms afterwards and putting myself as, yeah, Agatha Harkness.

Speaker 1:

Potentially? I'm expecting an email when you do. All right. So, AI questions. We talked about it a little bit in terms of kind of entry-level roles, and I guess it's something that I'm a little bit nervous about with HR, when I put them together, AI and HR. I love technology and all the great things that it can do for us, but I do worry about, if we erode the things that our entry-level people do, that we're kind of getting rid of an opportunity to learn.

Speaker 1:

So what I mean by that is, when I started in HR 20 years ago, one of the things that I did was, you know, some of the tedious stuff, right? Like, we do all of the admin stuff, we learn all of the basics, and whilst at the time it feels tedious, actually it's so important because it's the foundation of everything that we do. If we start using AI to automate some of that stuff, I feel like that takes away some of the opportunity to learn. But what do you think? And if you do agree, how can we kind of balance that out so that we're still using a tool to make things more efficient, but also allowing people to learn in a kind of structured way?

Speaker 2:

So I think this is the balance of the human inside. So let's take a new starter, or onboarding. Traditionally, you would write that letter, somebody might pull together the pack to help them onboard, someone would check the references, someone would do the medical check, someone would get them set up on the system, and someone would spend all that time doing that. All of that still needs to happen, but a tool can be used to do the administration, and therefore the person's role becomes checking that it's working, checking that the relationship with the individual is connected. The role becomes that assurance and that auditing and monitoring to make sure it's done. You still need to understand the law. So if you're going to have someone... maternity leave is a really good example.

Speaker 2:

I am extremely concerned sometimes with some of the young people that come into HR who don't know what's happening with maternity leave. And there's been a shift on that, but that's been a shift anyway. So, you know, I'm a bit older than you, but when we started in HR, if we needed to calculate maternity, we had one of those wheels where you used to have to go through the months and those sorts of things; we didn't have a nice online tool that automatically told us. That's already shifted, that's already changed, but you're still going to have that conversation. You're still going to have the: you don't need to let us know until this date officially; you're going to get a MAT B1, but you're not going to get it till this date, so don't worry about that; this is what we'd like to know; have a think about this; if you've got any questions, let's talk about your personal circumstances. So you're still going to have all of those things and have those opportunities. You just won't have someone writing the letter themselves and having to actually do that. You could, and I'm sure there will be examples, and it will fall down and it will end up in tribunal, put a robot in to do all of that work, but you'll lose that human bit, so that you then don't have a relationship with your workers. You don't have engaged and motivated and purpose-driven workers. What you have is people accessing systems, and that might work; in some places that might be the case.

Speaker 2:

I'm also concerned about those entry-level roles, because with this democratization of knowledge and with this access to knowledge, there's a real risk that... I don't... it's not even earning your stripes, because I don't think that's important, but understanding those basics and building that foundation of practice in HR, I believe, is essential for a future career in HR, because we understand it, we've done it, and we need to support that. We just need to shift our expectations. You know, we moved away from paper files, didn't we? Everything's moved electronic. We don't have to have those massive folders and spend Friday afternoons filing anymore. So we've already changed. It's just adapting to that change and finding ways to allow people the space. It's quite exciting, because it might allow some of those people earlier in their careers to really focus in different areas in a bit more depth, because they're not bogged down with 160 pay review letters. They are actually finding other ways of doing it, so it could free things up.

Speaker 2:

What we need to do is make sure we don't remove that route, that entry route. Qualification is going to have to change. I know that the CIPD were looking at it. They're going to have to think about those qualifications and how that works. Is the Level 3 still relevant in the way that it is, if AI is going to come in and replace all of that? So that's going to have to catch up. How do we make sure that people... you know, on a slight tangent, how are we making sure that the people we're interviewing haven't done everything on AI and they are real people? I don't have a problem with someone using AI for an application form. I don't have a problem with someone using AI for a cover letter, as long as they've put the human inside and they can back it up. So I think there's all that testing and all that work. It's just going to change how we do it, and perhaps those early-career roles become more data- and insights-driven. What, we've run out of time? Oh no, I'm so sorry.

Speaker 1:

I know. No, no, no, you don't need to be sorry at all. I'm just gutted because I'm really enjoying the conversation. We've made it, like, grown-up and had a bit of Marvel; like, perfect for me, it's like my best life. I love it. If people want to borrow your brain, how can they get hold of you?

Speaker 2:

Find me on LinkedIn. I am Amanda Arrowsmith, FCIPD. There is another Amanda Arrowsmith; she's an academic, she's amazing, she's not me. But I look like this, and so you will find me on LinkedIn. You can find me on Instagram, I'm AJ Arrowsmith, and I am HR Change Expert on TikTok, but I'm dabbling in TikTok because I'm new to it.

Speaker 1:

You know what?

Speaker 2:

I've made a decision. They're going to talk about you anyway, so you might as well put on a show. So you will see my face on some of these platforms at some point.

Speaker 1:

That's my motto for life. Right, if you're going to talk about me, then let me put on a show for you.

Speaker 2:

Right, exactly.

Speaker 1:

Tell me more about your TikTok.

Speaker 2:

Oh, it isn't there yet. I've just started dabbling. I've been a lurker for years. I was a COVID TikToker, so I joined TikTok during COVID, you know, when we couldn't go anywhere and we needed interaction, to see people. So I've been a lurker for years. I use it quite a lot for my doctorate studies, so there's some really good PhD stuff on there and some fantastic information around that. But also just sometimes that mindless scrolling of just going, oh, I just need to know what Alix Earle is wearing this week, I need to know what is happening in the NFL and who's marrying who and those sorts of things, and sometimes I find it really useful for that.

Speaker 1:

Who's Alix Earle? I don't know who that is.

Speaker 2:

She's... well, no, see, I see her because she's the girlfriend of an American football player, and my husband coaches American football.

Speaker 2:

So we watch a lot of American football in our house, and so that's why. But that's the other thing that's fascinating, isn't it? Our algorithms are all different. It comes back to personalisation.

Speaker 1:

Well, unless you start going down a bit of a rabbit hole. I started looking at a certain type of thing and then I went down a rabbit hole. Oh no, careful. But I've learned: if I save videos of things that I like, then if I've messed up my algorithm and I go back and look at the saved videos, it fixes it.

Speaker 2:

Yeah, definitely. We still need that whole... yeah, is it safe for work? Should you Google this at work? Maybe not.

Speaker 1:

Yeah, maybe. That's the beauty of being an independent consultant. Thank you so much, I've absolutely loved having you on here, and, if you wouldn't mind, I'd love to invite you back again so we can do it again.

Speaker 2:

Oh yeah, you know me, I love your chat. And also, I mean, there'll be new... we do need to talk about The Marvels, because I personally loved it, but it got a lot of pushback.

Speaker 1:

Oh, I didn't... I didn't hate it, but I didn't like it either.

Speaker 2:

But if we'd like to talk about misogyny in Marvel, then I'll be up for that one.

Speaker 1:

Oh, bless your heart. Okay, so thank you so much for your time, and what we'll do is we'll arrange another chat and we'll do this again.

Speaker 2:

My pleasure. Thank you so much for having me. Thank you.