City Voices: A City & Guilds Podcast

Fear to Empowerment: How to Build an AI-Confident Workforce

• City & Guilds • Season 1 • Episode 2


Worried about AI use in your organisation? Discover the safeguarding steps every FE provider can implement today.

As AI transforms the workplace, 74% of UK workers express concerns about using AI at work, while 18% of adults lack essential digital skills. In this episode of Future Skills, hosts Bryony Kingsland and Gavin O'Meara explore the realities of AI adoption in further education with experts Rebecca Bradley and Richard Foster-Fletcher.

From AI governance to continual upskilling and reskilling, our experts tackle the urgent need to empower the FE workforce in a rapidly evolving digital landscape. Discover practical strategies for college leaders, the importance of teaching critical thinking skills to recognise AI's limitations, and the delicate balance between embracing innovation and protecting data security.

Tune in as our experts share tactical approaches that every FE organisation can implement starting today.

📢 Listen now and be part of the conversation shaping the future of skills.

For further information about the material quoted in this episode visit:

Staffing Industry Analysis

Digital Inclusion Action Plan: First Steps

Making Skills Work: The Path to Solving the Productivity Crisis - report

Listen today, or watch on YouTube

Richard Foster-Fletcher:

So somebody said to me the other day, I'll give you this quick example. He said, "We're all using the same AI system, aren't we?" I said, "No, we are not." Everybody's AI system is wildly different and producing different responses, and that means a lot as well for training and development and learners.

Gavin O'Meara:

Hi, thank you for joining us today on Future Skills. We're going to be covering AI and digital today. Future Skills is a City & Guilds and FE News production. My name is Gavin O'Meara. I'm the CEO and founder of FE News, and I'm joined by Bryony Kingsland.

Bryony Kingsland:

Hi everybody, thank you for joining us today. I'm Bryony Kingsland, head of funding and insight at City & Guilds. This is the second episode of our exciting new series, Future Skills, in collaboration with Gavin from FE News, and during this series we'll be diving deep into the skills shortages affecting various sectors critical to the UK's economic success.

Bryony Kingsland:

In this particular session we are looking at AI and digital skills. Findings from City & Guilds' recent research, Making Skills Work: The Path to Solving the Productivity Crisis, highlighted an urgent need to rethink how we upskill and reskill the UK workforce to meet the evolving demands of various sectors. The report revealed that fewer than half of working-age adults feel they left education with the right skills for their careers now and in the future, and that 91% of CEOs identify building workforce skills as crucial for boosting productivity. Those findings really emphasise the need for a more coordinated approach to bridging the skills gap. We have an amazing panel of sector experts to explore the contributing factors, and hopefully they'll give us some practical solutions to address the skills challenges currently faced in the UK. And with that, I'm going to pass back to Gavin, who is going to introduce our two guests.

Gavin O'Meara:

Yeah, no, thank you, brian. So today, really popular episode in around AI and digital We've got two amazing guests with Rebecca Bradley. We've got Richard Foster-Fletcher. Hey, richard. Hello Rebecca. Hello Rebecca Bradley. We've got Richard Foster-Fletcher. Hey Richard. Hello Eric Rebecca. Thanks for joining us. How are you both? So I'm going to dive straight in. So we've got some really meaty questions which I know you want to tackle. So a real obvious one, but a real tricky one to answer. So I'm sorry I'm asking a really difficult question, like on from the gate, is how do we continuously upskill the FE and skills workforce in a constantly evolving world around AI and digital? These things are not just like you set and forget. You've got to constantly update, richard.

Richard Foster-Fletcher:

Yeah, wonderful question. When I look at AI in any organisation, and particularly in FE, I think we have a very clear tactical use of AI and a strategic use of AI, and I don't think we should confuse the two. Tactical use of AI could be in ChatGPT, but it's about the core skills that we need when we're writing a lot of emails; they don't need to be Da Vinci works of art, do they? They're tools that we need every single day, same with certain internal communications and so on. When people talk about AI as a tool, I agree, so long as we're talking about using it in that way, or using Copilot, for example, and in the same way Gamma for presentations, or NotebookLM for these AI podcasts. But I think the strategic use of AI is far less about upskilling; it's more about the way that you interact in terms of brainstorming, really getting a sparring partner for the way that you work, and that takes a lot of time and a lot of enthusiasm. So I think we need to decide which one we're talking about. But let's start with tactical, which is clearly where we must begin, to free up some valuable time for the sector.

Richard Foster-Fletcher:

I think it's got to be top down. You know those organisations that have made a big difference around climate change and green initiatives: you go in there and you can really see that the CEO believes in it, she walks the talk, and that means everybody else knows it's real. That was a challenge in climate change too, of course. But I think there are 24 CEOs in FE that aren't active on LinkedIn. Are these likely to be our digital leaders? Possibly not. So you've got to be comfortable, I think, allowing changes to happen in an organisation that maybe you don't fully understand, and you've got to allow people to experiment. And if you're absolutely fixated on GDPR and data being shared, good for you, and let's talk about that governance side of things, for sure. But you've got to let things happen a little bit and let people beg for forgiveness too. I think I'll pause there.

Gavin O'Meara:

Well, I didn't know that 10% of all college principals are not active on LinkedIn. That's kind of wild and worrying; there's some good stuff on there. Yeah, definitely. Rebecca, how would you tackle that? How do we continuously upskill the FE and skills workforce in a constantly updating world of AI and digital?

Rebecca Bradley:

I think I'd echo what Richard says, but maybe come at it from a slightly different angle. I just find, when I'm speaking with people, that there's this overwhelm that everyone feels, because there are so many different things.

Rebecca Bradley:

Even before we started today, we were talking about how you could have a whole episode on just one tiny bit of this.

Rebecca Bradley:

So I think we should really approach it by stripping it back to those core skills that Richard was talking about, and those fundamentals, because the more we can lower that fear, the more people are going to engage with it, and the more they're engaging with it, the more we're going to be able to keep on top of what learners may know. Because there's going to be that gap as well: we're going to have instances where perhaps learners know more than tutors do, which I think is something that's really interesting to explore. So I would really approach it by simplifying it. Whether it's governance, which is really overwhelming, or whether it's actually the skills themselves, it's stripping it back to the core fundamental parts. Everything changes so quickly in this field that we need to think about the future-proofing side of things and those core skills that we can give people, so that it raises their confidence.

Bryony Kingsland:

I think that's really key, what you've just picked up on there, and it leads really nicely into the question I was going to ask Rebecca and Richard, because obviously there is that level of overwhelm. We want the FE sector and the government to adapt and to use AI, because we know it's going to transform teaching, learning and assessment. So how can the FE sector, and the government supporting them, prepare and adapt for AI?

Rebecca Bradley:

Again, I'm going to talk about overwhelm, probably with every question that comes up, because I really think that this is about empowering people.

Rebecca Bradley:

You know, as you were saying earlier, Bryony, about all of the guidance that you've read: again, it's really overwhelming.

Rebecca Bradley:

So if we're empowering people within a flexible framework and architecture, then they're going to take ownership of it better, and if they're taking ownership of it, they're going to understand it better and be more engaged with it. I've seen some really innovative practice out there, where people will have almost a wiki area that's got all of the guidance in it, but then they're inviting professionals to adapt it and use it in the way that best fits their organisation and best fits them. So it's a simplistic answer to just say "empower people", but that's what I mean by it: giving them the architecture around it and then giving them the freedom. The more they engage with it and the more they're empowered to use it, the more they're going to keep up with it and understand it, rather than it being this scary thing off to the left where we're all thinking, oh my god, I don't even know where to start with that.

Richard Foster-Fletcher:

I totally agree with what Rebecca is saying. I was talking to an FE CEO about three or four days ago, and we were talking about how to get adoption across the organisation; we're out there having these real-life conversations. I said, look, if all of your staff got the bus to work and you walked around the college now and gave them all a set of keys to a car, wouldn't they be delighted? How is this any different? This is a much faster vehicle. So we're gathering about nine enthusiasts in the college on that one, to inspire people and show them the difference between the bus and the car: look how quickly I can do these things. Again, that's the tactical case. When we look at the sector in terms of getting on the front foot, there are what, 162-odd colleges or groups, and everybody's got a back office where things could definitely be more productive. I did the sums. Well, I asked ChatGPT to do the sums, actually, and it said that if we can pull 5% of cost out of just the back offices in those organisations, that's £80 million for the sector, and in theory a lot of happier, more content people working there as well. That involves no real job losses at all, because you just look at natural attrition, you look at where you're hiring; you don't need somebody. So we can really de-risk this from an HR perspective. But what we don't want is to be like the NHS. We do not want the finger pointing at FE in 12 to 24 months from now, being told you have to be more efficient, and having the rug swept out from under us. So let's do it on our terms: let's move carefully, quickly and effectively. And, just building on what Rebecca said, we need caution as well.

Richard Foster-Fletcher:

AI has not just been around for the two and a half years since ChatGPT. AI and machine learning have been around since the 1950s. MKAI has been working in this space for nearly seven years now, and I worked on data projects for five years before that.

Richard Foster-Fletcher:

There's a lot of history with AI, and a lot of risks have been discussed over that period, with the biases and the ethics. We're now learning about something called adaptive cognitive alignment, where we're finding out that these LLMs match the intelligence level of the person using them. So somebody said to me the other day, I'll give you this quick example. He said, "We're all using the same AI system, aren't we?" I said, "No, we are not." Everybody's AI system is wildly different and producing different responses, and that means a lot as well for training and development and learners: what these tools will say to them, and how they will or won't help them in the world of work, when you can be misled by an AI. If you want to test this, just go and ask ChatGPT about a business idea and say, hey, is it a good idea? I guarantee it'll say "it's a great idea" back to you.

Bryony Kingsland:

And they can't all be great ideas! Going back to that analogy about moving from taking the bus to work to being given a set of keys and driving the car, I was thinking: there will be people there who do not know how to drive. There's that whole issue, isn't there, about people just starting to use it, because there is a big fear among some of the workforce that if they use this it could potentially steal their job, or that they might get it wrong, and there are safeguarding issues there. I understand exactly what you're saying, and I'll give an example from a person my age.

Bryony Kingsland:

We use Copilot at work and I use it all the time. But my initial reaction was, oh my word, what is this and how am I going to use it? It's getting past that first gate, which is saying: this is something completely new, and I'm not sure where to start. How do colleges and the FE sector encourage those people that are not using it yet and have that innate fear of something that is incredibly new? How do they encourage staff to actually start taking it up and really using it, to be more productive, for example?

Rebecca Bradley:

I think that comes back down to the core skills side of it as well, some of what Richard is talking about. If you're just starting out, that might feel overwhelming. I often see it in organisations: you'll go in and you'll find there's one champion in the organisation who adopted ChatGPT very early on. Of course, AI is so much more than that, and they will be the person that suddenly becomes the expert in the organisation. They're not trained, they're not certified, but they've got a personal interest in it and they probably have great skills. So those people aren't the people that we need to particularly worry about.

Rebecca Bradley:

It's the people that you're talking about, which is why I keep coming back to empowering here, because it is about filling that gap. Okay, we have these core skills and these people that are really, really interested in this, but how do we bring everyone along, whether it's learners or staff? How do we level that playing field? Again, that comes back round to those core skills that sit beyond the changes happening literally daily: the literacy, the AI literacy, and those core skills that don't change from year to year, because they are about how you take your car, or your bus, and how you drive it. That comes up so often: when people are feeling overwhelmed or worried, it's about giving them that confidence and those core beginner skills, to make sure it is a fair and level playing field.

Bryony Kingsland:

We have apprenticeship ambassadors. What we need are AI ambassadors, from what you're saying, Rebecca.

Rebecca Bradley:

Absolutely, and I think that's a core part of the governance side of things as well, sort of creeping over into that territory. As much as it's about empowering people to use it, it's also about making sure that those ambassadors are working within a framework. Because if we do have early adopters that don't have a framework to operate within, then you suddenly have the pace being set in an organisation by somebody that hasn't been trained in it, and hasn't necessarily used it in the way that the organisation would want, and that's when governance becomes really important. So we don't just say John in accounts is our expert and we're going to defer to John on every little thing; there's a framework keeping you safe behind it, whilst taking advantage of John's excellent skills and interest in it.

Bryony Kingsland:

So not a Wild West.

Rebecca Bradley:

Absolutely.

Bryony Kingsland:

Richard, is there anything you want to add to that?

Richard Foster-Fletcher:

Yeah, I would just add to that. There's perhaps the opposite problem in private organisations, with shadow AI, where people are using this extensively and not telling the organisation or their boss. Because would you want to tell your boss you can now do your job in three days rather than five? That sounds like it would have implications to me.

Richard Foster-Fletcher:

There are implications, if you said that. And I think, if I was a CEO, I would be getting on the front foot, like I said, and I would very clearly articulate pretty much the message that Rebecca's just put there to every single member of staff, and I would show the roadmap that we have around AI, what our intentions are and what the process is. I would probably show three things in there. I would say: we're going to implement the tactical tools, things like Copilot, Gamma and so on, and they will deliver, say, a 5% saving of time, not money but time, and this is how we're going to use that time to achieve this or do this, or for more training, and show the positivity there. And then, like you say, Rebecca, you would use the champions and others to show how some of those things are done. Then I would say the next phase is compliance and automation: we're looking at things like enrolment and exams and timetabling, and showing how we're going to use AI in the background to make those things simpler. And then I would say the third part is that we're looking into how we would use our own version of ChatGPT.

Richard Foster-Fletcher:

You know, we all know that the Windsor Forest Colleges Group has implemented Winnie; they're actively talking about it. I'm speaking to them anyway, doing a session with them soon, with Dan and Roddy over there, about how they've done that. They're paying £300 a month for circa 1,000 people to have access to what is, effectively, ChatGPT. So I'd be talking about the roadmap to get there: bring it in-house, make it safe, bring it under GDPR compliance. And in the meantime, go ahead and use ChatGPT; obviously we're not going to buy licences, but here's what best practice is, here's what to be careful of and not do, and just start from there. Let's settle people down, let's show that there's an understanding across the organisation and a pathway, and then over time, I think you'd be surprised by how many things happen on that pathway that you weren't expecting, above and beyond those areas, and you can adapt. It can be very, very positive, and it should be.

Bryony Kingsland:

Yeah, I think everybody that's using it can see some of the benefits. You've both mentioned governance there, and the importance of it. Can you both expand a little bit on what governance should look like, or what's important when it comes to governance in FE?

Richard Foster-Fletcher:

Yeah, I mean, this is a huge area, and just to frame it: of course, we have a government here in the UK that has stepped back from AI governance and declined to sign safety initiatives and so on. So again, we've got to get on the front foot and think about what it means to us. There's a huge amount in this; as we said, you could spend a week at a conference discussing AI governance in education, when you're thinking about risk and protecting those using these systems, all of the legal elements around data storage, and of course students and how they may or may not be using this to accelerate their work. But I think it comes down to being proactive, and having a realistic understanding of the world of work and what's likely to happen. Because if we're talking about ethics, for me it would be unethical to ban ChatGPT as well; you've got to find the middle ground in there. But governance, just to give it a quick round robin, and Rebecca maybe could pick up on some of the key points: you've got inherent biases in the system, because it's trained on data which is biased. Take ChatGPT, for example. It's trained on the world's information on the web. I think 60% of the data on the web is English, and about the same proportion of all traffic on the web goes to American websites. So you've got a heavy bias towards US English in there, and Western values and customs and so on. We've been dealing with that from the start. Then we've had all the hallucinations, all the stuff it's been making up. When you ask it for images of a CEO, it always shows you a man, doesn't it? And if you ask for an image of a nurse, it always shows you a woman. So we can see these biases so easily. And I think the great, simple example is when you're writing a document and suddenly it's switched over to American English and you've got all the zeds coming in, and you go, hey, why have you done that? I told you to write in British English. The reason is that it's always defaulting back to its main state, the sum of the internet, the average, and that's why it keeps going back: no, I want to write in American, I want to write in American. And think about what that would mean for somebody working in Turkey or Indonesia. These systems are not fit for purpose in a lot of cases. And then we have the purposeful biases that we're hearing about, the way they try to align ChatGPT to your worldview. So if you're very pro-Donald Trump, and it picks up on that, it will be very supportive of that. If you're not, it won't.

Richard Foster-Fletcher:

And the problem is, you think it's the truth, just like social media, right? You think what you're seeing is the truth, and that's why I keep saying this. My friend said to me that all board directors should be male, because ChatGPT had told him this three times. He challenged it three times, and it came back and said no, only men are suitable to be directors. So he said, "So it's the truth." And when I picked myself up off the floor, I said, no, what are you talking about? It's lying to you. And so, to hand over to Becky on this, the point here is that this is a statistical model, right? It's a machine learning platform built on a huge corpus of data that predicts the next pixel or character. To put it simply, it has no concept of language; that's why it can write in Swahili and German and English just as easily, because it's just predicting the next character, nothing else. It knows nothing about physics or maths or truth, and we need to understand that. We need to know what this beast is as we use it, so that we contain it.

Bryony Kingsland:

That's really interesting. I think there are people out there that think AI is ethical, but it's really not, is it? Because it's based on information that's been put into the system, onto the internet, that we already know is biased, as you say. I've always been aware of bias, but I was not aware of the extent of it. When it actually comes back and says to you, oh well, a CEO should be a white male, that's astounding, realistically, when you think about it. Coming from an organisation that has really strong DEI protocols and a strong approach to diversity in how we work as a business, it's quite shocking to hear something like that. Rebecca, what's your view on governance, and how can the FE sector and training providers put in place a governance structure that works for them?

Rebecca Bradley:

Again, I think it's a really overwhelming subject. If you think of all the guidance that's out there, on top of everything else that everyone's got to do, it can feel like a burdensome task. But the governance that you already have in place in an organisation is the complete guide to how to set this up; your culture will lead how to set this up. Good governance allows, again, a bit of flexibility of interpretation, so that it's not autocratic and doesn't feel restrictive. But you have to think of the core things: the security of the system, and the ethics operating within it. You mentioned a system that was bespoke to a college. Think about what information is going into that to build it and train it in the first place, because public large language models are trained on the data that's already out there, but with bespoke systems you will be feeding in what goes into them. So your policy, your culture, all of those things that are important would be feeding that system; you've actually probably got more control there than you'd have anywhere else. Then privacy of data: people thinking about the data they're putting in and the system they've got. Depending on what level of ChatGPT you're using, that data may be used as training data, or it may be private to you. And then transparency: simple policies around how you're using it and what you're using it for. Making those statements for your learners and your staff is all part of a cultural piece that should be there within the organisation anyway; it's just rounding that out so that it matches the voice of your organisation.

Rebecca Bradley:

And then bias mitigation. We know it's an echo chamber, so this again comes right back round to those core skills, about teaching everybody that that's what it will do: it will please you. I had a conversation with ChatGPT this week where it told me something so fantastic that I felt almost a little bit embarrassed by how great it said it was. So I always tend to have a prompt in there that says: please cover off my blind spots; pretend you don't know me; tell me the other side of this; what is the opposite of this; what am I not thinking about? And we know that with young people those critical thinking skills can be somewhat lacking, so it's really important to teach that, and to have it as a policy-driven moment that you're going to mitigate that bias in whatever way is relevant for your organisation.

Rebecca Bradley:

I think that's also an important key point: there are these fundamental pieces that will be the same for every organisation, but the culture is the part that personalises it, and that's where the flex needs to be. Even within departments there may be nuances that are really important to allow for, and you also want to allow for creativity, because a concern with the rules around this is that you will miss out on some of the amazing things that LLMs and personal systems can do if it's too restrictive.

Rebecca Bradley:

So it's about updating it regularly: looking at the outputs, auditing what's coming out of it so that you can see things over time, using those checks and balances in the background, and watching how you're feeding the information that goes into it so that you can see what the outputs are. I always say it's just as important to know what the output is as what the input is. This is something that comes up time and again with how you're assessing people, and that can be staff as well as learners: are we looking at the outcome, or are we looking at what goes into it in the first place? Because if it's echoing what's already happening, then you need to look at what's going into it. It isn't just the information that's already out there; it could also be what's happening in your organisation and being put into it.

Bryony Kingsland:

Yeah, in the UK I think we tend to teach our learners to pass an exam, but when it comes to the use of AI, it's about teaching them about learning. It's about teaching them how to learn, how to think, and how to think objectively. If we're going to use AI effectively, we really need to teach our young learners and young people today how to question, how to learn and how to think for themselves, rather than expecting AI to almost think for them and create things that, as you say, could be hallucinated. I think that's a really key point in this. Richard, sorry, you were going to say something?

Richard Foster-Fletcher:

Yeah, and I think that one-shot prompting, where you might just use it tactically to get a quick email, is completely different to what you're describing, Bryony, which is also a sort of metacognition. And I'm just thinking back: I was writing an article about this adaptive cognitive alignment, and I worked on it for three days with AI. It probably took me six to eight hours, and in the end I binned it; I just couldn't get it to where I wanted it. So it still takes an enormous amount of time to get to really, really high-quality output with AI, and I think sometimes that gets lost. And with all of that, what concerns me as well, I'll give you a quick side example.

Richard Foster-Fletcher:

The Jaguar Land Rover CEO was asked about their new tagline for the EV world, and he said, I don't know what it is, but I'll know it when they tell it to me. Whether that's true or not, the point he's making is that the CEO's job is to know when it's right and then to press go. But how do you know that if you're a more junior person? How do you know it's not leading you down the wrong path? Becky, you made some great points about prompting, but I just think it's really hard if you don't know what good looks like, if you don't know what your boss is going to want to see. ChatGPT isn't going to help you; it's going to lie to you. It's going to say, that's a great piece of work, and then you're going to hand it in and they're going to go...

Gavin O'Meara:

"This is awful!" Guys, I have a massive question, basically because we haven't got long left. We're looking at AI and productivity. So if we use, say, tools like Fathom, which are note-takers or video recorders, a burning question for a lot of people is around governance, safeguarding and GDPR. What are your thoughts on using productivity tools like Fathom in, I suppose, open and closed systems? We've got like 20 seconds. Rebecca, what would be your tips?

Rebecca Bradley:

At this point, I would be super cautious about it. I've had a very bad experience with that happening, where it brought up a button that then played porn in a live stream. It was awful. So again, it's going to come back to your rules. Test it out first; test it out in an environment where you can see what's happening, before you actually do it with learners. That would be my best advice, having had a bad experience myself. Richard, what would you say?

Richard Foster-Fletcher:

I mean, we're at the stage now where you shouldn't even answer your phone. I'm telling you, we've had bots try to come in, and we don't know what they're doing. I have bots trying to connect with me on LinkedIn. They say they work for DeepMind; they don't. And I think that is somebody's bot going out there to get data, mine it and bring it back to somebody. That's terrifying. Who's doing that? But we've had this agreement with technology for so long, with Google for so long: they will take your data, but in return you get this. It's the pact, isn't it? Do you want it or don't you? Fireflies and the others, they're fantastic tools, and we need those note-takers, but they're not GDPR compliant. You've just got to make hard choices at every stage here. Do you want the value or don't you? Are you willing to accept the risk? Understand the risk first.

Gavin O'Meara:

Yeah, 100%. We're way, way out of time, but I just wanted to say thank you, guys, for this. The one thing I would probably say, as everyone said here, is guardrails, on and off; that's what we talk about with AI. Think about whether you're in an open or closed system, make sure the governance is there, and establish what everyone is happy with; that's probably your best bet. There's work to be done in the background before you switch on Fathom, which seems like it could be a really cool tool to use. Guys, I can't believe the time has disappeared. We could have done with an hour, or forever! We've gone over, and I'm really sorry, everyone, that we've gone over a bit, but at the same time, there are some really cool things here that hopefully you'll be able to take back into your organisations and think about, with regards to governance and safeguarding around AI, and also how we can get ambassadors into our organisations to drive adoption while thinking about safe use of AI.

Gavin O'Meara:

Our next episode is after the Easter break, on the 23rd of April, when we're going to be looking at green jobs, which is going to be a really cool episode of Future Skills as well. Rebecca, Richard, I want to say thank you so much. Bryony, thank you so much for a really epic episode. We'll see you all really soon. See you on the 23rd. Thank you.