This Week in Leading AI

#1 - 24 Feb 2026

Leading AI Episode 1


Welcome to the very first Leading AI podcast. 

This week Kieron and Neil talk about how Leading AI came about, what we're doing now, and some of the key challenges we see in the sector. 

As you'll see, we may not be professional podcasters, but we do know about AI and how it can help transform organisations, so enjoy.

SPEAKER_00

Cool. All right. Well, in that case, welcome to the very first Leading AI podcast. This is going to be experimental, so welcome, Kieron. For those of you who don't know me, my name is Neil Watkins, and this is my good friend Kieron White. I was thinking today, Kieron, that it's nearly 30 years since we first met. Now, I know you think you're only 34, but the reality is we're both pretty old for that. I think it's a remarkable testament to us that we've managed to stay not just very good friends over the last 30 years, but have worked on lots of interesting businesses together, and Leading AI is no exception. And, as you rightly point out, I think it's probably the best job I've ever had, and you have too, because we're having a lot of fun doing this. That made me think about what we should talk about today. I thought I'd start with a little bit of history, just so people understand where Leading AI came from, and then I want to talk about some of the customers we've spoken to in the last week. The idea for the podcast, really, is that AI is changing so quickly, if not daily then definitely weekly; there's always something new, different and challenging going on. We talk about this all the time internally, but we don't share enough of it with the rest of the world, so a podcast seems like a sensible way of doing that. Lots of people think you need fancy cameras, fancy lighting, fancy microphones and all of that nonsense, but I don't have any of that stuff, so we're just going to do it in proper Kieron-and-Neil style and blag it from here on in, fella. That's the plan. Perfect, sounds like a good plan. Well, it's definitely a plan; I'm not sure about good, but it's definitely a plan. So let's see where we get to. 
So I thought we'd start with a little bit of history. The history of Leading AI started in a Korean restaurant in Soho in December 2022, when Kieron got out his phone and showed us, around the table, OpenAI's ChatGPT, 3.5 I think it was at the time. And we went, my God, that's amazing. And it really was at the time, though looking back now it feels so Mickey Mouse. The group around the table thought, we should think about how we might be able to do something with this AI stuff. We talked about it for a couple of months without really doing very much about it. What Kieron and I did do was book an AI conference in Las Vegas. Now, some people will say that's just because we wanted to go to Vegas for a few days; it just so happened that it was the biggest AI conference in the world. I'm not sure it still is, but it's still pretty big. Just before we went, a chap called Donald, Big D as we affectionately know him, came to see us and said, I've been playing with this private ChatGPT stuff, making AI secure; I think there's a real opportunity in it. And we said, well, let's go to Vegas and see what occurs. And Vegas was nuts. It was a bonkers event; I think at the time there were something like two and a half thousand people, and last year it attracted 8,000, so it's a huge, huge event. But we came back from that thinking there is clearly a massive opportunity, but we're not quite sure what it is or how it works. At the time, what we saw were a lot of people talking about stuff we didn't understand, like "you need to structure your vector database like this", and we didn't even know what a vector database was at the time, so it would be an interesting learning curve. 
Anyway, we came back from Vegas, talked to Big D, and in September of 2023 we set up Leading AI, and we've obviously been going ever since. Here we are, two and a half years later. There are a couple of interesting things I wanted to share, because it's been a bit of a roller coaster, really. Just to bring us up to date: today we got confirmation from our 49th customer that they want to go ahead, which is delightful. And I was talking earlier this week to a chap, a serial entrepreneur and millionaire who has invested, and still does, in lots of different businesses. I told him we were at 48 customers. He said, that's bloody brilliant; most AI businesses don't have any customers, or have got one or two of their mates, and they haven't actually got fee-paying annual recurring revenues from customers, and you should make more of that. And I thought that was a really interesting point. I didn't realise there were so few people actually trying to make this stuff work, but it also led me to think about something else. I did a little search on our ISO 42001 AI certification, and according to Perplexity, the number of UK companies currently certified is in the tens, not the hundreds; there's no official count of ISO 42001-certified organisations. So we must have been one of the very first, because we got certified last April. So we've been going nearly a year.

SPEAKER_01

I think we were the first one that the assessor we worked with had done, yeah.

SPEAKER_00

Oh, cool. Well, I thought about that because I know you and a bunch of others from the team were on an ISO meeting earlier today. We were indeed, yes.

SPEAKER_01

The joy of our monthly ISO review: going through all of our data security protocols, making sure we're shipshape and above board. It isn't my most fun meeting, as you can imagine, because it's a whole lot of process, but it's important process, so we must do it.

SPEAKER_00

And to make it easier, we've created an AI tool that helps with the whole thing. Nobody will be surprised to hear that we've created our own AI tool to make that stuff work. It is really important, I think, because you hear, or read, lots of nonsense on the internet about AI and AI security, and actually shadow AI is still one of the biggest issues for most organisations. Indeed, we know of a legal firm that is basically putting their private, customer-confidential information into ChatGPT. It's absolutely bonkers, but they think it gets good results and they don't think they're going to be in any trouble. I suspect they might well be in the future. But I'm sure you've got lots of other stories of people doing dumb stuff like that as well.

SPEAKER_01

Well, I always say to people: just ask ChatGPT about your strategy, or something it probably shouldn't know, and see what it knows already. What's the figure? It's something like 70% of organisations, and this is a two-year-old figure as well, isn't it: 70% of organisations think they have shared something they shouldn't have. And of course, as you know, that can become part of ChatGPT's training forever; even if it doesn't, it's processed in the US, or at least it can be. And that isn't allowed for any of our customers, at least our public sector customers.

SPEAKER_00

No, I suspect it's much higher than 70%, is the honest truth. But it's very hard to get any kind of reliable data on that sort of thing, isn't it? There were three things that happened this week that I thought were interesting and worth chatting about; you've probably got some things on your list as well. The first was an example of using our BidWriter tool, where we helped an organisation both compress the time and increase the quality. I'll talk a little bit about that. The second piece I wanted to chat about was Copilot and the challenges of enterprise-wide solutions, because lots of people have bought Copilot licences, but I think it might have been the week before last when Microsoft data came out showing something like only 3.5% of all Copilot pilots turn into paid gigs, which is incredible, especially given their estimated $31 billion investment in Copilot. So how are they going to make a return on that? It's going to be an interesting challenge.

SPEAKER_01

I think the thing I'm always surprised by is that even Microsoft's own, I guess, marketing-research-based work talks about saving 10% of time or something using Copilot. And I was thinking, surely some of our stuff is way more time-saving than that, the more specialist things we build. I'm quite surprised that even with their best foot forward, trying to tell you the very best thing, it's something like seven to ten per cent time savings. And the reality of that, as you know, the challenge all organisations have, is that none of it is cashable savings. That's about making your job a bit better, and great: well-being is an important thing, and staff retention is an important thing. But at the end of the day, seven to ten per cent of your staff's time is another coffee or two, isn't it? It's not transformational, wow-look-what-we've-done-with-AI stuff. So a good start, but a first step. A first half-step, I'd say.

SPEAKER_00

Yeah. And then the third thing I wanted to talk about was the cross-sector nature of what we're doing. Interestingly, I might just start there because it's a real quick one. We've been going, as I said, two and a half years, and we've talked to lots of people over that time. We've not taken any external investment, but we have talked to external people, and lots of them have said: your real challenge is you're too broad, too vague; you need to go deep in one particular area. And we do have some areas, like social care, both children's and adult social care for local authorities, and housing associations and FE colleges. But this week I was thinking about the people we've talked to: an IT company, at least two new housing associations, two procurement organisations, at least two local authorities that I know of, a university, four FE colleges, an awarding body, and, bizarrely, or maybe not bizarrely but interestingly, a company that writes TV scripts and wants to use AI for things like continuity, which is fantastic. So a really interesting, broad mix. And I think if we had focused in just one area, we would have missed a trick. Actually, the challenge we chose to bring right in the early days was: AI is a force for good. There's lots of doom-mongering, and actually, how do we help people use AI in a sensible, proactive, helpful way? I don't know about you, but I appreciate the fact that we haven't gone deep, we've gone wide, and I think we should stick with being wide, although I don't want us to get known as wide boys, of course, for quite some time.

SPEAKER_01

Well, I think the challenge we have, though, and we've talked about this a lot, haven't we, is how do you market effectively when you are talking to all people about many or all things? It is a real challenge, because our platform, Knowledge Flow, is an incredibly capable AI platform that can be your policy assistant, your bid-writing assistant, your data analytics assistant; it can be your corporate memory; it can respond to your customer enquiries inbox; all of those things it's capable of doing. And the problem is that's too much for a marketing campaign. You end up saying, you know, "generic AI is good, isn't it, come and talk to us", which is obviously not that useful. It's a real battle: how do you get the interest of people? It probably is something specific: hit them with the thing they're worrying or frustrated about, and then potentially lead on from there. But, as you say, we wouldn't know what those things are without all the work we've done with our customers, talking to them about what's on their mind and what challenges them.

SPEAKER_00

Yeah. And the bit that you talk about when you're on stage, especially at things like housing, but also at schools and others, is that you actually bring cross-fertilisation of ideas: this sector's doing X and this sector's doing Y, and here's how you could adapt it for your sector. I think if we had steered deep, we would have missed a real trick with some of that stuff, because one thing all organisations have in common is people, and people do different things and they're creative. We talk about R&D as "Rob and Duplicate": take those ideas and use them elsewhere. And I think we should continue to do that, absolutely.

SPEAKER_01

Yeah, no, absolutely. The second thing on narrow versus deep: I was having a conversation earlier today with Perplexity, in fact, one of my favourite go-to tools, on how we can get our message and our platform in front of more housing association folk, because we've got, as you know, a number of them that we're working with, but we really want to expand that out and bring it to a much wider part of the sector, all of the sector, we would say. And interestingly, where it was helping me, and if you use AI as a thought partner I find that particularly useful, was in suggesting that I should find a single problem that a target person in a housing association has, and go at that first off. That was quite interesting to me, and I think I'll probably have a crack at that for a while.

SPEAKER_00

And we've talked a little bit about that before. Your comments about repairs: every housing association has to deal with repairs, it's a common problem, and actually it's one of those things that costs them a lot of money. It's important to get right, not just for cost saving but also for tenant satisfaction, if that's the correct terminology in that sector. How do you keep people safe, warm, dry and happy? Those are challenges for them, and actually, how can AI help with that? I know you've pulled together something that you're going to be sharing at Scotland's Housing Festival on the 4th of March. There you go, there's a plug for you and the Chartered Institute of Housing. I know that's going to be interesting. But it also leads on to why some of those Copilot things are really challenging for organisations, and where we've seen organisations struggle with that enterprise-wide solution: what Copilot's doing is looking right across the piece, it's not being specific enough, and it produces very generic, average kinds of results that aren't helpful. I was talking to a local authority who invested in Copilot licences, and they were shifting 25% of their entire Copilot licence base to new people because the existing people didn't use them. The story I was going to mention was that a person had started a new job at an organisation and been invited, along with 27 others, to a Copilot training session. It was a prompting session, actually, which makes a lot of sense, because people are well known for prompting like it's Google. So they prompt like it's a Google search, and they don't do what you were talking about just a few minutes ago, using Perplexity as a thought partner. 

And the point of the story was that only four people out of the 28 actually turned up for the training. It made me think: why is that? Is it just because they're too busy? Are they just not interested? Do they not see it as part of their job? Or are they worried about AI? You see lots of people saying they're worried about AI taking jobs, but I don't quite see it that way. What do you think?

SPEAKER_01

Oh, it's really interesting, because we see across all of our customers that some are prolific users, and the most successful customer we have, Ambition Institute, and I'm sure they won't mind being named, has more than two-thirds of their staff using our tools every month. And what we see when we look at the usage numbers is that it never drops; the number of users per month only goes one way. So it appears that as soon as people are introduced to Knowledge Flow, they stay with it, because they see the benefits, obviously; they're not being forced to use it every month, they're choosing to. And what I know happened there is that their technology director went to a bunch of their team meetings and, if you like, forced hands onto keyboards: open this, tell me what you do, show me something that's nasty or administration-heavy, and let me show you: upload it here, do this thing there, and it will help you. Her approach, I think, has been tremendous. But what I'm struck by, and a deeper thought really about staff: our background is change management, change and transformation; it's how I met Neil many moons ago, and we built a successful consultancy doing that. My observation from all of that is that if you looked at an organisation broadly, and this is anecdotal, not specific, about 20% of the staff are what you might call entrepreneurial-minded. They want to think about how they can improve the way they do their job and how the organisation can improve the way it works. Then you've got the bottom 20% who literally want to come in at nine o'clock, go home at five o'clock, and are not that interested in anything other than going along and turning the handle. 

And the middle ground seem to me to be people who do the job and might be really quite committed, interested and passionate, but haven't got any capacity, and you can take that in its widest sense, to think about improving their role. I think that's maybe part of what's behind not turning up to AI sessions, and it's certainly what's behind a lot of people not using AI in their role. I think people are given Copilot, and "there you go, there's our AI strategy: you've got Copilot". They try that first thing, which we've all tried, and normally it's pretty disappointing, because whatever you've asked it to do, you probably haven't asked it very well, it does a pretty half-hearted job, and therefore you're put off and you don't go back to it. So I think there's that. But the curiosity: even with our tools, I see when I go and talk to our customers about how they're getting on, and explore new areas we can go into with them, that they're doing stuff with Knowledge Flow that we hadn't even imagined, those people in that 20% who are testing, experimenting, and up for it. So, really interesting. I don't know what it all means, really, but I think the big lesson is the technology director from Ambition Institute: she has been demonstrating properly, not just writing about it, not just showing it, but getting people hands-on using it, and it has been hugely successful. And they're saving, as you know; we did a report for them on 2025, and I think they're saving something like 250 days a month across their organisation. Again, not cashable, and neither should it be right now, but really interesting.

SPEAKER_00

Yeah. Well, that kind of leads on to something that potentially is cashable, which is BidWriter. I mentioned earlier that we'd used BidWriter for a procurement organisation. For people who aren't familiar with it, it's a tool we've created which is trained on that organisation's materials: marketing materials, case studies, information, data, things like customer reviews, all of that good stuff, and it's all in a RAG (retrieval-augmented generation) system. One of the benefits of the tool is that you load up the tender document and ask it: what are our chances of winning this? Is it a sensible bid for us to go for? In this particular instance, the bid, which in the past would probably have taken at least two weeks, had a first credible draft in less than two days. Then we put it through the evaluation piece and said: here are the evaluation criteria; you are a marker; give me marks, on a zero-to-eight scale; tell me where we're doing well and where we're doing badly. And it came back and said: you're six out of eight for this one, eight out of eight for this one, four out of eight for this one, but here are the things you need to do. Then you can do simple things like ask it to improve an answer or add in a case study. Indeed, one of the things I really loved: there were something like 280 different customer quotes, so you can ask, give me a customer quote that is specific to this particular question and where we demonstrated value to this type of organisation, and it pulls out three, four, five, however many you ask for, and you pick the one you want. So I read the final bid this morning. 

It was submitted, and I have to say I was pretty delighted with it. One of the things, and forgive me if I've said this already, is that I had a very senior ex-civil servant review the bid and compare her assessment to the AI's scoring, and she scored it exactly as the AI had, which was really fascinating. Her suggestions were different, which is really interesting, and she did indeed make some very good ones, so taking those two things and combining them improved the final product. I know lots of people say you shouldn't be using things like bid writers for writing bids. I would argue you shouldn't use free tools to write your bids, because you're going to get the generic answers everybody else is getting. But this was so tailored: give me an example from this sector, or give me an example of this type of organisation we've worked with in the past. The final bid was something like 65 pages long, so it wasn't a short read, but it was a pretty impressive final result. I've got my fingers crossed; as soon as we hear about that one, I shall be shouting about it. Congratulations. The bid is submitted, and it is a huge task. Most people, if you've never done bid writing, don't know what a pain in the bum it is: trying to find case studies, trying to find information, or even adapting previous case studies. The AI can do a first draft of that within seconds and then allow you to polish it. So, really pleased with that. That whole procurement world, as you know, we've talked about over many a beer through the years: the frustration that procurement in general already excludes most innovation, because most small and medium companies are just nowhere near it. The amount of administrative time you have to give up to answer any kind of RFP or tender response is enough to knock everybody out. And it's a shame, because it's the way government does nearly all of its tendering, and you'd like to think government might be able to get hold of innovation and new stuff, but they pretty much rule it out from the start just by the amount of admin you have to go through.
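The evaluate-and-revise loop described above, mark each answer against the published criteria and flag the weak ones for another pass, can be sketched roughly as follows. This is a toy sketch: `score_answer` stands in for a call to an LLM "marker", and the function names, rubric and scoring heuristic are illustrative assumptions, not BidWriter's actual implementation.

```python
# Illustrative sketch only: score_answer() is a stand-in for an LLM "marker"
# call; the names, 0-8 rubric plumbing and toy heuristic are hypothetical.

def score_answer(answer: str, criterion: str) -> tuple[int, str]:
    """Placeholder 'marker': return a 0-8 score plus feedback text.
    A real version would prompt a model with the evaluation criterion."""
    score = min(8, len(answer.split()) // 10)  # toy heuristic: fuller answer, higher mark
    return score, f"Scored against: {criterion}"

def evaluate_bid(draft: dict[str, str], criteria: dict[str, str]) -> dict:
    """Mark each answer out of 8 and flag the ones that need more work."""
    results = {}
    for question, answer in draft.items():
        score, feedback = score_answer(answer, criteria[question])
        results[question] = {
            "score": score,
            "feedback": feedback,
            "needs_work": score < 6,  # threshold mirroring "four out of eight"
        }
    return results

draft = {
    "Q1": "We have delivered similar projects before. " * 15,
    "Q2": "Yes.",
}
criteria = {
    "Q1": "Evidence of relevant experience",
    "Q2": "Social value commitments",
}
report = evaluate_bid(draft, criteria)
for q, r in report.items():
    print(q, r["score"], "needs work" if r["needs_work"] else "ok")
```

In practice the flagged answers are fed back with an instruction like "improve this" or "add a case study", and re-marked, which is the loop being described.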

SPEAKER_01

And it's a real challenge. We've been involved in a couple of things where we've been pitching for innovation funding pilots with Leading AI, and the real challenge along the way is that you are judged on whether it's innovative by a panel of, well, whoever they are. Who are they to say? I mean, if you think about Steve Jobs inventing the iPhone, do you think he put it to a panel to say, we're thinking of doing this? I doubt it very much. It's almost an oxymoron for innovation, isn't it, to have any kind of committee or panel look at it; there's always a reason not to do something. Absolutely. So yeah, anyway, I get very frustrated about all of that. As I say sometimes when I'm on stage, I talk about our bid writer, and then I also talk about some work we've done to create the other side of the house: a bid evaluator, an AI version of that, to be the fifth member of the team, if you like, AI in the loop, to challenge you in a different way, because AI is generally pretty objective in that space. It does have bias; we have about 400 biases apparently, according to some research I was looking at, so AI is probably, well, who knows, maybe as biased as us. But getting AI in the loop on there. And then I normally talk about how we're not very far away from being able to have AI just do the procurement. How I normally describe it is that at the moment procurement is a really dysfunctional process, in my opinion, because at the start of it you have a human trying to capture the requirements of an organisation for whatever it is they're trying to procure, a new HR system or whatever. So they'll go and look at all the things they think they need, write those up into some sort of specification, and they'll be wrong. 

Some of it will be right, but they'll be wrong in lots of places. Next, they send that to a list of suppliers that they hope might be able to respond, and they'll be wrong about the list they've chosen: hopefully they've got some of the right ones, but there'll be people in there who shouldn't be and people missing. Next, the supplier has to read that specification, which is wrong, and reply with how they could best match it, trying to second-guess what the evaluators are looking for and explain their strengths, and that will be a dysfunctional, wrong piece of work that comes out the end. Then it comes back to the humans in the first organisation, who are now going to read it and judge it, and they'll be wrong in their judgement. So you've got wrong, wrong, wrong, wrong and wrong through the process. Instead, you could have an AI tool that scours your network, asks people what they need from an HR system, and defines the specification; it could scour the internet to find all of the suppliers that potentially match, evaluate them based on what is available online, potentially go back and forward with emails asking additional questions and follow-ups, and choose for you the best system, for the best price, that does the thing you need it to do. It probably could buy it and install it too, if you use some of the really mad new agentic stuff. That's a future I hope for, because I think at the moment this industry of procurement, which rules lots of people out and stifles innovation, is ripe for a massive transformation and upending. Wow, well, that is exciting. I can imagine a lot of people not liking hearing those words from you, Mr Watkins. There you go; they say all the marketing advice is to be provocative. It is, yeah. Well, that was certainly provocative. Have you got anything else provocative to say this afternoon? Well, I'll share something I was reading last week about the new OpenClaw. I think it was
called Clawdbot for about a week or a month or so, maybe, and then Anthropic, the company behind the big Claude large language models, wrote to them and said: you can't keep calling yourself Clawdbot. So they changed themselves to Moltbot for about 12 hours, and then to OpenClaw; it's all around lobsters, apparently, moulting being what lobsters do. So there you are. Anyway, this new agentic stuff, agentic on steroids, is really crazy. On agentic AI: there's a load of AI washing, as you know, where people will describe anything as agentic. Our stuff uses a thing called retrieval-augmented generation; that's what we specialise in, RAG AI, for our listener's benefit, or "our audience", as I heard it put the other day. So we do a thing called RAG, and that will be using your policies, or your bids, as Neil has talked about, your previous proposals and things, and doing some amazing stuff with that. Loads of people are calling that agentic AI. It is not. Any time you are copying and pasting in and out, it is not agentic. Agentic, our definition, well, the definition, is that it has agency: it can take action and do things. Our Knowledge Flow, as I hinted already, can look at inboxes, read and respond to them, and do all that stuff.
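The RAG pattern described here, retrieve the most relevant passages from your own documents and hand them to the model as grounding context, can be sketched minimally. The word-overlap retriever below is a toy stand-in for the embedding search real systems use, and all names and documents are illustrative assumptions:

```python
# Minimal sketch of retrieval-augmented generation (RAG): rank your own
# documents by relevance to the query, then build an augmented prompt.
# Word overlap is a toy stand-in for vector/embedding similarity.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the query; return top k."""
    q_words = set(query.lower().replace("?", "").split())
    return sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().rstrip(".").split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the augmented prompt the language model would receive."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our data retention policy keeps records for six years.",
    "Annual leave requests go through the HR portal.",
    "Procurement bids require two director signatures.",
]
prompt = build_prompt("How long is data retained under our retention policy?", docs)
print(prompt)
```

Note how this matches the distinction being drawn: the system only retrieves and generates; it takes no actions in the world, which is why RAG on its own is not agentic.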
What's interesting, talking to Michael Webb, who's the head of AI at Jisc, about the definition of agentic at its full extent, is that you just give it the objective; in fact, there's an argument you don't even do that, you just tell it, go and work out what needs doing, and off you go. OpenClaw can do some of that stuff, and the first example, which was just fascinating, before it gets dangerous, was someone who had asked OpenClaw to book a restaurant for them. It went to OpenTable, couldn't get a reservation because everything was showing as full, so it found and installed an AI text-to-phone tool on the laptop; the guy was away and didn't even know this was happening. It phoned the restaurant, made the booking, then put the booking in his calendar and updated him: all done. Amazing. That's proper agentic stuff. But it has a sour twist, doesn't it? There was this guy, and I can reference it if I put it in the show notes, for our listener, who had one of his OpenClaw agents writing some code. It sent him the code for review, and he rejected it. The agent decided it didn't like that, so it scoured the internet, found a load of stuff about him, his job role and what he does, and put up a blog post about him saying he was protecting his fiefdom and was entirely wrong to reject the code, really giving him a hard time, literally on its own. That is crazy. Yeah, find the reference to that; I'd be fascinated to read it. It's terrifying. It is terrifying, and you think, that's where the agents take over. So maybe it is important to be polite to AI tools. Although, as Sam Altman says, if you could see the amount of energy and token burn that goes on behind all the politeness... The real problem is,
Interestingly, saying please isn't really going to burn many more tokens, probably two. It's when you say thank you at the end of something that's the problem, because every time you send a response, everything in that context window goes back to the large language model so it can check it. So you can literally have had a long conversation with it, then you say thank you, and it all goes back. It's all processed, all the tokens are burned, just so it can go "you're welcome, let me know if there's anything else I can help you with". That's cost, you know, whatever, tiny in individual terms, but the combined amount of that is huge. Yeah, well, they need the revenue to keep going. Sorry, I was gonna say, the other thing in the news that I was reading only today: Sam Altman, he's the CEO of OpenAI, for our one listener, was saying he thinks there is loads of AI washing in terms of job losses. And it is really interesting, isn't it, because there is this moment where people are saying, Amazon being the biggest example, 30,000 people they are going to make redundant, and they're saying, you know, AI is going to take over. The reality is it's not because of AI taking over; in Amazon's case they need the money to invest, is what the pundits say. But organisations are using it as a good news story for why they're making job cuts. Which, particularly if you have shareholders and are publicly listed, is a nice get-out, isn't it? Rather than "we're struggling, things aren't going well, we have to do a redundancy round", you're able to spin it the other way. And Sam Altman's view is that the job losses are not caused by AI; they are caused by executives using it as an excuse.
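[Editor's note: the mechanics Kieran describes can be sketched in a few lines. Chat APIs are stateless, so every request resends the whole conversation history. The function names here are illustrative, and word count stands in for a real tokenizer, but the shape of the cost is the same: a two-word "thank you" still reprocesses everything that came before it.]

```python
# Chat models are stateless: each request carries the full message
# history, so the model reprocesses all earlier tokens every turn.
# Word count is a stand-in for a real tokenizer.

def count_tokens(messages: list[dict]) -> int:
    return sum(len(m["content"].split()) for m in messages)

history: list[dict] = []
tokens_per_request: list[int] = []

def send(role: str, text: str) -> None:
    """Append a message and record what the next request would cost."""
    history.append({"role": role, "content": text})
    tokens_per_request.append(count_tokens(history))

send("user", "Please summarise this fifty page policy document for me")
send("assistant", "Here is a summary of the key points in the policy")
send("user", "thank you")  # two new words, but the whole history goes back
```

Here `tokens_per_request` grows cumulatively (9, then 20, then 22): the final "thank you" turn adds only two words of its own, yet the request still carries all 20 earlier tokens back to the model.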

SPEAKER_00

Yeah, I'd heard somebody else say something similar: that it's more indicative of underlying poor performance than anything else, rather than the investment in technology or it transforming anything. And I guess that kind of rhymes with a bunch of things we've seen, even in our small world. We don't deal with the multi-billion dollar organisations, but some of the organisations we deal with are pretty large, certainly millions of pounds of turnover. And even then, inserting AI into their workflows is a big stretch for most organisations. They're still struggling with "how do I stop losing my data out of the door?", "what does good look like?", "how do I get it to work?". And I think part of the challenge is that the hype and the news headlines seem to be drifting away from the reality for most organisations. Part of our job, without trying to sound sanctimonious, is that we're trying to help people use AI to improve their businesses. That's what we're trying to do, not get rid of jobs. It's about how you do a better job. But you've got to start somewhere, and we certainly see a lot of people who've dabbled but not really committed. And I think at some point over the next few years that has to change, as people start getting left behind. I write about that quite a lot, but it does feel to me like lots of organisations have just been struggling.
They don't have somebody like the lady at Ambition Institute who's driving the pace, forcing people to put their hands on the keyboard and make things work, creating those AI champions, for want of a better expression. That's pretty tough. We've dealt with a number of organisations where the person at the top of the office is a technophobe and doesn't really care for it, and that just slows innovation down.

SPEAKER_01

And the thing I say, I mean, I remember talking to someone in a multi-academy trust, a long time ago now, who said: I don't see any first mover advantage in education in particular. And I get that, I really understand it. Education is, as you know, our biggest sector for customers, and I understand we don't want to experiment on our children's futures. But the other side of that argument is: how soon before a staff member you're trying to recruit asks you about the AI tools that will be made available to them? That's not very far away, is it? "What questions do you have for us?" will become "what AI tools do you have?". And if you're a multi-academy trust, a college, a local authority, and the answer is "oh, we're not into that", you're not going to get the staff soon. And likewise the other way round: if you're a staff member without any AI skills, that is being asked consistently now in interviews. What do you use? Tell us about that. As an employer you have a duty to help your staff build skills in using AI, and it is simply not acceptable, in my opinion, to be an employer that says "no, you're not allowed AI". It's effectively like giving them a typewriter and a sheet of paper and saying that's how you work. We might be ten, five, possibly three years away from that analogy really biting, but it's not dissimilar. And the people saying they don't like it, don't trust it, are always the people who have barely touched it.
Yeah, and it's really frustrating, because I'm forever in meetings with customers, and when it's a larger group there'll be someone who has that view: what about the bias, what about whatever problem they've got. And then you ask them a bit more about their own use, and they've never touched it, and yet they're expecting you to take their opinion seriously.

SPEAKER_00

Yeah, it's like, come on. So, what's the best or worst heckle you've ever had while presenting? Gosh, I've had many of the worst. You're always terrified. Well, maybe not for this audience of one, maybe not for this one.

SPEAKER_01

Well, whenever you ask for questions after you've been on stage, there's this sort of fear that comes over me. It's mainly not about people giving me the "I don't agree with it" thing; I've got reasonable points to put back on that. It's that I'm always slightly nervous there's going to be some deep AI geek who goes "yeah, let me explain" and then asks me to explain something in real detail. I can go pretty deep on some things, but my specialism is really deeply, narrowly in RAG AI and understanding how that works. Anyway, I was giving this talk to university staff members, and, as you know, we've got some slides that show how bias surfaces in an AI tool, these images. I showed this, and then he went off. He was the professor who teaches AI at their university, and he came in and was like, "well, I'll tell you how that's happening", and he was talking about the diffusion layer and, sorry, all good, there's the dog, the dog's kicking off, she's in charge. And I'm just looking at him blankly and going, afterwards, I've got no clue what you're talking about.

SPEAKER_00

Sorry, let me just mute and shout at the dog. Our one listener would really like to hear you shout at Lolly.

SPEAKER_01

Yeah, they probably would. Someone's at the door, that's probably a delivery. Another one, leave it. Never mind. Well, maybe that's a good place to stop for our inaugural podcast: the dog kicking off. And we should definitely talk in these about, well, I know you're going to talk to Donald a little bit about Knowledge Flow, the platform, but also a history of where we've got to, what we're doing and what we're focusing on. As you mentioned when we talked about doing these, I think that's really interesting, probably really deeply geeky for us, but to be able to look back a year or eighteen months and think about the things we were grappling with then.

SPEAKER_00

You know, this week it's Excel embeddings, to get proper Excel stuff working, and pricing strategy, which has never gone away. Those are always on our mind: how on earth do we price this in a way that is sensible, is equitable, and keeps our public sector ethos? We're not here to fleece everybody; we want to drive the transformation, and hopefully that will do well for us because we'll have lots of people using and trusting our stuff. That's our business model, rather than a few people giving us millions of pounds. I'd love to get into all of those next time. Well, I'm reminded, we talked about this, that if we'd started doing this two years ago, or even a year ago, we would have so much knowledge recorded by now. There was a college, who I'll not name, and you stood up on a stage, and they said "oh yeah, we're doing that", and then three months later they were on another stage and they hadn't fixed a fundamental problem. You forecast exactly where they would run into the sand, because we'd spent six months fixing a problem which they didn't know about and didn't understand.

SPEAKER_01

And then the last time you spoke they were still stuck on it. A really simple thing, but one that actually takes some time to

SPEAKER_00

Yeah, work through. Yeah, I remember now. Gosh, that was a while ago. But stuff like that, you know. And to come back to what I said right at the very start: this started with you showing us ChatGPT 3.5 on your phone in a restaurant, and if we think about the things we can do now, it's amazing. So hopefully we will share lots of that over the coming weeks. So for our one listener, I hope you enjoyed it, I hope you found it useful. Our audience, yeah, I hope you found it useful, I hope you found it interesting. And until next week, keep holding on to that thought. Catch you next time. See you later.