The Entropy Podcast

Systems, Strategy & Sense with Glen McCracken

Francis Gorman Season 2 Episode 3



In this conversation, Francis Gorman and Glen McCracken explore the complexities of AI in modern organizations, discussing themes such as intellectual atrophy, the speed of AI versus organizational slowness, pilot purgatory in AI implementations, the necessity of a coherent AI strategy, the value of narrow use cases, job displacement due to AI, and the current state of investment and hype in the AI sector. Glen emphasizes the importance of understanding business rules and data quality before implementing AI, and he shares insights on how organizations can effectively leverage AI while maintaining accountability and trust.

Takeaways

  • AI is often seen as a silver bullet, but it reveals underlying issues.
  • Organizations struggle with the speed of AI versus their own operational slowness.
  • Pilot purgatory occurs when organizations rush AI implementations without groundwork.
  • An AI strategy should be integrated into broader technology and product strategies.
  • Narrow use cases for AI often yield the most value and trust.
  • Job displacement is a concern, but new roles may emerge as well.
  • AI can augment human roles but should not fully replace them.
  • The current investment landscape in AI is characterized by both hype and potential.
  • Trust in AI systems is built through transparency and understanding.
  • We're still in the early stages of AI adoption, with much potential ahead.


Sound Bites

  • "AI is a revealer, not just an amplifier."
  • "AI can augment but not replace human roles."
  • "Hype attracts attention and funding."


Join the community beyond the podcast. Shop our Entropy-inspired products here: https://www.etsy.com/shop/theentropypodcast/?etsrc=sdt

Francis Gorman (00:01.905)
Hi everyone, welcome to the Entropy podcast. I'm your host, Francis Gorman. Before we dive in, if today's conversation challenges you, sparks new ideas or sharpens how you think about the world, don't keep it to yourself. Subscribe, leave a review and share this episode with someone who enjoys staying curious. Today I'm joined by Glen McCracken, the Chief Product and Technology Officer at Lantum, a health tech scale-up serving the NHS. Glen has over 25 years' experience delivering large scale technology, data and AI systems in high pressure environments,

often stepping in when impressive technology hasn't translated into better decisions or outcomes. Glen works at the intersection of product, technology and execution, specializing in post-acquisition integration, AI governance and building systems that scale without spreadsheets, heroics or wishful thinking. Glen is known for cutting through AI hype with practical, sometimes contrarian insights grounded in real delivery experience. And Glen, it's lovely to have you here with me today.

Glen (00:56.578)
Thank you, Francis. Good to be here.

Francis Gorman (00:59.067)
Glen, I was reading your LinkedIn profile, as I sometimes do with guests who are coming on the show, and one thing that struck me was a post you had up in the last couple of days. I think you wrote something around the area of intellectual atrophy. And one part that jumped out at me, you said: we're entering an era where you can hold opinions you never formed, you can write without thinking, and you can decide without understanding the system you were deciding inside. Which kind of struck me as

a beautiful phrase that has some real poignant realities to it. Can you talk to me about what kind of sparked you to write that post?

Glen (01:37.938)
Yeah, so I suppose I'm becoming increasingly aware of the trend that people believe that AI is kind of the silver bullet to solve many of their problems. I've been lucky enough to have worked in the field for quite a long time. Back in the 90s, I was in consulting, and at the time it was called data science as opposed to AI. But we were building a lot of these production grade systems

that were leveraging the data that was available at the time to make a lot of decisions at scale. And one of the early ones that I worked on was forestry evaluation models. So we would use aerial photography and then image classification and a technique called support vector regression. And we would derive the valuation of different types of forests based on stands, which is like a unit of measure with respect to forests.
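The technique Glen mentions can be sketched very roughly like this. This is not the original system: the stand features, numbers, and model settings are invented purely for illustration of how a support vector regression valuation model fits together.

```python
# Illustrative sketch only: valuing forest stands with support vector
# regression. All feature names and figures here are invented examples.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy features per stand: area (ha), mean canopy height (m), crown cover (%)
X = rng.uniform([1, 5, 20], [50, 40, 95], size=(200, 3))
# Toy "valuation": roughly proportional to area * height, plus noise
y = X[:, 0] * X[:, 1] * 80 + rng.normal(0, 500, size=200)

# Scale the features, then fit an RBF-kernel SVR on the historical stands
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1000.0))
model.fit(X, y)

# Value a new stand from (hypothetical) image-derived measurements
stand = np.array([[12.0, 22.0, 70.0]])
print(f"Estimated stand value: {model.predict(stand)[0]:,.0f}")
```

The point of the sketch is the pipeline shape: measurements derived from imagery go in, a learned valuation comes out, and the same model can then be applied to thousands of stands at once.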

And one of the challenges we had there was that we were taking something that was traditionally done manually, and all of these kind of business rules would have been applied to it. But people were not so conscious of what the business rules were. So a valuation person could come back and basically give an opinion on a particular hectare or a stand or whatever.

And it was often difficult to try and derive or distill down the business rules that they were applying as opposed to just the fact that they knew what they were doing. And then the phenomenal thing about utilizing data and AI is that you then get to make a lot of these decisions at scale by applying these business rules and decision logic to data.

And what I was seeing at the end of last year was that an increasing number of people I was talking to were talking about a sense of surprise when they had a business challenge. They threw AI at it, hoping for the best. And instead of AI kind of solving the problem, it revealed some of the underlying issues they had, whether it be dirty data, or that they had good data but they just didn't have a common aligned understanding of the business rules

Glen (03:58.84)
that needed to be applied to make certain decisions. And then I saw that kind of extending more into LinkedIn, in some of the content that I was reading. Not just the content, but also the comments in some cases. I post pretty much every day, mainly as therapy for me. So I've worked in lots of different areas and I have relatively strong opinions, some of which are good opinions that I feel confident about. And some of them I just throw out there to get people's

input as well. But there are a few kind of recidivist offenders that would basically comment in conflict with what I was saying. And normally I love those, because it gives you a chance to engage and have a conversation. But what I found, there's a few people in particular that would kind of take the counter argument, which I thought was good. And then in one case, I reached out to them and said, this is really good, I've enjoyed the conversation that we've had,

we should have a catch up in real life. And so we did. And the person didn't have any opinions at all. And they even said to me, like, I've really enjoyed engaging with you, it's been really fun. And we started talking a little bit and I was saying, what you said about this was really interesting, can you talk to me a little bit more about it? Because it is something I've been thinking about, but I haven't managed to articulate it. And they were like, it's just what GPT told me to say.

So yeah, so I was kind of left with this kind of weird moment where, at face value, the conversation was super interesting and engaging, but then when you kind of peel back the veneer, it was a little bit disappointing. That being said, though, there was still validity to the conversation. Like, it still was interesting and engaging and made me think and was thought-provoking. And so that kind of instilled in me a little bit more of this: what other areas is this impacting on?

And there's a number of really fascinating papers back in 2012, 2013 in the education system that talked about intellectual entropy in the sense of mathematicians or math students decreasing their reliance on kind of problem solving in their own head and relying more on calculators and tools and so forth and even spreadsheets and how there was a flow-on effect to that and how it was impacting on grades and other things.

Glen (06:23.438)
And I suppose there were also people at the time talking about kind of the watering down of critical thinking. And there were a number of interesting blog posts. I mentioned one of the blog posts, by Dave McMahon, that talks about intellectual entropy and how, and this was back in 2017, there's a temptation to rely more on the tools.

And there's a price to pay for that. And I suppose in my own life, I kind of see it to a degree. If I'm using GPS or Google Maps to go somewhere, I can get there OK with the aid of Google Maps. But if I try and go back there again without Google Maps, it's a struggle for me. Whereas if I'm prompted and have to kind of go through the pain of getting to that place with minimal intervention, but with a bit of a safety net.

then it's a far more memorable experience for me and I've got a much higher chance of kind of navigating that as well. So that's kind of a long answer to your short question, but yes, there are some really interesting trends, I suppose, that are emerging with respect to some of the dialogue that we're seeing online, and some of the way in which the dialogue is created in the first place.

Francis Gorman (07:35.665)
Yeah, it struck me because it really worries me. And I've talked a lot on the show in the last season about critical thinking and the erosion of a kind of critical intelligence. You know, on the surface we look like this all-knowing individual with all the smarts who, you know, can deep dive, and then in reality it doesn't go past the keyboard. So when you take that aid away, the individual kind of falls down, and it worries me from a number of avenues.

You've used one example there. I think the example I've often used is the smartphone. I found myself in the hospital a couple of years ago when they said, what's your wife's number? And I said, Jesus, I actually don't know my wife's number, because I've always hit "Jill" and it's just called her, because I had a smartphone when we met. And that kind of stuck with me. I know it now, obviously, because, you know, I confessed to my wife that I didn't know her number. And that was a non-runner; it had to be learned.

Glen (08:12.93)
Yeah.

Francis Gorman (08:32.825)
I see this argument and go, well, AI is here and it's not doing any harm. And, you know, we said this about the calculator. But with the calculator, you still required a level of knowledge. You still needed to know how to add something together to get an outcome, or to multiply something to get an outcome. Now we're kind of handing that knowledge off to a system. And I do worry, for the next five, 10 years, what that will mean for us as a society. Will we still be able to have

practical and pragmatic skills, especially in a world with such geopolitical tension? Will we become a brittle society in the West, whereas other nations will thrive because they haven't outsourced their intelligence? It's definitely a watch item. And, you know, thank you for that post, because it did resonate with me, and it kind of brought it to life in a far more encapsulated form than I've been speaking about. It was subtle and revealing all in the one. So thanks for that, Glen.

One thing I want to talk about, and this probably feeds into it, is the speed of AI. So AI, and I think you've touched on AI first rather than AI fit, I think is the way I often put this. Organizations, especially older organizations or large organizations, they don't run at speed, but AI is a speed-driven entity. It can do things quickly. In your experience, AI is fast and organizations are slow.

Can you break that down for me? Have you witnessed that? Have you observed that in the wild?

Glen (10:01.504)
Yeah, I suppose so. I think you're right with respect to a correlation between the size of an organization and the challenges that come with AI. So if you think about AI in kind of the pure sense, it really is codifying and then applying decision logic at scale. And there's various ways in which you can dress that up, so LLMs do it really well with unstructured information. You can have lots and lots of text.

And it can mine that text and it can provide you with summaries and salient points and deltas between contracts and all those types of things. If you go back to kind of traditional AI, so the application of logistic regression models and other such things to do simple things like churn prediction and lead scoring, customer segmentation, all that type of stuff, it really is doing exactly that. It's applying

business logic at scale. And what's interesting about small organizations, so if you have three, four, five, 10 people within an organization, then there isn't a lot of tribal knowledge. In many cases in smaller organizations, a lot of people are generalists, or they understand more about other people's roles than you would typically find in a much larger organization. And then you kind of have this demarcation

or specialization of labor that occurs as organizations get larger. So when you've got a hundred person organization, you're going to have departments and you're going to have sales and marketing and you're going to have natural tension that exists between the different departments. So product is going to be listening to the client and they're going to be expressing those requirements to engineering and there'll be natural tension with engineering to say, well, we've got limited resource. We can only deliver certain things.

And so what becomes really interesting is when you put AI into a smaller organization where a lot of the decision logic is already relatively well known and can be documented quite easily. What you find in turn is that AI can move very quickly, because the first hurdle, having alignment around what the different business rules in place are,

Glen (12:21.472)
is largely taken care of, because it's a relatively small organization and most people know how things are supposed to work across the different parts of the organization. If you get into a large enterprise where you've got tens of thousands of people, or even thousands of people, or in some cases hundreds of people, then it's highly unlikely that one department will understand the business rules that another department makes decisions by or applies logic by. And that's where the bottlenecks often occur

in the application of AI. It's not so much in the model itself. It's in kind of the grunt work that has to occur beforehand, where you are distilling down what the business rules are that we're seeking to automate and to have AI help us with, and then getting alignment on that. And a lot of people talk about AI being like a mirror or an amplifier. And I think I'm...

I think it's definitely those things, but in the first instance, it's a revealer. So it's very difficult for you to automate something that you don't fundamentally understand. And if you are trying to automate functions within organizations and that organization is large and the teams are relatively large as well, then the fact that it's unlikely you'll have everything codified and documented in such a way that there's already strong alignment across the entire organization as to

what those things are and how decisions are made becomes the first kind of stumbling block. And the sexiness of implementing AI is sometimes lost a little bit when you realize you have to do some of the foundational work first, like making sure you have access to the right data, and it's clean, and it's traceable as to where it came from, and actually having business rules that are documented. And then it may be the first time anyone's actually seen those business rules outside of the particular function.

It might be the first time that the manager has seen some of those business rules and understood how some of the workers actually make those decisions. So yeah, there's a few interesting challenges that come with larger organizations, especially when trying to apply AI.

Francis Gorman (14:30.514)
In your opinion, is that feeding what we're seeing in terms of pilot purgatory in the AI space? People are rocking up and it has to be an AI-driven workflow, but they haven't done the groundwork. There's no requirements, there's no real view of what the outcome should be in that space, and we get these circular pilots that burn time and energy, and management are standing over going,

what's coming out the end? You've got 10 talkers and one doer and, you know, it kind of disintegrates in itself. And then we go, well, we've got 50 things in pilot, and we've had two projects in the middle fall by the wayside; they haven't really fully formed and we're still working them through. And these are cornerstone products that are going to market. When I look at McKinsey's reports, and MIT had a report last year as well,

it looks like a lot of people are investing a lot of time and money and effort in AI, but they're skipping that first part. They're not doing the foundation work. They're not doing the non-sexy elements of: is the data clean? Do I know what outcomes I want here? They're going with: the vendor told me this tool does X, Y and Z, that's obviously what we need, let's apply it here. Is that what you're seeing in your kind of interfaces with the market in different places?

Glen (15:43.726)
Yeah, it's a tricky one, because if you kind of consider the two extremes of the approach: one approach some organizations take is they say, we want to get that pilot underway. So let's get the vendor in, let's pitch something interesting, let's all get excited about what this project is and the possibility. And in a way that kind of drives literacy around the potentiality of AI,

which is a great thing. So hype is a good thing. It draws attention, it draws people in, to help them understand what is possible or what the potential might be. And if used well, it's a great way of getting the mandate to do some of the hard things first. So you do the pilot, the pilot is broadly successful, but in doing so you identify some of the challenges, like: we didn't really have

particularly clean data, it was hard to get hold of it and when we did it was just like a static snapshot. So having access to real-time clean data would be fantastic. We had some difficulties understanding the business logic, we had some challenges understanding who was actually accountable for this thing in the first place. So as a call to action, pilots can be fantastic. They can draw attention to where time and energy and money needs to be spent. The challenge comes

when organizations want the shiny AI without necessarily putting the work in. So then you have the opposite end of the spectrum, where people will sit there and spend time and energy on making sure the data is perfect and that they've got all the foundations in place. And they may be a little bit slow in seeking to adopt AI, because they feel as though the foundations are not quite there yet. And I kind of think something in the middle works reasonably well. So data can be good enough,

and decision rules, and kind of the accountability of who's making decisions and so forth, can be good enough for you to bring in AI tools and see how beneficial they can be, which in turn hopefully will provide the impetus to invest in those things to make it even better. And you do kind of get into that Pareto rule, where you have diminishing marginal utility of

Glen (18:05.398)
investing more time and energy in making the data even better, when the marginal cost far exceeds the benefit you're getting. So it is an interesting balancing act. I think you're right. I think there's almost kind of three camps that I see: the people that are really into the shiny pilots and are maybe a little bit too premature. And the drawback of being a little bit too premature, or not having the appetite to invest in the foundations, is that it can taint the experience.

And then it can kind of give certain people ammunition to say, well, I told you so, AI wouldn't work in here. At the opposite end of the spectrum, you have some organizations that are hesitant to adopt AI because they feel as though they're not quite ready yet, and maybe they're taking a bit too long to try things out. And then you've got kind of the people in the middle that are saying, I feel as though in this area the rules are well enough defined and the data is good enough that we can

trial some of these things and see what the benefit is and then kind of ride that wave of momentum to ensure that we are investing the right amount of time and energy in getting the foundations right so we can get the benefits that we want as well.

Francis Gorman (19:20.349)
In your opinion, should organizations have an AI strategy in place before they start doing AI pilots? Or do you think kind of the dispersed model of find a problem locally to your business area and tackle it? And if AI is the right solution, go that way and then we'll kind of filter those lessons back up to the top and that will help define our strategic approach to AI. Have you seen?

different approaches with different organisations, and kind of what the key constructs or pillars of a strategy are in this space, or if people are going live without a strategy at all?

Glen (19:52.992)
Yeah, I think so. My sound bite answer to that is: an AI strategy is about as useful as a PowerPoint strategy. Like, I've never met a company that has a PowerPoint strategy. I mean, PowerPoint's a tool, so it wouldn't make sense to have a strategy for PowerPoint. AI is just a tool. And so what worries me when people talk about AI strategy is that sometimes the tendency is, it's like Maslow's quote: when all you have is a hammer, everything looks like a nail.

So if you wield the AI hammer, then you start to be quite myopic in your view of what the solution should be. And you almost run the risk of being a solution in search of a problem. So I know I have to use AI. How can I jam it into the business? As opposed to the other way of saying, well, all organizations to some degree have some sort of a technology strategy or a product strategy.

And as in any modern organization, as part of their product strategy, it would be, and how do we leverage technology? How do we leverage the cloud? How do we leverage PowerPoint? How do we leverage AI? So AI is just another tool. So if you look at AI in terms of the fact that it's just another really useful tool in your toolkit, then I think that's a far healthier way of approaching it. So rather than having these standalone

AI strategies, which might be useful, might be beneficial. I just feel like sometimes the risk of them outweighs the benefit. For me, the real way of leveraging AI is to instill within product strategies, technology strategies, roadmaps, so that you're consciously saying, I know that I have a client need, and I feel as though I can meet the client need using one of these tools.

Of course, AI should be considered as one of those tools, but there are lots of other ways in which you can fulfill your client needs as well as driving profitability and reducing cost and increasing revenue and being more operationally efficient, all the things that organizations are striving towards without always going back to the AI hammer and saying, let's have a separate standalone little PowerPoint that we can high five each other over.

Glen (22:17.639)
and tick the box to say we have an AI strategy.

Francis Gorman (22:21.378)
That's a really great insight. I wanted to talk a little bit about the use cases that you see kind of getting the most buy-in, or creating the most value, within a business. From a cybersecurity professional's standpoint, I often feel uneasy, and sometimes sick, when someone says, we want to put agentic everywhere. And I go, OK, we spent the last five years going zero trust, minimizing our attack surface. Now you want to put something in that basically has autonomy to everything across the organization,

and it's black box and nobody really understands it. So how do we figure this one out and work it backwards, so that we can enable the outcome without really understanding what the outcome is, in most cases? So from your perspective, when you see AI, and I'm using AI as a broad term: large language models, agentic, whatever is next, robotics, et cetera.

For me, it looks like the least sexy use cases are probably the ones that give the most value, and the more highfalutin use cases create a lot more risk but don't necessarily give the benefits. I just wanted to see, does that match with your perception?

Glen (23:26.092)
Yeah, definitely, I think so. Certainly for me at the moment, I mean, a lot of people use agentic and they don't actually know what they're saying when they say agentic. And I get the marketing hype around agentic, and there will come a time and a place where the technology is at a point where you can do some pretty amazing things. But most of the things I see at the moment that people claim are agentic are not agentic at all. Like, they're not autonomous, full stop.

I think you are right though. So my feeling at the moment is that the best types of AI are the invisible ones that have clearly defined boundaries and are constrained. And there's a high degree of trust associated with them because they are applying at scale these decision rules and they're not just going off on some tangent.

And if you think about the early days when we talked about AI, it was all about: we can do things better and faster, and it's going to be wonderful, and we can scale. And in the lexicon now, people are talking more about kind of building trust and it being trustworthy and all those types of things. And if I use the analogy of bungee jumping: so if I was to say to you, Francis, I've got a bridge where I live, it's, you know, 300 meters in the air,

I've built a bungee, I'm pretty sure it's safe, do you want to be the first one? Then no amount of me saying to you that it's trustworthy is going to convince you to be the first person. But if I have you there for a couple of weeks and you see thousands of people using it and there's transparency around the safety mechanisms in place, there's constraints and guidelines around the weight limitations, the height limitations, the...

wind directions being monitored. If I do all these things and show you that it's doing one thing, it's doing one thing well, and it has been rigorously tested and has strong guardrails in place and it's been audited by health and safety, all those types of things, then trust is something that you build over time. And typically trust, for us humans, is built through proof: not through someone saying to us that it's trustworthy, but through us seeing firsthand, or reviews, or some sort of social proof,

Glen (25:52.814)
that it's a trustworthy system. And I see, for now at least, AI in the same way. So where you've got constraints, where you fundamentally understand the decisions that it's making and why it's making those decisions, you're going to build trust in that system. And there's very few areas at the moment where you would just turn over kind of carte blanche control to an agent to autonomously and arbitrarily make decisions,

that even when it's made the decision, it would struggle to tell you why it made the decision. And that's just not where we are at the moment. So where I see AI being used at the moment and being most successful is in those constrained areas where you've got well-defined boundaries. And typically, it's operating in a single domain. It's not being all things to all people and trying to do everything in an open...

world type manner. That's not to say in the future we might not get to a place where, as a society and as organizations, we're comfortable in certain domains for the agents to take over, but in a lot of areas where we have hypersensitivity around bias, it's just not going to be there. Like loan approvals: AI has been widely used since the 90s in loan approval, and the decision trees and the

justification for why it's making certain decisions are extremely explicit and auditable as well, so that you can hand on heart go to your regulator and say, we're not singling out people that are of a certain race or gender or anything like that. And we've seen early attempts and failures where people have prematurely turned over

too much autonomy to AI agents, Netflix and Amazon being two prime examples, both of which got into trouble around their hiring practices. They trained the agents up on the people that were in the organizations, who at senior levels were typically white males. And then when it was doing the pre-screening of CVs, it rejected anyone that wasn't a white male, because they didn't fit into the mold of the target state of the people it was looking to recruit for.

Glen (28:17.774)
So, I think there's a lot of soul searching going on at the moment, where people are excited by the scale that can come with these autonomous agents and how they potentially could do some pretty amazing things. But there's also this kind of trepidation that comes with it as well, to say: are we yet in a position where we feel as though we know enough about what they're doing, and there's enough transparency, so that we can stand behind what they're doing?

Because ultimately, someone has to be accountable for the decisions the agent makes. The agent is not autonomous in the sense that it has the accountability for the decision. It is a function of its programming. So someone has to step up and say, yeah, I stand behind the decision that that agent made, I am accountable for the decision. And in its current kind of opaque setting, and on many occasions it is an opaque setting, that's going to be a real struggle. I think we're still, there's some fascinating

work that's going on at the moment, but we're still a little bit in the early days with respect to widespread adoption, especially in areas where you need to be clear on why decisions have been made.

Francis Gorman (29:28.593)
I find the accountability piece absolutely fascinating. I know a couple of people in the legal field and they've written about this at length: where does accountability sit when decisions are being made, detached from a human, essentially? And I think we're going to see a lot of cases materialize over the next couple of years as companies kind of hand off automation or decision making and then realize, actually, we can't do that without accountability, and somebody needs to be accountable, and you can't make a

machine accountable for decisions or rules that were initially set by a human. I think that's going to be one of those areas that's, you know, going to be a watch item, and lots of interesting lessons learned will come out of it, you know.

Glen (30:10.574)
Definitely, and again that kind of comes back to my narrow domain argument. So where you have something like a lead score, where you have a narrow domain where you say, I'm going to get a thousand leads per day coming in, and the rationale by which I score them is based on both firmographic and demographic information. So, information relating to the company and in relation to the individual. If I see that they're a senior person in a decision-making position, if I see that the company is more than a hundred people,

turns over more than 5 million. These are objective business rules that you can articulate. Then when the score is applied, I can say, why is this a hot prospect? And it will say: for these reasons. That narrow application of AI within a relatively constrained system is phenomenally

efficient. So, the ability to get through thousands of leads per day as opposed to someone manually having to do it. The ability to combine lots of metadata together. It could be that warm intro emails are automated. It could be that you even have some AI operators working in some sort of pre-screening type opportunity. Again, there's a degree of flexibility you can have there, but it's still within a relatively narrow domain.

And you're still going to have people listening in on some of the calls to make sure that the AI is not saying silly things. You're still going to have people reviewing some of the emails that are being sent out to make sure that it's not going rogue. But it's a relatively low danger zone, and it's a great application of AI in a relatively narrow domain. To expand that and say, well, let's just automate the whole sales funnel, so all demand generation, all lead scoring, all lead routing,

all of the initial conversations, the pricing, the provision of the contract, and just do that end to end and let an agent take care of all of it. I mean, the biggest challenge there is going to be acting surprised when it doesn't work. It's doomed to failure, at least for now. Certainly elements of it could be done really well with AI, but looping it all together and providing autonomy and incentive to an agent, saying, do what you like,

Glen (32:34.286)
the outcome is what I care about and here are some of the constraints that you have to worry about, is dangerous. I'd go so far as to say that it's negligence. It's not people being smart with AI, it's people trying their luck and seeing what happens.

Francis Gorman (32:55.576)
That's a nice way of putting it, Glen, you know, let's try our luck and see where it all ends up. But no, it's a fascinating area. One thing I want to talk about: we're seeing a lot of reports coming out in the job space, especially around graduate roles being, I would say, displaced. They're not as plentiful as they used to be, and AI has been called out as potentially the root cause there. And I've also had a bizarre experience, my first bizarre experience of a kind of

human being replaced by an AI recently, when I was booking a guest on the show and their PA was 100% automated. So it was an AI personal assistant.

Glen (33:32.216)
Fantastic.

Francis Gorman (33:35.237)
I won't name the individual, but they're in the States. And I had this very fruitful conversation with this personal assistant that wasn't human, that had full access to the individual's calendar. We went back and forth, we talked about topics that might be on the show. We had a conversation like we would have had before you booked in and came on. And it's the first time really that I kind of went,

That's a job that is no longer there. An executive has displaced their PA with an autonomous agent of sorts. I don't know what the technology backing it was. I will ask them once they come on, if they're willing to talk about it. But I'd like your view on where we're going as a society. Do you think it's going to kind of

creep up on us and it's all going to be over, a lot of jobs will be lost and, you know, we can have leaner companies, etc.? Or will the accountability aspects that we talked about earlier lean into regulation and other entities and kind of go: you can automate to an extent, but you still need accountability, you still need that human oversight to operate, specifically in regulated industries?

Glen (34:48.042)
Yeah, so I think first of all, I mean, that's a great example as well. The automated PA, a nice narrow use case where they've got access to a calendar, and probably knowledge base information associated with the person as well. But I think you're right. So there's a couple of things you've touched on. One is in the development sense. So as a developer, as a junior developer, are we going to see some displacement there? I think we are. I think...

I don't necessarily think that's a good thing either. In all kinds of revolutions that we've had throughout history, there's been a shift. There's always been a displacement, but a shift. And we saw that with the move to the internet in the early 2000s, and we had the emergence of all these new roles that no one had ever heard of that went on to thrive. We saw the same with the whole cloud adoption and SaaS as well. So...

I'm optimistic that there will be a larger number of positions created than displaced. But one of the things that you alluded to, I think, is a concern. And that is: what do we do when

our kind of job funnel is almost predicated on people coming through the ranks and learning not only the foundations, but also how to handle a lot of the exceptions, when those people potentially skip those steps? So if we have a dramatic reduction in the number of junior developers, then the flow-on effect will be that there'll be fewer people in a few years' time that have had the experience

of living and breathing writing code, and I would suggest that would make them less effective in management roles. So that's an interesting challenge that we're going to have to work through. I think the kind of apprenticeship programs you see in many organizations are phenomenal, in the sense that you get people that come through learning a lot about not just the domain, but also the nuances

Glen (37:05.76)
and the data models and how everything kind of hangs together. And one of the concerns I have is that when a lot of that can be automated away and you're relying more on the tool, then first of all, the standards invariably are going to drop. So there will be a higher degree of technical debt created. And we see that already with some of the vibe coding tools, Lovable and others. So phenomenal from a prototyping perspective, but

some organizations take it a step further and productionize it. And so you're going to have a proliferation of code that hasn't really been particularly well thought out, and it may have been optimized locally by the agent that was building it as opposed to holistically. So I almost see kind of a dip and a blip. The dip will be as so many organizations at the moment are listening to a lot of the hype, and they firmly believe that

you don't need as many developers and you can rely more heavily on AI to help with some of these things. So there might be a natural reduction in the hiring need. And then I think there'll be a little bit of an awakening that occurs, where they realize actually AI is a great augmentation and it helps the resources be more productive. It's not necessarily a direct replacement. And there have been plenty of studies that show that in some cases AI actually decreases productivity because of the rework factor.

And then the blip will come where the technical debt starts becoming more obvious. So a lot of these new shiny things are being built, and they don't always integrate well, or they don't always work in a way which allows them to be scaled easily in the future. And there'll be somewhat of a challenge associated with that. And none of these things are new. There's a phenomenal book that was written back in the 90s called Peopleware, and there's the related book, The Mythical Man-Month, where

they did exhaustive studies on software engineers with respect to how much experience they had, how many distractions they had, how much the brief changed. And they looked at what made software good in the short term and in the long term. And there were lots of learnings that took place, because software engineering had been relatively stable for a relatively long period of time at that stage. There almost needs to be a repeat of that exercise now, to say,

Glen (39:31.478)
in the era that we have, with a lot of AI augmentation and AI-created code, what is the best way of doing things now? Where are the best practices now? And I know of a few companies that are doing a really good job of leveraging AI and are utilizing it in terms of rapid prototyping and helping with the automation of the tests and the release cycles and the

client communications. But most of the ones that I'm seeing that seem to be the happiest about their progress are still using it in the augmentation sense. So it's not replacing things per se, it's aiding the developers. And there's still a human in the loop for all those core decisions that are being made. And they have this notion of splash zones or blast zones, where it's low impact

they're happy to leverage AI a little bit more. But when it's mission critical production systems that touch something which is regulated, then it's used very, very carefully.

Francis Gorman (40:43.683)
That makes sense, and I think it's a watch area. We need to be careful we don't create hollow companies, you know. For myself personally, to get to a leadership position I made lots of mistakes, and the mistakes I didn't make in a leadership position were the mistakes I made before I got there, you know. I tripped and fell over a lot and picked myself up and went, I won't do that again. Or, you know, I had different businesses and things went wrong. It's a mountain of learning that you build up

over years, and it's not something you can just pick up from writing a prompt. So it's definitely something I'm keeping an eye on. Before we finish up then, I'm conscious of time. Looking at the market now and the level of investment and funding that's going around,

in your view, and I've asked this of a number of people recently, and I'm a cautious investor, you know, I pulled my money out before Christmas and just waited to see, does everything collapse? And then it went up again and I'm like, I don't know what's going on here anymore. So it's one of those things that I'm watching with a keen eye. From someone who's been in the industry for so many years and at the forefront of this technology: do you think we're in a dot-com-style mania? Are we in a bubble right now, or is it going to sustain?

Glen (41:59.682)
Yeah, I think, so the answer I normally give for this is that I still feel as though AI is overhyped and underestimated. So I think there's a huge amount of hype around certain organizations with very high valuations that either haven't produced a product yet, or have produced a product, and I use the product term there lightly or loosely. And then there's so much untapped potential where AI could be leveraged

and make a dramatic difference across a lot of organizations. I think most people would agree just anecdotally that it feels bubbly because of that mismatch, because you see some organizations that are operating at very high multiples that suggest that they are overvalued.

But then the counter to that is that we're still relatively early in the adoption of AI. I mean, certainly we saw a lot of adoption in the short term around generative AI, thanks to GPT, where from a consumer perspective people saw firsthand how amazing it can be. And then, in some ways, that trickled into organizations, much like the movement to the cloud. So when people first started to try and move to the cloud,

enterprises kind of rejected it at first, and then the likes of Dropbox and Evernote came to prominence. And then when consumers adopted it more and more, organizations in turn adopted it, and GPT kind of started with that. So it was first and foremost adopted by consumers, and now it's spreading to businesses. But I still think we're in the early phases of that. So I think there's a huge amount of runway left for organizations in adopting AI, which in turn will continue to fuel

the growth of AI organizations. Maybe the hype will settle down a little bit as there are more solid use cases and solid case studies as to the success that people have had and how it can be applied. And best practices will start emerging more and more as well. So I think, for me, it doesn't feel like an imminent

Glen (44:22.956)
the sky is falling type thing at all. It still feels like we're in the early stages. We're still in the excitement stage, and you still see a lot of money flowing into organizations that have a lot of potential and have got a reasonably good foundation. And I think that's exciting. And like I said before, hype is great because it attracts attention, it attracts eyeballs, it attracts funding. And many of the most successful companies that came out of the 2000s survived the dot-com crash

because they were fundamentally good organizations that may not have been funded if it wasn't for all the hype that surrounded the industry as a whole. So I think it's an exciting time to be in the AI field. I think there are a lot more companies and potential unicorns to come, but I don't see it suddenly easing off in the next year or so. And as we've seen with the changing of terminology, we had...

lots of excitement around GenAI and LLMs, and then people started moving more into agentic AI and agents. And I think we're just scratching the surface on that, and there are still a lot of challenges that we need to get through before they can be as successful as many people claim they could be. And that's going to take a little while to get through. We're already seeing some great standards coming out, like the MCP standard that Anthropic

came out with, and there are lots of other large organizations that are supporting it as well. And as those tools continue to spread, I think you're going to have a lot more success in the short to medium term. And then maybe a giant correction at some stage, or maybe it'll just delicately float into the next kind of phase.

Francis Gorman (46:13.073)
Glen, I really appreciate it. Look, it's been a fantastic conversation, and I'm sure the listeners are going to get lots out of it as well. So I really do thank you for coming on and spending the time to talk to me, and I hope you have a great day.

Glen (46:25.4)
Thanks, Francis.