The Loop

The future of generative AI

April 23, 2024 RSM UK Season 4 Episode 4

In the final episode of our generative AI series, we look ahead to see how this powerful technology might affect the economy and society in the long run. How will generative AI change the economic landscape? How will it challenge the professional service sectors? 

Ben Bilsland is joined by Tom Pugh, RSM's economist, and Joel Segal, a business transformation partner at RSM UK, to talk about the possible changes that generative AI could bring over the next 30 years. They discuss how productivity, and its distribution across society, might change, and how work tasks and human skills such as creativity, critical thinking, and communication might evolve in response to the technology.

They also consider the difficulties and opportunities that AI brings, such as workforce shifts, education reforms, and the need for human-machine cooperation. Listen in for a thoughtful discussion on how generative AI could shape the future of work, productivity and social interaction, and on the impact of this rapidly advancing technology.

For more insights on generative AI, explore our Real Economy report.

And follow us on social:
LinkedIn - https://bit.ly/3Ab7abT
Twitter - http://bit.ly/1qILii3​​
Instagram - http://bit.ly/2W60CWm

Transcript


Hello and welcome to The Loop. Today you're joining us for the last episode in our series on generative AI. For this final episode, we're going to get out the crystal ball, gaze into the future and think about the longer-term impact of this technology. Joining me today are RSM's economist Tom Pugh and Joel Segal, a business transformation partner here at RSM UK. So Joel and Tom, thanks for joining me today. You're welcome, Ben. Yeah, thanks for having me. So, should we dive in? No gen AI was involved in drafting this script. It goes without saying that this is a big topic, with lots of unknowns and lots of debate, which we've talked about in the previous three episodes. And Tom, we can only see the impact AI has today, but can you give us a feel as an economist for what it might look like 30 years down the line, and how AI is relevant on a global economic scale? I think the first thing to say is that us humans have this tendency to really overestimate the impact new technologies are going to have in the short term and dramatically underestimate the productivity gains that will accrue in the longer term. And when we're talking about the next 20, 30 years, we're very much talking about the longer term. So there will be the standard stuff we've been talking about, productivity boosts to professions like lawyers and accountants and that kind of thing. But there are going to be productivity gains in all sorts of areas that we just can't even imagine. And I think that's going to be the major impact: AI will become part of everything that happens in the economy, and that's going to lead to a huge increase in productivity compared to where we are today. There will be questions around whether the gains of that productivity accrue to society more generally, or whether they accrue to a select group of companies or individuals, that kind of thing.
But I think it's inevitable that the productivity gains over the next 20, 30 years are going to be massive. So are you saying that some of the lethargy and some of the anticlimax people are feeling today is a false trap, and that they should be prepared for something really, really fundamental? Yeah. I think the fundamental way to think about it is that there are two stages. There's the installation stage, and that's what we're going through at the minute. It's people working out: how do we use this? What is it? How does it fit in this business or that business? What's the best way of using it? And it just takes quite a long time to work all that through. You know, we're still talking about significant investments in capital, in people, in processes, and we're still a long way off perfecting the technology. So there's this long on-ramp with the installation phase, and the productivity gains during this phase are going to be pretty low, because there's a cost to changing the way you do business. Once we get off that on-ramp and onto the AI motorway, that's when we'll really see the productivity gains kicking in. And that is the longer term, that ten, 20, 30 year kind of period. What's your take, Joel, on AI and where we're going to be in 30 years? I think Tom is definitely on the right motorway, so to speak. I like to look a bit back and forward at the same time, because I think history is useful when a technology like this comes along. In this case, I think American colleagues have described it as a sort of Renaissance moment. The problem is the Renaissance lasted quite a while. And I think gen AI, as distinct from AI, is on a similar sort of journey. So I think we can actually see where it could apply.
And I do agree with Tom on his definition: the sort of professional services, very paper-based, heavy-crunching areas, with its large language models involved in text formatting and those things. I think the challenge is that it's a trade-off between what the human was doing and what we're now asking the technology to do for us. So really, what we've got to wait for is the point where humans allow gen AI, and trust it, to do a lot of the heavy lifting for us. I did a lot of work in law for three years, and I could see it really helped create a first draft of certain contracts that were simple. If they were difficult, it didn't do such a good job, and so lawyers were less likely to trust what it could produce. And I think what they've got to do with it is to say, okay, I'm allowing it to take on 20, 30, 40% of what I do today. But if I don't take that 30, 40% and do something else with it, then it doesn't deliver the value. And generally, if I step back and then look forward, I think we are entering the wisdom age, or the knowledge age, where we as humans can't just read stuff and regurgitate it. We have to create the value, the insight. And I think gen AI gives us this opportunity to say, okay, let me remove the drudgery and free up the human, who then has to take that opportunity and, secondly, use that time to create foresight and insight and to look to the future. I'm going to refine my draft to a level where, if I'm a lawyer, I really believe I've got a contract that is better than if I had spent all my time just trying to get a draft out. And that is the conundrum. It just takes longer than we always think. Yeah, I mean, we're saying 30 years because it's a point in the future, but we could be talking 20, 30, 50, couldn't we? No one can really put a pin in it. But Joel, you talk about lawyers there as a good example, and because we're not lawyers, and speaking as an accountant myself, let's take a step back.
You're talking about the power of the technology around first drafts. Is there a future where these technologies progress to a point where we're replaced fundamentally, and what we look like today doesn't exist? I mean, if we go back to law, one thing people have spoken about is: is there a need for a lawyer when you have an NDA, a very simple document? Is there a future where machine to machine can agree these things? If you go back to Alan Turing, the father of AI, you know, he talked about the cognitive power and some of these other elements that would come into it. I can see a place where, as we start to get machine to machine working, and get ecosystem to ecosystem to really wire up and get things done, we can see that productivity gain. And I think we can see that if there's an event that is triggered, the AI literally takes over and does it for us, and we are allowed to override it under certain circumstances. Today we think about work in a very different way. We think, okay, I've got to engage with it, I've got to speak to someone, I've got to produce something. In the future, as we move forward, work will look different. It will happen differently. With gen AI it will be more business to business; a lot of what we've seen today is business to consumer. And some of those shifts, I think, will move our economy and deliver productivity. And, you know, the UK is a services economy, financial services, and this is where we need to put our focus and our time. You mention productivity quite a lot, and Tom, as an economist, what do we mean when we talk about productivity? The simplest way of thinking about it is just how much you produce per hour. How many legal documents do you draft? How many accounts do you do? All of this kind of stuff, per hour.
And when you think about the economy in its most basic sense, your economy is how many people you have working and how much each person produces. That's really what it is. And if we look at the demographic trends for the UK, but not just the UK, you can pick a country really, we know that the demographic changes we're going through mean population growth, and working-age population growth especially, is going to be a fraction over the next couple of decades of what it has been over the previous few. So we know that having more people working is going to be very difficult. So if we want any sort of economic growth over the next couple of decades, it's going to have to come from improved productivity. So that's one thing. And really, productivity is so important because that's where we get a boost in our living standards. Economic growth is fine, it's good for a country, but as individuals, the only way we can increase our purchasing power is by becoming more productive. And, you know, that is something, especially in the UK, that we've really struggled with over the past 10, 15 years or so. And AI is one of those technologies where you can really see it having a huge beneficial impact on productivity. Do we need it, then? You mentioned the way society will change and age profiles. Are we talking about it just because it's a convenient answer to an economic problem? I mean, it is a convenient answer. And there is a degree of saying, you know, we don't need to worry about the changing demographics and the cost of pensions and all of this kind of stuff, because AI is going to make us all super productive, and we're going to have to worry far more about how we distribute the gains from AI; that this is all doom-mongering and don't worry about it, AI is going to sort everything. There is definitely a school of thought along those lines.
And I think maybe that will be true on the 30, 40, 50 year horizon, but we've got to get there first. There are going to be a lot of issues that we have to work out in the next 5, 10, 20 years before we see the kind of huge improvements in productivity and living standards that AI can promise. Yeah. I mean, Joel, what do you think in the short term is holding this up? What's getting in the way of gen AI solving everything tomorrow? I think in businesses, what we are really talking about is capacity as the metric, right? It's about how we free up the capacity of humans to get through things so they can get to those higher-value services the economists talk about. The bit that's getting in the way is the human side; it's almost all about the human alongside the machine. I think humans are still resistant. For the number of humans who are doing the drudgery and heavy lifting, this is seen as a threat to their livelihood and work. And partly that's a human fear: well, what would I do if something comes along and takes away half my work, if I don't back myself to be able to do the higher value-adding service? And so I think the bit that gets in the way is that we embrace it, but we don't do enough of it. Okay, well, how do we train and educate people? Not just generally to use it, but to think about what tasks and activities they should be working on, and who in our population has the capabilities to move and shift into those higher-value services, and what happens to those who can't? I mean, where do you start with an emerging technology when you're talking about training and educating people? We've seen some of the universities in the United Kingdom come together and say, we're going to overhaul our syllabus to increase awareness of data and AI, which is useful.
You know, what sorts of things do we do in businesses when we're faced with something that could change everything in 30 or 40 years, but which, as we sit here today, we're still unpicking? Well, I think there are two sides, and it goes back to a book that was written a few years ago by Yuval Noah Harari, who I think is more of a futurist, called Homo Deus. He originally wrote a book called Sapiens, which is the history of man, and then he wrote, a bit like your question, about where it takes us, and it focused quite a lot on what that journey might look like. I actually penned an additional LinkedIn article on the back of that, a few years later, on the skills that humans will actually need. He originally had four Cs, because when he wrote it he didn't know about gen AI. And actually, having worked with it, I've realised you also need to think about cohabitation with the machines: in the human and machine age, together both will be more successful, because there are lots of things where judgement and wisdom come in. So those are the skill sets you've got to start thinking about if you're an organisation. You know, today our syllabus is very focused on what I call parrot learning: regurgitating things. I look at my son doing his Latin revision for his GCSEs, and it's extremely painful. He's basically having to regurgitate and learn lots of Latin prose and then do a translation. I'm thinking, honestly, this is a really good one for Copilot, Microsoft's product. So in the future, we don't want Joel's son coming in like that. What we want is a person who can say, I will instruct Copilot to produce me something, and then I will do something valuable with it. So I need that critical thinking. But then I need people who can communicate exceptionally well and read people; I won't go into all of them.
But there are a number of different areas where I think we need to look and say, what are the things that the machine will most likely not be able to do as well as a human? Those are the skills we really need to cherish and focus on. Yeah. And this goes on to one of the big fears you always hear with AI and gen AI: that there's going to be mass unemployment, that it's going to replace millions of people across the workforce. And what you always find with these kinds of new technologies is that, over time, they create more demand than they displace, so you always end up with growing employment and growing economies. But that doesn't mean there won't be some pain in the transition, and that's because some skills are going to become redundant. The people who are not willing or not able to change, adapt and upskill will find themselves, or their skills at least, becoming redundant, whereas those who can upskill and are willing to learn what is now needed in this new gen AI economy will find themselves becoming more and more productive, with higher and higher living standards. Let's come around to people, because this is our fourth podcast on gen AI and every single one has touched on workforce and people. Joel, I touched on your thought leadership earlier; you've mentioned it there, and you talked about the five Cs, which I'll list off here: creativity, critical thinking, communication, collaboration and cohabitation. I thought that was really interesting. I suppose my question is, I'm struck by the absence of computing and coding. So what made you think about those five, as opposed to coding and computing, which might lead into more of what we're talking about here, which is software? I think that's a good point, and I've read a few things on it. I think two things are happening. One is people are actually using gen AI to code. There is really good no-code software available today.
So if you are a critical thinker and you can draw a process flow of what you want the logic to do, software today will generate the code for you. You do not need to learn Python or something else. Secondly, if you have code, you can send it to it and it will convert it into very nicely structured code, and it can do that very, very quickly. So I'm not saying that people shouldn't be coders, but actually, at school, if people learn chemistry and biology, the ability to conceptualise different structures, to break things down into different structures and put them together again, that is synonymous with being able to put together programs and logical flows. Those are skills that I think humans need to have and develop. The actual production of the code, writing the thing and talking to it, the machine should do, and we should let the machine do more of that as we move forward. We need to focus on those high-value thinking processes. Yeah. So my read on what you're saying is that these are human skills that the machine, or the tool, or the software can't duplicate, but can give the perception of duplicating. We talked about this in a previous podcast with generative artificial intelligence art: is a piece of art created by someone using gen AI of equal artistic merit to one by someone using a paintbrush and paint? You're entering a slightly more complex world there. I think that's a really good one, because my hobby is photography, so I've played with Adobe and use Adobe all the time. What you see is that, yes, you can ask one of the gen AI products to produce you an image of X combined with Y, and it will look through and put something together, and you don't know quite what you're going to get. You, as a human, have infinite choices.
You think, actually, I'm trying to blend these two different things together, and you can start to combine them, you can do some clever masking: I'm going to mask out Joel's face and put in a very handsome picture of Tom's face, so I want a man with a beard and a nice haircut. These are the choices. So again, I come back to that cohabited model where we work together, which is where I think we'll find real creativity, and be able to nuance it and create things that sometimes also work at the emotional level. What I've found very interesting with the gen AI stuff is that it's very flat, whether it's artistic or a document, or you ask it to produce a script for you. It lacks that emotional edge. It's not going to have the human resonance that the three of us have got on this podcast. But do you think people are scared because you're talking about creativity, critical thinking, communication, collaboration and cohabitation? These are hard things. These are harder than taking an Excel spreadsheet and moving one set of numbers to another. Is that partly why people are scared of generative artificial intelligence? I got invited to sit with the head of my son's school and some parents to discuss their syllabus for the next five years. I thought that would be a really nice test case, as a budding economist, not a real one like Tom, to test my four C model, or five C, or six. I'm sure we'll keep adding, Ben, as we think of other things the human can do uniquely. And actually they loved it, because although they're starting to touch it, they hadn't really appreciated it. Yes, there's a syllabus which is very much the parrot-learning syllabus: you've got to do things and get through things, it's a way of doing things. But the syllabus and the educational skills have not been upgraded.
So if you looked at the education of teachers, at what they need to learn to actually be able to deliver some of these skills, those are not on the list. I fear our education system today does not address those areas, and those who are teaching have not been trained to address them. Yes, they're helping on mental health; yes, they help on some of the other challenges at school. But are they really helping us build the breed we need to get to Tom's world, the service economy that's going to be a growth engine for the UK in the future? No. I feel like there's a bit of what's called the sunk cost fallacy. If you have spent decades learning to code perfectly, or, as I did all through university, spent months learning matrix algebra and how to do regression stuff in Excel and all of that, you've spent so much time doing it, and when you've done it, you're classed as smart, or it's an achievement. The idea that gen AI means all of that now counts for nothing, that a computer can do it instantly, is difficult for people. To acknowledge that this skill you've spent a long time building, and built up credibility for, is now not worth as much as it used to be is a hard thing to accept psychologically. Tom, you've been travelling around the UK talking about the economy, and people have been asking you about AI, of course. What sorts of fears or questions are you getting around workforce from business leaders? Yeah, a lot of it is the short-term stuff we've talked about: the UK economy especially is suffering from labour shortages, and we know that's probably not going to go away for the next couple of years. Then there are the long-term demographic challenges.
There's a lot of appetite for how business leaders can address these labour shortages, either by upskilling their current employees to make better use of AI in their jobs, or by automating more processes so they can expand the business without hiring more people. And at the minute, this is very dependent on what industry you're in. It's a lot easier to do this in law or accounting than it is in hospitality or construction; it'll be a long time before you get an AI haircut. So at the minute it's still very sector-dependent. And then in the longer term, there's this concern that AI is going to lead to mass unemployment. You mentioned automation, so I'll ask you, Joel: when people say generative AI leads to better automation, do the people saying that lack ambition for this technology? I worry when people say automation, because you have this connotation of robots. I think gen AI is less about automation. I think AI can be applied in two ways. One is it helps us undertake work products in a fast, structured way, based on the tagging and natural language models that sit behind it, to a degree that it can do it quicker than a human. The second thing we realise is that if we want automation, we need intelligent process: the flow of a set of activities, made to go quicker. Yes, we can apply more AI than gen AI to have intelligent processes that make sure things get routed in the most efficient and effective way. And some of that could use a bit of gen AI to pose some questions and choices that might be made, and based on some thresholds or research that it does very quickly, it could steer the process in a certain direction. But I don't think gen AI, for me, is primarily about automation. I come back to what I think it's about, which is the heavy lifting of tasks that are generally very text-heavy.
Or if you want to quickly get a few visualisations, it could help you think through some things and mock things up quickly, if you use the prompt in a smart way. But all it's trying to do is reduce some of the heavy lifting and free you up to have more time to apply your wisdom and insight. Let's play to our day jobs, Ben. You know, you advise organisations on accounting challenges they might have, and you've read a lot and you've seen a lot. But really, what they're hiring Ben for is your insight and your wisdom. It could help you quickly gather some of the latest thinking, look around at what others have done, and maybe give you some starters for ten that you might want to check, or just check whether the codes have changed or GAAP has moved. But ultimately it's you, Ben, with your brain and your creativity, that people are going to pay the money for. The question is, can it help automate some of the heavy lifting? It can gather things and put them together. Is there automation? Maybe. But is it really helping us speed things up? It depends: does it really save the human time, or have you just moved your time to spend it on stuff that's more insightful and valuable? I would build on that to say I think it can be enormously disruptive. I mean, we're talking about technology, driven by data, that allows you to create new content, well, novel content. I'll give a working example. I can't play the flute, but I could use generative artificial intelligence to create flute music. And maybe there's something inside me that makes me able to create great flute music. Almost certainly not, because I'd like to think that if I really wanted to play the flute, I'd have picked it up by now. But this technology has the ability to break down barriers and walls.
And I suppose, looking out 30 years, it's very hard to sit here today and visualise what those changes might be. It's like being sat in a mill in the late 18th century and someone says to you, oh, did you know we don't need a water mill to power us anymore? You can buy a steam engine instead. And then you think about how those mills reconfigured around steam. Nobody sat there that day realised the potential of that technology, just as no one understood at the start how powerful the printing press would be. That, to me, is where we are with this. But there are challenges today. And this is where I think, if we're looking 30 years ahead, the really transformational stuff is going to come not necessarily from gen AI, but from AI more broadly being used as a tool to generate other technologies. If we're ever going to crack fusion power, if we're ever going to crack properly self-driving cars, if we're ever going to cure cancer, if we're ever going to get to Mars, these things are going to be done by using AI tools to help us do it. And they're the things that, if you think 30 years down the line, will make our economies and our societies radically different from where we are today. To the point you're both making, and I'm not disagreeing, I think it's not a single technology. One of the skills we're going to need to develop and deploy is actually to curate our data and get it structured, normalised and organised so that we can leverage it. How are we going to string different databases and different insight modules together to get more robust outputs? How are we going to do comparative analysis on different sources? Potentially some gen AI, potentially some data, potentially some AI to predict and give us views of where we should worry about things. It could also be around the flow; it can be around intelligent automation.
And it's also going to be about how the human then uses that information in the right ways. We need to think about the talent that makes up our people: what are the skills, the kind of roles that they're going to play? But then what we really need to think about is how people are going to harness the data and the digital power. That's where we get a bit confused, because gen AI is the technology, and that's separate from the data, so we need to keep those two things very clear. And then the human asks, what is the wisdom or the knowledge or the insight I put across that, that creates something more valuable than just letting the machine run as the machine, or than what I would have done using my own historical knowledge? So I start with the people: what is the intelligence or the insight that we think we're going to need in the future? Because if we can move some people to the knowledge economy, we can move to a more insightful way of working; we can think about value rather than process, effort and capacity. I think we move the dial to what we have been talking about for the last 30 years: the knowledge age. We call it the digital age, but what is the knowledge age all going to be about, and how far can man go without his digital twin? And that's really where Alan Turing, more than 70 years ago, started thinking: okay, where is this going to take us? How far can the machine really start to capture our cognitive abilities? You know, that's the scary stuff, because that's when we cross into cyborg territory. Now, some people are worrying about how it might be used by criminal organisations and others for deepfakes and other things, but actually it's the cognitive abilities that we worry about. Can the machine start to really use the way we think and the way we do things, and put together ways of manipulating us?
We're starting to see some of that, and I think some of it is the kind of thing that can be used either in a positive way in the future, or in a negative way. So to summarise, you're saying that it's very difficult to project what insight will be valuable in the future. To use myself as an example, a lot of the things I do today, like insight into accounting standards, generative AI will be able to provide something similar. But what you're saying is that a good way to look at it today is to think about what a human is inherently good at and what a machine is inherently good at, and that's a good starting point for looking forward 30 years, or at least as good as anything. Correct? I mean, I'm saying exactly that. And actually, with clients we talk about what we call evolutionary science, or capability-led strategy, which is the same as evolution: just look at nature, at how it evolves and survives. Human organisations are analogous to that; we have to adopt a capability-led strategy. And what we're saying is, now here comes gen AI. That was really exciting; we didn't see that one coming at the speed it's come along, and it's given us a bit of a shock. But there will be other things we can see out beyond gen AI, other technologies coming down the track, and how are those going to help the machine evolve? All I'm saying is, let's not just focus on the machine. Let's also think about the human, and how we create and improve and create the best humans. I come back to my point about cohabitation, which is how we help humans build the cohabited skills where they realise that, to be successful, they have to rely on the machine in ways that you and I and Tom never had to. We were far more reliant on our own abilities. Tom, do you have any closing thoughts?
Yeah, just revisiting, to bring it full circle, that initial point that we tend to dramatically overestimate what these new technologies can do in the short term, and people will get a bit disheartened when, in a year or 18 months' time, the whole economy hasn't transformed and we're not all on universal basic income, while at the same time dramatically underestimating the change in value, the change in productivity, the change in our societies that will come from these technologies over the 20, 30, 40 year horizon. I think this is why this podcast has been really useful, because that horizon is where we're going to see the dramatic change in our economies and our societies, not just in the UK but globally. Great! Well, Joel, Tom, thanks so much for joining me today for the discussion. If anyone would like to hear more from Joel or Tom, please do visit the RSM UK website for more of our thought leadership in this area and others. And indeed, you can find all three of us on LinkedIn. Thank you for listening in today. This is the last in our series of generative AI podcasts, but please do look out for more episodes of The Loop.