Litigating AI

From Middlesbrough To Minister: Policy Lessons For The AI Age

Garfield AI

Curiosity changes lives—but only when the system makes room for it. Philip Young sits down with former Cabinet Minister Greg Clark to explore how the UK can turn raw talent and world-class research into durable AI leadership without smothering innovation or abandoning hard-won rights. The journey moves from school classrooms to Cabinet rooms, from ZX81s and BBC Micros to LLMs, and from London’s legal corridors to creative and industrial clusters across the regions.

We unpack a pragmatic model for progress. Start with education that stretches young minds and builds confidence to explore uncommon interests. Back discovery research with patient capital and let applied partnerships turn breakthroughs into real products. Keep regulation close to where harms occur—privacy, IP, competition—by empowering sector regulators who already hold deep expertise. The aim isn’t red tape for its own sake; it’s guardrails that protect people, boost trust, and make adoption easier. That means taking bias seriously in hiring systems, clarifying training data rights, and ensuring competitive markets so power doesn’t pool in a few hands.

Beyond policy mechanics, we put a spotlight on place. AI advantage grows in mixed ecosystems: gaming tools powering automotive design, motorsport data informing manufacturing, and advanced materials shaping aerospace. Sheffield, the West Midlands, and Leamington Spa show how porous boundaries spark new value. The biggest barrier isn’t technology; it’s silos—between disciplines, departments, and sectors. Break them with intentional convening, joint programmes, and procurement that rewards collaboration, and the UK’s proposition becomes distinctive: innovate quickly, deploy safely, and spread opportunity beyond the capital.

If you care about building with brains and guardrails—where education, research, industry, and fair rules reinforce each other—this conversation is for you. Listen, share with a colleague who shapes policy or product, and leave a review with one change you’d make to accelerate responsible UK AI.

Narrator:

Hello and welcome to the Garfield Podcast, a series of conversations with the people who are working with AI to improve access to justice. In this episode, Garfield founder and CEO Philip Young is joined by the former Cabinet Minister, Greg Clark. The pair discuss Greg's experience of science and technology policy making, whether AI needs greater regulation, and the need for tech to work with other branches of industry.

Philip:

Welcome to the latest edition of the Garfield Podcast, a series of conversations that I have with various guests in the AI, law tech, and policy worlds. This week I am honoured to be joined by Greg Clark.

Greg:

Hello, Philip. It's great to be with you.

Greg:

I've been thrilled to see the success of Garfield.

Philip:

Oh, thank you very much. Yeah, thank you, Greg. What I'm going to do is say a few things about Greg's career to date, because I fear that Greg will otherwise be too modest to talk about the series of successes he's had. Greg has had a very long career of exemplary public service. He started in business, including at the Boston Consulting Group and also at the BBC, and then moved into politics, where during the course of his career he reached the top of the greasy pole. He was variously, at different times, the Secretary of State for Communities and Local Government, the Secretary of State for Business, Energy and Industrial Strategy, and also the Secretary of State for Levelling Up, Housing and Communities, in a number of different administrations. Since politics, he continues to sit on the government's Industrial Strategy Council and is doing various other things. And he has always had a very large interest in science. For part of his career, I believe I'm right in saying that you were the chair of the Science, Innovation and Technology Select Committee in Parliament. I was indeed.

Greg:

For the last four years of my parliamentary career, I chaired that cross-party select committee. And earlier, before I was in the cabinet, I was also the Minister for Science.

Philip:

Having set out in a nutshell a quick potted summary of some of the highlights of Greg's very impressive career to date, I thought I'd ask you, Greg, for a few reflections on your life story, perhaps what brought you into public service, and what you hope to achieve during your career. Well, thank you, Philip.

Greg:

It's all a long time ago. I was born and bred in Middlesbrough, in the Northeast. I grew up in the 70s and 80s, which was quite a hard time for the Northeast in general and Middlesbrough in particular. And I guess what brought me into politics, or at least awakened a political interest, was something I remember from my class at school. I went to the local comprehensive school in quite an urban, inner-city part of Middlesbrough. And of the 180 kids in my year, fewer than a dozen, probably fewer than ten, stayed on for sixth-form study. And this was a time, the early 1980s, in which it wasn't that they were going into great jobs; often they were going into either unemployment or what was then called the Youth Training Scheme. And I remember at the time thinking, this is not right. The potential, the ability, the capability amongst all of these schoolmates of mine, whose education was coming to an end really before it had begun in any sophisticated sense, at the age of 15 or 16, was unacceptable. And I was struck by the fact that this was not regarded as an exceptional problem. The school was a normal school for the area, and this was typical of other schools in the area, and I thought there was a complacency there among, in effect, the political establishment, the people who were both locally and, I guess, nationally in charge. Things needed to be shaken up. I decided that I wanted to try to make a difference through policy to people's lives. And when opportunities came, first as a local councillor, then working for the Conservative Party as director of policy, and also as a special advisor to the then trade and industry secretary, I seized those opportunities. And when I was fortunate to have the chance to stand for and then be elected as MP for Tunbridge Wells, I did that.
And through a very long story, I was appointed a minister and became a cabinet minister, and I feel very privileged to have been able to do so. If I think back to my 16-year-old self in Middlesbrough, I would never have guessed that I would become a cabinet minister and do these things. Yes. Indeed. It was impossible to imagine.

Philip:

I wanted to step back to something you said at the beginning, because it resonated very much with me when you were talking about being a young person in Middlesbrough, and a school that was doing its best, but an education system that wasn't necessarily bringing out all the talents of all the pupils. It resonated with me because I went to a comprehensive school in Derbyshire that was a good comprehensive school, but I was the first person in my family to go to university, for example. And I think when we look at the world as it is now, it is so important that we give people a proper education, or at least the opportunity to have one; whether they take it all up is up to them. Would you agree that with the AI revolution, if anything, the imperative on countries to make sure that their workforce is well educated and has the right knowledge is all the more important?

Greg:

I would, absolutely. I mean, look at not just AI, but some of the aspects of automation that have been deployed now for a couple of decades. A lot of the jobs that people left school at 15 and 16 to go into, even when I left school in the early 80s, are jobs that are now done by machines and robots. Essentially, humans were doing physical manoeuvres that have now been automated. And so you need, I think, a more developed cognitive education. You can't rely on essentially doing mechanical things. Your brain is important in virtually all jobs, and obviously interacting with machines requires skills and abilities and training. But there's also an aspect in which things move fast. Technology moves very fast, and you need to be able to do different things from what you started doing. So even going into a job at 16, 17, or 18 and thinking, as people did, and were right to think, that they could do it for the rest of their working lives, it is now vanishingly rare that someone will do precisely the same thing from when they start to when they finish. So all of that, I think, requires equipping people, especially when they are young and their brains are most receptive and malleable. That is such an important opportunity, both in stretching people and in equipping them with the breadth of education that they should have. It's unforgivable if you don't do absolutely the most you can for young people when you have them, as it were, captive, before they're released into the wider world.

Philip:

Indeed, indeed. I mean, when you're young, you're much more curious, I think, on average, than when you get older. I'm not saying that we stop being curious, because we're always curious about things, but if you get people when they're young, you can give them the foundation so that as they progress through their careers they've got the relevant skill sets and knowledge. And even if they don't have that, they've got the confidence to know that they can go out and develop it.

Greg:

I think that's exactly right. There is a great curiosity that young people have, but it's also a time of some conservatism, of peer pressure, of not being unusual and of fitting in. That weighs heavily. When you get to our stage of life, you care a little less about what other people think of you, but you care a lot when you are in your teens. And I think it is important in education that there is a wide variety of exposure to interesting things that may interest people or not. But that is probably not sufficient, because they probably also need to have instilled in them the confidence to be interested in things that perhaps their schoolmates and friends are not. And so I think the two need to go together. Yes, I agree.

Philip:

And looking back over my career, I think when you achieve academic success in different areas, and you always find things you're strong at, so I was never very strong at sport, but I was quite strong in various academic areas, you then feel confidence in those areas, and in areas allied to them, so you know you can turn your hand to those other things. If anything, when we look at childhood, part of it is giving children the knowledge of where they are strong, where their skills are and where they might be comparatively weaker, so that they can decide exactly how they're going to progress through life. That takes me on to AI, and obviously that's the core topic that I talk about a lot on these podcasts. The interesting thing about the AI revolution is that we had the Industrial Revolution, which fundamentally turned mechanical tasks into tasks done by machines, and the AI revolution is doing a similar thing, but for knowledge-based jobs. Obviously my profession, the legal profession, has been simultaneously excited about the possibilities of AI and worried about what it might mean for the profession and for lawyers. And that takes us towards policy, and your interest in this and the Industrial Strategy Council. What do you think, from a policy perspective, is important for the government in terms of its role? Should it be an active player, or should it be more passive? Should it aggressively regulate, or should it just try to guide and steer?

Greg:

Well, I think there are a number of different dimensions to that. First of all, I think the government should be active. I'm someone who believes in active government, and I think this is borne out by experience around the world. The old view, that on the right of the political spectrum you basically believed in laissez-faire, in government getting out of the way and leaving it entirely to the private sector, and on the left in effectively substituting the state for the private sector, has been repudiated in practice in most countries. All your listeners will know the obvious examples in the US, if we take that as the high-water mark of a capitalist economy: the role, famously, of DARPA and other US government agencies, the defense program obviously, but also the space program, in coming up with new technologies. We've talked a bit about education. In most countries in the world, the government contributes to the funding of education, higher education, and research activity, much of it blue-sky research. So the government, I think, is and should be involved, and if it's going to be involved, then it should try to do so in as thoughtful and as helpful a way as possible. And when it comes to AI, for the UK in particular, I think it's very important that the government doesn't just leave the pitch, because we are very strong in AI, and obviously you, through your work at Garfield, have become part of that strength and part of that community. Most reviews consider us to be perhaps third in the world in terms of the strength of our AI ecosystem and companies, behind the US and China. And that is an amazingly valuable asset that we have.
It comes from a lot of the institutions and the government engagement that I mentioned: our university network, and active encouragement over the years for commercial research, including by the likes of what's now Google DeepMind, which has always had a close connection with government. So I think that is a positive thing, and we should engage and encourage it. It would be an appalling thing if, through neglect, our position of strength was lost to other countries that perhaps aren't as strong but are more ambitious. So it's right that we should do that, and I think successive governments, Conservative and now Labour, have recognised it. Then there is a whole set of questions, which we might go on to, about the regulation or not of AI. It seems to me that these are essential questions, because many of the conundrums, many of the challenges that AI produces, such as intellectual property and who it belongs to, are matters on which public policy and Parliament have, over many decades, even centuries, found themselves thinking hard and regulating through laws, and you can't simply have that and then not be active in seeing how AI changes the relevance of those laws. So I think there is a big need for policymakers to be engaged with the industry, with technologists and practitioners.

Philip:

Picking up the first point there, which is really interesting, the question of the skills and the ability of the UK. We're both the same generation, and we both came through the 1980s. And one thing that the BBC did in the 1980s was run its computer literacy programme, which put BBC microcomputers in every single school. I look back on that, and I had a BBC machine as well when I was a kid. Well, I had the Electron, not the big brother, the smaller one. And that's really why I learned to program. And I know, Greg, you know how to program as well and did some in your time.

Greg:

Mine actually came through the private sector route. It was Clive Sinclair; I had a ZX81. Oh, you had the wobbly keys. And I remember a very wobbly 16K RAM pack that you had to plug in the back. But, just as the BBC Micro obviously was, these things were amazing. Going back to that education point, I learnt a lot. It is basic coding in a way, but the logic and precision, creating outcomes through logical steps. There's an amazing satisfaction that anyone who's done any coding, however basic, gets from having a sequence of commands and instructions that actually works and then produces something. And the whole debugging thing as well. I learned probably more from poring over my ZX81 than from many courses of study that I've done.

Philip:

No, I agree with you on programming. I find this too. When you're doing programming, there's something delightful about the elegance, and also the fact that it forces you to think logically, because it's the old programmer's joke that there's no such thing as a bug, because you've told the computer to do that. It's on you. Absolutely. We were talking a bit about regulation and what the government should do, and we've obviously got this wonderful pool of talent in the UK, and we've certainly seen a lot of companies, both before the LLM revolution and even more so afterwards, pop up doing things that are agentic. You were talking about the importance of the government doing things to encourage that. Do you have any thoughts on things the government should be doing? Should it be funding particular research organisations, or generally encouraging particular sectors, or should it take a holistic approach?

Greg:

So again, I think there are a number of aspects to that. I think the government should fund what's often called blue-sky research, discovery research, which is financially backing researchers, because these are people of great talent and ability who, if they apply their hard work and ingenuity, will in many cases come up with breakthroughs that can change society. And one of the things that the current government has done, following previous governments, is be very explicit about this funding. Patrick Vallance, the science minister, talks about three buckets, the first of which is discovery research, of which no instrumental outcome is expected. And I think it is really important that we do that. Some of our best universities and research institutions have people who are thinking hard, not about a particular application. We've also got a new body called ARIA, which was set up by the previous Conservative government. Actually, it was a brainchild of that controversial character, Dominic Cummings, who suggested that there should be, in effect, an emulation of the DARPA organization in the UK. Okay, yes.

Philip:

Because DARPA was where the internet came from. Indeed, exactly.

Greg:

No, so ARIA has a very novel model in the UK system, in which it is funded very generously, I think about a billion pounds over a five-year period, or probably even more, but without the same scrutiny that even academics in universities have. There's a lot more risk embodied in that. And I think historically policymakers have been quite risk-averse; after all, it's taxpayers' money and people need to defend it to their constituents. But recognising that there's some room for that, I think, is important. Then there is an important aspect of applications. When I developed the industrial strategy that I did in 2016, we identified what we called some grand challenges, in which the UK had strength and the world was likely to take an increasing interest. One of them was AI and the analysis of data. And so we put a lot of applied research funding into that, so that institutions, universities, and research bodies could work with companies to develop applications as they went. And I think that is important as well. So I think it should be explicit. And there are choices embodied in that. You can't do everything; you can't fund every single branch to the extent that it might merit. So choosing something that we're good at and that is going to be important in the future as well as now, AI, I think it's right that it should be one of our priorities and that we should explicitly back it.

Philip:

No, I agree. Obviously AI is going to have a transformational effect upon our civilization, so not backing it and not being in the room for it would be incredibly negligent. The other topic that we touched upon but didn't yet go into is regulation, and how much of it there should be. The interesting comparison there, I think, is between America and Europe, because in America the present US government is doing everything it can not to regulate, and I've seen a few examples where it has intervened in state-level regulation to try to stop it. On the other hand is Europe, which went out and rather trumpeted the fact that it was the first to regulate AI with the EU AI Act, and in the last two weeks has been rather rolling back from the level of regulation it was proposing. So we're in an interesting place as a country, because, as I understand it, successive governments have taken the view that we can basically leave it to sector regulators to use their existing rules, and to amend them if they need to. We've very much said the existing system probably works, and there's no need for an act of parliament or anything like that. Where do you think the balance is correctly struck? Is it in the middle of the two extremes, or biased towards one of them?

Greg:

I think at this stage in the development of AI, we shouldn't be cutting off its progress prematurely. That would be a mistake. I thought right from the outset that the EU approach, of establishing in effect an AI regulatory force and being very interventionist in assessing different levels of risk, risked chilling the development and the experimentation, including deployment, of AI in Europe, and there seems to be a growing consensus that that has been the case. I think the UK approach, which has been, just as you said, to work through existing regulators, is the right one. Rather than regarding AI as something completely different, by doing it through the regulators you're looking at its effects, and detecting and anticipating whether it's going to have effects that need to be controlled, that would be unacceptable to society. And you've got regulators with expertise in that, and I think it's best to draw on that expertise. And these questions fall into very different categories. Take intellectual property, for example: obviously there's a big and quite difficult debate about who owns the intellectual property and to whom returns should be given. But, as you know as a lawyer, there is a substantial history and jurisprudence and philosophy of dealing with these questions that we should absolutely be rooted in. It's important that we get that right, but it's not easy, and I don't think just appointing some new AI regulator that knows about AI is going to resolve it in any way satisfactorily without drawing on the expertise there is in intellectual property and copyright law. So I think that should be determined in that tradition. The same goes for matters of privacy.
We have regulators; the Information Commissioner's Office is set up to be our privacy regulator. They should have the powers to deal with the applications of AI there and be part of the discussion as it develops. So I think that's the right approach. On the American example, I'm closer to the American model in terms of my preference for not imposing central regulation here. But I worry about the West Coast way of thinking. There is a bit of a kind of West Coast, we're-all-heroes attitude: these pesky, irritating bureaucrats, these policymakers on the East Coast, are holding us back and really don't understand what a heroic mission we're on, so just get out of our hair. There is a lot of achievement there, but I don't think that is good enough. Take, for example, questions of bias. I was a member of Parliament for nearly 20 years. For centuries, the UK Parliament and others have worried about people being discriminated against unfairly, and have introduced laws to prevent people being discriminated against on the grounds of race, for example, and of course sex. It's not so long ago that women didn't have the vote, and it took a huge parliamentary campaign and initiative to make sure that level of protection of people's rights was there. So, in the context of AI, take its application to employment: were you to find that AI models or their applications were systematically disadvantaging people of colour or women in employment markets, and you knew they were doing that, then you can't suddenly think, well, that's fine, what can you do?
That's just how the AI works, or at least how it produces its outcomes. Battles have been fought to provide that degree of protection and equality, and so you have to engage with AI. So I do think there's a very important space for public policy, but it should be in the context of generally, in my view, welcoming, or certainly recognising, that AI is going to be even more pervasive, even dominant, in the future, rather than trying to keep it at bay, as it were.

Philip:

Yes, and I obviously agree with that, because I took a conscious decision very early on that Garfield should be regulated, and that I was going to build something that would be a regulated product. It's interesting, isn't it, that a lot of people around the world have said, oh, Garfield, how has that come about? And why has it come about in the UK as opposed to another jurisdiction? I do think that part of the reason it's come about here, as opposed to on continental Europe or somewhere else, is because we took a sector-specific regulation approach and said sector-specific regulators can decide what is appropriate and what is not. And also the fact that it's regulated: I think a lot of people were surprised about that, and I think it's the right approach. The two words I've used a lot in my public pronouncements over the last year, in the various speeches I've given, are accountability and responsibility. I think those are very important for something like Garfield, but even more important for the more foundational layers such as the LLMs themselves, because what they essentially do is concentrate a lot of power into a relatively small group of hands. And this comes back to your point about bias and accessibility. Whenever you have power that is concentrated, you need to make sure it's accountable and exercised responsibly.

Greg:

I completely agree with that. And, you know, antitrust law, competition law, has been very important in this country, and as it happens in America. The trustbusters of the early 20th century in America were very important for the development and the dynamism of the US economy. So these things make a positive contribution. I don't think these questions should be seen as tedious quibbles or objections; they can help the development of a vigorous, contested space. And I do think this is an advantage that the UK prospectively has. I mentioned our research depth and capability, which is strong through our people and our institutions, one of the top three in the world. But one of the things we have also been historically good at, and you wouldn't necessarily know this from some of the political discourse, is actually getting a regulatory environment right. You, Philip, are a lawyer who practised for much of your working life in the City of London. One of the reasons the City of London has been prominent, and companies and individuals around the world have looked to English law as the law under which to write contracts and to settle disputes, is that there is both a rigour to it and a dependability, and it offers protections that are valuable. And in a world in which there is a lot of risk and a lot of malpractice, and where trust is probably receding, if you can create an approach to regulation, if we call it that, that is positive and supportive of enterprise and innovation and discovery, but also provides some degree of underpinning confidence that people aren't going to be bilked and defrauded, then that is quite a valuable offer, it seems to me, for the tech world, as much as it has been for financial services.
And I think for policy and regulation, if we get it right, this can enhance the strengths of our technological expertise rather than stand in opposition to them.

Philip:

Indeed. It's an ecosystem thing, isn't it? You don't just need the talent to build the products; you need the regulators who will allow the right things to be built, the ability to finance where finance is required, and lawyers to ensure that the legal structure works. And I agree with you; I've seen examples of that throughout my entire career. I always think the most flattering example of the power of English law is the fact that in Dubai they have the Dubai International Financial Centre, which is basically an English law zone in the middle of the desert, a very interesting concept, and flattering towards the power of English law. There's one further topic that, in the time available, I'd like to take your mind on, and it's something very close to your heart. I know that during your career you put a lot of work into the regions and their development, and you were Cities Minister. What do you think about how we could ensure that AI innovation is not just centred around London but is more regional? I think about this, of course, because I grew up in Derbyshire and my local city was Sheffield, and in the 80s Sheffield had a really strong, vibrant software development industry; a lot of companies were founded there and did a lot of good for the city. So we very much want to encourage the regions to get their fair share as well.

Greg:

So I think that's absolutely right. I think there are big opportunities. I work at the University of Warwick these days, and the West Midlands is often associated with manufacturing, the automotive industry and such like. But actually one of its strengths, and many of your listeners who are gamers will know this, is that Leamington Spa, adjacent to the University of Warwick, is one of the world's biggest centres of gaming: the development of games and of the technology associated with it. And across the country there are areas of real strength. One thing in particular that I think is a great asset, and gaming and visual representation exemplify this, is that the boundaries between industrial sectors, if I can call them that, which were previously thought to be pretty rigid (you know, Sheffield was a steel town, Birmingham was a motor city, and so on), are breaking down. So the work that's done, often with the University of Warwick involved, between the gamers in Leamington Spa and the designers at Jaguar Land Rover, for example, on their latest models (and when I say designers, I mean not just the aesthetics but the handling, the aerodynamics, all the other aspects, and the testing), there is a connection there that is actually increasingly typical. We also have, as part of and adjacent to the West Midlands, what's sometimes called Motorsport Valley: the cluster of F1 teams there, which rely on manufacturing but also on a lot of analysis and data handling. So I think the opportunity for us, and it's a national opportunity as well as a regional one, is to bring about a greater porosity, if you like, a greater sense in which people in what might traditionally have been different sectors can move into each other's.
And it's often at the interstices of those sectors, and indeed of academic disciplines for that matter, that the most interesting discoveries and innovation take place. So whether it's Sheffield, which has the Advanced Manufacturing Research Centre and is very prominent in aerospace (Boeing has a big operation there), or the West Midlands with its componentry, the connections between that and the creative sector, for example, are very substantial. And one of the things I think we should do as a nation, speaking as someone who has been a policymaker for many years, is look to enable that. Because, a bit like what you were saying right at the beginning of our conversation about people in schools being able to discover what they might be good at and what is available, sometimes people tend to be encouraged by the system to stay in their lane. Whereas actually merging with another lane is often a positive thing, not necessarily on the M25, but in other respects. And one can do that through the educational system and the way we think about policy and business, it seems to me.

Philip:

We've come full circle, because we've come back to talking once again about the importance of giving young people opportunities, the ability to explore and exercise their curiosity, and the importance of having the ecosystem there to enable the right sorts of businesses to begin, to grow, to thrive, which is basically a fusion of everything: government, the legal groundwork, regulation, and having the skills and the money to achieve that. It's been a very thematic discussion in that respect. So I think that's right.

Greg:

I think there is a big challenge that applies to almost every part of this conversation, which is breaking down silos: bringing people together to cross-fertilize ideas, to work in different ways than they otherwise would. And this requires a deliberate effort, because the forces that constrain people, or pull them towards staying in tribes that are remote from others, in technologies that don't really talk to others, are quite strong, even in universities. Universities tend to be organized around departments and schools that are disciplinary, when the very notion of a university is that it's universal, bringing people together. But sometimes on a university campus you have the physics department, and the English department might be next door, but no one goes from one to the other. We need to break down those barriers, and between government and technologists that's also important. People, hopefully your listeners, who are either active in developing technology or using it, shouldn't think, "Oh well, the policy world is for these politicians or these civil service technocrats; it's not really for us, they'll do it." They absolutely need to have a mutual understanding, it seems to me.

Philip:

Yeah, very much so. And I'm going to take that as a challenge for this podcast as well. The objective now is to cross-fertilize ideas as much as possible. Going forward, we will try to get as many varied guests as we can, and as many ideas as we can to cross-fertilize, in your honour, Greg.

Greg:

Well, I'll listen to them with great interest, enjoyment and anticipation.

Philip:

Well, thank you very much. I think that's all we've got time for today. Let me end by inviting you, if you're listening, to think about how you can cross-fertilize ideas as well. And thank you very much, Greg, for joining me on this podcast.

Greg:

Great pleasure, Philip. Thank you for inviting me, and congratulations on the amazing work that you've been doing on Garfield.

Philip:

Thank you.