Litigating AI

Philip and Daniel Long Discuss the Reality Behind Building an AI-Powered Law Firm

Garfield AI Season 1 Episode 2

In this founder-to-founder conversation, Philip Young (CEO) and Daniel Long (CTO) unpack their legal product Garfield AI. Built around precision and empathy, Garfield offers a guided path through small debt claims that feels friendly on the surface and is rigorous under the hood.

We begin with Daniel’s background in physics and his journey to legal tech, exploring how energy, scale, and reliability shaped Garfield’s architecture. We then consider the broader implications of faster legal processes, including the changing economics of justice and the potential for AI-enabled triage and drafting to increase access to legal services.




Intro:

Hello and welcome to the Garfield Podcast, a series of conversations with the people who are working with AI to improve access to justice. In this episode, Garfield founder and CEO Philip Young talks to his co-founder and Garfield Chief Technology Officer Daniel Long. The pair discuss Daniel's technical background, how Garfield was built, and what the future holds for AI in the legal profession.

Philip:

Welcome to the Garfield Podcast. And on this edition, a very special guest indeed: Daniel Long, my technical co-founder. Daniel, welcome. Thanks. Thanks for having me. Well, it's my pleasure. We're keeping it in the family today, because on this one both of the founders of Garfield are going to talk to each other rather than have somebody external come in and chat. So I thought a good place to begin would be to ask you, Dan, just to set out for our listeners something about your background, i.e., where you started, and then we can move on in a few minutes to how you came to suffer the misfortune of working with me.

Dan:

Yeah, so it's always a complicated story to try and describe, because it's not necessarily a clear path. I originally started studying physics, partly because a lot of people in my family had studied physics, but also, I guess, because I didn't really know deep down what I wanted to do. I was always asking "why" questions, and physics naturally leads you that way. But I was also tied to the idea that I wanted to do something that is in the real world, that is physical, not just in academic papers. Sadly, a lot of academic research doesn't make it outside of the lab, or if it does, it doesn't make it outside in your lifetime. So I've always had one foot in startups and one foot in academic research. I've jumped to and fro: I spent a bit of time working at a biotech startup, briefly at a fintech startup, but was also doing research in quantum physics, and actually started a PhD in a hybrid field called neuromorphic photonics. That's using the same technologies as quantum physics, but applied to building brain-inspired architectures for computing. Very far from Garfield.

Philip:

Yeah, very far from Garfield. And if it's photonics, is that then sort of computing with light? Is that the fundamental idea?

Dan:

That's the idea. It's a hybrid system. That's always the challenge with photonic computing: you can't always exist entirely in the photonic domain. But this was a really cool architecture where you take a superconducting circuit and use it to imitate a neuron, and then you have photonic interconnects. So you can have something like the electrical synapses of your brain being modeled by light hitting this superconducting wire, which is, yeah, kind of science fiction. And hopefully someday it won't be just science fiction and will actually be scaled out to the scale of a human brain, which would be fascinating.

Philip:

And do you think people will one day build sort of computers with this sort of photonic technology that are not just sort of computers in a lab, but sort of computers at home, things like that? Or is that not a use case of this technology?

Dan:

Certainly these forms will never be in a home, because, unless someone discovers a room-temperature superconductor, superconductors normally need to be cooled to only a few Kelvin. So that's not really very practical. But certainly, in the direction we see with current LLMs, you're typically running these processing jobs in large data centers. But I can see you've got a joke on the cards there.

Philip:

Yeah, I was going to say, when you said it's not practical to have a room in a house that only exists at a few Kelvin, I was thinking maybe you should come round to the conservatory in winter. It's a few thousand Kelvin in summer, so it really does vary between the two. I'm not sure any computer could handle that sort of swing. If it helps, neither can I. So, photonics is really interesting then, and presumably the advantage of photonics over good old silicon and electrons is that it's much quicker to do the calculations?

Dan:

Yeah, speed isn't the big one directly, because electrical signals already travel close to the speed of light, or a significant fraction of it. So speed in that regard isn't necessarily the biggest change. There are other benefits, though. The biggest one people often point to is energy usage: it doesn't produce heat. When you turn on the light in your room, the light bulb might get hot, but the light traveling through space doesn't produce heat. In the same way, light within a waveguide in an integrated photonic circuit wouldn't produce the same heat. So energy usage is one of the big ones. The other thing, which sounds very complicated, is wavelength multiplexing. That's basically saying you can pass multiple different frequencies down the same waveguide, and then you can get the sort of parallelism and scale that the human brain reaches. So that's exciting.

Philip:

Yeah, no, it sounds very exciting, but I can't help feeling that wavelength multiplexing sounds like something people say in a Star Trek episode. I'm sure, yeah.

Dan:

Someday, someday it might be in your conservatory then. I'll have to look out for it.

Philip:

Yeah, very cool indeed. And I think it's fair to say as well that when we first met with a view to starting Garfield, at that point you were actively contemplating starting your PhD in this topic, and we were only really thinking about beginning to build a prototype to see what was possible. I think it's also fair to say that I might have stolen you away from your PhD, and I do apologize for that.

Dan:

I always viewed it as: the PhD is always going to be there. It's not like research is someday going to be solved and those questions won't be open anymore. So I hope that someday future me goes off and does a PhD. But yeah, we started with, look, let's build a little prototype and see if it works, with no grand ambitions of scaling far beyond that. And it was fun, so we both wanted to carry on. The PhD would have been fun too, but I could see that Garfield was going to start picking up pace, and I wanted to be on board for that.

Philip:

Yeah, indeed. And you went all in on Garfield, and I personally am very grateful for that, because I don't think we'd be where we are now if it hadn't been for you deciding that the sunnier climes of Australia and the PhD were not as attractive as the incredibly damp atmosphere of London and breaking new ground in AI legal technology. No, absolutely. So we've worked together now for about two and a half years, and you've obviously seen quite a bit of how the legal process works, at least in the small claims court, and you've had the misfortune of having to talk to me a lot about law and ethics and regulation and how lawyers think and how lawyers work. Is there anything that stands out in your mind over that period, anything that has surprised you or really interested you?

Dan:

Yeah, I always find it interesting chatting to lawyers, because it makes me realize that my understanding and experience of the legal profession is very different to theirs. Every now and again we get to hear a story from you about big-ticket litigation, and it's fascinating, the sort of world that exists there. But the broader point I see is that most people interact with a court system where there are long delays. It tries to be simple; the small claims court is designed to be easy to use, but in reality there are a lot of unfamiliar terms and forms. Even internally we sometimes find it confusing. You get the N1, the N9, N9A, N9D; it's hard to make sense of what all these different forms mean. And on the other hand, this is the court the majority of people will actually interact with, not the TV-drama criminal cases and all the Suits-style big-ticket stuff. In reality, this is what shapes the majority of people's perception of the courts. So it's quite a shift from the TV world I'd have seen more of before I was working with you.

Philip:

Yeah, it definitely is different. The whole idea of the county courts, and particularly the small claims track, is a super-simplified process and procedure to deliver justice, but very much a rough-and-ready sort of justice, because it's got to be proportionate to what's at stake. You might deliver a massive Rolls-Royce service on claims where there are hundreds of millions at stake, but you wouldn't ever do anything like that where only three or four figures are at stake. So it's a very different world. And what do you think in terms of our access-to-justice mission and what we're trying to achieve in helping people get through the court system? Have you had any eye-opening moments as to how the system presently works and how we're hoping to help improve it?

Dan:

You know, for the majority of people it's just pre-action, and then they find their issues resolved. But when we launched, there was a lot of positive support. I had friends I hadn't spoken to in a long time messaging me out of the blue, having seen the launch, saying, oh, this is great, we've got these issues, but we couldn't afford the legal work, we didn't have the time; this is exactly what we need. When you're building a startup in a bit of a vacuum, and especially for someone like me who'd never experienced the legal profession, I didn't know how much of a gap there was. I think that was much clearer to you. So that was really eye-opening. You've probably spoken before about how you initially saw the small claims court, but I always find it fascinating that most big-ticket litigators don't have much interaction with the small claims court. So I wonder if you can introspect on what it was that made you take such a career shift and actually dive into this world.

Philip:

Oh yeah, well, that's basically Andy, my brother-in-law, whose invoices sometimes weren't paid. That was always really frustrating for him, and also quite frustrating for me, because it meant I had to give him a hand. I was very happy to do that, but I was frustrated on his behalf that he was having to spend time talking to me when he'd rather be working. And as a young lawyer I did quite a few small claims, obviously not so much in my later years, but I still remember all the frustrations, not just from helping Andy, but from helping actual clients when I was in practice. Not a lot had changed in the fifteen to twenty years between those two periods. So I instantly saw that this is an area where a product like the one we've built could really help. Let's move to a new topic now and talk about Garfield, the product that we and our team have built together. I think many of our listeners will be really interested to hear your insights into what we've built, what it is, and, at a very high level, how it works. Do you want to begin by sharing some thoughts on this?

Dan:

Yeah, sure. Behind the scenes, we like to joke that it's basically a distillation of Philip's brain: we grabbed it, stuck it through a big syringe, and injected it into some code. That's the attempt, anyway. That code then has to make sure it follows all of the court rules, doesn't make mistakes, and takes users smoothly through the journey. When we think about that, we can divide it into two separate components: there's an expert system, and there's an AI system, and the marrying of those two is really what makes Garfield. The expert system is in charge of making sure the user follows the Civil Procedure Rules. So that's observing the right timelines for certain things, giving users the right options at the right time, and generating all the documents in a compliant manner. A lot of this is handled by the expert system, and, to be honest, making sure it all works smoothly takes most of our time. Because it's not only the internal logic; it handles all of the outgoing communications as well. When a user has paid for a document and it has been approved by Philip, it's the expert system that will be emailing it, posting it, sending it to the court, sending it to the other party, and sending it to the user for their records. So there are a lot of different processes that need to happen 100% reliably.

Philip:

Let me just jump in there and say, for any lawyers listening, lawyers always subdivide law between procedural law and substantive law, and it's probably worth saying that the expert system also does the substantive-law parts. A good example is limitation. At the initial triage stage, Garfield checks whether there's a limitation issue with the invoices that you put onto the system, and then handles that appropriately. So that's a good example of where it also does substantive law. But sorry, I interrupted you. You were going on.

Dan:

I was going to say, actually, that's a common point, probably, for lawyers to hear, but also for technologists. There's the common trope that everyone tries to apply AI to everything. And actually, for some problems you really shouldn't: if you want 100% accuracy on certain things, AI is unlikely to give you 100% accuracy. So we've been quite tactical in that way; we put as much into the expert system as we possibly can. But anyway, that's the expert system side of things covered. The other side is the AI part of Garfield, and that's pulling information out from the user. They might give us a PDF with an invoice in it, and we've got to pull out any relevant facts: dates of contracts, amounts, line items, all of that kind of stuff. Information extraction is the AI side of our system. But like I say, we try to make sure that the logic of how the claim should progress is not handled probabilistically. That's the high-level thinking, at least.
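A toy illustration of the split Dan describes: the probabilistic AI step fills a typed schema of extracted facts, and everything downstream of it is deterministic. The class and function names here are invented for the example, not taken from Garfield's code; the £10,000 figure is the small claims track limit in England and Wales.

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

# AI side: an LLM reads the uploaded invoice and fills this schema.
# That model call is elided; only its typed output crosses the boundary.
@dataclass(frozen=True)
class InvoiceFacts:
    invoice_date: date
    amount_due: Decimal
    debtor_name: str

# Expert-system side: deterministic rules over the extracted facts.
SMALL_CLAIMS_LIMIT = Decimal("10000")  # small claims track, England and Wales

def eligible_for_small_claims(facts: InvoiceFacts) -> bool:
    # A pure predicate: no model call, so the answer never varies.
    return Decimal("0") < facts.amount_due <= SMALL_CLAIMS_LIMIT
```

Keeping money in `Decimal` rather than floats is part of the same "100% reliable" discipline: the rule engine never has to reason about rounding error.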

Philip:

Yeah, because otherwise we'd get into all sorts of issues there, wouldn't we? It's been a big architectural decision we made. I think it's kind of interesting, isn't it, that in the last three years the phrase AI has come to mean predominantly LLMs, because I'm old enough to remember the time when expert systems were also described as AI, even though they're basically, you know, if-then-else logic and things like that. And we've now reached a world where expert systems aren't treated as AI anymore by a lot of people.

Dan:

They're definitely not as exciting, but if you want anything to work in the real world, I think it's the marrying of those two that you have to have, really.

Philip:

This is more a sort of conceptual question about the future, but do you think given the current rate of progress of LLMs, we'll get to a point where LLMs can do more of the work that at the moment our expert system is doing reliably?

Dan:

Maybe. I don't know if you'd ever want it to, though. People are trying to shrink these models down. There are interesting stats, like the human brain using about 20 watts of energy. That means the human brain does an awful lot with very little power, with something like 86 billion neurons, probably a lot more than, or currently on par with, large language models. If people can move down to smaller models, then you get low latency and low cost in using them. So maybe you could move more logic there. But I don't see much benefit, because you'd just be saving yourself the hassle of encoding those rules in order to run them inefficiently in production. So yeah, probably not.

Philip:

I think, on that last point about encoding rules, it's quite an interesting topic for lawyers as well, because the common law is not a written-down code somewhere. You have to dig your way through practitioner works and statutes and judgments to find where the common law stands on something. So English lawyers don't see it as a codified system. And yet it is possible to codify it. I spent a year and a bit working in Hong Kong, and their version of English common law, when I was there twenty-odd years ago, was almost fully codified; you could go and read it in a book. So it is possible to encode law in a way that I think lawyers would be surprised by.

Dan:

Maybe an interesting thought there is that when you're working as a lawyer, you're often solving problems for people. So if you also encode the decision about which problem you're going to try and solve, maybe you end up building a system that seeks to solve the problem the user is actually asking about, rather than just diving into exactly what it is coded to do.

Philip:

Yeah, it's an interesting topic, isn't it? I think another thing that's really interesting is approachability. It's not at the forefront of our minds when we think about AI, and yet it's pretty interesting watching how much effort the frontier labs have put into making their models at least moderately friendly. And it's interesting watching users on our system, because they soon start talking to Garfield not as though it's an online tool, but as though it's a person and a friend. That's quite warming to see. But it's also curious that we as humans have a tendency to do things like that.

Dan:

Definitely, yeah. We were joking internally before we launched that we were going to be shocked by how people interact with the system, but actually most people talk to it like we talk to it. Occasionally we get a user who sends letter-style prose over to Garfield, and it can handle that, but it doesn't need it. It just needs a casual conversation, really.

Philip:

Yeah, exactly. And actually it's very efficient in how it guides people through the process, because it knows what it needs and it just asks for it.

Dan:

I was going to say it can be a bit myopic about that sometimes. Sometimes I look at the conversations and, however you try to pull it away, all it wants to talk about is small debt claims, which is, yes, not the most interesting party topic, maybe. But that's quite intentional, because it is only focused on one thing.

Philip:

So, yes, it can be quite myopic, as you say, in wanting to progress whatever the user's request is. I was going to ask you: we've built Garfield in a modular way with a view to expanding in the future. Any thoughts on the design and the architecture we've built to achieve that? Because obviously we are at some point looking to build additional modules for Garfield to solve other common legal problems.

Dan:

Yeah, I guess there are two different categories there. There's the UI/UX that we've designed, and that's both external and internal. Externally, the user has a model of interaction that can be scaled to any paper-based dispute, really. Internally, there's the admin panel we've built for you to be able to review documents. We'd have to expand that if cases got more complicated, but that's probably just adding more tabs and making things more convenient for you. The core structure is there: you can see a dialogue, you can see the documents uploaded, and you can contact the user if more information is needed. The way we built everything, I find it quite elegant, though I'm biased. Effectively you can think of it like a decision tree, although internally it's a bit more complex than that. But the design is very modular. If you decide there's another stage in the process, or it makes sense to inject a stage where there's a decision point, then we have a fairly standard schema, a kind of class that we write for that, and within it you can define all the actions that can be taken at that point. So it's actually fairly simple, really. When we first started, I just began with a giant if-else block, and obviously that was a nightmare to look at and quickly got unwieldy and complex. But the structure we've moved to is pretty abstract. It allows you to generate any documents, or allow different parts of the claim to be modified, all of those kinds of things, in a fairly modular way. So yeah, we're still ironing out small things with small debt claims, but the basic structure is there to be expanded, for sure.
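The "stage as a schema class" idea can be sketched as a small state machine: each stage declares the actions available at that point and where each one leads. All of the names below, including the stages themselves, are hypothetical illustrations, not Garfield's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One step in the claim journey: maps each allowed action
    to the name of the stage it leads to."""
    name: str
    actions: dict[str, str] = field(default_factory=dict)

class ClaimStateMachine:
    def __init__(self, stages: list[Stage], start: str):
        self._stages = {s.name: s for s in stages}
        self.current = start

    def available_actions(self) -> list[str]:
        # The only options offered to the user are the ones
        # this stage explicitly declares.
        return sorted(self._stages[self.current].actions)

    def take(self, action: str) -> str:
        self.current = self._stages[self.current].actions[action]
        return self.current

# Illustrative stages for a small debt claim.
stages = [
    Stage("letter_before_claim", {"no_response": "issue_claim", "paid": "closed"}),
    Stage("issue_claim", {"defence_filed": "directions", "paid": "closed"}),
    Stage("directions", {}),
    Stage("closed", {}),
]
claim = ClaimStateMachine(stages, start="letter_before_claim")
```

Adding a new stage, or injecting a decision point, is then just appending to the list, which is the modularity Dan is describing, versus threading a new branch through a giant if-else block.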

Philip:

Yeah, when we started this and I sketched out the possible pathways of a small debt claim on an overly long flowchart, it struck me early on that a lot of what lawyers do is conceptually very repetitive. Provided you can conceptually handle the particular steps, you've broken the back of a lot of the challenge, and then you're just down in the granular weeds of the exact task in question, which is something you can build things to support.

Dan:

Definitely.

Philip:

That's not to say that what lawyers do is like working in a factory; that's not the case. But if you step back and think about what you do from a high level, you realize that there is a degree of conceptual repetition in most of the tasks.

Dan:

Yeah, I think that would be an interesting debate to have with some of your lawyer peers, because others probably don't view it quite the same way. I think people find it entertaining to know that Garfield started with me and you in a coffee shop, looking at some big schematics you'd drawn up of the small claims process, and me just staring at them for a long time, trying to puzzle it all together and make sense of it.

Philip:

That's probably more an indictment of my flowchart than anything else, because I probably should have explained all the steps in a bit more detail at that very first stage. I'm excited about the tech we've built for Garfield, and the fact that it is very modular and can be applied to different things. We've built it, to an extent, in an abstract way, and I think that's very much the future. I think you and the guys took absolutely the right architectural approach in doing that.

Dan:

I hope so.

Philip:

Let's move on and talk a bit about the future of legal services. Whilst you've obviously not been blessed by being a lawyer yourself, although I tend to think you're an amateur lawyer now, a bit like I'm an amateur software developer, you've seen a lot about how lawyers think and how legal services work, and you're very much into the detail of the technology. What do you think is the direction of travel with LLMs, and AI generally, in the law?

Dan:

Well, I think one topic that any lawyer listening is surely thinking about is the pricing structure. It doesn't make sense to integrate AI to minimize your time whilst also charging for your time. I think that's what a lot of law firms are wrestling with. From our end, it made sense to start with fixed-cost pricing, but obviously, if you're a traditional law firm, maybe you have to go for some hybrid pricing or something like that. So there's that evolution. But in terms of the actual work that can be done, it massively widens where legal services can extend to. There are plenty of our users who would not have hired a lawyer for their work and would probably have written much of it off. That has a big impact in itself. The other interesting question, to which I haven't got an interesting answer, is: what does it mean if legal work suddenly becomes ten times faster? If disputes are resolved ten times faster, how does that change the world? A bit grandiose, maybe. But you might have some thoughts on that.

Philip:

Yeah, I do have a few thoughts on that one, because it's something I've been thinking about. In fact, I touched on this topic when I spoke at Cambridge University a week or so ago, as it was raised from the audience. My own personal view is that if you make the provision of legal services less time-intensive and easier to provide, obviously while maintaining the quality and accuracy of the advice, which is important, then what you do is make it essentially cheaper. The biggest roadblock for access to justice historically has been cost, and cost has been a function of human time, because a lot of legal processes take up a lot of time, not all of which is particularly valuable. I can think of plenty of document-heavy tasks that don't really add much value but are very time-intensive. So if you strip down the amount of time that's required, it becomes less expensive to deliver, which should ultimately lead to more competition and therefore be cheaper for clients. And then the question that some law firms are worrying about is: does that mean there'll be less legal work out there? I personally think not. Funnily enough, today on LinkedIn I put up a post on this very topic, where I quoted a study commissioned by the Law Society and the Legal Services Board. It found that something like 63% of all people in England and Wales had had an unmet legal need over a four-year period, and of the ones who had had a legal need met, only a percentage had gone to legal services. And that's just the consumer market, before we even get into the business market.
So you can see instantly that there is a lot of scope for more legal work, but at the moment it's not economically deliverable, whereas AI should change that calculus and make it possible to deliver a lot more work. I think that just means more people will use lawyers, maybe supported by AI products, for things where historically they hadn't.

Dan:

Yeah. So it's good for the lawyers, but also for the client: it hopefully means their issues are resolved much faster, things don't linger, and hopefully they're satisfied with the result.

Philip:

Yeah, hopefully. Hopefully. It's important to remember, I think, that justice and the law are not products that exist to enrich lawyers. They're there to ensure that civilization functions and that people's lives are not burdened by extra unpleasantness. So anything that improves the ability to get justice, get legal advice, and know where you stand is a big net positive for civilization. So I think it's very good. What do you think about the progress of LLMs and what they'll be able to do?

Dan:

I feel like it changes every week, so we're going to date this podcast if we cite any specific result. But okay, to date it then: there's been a lot of talk about scaling laws, performance plateauing, everything commoditizing. And then this week Gemini 3 comes out and it smashes the benchmarks. Importantly, it smashes benchmarks like the ARC 2 challenge, which specifically tests more abstract reasoning. That's something a lot of people point to, saying we're not going to reach AGI because there's no actual reasoning going on in these models, and then you see results like that and the goalposts just have to keep shifting. I do wonder whether there will be a bifurcation in the sorts of models people are using. There probably already is to some extent, because we're already at a point where you don't need the latest foundation model for a lot of tasks, and the cost of running those models is not so high, so for some applications you don't want to keep climbing that curve upwards. So, first off, I think there's already a huge number of applications where LLMs are at a point where they can have a huge economic impact. And from everything I read, it doesn't seem like any limits are being hit at the moment. What are your thoughts on that?

Philip:

Yeah, I agree with the horses-for-courses comment. It's interesting how the ecosystem of LLMs has grown up, with some specialized for certain tasks, and I think that will just continue. Although, simultaneously, there's also an increasing degree of proprietary control over LLMs. If we think back a few months, Meta was still building the Llama models and releasing them publicly, and I think they've just stopped doing that; I think I'm right in saying that. And that's interesting. Obviously the Chinese labs are still releasing DeepSeek models publicly. I think that's an interesting aspect of this, because I do think it would be better for society if there were good, publicly available LLMs. Now, they'll never be quite as good as the frontier labs' models, for obvious economic reasons, but I think it's important. I'm not quite sure how we get there, particularly as in our country we don't have any frontier labs. No, it's a problem.

Dan:

And on the flip side, I was recently causing you to lose sleep when you were reading AI 2027. Oh god, that, yeah. When you read enough of those sorts of things, maybe you wonder whether making all of the models open source and open weight and so on is entirely safe. Maybe it has some downsides.

Philip:

Yeah, it has some downsides. Some people who are not well intentioned may then use those technologies for very unwise things. Actually, on the topic of that article, I saw an attempted prediction, I can't remember if it was of GDP or GNP growth, issued by the Federal Reserve Bank of Dallas last week or the week before. It was unintentionally pretty funny, because they gave three cases. The first was: AI is going to significantly, but not massively, increase whichever it was, GDP or GNP. So it was a sort of linear-ish graph that then went up a bit; not hugely, but materially. Then they had a case where AGI replaces all human activity and makes us all like characters in Iain M. Banks' Culture novels, where we all just live lives of leisure and happiness and do sport and float around and things. And that graph went literally exponential and then vertical. And then the last graph, which was the one I least liked, because there were certain attractions to being able to pursue a life of happiness and leisure and let the computers do all the hard work, the third graph was: AI will kill the whole of human civilization. It just dropped vertically to zero. And I'm thinking, that's not a good prediction. We don't like that prediction.

Dan:

I mean, leaving aside the AI-is-going-to-kill-us-all narratives, there's the line people use: you see the internet everywhere but in the productivity statistics, or you see the internet everywhere but in GDP, and people are now saying the same about AI. I think part of that is that the idea there's going to be some massive spike is probably naive. It won't work like that. In our case already, we can apply AI, but we're constrained by the system that we work within. You can't just let an agent loose on the small claims court and expect instantaneous change. So people say it's like a series of sigmoid functions: you're going to see a steady increase rather than one giant exponential climb, as much as that would be fun.

Philip:

I think I might have to ask you just to unpack for the listeners what a sigmoid function is. I can give the mathematical definition, but I'm not sure that helps. Yeah, don't do that. No, no, yeah.

Dan:

If you could do it in plain language. It's basically two exponential curves joined together: you have one exponential initially climbing, and then that meets an exponential decay as the curve flattens towards its limit. Typically it runs between zero and one, but you could obviously scale the y-axis however you want. And I think people often make the point that in any real system you can have exponential growth up to a point, but then the same exponential slowdown sets in. Everyone probably became familiar with that to some extent during COVID, because you saw sigmoid functions in a lot of the statistics.
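For the curious, the standard logistic sigmoid Dan is describing can be sketched in a few lines of Python (an illustrative example added for this write-up, with made-up sample points):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic sigmoid: climbs roughly exponentially at first,
    then flattens out as it approaches its ceiling of 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Far left it hugs 0, at the midpoint it is exactly 0.5,
# and far right it saturates just below 1.
print(round(sigmoid(-6), 4))  # 0.0025
print(sigmoid(0))             # 0.5
print(round(sigmoid(6), 4))   # 0.9975
```

Plot those points and you get exactly the S-shape familiar from the COVID case-count charts Dan mentions: fast growth in the middle, flat at both ends.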

Philip:

That is a good explanation, because everyone listening can picture that, I think. One thing you just said that I think is worth elaborating on a little, Dan, is that although AI can very much help productivity and improve things, it is constrained by the reality in which it exists. And I think what you were meaning to say was that, at least as far as small claims are concerned, the court system has its own processes, but also its own challenges. And inevitably, although the MoJ and HM Courts and Tribunals Service are looking very hard at how they can use AI and other technologies, including digitization generally, to improve these systems, there's inevitably a time lag there. You've got to build the things, test them before they go live, deploy them, and then see all of that work. So I think what you're saying is that AI is helpful, but it's not the complete answer, because there are other parts of the system that would also have to improve.

Dan:

Yes, I think there are two different answers to that. There's the high-level narrative, where the courts, for example, are delivering justice. These are key, core principles. If you built a court system that happened in milliseconds, I think a lot of people would complain that they don't feel their case has been heard and listened to. Even if it was the same outcome, I think everyone wants to see someone deliberating over their situation and giving it some thought. So you have those fundamental issues, and it's the same with probably any system. If you just removed all radiologists and had your scans interpreted by a machine in 0.1 seconds, I think it would take a while for us to get used to the fact that a mistake hasn't been made and that the scan has actually been properly considered. So there are those high-level points. But at a practical level, one point that we often talk about, and I know you've been talking with the courts service about this too, is that we need really simple APIs with which to navigate the court system. My high-level view is that what the government, the civil service, and the courts should really focus on is building really simple RESTful APIs that systems can integrate with and send requests to, and then leave the private sector to compete, and to drive margins down to basically zero, over the front ends and UIs for those systems. In the private sector there's a strong incentive to build a system that users really like using, build habits around, and find useful, so you have those dynamics there. Whereas it's harder, I think, in the public sector to build really effective UIs, because you don't have that same feedback of competing against the next person and trying to make your system as simple and as clear to use as possible. That's where I'd draw the line if I were working in policy.
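To make Dan's "really simple RESTful API" idea concrete, here is a hypothetical client-side sketch of filing a claim. Everything in it, the URL, the endpoint path, and the field names, is an assumption made up for illustration; no real HMCTS or government API is being described:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ClaimFiling:
    """Minimal data a filing request might carry (illustrative fields only)."""
    claimant: str
    defendant: str
    amount_pence: int   # money as integer pence avoids float rounding issues
    particulars: str

def to_request(filing: ClaimFiling) -> tuple[str, str]:
    """Build the (URL, JSON body) pair a client would POST to the court API."""
    url = "https://api.example-courts.gov.uk/v1/claims"  # made-up endpoint
    return url, json.dumps(asdict(filing))

url, body = to_request(
    ClaimFiling("Acme Ltd", "Widget Co", 250_000, "Unpaid invoice")
)
```

The point of the sketch is the division of labour Dan describes: the public side exposes a stable, boring interface like this, and competing private front ends handle everything the user actually sees.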

Philip:

Yeah, I agree with you. I think that if you're a policymaker, thinking about how you can take advantage of the market dynamic, and of competition between companies and people who are trying to build the best possible product in the best possible way to help people, is a really good approach. What do you think has been the most satisfying part of working on Garfield over the last two and a half years?

Dan:

For me, it's definitely when we launched and started bringing on users. You knew that people were going to find this useful and need it, because you'd seen there was an unmet need; as a non-lawyer, I hadn't seen that. And it took us a long time to get to the point where we were regulated and authorized, so I was worrying that we were going to launch and no one would be interested, no one would use it. So actually seeing it being used by people, and being useful to them. We've got some clients who give us a nice boost every now and again when they say, yeah, we use your system and we're just going to use it entirely now, it was so much faster, so much less hassle, we probably would just have written these things off. That's great to see. It's cool to be building a product that is being used and appreciated.

Philip:

Yeah, I agree. I think it's probably the most satisfying thing about this project for me as well. And I've taken real pleasure from the fact that some of the people who have used us have told us that it's not just the money they've collected, which they otherwise would have lost; they've been able to do good things with that money for their business. They've been able to expand, take on new employees, and things like that. That, to me, is a counterbalance to the whole "AI is horrible because it's going to cause everyone to lose their jobs" narrative. We built something that is actually helping to create new jobs amongst our users, which I find very satisfying, I have to say. On that note about building something novel, one thing that has amused me enormously over the last two and a half years: you remember when you and I made an application for a grant some time ago? Yeah. Yes, it was a government grant. And I won't, you know, we're both grinning at each other now because you know what I'm about to say. But we applied for it and explained what we were building and what Garfield would be. There were three assessors. One assessor got it entirely and said, yes, this is revolutionary, it should get the grant, no doubt about it. Another assessor said there's nothing unique or revolutionary or novel about this whatsoever. And I've been wondering what that assessor, whoever they are, has been making of all the publicity we've been getting all over the world.

Dan:

I think I can understand to some extent where they were coming from, because unless you are experienced with the legal profession, unless you have seen the small claims court, you probably don't realise quite how painful it is to use, how slow it is, and how there isn't anything like this out there. And also where this could go, and the fact that if this works here, then there are so many other high-volume, low-risk applications. So I have some sympathy, but yeah, it is also funny thinking of that perspective.

Philip:

I just wonder if they face-palmed when they read some of the thinking on what Garfield is from the likes of the senior judiciary, the government, and the profession. But yeah, I also agree with you. I have some sympathy for them, because maybe it's on me, maybe I didn't draft the grant application very well. That's probably the real explanation for it all.

Dan:

I'm not so sure; it was a good application at the time. It's a shame, anyway.

Philip:

Yeah. And on that mutual backslapping note, I think we'll draw this podcast to a close. So it just remains for me to thank Dan for allowing himself to be distracted from being deep in the code base for the last 40 minutes or so to chat with me about what we've done so far and what the future of law might hold. That was a pleasure. Thanks for having me on.