AI Made Simple

Dr. Benedikt Flöter on Turning AI Regulation Into a Business Advantage

Valeriya Pilkevich Season 1 Episode 6



The EU AI Act is live, but most companies are still treating it like another GDPR. It's not. It's a fundamentally different kind of regulation, one that's tied to how your business operates, not just how it stores data. 

In this episode, I'm joined by Dr. Benedikt Flöter - Partner at YPOG and head of their AI and Emerging Technologies practice - who advises leadership teams on building legally compliant, scalable AI governance that actually drives business value. 

We discuss: 

  • Why two-thirds of companies overestimate their AI risk classification 
  • The governance foundations every organization needs before scaling AI
  • How a recent German court ruling means your AI-generated content may not be copyright protected 
  • Why AI literacy is now a legal requirement and what "sufficient" actually looks like 

Check your company’s AI risk category and compliance obligations here: https://ai-act-service-desk.ec.europa.eu/en/eu-ai-act-compliance-checker 

Connect with Dr. Benedikt Flöter:
LinkedIn: https://www.linkedin.com/in/benedikt-floeter/ 
Website: https://www.ypog.law 

Connect with Valeriya:
LinkedIn: https://www.linkedin.com/in/valeriya-pilkevich
YouTube: https://www.youtube.com/@aimadesimpletalks
Podcast: https://aimadesimple.buzzsprout.com

Need help building AI capability in your organization? Book a call. 

Valeriya Pilkevich (00:00)
Does the EU Artificial Intelligence Act apply to your company? And if you're just using AI vendors, who carries the compliance risk? You or them? Welcome to AI Made Simple, the transformation series. I'm Valeriya Pilkevich, and I talk with global leaders, innovators, and practitioners who are shaping the future of work in the age of AI. In this episode, I'm joined by Dr. Benedikt Flöter, partner at YPOG, one of Germany's leading tech law firms,

where he heads the AI and Emerging Technologies practice. We talk about where most companies get their risk classification wrong, what deployers of external AI systems are actually liable for, why AI-generated content may not be copyright protected, and how governance can shift from a compliance burden into a real competitive advantage.

Valeriya Pilkevich (00:49)
Benedikt, thanks for being on this podcast. It's a highly anticipated topic and I'm absolutely excited to have a talk with you.

Benedikt (00:56)
Thanks for having me, Valeriya.

Valeriya Pilkevich (00:57)
Benedikt, for business leaders who keep hearing about the EU AI Act and aren't sure if they're doing the right thing, or even whether it actually affects them, what's the single biggest misconception you see about what this regulation requires?

Benedikt (01:10)
Yeah, I think actually as the starting point, you should think about AI not as just buying another IT service, like you change from, I don't know, Office to OpenOffice, which is just an IT procurement. Here we are talking about an AI transformation. So this is actually an update of the operating model of your company. And AI is totally different from IT because it really changes how people interact with software, how people work, how actually

workflows are operated. And this is a really exciting time. And people should actually understand that the EU AI Act is therefore not really comparable to the GDPR that was an assortment of data privacy obligations. Here, actually, we have a risk classification depending on the actual use of AI. So you really have to look into what you want to use AI for. If this actually poses a strategic risk for your business,

And the risk always starts with how you use it and who uses it. And thereby, you really dig into how it affects your workforce, how your workforce will operate it. And then you have this risk classification in the EU AI Act. And depending on this risk classification, you have to meet compliance obligations. But it all starts with: let's try AI first and see what risks actually happen later. And this can then be covered by a good governance framework that you set up.

Valeriya Pilkevich (02:32)
You started talking about those different risk categories. For the audience: there are four risk categories right now. We differentiate between AI systems that are prohibited, high risk, limited risk, or minimal risk.

Benedikt, in your experience working with companies adopting AI across the organization, where do most companies misunderstand where they land? And are organizations over- or underestimating their risk exposure?

Benedikt (02:56)
Yeah, so like three years ago, when the discussions came up, there were different kinds of surveys in the market that asked companies how they would qualify their own AI products or their AI usage. And two-thirds of the AI was understood to be high-risk AI, or at least something that needs to be evaluated against the high-risk requirements. But if we really look into the categories, obviously there are high-risk AI use cases, like

using AI in HR when you want to screen CVs, whether people are actually eligible for promotions or even for hiring processes. If you use it in fintech, so: is someone actually eligible for a loan, or what kind of interest rate should they be paying? Or if you use it for surveillance of the workforce, so: have people actually worked diligently on a routine basis? So this is kind of surveillance practices.

And then obviously infrastructure, like all safety regulations in traffic, in public transport, for example. I mean, this all makes sense when AI immediately applies to a human being and the result actually affects a human being. But then there are also exceptions to that. So if you have an AI that just prepares a decision and there's a human in the loop that reviews that decision, then it's not high risk anymore. It's just preparatory work.

So if you have an AI tool that sifts through CVs, just checks whether people are eligible, and then makes a suggestion that is not a binding final decision, then this is not high risk anymore. So you can actually fall out of these categories again. And what I understand is that most companies assume that AI is always high risk to some extent. So if you have a chatbot that gives answers to your customers on, I don't know,

special fees or rates or information on a product: this is not high risk. This is just a chatbot. If you have some internal processes, they are typically not high risk either, if they just disclose business information. So you really have to look into what you are doing. It just gets so complicated in the high-risk categories. But then it's really not just about a product they use internally. It's more about the product that your company builds and wants to sell to customers.

And then it's like a strategic risk that you have in your business. But this is something that you want to operate, because mainly there's the most to gain if you go into these pockets where technology really serves a high-risk category, because this is where the value actually is. And then AI actually becomes a promoter of business opportunities. And there, governance is necessary simply as part of the business setup.

Valeriya Pilkevich (05:25)
And this is where it gets real for most companies. They're not building their own AI systems. They are deployers. They buy tools from vendors for hiring, customer service, lending decisions. But as a deployer, I don't know how the vendor's AI system makes its decisions. So Benedikt, what are these companies actually liable for and what must they have in place?

Benedikt (05:48)
Yeah. To take one step back: all these high-risk categories come into force in August 2026 and August 2027. So there's a bit of time to prepare, half a year, more or less. What we see right now is that you have these big tech models that you can use, like Microsoft Copilot, for example, which is ingrained in most of the Office applications by now. Then you obviously have the OpenAI

offerings, and we have Claude Cowork, for example. And it's all getting very differentiated already. But if you have specialized use cases, like you get a tool for reviewing your CVs, or you get a tool for checking in a bank whether someone is eligible for a loan, then you need to understand that you are not purchasing a fully compliant product; you as the deployer of the product are still liable for

the regulations under the AI Act. So you must ensure that you can monitor the product, that you have a human in the loop review, that you actually have measures to avoid biases, avoid hallucinations, avoid model drifts. And all this information is actually what you need to get from the provider. And there are clauses in the AI Act that actually ensure that the providers provide this information to the deployers. And this is actually where we understand

that the EU AI Act actually pushes obligations into the distribution lines. So you can actually request that the deployer is provided with this information. Yes, that can become complicated in the details, but we see in the preparation of the commercial agreements that we advise on that you can actually ask for this information. And the model providers or the service providers start providing it, because it's part of the business. And there you understand again that

AI is not just one IT service. It is really affecting how your company works and how your operating model is turned to AI.

Valeriya Pilkevich (07:38)
I want to also talk about the other point you mentioned. So let's say a company developed their own AI system for a specific use case. And you mentioned that as soon as there is a human in the loop, as soon as there is a person who makes the decision, who decides at the end, for example, which candidate gets chosen, it falls out of the EU AI Act's high-risk category. But in this case, let's take

again this HR screening process: the human would still rely on the AI system. So isn't it getting a bit more complicated then? At the end, say, if humans over-relied on the AI system, or trusted an AI system which was biased, who takes the responsibility at the end of the day?

Benedikt (08:20)
Yeah, this is exactly the point where you need to build governance structures in your company: how an AI system is actually reviewed and monitored. And this human in the loop then needs to be in a kind of reporting line, where there is some supervisor who ensures that everyone is using the AI system in the way it is expected to be used. And this then basically becomes an obligation within your employment relationship

Valeriya Pilkevich (08:25)
Mm-hmm.

Benedikt (08:46)
in your company, because you will install a person to supervise an AI system. And then this is an employment obligation. And there you see again that you actually need to build an entire framework around the AI usage. And this is a reporting issue, basically: someone will have to supervise the supervisor. And again, at the end of the day, it's a board-level decision. And we need someone at C-level who ensures that AI is not just used, but also supervised.

And that person at the top, unfortunately, will be liable at the end of the day to the shareholders or to the board meeting. That's where we actually need to plug it in. And that's also how we see AI transformation actually working. To be frank, this is also what we have been doing here in our law firm in the last years. You really need to ensure that AI transformation is plugged in at the top level of the decision makers. Otherwise,

it doesn't work. And from there, it needs to trickle down through all levels. And then you can actually ensure how AI is used and supervised. And someone will need to be accountable at the top. That is also the person driving the change.

Valeriya Pilkevich (09:51)
You started talking about governance. If you were advising a transformation manager who's been tasked with rolling out AI across the whole organization, what governance foundations do they need to have in place before they even start? I know you already mentioned a couple, but maybe to lay it out very practically: what are the top three or top five things, if there is a very simple framework, that they should necessarily have in place?

Benedikt (10:16)
I would always start with a transparency assessment. So just ensure that you know what AI systems you are using. And there you already get all the trouble with the shadow AI that employees are quite often using; that's at least what we see. So you must make sure that you actually have transparency on that. Then you do a risk mapping, so that you actually see which use cases are high risk and which are limited risk. And there you might already get a better feeling of how exposed you are.

Valeriya Pilkevich (10:20)
Mm.

Mm-hmm.

Benedikt (10:42)
As a company as a whole, not just to the regulation, but to the actual risks involved with AI. And then third, you need to ensure, again, as I said, accountability of individuals. So someone needs to put on the hat for this transformation and also for the liability. And this shouldn't just be the transformation manager; really find a person at top C-level who is responsible for the transformation and can actually build up the structures. And then

the next round would be: is AI not just within our company, for internal use cases and jobs to be done, but also part of the product? And when it becomes part of the product, it gets more strategic, because there is obviously more value creation happening with AI, but that also comes with more risk exposure: intellectual property, data privacy breaches, and so on.

Valeriya Pilkevich (11:32)
Just to repeat for the audience: first, assessment, trying to assess what all the AI systems are that you have. Then second, risk mapping: under which category does each of these fall? And third, accountability: who is accountable, who is the business owner? I think that makes it a lot simpler to keep in your head.

Although I find that right now many are talking about agents, AI agents in organizations, and having specific governance to govern these agents. And I feel like it's going to get way more complex with time, because the agents self-improve as well, and you can never really draw the line. Maybe today it's

Benedikt (12:05)
Mm.

Valeriya Pilkevich (12:08)
a limited risk, and tomorrow it's already high risk.

Benedikt (12:10)
I like this. I like this comparison. But indeed, like two years ago, not really, to be honest; it's still like this today. People are using AI: this is level one. And then level two is people using or supervising agents. And agents have predefined tasks. And then there's the third level, what you say: the agents kind of become the operating model. And they start developing their own skills,

with Claude Cowork and so on, and actually really assume tasks of their own that they haven't been assigned before. But then it really becomes chaotic. Then you really need to think differently about agents. It's not just that they're doing jobs, like people are doing jobs; they are taking on new jobs. And this is like a land grab. When you have a company and there's a new task, everyone jumps on it and wants to take over the task. And we see agents doing the same.

I strongly advocate for introducing limitations for the agents. Otherwise, you really lose control, and you wake up a week later and they have built an entirely new system. But indeed, what we have been seeing in the last year is that you have individual agents making more or less autonomous decisions, and then you introduce service levels or reliability requirements or some limitations on the decision-making processes.

But this is all assigned to one task. And I strongly advocate that you keep on doing this kind of limitations. Otherwise, you really lose control.

Valeriya Pilkevich (13:32)
I want to talk to you also about a topic that is very close to my heart, because I work with companies on AI adoption and AI literacy trainings. So there is an article in the EU AI Act that makes AI literacy a legal requirement in the European Union, already enforceable from the beginning of 2025. So it's been a year. And for the head of L&D, for example, or a CEO who is listening to this episode,

how can they ensure sufficient AI literacy? Because you already mentioned shadow AI. It used to be shadow IT; now it's shadow AI, basically when employees use tools not authorized by their company, like a personal ChatGPT account, and upload data there.

So what does this "sufficient" look like in practice, and how much training is too much?

Benedikt (14:15)
Yeah.

Yeah, this "sufficient" as well. It's such lawyer language, like "reasonable": someone has to decide later on how we want to fill that up with any meaning. So our guidance is: you always need to take into account what kind of task AI is being used for and who actually does it. So you always need to look at the specific use case. And therein you have a risk and experience map, and then also a different level of literacy and training that you need.

Valeriya Pilkevich (14:24)
Yeah.

Benedikt (14:46)
So someone just executing, say, I don't know, using Google Maps. Google Maps is actually AI. So if you use Google Maps, you don't need literacy training. You just need to know that it's not always safe to follow Google Maps; sometimes you might just drive down a road somewhere in Italy because it misses that the road ends. But this is just natural knowledge. And if you then use it for decision making, again in HR teams, for example, you definitely need training that

an AI might be biased. It might actually sort out candidates because they just don't fall into a certain bucket, for whatever reason. And this is obviously unjust. And you need to understand that an AI is not perfect. And then the next level: if you are at C-level, if you are really calling the shots in the company, you really need to understand how AI works. Otherwise, you can't understand how AI is transforming your business, how it is transforming your product, and what new product options you have. And there you see again that

AI literacy in the AI Act is not just about regulating and forcing people to comply with a regulation. It is more like nudging people to understand the upside of AI. Because if you need AI literacy training at C-level, and all the, say, more senior management actually needs to sit down and get exposed to AI and the options of AI, then suddenly AI literacy turns from being just a chore and a

compliance obligation into an opportunity. And that's really what I want to advocate here: that AI literacy training can be seen as a big business advantage, if everyone has to go back to school, so to say, and understand what options we suddenly have. It's crazy. That's actually what I see very often in our business, that senior management doesn't know AI. They just read about it in the newspaper, in the printed newspaper. So sometimes it's actually quite useful to really go back and get some AI literacy training.

for that level, so that they really understand what great business options there are.

Valeriya Pilkevich (16:38)
I love that you said it's not a one-size-fits-all approach, but rather it should be tailored to different groups: AI literacy foundation training for everybody in the company, so everybody's aware of the limitations, but also the potential. What I'm also advocating for is data literacy training, or that people understand what generative AI is and what agentic AI is, because even in Microsoft products

Benedikt (16:57)
Yeah.

Valeriya Pilkevich (17:02)
you can now use agents. So people should understand that agents can actually go and delete something in my Excel table. So I think there should be a broader level, but also, as you mentioned, very function-specific training: depending on which AI system HR users, or marketing users, or any other department use, there is a specific training for them. But there is also training for leadership. And leaders,

for me, can sit in the same trainings with their employees and build something and try and prompt, or there can be a separate training. But I completely agree with you that they should actually do some hands-on things, experimenting and understanding as well. You've been publicly vocal that you are not fully convinced that the EU AI Act is the right approach for keeping Europe competitive.

So how should companies think about this? Regulation as a constraint versus regulation as a competitive moat?

Benedikt (17:51)
Yeah, it's really such a hot topic. Perhaps to break it down: how innovation happened and how we got to where we stand nowadays is not just about regulation. What I see as quite a big issue as well is obviously venture capital money. It's about financing markets and about distribution. So we have a lot of clients that go to the

investors and say they want to build AI, but it's not like in the US, where you suddenly get billions of euros to invest in AI. And that's actually how OpenAI and all these other companies got so big: they got a lot, a lot of venture capital money. Second, it's really hard to sell AI to corporate customers in Germany. They are quite reluctant to buy AI because they don't know how it works. And in the US, there's a big market for AI because everyone is so open to change and really embracing any new technological revolution.

And then third, there is another set of framework and regulation around. But I need to revise my opinion on the EU AI Act a little bit. Over the winter, we have been writing a paper where we compared regulation in the US to regulation in Europe, based on HR tools and fintech tools. So we went through different sets of regulation. We understood that

both industries are subject to AI regulation in Europe and in the US. And to sum that up, we found out that in the US, you have like 40 different regulations for fintechs and HR techs, depending on the individual member state you are in. If you are in Delaware, there's nothing. If you are in California, there's a lot. Minnesota has a middle ground. New York has a middle ground. And Texas is, again, without any regulation. Just as examples. And in the EU,

you have, in the best case, just one EU AI Act that applies all over the union. And then you suddenly have a single market with 450 million inhabitants that has just one set of rules, instead of a fragmented market like in the US. So I have to walk back a little what I was saying before. When we went deeper into the issue, it's not as easy in the US as you'd expect, actually. But indeed,

we see that the EU AI Act and other regulations are quite cumbersome, even more so the GDPR that is already in place. But I think we are at least pointing in the right direction to make it more easily applicable. And when we go deeper into it, we actually see that the EU AI Act is not enforced as strictly as the GDPR was. And it can still be handled by small and medium enterprises with comparatively low

governance expenses. And also, it is something where we see, again, the educative effect of the AI Act: it increases transparency on your toolset, it increases transparency on the data that you use, and you can actually disclose to your customers what AI you're using, what data it has been fed with, and how it actually works. That is very interesting for your customers as well, and it actually builds trust. So I think:

regulation, yes, but it is also about increasing business chances, because you can actually develop a better product that is more transparent and therefore more trustworthy for your customers. And that is something that people have been saying since the beginning of the EU AI Act. And by now, two years later, we can actually tell that this business rationale has become more and more ingrained, because you can look under the bonnet, so to say, and see how it actually works.

And this is also then a product decision and a commercial decision for the distribution of a product, if you can really explain to your customers how it works.

Valeriya Pilkevich (21:25)
So more like AI made in Europe, right?

Benedikt (21:28)
Exactly. And even if we don't win the race on the foundation model level, because we just don't have enough compute (and to be honest, we don't even have enough energy to supply all that compute), the thing is, it's more of a typical public goods situation. If you have the model trained already, it's open sourced, and everyone more or less can use it. And getting the compute for the processing afterwards is comparatively cheap.

And as we see in China, they basically just distill models that they take from the US; they have done that quite often. So they can just piggyback on all the expenditures that were made in the US. And this is more or less how we can create value here as well: you take the technology that has been developed and really put it into operative practice. And that is, I think, where most value creation can happen here. If we really understand how AI can actually improve the quality of services, it can make

production lines much cheaper, sort out failures in production. We see a lot of really amazing companies coming out of our technical universities that have great ideas for how to just make things better using AI. This is really impressive. There's a lot of value creation happening on this implementation level, not on the foundation level. So the race is not over at all.

Valeriya Pilkevich (22:41)
Benedikt, another thing that I encounter, or one of the questions that companies have when we talk about generative AI, is about the tools that they use. So let's say the marketing team is using tools to create content. Developers use it to create code. Product teams use it to create designs, and so on.

So who actually owns this output? You even mentioned in one of your talks that, quote, AI is the end of intellectual property. What are the real risks that businesses aren't thinking about when using this AI-generated content?

Benedikt (23:10)
Yeah, thanks. That was quite a provocative title back then. But I found it really astonishing how easily you can create content by now that is afterwards not copyright protected. Because under European, and also under US or English copyright law, you need to have a human creator, so that human intellect flows into the creation. And this is the reason for protection.

It's called a moral right of the author: the author's personality is in the object, and therefore it's protected. But what copyright protects is only the expression of the specific object, not the idea. So if you have the idea to paint a mountain scenery with a big deer in it, this idea is not protected. But the very specific expression of that idea, that can be protected.

So this is what you need to understand when you look at copyright and AI. Because AI has made expression so easy. You can just type a prompt, "paint me a mountain scenery with a deer", and then you get tons of it. And all this expression is not protected. And the idea is not protected by copyright either. So both are not protected anymore. And we just had a decision handed down by the Regional Court of Munich last week, which decided that

the design in a trademark, from a graphic designer, was not protected because it had been created by AI. So the designer did not acquire copyright in the design because it was AI created. This is actually the first German case we had; it was just last week. Before that, we had a lot of cases in the US, because in the US you need to register a copyright to get it

protected. This is the little C that you sometimes see next to a painting or whatever. And the Copyright Office in the US then really looks at the registration and wants to understand how the work was created. And the first cases that we had in the US were individuals who disclosed that they had been using Midjourney for creating the painting or the graphics. And then it was not protected because there was no human creation

in the graphics. And the same we now have in Germany. So it's kind of established case law by now: you need a human creator and the expression of the individual in the output for it to really be protected. And then you go back to the AI and agentic models, where you have, for example, a big media agency that creates a lot of output using AI. And all this output is presumably not protected by copyright if it is

just done by AI and there's not enough input of the individual into the output. So you really need to start tracking how you developed the output: which prompts you used, how you created sketches, how you directed the AI to really create what you wanted it to create, in order to then acquire copyright protection in the output. So that will really, again, change the entire business process in a company, if you suddenly need to document how

an AI output was created. That's a totally new process, but you need it to acquire copyright protection. Otherwise, your competitors can just take what you develop and walk away with it. Same with software development, actually. This is why everyone is now so crazy about Claude Cowork and Claude Code: because you can copy and recreate software code so easily that the competitive moat is gone. And then the competitive moat provided by copyright is gone as well.

And so competition gets much higher. Yeah, so this is one principle that is super important to know if you create content using AI. And the second principle that is very important as well: if you want to develop AI, so you want to develop a model based on your own data, you need to understand that you generally need the consent of the individual that owns this data or owns the copyrighted content,

Valeriya Pilkevich (26:56)
Mm-hmm.

Benedikt (27:04)
unless the so-called text and data mining limitation applies. And we have this kind of limitation here in Europe that says you can use copyrighted content for training an AI model unless the author of that content has prohibited the training. So this is kind of a reversal of the principle: you have an exception to copyright if you want to use it for AI training. The idea was

that we want to foster AI training in Europe. So therefore, we opened up copyright protection for that kind of use. And that's what was decided last year at the Regional Court of Hamburg. And next year, there will be a decision from the Higher Regional Court of Hamburg. So this is the second guiding principle: generally, you can use copyright-protected content if the text and data mining exception applies.

And then you can, for example, take the customer content that you have been producing in the past. Take the example of a marketing agency again: you have been doing campaigns for, say, a large OEM. Then you use all this content and create the next campaign based on that content again. This would be allowed because it might be considered text and data mining.

Valeriya Pilkevich (28:17)
Yeah, it's very good foot for thought. So basically, if the teams are using AI to create content, they have to make sure that they're documenting the steps and the process in case they have to protect their work so that they can actually show how many prompts they have written and how many days or hours they have spent there. And in case the company wants to actually train the model based on data, they are allowed to do it.

as I understood, unless the customer opts out, unless they explicitly say that we don't want you to use our data. Very important insights. Benedikt, looking ahead, AI regulation is still evolving in Europe and internationally. What's one thing that business leaders should start preparing for now?

Benedikt (28:45)
Exactly.

Really understanding that it's not just about regulation, it's more about creating business opportunities based on a structured approach to your own technology and to your own product. So understand that it's compliance, yes, it's regulation, yes, but it's also transparency. It is opportunities. It is actually selling points to your customers. And it's actually a way to educate your workforce as well.

As I said before, it's really an operating model shift from a standard IT setup to an AI setup, and take it as an opportunity to create transparency about the tools that you have been using. Don't just think about what AI use cases do we have and is this high risk? It is better to see it the other way: let's see where we can use AI. And if it's a high-risk use case, we can handle that as well. Or say, we want to develop an AI product.

Our customers will ask us these questions because they have a right to know due to the AI Act. Let's be proactive and already provide this information in our sales campaigns or in our marketing campaigns. Say, hey, we will sell you a product and we disclose how it works. This is super attractive for a customer. And it might actually make you stand apart from your competitors that are reluctant to disclose this information.

Valeriya Pilkevich (30:19)
Thank you. I have two more fireside questions for you. What is one AI tool that changed, for you personally, how you work as a lawyer?

Benedikt (30:28)
How I work as a lawyer. I mean, to be honest, it's Google. I mean, when did we start using Google? And Google was already AI. And what I found super amazing was when Google Book Search came up. I was doing my PhD at that time. And suddenly, you could look into all these documents from basically all over the world. And even if you didn't get the full text, you at least got snippets of the information.

You knew where you needed to go and look it up in your library, because it would have taken you ages to look that up manually, and you got it with Google Books. This is AI. It's machine learning and everything behind it. So that really changed my way of accessing information. But then second, obviously, the ChatGPT moment was amazing for us here as well. We knew that something was coming, but it was not accessible. And then suddenly you could actually get information prepared in the format and in the scope that you actually wanted.

That was really amazing also for creating content. It was a change. And now the agentic tools, where you can really say, Claude, Cowork, we have been experimenting, where you can really throw it all in there. And this is our own ecosystem. These are our own skills that we can develop. We are not depending on a legal tech provider; you have quite a few of them out there. We can actually recreate all of that

within our own ecosystem. And our IP stays in there, and we create our own skills. We don't use template skills. This is developing our service provision quite a lot.

Valeriya Pilkevich (31:56)
Wow, you have a very innovative law firm. So for the listeners, if you're thinking about legal advice, look, they're already using agents and Cowork, so chapeau. And second question: if AI could automate one part of your job tomorrow, what would you happily hand over?

Benedikt (32:13)
That's funny. I love my job. I love it. But obviously, I don't love all of it. I would really like to have a well-trained agent that can actually automatically start replying to my emails. It's just that you have such a flood of email, I don't know, like 500 a day. And I always feel bad if I don't answer in time. And I want to get rid of this bad conscience that I haven't answered these emails. That would be great. But unfortunately, we don't have that yet.

Valeriya Pilkevich (32:38)
Yeah, like a legal AI assistant

trained on your data, learning from your personal replies and personal style. So that's a great startup idea for some of the listeners.

Benedikt (32:40)
Yeah.

Yeah, exactly. And I mean, you have this in Copilot. I mean, you can do that in Cowork as well. But we don't use that now. I still need human intervention. I mean, we can kind of draft an email using Copilot, and then it's transformed into an answer. So it's already partly there, but a fully automated AI email response would be nice.

Valeriya Pilkevich (33:07)
Thank you, Benedikt. That was a lot of insights, a lot of valuable details for the audience.

Valeriya Pilkevich (33:12)
You can find Dr. Benedikt Flöter on LinkedIn and learn more about YPOG at YPOG.law. All links are in the show notes. If you found this episode useful, follow AI Made Simple, the transformation series for more conversations with practitioners and leaders shaping how AI is actually adopted inside organizations. Thanks for listening.