The Entropy Podcast

AI Development: Challenges and Solutions with Manuel Tomas

Francis Gorman Season 2 Episode 6


In this episode, Francis Gorman speaks with Manuel Tomas, a cloud and AI solutions architect, about the challenges and realities of developing AI systems. They discuss the importance of making AI production-ready, the limitations and hype surrounding AI technologies, and the critical need for human oversight in AI applications. Manuel shares insights from his recent projects, emphasizing the necessity of evaluating AI outcomes and the implications of indemnification and insurance in the AI landscape. The conversation also touches on the risks of over-reliance on AI and the future security challenges that may arise as AI technologies evolve.

Takeaways

  • Manuel emphasizes the importance of making AI systems production-ready and accountable.
  • He highlights the need for evaluation-driven development in AI projects.
  • The unpredictability of AI outcomes necessitates a rigorous evaluation process.
  • Many AI frameworks have similar capabilities, contrary to initial expectations.
  • The hype surrounding AI often oversells its capabilities and leads to misconceptions.
  • Over-reliance on AI can result in a loss of critical thinking and questioning.
  • Insurance companies are beginning to exclude AI from coverage due to liability concerns.
  • Human oversight is essential in high-stakes AI applications to mitigate risks.
  • Organizations should prioritize understanding technology before developing strategies.
  • The future of AI security will involve managing complex multi-agent systems.

Sound Bites

  • "The hype around AI is oversold."
  • "You can't automate human judgment."
  • "There's no growth in comfort."

Contact Manuel:

LinkedIn: https://www.linkedin.com/in/manuel-tomas-estarlich

Website: https://levelup360.pro/


Francis Gorman (00:37.497)
Hi everyone, welcome to the Entropy Podcast. I'm your host, Francis Gorman. Before we dive in, if today's conversation challenges you, sparks a new idea, or sharpens how you think about the world, don't keep it to yourself. Subscribe, leave a review, and share this episode with someone who enjoys staying curious. Today I'm joined by Manuel Tomas Estarlich, a cloud and AI solutions architect focused on making AI systems production-ready,

auditable, accountable, and able to survive regulatory and security scrutiny. He works at the intersection of strategy and implementation, ensuring that what gets designed can withstand contact with reality. Manuel, it's lovely to have you here with me today. Manuel, I was looking forward to this conversation for a number of reasons. One, we know each other, so it's going to be fun from that perspective. But two,

Manuel (01:14.107)
to be here. Thanks for having me on.

Francis Gorman (01:28.562)
You've spent the last couple of months at home, in your home lab, working problems that we're all faced with on a day-to-day basis but may not get enough time to think about. And I know you've really got into the weeds of some of those problems around artificial intelligence, large language models, indemnity in terms of insurance, et cetera. And that's why I really want to talk to you, because I think you've taken the headspace

not just to understand the problem, but to work the problem in a way that can give material value. And I think that's really important for everyone in the industry, because we don't always take the time to work things through to the point where we can see the value proposition in them. So I'd like to start by maybe asking you a little bit about, you know, what have you been working on in the last couple of months, and what are the key themes that have come to life for you?

Manuel (02:23.381)
So it's been three months since I left my job. I actually left my job, as you know, to take the time to get into this properly. I wanted to explore different agent frameworks, and what started as an exploration, more than an exploration, maybe a systematic assessment of different frameworks,

turned into a full-on project where I decided to build an agent solution: enterprise-grade, bank-like, production-like.

What I've discovered is it's not rocket science. The same principles that apply to any other enterprise system apply to AI. The key difference for me is that, because it's unpredictable,

you cannot predict with accuracy what the outcome is going to be. You need to evaluate, and base all of the decisions on real data. So I took an approach, evaluation-driven development, where I didn't make any assumptions. Everything had to be evaluated, and

I only made decisions once I could make them properly informed. That's probably one of my biggest takeaways: the difference between predictable, more deterministic software and AI.

Francis Gorman (04:21.174)
And when you started to explore these topics, so, as you said, you left your job and really gave yourself the time to get into the weeds, and you talk about first principles, et cetera. What surprised you when you started to measure your outcomes against what your expectations were versus what you were seeing in real life?

Manuel (04:46.49)
Well, to be honest, I was expecting different frameworks to provide different capabilities. And yeah, they have minor differences, but in essence they're all more or less the same. The other thing is, I expected it to be more challenging. I truly did. I didn't think I was going to be able

to build this from scratch. I'm an architect. I don't code regularly. And when I started having to build an application from scratch, with a technology I had experience with but had never built with on my own,

I expected it to be a lot more complicated than it is. I think people

and companies overcomplicate things too much. And because I come from an enterprise

platform, cloud platform role, all of the most important things, all of the same principles that apply to any other technology, stay the same. So I only had to focus on

Manuel (06:17.24)
really AI by itself. One key learning: I built an evaluator, an AI evaluator, to evaluate the outcomes of the system. And after a while, what I discovered is that the evaluator was evaluating against all the wrong assumptions. So one key thing that I learned is you need to evaluate

the evaluator. You never assume that because the outcomes look good, they are correct. I discovered this at least two weeks in, while I was assuming that everything was correct. So I had to put a lot of emphasis on evaluating the evaluator. And this is the major difference for me. When you're building any other type of software,

when you're building a platform, whatever you're building,

you have testing frameworks; you can test. And if the outcomes are correct, you know that the next time the outcome is going to be correct. With AI, that's not the case. And that

increases risk. So I did expect it to be,

Manuel (07:45.664)
I wouldn't say more powerful than it is, but if I'm being honest, I think its capabilities have been grossly oversold. That was my own mistake.

It is powerful, don't get me wrong, but look at how much hype there is in the space, how much is oversold: it can do this, it can do that, it can automate this. You're thinking, Jesus Christ, this is going to take over all our jobs in a year. But when you get down to it and you understand the nuts and bolts of it, you realize, yes, it's powerful,

but it's not what's been sold out there.
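An aside on the "evaluate the evaluator" lesson Manuel describes a moment earlier: before trusting an automated judge, you can score it against a small human-labelled golden set. The sketch below is purely illustrative; the toy judge and the golden set are invented assumptions, not code from Manuel's project.

```python
# Hypothetical sketch: before trusting an automated evaluator, score it
# against human-labelled examples ("evaluate the evaluator").

def toy_evaluator(answer: str) -> bool:
    """Stand-in for an LLM judge: here, naively passes any answer
    that mentions a refund."""
    return "refund" in answer.lower()

# Human-labelled golden set: (model answer, human verdict)
golden_set = [
    ("You can request a refund within 30 days.", True),
    ("Refunds are available retroactively.", False),   # plausible but wrong
    ("Our policy does not cover this case.", False),
    ("A refund applies; see policy section 4.", True),
]

def evaluator_agreement(evaluate, labelled) -> float:
    """Fraction of human labels the evaluator reproduces."""
    hits = sum(evaluate(ans) == verdict for ans, verdict in labelled)
    return hits / len(labelled)

agreement = evaluator_agreement(toy_evaluator, golden_set)
print(f"evaluator agrees with humans on {agreement:.0%} of cases")
```

In practice the stand-in judge would be an LLM call, but the calibration step is the same: if agreement with human labels is low, the evaluator's verdicts cannot be trusted, however good the outcomes look.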

Francis Gorman (08:33.428)
that might have something to do with the price of gold going through the roof. People are starting to realise that the magic box has its limitations. And I think that for me is... Go on, go on, go on. I don't want to cut you off.

Manuel (08:38.692)
yes. Yeah. Yeah.

Manuel (08:48.83)
No, no, no. What I was going to say is, if you think about it, it comes down to incentives. Because hype drives FOMO,

and fear sells. So the space optimizes for what sells.

Manuel (09:09.102)
And to be honest, this is one of the reasons why I tasked myself with building in public and sharing the outcomes: because I think people deserve better than what they're often getting. With all of this noise in the space, I thought, Jesus, if I can help people ask better questions, or if I can help them avoid

some of the mistakes that I or other people have made, if I can cut through some of that noise, that feels useful. So that's why I started. Also, I did it for myself. I think we humans find meaning in our contribution to others. So I suppose all of this exercise is a way of meaningfully contributing to others and finding purpose.

Francis Gorman (10:10.498)
It's very noble, Manuel. And what I've really come to see over the last two and a half, three years, is that AI's ability to present you with information that looks accurate, yet blind you to the fact that it's been skewed or made up, is quite frightening. And as more enterprises

roll this out across their organizations, the whole principle that we have in security of trust-but-verify almost needs to become part and parcel of that rollout. But humans like comfort, and comfort doesn't like the rigor of validating every line that comes out. In your opinion, and I'll take this from a personal capacity more so than the lessons you've learned, how risky

is it going to be over the next several months, as people start to summarize meeting exchanges, regurgitate documentation, use AI tools to validate assumptions? Where is that going to lead?

Manuel (11:25.593)
I think there are two key angles I'd like to take here. One of them, and I'm going to use an example, maybe, to illustrate this: a project manager was telling me a few weeks back about his method for dealing with a sudden handover when the next day there was a stakeholder meeting. So, new to the project, sudden handover, you have a stakeholder meeting, you have to provide

the status of the project and so on. His method was: dump everything. And when I say everything, that's Teams conversations, project plans, emails; dump everything into the LLM. And then it will summarize it for you, so you just have to remember the key points the LLM has extracted. Well, apart from

feeding

sensitive information into a public LLM, which is what he was doing, non-maliciously, don't get me wrong, he was just trying to remove friction and be efficient, he walked into the meeting with wrong assumptions. He walked into the meeting with a brief that looked correct, but it wasn't. And he told me, man, I looked like a fool.

I made a fool of myself, because I was presenting something that wasn't accurate. And people realized, people noticed; they had been on the project a long time, so they quickly realized. So I think one of the dangers is that, to us humans, AI feels authoritative

Manuel (13:23.513)
if it's confident. So the more confident and authoritative it is, and the more times it gets it right, the more comfortable we become and the more we trust that it's going to get it right. So yeah, I suppose the risk is, and I'll now connect with the other point, I think we're going to quietly lose the ability to question the machine,

we are going to quietly lose the capacity to

correct it when it's wrong, to shut it down. Because if we stop exercising that ability... I believe that's the biggest risk, losing the ability to question it. But it's also going to be a difficult situation, because one of the biggest benefits, especially in agentic AI, is the capability of

producing a lot of stuff at speed, automation at speed. We humans don't have the ability to check the outputs at the speed that AI is going to produce them.

Now if you

Manuel (14:50.877)
If you design a system where

people have to be constantly checking, we have to be careful where we put the human in the loop, because if the human has to check every single output, that's going to lead to approval fatigue. And approval fatigue leads to: I just approve by default. I don't question it. I don't exercise judgment.

I believe that's going to be the biggest risk: losing the ability to judge, but also creating fatigue. Because the human in the loop is something that is going to be mandatory. The AI Act mandates that for high-stakes use cases. So it's not like you can say, okay, I don't care, I'm just going to accept, accept. So I suppose those are the...

the two biggest, I don't know if risks, the two things that worry me the most.

Francis Gorman (16:04.026)
And I think when you extract them, they're very real things to be concerned about. I see it myself all over the place, not only through doing podcasts and talking to lots of intelligent people like yourself, but I also see it in external factors. And what I mean by that is,

I recently had to join Instagram for the podcast, to help with distribution, against my best notions. And in there, the platform is so heavily focused on what I would call amassing wealth through artificial intelligence. You've got all of these influencers telling the next generation, you know, you'll make 30 grand a month by putting this into ChatGPT and clipping that into here and putting that into this tool, et cetera.

And it's like a race to the bottom. So I think your part where you said people will lose the ability to question the machine: if that's your reality, and it's been reinforced over and over again through the platforms that you're consuming, your assumption from your personal life will be, this is easy. This is easy money. This is easy work. I don't have to spend time going through the pain like you did over a number of months, to

hit those walls, to find those limitations, to understand that actually I've been led down the garden path here for two weeks, because my assumption was that the machine I created to evaluate the development was itself telling me the truth. And that's what it's doing. It's creating a false pretense. But as you said, because it feels authoritative, it empowers you to treat it as your baseline, and you could easily take that and publish it.

Another LLM will pick it up and go, you know, here is this great work that we've seen come out of Manuel Tomas, and it spits out a blog post, and that becomes someone else's reality. And all of a sudden the provenance, the lineage, the understanding of the fundamentals get eroded. And I think you put it lovely there: we will lose the ability to question the machine. So I think critical thinking, communication, validation, re-baselining your assumptions,

Manuel (18:05.351)
Mm-mm. Yeah.

Francis Gorman (18:20.564)
are all skills that people need to double down on now, not to throw away. Manuel, one of the fascinating things I was reading in your blog posts was your considerations around indemnification for some of these products in the AI space. So will insurance companies

stand over agentic AI? Can you bring some of that to life for me, please, that recent blog you wrote on this topic?

Manuel (18:54.423)
Yeah, so what I did there is, I was doing a bit of research on how insurance companies are addressing professional indemnity in the era of AI, especially for agentic AI.

To summarize it: Lloyd's of London

was advising insurers to carefully consider coverage, excluding coverage for AI. Berkley Insurance has excluded AI from professional indemnity completely. So 100% no insurance. And there are a few other companies, big insurers if you want,

insurance underwriters, that are asking whether they can legally exclude anything AI. And the reasoning behind it is the insurance industry can absorb one claim, even if it's big, 500 million, whatever that is. But imagine if it's a failure at a scale that's...

Manuel (20:27.379)
Imagine there is an error, something on a specific model that

Manuel (20:36.333)
ends up creating liability for a thousand companies at the same time. There's no way any insurance company can absorb that. The risk is just too high; they wouldn't have enough money to cover it, really. There's also legal precedent now, and...

There was a case, I don't remember exactly the details, but there was a case, I think it was in the...

It was in the UK courts, and it was a litigation involving the Qatar National Bank, I believe. And the lawyer cited, referenced, 23 authorities out of 45 that didn't exist.

Manuel (21:35.741)
Air Canada as well. Air Canada, the bot on their website was advising somebody who was bereaved that they could get their money back for their flight retroactively. But that wasn't true. That wasn't policy. And there was litigation between Air Canada and this guy, and Air Canada's

posture was: the AI is a separate entity on its own, it's liable on its own, it's not our fault. And the judge responded, no, no way. Anything that's on your website is your responsibility. That's clear as day. And I think, as of this last December, there were 712 cases

in the courts, because there's a guy who is tracking all of these litigations. So what you're seeing is an increase between 2023 and 2025 from 213, or something like that, I don't remember accurately, to 712 cases. An explosion of litigation, and then the insurance industry saying, hey, we won't insure

anything that comes from a decision that AI has made. So,

coming back to the human in the loop. Let's imagine it is, I don't know, a credit-scoring system in a bank.

Manuel (23:25.084)
You can design a system that can score 10,000 people in five minutes.

Now, if you don't have a human

validating the outcomes,

you are liable. Well, it's not that there's no way; the insurance industry is trying not to cover you in those situations, because then it's not a professional, a person with a professional license, with their own judgment, making the call in the end. So if you use AI to

Manuel (24:08.929)
automate the parts of the work that can be automated, that's fine. Use the technology for what it's good at. But if you automate human judgment, when things go wrong, and going back to the credit scoring, it's not only that

the bias that is inherent in the system may get 1,000 or 2,000 of those wrong. It's that you're exposing yourself to liability.

Manuel (24:53.839)
And it's law. It's not only the insurance companies, it's law. In the EU, the AI Act, Article 14 and Article 26, mandates human oversight for high-stakes use cases. So it's an interesting tension: how do we

take advantage of the technology while we stay in control, while we make sure we don't expose ourselves to litigation, and we abide by regulation? So, yeah, it was interesting.

Francis Gorman (25:40.25)
I was going to say, I'm listening to you here and I'm laughing a little bit inside, for two reasons. One, we were selling AI to streamline processes and reduce headcount, and two, we were also selling it to remove humans in order to achieve those two pieces. Where in reality, the insurance companies have said, that's lovely, but, you know...

Manuel (26:02.448)
Yes.

Francis Gorman (26:08.122)
Unless there's someone I can hang this on, we ain't signing off. And I think it's back to your point: will we lose the ability to question the machine? So even if you do have a human on the loop, or in the loop, or beside the loop, will they become complacent to the fact that the loop is right 90% of the time and miss the 10% anyway? Because it's basically a tick-the-box exercise. But something else that strikes me here is,

a lot of organizations are going to be three-quarters of the way down the road in building the technology, deploying the technology, and probably about to put it live, before it even dawns on them that they may not be able to get insurance coverage if something goes wrong here. So I think this actually creates considerations that mean we have to rework the product life cycle. You almost need...

You almost need to validate all of your assumptions upfront, before you go into build and spend all that money, time, and effort on resources, on technology, on data, et cetera, to understand: actually, is this dead in the water before I even start? And very few organizations are going to think, out of the gate, am I insured?

Manuel (27:20.996)
Yeah. And that's why I started digging into it, because I didn't see it anywhere.

And it's something I kept thinking: it doesn't matter how powerful it is.

If you're liable for it, if you cannot get insurance for it, then what? Nothing else matters. Really, nothing else matters. It can put you out of business. And that's one of the things I think organizations need to consider.

When you have a human in the loop, and this is coming back to what you just mentioned about approval fatigue, losing the ability to validate the outputs, to judge the outcomes:

the way, and this is my take, the way I would suggest people do it is, do not just present someone with an outcome. Okay, this is a credit score, approve. This is a credit score, approve.

Manuel (28:36.595)
What you're doing there is handing over that judgment to the machine. But if you surface all of the information the AI used

to reach that conclusion, what were the inputs, what was the reasoning, and what were the outcomes, and you force the person to acknowledge that they have reviewed all of that provenance,

I think that's how you keep people engaged. Yes, there is more friction; yes, it's slower; but I think it's the only insurance policy you have, because otherwise people are going to be disengaged. And when it comes to automation, the technology is really powerful, but if you try to do end-to-end,

zero-touch automation in a high-stakes use case, I think you're done for, right? So the way I would think about this is: most organizations don't understand the technology well enough.

And part of the reason is, if you look at what organizations are doing, they invest in advisory to create a strategy and then spend months building POCs.

Manuel (30:15.698)
And then they struggle to get anything into production.

I would completely flip it. You first need to understand the technology before you start on the strategy. So my view is: identify a use case, one that is low-stakes and simple enough, but painful, something that can be automated. Ensure that AI can help. And if it does, run it all the way through to production.

Because you're going to learn more in two weeks of building and deploying something to production than in months of POCs and slide decks. Then take the learnings and use them to inform your strategy. In the end, and this is my personal opinion, I think most AI strategy is just expensive procrastination. It feels like

progress. But if you're not really building anything, it's just theater.
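The review gate Manuel describes earlier in this turn, surface the inputs, the reasoning, and the outcome, and force the reviewer to acknowledge each before an approval counts, can be sketched in a few lines. The `Decision` class and its field names are hypothetical, for illustration only, not any real system.

```python
# Hypothetical sketch of a human-in-the-loop review gate: the reviewer
# must acknowledge inputs, reasoning, and outcome before approval counts.

from dataclasses import dataclass, field

@dataclass
class Decision:
    inputs: dict          # what the model saw
    reasoning: str        # why it concluded what it did
    outcome: str          # e.g. "approve" / "decline"
    acknowledged: set = field(default_factory=set)
    approved: bool = False

    def acknowledge(self, section: str) -> None:
        if section not in {"inputs", "reasoning", "outcome"}:
            raise ValueError(f"unknown section: {section}")
        self.acknowledged.add(section)

    def approve(self) -> None:
        missing = {"inputs", "reasoning", "outcome"} - self.acknowledged
        if missing:
            # Refuse rubber-stamping: provenance not reviewed yet.
            raise PermissionError(f"review {sorted(missing)} before approving")
        self.approved = True

d = Decision(
    inputs={"income": 52_000, "history_years": 7},
    reasoning="Stable income and long credit history.",
    outcome="approve",
)
for section in ("inputs", "reasoning", "outcome"):
    d.acknowledge(section)
d.approve()
print("approved:", d.approved)
```

The deliberate friction is the design choice: an approval that can be issued without touching the provenance invites the approve-by-default fatigue discussed earlier.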

Francis Gorman (31:29.466)
It's funny you say that, because I had Glenn McCracken on a few weeks ago and he said, I've never seen anyone with a PowerPoint strategy, and AI is just another tool. And I think the crux of it was: you have a business strategy, you have a technology strategy. Some of the tools in there may be AI, but it doesn't need to be a standalone thing. It's an interesting way of looking at it. Manuel, I have to ask you, what drives your curiosity?

Manuel (32:00.133)
What drives my curiosity in the space in general, in AI?

Francis Gorman (32:05.516)
In general. So, we've worked together in the past. I know you came to Ireland; you have this enormous resilience, this enormous human resilience, to not just reinvent yourself but to focus intensely on a problem and to better yourself, whether it's physically or mentally. You have that drive. And I think

I'm always fascinated by curious people, and curious people make the world work, because they like to pull at the thread and see what unravels. So what drives your curiosity in life? We don't have to specifically talk AI.

Manuel (32:47.045)
Yeah, so I think honestly when I see a problem.

I find it very difficult not to solve it. It's how I'm wired. I find it very difficult to leave a problem alone if I haven't solved it, if that makes sense.

I also find, coming back to the purpose and the meaning I was mentioning earlier,

Manuel (33:22.774)
I also like to share what I discover.

I don't like to keep things to myself. But what drives my curiosity is possibly personal growth. Because when you're curious, and especially when you put yourself in situations where you have to figure it out, like I did when I came to Ireland with no network, no contacts, I had to rebuild myself from scratch, my reputation, everything.

That's humbling and it's clarifying, but...

Manuel (34:05.032)
It led, or it put me on a path of sustained growth, because I had to adapt to a new country, a new culture, new ways of working. It changed me completely, in a way I wouldn't have changed if I had stayed in Spain.

Manuel (34:28.789)
So I'm always drawn to the uncomfortable path. Not because I like difficulty, but because that's where growth lives. There's no growth in comfort. And I think you said that at the beginning of the episode. There's no growth in comfort. So you need to step out of your comfort zone.

And I think that's one of the biggest drivers. It's also intellectual stimulation. When I look at my trajectory, and this is more on the professional side than the personal side, Bank of Ireland is the company I've worked for the longest, six and a half years,

but I've been in three different teams with three different roles.

because I have this need for intellectual stimulation, for some reason. So if you take those two, the need for intellectual stimulation with this,

you know, hunger, it's just...

Manuel (35:48.904)
When I feel comfortable, I start wondering: is there something that I'm not learning? You know what I mean? So I suppose that's what drives me:

to improve myself, to grow, to experiment, and to share what I learn. Because like I said before, sharing with others gives me purpose. If you look at what I was doing before, if you look at how LevelUp360 was born, before being what it is today, the focus was on health and fitness. I'm a personal trainer.

I'm a certified health coach, mindset and behavior change specialist. I'm a nutrition coach.

There was a point in time where I was dedicating almost the same time to that as to my work, because I was curious about it. And I wanted to share, and I wanted to help people live healthier lifestyles. Does that make sense? There's always been something in my life that forces me to grow in one area or another. That's been a constant forever. And it's good in a way,

because it allows you

Manuel (37:14.677)
to do well, if you want, professionally. But at times it can also be draining, because you don't know where to stop. And if I look at you: you have a lot of responsibility in your job, and there you are putting all of this content out. It takes a lot. People don't see...

Francis Gorman (37:29.869)
I hear you.

Manuel (37:43.933)
People see the outcome, but they don't realize how much work there is behind it: recording, editing, producing, preparing, reaching out... Well, I don't want to speak on your behalf, but I can only imagine, you know. So I suppose my journey resonates with you.

Francis Gorman (38:07.161)
And I think that's fair. And I know exactly where you're coming from. You know, if you asked my wife, she'd say that the podcast guests and my colleagues see more of me than she does. So I have to be conscious of that as well and carve out some family time. But I think there is that drive, and I'm very like you. I see something and I have to understand it.

Seeing it is not good enough, if that makes sense. I need to take it apart. If it's an engine, I need to pull it apart. If it's an AWS stack, I need to rip it down. And I suppose that's how you learn. And I think that's the point I was trying to make earlier in the conversation: the greatest learnings come from the pain you encounter on the things you don't know. And a lot of people are going to lose out on that, and I actually...

I actually feel a bit sad about that. If you don't go through the pain, if you don't have the late nights... I remember in 2010 building a software-defined network on a Unix system, and it just wouldn't work. It just wouldn't work. Mininet, I think that was the simulation program. Then one day it clicked, and it was so simple when it clicked, it just made perfect sense, you know? But

to get it to click probably took me five, six weeks of banging my head off a wall, rerunning the configuration, reevaluating my address ranges, trying to understand why the nodes were just not operating the way they should, you know? And at the time I couldn't type into a ChatGPT or a Gemini, how do I do these things? But I even find now, if I ask certain questions, and I'll give you an example:

if I go to ChatGPT and I say, I'm using Adobe Audition to edit podcasts, I want you to assume the role of an expert sound professional, and I want you to give me the configurations I need in Adobe Audition to ensure I have high-quality audio output without distorting the baseline frequencies or making voices too pitchy, every single time it'll give me a slightly different variation.

Francis Gorman (40:26.842)
But I spent years working in Windmill Lane and Screen Scene in post-production. So, like, I know that those variations have a high probability of affecting the outcome of the product. But to be curious and to see what it sends back, you want to tweak and try and see: does that give me a slightly better outcome? Actually, the sound was a bit bad because we're remote and we got a bit of boom. But it's not consistent, and that consistency only comes

from pain, from working the problem, from understanding the outcomes. And I'm sad if we're going to lose that. So it is something that I'm going to watch for. Manuel, before we finish up: you put a lot of focus on security. If you look out over the next 12 to 18 months, what worries you most in the AI space in terms of security risk?

Manuel (41:22.691)
I think, look at where the space is going, with the push for autonomous agents,

and what's coming out right now: tool registries, agent registries, agents advertising their capabilities so that other agents can dynamically discover and invoke tools and other agents. Networks of agents coordinating with each other across clouds, on-prem, at the edge.

The complexity is going to explode, and all of these new

Manuel (42:13.8)
network, not just network, but all of these new communication flows, which you sometimes cannot anticipate, because, like I said, it's dynamic discovery and everything. I think the biggest risk is losing visibility of what's happening and not having the ability to control which

tools and knowledge spaces an agent can access, which other agents it can invoke. Because it's not codified; it's discovered at runtime, it's not in the code. So I think the complexity is going to explode. And the biggest risk is that we don't have a unified control plane where you can enforce policies,

access, rate limits; where you have all of the logging, all of the traces, everything in one place. So the biggest risk, rather than being something specific, is the difficulty of centrally managing all of this.

Manuel (43:56.831)
More than the biggest risk is... sorry, I've lost my train of thought.

Francis Gorman (44:04.316)
You're good. You're good. It happens to the best of us.

Manuel (44:07.179)
Yeah, I know. So what I was saying is: you see where the space is going, with this explosion of huge multi-agentic networks. But the biggest issue and the biggest risk is that vendors put the effort into the capability before they deliver the controls; the capabilities come out months earlier than

the controls, because it's the capabilities that sell. That's what they are incentivized to do. And I think that's going to be the biggest risk in the industry: how do you keep pace with the capabilities when you may not have all of the appropriate controls to keep them in check?
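The unified control plane Manuel argues for can be reduced, in miniature, to a single policy gate that every agent-to-tool call passes through. The agent names, tools, and allow-list below are invented purely for illustration.

```python
# Hypothetical sketch of a unified control plane: every agent-to-tool call
# passes one policy gate that enforces an allow-list and records a trace.

allow_list = {
    "support-agent": {"search_kb", "draft_reply"},
    "billing-agent": {"lookup_invoice"},
}
audit_log: list = []

def invoke(agent: str, tool: str) -> bool:
    """Central choke point: allow or deny, but always log."""
    allowed = tool in allow_list.get(agent, set())
    audit_log.append((agent, tool, "allow" if allowed else "deny"))
    return allowed

invoke("support-agent", "search_kb")       # permitted
invoke("support-agent", "lookup_invoice")  # denied: not in its allow-list
print(audit_log)
```

The point of the single choke point is that denied calls are still logged: visibility survives even when dynamic discovery means you cannot predict the call graph in advance.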

Francis Gorman (44:59.355)
I think that's great insight, and I don't think you're going to be proven wrong there. We see it already. You know, it's a vendor-first market: push out the dream, and then follow on with the security once the dream is selling. Manuel, I really enjoyed the conversation. We're going to wrap it up now. But if people want to find you, I'll stick your details into the episode description. So...


Francis Gorman (45:27.248)
If anyone wants to get in touch with Manuel and leverage his expertise and his deep learnings in this space, check out the description on the episode and you can get all of his information there. But Manuel, I really appreciate you coming on and thank you for taking the time out to speak with me today. Thank you.

Manuel (45:43.728)
Thanks very much, it's been my pleasure.
