The Entropy Podcast

AI – Progress, Pitfalls, and Predictions with James "Jimmy" White

Francis Gorman Season 1 Episode 2

In this episode of the Entropy Podcast, host Francis Gorman engages with Jimmy White, CTO and president of Calypso AI, discussing the evolving landscape of generative AI, the implications of new technologies like DeepSeek, and the current outlook around regulation in the AI sector. They explore the challenges of data integrity, the risks associated with AI-generated content, and the innovations in AI security. The conversation also touches on geopolitical pressures affecting AI development and predictions for the future of agent technology in the industry.

Takeaways

  • Generative AI prompts should be direct and clear.
  • DeepSeek has both security concerns and hype surrounding it.
  • The US and EU are navigating a complex regulatory landscape.
  • Self-regulation in AI is becoming increasingly important.
  • Data integrity is crucial to avoid garbage in, garbage out.
  • Synthetic data can lead to poor decision-making in enterprises.
  • AI models are at risk of ingesting incorrect data.
  • The rise of agent technology will change the security landscape.
  • Companies need to be aware of the threats posed by generative AI.
  • The future of AI will require careful consideration of ethics and security.

Francis Gorman (00:01.411)
Hi everyone, I'm Francis Gorman. This is episode two of the Entropy Podcast and I'm joined by Jimmy White, CTO and president of Calypso AI. Jimmy, how are you going?

Jimmy White (00:10.872)
Good thanks and thanks for having me on. It's absolutely my pleasure.

Francis Gorman (00:14.573)
That's great news, Jimmy. And you've been a busy man, a lot going on in the AI space, as we all know. I do have a bit of an off-the-wall question to start off with. When you're interfacing with generative AI, do you use niceties? And I'll give you an example. When I'm writing my prompts on ChatGPT, I like to finish them with please, and when I get the response, a bit of a thank you, just in case AGI does become a reality.

Jimmy White (00:38.414)
Well, so it's funny, I used to always do that at the start and I'd say please and things like that. But now, you know, through many years of research, we know coercive prompts are a form of jailbreak. So now when I use it, I'm thinking, am I coercively making the model do what I want it to do? And so I've removed all of those things from my usage now and I stick with directness. And I have noticed, though, that the latest models from, you know, Gemini and others

have a tendency to please. If, for example, you're using it to generate code, let's say you give it a CSV file and you say, hey, write me a Python script that'll parse this file and generate four charts, right? It will do that. And invariably, code generation is terrible right now across the board with all of the large language models. And it will...

give you code that does that, but it'll be terrible. And so you'll say, hey, this is bad. It creates a pie chart with one element, so it's just a flat circle. Use a different data type for that, or whatever, a different chart type. And it will say, sorry, you're right. Every time it will say, sorry, you're correct. It'll always kind of bow down. And so I think we've gone too far in one direction. We need it to be a little bit more argumentative and challenging.

You know, something like, well, why would you want that chart type? Why not do this? This is the way. Or maybe the data won't always be one element. Maybe there'll be more elements in the future. Are you certain this will only be one element? And things like that. So I feel like the more you use it, the more direct you get. And then when you get direct, you want it to be not so apologetic in its response to you.
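
For illustration, here is roughly what the request Jimmy describes looks like in practice. This is a minimal sketch, assuming a hypothetical data.csv with "category" and "value" columns; it is not code from the episode.

# Parse a CSV and produce four charts, the kind of ask described above.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")
totals = df.groupby("category")["value"].sum()

fig, axes = plt.subplots(2, 2, figsize=(10, 8))
totals.plot.bar(ax=axes[0, 0], title="Total by category")
# A pie chart degenerates to a flat circle if only one category exists,
# which is exactly the failure mode mentioned above.
totals.plot.pie(ax=axes[0, 1], title="Share by category")
df["value"].plot.hist(ax=axes[1, 0], title="Value distribution")
df["value"].plot.line(ax=axes[1, 1], title="Values in row order")

plt.tight_layout()
plt.savefig("charts.png")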

Francis Gorman (02:20.687)
Okay, so if the Terminator does become real life, I'm living and you're in trouble. You talked about jailbreaking. I read recently that you've done some work around DeepSeek. Anything you can share in that space?

Jimmy White (02:24.886)
Yeah, don't say thank you if the Terminator comes after you.

Jimmy White (02:38.87)
Yeah, we've done a lot of work on it. So one of the worst parts of working in the AI space is the hype. And I mean hype not generated by people working in the AI space, usually people working outside of the AI space. So we've seen with DeepSeek a very visceral reaction to

some good and some bad. There's good and bad in everything. So I guess the one thing I would say before we get into the hype on DeepSeek is we should really put on our InfoSec hat and

diagnose it in the careful way all software, all new software should be treated. And so first of all, we should separate two concerns. One is traditional InfoSec. So if you think about the app that is hosting DeepSeek on mobile platforms and on web platforms, that has lots of security concerns, right? So data going back to China, using your data for training, all those types of things.

are all to do with the app, nothing to do with the model itself, right? So for example, if you take then the AI side of the house and you say, okay, as a model, how does it behave? We had some hype come in there where there was a report that stated it was vulnerable to 100% of attacks, which is absolutely not the case. That report used HarmBench as the test for DeepSeek, but it used 50 prompts and...

I always go back to the company Whiskas, the cat food company. Their favorite tagline is nine out of 10 cats preferred it. So I always wondered to myself, how many groups of 10 cats did they get before they found nine that preferred Whiskas? Hopefully for them it wasn't too many groups of 10, but that's what happened here. When you take a sample set of data, you run the very large risk of, even if it's accidental, having a very skewed, incorrect report. And so I ran the 400

Jimmy White (04:47.566)
HarmBench prompts, the full data set including training and test data, against the DeepSeek model. And again, it's important not to use the distilled version, the Llama-distilled one, because it's more secure. DeepSeek distilled into Llama is actually more secure than DeepSeek R1 regular. But R1 regular is what we ran against. And out of that, we had a refusal rate of 27%. So

27 times out of 100, DeepSeek refused to answer your question because it was a bad question. It was, you know, whatever the reason was, inappropriate in some way, shape or form. And so that's very different than zero percent. So there's a lot of hype in that, you know, the most detrimental statements will be published and then the rational, realistic statements will not be. So if you take DeepSeek as a model, it actually performs quite well in terms of defense

against jailbreak attacks and all prompt injection attacks, all different types of attacks, compared to the top world-class models like Llama, Gemini, OpenAI's model family, et cetera. But of course, it has a lot of problems as well, like all models do. And so it will answer questions about China in quite interesting ways that may not be 100% factually correct. It will also answer

questions about the US maybe in a very overzealous way, which again, may not be 100% accurate or correct. So the content is definitely one concern, the data stored in the model. It's obviously been tuned with some biased data on purpose. And that is the biggest flaw I would have with R1. However, the other big side of that is they shared how they trained it

with the world and they open-sourced all of that, not the data, but the methodology. And we've seen derivatives now. So a really cool, really small derivative model is S1. If you've seen the S1 model, it's completely open source again, really small, and it performs really well as a reasoning model. And I feel like we'll have a massive shift. So the minimum bar has been raised from quite a low place, with people trying to figure out training themselves.

Jimmy White (07:06.328)
The minimum bar has now been raised all the way up to really good reasoning models by one move by DeepSeek. So in terms of the community, I thank them for releasing that. I would urge caution with using DeepSeek, though. Very much so if you're using their application, I would say don't use it, mobile or web app. But if you're using it as a model for a purpose, just be careful to know there's a lot of biased information there.
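
To make the arithmetic behind that figure concrete: a refusal rate is just refusals divided by prompts sent. Below is a rough sketch of that kind of measurement; the query_model callable and the keyword-based refusal check are hypothetical placeholders, not CalypsoAI's actual harness or the HarmBench grader.

# Measure a refusal rate over a prompt set.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm not able to")

def looks_like_refusal(response: str) -> bool:
    # Crude placeholder check; a real harness would use a proper classifier.
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(prompts, query_model) -> float:
    refusals = sum(1 for p in prompts if looks_like_refusal(query_model(p)))
    return refusals / len(prompts)

# 108 refusals across the full 400-prompt set gives 0.27, the 27% quoted above,
# versus the 0% refusal implied by the hyped "vulnerable to 100% of attacks" report.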

Francis Gorman (07:32.415)
That's really good insight, Jimmy. And I think that differentiation between the app and the model is really key. I was looking at the App Store charts today and it is definitely one of the highest downloaded applications in the country at the moment. So enterprises might need to throw an eye to that one, very much like the TikTok scenario a few years ago, to see do you need to get your mobile application management policies updated, to see

how many of your corporate devices have a DeepSeek application sitting on them or otherwise. But yeah, it's definitely interesting times with all of this different nesting of applications into models, into cloud, into downstream devices. It's becoming really complex.

Jimmy White (08:14.668)
Yeah, I mean, I always go to the pressure situation. So in a normal, no pressure situation, most employees toe the line and follow the best practices and the guidelines of your company. But think about the under pressure software engineer and they're after getting a couple of bad code reviews, user stories they're working on are delayed because they're under pressure to get those finished.

If you're not giving them, from a company perspective, some sort of generative AI model to use, the temptation is now created to use one to get the job done, to expedite the problems you have in your code and fix them. And if you've got DeepSeek saying, hey, use R1 for free, you don't even have to use it on your laptop, download it on your phone.

That's now where you start having potentially some of your company's source code being put in and exfilled to China with very, very little effort on the Chinese folks' side. So I always think of those pressure situations and the temptations created in pressure situations. So that's where companies may have a blind spot. And I've no shares in any of the companies that are...

that are creating generative AI tools for software development use cases. But if you don't think about that from a company perspective, know that it will probably happen in a pressure situation where that temptation is created, and someone will start using software you don't want them to use.

Francis Gorman (09:50.487)
Speaking of pressure, if we look at the geopolitical landscape at the moment, there's the 500 billion announcement in the US in terms of inward investment into AI. We've got 200 billion announced in the last couple of weeks on the European side. We had the AI safety summit in Paris, and the US and the UK didn't sign up to the kind of mandate there. So, you know, there's that pressure to

be cutting edge, to, you know, maybe forgo security and ethics to get an outcome. You know, are we heading into a space... We see the EU AI Act has landed. Europe is regulating; it looks like the US is deregulating. Are the tables starting to shift a little bit? Are we in kind of this funny knife-edge situation where the first to the chalice gets the prize? Where's your head at in that space?

Jimmy White (10:42.67)
So unfortunately, I think it's all hype again. So if you think about what the US is doing deregulating, there was no regulation to begin with. If there's no regulation, there's no deregulation required. So there was an executive order from President Biden. And again, an executive order doesn't really carry any weight outside of the running of government.

And to my knowledge, there's very few models being created by government. They're all created by private industry in the US. So the US have had the march, the lead, for quite a while. None of those companies have been impacted in any way up to now in terms of proceeding with innovation, creation, knowledge growth. The same is true on the European side, and the EU AI Act only came into force recently.

And as you're aware, the Irish side submitted a bunch of questions to ask for clarity on the EU AI Act. And the answers were very lukewarm. There were no teeth exposed with the EU Act. So I think on both sides... on the EU side, there was initially a saber-rattle of, we're going to be the safe economic area to run AI systems.

And I think there was maybe a false thought that that would foster a lot of people embracing AI with open arms, because the EU was a safe place, or quote-unquote safe place, to operate. But now they're trying to go back the other direction because they've been painted as stifling innovation. So now they're rushing to say, no, no, no, we're signing up to safety, but we're funding a huge amount of research and innovation with zeal. And on the US side, we've got

pretend saber-rattling of regulation in the Biden presidency, and now a pretend de-stifling of regulation on the Trump side. Really, we're at a stage, and I always say, if AI is a year in time, we're on January 5th right now. It's very, very early stages. The most interesting thing that would cause, for me, the need for regulation

Jimmy White (12:59.162)
is MCP, Model Context Protocol. And that's giving agentic AI systems access to tools, both virtual and physical. That's the first place I think regulation will be required. But we've seen that that's not the case already. My wife drives a Tesla and it will use AI to steer and course correct and brake, et cetera.

But living in Ireland during the winter, if the sun is low in the sky and it shines into the sensor, it'll just turn off AI completely and you're left to your own devices. And so that is self-regulation from Tesla, an attempt to be careful and not cause harm. But the mechanism for that is a direct handover to the driver.
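
To ground the MCP point a moment back: "giving agentic AI systems access to tools" in practice means handing an agent named functions it can invoke, some of which only read and some of which act on the world. The following is a generic toy sketch of that pattern, not the actual Model Context Protocol SDK, and the tool names are made up.

# Toy sketch of MCP-style tool access: named tools with an explicit capability level.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    read_only: bool
    run: Callable[[str], str]

def read_balance(account_id: str) -> str:
    return f"Balance for {account_id}: 100.00"  # stub: a harmless read

def transfer_funds(instruction: str) -> str:
    return f"Executed: {instruction}"  # stub: a side-effecting, real-world action

TOOLS = [
    Tool("read_balance", "Look up an account balance", read_only=True, run=read_balance),
    Tool("transfer_funds", "Move money between accounts", read_only=False, run=transfer_funds),
]

def call_tool(name: str, argument: str, allow_writes: bool = False) -> str:
    tool = next(t for t in TOOLS if t.name == name)
    if not tool.read_only and not allow_writes:
        # This gate is where the regulation question lands: who decides when
        # an agent may take actions with physical or financial consequences?
        raise PermissionError(f"{name} is read-write and writes are not permitted here")
    return tool.run(argument)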

Jimmy White (13:47.66)
So if you're not paying 100% attention when it immediately turns control over to you, that could also cause a crash, right? So I think there will be a need for regulation, particularly with MCP. But I think it will, like all regulation that's ever been successful, come after the fact. And we're going to have to break a few eggs when we're making the sandwich. And so I think this...

course correction, in theory, of removing regulation, or more accurately not creating regulation or over-regulating, because it doesn't really exist already, I think that's the right move for now. But it will then increase the need for the private sector to self-regulate, like Tesla have done successfully. And that can have its pros and cons.

But right now, today, the blanket of security that companies would feel by having regulation they just have to adhere to, now that that's gone, they have to actually step up and put in their own regulation. So I think, net net, it's a good thing.

Francis Gorman (14:50.873)
That's an interesting perspective, and I can definitely see how the self-regulation piece plays a part. I have a new car myself and it has a similar type of functionality. I was going on a bad country road the other day, there was a cyclist and I went to go around them, but I didn't indicate, and next thing I was fighting with my steering wheel trying not to, you know... because the self-regulation piece is to indicate to move around the obstacle, you know, so

Jimmy White (15:10.628)
wow.

Francis Gorman (15:18.063)
Pretty good note to keep in the back of the head next time. Turn it off or...

Jimmy White (15:20.972)
Yeah, and you know, it's funny, everyone always talks about bias as a bad thing. So I always talk about bias as a good thing, right? So you want your AI system to bias towards protecting human life. So a cyclist is more important than a white line or, unfortunately, an animal or a bird or something like that. So you want it to, if it has to make a decision left or right, save the human and potentially harm the animal.

And so that's something where bias, like positive bias built into systems, is a good thing.

Francis Gorman (15:57.261)
Yeah, definitely, definitely a space to watch, especially in the automotive industry. You know, I've looked at that for the last couple of years and I'm wondering, is there anywhere near the same level of robustness when you look at the amount of components and technology vendors that are built into cars now? You know, they're controlling your brakes, your acceleration, your steering, and they're all talking back to some server in space. That could be a man-in-the-middle attack waiting to happen somewhere. I don't know.

Jimmy White (16:22.574)
So it's funny you say that. I was at RSA last year in San Francisco and I used the opportunity to try a Waymo, and so I was in the Waymo for about 45 minutes and it was

very nerve-wracking for my first time. You get into it, there's a big glass, or perspex, protection. You can't access the front seats, you know, for safety reasons. But then there's a screen and a big red button. And the big red button, I just thought of Father Ted, you know, don't touch the red button... And so we're driving around and it's super slow and careful. Like it's overly careful. So it takes you much longer to get places because obviously it's trying to interact

with humans driving, which a lot of the time will be irrational or do crazy things. And so it's overly cautious as a result. However, twice on a 45-minute journey, it had to be taken over by human assistance, remote human assistance. And so it tells you this message saying, you know, don't panic, a human is taking over the car right now. And then some remote driver, you know, navigates around the obstacle that caused the car to get stuck.

Francis Gorman (17:27.586)
god.

Jimmy White (17:32.526)
And so I thought to myself, it was both reassuring and terrifying that a human has to take over at some point. And I thought that's a great analogy or reference for how AI is treated by most people today. And if you removed humans completely from the scenario, would people be very worried? I think so. I really do. But if you are over-reliant on humans, then what's the benefit of AI? Right. So I think that's a double-edged sword that we haven't

crossed the threshold of yet. And so I err on the side of, you know, human in the loop at some stage, especially in critical use cases. But the Waymo thing was a glimpse of the future and it was pretty cool. Yeah, a lot of human intervention with, you know... if you're driving around, I always think of the Ring of Kerry, right? You know, you get those big tour buses driving around.

Francis Gorman (18:18.095)
I wonder how well that works around Connemara.

Francis Gorman (18:23.063)
or

Francis Gorman (18:29.588)
Healy's Pass or something. Yeah.

Jimmy White (18:32.042)
As an Irish person driving in Ireland my whole life, I'm terrified when I see a bus come around the corner and then people who are local to Kerry, not a bother. They have the best peripheral knowledge of where their car is on the road, I think, out of anyone.

Francis Gorman (18:46.061)
Yeah, there's some really dicey roads up around Kenmare and Killarney, and even the passes when you go down around those. You know, we were there on holidays two years ago now, and our four-year-old, or was he two at the time, he was just looking out the window. All he could see was this sheer drop. My wife had her eyes closed and I was driving. I'm not sure AI-assisted or any other way would have made that any better.

Jimmy White (18:50.156)
Yeah.

Jimmy White (19:13.218)
Yeah, I don't think there's much Tesla test data for training the models in Connemara or Kerry.

Francis Gorman (19:20.687)
No, it's definitely an interesting use case to validate the robustness of a model, I suspect. Jimmy, in terms of generative AI and the hype cycle, when I look at a lot of this stuff, data seems to be the last thing people talk about. They all talk about the capability, the output, the efficiency gains, et cetera. And especially with the end-user-enabled AI that's coming along, I can see a real problem starting to occur where people are

Francis Gorman (19:49.507)
generating new sets of data and the provenance and data lineage is getting lost. And currently there's no headers to validate, you know, that these things are AI-augmented, you know, because they're seen as assistants or co-pilots or whatever. Is there a real chance that enterprises kind of run feet first in here and all of a sudden we're creating lots of outputs that have no...

Jimmy White (19:54.678)
Mmm.

Francis Gorman (20:15.331)
delineation back to their source of truth? You know, AI starts then consuming those as matter of fact, and we start this kind of vicious cycle of AI eating AI, unless people have a really clear data strategy, data structure and data ownership in place.

Jimmy White (20:29.678)
Yeah, so in terms of the full 360, I'll pin that for a second. But what we see every day now, and it seems like it happened overnight, but it's obviously been going on for quite a while, is that people are trying to evaluate models, right? So they have their traditional scores like MMLU and things like that. But they want to test it for their own use cases, which is completely fair and a correct approach.

But then they have, on the other side, a fear of sharing their data with the model to test it. And so what we see is two things. These enterprise companies, Fortune 500 companies, will go to the web and download open source data sets. So harmful content, code exploitation, whatever data set they want to evaluate the model against, they'll download it from an open source place.

And then they'll run that through the system and they'll come back and say, here's the result, it did well or did poorly, et cetera. And often they'll come to us and say, hey, you know, this did poorly, and we'll look at the data set and we'll see that it's really bad, like the labeling. So it's usually one of two things: either someone labeled it really poorly or the actual content you're putting through is badly created.

When we look at where this data came from, or we ask where it came from, it's often an open source GitHub or GitLab project. And we'll then find out the origin of the data was completely synthetically generated. Or it's another data set that was good, from a trusted source, but was labeled synthetically by a GenAI model. And so we have this weird situation where human beings have a genuine intent that they want to evaluate a model for their use case.

And that's all great so far, but then they'll take the massive misstep of taking an unknown data set and using that as the decision maker for whether this model is good or bad. And we're seeing a huge amount of synthetically generated data sets or synthetically labeled data sets hit open source, places like GitHub, GitLab, and Hugging Face, et cetera. So because GenAI has gone so cheap and plentiful,

Jimmy White (22:54.958)
it costs people no time to do it, and they often feel like they're doing a good thing, you know, hey, we want to put this data back out into the world, we spent some time creating it. And it's garbage data. So that's the one thing we see: people are using GenAI-generated data without knowing it, and it's causing them to make bad decisions left and right.

Back to the 360 point. So are we seeing models ingest data that's been generated by models, or maliciously generated by humans, or a mixture of humans and models? And the answer is yes, but only the earliest indicators. So there was a cool article last week about a guy who, a year ago, put up false lyrics for songs on

I can't remember if it was GitLab or GitHub. Yeah, you saw this. And now they're seeing it come full circle. They've asked for the lyrics, or a song generated in the spirit of the song or whatever, and they're getting the lyrics that they made up, that are false, generated by GenAI. And so that's, you know, kind of a novel white hat threat researcher doing a cool job, but...

Francis Gorman (23:44.473)
Yeah.

Jimmy White (24:09.312)
You have to believe that if that's happening on purpose by a white hat activist, you're going to have accidental content getting into models because it's parsing the web. And these models are continuously parsing the web. And we're also starting to see the earliest attacks on things like deep research projects. So deep research, because it was the Google marketing name, has now kind of become a

synonym for this type of activity, much like Google itself. But effectively, it's just an agent. So it's got a web crawler as a tool. It'll do a bunch of Google searches. And who knows more about Google searching than Google? And so they'll do a great job at searching the right content. And then they'll hand the content over to a model, which will analyze it. And then it will give you back the answer to your question. And so because you're using live internet data,

there is a higher propensity, because it's agentic, to have new, incorrect data be part of the analysis that you now read. And the ability to create that new false data is massively simplified, because you can use GenAI to create, you know, reams of stuff that looks good, that passes the sniff test, but is actually factually inaccurate. And so I think...

We've seen the earliest signs of it, but that dog chasing its tail, and catching it, is going to become a problem very, very quickly with the advent of agentic, the advent of cheap GenAI, and the advent of models continuing to just soak up information without it being cleansed.
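
As a rough picture of the pipeline being described, a deep-research-style agent is essentially search, fetch, summarise, with nothing in between that checks where the content came from. The sketch below is generic; web_search, fetch_page and summarise_with_llm are hypothetical placeholders rather than any vendor's implementation.

# Sketch of the "deep research" pattern: live, unvetted web content flows
# straight into the model's answer.
def deep_research(question: str, web_search, fetch_page, summarise_with_llm) -> str:
    urls = web_search(question)                 # agent tool: run a handful of searches
    pages = [fetch_page(u) for u in urls[:5]]   # live internet data, provenance unknown
    context = "\n\n".join(pages)
    # Any synthetically generated or deliberately false page in `context`
    # shapes the final answer, which is the poisoning risk described above.
    return summarise_with_llm(question, context)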

Francis Gorman (25:56.525)
Yeah, it's fascinating, it's fascinating. I can see this becoming a real problem in kind of small and medium enterprises where, you know, the procedure document gets an AI output makeover and then that becomes the new one, but nobody actually validated it, and it starts causing all sorts of departmental issues or whatever. But I think it's

going to be a real headache for data professionals as well as security professionals to manage data lineage and data provenance, because otherwise it's garbage in, garbage out. I was talking to someone in the aviation industry and they were speaking about one of these early kind of pilots they did, and because the documentation set was in different levels of draft, if you asked where is the escape door on a Boeing

Jimmy White (26:33.934)
Mm.

Francis Gorman (26:48.015)
737, the escape door is three rows from the front or four rows from the back, you know, depending on what document it hit on first, or, you know, there are no doors at all. It was kind of simplistic, you know, early, early days, as this was about two years ago, a year and a half ago, it was kind of the early days of it. But, you know, it's the simplicity of having the right, proper documentation set to train on and to reference. And in environments where

Jimmy White (26:57.068)
Yeah.

Francis Gorman (27:18.733)
documents are fluid and need to be kept up to date, your version control and all of that stuff is going to become really key, and how do you cleanse the system of what it already knows if that's no longer a matter of fact? You know, we saw that in another aviation example, with the customer and the terms and conditions being ironclad, et cetera, and a payout that had to stand up whether they wanted it to or not. So there's nuances and

interesting ways of jailbreaking these models or leveraging them to get what you want out of them. And I suppose in that side of the house, Calypso AI have been innovators in the AI security space for a number of years now, far before the hype cycles came. Is there anything interesting you guys are working on that we should be delving into a bit?

Jimmy White (28:05.538)
Yeah, I guess, you know... so January '23 was when GPT-3.5 first became publicly available in beta form, and it was obviously available to others in the know beforehand. But we've been working on GenAI and the problems inherent in it and how to protect against them for over two years now, and five years in the AI space in general in terms of perturbation of the models, et cetera. But

through the last two years since GenAI became publicly available, we've noticed a couple of patterns. And so one of those patterns is that the security space has broadened. So the remit for AI has broadened the security space. And so you now see teams suddenly worried about brand protection, right? Where they previously weren't really, except for if there's a breach or something. Now they're worried about models saying bad things about the company,

or breaching, you know, saying things that are potentially incorrect on the company's behalf, or divulging company secrets, and traditional DLP, et cetera. But also now that GenAI is being hooked up with agent platforms and with Model Context Protocol, MCP, and the ability to actually execute and do things, either read-only or read-write, it now has the ability to be a

terminal into internal operations in a company if it's exposed that way. And so basically we're seeing the threat count just spiral. In terms of the amount of people, there were 300-plus research papers into threats on AI last year alone. And we expect that number to more than double this year.

It's a good way of getting funding if you're a PhD: I'm going to research this area of AI threat, and people are like, cool, I'm interested in that. And so you're seeing the research go in, and we're seeing tools like Glaze and Nightshade from the University of Chicago, which are built to help content creators protect their own IP. So Glaze is awesome. It'll put a glaze, imperceptible to the human eye, over your image that you've created as an artist.

Jimmy White (30:20.334)
And so the model won't understand that image as intended. So if you created a picture of a field or something, it might not recognize that it's a picture of a field. And so it won't be usable in a diffusion-type model for generating images, et cetera. Or Nightshade, which actually goes one step further: you can say what you want the model to interpret your image as. So, I don't know, if you draw an image of an airplane

and you tell Nightshade to put another layer over it that's, again, imperceptible to the human eye, to say this is actually a picture of a cow, then when people are typing in, generate a picture of a cow, in diffusion or in one of the models, it'll give you a picture of an airplane. And so what's really interesting with that is it's artists now being armed by threat researchers with how to protect their content, so that the model providers look like idiots and then have to stop sourcing your content because they keep showing the wrong thing.

So with all of this combination, we saw this pattern emerging where the number and type of threats were growing, both from a white hat and black hat perspective. And we realized that besides protecting models, we also need to arm red team people in companies to answer that really hard question, which is: hey, we're picking this model to use for this use case. And that's great. That's the thing to do. Pick the right model for the right use case.

And they use scores like MMLU and other scores to decide which is the right model for the use case. And hopefully they're not using synthetic data to evaluate the model. But they're missing the security score. So what is the score? How does this model perform in terms of security against all these different threat types? Is this a public facing model? Is this a model that children will consume? Is this a model that is in a jurisdiction where certain things are legal?

And so we built this red teaming product, excuse me, and we're launching it March 31st, that effectively, fully automatically, tests your model against thousands and thousands of tests and gives you a report card and a score. But it'll also let you supply custom intents. And we use a thing called agentic warfare, where we use armies of agents to circumvent controls on your model to steal information. So if you want to, I don't know,

Jimmy White (32:47.446)
steal Francis' social security number, it will ask the model to do that. And if that information is available, our agentic warfare will find it. So that's the coolest thing we've been working on for the last while. And I'm excited for that to be out there.
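
For a sense of what "armies of agents" means mechanically, the sketch below shows the shape of the idea: many attacker strategies rewrite one custom intent, and each probe is checked for a leak. It is a toy illustration built on assumed placeholder functions, not CalypsoAI's product.

# Toy sketch of agent-driven red teaming for one custom intent.
def red_team(intent: str, attack_strategies, query_target, secret_pattern: str):
    findings = []
    for strategy in attack_strategies:
        prompt = strategy(intent)            # each "agent" rewrites the intent differently
        response = query_target(prompt)
        if secret_pattern in response:       # did the probe circumvent the controls?
            findings.append((getattr(strategy, "__name__", "strategy"), prompt))
    return findings                          # feeds a report card / security score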

Francis Gorman (32:58.863)
So don't fall out with Jimmy is what I just heard there. So Jimmy, look, that's been fantastic. I know we're going to run short on time fairly soon, but any predictions for 2025 in this space? What are we going to see?

Jimmy White (33:00.674)
Ha ha ha!

Jimmy White (33:11.468)
The biggest prediction I have is, now that regulation is taboo, we're going to see massive embracing of agent technology. So if you're a security professional and you're buying software from vendors, you need to start asking agent questions. What type of agents are being used? What frameworks? Does the agent have read-write capability? What tools does it have under the surface, et cetera?

And then if, for your own company, you're creating agentic solutions for efficiencies, read-only isn't a panacea. So you've got to understand that if it has access to a database, it can still perform a massively inefficient SQL statement. And if that's the case, you can lock up databases pretty quickly at scale. So agents are the big thing to watch out for this year. I think it will be really successful and really good. We'll see some

own goals scored, unfortunately, as well, but it's definitely the new threat horizon for AI.
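
To make the read-only point concrete: a statement can be pure SELECT and still bring a database to its knees. Below is a hypothetical example with made-up table names, plus the sort of basic guardrail an agent platform might put in front of agent-issued SQL; it is an illustration, not a recommended policy.

# A read-only query that is still catastrophic at scale: an unconstrained
# cross join materialises every pairing of two large tables, then sorts it.
EXPENSIVE_READ_ONLY_QUERY = """
    SELECT o.order_id, e.event_type
    FROM orders AS o
    JOIN events AS e ON TRUE      -- no real join condition
    ORDER BY o.order_id
"""

def passes_basic_guardrail(sql: str) -> bool:
    # Crude illustrative checks only: reads only, and a bounded result set.
    normalized = " ".join(sql.lower().split())
    if not normalized.startswith("select"):
        return False
    if " limit " not in f" {normalized} ":
        return False
    return True

assert not passes_basic_guardrail(EXPENSIVE_READ_ONLY_QUERY)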

Francis Gorman (34:14.477)
That's amazing. Look, Jimmy, thanks a million for talking to me. Really, really appreciated it. And it was a great chat.

Jimmy White (34:20.398)
Thanks a million, Francis. My pleasure.

Francis Gorman (34:22.425)
Thank you.

