EDGE AI POD

Beyond the Cloud: The Hidden Security Challenges of Edge AI

EDGE AI FOUNDATION

"Do you trust your AI models? Honestly, I don't trust them. We should not trust them." These powerful words from STMicroelectronics' Mounia Kharbouche perfectly capture the security challenge facing the edge AI world today.

As organizations rush to deploy AI workloads at the edge, a complex security landscape emerges that demands careful navigation. This fascinating panel discussion dives deep into the three major threat vectors organizations must prepare for: algorithmic attacks that manipulate model behavior, physical attacks on hardware, and side-channel analysis that can steal proprietary models in mere hours.

Through vivid examples—like specially designed glasses that can fool facial recognition systems—the panelists demonstrate how seemingly minor vulnerabilities can lead to major security breaches. They explore the security paradox of edge deployment: while distributing AI provides resilience against single points of failure, it simultaneously creates numerous potential attack surfaces requiring protection.

The conversation reveals a critical tension between economics and security that often drives deployment decisions. Organizations frequently prioritize cost considerations over comprehensive security measures, sometimes with devastating consequences. All panelists emphasize that security must be a fundamental consideration from the beginning of any AI project, not an afterthought tacked on at deployment.

Looking to the future, the discussion turns to emerging threats like agentic AI, where autonomous agents might access resources without proper security constraints. The panel concludes with a sobering examination of post-quantum cryptography and why organizations must prepare now for threats that may not materialize for years but will target systems deployed today.

Whether you're developing edge AI solutions or implementing them in your organization, this discussion provides essential insights for securing your systems against current and future threats. Join us to discover how to balance innovation with protection in the rapidly evolving world of edge AI.

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Thanks everyone. So it's 2 o'clock Friday afternoon. We are truly on the home straight and it's great to have you all still with us. We have a really exciting panel this afternoon dealing with the really important topic of edge security. My name is Patrick Ward. I work with the Edge AI Foundation. I look after partner development for Europe, so if we haven't had a conversation already and you're not a part of the foundation, let's have a conversation before you leave today. Now, why don't the panel introduce yourselves and then we'll get going. Mounia?

Speaker 2:

Sure. Thank you, Patrick. Hello, I'm Mounia Kharbouche. I work for STMicroelectronics in France, mostly focusing on the different threats we might face in the future for AI, so really the security issues that we will discuss today with this fantastic panel.

Speaker 1:

Looking forward to it, Mounia.

Speaker 3:

Matthias. Hi, I'm Matthias Kohlke and I work as a consultant for Thistle Security, for the Korean AI accelerator company DeepX, and for a bunch of other compute module companies, and so I think it's going to be a great discussion.

Speaker 1:

Looking forward to it, Matthias. And Ali?

Speaker 4:

Ali Ors. I work for NXP Semiconductors. I lead the overall AI strategy and innovation for the products that we have for AI enablement. That includes all our hardware and software offerings under the eIQ branding that we have at NXP, across our microcontroller and applications processor products.

Speaker 1:

That's awesome, Ali. Great to have you with us today. So, look, let's put a bit of definition around security, as long as I stay standing on this chair. When we create an architecture for a system, we make choices, and when we make those choices, it seriously affects the security landscape. As we move AI workloads to edge devices, maybe I'd ask each of you: how does that affect the security profile? What are the kinds of risks that are emerging as we move AI to the edge? Mounia, let's start with you.

Speaker 2:

Yeah, that's a great question, Patrick, to begin this panel discussion. Maybe I will start with something that was said on the first day by Lucas Garcia, which I found very interesting. He was asking the audience: do you trust your AI models? And to that question I would say, honestly, personally, I don't trust them. We should not trust them. There is a lot that could go wrong. Honestly, it could go wrong on different levels, actually.

Speaker 2:

So if we consider AI models, the AI algorithms, I think for this audience it would be interesting to divide the threats into maybe two categories. Let's start with algorithmic attacks. These are really attacks that use the algorithms themselves, that take advantage of the algorithms in order to change their behavior, to manipulate the behavior of the model. I can give an example of that, Patrick. Let's take face recognition. There is a model trained on the four of us, where the model will just try to recognize who is actually in front of the building, for instance, and then maybe I'm an attacker, a bad person, and I want to access a building to which I should not have access. Let's say the model actually knows me like this. So everyone, I will do a Superman transformation here, please bear with me. So yeah, let's imagine...

Speaker 1:

The AI model knows us like this. It's been trained on our faces. It recognizes us for authentication, yeah.

Speaker 2:

And if I am like that, I would be recognized as Mounia, as normal. But I printed these specific glasses, which actually appeared a couple of years ago in the literature. With these glasses I now look like Patrick, and I have access to the buildings of the Edge AI Foundation or things like that.

Speaker 1:

I'm using my imagination here, okay. So yeah, for everyone in the audience.

Speaker 2:

Hopefully you can see that it's me, but the AI model won't know that it's Mounia. But now I can access whatever building

Speaker 4:

Patrick has access to.

Speaker 2:

So those are really the kinds of threats that we should be informed of, be aware of, and really try to protect against. This is the first category. I will go really quickly over the second one, because it's less related to the model itself and more to the hardware devices, and with Matthias it will be closer to his topic of focus within these technologies. It's the physical attacks that we see in a general way in cybersecurity. It has been shown in the last five or six years that all these physical attacks actually apply just as well to AI models as they did to cryptography.

Speaker 2:

So, for example, we have an AI model, the AI algorithm, and now we will try to modify some weights.

Speaker 2:

Again, if I take the same example as before: I have a very big model, millions of parameters, but I want to change just a couple of, I don't know, 10 weights, and again I will look like Patrick. In the same category, I would say we also have the third threat, which is side-channel analysis. Here we have a very big model, trained, I don't know, on a very interesting data set. It took the engineers maybe months, if not a year or two, to develop this kind of model, and personally, as an attacker, I want to recover that model with as little effort as possible and in as short a time as possible, maybe a couple of hours, and then I get a model that has been trained for months, if not more. So these are the kinds of threats that I think are really important to take into account today when we consider AI, and all the issues this could lead to when we develop models and decide how to implement them.
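
To make the weight-modification threat concrete, here is a minimal sketch in Python (PyTorch assumed). The toy four-class "identity" classifier, the layer name, and the perturbation size are illustrative assumptions rather than anything described in the episode; a real fault-injection attack would corrupt stored parameters through hardware glitching rather than through code with write access to the weights.

```python
import copy

import torch
import torch.nn as nn

def perturb_weights(model, layer_name, n_weights=10, value=50.0, seed=0):
    """Return a copy of the model with a handful of weights in one layer overwritten.

    Stands in for a fault-injection attack that flips bits in stored parameters;
    here we simply overwrite a few values with a large constant.
    """
    attacked = copy.deepcopy(model)
    weights = dict(attacked.named_parameters())[layer_name].data.view(-1)
    gen = torch.Generator().manual_seed(seed)
    idx = torch.randint(0, weights.numel(), (n_weights,), generator=gen)
    weights[idx] = value
    return attacked

if __name__ == "__main__":
    # Toy stand-in for a face-recognition model: 3x32x32 input, 4 "identities".
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))
    model.eval()
    x = torch.rand(1, 3, 32, 32)

    print("clean prediction:   ", model(x).argmax(dim=1).item())
    attacked = perturb_weights(model, "1.weight")
    # With only ~10 of ~12,000 weights overwritten, the predicted class will
    # typically change, which is the point of the attack.
    print("attacked prediction:", attacked(x).argmax(dim=1).item())
```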

Speaker 1:

These are the things we need to be aware of, and then put things in place to protect ourselves against them, and to be able to deal with them when they do happen. Matthias, maybe I could come to you.

Speaker 3:

Yeah, right, it's always quite interesting when you talk about AI security. Someone like me, who comes from hardware and then moved to software, thinks about security as protecting the model from access: data at rest, secure OTA. But there is also that kind of attack of changing the weights of the model. But yeah, let's focus on that side of securing the communication, because that's what we also do at Thistle. I'm always surprised how many people are unaware that good security requires a combination of software and hardware.

Speaker 3:

Nowadays you think everything is just an app, but no: where do we store the key? What attacks exist? It's always a battle between attackers and engineers, so it's always a combination of the two, whether it's embedded in the chip, like secure boot and TrustZone already from the vendor, or whether it's a third-party element. And then at least make sure that no one gets hold of the model in an easy way, which would otherwise make it much easier, for all the other products out there, to create tricks to change the weights. So yeah, it's on both sides.

Speaker 1:

So we're hearing AI-specific stuff, the model, the data, the algorithm, et cetera, but also the usual cybersecurity, data at rest, data in transit, all of which still applies, especially so in fact. Ali, do you want to come in there?

Speaker 4:

Maybe I'll take it a bit back to the fundamentals; we jumped right into model protection. When we look at edge deployments, you're also looking at the benefits and the pros and cons around edge versus cloud deployment, or edge and cloud deployments. When you're running something in the cloud, you have a single point, or an aggregated point, of potential failure and potential attack. It's a bigger target to attack. So when you start moving to the edge, you gain resiliency from the fact that you're now distributing your system. But you're also proliferating nodes that are open to attack. So while you have the resiliency that comes from the distribution, you end up with vulnerabilities, as Mounia and Matthias mentioned, around opening yourself up to physical attacks, because the cloud is less reachable from a physical point of view, but the nodes at the edge, when you've deployed on the edge, open you up to physical attacks. You're still open to traditional software attacks, but you're now also an accessible platform for edge attacks like fault injection, side channel, and denial-of-service type aspects as well. So you need very robust protection, with the traditional cybersecurity and traditional security elements, on that edge deployment itself.

Speaker 4:

Other benefits of going to the edge: of course, if you're dealing with privacy-related data or proprietary data that you want to maintain, running AI on the edge gives you the benefit of keeping data that is privacy-critical on the edge. You're potentially not pushing it back onto a transmission channel, where you could have vulnerabilities in the communication channel itself. But that also means, again, that you need very robust protections on the edge device itself, so that the private data you're trying to maintain is protected. And we're also getting into additional requirements around regulations, like the Cyber Resilience Act and the EU AI Act, and these are also coming into play in pushing what you do on the hardware. From NXP's perspective, we're a semiconductor vendor, so we're on the hardware layer and the enablement layer. But our customers and our customers' customers are sitting on the software stack more and more, so they need to be aware, and enabling all of that from a protection perspective is critical.

Speaker 1:

It's a pretty complex picture. There are a lot of layers to this, a lot of things to be considered. Mounia, you talked about algorithmic attacks, so we're talking about getting access to, taking control of, and in some way modifying the algorithm itself. Can you just talk us through a little bit of how that happens and how we can protect against it? Just dial into that a little bit more for me.

Speaker 2:

Yes, sure. Algorithmic attacks can actually happen in different fashions. It depends on the means and the goals of the attacker. For a very powerful attacker, we can imagine that he has access to the model; he has access to everything, actually. So if I know the model, and maybe even the data set on which the model has been trained, it's easier to imagine images. Before, I took the analogy of facial recognition, where my glasses lead the model to misinterpret the right class, the right person.

Speaker 2:

Yes, and in that case it works because I choose a very minor perturbation that is realistic: glasses. Many people here today have glasses, just like me, so it's realistic. We are trying to find the minimum perturbation within the input image that leads to a prediction error, and when we know the model, so the weights and the topology of the algorithm that we are currently implementing, it's way easier. For instance, we will most usually base that on the gradients of the model, just to give you a bit of the specifics.
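
As a rough illustration of the gradient-based perturbation Mounia describes, here is a minimal sketch in Python (PyTorch assumed) of the classic fast gradient sign method. The tiny untrained model, input shape, and epsilon budget are illustrative assumptions; a real attack would target a trained face-recognition network and constrain the perturbation to a printable region such as glasses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturbation(model, x, label, epsilon=0.03):
    """Return a copy of x nudged in the direction that increases the loss (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Small signed step along the loss gradient, kept inside the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in for a face classifier: 3x32x32 image, 4 "identities".
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))
    model.eval()
    x = torch.rand(1, 3, 32, 32)
    label = model(x).argmax(dim=1)          # class currently assigned to the image

    x_adv = fgsm_perturbation(model, x, label)
    print("clean prediction:      ", label.item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```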

Speaker 2:

So yeah, we see this kind of thing quite a lot in the literature. But maybe I can go back to what Ali was mentioning about distributed deployments, and in that case I think algorithmic attacks are even more important, because in a decentralized architecture, which we see more and more, also in current developments at ST, we have these different models at the edge. If we imagine one of them being malicious and performing this attack on the device, for instance, I'm one of the clients providing, you know, the data to the server, and just like that I can manipulate all the other clients. So this is even more powerful in a decentralized architecture, with data poisoning for instance.

Speaker 1:

So by taking control of one point, you're then getting access to others and can take more control of the distributed system. Exactly.

Speaker 2:

You can even manage to manipulate the whole circuit, the whole framework, maybe even creating backdoors leading to different functionality of the full model.

Speaker 1:

Right, okay, matthias, I know you wanted to come in there as well.

Speaker 3:

Yeah, that's the same thing I wanted to say about the difference between distributed and undistributed. You might actually have the security on the data all solid and trusted, but someone manipulates the data algorithmically, and then, with the security layers all still intact, the data going back to the server, for training and influencing and then synchronizing with the other nodes, basically gets the wrong input. So you see, it's not only about the complete security on the data exchange and the cryptography side; there are still other attacks that keep everything functioning but just manipulate some training data.
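
A minimal sketch of the poisoning scenario described above, assuming plain federated averaging with NumPy; the update values and the single exaggerated malicious client are illustrative only, and real attacks (and defenses such as robust aggregation) are considerably more subtle.

```python
import numpy as np

def fed_avg(updates):
    """Plain federated averaging: the server simply averages client updates."""
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_update = np.array([0.1, -0.2, 0.05])

    # Nine honest clients send small, noisy updates around the true direction.
    honest = [true_update + 0.01 * rng.standard_normal(3) for _ in range(9)]

    # One malicious client sends a large update pointing the opposite way.
    malicious = [-20.0 * true_update]

    print("aggregate without attacker:", fed_avg(honest))
    print("aggregate with attacker:   ", fed_avg(honest + malicious))
```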

Speaker 1:

Okay, and Ali, I wanted to bring you in there also, because I guess when we're thinking about the architecture, where to run the workload, cloud, edge, some sort of hybrid configuration, how does security influence those choices? We've looked at it from the angle of: we make those choices, and there are security implications. Let's look at it from the other way: how does security influence the architecture?

Speaker 4:

Typically, I mean, the first thing that influences where you run the workloads, today at least, is the cost structure. Where do you have the best return on your investment? It's a very economics-driven decision, I think. More and more, especially with regulations and some of the threat levels coming up, we want to shift that discussion to a security-oriented discussion as well, of where some of these workloads should run.

Speaker 4:

Now, cloud is very well suited for training, for higher-level compute and higher-level reasoning aspects, and there what you need on the cloud side is very strong perimeter protection and a very good data governance structure around what you're running in the cloud. When you start getting into the edge, you're very good at latency and real-time operations, you're very good at data privacy protection, but you need very robust security on the device itself. And there's more and more capability that can actually be pushed to the edge.

Speaker 4:

Yesterday, in another panel, we were talking about LLMs and generative AI. Is it possible to run them on the edge? Yes, you can run these generative AI models now on the edge, which leads to additional surfaces and additional vulnerabilities that open up when you start talking about generative AI running on very small edge nodes. So the decision is typically: where does it make sense to run the workload? A lot of the decision right now is around the economics of where you run it, trying to run it on lower-cost devices, more distributed, trying to save on your connectivity costs and, of course, your cloud costs, which can aggregate quite high, by moving down to the edge. But you should also move down to the edge with a security-first mentality: from a security perspective, does it make sense to be on the edge?

Speaker 1:

So there's a tension there between the economics, the security, and you're making the optimal decision within those constraints. I think you wanted to come in there.

Speaker 3:

Yeah, I think that's very interesting. It's no longer a question of whether the device is capable of running things, where you needed the server infrastructure before. On the device, it's really about the cost, and I have a good example.

Speaker 3:

There are a lot of these security cameras you see now on the market. They send the picture to the cloud and the detection happens there, maybe with movement detection done locally to save some battery. I had these cameras, and there's a monthly fee you have to pay, because they have to pay for the data traffic as well; let's not forget that a data center has to handle that. So I switched to a solution where people say, oh, you have it on-premise, you know, over the network, and all these cameras I have now have edge AI capabilities.

Speaker 3:

But then I looked at the power consumption it takes me at home to have a PoE switch and the server with the hard drive, and what these cameras cost me over the year, and I realized I'm actually ending up paying more money than the subscription. Okay, Germany has very high electricity costs, right, and this is also somewhere you can shift the cost to the customer. So I think that as more and more applications can run on the edge, people will run them there for economic reasons, and we see that with the likes of Qualcomm and DeepX bringing out chips which can run even large language models. Then you don't have to pay for the data center, you don't have to pay for the traffic, you're dealing just with metadata. And in the end the person is happy not to pay a yearly subscription, but you pay for the electricity.

Speaker 1:

But there's a total cost of ownership consideration that needs to be calculated there. Mounia, I want to bring you in there then, changing tack a little bit, to talk about standards and the role that standards play in security. What standards are at play right now, what's emerging, and what role are they going to play in helping the security agenda?

Speaker 2:

I think there is actually a point to make that relates to the previous question, on standards for defenses, for instance. Currently we are missing defenses that are efficient for many realistic use cases. Usually the defenses that are developed, and there are really many of them, get attacked just a couple of days or a week afterwards and are then made ineffective, or in most cases, as I was saying, they are not useful. So that's why it helps if we have standards, if we have regulations that define metrics.

Speaker 2:

We actually discussed metrics quite a lot today, for instance in Danilo's workshop. Metrics are really important in the sense that these standards, more specifically, will enable us to design a common methodology between all the main actors worldwide, so that everyone respects the same inputs, the same constraints, when they are designing their models. That is really the main advantage I see in standards, and why it's really important to have something that makes things easier for security. And you were asking, Patrick, for just a couple of examples. There are quite a few initiatives. Ali mentioned the CRA and the AI Act; these, for instance, are regulations from Europe, for the EU, to define the criticality of a given application. So, for instance, if we come back to facial recognition and we say it's really critical, we need to have this kind of standard defined, and for that, in the EU for instance, there is ETSI working on that. Okay.

Speaker 2:

With a working group, and also NIST in the US developing things related to that. So, yeah, I think it's really important. I wouldn't say we need to be big fans of it, but I think it's important to do. But we should still keep some flexibility.

Speaker 1:

They have a role to play.

Speaker 2:

They have a role to play, but they should be flexible. As we were saying from the beginning, AI is fast evolving and security is really fast evolving.

Speaker 2:

So if the standards are too strict, too rigid for some specific functions, it will be impossible for us to actually apply them in the field. That's why there are these initiatives that are ongoing. Unfortunately, for now, it's still work in progress, but yeah, hopefully we can reach something in the near future.

Speaker 1:

It's a great point, Mounia. Everything's evolving so fast right now that writing standards for this stuff must be a complete nightmare. Matthias, did you want to come in there as well?

Speaker 3:

Yeah, so I think again we should divide it into the technical security standards, where I'm pretty confident things work fairly globally, compared to when it's about privacy and what you can ethically do. On the technical side, I don't think it's so much a problem to have a worldwide standard. Maybe that's because with these standards it's often engineers who are involved, and with the other ones politicians are involved. It's a dangerous mix.

Speaker 1:

And so I think it's a dangerous mix.

Speaker 3:

And therefore I think there are more challenges there. You see how European companies already complain about regulations, that they cannot make certain uses of data, and the United States is the opposite. So I think that is probably the more complex topic in which to find a common path, also if you want to build a better global product, compared to data security, where I think the engineers basically speak a common language. Yeah, it's better established.

Speaker 1:

I wanted to talk about a topic that's talked about a huge amount, it's very topical at the moment, which is agentic AI, and specifically around running agents on endpoint devices and in a distributed way. Ali, maybe I could bring you in here. What are some of the considerations there around trusting these agents and managing them?

Speaker 4:

You shouldn't trust them. But anyway, let me start by level setting around AI agents and agentic AI. They're related but distinct terms. AI agents are pieces of software, pieces of applications, that have a task to complete and might have some level of autonomy in getting that task done, with an AI model as part of it. And this is an evolution with generative AI, from LLMs into vision language models and vision language action models and the agents that relate to them. Agentic AI is really an orchestration of these, you know, sometimes multiple agents, and more of a framework to create more autonomy around task completion and how they interact with each other.

Speaker 4:

Now you start getting into one of the protocols or infrastructures that is somewhat of a de facto standard around this, something that Anthropic released called the Model Context Protocol, MCP. It's interesting because, I won't say it's hype, it's extremely powerful. It's able to orchestrate and even have public MCP servers where you can run local models and local capabilities, but also source from public sites like YouTube, like Salesforce from an organization's perspective, or even GitHub and Hugging Face, and pull in data. And from a security perspective, what you see with MCP is that even though there are provisions for token-based authorization as agents pass information to each other or access certain services, it is not mandatory to have authorization. So suddenly you can end up having set up an agentic AI system that has no security around it, accessing public sources, accessing private sources, potentially accessing things that have been manipulated very easily in a public domain. So there are a lot of layers of vulnerability that this creates, and because it's such a hot domain, people are using it without thinking about the security. It's how it always happens: security is sometimes an afterthought, and people are like, oh, wait a second, we've really opened ourselves up, now we've got to close it back down. And by the time you've figured out your vulnerabilities, you might find that your IDs have proliferated, your authorization pieces have proliferated.

Speaker 4:

So you need to structure any of this agentic AI stuff with a lot of the fundamental security concepts that already exist, and have existed even with HTTP, you know, Ethernet-based protocols, et cetera. You've got to limit the accesses you're granting, so the agent coming in only has the authorization it needs to complete the task it's given. You have to work on a least-privilege principle, or a restricted-privilege principle, so it only accesses resources that it needs to complete its task and nothing else. You're sandboxing these agents so that they're contained. So all of these have to be put in place, and these are concepts that exist. We just need to be careful, as something new pops up in the AI world, like agentic AI, and the next buzzword is physical AI, where it starts getting into moving agents, to leverage the technology that already exists to protect these systems.
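
As an illustration of the least-privilege idea, here is a minimal sketch in Python of a scope-checked tool registry for an agent. The tool names and scope strings are hypothetical, and this is a generic allowlist pattern rather than the MCP specification or any particular SDK.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

@dataclass
class ScopedToolbox:
    """Expose tools to an agent only if its token carries the matching scope."""
    tools: Dict[str, Callable[..., object]] = field(default_factory=dict)
    required_scope: Dict[str, str] = field(default_factory=dict)

    def register(self, name: str, scope: str, fn: Callable[..., object]) -> None:
        self.tools[name] = fn
        self.required_scope[name] = scope

    def call(self, name: str, granted_scopes: Set[str], *args, **kwargs):
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        needed = self.required_scope[name]
        if needed not in granted_scopes:
            raise PermissionError(f"agent lacks scope '{needed}' for tool '{name}'")
        return self.tools[name](*args, **kwargs)

if __name__ == "__main__":
    box = ScopedToolbox()
    box.register("read_sensor", "sensors:read", lambda: "23.4 C")
    box.register("push_firmware", "device:write", lambda blob: "flashed")

    agent_scopes = {"sensors:read"}          # least privilege: read-only agent
    print(box.call("read_sensor", agent_scopes))
    try:
        box.call("push_firmware", agent_scopes, b"\x00\x01")
    except PermissionError as err:
        print("blocked:", err)
```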

Speaker 1:

Okay, thanks, Ali, and I guess those agents are also accessing, as you say, resources, information, data, etc. Matthias, we need to keep that in mind too.

Speaker 3:

Maybe I always like to talk a little bit in pictures for people who are not so technical.

Speaker 3:

So, to sum up what Ali said: you have an agent, and maybe you as a company have control over this, but there are external resources. It feels to me like I'm on a call with a service agent; you turn on your computer and then, in whatever you call it, Google, Zoom, naming them all, you see a lot of people without a picture, and you ask, oh, who else is on the call?

Speaker 3:

Yeah, don't mind our intern, we're training him how to do customer calls. And it's a little bit like that, especially if it then reaches out to other service providers. We talked about this security risk earlier this week: what if I have an idea and I want to check, oh, does someone already have a patent on this? I go on a search engine and I say, look for an existing patent for this, and then the one who owns the search engine says, oh, that's a great idea, there's no patent for this. So yeah, I think it's kind of the same thing: the security risk is really in managing and knowing the sources and the other agents you're basically connecting to.

Speaker 1:

So the very process of searching for a particular patent is alerting someone, some other party, to the fact that this is an area of innovation and maybe worth looking into. Wow, okay. If you don't mind, could I jump in there? Yeah.

Speaker 2:

yeah, yeah, I think. Yes, I agree with Matthias. I think he took us a bit, even a further way, you know, with a bigger view of we're getting a bit farther than IGN-GKI, maybe more applying to LLMs and GDI, and I wanted to make a point that in that case too, there are a lot of vulnerabilities on which we are today honestly not doing. There are a lot of works, for sure, it's the beginning of that, but the numbers are there and we should take care of those, just like before algorithmic attacks with prompted injection resistance that could lead, you see, two very critical issues.

Speaker 1:

That's why, yeah. Okay, great point. Look, we're going to go to the audience shortly, but before we do, I wanted to get a perspective from each of you on where this is all headed. So I'm asking you to look into the future here: what's coming down the tracks in terms of security, or in terms of other developments that could impact the security agenda? Mounia, maybe I could start with you. Sure.

Speaker 2:

So for the future, that's, yeah, a great question. Well, I think the main topic that's really interesting for us for the future, mostly as an IP provider, is really the hardware: to be able to leverage the right hardware to run all these workloads on the edge. For instance, during this event we discussed the Neural-ART accelerator a couple of times, with multiple demos. With this kind of thing there is a very big gap that we are trying to bridge, and that is what I think will be really important for the future: for instance, to respect privacy, to have the models on the edge, and why not have large language models inferred directly on the edge. I think that's one of the major points that are really important for us. And maybe the second point I would make for the future is to change the way we are working, actually.

Speaker 2:

Yeah, the way we are working, and mostly for security purposes. In security, it's always the very last point that comes up. Pete actually introduced the event by asking how we can take AI to the next level, and for that we had very interesting topics all week, but I think security should be one of them. There are always, yes, compression and quantization, very important topics for performance and efficiency, but if we don't take care of security as soon as the model is being trained, for instance with adversarial training defenses that are efficient, it will not work, or it will be too vulnerable for real implementation.

Speaker 1:

So it's about the fundamental principle: think about security from the start, don't try to design it in at the end. Okay, absolutely. Matthias, looking to the future, what's coming?

Speaker 3:

I think when you look at it, you just told me that the Edge AI Foundation was more tinyML at the beginning, and I think it's the right thing to just say edge AI, because we see the big AI, the large language models, also moving there, so that things can run unconnected, but also for the commercial and power aspects, and latency and all of that, which benefits edge AI. So we will see large language models coming there, and also that it changes our lives. Just by the way, the tie in my picture here is AI-generated.

Speaker 3:

I didn't have a picture, and someone helped me with the picture, and I thought it would be cool for a conference like this to sneak some AI into it; the original was just without a tie. So it's changing the work already. And then one thing I think will be particularly important: can I trust the models? It's kind of like statistics, you know, don't believe any statistics you haven't falsified yourself. So who is training what? Who defines what the truth is? We see that already, I guess, hiding behind the political, you know, where you ask this or that. And that's going to be a challenge, how you manage that, because what's the source of...

Speaker 1:

What's the source of this data? What's the source of this, and how can I trust it? Ali, same question to you in terms of looking to the future.

Speaker 4:

So, a couple of points touching back on what was said. Security is a fundamental topic to consider as we're deploying systems; to actually make these systems really deployable and maintainable out in the field, security becomes extremely critical. You still need to protect the overall system end-to-end, with AI or without AI. I don't want to put myself out of a job as a marketer and a guy that talks about a lot of AI stuff, but what is fundamental in what we do for any application still applies to AI-based applications, where the model is part of the overall application. What changes is that AI, especially generative AI, creates additional angles for attacking the system: instead of fault injection, now it becomes prompt injection, trying to decipher what the system does and trying to get to the keys, the keys to the kingdom in a sense, on these systems. So you're protecting it end-to-end, finding ways of not just protecting the AI model but protecting the overall system. On the AI model: that's now another asset that companies are spending time and resources on, which means it has an inherent value. So you've got to protect the model itself as well from being copied, or potentially reused, or maliciously modified for various purposes.

Speaker 4:

We talked about standards. I'm not a big fan of standards because I sometimes feel they limit innovation. But one area where standards make a lot of sense is security and keys. I mean, we've been successful in creating protection with ECC, with RSA, public key encryption, and all of these are now getting to a stage where they're becoming vulnerable as we start talking about a post-quantum age, with quantum computers being seen as a way to break all these keys that are the fundamental protection we have. So PQC is an extremely critical aspect, and sometimes people say, well, you know, quantum computers, they're not here yet, so why are we worrying about that now?

Speaker 4:

Today, the biggest quantum computer, IBM has it, is about 1,100 qubits, so just over a thousand qubits of capability. These things, I don't know if you've seen pictures, don't look very stable. They need to be supercooled. They look really fragile, and they are; they break down very often and can't compute for very long periods of time. So why are we worrying about that now? To break an RSA key, it's thought that you would need about a million qubits. So we're at 1,000, getting to a million, and estimates are that's in a five-year, maybe ten-year span that you'd get there.

Speaker 4:

But NXP is one of the companies working with NIST; we have algorithms that have been accepted, and now we're already putting PQC into devices, because attackers harvest today and might crack it later. These devices that we're putting out into deployment will be there for five years, 10 years, 15 years. If you talk about microcontrollers and apps processors in city infrastructure and electrical grids, for example, these things are sometimes 20, 30 years old. So if you don't have the protection already in place, the attackers eventually get the technology and you're vulnerable. All of these need to be built early, and you start protecting early. It's a cat and mouse game, so there are always additional things that need to be put in place, and you add those as things come up.
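
As a sketch of what post-quantum key establishment looks like in code, here is a minimal example assuming the open-source liboqs-python bindings are installed; the algorithm name depends on the liboqs version (older builds call it "Kyber768"), and this is illustrative rather than a description of NXP's products.

```python
# pip install liboqs-python   (also requires the liboqs C library)
import oqs

KEM_ALG = "ML-KEM-768"  # NIST-standardized lattice KEM; name varies by liboqs version

# The receiver generates a post-quantum key pair; the sender encapsulates a
# shared secret against the public key. The goal is to resist "harvest now,
# decrypt later" attacks against traffic recorded today.
with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()

    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    secret_receiver = receiver.decap_secret(ciphertext)

print("shared secrets match:", secret_sender == secret_receiver)
```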

Speaker 1:

Wow, okay, great. Ali, thank you. So let's go to the audience and see what questions you have on a Friday afternoon. Put the panel to the test. Any questions? No questions? Well, let me ask one more question then, as we wrap up. I just want to get your perspective on: if things do go wrong in this kind of new distributed world of AI on our endpoint devices, how do we deal with that? Are there differences in the way we deal with that and the way we contain it? Matthias, do you have a perspective on that?

Speaker 1:

Yeah, maybe you catch me on the wrong foot there. But I don't want to put you on the spot. Ali or Mounia?

Speaker 3:

Yeah, give me an example, a concrete example.

Speaker 1:

Well, I'm considering a situation where we've got some sort of penetration, we've had some sort of security incident happen. There are ways of dealing with that in the centralized world. I'm just wondering, how do we deal with that in a distributed world?

Speaker 3:

Yeah, okay, then.

Speaker 3:

This is exactly why it's so important that you have the security in place, like the root of trust, so that the device can really verify that an update comes from the right authority. Because if someone basically breaches the main central point where you get the updates from, then of course that can be fixed; it's going to be for a short time, and then you at least have to make sure that you can sign the correct model and push it back to the devices, and they can authenticate the source. Of course, if someone were to breach the keys, that's another topic, right. But this is why you want to have secure boot and a verified image whose origin the device can verify, so that you then have back the ownership from your data center or whatever, and you can basically recover from this and they don't permanently have ownership of the edge.
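
A minimal sketch of the signed-update check Matthias describes, in Python using the cryptography package with Ed25519 signatures; in a real device the trusted public key would be provisioned into the hardware root of trust at manufacture rather than generated at runtime as it is here.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_model_update(model_bytes: bytes, signature: bytes, trusted_public_key) -> bool:
    """Accept a pushed model only if it was signed by the trusted authority."""
    try:
        trusted_public_key.verify(signature, model_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Demo only: generate a signing key pair here. On a device, only the public
    # key is present, anchored by secure boot and the root of trust.
    signing_key = Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()

    model_blob = b"\x00example-model-weights"      # placeholder model artifact
    signature = signing_key.sign(model_blob)

    print("genuine update accepted: ", verify_model_update(model_blob, signature, public_key))
    print("tampered update accepted:", verify_model_update(model_blob + b"!", signature, public_key))
```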

Speaker 2:

Okay, I might add something to what Matthias just said. This is of course important between the server and the edge devices, but the security of the edge device itself is also really essential, and something we should take care of. For instance, as I was mentioning, why not think about local devices that would have intrinsic security for the models, so that one cannot perform data poisoning attacks or things like that. So it's on two levels: on the federated server workload, but also on the edge device. And I think that, depending on the criticality of the application, the response of the user would be quite different, depending on whether it's just a notification of, I don't know, the level of beans in the coffee maker, for instance.

Speaker 1:

Sure, so it's about the risk of the scenario, exactly.

Speaker 2:

If it's a very critical scenario, for instance autonomous driving, in that case we can think about safeguards like shutting off the service, or putting in redundancies or things like that, so that we are sure the actual response of the device is the one we were hoping for and the one we were expecting.

Speaker 1:

Which is a very nice tie-in back to where you started, asking whether we put our trust, our lives, in these models. Mounia, Matthias, Ali, I just want to say thank you so much for your participation today. Great insights, thank you.

Speaker 2:

Thank you, thanks everyone, thank you.