EDGE AI POD

Smart Chips, Big Dreams: How NXP is Changing the AI Game

EDGE AI FOUNDATION

The artificial intelligence landscape is undergoing a fundamental shift – moving away from simply compressing cloud models to fit edge devices toward developing AI that's truly "born at the edge." Davis Sawyer, AI Product Marketing Manager at NXP Semiconductors, guides us through this transformation and reveals how NXP is revolutionizing the way intelligence is built into our everyday devices.

Sawyer begins by redefining what we mean by "the edge" – a vast spectrum ranging from network infrastructure handling millions of connections down to the dozens of microcontrollers in modern home appliances and vehicles. What makes edge AI unique isn't just about size constraints, but the fundamentally different operating environment. While cloud models enjoy virtually unlimited power and standardized computing environments, edge devices face strict limitations in form factor, power consumption, and thermal management.

The heart of NXP's approach is their EIQ software stack – a comprehensive toolkit that spans their entire product range from low-power MCUs to high-performance MPUs. Two innovations stand out as particularly revolutionary: Time Series Studio brings AutoML capabilities to sensor data, enabling non-AI experts in manufacturing, energy, and other sectors to build powerful anomaly detection models without deep machine learning expertise. Meanwhile, their approach to generative AI uses "RAG on steroids" (Retrieval Augmented Generation) to create systems that are not only compact enough for edge deployment but also inherently more secure and private.

The real-world impact is already evident in applications ranging from precision agriculture robots to healthcare systems that combine multimodal sensing for contact-free patient monitoring. Perhaps most impressive is the rapid pace of innovation – within just months, NXP's edge-optimized language models have seen response times drop from two seconds to less than half a second, making conversational interfaces truly viable on embedded devices.

Looking ahead, Sawyer predicts we're moving toward a new era where edge AI becomes increasingly agentic – focusing not just on human-machine interfaces but on optimizing machine-to-machine workflows in factories, robotics, and automation. Join us to discover how the future of intelligence isn't trickling down from the cloud, but rising up from the edge where our data is born.

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 2:

um.

Speaker 3:

Thank you. Okay, fantastic. Good morning, or good afternoon I should say; there should be a word for every time zone. I guess it's a good day.

Speaker 2:

Good day.

Speaker 3:

Yeah.

Speaker 2:

So you're in Amsterdam, right? Yeah, yes, I'm here in Bellevue, Washington. So welcome everyone to EDGE AI Talks, another session for your morning. We do these every two weeks: we bring on someone to talk about their tech, and it's a live stream, so we have questions coming in from YouTube and Twitch and LinkedIn and StreamYard and all that stuff. Feel free to listen in, and get your coffee ready. How are things, Jenny? Keeping busy out there?

Speaker 3:

Yeah, really good. This is actually my second-to-last week before I leave for maternity leave, so I'm looking forward to that, and I'll be back online in July. So, not necessarily ML related, but it's still a big milestone; arguably the biggest milestone. Pretty big milestone.

Speaker 2:

Let me do a couple of PSAs before we bring Davis Sawyer from NXP on. One thing coming up is February 25th through 27th in Austin, Texas. It's coming up quick.

Speaker 3:

This is so exciting.

Speaker 2:

Oh, in days, I know. I know we're going to have a huge crowd there. It's our US event, Austin 2025: three days, I think it's something like 60 sessions, five workshops, three panels, barbecue and beer and all the good things about Austin. It's going to be a few hundred folks getting together about state-of-the-art edge AI. I know Edge Impulse will be there, some Edge Impulsers.

Speaker 3:

Yeah, that's exciting. You guys should post a lot of pictures, for sure.

Speaker 2:

Yes, we will definitely post pictures and videos and things like that. So tickets are still available. You can go to our website, edgeaifoundation.org, or our new URL, edgeai.foundation. That also works.

Speaker 2:

Will it be live streamed as well, Pete? No, we're going to record it, so it'll be nicely recorded and then we'll eventually publish the content. Excellent, but no live streaming, so you've got to be there. Yes, you've got to be there. So, yeah, check that out; there are still some tickets available, but do expect it to sell out. And we're also going to have the Blueprint Awards there, which is all about incredible edge AI solutions, and the nominations for those close on February 10th, so in about six days. So if you have a really good edge AI solution that you or your customers have deployed, you should definitely send in a nomination. And Will Smith is banned for life from the Blueprint Awards, so don't worry, he won't be there. Unlike the Grammys, he won't show up.

Speaker 3:

Oh my God, I'm a little bit behind. I don't even know what you're referring to, but now I need to Google it.

Speaker 2:

Oh yeah. Well, that's a long story, we don't have time, but anyway. So lots of cool stuff going on. Check out edgeaifoundation.org. All right, why don't we bring on Mr Sawyer? There he is.

Speaker 1:

There he is. Hey everyone, hey Pete, hey Jenny. First of all, Jenny, I just heard the news. Yeah, congrats. Thank you so much.

Speaker 3:

I don't need to talk about it too much, but thanks very much. Very exciting.

Speaker 2:

So Davis is actually kind of a regular on the streams here, because he's the co-host of the Blueprints show, a similar but different show, but now he's the guest. So this is sort of an interesting crossover. It's like, remember when you were growing up, there were those two TV series that you liked, and then one show had the casts of both? Kind of a weird crossover. The tables have turned.

Speaker 1:

Let's say that. So yeah, I usually am up here talking about other people's technology and getting to ask lots of cool questions and facilitate. Now it's my turn to share, and I'm excited to be here.

Speaker 2:

Yeah, so it's a very special episode of EDGE AI Talks. Cool. So, Davis, what are you going to talk about?

Speaker 1:

The main theme, I think, is pretty exciting. Obviously, NXP Semiconductors is a prominent vendor in the edge AI landscape across the secure, connected edge. We're also heavy in payments and automotive. I think today we're going to dig a little bit deeper into the enabling technology: how do we make these full solutions possible, and what are some of the trends we see as the market shifts? Obviously it was a pretty tumultuous week last week in the world of generative AI, and AI itself, so there will be some comments on that. But I think we're going to talk about the enabling technologies and the solutions they make possible. And if you stick around to the very end, I will give a couple of snippets of customer profiles and full solutions that we've deployed and are deploying.

Speaker 2:

To wrap things up, yeah, there's a lot of talk around the ramifications of AI in business, markets and all these other things, but we're all about how this stuff works: how do you make it work, how do you build it, and what are some of the challenges. There's such innovation going on, and like what happened last week with DeepSeek and stuff like that, people come up with very novel new ways of doing things and it can be very disruptive. So, you know, fasten your seatbelts.

Speaker 2:

I think that's just the way it's going to be, but it's pretty exciting. Well said. Cool. Well, do you want to bring up some slides, or is that good?

Speaker 1:

Let's do it. Yeah, I've got them up here. Let's take a peek under the hood and, like I said, we'll talk about the enabling technologies and what they're making possible.

Speaker 2:

Cool. Maybe Jenny and I will hang back a little bit and monitor the Q&A, and again, feel free, folks, to throw in your questions. This is a chance to do live stream Q&A with a person who knows what they're talking about, so go for it. We'll hang back and let you do your thing.

Speaker 1:

Awesome. Thanks, Pete. And to everyone watching, thank you for being here. It's a great pleasure and honor to be part of the EDGE AI Foundation live streams, and I highly encourage questions, live or offline; bring them up, bring them in the chat. That's what this is really all about. There's a lot of material too, so let's get started. So first of all, hello everyone. My name is Davis Sawyer. I'm calling from Ontario, Canada, at NXP's offices here.

Speaker 1:

I'm part of the Secure Connected Edge in AI Product Marketing, and today's talk is called AI Born at the Edge, Built on NXP. The agenda is pretty simple. I'm going to set the stage a bit with what exactly we mean by AI that's born at the edge. I think we're all very familiar with lots of the cloud-based models, the ChatGPTs of the world, if you will, but I'm going to take a little bit of a different view, and that's going to lead into what we do at NXP with AI: hardware, software, ecosystem, solutions, toolkits, all that. I'll focus on two interesting components of our EIQ software that really, I think, encapsulate what we mean by AI born at the edge. One is on the ultra-low-power side, the MCU side for now, with Time Series Studio looking at sensor data, and the other is looking at more of the higher-capacity, higher-performance use cases, where I think we do a couple of things differently because we're at the edge. And then lastly, like I said, if you stick around, I've got a couple of cool demos and solution stories about real customer cases that I think we'd all like to see. So let's dive in.

Speaker 1:

As Pete said, I've been on here a couple of times before; I'm no stranger to the TinyML and edge AI community. More specifically, I'm an AI product marketing manager here at NXP. Previously, I co-founded Deeplite, a Canadian-based startup that's still kicking and doing great stuff. I was chairperson for the Foundation Summit in 2022. And, as Pete mentioned, I'm the current chair for Blueprints, which, if you haven't heard about it already, is really about end-to-end solution use cases.

Speaker 1:

Definitely submit your nominations for the awards; I know we are. And, yeah, I'm excited about the Austin Summit as well. Despite my accent, by now I sound like I actually grew up in Texas, so I'm looking forward to being back in the South soon.

Speaker 1:

So let's get into it. AI born at the edge: what do we mean by this? Well, we're going to cover a little bit of material that might not be a surprise to people, but I think it leads somewhere important. What we mean by that is that the edge is always kind of up for discussion; I think people have different definitions, different meanings.

Speaker 1:

Really, when you think about the addressable market that we look at with NXP, it scales from the tens to hundreds of millions of devices at the network edge, so connectivity, gateways, communications, telecom infrastructure, down to what we'd think of as end devices, which themselves can be broken down further. A vehicle has dozens of electronic control units. Home appliances might account for anywhere from 60 to 80 microcontroller units in a modern home. And that extends to the electronics we know in our daily lives: gaming, hearables, wearables, noise canceling in headphones, voice translation, automatic translation. Everything for us, really, except mobile and PC. We do have some financial payments in mobile at NXP, but the edge AI use cases we're focused on today extend from heavy equipment, machinery, mobile robotics, smart vacuums, traffic systems, thermostats. It's a long list, and that has an interesting impact on how we build edge AI solutions. It's different from what's called the homeostasis of the cloud, which is a very different power and infrastructure environment than something like a smartwatch.

Speaker 1:

Looking at this a little bit differently, and again I don't want to belabor points that most of us are familiar with, but just to set the definition: there are also two distinct phases of an AI model's lifecycle. One is training, which happens in these centralized environments. The other is inference, or deployment. The two are becoming less segregated, and I think that's really important as we think about the future of intelligence. But, to be a little more concrete, training is once, inference is forever, and we're seeing that especially with some of the models and things being discussed today around how and why inference needs to be optimized and efficient, and the different power demands on both sides. But again, we're on the right side of this diagram, and that's a broad spectrum of devices and needs for AI. Taking a different look from the device landscape, the hardware landscape, we really want to look at the evolution of AI in our daily lives.

Speaker 1:

I think a few of us are quite familiar with this progress, and it's changing every day, but there are a few critical moments, and I think we're in one right now, actually. If you go back 13 years, say to 2012, there was this AlexNet moment where, as people might know, a convolutional neural network from the University of Toronto outperformed everything else on the large-scale visual recognition challenge for recognizing the content of an image, image classification. That was a huge boost over previous statistical methods, and it sparked a wave of models that were subsequently smaller, which is interesting: the AlexNets, the MobileNets, the SqueezeNets, the tiny MCUNets, et cetera. And then there was this other spark moment; some call it the "attention is all you need" narrative. But really, the last few years, I think, say it point blank.

Speaker 1:

What we've seen is this emergence of transformer-based mega-scale models, 10 to 100 times bigger in terms of parameters and operations than the previous wave. And again this has sparked a similar AlexNet moment, right? We have these GPTs, Llama models, big tech ecosystems that have driven a lot of it, breakthrough innovation actually. And then this new wave, just a tidal wave, happened last week with the DeepSeek news, which again is this cloud trickling down, as we describe here. And if you try to look ahead and say, well, how is this going to impact us a year or two out, and speculate a little bit, it's almost impossible. I think we have to future-proof ourselves. The bottom line is that something will come along that we're not prepared for, that will change the game, and you want to be able to adapt to it faster rather than slower.

Speaker 1:

And I think, when we look at the device classes we just set out, a lot of the models I just described, AlexNets, GPTs, et cetera, start from the cloud and then get whittled down to fit edge devices, because there are real use cases, there are cost constraints, there's a lot influencing that. At the same time, there is data being created constantly on the devices we use every day. If you look at the PyTorch ecosystem from Meta and their family of apps, I think they really capture what it means to be edge-driven, mobile-driven AI, where you have this point of data capture, point of user interaction, where the models live, where the data lives. Why isn't AI built there first? That's really, I think, what we're looking at in 2025 and beyond. So again, to revisit: cloud AI has brought the current generative use cases, models that range from image classification to full-blown multimodal generative work, image creation, et cetera. Edge AI over time is going to influence model creation, where we're creating models directly from devices with conversational, visual and other sensor-driven forms of intelligence. Just to sneak ahead a little bit, I think the Time Series Studio, which we and other players in the market have been working on for some time, is a really good example, but it's only going to get bigger and better from there. And so, to quickly draw a timeline, here we have this whole DeepSeek moment, which is really distillation on a mixture of experts, based on optimizing and compressing the large models that did some trailblazing in quality and performance.

Speaker 1:

And I would like to dub this, I guess, Pete, since you mentioned the Grammys, in honor of Taylor Swift; she was snubbed last week, or two weeks ago, whatever. But these eras, I think, actually capture what's worth looking at here. There's this whole trickle-down, follow-on effect where a model breaks into an ecosystem, gets whittled down, and reaches a certain parameter point. Instead of the 10x and 100x increase, what the edge is going to be driven by is a 10x or 100x decrease, and we're already seeing that with this family of models that make new things possible at the edge. Why I bump these together is because I think they're pertinent now and will be for some time: small language models and this AutoML era.

Speaker 1:

We're currently in that era, and the next era is what I think we're trying to look ahead to; there are some use cases in my slides here that describe this. But I think this is really what's coming next: as agentic workflows are being built on the cloud that optimize setting a hair appointment or doing stuff you do on a PC or laptop, edge AI agents are going to optimize stuff that machines do. That's where I think there's massive untapped potential for mobile robotics, factory automation, industrial automation, healthcare, even residential to some degree. And why I put Hugging Face here is because, like what Hugging Face did for development, I think we're going to see a similar wave of edge AI agent creation where they're open, accessible and interpretable. Shout out to a company in the Edge AI Foundation ecosystem, Aizip, that's doing some work here; I'll actually have them on a Blueprints episode soon.

Speaker 1:

And the last era I want to refer to, where I intentionally put some question marks, because we don't know. There will be another era, one that I think is really edge native, that maybe involves a unification of the current bifurcation of training and inference. We know that biologically these aren't two fully distinct modes; back propagation over stored examples is not how biological intelligence functions, and it's not the way things are currently done with AI. That may change in the coming years, and I hope it does, for energy efficiency and the future of the field. So that hopefully makes it a little clearer what we mean by AI that's born at the edge.

Speaker 1:

Now, before I get into some of the novelties I mentioned, I'd like to talk about how we look at AI at NXP. We'll go through this a little bit quickly, but I think it's important to capture the context. So we have our EIQ AI stack that's meant to sit across our whole family of MCUs and MPUs. It's meant to be extensible. It's a set of workflow tools that can be accessed from the command line or through a graphical UI in our EIQ Portal. That's actually changing a little bit; I think we have a couple of new ways we're delivering our software that will help speed up the iteration time, because our North Star for software enablement is really ease of use. You can obtain performance from tools, hardware and other parts of the ecosystem, but when you make this easy and accessible, I think that has a net positive impact on what can be done and what will be done.

Speaker 1:

So we have the EIQ software stack, and it's also constantly evolving through our EIQ extensions; interesting ones include model IP protection, explainable AI for regulatory compliance, and model conversion. It's quite comprehensive, and I will mention the hardware because I know there's a strong hardware ethos in this community; I'll talk a little bit about the EIQ Neutron NPU and why that's important for edge AI. A different way of looking at this is bring your own data, so start from the data. Again, this is exclusively edge data when you talk about NXP: images, video, the whole collection of sensors and time series, radar as well; we have a big presence in radar. And then there's the AI model side.

Speaker 1:

This is probably the most fluid and evolving part of the entire ecosystem, where you want to have exposure to leverage the latest innovations, but also provide the reliability, security and stability of commercial-grade software and enablement. So we have a few interesting innovations or contributions there. One would be, of course, our EIQ extension for the TAO Toolkit, a really comprehensive model library, through the ONNX conversion I mentioned on the previous slide. TensorFlow Lite, or LiteRT as it's now called, is still pretty dominant at the edge, so we want the best of both worlds: conversion approaches through ONNX and supporting TensorFlow Lite natively. And again, this is comprehensive: CPU, MPU, MCU, as well as GPU and DSP. The last way to look at this, and I think where we stand out in the marketplace, which is important when you think about the edge use cases it enables, is the breadth of enablement.

Speaker 1:

This is not just MCU, and not just MPU or applications processors, as it says; it covers both. And a unique aspect of what we're doing in this ecosystem to make more possible at the edge is, of course, introducing our own hardware accelerator. Why build our own hardware accelerator? I think there's some degree of future-proofing: a single architecture with a great scalability point. The model ecosystem is evolving so fast; sure, operations can change, different types of functions can change in the architecture topologies, but we want to be able, for ourselves and for our customers, to future-proof while maintaining performance, and those are two challenging axes to deliver on. That's the EIQ Neutron NPU, which is part of our MCX N series of microcontrollers, which I'll talk about with the Time Series Studio, all the way up to much more capable multi-core systems like our i.MX 95, which also leverages the EIQ Neutron NPU.

Speaker 1:

I want to leave lots of time for questions, so I will go through some of this pretty quickly; I want to get to the juicy customer cases at the end. But just to give some more insight, I mentioned a couple of products here. On one end of the spectrum we have the MCX N, with the lowest power consumption, lowest profile and lowest form factor, scaling up to future generations which will have tens to hundreds of TOPS. We sit here with our current generation i.MX 95, which is about a two-TOPS engine but can actually punch above its weight depending on the use case, and that's partly thanks to some good work on the software enablement for Neutron. So I'll skip forward a bit, because I know metrics are important here, to qualify these statements about the i.MX 95, a product that will become important.

Speaker 1:

When I talk about Gen AI at the edge: the i.MX 95 is about 3.8, almost four times faster than the i.MX 93, and about four times faster than the 8M Plus, the previous generation. And, as we know from being in this ecosystem, when you give more capacity, people use that capacity; it's a net positive impact on the edge. So it's not about being four times faster, it's about what you could do with that four times faster horsepower. I'm sure people recognize a lot of the models on this x-axis. If you look at the other side, this is actually building on the MLPerf Tiny suite of benchmarks, so similar domains like image and vision, but also anomaly detection and time series data, which I'll get to in a second. Again, this all comes back to our unification of the EIQ ML software that's edge built. We try to allow the optionality: if you want to take models from the cloud, you can, and only use the parts you need. This modularity becomes important because we're in a rapidly evolving ecosystem. The RT crossover series, which is again leveraging the NPU, that 172x is actually compared to an MCU Cortex-M3, which you see here. It's a lot of capability at the edge.

Speaker 1:

An interesting case study here, because it's not just about performance, it's actually about the full story of quality and performance. This is an interesting case we found last year. If you look on the left-hand side, you see this YOLO model missing some obvious detections; that's using a Cortex-A CPU. On the right, we're actually getting them. You'd think, well, quantization is quantization, so why would this model perform differently if you've quantized the same architecture, the same reference design, the same training, et cetera?

Speaker 1:

Well, it turns out there are some interesting nuances here that become really important for mission-critical or safety-critical AI at the edge. I won't get into all of it, and we can come back to it if there are questions, but it turns out that with some smart mathematical transformations you can actually leverage some of the inherent robustness in the training. Obviously, when you quantize from floating point you have scaling factors and you're losing information, but if you do this in an intelligent way, leveraging the innovations happening on the model design side, you keep most of the quality. Again, this is some of the edge-type thinking: people aren't always training in integer, but inference at the edge is done in integer almost exclusively. This is important to the story. It's not just hardware or just software, and it never has been; it's really the interplay of the two. So, keeping the pace, I want to move on to some of the things I mentioned that really define what we mean by AI born at the edge, and to do so I'll start with something we announced towards the end of last year.
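
To make the scaling-factor point concrete, here is a minimal NumPy sketch of symmetric integer quantization with a single per-tensor scale versus one scale per output channel. It illustrates the general idea only, not NXP's EIQ conversion pipeline, and the toy weight matrix is invented for the example.

```python
import numpy as np

def quantize_symmetric(w, axis=None, n_bits=8):
    """Symmetric quantization: one scale for the whole tensor (axis=None)
    or one scale per output channel (axis=1 here). Returns the mean squared
    reconstruction error introduced by rounding to integers."""
    qmax = 2 ** (n_bits - 1) - 1                        # 127 for int8
    max_abs = np.max(np.abs(w), axis=axis, keepdims=True)
    scale = max_abs / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    w_hat = q * scale                                   # dequantized weights
    return float(np.mean((w - w_hat) ** 2))

# Toy layer: 8 output channels x 64 inputs, with one unusually large channel.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(8, 64)).astype(np.float32)
w[0] *= 50.0                                            # the "loud" channel

print("per-tensor  MSE:", quantize_symmetric(w, axis=None))
print("per-channel MSE:", quantize_symmetric(w, axis=1))  # usually far smaller
```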

Speaker 1:

Other companies in the space, big and small, have been working with sensor data and time series data for some time, and I think for good reason: there's a scarcity of AI know-how, and still is, particularly in some industries that could greatly benefit, like industrial, with tooling, motors, arc fault detection, welding, et cetera. There are a lot of domains that don't have AI expertise but could benefit from AI models. How do you bridge that gap? This is where I think AutoML has played an important role. So it started as an internal innovation that came to be a really useful software product, and that is the Time Series Studio. The target applications are some of the hallmark applications that have been talked about many times on TinyML and edge AI broadcasts: anomaly detection, classification across multiple sensor modalities, and regression. Electrification is a big trend, and with battery management, and intelligent battery management in particular, being a pretty interesting domain, AutoML lends itself nicely to MCU-based charge prediction, discharge prediction, and energy demand as well at the bigger scale.

Speaker 1:

So this is what we mean by truly edge-born AI domains, and this is how you can use software to bridge that gap in a way that is easy to use and effective. Of course it starts with the data. Then comes training and optimization: instead of taking a trained model and trying to fit it, or starting with pre-trained weights, this builds a model from the data, which helps keep things minimal, a kind of Occam's razor approach. Emulation then helps verify accuracy and performance, all the way to deployment. So, as this diagram is meant to show, the Time Series Studio helps you do all of this in one toolkit, as part of the EIQ software package and our MCU software development package as well. So this is a quick screenshot of the UI.

Speaker 1:

I think this is just meant to express how quick and easy this is; it's something that works for you out of the box. It captures a lot of different sensor modalities, and as we keep building this in the market, we're always looking to understand better how we can serve each of those modalities. But the idea here is that you can quickly and easily train a model that works: single-click model generation and optimization, and then you have a validated and verified tool for prediction on your MCU.
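
For readers who want a feel for the pattern the tool automates, here is a small hand-written sketch of the same bring-data, train-on-normal, pick-a-threshold, deploy loop. It is not the Time Series Studio's actual algorithm (which searches over models automatically), and the sensor data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 128)

# "Bring your own data": windows of a healthy vibration signal.
normal = np.array([np.sin(2 * np.pi * 5 * t + rng.uniform(0, 6.28))
                   + 0.05 * rng.normal(size=t.size) for _ in range(500)])
train, holdout = normal[:400], normal[400:]

# "Training": a low-rank model of normal behaviour only (PCA reconstruction).
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:4]                                   # keep 4 principal components

def anomaly_score(x):
    """Reconstruction error of one window under the normal-only model."""
    recon = (x - mean) @ basis.T @ basis + mean
    return float(np.mean((x - recon) ** 2))

# "Emulation": choose a threshold from held-out normal windows.
threshold = np.percentile([anomaly_score(w) for w in holdout], 99)

# "Deployment": flag any window whose score exceeds the threshold.
fault = np.sin(2 * np.pi * 5 * t) + 0.8 * rng.normal(size=t.size)
print(f"threshold={threshold:.4f}  normal={anomaly_score(holdout[0]):.4f}  "
      f"fault={anomaly_score(fault):.4f}")
```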

Speaker 1:

Of course, because NXP has such breadth, I just want to highlight how many different product lines this is supported on. I won't get into all of it, but for those looking to get more familiar with our series or tools, this covers the Kinetis, MCX and RT series to date. So if what I just went through could be valuable or interesting to you, I definitely encourage you to go to nxp.com, where you can download the package with the latest release. It's been updated since we released it last fall, and we've already seen a lot of great use cases and uptake. So again, I wouldn't call it simple, but it's a straightforward way to leverage a different kind of thinking, a different systems approach, to create truly edge-born, edge-oriented AI. I'm now going to change gears a little bit. I'm just going to do a quick pause in case, Pete, Jenny, there are any pertinent questions. I don't have the chat open, but I just want to make sure I don't move too fast here.

Speaker 2:

Yeah, I don't know if we have any viewer questions. Jenny, do you want to throw some of your questions in there?

Speaker 3:

Yeah, so one of the questions I had: earlier in your slides you had a big diagram of all the different types of use cases and data, and one of the questions I had was, with NXP software tooling, what type of data can you bring in on your own? You had a great slide that said bring your own model, bring your own data. Is that any type of data? Does it have to be labeled? Do I need to know what classes my data has before I bring it into the software?

Speaker 1:

Yeah, good question. So I don't want to get lost in the slides, but I think you're referring to an earlier slide on the EIQ software toolkit that has the BYOD or BYOM, the bring-your-own-data slide. That today supports all the modalities through time series that you see here. Images, I think, are another clear one that can be handled through the bring-your-own-data approach, so we allow you to bring data in and you can label it. I would actually say this, and I want to put it in the right way.

Speaker 1:

We're de-emphasizing, let's say, some of the bring-your-own-data on the image side and those types of modalities, because I think there's a robust ecosystem of tools out there that can handle that. Where we really shine on the EIQ side is bring your own model, because there are lots of other places you can train. So bring your own data is literally out of the box for any of these modalities, and there's an even bigger list I could share. On the image side and the labeling side, that's a part of our software but it's not our focus; as I mentioned, there are other places people can do that. Most EIQ deployments start with training elsewhere and then bring the model to EIQ. I'd say that's the majority of the cases. Okay.

Speaker 3:

Okay, cool. So these are specific use cases that your customers have already built using NXP, but we're not limited to the use cases that you see on the slide here. These are just some starting-off points, right?

Speaker 1:

That's another good point. That's exactly right; these are starting-off points. It's meant to be a semi-exhaustive list, but not complete; I guess that's what the "many more" refers to here. We're actually looking to extend that list. We wanted this to be generic by design because of the diversity and complexity. I mean, this could be something like a force sensor, accelerometer or gyroscope, say, in a tool, to detect too much force, so that maybe there's a break or a failure coming up. A lot of these cases have been discussed in other edge AI and TinyML solutions. But yeah, good catch; this is not a complete list, and I think we're actually looking for what else is out there.

Speaker 3:

And so, going off of that, say I have a really large, robust data set where I don't even really know what's in it; it's just data I've collected off of my warehouse hardware, say. Are there any tools that can help guide me to which specific use case to use for that?

Speaker 1:

That is a good question. I think that's why we still have AI experts and data scientists employed. There might be some tools out there; a good reference for that would be my colleague Ted Cao, who owns this, along with a collaboration with Anthony Areca inside NXP. They could probably point to some research experts. But I think that's why we still need a human in the loop and can't go full AutoML yet, though I'm sure someone's working on something related to that.

Speaker 1:

Well, great, I'm looking forward to hearing more about the EIQ Time Series Studio. Cool, thanks. Yeah, thanks for the questions, Jenny. So I'll pick up where I left off, which is the EIQ GenAI Flow. This pivots from those time series use cases, where the data isn't necessarily meant to be interpreted by a human but we know what good and bad quality looks like, and we have quantitative measures of success.

Speaker 1:

In Gen AI, we don't have as much luxury when it comes to evaluating, let's say, the speech output of an LLM response, or how the prompt correlates with the quality of the answer. There are hallucinations, these types of error-prone behaviors that we're somewhat aware of. So how we approached this problem at NXP, going back over a year ago, was with what we call the LLM decision pyramid: where in this pyramid can you get the job done using an LLM? If you start from the off-the-shelf stuff, which I don't think the edge lends itself to, you put in a prompt, get an answer, call an API; I don't think that cuts it, but it's doable. Prompt engineering is a necessary skill; it's obviously the experimentation across which prompts get the best subjective-quality responses, but it's still prone to hallucination. And because a lot of these domains, industrial, residential and automotive especially, don't have tolerance for faults, we have to ground these models in factuality. But of course that increases the complexity of your training. So at the very top you have retraining, which is cost-prohibitive unless you're a DeepSeek, and fine-tuning, which is challenging. But this RAG thing is actually a good middle ground. RAG stands for retrieval augmented generation, where basically you're looking up some data source, and what that data source is and how you access it is the complexity here, but that data source grounds your LLM. It's kind of like what you were just saying, Jenny, about how do you know which model to use. It's like saying: I have an idea of what to do, so look up from some vetted source what the right thing to do is, and do that. That's essentially what we've built into our EIQ software stack to help enable generative AI at the edge. To give a quick view of what this means: RAG is a way of feeding private data to an LLM without expensive retraining of the original model or exposing customer data to the LLM training data.
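
As a rough illustration of that retrieval step, here is a minimal RAG sketch: embed a small set of vetted documents, retrieve the closest one to the query, and prepend it to the prompt so the model is grounded in that source. It assumes the sentence-transformers package; the documents, the model name and the llm_generate call are placeholders, not NXP's EIQ GenAI Flow implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Offline: embed the vetted source (service manuals, product docs, and so on).
docs = [
    "Error E42 on the oven means the temperature probe is disconnected.",
    "To descale the coffee machine, run the CLEAN cycle twice with descaler.",
    "The heat pump enters defrost mode automatically below 3 degrees Celsius.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")        # small embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query, k=1):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    best = np.argsort(doc_vecs @ q)[::-1][:k]
    return [docs[i] for i in best]

# Runtime: ground the LLM in retrieved context instead of hoping it "knows".
query = "My oven shows error E42, what does it mean?"
context = "\n".join(retrieve(query))
prompt = (f"Answer using only this context:\n{context}\n\n"
          f"Question: {query}\nAnswer:")
# reply = llm_generate(prompt)   # placeholder for the on-device quantized LLM
print(prompt)
```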

Speaker 1:

That last sentence is increasingly important. Part of why I'm really happy to be at NXP is that in a lot of AI, security is an afterthought: get the job done, get the model working, and then figure out the security. We are very much a security-first company, given our legacy and pedigree in automotive, and that's where light bulbs went off when we were working on this almost a year ago: how to improve LLM performance without compromising data integrity, data privacy, et cetera, especially with new legislation coming into play. I'm thinking about the EU AI Act here, brought into law in August of last year, and a couple of others. So I think this is a really important part of the narrative. But this diagram is a little simplistic; we're not just taking RAG out of the box at NXP. We've evolved it into what I like to think of as RAG on steroids. It's pretty comprehensive, and a shout out to the voice and audio tech team, mostly individuals in France, working on this. You guys have done a fantastic job.

Speaker 1:

Actually, enzo Red has presented, I think, a preview of this going back a year ago at the Gen AI at the Edge forum. We did so. Why this is relevant and why I'm talking about this in the context of edge AI is because you can start from this data source. In this case we're thinking about documentation, so service manuals, product information, product description, specifications around an appliance, machinery, car, vehicle, medical equipment. It's quite flexible.

Speaker 1:

What we've done is introduce a very compact embedding model. Again, this is edge thinking upward, as opposed to cloud AI brought down. We don't just want to drop in an LLM and think that's going to cut it. There is an LLM component, but what's cool about this is that we can actually operate LLM-less. What I mean by that is, because this RAG pipeline is robust, compact and can capture the necessary data points, we can use some agent-based, LLM-based reasoning offline, in a PC or development environment, to create a vector database that can then be stored on device. Super compact: we're talking a few megabytes of memory footprint versus the gigabytes that a quantized LLM would take up.

Speaker 1:

This is what I think of when I think of edge Gen AI, and what we're doing here is quite interesting. We can create conversation pairs, custom Q&A pairs, with a bit of nuance introduced by this offline process. This means we can answer a lot of queries. So if you think of voice coming to the edge, or even gesture and other controls, this is an important part of how we're approaching it. I'm happy to come back to this, because it's already being deployed in a couple of cases and we've done a few demos with it. This is part of what's called our EIQ GenAI Flow, which targets the i.MX family, the 95 and the 93 as well, and already has some use cases working. But the idea is that we can transform the HMI, the human-machine interface, with conversation.
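
And a toy sketch of the "LLM-less" variant just described: the Q&A pairs are generated and embedded offline, only the compact vector table and answer strings ship with the device, and at runtime a query is answered by nearest-neighbour lookup with no language model in the loop. The pairs and the similarity threshold are invented for illustration; the actual EIQ pipeline will differ.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Offline (build machine): create Q&A pairs from the documentation and embed
# the questions. Only `table` and `answers` need to be stored on the device.
qa_pairs = [
    ("How do I start the self-clean cycle?", "Hold the CLEAN button for three seconds."),
    ("What does error E42 mean?", "The temperature probe is disconnected."),
    ("How do I turn on eco mode?", "Press MODE until the leaf icon appears."),
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")        # compact embedding model
table = embedder.encode([q for q, _ in qa_pairs], normalize_embeddings=True)
answers = [a for _, a in qa_pairs]
print(f"on-device footprint: {table.astype(np.float16).nbytes} bytes "
      f"for {len(answers)} entries")

# On device: embed the (transcribed) query and take the closest stored question.
def answer(query, min_score=0.6):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = table @ q
    best = int(np.argmax(scores))
    return answers[best] if scores[best] >= min_score else "Sorry, I don't know that."

print(answer("the display is showing E42, what's wrong?"))
```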

Speaker 1:

The missing piece for a few years now has been this LLM reasoning, a flexible, generalizable component. Well, plug that into a really robust speech pipeline, and this isn't just off-the-shelf models, it's improved and customized, with custom wrappers and engines that help these speech models guarantee certain accuracies and certain performance, and you get truly edge-tailored modules. So the GenAI Flow is pretty comprehensive.

Speaker 1:

We want it to be speech in, speech out, and what customizes this, in terms of the voice or graphical UI, is the RAG engine I just described. It has a lot of capability, and I would supplement this LLM module with LMMs, or multimodal models. We are not thinking just voice; long-term, we're also thinking radar has a play here, and obviously image and video have a play here, which is why this is modular by design. To date, a lot of the techniques in use have been: take a TinyLlama, take a Danube, take a small model, quantize it, and that gets us pretty far. But the new type of thinking, and the theme of this talk, has really been: what are the techniques that are actually edge driven, as opposed to cloud driven and trying to fit the edge? And I think this RAG module is a really important part of that story.

Speaker 1:

So I'm going to go back in time a bit here and play this demo. This was something we showed at the Gen AI at the Edge Forum in October, hot off the press. In retrospect, I'm amazed we had the confidence to share it, because it's so much slower than what we have now, but I still want to revisit it so people can see the functionality. What it's meant to show is that when you think about edge devices like patient care systems, meant to be contact-free and non-invasive, you want a fluid response, but you also want a customized and reliable UI and voice UI, so you don't have to use the touch screen. So there will be some delay, and I think you can call it out with this arrow here.

Speaker 1:

This AI assistant box shows when the text-to-speech engine is done and the wake word comes in.

Speaker 3:

You can touch the screen.

Speaker 1:

And now it outputs the response. You also see it's pretty robust to accent. It's pretty useful when you think about it; try to empathize with the end user. This is meant to be contact-free, for reduced risk of infection. So this was actually showcased at Medica as part of one of our industry-focused, medical-focused demos. It was quite well received, and I think it's going to set the tone for the healthcare industry.

Speaker 1:

I'll come back to this, because we've actually done a lot more since then. And just to show how fast the edge AI landscape is changing, and the kind of different era we're in with generative AI models, I want to highlight some performance specs here. The GenAI Flow also includes some quantization; it helps to quantize LLMs. What we mean by this is that in October this demo had almost a two-second delay in time to first token, that first response after it receives all inputs and the prompt and starts generating output. Throughput was about a couple of words a second, four or five TPS, around there. These numbers have been blown out of the water recently; you'll see in the footnotes the reasons why these numbers are different. In January we were at about three times that performance, faster response and faster throughput, on CPU and, with the i.MX 95, on the NPU. That natural conversational response is really driven by time to first token; some use cases can tolerate delay, a lot can't, and I think we'll keep improving performance on the right-hand side here.
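
The arithmetic behind why time to first token matters so much is simple; here is a quick sketch using the approximate figures quoted above (the 30-token reply length is an assumption for illustration).

```python
def response_time(n_tokens, ttft_s, tokens_per_s):
    """Total reply latency: time to first token plus time to decode the rest."""
    return ttft_s + n_tokens / tokens_per_s

reply_tokens = 30   # a short spoken answer (assumed length)

# October: roughly 2 s TTFT and 4-5 tokens/s; January: roughly 3x better.
october = response_time(reply_tokens, ttft_s=2.0, tokens_per_s=4.5)
january = response_time(reply_tokens, ttft_s=2.0 / 3, tokens_per_s=4.5 * 3)

print(f"October: ~{october:.1f} s   January: ~{january:.1f} s")
# ~8.7 s versus ~2.9 s: the difference between an awkward pause and something
# that starts to feel conversational.
```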

Speaker 1:

This is pretty exciting. There are obviously others doing voice commands and voice AI at the edge, but with the RAG, the pipeline, the quantization and, of course, our NPU, the way we approach this is that we're not just throwing a chip over the wall and saying, hey, work with what's in the box. We have a really complete enablement, especially when it comes to voice at the edge. It's a bit of a messy slide, but I want to mention that this is coming to end users' and early customers' hands this spring, so in a couple of months, and that includes what I just covered: the quantized LLM, document parsing and this full voice pipeline. And it's really pick what you need: different modules have different components targeted at different platforms. I'll talk about some customer cases in just a sec. So I know we've got just about 20 minutes here, and as we go into the end I want to leave some time for more questions. I hope people have found this interesting.

Speaker 1:

The solution stories are where the rubber meets the road, so to speak. So, Ozone, a Canadian startup and a strategic partner of NXP, is doing sensor fusion for agriculture, so think vision plus radar. You can locate and actuate for precision ag, which often means de-weeding, or planting and harvesting as well.

Speaker 1:

That's a tough one in terms of scalability and flexibility, but a good product there. In a similar vein, a mechanical weed remover, solar powered; there's a big presence in those types of mobile robotics applications, which I think LLMs are going to unlock a huge amount of new potential for. On the other side, high-throughput systems: vision scanning with Cognex, and the iRider system with NXP. On these last two, Lytics Mobility and iRider, NXP has a long-standing presence; I think about 90% of vehicles have some kind of NXP product in them. So I'd say the NXP-enabled edge is quite broad and quite cutting edge, but it doesn't stop here.

Speaker 1:

I think these are some flagship examples, stuff that's hot off the press that I'm particularly involved in, that did well at trade shows like CES and is going to be a big part of this theme of edge-born AI: voice plus a graphical UI that's powered by speech, customized by RAG and deployed on the i.MX family with CPU and NPU enablement. There's a video on the next slide with similar use cases, but I want to talk about what's happening in this first one. ASR, speech recognition: you're controlling your air conditioner. This intent engine actually helps accelerate the response because it's speech-to-intent, as opposed to having to use a wake word; zero wake word is one way of looking at it. Oven mode, again: these demos are basically voice controls, robust in English and Mandarin at this stage, though we're always improving that list. And this is actually kind of cool because these aren't one-off devices.

Speaker 1:

Secure connected edge literally means secure connected edge.

Speaker 1:

We are doing multiple endpoints, multiple AI models coordinated across multiple devices. So I think this is a good example that can be extended out of home and building and into other industries as well. And what I mean by this RAG vector database, and why it's so portable and why I think this is born at the edge, is that in some cases we're doing LLM-free deployment, where we create these Q&A pairs, embed them, create a compact vector database with RAG, and have that be what's looked up to produce the right voice command or voice action. It works well. I'll play another video here and put on subtitles, just to make sure the audio comes through. One of the ones I'm personally excited about, and where we've already seen great interest for not just the hardware but also the software solutions, is what's called AI Qi, or the AI health controller. We showed this at CES. It's a pretty nice, polished demo, and I think we can expect this level of enablement in other industries as AI keeps penetrating.

Speaker 4:

Healthcare is facing unprecedented challenges: rising costs, staff shortages and the growing need for personalized care. NXP is at the forefront of this transformation, providing cutting-edge technologies and solutions to deliver personalized care directly at home. With an AI controller for health insights, we provide an edge AI platform that securely collects and analyzes multimodal health and other sensor data in real time, detecting early anomalies and enabling proactive, personalized care. It seamlessly integrates environmental sensors and non-invasive health devices like blood pressure monitors, ECG patches, glucose patches or medical smartwatches. Individual AI models for multimodal sensor data, as well as sensor fusion and correlation, result in accurate prediction of anomalies and actionable health insights, ensuring patient well-being while reducing the strain on traditional healthcare systems. For more information, visit nxp.com/healthcare.

Speaker 1:

So hopefully that came through.

Speaker 1:

I think a couple of key words in that were multimodal sensor and multimodal health. This is a complex environment; these are very noisy environments that need certain performance and quality guarantees, and I think we live up to that with this AI-first approach. I call it AI first because the feature sets are really AI driven. That multimodal point I'll come back to in just a second before I conclude. Oh no, there's my YouTube.

Speaker 1:

This last one here: again, we talk about porting down to lower-profile platforms. The i.MX 93 is significantly scaled down from the 95, but we can still do some complex AI here. Actually, what we're doing as we talk right now is a good example: AI-driven echo cancellation and noise reduction, both local, in this environment, and remote. We have two speakers using what are called our voice plugins on the i.MX family.

Speaker 1:

These are great examples of edge AI use cases, for sure. And my last one, okay, that was my last one; there was one more that I guess is in the hidden slides for good reason. So, multimodal: I've talked about voice, I talked about vision, we talked about sensor data. The reality is that, like a biological system, we receive multiple modalities for fault tolerance and redundancy; think of radar and vision, which I think is a well-proven combination. But it doesn't stop here, and I want to thank everyone for joining me today. It was really a great pleasure to walk through these examples.

Speaker 1:

I can probably pick up a couple more if we have time, but I'll press the pause button here and thank you very much everyone.

Speaker 3:

Thank you so much, Davis. That was a great presentation.

Speaker 2:

Yeah, really comprehensive, and thanks for showing off some of the stuff for people who weren't able to see the NXP pavilion at CES, which I think was invite only, by the way. So, yes, only a few people did see it, but there were lots of cool demos and some cool tech there, so it was really good to recap it.

Speaker 3:

Well, I have a few questions maybe you can answer. So you mentioned a lot of great stuff about the EIQ Studio, and you mentioned the flow of data, training, emulation, deployment. My question is: is the software primarily targeting, of course, NXP devices? Can you deploy anywhere? What does the output from the software look like?

Speaker 1:

Yeah, good questions. So the short answer is in the license: it's meant to be for NXP only, and the output will be optimized for NXP devices. The output, in this case for the AI model, would be a TensorFlow Lite or TensorFlow Lite Micro model; that's the deployable executable. It's part of, I believe, and if my colleague Ted Cao is listening in, you can definitely correct me or follow up, our MCUXpresso software package, which is meant to be our complete enablement for MCU, as part of the EIQ.
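
For what that deployable looks like in practice, here is a minimal sketch of exercising an exported TensorFlow Lite model from Python before moving it onto the MCU; the file name and input contents are placeholders.

```python
import numpy as np
import tensorflow as tf

# Placeholder path: whatever .tflite file the toolchain exported.
interpreter = tf.lite.Interpreter(model_path="anomaly_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One window of sensor data, shaped and typed to match the model's input.
window = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], window)
interpreter.invoke()
print("model output:", interpreter.get_tensor(out["index"]))
```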

Speaker 3:

And so, of course, with anomaly detection applications, you're usually coming in with data that you don't necessarily know. I mean, usually you have your nominal data, because that's easy to collect, and you don't necessarily have anomalous data. So what types of functionality does the software have that allow you to do sort of an active learning cycle, to improve your model over time as you collect that anomalous data?

Speaker 1:

That is a good question. I think we'll start from the easy scenario where you have a statistical anomaly, so it's detected during model creation. You have the confusion matrices and different metrics; I think that's one of the main ways to do it. You can think of it as a design-of-experiments platform; that's part of what the suite provides with some of the built-in modules in the graphical UI. So in the easier scenario, where you have clear statistical anomalies, those will be recognized in the model, with some human visibility before deployment,

Speaker 1:

to ask, hey, does this make sense? A slightly more complicated setting is having test scenarios or a test environment where you'll actually generate anomalies, maybe excessive force, or different conditions: temperatures, humidity, pressure. Pressure is a good example. That's where I think the Studio is meant to help accelerate this type of modeling. But to your point about active learning or improvement, and the increasingly well-known problem of data drift or model drift: we're early in the game here with the EIQ Time Series Studio, so I think that's where the direction is going, but I don't think we're quite there yet, because we just released this in the last couple of months. You obviously know the domain well; that's a problem for many: okay, my model worked out of the box, but three or four months later something got bumped and it's flagging anomalies, and as a human expert I know it's not one. How do we interface with that? So I think we're headed in that direction.
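
Since that application-side drift handling came up, here is one simple pattern for it, a sketch rather than anything in the EIQ tooling today: keep the anomaly scores the deployed model produces and flag when their recent average wanders away from the distribution recorded at deployment time.

```python
import numpy as np
from collections import deque

class DriftMonitor:
    """Flags when the rolling mean of anomaly scores drifts away from the
    baseline distribution captured at deployment (a simple z-score check)."""
    def __init__(self, baseline_scores, window=200, z_limit=3.0):
        self.mu = float(np.mean(baseline_scores))
        self.sigma = float(np.std(baseline_scores)) + 1e-9
        self.recent = deque(maxlen=window)
        self.z_limit = z_limit

    def update(self, score):
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False                              # not enough evidence yet
        z = (np.mean(self.recent) - self.mu) / (self.sigma / np.sqrt(len(self.recent)))
        return abs(z) > self.z_limit                  # True means "looks like drift"

# Toy usage: baseline scores around 1.0, live scores creeping up to 1.3.
rng = np.random.default_rng(2)
monitor = DriftMonitor(baseline_scores=rng.normal(1.0, 0.1, 1000))
flags = [monitor.update(s) for s in rng.normal(1.3, 0.1, 300)]
print("drift flagged:", any(flags))
```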

Speaker 3:

Well, I guess at a certain point it's left up to the application developer how they want to handle it. So maybe that's not necessarily all baked into the software; it needs to be on your application side, and you pick and choose as a developer which data you send to retrain your model, of course, to prevent that drift.

Speaker 1:

Yeah, yeah, you're right.

Speaker 3:

But, Pete, in the comments we got someone live who had a question.

Speaker 2:

Yes, let's bring it up.

Speaker 1:

Awesome, I'll try to. So, thanks for the question: "So I got a qualitized LLM."

Speaker 2:

I'll have to admit, I'm not sure what a qualitized LLM is. They probably mean quantized.

Speaker 1:

Yeah, I think Jenny's right, I think they mean quantized. So, yes, as you saw from the presentation, we're talking about the edge. The suggestion there would be, yes, quantize, but it depends whether you're going to CPU or NPU and which framework you're using. I went over it pretty quickly, but there's a little nugget in one of the slides with different quantization approaches and different performances.

Speaker 1:

If you start from, say, the very smallest SLMs, like TinyLlama 1.1B, it's because of memory constraints. Let me take a step back. One thing about LLMs is that they're memory constrained, not so much compute constrained. Yes, they have lots of parameters, lots of operations, and there are inefficiencies such as parameter use. But when you talk about DDR bandwidth on an edge device, we have lots of read-write, that data movement, and that's really what drives the bottlenecks for inference. So quantization: if you can go from int8 to int4, or from 16 to 8, halving that will halve the memory read-writes, or bandwidth usage. That alone gives you lots of performance; it'll almost double your token speed in some cases. But that will be framework and hardware dependent, because different quantization methods are supported in different frameworks, like ONNX versus TensorFlow Lite versus PyTorch versus ExecuTorch. So if you caught it in one of the slides after the medical demo, the heart care demo: SpinQuant, for integer 4 with the Cortex-A CPU, because we can enable it with ExecuTorch, is the fastest. So there are a few things there: int4, SpinQuant, ExecuTorch, on one of the models, I think a Danube in this case.

Speaker 1:

That's blowing performance out of the water. But that's leaving out some of the credit for things like hardware accelerators, which have other optimizations, like memory layout and so on, that will also give you more performance. I mention the CPU because that's what's working today, and we're already seeing those numbers. So, quantized LLMs: yeah, shout out to the voice and audio tech team in France doing work there. That's a big field of potential that's not tapped at all yet. More work to be done.
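[Editor's note: as a rough illustration of what dropping the bit width does to the bytes that have to move per inference pass, here is a toy symmetric per-tensor quantizer. It's a sketch of the general idea only, not SpinQuant, ExecuTorch, or anything from eIQ.]

```python
import numpy as np

# Toy symmetric per-tensor quantizer, just to show the memory-footprint side
# of the argument. Real schemes like SpinQuant (per-channel scales, rotations,
# packing) are more sophisticated; this is not an eIQ or ExecuTorch API.
def quantize_symmetric(weights: np.ndarray, bits: int):
    qmax = 2 ** (bits - 1) - 1          # 127 for int8, 7 for int4
    scale = float(np.abs(weights).max()) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)   # one illustrative weight matrix
for bits in (8, 4):
    q, scale = quantize_symmetric(w, bits)
    stored_mb = q.size * bits / 8 / 1e6               # int4 packs two values per byte
    err = float(np.abs(w - dequantize(q, scale)).mean())
    print(f"int{bits}: ~{stored_mb:.1f} MB to stream per pass, mean abs error {err:.4f}")
```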

Speaker 2:

You mentioned an important point: LLMs on the edge tend to be much more memory bound than TOPS bound, and we're seeing a lot of really interesting innovation on how you do higher-performance memory and compute, the connections between those subsystems on chip, and other things like that. I think that trend will continue for a while, and it's going to be really interesting to see how it plays out.
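[Editor's note: to make the memory-bound point concrete, here is a back-of-envelope sketch. The parameter count and DDR bandwidth are illustrative assumptions, not measured NXP figures. For autoregressive decoding, each generated token streams roughly all of the weights from memory, so the token rate is capped by bandwidth divided by model size in bytes, which is why halving the bits per weight can nearly double token speed.]

```python
# Back-of-envelope ceiling on decode speed for a memory-bandwidth-bound LLM.
# Assumes each generated token streams roughly all weights from DDR and ignores
# KV cache, activations, and compute; the numbers below are illustrative.

def max_tokens_per_second(params: float, bits_per_weight: int, ddr_gb_per_s: float) -> float:
    model_bytes = params * bits_per_weight / 8
    return (ddr_gb_per_s * 1e9) / model_bytes

PARAMS = 1.1e9        # a TinyLlama-class 1.1B-parameter SLM
DDR_GB_PER_S = 8.0    # hypothetical effective DDR bandwidth

for bits in (16, 8, 4):
    ceiling = max_tokens_per_second(PARAMS, bits, DDR_GB_PER_S)
    print(f"{bits:>2}-bit weights: <= {ceiling:5.1f} tokens/s (bandwidth-limited ceiling)")

# Halving the bits per weight halves the bytes moved per token, so the
# ceiling roughly doubles at each step: 16 -> 8 -> 4 bit.
```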

Speaker 1:

Yep, there was a guy at a NeurIPS conference five or six years ago, I don't remember exactly; I think it was the Baidu after-party or something. But he said something that stuck with me forever: "FLOPS are cheap, bandwidth is expensive, latency is physics." I think of that probably once or twice a day. Wow. So don't forget to go to the after-party. There you go, that's where you get your life lessons.

Speaker 2:

Yeah.

Speaker 1:

No, that's cool.

Speaker 2:

That's cool, yeah. I think it'll be fascinating. I don't know if you touched upon it a little bit, but are you seeing some solutions now getting ready for deployment that will combine Gen AI with sensors and other anomaly detection, combining those things together? Are we on the cusp of those kinds of deployments?

Speaker 1:

It depends who you ask. I think some are on the cusp, some are over the cusp. One example that I saw just before the Christmas break, or over the holidays, was combining radar and audio in a kitchen. You don't have a camera in a kitchen for obvious reasons; in some cases, like a home camera, some people do, some people don't. But the idea here was to recognize if something was burning, based on a combination of audio, a smoke alarm, and radar, and to find where that event was happening. I can't remember the company that did this; I think it was German, I won't say exactly. But it was using one of their homegrown Gen AI models to combine that radar plus audio. It was a pretty impressive combination that tried to respect the privacy and safety concerns while still introducing generative AI functionality with inputs that aren't your typical keyboard, text-based inputs: this was taking radar data.

Speaker 1:

And they did a cool visualization, so that was one that turned some heads and, I think, was a good encapsulation. The other ones I wanted to talk more about, but you'll just have to be patient until Embedded World at the latest, for those that are curious: combining generative AI, multimodal models at the edge, with speech plus vision prompts. So think of things like visual reasoning, visual Q&A: hey, what's in this image, what's going on here? But through speech commands, and then having a multimodal model look up what's happening in that image, or understand the video, or plan and answer back to a human operator.

Speaker 1:

Think of complex, heavy industries, like power plants and factories. They have a surplus of devices creating human points of interaction, but a challenge of capturing what that domain is like and what is going on. So you have this old guard, let's say, of people who know: oh yeah, when those lights go on, flip these switches, turn this thing, and the power plant's good as new. As new operators come in, there's an interesting demand for how to encapsulate that; one person called it a fuzzy cognitive map. So how can you teach these kinds of concepts to an LLM, concepts that span multiple ideas and provide tribal knowledge?

Speaker 1:

Tribal knowledge, domain knowledge, local knowledge. Yeah, exactly. That, I think, is a really cool use case for multimodal Gen AI at the edge.
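[Editor's note: one hedged way to picture that tribal-knowledge use case is a small retrieval step in front of an on-device LLM: operator notes are stored locally, the most relevant ones are pulled in, and the model answers with them as context. Everything below, including the notes, the scoring function, and the generate() call, is a hypothetical sketch, not an NXP or eIQ API.]

```python
# Hypothetical sketch: ground an on-device LLM in local operator ("tribal")
# knowledge using a tiny keyword-overlap retriever. A real system would use
# embeddings and a proper vector index, but the overall shape is the same.

OPERATOR_NOTES = [
    "If the amber light on panel 3 blinks, vent valve B before restarting pump 2.",
    "Boiler pressure above 12 bar with rising temperature means check feedwater flow first.",
    "After a grid brownout, always recalibrate the turbine governor before ramping load.",
]

def score(question: str, note: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(question.lower().split()) & set(note.lower().split()))

def build_prompt(question: str, top_k: int = 2) -> str:
    ranked = sorted(OPERATOR_NOTES, key=lambda n: score(question, n), reverse=True)
    context = "\n".join(f"- {note}" for note in ranked[:top_k])
    return (
        "You are an assistant for plant operators. Use only the notes below.\n"
        f"Notes:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt("The amber light on panel 3 is blinking, what do I do?")
print(prompt)
# response = local_llm.generate(prompt)  # hypothetical on-device model call
```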

Speaker 2:

Yeah, well, I think we've probably all seen how people are using this. We had an actual presentation on it in Taipei, from a professor I believe, using signals from Wi-Fi routers and interpreting them with generative AI to do people counting and presence detection in rooms, and things like that.

Speaker 2:

So, you know, using analysis of the Wi-Fi waves and anomalies in those waves to detect whether someone's in a room or not, without using a camera, right? So that's kind of interesting.

Speaker 1:

Yes, exactly. I think there was a little lag in the audio there, but that was similar to our AI healthcare work using ultra-wideband. Ultra-wideband can help detect similar motion, where again this might be in patient-care systems. That's why I keep coming back to this AI-first healthcare concept. I think other industries will have analogous domains where, exactly as you described, Pete, you're understanding what's going on through sensor data, environmental awareness, whatever you want to call it.

Speaker 2:

Fantastic, very cool. Any last-minute questions from the audience? A few folks are saying no question. Oh, here's one, let's see, from Kais: is there a go-to dev kit from NXP, like NVIDIA Jetson or Raspberry Pi, to get started with edge applications? Davis, what's the URL?

Speaker 1:

Great question. Yeah, nxp.com will have it for sure. But I would say, in the i.MX family we have our EVKs, so you can request the i.MX 8M Plus EVK, the i.MX 93 EVK, or the i.MX 95 now. Actually, to this question, hot off the press in January we have our Freedom Boards, the FRDM boards, now launched, for example for the i.MX 93. If you're plugged in, or just looking at nxp.com, look up the Freedom Boards as a way to experiment. We recognize that the Jetson Orin is a good comparison for that kind of out-of-the-box experience; we're taking a few apart as we speak and looking at what they're doing. I think that's a good direction for the industry, and I'd call out our Freedom Boards as a good way to do similar work, enabled through our eIQ software ecosystem.

Speaker 2:

Right, cool, sounds good. Well, that takes us to the end, I think, of our segment. I'm going to put up our little ad here for Austin, let's see.

Speaker 3:

Davis, will you be in Austin?

Speaker 1:

I will be in Austin, and because we have a couple of minutes here, I will again highlight the Blueprint Awards group. Blueprints is a first of its kind; we've been doing it for a few months now, we've had a few talks, and more are on the way. If you have a great solution that you want to get nominated or highlighted, it's only a few minutes of work; you can find it on the event site. I'll be there, probably for that reason, but also because I love this ecosystem, so I'll be there.

Speaker 2:

Well, and also the winners of the Blueprint Awards will be highlighted in the Edge AI Foundation booth at Embedded World. So you have more motivation to enter.

Speaker 1:

Yes, yes, there are already a few that have been put in, and a few more come to mind, so I definitely encourage anyone watching: be there, submit. And Jenny, I know, unfortunately you won't be there, but for a good reason.

Speaker 3:

I'll watch the recordings and catch all the big tip-offs, so everybody should go and ask Davis all the questions they were going to ask online today.

Speaker 2:

Yeah, exactly. Awesome. Well, thanks again, Davis, and Jenny as well. Good to see you again, and that was a session really chock-full of information.

Speaker 1:

I hope the viewers got a lot out of it, and yeah, we'll see you on the other side. Thanks, everyone. Thank you, have a great one. Take care.