EDGE AI POD

Investing In The Edge: A VC Panel from AUSTIN 2025

EDGE AI FOUNDATION

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

This panel is really all about deploying AI in the real world, or edge AI in the real world. I think one of the things we've seen is that, especially over the past few years, we've been moving from some of the more theoretical parts of tiny ML and edge AI to more of the kind of practical, commercialized deployments. And I think all three of these folks can speak to what's happening with real deployments in the real world and we'll have some good conversation about that. So I'll let them sort of introduce themselves, if you want to go down the line here.

Speaker 2:

Thanks, Pete. So I'm Samir Bounab, founder of the Fractional. The reason I have some interesting things to say about the real world is that I was the CEO of Witekio before, doing embedded and IoT software for OEMs. I've been working with a lot of OEMs over the years, deploying real things in the field, be it tractors, coffee machines, medical devices, all kinds of applications, and there is a strong parallel between what happened in IoT and what is happening in edge AI. So I'll try to say some smart stuff today about that. Hopefully you learn something, and me too.

Speaker 3:

Yeah.

Speaker 4:

Hi everybody. Jags Kandasamy, co-founder and CEO of Latent AI. Prior to founding Latent AI, I did another company called OtoSense. We did TinyML even before it was called TinyML: we automated human hearing and pushed the models down to edge devices. With Latent AI, we built an SDK that optimizes ML models for different hardware targets. Let's see what comes out of my mouth.

Speaker 3:

Good morning everybody. Thanks for coming and attending the panel. I'm Sandeep Modvadia. I lead product management for a company called Wind River. We build real-time operating systems and mission-critical platforms. Long-time tech person, amateur hobbyist, wannabe, failed programmer. As we think about AI, within the Wind River business at least, we really just see it as a continuation of a long journey around data and the acceleration of technology: trying to figure out how we solve problems in more efficient ways, and some of the real-world practical applications and pains we're seeing that are both accelerating adoption and hampering it.

Speaker 1:

Yeah, cool. An interesting factoid: Wind River has 100% market share on Mars, because their stuff's in the Mars rover, so they've really crushed that Mars market. So let's talk about one of the ongoing themes here. We've all been through some of these different cycles and we've all had these experiences shipping and deploying product. I sort of like to say the IoT world is like that box of cables I have in my garage: this weird bucket of stuff that sometimes works together and sometimes doesn't. Now, with edge AI, we have an opportunity to really scale deployments, scale the technology as an industry. From the commercialization point of view, how do we avoid the IoT complexity, what I call the cul-de-sac of complexity, and make edge AI a much more uniform and scalable business on the deployment and commercialization side? Anything you can speak to there, maybe?

Speaker 2:

Yeah, I think it's really interesting to look at the driving forces, which will be the same. And when you say how can we avoid complexity, I'm not sure we can. At least we can acknowledge that there is a lot of complexity at the edge.

Speaker 2:

If you look at what happened at Google with their IoT platform, it's really interesting, because I think they've learned a lot from that. They were the first to shut down their IoT platform, three years ago I think it was, and we were all shocked. What they learned is that the edge is so custom and so different from one application to another that you need to spend a lot of time per customer, for customers that will do 20,000 units. At Google scale, that's nothing.

Speaker 2:

You have a lot of SDKs, the MCUs are so different from one another, you have bare metal, you have various OSes, and so they struggled a lot at the edge. A lot of those big players tried to conquer the edge and to make it scalable and unified, and it's really tough. Look at what they do now. If you look at Google's approach to AI and physical AI, they have Gemini, which is their working model, and they will make money with that, and it's closed source, not open source. Gemini will kind of be the model that powers not only the IT solutions but all the OT solutions as well.

Speaker 2:

And then they have Gemma, which is what will actually run at the edge, and Gemma is open source. That's really interesting to see. It's like, okay, you guys deal with your edge mess, we give it to you, you have Gemma, play with that. It's connected to Gemini when you need it to be, but we don't want to touch the edge. They provide the tools for it and they won't make money out of that; they will make money ultimately with Gemini. And I think this is what we need to acknowledge: the edge is custom, it's different use cases, it's low volume, it's a lot of energy, it's a lot of things like that. And edge AI will be the same. That's my view at least.

Speaker 4:

I will approach it from two ends: one is the defense sector and the other the commercial sector. In defense, edge is critical because you are going to be deployed in austere environments, in disconnected environments. Connectivity is not guaranteed, and even if the connectivity is there, it can be jammed any minute. The first thing the adversary does is go after the electronic signals, the RF, and jam them. So for your functions and your applications to work, whatever capability you've deployed needs to be running locally. Edge becomes mandatory. It's not a nice-to-have, it is mandatory.

Speaker 4:

That's the defense side, where we are focused. On the commercial side, as Samir mentioned, there's the fallacy of IoT. It's been going on and on for many years and people are finally realizing it. If you're speaking to a large manufacturing customer, the cost of operating, say, 200 IoT devices is exponentially more than operating two servers, running cables to them and doing all the processing on the server. Of course, the server is still on the factory floor, not in the cloud. So the question of sensor-to-server is what's coming up in the market right now: what are you going to do at the sensor level, and what are you going to do at the server level?

Speaker 4:

The Linux Foundation published this thing called the edge continuum back in, I think, 2017 or 2018. We've adopted that and carried it into both the defense space and the commercial space: how do you run different intelligence at different layers of the network? A lot of large companies, as Samir said, have departed from the IoT platform play, and it is all about how you apply intelligence and how you get the best value at each one of those layers.

Speaker 3:

I'll answer this in two ways, I think. The first one is that IoT never had its Jerry Maguire moment. There was never the "show me the money" where somebody said, yeah, I'm going to make a billion dollars out of being able to pull all this data out of these devices. There was this great notion that, intrinsically, the data had value. Google was out there mining data on the consumer side and we thought, well, there are even more devices than there are people, so there must be value in the data. I'll get the data and I'll figure out the monetization afterwards. And the monetization never came. And I'll tell you a story.

Speaker 3:

When COVID hit, I was working with a large automotive OEM. They were pulling telemetry out of all their vehicles, storing petabytes of data, and they were trying to reduce costs; the CEO was looking at cost savings. It was the first three weeks of COVID, when they thought the world was absolutely going to end, and they were looking at all the ways to save money. They looked at all this data they'd been collecting for years and the CEO basically said, we're spending millions of dollars on storage and we're not getting any value out of it. It's like, the value will come, trust us. Well, when? We don't know. And I think that was the failed promise, or the missed promise, of IoT: IoT wasn't the end state, it was plumbing, and the reality is nobody buys plumbing without actually having an end application, and the applications never came. So part of where I'm encouraged, or at least want to vindicate IoT, is that IoT was the foundation for a lot of the work we do now for AI.

Speaker 3:

Just analytics wasn't enough; you need a use case that actually solves a problem. And so, yeah, we're about to have our Jerry Maguire moment. It's going to happen; we'll be able to say "show me the money." But it wouldn't have happened if IoT hadn't gone there first. And that's a common pattern with a lot of technology. Think back to the Napster days, when peer-to-peer technology first came out and people were file sharing. File sharing and music distribution didn't end up being the killer app for peer-to-peer; it ended up being Teams and Zoom and Skype, peer-to-peer communication, not file sharing. So from my perspective, I look back fondly at IoT. We may have had rose-tinted glasses about what it was actually going to achieve, but the reality is we wouldn't be here today if it hadn't laid the foundation for what we're doing right now.

Speaker 1:

Right, yeah, it makes sense. One thing that would be interesting to hear from all three of you: you're all in the solution provider or technology development business. How do you think about developing product in the edge AI space that's commercialization-ready? For folks out here in the technology development space, what should they be thinking about so their products are designed from the beginning to be more easily commercializable and supportable? What lessons have you learned in that space? Maybe you can start. You can start from either end, by the way.

Speaker 2:

You don't have to keep going in the same order. I think what I find really interesting when I look at the demos here is that things are ready now, meaning it's working. There is almost no demo effect. All the demos work, which is pretty cool; they work really well, which was not necessarily the case last year.

Speaker 2:

And that says something, meaning the technologies are ready now. A lot of people will try to go beyond that and keep innovating; I would say it's not necessary if you want to sell. Things are working, so don't try to be cutting edge. Try to have something that works really well, because whenever you go to the field, you need to be working 99.999% of the time, if not more, and so you don't want a technology that's brand new from this year or last year. Something from three years ago is good enough. That's kind of the conundrum for innovators, between doing something cool that we like and doing something that people will actually buy. People want something that's not fancy but that works. That would be my biggest advice, especially at a conference like this, with a lot of techies, a lot of PhDs, a lot of "yes, that's fun," but in the field it's usually less fun. It's about something that works.

Speaker 4:

Good. So can I get a show of hands: how many academics in the room, how many technology vendors or startup vendors, and then industry people? Right, academics? Okay, I got a few. All right, how about startup founders or technology vendors?

Speaker 3:

Okay.

Speaker 4:

What about industry people who are looking for solutions?

Speaker 1:

Okay.

Speaker 4:

It's a bit of a smattering. Thank you. All right, so to answer your question, Pete: whatever technology you're building today is going to be applicable two to three years from now. The customers don't know that they need it right now. That doesn't mean you should stop building; you should build. But from a revenue perspective, what you built two years ago is what's going to make you revenue today.

Speaker 4:

And I would disagree with you a little bit. A product doesn't need to be 99.9%; that's a solution. A product is about 80 to 85%. You can get there. You're not going to solve everybody's problem with your product; you're going to solve the big rocks for the customer, and that is how you can actually make that product applicable across different customer segments. Then a solution provider, or even the customer, will have unique requirements on their end that are not going to be repeatable, even within their own sector. So don't try to solve that. Don't become a solution provider. You're a product company; build that product, because the technology is going to enable a product, not a solution. The solution is what the customer will implement.

Speaker 3:

From my perspective, a lot of it just comes down to what's the use case and what's the value prop you're trying to solve. A lot of times we jump on the tech bandwagon, and my example right now is that every company wants to be an AI company. Ninety percent of the resumes I receive right now are littered with AI terms, and people have apparently been doing AI for the last 20 years or whatever. Before that I was a cloud expert for 20 years, and before that a blockchain expert for 20 years, and now I'm an AI expert. It comes down to something very simple: what's the problem you're trying to solve and what's the gain you're trying to drive? Sometimes we over-engineer it. What I mean by over-engineer is: we're going to become more productive, and we're going to save money, and it's going to be more convenient, more efficient, and lower risk. The reality is, in most cases, you're not going to achieve all those gains and benefits. So what's the one singular thing you expect this technology to solve for you? It's going to be different for every use case. The failure we see in market is where we treat AI as the silver bullet or the panacea that's now going to solve all of our problems. It's not. Pick one. That applies both for consumption as a customer and for vendors, ISVs, software companies, or technology builders. When you try to over-engineer it, you're going to fail, and we see this consistently day by day by day, and that's why a lot of these early POCs fail: because the vendor promises the world or the IT team promises the world.

Speaker 3:

You pick one, you do it well, and then you earn the right to go for the second one, or you earn the right to solve the next problem, and the next problem may use a completely different technology set. You may be worried: well, what about the fragmentation of tools and solutions afterwards? That's a tomorrow problem. Figure that out later. You have to solve the first problem, which is: can I actually demonstrate value? And the hard part is determining what value you want to demonstrate. Can you talk more about what you guys are doing specifically? What are we doing specifically? Yeah, so Wind River: we're an operating system, platform and tools company for mission-critical systems. That's what we do.

Speaker 3:

When people talk to us and say, hey, what are you doing around AI, tell me, you guys are experts in this field: no, we're not. We're not AI experts, and that's the fallacy, and we're not trying to be.

Speaker 3:

When I think about how Wind River is using AI, I think about it in three ways. First, are we a consumer of AI tools to make ourselves more efficient and more productive? And we do: we use AI in-house, within our development and testing teams, to write code more efficiently. Developers and great engineers are expensive and a scarce resource, so if we can make our existing teams more efficient, great. That's one. The second one is when people ask, okay, are you building your own AI tools for other people to use? Point blank, no. We don't have the data, that's not our area of expertise, and we don't have a long history of a massive data science team, so the flat answer is no. But the third piece is, okay, how do you play in the ecosystem? And the way we play is we make the experts who are actually doing AI more efficient; we provide the right environment.

Speaker 3:

Our operating systems will do three things. The first, as we've heard from everyone here, is that we need to make sure we not only support the silicon that powers AI, but can truly light it up within our operating systems and platforms. New silicon has emerging features that make it better for AI workloads than traditional CPUs, so we have to make sure we support that. We look at the market and say, okay, all the top silicon, we have to support natively. The second thing is, it's not good enough to have a standard C++ SDK in there for writing applications. I need to think about Jupyter, about PyTorch, about all the different frameworks and libraries out there, and I need to make those really easy for people to use on my platform. And the third thing is, we heard about the SWaP restrictions: size, weight and power.

Speaker 3:

Traditional applications behave differently from AI applications, and I cannot predict how those AI applications will behave; they're going to be continuously changing and they're going to act in a polymorphic way. But what we do know is they're going to place greater demands on the platform itself. So the traditional ways we think about energy optimization, about power consumption, about just basic processing to run software in a deterministic way, have to change, because the application is changing as well. So in terms of what we are doing specifically: we are changing our platform so that anybody here who's writing an AI application can turn around and say that's the best place for it.

Speaker 3:

If I'm going to run an AI application that has to run in a deterministic way, with greater than five-nines availability, that has to be reliable, where am I going to do that? It's like deciding where you're going to buy a house. It needs to be in the right location, close to schools; I want to be in a good neighborhood, all those things. That's us. We want to build the right neighborhood for people to come and play and live and work, and all of those things.

Speaker 3:

But we're not trying to compete with an AI vendor. We partner with Edge Impulse, we partner with ZEDEDA, we partner with all these other companies because, quite frankly, we know how to stay in our lane and we don't want to overreach. And that's again to that point about value: that's how you fail, because you try to be all things to all people and you want to jump on the hype cycle. Then you look really embarrassed in two years' time when you rebrand your company to, I don't know, Metaverse or something, and it turns out that isn't what you want to do for the rest of your life.

Speaker 1:

Good points, I think. Actually, Jags, I want to talk to you about this because you guys are very involved in aerospace, and it's a very regulated, deterministic world. I've dipped my toe into that world and it's fascinating. And also, Sandeep, you were mentioning one of the things that's changed, going back to the embedded systems days.

Speaker 1:

Back in the embedded systems days, we built something, we flashed it or burned it, we shipped it and we forgot about it; it was totally disconnected and that kind of worked. Then we got the Internet of Things and a little bit of a data pipe to send stuff up to the cloud. Then the pipe got better, so we said, oh, we can send stuff down to the device and now we can do firmware updates. But now, in this world of edge AI, there's this continuous flow of data; the models are getting updated. We talked about continuous learning earlier today, in the commercialization and deployment sense. And I'm curious, in aerospace, how are they thinking about the software load on a tank or whatever it is, that's going to keep evolving? It must kind of freak some people out; there's model drift, there are things that need to be accounted for in AI, right? So how is that going to work?

Speaker 4:

The good thing about the defense space is that they understand that. They understand that models drift. Of course, you've got the generals who are like, oh, send it and forget it, it'll work because we've trained it. They think of it like the infantry: you've trained the soldiers for warfighting functions, those warfighting functions are not going to change, you send them out and they do that. But technology doesn't work that way, and AI in particular doesn't. So one of the things we have been professing, and talking about heavily in policy circles as well, is to differentiate and make a clear cut between the application and the model. The application has a completely different life cycle, and the model a very short life cycle. How do you keep updating that model as frequently as possible? We've conveyed that, we've gotten it into people's heads, and we have a deployment (this is public news, so I can share it) where we helped the Navy deploy a mine-detection model to run on a submarine drone, an unmanned underwater vehicle: having the ability to update that model as we get new data, to retrain it at the edge and then redeploy it, and doing this over an acoustic link. Think about this: you don't have your 5G connection to do this. It's underwater, over an acoustic link, and you're doing these updates. They have come to grasp that. So that is coming up.
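(A minimal sketch of the "separate the model lifecycle from the application lifecycle" pattern described above. The directory layout, manifest fields and polling interval are assumptions for illustration, not details of the Navy deployment.)

```python
import json
import time
from pathlib import Path

# Hypothetical layout: the application never changes, while new model artifacts
# arrive (e.g. over a slow or intermittent link) as versioned directories, each
# containing a weights file and a small manifest.json describing it.
MODEL_DIR = Path("/var/lib/app/models")

def latest_manifest():
    """Return the manifest of the newest model version, or None if none exist.
    Assumes version directories sort lexicographically (v001, v002, ...)."""
    manifests = sorted(MODEL_DIR.glob("*/manifest.json"))
    return json.loads(manifests[-1].read_text()) if manifests else None

class ModelRunner:
    """Owns the currently active model; swapping it never touches the app code."""
    def __init__(self):
        self.version = None
        self.weights_path = None

    def maybe_reload(self):
        manifest = latest_manifest()
        if manifest and manifest["version"] != self.version:
            # A real system would deserialize the weights (ONNX, TFLite, ...)
            # and verify a signature before activating the new model.
            self.version = manifest["version"]
            self.weights_path = MODEL_DIR / self.version / manifest["weights"]
            print(f"Swapped in model {self.version}")

runner = ModelRunner()
while True:
    runner.maybe_reload()  # model lifecycle: short, updated as often as possible
    # ... run inference with the current model; the application itself is unchanged
    time.sleep(60)
```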

Speaker 4:

And the other key factor to consider is test and evaluation. Anything that you build on the enterprise side and deploy goes through a stringent testing and evaluation process, from a policy perspective as well. DoD is very, very clear on this policy: a human has to be in the loop of the kill chain. What I mean by the kill chain is, if you need to deploy a munition, you're deploying kinetic force; you need to make sure there is a human making the decision, not AI doing it by itself. So all of that needs to be tested and evaluated on the enterprise side. And when you're doing this on the edge, you need similarly stringent processes, but at the same time you don't want to overdo it, spending three weeks trying to test and evaluate at the edge when the war is happening in milliseconds.

Speaker 1:

Interesting. Do you guys have any comments on this more fluid nature of the software load on a deployed platform? I'm sure Wind River is trying to figure out how to do that too.

Speaker 3:

We've got systems that have been out there running for decades, literally software that's been running for decades and still works. I'll give you an example. We have large airline partners whose planes are 20 years old and run the exact same software. And woe betide anyone who goes to try and patch that system: no, it works, you don't touch it. Likewise, you can't randomly go in and say, I want to test a brand new model or a brand new patch on the Mars rover, and have it break, because if it breaks, it's a pretty long round trip to roll a truck out to Mars to fix that thing. So we're still learning; I don't have a definitive answer.

Speaker 3:

I think what we know and expect and understand is that there's a greater level of resiliency expected from the platform upon which AI runs, because models are going to be dynamic in nature; they are going to change and shift, and that creates a sense of unpredictability that needs to be constrained. That's a hard challenge we're all going to have to face, and it's not a one-time challenge, it's a new norm. What you're going to have now is a platform that needs to be constant and reliable and bulletproof, and then you need to, I hesitate to use this word, but you almost need to containerize the dynamic nature on top, so that it can do all of what it wants to do, but when it goes out of bounds you can still rein it in, pull it back in. In the old days the system had a very simple linear path of activity and you could test against that path of linearity. You can't do that anymore.

Speaker 3:

It sounds very trite and cliche, but you have to accept that the unexpected is going to happen. Then the question is how you preemptively put the right guardrails around it and be cautious. We're in the infancy of this, and because we're in the infancy, we just don't know these answers. But what we do have to be conscious of is the risk management that goes into dealing with the unpredictable events that are going to occur. And if you're doing that in a mission-critical setting, where lives are at risk, in a safety environment, we're probably going to move a lot slower there than with, I don't know, edge AI in a retail environment, where the worst thing that happens is somebody can't buy their eggs and bananas.

Speaker 2:

On that point, and maybe to connect it to the topic of today, which is the real world: if you look at OEMs, all the people that have those devices and design and sell those devices, the answer to your question is they are not ready at all. Some don't even have device management as of today. A lot of OEMs don't have the infrastructure to maintain a device. Some are lucky enough to use Wind River or that kind of technology, because they have the money for it and they have advanced applications and so on, but for a lot of general-purpose embedded devices, you would be surprised how not ready they are.

Speaker 2:

And whenever you have a device in the field, you need to update it, you need security mechanisms, you need to be able to explain how you correct vulnerabilities, and so on. The same will apply to edge AI. So the advice I could give for the real world is to abstract that complexity for OEMs. They want a solution that works, and not only the feature itself but the whole lifetime of that model needs to be abstracted, because they don't have the people to do it, they don't have the knowledge to do it, and it will be 80% of the pain related to using the feature itself.

Speaker 1:

Right. Why don't we open it up for some questions here with this esteemed panel please?

Speaker 8:

Oh yes, great talk, especially the emphasis on doing stuff in the real world. Just to give a quick intro, I work in healthcare; I did a lot of work in rapid sample-to-answer for cancer diagnostics and infectious disease, and I'm using this terminology for a reason: buzzwords. There's still no fast way to do a blood test for cancer diagnostics, and still no fast way to do it for infectious disease.

Speaker 8:

What I'm noticing with how we're using language around edge AI: it's very easy to say, but as you give your descriptions, you're really talking about a lot of different computational stacks. Most of it's not even AI at all. It's not artificial, it's not intelligence; it's just an algorithm that needs constant input and refinement in how it controls its output, which is basically what cruise control is, a PID loop. So how do we get ahead of really ruining great, promising industries, because we're advertising to laypeople who don't care about these fancy words or how they're really applied? How do we fix that? Because edge AI is going to be like IoT: it sounds like it should work, but it really doesn't, because we're not making it work, because we don't describe the needs. How do we resolve that?

Speaker 2:

Just on that, I think we are a bit unfair on IoT. It's there, and the reason we say it's not there anymore is because it's in the vertical use cases. I remember saying six years ago that IoT will be successful when we stop talking about IoT, because that will mean different business applications are leveraging IoT as a tool. Edge AI will be the same: we'll be successful when we don't need to speak about edge AI anymore. It will be vertical, in healthcare, in industry, in logistics, in consumer, and so on. It's a natural process. You need that hype for people to pay attention and figure out that they can do things with it, and then that hype fragments into the different sectors.

Speaker 2:

And the second thing, on what you said about us talking about edge AI when some stacks are not even AI: I think some people are also using edge AI in cases where it's not needed, where a good old algorithm would work. Back to my previous point about the real-world OEM: what do they want to achieve? Do they even need edge AI? But more generally, it's normal that we have that hype. Let's just surf it to bring new features and new capabilities to devices in the field.

Speaker 4:

So it is all about how you frame the edge. If you're always thinking, hey, can I put some algorithms on my camera or my sensors, and that's the only place I'm going to be doing the processing, then you're probably going to be wrong and you're going to fall flat. We've got several real-world examples. Take a digital advertising board: setting up a camera there that's counting the cars driving by and the people walking by, just to get a footfall count. I'm not going to stream that data over a 5G link back to a data center or a CDN or whatever to do the counting; I'd be wasting a lot of money on bandwidth. So there I can put a Raspberry Pi or a small sensor with very minimal compute. I don't care who the people are, I don't need to keep a record of that. In that case, that's edge AI right there.

Speaker 4:

In a factory setting, I've got a bunch of high-megapixel cameras, hundreds of them, and I'm not going to process all of them at the location where the cameras are, because it's going to be a nightmare servicing them, cabling them, casing them and all of that. That is the IoT fallacy. I'm already networked, so I'm going to route all of them to a server inside the factory and process them there. That's edge AI for you too. I can go on like this.
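(To make the billboard example concrete, here is a rough sketch of "count locally, send only the number" on a Raspberry Pi class device. The model file, its output layout, and the per-frame counting are assumptions for illustration; a real deployment would track people across frames rather than sum raw detections.)

```python
import time
import cv2                                   # camera capture; any frame source works
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight runtime for a Pi

# Hypothetical quantized person detector with an SSD-style output
# (boxes, classes, scores, count); only the scores are used here.
interpreter = Interpreter(model_path="person_detect.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

cap = cv2.VideoCapture(0)                    # on-board or USB camera
footfall = 0

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    h, w = inp["shape"][1], inp["shape"][2]
    x = cv2.resize(frame, (w, h))[None].astype(np.uint8)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out[2]["index"])[0]
    footfall += int((scores > 0.5).sum())    # count detections; no images are kept
    # Only the aggregate count ever leaves the device, e.g. reported hourly.
    time.sleep(1)
```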

Speaker 3:

Right, it is about how you frame it and how you sell it. I'll say one thing: you should take advantage of the hype. Right now, there's not a CEO in the world who's not being asked by their investors what they're doing around AI, so you can use that as a blanket banner for anything. Oh, it's an if-else statement on steroids? Yeah, that's AI. Again, it's all about how you frame it, because the reality is AI is not new.

Speaker 3:

I'll give you an example: TensorFlow is, what, 10 years old now? AWS has had ML services in their cloud for probably over a decade. So the stuff we're talking about is not new. The only difference is ChatGPT came out two years ago, consumerized it and made it accessible to everyone, and all of a sudden this is the new hot thing. It's not. But if it gives you the opportunity to have the conversation, to unlock budget, to get the dollars you need for a POC, then sure, of course it's AI. Take advantage of that wide banner, because it allows you to do anything. It's the unlimited credit card right now that lets you go and spend.

Speaker 3:

Because if you say, yeah, I've got a compliance project or a research project, it's not quite sexy. But you've got an AI project? More than that, the specificity of an edge AI project? Well, that sounds interesting. That's something I can go and speak to my board about, something I can put in my next investor letter.

Speaker 8:

Hey guys.

Speaker 5:

Miguel Morales, I'm a commercial lead for an IoT platform called Cumulocity. Appreciate your presentation. I haven't seen this many new processor architectures since I joined the market; I was working on MSP430, then Cortex-M came to be, then Cortex-A and so forth. Can you talk about the trends behind the containerization comment you made? It's not really possible to abstract how we deploy these models, and I would imagine it impacts the investment you have to make when you're enabling the AI models, whether it's something like a TPU or a GPU. But even here we've got custom processor architectures that are really optimizing for energy. What are some of the trends you see around abstraction and containerization to deploy AI that will ultimately benefit adoption?

Speaker 3:

Do you want to start? Go for it. I will say, from our perspective as a commercial organization, we're going to be customer-led. What I mean by that is we listen to our customers in terms of what hardware they're picking and choosing, and then we start to enable that. Most of our customer projects don't start with them saying, well, we want to start with the software or the OS or the libraries.

Speaker 1:

We want a RISC-V project.

Speaker 3:

Yeah, they actually start with the hardware. They start with the silicon and then they say, based on this silicon, Wind River, can you work with us? And if not, that's fine. I'll give you an example. When we work with automotive OEMs and they're building their next ADAS stack for self-driving, they don't come to Wind River first. They'll go to Qualcomm, or Samsung, or Renesas or whoever. So from our perspective, we would love to support the absolute long tail, but what we're going to do is turn around and say, okay, which ones do customers want, and build based on that. It is expensive and it is unscalable, but that's the reality: we're going to go where there's demand and we're going to go with the largest market share.

Speaker 3:

My sense of what's going to happen is, to your point, there is a huge amount of fragmentation because we are in the early days. Not all of these players are going to survive; in fact, most won't. The question is: do they just collapse into obscurity and you end up with something that's unsupported? Do they get acquired and the portfolio gets integrated into somebody else's broader ecosystem? Does something else happen? I don't know. The one thing I'm absolutely sure of, and would absolutely guarantee, is that the number of platforms out there is just unsustainable. They physically can't get the economies of scale across fab, across R&D, across software support to get any meaningful market share that will allow them to be commercially viable in the long term. So, as a vendor ourselves, we are placing bets on who we think is going to win, and we place those bets based on where our customers are placing theirs, because they're actually putting their money down and that's a great signal for who we think is going to win.

Speaker 4:

So we build compilers, so this directly applies to the question. We support all the common brands: the NVIDIAs, the Intels, the Arms, the AMDs, and Qualcomm as well. The moment we go down to the long tail, what we've done is adopt ONNX as the format; ONNX is the Microsoft format that everybody's using, so we've kind of normalized on that. Anybody that can ingest the ONNX format can use our tooling and go there. Will they get the same optimization as the others? Probably not. But if they're willing to work with us, and again, as Sandeep said, if there is customer demand, we can go in and say, all right, here is our IR library, you can map to the IR and we will support that at a deeper level, so we can compile and run for you. Cool.
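(A generic sketch of what normalizing on ONNX looks like in practice: plain PyTorch plus onnxruntime, not Latent AI's tooling. The tiny model and file name are placeholders; any hardware-specific compiler that ingests ONNX could pick up the same file.)

```python
import torch
import torch.nn as nn
import onnxruntime as ort
import numpy as np

# Placeholder model standing in for whatever the framework produced.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
dummy = torch.randn(1, 16)

# Export once to the common interchange format.
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=17,
)

# Any runtime or vendor compiler that understands ONNX can take it from here.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"input": np.random.rand(1, 16).astype(np.float32)})[0]
print(logits.shape)  # (1, 4)
```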

Speaker 1:

Any other? Oh, we got another one up there.

Speaker 6:

Hey, my question is more for Jags, since you mentioned defense, and you do extensive test and evaluation, and maybe a human with a kill switch. Coming from a company whose customers are also first responders: could you talk more about your deployment cycle in the real world, where we have zero tolerance for false negatives or false positives?

Speaker 4:

So with AI and the amount of data you get to train your models, there needs to be acceptance of a certain threshold of accuracy. You're not going to get your five nines for this. The customer needs to understand that 90% is the best result they're going to get, and they've got to live with that. Once that expectation is set very clearly, everything is easier from there. With a non-deterministic system the results sit within a threshold, they're not absolute. How do I say, okay, between 88 and 92 is acceptable? If it is within that range, proceed. If not, you've got to question that system.

Speaker 4:

Those are the expectations we set, and even in testing and evaluation we set them as we go into the field. For example, every time you build a model in the enterprise and go deploy it in defense, you don't do this in war, you do this in test and evaluation events. There are T&E events around the year, around the country, where we do this. The moment we go and deploy, 100% of the time it fails. That's why these events are a week long. Day one you deploy and we expect it to fail. The moment it fails, we figure out what corrections we need to make and what retraining data we need. We collect it on day one, retrain, and then we test it out by day five. If we get to that threshold, we're happy, but most times we won't get there either; at the next T&E we'll get there. So that helps. Thank you.
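(The acceptance-band idea from this answer, as a trivial sketch; the 88 to 92 numbers are just the example given above, not a standard.)

```python
def disposition(accuracy: float, low: float = 0.88, high: float = 0.92) -> str:
    """Acceptance-band gate: proceed only when measured accuracy is inside the band."""
    if low <= accuracy <= high:
        return "proceed"
    # Out of band in either direction means the system (or its evaluation)
    # has to be questioned before the capability is trusted in the field.
    return "question the system"

print(disposition(0.90))  # proceed
print(disposition(0.81))  # question the system
```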

Speaker 1:

Good, any other questions out there?

Speaker 7:

You said it before, but what is edge AI in 10 years, if 10 years ago it was TinyML and IoT platforms at the edge? It seems like we're a little bit stuck in this problem of where the IoT platforms go, but we'd love to hear it. It's 2036. What is it?

Speaker 4:

I would say edge AI as a term will not exist. It's just going to be AI everywhere. I don't think AI is going to be divided into edge or cloud or anything; it's just going to be AI everywhere. You're going to have use cases and outcomes that are driven by AI. Everything that has compute on it will be running some sort of intelligence.

Speaker 2:

Yeah, I think it will be a combination of vertical solutions that won't be named AI at all, where it will be a given that they're powered by AI, and horizontal solutions from the people who provide the building blocks for the ones providing the vertical solutions. But definitely the term won't exist anymore in 10 years. You'll have to change the name of the foundation again. I've got 10 years, though. I've got 10 years.

Speaker 3:

You can buy the hoodie for 10 years. Give it five, I don't know. I mean, if I had a great idea, I'd probably be working for an investment bank and trading on that idea. So I would take it based on just secular themes: compute grows, things get smaller, things get cheaper. I completely agree with what Jags said on the AI side: AI will be everywhere and we'll just accept it as a standard. I think the question is what happens to the edge, and my thesis is it just becomes more transparent and more ambient. We had this notion that every device is going to be a smart device; I think we'll be closer to that paradigm than anything else, and more things will be connected and more things will independently think. But how that changes or manifests, or where the value pools are, I have not a clue.

Speaker 2:

Maybe one more prediction, one thing which will probably happen and will be really interesting for us: AI has fueled the trend of edge AI, so it came from AI to the edge. I believe that all the innovation done at the edge will be used backward in AI, for energy efficiency especially. A lot of the constraints we have at the edge are valid for big AI, because they have this infrastructure problem, the energy consumption, and I'm sure it will flow back from the edge to the IT side of things.

Speaker 1:

Yeah, great. Well, I really appreciate having you all up here. Why don't we give them a round of applause for their insights? Really appreciate it. We're all about connecting AI to the real world, and that's what this talk was all about. So, thank you.