
EDGE AI POD
Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed covering all things from the world's largest EDGE AI community.
These are shows like EDGE AI TALKS and EDGE AI BLUEPRINTS, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics.
Join us to stay informed and inspired!
From Sensors to Solutions: The Future of Edge AI with Chad Lucien of Ceva
At the crossroads of cutting-edge technology and practical innovation stands Ceva, a semiconductor IP powerhouse with a two-decade legacy. Powering nearly 20 billion devices worldwide and shipping over 2 billion annually, Ceva has emerged as a crucial enabler in the burgeoning edge AI ecosystem.
What distinguishes Ceva in this competitive landscape is their holistic approach to edge computing. Rather than focusing solely on neural processing, they've strategically built solutions around what Chad Lucien describes as the three pillars of edge AI: connectivity, sensing, and inference. This comprehensive vision has positioned them as the industry's leading Bluetooth IP licensor while developing sophisticated DSP solutions and a scalable NPU portfolio that ranges from tens of GOPS to 400 TOPS.
The secret to Ceva's effectiveness lies in their deep integration of hardware and software expertise. "The software is becoming the definition of the product," notes Lucien, explaining how their deep learning applications team directly influences hardware specifications. This software-first perspective has created solutions tailored for low-power, small form factor devices across diverse applications. From earbuds and health trackers to consumer robots and smart appliances, Ceva's fully programmable solutions handle everything from neural network computation to DSP workloads and control code.
Most exciting is Ceva's leadership in the Audio ML renaissance through their work with the EDGE AI FOUNDATION's Audio Working Group. As audio applications shift from traditional DSP implementations to neural network-based approaches, we're witnessing transformative capabilities in speech enhancement, anomaly detection, sound identification, and edge-based natural language processing.
Discover how Ceva is providing the essential "picks and shovels" for the AI gold rush and why collaboration remains the key to unlocking the full potential of intelligence at the edge. Subscribe to hear more partner stories shaping the future of edge AI!
Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org
So, Chad, good morning. Or I should say, you're in Israel, so good afternoon. I'm in Bellevue, Washington, through the magic of StreamYard. Thanks for joining me on this Edge AI partner session. Welcome.
Speaker 2:All right, thanks for having me. I appreciate you making the time.
Speaker 1:Look forward to our little chat. You're usually on the East Coast, in Maryland, right? But Ceva headquarters is in Israel?
Speaker 2:Yeah, well, technically the headquarters is Maryland, because we're a US NASDAQ-listed company. But there's a pretty big contingent of the business, about half the people really, based here in Herzliya in Israel. Much of our executive leadership and R&D, and some of my team from the sensor and audio business, are based here as well.
Speaker 1:Right, cool. Get some miles on your account with that arrangement.
Speaker 2:Exactly, and if you can get some sleep along the way.
Speaker 1:Yeah, I can imagine, right. Well, you know, edge AI is a worldwide thing. It's happening everywhere in the world, so many companies are involved, and Ceva is definitely right in the mix. Maybe you can give us a little background on Ceva and kind of the origin story. I know you guys have been around a little while, but you're now sort of right in the nexus of what's happening with edge AI, so why don't you give us a little context?
Speaker 2:Sure. Well, you know, Ceva is actually a relatively old company, right?
We've been, you know, delivering various semiconductor IPs to the industry for more than two decades.
There are nearly 20 billion Ceva-powered devices out there in the world today, and we're shipping over 2 billion a year across a whole wide variety of industries and technologies. The company really started out as a DSP IP licensing company focused on multiple markets and has expanded over time. Today our strategy is centered around providing solutions for edge AI in the three pillars of connectivity, sensing and inference, the AI portion of edge AI. Ultimately, any edge AI solution really has a combination of some connectivity to the network, the input, the sensing itself, and the inference. So our product portfolio is designed to deliver solutions across all of those. As an example, we're the world's leading licensor of Bluetooth IP, we have a portfolio of DSP IP solutions for semiconductors and a variety of embedded software solutions for audio and voice and sensing applications, and we also have a scalable, pretty complete NPU portfolio as well, which scales from the tens-of-GOPS type of range all the way to 400 TOPS type of capability.
Speaker 1:Okay, yeah. I mean, one of the things that I've found in talking to partners, and I talk to partners every day, as you know, it's kind of my job, is that a lot of people don't realize, when they think about AI, and I guess edge AI too, that it's not just, oh, you know, stick an NPU on a device and run some AI workloads. The AI capabilities are embedding themselves in pretty much every function that you can imagine. Communications, as you mentioned. I was talking to someone doing hard drive controllers and putting AI in there for fault tolerance and telemetry. So imagine AI workloads running in dozens of places across a given system, and then, when you think edge to cloud, it's going to be pretty ubiquitous. So it's interesting that you guys are looking at it pretty holistically, from the sensing, the connectivity and the processing perspective.
Speaker 2:Yeah, exactly. I think it gives us a unique position and perspective in the market. In defining the low-power NPU solution, the Ceva-NeuPro-Nano technology that my business unit is responsible for, a lot of what helped in the definition of that product is the fact that we have a deep learning applications team. So we came at it as deep learning engineers, as software developers, asking what hardware is actually necessary to enable applications efficiently in these small form factor, low-power device domains. Having that combination of skill sets around the company, I think, has really put us in a unique position to effectively define AI solutions for the market.
Speaker 1:Yeah, and I think that's also what differentiates a lot of companies, you know, kind of going a little farther up the stack and being more comprehensive in how you solve some of these issues for customers. It really makes a big difference, the difference between, like, an 18-month deployment cycle and a six-month deployment cycle, right? It's like you've got more of the solution sort of figured out already, and you're able to optimize and work through all the issues, so that when a customer then engages with your IP, they know there's a pretty clear path to shipping, right.
Speaker 1:No, I think that makes a lot of sense. And you mentioned before the IP licensing business. So if someone's building a chip, you're not a fab; you're providing the IP to a company that is designing a chip that is then going to get fabbed, right?
Speaker 2:Correct, right. And not just providing, you know, the RTL and the low-level hardware, which we are; we're providing the hardware IP to design the chip. But, especially in the AI domain, software is becoming increasingly important, if not essential, right. Ultimately, in large part, the software is becoming the definition of the product, and the hardware needs to be able to execute everything efficiently and in low power. But the combination of the hardware and software is really what the product has become today. And, fortunately, the hardware gets fixed in time, but the software lives on, and I think that's one of the huge opportunities we all have in edge AI: being able to continue to innovate.
Speaker 1:Yeah. Well, especially in this space. I was talking to someone who's in the neuromorphic space, you know, and in a lot of AI you're writing software and models to sort of fit into more of a traditional computer architecture. But as you get toward the edge, especially the lighter edge, there are opportunities to think differently about architectures and to do things more efficiently, more power-efficiently, more reactively, with higher performance. And that innovation is just happening sort of on a monthly basis these days.
Speaker 2:Yeah, exactly.
Speaker 1:So you guys are kind of like the picks and shovels. You're providing picks and shovels to the gold miners. You're kind of the Levi Strauss of edge AI, I guess, right?
Speaker 2:Yeah, sure, that sounds good.
Speaker 1:Good position to be in. So what kind of markets? Go ahead, go ahead.
Speaker 2:No, no, I agree, it is a good position for us to be in, you know, as the enabler of the industry and the markets.
Speaker 1:Well, so, since you started to ask what kinds of markets, yeah, I was going to say: what are your hot industries these days that are taking advantage of edge AI?
Speaker 2:Well, the NeuPro family that we have is, as I mentioned, scalable, right, from very small types of devices in the tens-of-GOPS sort of compute range all the way up to the big, heavy gen AI at the edge solutions. I'm personally focused down at the smaller end of that scale and spectrum, and the place that is the most exciting for us right now is really the more general-purpose, low-power microcontroller device market.
We're starting to see the MCU companies looking to add AI capabilities to their devices to enable a whole wide variety of markets.
So it's not a specific application or device segment.
We need to have an NPU architecture and a software SDK solution capable of covering everything from audio and voice applications to sensing applications, to the lighter end of the computer vision scale of applications, and we're even now seeing a pretty strong desire to run small language models on these low-power devices. They're going into anything from earbuds and headphones to appliances to consumer robots, glasses, health and fitness tracking, sensing devices. So there's a whole wide variety of applications, and an accelerator that's specific to enabling one type of application, a very narrow solution, isn't enough. We need something flexible, which is why we focused on designing a solution that's fully programmable. It can handle everything from the NN compute all the way to the DSP workloads and the control code, which makes it much more compatible with the type of architecture that a microcontroller company would logically want to deploy.
Speaker 1:Yeah, and you mentioned, I know we had a live stream recently from one of your colleagues on audio ML. Audio has become sort of a hot area again in the AI space, because there are so many sensing possibilities with audio: listening for certain things, wildlife, environmental management, security, all kinds of things. I think of using audio instead of a camera in some cases. And actually, you guys are also running our audio working group, so in the foundation itself there's a new working group just for audio ML, which is pretty cool. Audio seems to be getting a bit of a renaissance these days.
Speaker 2:Yeah, I completely agree. I mean, audio for a long time was really focused on DSP-heavy software development, and we're really starting to see audio, from the perspective of you and I talking here on a voice call and being able to deliver clean speech in a noisy environment, shift to neural-network-based implementations.
But you also mentioned lots of other things, like using microphones in a factory for anomaly detection or for specific sound identification, whether it's in the wildlife, determining what kind of animals might be creeping around, or things like detecting if the baby is crying, the glass is breaking or someone's banging your door down. There are a lot of applications that have been talked about for many years, but I think we're now starting to see those come to fruition. We're also starting to see more effective natural language processing at the edge, which again is just one of the other pillars of audio AI.
Speaker 1:Yeah, well, I think about human-machine interface and using language models for human-machine interface. I mean, it all starts with audio sensing and understanding that.
I was thinking about this just this week. I have an old car, and every time I start it up I'm listening to the engine, like, does that sound right? Maybe people don't realize you use that audio in your brain a lot to think about: does that sound right? Is that your washing machine?
Speaker 2:Yeah. I saw that picture you posted on LinkedIn. It's a beautiful car.
Speaker 1:Oh, thank you. Yes, yes, actually, knock on wood, it's running smoothly.
Speaker 2:That's good. Well, on the audio working group, I certainly appreciate the foundation's support in helping us get that off the ground. I think the area has been underserved, and the Ceva team has a lot of expertise in this domain. Actually, many of the guys here on my team in Israel have been working on audio applications for the past 10, 20 years. So we've sort of intersected audio and AI in the past several years as our products have evolved, and it's exciting times for sure.
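Purely to make the kind of audio task Chad describes concrete, here is a minimal, hypothetical sketch of on-device sound-event detection: coarse log-spectral features from a one-second microphone buffer fed to a deliberately tiny neural network. The model, labels, frame sizes and front end are illustrative assumptions only, not Ceva's NeuPro-Nano stack or the Audio Working Group's tooling.

import numpy as np
import torch
import torch.nn as nn

SAMPLE_RATE = 16_000      # one second of 16 kHz mono audio
FRAME_LEN = 400           # 25 ms frames, no overlap, for brevity
N_MELS = 40               # number of coarse spectral bands
EVENTS = ["background", "glass_break", "baby_cry", "door_knock"]   # made-up label set

class TinySoundEventNet(nn.Module):
    """A deliberately small CNN, the scale that can be quantized onto MCU-class hardware."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                                   # x: (batch, 1, N_MELS, n_frames)
        return self.classifier(self.features(x).flatten(1))

def coarse_log_spectrum(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a log-mel front end: FFT magnitudes averaged into N_MELS bands."""
    spectrum = np.abs(np.fft.rfft(frame))
    bands = np.array_split(spectrum, N_MELS)
    return np.log1p(np.array([b.mean() for b in bands], dtype=np.float32))

model = TinySoundEventNet(len(EVENTS)).eval()               # untrained weights; structure only

audio = np.random.randn(SAMPLE_RATE).astype(np.float32)     # stand-in for a microphone buffer
frames = audio.reshape(-1, FRAME_LEN)                       # (40 frames, 400 samples each)
feats = np.stack([coarse_log_spectrum(f) for f in frames])  # (n_frames, N_MELS)
x = torch.from_numpy(feats).T.unsqueeze(0).unsqueeze(0)     # (1, 1, N_MELS, n_frames)

with torch.no_grad():
    probs = torch.softmax(model(x), dim=-1)[0]
print({event: round(float(p), 3) for event, p in zip(EVENTS, probs)})

On a real device this kind of workload would run quantized through a vendor SDK rather than PyTorch; the point is simply how small the network behind "is that glass breaking?" can be.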
Speaker 1:Yeah, yeah, no, it's great. I think you guys came into the foundation last year, and pretty quickly, I think we met at CES, it was like, hey, we want to start an audio working group. I was like, great. And we've got a number of companies involved in the audio space. So I think it's a great example of the collaboration we're trying to foster in the foundation, where companies, you know, there's always a little competition here and there, but in general we're trying to work together to grow the market and enable these kinds of end users and commercial customers to just adopt this stuff and ship it. So it's good collaboration for sure.
Speaker 2:Yeah, I mean, I will say I've been impressed with the level of collaboration that I've seen from folks engaged in the foundation, and it seems like everybody's there for the right reasons, to move the ball forward. We all have our own business objectives and goals.
But, you know, being able to brainstorm and actually get initiatives like this moving with like-minded individuals, I think, is really valuable. So I'm sure we'll see much more of it happening this year.
Speaker 1:Yeah, definitely. Well, you know, the edge has always been a team sport. I always tell folks, make sure you find the right team and join the right team, because no one can individually go in there and solve the problem by themselves. So it's good to have the collaboration. But yeah, cool. Well, Chad, any other kind of closing thoughts about Ceva? I feel like we've got some good context here. Any other fun facts or anecdotes we should be aware of?
Speaker 2:Fun facts or anecdotes? I don't have anything creative off the top of my head, so no. I think we covered it in a nutshell. Hopefully we'll get to do it again, and there will be many more updates coming over the course of the year, so we'll be sure to share those with you.
Speaker 1:Sounds good. Well, Chad, I appreciate the time. Have a good dinner; I think you're probably coming up on your dinner time there.
Speaker 2:Yep, almost there.
Speaker 1:All right, sounds good. Well, hope to see you in person soon.
Speaker 2:All right. Yeah, thanks a lot, Pete. Look forward to it.