
EDGE AI POD
Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed covering all things from the world's largest EDGE AI community.
It features shows like EDGE AI TALKS and EDGE AI BLUEPRINTS, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics.
Join us to stay informed and inspired!
Renesas and Reality AI: Shaping the Future of Edge Computing
Discover the cutting-edge world of TinyML and Edge AI with our special guest, Eldar Sido from Renesas. As a key figure in the AI Center of Excellence, Eldar unveils the fascinating challenges and breakthroughs in embedding AI into compact devices like microcontrollers and sensors that are rapidly spreading across the globe. This episode promises an exploration into the dynamic realm of TinyML, focusing on its three foundational pillars: real-time analytics, voice, and vision. Eldar discusses the anticipated double-digit growth of this sector over the next decade, making it a pivotal listen for tech enthusiasts and those curious about the future trajectory of AI technology.
Eldar also shares how Renesas, through its acquisition of Reality AI, is enhancing its prowess in real-time analytics and tackling high-frequency sampling rates. With a focus on consultancy, data selection, and sensor placement, Eldar illustrates how only a sliver of the TinyML process is devoted to model development. By integrating Reality AI with the e² studio IDE, Renesas is streamlining workflows to maximize the potential of TinyML. Tune in to understand how TinyML is reshaping industries and what lies ahead for Edge AI applications. This episode is packed with insights and strategies for anyone looking to leverage the power of TinyML in their projects.
Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org
Next up we have Eldar Sido from Renesas talking about challenges in TinyML and Edge AI. So fantastic, all right, welcome, Eldar. Good afternoon everyone. My name is Eldar Sido. I'm here representing Renesas and the AI Center of Excellence. I was thinking about what to present today, and the main thing I wanted to present is, in TinyML, what challenges we see, what things customers have a bit of an issue with, and what kind of solutions we provide to them. The main slogan for this year's tinyML Asia was "TinyML drives AI everywhere", and that's truly the case, because MCUs and sensors can penetrate and proliferate around the entire world; maybe in this one room we have over a hundred sensors doing a lot of things. We all know, and I think a lot of people here have already presented, that TinyML brings AI/ML to constrained devices such as MCUs and sensors, and now that it's becoming the Edge AI Foundation, that can also include MPUs. We've been going through a lot of market research, and most of it promises double-digit CAGR within the next five to ten years.
Speaker 1:Within Renesas, and I think in a lot of other teams as well, we split TinyML into three main pillars: real-time analytics, or what some people call time series; voice; and vision. These are just a few examples of what we solve. Okay, that was quick for some reason. For real-time analytics, anything with a sensor generates huge amounts of data that are not used; I think the last statistic I read is that 85% of all data is not really used for anything. Voice is very easy to understand: something like keyword spotting, voice command recognition, and natural language understanding. Vision is more on the higher-end MCUs and MPUs.
Speaker 1:Even though we see huge potential, as do a lot of people here in this room, we still see customers facing a lot of challenges in adopting it. The main challenges, I would say, are data sets; customer know-how (what do they do with it?); sensor selection and placement; accuracy-to-resource tradeoffs; and deployment. These five challenges are not the only challenges we find, but I was thinking these would be enough for this presentation, and they manifest themselves differently in each pillar. So, for example, data sets within time series, or real-time analytics, are usually unique: every customer will have their own data set and will need to train a model based on that specific data set. Even one customer with the same motor, if they want to deploy it in different conditions, will have different data sets. For voice, we have hundreds of languages, hundreds of accents, hundreds of pitches, hundreds of ways people talk, so the data sets are very hard for one customer to acquire.
Speaker 1:For vision, when you talk about tiny, you need very high-quality data sets for the capacity you have within your models. As for customer know-how, I think we see it quite a bit in real-time analytics: customers are not sure how to place their sensors, or they have a lot of data but really don't know what to do with it yet, so they want us to help and guide them. I think one of my colleagues here mentioned that they spend something like six months just explaining to customers what can be done from a proof-of-concept perspective before actually going to deployment. Sensor selection and placement cuts across all three pillars: how would you place the sensors for voice? Can you put in a mic array? How much noise will you have in your environment, and so on? For vision, I would say deployment and resource constraints are quite a big thing, because your MCUs usually have one to two megabytes, and how would you fit everything into that? (See the sketch after this paragraph.) So I'll just skip the overall summary of all the things we do and go to the portfolio for each pillar, starting with real-time analytics.
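To make that one-to-two-megabyte constraint concrete, here is a minimal sketch, assuming a generic TensorFlow workflow (not the Renesas tooling), of post-training int8 quantization, one common way to shrink a model toward MCU-class flash budgets. The model and calibration data are placeholders.

```python
# Minimal sketch: shrinking a Keras model with post-training int8
# quantization so it fits MCU-class flash budgets. Model and data
# here are illustrative placeholders, not Renesas tooling.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Calibration samples standing in for real sensor windows.
    for _ in range(100):
        yield [np.random.rand(1, 128).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```

The printed size is roughly what occupies flash; runtime RAM for the inference arena is separate and typically estimated with the vendor's own tooling.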
Speaker 1:We acquired a company in 2022 called Reality AI, and they have wide expertise specifically in time series with high-frequency sampling rates. One statistic they found empirically is that only 5% of the TinyML flow is spent on developing models; the other 95% is spent on consultancy, on selecting the data, on where you're going to place the sensors, and so on. So after we acquired Reality AI, we integrated it with our own IDE, which is e² studio. So now we cover the entire flow. I think it's from here.
Speaker 1:So usually the flow when you work with real-time analytics is that you first have a consultancy with the customer. You then use the data storage tool, which is directly integrated within our e² studio IDE; it collects the data and sends it directly to the cloud tool, the Reality AI tool. Within the tool, we do data curation. We then do feature discovery and provide the customers with the best model that we find. After we find the best model, we do a lot of optimizations that are very important in real-time analytics. We can vary the sampling rate and check whether the accuracy holds: can we reduce the sampling rate without hurting accuracy? How many sensors do you really need? Sometimes we tell customers: go ahead and try with 30 sensors, and then we can reduce them depending on how much accuracy you want.
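As a rough illustration of that sampling-rate study, here is a minimal sketch with made-up signals and toy features standing in for the Reality AI tooling: decimate the raw signal, retrain, and keep the lowest rate whose cross-validated accuracy holds up.

```python
# Minimal sketch of a sampling-rate trade-off sweep: decimate, re-extract
# features, retrain, and compare accuracy at each candidate rate.
# Signals, labels, and features are illustrative placeholders.
import numpy as np
from scipy.signal import decimate
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
base_rate = 250_000                           # e.g. 250 kHz raw sampling
signals = rng.standard_normal((200, 5000))    # 200 windows of raw samples
labels = rng.integers(0, 2, size=200)         # fake fault / no-fault labels

def features(windows):
    # Toy features: RMS, peak, and spectral centroid per window.
    rms = np.sqrt((windows ** 2).mean(axis=1))
    peak = np.abs(windows).max(axis=1)
    spectrum = np.abs(np.fft.rfft(windows, axis=1))
    freqs = np.arange(spectrum.shape[1])
    centroid = (spectrum * freqs).sum(axis=1) / spectrum.sum(axis=1)
    return np.column_stack([rms, peak, centroid])

for factor in [1, 2, 5, 10, 12]:              # candidate decimation factors
    windows = signals if factor == 1 else decimate(signals, factor, axis=1)
    acc = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                          features(windows), labels, cv=5).mean()
    print(f"{base_rate // factor:>7} Hz -> CV accuracy {acc:.3f}")
```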
Speaker 1:Decision significance is a big thing. A lot of customers want to understand what's happening within their model, so we provide a tool that gives them more explainability into what the model is doing. Finally, we have MATLAB support, because a lot of our customers currently use MATLAB extensively. Then, after you're done developing the model, you can use the AI live monitor directly from the IDE to see how inference is actually happening in the field. Our customers don't only want to check how it works in emulation; they really want to see what's happening in the field. So once the customers are happy, they can actually deploy it.
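For the explainability idea, here is a minimal sketch using permutation importance from scikit-learn, an illustrative stand-in for the decision-significance feature, not the actual Reality AI tool: it measures how much held-out accuracy drops when each input feature is shuffled.

```python
# Minimal sketch of model explainability via permutation importance:
# shuffle each feature on held-out data and see how much accuracy drops.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:+.3f}")
```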
Speaker 1:One thing I wanted to talk about is the real-time analytics solutions that we have. The first one is the arc fault circuit interrupter (AFCI). I think Matteo already explained one of these, and we do have a lot of customers that really care about AFCI, because arc faults, caused by degradation or aging, can start fires. So the problem statement is quite simple: we want to detect if the circuit is aging and might catch fire. There are already traditional algorithms out there that are 80% to 90% accurate, but what customers want is to detect with up to 95% accuracy, at high currents of up to 200 amperes, and to run it on a general-purpose MCU, so the RAM and ROM footprint had to be kept small. We were able to develop a solution for this specific customer using around 8 kilobytes of RAM and 23 kilobytes of ROM, and the cool thing is that, using the tool, we found the customer was initially sampling at 250 kilohertz, and we were able to reduce that to 20 kilohertz without affecting the accuracy.
Speaker 1:The other solution is for HVACs. HVAC is also pretty big when it comes to real-time analytics. It's a very common use case where the customer wants to build a smart, self-diagnosing HVAC system. The challenge is that they want an optimized solution with a reduced number of sensors. They started with around 20 sensors all around the HVAC unit, and using the Reality AI tools we were able to reduce that to only one accelerometer placed right in the center, while achieving 96% accuracy. The last one is actually a demo at our booth: roller floor-type detection. Some customers asked to change the rotation speed of their vacuum cleaners based on the flooring, and we provide them with an entire end-to-end pipeline where they just specify what type of motor they're using and what kind of flooring, and they can have it done really quickly. So this was just a glimpse of real-time analytics; now for voice. Let's see.
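The sensor-reduction step can be sketched as greedy backward elimination, with synthetic data standing in for the real HVAC measurements: repeatedly drop whichever sensor hurts cross-validated accuracy the least, and stop before falling below a target.

```python
# Minimal sketch of sensor-count reduction via greedy backward
# elimination. Data is synthetic: only sensor 3 carries signal here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sensors = 20
X = rng.standard_normal((300, n_sensors))   # one summary feature per sensor
y = (X[:, 3] > 0).astype(int)               # only sensor 3 matters here

active = list(range(n_sensors))
target_accuracy = 0.85

def score(sensor_idx):
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, sensor_idx], y, cv=3).mean()

while len(active) > 1:
    # Try dropping each remaining sensor; keep the best-scoring removal.
    candidates = [(score([s for s in active if s != drop]), drop)
                  for drop in active]
    best_acc, drop = max(candidates)
    if best_acc < target_accuracy:
        break
    active.remove(drop)
    print(f"dropped sensor {drop}, {len(active)} left, accuracy {best_acc:.3f}")

print("final sensors:", active)
```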
Speaker 1:For voice, we have these. I think the slide is very crowded, but it is what it is right now. We provide a development environment and a plug-in for e² studio, though it's more of a standalone tool with our partner. One issue our customers face is that they want a global solution they can sell in a lot of regions, but collecting that data set, with all those accents, is quite hard. So we provide them with a turnkey enabler that supports over 45 languages and is very easy to use: you just choose the keywords, or the natural language intents, that you want, and it's all in a GUI; you don't need to do any training. One thing we also found is that customers sometimes don't want speakers or the TV to activate something, so we provide voice anti-spoofing directly from our engineering team.
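Under the hood, a keyword-spotting enabler boils down to something like the following minimal sketch: MFCC features plus a tiny classifier over a chosen keyword set. The audio, keywords, and model are all made up for illustration; the partner's turnkey tool, its 45+ languages, and its GUI are not shown.

```python
# Minimal keyword-spotting sketch: MFCCs from short clips feed a tiny
# classifier over a chosen keyword set. Audio here is synthetic noise.
import numpy as np
import librosa
import tensorflow as tf

keywords = ["on", "off", "brighter", "dimmer"]   # example keyword set
sr = 16_000

def mfcc(clip):
    return librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=13).T  # (frames, 13)

rng = np.random.default_rng(2)
clips = rng.standard_normal((80, sr)).astype(np.float32)  # fake 1 s clips
X = np.stack([mfcc(c) for c in clips])
y = rng.integers(0, len(keywords), size=80)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(keywords), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)
print("predicted:", keywords[int(model.predict(X[:1], verbose=0).argmax())])
```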
Speaker 1:We also provide audio front-ends. We found a few customers where noise affects the performance drastically; sometimes noise can reduce the accuracy by 20% or so, so we are researching a lot, and our engineering team works a lot on the audio front-end. We have a few more features: speaker identification is for when you want only a specific speaker to be recognized.
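As one illustration of what an audio front-end can do, here is a minimal sketch of spectral subtraction, assuming a known-quiet lead-in segment from which to estimate the noise floor. Real front-ends are considerably more sophisticated; this only shows the idea.

```python
# Minimal spectral-subtraction sketch: estimate the noise magnitude from
# a noise-only lead-in, subtract it in the STFT domain, and resynthesize.
import numpy as np
from scipy.signal import stft, istft

sr = 16_000
rng = np.random.default_rng(3)
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 440 * t)              # stand-in for speech
lead = sr // 5                                    # 0.2 s noise-only lead-in
noisy = np.concatenate([
    0.3 * rng.standard_normal(lead),              # quiet segment: noise only
    speech + 0.3 * rng.standard_normal(sr),       # "speech" plus noise
])

f, frames, Z = stft(noisy, fs=sr, nperseg=512)
n_lead = (lead - 512) // 256                      # frames fully in the lead-in
noise_mag = np.abs(Z[:, :n_lead]).mean(axis=1, keepdims=True)

mag = np.maximum(np.abs(Z) - noise_mag, 0.0)      # subtract noise floor
_, cleaned = istft(mag * np.exp(1j * np.angle(Z)), fs=sr, nperseg=512)

print(f"noise power in quiet lead-in: {np.mean(noisy[:lead]**2):.4f} "
      f"-> {np.mean(cleaned[:lead]**2):.4f}")
```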
Speaker 1:So, a few solutions here. The main one is the one on your right, I think, which is voice command for wall switch control. Our customer is a very big, global Japanese customer. They wanted to create a solution that could run across various regions, and the issue was: how would they gather the data set, how would they train the model, and how would they deploy it? So we provided them with all of that. This happened after COVID, when people were a bit more skeptical about touching light switches, and they wanted at least two languages running on a general-purpose microcontroller, which we were able to provide, all done through a GUI. Two of the demos we have at our booth are, first, voice command recognition, which is similar to keyword spotting: you can provide any number of keywords you want, though your resource requirements will increase, and we also provide voice anti-spoofing at the same time. The other one runs on the RA8, which has the Cortex-M85 with Helium, and that's natural language understanding: rather than being rigid with your keywords, you want to know the intent, and this is a bit more complicated and requires a bit more resources, so we run it on a high-end MCU, the Cortex-M85 with Helium. Both of these come from the same partner, which provides a turnkey enabler you can run directly.
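To illustrate the difference between rigid keyword matching and intent understanding, here is a minimal sketch of intent classification with a bag-of-words model. The phrases and intents are made up; a real NLU enabler on a Cortex-M85 would use a compact neural model rather than scikit-learn.

```python
# Minimal intent-classification sketch: free-form phrasings map to
# intents, unlike exact keyword matching. Phrases/intents are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ("turn on the light", "light_on"),
    ("switch the lamp on", "light_on"),
    ("lights please", "light_on"),
    ("turn the light off", "light_off"),
    ("kill the lights", "light_off"),
    ("make it darker", "light_off"),
]
texts, intents = zip(*train)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, intents)

# A phrasing never seen verbatim can still map to the right intent:
print(clf.predict(["could you switch on the lights"]))  # likely ['light_on']
```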
Speaker 1:The last one is more of a single slide about vision. For vision, it's a bit more challenging in TinyML to run actual vision applications. We have solutions with multiple partners: Edge Impulse, who are here somewhere, and then there's Aizip, also here somewhere. We run solutions from the Cortex-M4 all the way to the RZ/V family, which is Cortex-A with DRP-AI. What we found, especially for TinyML, is that providing customers with full solutions was preferable, since it's very challenging for them to develop models with high-quality data sets. So for now, we provide them with end-to-end solutions.
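A minimal sketch of the inference step that such end-to-end vision applications wrap might look like the following, assuming a hypothetical quantized TFLite classifier called model.tflite, not one of the actual partner solutions.

```python
# Minimal edge vision inference sketch with a TFLite interpreter.
# "model.tflite" is a hypothetical uint8 image-classification model.
import numpy as np
import tflite_runtime.interpreter as tflite   # or use tf.lite.Interpreter

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Fake camera frame resized to the model's expected input shape.
frame = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("top class:", int(scores.argmax()))
```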
Speaker 1:But for the RZ/V, which is our MPU-class family, we found that customers want to know the entire vision pipeline: how it works and how models map onto the MPUs. So we provide them, as of today, with over 45 end-to-end applications; I think when I checked yesterday it was already over 50. You can check the QR code and our GitHub link. There are a lot of different vision applications, ranging from image classification and object detection to pose estimation, segmentation, and so on. So, a quick conclusion: we believe in AI plus IoT. While IoT is already almost mature, AI is catching up, and for TinyML the impact will be very big. In our opinion, adoption is still lagging expectations a bit. We provide a lot of solutions across all three pillars, and if you'd like to learn more, please visit our web page or talk to us at the booth. Thank you very much.