EDGE AI POD

How EMASS is Revolutionizing Battery-Powered AI Applications

EDGE AI FOUNDATION

Power efficiency has become the new currency in AI, and no company exemplifies this shift better than EMASS. Founded by Professor Mohamed Ali as a spinoff from his research at NTU Singapore, this startup is revolutionizing edge AI with semiconductor technology that delivers exceptional power efficiency for battery-constrained devices.

The story begins in 2018, when Ali and his team set out to examine the entire computing stack from applications down to nanotechnology devices. Their research led to a remarkable breakthrough: a chip architecture that brings memory and compute components closer together, resulting in power efficiency 10-100 times better than competing solutions. Unlike other processors that claim low power consumption only during standby, EMASS's chip maintains ultra-low power usage while actively processing data—the true measure of efficiency for AI applications.

Mark Gornson, CEO of EMASS's Semiconductor Division, brings 46 years of industry experience to the team, having worked with giants like Intel and ON Semiconductor. After seeing the benchmarks of EMASS's technology, he came out of retirement to help commercialize what he recognized as a game-changing innovation perfectly timed for the edge AI explosion.

The applications are vast and growing. Drones can achieve dramatically longer flight times with lighter batteries. Wearable devices gain extended battery life without compromising functionality. Agricultural equipment benefits from real-time monitoring without frequent recharging. Industrial machinery can be equipped with predictive maintenance capabilities that identify subtle anomalies in vibration, temperature, or current draw before failures occur. Robotics systems gain critical safety features through microsecond decision-making capabilities.

For developers, EMASS has prioritized accessibility by ensuring compatibility with familiar frameworks like TensorFlow and PyTorch. Their backend engine handles the translation to optimized binaries, eliminating the learning curve typically associated with specialized hardware.
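As a rough sketch of what that workflow can look like in practice (the EMASS-specific compile step isn't shown in this episode, so the export below simply produces a framework-neutral artifact that a vendor backend could consume), a developer can author a model in plain PyTorch:

# Hypothetical sketch: author a small model in a familiar framework (PyTorch),
# then hand the exported graph to a vendor backend for binary generation.
# The EMASS-specific compile step is assumed, not shown.
import torch
import torch.nn as nn

class TinyKeywordNet(nn.Module):
    # A small CNN of the kind typically deployed on battery-powered edge devices.
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyKeywordNet().eval()
example = torch.randn(1, 1, 32, 32)  # e.g. a 32x32 spectrogram patch
# Export a framework-neutral graph; a backend like the one described above
# would take an artifact like this and emit optimized device binaries.
torch.onnx.export(model, example, "tiny_keyword_net.onnx", opset_version=13)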

Ready to experience this breakthrough technology? EMASS offers development kits for hands-on testing and even provides remote access to their hardware for preliminary evaluation. See them in person at upcoming industry events in Amsterdam and Taipei, where they'll showcase how their approach is redefining what's possible with battery-powered intelligent devices.

Join the edge AI revolution and discover how EMASS is making efficient intelligence accessible everywhere it matters.


Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Excellent. Well, good morning, good afternoon, good evening. So, Pete Bernard here, CEO of the EDGE AI Foundation. Today we have our special guests from EMASS, Mohamed and Mark, and I'll let them introduce themselves and tell us where they're calling in from. So, Mohamed, do you want to start?

Speaker 2:

Yeah, sure. Thanks, everyone, and thanks, Pete, for this wonderful conversation we're having now. My name is Mohamed Ali. I'm originally from Egypt, from Cairo, where I am right now, doing a little bit of work here and also spending the summer; you can see the tan on my face. I'm the founder of EMASS, which is a Singaporean spin-off from my research work at NTU Singapore, and we're very excited at this point in time, where we're reaching the fruition of what we started. It's basically a fabless semiconductor edge AI company, where we look at everything from the transistor level all the way up to the application level. So yeah, that's a little bit about me and my profession: I'm a professor, but I decided to take a venture into the startup world, and here I am. Awesome, sounds good. Mark, welcome. Hi, hello, nice to be here, Pete. Thanks for having us.

Speaker 3:

So I'm Mark Gornson, CEO of the Semiconductor Division, and basically I have 46 years of semiconductor experience: 20-plus with Intel, 11 with ON Semiconductor, seven with Freescale, a whole myriad of different semiconductor companies. I was actually retired and enjoying scuba diving, et cetera, and then Nanoveu called me up and explained the technology and the company, and I looked over all the data and did a lot of benchmarking of Mo's product that he had developed.

Speaker 3:

And I said, wow, this is a great technology. Edge AI is like the perfect space at the perfect time, and so I said, you know, this is too great of an opportunity. It's going to be a lot of fun to take this technology, commercialize it, and take it to market. So that's why I'm here, and so far I'm enjoying every minute of it.

Speaker 1:

Fantastic, fantastic. No, that's great. So folks may not have heard of EMASS yet, but you spun out of some research that you were doing in Singapore.

Speaker 2:

Yes, exactly.

Speaker 1:

And then also, you're also owned by Nanoveu, right, which has acquired you? Yes. And Nanoveu is based in Perth, Australia, as far as I understand? Yes. So we've got Australia, Singapore, Cairo, California, everyone everywhere. So yeah, it must be tough to coordinate some of your time-zone calls and stuff. Oh yeah, I mean, one of us has to suffer either early morning or super late, right?

Speaker 1:

Yeah, someone has to take the short straw. Yeah. Actually, I spent a lot of time in Singapore earlier in my career, but I haven't been there in a long time. I was just talking to a friend of mine yesterday; he's doing some interesting work there. I guess there's a good show there called GITEX.

Speaker 2:

GITEX was actually in April, and we participated in it. We had a booth, yeah. It was pretty cool.

Speaker 1:

So maybe next year I'll head out there and get back to Singapore. So, yeah, cool. So you were doing this research at NTU, in Singapore, and so what's the origin story here? I mean, what was the impetus to spin this thing out, and when did you know, yeah, this needs to be a thing, this needs to be a company?

Speaker 2:

So before coming to Singapore, right, I was at Stanford, like, you know, as a postdoc researcher. Then I landed here in Singapore, and they were interested in AI hardware, and the idea I proposed to them was that you have to really push the efficiency envelope for all of these next-generation applications. That's back in 2018. You have to look at the entire computing stack, from the applications all the way down to the use of emerging nanotechnology devices. So we created a research program out of that, with many of the top research institutes in Singapore, and did the work, you know, publications, patents. And then we said, hey, this is actually very cool, but let's try to see if this can be applied in edge AI hardware, because we think that's where the highest volume is, and that's where efficiency is going to be key, right. So we did a first chip and then we showcased the results.

Speaker 2:

We said, okay, this is really valuable and I think the market will accept it. And that's when edge AI was starting to be introduced in the market, back in 2019.

Speaker 2:

So I said, okay, you know what, we're going to go ahead and create a startup company, right, and take the huge risk, because me shifting from the academic world to the industrial world and building the company myself, it was really a big risk. So we founded EMASS in 2020. We raised some government funding to fabricate the chip that we have right now, and then we were trying to get VC money all the way, with some success raising early venture-building money. Then in 2024 we were introduced to Nanoveu, and they saw the value of this. So they said, okay, you know what, let's combine forces and work together and really accelerate this to commercialization. So then they started acquiring us, and the official merger happened in March 2025.

Speaker 2:

So the journey itself, how should I say it, began in 2017 or 2018 when I was at Stanford, and we had a couple of publications back then, but it took that time to really de-risk all the technology aspects. Now we have the chip in shape. Actually, we're building another one, but now it's all about the go-to-market strategy and reaching out to customers.

Speaker 1:

Sure, sure, and that's where I suspect Mark comes in, with how we get this thing sold in volume. And I also noticed it's RISC-V based, which is really interesting.

Speaker 1:

Yes, indeed. Yeah, and I'm curious, you know, in terms of where this thing lands in the market and what it's applicable for. I mean, obviously, efficiency is currency, as they say, and the next frontier in AI is efficiency, and you're using this RISC-V architecture. So where do you see this thing really landing? What are the canonical scenarios where this chip really shines, and in what industries, would you say? And that's for either Mark or Mohamed, either one.

Speaker 3:

So, yeah, this product is uniquely positioned to go into anything that is power constrained, meaning anything that uses batteries, and batteries are key because it's actually one of the lowest-power processors on the planet. It's anywhere between 10 and 100 times more efficient than any other processor, which means you get a lot more battery life than with any other technology. And you know, a lot of these different microprocessor companies say that they're low power. We're actually really low power while we're under load, which means that while we're doing the analysis, we actually use low power. A lot of our competitors, like Ambiq, Syntiant, et cetera, say that they're low power, but that's actually in standby. Anybody can be low power in standby. Yeah, sure, shut everything down.

Speaker 1:

Super low power.

Speaker 3:

So anything that's wearables, drones, et cetera, that's where we're best positioned against anybody else, because, like I said, if you can now all of a sudden have anywhere between 20 and 100 times longer battery life, that's very key for wearables, drones, et cetera.

Speaker 1:

So those are the marketplaces where we actually shine compared to any competitor. I see, I see. Yeah, and so we have a lot of partners in the foundation that are pushing that envelope. I mean, would you say this is a neuromorphic kind of computing architecture category, or is it something different? What kind of model architectures are you really optimized for?

Speaker 2:

Yeah. So what we're doing here, we're leveraging what we call near-memory computing. Not neuromorphic, not in-memory computing; we're doing near-memory. So it's about how we can interleave the compute and memory components so close together while providing very wide connectivity and very fast access to the memory. That is one of the key things we are leveraging at the architectural level.

Speaker 2:

Then we're also taking some of the advantages of emerging memory devices and logic devices, and we're leveraging their advantages and suppressing their shortcomings through innovations in the architecture and system. So we're not waiting for the technology; we can use it as it is, and we work out some of the kinks at the circuit, architecture, and system level. Then we look at how the hardware and software co-play: we do optimization in the software, and then we build specific hardware modules to accelerate their execution and streamline the data path inside the system, making it much faster, with a smaller footprint and lower energy. Got it, got it.

Speaker 1:

So, I mean, one of the big challenges, as you know, and maybe the listeners don't, is that the separation of memory and compute is one of the things that really kills low-power, high-performance AI. So the closer you can get the memory and the compute together, the better. In fact, there's this concept of in-memory compute and things like that, right, that people are working on. So it sounds like one of your secret sauces is how to get the memory and the compute as close together as possible. Yes, exactly, yeah.

Speaker 2:

So we're doing this while understanding that in-memory computing and neuromorphic computing are things we might use. But we take advantage of using memory blocks and compute blocks, and how you split things and slice and dice them so they sit very close together, that's our secret sauce.

Speaker 1:

Yeah, that's cool. So what's the software story, like the toolchain? One of the challenges, you know, I was just talking to a partner, I won't name them, struggling with toolchains and model portability. How do you approach that from a developer who's saying, hey, I'm used to doing some stuff on CUDA, or on OpenVINO, or on Dragonwing Snapdragon. How do I move to this platform? What's the story?

Speaker 2:

Yeah. So one thing we always think about, right, when you have this chip, is that I don't want to reinvent the wheel or have a huge ramp-up in tool usage. That would be a big killer for the system, so we want to make it as compatible as we can with existing tools and frameworks. Since it's edge AI, our main goal was to make it compatible with the most common AI suites, so TensorFlow, PyTorch, right.

Speaker 2:

I mean, ONNX, Caffe.

Speaker 2:

So we have the engine that takes the code written in those frameworks and converts it to the right programming sequence for the accelerator, so that part is done.

Speaker 2:

We've been testing it on various convolutional networks and fully connected layers, and we keep adding more support as we test more and more of these kinds of workloads. But we always keep in mind that you can have the lowest-energy chip in the world, but if programming it is tedious, or it's not easily integrated with the existing tool flows or suites, you're shooting yourself in the foot. So we always try to make sure that an engineer can take it and doesn't even need to know, especially the AI engineer, that he's targeting our chip or another chip. It's up to us to have the backend engine translate what's written in TensorFlow or PyTorch all the way down to the right binary that can run on our chip, including any software optimizations that we're providing. Okay, got it.
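To make that hand-off concrete, here is a minimal, hypothetical sketch. The EMASS backend engine isn't shown in this conversation, so compile_for_target() below is an invented placeholder, and the stock TensorFlow Lite converter merely stands in for the idea of lowering a familiar TensorFlow model into a deployable binary:

# Hypothetical sketch only: compile_for_target() is an invented stand-in for
# "translate a TensorFlow/PyTorch model into the accelerator's binary".
import tensorflow as tf

def build_model() -> tf.keras.Model:
    # The AI engineer works purely in a familiar framework...
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

def compile_for_target(model: tf.keras.Model, target: str) -> bytes:
    # ...while a backend engine hides the device-specific step. Here the stock
    # TFLite converter stands in for that step just to show the shape of the flow.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    flat_buffer = converter.convert()
    # A real vendor flow would now lower this graph to the accelerator's own
    # programming sequence for the given target; that step is assumed, not shown.
    return flat_buffer

binary = compile_for_target(build_model(), target="edge-dev-board")
with open("model_for_accelerator.bin", "wb") as f:
    f.write(binary)

The point of the sketch is the division of labor: the model code never references the chip, and everything target-specific lives behind the compile call.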

Speaker 1:

So you're providing sort of the translation layer, the recompilation technology, so that they can use their favorite frameworks and tools on top, and it all happens underneath. That's pretty cool. So here's a question for Mark: when you're going after these markets, are you going after the developer market, or are you more going after someone who's building, say, agricultural equipment or aerospace equipment? What's the general approach been?

Speaker 3:

Yeah, our general approach right now is to go after OEMs, and that includes agricultural developers. Ultimately, when we get it optimized, it'll be closer to 40%, and we also have some proofs of concept with wearables. And then predictive and preventative maintenance on machines is another one that we're going after, because this is a perfect processor to spot anomalies. When you look at anomalous conditions in equipment, things like vibration, temperature, frequency, and current draw are all perfect early indications of machine wear, and so we can spot those anomalies and flag them before the machine fails.

Speaker 3:

And there are a lot of great markets that we can go after; those are just a couple that we're picking off right now.
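As a minimal sketch of the kind of anomaly flagging Mark describes (the sensor channels, window size, and threshold below are illustrative assumptions, not anything EMASS-specific), a simple baseline-plus-deviation check over vibration, temperature, and current draw might look like this:

# Illustrative sketch: learn a "healthy" baseline, then flag samples that
# drift far outside it, which is an early hint of wear before outright failure.
import numpy as np

WINDOW = 256       # samples of known-good behaviour used as the baseline
Z_THRESHOLD = 4.0  # how many standard deviations counts as anomalous

def fit_baseline(healthy: np.ndarray):
    # healthy: (WINDOW, 3) array of (vibration_rms, temperature_c, current_a)
    # recorded while the machine runs normally.
    return healthy.mean(axis=0), healthy.std(axis=0) + 1e-9

def is_anomalous(sample: np.ndarray, mean: np.ndarray, std: np.ndarray) -> bool:
    # Flag the sample if any channel leaves the healthy envelope.
    z = np.abs((sample - mean) / std)
    return bool(np.any(z > Z_THRESHOLD))

# Example with simulated data: a normal reading and a hot, high-current reading.
rng = np.random.default_rng(0)
healthy = rng.normal([0.2, 45.0, 1.1], [0.02, 0.5, 0.05], size=(WINDOW, 3))
mean, std = fit_baseline(healthy)
print(is_anomalous(np.array([0.21, 45.2, 1.12]), mean, std))  # False: normal
print(is_anomalous(np.array([0.45, 52.0, 1.60]), mean, std))  # True: flag early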

Speaker 1:

Yeah, well, I imagine the drone market specifically. I mean, anything that's equipment in motion, like you said, that's using a limited power supply. The drone market is a fascinating market; we read a lot about that, and it's used, obviously, for lots of different scenarios. But the ability to keep the weight of the drones lighter to improve the flight time, which means maybe thinner batteries, that's where it all sort of plays in, right? It's like, how do you minimize that weight distribution, right?

Speaker 3:

Yeah, so drones is a perfect one for us. It's great for looking at swarms, and you can prevent issues like crashing or interference with birds, et cetera, while increasing the flight time, decreasing the battery weight, and increasing payload capacity. There are a lot of different variables where we can utilize our technology to really enhance drone capabilities.

Speaker 1:

I'd actually read about an interesting drone scenario for lifeguards at beaches, where they can deploy a drone with a flotation device if they see someone out there struggling and drop it before a lifeguard can swim out there to get them. So I think we're going to see a lot more of that stuff happening in the drone space. You mentioned anomaly detection for equipment, especially equipment in motion, so I'd assume vehicles as well. And are you getting more into, I mean, drones are robots, but the broader robotics space, is that getting hot for you guys? The physical AI space? Yeah, that's another good one.

Speaker 3:

That's another good example for edge AI. It's a perfect scenario for robotics, for anything in terms of crash prevention and injury protection of humans, anytime you have a human-machine interface. You'd like to have an edge AI processor there to detect that there's interference with the robotics so people don't get injured. Or crash protection in terms of vehicles, anything that requires decisions within microseconds. That's the perfect application for edge AI, with our battery life and this form factor that we have.

Speaker 3:

It's extremely small. We can go into just about any product and provide, you know, anomaly detection or any kind of crash protection, et cetera. Yeah, for sure.

Speaker 1:

So, actually, just to shift topics slightly, how did you become aware of the Edge AI Foundation? Who tipped you off to us?

Speaker 2:

I mean, actually, I was aware of the Edge AI Foundation for quite some time, but I think our VP, Scott, was the one who initiated the enrollment in the Edge AI Foundation. I knew about it for quite some time, even when it was called tinyML, yeah, right. So I remember when it became the Edge AI Foundation, and that's when Scott joined us and we decided to be a member. Okay, great.

Speaker 3:

Great. Yeah, we kind of looked at it as the perfect opportunity. Being a small startup company, the Edge AI Foundation has a lot of great resources for us to capitalize on to be able to commercialize our product and get Nanoveu and EMASS more widely known. Because, you know, we're a very small company that's pretty much hidden away in both Singapore and Australia, and for us to really commercialize and get well known, we thought the Edge AI Foundation was a perfect springboard to leverage to start to commercialize our technology.

Speaker 1:

Yeah, no, that's great. There's an incredible amount of tech out there, especially coming out of universities, and, Mohamed, like your story about doing research and then commercializing it, especially in this edge AI space where there's so much innovation and so many new things happening. And then the challenge is, how do you cut through? How do you break through the noise to get people aware of this stuff? Because there is, as you know, and Mark knows this, he's worked in it for 40 years, a very well-established semiconductor market, with a lot of marketing going on out there and a lot of established big sales teams.

Speaker 1:

But part of what we're trying to do, too, is highlight all the innovation and the companies that are doing really cool things that can be very disruptive, and make sure people are aware of those as well, because I feel like this edge AI space is really the new frontier out there, and, as I mentioned recently, these are the good old days. We'll look back on this Cambrian explosion of stuff happening in the space, and making sure everyone's aware of it is kind of step one, right? If you're not aware of it, how can you even evaluate it? So it's good that you guys are out there talking about what you're doing. Do you have dev kits and stuff? How do people interact with your stuff?

Speaker 2:

Indeed, we do have a development kit. It's a board, about seven by seven, so it fits in the palm of your hand, equipped with connectors to program it and to connect all the possible sensors you want through the standard buses, whether it's I2C, SPI, or GPIO. That's available, and that's what we're using to reach out to customers. But we even have a remote option. We have a number of these boards hosted on our remote servers, so people can log in from other places and test their programs on our chip without us even sending a board to them. If they want a quick check, they upload the code and run it: just log into our remote development system, it gets downloaded to our chip, and then we send back the metrics. If they want the board, we're happy to send it to them.
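Purely as an illustration of that remote loop (the URL, endpoints, token, and response fields below are hypothetical placeholders; only the high-level workflow is described in the episode), the upload-run-read-metrics cycle might be scripted like this:

# Hypothetical sketch: the service endpoints and fields are invented
# placeholders for the general "upload, run, read back metrics" loop.
import requests

BASE_URL = "https://remote-lab.example.invalid/api"  # hypothetical endpoint
API_TOKEN = "YOUR_TOKEN"                             # hypothetical credential

def run_on_remote_board(binary_path: str) -> dict:
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    # 1. Upload the compiled program to a hosted development board.
    with open(binary_path, "rb") as f:
        job = requests.post(f"{BASE_URL}/jobs", headers=headers,
                            files={"binary": f}, timeout=30).json()
    # 2. Read back the measured metrics (latency, energy per inference, and
    #    so on) without any hardware being shipped.
    return requests.get(f"{BASE_URL}/jobs/{job['id']}/metrics",
                        headers=headers, timeout=30).json()

metrics = run_on_remote_board("model_for_accelerator.bin")
print(metrics)  # e.g. {"latency_ms": ..., "energy_uj": ...} in this sketch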

Speaker 1:

Cool. That sounds like a good, scalable resource, and we'll put a link to your website in the description here on YouTube. So where are you going to be next, physically? Are you guys coming to Amsterdam? I think you're going to be at The Things Conference in Amsterdam. Yep, we are. All right, cool, I'll be there too. That's going to be quite a show. So I would say, anyone out there in the vicinity of Amsterdam around late September, you should come over and see this stuff in action.

Speaker 3:

Yeah, we're looking forward to it.

Speaker 2:

Go ahead. Yeah, also, we're most probably going to be at the one in Taipei as well, in November.

Speaker 1:

Yes, good.

Speaker 2:

Whoever in Asia would like to travel will see us in Taipei.

Speaker 1:

Yeah, the Taipei event is November 11th and 12th. That'll be a big one. And then The Things Conference in Amsterdam in September. Yeah, it's going to be a busy, busy fall. I'm kind of enjoying a little quiet August here. It sounds like, Mohamed, you're getting some vacation time too.

Speaker 2:

Yeah, just a little recharge time before the final sprint. Mark, I hope you're doing some scuba diving down there. Yeah, I did.

Speaker 3:

I did most of mine already. We're getting ready to really go hot and heavy here in September and October. When the vacation season winds down, we plan on really pushing the technology and working on the commercial aspects of everything. We've pretty much got all the marketing material squared away, and you should see a lot more developments coming down the path from EMASS here in the near future. Awesome.

Speaker 1:

Well, it's been great talking to you both. I look forward to meeting you in person, maybe in Amsterdam. We have met in Milan. Oh, that's right, you were in Milan, that's right.

Speaker 2:

Yeah, we did meet already; we met in Milan. So I'll see you again in Amsterdam.

Speaker 1:

Yeah, that's right. And yeah, great to talk to you both and educate the world a little bit more about EMASS and all the cool stuff that you're doing. So really appreciate it. Thanks a lot, Pete. All right, thank you.