EDGE AI POD

Atym and WASM are revolutionizing edge AI computing for resource-constrained devices.

EDGE AI FOUNDATION


Most conversations about edge computing gloss over the enormous challenge of actually deploying and managing software on constrained devices in the field. As Jason Shepherd, Atym's founder, puts it: "I've seen so many architecture diagrams with data lakes and cloud hubs, and then this tiny little box at the bottom labeled 'sensors and gateways' - which means you've never actually done this in the real world, because that stuff is some of the hardest part."

Atym tackles this challenge head-on by bringing cloud principles to devices that traditionally could only run firmware. Their revolutionary approach uses WebAssembly to enable containerization on devices with as little as 256 kilobytes of memory - creating solutions thousands of times lighter than Docker containers.

Founded in 2023, Atym represents the natural evolution of edge computing. While previous solutions focused on extending cloud capabilities to Linux-based edge servers and gateways, Atym crosses what they call "the Linux barrier" to bring containerization to microcontroller-based devices. This fundamentally changes how embedded systems can be developed and maintained.

The impact extends beyond technical elegance. By enabling containers on constrained devices, Atym bridges the skills gap between embedded engineers, who understand hardware and firmware, and application developers, who work with higher-level languages and AI. A machine learning engineer can now deploy models to microcontrollers without learning embedded C, while the embedded team maintains the core device functionality.

This capability becomes increasingly crucial as edge AI proliferates and cybersecurity regulations tighten. Devices that once performed simple functions now need to run sophisticated intelligence that may come from third parties and require frequent updates - a scenario traditional firmware development approaches cannot efficiently support.

Ready to revolutionize how you manage your edge devices? Explore how Atym's lightweight containerization could transform your edge deployment strategy.


Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

Okay, well, I shouldn't have played that intro, but that's right.

Speaker 2:

This seems really official.

Speaker 1:

This is really. This is high production here. Okay, so well, Jason, welcome. Welcome to our Edge AI partner discussion. Yeah, Good to see you again. So I think the last time I saw you was in Austin, Texas, at our event back in February, yeah, and you were down there.

Speaker 2:

Yeah, no, I'm literally like a half mile from the place where it was, so I was like I should probably go to that. That's right. Yeah, just walk on. Relevant to what we do, I also saw you. Yeah, you guys were at Embedded World. We saw you there as well.

Speaker 1:

Yes, Nuremberg, yes, always a big deal over there. But yeah, so, Atym. Tell us more about the Atym story. What's the origin story? What's the deal here?

Speaker 2:

So I've said for a long time, if it's fuzzy, I'm on it. That's one of my many mottos. I always find myself at the front end of new tech transitions. I've been in CTO roles most of my career, and I've been riding this notion of cloud principles extending out to the edge, or edge sites, for the past ten-plus years. I was at Dell Technologies working on IoT and edge stuff with the team there from the CTO office, and we started looking at: hey, what does it look like when you have edge hardware out in the field and you want to manage apps? Wouldn't it be nice if it felt like the cloud, and you could manage individual containers and do continuous delivery of software, just kind of modern development? But at the time, you know, ten years ago, edge stuff was old school: on-premise apps, monolithic bare-metal stuff, IoT apps, or basically firmware in the case of constrained devices. So I've been doing this for the past ten years, riding down this continuum. It was Dell Technologies, and then I was at a company called Zededa, who's also a member. You know, I'd describe Zededa as: I'm out of the traditional data center, but I can still run Linux and Docker and Kubernetes and all these different technologies. I want to make it feel like the cloud, kind of IT-type stuff, but I'm in the physical world, like the manufacturing floor, a retail store, whatever. After Zededa, I was looking at what to do next, and my co-founder, Steven, and I came to: there's an opportunity for that next step past where a Zededa is playing, where it's like, I'm out of the data center, but I can't even really run Docker and Linux and containers and all that. But wouldn't it be nice if I could, or if it felt like it? And so, you know, that's Atym.

Speaker 2:

You know, VMware Lite, or VMware Micro, or Zededa Lite. Docker on a diet is another example that I use. Tiny VMware, sure. We are infrastructure. We enable very, very lightweight containers, thousands of times lighter than Docker, on edge devices. With MCU-based devices, it's traditionally firmware: one change, you recompile the whole code base and you deploy it. It's brute force. 80% of the developers out there don't know how to program in C or C++ for firmware. Wouldn't it be nice if those stacks were more like containers? So we're focused on MCU-based devices with as little as 256 kilobytes of memory, up through CPU-based devices that have maybe about two gigs of memory. Above that, it's a solved problem.

Speaker 2:

There's a lot of stuff up there, and yeah, we're not there. But there's this spectrum of the edge that has been kind of neglected in the past.

Speaker 1:

Yeah, and you know so. You're in that kind of middleware space. I don't know if that's the right term, yeah.

Speaker 2:

Infrastructure is probably the right word. But yeah, we don't sell apps, and we're not in the data path. Your data is never coming to us. We help you orchestrate containers. If you're buying our platform as a SaaS solution, you would buy us just like you would buy AWS, but instead of managing containers on servers in some remote data center or cloud, you are managing them on fleets of devices out in the field. Or if you're an OEM buying an enterprise license, it could be hosted on-prem, it could be hosted in your own cloud, and it could be white-labeled as your own company's product. So yeah, it's infrastructure. Think of it as kind of like AWS for really tiny devices.

Speaker 1:

Right, right, and how long has Atym been around?

Speaker 2:

So we were born out of some IP that Steven and I helped shape in the, like, 2021-to-2023 timeframe, so basically a couple of years. We were founded in the fall of 2023, so we're coming up on two years. Yeah, pretty fresh.

Speaker 1:

Yeah, there's this whole, you know, I think if we go back, I mean, being in the business for a while, back in the embedded-system days, it was all very bespoke bare-metal stuff. In those days you got things to run and you shipped it. Ship it and forget it, flash it and forget it, you know. Get it out there, and it ran for 10 years until it died. And then, you know, things got connected to the cloud.

Speaker 1:

But from the IoT side, right, the old IoT days were sort of like: I have this funky thing, I'm going to connect it to a cloud, and God help us, you know, coming out of the M2M days and all that. Yes, yes. And now, we can talk more about what edge computing is, but I feel like it's really taking a lot of those cloud principles and extending them out. And I would say the cloud world has done a pretty good job with virtualization and management across clouds, whether you're on AWS or Azure or Vultr or whoever. This idea of CNCF-based orchestration and management of workloads is pretty standardized.

Speaker 1:

And so how do you take that kind of mindset and workflow and move it out to this previously crazy, heterogeneous, funky edge stuff, the wild west, I mean?

Speaker 2:

I look at it as: edge computing inherently comprehends the cloud in some sense, because if it didn't, you'd just be doing old-fashioned on-premise computing, like we've done for a long time, right? So even if you're running analytics or AI models on-prem, out in the field on a device, you comprehend the cloud in the sense that that model might have been trained in the cloud. There's a symbiotic relationship between edge and cloud, and events then get pushed back. But it doesn't mean that everything's running in one spot. It's a continuum, right? But you'd like to have a similar experience across that continuum in terms of how you manage things and develop applications and all that kind of stuff.

Speaker 1:

Right. Right, it's not edge versus cloud; edge plus cloud is really the solution. And I think this middleware space is interesting, because we have a lot of folks on the make-tech side, right? They create cool models and cool chips, whatever. And then there are people on the device-solution side. But I call it the gap between research and reality. It's like: I make this cool thing, but how do I actually ship it and manage it in the field? And that's where a lot of the orchestration and management software needs to happen. And so what you guys are doing, and we'll talk a little bit more about how it works, but only a little bit because we don't have that much time. You know, using things like WASM, since you're not using traditional containers, how do you enable that kind

Speaker 1:

of portability across the edge? That's a fascinating spot to be in.

Speaker 2:

It's funny. Having done IoT stuff and then edge for a long time, the edge being that intersection of the digital and physical worlds, very location-based and whatnot: I've seen so many architecture diagrams of data lakes and cloud hubs and blah, blah, blah, all this different stuff, and then this tiny little box at the bottom that's like, you know, sensors and gateways and whatever. The real world, yeah. Which means you've never actually done this in the real world, because that stuff is some of the hardest part.

Speaker 2:

You know, the connection out in the field, and this stuff gets connected somehow. That's a hard one. But yeah, we're out in the physical world and kind of bridging things. It's funny, we were talking, before we hit the record button, around the terms edge AI, the edge, and edge computing.

Speaker 1:

And I like the idea that edge computing is not just local computing; there is a cloud component to it as well. And you know, I love the tech world, because we're one of the few industries where we just make words up, embodied AI or whatever. People just make shit up, and I think it's great. And I was telling you about my pet peeve, which is people using the term edge AI as camel case, like one word, edgeAI. I get very triggered by that. But when you think about edge and edge AI, you know, we used to be called the TinyML Foundation, and there's tiny, and there's all these things. How do you, what's your take? In fact, you worked on this at the Linux Foundation.

Speaker 2:

We worked on a taxonomy there in 2020, and it was really for this reason, because there are so many loaded terms around edge. People are saying, oh, it's near edge, far edge, thin edge, thick edge. There's the AI edge, the industrial edge, whatever.

Speaker 2:

There's one continuum, from a device, up through distributed edge computing on the factory floor, to the on-prem data center, to the bottom of a cell tower, to a regional data center, up through right below the cloud, like at the internet exchange. I mean, this is a continuum, and you run things along it. So we looked at it instead of using loaded terms like near and far, which is funny, because a telco will say the far edge is the device, whereas a consumer would think the far edge is the telco tower. Exactly opposite.

Speaker 2:

It just lacks the context. So we developed this taxonomy around: what are the inherent trade-offs that force choices? Instead of some loaded term like near and far, thin and thick, it's: am I in a physically secure data center, or am I not? Can I run these types of tools, like containers and security tools and things like that, or can I not? Am I so constrained in terms of hardware that I can't? Can I count on five nines of uptime with my fiber connection to my controller, or am I probably going to lose connection because I'm out at a mine site somewhere, or a farm, or whatever?

Speaker 2:

And so these sorts of vectors define the buckets. The taxonomy, which you can find online, and we're even talking about how we frame some stuff within the community here with it, is just hard trade-offs that define what considerations and what tools you would use along that continuum. It got really good traction when we did that, because these are universal truths versus some ambiguous term.
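
As an aside for technically minded readers, the trade-off vectors Jason describes can be sketched as a toy classifier. This is purely illustrative; the field names and tier labels below are our own assumptions, not the actual LF Edge taxonomy.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    """Answers to the hard trade-off questions described above."""
    physically_secure: bool      # locked data center, or out in the field?
    can_run_containers: bool     # enough hardware for Linux/Docker-class tooling?
    reliable_connectivity: bool  # five-nines fiber, or a mine site / farm?

def classify(site: EdgeSite) -> str:
    """Map trade-off answers to an (illustrative) edge tier."""
    if site.physically_secure and site.can_run_containers:
        tier = "secure edge data center"
    elif site.can_run_containers:
        tier = "field edge (Linux-class device)"
    else:
        tier = "constrained device edge (MCU-class)"
    if not site.reliable_connectivity:
        tier += ", design for intermittent connectivity"
    return tier

print(classify(EdgeSite(False, False, False)))
# constrained device edge (MCU-class), design for intermittent connectivity
```

The point of the sketch is the shape of the decision, not the labels: each question is a hard constraint that rules tools in or out, rather than a fuzzy "near/far" adjective.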

Speaker 1:

I like that, I like that kind of taxonomy based on those choices. And yeah, I think the Edge AI Foundation community is going to revisit that taxonomy too. We talked about that, and about starting up a new working group here to talk about these kinds of commercialization issues and pulling in that taxonomy, because it makes sense. And for me it's always like, I mean, you talked about a secure data center; there's a multi-tenant data center, and then there's single tenant. Because we have folks doing on-prem, single-tenant, 19-inch-rack-based stuff, and we consider that edge because it's really constrained.

Speaker 1:

But yeah, it is kind of a relative term. And once you get telco, well, let's not get telco in there, because that's a whole other thing.

Speaker 2:

Telco just comes in and ruins everything, so let's just not talk about that. What you run, where, why, and with what tools completely depends on what part of the edge and what the use case is. Sure.

Speaker 1:

So tell us more about, like, for the listeners that are not familiar: this really deserves its own long tech talk, but let's talk about WASM, and how you guys are managing portable workloads across these heterogeneous edge devices. Maybe people don't really understand that.

Speaker 2:

Yeah, so WASM, which is the short version of WebAssembly. WebAssembly is an awesome technology with a terrible name, neither web nor assembly, as Steven, my co-founder, would say. It was developed a number of years ago as an alternative to, or really to augment, JavaScript in the browser, and it kind of delivers on the promise of Java from 30 years ago: write once, run anywhere. But what's different is that WebAssembly is not a programming language like Java; it is a compile target. You can compile different code bases to WASM, and then it's effectively a virtualization technology that enables you to run that code on any architecture.

Speaker 2:

So what's interesting about WASM? It's everywhere. You probably use it every day and don't even know it. It's in all modern browsers when you have plugins and stuff. It's powering the Amazon Fire Stick and things like that, in terms of creating apps and whatnot. So it is a pervasive technology. We made a bet on leveraging WebAssembly as an enabler for containerization on devices that can't effectively run traditional container technologies like Docker.
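
A small aside on the "compile target" point: every .wasm binary, whatever source language it was compiled from, begins with the same 8-byte header defined by the WebAssembly spec (a 4-byte magic number plus a little-endian version number). Here is a minimal sketch using only the Python standard library; the helper name is our own illustration, not part of any Atym or WASM tooling:

```python
import struct

WASM_MAGIC = b"\x00asm"  # every WebAssembly binary starts with these 4 bytes
WASM_VERSION = 1         # current binary-format version

def is_wasm_module(blob: bytes) -> bool:
    """Check the standard 8-byte WebAssembly header: magic + LE32 version."""
    if len(blob) < 8:
        return False
    magic = blob[:4]
    version = struct.unpack("<I", blob[4:8])[0]
    return magic == WASM_MAGIC and version == WASM_VERSION

# The smallest valid module is just the header: an empty module.
empty_module = WASM_MAGIC + struct.pack("<I", WASM_VERSION)
print(is_wasm_module(empty_module))               # True
print(is_wasm_module(b"\x7fELF" + bytes(4)))      # False: wrong magic (that's ELF)
```

That uniform header is what lets one runtime treat C, Rust, or Python-derived code identically: by the time it reaches the device, it's all the same portable binary format.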

Speaker 1:

So, you know, Atym and Docker. For those people, Docker containers tend to be heavier. That's sort of for, like, a gig and above?

Speaker 2:

I mean, Docker on Linux, the well-known footprint is about 256 megs of memory at a minimum, and generally a gig to run okay, to be comfortable. But below that, on a box that can run Linux and has a CPU, Arm-class or x86 or whatever, below that one-gig memory range, and the limitation is usually memory, you're doing embedded Linux. Then once you get to MCUs, it's always been firmware forever. We joke: the 90s called, they want their development tools back. It's been the same way for a long time. But yeah, WebAssembly, because it doesn't presuppose Linux, enables us to bring containers past what we call the Linux barrier, to devices that are even powered by microcontrollers, by having an RTOS-based runtime.

Speaker 2:

So what we sell is our central console. It's the hub, as I mentioned. You'd log into it like you log into AWS if you're buying the SaaS version, or it would be part of your platform if you're an OEM. We've got a built-in container repository. You could bring your own container repository. You integrate and deploy and manage secure containers out in the field. Bring your own apps, all that stuff. That's what we sell. But then we have two flavors of runtime. The original one is for microcontrollers. It's built with the Zephyr RTOS. It really could be any RTOS, but we chose Zephyr because it's got a lot of traction.

Speaker 2:

That basically becomes the firmware on your... what's that?

Speaker 2:

No, Zephyr is a pretty good target, yeah. A lot of momentum. So, basically, if I have a device today, it's a sensor and it's running firmware, every single change makes me recompile that firmware. I probably don't update it, and if I have to update it, maybe it's for security reasons, and I'm afraid of it. I cross my fingers, because I might break it if it loses power while it's updating. Our runtime basically becomes the firmware on the device, and then you deploy containers, written in your choice of programming language, on top. And we should definitely talk about how this is really important when it comes to trying to do AI. So that's on a microcontroller-based device: the runtime basically becomes the firmware, and then it's all software on top of that, containers.

Speaker 2:

We have a newer version that's a Linux runtime, and it runs as a Linux service. So that's really Docker on a diet. That's for people running on a gateway, a set-top box, a router, a switch, a controller, something where it's like: I can run Linux, I'm doing embedded Linux today, I'd like to move to Docker, but it consumes much of my memory, most of my memory. We would consume like a megabyte versus 256 megabytes, so you'd get hundreds of megabytes back to, I don't know, run a more involved AI model, or do something else with.

Speaker 1:

And AI models can be compiled into WASM as a target, right? You can go from PyTorch to ONNX to WASM.

Speaker 2:

Yeah, it's just that compile target. So, an example use case: on MCU-based devices, to do AI. I mean, you guys were initially TinyML and grew into a broader spectrum, but TinyML is still a key component. To do that today, you've got to take that code and compile it into the firmware, and very likely that model code is going to continuously evolve as it's retrained and improved. So every time you do that, you've got to recompile it into the firmware. That's challenging enough. But the other challenge is that the person that knows embedded C, that does lower-level device drivers and all this stuff, is not usually the person that knows machine learning and Python and Docker and Linux and all that.

Speaker 2:

They're different skill sets. And so our approach, which is rooted in that embedded world, abstracts the embedded complexity of the device, and then on the top side it feels like Docker. I mean, these containers could even live in the same repositories as Docker containers. It basically enables both the embedded and the AI camps to collaborate without having to fundamentally learn new stuff.

Speaker 1:

Right, yeah, that's awesome. It's peanut butter and chocolate, right, that's right.

Speaker 2:

We have an industrial customer that wanted to put machine learning on a sensor for telemetry data. You've got the core embedded group that knows all the lower-level intricacies of the device, and then you've got this guy in another group that knows machine learning. So we get our runtime spun up on the hardware. It was an ST Micro board with, I think, under a meg of memory. I think our footprint is about 128 kilobytes; that's 2,000 times smaller than what Docker would require. We get it on there, and then we get on the phone with the machine learning engineer in this other group. He's like, all right, you've got this new environment. What do I have to change to use this newfangled stuff?

Speaker 2:

We're like: just the target directory. Literally the exact same code that he's running on his Linux machine can be compiled down to this tiny container on a device that has under a meg of memory. That's an example of how it helps bridge these different camps together.
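
The "2,000 times smaller" figure from this story is easy to sanity-check as back-of-the-envelope arithmetic, taking the roughly 256-megabyte Docker-on-Linux floor mentioned earlier against the roughly 128-kilobyte runtime footprint. Both figures are the ones quoted in the conversation, not independent measurements:

```python
# Figures as quoted in the conversation, not independent measurements.
docker_min_bytes = 256 * 1024 * 1024  # ~256 MB: the well-known Docker-on-Linux floor
atym_runtime_bytes = 128 * 1024       # ~128 KB: the footprint from the customer story

ratio = docker_min_bytes // atym_runtime_bytes
print(ratio)  # 2048, i.e. roughly the "2,000 times smaller" claim
```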

Speaker 1:

Yeah, bringing worlds together. Well, we definitely need to do an Edge AI talk, do a deep dive, a demo, some walkthroughs and stuff. So, in terms of joining the Edge AI Foundation, did you first encounter us in Austin? Was that how you found us? What was the connection?

Speaker 2:

I've been tracking the TinyML Foundation for a long time. I've known you guys for a while, so I just came to Austin because it's down the street.

Speaker 2:

I mean, really, it's just a matter of convenience. But no, we've known you guys are doing good stuff for a long time, and a lot of the folks we're working with are members as well, so it's just a natural fit. Streamlining the brute force of embedded firmware, I call it source-code soup, where everything's all compiled together, has been a nice-to-have improvement for a long time. But things like cybersecurity and the Cyber Resilience Act coming up, where you have to be able to address security more programmatically, and you'll get fined if you break the regulations, that definitely is causing some concern and action. Right.

Speaker 2:

AI fundamentally changes it. If I were a smart light switch running firmware, and all I do is turn the light on and off, and that's everything I'm going to do for the rest of my life, okay: firmware. Cool, just hard-code it, set it and forget it. But when I start running intelligence on that device, written by different people, including third parties, where I might want to run their machine learning model and they don't want to give me source code, they'd rather give me a binary container that protects all their IP, it completely changes how you would develop embedded software. So, I mean, we're a big believer in this whole movement and what you guys are doing, and edge AI is one of the driving functions for what Atym is doing.

Speaker 1:

Yeah, cool. Well, Jason, it's been good to connect today. Your colleague is going to be in Amsterdam, so I'll probably see him there. Yep, Brian will be there, and we'll have to connect in person. I think I owe you a hat now that you're a full member.

Speaker 2:

Yeah, yeah. That's actually really the reason: the hat.

Speaker 1:

Well, I was doing some little product placement of our new mug here.

Speaker 2:

Oh, that's nice. It was the cool hat, really. Let's face it, that's what did it.

Speaker 1:

At the end of the day, it's the hat. Cool. Well, it was good to connect again with you and learn a little bit about Atym. I'm sure folks are going to learn a lot more.

Speaker 2:

And yeah, wishing you the best in the future. Yep, and we look forward to collaborating in the community.

Speaker 1:

Sounds good. All right, thanks, Jason. Okay, it's gonna play an outro. Actually, I'm gonna get that. Sorry.