The EDGECELSIOR Show: Stories and Strategies for Scaling Edge Compute

Into the Heart of Edge and AI: A Deep Dive with Arm's IoT SVP Paul Williamson

January 09, 2024 Pete Bernard Season 2 Episode 1

Get ready to embark on a journey into the fascinating world of Edge Compute, AI, and all things semiconductors as we sit down with Paul Williamson, Senior Vice President and General Manager of IoT at Arm.

Ever wondered how to navigate a career in the tech industry? Paul lays bare his unique path to Arm, emphasizing the importance of following your curiosity and continually learning new skills. 

We then shift gears to take a closer look at the practical applications of AI in a myriad of devices. From noise cancellation technology to anomaly detection and image recognition in agriculture and retail settings, AI has become a game-changer. Listen in as we highlight Arm's recent product releases, including their innovative Helium and Ethos technologies, and how they are reshaping AI in low-power, wireless devices. Be prepared to gain a deeper understanding of how AI is becoming more accessible and efficient across various industries. 

Lastly, we set our sights on the future, contemplating the integration of cloud-level machine learning with localized real-time loops. We discuss how system simulation plays a critical role in optimizing performance, latency, and cost. The conversation also covers the role of AI in PCs and how Arm is propelling this technology. Plus, we delve into the convergence of the mobile and PC worlds, and how this is driving a demand for AI capabilities in laptops. 

By the end, you will have gained insights into the transformative power of AI and IoT in our interactions with technology. So, are you ready to dive in?

Want to scale your edge compute business and learn more? Subscribe here and visit us at https://edgecelsior.com.

Speaker 1:

When you ask people what Edge Compute is, you get a range of answers: cloud compute and DevOps, with devices and sensors; the semiconductors outside the data center, including connectivity, AI and a security strategy. It's a stew of technologies that's powering our vehicles, our buildings, our factories and more. It's also filled with fascinating people that are passionate about their tech, their story and their world. I'm your host, Pete Bernard, and the EDGECELSIOR Show makes sense of what Edge Compute is, who's doing it and how it can transform your business and you. So let's get started. Well, I guess that's one thing. COVID, you know, one of the heroes of COVID was the internet. Absolutely. All the connectivity got a lot better and we all upgraded our equipment and here we are.

Speaker 2:

It's fascinating thinking back to what conference calls were like and how bad they were before we had Zoom and all this technology. I mean, a number of times people dialing in from cars and unable to look at slides, and all the rest of it. Well, I remember when I was at Microsoft, we would have you know.

Speaker 1:

We would notoriously forget to dial in to the meeting, you know, until like halfway through it's like, oh, so-and-so was supposed to call in, you know. And so then they'd call in on, you know, Skype or Lync or whatever we had back then.

Speaker 2:

Right yeah.

Speaker 1:

You know it was kind of a disaster. So you know it's like kind of a first-class citizen thing. It is, yeah, I guess that's good. But well, Paul, before we get into it too far, let me first of all welcome you to the EDGECELSIOR Show and let me introduce you: Paul Williamson, Senior Vice President and General Manager, IoT Line of Business at Arm, and hopefully I got that right.

Speaker 2:

It is.

Speaker 1:

Excellent. And maybe we can start with kind of, like, what's your origin story? Maybe we'll do that as a warm-up. You know, people can go to LinkedIn and look at your stuff. Sure, sure.

Speaker 2:

What's the origin story that we should know? I guess maybe we should look at how I came to be at Arm. I mean, I'm very much a UK person. I started out in the technology consultancy world, working with a whole range of OEMs on their end products, but at some point moved over to the silicon world and then moved to an organization local to where I live now in Cambridge, Cambridge Silicon Radio, who people may know as, sort of, particularly leaders in Bluetooth audio, but then latterly in sort of Bluetooth Low Energy technology, and I led one of the divisions there which was heavily involved in IoT and low-power radios, through to selling that to Qualcomm in the end.

Speaker 1:

And many will know Jeff Torrance. Did you work with Jeff? Absolutely, Jeff.

Speaker 2:

Jeff was leading that M&A process actually and now leads the IoT for Qualcomm.

Speaker 1:

So we're not too far apart. Nowadays, it's a small world.

Speaker 2:

It is a small world. So, yeah, and having completed that deal, I decided, you know, that was going into Qualcomm and I wanted to move to a company where I still felt there was an opportunity to do some interesting, influential work and stay in the Cambridge area; my family, my wife's in the medical industry and there's a lot of interesting med tech around here as well. So Arm, down the road, was top of the list. They were doing interesting, innovative stuff, certainly influencing the world, given the scale of their footprint, and I was very fortunate to be offered a role by their CMO at the time into their corporate strategy team.

Speaker 2:

So a bit of a shift in gear: went from leading a team, or a division, down into a role that was much more individual, strategy, M&A, that kind of area. Right, right, right. But then, a few years at Arm, now coming up for nine years, and in that time, yeah, it's good.

Speaker 2:

Good while. And I've done everything from mergers and acquisitions when we were sort of getting into IoT, connectivity technology, things like SIMs and security technology, through to running parts of our IP business. For a period of three and a half, nearly four years, I ran our client line of business, which was smartphones, tablets, laptops, but most recently, a couple of years ago, rotated into this seat leading IoT. So to some extent, a sort of return home to the IoT market. Your core roots.

Speaker 1:

Yeah no, that's interesting. I mean, yeah, lots of folks on the show. That's usually my kickoff question and it's interesting to see there's a lot of parallels in people's journeys, how they go sort of from one situation to another. Sometimes it builds and sometimes it comes back to sort of what they were originally working on, but that's you know. I talked to a lot of college age or early in career folks in the tech business and I always tell them it's just kind of a series of interplanetary missions. You know your career is going to, you're going to land on a planet, you're going to sort of figure it out and then that's going to lead you to some other planet which maybe you don't know what it is yet, but it's sort of like that's the way it is. Just kind of keep getting these experiences, I agree.

Speaker 2:

I think actually the decision to move to Arm was driven by me looking at those things and going, you know, where's that next interesting experience. And, like you said, when I talk to people in their career development, you know, a lot of people want to map out every step along the way and I do sort of have to confess that I didn't map out the steps. I followed my curiosity and, you know, found opportunities that fed the curiosity and I think if you're enjoying feeding that curiosity, you'll invest time and effort and, you know, you'll learn new things and you'll develop new skills and that's how you sort of get these opportunities to do the next thing that can feed that curiosity. So yeah, I certainly didn't plan all of this out. That's for sure.

Speaker 1:

Yeah, no, it's kind of impossible to do that, but cool. So let's talk a little bit more about Arm and semiconductors, and obviously AI has been sort of the, you know, hot topic du jour. I guess last year, in 2022, it was probably metaverse, but this year it's AI. So moved on to another hot topic. But actually AI has been around for a long, long time, as you know.

Speaker 1:

So now generative AI, the ability to kind of predict and create, has kind of captured everyone's attention. I actually saw an article today about, I don't know what it's called, like Channel One News. It's like a complete AI news broadcast with AI-generated news anchors. Pretty freaky, pretty freaky. I was saying like I'd like to see the Walter Cronkite kind of AI avatar come back. So it's really sort of permeating all of our experiences and fears and, you know, apocalyptic whatever, right? I mean, it's just fueling this machine. But AI has had a lot of practical benefits, especially in the AIoT and edge space, for a long time. Can you talk a little bit about sort of where Arm has been intersecting AI over the past few years?

Speaker 2:

Yeah, sure. So I think, yeah, I mean you've just mentioned large language models and, you know, generative AI, and I love the excitement around it and I find it fascinating. That example you gave was wonderful. You know, I get slightly nervous when you talk about news, when we're also aware there's a bit of hallucination that goes on in these systems. So we'll see where this all develops. But, you know, Arm has been very heavily involved in recent announcements around those kind of technologies with, you know, things like the Grace Hopper chip that Nvidia has delivered. That's been really focused on.

Speaker 2:

Yeah, the power of compute that you need to feed these sort of training algorithms needs a balance of compute, memory bandwidth and specific machine learning accelerators. But, as you pointed out, it's been used much more widely for a long time now, and I think particularly unseen at the smaller end of the compute spectrum. So certainly I've seen a lot of innovation in more deeply embedded devices, in things that we're using every day, and I think we would perhaps not initially think of them as AI applications, but things as simple as the smart speaker. If you have one in your home, you know, whether it be Alexa or Google or something else, the front end of those devices, even the bit that's doing that wake word detection to work out, you know, did you say something that is the word that they're looking for, is using machine learning to do that classification. So I've seen it applied and evolve even in applications that we're used to.
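
(For illustration of the kind of always-on classification Paul describes, here is a minimal Python sketch of a wake-word style front end: extract a small spectral feature vector from one audio frame and score it with a tiny logistic classifier. The frame size, band count and weights are made-up stand-ins, not any vendor's actual pipeline.)

import numpy as np

# Hypothetical always-on wake-word front end: a tiny classifier run on short
# audio frames. The weights are random stand-ins; on a real device they would
# come from an offline training step.

SAMPLE_RATE = 16_000       # assumed microphone sample rate
FRAME_LEN = 512            # roughly 32 ms of audio per frame at 16 kHz
N_BANDS = 16               # coarse spectral features, small enough for an MCU

rng = np.random.default_rng(0)
W = rng.normal(size=N_BANDS)   # stand-in classifier weights
B = -1.0                       # stand-in bias

def spectral_features(frame):
    """Collapse an FFT magnitude spectrum into N_BANDS log-energy features."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, N_BANDS)
    return np.log1p(np.array([band.sum() for band in bands]))

def wake_word_score(frame):
    """Return a 0..1 score; anything above a threshold would wake the device."""
    x = spectral_features(frame)
    return 1.0 / (1.0 + np.exp(-(W @ x + B)))   # logistic classifier

frame = rng.normal(scale=0.01, size=FRAME_LEN)  # simulated microphone frame
print(f"wake score: {wake_word_score(frame):.3f}")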

Speaker 2:

I mean, we mentioned starting this podcast about echo and noise cancellation. You know, a key technology that was all historically done with traditional digital signal processing and filtering, but more and more, as that is being improved, people are integrating machine learning into those capabilities. So, as you said, while the buzz and the excitement has been the breakthrough in large language models, actually the hard work behind the scenes for the last five years has also been to bring machine learning capability into these lower compute performance devices in the background.

Speaker 1:

Yeah, yeah, and I was reminded of. There's a company I'll do a plug for them called Caliper. They're in.

Speaker 2:

Australia.

Speaker 1:

They do some fascinating, kind of, I call it anomaly detection type of scenarios, right? So they're out there measuring the temperature of the train tracks in Australia for expansion, and sewer gas levels, and there's a lot of really interesting instrumentation going on. And, you know, the idea of anomaly detection, or detecting when the pattern is disrupted, applies to even, like, the current draw of a dryer in our house; if that starts to go up unnecessarily, that could indicate that the thing's about to fail.
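
(As a rough sketch of that anomaly-detection idea, assuming nothing about Caliper's actual system: keep a rolling baseline of a reading like current draw and flag values that drift well outside it. The window size and threshold here are arbitrary.)

import numpy as np

# Illustrative anomaly detector for the "dryer current draw" example:
# keep a rolling baseline and flag readings that sit well above it.

WINDOW = 50        # number of recent samples used as the baseline
Z_THRESHOLD = 3.0  # how many standard deviations counts as anomalous

def detect_anomalies(readings):
    """Yield (index, value) for readings far outside the rolling baseline."""
    history = []
    for i, value in enumerate(readings):
        if len(history) >= WINDOW:
            mean, std = np.mean(history), np.std(history) + 1e-9
            if (value - mean) / std > Z_THRESHOLD:
                yield i, value
        history.append(value)
        history = history[-WINDOW:]

# Simulated current draw (amps): steady at ~5 A, then creeping upward.
rng = np.random.default_rng(1)
normal = rng.normal(5.0, 0.1, 200)
failing = rng.normal(5.0, 0.1, 100) + np.linspace(0, 2.0, 100)
for idx, amps in detect_anomalies(np.concatenate([normal, failing])):
    print(f"sample {idx}: {amps:.2f} A looks anomalous")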

Speaker 1:

So, you know, using AI in that, I know that the Ethos and some of the kind of core IP that Arm has been driving, which now is manifesting itself in actual chips, is part of that, right? The Ethos, absolutely, yeah, 55, 85. I don't know if I got all the numbers.

Speaker 2:

Cortex-M series, yeah. So I suppose a key technology, yeah, if we look at that series of deeply embedded microprocessors which, you know, are the mainstay of the microcontroller sort of developer community. As you said, we've seen the way that they're developing those algorithms for the past sort of five years transition to using more and more machine learning, and it also needs a bit of DSP capability as well.

Speaker 2:

Typically, you know, even those guys doing those fault detections are doing some kind of filtering step first to work out, you know, I don't need to look at all the information, I only need some of it, and then later doing some kind of classification using machine learning. And so we brought a technology into our mainstream microprocessors called Helium, and that accelerates those digital signal processing and those matrix kind of mathematics that you need for the classifiers, even in a general purpose microcontroller. And we've had a few of those products out. We had M55 initially, which was kind of a mid-range performance for microcontrollers. M85 more recently, which Renesas has been leading the way with in their product announcements this year, which is a sort of high performance, taking it up towards a gigahertz in sort of frequency, so quite high compute capability, even doing vision-level tasks with those kind of inference capabilities. And then most recently we announced a product called M52, which is sort of more for the lower power, more constrained microcontrollers. And, you know, it was interesting to me, you mentioned those sort of anomaly detection use cases, and that's perfect for one of these deeply embedded M52 microcontrollers.
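
(A minimal sketch of the "filter first, classify second" pattern Paul describes, written in plain NumPy rather than the optimized kernels a Helium-capable Cortex-M would actually use, such as CMSIS-DSP/CMSIS-NN; the filter length and classifier weights are invented for illustration.)

import numpy as np

# Stage 1 is cheap DSP, stage 2 is the small matrix math a vector
# extension like Helium is designed to accelerate.

rng = np.random.default_rng(2)

def moving_average(signal, taps=8):
    """Stage 1: a cheap FIR low-pass filter to strip high-frequency noise."""
    kernel = np.ones(taps) / taps
    return np.convolve(signal, kernel, mode="valid")

# Stage 2: a tiny dense classifier; weights are random stand-ins for a
# model trained offline.
W1, b1 = rng.normal(size=(16, 121)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

def classify(window):
    filtered = moving_average(window)           # DSP step
    hidden = np.maximum(W1 @ filtered + b1, 0)  # matrix math (ReLU layer)
    logits = W2 @ hidden + b2
    return int(np.argmax(logits))               # 0 = normal, 1 = fault (illustrative)

window = rng.normal(size=128)                   # one window of sensor samples
print("predicted class:", classify(window))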

Speaker 2:

But even, you know, I always thought image and vision sensing was probably the domain of higher performance compute, but when we launched this product, one of the example applications which I think fits it very well, which is currently done with a higher performance, higher frequency, more power-hungry applications processor but is going to fit into this sort of new type of device, is where they're using it in crop monitoring, actually for wine in Northern Italy.

Speaker 2:

They were looking at the leaves of plants for beetles and signs of beetle tracks to see if the crops had been attacked, and so, yes, they're using, you know, an image sensor. But the frame rate you need to tell whether a beetle was there or not in the image is, you know, once every five minutes probably. So, you know, actually you can do that on a coin cell battery with a little image sensor. So it's really interesting to see. Yes, there's a lot of compute and energy needed to develop the algorithm to make this effective, but once you've built it, you can deploy it, you know, for these lower data rate applications in very small, efficient microcontrollers in the future.
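
(A hypothetical duty-cycled loop in the spirit of that crop-monitoring example: wake rarely, grab one small frame, classify, report only on a hit. capture_frame and beetle_probability are placeholders; a real deployment would run a quantized model on the microcontroller and deep-sleep between frames.)

import time
import numpy as np

CAPTURE_INTERVAL_S = 5 * 60   # one frame every five minutes
IMAGE_SIZE = (96, 96)         # small grayscale frame, MCU-friendly

rng = np.random.default_rng(3)

def capture_frame():
    """Placeholder for reading a low-resolution frame from the image sensor."""
    return rng.random(IMAGE_SIZE, dtype=np.float32)

def beetle_probability(frame):
    """Placeholder classifier: stands in for a small quantized vision model."""
    return float(frame.mean())   # not a real model, just keeps the sketch runnable

def monitoring_loop(cycles=3, sleep_fn=time.sleep):
    for _ in range(cycles):
        frame = capture_frame()
        p = beetle_probability(frame)
        if p > 0.6:
            print(f"possible beetle damage (p={p:.2f}) -> send alert")
        sleep_fn(CAPTURE_INTERVAL_S)   # device would deep-sleep here

# For a quick test, skip the real five-minute sleeps.
monitoring_loop(cycles=3, sleep_fn=lambda _: None)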

Speaker 1:

Yeah, yeah. No, I met with a company called Alif Semiconductor. You probably know Alif, Pleasanton, California, I think. I was there last week and, yeah, Reza was showing me his Alif dev board, you know, doing this like 250-pixel-square, 30-frames-per-second image recognition and things. So, yeah, you're talking about coin cell battery, AI vision, wireless connectivity. It's kind of an incredible combination, like you said, getting out into agriculture, and I've seen it being used for, like, you know, shelf stock detection.

Speaker 1:

On shelving and other kind of more passive things where, you know, you just want to have a set of eyes on something to see what's going on, or inspecting, then you could do that. So, yeah, no, that's fascinating. And I think a lot of people talk about AI as a very cloud-driven technology. Like you said, yeah, Grace Hopper and all that good stuff, and clusters and rows upon rows of racks of GPUs, but at the same time, there's a lot going on to take in these signals and do all the processing at the edge, in the field, in the winery, I guess. Yeah, and that's going to be interesting.

Speaker 1:

And I think one of the themes for 2024, for me, or one of the things I've seen out there too, is around AI model coherency. And so when you start thinking about AI models that are on the cloud and on the edge, you know, how do you have more coherency for developers? Right? Because as a developer, you know, all developers want kind of one universal instance that they can code to, right, which is why Java existed and Electron and every other one of these frameworks. And so then it's like, well, how do you get more coherency of the AI models between the edge and the cloud and training and inferencing?

Speaker 1:

And it feels like that's another area that still needs to get resolved, or at least needs more progress made against it.

Speaker 2:

Yeah, we're seeing some good progress there in, sort of, people refer to it as MLOps flows, for targeting and scaling your model for different environments.

Speaker 2:

So we've been partnering with companies like Qeexo and Edge Impulse, who help developers to do exactly this. You may train a model in the cloud and you may have a big data set out there, but you may need to, well, you will need to eventually run it in some end device with some level of constraint. You can't have endless compute, endless power, and these MLOps developer flows do allow you to look at, what if I constrained to a certain memory footprint or a certain processor type, and see what the performance will be. How many frames per second can I really run at? And Arm works with those companies to provide.

Speaker 2:

We're recognizing as well the way that people develop for these embedded targets is shifting to a more modern developer flow.

Speaker 2:

So, working with these companies, we can provide virtual models of the Arm processor IP that lives within there, whether it be the Ethos dedicated accelerator or the Cortex-M core, and with their tooling they can then integrate those models, and when they run sort of different configurations of the models, they can work out roughly what the performance will be without you ever having to touch a piece of hardware, which is really powerful, to be able to sort of more quickly work from the cloud but deploy towards the edge. But we are seeing that in tension with the trend that you described. People are more familiar with cloud-based development, increasingly expectations of Linux, high-level languages, not having to think about what compiler am I using and how many bytes of memory am I gonna care about here or there. So we see a mixture of the two happening, of people having these higher performance, less constrained platforms for the software flexibility it gives where they can afford to have it, and then more advanced tooling emerging to help people better target these end devices efficiently.

Speaker 2:

So it's an interesting dynamic that's gonna play out over the next few years, for sure.
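
(A back-of-the-envelope version of that "will it fit, how fast will it run" check, done entirely offline. This is not Arm's virtual-model tooling or any partner's MLOps flow; the layer sizes, flash budget, clock and MACs-per-cycle figures are invented.)

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    params: int        # weight count
    macs: int          # multiply-accumulates per inference

MODEL = [
    Layer("conv1", params=4_320, macs=3_900_000),
    Layer("conv2", params=18_432, macs=9_200_000),
    Layer("dense", params=16_064, macs=16_064),
]

FLASH_BUDGET_BYTES = 256 * 1024   # hypothetical target constraint
BYTES_PER_PARAM = 1               # assume int8-quantized weights
CLOCK_HZ = 160e6                  # hypothetical MCU clock
MACS_PER_CYCLE = 2                # hypothetical sustained throughput

def report(model):
    weight_bytes = sum(layer.params for layer in model) * BYTES_PER_PARAM
    total_macs = sum(layer.macs for layer in model)
    fps = CLOCK_HZ * MACS_PER_CYCLE / total_macs
    fits = "fits" if weight_bytes <= FLASH_BUDGET_BYTES else "does NOT fit"
    print(f"weights: {weight_bytes / 1024:.1f} KiB ({fits} in {FLASH_BUDGET_BYTES // 1024} KiB)")
    print(f"estimated throughput: ~{fps:.1f} inferences/s")

report(MODEL)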

Speaker 1:

Yeah, yeah. Now, I saw some stats that said there's something like 10x the amount of cloud developers as there are embedded developers, or something like that, right? And people coming out of university and things like that, that's the platform everyone's on; the cloud DevOps kind of CI/CD whole thing is kind of the way you do things now, I guess. That being said, people have been doing embedded development for decades, right? So this is sort of a collision of worlds.

Speaker 2:

Yeah, it's more a collision, or sort of adjacency, I don't think they ever go away. It's interesting to me to see some of these industrial IoT applications where, yeah, they might want containerized software workloads or microservices at the edge and continuous update and integration to the cloud, but equally, when they've got, I don't know, a light gate for human interference detection, for a safety-critical system, they're not gonna defer that to the latency in the high-level language, and they will have a real-time subsystem, which is still developed with embedded coding, yeah.

Speaker 1:

I agree, it'll never go away. Actually, I spent like nine years at Phoenix Technologies, the BIOS folks. I started my career as a BIOS developer in Norwood, Massachusetts, and still know a bunch of those folks, and they're, you know, folks are still doing that stuff, like just putting bits in registers and moving stuff around. So that's not gonna go away, but it is interesting to see that evolution.

Speaker 1:

You were just talking about orchestration, really, of workloads and code flow, and, yeah, hopefully it becomes more of one holistic system between the cloud and the edge, and people can sort of unlock some of that capability faster.

Speaker 2:

Yeah, that's definitely the direction we're headed.

Speaker 1:

I was gonna ask you too, like, so you mentioned there's Cortex-M, there's Cortex-A, there's Cortex-R. Sure. We will put the Cortex-R to the side for a moment, but where's the action these days relative to Cortex-A and Cortex-M on the edge, in the IoT space? Are we seeing more kind of Cortex-A stuff happening, or what's the? I think absolutely yes.

Speaker 2:

So yeah, it's interesting, so it's sort of yes and yes, which isn't very helpful, but both areas are growing in software complexity, whether you're deeply embedded or whether you're in higher performance stuff. But I mean, I have a bias at the moment. Well, it's partly due to our product cycles. We've just launched Cortex-M52, you know, the deeply embedded ML on the edge is fascinating. However, I'm sort of in my mind already moving on to some of the other innovations we're seeing in the market which are fascinating to me. So, if you go into higher performance IoT edge devices, we're seeing a very rapid move from some of the legacy software world towards Linux-based platforms.

Speaker 2:

You know, we have a number of silicon partners who've been highly successful in that space. Historically, companies like NXP, who are very well known for their range of i.MX sort of IoT edge A-class Linux-capable devices, which are very successful. But we're seeing also a higher performance tier moving into that space. So almost what you'd sort of call like an industrial edge PC, but it's no longer a PC. This isn't a sort of old-school Windows box sat on a shelf. It's your vision system in your robot or your production line controller. These are embedded Arm-based compute modules that are becoming more popular, and a number of those are being offered as a sort of derivative of the mobile platform. So you see companies like MediaTek, particularly with the Genio line, but also even Qualcomm, extending support more towards Linux, away from just Android in the past, and they're bringing in longer lifetime support so that these higher performance compute platforms can be deployed into the IoT, with ranges from 10 years

Speaker 2:

plus or whatever. 10 years plus, yeah, and you're dealing with everything, at the moment at least, from point of sale and retail, which, as you can see day to day, looks more and more like a smartphone, basically, or advertising points. But we're also gonna see, I think, this interesting higher performance Linux industrial compute, where you do also need this sort of real-time, critical safety level involved as well. But Linux is definitely where that's happening, and we're seeing that growth in our partners' offerings into that market.

Speaker 1:

Yeah, and I saw, I mean, Nvidia has their IGX platform, which is very high performance, and, speaking of Cortex-R, it also has the real-time safety island. So we're seeing an interesting mashup of Cortex-A based horsepower combined with kind of Cortex-R real-time based safety.

Speaker 1:

And, you know, going back to cloud devs talking about failing fast and all this other stuff, when it comes to the edge and IoT, you actually don't wanna fail fast. Like, you don't wanna fail, basically, because these are things that are, like, running manufacturing lines and safety equipment and other things. It needs to work and it needs to be deterministic, and so it's interesting to see, like you're saying, these platforms are becoming more powerful, more capable, but also need to have sort of the determinism and control that we used to have back in the bare metal, kind of RTOS days.

Speaker 2:

Yeah, that's absolutely right, and we're seeing that it's solved in a number of ways. Sometimes, as you said, in integrated silicon, where people will have a Cortex-A but perhaps a safety island on the same device, so that it's solved at a single SoC level; or sometimes it's still at a discrete industrial PC level, where you might have a series of sort of real-time controllers for power switching networks, while you've still got a high-level supervisor which might be doing continuous management of the infrastructure with a much more capable non-real-time Linux-based device.

Speaker 2:

So we absolutely see people bring these systems together with different scales of compute on different nodes in the system, and in the past, in discussing this with people, I do like the analogy of the human sort of nervous system, the way that maybe you've got a very developed frontal cortex which is doing the deep thinking of the system and management of the overall system.

Speaker 2:

But equally, when I tread on a Lego brick, I don't have the time to wait for my frontal cortex to forget about the other things; I need to step off that pretty quick. So you need those little local loops of control to be acting in the system to stop you doing harm and hurting yourself.

Speaker 2:

So yeah, I think systems will continue to look like that. They'll have cloud-level machine learning involved and sort of distributed, you know, virtualized tasks, but you'll also have these, you know, real-time localized loops for either latency or safety criticality that need to just have that local compute capability. And the goal is to bring that all together in a software world which, as you said, can scale reasonably well, sort of cloud to edge.
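
(A small sketch of that "local reflex loop plus slower brain" split: a tight loop enforces a safety limit immediately, while readings are queued for heavier, latency-tolerant analysis that could just as well happen in the cloud. The sensor values and threshold are invented for the example.)

import queue
import threading

SAFETY_LIMIT = 80.0          # e.g. degrees C; trip instantly above this
readings = queue.Queue()     # hand-off from the fast path to the slow path

def local_control_loop(samples):
    """Fast, deterministic path: check the limit on every sample, no waiting."""
    for value in samples:
        if value > SAFETY_LIMIT:
            print(f"TRIP: {value:.1f} exceeds limit, shutting output off now")
            return
        readings.put(value)   # queue for slower analysis

def slow_analysis_worker():
    """Slow path: batch readings and stand in for heavier analytics / upload."""
    batch = []
    while True:
        try:
            batch.append(readings.get(timeout=0.2))
        except queue.Empty:
            break
    if batch:
        print(f"analysed {len(batch)} samples, mean={sum(batch) / len(batch):.1f}")

worker = threading.Thread(target=slow_analysis_worker)
worker.start()
local_control_loop([70.2, 71.0, 72.5, 85.3, 73.0])
worker.join()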

Speaker 1:

Right, and that's the direction that we're headed. Yeah, and actually, going back to the point you made about simulation, system simulation, the silicon simulation.

Speaker 1:

I mean, I think that's really key, especially with these more complex systems, to be able to sort of lay them out, map them out virtually, you know, using some cloud compute, and being able to understand the performance and the latency and what workload should I be running where, to really optimize the system for power or performance or cost or whatever. I mean, I think that'll be pretty powerful for people to be able to get their hands on eventually.

Speaker 1:

I had a quick question, so slight shift, sure, because I know you were in charge of the client business for a while, and one of the hot topics these days is the AI PC. Yes. And I know that there are some pretty cool announcements going on with Qualcomm and their chips, and now Intel just had their events, of course, and AMD had their events, and everyone's talking about AI and the AI PC. What are your thoughts on this particular incarnation of the PC and Arm's role in potentially helping accelerate that?

Speaker 2:

Yeah, it's an interesting question, and it's the space that I was pushing quite significantly while I was leading the client line of business.

Speaker 2:

And, you know, I mean, I'm doing this session with you today on, you know, an Arm-based laptop, and I think the innovation that having Arm in laptops has developed is for us to sort of challenge and create more interesting, heterogeneous architectures that bring different levels of optimized compute, and that inevitably also has seen huge power efficiency gains, as well as being able to bring in concepts a bit like big.LITTLE, or at least having scalable-size cores which allow you to do, you know, sprints or bursts of compute when you need to launch applications or you're doing high-performance games, but then to back off to sort of standby-level compute for, you know, power efficiency, and that alone was, you know, an obvious and significant benefit of the move to Arm.

Speaker 2:

But I think the further benefit of this heterogeneity, this ability to bring AI and ML acceleration into these SoCs as a significant citizen in the design architecture, if you like, will be important going forward. So I think the flexibility that Arm brings to this market is going to continue to drive more and more innovation.

Speaker 2:

So, yeah, I'm excited by, you know, its potential, and I think we're already seeing the transition that companies like Microsoft are investing in on, you know, increased use of AI in applications that will support our working lives, and having local support for those capabilities on a device is going to be really critical, so I think it makes total sense as the natural direction from here forward.

Speaker 1:

Yeah, no, it'll be interesting to see. I mean, I know, coming from it, I worked on the Windows on Snapdragon program.

Speaker 1:

I was at Microsoft, that was one of my big projects there, and so I'm a big fan of kind of, you know, bringing in those new architectures. And even back then we had, like Windows ML and we had sort of APIs for doing AI acceleration, and now I think hopefully we'll see those even more utilized and there's even more horsepower now under the hood to do those types of workloads. So, yeah, it'll be interesting to see what developers do with that and how the market responds to it. But, yeah, yet another chapter in the PC history, I guess.

Speaker 2:

Yeah, another chapter, and, you know, I suppose it's going to be, there's been the continued tension of thin client cloud and sort of local capability, and I'm sure there will continue to be oscillations in that. But I think the big innovation that we've done in these last few years, and that we're going to see sort of bed in is, you know, Chromebooks first sort of disrupted this, using sort of Arm-based platforms from tablets, and going forward we're going to see increasing innovation in the SoCs that are going into these devices, I think prompted by some of that early innovation. So, you know, learning from mobile but then applying it in a very different way at a different scale, right through to the cloud, which is going to be really exciting.

Speaker 1:

Well, it's interesting because the mobile world and the PC world are sort of, like, I sort of consider them to be parallel universes, you know, like with my Marvel comics kind of analogies. There's these parallel universes based on different architectures, different channels, different use cases, and then, starting like a few years ago, they started to mush together a little bit, I think maybe because people started using their phones maybe more than their PCs, and now these worlds are colliding again, another two worlds colliding, of saying, you know, how do we take what's really worked well in the phone space around Arm-based architectures and low power and, like you said, big.LITTLE and these other things and connectivity, and then, you know, leverage that in a larger form factor like a PC, and so it's interesting to see that sort of innovation happening.

Speaker 2:

So, yeah, I think a lot of it is driven by sort of application-level innovation, and I love seeing the influence of things like, you know, Instagram filters end up coming into being, you know, business reaction filters that are occurring in a Zoom call or this kind of thing. So, you know, something that starts as a consumer-led mobile feature ends up appearing in an enterprise application, and that, you know, is driving that demand for those features and those capabilities of AI cameras in the laptop space.

Speaker 2:

So, you know it's interesting to see that sort of crossover, certainly of user expectation and feature demand.

Speaker 1:

Yeah, yeah, exactly. Hey, another kind of little change of pace here in the show discussion. You've been at Arm now, I think you said, nine years, and one of the big stories, I guess, of this past year has been the Arm IPO and kind of becoming a public company, and of course before that there was sort of the Nvidia thing and all that stuff. Sure. How do you see Arm, like, as a public company now? What's the transition been like? Has there been a transition, like, culturally, operationally, or is it still sort of business as usual, except you happen to have quarterly results?

Speaker 2:

There's a bit of the latter, but, you know, I think there's been some changes in Arm, certainly since I joined. And, you know, when I joined it was very much a mobile business that was known for the sort of era of the emergence of the smartphone, and I think our now CEO, Rene Haas, was instrumental in, you know, driving a shift to a more market-led focus which has widely diversified the way our technology is used. So, while we were mobile and smartphone, and continue to be very significantly invested in that space, you know, our penetration into cloud compute, into automotive and ADAS systems, and further into higher performance IoT systems has been driven by that market-specific focus, and I think, you know, coming into the IPO, having been successful in that transition has meant we're considered a little differently to

Speaker 2:

Perhaps we were back in 2016, when we first went private and sort of moved off the market. But on the other side of it, you know, coming back to that public market, it hasn't fundamentally changed the drive for the business; you know, we're serving a pretty broad partnership with a greater range of technology than ever, and, you know, if it does anything differently internally, I guess it's that we're back to being able to share with our employees a bit more investment in the long-term growth story for the company, and that's exciting, for new talent as we grow the business, to be able to sort of participate in that.

Speaker 1:

Yeah, see where things are heading. Yeah, actually I didn't get in on the ARM IPO. I did get in on the Birkenstock IPO, though, so I'm pretty happy with my results.

Speaker 2:

Very good.

Speaker 1:

I know ARM's doing pretty well too, so that's good but yeah, I'm a big fan of the Birkenstock and we'll see what happens there but yeah, it was kind of an interesting year for IPOs.

Speaker 1:

I think, you know, it's good that Arm got through that process and seems to be doing well with that whole thing. And the timing is great. I mean, I feel like, maybe it's this AI frothing going on, but there's a bit of a renaissance now about tech and tech's role in the world and how just so pervasive it's become. Like you were mentioning, automobiles and not just phones, it's just everywhere now, and so Arm seems to be sort of right at the nexus of that revolution that's going on, which is pretty exciting.

Speaker 2:

It is exciting. It's certainly, yeah, exciting as well, you know, in addition to, you know, staff being able to participate in stock, seeing these devices come to market and being able to point at them and show how broad the distribution of Arm technology is, is always exciting.

Speaker 1:

I love that in my job.

Speaker 2:

I can walk in, and typically when I begin discussions and people say, what's IoT, I can literally point around the room, and, you know, just about anything in the room has got something that's got a controller in it, and that controller is more than likely running an Arm-based compute device. So it's very rewarding for our staff to be able to contribute to that and to see the innovative things people do with the technology as well. So it's good. Yeah, cool.

Speaker 1:

Well, any closing thoughts for our listeners? I think we covered a lot of ground here.

Speaker 2:

We've covered a lot of ground, I think. I think, you know, if I'd leave you with anything to think of going forward, it's that sort of emergence of AI and IoT being about, you know, the big brain, but also the smaller systems that are going to make this real. Because I think, as you sort of highlighted and we discussed, you know, having a large language model is exciting, but as long as it's just doing text in a bubble on a web browser, it's not going to change the world, and so we're all about looking at how you can bring that ML-type capability towards the edge and into these devices effectively, and I think, you know, with that cloud-to-edge connectivity that we're driving, it does give the option for, or the opportunity for, this to be really impactful in a huge range of industries.

Speaker 2:

So exciting times, I think, and you know, a really good discussion, thank you.

Speaker 1:

Yes, well, thanks Paul for coming on the show and hopefully we'll see each other in person at some point in the near future.

Speaker 2:

That would be great.

Speaker 1:

All right, thanks, Paul.

Speaker 2:

Thanks for the discussion.

Speaker 1:

Thanks for joining us today on The EDGECELSIOR Show. Please subscribe and stay tuned for more, and check us out online to learn how you can scale your edge compute business.

Edge Compute and ARM in AI
AI Evolution in Low-Power Devices
Future of AI in PC Architecture
The Impact of AI and IoT