The neXt Curve reThink Podcast

Silicon Futures for March 2026 - Intel Foundry, Embedded World, Edge AI and Arm Everywhere

Leonard Lee, Jim McGregor, Karl Freund Season 8 Episode 13


Silicon Futures is a neXt Curve reThink Podcast series on AI and semiconductor tech and the industry topics that matter.

Oddly, all three of us agree: March has turned out to be a pivotal month, set up by MWC and NVIDIA's GTC. The AI silicon and systems script is being dramatically rewritten around the CPU, and around inference accelerators and architectures that arguably challenge the myth of the NVIDIA moat as AI makes an agentic pivot in 2026.

In this episode, Leonard, Karl and Jim talk about some of the top headlines from March of 2026. 

➡️ Jim recaps March in the semiconductor industry and AI in 2 minutes.
➡️ Karl on NVIDIA GTC 2026 and Arm's new AGI CPU. 
➡️ Leonard invents Application-specific General-purpose Computing with Rene Haas.
➡️ The different approaches to CPU architectures.
➡️ Why some CPU makers exclude Simultaneous Multithreading (SMT) from their chips.
➡️ Do buyers buy Arm or agentic AI systems?
➡️ What is Arm's AI systems ecosystem game?
➡️ Is agentic AI changing the AI supercomputing game?
➡️ Is OpenClaw the ChatGPT moment for agentic AI?
➡️ There is a new AI in town. That AI is edge AI!

Hit Leonard, Karl, and Jim up on LinkedIn and take part in their industry and tech insights.

Check out Jim and his research at Tirias Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com. Check out Karl's Substack at: https://substack.com/@karlfreund429026

Please subscribe to our podcast which will be featured on the neXt Curve YouTube Channel. Check out the audio version on BuzzSprout or find us on your favorite Podcast platform.

Also, subscribe to the neXt Curve research portal at www.next-curve.com and our Substack (https://substack.com/@nextcurve) for the tech and industry insights that matter.

NOTE: The transcript is AI-generated and will contain errors.

DISCLAIMER: This podcast is for informational purposes only.

neXt Curve.

Leonard Lee

Hey everyone. Welcome to this episode of the neXt Curve reThink Podcast, where we break down the latest tech and industry insights from the world of semiconductors and AI into the insights that matter. I'm Leonard Lee, Executive Analyst at neXt Curve, and I'm joined by Karl Freund of Cambrian-AI Research, and we have the, what do you want to call it, multithreaded, right? Are you multithreaded or non-multithreaded, Jim?

Jim McGregor

I'm definitely multithreaded.

Leonard Lee

Okay. The multi-threaded multitasking and multi-event Jim McGregor. Did I do a good job?

Jim McGregor

You did a pretty good job.

Leonard Lee

Yes.

Jim McGregor

For non, for a non-clansman.

Leonard Lee

Oh, okay. Great, great. So I can be an honorary clansman. Of, you can be an

Jim McGregor

honorary clansman.

Leonard Lee

Yeah. I'm an honorary Mexican by the way. I just want you to know that and there's a reason for it and I'll explain some other episode, but

Karl Freund

that could be interesting.

Leonard Lee

Welcome everyone. And in this episode we're gonna touch on the highlights of the month, and this is the month of March 2026. And like any other month in this age of AI, where we have this Cambrian AI explosion that Karl always talks about, there's way too much to talk about, but we're gonna try to compress this into a 30-minute chat here. But before we get started, please remember to like, share, react, comment on this episode, and subscribe here on YouTube and on BuzzSprout, or listen to us on your favorite podcast platform. Opinions and statements by my guests are their own and don't reflect mine or those of neXt Curve. And they're, yeah, exactly, your own.

Jim McGregor

And we're right. So, you know, it doesn't,

Leonard Lee

Yes, they're always right and I'm always wrong. And we're doing this for informational purposes only, to provide an open forum for discussion and debate on all things semiconductor and AI. So let's get started, gentlemen. March, lots of stuff going on. We've already had a talk on GTC, so let's maybe minimize comments on that. Folks, if you're interested in our insights and our takes and analysis from GTC 2026, which hopefully you know already is the big NVIDIA event, we have an episode that you can check out. So please go there for stuff related to that. So, gentlemen, March. What do you guys think?

Jim McGregor

Where do you wanna start? I was at Intel's advanced packaging facility in Rio Rancho, New Mexico. Then I was at Mobile World Congress, and I was at Embedded World, then GTC, then at the Arm event last week, and there was a competing Intel event, their Pro enterprise event, going on in New York City last week. So, I mean, just on and on and on and on and on. That

Leonard Lee

sounds like a whole year.

Jim McGregor

It basically was a whole year. True. I'll give you the highlights in 30 seconds or less.

Leonard Lee

Okay. Do it.

Jim McGregor

Intel advanced packaging: amazing. It really shows where they're going with Intel Foundry. Very impressive. Mobile World Congress: starting to see the impact on 6G, a real focus on AI RAN and AI in the network, especially for new applications like robotics. Embedded World: really getting down to the details of robotics, everything from the small ECUs to the motor controls to, you name it, the motors themselves, actually. Really impressive. GTC: obviously all about AI and where we're going with the really powerful systems. And Arm announcing a whole new strategy change, getting into developing chips. So selling IP, selling CSS or compute subsystems, and now chips, with their first chip coming out later this year. And then you have Intel launching all of their Panther Lake solutions for PCs and workstations, especially new graphics solutions for workstations with the B65 and B70.

Leonard Lee

Wow. That was pretty good. You practiced that right?

Jim McGregor

No, I didn't.

Leonard Lee

Wow. That's impressive. And Karl, why don't you try to beat that?

Karl Freund

I can't. I can't touch that. Plus one, Jim.

Leonard Lee

Yeah.

this.

Karl Freund

Can't touch this.

Leonard Lee

Gonna have to. If there was ever a moment, a segment, that I wanted to edit out,

Karl Freund

That

Leonard Lee

was it. That was it.

Karl Freund

dancing?

Leonard Lee

Anyway, Karl, any highlights for you?

Karl Freund

No, I just focused mostly on GTC. I'm interested in what happened at the Arm event this week. I was unable to attend, but from what I have read and watched, I was really unimpressed.

Leonard Lee

Really?

Karl Freund

Yeah. I think they don't get it. They think the competition is x86. It's not; it's NVIDIA, and NVIDIA Grace, and now,

Leonard Lee

Vera?

Karl Freund

Vera. They've been doing this for a while and if you look at the IDC data of server shipments

Leonard Lee

yeah,

Karl Freund

in the last year, it's amazing how much Arm has grown.

Leonard Lee

Yeah.

Karl Freund

And there's really only one player. Graviton's out there and stuff, but for the primary player, probably 80 to 90% of that growth has all been on NVIDIA. Why? Because it's a really good chip. Secondly, because it's highly integrated with the GPU and NVLink, which Arm doesn't have. So I'm not sure who they're gonna sell this stuff to, except for people who are looking to build really cheap cloud services.

Jim McGregor

Actually, I would argue that point. I think if you look globally, Huawei actually ships more Arm-based servers than anybody on the planet.

Karl Freund

Oh really?

Jim McGregor

Yes. So just a thought.

Leonard Lee

Yeah.

Karl Freund

I didn't know

Leonard Lee

that. RISC-V eventually. I know that there's a lot of tension and talk toward RISC-V. But anyways, that's the subject of another discussion. That's a really interesting take, Karl. And Jim, before I chime in, what were your impressions of that event? Because I think it was an important one.

Jim McGregor

I've been saying for a couple years that I don't think our industry can support an IP-only company anymore, because we've had consolidation. The major semiconductor companies have grown, and their R&D budgets are so large that they dwarf a lot of these IP companies, and it's really hard to maintain a business model as an IP-only company. Arm identified that, and Arm's trying to change their business model. Obviously there's some tension there, because they're now competing with both the people that do custom Arm solutions, like Marvell, MediaTek, and Broadcom, as well as the traditional vendors. And that includes AMD and Intel on the x86 side, and now NVIDIA on the Arm side. So it's going to be interesting to see how that plays out. There's obviously gonna be some pain there, but they had to do it if they're gonna grow as a company at significant rates, and their forecast for this was going from a $4 billion company last year to a very, very aggressive $25 billion company in 2031. Now, that's not just based on one chip. They plan on coming out with other chips for other segments. This first one is really focused. It was developed in conjunction with Meta, so it was developed very much like Vera from NVIDIA. It was developed for feeding the AI beast, basically, making it that orchestrator silicon for AI accelerators. Not to say it can't do more; it does have vector engines, it could do AI processing. But its primary focus is making sure that you keep those AI accelerators running at top efficiency. So it's gonna be a challenge. I don't think this is a general-purpose part that's going to satisfy the whole market, and I think their customer base obviously is, one, the hyperscalers; two, the AI powerhouses, and they had both Meta and OpenAI there at the event; and three, possibly the neocloud providers. I don't necessarily see this part going beyond that.
It's not to say it can't, but I think it's very much that niche, and it's competing with a lot of other solutions, both from x86 and the Arm side, and the custom Arm side.

Leonard Lee

Yeah.

Jim McGregor

So it's gonna be a challenge. I don't think this is a slam dunk by any means.

Leonard Lee

Well, maybe it's not as big of a market opportunity as might be suggested by some of the big numbers that Rene floated out.

Jim McGregor

Well, they expect shipping later this year, but they don't see a significant financial impact from this until 2028. It is an emergency strategy for Arm. We'll have to see how it plays out, and we'll have to see how the rest of the market reacts to it. This may invigorate their competitors, both those using the Arm architecture and those on competing architectures.

Leonard Lee

Yeah.

Jim McGregor

We'll see what happens.

Leonard Lee

It could have that unintended consequence of having certain players gravitate toward RISC-V, right? But it's interesting, because the ramp that was represented in the investor presentation was a hockey stick. So the question is, what would it really take in order to get to a $15 billion number for just the CPU? And that's just the AGI CPU, assuming that they stay in that lane, over the course of the next five years.

Jim McGregor

Actually, I think that includes other chips.

Leonard Lee

Really? I read it differently. Okay, I'll have to take a look again.

Jim McGregor

Well, they did indicate that they do plan on coming out with other chips, obviously, for the data center, but they may go after other markets. They didn't elaborate on those future devices or when they will be available at this point in time.

Leonard Lee

I love the point that you made about how the chip caters to certain niche requirements. I did have a chance to chat with Rene after the investors' Q&A, and we coined a new term. What's that? It's called application-specific general-purpose computing for the data center. So yeah, it's that whole idea that you're talking about: these are system-optimized parts, and they're probably a bit less general than folks might think, because they associate CPU with general-purpose compute. And that's not necessarily the case. I think, Karl, you brought up a good point that NVIDIA's already been doing this for a long time with Grace, and there was that early collaboration, because if folks don't remember, I'm sure that they do, but I'm gonna say it anyways, NVIDIA made an attempt to acquire Arm, and at that point they were co-engineering Grace with Arm. There's a long legacy here of CPUs that are really of a different class. And maybe, Jim, what do you think: has the whole Arm-CPU-for-the-data-center play been a challenge because the application-specific angle wasn't there before?

Jim McGregor

Yes and no. You have to remember the fact that we've never had a single processor architecture for the data center, for one reason: no two workloads are the same. So where Arm had initial success, and this has been over the past decade, has been replacing some of those other architectures, like MIPS, PowerPC, and stuff like that, for storage and networking and other parts of the data center. It hasn't necessarily had a lot of success as that general-purpose CPU, especially for enterprise applications, which have to run a wide variety of different types of applications. But now, especially here, we're talking about AI-specific. And really it's that orchestrator; it's feeding the beast. It's making sure that all those accelerators are operating at peak efficiency and capacity, whether it's for inference or whether it's for training. And so this is very unique. This is definitely someplace I think the Arm architecture can and is and will play. The only question is, saying it's for AI is like saying it's for medical, because there are all these different modalities and all these different applications. Are you running small models? Are you running large models? How many mixtures of experts are you trying to run simultaneously? It's really a complicated type of solution. There are so many different aspects to AI that you can't classify it as one model. That's the challenge. So I think it's important not only that they feed the accelerators, but that they're gonna have to be a part of that solution. And we've seen that from the other CPU vendors, trying to either, A, integrate some of that AI processing into the CPU and offload some of those accelerators, or, B, be very unique in how they handle certain workloads. And it'll be interesting to see. They made a big point about saying, we didn't include multithreading, SMT, in this, because we don't want to burden the IO and the memory. However, it depends on how you architect the entire system as to whether or not you actually do burden the IO and memory. Intel has non-SMT solutions with Xeon 6 using the E-cores. On the other side, we have both NVIDIA's Vera as well as AMD's EPYC using SMT, doing it very effectively. So it's gonna be interesting. And they spent a lot of time comparing it to x86, and quite honestly, in some respects it is; it's using PCI Express Gen 6 to connect to accelerators. But in other respects it's an Arm architecture, so it's very much competing directly with NVIDIA's Vera CPU. So it's gonna be interesting to see how that plays out.
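
NOTE: For readers who want to check whether SMT is actually enabled on their own machine, the Linux kernel exposes hardware-thread topology under sysfs. The following is a quick editorial sketch, not a vendor tool: the helper functions are our own, and the sysfs path assumes a Linux system.

```python
def parse_cpu_list(s: str) -> list[int]:
    """Parse a sysfs CPU list like '0,64' or '0-3' into a list of ints."""
    cpus = []
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def smt_enabled(siblings: str) -> bool:
    """A core whose sibling list contains more than one CPU is running SMT."""
    return len(parse_cpu_list(siblings)) > 1

if __name__ == "__main__":
    # On Linux, cpu0's hardware-thread siblings are listed here:
    path = "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list"
    try:
        with open(path) as f:
            print("SMT on:", smt_enabled(f.read()))
    except OSError:
        print("topology info not available on this system")
```

On an SMT-enabled x86 box the siblings file typically reads something like "0,64"; on a non-SMT part (E-core-only configurations, for example) it contains a single CPU number.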

Leonard Lee

Okay, Karl, let me throw this out there really quickly. One of the reasons, I think people need to remember confidential computing, and when that discussion was happening about three years ago, that was against growing concerns about side-channel attacks. So Spectre and Meltdown, we can't forget about that. This stuff is still a problem and an ongoing concern. I was at RSAC, and I heard about side-channel attacks and these literally chip-level vulnerabilities in many of the discussions. So it is a real thing. The multithreading was deprecated in part, maybe not admittedly, because of these issues, right? Speculative execution is really what these vulnerabilities are sourced from. So I just want to throw that out really quickly, because there is a rationale for why there was a decision to get rid of this stuff, and none of these guys are exempt from the vulnerability, especially as it comes to confidential computing. And when we start to look at these large infrastructures that folks are deeming as strategic, the last thing you want is to have that kind of vulnerability due to the kind of CPU architecture you're putting out there or how you're running your threads. Anyways, sorry about that, Karl.
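
NOTE: The side-channel concern here comes down to data-dependent behavior leaking secrets through timing. A minimal software-level illustration of the same principle, using only Python's standard library (our own example, not tied to any CPU discussed above): a naive comparison exits at the first mismatch, so its runtime reveals how much of a guess was correct, while `hmac.compare_digest` is designed to take the same time regardless of contents.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit: runtime depends on where the first mismatch occurs,
    # which can leak secret contents through timing measurements.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    # Constant-time comparison: runtime does not depend on the contents.
    return hmac.compare_digest(a, b)

secret = b"s3cr3t-token"
print(naive_equal(secret, b"s3cr3t-token"))  # True
print(safe_equal(secret, b"guess-token!"))   # False
```

Spectre-class attacks exploit the same idea one level down, using speculative execution and cache timing rather than an early-exit loop, which is why the mitigation has to happen in the hardware and microcode rather than in application code.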

Karl Freund

I wanted to ask you guys a question. Do you think people buy Grace and Vera because it's Arm, or do they buy it because it's tightly integrated with GPUs over NVLink?

Jim McGregor

I think it's the latter. And you're going for overall system performance and efficiency.

Leonard Lee

Yeah.

Jim McGregor

So I think it's really, if they're going to consider it, I think it's looking at the platform level. It's not looking at the chip level.

Leonard Lee

Yeah. But then there's also the networking. You brought that up, Karl, and I think that's important. But we also have to consider that even for NVIDIA, with NVLink Fusion and Spectrum-X, they're starting to make inroads into the Ethernet and hyperscaler market, if you will. And I think that's important, but that's where the actual marriage with the NVIDIA stack may not be as much of a moat. It's really more NVIDIA trying to play in that market. So in that regard, I think the AGI CPU from Arm may not have as much of a performance dependency on the NVIDIA stack. So, I dunno.

Karl Freund

So if you then say, well, okay, it's really the infrastructure that pulls people to buying, including everything from co-packaging to software and networking. If that's really what's causing people to buy Arm platforms from NVIDIA, what is Arm going to connect to? They don't have NVLink. They don't have a GPU that's really data-center class. They're not gonna connect to AMD, they're not gonna connect to NVIDIA. So what does that leave them? That kind of leaves them the second-tier accelerators from the hyperscalers. Is that what they're going after?

Jim McGregor

Well, I did actually ask that question, of why they used PCI Express Gen 6 rather than using NVLink. They do have a license under NVLink Fusion to use NVLink, and it's on their CSS roadmap for compute subsystems. They indicated that it may be used in future chipsets; it just wasn't used in this one. And I think the response was that they felt PCIe Gen 6 was more advanced. I think the real reason is that the target they're going after is mostly x86.

Karl Freund

I think I would say it's more schedule. There's no way they could have had time to get NVLink incorporated into their design and catch that in flight. They had already taped out the various chips; I'm sure they were in test by the time NVLink was announced as an open platform. So I think they're gonna make excuses about why they used it, but I suspect the real reason was schedule.

Leonard Lee

I think you can just point to the changes in the NVIDIA roadmap for agentic AI, right? They introduced the Vera rack, and I know one of the things that was really shocking for me was the putting front and center of this AGI rack system. So there obviously is a certain kind of part that goes into what you might deem an agentic infrastructure. It's no longer just, let's say, what we're familiar with, the Grace Blackwell rack system. There's something else there, right? And so as we look at inference and agentic, the compute requirements obviously are looking different, and these parts are scaling out, and significantly as well. We saw that even with the announcement of LPX, right? The three LPX. It's one Vera Rubin to up to four LPX racks. So we're talking about four times the racks, and content of non-familiar NVIDIA stuff, right? This is like DRock stuff. So now we're looking at a Vera Rubin CPU, an NVIDIA CPU rack. And then we have Meta, in collaboration with Arm, showcasing this OCP-compliant rack system that is all CPU, right?

Karl Freund

Mm-hmm.

Leonard Lee

So there's a huge change happening here because of agentic AI and inference, right? And agentic AI, it's actuation plus inference, right? And maybe reasoning in the middle. So as we project out, I think we're gonna see these systems of the past three years actually become a smaller part of the overall agentic infrastructure equation. It's happening right in front of our faces, right? It's expressed in the roadmap. So I think that's really the very interesting thing about this month. It's like a tectonic shift, at least in my view. You guys might completely disagree. I would love to hear how much you disagree.

Karl Freund

I would actually, I agree with you. I'm sorry, but I think agentic AI is the forcing function here.

Jim McGregor

I think it is. And even more so. We first saw this when Mark Zuckerberg and Jensen Huang talked about having personalized AI agents over a year and a half ago, at SIGGRAPH in 2024. And we're there now. Just in the last three and a half months, seeing what people have done with OpenClaw and how that's exploded. And NVIDIA reacted very quickly and said, this is good, but you guys need some kind of secure wrapper around this. So they came out with NeMo Claw. So seeing how fast we're moving to those personalized agents, to where that's

Karl Freund

amazing.

Jim McGregor

It is mind-blowing how quickly.

Karl Freund

I'll give you guys an example, one experience I had with agentic AI. I wanted an application that would tell me what to look at through my telescope. Right.

Jim McGregor

Mm-hmm.

Karl Freund

On a particular night,

Speaker 4

Mm-hmm.

Karl Freund

And in less than the time it would've taken me to evaluate the various applications that are available, these sky calendars, in less than the time it would've taken me just to read through their websites, I was able to build my custom application using Perplexity Computer, which is not the same thing as Perplexity search; there's a little button on the right that says Computer when you go to Perplexity. I told it what I wanted in just three sentences, and it produced, in about, oh, five minutes, a complete application that I could access from any browser, using existing algorithms that are available in open source. I'm like, wow. And it's a nice application. It's well laid out, it's a good user interface, very easy to use, very accurate. And so that's less than the time it would've taken me just to read the data sheets of the competing applications. Who do you think is gonna win this battle?
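
NOTE: The heart of a "what can I see tonight" app like the one Karl describes is a little spherical astronomy: local sidereal time gives the hour angle, and the hour angle plus declination and latitude give the altitude above the horizon. Here is a rough, self-contained sketch of that math; it is our simplification for illustration, not the code Karl's agent produced, and accuracy is coarse but fine for "is it up right now."

```python
import math
from datetime import datetime, timezone

def altitude_deg(ra_hours, dec_deg, lat_deg, lon_deg, when):
    """Approximate altitude of a fixed object (RA/Dec) for an observer."""
    # Days since the J2000.0 epoch (2000-01-01 12:00 UTC).
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (when - j2000).total_seconds() / 86400.0
    # Greenwich mean sidereal time in degrees (standard approximation).
    gmst = (280.46061837 + 360.98564736629 * d) % 360.0
    lst = (gmst + lon_deg) % 360.0           # local sidereal time
    ha = math.radians(lst - ra_hours * 15)   # hour angle (RA hours -> degrees)
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    # Standard altitude formula for the horizontal coordinate system.
    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(ha))
    return math.degrees(alt)

# Sanity check: the north celestial pole (dec = +90) sits at an altitude
# equal to the observer's latitude, at any time of night.
now = datetime(2026, 3, 31, 22, 0, tzinfo=timezone.utc)
print(round(altitude_deg(0.0, 90.0, 40.0, -105.0, now), 1))  # 40.0
```

Filtering a star catalog down to "objects with altitude above, say, 20 degrees at 10 p.m." is then just a loop over this function, which is essentially what those sky-calendar sites compute.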

Jim McGregor

Yeah. And you can do that with Claude, you can do that with ChatGPT, you can do that with Perplexity Computer.

Karl Freund

Exactly. Jim,

Jim McGregor

And still, really, what you're developing is an application. And what we're doing now with OpenClaw takes it to a whole new level, where it's just: find this for me, and it's gonna find it. Or like with your telescope, it's gonna tell you exactly what you should be looking at, notifying you where to find it or something. Exactly.

Leonard Lee

Yeah. Yeah. Yeah.

Karl Freund

I wish I could just hook it up. Sure, I could actually just hook the application up to the telescope.

Jim McGregor

Well, you could. And I'll bet you, if you have a motorized telescope, that within a day we could get that up and running.

Karl Freund

I bet we could.

Leonard Lee

Yeah.

Karl Freund

Yeah.

Leonard Lee

Jim is, you know,

Jim McGregor

We started experimenting with OpenClaw about a month and a half ago and identified some of the big limitations with it. But we're still going through and developing agents, personal agents, financial agents, even a virtual analyst right now, using both OpenClaw and NeMo Claw. So it's amazing what you can do with it, and we're going full steam ahead with it.

Leonard Lee

Yeah. Looking forward to your state-of-OpenClaw readout. We'll have to get Kevin and Damien on with us one of these days. That would be fun, to have them talk about it. Oh my gosh, yes. They are the experts. Let's do that. Let's do that. So, really quickly, I was at Edge AI 2026, put on by the Edge AI Foundation. If you wanna know where and how AI is being diffused across the edge, it's a great conference, by the way. It's a great balance of technical talks plus talks about sensible applications, right? Because if you have a research, a technical group, you're not gonna stray too far from the ground truths. But Pete Bernard does a great job of putting together a really levelheaded conversation and program around edge AI. Neuromorphic, lots of talk about neuromorphic. I know both of you are excited about that, and it looks like it's starting to make inroads into applications. They're not all that sexy, but the benefit that things like neuromorphic computing bring to some edge computing or edge AI cases is really compelling. So, yeah, companies like Eter, you have BrainChip, some really cool companies that I think are gonna be up and coming. And one of the things that I think is really interesting about what folks are doing with edge AI is that other vector of innovation that I think people are underappreciating, because there's so much focus on the big data center stuff. But a lot of the lessons learned in how you bring power efficiency, resiliency, and endurance to the edge are things that actually are important in this discussion about sustainable data centers and all this uber-computing that's happening with generative AI. I was only there for one day, but they put a whole program together, online stuff. I encourage all the listeners and folks who are interested in edge AI to check it out, some really great stuff. And one of the things I'm also noticing is that the topic of military and defense applications of edge AI is starting to commingle quite a bit with the commercial stuff. And we saw that at MWC, right? And we've seen this

Jim McGregor

At MWC, Embedded World, pretty much across the board, yeah. And obviously in the battle we saw between Anthropic and the Pentagon recently.

Leonard Lee

And so lots of interesting stuff happening across the board with edge AI on the application front, and a lot of lessons learned. Actually, one of the things I noticed is that the defense industry is a great source of lessons learned, and the commercial industries can benefit from the lessons that the military folks are learning in the field, as they

Jim McGregor

always do. They always do. Yeah, I completely agree, a lot of the technology. And you have to remember the amount of investment that still goes into R&D for aerospace and defense applications.

Leonard Lee

Yeah. So anyways, gentlemen, great recap of March. I think it's a pivotal month in this epoch of the AI Cambrian explosion. Am I, like, fronting your company? Yeah.

Karl Freund

I think you

Leonard Lee

are enough.

Karl Freund

Appreciate it. Appreciate it.

Leonard Lee

Should we call it the Tirias explosion? Or no? It doesn't work.

Jim McGregor

It's definitely the AI explosion, and if you want to go with Cambrian AI, I am fully, fully engaged with that. Karl deserves it.

Leonard Lee

Yeah. Yeah. No, it's wonderful. Hey, thanks so much for jumping on, and I really appreciate the insights that you're sharing with our audience, the Silicon Futures audience. And again, thanks to all our hundred thousand subscribers and the folks who listen to us and watch us on a regular basis. Thank you so much. And remember to make sure and follow Karl Freund and Cambrian AI Research at www.cambrian-ai.com. He's also on Substack and Forbes, so check out all his stuff, wonderful insight, wonderful coverage of AI and of semiconductors related to AI. And then also reach out to the multithreaded Jim McGregor at Tirias Research. I'm gonna get really good at that, Jim. I wanna be perfect. Okay.

Jim McGregor

Okay.

Leonard Lee

And yes, I'll work on that. I have to work on it. I'm not close.

Jim McGregor

Yeah. Well, you have to work on your "I."

Leonard Lee

Ah, the "I-A." Okay. Okay. All right, Jim. Okay. And yes, check out Tirias Research at www.tiriasresearch.com, and please subscribe to our podcast, featured on the neXt Curve YouTube channel. Check out the audio version on BuzzSprout, and find us on your favorite podcast platform. And also subscribe to the neXt Curve research portal at www.next-curve.com for the tech and industry insights that matter. And we'll see you next time, next month.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

The IoT Show
Olivier Bloch

The Internet of Things IoT Heroes show with Tom Raftery
Tom Raftery, Global IoT Evangelist, SAP