The neXt Curve reThink Podcast

Silicon Futures for August 2025 - Hot Chips 2025 and GlobalFoundries Technology Summit

Leonard Lee, Karl Freund, Jim McGregor Season 7 Episode 35

Send us a text

Silicon Futures is a neXt Curve reThink Podcast series focused on AI and semiconductor tech and the industry topics that matter.

In this special end-of-August episode, Leonard, Karl, and Jim talk about some of the top headlines of the second half of August.

Topics that mattered in the AI and semiconductor universe capping off August of 2025:

  • Analysts' takes and analysis of Hot Chips 2025
  • GlobalFoundries Technology Summit 2025 and the MIPS acquisition

Hit Leonard, Karl, and Jim up on LinkedIn and take part in their industry and tech insights.

Check out Jim and his research at Tirias Research at www.tiriasresearch.com.
Check out Karl and his research at Cambrian AI Research LLC at www.cambrian-ai.com. Check out Karl's Substack at: https://substack.com/@karlfreund429026

Please subscribe to our podcast which will be featured on the neXt Curve YouTube Channel. Check out the audio version on BuzzSprout or find us on your favorite Podcast platform.

Also, subscribe to the neXt Curve research portal at www.next-curve.com and our Substack (https://substack.com/@nextcurve) for the tech and industry insights that matter.

NOTE: The transcript is AI-generated and will contain errors.

neXt Curve:

Next curve.

Leonard Lee:

Welcome everyone to this neXt Curve reThink Podcast episode, where we break down the latest tech and industry insights and happenings into the insights that matter. And of course, this is the Silicon Futures series, where we take a magnifying glass as well as a telescope to the world of semiconductors and AI, and all that wonderful HPC stuff that we constantly talk about. I'm Leonard Lee, Executive Analyst at neXt Curve. In this Silicon Futures episode, we'll be talking about Hot Chips, which was probably one of the hottest events this month. And then we will also touch on GlobalFoundries, who had their Technology Summit this week. And of course we all know about the recent acquisition of MIPS by GlobalFoundries, so maybe we will touch on some of the structural changes that are happening in the industry that Jim McGregor of Tirias Research would like to talk about. And of course, he's one of the hosts of this podcast. And we have the scale-across, not scale-up, not scale-out, scale-across, Karl Freund of Cambrian-AI Research LLC. Gentlemen, how's it going?

Karl Freund:

I'm ready for the weekend.

Leonard Lee:

You are ready to go. Ready to go for this podcast to be over so we can decompress after a very, very dense week

Karl Freund:

Month, year.

Leonard Lee:

Yeah. Oh, crazy. But, oh, hey, before we get started, please remember to like, share, react, and comment on this episode. Also subscribe here on YouTube and on Buzzsprout. Take us with you on your jog, on your daily commute. Opinions and statements of my guests are their own and don't reflect mine or those of neXt Curve. And hopefully you'll find this episode enlightening and fun. Let's get started with the big event of the month, which is Hot Chips 2025, which took place on the Stanford campus. And I think there were about 2,500 total attending this year, in person and online. Yeah, they were over 2,000 in attendance. That was

Jim McGregor:

a record.

Leonard Lee:

Incredible, incredible. And we saw everybody there, seemingly, in the AI universe, and even those that Karl doesn't feel are participating enough in the AI universe, right? Jim, you and I, we sat next to no fewer than three of them from a particular company that's often mistaken for a fruit company. So yeah, what were your key takes? I mean, there was so much there. It's three days, people. Day one is tutorials, right? And then the next two days are sessions that cut across a whole myriad of topics. And one of the things I always think is interesting, gentlemen, is how they prioritize things. And this year, for the tutorials, it was racks, and then this weird topic of AI kernel programming. Right. And I think they squeezed security in there somewhere, right? No, security was the next day, but yes.

Jim McGregor:

Yeah.

Leonard Lee:

Yeah. So anyways, I,

Karl Freund:

I got on the plane and I was kind of disappointed. I was disappointed because what I like to hear about is really fast GPUs and CPUs and stuff.

neXt Curve:

Yeah.

Karl Freund:

And, but then I reflected on the importance of this event, and it was really about the data center, not about the GPU. Mm-hmm. It was really about rack scale and row scale and data center scale and scale-across, which really hinges on both memory and networking. The key takeaways for me were memory and networking. Memory from lots of different folks, especially, surprisingly, new HBM solutions and so forth from Marvell, new SRAM-based solutions from Marvell. Really? Marvell SRAM. Yeah.

neXt Curve:

Yeah.

Karl Freund:

Google's turned it up to 11. Oh my God, that system is just ridiculously huge and fast. It makes NVL72 look puny. But anyway, it was really about the networking. It was really about memory. It was really about cooling, and you're starting to see UEC-compliant networking cards. It was pretty exciting stuff. We could dive as deep as you want, but it was really a game changer.

Jim McGregor:

Yeah. I would say the same thing. It was less about the GPUs, which have been the focus for the past couple of years, and more about everything else. It was about the network, it was about the infrastructure, the rack. It was about the memory. It was about the security. So it really was everything that you have to build around it. The big guys, especially AMD and Nvidia, did talk about their architectures, the Blackwell, the RDNA and the CDNA, but I guess the real important part was everything else. Matter of fact, even Meta surprised me, because they got up there and talked about a distributed processing solution for their Orion AR glasses, where they're actually breaking it up into three different chiplets, but you actually end up with four with the glasses, so you have one for each eye. Evercore talked about having a very, very low power embedded processing solution that can actually use energy harvesting to run pretty much forever. So, no, there were still, I think, some key takeaways. It just wasn't what we usually see. And even on the processor side, there was really only one big iron processor that was revealed there, and that was Clearwater Forest by Intel, talking about their next-generation Xeon, presumably Xeon 7, processor with their efficiency cores. And I was actually really excited about that product because I saw what they did with E-cores in Lunar Lake on the PC side, and knowing a lot of that same technology went into these E-cores, it's gonna be a very, very power-efficient solution. So I was excited about that.

Leonard Lee:

What do you think about, I suppose it's a rumor, that Lip-Bu Tan wants to introduce hyperthreading back into the mix?

Jim McGregor:

He actually said that during their earnings call, that he wanted to introduce that. I'm concerned about that because I don't wanna see it impact their product strategy. There are pros and cons for having hyperthreading, and Intel has had hyperthreading forever, but went away from hyperthreading with the recent Xeon processors because now they have an efficiency core and a performance core, so they're not trying to do everything with the same core. I just hope it doesn't impact their roadmap in terms of timeframe, because Intel right now, more than anything, can't afford any slips in their schedule.

Leonard Lee:

Well, I was surprised that there wasn't more discussion of why they got rid of hyperthreading in the first place. But one of the shocking topics was confidential computing, and that discussion was framed around what that gentleman from Microsoft presented, where he was talking about how the cybercriminal economy is the fastest-growing economy on the planet and is number three in terms of quote-unquote GDP, if you can even call it GDP, because it's actually negative GDP, because it's a burden on all other, let's call it legitimate, economies. It's the theft economy. The theft economy. Third

Karl Freund:

largest economy.

Leonard Lee:

Right, right. And so, you know, obviously we know about Spectre, Meltdown, side-channel vulnerabilities. There are a lot of measures to mitigate these things.

Jim McGregor:

I mean, obviously hyperthreading introduced some of those potential threats, and that was an issue. But even more so, you gotta remember that x86 has been under pressure from Arm for lower-power, more energy-efficient computing. So Intel went to having specific solutions just for that single-thread performance and solutions just for that efficiency, so that you're using the right solution at the right time. And that's been a trend for Intel, not just in servers but also in PCs: making sure you use the right execution element for the right workload. That was really, I think, the impetus for going away from hyperthreading, this strategy, this multi-core strategy.

Leonard Lee:

Networking was huge. Networking up and down the stack. It's not just about the GPU, it's also about memory and this whole theme of memory fabrics. So we heard fabric quite a bit in a lot of the presentations, these hybrid topologies to connect not only GPUs to GPUs, but different classes of memory or tiers of memory to different compute, whether it's the CPU or the GPU, right? So when we look at the frontier of how these systems need to evolve in order to scale out as well as scale up, and dare I say scale across, we're looking at much more complicated networking and interconnect. A lot of this has to be optical, so we're also seeing that acceleration in the interest as well as the discussion around optical. And I think a lot of that, Karl, as you and I discussed, probably a lot of that credit has to go to Nvidia, which made that whole announcement earlier this year about moving to co-packaged optics, CPO?

Karl Freund:

Yeah. So we saw a lot of different approaches to applying optical, all the way down to chiplet-to-chip communications with optical, with outboard, I guess you'd call them offboard, lasers, all the way down to a package, a substrate, which provides optical connectivity not to the periphery of the die but to the center of the die, so that the optical area can grow as the square, not as the linear size, of the chip. That was from Lightmatter. That was very interesting. And all the way to co-packaged optics from Nvidia and elsewhere. You can tell the world is about to shift to optical in the next five years. There are still issues with optical in terms of laser reliability and failure rates and serviceability, right? Mm-hmm. If you think about serviceability, well, if I've got an optical connector that's actually my substrate, then if there's a failure, I throw it away. I can't go in and just replace the transceiver plug on the side. So the industry's still maturing on optical, but it's pretty exciting stuff. Low power, very high performance. And some people say it's not only the future, but it's a bright future with light.
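
Karl's point about bringing optics to the center of the die comes down to a scaling argument: edge-limited ("beachfront") I/O grows only with the die's perimeter, while area-distributed optical I/O grows with its area. The short Python sketch below is purely illustrative; the bandwidth-density figures are assumptions made up for this example, not numbers from the episode or from Lightmatter.

```python
# Illustrative sketch: edge-limited ("beachfront") I/O scales with die perimeter,
# while area-distributed optical I/O scales with die area.
# The bandwidth-density figures below are assumptions, not vendor numbers.

EDGE_GBPS_PER_MM = 200.0    # assumed I/O density along the die edge (Gb/s per mm)
AREA_GBPS_PER_MM2 = 50.0    # assumed optical I/O density across the die face (Gb/s per mm^2)

for die_side_mm in (10, 20, 30):
    edge_bw_tbps = 4 * die_side_mm * EDGE_GBPS_PER_MM / 1000      # grows linearly with die size
    area_bw_tbps = die_side_mm ** 2 * AREA_GBPS_PER_MM2 / 1000    # grows with the square of die size
    print(f"{die_side_mm} mm die: edge-limited ~{edge_bw_tbps:.0f} Tb/s, "
          f"area-distributed ~{area_bw_tbps:.0f} Tb/s")
```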

Jim McGregor:

You know, and I'll be honest with you, the reliability issue, nobody really brings that up. They usually talk about power, and they talk about speed and everything else. But just the reliability gain of not having those physical transceivers, which are prone to fail, is gonna be a huge part of the transition to optics. However, it was interesting: everyone was pretty much promoting optics from the chip through the data center, while Nvidia is a staunch proponent of using copper in the rack as long as possible,

Karl Freund:

as long as possible.

Jim McGregor:

Yeah, and rightfully so. There's a cost advantage to using copper, but eventually, just because with copper the distance keeps shrinking the faster you try to run it, you have to move the switch from the top of the rack to the middle of the rack to make sure you have enough length to get across the rack. So it's still gonna run outta steam.
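
To make Jim's point concrete: passive copper reach is bounded by the link's loss budget, and per-meter cable loss rises with the lane rate, so usable reach shrinks as speeds climb. Here is a minimal back-of-envelope Python sketch of that relationship; the loss budget and per-meter loss figures are assumptions chosen purely for illustration, not values cited in the episode or by any vendor.

```python
# Back-of-envelope: why passive copper reach shrinks as the lane rate rises.
# All figures are illustrative assumptions, not measured or vendor-published values.

LOSS_BUDGET_DB = 28.0        # assumed total channel insertion-loss budget for a passive link
FIXED_OVERHEAD_DB = 4.0      # assumed loss from connectors, packages, and board traces

ASSUMED_CABLE_LOSS_DB_PER_M = {   # assumed cable loss per meter at each lane rate
    "100 Gb/s per lane": 6.0,
    "200 Gb/s per lane": 9.0,
    "400 Gb/s per lane": 13.0,
}

for rate, loss_per_m in ASSUMED_CABLE_LOSS_DB_PER_M.items():
    reach_m = (LOSS_BUDGET_DB - FIXED_OVERHEAD_DB) / loss_per_m
    print(f"{rate}: roughly {reach_m:.1f} m of passive copper")
```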

Leonard Lee:

As it becomes more and more of a networking discussion, right? When we look at the evolution of these very large-scale AI supercomputing systems and data centers, the conversation really becomes about networking. It's no longer just about the GPU or the accelerator; there is this really fast acceleration toward advanced concepts in networking. And so one of the things that impressed me was how much this looks like telco networking, but being scaled down into something like a data center, or even a rack scale, or even a tray scale, right? So latencies, determinism, all these things. Jim, you mentioned reliability, serviceability, longevity even. It's rare to hear people talk about longevity. So there's this whole view of how do we make these systems operationally resilient and reliable. And what was one of the metrics? I think it was the gentleman from Google who mentioned mean time to failure was, like...

Jim McGregor:

Right? Yes. Well, and you're right, those have been critical aspects of the communications segment forever. I mean, you had to do NEBS testing. Matter of fact, at Motorola, we did optical backplanes 18 years ago. I mean, we were already pushing the limit way back then. And NEBS testing, if you're not familiar with it, is intense. At one point, you actually have to have two systems side by side, set one on fire, and the other one has to keep running for a certain period of time.

neXt Curve:

Yeah.

Jim McGregor:

It's insane. But you're right that that reliability and that mean time to failure are critical components, and we're starting to see that creep into these systems. Part of it is definitely ROI, because especially the hyperscalers, the neoclouds, everyone that's buying these multimillion-dollar systems definitely wants to make sure they're gonna get the most out of them, but also because they're becoming critical to other enterprise-type workloads, right? And you can't afford the failures anymore. So it really is pushing the envelope of reliability.
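
The arithmetic behind that concern is simple: if component failures are roughly independent, the fleet-level mean time between failures falls in proportion to the number of components, so a part that fails once in decades still interrupts a large cluster every few hours. A minimal sketch follows; the component MTBF and fleet sizes are hypothetical, not figures from the episode.

```python
# Minimal sketch: fleet-level mean time between failures (MTBF) at scale,
# assuming roughly independent, exponentially distributed component failures.
# The component MTBF and fleet sizes below are hypothetical.

component_mtbf_hours = 200_000   # assumed MTBF of one accelerator or module (~23 years)

for fleet_size in (72, 10_000, 100_000):
    fleet_mtbf_hours = component_mtbf_hours / fleet_size   # expected time between failures anywhere in the fleet
    print(f"{fleet_size:>7} components: a failure roughly every {fleet_mtbf_hours:,.1f} hours")
```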

Karl Freund:

Yeah, I was really impressed with the announcements about the base die of HBM, instead of it just simply being a passive die for electrical signals. Yeah, yeah. Logic in the base die, which will apparently be available on HBM4, but I think it'll be HBM4E before it really matures. It involves a lot of partnership with the SoC designers to figure out, well, what do you wanna offload to this? The simplest one is just to put SRAM in there, right? Then it acts like a very fast HBM, with SRAM on the base die and then DRAM above that for the 3D stacking and to get the capacity. That's a simple solution to speeding up HBM dramatically.
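
The caching idea Karl describes can be captured with the standard average-memory-access-time formula: effective access time is the hit rate times the SRAM access time plus the miss rate times the DRAM access time. Here is a minimal sketch; the latencies and hit rates are assumptions for illustration, not HBM4 specifications.

```python
# Minimal sketch of the caching arithmetic behind SRAM in the HBM base die:
# effective_time = hit_rate * sram_time + (1 - hit_rate) * dram_time.
# The latencies and hit rates below are illustrative assumptions, not HBM4 specs.

SRAM_NS = 10.0    # assumed access time when a request hits the base-die SRAM
DRAM_NS = 100.0   # assumed access time when a request goes up to the DRAM stack

for hit_rate in (0.0, 0.5, 0.9):
    effective_ns = hit_rate * SRAM_NS + (1.0 - hit_rate) * DRAM_NS
    print(f"hit rate {hit_rate:.0%}: effective access time ~{effective_ns:.0f} ns")
```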

Jim McGregor:

They're looking at it for in-memory compute. So they're looking at putting compute logic there, and that kind of concerns me. Especially at FMS, I have to remember the acronym here because they keep changing the name, all the big three vendors, SK hynix, Samsung, and Micron, talked about HBM4E and this in-memory compute and everything else. It still really worries me, because nobody wants a single-source solution. Who's gonna be producing that logic die, right? And who really becomes the memory vendor at that point, in terms of assembly and everything else? Because nobody wants a single-source solution. So I think there's a lot to be worked out before we get to that point.

Leonard Lee:

There seems to be more than one way to skin the cat, and it may not be solely an Nvidia cat. As we start to look at inference and the different model architectures that are emerging, there may be some, let's say, model-specific or application-specific compute architectures that may bring about a step-change improvement over what we see with general-purpose AI supercomputing. So I think, when we talk about frontiers, there are some of these disruptive factors and vectors that are starting to emerge, but they're still really early, right? They are ideas, and there are things that the incumbents can definitely embrace very quickly as well, as we've seen Nvidia do, right?

Karl Freund:

Yeah. I think d-Matrix is a good example of what you're talking about, where they're not trying to be everything to everybody. If what you need is really fast inference, they have a system that uses a combination of SRAM and DRAM, but the weights are stored in SRAM, so it's really fast, and you put together a bunch of these chips on a card, and what you have is a very fast inference processor. You're not gonna ever use it for training, but that's okay. Exactly. And, you know, that much SRAM is very impressive. They're an interesting startup to watch. They're presenting at the AI Infra Summit in just a couple of weeks. Audience members may wanna tune into that to watch what d-Matrix is doing.

Jim McGregor:

AI, more than anything, is a data issue. How do you move data? Where do you execute on data? Where do you use it? And I think one of the things people are realizing is that, since there are multiple stages of an AI pipeline, in a lot of cases you may want to be doing some of that processing where the data is, rather than having to keep moving it around to different elements. Matter of fact, even using the same processor isn't necessarily going to be the most efficient way to do AI processing going forward. You have to collect the data, you have to ingest the data, you have to train on the data, and then you have the inference execution on the data. And all of those have different processing requirements, and they have different storage and data requirements. So that data pipeline is becoming critical, and I think that data pipeline is going to become more intelligent; if you will, the data pipeline and the compute pipeline are merging.

Leonard Lee:

This whole idea of disaggregated inference stuck out for me at the conference as well. Not only the disaggregation of the compute system, but also cooling, which I thought was weird, and power, especially as the industry is looking at bringing these accelerated computing systems into traditional data centers, going back to my previous mention of enterprise AI. That's something that we've seen. There was a presentation about MGX, and there was the Meta presentation on racks, right? The rack tutorial. I thought those were really interesting dynamics in terms of what is happening right now to take these hybrid approaches to water cooling as well as air cooling, and engineering systems that are specific to different environments outside of what is now required at the leading edge of AI model training systems, right? It's probably not entirely gonna be the case that all of the generative AI workloads are gonna be running off of, quote-unquote, the cloud or neoclouds, right? There has to be some degree of diffusion, and you got a sense of that at Hot Chips this year, although the whole AI PC thing was missing, and that was a big thing last year, right? No one... Did you guys

Karl Freund:

hear anything about AI PCs? Just the little DGX from Nvidia. Yeah, that's an AI PC, an AI PC, and that'll work. It only runs Linux, so, yeah. If it ever runs Windows, look out; think about whose market segment Nvidia would be entering, should Microsoft decide to support it. It's Arm-based, so Arm and GPU. Windows supports Arm and GPUs, but not that particular product yet.

Leonard Lee:

Yeah, for some

Karl Freund:

reason. Yeah.

Leonard Lee:

Yeah. So any final, it's

Karl Freund:

cute. It's like the size of a deck of cards. I mean, that...

Jim McGregor:

Like

Karl Freund:

Jim doesn't want it. He wants a real, real,

Jim McGregor:

real. No, actually I do, and I actually am on the pre-order list for it. I just think it's cool. Yeah. And the fact that you can stack 'em, so you can have multiple ones, you can increase the performance. Can you daisy-chain them? No, I actually think it's a great solution. And the thing is, it's not just a desktop, it's actually portable. It's small enough that you can just stick it in your briefcase, stick it in your backpack, and go with it. So I actually think it's a phenomenal solution.

Karl Freund:

Combine that with Meta's glasses?

Jim McGregor:

Using that would be

Karl Freund:

primary display.

Jim McGregor:

Yeah, and it shows off, I think, some of the changes we're seeing, like Leonard alluded to, some of these partnerships or changing dynamics in the industry. This was co-developed, co-designed between MediaTek and Nvidia. It wasn't one company taking the other's IP; it was co-developed. So you've got MediaTek on the SoC side, and it's a two-chip solution, with Nvidia on the GPU and some Nvidia on the SoC side as well. So, you know, they're really forming a tight bond, where you have a really powerful, low-power, high-performance solution just for AI development, although you could do a lot of other stuff on it.

Karl Freund:

Yeah, a couple teraflops. Just a lot of performance to put on the desktop, and it looks really cool.

Jim McGregor:

Yeah,

Karl Freund:

it looks like a miniature DGX server.

Jim McGregor:

What cracked me up when they talked about this, or when they introduced it at CES, was the fact that they had it on display, and the fact tag, just the description of it, was larger than the device.

Leonard Lee:

Oh, that's crazy.

Karl Freund:

It's important to mention who goes. I bumped into John Hennessy in the hallway, John Hennessy, chairman of Alphabet. I bumped into Pat Gelsinger. Pat Gelsinger? Oh, you did?

Jim McGregor:

Yes. Yeah. Oh, I bumped into so many people there. And I guess that's the benefit: not just the presentations, but all the people, all the history that are there. Obviously Kevin Krewell, my partner that retired, Nathan Brookwood, who's been around the industry forever. Yeah. Tom Coughlin, just all the people, and it's always great, the conversations you have. Matter of fact, I even got information from a really old, secure contact on an announcement coming up September 3rd that I can't even talk about. Oh, wonderful.

Karl Freund:

And it's amazing because they all move around. Like Prakash was there, from Nvidia; he's done a couple of startups since then, and he's on another startup now. Can't wait to hear more about the details. But yeah, it's great; there were folks there from Arm who now work at Google. They all move around, so you get all kinds of interesting perspectives and stories. It's the community that makes Hot Chips what it is.

Leonard Lee:

It is. And it's a massive brain dump. And there's so many interesting things, at least for me.

Karl Freund:

Yeah. I think I was one of the only people there without a PhD. Right. And it's like, no, I don't have a

Leonard Lee:

PhD. You and

Jim McGregor:

the both of us, all three of us, I think, just don't have PhDs.

Karl Freund:

We decide to work instead.

Leonard Lee:

Yeah. But it's amazing to see the work and the research that's coming down the pike that's going to influence how these systems evolve. You attend this conference, you get a blueprint for what you can expect for the next two, three years, right? Yeah. And so

Karl Freund:

they should stop allowing people to talk about last year's products, right? There were some that will remain nameless. They got up and talked for an hour about a product they introduced six months ago. Yeah, it's in market. In-market selling. I think they should stop

Jim McGregor:

allowing all the Apple people to attend if they're not gonna talk about products.

Karl Freund:

They never talk. They never talk. They only listen.

Leonard Lee:

They're there to learn. Yeah. Yeah.

Karl Freund:

That's okay. I'm okay with that.

Leonard Lee:

Yeah, but I'm not okay. I'm just teasing 'em. Let

Karl Freund:

me tell you about last year's product. That was boring.

Leonard Lee:

Yeah. And they shall not be named,

Karl Freund:

we will not name them.

Jim McGregor:

Yeah. Well, and you have to remember that some of the companies, like Intel and AMD and Nvidia, had several presentations at the event. They're not talking about just one product or one part of the system; they're talking about multiple parts of the system. I agree that I don't like hearing about last year's products, or a product that was already introduced, but usually these guys at least give you some information that you didn't have before, some detail you didn't have before. So I still appreciate that. Yeah,

Leonard Lee:

it's all good stuff. Come on guys. We love hot chips and kudos.

Karl Freund:

Great conference. I encourage everyone to attend.

Leonard Lee:

Yeah, sure. But then that lunch line will get longer and longer, Karl. It's

Karl Freund:

long, but it was pretty quick. Or attend virtually. They had pretty good food in that lunch line, didn't they? Yeah, they did. It's all Indian food. Yeah. The thing is, half the audience was either from India or Pakistan. So

Jim McGregor:

the thing that cracked me up about it was, especially on the tutorial day on Sunday, the first part of the tutorials was on racks, and the second was on programming AI kernels. And Nvidia talked about their NVL72 and everything they had to do to beef it up, and even said, fully configured, this is the weight of a Chevy Tahoe. Yeah, it's massive. Massive. But even without that, I mean, with the structural enhancements they had to do and everything else to it, you're talking about 3,000 pounds. They actually wanted to bring one to the show to show it off, and they couldn't get it through the door. No, you can't tip it. You can't tip it. No. They literally can't tip it sideways or anything else, because it weighs so much, and it would not fit through the doors to the auditorium.

Leonard Lee:

Yeah. You know, another thing that was really interesting: I used to do a lot of research on additive manufacturing and 3D printing when I was at IBM. There's a company called Fabric8. Is it Fabric8? Yeah, yeah, yeah, Fabric8, with an eight. Really cool. Yeah, with an eight. Their whole cold plate presentation was really interesting, and the fact that they didn't use traditional 3D printing methods was also really interesting. They design cold plates that are custom-designed for a particular thermal characteristic, the planar thermal characteristic of a chip or a package. And that's where we are now, at that level of material, physical optimization and innovation.

Jim McGregor:

Materials technology. It really is materials technology. And the fact that they print these using copper at room temperature, so it's not even like they had to heat it. So, I mean, yeah, it was impressive technology. They called it ECM, E-C-M.

Leonard Lee:

Yeah, ECM. Right, right. Last year we weren't really talking too much about cold plates; they were just generically being introduced as a topic. But with all the discussions around power efficiency and thermal management of these data centers and racks, at pretty much every scale that you can imagine, it's interesting to see how these really mundane, boring things are all of a sudden becoming important factors in taking things to the next level in terms of scaling and power and performance efficiency. So I thought that was cool. So I did attend the GlobalFoundries Technology Summit right after Hot Chips. I know that, Jim, you couldn't make it because of the extremely short notice. This was my first time attending a GlobalFoundries event, and this was their Technology Summit, held at the Marriott. And obviously, Jim, you have a long legacy with these guys, so you asked me what are some of the things that kind of popped during the presentations or through the course of the whole day; it's a one-day event. They really wanted to flex their silicon photonics muscles, so that was a really big topic. They even took it to the quantum level, quantum photonics, and so they were talking about quantum dots and a bunch of physics stuff that I was not able to digest, and may never ever be able to digest, presented by Ted, Dr. Ted Letavic, by the way, who I think is a corporate fellow and an SVP at GlobalFoundries, who presented the whole roadmap, right? The technology roadmap. And, for those of you who don't know, GlobalFoundries is different from Intel, TSMC, and Samsung, who are more of the leading fabs that grab the headlines for their leading-edge process technologies. GlobalFoundries, correct me if I'm wrong, Jim, is more of the lagging-edge stuff, but when you start to talk to them, you hear a lot about IoT-ish types of things, power, RF, these foundational things that support all these digital systems that are enabling our digital lives, right, as well as these hyperscale AI data centers. So power is a key area. The other thing is CMOS, ultra-low-power as well as feature-rich, and, as I mentioned before, silicon photonics. And these are now actually becoming an important domain because of a lot of the stuff that we mentioned at Hot Chips, where the conversation is shifting toward resilience, reliability, serviceability, and all these concepts actually require, guess what, IoT, right? Sensors, being able to capture telemetry, having power-efficient X, Y, Z across the board to drive the sustainability of systems at almost every scale, from the edge sensor all the way to these hyperscale AI supercomputing systems. It was interesting for me. It was actually kind of refreshing, Jim.

Jim McGregor:

Well, I wouldn't classify them as a laggard. I'd say that they're a leading... I didn't say they're a laggard, did I say that? You said they were lagging edge, yes. No, they are a leading semiconductor foundry company. However, they are typically at the N-plus-one or N-plus-two node; they're not on the bleeding edge of technology. But they also have a lot of specialty, so they focus more on not just being on the leading edge, but having the specialty technology for wireless, for optical, for all these different types of applications. And they've got so much capacity between Singapore and the US, I would say that they're the key foundry for the other 95% of the semiconductors we need to make devices. Yeah. Did you say

Leonard Lee:

95%?

Jim McGregor:

Yes. In terms of units, absolutely.

Leonard Lee:

Speaking of foundries, one of the things that I came away pretty impressed with was their advanced packaging, right? Mm-hmm. So they do a lot of advanced packaging. They were touting their chops in 3D heterogeneous integration; there was a lot of talk about homogeneous and heterogeneous integration. They have this thing called Slate, an advanced packaging platform where you can do wafer-to-wafer bonding, homogeneous bonding, as well as wafer-to-wafer RF stuff and die-to-wafer. So, you know, for the classes of devices that they support from a manufacturing perspective, they're very advanced, and they're investing tremendously in advanced packaging, which is obviously becoming a big thing up and down the stack. Advanced packaging is the thing that has been enabling the industry to continue to advance semiconductor systems and architecture. So I thought that was a cool learning for me. They also emphasized ReRAM, resistive RAM, for a lot of the IoT applications that they support. And, Jim, I don't know if you can share some details on what that is; it's a form of embedded non-volatile memory.

Jim McGregor:

Exactly, it's embedded non-volatile memory, and it's one of several that have been focused around creating the perfect memory. The challenge has always been volume or scalability, especially when you only have one company doing one version of it. It is a very viable solution, especially as we run into challenges in scaling other memory options. I think everyone's looking at how memory is going to change going forward, and GlobalFoundries is unique in the fact that it has a lot of those pieces of the possibility. It even acquired MIPS, so now it has some IP. And you mentioned this early on: changes in the dynamics of the industry, or the structure of the industry. You know, I don't think we'll ever see a pure-play IP company again, like we saw with Arm or MIPS. I think those companies need to be part of a bigger entity, and I think everyone's trying to figure out how to put all these different pieces of the puzzle together, from EDA to semiconductor design to actual manufacturing, and make it cost-effective.

Leonard Lee:

So a certain degree of vertical integration. Yes. Not full vertical integration, but vertical consolidation to unleash a bit more value and differentiation. And that's what I gleaned out of my conversations with the folks at MIPS. That is the thinking around that play: by having IP as part of their portfolio, they can help their customers go to market quicker, also providing them with integrated tooling to do that. So there's a whole range of accelerators, consolidating and integrating tooling, to make that happen. It's like pre-integrating stuff and then validating it and certifying it so that you don't have to do all these integrations separately, being able to scale those integrations out. I think that's really the gist of the story, from what I can tell.

Jim McGregor:

You know, obviously we've seen GlobalFoundries morph over time. Yeah. They were a spin-out originally from AMD as a pure-play foundry. They acquired additional assets from IBM in terms of packaging and everything else, and now, acquiring MIPS, they continue to evolve their business model, and I think that's good. Yeah. Because I think business models have to change, especially in the AI era. Every company should be asking themselves today: how do AI and the changes around AI not only change your company and what your company does, but also change your business model?

Leonard Lee:

When it comes to AI, these guys have definitely threaded a lot of AI into their storytelling. Like, for their feature-rich CMOS, it is basically a perception AI story, physical AI, right? They framed it as physical AI with sensing compute, and then they also do display stuff, which I didn't know about; that was interesting for me. Cool company, at the epicenter of emerging edge AI, or even existing edge AI and quote-unquote IoT, and the next frontier, which is the sensory or perception end of physical AI. Good stuff. Anyways, I enjoyed it. Hopefully I'll join them next year. We'll all be there. I don't know if Karl wants to be there, but I'm sure we can get you an invitation. This is gonna be important, because all these large data centers are gonna have to be instrumented, right? It can't just be about core compute; the entire data center has to be AI. You don't think so?

Karl Freund:

Oh yeah, I do. Absolutely. One of the things we forgot to mention about Hot Chips, to go back for just a moment, was that Rapidus's CEO came over from Japan and presented a fascinating, albeit slow, presentation about the Rapidus fab. For those of you who don't know, Rapidus is a Japanese government-sponsored fabrication facility coming up in Japan. Its first products will be two nanometer, right? So they're not sitting back waiting for technology to get to them. They're going after it, and this thing is gorgeous.

Jim McGregor:

Well, they also formed a key partnership with IBM, so they're leveraging IBM's latest technology.

Karl Freund:

Right, two years ago. Yep. We ought to post some pictures of that facility. Awesome.

Leonard Lee:

Yeah.

Karl Freund:

the facility's amazing. They say in the winter you can ski down it.

Jim McGregor:

Yeah.

Leonard Lee:

I'm open. Let's go. I'm open. We're going, yeah, we're gonna try to get an invitation, winter, winter invitation, to visit the fab and do a tour. Yeah, that'd be great. Crazy month, crazy month. Do you think that next month is going to be any more or less crazy? No. What kind of crazy stuff do you guys have on your

Jim McGregor:

I'm going from here to meetings with Arm on their next-generation products, to IFA, which is Europe's version of CES, to IAA Mobility to see the latest in personal mobility tech, and it just keeps going on and on from there. So, no, there was absolutely no summer break this year, and it just keeps flowing into the fall. Like usual, it's gonna be crazy. I think I'm traveling for the next 12 weeks. Oh, what about you, Karl?

Karl Freund:

I'm not traveling for the next 12 weeks, but I will be going to the AI Infrastructure Summit and the Snapdragon Summit for Qualcomm.

Leonard Lee:

We'll all be there.

Jim McGregor:

Yep. and then we have super computing coming up and other things.

Karl Freund:

We have super computing.

Jim McGregor:

Yes.

Leonard Lee:

You guys are gonna be there as well? Yes. Oh yeah. I have OCP and SEMICON West.

Jim McGregor:

I'll be at OCP, and there are some other company events that week. One of my colleagues, Damien, will be at SEMICON West.

Leonard Lee:

All right. So, everyone, just reach out to us. If you're at any of these events, we'd love to meet you and chat. And so I think we'll call that an episode. Definitely reach out to these gentlemen; they're leaders in their field. Karl Freund just started his Substack, so make sure to go and check out his Substack, but he's on Forbes for the time being, I think.

Karl Freund:

I'm actually on Forbes for some stuff, but some of the stuff I write about is just not really the right content for that audience.

Leonard Lee:

Yeah. But if you want the latest, greatest insights on AI, AI supercomputing, and HPC, definitely contact Karl. Follow him on Substack. His website is www.cambrian-ai.com, and also follow him on LinkedIn. And of course, Jim is everywhere. Are you on Substack?

Jim McGregor:

Not yet. No, not yet. We're not on Substack. We're on EE Times, EE Times Europe, ECT News, and a couple of other publications, not just myself, but my colleagues. Yeah. So we are across the board for electronics and for some of the general business stuff.

Leonard Lee:

Yeah. And he just knows everything. Jim is a legend. I wish. Yeah, you're calling me old again. Yeah, you are old. Embrace it. No, I'm old. I'm old. You guys are all young at heart. That's all that counts. Remember to like, share, and comment on this episode, and subscribe to the reThink Podcast here on YouTube and Buzzsprout. Also, please subscribe to the neXt Curve research portal at www.next-curve.com. We have a Substack as well; I'm trying to promote that because LinkedIn is getting really weird. Subscribe for the tech and industry insights that matter, and we'll see you next time. And hopefully the month of September won't be such a crazy month. So anyways, gentlemen, thank you.
