3D InCites Podcast

The AI Revolution: Energy, Ethics, and Advanced Packaging

Françoise von Trapp Season 5 Episode 12

Send us a text

What happens when the world's thirst for artificial intelligence collides with physical limitations? The answer lies at the intersection of silicon innovation and packaging technology.

From the conference hall of ECTC 2025 in Dallas, we're joined by Sam Naffziger of AMD, who reveals how AI has transformed from a peripheral topic to the dominant force in computing within just five years. The economic motivation is clear: creating machines that generate intelligence offers "incalculable value" to humanity. Yet as Naffziger explains, today's large language models face significant limitations—they excel at pattern recognition but struggle with true reasoning, suffer from hallucinations, and require human verification for mission-critical applications.

Perhaps most surprising is Naffziger's revelation about "the data wall"—general-purpose AI models have essentially exhausted the high-quality training data available on the internet. This constraint is pushing AI development toward more sophisticated approaches involving reinforcement learning and models that check other models, gradually shifting from simple pattern recognition toward deliberative thinking that more closely resembles human reasoning.

The conversation tackles the looming concern of AI's energy consumption, projected to reach 10% of global power by 2030. Rather than viewing this as an insurmountable problem, Naffziger offers a compelling perspective: AI's ability to optimize countless processes—from transportation routing to crop yields and manufacturing efficiency—could ultimately lead to net energy savings despite its own substantial power requirements.

For the advanced packaging community, the message is clear: your work sits at the foundation of AI's future. As computational demands increase exponentially, innovations in thermal management, power delivery, and component integration will directly determine how quickly and effectively AI can evolve. 

Connect with Sam Naffziger on LinkedIn or visit AMD.com to learn more about their competitive AI solutions and join the conversation about technology that's reshaping our world.

Support the show

Become a sustaining member!

Like what you hear? Follow us on LinkedIn and Twitter

Interested in reaching a qualified audience of microelectronics industry decision-makers? Invest in host-read advertisements, and promote your company in upcoming episodes. Contact Françoise von Trapp to learn more.

Interested in becoming a sponsor of the 3D InCites Podcast? Check out our 2024 Media Kit. Learn more about the 3D InCites Community and how you can become more involved.

Françoise von Trapp:

This episode of the 3D InCites Podcast is sponsored by the IEEE Electronic Components and Technology Conference, organized by the IEEE Electronics Packaging Society. ECTC brings together the best in packaging, components, and microelectronics systems science, technology, and education in an environment of cooperation and technical exchange. Learn more at ectc.net. Hi there, I'm Françoise von Trapp, and this is the 3D InCites Podcast. Hi everyone, this week we are recording live from ECTC 2025 at the Gaylord Texan Resort in Dallas, Texas, for the conference's 75th edition. We're hearing a lot about the most advanced of the advanced packaging technologies. What's driving this right now is power-hungry AI, still top of mind for every engineer in this industry, and the keynote speaker today, Sam Naffziger, is here to talk about emerging trends aimed at addressing the demand for high-performance computing. Welcome to the podcast, Sam, thank you.

Sam Naffziger:

Excited to be here, Françoise.

Françoise von Trapp:

Okay, so before we dive in, you're with AMD. Can you just share a little bit about your background and your role there?

Sam Naffziger:

Yeah. So I started out in microprocessor design with a focus on power efficiency improvements and how to extract the most performance per watt out of these devices, and I've gone on to lead that cross-company from a power efficiency perspective. Simultaneously, I have driven a lot of our chiplet architecture approaches and design, which is the way to extract more performance out of the silicon, the advanced silicon technology processes, and of course that involves deep package technology engagements and advanced development. So I've ended up leading the architecture cross-product for the company and sponsoring long-lead technology development.

Françoise von Trapp:

Now, AMD is pretty well known for its high-performance computing processors, really targeted a lot towards, I think, gaming.

Sam Naffziger:

We have a broad spectrum of products, right, yeah, and gaming is where we first deployed our advanced hybrid-bond 3D on the CPU side, actually. So gaming's been kind of our bread and butter, and it's a really great market with very enthusiastic customers. Right, but we make most of our money actually in the data center.

Françoise von Trapp:

Right, ok, and that is a big deal right now, as we see this explosion in AI models. You were talking this morning about how fast AI has grown since I think you were talking about since COVID.

Sam Naffziger:

Yeah, I mean, it was hardly a topic five years ago when I spoke at ECTC, and yet now it's the topic, and the reason is the economic motivation of AI. I mean, we actually are getting models that can replicate many aspects of human intelligence, and, of course, if you consider the most valuable commodity in the world, you know, how did we get all the comforts of modern existence and cars and computers and medicine? Right, it's human intelligence inventing stuff. So if we can now invent machines that can produce intelligence, it's of incalculable value to the world, and so that's why there's so much excitement and hype about it. Now, the kinds of intelligence that we're manufacturing are imperfect. Right, we're constantly improving, and that's what makes it so exciting: the evolution, the pace of development. It far exceeds anything I've ever seen in the industry.

Françoise von Trapp:

Why do you think it had such a drastic or rapid escalation? I mean, it seems like once they deployed the first versions of ChatGPT, it really started to take off, even though there are other market spaces that aren't really consumer-facing.

Sam Naffziger:

Well, it goes way beyond consumers. The applications of AI in science and medicine and robotics are going to be of immense economic value, and that's what's really driving it. And, in fact, ChatGPT was such an explosion in awareness because the capabilities just blew people away. Its ability to compose entire sophisticated essays, synthesize books, and provide a distillation and a summary of complex technical treatises into easily consumable paragraphs that would have taken days for an expert in the field to synthesize down. The models are amazing.

Françoise von Trapp:

I hesitate with tools like ChatGPT, though, because it's only as good as the data that it's training on, right? And the data has to be extremely accurate and on point. I feel like there's a lot of people out there, especially younger people, who are, you know, using it to write their papers, for instance. I mean, I heard that ChatGPT started out really smart and that it's getting dumber.

Sam Naffziger:

Yeah, there is a corruption factor that comes from the erroneous or fabricated answers, because, as I'm sure you're aware, hallucinations are an issue with AI. It'll make up answers if it doesn't actually know, and present them, communicate them, as if they are authoritative. So every AI response needs to be checked, right, and putting it in mission-critical applications to make decisions is not a good idea at this point. Right, because the models are not reliable. They're very impressive, but we can't depend on the results.

Françoise von Trapp:

So you mentioned just now medical, industrial, and robotics as three areas.

Sam Naffziger:

Those are some of the top ones. And agriculture. I mean, if you say AI for science, it encompasses a vast field of drug discovery and genomics analysis and agricultural improvements, which are extremely compelling. The ability of AI to synthesize vast amounts of data and come up with useful conclusions from that, vastly more data than humans can possibly absorb. You know, climate data and histories of crop yields for specific variants in certain regions, and the soil types and fertilizers, just talking about the agricultural aspect, and come up with a plan for crop rotation and the appropriate farming techniques that will maximize yields and minimize losses. That's just an example. I mean, there are human experts that can do that, but an AI can essentially, for these specific fields, become superhuman in its ability to provide those kinds of recommendations.

Françoise von Trapp:

Okay, so one of the things you were talking about in your keynote was about running out of data to train models. Can you explain what you meant there?

Sam Naffziger:

So for the large language models, you know, the general-purpose ones like ChatGPT that you mentioned, they have been trained on the compendium of Internet data that's out there, slurping in all the books and all of the analysis, you know, everything. But the model developers try to focus on the high-quality data that can make the model more intelligent, versus just a bunch of random numbers. And yeah, the Internet has pretty much been tapped out now for these huge model training exercises. That is somewhat independent of the specialty fields like I just mentioned, say in agriculture or medicine.

Françoise von Trapp:

Is that because it's more enclosed? The data that you're feeding this AI engine is already qualified. You know, it's not just scraping random data off the Internet; they're actually feeding it.

Sam Naffziger:

High-quality data.

Françoise von Trapp:

It's like highly enclosed, like encapsulated data that's not been corrupted by any other input, right?

Sam Naffziger:

Right, right, model contamination absolutely can degrade the intelligence of the AI. So that's a very important field. But yeah, we have, to some extent, for the general intelligence applications, hit a data wall. And actually it's a good thing, because now we're leaning into new approaches that leverage reinforcement learning, approaches that have feedback loops and models checking models, and we often put humans in that loop as well, human reinforcement learning, to improve the quality of responses, to grade the responses, which is the better response out of the set. And now, when we can automate that with multiple AIs checking each other and generating synthetic data, we've been making significant strides in the ability of these models to actually reason, not just regurgitate answers. So the initial LLMs, the GPTs and Geminis, they're good at using that vast trove of Internet data they're trained on to produce the most credible response to a given query.

Sam Naffziger:

But it's essentially just pattern recognition. It's not really thinking. It's just, give it a query and boom, here's the answer. It's kind of like System 1 thinking in the brain, where you can recognize faces really quickly. But if you start thinking through, okay, if I see John, and I last saw him here, what's the right way to respond to make a connection with John? You know, that's a reasoning thing, a System 2 process, and we're only just starting to get models that can do that more deliberative thinking.

Françoise von Trapp:

So I know we're limited on time, so I just wanted to ask you two things. First of all, one of the things that we're hearing that people are concerned about is the amount of energy that AI consumes, and there are projections that by 2030, 10% of the world's energy will go to powering AI. So I guess maybe I'm in the middle of an existential crisis around this. I mean, it's too late, you know, the genie's out of the bottle, but should we be rolling this out before we've solved the energy problem?

Sam Naffziger:

Yeah, that's a fine question, and I think, yeah, people have every right to be concerned about the energy consumption of AI, because it does appear like it will outstrip supply, and power limitations become a very real cap on the amount of AI we can deliver. But I guess I would turn that around a bit. If you think about what we are achieving with AI, we are developing machines that can solve the world's hardest problems and actually invent new approaches or identify, I'd say, optimal approaches to transportation routing, to minimizing power consumption across a myriad of industries, to providing better crop yields, reducing pollution, improving gas mileage, countless things. So actually I view AI as, you know, kind of like a compound-interest return. Investments in AI are going to improve human productivity, quality of life, and health, and actually reduce energy consumption in net. Even if AI itself is consuming a lot of power, the intelligence we're generating is going to be harnessed for vastly more productivity.

Françoise von Trapp:

So maybe it's limiting the frivolous use of AI and focusing it on the areas where it really can make a difference.

Sam Naffziger:

It'll reduce inefficiency, right.

Françoise von Trapp:

So just one last question for you: what does AMD need from the advanced packaging community to solve these challenges that you talked about in your talk? I think up there you had thermal issues and, maybe, yeah, energy.

Sam Naffziger:

Yeah, thermal, you know, just power efficiency in general, power delivery, getting the heat out, getting power in. The packaging community is absolutely foundational to achieving the next wave of growth in AI. Like I said, the economic demand, the potential of AI to make the world better in countless ways, make our manufacturing processes more efficient, as well as the medical and health benefits, drug discovery, all these things. So the more intelligence we can generate with AI, I believe, is better for humanity, even though, like any tool, it can be used for good or ill. I believe, by and large, we'll use it for good, and we'll try to marginalize the corrupt uses. And the ability to generate more AI is limited by power, and the packaging community has a huge role in providing power-efficient connectivity for the silicon chips that are at the core of those compute activities, right? So, whether it's the memory, the compute devices, the GPUs, the accelerators, or the networking, getting those components closer together with the most energy-efficient connectivity possible, with the best heat conductivity and the lowest resistance for power delivery, all of those sorts of problems are going to enable us to develop AI faster and more effectively, which I believe is a net good.

Françoise von Trapp:

Well, thank you so much for your time. I appreciate it. Can we connect people with you on LinkedIn?

Sam Naffziger:

Oh, absolutely yes, I'm on LinkedIn, and AMD.com has great research about our company's particular AI solutions, which are very competitive.

Françoise von Trapp:

Okay, great. Thank you so much.

Françoise von Trapp:

Next time on the 3D InCites Podcast, we wrap up our coverage of ECTC 2025, talking with 3D InCites member companies about their key takeaways from this year's event, what they were showcasing, and also some of their memories of ECTCs past and what they hope to see in the future. There's lots more to come, so tune in next time to the 3D InCites Podcast. The 3D InCites Podcast is a production of 3D InCites LLC.