Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
NVIDIA Nemotron 3 Super: Powering High-Throughput Agentic AI
Today we unpack NVIDIA's brand-new blog post on the Nemotron 3 Super model and how it powers high-throughput agentic AI. We break down a 1,000,000-token context window, a hybrid mixture-of-experts architecture that routes tasks to subnetworks to avoid full-model compute, a 120B-parameter open model that activates only about 12M parameters at once, memory-efficient Mamba layers, and multi-token prediction that speeds inference. We discuss implications for software and financial agents, reducing context drift and the thinking tax, and what this could mean for enterprise AI and everyday workflows. We close with a prompt: what ambitious world-changing project would you entrust to an autonomous agent?
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
SPEAKER_01: Have you ever, uh, started telling a friend about your weekend? You know, just a really long, detailed story.
SPEAKER_00: Oh, absolutely.
SPEAKER_01: And halfway through you completely forget what your original point even was. You're just rambling about your coffee order while they stare at you.
SPEAKER_00: Yeah, we've all been there.
SPEAKER_01: Well, it happens to the best of us. But it turns out artificial intelligence has that exact same problem when it tries to juggle too much information at once.
SPEAKER_00: It really does.
SPEAKER_01: Today's deep dive explores a brand-new NVIDIA blog post from today, March 11th, 2026. We are discovering how NVIDIA's Nemotron 3 Super model is powering high-throughput agentic AI to help humanity automate complex tasks and solve massive problems.
SPEAKER_00: It's an incredible leap forward.
SPEAKER_01: It really is. But before we get into the weeds, a quick thanks to our sponsor, Embersilk. If you need help with AI training, automation, integration, or software development, you really need to visit Embersilk.com.
SPEAKER_00: They are fantastic for figuring out the next steps.
SPEAKER_01: Exactly. You can uncover exactly where agents could make the most impact in your business or even your personal life. Again, that is embersilk.com.
SPEAKER_00: It's a great time to be looking into agents, especially with the breakthroughs we're seeing right now.
SPEAKER_01: Okay, let's unpack this. Anyone building multi-agent systems right now knows the pain of context explosion.
SPEAKER_00: The dreaded context explosion.
SPEAKER_01: Right. You string a few agents together to solve a complex problem, and suddenly they're passing so much token history back and forth that the model suffers from goal drift.
SPEAKER_00: It just loses track of things.
SPEAKER_01: It literally forgets the initial prompt, just like my rambling coffee stories. And then you have the thinking tax, where the system gets incredibly sluggish because it's using massive models to reason out every single tiny subtask.
SPEAKER_00: What's fascinating here is NVIDIA's incredibly clever solution to all of that. Just throwing raw compute at the problem isn't scalable. So to tackle that goal drift, NVIDIA gave Nemotron 3 Super a 1 million token context window.
SPEAKER_01: A million tokens? That is massive.
SPEAKER_00: It is. It means these agents can hold an entire massive workflow in their memory without ever losing the plot. But to prevent that from bankrupting you on compute, they utilized a hybrid mixture-of-experts, or MoE, architecture.
SPEAKER_01: So it's routing tasks to specific subnetworks rather than lighting up the entire model every single time.
SPEAKER_00: Precisely. Under the hood, it's a 120 billion parameter open model, but it brilliantly only activates 12 million parameters at once. It acts like a team of highly efficient specialists.
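The routing idea described here can be sketched as a toy top-k mixture-of-experts layer: a small gate scores every expert, and only the top-k experts actually compute, so per-token cost scales with k rather than with the total expert count. This is an illustrative sketch, not NVIDIA's implementation; the sizes (8 experts, top-2 routing, 16-dimensional vectors) are made up for demonstration.

```python
# Toy top-k mixture-of-experts routing (illustrative; not Nemotron's code).
import numpy as np

rng = np.random.default_rng(0)

D = 16          # hidden size (assumed for illustration)
N_EXPERTS = 8   # total experts: the "parameters on disk"
TOP_K = 2       # experts activated per token: the "parameters in use"

# Each expert is a small linear layer; the gate is a linear scorer.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    scores = x @ gate_w                    # one gate score per expert
    top = np.argsort(scores)[-TOP_K:]      # indices of the k highest scores
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only the selected experts run at all; the other 6 do zero work.
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

x = rng.standard_normal(D)
y, used = moe_forward(x)
print(f"activated experts {sorted(used.tolist())} of {N_EXPERTS}")
```

The same shape of trick is why a large MoE can serve answers at a fraction of a dense model's per-token compute: capacity lives in the full expert pool, but each token pays only for the experts its gate selects.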
SPEAKER_01: So you get the intelligence of a flagship model for a fraction of the cost.
SPEAKER_00: Exactly. It really democratizes enterprise-grade AI.
SPEAKER_01: Here's where it gets really interesting. I saw they also integrated Mamba layers for memory efficiency.
SPEAKER_00: Which is a huge deal.
SPEAKER_01: Because Mamba allows the model to process massive sequences sequentially without the memory bottleneck of standard transformers. And they paired that with multi-token prediction, which literally guesses multiple future words simultaneously.
SPEAKER_00: It's the silver bullet for the thinking tax we talked about. That technique alone makes inference three times faster.
SPEAKER_01: Three times faster. That is a massive jump in speed.
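A back-of-envelope way to see where a figure like "three times faster" could come from: if each forward pass drafts k candidate tokens and a fraction p of them are kept (speculative-decoding-style accounting), expected tokens per pass rise from 1 to roughly k * p. The numbers below (4 drafted tokens, 75% kept) are assumptions chosen to land near the 3x figure quoted above, not measurements from NVIDIA's post.

```python
# Back-of-envelope speedup model for multi-token prediction.
# tokens_per_pass and acceptance_rate are illustrative assumptions.
def decode_speedup(tokens_per_pass: int, acceptance_rate: float) -> float:
    """Expected speedup vs one-token-at-a-time decoding."""
    baseline = 1.0                             # vanilla decoding: 1 token/pass
    accepted = tokens_per_pass * acceptance_rate
    return max(accepted, baseline) / baseline  # can fall back to baseline

# e.g. drafting 4 tokens per pass with 75% kept:
print(decode_speedup(4, 0.75))  # 3.0
```

The point of the model is only that speedup is the product of how many tokens you draft and how many survive verification, which is why multi-token prediction pays off most on predictable stretches of text.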
SPEAKER_00: If we connect this to the bigger picture, the real-world applications are just incredibly uplifting. Think about software agents. They can now load entire enterprise code bases instantly.
SPEAKER_01: Generating and debugging end-to-end without breaking projects into tiny pieces.
SPEAKER_00: Right. Or financial agents seamlessly synthesizing thousands of pages of reports without breaking a sweat.
SPEAKER_01: So what does this all mean? For you listening, this open-source technology eliminates tedious workflows. Handing that friction over to agents frees you up to focus on big ideas, unlocking boundless human creativity and progress.
SPEAKER_00: It fundamentally shifts how we work for the better. And this raises an important question. With an AI capable of holding a million tokens of context without losing its mission, what ambitious, world-changing project would you entrust to an autonomous agent?
SPEAKER_01: That is an inspiring question to mull over. We are stepping into a bright, beautiful era of human-AI collaboration. If you enjoyed this deep dive, please subscribe to Intellectually Curious and, hey, leave us a five-star review if you can. It really does help get the word out. Thanks for tuning in.