Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
Episodes
1941 episodes
The USSR Olympiad Problem Book
Dive into The USSR Olympiad Problem Book by Shklarsky, Chentzov, and Yaglom—320 unconventional puzzles designed for seventh- to tenth-graders that still stump PhD mathematicians. Learn how these problems force new mental models, not brute-force ...
Interaction Models: Scalable Real-Time Human-AI Collaboration
We dive into Thinking Machines Lab’s breakthrough that shatters the typing bottleneck by streaming real-time microturns and decoupling quick conversation from deep reasoning. Learn how a fast-front interaction model handles live dialogue, while...
The AI Co-Mathematician: Agentic Workflows for Mathematical Discovery
Google DeepMind has introduced the AI co-mathematician, a specialized agentic workbench designed to support the multifaceted and iterative nature of mathematical research. Unlike standard chatbots, this system utilizes a statef...
Natural Language Autoencoders for Unsupervised LLM Interpretability
Introducing Natural Language Autoencoders (NLAs), an unsupervised method developed by researchers at Anthropic to translate the complex internal activations of large language models into human-readable text. By utilizing an ...
Mollifier Layers for Efficient High-Order Inverse PDE Learning
This paper introduces Mollifier Layers, a novel, lightweight module designed to enhance Physics-Informed Machine Learning (PhiML) by replacing recursive automatic differentiation with convolutional operations. While traditional me...
The Rise of Point Absorbers
From the staggering potential of 29,500 TWh of wave energy to the nuts and bolts of point absorber wave energy converters, this episode shows how buoys that ride the surf can generate electricity, desalinate water, and power remote islands. We ...
Autocompleting Reality: The Rise of Large Event Models
This episode unpacks large event models—AI that can understand, represent, and forecast real-world event sequences over time, not just generate text. We explore how LEMs extract underlying rules with schema induction, marry neural nets with sym...
Agentic Commerce 2026: AI Shoppers Do the Shopping
A deep dive into how AI agents move from answering questions to taking real buying actions on your behalf. We break down the surge of agentic commerce, the infrastructure that makes it possible (and the ‘invisibility’ problem), real-world wins ...
Autodata Unleashed: How AI Learns to Learn
We dive into Meta AI's Autodata framework—an autonomous system that designs, tests, and iterates its own training data. From challenger models and weak/strong solvers to meta-optimization that removes negative grading, we explore how AI becomes...
Ineffable Intelligence: The Superlearner Manifesto
A radical exploration of a zero-data, self-learning AI that discovers physics and math from first principles. We unpack the ‘superlearner’ idea—an agent trained purely by reinforcement in a digital sandbox, rewarded for uncovering truths and so...
Stanford Future of Mathematics Symposium 2026
At Stanford's Future of Mathematics Symposium (May 1–2, 2026), AI shifts from calculator to collaborator while formal methods guard every step of the proof. This episode unpacks frontier reasoning, human–AI partnerships, and the visions of lead...
Air-Gapped Payments for AI Agents: Stripe Link CLI Secures AI Payments
Stripe has introduced Link’s wallet for agents and Stripe Issuing for agents to provide secure financial infrastructure for autonomous AI. These tools allow digital assistants to make purchases using one-time-use virtual cards ...
The Goblin Problem: When a Tiny AI Quirk Sparks a Linguistic Contagion
Explore OpenAI’s April 2026 study The Goblin Problem, where a nerdy personality cue in GPT-5.x triggered a cascade of goblin-themed prompts. We break down how reinforcement learning and supervised fine-tuning amplified a tiny feature, why safet...
Nemitron 3 Nano Omni: Real-Time Multimodal AI That Unifies Vision, Audio, and Text
We unpack NVIDIA’s latest Nemitron 3 Nano Omni model—a compact 3B Mixture-of-Experts architecture that processes vision, audio, and text in one pass, eliminating the old relay-race latency. Learn how MoE routing preserves accuracy, delivers up ...
Talkie Time Machine: A 13B AI Trained on the 1930s Library
We dive into Talkie, a 13‑billion‑parameter AI raised in a sealed pre‑1931 library. Trained on 260 billion words published before 1931 and guided by etiquette manuals, Victorian prose, and historical letters, Talkie challenges our ideas of AI r...
Vision Banana: From 2D Pixels to 3D Reasoning
A deep dive into Google DeepMind's Vision Banana, a foundation vision model that learns spatial physics by generating images. We explore how instruction tuning turns a capable base into a generalist vision learner capable of depth estimation, s...
AI on the Front Foot: Cricket Australia’s Live Storytelling Revolution
Cricket’s jargon can be baffling. This episode explains how Cricket Australia teamed with OpenAI’s GPT-5 (via Microsoft Foundry) to turn 140 years of scorecards into real-time, personalized narratives. From 1886 data to Azure Cosmos DB-powered ...
Resolute Raccoon: Ubuntu 26.04 and the Frictionless AI OS
We unpack Canonical's Ubuntu 26.04 LTS, codenamed Resolute Raccoon, and why it's more than a routine patch. We explore native integration of NVIDIA CUDA and AMD ROCm into the Linux 7.0 kernel, and optimized support for Intel Panther Lake NPUs, as mov...
GPT-5.5 and the Agentic AI Leap: From Babysitters to Co-Scientists
In this episode we unpack OpenAI's GPT-5.5, an agentic AI that plans, uses tools, runs its own code, and self-corrects until the job is done. We explore how this leap reshapes workflows in coding, data analysis, and scientific discovery — with ...
Workspace Agents: OpenAI’s Digital Nervous System for Your Business
A deep dive into OpenAI’s April 2026 announcements about workspace agents in ChatGPT—no-code, memory-enabled agents that run multi-step workflows across your apps and services, even after you close your laptop. We unpack how Codex translates pl...
ChatGPT Images 2.0: The New Era of Strategic Design
OpenAI’s announcement introduces ChatGPT Images 2.0, a sophisticated visual generation model designed to function as a strategic design system rather than a simple art tool. This updated version features enhanced precision ...
Hyperagents: The Self-Improving AI That Rewrites Its Own Learning
Dive into hyperagents—AI that can rewrite its own learning process by merging problem solving with meta-improvement into one editable program. Learn how they guard against self-corruption with persistent memory, how cross-domain transfer works,...
Move 37 and the AI Creativity Revolution
From a baffling early-game move that shocked pros to a broader reckoning with how AI reshapes strategy and science, this episode dives into the 2016 Lee Sedol–AlphaGo match. We unpack move 37, its field-shaping genius, and how AlphaGo’s unconve...