Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
Latest Episodes
The USSR Olympiad Problem Book
Dive into The USSR Olympiad Problem Book by Shklarsky, Chentsov, and Yaglom—320 unconventional puzzles designed for seventh- to tenth-graders that still stump PhD mathematicians. Learn how these problems force new mental models, not brute-force ...
Interaction Models: Scalable Real-Time Human-AI Collaboration
We dive into Thinking Machines Lab’s breakthrough that shatters the typing bottleneck by streaming real-time microturns and decoupling quick conversation from deep reasoning. Learn how a fast-front interaction model handles live dialogue, while...
The AI Co-Mathematician: Agentic Workflows for Mathematical Discovery
Google DeepMind has introduced the AI co-mathematician, a specialized agentic workbench designed to support the multifaceted and iterative nature of mathematical research. Unlike standard chatbots, this system utilizes a statef...
Natural Language Autoencoders for Unsupervised LLM Interpretability
Introducing Natural Language Autoencoders (NLAs), an unsupervised method developed by researchers at Anthropic to translate the complex internal activations of large language models into human-readable text. By utilizing an ...
Mollifier Layers for Efficient High-Order Inverse PDE Learning
This paper introduces Mollifier Layers, a novel, lightweight module designed to enhance Physics-Informed Machine Learning (PhiML) by replacing recursive automatic differentiation with convolutional operations. While traditional me...