Intellectually Curious
Intellectually Curious is a podcast by Mike Breault featuring over 1,800 AI-powered explorations across science, mathematics, philosophy, and personal growth. Each short-form episode is generated, refined, and published with the help of large language models—turning curiosity into an ongoing audio encyclopedia. Designed for anyone who loves learning, it offers quick dives into everything from combinatorics and cryptography to systems thinking and psychology.
Inspiration for this podcast:
"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."
― Frank Herbert, Dune
Note: These podcasts were made with NotebookLM. AI can make mistakes. Please double-check any critical information.
Autocompleting Reality: The Rise of Large Event Models
This episode unpacks large event models—AI that can understand, represent, and forecast real-world event sequences over time, not just generate text. We explore how LEMs extract underlying rules with schema induction, marry neural nets with symbolic planners for safety, and use sparse attention to manage massive timelines. We discuss real-world uses in public safety and healthcare, the safety nets that keep predictions grounded in reality, and imagine how a personal LEM could optimize your day.
Sponsored by Embersilk LLC
SPEAKER_01: You know how uh like one tiny decision can just completely unravel your entire morning?
SPEAKER_00: Oh, absolutely. The classic butterfly effect.
SPEAKER_01: Right, exactly. Like today I hit snooze just once, and um that meant rushing breakfast, which led to a smoking toaster, which meant opening the windows, and then, you know, inevitably missing my train.
SPEAKER_00: Well, from a data perspective, that is just a perfect sequence of discrete events. I mean, cause, effect, and time.
SPEAKER_01: Yeah. And if you've ever felt like your life is just one giant chain reaction like that, you are going to absolutely love today's deep dive.
SPEAKER_00: We are looking at a really massive breakthrough in AI called uh large event models, or LEMs.
SPEAKER_01: Right. Because we are all super familiar with large language models, you know, predicting the next word in a sentence. But LEMs are this entirely new class of AI.
SPEAKER_00: Yeah, they are designed to understand and represent and actually forecast real-world events over time. It is a huge shift.
SPEAKER_01: So instead of generating text, they're built to optimize these complex sequences in reality, right? Like from healthcare protocols to um global logistics.
SPEAKER_00: Exactly. To a LEM, your burnt toast isn't a word. It's a structured object. It has a specific timestamp, participants, a location.
SPEAKER_01: Okay. So it's almost like uh autocomplete for reality.
SPEAKER_00: I love that phrase, yeah. Autocomplete for life.
SPEAKER_01: But wait, if it's just looking at patterns in massive data sets, how does it learn the actual rules of the physical world? Like why doesn't it just guess wildly and, you know, hallucinate impossible sequences?
SPEAKER_00: Well, that brings us to a really fascinating mechanism called schema induction.
SPEAKER_01: Schema induction. Okay, unpack that for me.
SPEAKER_00: Sure. So instead of just finding statistical correlations, like saying um A usually follows B, LEMs actively extract the underlying logical rules of a system.
SPEAKER_01: Wait, how does it extract a rule from raw data, though?
SPEAKER_00: Think of it like watching thousands of hours of a sport you had never seen before.
SPEAKER_01: Okay, I'm with you.
SPEAKER_00: Eventually you deduce the rules of the game just by watching the events play out, right?
SPEAKER_01: Oh, right, because you see the same patterns repeat.
SPEAKER_00: Exactly. The model does this and translates those rules into explicit symbolic logic using frameworks like PDDL. It's essentially a programming language built specifically for complex planning.
SPEAKER_01: Yeah.
SPEAKER_00: So it explicitly learns, for instance, that a pan must possess the physical attribute of being hot before an event where a finger gets burned can logically occur.
SPEAKER_01: Oh wow. So it is mapping out the actual physics of the situation, not just parroting vocabulary.
SPEAKER_00: You got it. But you know, knowing the rules of the game is very different from knowing exactly when the next play is going to happen.
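[Editor's note: a toy sketch of the precondition idea discussed above, in the spirit of PDDL-style schemas. All event names and rules here are hypothetical illustrations, not from any real LEM.]

```python
# Toy PDDL-style precondition check (all names hypothetical).
# A learned schema says the event "burn_finger" requires the facts
# ("hot", "pan") and ("touching", "pan") to hold in the current state.

SCHEMAS = {
    "burn_finger": {("hot", "pan"), ("touching", "pan")},
    "heat_pan": set(),  # no preconditions
}

def is_applicable(event: str, state: set) -> bool:
    """An event is logically possible only if all its preconditions hold."""
    return SCHEMAS[event] <= state

state = {("touching", "pan")}
print(is_applicable("burn_finger", state))  # pan is not hot yet -> False
state.add(("hot", "pan"))
print(is_applicable("burn_finger", state))  # all preconditions hold -> True
```

The point is that "a pan must be hot before a finger can be burned" becomes an explicit, checkable rule rather than a statistical tendency.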
SPEAKER_01: Right. Time is the hardest variable here. And actually, before we get into how LEMs master time, I want to quickly mention that just as LEMs optimize event sequences, today's sponsor, EmberSilk, optimizes workflows.
SPEAKER_00: They are fantastic for that.
SPEAKER_01: Truly. If you need help with AI training, automation, integration, or, you know, custom software development, they are the ones to call.
SPEAKER_00: Yeah, if you are uncovering where agents could make the most impact for your business or even your personal life, definitely check out EmberSilk.com for all your AI needs.
SPEAKER_01: Absolutely. So back to mastering time. Our sources mention something called temporal point processes.
SPEAKER_00: Yes, and specifically the Transformer Hawkes process.
SPEAKER_01: Hawkes process. I saw that term, but usually it was related to predicting things like earthquakes, right?
SPEAKER_00: Precisely. So classical Hawkes models predict self-exciting events. Mathematically, an earthquake increases the probability of an aftershock.
SPEAKER_01: Okay, that makes total sense.
SPEAKER_00: Well, LEMs take that exact math and apply it to human behavior. Like a missed car payment mathematically increases the probability of a future loan default. One event excites the likelihood of another.
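[Editor's note: the "one event excites the next" idea has a simple mathematical core. A minimal sketch of a classical Hawkes intensity, with purely illustrative parameters: a baseline rate plus an exponentially decaying bump for every past event.]

```python
import math

# Toy self-exciting (Hawkes) intensity:
#   lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i))
# Each past event temporarily raises the rate of the next one.
# mu, alpha, beta here are illustrative, not fitted to any real data.

def hawkes_intensity(t, past_events, mu=0.1, alpha=0.8, beta=1.0):
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in past_events if ti < t)

quiet = hawkes_intensity(10.0, past_events=[])          # just the baseline
excited = hawkes_intensity(10.0, past_events=[9.5])     # an event 0.5 ago
print(quiet < excited)  # True: a recent event excites the rate
```

The excitation decays over time, which is why an aftershock (or a second missed payment) is most likely soon after the triggering event.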
SPEAKER_01: But wait, mapping millions of these cascading events over years of data, wouldn't that require an impossible amount of computing power?
SPEAKER_00: It absolutely would if the AI looked at every single second of the day.
SPEAKER_01: Right.
SPEAKER_00: That is why researchers integrated something called sparse self-attention.
SPEAKER_01: Sparse self-attention, meaning it ignores something?
SPEAKER_00: Exactly. Instead of analyzing every single moment of a five-year timeline, the model learns to selectively focus only on the few past events that actually matter to the current moment.
SPEAKER_01: Oh, so it just ignores the dead space.
SPEAKER_00: Right, which makes processing years of complex timelines incredibly efficient.
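[Editor's note: a minimal sketch of the sparsity idea, assuming a simple top-k selection over recency-based scores. Real models learn the scores; this just shows how most past events end up with zero weight.]

```python
import math

# Toy "sparse attention": instead of attending to every past event,
# keep only the k highest-scoring ones and renormalize their weights.
# Scores here are recency-based purely for illustration.

def sparse_attention_weights(event_times, now, k=2):
    scores = [math.exp(-(now - t)) for t in event_times]  # recent = high
    keep = sorted(range(len(event_times)), key=lambda i: scores[i])[-k:]
    total = sum(scores[i] for i in keep)
    return [scores[i] / total if i in keep else 0.0
            for i in range(len(event_times))]

w = sparse_attention_weights([0.0, 1.0, 8.0, 9.0], now=10.0, k=2)
print(w)  # only the two most recent events get nonzero weight
```

With k fixed, the cost of attending to a timeline stops growing with its length, which is the efficiency win described above.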
SPEAKER_01: That is brilliant. Let's talk about where this is actually going, because the applications in our sources are incredibly optimistic.
SPEAKER_00: They really are. I mean, in public safety, LEMs are being used to predict human mobility. So they can help manage crowds at massive public events to keep people safe before bottlenecks even form.
SPEAKER_01: That is amazing.
SPEAKER_00: And in healthcare, they can simulate patient trajectories through hospital systems to foresee health risks days in advance.
SPEAKER_01: Okay, let me stop you there, because the healthcare application makes me a little nervous. If a LEM is forecasting medical treatments, couldn't it um suggest a physically impossible or dangerous sequence of events?
SPEAKER_00: It is a valid concern, but that is exactly where the schema induction we talked about earlier acts as a safety net. It uses a process called neurosymbolic integration.
SPEAKER_01: Meaning it combines the neural network's guessing with the hard logic.
SPEAKER_00: Precisely. The neural side makes a prediction about what happens next, but before that prediction is finalized, it's run through an external planner that checks it against the strict symbolic rules it learned.
SPEAKER_01: Oh, wow. So if the neural net predicts a medical event that violates physical or medical logic, the system just rejects it.
SPEAKER_00: Exactly. It is fundamentally constrained by reality.
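[Editor's note: a toy sketch of the propose-then-verify loop just described. The rule, event names, and scores are entirely hypothetical; real systems would run an external planner or validator rather than a single lambda.]

```python
# Toy neurosymbolic filter: the "neural" side proposes scored candidate
# next events; a symbolic rule check rejects any that violate a hard
# constraint, and the best surviving candidate wins.

# Hypothetical rule: a patient may only be discharged if vitals are stable.
RULES = {"discharge_patient": lambda state: state["vitals_stable"]}

def pick_next_event(candidates, state):
    valid = [(event, score) for event, score in candidates
             if RULES.get(event, lambda s: True)(state)]
    return max(valid, key=lambda pair: pair[1])[0] if valid else None

proposals = [("discharge_patient", 0.9), ("monitor_vitals", 0.6)]
state = {"vitals_stable": False}
print(pick_next_event(proposals, state))  # discharge rejected -> monitor_vitals
```

Even though the "neural" side scored the unsafe event higher, the symbolic check vetoes it, which is the safety-net behavior the hosts describe.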
SPEAKER_01: It is incredible to see how we are building these safety nets directly into the architecture. It really makes you optimistic about our ability to engineer solutions to massive, complex problems.
SPEAKER_00: The capacity to foresee and optimize the future is honestly just beginning. It is going to help us so much.
SPEAKER_01: It really is. And it leaves you with a really fun thought experiment, too. Like, imagine applying a personal LEM to your own habits.
SPEAKER_00: Well, that'd be amazing.
SPEAKER_01: Right. What hidden rules and positive chain reactions could you optimize to design your absolute perfect day? Definitely something to think about.
SPEAKER_00: For sure.
SPEAKER_01: Well, if you enjoyed this deep dive, please subscribe to the show. And hey, leave us a five-star review if you can. It really does help get the word out.
SPEAKER_00: Thanks for tuning in, everyone.
SPEAKER_01: Keep dreaming big, because the future of humanity is looking incredibly bright.