Intellectually Curious

Autocompleting Reality: The Rise of Large Event Models

Mike Breault



This episode unpacks large event models—AI that can understand, represent, and forecast real-world event sequences over time, not just generate text. We explore how LEMs extract underlying rules with schema induction, marry neural nets with symbolic planners for safety, and use sparse attention to manage massive timelines. We discuss real-world uses in public safety and healthcare, the safety nets that keep predictions grounded in reality, and imagine how a personal LEM could optimize your day.


Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.

Sponsored by Embersilk LLC

SPEAKER_01

You know how uh like one tiny decision can just completely unravel your entire morning?

SPEAKER_00

Oh, absolutely. The classic butterfly effect.

SPEAKER_01

Right, exactly. Like today I hit snooze just once, and um that meant rushing breakfast, which led to a smoking toaster, which meant opening the windows, and then, you know, inevitably missing my train.

SPEAKER_00

Well, from a data perspective, that is just a perfect sequence of discrete events. I mean, cause, effect, and time.

SPEAKER_01

Yeah. And if you've ever felt like your life is just one giant chain reaction like that, you are going to absolutely love today's deep dive.

SPEAKER_00

We are looking at a really massive breakthrough in AI called uh large event models, or LEMs.

SPEAKER_01

Right. Because we are all super familiar with large language models, you know, predicting the next word in a sentence. But LEMs are this entirely new class of AI.

SPEAKER_00

Yeah, they are designed to understand and represent and actually forecast real-world events over time. It is a huge shift.

SPEAKER_01

So instead of generating text, they're built to optimize these complex sequences in reality, right? Like from healthcare protocols to um global logistics.

SPEAKER_00

Exactly. To a LEM, your burnt toast isn't a word. It's a structured object. It has a specific timestamp, participants, a location.
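[As a purely illustrative sketch, the structured-object idea might look like the following. The field names and values here are invented for this episode's toast example, not taken from any actual LEM implementation.]

```python
from dataclasses import dataclass

# Illustrative only: an event as a structured object rather than a word.
@dataclass
class Event:
    kind: str            # what happened, e.g. "toaster_smokes"
    timestamp: float     # seconds since midnight
    location: str
    participants: list[str]

burnt_toast = Event(
    kind="toaster_smokes",
    timestamp=7 * 3600 + 42 * 60,  # 07:42 in the morning
    location="kitchen",
    participants=["you", "toaster"],
)
```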

SPEAKER_01

Okay. So it's almost like uh autocomplete for reality.

SPEAKER_00

I love that phrase, yeah. Autocomplete for life.

SPEAKER_01

But wait, if it's just looking at patterns in massive data sets, how does it learn the actual rules of the physical world? Like why doesn't it just guess wildly and, you know, hallucinate impossible sequences?

SPEAKER_00

Well, that brings us to a really fascinating mechanism called schema induction.

SPEAKER_01

Schema induction. Okay, unpack that for me.

SPEAKER_00

Sure. So instead of just finding statistical correlations, like saying um A usually follows B, LEMs actively extract the underlying logical rules of a system.

SPEAKER_01

Wait, how does it extract a rule from raw data, though?

SPEAKER_00

Think of it like watching thousands of hours of a sport you had never seen before.

SPEAKER_01

Okay, I'm with you.

SPEAKER_00

Eventually you deduce the rules of the game just by watching the events play out, right?

SPEAKER_01

Oh, right, because you see the same patterns repeat.

SPEAKER_00

Exactly. The model does this and translates those rules into explicit symbolic logic using frameworks like PDDL, the Planning Domain Definition Language. It's essentially a programming language built specifically for complex planning.

SPEAKER_01

Yeah.

SPEAKER_00

So it explicitly learns, for instance, that a pan must possess the physical attribute of being hot before an event where a finger gets burned can logically occur.
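[In the spirit of that pan-must-be-hot rule, here is a minimal Python sketch of a symbolic precondition check. The event names and rule table are made up for this example; real systems would express this in PDDL rather than a dictionary.]

```python
# Hypothetical learned rules: each event type maps to the set of
# world-state facts that must hold before it can logically occur.
RULES = {
    "burn_finger": {"pan_hot"},
    "heat_pan": set(),
}

def is_valid(event: str, world_state: set) -> bool:
    """An event is valid only if all its preconditions are in the state."""
    return RULES.get(event, set()) <= world_state

state = set()
assert not is_valid("burn_finger", state)  # pan isn't hot yet
state |= {"pan_hot"}                       # effect of a heat_pan event
assert is_valid("burn_finger", state)      # now logically possible
```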

SPEAKER_01

Oh wow. So it is mapping out the actual physics of the situation, not just parroting vocabulary.

SPEAKER_00

You got it. But you know, knowing the rules of the game is very different from knowing exactly when the next play is going to happen.

SPEAKER_01

Right. Time is the hardest variable here. And actually, before we get into how LEMs master time, I want to quickly mention that just as LEMs optimize event sequences, today's sponsor, EmberSilk, optimizes workflows.

SPEAKER_00

They are fantastic for that.

SPEAKER_01

Truly. If you need help with AI training, automation, integration, or you know, custom software development, they are the ones to call.

SPEAKER_00

Yeah, if you are uncovering where agents could make the most impact for your business or even your personal life, definitely check out EmberSilk.com for all your AI needs.

SPEAKER_01

Absolutely. So back to mastering time. Our sources mention something called temporal point processes.

SPEAKER_00

Yes, and specifically the Transformer Hawkes process.

SPEAKER_01

Hawkes process, I saw that term. But usually it was related to predicting things like earthquakes, right?

SPEAKER_00

Precisely. So classical Hawkes models predict self-exciting events. Mathematically, an earthquake increases the probability of an aftershock.

SPEAKER_01

Okay, that makes total sense.

SPEAKER_00

Well, LEMs take that exact math and apply it to human behavior. Like a missed car payment mathematically increases the probability of a future loan default. One event excites the likelihood of another.
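[That "one event excites the next" idea is the classical Hawkes intensity: a base rate plus a decaying bump for every past event. The exponential kernel and the parameter values below are illustrative, not from any particular model.]

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Self-exciting event rate at time t: base rate mu plus a bump
    alpha * exp(-beta * (t - t_i)) for each past event time t_i < t."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)

# With no history, the rate is just the baseline.
print(hawkes_intensity(0.5, []))     # 0.1
# A "missed payment" at t=0 temporarily raises the rate of the next event.
print(hawkes_intensity(0.5, [0.0]))  # 0.1 + 0.5*e^(-0.5), about 0.403
```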

SPEAKER_01

But wait, mapping millions of these cascading events over years of data, wouldn't that require an impossible amount of computing power?

SPEAKER_00

It absolutely would if the AI looked at every single second of the day.

SPEAKER_01

Right.

SPEAKER_00

That is why researchers integrated something called sparse self-attention.

SPEAKER_01

Sparse self-attention, meaning it ignores something.

SPEAKER_00

Exactly. Instead of analyzing every single moment of a five-year timeline, the model learns to selectively focus only on the few past events that actually matter to the current moment.

SPEAKER_01

Oh, so it just ignores the dead space.

SPEAKER_00

Right, which makes processing years of complex timelines incredibly efficient.
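[A toy top-k version of that "ignore the dead space" idea: keep only the k highest attention scores per query and mask the rest to minus infinity before the softmax, so irrelevant past events get exactly zero weight. Real sparse-attention implementations differ; this is just the core trick.]

```python
import numpy as np

def sparse_attention(scores, k=2):
    """Keep only the k largest scores per row; mask the rest to -inf
    so they receive zero attention weight after the softmax."""
    masked = np.full_like(scores, -np.inf)
    top = np.argsort(scores, axis=-1)[:, -k:]          # indices of top-k keys
    np.put_along_axis(masked, top,
                      np.take_along_axis(scores, top, axis=-1), axis=-1)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One query attending over four past events; only the two most
# relevant ones end up with nonzero weight.
scores = np.array([[3.0, 0.1, 2.5, -1.0]])
w = sparse_attention(scores, k=2)
```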

SPEAKER_01

That is brilliant. Let's talk about where this is actually going because the applications in our sources are incredibly optimistic.

SPEAKER_00

They really are. I mean, in public safety, LEMs are being used to predict human mobility. So they can help manage crowds at massive public events to keep people safe before bottlenecks even form.

SPEAKER_01

That is amazing.

SPEAKER_00

And in healthcare, they can simulate patient trajectories through hospital systems to foresee health risks days in advance.

SPEAKER_01

Okay, let me stop you there because the healthcare application makes me a little nervous. If a LEM is forecasting medical treatments, couldn't it um suggest a physically impossible or dangerous sequence of events?

SPEAKER_00

It is a valid concern, but that is exactly where the schema induction we talked about earlier acts as a safety net. It uses a process called neurosymbolic integration.

SPEAKER_01

Meaning it combines the neural network guessing with the hard logic.

SPEAKER_00

Precisely. The neural side makes a prediction about what happens next, but before that prediction is finalized, it's run through an external planner that checks it against the strict symbolic rules it learned.

SPEAKER_01

Oh, wow. So if the neural net predicts a medical event that violates physical or medical logic, the system just rejects it.

SPEAKER_00

Exactly. It is fundamentally constrained by reality.
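[The veto loop described here can be sketched in a few lines: a neural model proposes scored candidate events, and a symbolic checker rejects any whose preconditions fail. All event names, scores, and rules below are invented for illustration.]

```python
def plan_next(candidates, world_state, rules):
    """Return the highest-scoring candidate event whose symbolic
    preconditions hold in the current world state, else None."""
    for event, score in sorted(candidates, key=lambda c: -c[1]):
        if rules.get(event, set()) <= world_state:
            return event
    return None  # no logically valid continuation exists

rules = {
    "administer_dose": {"patient_admitted", "allergy_checked"},
    "check_allergies": {"patient_admitted"},
}
# Neural side prefers administer_dose, but the allergy check hasn't happened.
candidates = [("administer_dose", 0.9), ("check_allergies", 0.6)]
state = {"patient_admitted"}
print(plan_next(candidates, state, rules))  # "check_allergies": dose is vetoed
```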

SPEAKER_01

It is incredible to see how we are building these safety nets directly into the architecture. It really makes you optimistic about our ability to engineer solutions to massive complex problems.

SPEAKER_00

The capacity to foresee and optimize the future is honestly just beginning. It is going to help us so much.

SPEAKER_01

It really is. And it leaves you with a really fun thought experiment, too. Like imagine applying a personal lem to your own habits.

SPEAKER_00

Well, that'd be amazing.

SPEAKER_01

Right. What hidden rules and positive chain reactions could you optimize to design your absolute perfect day? Definitely something to think about.

SPEAKER_00

For sure.

SPEAKER_01

Well, if you enjoyed this deep dive, please subscribe to the show. Hey, leave us a five star review if you can. It really does help get the word out.

SPEAKER_00

Thanks for tuning in, everyone.

SPEAKER_01

Keep dreaming big because the future of humanity is looking incredibly bright.