Causal Bandits Podcast

Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com

April 01, 2024 Alex Molak Season 1 Episode 13


Love Causal Bandits Podcast?

Help us bring more quality content: Support the show

Video version of this episode is available here

Causal Inference with LLMs and Reinforcement Learning Agents?


Do LLMs have a world model?

Can they reason causally?

What's the connection between LLMs, reinforcement learning, and causality?

Andrew Lampinen, PhD (Google DeepMind) shares the insights from his research on LLMs, reinforcement learning, causal inference and generalizable agents.

We also discuss the nature of intelligence, rationality and how they play with evolutionary fitness.

Join us in the journey!

Recorded on Dec 1, 2023 in London, UK.

About The Guest

Andrew Lampinen, PhD is a Senior Research Scientist at Google DeepMind. He holds a PhD in Cognitive Psychology from Stanford University. He's interested in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment.

Connect with Andrew:
- Andrew on Twitter/X
- Andrew's web page

About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).

Connect with Alex:
- Alex on the Internet

Links
Papers
- Lampinen et al. (2023) - "Passive learning of active causal strategies in agents and language models" (https://arxiv.org/pdf/2305.16183.pdf)

- Dasgupta, Lampinen, et al. (2022) - "Language models show human-like content effects on reasoning tasks" (https://arxiv.org/abs/2207.07051)

- Santoro, Lampinen, et al. (2021) - "Symbolic behaviour in artificial intelligence" (https://www.researchgate.net/publication/349125191_Symbolic_Behaviour_in_Artificial_Intelligence)

- Webb et al. (2022) - “Emergent Analogical Reasoning in Large Language Models” (https://arxiv.org/abs/2212.09196) 

Books
- Tomasello (2019) - “Becoming Human: A Theory of Ontogeny”

Support the Show.

Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com

Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4


Andrew Lampinen: [00:00:00] When you try to constrain the reasoning processes too strongly, that actually is going to make the system more fragile, because as soon as there's something weird in the world that doesn't quite match your assumptions, the system will totally break down. The objective of humans and of language models isn't to be rational reasoners, it's to...

Marcus: Hey Causal Bandits, welcome to the Causal Bandits podcast, the best podcast on causality and machine learning on the internet.

Jessie: This week we're traveling to London to meet our guest. As a child he loved to play chess. He studied math and physics, but decided to pursue a PhD in cognitive psychology because it seemed less abstract. He loves rock climbing and plays guitar. Senior Research Scientist at Google DeepMind. Ladies and gentlemen, please welcome Dr.

Andrew Lampinen. Let me pass it to your host, Alex Molak. 

Alex: Ladies and gentlemen, please welcome Dr. Andrew Lampinen. Welcome to the podcast, Andrew. 

Andrew Lampinen: Thank you [00:01:00] for having me. 

Alex: How are you today? 

Andrew Lampinen: Doing pretty well. How are you? 

Alex: I'm very good. We have some sun in London. 

Andrew Lampinen: Yeah, it's a lucky day. You came in at the right time.

Alex: Great. Andrew, in one of the recent papers you published, which was part of a series of papers about large language models and causality released in the second half of 2023, you talk in the title about active and passive strategies. This is maybe a little unusual relative to the traditional way of talking about machine learning models in the context of causality, where authors tend to extrapolate the naming convention from Pearl's causal hierarchy and talk about models or training regimes that are observational, interventional, or counterfactual.

What dictated this choice of wording or this choice of concepts? 

Andrew Lampinen: Yeah, I'm glad you brought that up, 'cause that's actually one of the key distinctions that we want to make in this paper. Although language models are trained passively, [00:02:00] that is, they're just processing language data from the internet that has been generated by someone else, that doesn't necessarily imply that that data is purely observational.

So, for example, if you're reading a scientific paper, even though you're merely absorbing that data, and you're not out there making those experimental interventions yourself, those data remain interventional data, and you can learn real causal information from them. And so the first thing that we point out in the paper is that, actually, language data on the internet is interventional.

It contains science papers, it contains Stack Overflow posts where people are debugging something and trying some experiments and seeing what works and what doesn't. It contains just conversations where people are talking to each other and each thing they say is an intervention in that conversation.

And so even though language models are learning passively, they're learning from data that's interventional. And that is an important distinction. 

Alex: Does this fact impact their capabilities to generalize, beyond what we maybe traditionally think? 

Andrew Lampinen: That is exactly what we wanted to explore in the [00:03:00] paper.

So in the paper, we suggest that there are two ways that this kind of training on language data from the internet could give some sort of causal strategies or understanding that generalizes beyond the data the model was trained on. The first case is what we call causal strategies. And what we mean by that is that by learning from others' interventions, the models might be able to discover a strategy for intervening that they could apply in a new situation to discover new causal structures, and then to use those for some downstream goal.

And so what we suggest in the paper, suggest formally and then show empirically, is precisely that you can discover, from purely passively observing someone else's interventions, a generalizable strategy for intervening to determine causal structures, which a system could then deploy downstream.

Alex: Can you tell our audience a little bit more about how you structured the learning task for the language models in your paper? And what were the main insights based on the results that you were able to achieve and show in the paper? 

Andrew Lampinen: So one of the hard [00:04:00] things about studying large language models, of course, is that it's hard to understand everything that's going into the training corpus.

Even if we can search through it, it's hard to know all the things that are in there that might just be slightly rephrased or something. And so one of the things that we try to do in our program of work is to do more controlled experiments where we train a simpler model on a data distribution that we really understand and then see how well it generalizes or doesn't.

So what we did in this paper is that we trained a model on a distribution of data which shows interventions on a causal DAG. In each document, or each episode in the data, the model sees a series of interventions on a DAG, and then it sees a goal, like "maximize this variable", and a series of interventions that try to achieve that goal.

And so what we wanted to test in the paper is: if the model passively sees this data of interventions that are trying to discover a causal structure and then use that causal structure to achieve a goal, trained on a set of DAGs that hold out certain kinds of causal structures, so it doesn't see everything in training, [00:05:00] can it generalize to itself intervening actively, really discovering new causal structures and exploiting them at test time? And the way we test this mirrors a language model: when you deploy it, suddenly it becomes active, right? It's talking to a user, intervening on the user, rather than just passively absorbing data anymore.

We similarly train the system passively and test it interactively, and we show that the system is able to take the causal strategies it passively observed in training and deploy and use them to actively intervene at test time, discover new causal structures, and exploit them. Then we compare the way the model is doing this to various heuristic or associational strategies.

We show that the model is much more closely approximating correct causal reasoning than any of those simpler baselines. 
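The setup Andrew describes, episodes of interventions on a DAG followed by a goal and goal-directed interventions, can be sketched as toy data generation. This is a minimal illustration of the episode structure only, not the paper's actual environment; every name here (`make_episode`, the chain-shaped DAGs, and so on) is hypothetical:

```python
import random

def random_chain_dag(n=4):
    """Toy DAG as a dict: variable -> list of its children (a random chain)."""
    order = list(range(n))
    random.shuffle(order)
    return {order[i]: [order[i + 1]] for i in range(n - 1)}

def intervene(dag, var):
    """Return the set of variables affected downstream by intervening on `var`."""
    downstream, frontier = set(), [var]
    while frontier:
        v = frontier.pop()
        for child in dag.get(v, []):
            if child not in downstream:
                downstream.add(child)
                frontier.append(child)
    return downstream

def make_episode(n=4):
    """One 'document': exploratory interventions on every variable, then a goal
    and the interventions an expert would choose to influence the goal variable."""
    dag = random_chain_dag(n)
    records = [(v, sorted(intervene(dag, v))) for v in range(n)]
    goal = random.randrange(n)
    # the expert intervenes only on variables whose downstream effects reach the goal
    expert = [v for v, effects in records if goal in effects]
    return {"interventions": records, "goal": goal, "expert_actions": expert}

episode = make_episode()
```

A model trained to predict such episodes token by token never acts during training, yet the expert segments demonstrate an intervention strategy it can reproduce actively at test time.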

Alex: In another paper in that series, called "Causal Parrots", co-authored by Matej Zecevic, who was our guest in the 0th episode of this podcast, the authors propose a hypothesis [00:06:00] that says that large language models are learning a so-called meta-SCM, a meta structural causal model. And this meta structural causal model is learned based on what the authors call the correlations of causal facts that are present in the training data. And so one of the proposed conclusions in this paper is that such models can talk causality, but they do not reason causally.

Do you feel that the results from your paper are contradicting this hypothesis? Or maybe are they complementary in some sense? 

Andrew Lampinen: Well, I would say to some extent they are contradicting it, in the sense that I think our results suggest that the models are capable of discovering some sort of causal reasoning algorithm that they can apply in a new setting in a generalizable way, at least if they have a good enough training regime.

Now, I think that on natural data the models are probably learning a variety of things. And of course, in our training distribution there were a lot of interventions, whereas natural data might have quite a few more correlations and only occasional interventional data. [00:07:00] And so that data mixture will change what the models are learning, and they might be learning a lot of strategies that are more correlational, meta-SCM-like, and relatively fewer causal strategies.

However, what we show at the end of our paper is that if you test language models on experimenting just to discover causal structure, on tasks similar to the ones we used for our simpler experiments, in certain cases, in particular if you give them explanations in the prompt, they're actually able to do this pretty effectively, to discover new causal structures that aren't included in the prompt.

And so I think that suggests that there is at least enough interventional data in the language model training distributions that they are able to discover some of these strategies. But in terms of what they do on average, or in most of the cases where you deploy them, it could be something a little more like what they're describing in the Causal Parrots paper.

Definitely. 

Alex: In your paper, you also emphasize the importance of explanations. This maybe directs us to the broader topic of [00:08:00] structuring the training regime in a certain way. What are your thoughts on this? Is the training paradigm or training regime that we use today to train those models helpful for them to use causal structures, or not necessarily?

Andrew Lampinen: I mean, I think that the training paradigm we use today is very much driven by what works and what allows you to use data at scale. Almost always, if you were designing a training paradigm for a model, you would try to leverage more kinds of auxiliary tasks, for example, that help it to understand the data, if there's something you know about the data yourself.

So, what we did with explanations in this paper and what we've done in some prior papers is to show that if you know something about a task, like you know, for example, why a reinforcement learning agent is getting a reward from the environment, giving it a natural language explanation of that reward and asking it to just predict those explanations as part of the learning process can actually improve what it's learning, and you can even use this in cases where the data are totally confounded to shape how the model [00:09:00] generalizes out of distribution.

I think that to the extent that you know more about the data, there are definitely many things you could do to better structure what the model is learning and to change the way it generalizes. I think the tricky thing is that we don't know that much about the structure of data on the internet, and there's a trade-off between trying to do something that we really understand at small scale and trying to get as general a system as possible.

And since training large language models is quite a new paradigm, well, language modeling itself isn't a new field, but training large language models is, I think people are just barely starting to scratch the surface of what you can do in terms of structuring the training process in a more interesting way.

So one example of this is that people have recently started conditioning these models on a signal of quality, like a reward estimate or a quality estimate for the data, during the training process. And that actually can help in certain cases to disentangle the training [00:10:00] a bit more: you can still learn from the worse data, but if at the end you say, okay, now produce only high-quality things, you can push the system to generate high-quality responses at test time.

And this is similar to things people have done in offline RL contexts for a while, things like Decision Transformer or upside-down RL, where you just condition on a signal of how good something is, which allows you to learn from the bad data without necessarily replicating that data at test time.
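The quality-conditioning idea Andrew mentions can be sketched roughly as follows. This is a minimal illustration with hypothetical tags and a fixed threshold; real systems typically use a learned reward or quality model rather than a given score:

```python
def tag_by_quality(text, quality_score, threshold=0.5):
    """Prepend a quality token so the model learns p(text | quality tag).
    At test time you prompt with the <high> tag to steer generation toward
    the kind of text that scored well, while still training on everything."""
    tag = "<high>" if quality_score >= threshold else "<low>"
    return f"{tag} {text}"

# training corpus: (document, estimated quality) pairs
corpus = [("a careful, well-sourced answer", 0.9), ("spammy filler text", 0.1)]
training_docs = [tag_by_quality(text, score) for text, score in corpus]
```

The point is that low-quality data still teaches the model about language; the tag lets you avoid imitating it at generation time.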

Alex: In your paper you are talking about large language models, but also about agents. For those people in our audience who are less familiar with reinforcement learning paradigms, could you give a brief introduction or description of the intuition for how large language models and agents are related?

Andrew Lampinen: So actually, I think that the term agents is quite general, and large language models can fit within the paradigm of passively trained agents. I think of an agent as just a system that takes inputs, such as observations of the world, [00:11:00] and produces output actions in a sequential decision making problem.

This covers, for example, language models, which take inputs in natural language and produce a sequence of language in response, maybe in an interaction with a user. It also covers things like the chess- and Go-playing engines that we've worked on in the past at DeepMind, and systems that play video games like Atari.

It covers quite a wide range of systems. The key part to me is this aspect of interaction with the environment. And one of the key distinctions to make here is between systems that are interacting with the environment at training time and those that interact only at testing time. If you have an agent, it's probably because you want it to interact at some point.

But often you train these agents at least partly on data where they are not interacting; for example, they're just observing what expert StarCraft players or expert Go players would do, and learning from that. And then later on you might do some reinforcement learning training where you're actually playing them against other agents, giving them rewards if they win, and then [00:12:00] using that as a way to improve them.

And that's similar to the paradigms people are using with language models now, where they first train them on a lot of passive data, just language that humans have generated on the internet. Then they do some fine-tuning, and finally a reinforcement learning step, where they train a reward model based on which kinds of language responses humans prefer, and use that to reward the models for producing the kinds of responses humans will prefer.

Alex: Some people would argue that autoregressive models might not be very good at generating very long text, or predicting long time series, because of the mechanism of accumulating error. The model is conditioning the next token, or the next item or data point, on its previous generation, and this generation necessarily contains some error, and this error can be reinforced over time steps.

Interventions might be used in order to decrease the [00:13:00] amount of error. How much intervention or action do we need to make autoregressive models reliable for longer form content? Let's think about text generation or video generation. 

Andrew Lampinen: So I think there's a lot of things that go into this question. I mean, one point I want to make is that even with current language models, at least the state of the art systems, they have some ability to error correct in the sequences they're producing.

If you do something like chain of thought, you will occasionally see the models produce something and then say, oh no, that's a mistake, actually I should do this, right? So it's not the case that error is necessarily monotonically increasing over time, although I will agree that the systems are not particularly good at this yet. I think that part of what's important here is that the system is able to reconsider the prior context and maybe find errors in its reasoning that way. And so the architecture of the system, for example its context length and its ability to use the earlier generations, may affect its ability to do that in non-trivial ways. [00:14:00]

Now, coming to the intervention part: it's a well-known fact that passive learning is not very efficient and tends to lead to worse generalization out of distribution. That doesn't mean that it's impossible for these systems to generalize, but with simplistic strategies for passive learning at least, you tend to get this problem where the system is accurate on the data distribution, but once it starts being active it will move off that distribution, and then it will start to break down because it's never seen any situation like this before.

And so in the offline reinforcement learning literature there have long been techniques like DAgger, where basically you get a little bit of interventional data that allows you to recover back to the data distribution if you step off of it. And those kinds of strategies are quite important for getting a robust system in practice.

So you could see things like the active reinforcement learning people do with language models at the end of training as a way of doing this. Or even if you're doing supervised tuning, but on real generations the model produces when interacting with users, this is a kind of interventional data.

But the question of [00:15:00] how much of that data you need to get a robust model is, I think, a very hard question to answer. You would have to make very strong assumptions about the data-generating process, like what kind of structures you're trying to get the model to learn and how reliably the model can error-correct, and I think all of those questions are quite open at the moment, so I don't think it's easy to put a number on it.
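The DAgger-style correction mentioned above can be sketched as a short loop. This is a minimal sketch of the general idea from the imitation-learning literature, with hypothetical `env`, `expert`, and `policy` stand-ins rather than any particular library's API:

```python
def dagger(env, expert, policy, rounds=3, horizon=10):
    """Minimal DAgger loop: roll out the learner's OWN policy (so we visit the
    states it drifts into, off the expert's distribution), but label every
    visited state with the expert's action, then retrain on the aggregate."""
    dataset = []
    for _ in range(rounds):
        state = env.reset()
        for _ in range(horizon):
            dataset.append((state, expert(state)))  # expert labels learner's states
            state = env.step(policy(state))         # learner's action drives rollout
        policy = policy.fit(dataset)                # retrain on all data so far
    return policy
```

Because the expert's labels cover the states the learner actually reaches, the retrained policy learns how to recover once it steps off the demonstration distribution.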

Alex: I see a very interesting parallel between what you just mentioned and ideas from the causal inference literature, where people talk about causal data fusion, mixing observational and interventional data in order to, on the one hand, recover the causal structure of the problem, and on the other hand, maximize the efficiency of inference.

Have you ever thought about those two ideas, those two paths, and potential relationships between them? 

Andrew Lampinen: Yeah, that's a great question. So we didn't really address this in the paper, but we did have some cases where we [00:16:00] got the system to use what we called prior knowledge from the literature.

You can think of it as coming from an observational source, where we cue the model on which variables in a system might be relevant, and then the system learns, in a generalizable way, to intervene only on the relevant variables. So it's most efficiently determining the causal structures that are necessary and not exploring the things that aren't.

You could imagine similarly having a system that also combines observational data in as a way of efficiently discovering which things might be the most promising to intervene on, or as a part of the learning process more generally. I think that would work pretty well.

And I have some colleagues, like Rosemary Ke, who have worked on things in this space. So I definitely think it's a promising area. 

Alex: Where are we today, from your perspective, when it comes to building useful sequential decision-making systems? Useful for what? Useful for some problems. 

Andrew Lampinen: So I guess like, I don't personally use large language models that much, but [00:17:00] I would consider them a sequential decision making system, and I've talked to a number of people who have found them quite useful for things like writing, for idea generation, even for just sort of like critiquing their thinking on a certain problem.

So, in some sense, I would say language models are a useful sequential decision making system. I wouldn't say they're a useful autonomous sequential decision making system. Perhaps that would be an important distinction to make, right? They're useful if you have a human in the loop, at least for right now, because they're way too unreliable to use without a human.

But I think they're useful in certain cases. And honestly, I'm more interested in AI systems as tools to help humans rather than AI systems to be an autonomous agent, at least in deployment time, right? I'm more excited about, for example, things that a scientist could use as a tool. Like for helping them to understand some complex system better or helping them to reason about a problem.

I think that is probably going to be one of the most important and impactful use cases. So, as far as I'm concerned, language models are a useful sequential decision maker. Of course, they could be a more useful sequential [00:18:00] decision maker and I'm sure there will be improvements in that very soon. 

Alex: You talked about helping scientists, helping humans, building systems that are helpful for someone.

What drives you in your work? 

Andrew Lampinen: Yeah, that's a great question, because I actually feel like a lot of the work I do is not particularly useful compared to some of my colleagues who are working on things that directly help scientists, like protein structure prediction or fusion. I'm mostly motivated by curiosity, and by the hope that by developing some understanding of the systems that we're working with, we can better understand how to improve them, in ways that will be deployed downstream to help improve scientific applications or other things.

So I guess I'm motivated primarily by curiosity, and by the hope that the sort of pure research or more conceptual work that I do will be useful downstream in some application. But we'll see if it is or not. 

Alex: You studied mathematics and physics, and then you moved to cognitive psychology, with elements of cognitive science, neuroscience, and so on.

And then you ended up at DeepMind, or Google DeepMind, as we should call it now, working with [00:19:00] algorithms that, in many cases, somehow interact with some environments. What qualities or resources were the most helpful in your journey through all those stages, to work in one of the best-known AI labs in the world?

Andrew Lampinen: I think my foundation in physics and mathematics has been really useful to me across a lot of the stages of my career, because working in physics and mathematics you develop good intuitions for things like linear algebra that crop up all over the place, in the statistics we do in cognitive psychology, in thinking about how machine learning systems work, and so on.

So I think that those quantitative foundations have been very useful to me, and also the programming foundations that I acquired there and in high school. The main resource I'd point to, though, is actually the people that I've interacted with along the way, because one of the things that I find most valuable about working at DeepMind is the incredible people who are there, who come from all sorts of different backgrounds.

And similarly, during my PhD, I was very [00:20:00] lucky to work with an advisor and in a department with a diversity of backgrounds and really supportive people who are excited to teach you about the things that they're interested in. And so, yeah, I think I've learned the most from the people I've met along the way.

Alex: Do you think that causality is necessary for generalization? 

Andrew Lampinen: I think that causality is one thing that you can look for as a tool of generalization. I mean, I often think about problems in mathematics because of my background there. And for example, in mathematics, if I have a system that's like a theorem prover that generalizes to some new theorems,

I don't know if thinking about that in terms of causality is the most useful way to understand it, because I tend to think of math as being fundamentally about equivalences that don't really have a causal direction, right? However, for a lot of real-world systems and agentic systems, causality at least offers very useful intuitions for thinking about how a system will generalize and where things might break down. So I do find [00:21:00] intuitions about confounding and intervention and so on to be quite useful and practical, at least in how I think about the problems of generalization for agentic systems specifically. 

Alex: You mentioned mathematical theorem provers. Those systems are often, maybe always, built as a series of logical steps which involve symbolic operations. So, in contrast to large language models, which, at least in my understanding, learn some kind of a smooth approximation or smooth representation of concepts, symbolic systems are on a more discrete end.

There's a whole field called neuro-symbolic AI, and there's absolutely neuro-symbolic causal AI as well. In some of your work, you are also interested in representation learning. Do you think that combining representation learning as we usually do it today, in this soft, differentiable way, with symbolic, more discrete systems is a path that has [00:22:00] a bright future?

Andrew Lampinen: I do, but I think about this in kind of the opposite way from many other people. In particular, what I'd like to say is that I think symbolic systems, logical reasoning systems, are useful tools for an intelligent system to use, but they are not intelligence in itself. And what I mean by that is, actually, if you think about humans, we are not particularly good logical reasoners, and we are not particularly good mathematical reasoners, without extensive training.

Right. And because of that, we build tools like mathematical proving systems to help us with the parts of solving problems that we're not particularly good at. But I would say those tools are most useful insofar as they're used by a system that is intelligent. And I think that intelligence tends to take something more like the form of a continuous, fuzzy sort of reasoning system, and the symbolic logic is just a useful tool for those systems to use to tackle certain kinds of constrained problems that those logical systems are good at. So I do think that using symbolic approaches in [00:23:00] certain situations, like in mathematics or in physics or in many other computational sciences, is very, very useful.

I think that the most effective, most general ways to approach that will be through systems that put the fuzzy, continuous learning systems in control and just use the logical systems as a tool, just like humans do. 

Alex: Yeah, I'm happy you said this. I think I agree, and maybe not many people think about this in this direction, so I feel less alone in the world. I often also argue that people are not that good at causality, which might be, well, counterintuitive sometimes when we think about causality in humans. In particular, if you start reading about this, you'll often end up at some point, at least that's one of the paths you can take, with research coming from Alison Gopnik, who's a developmental psychologist.

A very prominent one. She and people on her team have shown repeatedly that human babies, when they interact with [00:24:00] the world, do it in a systematic way that allows them to build world models. And it seems that the motivation for this exploration, or at least some structure that allows them to do this in a way that is systematic, is, at least to some extent, inherited, inborn, or developed through evolutionary paths, maybe.

Do you think that these types of solutions that come from evolution, and I don't know if you would agree with me that they come from evolution, but assuming that they do, do you think that those can be imitated in a simple computational manner? Or is there something, maybe some underlying complexity in the evolutionary process,

that is not that easy to emulate?

Andrew Lampinen: Yeah, I think that's a great question. I don't think we know the answer yet. One different instance I'll point to is the fact that, you know, for a long time, people thought that the [00:25:00] grammatical abilities of humans were evolved and couldn't possibly be learned from any finite amount of data, particularly from only positive examples.

I think what language models show is that actually, you know, you might need more language data than humans learn from, but you don't need that much more, and actually people are trying to push that amount of data down even further and get it to a really human-like regime. As a side note, I actually think that embodied experience might be quite important to our efficiency in learning language, and we had a paper a few years back showing that systems that are embodied in richer environments show better language generalization, for example.

Now, coming back to the causality point, though, I don't think we know yet exactly what it will take to get human-like inductive biases into a system. Maybe you do need more data, or maybe the active experience that infants have as they're growing up, before they come to Alison's lab, is quite important.

One thing that I think is very interesting, and I'm glad you brought up Alison's work, is that some of her most famous studies are actually quite close to the paradigm I'm describing of passive observation, because the children [00:26:00] don't perform the experiments in these studies themselves, at least not at first. They observe a lab assistant putting some things onto different detectors and seeing what lights them up and what doesn't.

And then it's only from observing those interventions that the children get some sense of the causal structure, which is then tested in some trials where the kids are actually allowed to play with the detectors and the different things. So I think it actually has a somewhat close analogy to the kinds of learning that I'm describing in this passive learning paper.

And I think there's some very interesting connections to be made there. Now, Alison and some people from her lab did try these kinds of studies with language models, in a paper that came out a little bit before mine did. And they showed that the language models are still not particularly good at them.

So it does seem like there's something there that's missing, and perhaps it is sort of this early interactive experience that the models are missing. But I think we'll have to do some more exploring to resolve that.

Alex: Often when we talk about large language models, we can see some people giving comments about understanding.

So they say, ah, they are just predicting the next token, but they don't really understand what language [00:27:00] is. They don't have the semantics. It's, in a way, similar to the Chinese room argument. What is understanding?

Andrew Lampinen: I guess I tend to think of understanding and many other properties as being sort of graded things. You know, it's not like a system discretely has understanding or doesn't.

Rather, it's like, well, understanding is something you have to a degree. It's the degree of richness of the representation you have of something, the degrees of ways you can generalize that. So again, returning to math, I think about this a lot in the context of, you know, if you have a student in an intro calculus class, they might understand differentiation, for example, to some extent, but probably their math teacher understands it better, and someone who's a researcher in, sort of, analysis would understand it even better, right?

There's sort of different degrees of understanding of a concept, even if you're able to work with it to some extent. So, I think that language models do have some degrees of understanding of certain features. Of course, there are some aspects of our [00:28:00] understanding of the world, for example, that are really about the phenomena of perception, which they can't possibly learn about.

But that doesn't mean they don't have any understanding at all. And similarly, the fact that they are trained by just predicting the next token, I don't think that means they don't have understanding. I would think of understanding as, you know, the degree to which, if you test them on something by asking them questions about it, they get those questions right.

That's how I would measure understanding. I think by that measure, they have some understanding, often imperfect, but then if you ask the average human questions, they'll often get some of them wrong. So I would still say that there's some understanding there. 

Alex: We started our conversation today talking about your paper about passive strategies and active strategies and so on.

And you shared with us some of the results showing that large language models can generalize to new causal contexts if the training data have certain properties. Some time ago, François Chollet published [00:29:00] his set of tasks to measure what he calls intelligence. These are sets of tasks called ARC and ARC 2.

And it seems, and there's a lot of detail to this, that in some sense large language models are not performing very well on those tasks. What is important about those tasks is that there is a concept and there is a structure, and the concept and the structure are orthogonal.

What kind of training or what kind of data would we need for large language models in order to make them perform better on these types of tasks?

Andrew Lampinen: Yeah, that's a great question. So, one thing I want to bring up in this context is that there is a nice paper by Taylor Webb and Keith Holyoak, which came out maybe six months or a year ago, looking at the ability of language models to understand analogical reasoning problems like the ones that humans have been tested on in the past.

And Keith Holyoak is one of the people who did a lot of the earlier work on human analogical reasoning. And so, analogical problems are really an [00:30:00] example of this case where you have this structure and you have some superficial details around it. The details are orthogonal to the structure, and so you can test the same underlying abstract structure in different situations.

So they tested these language models on some problems that are analogy problems with letter strings and things like that, which are somewhat similar to some of the ARC problems. And they did find that the later-generation models are doing increasingly well on these problems, even in fairly complicated cases, often nearly as well as humans.

So I do think that, to some extent, you can get some performance on these problems out of these systems. At the same time, one of the things I think about a lot is that in humans, formal reasoning is really an ability that we have to train in depth, right? Like, you learn to be a formal reasoner through years of education, maybe like 20 years of education, to become a researcher in mathematics or logic or something like that, right?

I think that we perhaps underestimate the extent to which a system needs really [00:31:00] rigorous and perhaps interactive training to efficiently learn these kinds of formal reasoning strategies. And we maybe overestimate the extent to which humans are just naturally good reasoners. And I think part of the reason we overestimate this is that the people who run these studies are exceptional, and they tend to be highly educated humans, right?

They are the people who have gone through these years of rigorous training in order to become more formal reasoners. And I think we sort of suffer from this expert blind spot, where it's hard to step back and say, well, actually, how easy is it for the average person to reason about a logical problem, or to do these sorts of structure-mapping reasoning processes?

So one slightly different example of this: in a recent paper, we looked at how language models and humans reason about logical problems that have some underlying semantic content. And what we showed is that basically language models and humans both do better when the semantic content of the problem supports their logical reasoning.

And they do much worse when the semantic content contradicts the sort of [00:32:00] conclusions that you would draw from logic. And so what I think this is pointing to is the fact that, you know, like humans, language models are imperfectly using logic to address a problem, but that's maybe kind of a natural thing to do, unless you've been very carefully educated to take a more rigorous, strictly formal approach.

Alex: We as humans are also exceptionally good at building defense mechanisms that allow us to think about ourselves as better than we might be in certain contexts. That's true about reasoning as well, I'm pretty convinced.

Andrew Lampinen: Yeah, definitely. I mean, I think that, you know, I make all kinds of reasoning mistakes all the time, and if someone who's skeptical of reasoning saw some of those, they would say, oh, that's a system that really doesn't understand.

But I like to think, at least, that I do.

Alex: Yeah. Yeah, I think we all like to, and I think it's healthy, actually, to think this way about ourselves, at least to a healthy extent, whatever that is. Andrew, what would be your advice to [00:33:00] people who are entering a complex field like causality or reinforcement learning or machine learning or theoretical physics?

Andrew Lampinen: My perspective on this, which comes perhaps from physics to some extent, is that the best way to learn about something is to play around with it, right? So if you're learning about a mathematical concept, or if you're learning about a causality concept or a reinforcement learning concept, the best way to learn is to code that thing up, right?

Try it out, change some parameters, see how things vary. See, you know, if you change from more observational data to more interventional data, what does the system learn or fail to learn? Doing those kinds of experiments yourself and playing around with that, I think, is the best way to build the intuitions that are the foundation of our understanding.
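To make that concrete, here is a minimal sketch of the kind of experiment Andrew suggests. The structural model (a hidden confounder Z driving both X and Y, with no causal link from X to Y) and all of the coefficients are illustrative assumptions, not something from the conversation:

```python
# Toy experiment: compare what a learner sees under observational
# vs. interventional data, in a confounded system where X has NO
# causal effect on Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z drives both X and Y.
z = rng.normal(size=n)
x_obs = z + 0.1 * rng.normal(size=n)        # observational X follows Z
y_obs = 2.0 * z + 0.1 * rng.normal(size=n)  # Y depends only on Z

# Interventional regime: do(X = x) cuts the Z -> X edge.
x_int = rng.normal(size=n)                  # X set independently of Z
y_int = 2.0 * z + 0.1 * rng.normal(size=n)  # Y unchanged by the intervention

obs_corr = np.corrcoef(x_obs, y_obs)[0, 1]  # strong spurious correlation
int_corr = np.corrcoef(x_int, y_int)[0, 1]  # near zero: no causal effect

print(f"observational corr(X, Y): {obs_corr:.2f}")
print(f"interventional corr(X, Y): {int_corr:.2f}")
```

Playing with the coefficients, for example adding a real X-to-Y effect, shows how the two regimes come apart or line up, which is exactly the kind of hands-on intuition building being described.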

Alex: What's your message to the causal Python community? 

Andrew Lampinen: I guess one of the things that I take away from our paper, which I think is kind of an important point, is that, you know, if you have a system that treats different levels of abstraction, for example the raw data themselves, but also the [00:34:00] causal abstractions about them, and even counterfactual questions, like what would have happened if we'd done something else, in sort of a homogeneous way, you actually enable kinds of reasoning that might not be so obvious, like being able to learn about causal strategies, for example.

And so, I think my message would be: think about how you can build more holistic systems that are capable of bridging understanding across multiple levels. And I'd be really excited to see where the causal community takes that.

Alex: In your work, you extensively refer to reinforcement learning on different levels.

Also in this paper. For many people who are maybe less aware of your work, but are interested in reinforcement learning in the context of causality, they might be familiar with the works of Elias Bareinboim. What would you say are the main similarities and differences between your work and Elias's work?

Andrew Lampinen: Yeah, that's a great question. So I'm not familiar with all of Elias's work, but he does [00:35:00] have some papers that are quite closely related to the kind of work we did in this paper, looking at what you can learn from sort of passively imitating a system. I would say the key difference between the work that I'm familiar with from him and our work is that we considered

fundamentally a meta-learning regime, where you don't have a system that's learning about a single problem, but a system that's learning how to adapt in new situations, to new problems. They're learning how to learn; that's the meta-learning part.

Whereas a basic learning system is just learning about a single structure or a single problem. So in Elias's work, they were looking at what you can learn by imitating in a single case. And of course, there you're really bound by the fact that the data are observational.

The key point we make in our paper is that actually, if you're learning how to learn by intervening, that is something you can learn just by passively imitating what someone else has done, and you can actually deploy that to [00:36:00] acquire more general causal understanding in new situations. So I think that is the fundamentally important distinction between those works.
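Very loosely, the distinction between basic learning and meta-learning can be sketched as follows. Everything here, the toy "guess the hidden mean" tasks and the single tunable step size, is a hypothetical illustration, not the setup from the paper:

```python
# Basic learning vs. meta-learning, in miniature.
import numpy as np

rng = np.random.default_rng(1)

def adapt(alpha, samples):
    # Inner loop: a basic learner that adapts an estimate of ONE task's
    # hidden mean, one observation at a time, with step size alpha.
    est = 0.0
    for s in samples:
        est += alpha * (s - est)
    return est

def average_error(alpha, n_tasks=500, n_samples=20):
    # Outer loop: evaluate the adaptation rule across a whole
    # distribution of tasks, each with a freshly drawn hidden mean.
    errs = []
    for _ in range(n_tasks):
        mu = rng.normal()                          # new task: new hidden mean
        samples = mu + rng.normal(size=n_samples)  # noisy observations of it
        errs.append((adapt(alpha, samples) - mu) ** 2)
    return float(np.mean(errs))

# Meta-learning: instead of fitting one task, tune the *learning rule
# itself* (here, just alpha) so that it adapts well to unseen tasks.
alphas = np.linspace(0.05, 1.0, 20)
best_alpha = min(alphas, key=average_error)
print(f"meta-learned step size: {best_alpha:.2f}")
```

A basic learner would stop at `adapt` applied to a single task; the meta-learning step is the search over `alpha` across many tasks, i.e. learning how to learn.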

Alex: When we were speaking a second ago, you mentioned systems that could learn in this soft way and then just use logical, symbolic systems as a tool in cases where they can be useful. Antonio Damasio, in one of his books, was describing the case of one of his patients, whom he nicknamed Elliot.

So Antonio Damasio is a neurologist and neuroscientist, and this patient had a particular brain injury that disconnected lower parts of his brain from his prefrontal cortex. And this patient was doing exceptionally well in logical reasoning. So in tasks like intelligence tests and mathematical reasoning tests, he was above the [00:37:00] average everywhere.

But when it comes to his personal life, he was suffering greatly; he lost his family. One day he went out on the street, met a bunch of people somewhere in a car park, and started talking about business with them, and they asked him to give them his money, and he just gave them a huge amount of money, like most of his money, I don't remember exactly.

And all this happened to a person who was exceptionally intelligent in terms of how we tend to measure intelligence. Perhaps this story tells us something important about our own rationality. What are your thoughts about the rationality of artificial systems? And what role would those symbolic capabilities play in those systems, when we think about them from the point of view of rationality and decision making?

Andrew Lampinen: I think that's a really interesting question. So I mentioned earlier this paper on how humans and language models show similar patterns of content-entangled logical reasoning. And one of the arguments we make in that paper is that [00:38:00] there's actually a rational reason to do this, if it helps the system make better decisions across the distribution of situations that it encounters every day.

So, humans tend to make better logical inferences in situations that are more familiar to them, which are of course naturally the ones they tend to encounter more often. And similarly, language models perhaps make better inferences in the situations they tend to encounter in their data distribution.

So I think one way of framing these results is that, well, the objective of humans and of language models isn't to be rational reasoners, it's to be sort of adaptive to the situations we encounter. And that might actually mean that it's better to be less rational, but to do better on the kinds of situations that you tend to encounter every day.

And I think that might be one thing that's playing out in these kinds of cases. You know, humans are sort of adapted to be irrational in selective ways that perhaps make us better able to handle the kinds of social situations and other [00:39:00] things that we encounter. And maybe if you're too rational in a logical sense, that will actually work against you in certain cases.

And so there's actually a whole literature on things that are called bounded rationality, for example, which tries to come up with normative theories of why humans are irrational, based on similar arguments: that overall it helps us to be irrational because, for example, we have limited resources and so we can't make the fully perfect rational inferences all the time, or because it helps us to be more accurate in the kinds of situations we encounter every day.

Alex: Perhaps also our definition of rationality as related to formal reasoning, and maybe this example shows this, is not a useful definition in terms of what is really rational in the decisions that we make in the real world, in everyday life.

Andrew Lampinen: So I think a lot of these approaches that try to come up with a more normative theory draw on sort of more probabilistic accounts of what's rational, and of how you would do best in expectation over a probability [00:40:00] distribution.

And that tends to lead to slightly different patterns of behavior than you would get from just doing sort of strictly formal logic. 

Alex: It seems that when we look at complex systems, dynamical and complex systems, when we try to over-regularize those systems, to make them perfect, it might cost us very unpleasant and very abrupt surprises later on, when the system dynamics just explodes, in a sense, at some point.

One example of this has been given by Nassim Nicholas Taleb in his book about antifragility. When you work with learning systems, is this intuition, that bounding or trying to limit systems in a certain way leads to detrimental consequences down the road, something that is also reflected in your work, something that you find useful?

Andrew Lampinen: Yeah, definitely. So I think that we've written a fair [00:41:00] amount about the sort of idea that instead of building constraints into a system, you should try to learn about those constraints from data. And we've demonstrated this in various contexts, but I think the general intuition is similar to what you're describing.

It's that if you have some idea about how you think a system should approach a certain class of problems, like you think, oh, it should really be a formal logical reasoner, and you try to constrain the reasoning processes too strongly, that actually is going to make the system more fragile, because as soon as there's something weird in the world that doesn't quite match your assumptions, the system will totally break down.

I think the approach that I try to take, for example, is to use things like these explanation prediction objectives that try to encourage the system to represent the things we think are important, but without overly constraining the internal computations of the system. And perhaps that's a way that you can, in a softer way, influence what the system is doing without putting too-hard constraints on it.

But of course, I think that there's a trade-off. [00:42:00] Like, if you know exactly what the problem is that you're trying to tackle, you can build in a really strong inductive bias, and you can make the system do better, of course. I think a lot of the cases I'm concerned with are things like, you know, things that work at scale, like how do you build a better language model, or how do you build a better agent that can interact in a rich virtual environment?

And for those kinds of situations, I don't think we really know what the right solution is. And so, when we try to impose something like, oh, I think there should be a formal hierarchical grammar here, we tend to probably get some things wrong about how you actually need to solve that problem in practice.

Therefore, we make the system more brittle. And so, I guess there's this idea that's gone around for a while now in the machine learning community, called the Bitter Lesson, from Rich Sutton, which basically says that if you try to build in what you think is the right way to solve a problem, it'll work at small scale, but as soon as you scale the system up, it tends to break down.

And I think, empirically, we've encountered that a lot. And so I tend to think that these sort of [00:43:00] softer solutions, like giving explanations or changing the data distribution that you're learning from, can be more effective and scalable ways to solve the problem, at least in certain situations.

Alex: In both machine learning research and causality research, we sometimes have very interesting papers that show some very interesting phenomena, but those phenomena are only demonstrated empirically in the context of a specific set of tasks.

In contrast, when we take those systems to the real world, the tasks are usually not that well defined, or the distribution of tasks is completely unknown. What are your thoughts about this discrepancy between the lab and the reality?

Andrew Lampinen: Yeah, that's a great question. So I think it's important to do work in controlled settings where we really understand what's going on, and usually that necessitates doing something that's a little bit more toy, because otherwise we can't possibly understand it. But I think that the real world is a good forcing function [00:44:00] for encouraging you not to overfit.

Because I think a problem that comes up is that people are designing a new algorithm, for example, that they think will work better, and they're designing the test for it simultaneously. And so they sort of test the algorithm on the perfect case that it will perfectly work for. And of course, their algorithm is better for that particular case.

But then if it's like, well, I want it to work across all the language tasks that you could learn from the Internet, maybe the assumptions that you made in designing that system won't hold anymore. And so, I think that working in the real world, like trying to build image processing systems that can handle any natural image they encounter, or trying to build language models that can handle sort of any natural language query from a user, while a very hard problem, and neither of those is even close to fully solved, I would say, is a really good way to test your ideas and make sure that they're really solving the problem and not just solving some toy problem that you designed.

Alex: What would be the main lessons that you [00:45:00] learned during your studies in psychology that help you in your work today?

Andrew Lampinen: I think there are so many things. I mean, I think cognitive scientists have spent a lot of time thinking about some of these causality-related issues, about things like confounding, or the different underlying processes that could explain the same observations that you make about humans in an experiment.

And so I think from an experimental design perspective, and also just thinking about doing rigorous statistics on the experiments we perform, I think cognitive psychology has a lot to teach machine learning, where often they don't even do any statistics; it's just, this number is bigger than that other one, so the system must be better.

So the system must be better. So, yeah, I think experimental design and doing rigorous statistical analyses are two of the main things I've learned. The other thing might be just sort of thinking at a more abstract level and trying to bridge from sort of The observations we make to more abstract models that could explain them.

I think that's something that's really emphasized in computational cognitive science, and is maybe a valuable skill for thinking about the kinds of [00:46:00] systems that we're working with today.

Alex: You mentioned statistics in the context of machine learning. You said that sometimes we just say, hey, this is a bigger number, right?

That's true, unfortunately. What are your thoughts about statistical learning theory and how it applies to modern machine learning systems?

Andrew Lampinen: Yeah, great question. So I think, you know, when you make a theory, just like when you make a simplified model, you build some assumptions into it. And it's important to consider what those assumptions are in order to understand how well the theory is going to apply to a new situation.

So statistical learning theory, I think, has some assumptions that are perhaps too strong in some cases for modern machine learning systems. Or maybe another way of saying it is that we don't quite understand how modern machine learning systems fit into statistical learning theory appropriately. So one example of this is that there's been a lot of theory work, maybe trying to bridge this gap in understanding, arguing that there are some implicit inductive biases in, for example, the architectures of our models, or the [00:47:00] learning processes of gradient descent, or how those things interact,

which cause the models to effectively not be as overparameterized as you would think of them as being, at least at early stages in the training process. And I think that those kinds of explanations, you know, point to a nuance that's maybe not really captured in at least the most naive versions of statistical learning theory,

where you sort of assume that the capacity of the system is sort of uniformly distributed over all the different ways that it could fit the function. Maybe that is not really true in practice when you train a system by something like gradient descent, for example.

Alex: Is the future causal?

Andrew Lampinen: Of course. Yeah, I mean, I tend to think that the world is causal, and so the future must be too.

Alex: From all you learned in your journey, what do you think is a good life?

Andrew Lampinen: Wow. That's an interesting question. I think the life I enjoy is being able to both engage with sort of intellectual issues, like asking questions about causality and intelligence, and even dabble in a bit of philosophy, although I'm not [00:48:00] an expert in that, not an expert in very many things, honestly, but also to have sort of a fulfilling life outside of research.

And I do a lot of rock climbing, as I think you mentioned in the introduction. And I have a lot of friends, and my partner, with whom I really value our time together. And so I think it's, yeah, it's good to have a balance of engaging in intellectual topics, but also, you know, having a more sort of physical engagement with the world, and having social relationships too.

I think that's a very important part of what it means to be human.

Alex: Who would you like to thank?

Andrew Lampinen: I would like to thank all of my collaborators at DeepMind. I mean, there are so many amazing people there. People like Stephanie Chan and Ishita Dasgupta, who I've worked with pretty closely on a lot of this work.

Also Felix Hill, who's been a real mentor to me over the years, and my grad school advisor, James McClelland, who has been a huge hero of mine and a real inspiration, and has really helped me to think carefully about science and statistics and everything else. And I'd also like to thank my partner, Julie Kasha, who has been a huge support to me and just tremendously helpful. [00:49:00]

Alex: Before we conclude, what question would you like to ask me? 

Andrew Lampinen: I'd like to ask you where you see the future intersection of causality and machine learning going. Like, what is your vision for how causality research and machine learning research will come together?

Alex: That's a great question. I've recently been thinking a lot about combining formalisms, causal formalisms like Pearl's, with things like differential equations and chaos in mathematics, dynamical systems and this kind of stuff.

And I'm also thinking about something, I feel, very similar to what you mentioned before: combining this soft learning with more formalized ways of doing inference. My intuition is that we as humans, in everyday life, use more soft, learned skills than formal reasoning, but we probably also have certain constraints imposed on [00:50:00] those softly learned associations, or maybe there's some structure, I don't know.

Maybe there are some constraints that are enforcing structure on our associative learning. And I agree with you, with what you said before, that formal reasoning is something that we need to be trained in. And I experienced this firsthand, because I studied philosophy. So I studied logic, and then more logic, and even more logic.

And then I started learning causality, already having experience in machine learning. And I found it deeply counterintuitive to internalize things like do-calculus, for instance. So I think these tools are useful, but they are not necessarily the main ingredient if, and I think this is maybe the main point of this answer, we want to build a system that is human-like.

And this conditional goes down to something that I think is very fundamental, a question that I think we don't ask ourselves as a [00:51:00] machine learning community often enough. And this is the question of what we are actually trying to do. So, do we want human-like systems that will give us a feeling that we are interacting with something that is similar to us?

Or do we want to build systems that will be superhuman in certain regards, and then, in which regards actually are we interested in building systems like this? So these are the things I'm thinking about. And I must say that this is probably a little bit on the philosophical end, a little bit further away from practice.

On the practical end, I'm thinking a lot about providing the community with more narratives about causality that will make causality more accessible to more people, because there's a certain learning curve to jump into these things.

Andrew Lampinen: Definitely. 

Alex: Yeah. And there's also, I think, a lot of misconception that comes from many different angles.

So I see [00:52:00] sometimes people from the experimental community saying things like, we are doing A/B testing, which is superior to causal inference, because causal inference has to be done from observational data, which is not true. We have people in simulation and digital twins who are saying, hey, we're doing simulation,

why would we actually need any causal inference? So I think a unified perspective, one that shows that interventions, or A/B tests, are a special case of causal inference, and so are simulations, because they are basically some kind of operationalization of structural causal models, is something that could be very beneficial for the community to move forward.

So this unified perspective is something I find fascinating, and I work every day on building narratives about this that could be readable to more people.

Andrew Lampinen: Yeah, I think that's super valuable. I mean, I think understanding technical topics is always hard. And so I really admire the work you're doing with the podcast and with your book and everything, to try to bring these [00:53:00] things in an accessible way to a broader community.

I think that's super important work. And yeah, trying to arrive at a more unified perspective of everything is the grand goal of many sciences, right? So yeah, that would be awesome.

Alex: Not an easy one. Thank you so much. Thank you, Andrew. Where can people learn more about you and your team's work?

Andrew Lampinen: Well, you can check out my website, if you just Google my name, Andrew Lampinen, or my Twitter. I often post on Twitter about my work and my colleagues' work, but also just other papers that I find exciting, other things in the field that I'm excited about. So yeah, follow me on Twitter. That's probably a good solution.

Alex: That's great. I have one more question that I want to ask you before we finish. What are two books?

Andrew Lampinen: The first one I would say is perhaps a more recent one, which is Becoming Human by Michael Tomasello. I think it's a really beautiful book. It's sort of a technical book. You know, he's a researcher who thinks about humans and what makes us special.

He does a lot of comparative work with other primates, for example. And it's a beautiful book for thinking about [00:54:00] how social interactions from a really early age are a fundamental part of human experience. That book provided a lot of inspiration for some of the ideas we talked about in a paper called Symbolic Behavior that we had a few years back, thinking about how social interactions might be really important for building more natural artificial intelligence.

For a second book, oh man, there are so many that it's hard to pick just one. This is a bit of a trite answer, but one of my favorite books ever is Ulysses by James Joyce. And one of the things I really like about that book is the way that it plays the style of the writing and the content together with the narrative in different sections of the book.

And I actually think that this bridging between different levels of abstraction, right? What is just the superficial style? What is the content? What is the overall underlying narrative? That is actually quite similar to some of the themes I'm interested in in my research, like how do you bridge between different levels of causal reasoning?

In my dissertation, for example, I was interested in bridging between reasoning within a [00:55:00] task and reasoning across tasks, and things like that. So I think, and perhaps I'm totally just retrospectively attributing this to myself, but perhaps some of the ways I thought about literature when reading Ulysses and other books like that percolated into the way that I think about the more complex formal systems that I reason about in my everyday work.

Alex: Thank you so much for the conversation. It was a pleasure, Andrew. Thank you very much.