Fat Tony's Podcast
Captivating conversations with amazing experts. Inspired by the works of Nassim Taleb, amongst others. Hosted by Sebastian David Lees.
Stephen Wolfram - Computation, Strategy, and a Life of Ideas
Some conversations feel like opening a secret door behind the everyday world. This one does. Stephen Wolfram joins us to trace how a teenage physicist became a toolmaker, language designer, and explorer of the computational fabric of reality—then shows why strategy, not bravado, is the quiet engine behind ambitious work.
We start with the leap from Oxford to Caltech and what an international physics community taught Stephen about reputation, reinvention, and ambition. From there we dig into the missing discipline in academia: strategy. Stephen explains how treating research like product, where ideas need a distribution channel, changes what gets built. He shares his system for shepherding decades-long projects, the moments when timing and tools finally click, and why exposition is the ultimate test of understanding.
Then we go deep. Language is compression; computation is the substrate; most of the universe is computationally irreducible. Stephen maps how science advances by finding pockets of reducibility - laws and models that compress behaviour into concepts we can think with. We explore the Ruliad, observer-dependent laws of physics, and why storytelling, morals, and scientific principles all act like fractal compression for meaning. We also unpack AI's true frontier: blending brain-like neural nets with symbolic, computational AI.
The final stretch tackles human immortality as both a technical and social horizon, the identity puzzles raised by AI clones, and the Wolfram Physics Project. Even if the universe runs on hypergraph rewriting at scales we can’t probe, the methods built for physics are already paying off in mathematics, machine learning, and distributed computing. If most of reality can’t be shortcut, progress is learning where to compress, and how to ship the tools that make those compressions useful.
Fat Tony's is more than a podcast - it's a community. Join our community at FatTonys.net.
Welcome everyone to another edition of the Fat Tony's Podcast. I'm your host, Seb Lees. Today's guest, Stephen Wolfram, is someone whose work has been deeply influential across modern science and computation for a long time, shaping not just what we can do, but how we think about doing it. Stephen is the creator of Mathematica and Wolfram Alpha, and is also, of course, the founder and CEO of Wolfram Research and the author of A New Kind of Science, as well as many, many other fantastic books. But more than any single title, he's someone who has spent decades following a very particular kind of curiosity: the kind that keeps asking what's really going on underneath the surface, and that isn't afraid to take the long way round, or step slightly out of the mainstream, to find out. From an early age, Stephen has moved comfortably between physics, mathematics, computer science, entrepreneurship, and even philosophy, building new tools along the way when existing ones weren't quite enough, and many of those tools turned out to be very, very valuable businesses. That way of working has led him to big and sometimes unfashionable questions about complexity, the nature of computation, and whether the universe itself might be best understood as a kind of computational process. In this conversation, we talk about those ideas, but we also talk about how he thinks: about tools, exploration, and about what it means to do serious intellectual work over a lifetime. It's a wide-ranging and thoughtful discussion, and one that really rewards taking your time with. So with that, here's my conversation with Stephen Wolfram. Enjoy. Stephen Wolfram, welcome to the Fat Tony's Podcast.

Stephen Wolfram: Hello.

Sebastian David Lees: So, I want to start at the beginning, or if not the beginning, certainly a while back. You attended Oxford for a couple of years in the late 70s, which you then abandoned to pursue a PhD in particle physics at Caltech. What was the impetus behind that decision? Especially as the world was a much smaller place back then, that must have seemed like a big, bold step into the unknown, and I'm curious to hear about your thinking behind it at the time.
Stephen Wolfram: Well, I got involved in doing physics when I was pretty young. I got interested in physics when I was maybe 11 years old or so. By age 12, I was accelerating, learning about physics by this surprising thing that one could do of just reading books and learning from them, so to speak, long before the web. That was around the 1972, 1973 time frame. Then by 1974, 1975, when I was 14, 15 years old, I started publishing physics research papers and so on. I left school when I was 16, always one of those good things to be able to say one did. I worked for a little while at a government lab in England, and then went to Oxford to be a physics undergraduate, which was a little bit anomalous because I had by that point published a bunch of research papers in physics. I lasted there for about a year, spent a summer in the US at a government lab doing physics research, and then decided, okay, the next step is to go to graduate school in the US. At the time, and it's probably still true, the place where things were really happening in science was the US. But you say the world was a small place, and one thing that was definitely true was that the international theoretical physics community was quite small. By the time I had published papers, gone to conferences, things like this, I knew a decent chunk of that international crowd. So moving from England to California, going to Caltech for graduate school, was just like, oh well, there's a physics department here that's kind of like the physics departments I've seen in England. It probably took me a while to absorb the full Californian experience, maybe I never did, but as far as the physics was concerned, it was not that different. The weird thing about the smallness of the international physics community was that there weren't that many 15, 16-year-olds, probably zero other ones, writing physics papers and showing up at physics conferences and so on. So I was a pretty well-known character, so to speak, very early on. And that had the interesting side effect that I had a certain personality as a teenager in that international physics community, and it took probably 30 or 40 years for people in that community to decide that I might be different than I was when I was 15 years old. I think if it had been a much bigger community, I would have only interacted with some small corner of it, and I could have become different more easily, so to speak.
Sebastian David Lees: It's interesting you say that, because one of the previous guests on this podcast was Emmanuel Derman, who remembers coming across you when he was a postdoc physics researcher at Oxford. I think he said something along the lines of: you were intimidating, but because you were a generation below, he didn't necessarily have to be too worried at the time. He also mentioned that it was a small place, and he said the biggest difference was waiting for papers to come out and be delivered, whereas now you have almost instantaneous access to them. I speak to a lot of entrepreneurs and business people on this podcast, and they often talk about the difference between England and America in terms of entrepreneurship and appetite for risk. But I'm curious, have you seen cultural differences in academia and how that's approached?
Stephen Wolfram: Well, gosh, I haven't been in academia for a long time. I was last in academia as a working operative in 1986, so it's hard for me to answer that for sure. In lots of academia, the challenge is always: what is the strategy of what people are doing? One thing that's interesting in the commercial sector is that people have the idea that strategy is important. In academia, they often don't have that idea. Now, in some fields where the objective is just to keep preserving knowledge across generations, strategy isn't that important. But if you're trying to do research, strategy is important. And it's something that's essentially never taught and too rarely practiced. Sometimes bad strategy is even worse than no strategy. If you're simply preserving knowledge about some area of history, or just continuing to look at more documents from the 1200s or something like this, that's not heavily dependent on strategy. But back when I moved from England to the US, the ambition, at least in an area like physics, was definitely a lot higher in the US. You talk about risk, and I don't think academics think in terms of risk particularly. I'm not sure that I think in terms of risk. For me, everything I've ever done is perceived by me as essentially risk-free, in the sense that I'm going to do something, I think I know how to do it, I get into doing it. Maybe it doesn't go exactly as I expected, but I'm confident that I can navigate it to the point where I can get something good to come out of it. And I'm not sitting there thinking, oh my gosh, should I do this? It's too risky. I mean, when I get into doing things, I do try to think about what could possibly go wrong.
Sebastian David Lees: Yeah.
Stephen Wolfram: But on the other hand, I'm not in the "oh my gosh, I'm doing something risky" mode. I'm just: I'm doing this thing, I'm going to get this thing done. In academia, probably the thing people would think riskiest is doing something that isn't exactly what all their colleagues think they should be doing. Anything like that is viewed as risky. But it's a weird kind of risk, because yes, if you do the same thing everybody else is doing, you will get the same kind of results everybody else gets. That means you're probably not going to do less well, so to speak, but you also have a very limited band of what's achievable. One thing to understand about intellectual work and academia, I suppose, is that the vast majority of work that's ever done is not attempting to do anything big. People are usually afraid of the foundations of fields. In other words, if you want to do something big in a field, you typically need some high-leverage point, and that's often the foundations of the field. The typical pattern is: some field gets invented. The first generation of people, the ones who invented it, are very aware of the foundations and what's not necessarily quite right, and all this kind of thing. Then you go a few academic generations forward, and the fourth- or fifth-generation academic folk are like: well, of course, the foundations were laid down 50 years ago, we'll never question those; all we can work on are these small-scale things at the top of what's been built. And that's one of many limiting factors of the typical academic way of thinking about things. I suppose I've been fortunate, or it's sort of been the plan in my life, that because I've been an entrepreneur type person in the commercial sector, trying to figure out how to build products and figure out strategies for things, that's been a thing I've applied consistently over many, many decades to doing things like science. And actually the interplay between those things is a win: on the commercial side, one thinks about things with a higher level of intellectual rigor and energy than one might otherwise, because one's getting that from the work on science; and in science, importing the kind of strategic thinking one is used to in the commercial sector is also very valuable. It's a thing that I think has worked out very, very well for me. Had I tried to do one or other of those things on its own, it would have been much less successful than alternating between those two different areas.
Sebastian David Lees: With this idea of risk-taking, rather than thinking about the worst case, I compare it to: what is my blast radius here? And if the blast radius is too big, how can I limit it while still taking the risk? Nassim writes about this idea that there's a conception that a lot of innovations, ideas, and discoveries are generated from a top-down academic hierarchy, when really it's the tinkering with convexity that happens with tinkerers and entrepreneurs, and then the academic framework gets built around that, not retrospectively, but almost as a complement to it.
Stephen Wolfram: I think one of the things worth realizing is that the number of sources of ideas in the world is vastly smaller than you would expect. I've noticed this because I've spent my life having ideas about things. I feed these ideas to people and to companies, and develop some of them myself, and so on. And I'm sometimes shocked to realize that things I thought came from somewhere else actually came from me, back 20 years ago, when I mentioned something to some person and that turned into this and that turned into that. It's very shocking to me that the number of people in the world who are generating significant numbers of worthwhile ideas is much smaller than you'd expect. It's not an activity that most people engage in. It's not that they're perhaps not capable of it; they just don't do it. And it's also the case that not everybody has fertile ground on which to plant ideas. If you're in some profession where you're just doing what you do, and you have some great idea about this or that thing, what are you going to do with that idea? It's the same for me. There are ideas I could have about things where there's nothing I could do with them. If it's an idea about technology that's fairly close to technology that we build and have distribution channels for, well and good. If it's a science idea, it's a little bit broader; it's become broader for me over the years. But let's say I had an idea that could make a great hedge fund. I have no distribution channel for that, really. I could tell it to friends of mine who run hedge funds; maybe something would happen with it, maybe it wouldn't. But for me to build the distribution channel, to actually build a hedge fund to implement this idea, that's a huge amount of work, which I'm likely not to do. So there's this question of whether ideas can actually fall on fertile ground. But there's also the question of how many ideas are being generated. And in academia, one of the shocking things is that there actually aren't that many ideas being generated. It's an awful lot of filling in details, and not bold ideas. And by the way, the academic system is its own worst enemy in this respect, because as it gets big, it gets lots of institutional structure. And the institutional structure is typically not oriented towards the bold, innovative, and new; it's oriented towards keeping the institutional structure running, which needs ideas that are close to the mean, so to speak, not outlier ideas that the system can't deal with. And it's always interesting that when there are significant ideas, the interaction with the academic world is usually pretty poor. It doesn't really matter to me, but I've generated a lot of ideas that are of high relevance to various academic areas. Sometimes they land on fertile ground and things really get developed.
Sometimes it's like: oh my gosh, we don't need a new idea, we're just fine, thank you. We've been doing what we've been doing for 50 years, and well, nothing much is happening, but it's going fine for us. And sometimes, interestingly, you find cases where some field has people in it, but they know the thing is not working. Then you give them some idea and they're like: wow, this is an idea, let's go do something with it. Which is nice. Because that's another challenge with ideas: you have the idea, you develop it to some extent, and then there's a question of how hard you push it. Do you market it, or do you just put it out there and wait 50 years for people to notice that it's important? And that's a complicated piece of personal optimization: how much of your effort do you put into inventing the new, and how much into essentially marketing the existing kinds of things?
Sebastian David Lees: That's really fascinating. And I think this isn't just a problem in academia; I've seen it in large corporations and other bureaucracies as well. I think Rory Sutherland coined the famous "nobody ever got fired for buying IBM". Was that Rory Sutherland? Was that his line? I think it might have been. He certainly says it a lot if it wasn't. But what's also interesting, talking about ideas and idea generation: a lot of successful people you see interviewed will say you have to learn to say no to the vast majority of ideas and focus on things. And I like that you have a qualifier, "what are my distribution channels, can I actually make a go of this?", as a filter. But even so, compared to most other successful people, I think it's fair to say your output is prodigious, and you have a very eclectic range in your output. So does that come from an innate drive, or are there hacks that you've learned over the years, or discipline that you've had to build up, or has that just always been inside of you?
Stephen Wolfram: Well, my range of interests has broadened over time. When I was a kid, if you'd asked what I was going to do when I grew up, I would have said: I'm going to be a physicist. I have done a bunch of physics; I came back to it after being away for 30 years, much to the horror of friends of mine who were in the biz, so to speak. Both sides of that, actually: they were horrified both when I left and when I came back. But what's ended up happening is that I've invented some practical methodologies and some conceptual methodologies, all around the idea of computation and computational language and the computational universe and so on. And it's just turned out that those ideas, happily, are extremely productive ones. So you end up seeing a lot of low-hanging fruit in lots of areas, and it's fun, fulfilling, and worthwhile to go pick it. That's a lot of what I've done. On this question of what areas one should work on: it's the ones where one can actually pick the fruit, so to speak, the ones where the methods one knows are going to work. And I've been, as I say, lucky in that the methods I've invented and resonated with have a pretty broad domain of application. Recently, for example, I've been working a bunch on the foundations of biology. I thought about that back in the 80s; I got a certain distance, but not that far. Finally, last year, I had kind of a breakthrough in this, based on a bunch of other things that I've done. And now I feel like I have an obligation to go figure this out, because the things I'm thinking about are things people thought about a little bit a hundred years ago, and then they sort of gave up. Now I've got an actual chance to make progress, and I feel like I'd better do it. It is fun to do, but one also has a certain obligation to the ideas, so to speak, to pursue them and not let another hundred years go by with these things not being pursued. But in terms of whether there are things where I have an idea and I say, I'm not going to work on this: I don't do that very much. I have a bunch of ideas that I've been meaning to work on. Sometimes it takes decades before I get to them. I just recently finished something that I started thinking about 50 years earlier. What tends to happen is that for some fraction of the things I do, I've been accumulating knowledge, directions, ideas about those things for a long time, often decades, waiting for a moment when the ambient situation in the world, or the set of tools that I have, or the distribution channel that I have, is right for actually pursuing those ideas, and then I'll do it. I would say about half the things I do are things I've long planned to do, and half are things that came in as new opportunities which I realized I could do something with. And it seems to work out fairly well.
It's very satisfying to finish something that you started thinking about decades ago. Almost everything which I started on and didn't pursue, I regret not having pursued, and I intend to finish it; I've been checking them off at a pretty high rate the last few years, actually. Then there are other things where I get a little bit into it and I just say: I'm not going to do this, I'm not interested. It's happened with, I don't know, spin-off companies that we've thought of doing. It's like, okay, maybe we'll do this; there's a set of people who are supposed to work on it, and they just don't have quite the right energy or whatever else. And it's like, I haven't put much into this, and I try to be careful not to put too much into things where I'm not sure what's going to happen. The challenge for me, the thing that is often the difficulty, is that as soon as I've invested a certain amount in some initiative, I cannot let it fail. I will always go in there and get the thing to work, even though sometimes it probably isn't the best thing for me to be doing. So it's a skill I've been developing over probably about a 45-year period now: how to not get too involved, how to let, for example, people who are trying to do some initiative just do a bunch of stuff. If it seems to be working out, great, you push harder, but don't get so involved that you're emotionally invested in the thing actually taking off. Because there's a point at which you just say: hey, I'm going to drop it on the floor. And that's a much better outcome, at least for me personally, than the situation where we're sort of trying to get it to work but not really putting in enough effort that it's really going to work. That's kind of a lose. I've tried to hone my personal technique for dealing with that kind of thing over the course of many years, and I think it's getting better. It's always a problem, because there are always things where it's like: this was a good idea, it should be done, but then you can't really get the right team to do it. And it's like, how much do you help them? And how much do you say: I don't care that much, let's let it drop on the floor, so to speak.
Sebastian David Lees: It's fascinating, when you were talking about your process of things sometimes accumulating over decades and then something unrelated coming along and knocking them loose. I was almost thinking of a mental model like an abelian sandpile, where you might get to a certain level with your thinking, and then something totally unrelated, maybe a new language, or a feature of something in Wolfram, or something you've read, will come along and cause a kind of avalanche of productivity or something new to come out of it. So that's really, really interesting. I have one final productivity question before moving on to Wolfram and computation in general. In a previous interview, you were asked the question: do you ever spend any time doing anything mindless? I think the example given was watching cat videos on YouTube. And you very succinctly said no. So if that's the case, how do you deal with mental fatigue? Because certainly when I'm working on something for a long time, there is that barrier of mental fatigue. So what do you do? Do you have an abnormally high threshold for it, or is there some secret you have?
Stephen Wolfram: I don't know. I think one thing that I have is a portfolio of things to do. If you said to me, hey, for the next hundred days you're only going to work on this one thing, I don't know whether that would work out very well for me. But the fact is I'm never in that situation, because in standard weekday, daytime hours there'll be a whole bunch of meetings that got set up about reviewing this project or that project, or figuring out this or that thing. When I'm working on something, I can certainly go for six, eight hours, something like that, just doing the thing. If I'm motivated to do it, I just keep doing it. Now, do I get into a situation where I'm stuck and I don't know what to do next? That's fairly rare, not least because my way of working is that I'm always doing things computationally. I suppose you could think of the mindless part as: write the piece of code and sit and wait for the computer to run it. Writing a piece of code is a little bit of a different activity from trying to conceptualize it. It's close, actually; I've tried to make it as close as it can be, but there are still some mechanics in getting pieces of code to work and all this kind of thing. In terms of mindless activities: sometimes, when I've got some big project and I'm trying to write a description of what's going on, I'll have just tons of material. So my closest approximation to a mindless activity, and I was doing exactly that for a project just yesterday, is: I've got 50-odd notebooks, long complicated things, and I'm going to go through them and make myself a summary that I can use to actually start writing the story of what's going on in this particular project. That's a different kind of activity from trying to push back the frontiers; it's harvesting what's already there. For me that's a different kind of thing. But I would say the main answer to your question is that I just have enough different kinds of things to do that I never really get into the mode where I'm just grinding on one particular thing. It also helps that when a lot of what you do is computationally oriented, and I'm always doing computer experiments, the computer is always giving me stimulation for something different. I'll invent some computer experiment, I'll have some idea about what's going to happen, I'll actually run it, and often something very different will happen. And that's the thing which says: okay, now I'm going to think about that different thing. It didn't have to come from inside me, so to speak; it was the computer that told me I had to think about something different. It wasn't me having to kick off a different line of thought for myself. Just by doing that experiment and seeing what happened, I'm led in a different direction.
So that's one thing. What are my other hacks for doing things? One is that I'm very keen on exposition, because I feel like I don't really understand something myself unless I can explain it. And I have high standards for the level of foundationalism that I'll use in explaining things. For the last five years or so, I've been doing a bunch of live-stream Q&A type things, and I've found that those are very relaxing. It's kind of strange, because here you are, there are random questions coming in from people all over the place, and you're just sitting there looking at a camera, not stopping, trying to find answers to these questions and talk about things. It might be something one would find very stressful, but I actually find it very relaxing, and it's very useful to me. When I talk about topics that I've thought a bit about, I'm forced to formulate them in a way that lets me give a good exposition of them. That's very helpful for me; it helps unlock certain kinds of things that, if I were just thinking about them for myself, I wouldn't figure out. But if I'm thinking about them in the context of this expository situation, I find it easier to figure them out. So those are a couple of things. Another personal hack, I suppose, is dealing with young folk, kids and so on. I do a whole bunch of educational stuff with kids of various ages, so to speak. That's always fun. My own kids are now grown up, so they're out of that picture. But somehow, kids tend to have a broader point of view about things than you find in grown-ups. They don't know all the stuff they're supposed to know and supposed to think, so they'll actually ask the more fundamental questions, so to speak. That's always fun. And it also somehow helps me to not just be a pure middle-aged tech-exec type person, when you're being challenged and hearing about the things that are popular among the young, so to speak.
Sebastian David Lees: It's interesting you mention the live streams; I do watch them, and it's like you're having a meditative mental defrag, a train of thought to reprioritize the ideas in your head. And on talking to young people: I saw a wonderful clip the other day, I'm a software engineer, of another software engineer talking to a young relative of his, and she says: "Oh, Uncle, if the interpreter knows there's a comma missing, why doesn't it just put it in there?" And there's this lovely thing that a seasoned programmer probably would never think to ask that. You also talked about the nature of computation and thinking, and I think that's a lovely segue to some slightly heavier questions. So I'm going to switch context a little bit now and talk about computation, and maybe get a little bit philosophical. Wittgenstein famously said the limits of my language are the limits of my world. To what extent do you think you can apply this to computational thinking and the limits of the human brain? In the same way that perhaps we could never teach a dog algebra, no matter what, do you think there are computational limits to the human brain that may bar us from ever achieving higher levels of knowledge, for want of a better word?
Stephen Wolfram: Well, in any of our brains there are 100 billion neurons that are just doing all kinds of different things at any given moment. When we communicate with each other, we're packaging up all of that complicated stuff that's going on in all of those neurons into the words that we're saying and so on. We have this transportable way of packaging up concepts. That's something that, at least for the sake of communication, is important. For the sake of preserving thoughts, it's also important. You don't remember, I think, the precise configuration of neurons at some particular moment; you remember some symbolic concept, so to speak, some collective thing that emerged from all of those detailed neuron firings. So this idea that the preservable thought is represented at some higher level, in some more symbolic way, is a feature of the way thinking has to work. But if you ask whether there are things that we will never be able to do with our minds: the natural world does endless stuff that we can't trace with our minds. Our minds do rather specific things. It's perhaps a little disappointing that, for all our vaunted intelligence and consciousness and all this kind of thing, in the end the computations that we get to do in our brains are a very specific kind of computation, very narrow relative to the kind of computation that can happen in the physical universe and in abstract systems and so on. Language is an important thing, because it is the thing that gives us a permanent way to represent the details of what's happening in these computations in our brains. I've spent a decent part of my life developing our computational language, Wolfram Language, as a computational way to formalize things in the world: a way to take all the details of the world and turn them into things that we can actually think about with our minds. It isn't particularly useful to say what the value of every pixel in some image is. We want to say something about the image as a whole, or we want to do some operation on the image. These are things that we can conceptualize with our minds; the details of every pixel are not. The question is how you find those components, those primitives, that we can understand with our minds and that are useful for doing the things that we want to do in the world. That's the story of computational language. I see it as a bridge between what we can think about with our minds and what is in principle doable in the computational world or the physical world. As to how much further that goes: we certainly know there are many things that we can define computationally that our minds, with the limited computations they can do, will never be able to reach.
Even with very easy-to-define abstract systems, we can set up their rules and say: okay, you can run this rule for a billion steps, and it'll do what it does. But we can't predict what's going to happen without essentially running those billion steps. This is the phenomenon of computational irreducibility that I've talked about a lot over the course of many decades now. It's a ubiquitous phenomenon in the natural world. We hope that the world will be a somewhat predictable place for us, so we have to find pockets of computational reducibility that avoid the generic irreducibility that otherwise happens. In a sense, the story of science, the story of civilization, the story of language, is finding these pockets of reducibility, where we can describe a whole bunch of things in a fast, symbolic way, rather than having to follow through each step one piece at a time. On the question of how far we can go with building out these pockets of reducibility: one of the things one knows from the formal structure of computation is that there will always be an infinite number of pockets of reducibility. The effort to find them is the effort to make discoveries, the effort to invent things, and so on. There's no limit to the set of inventions we can make, no limit to the discoveries we can make, whether in mathematics, in science, or in other areas. And each one of those discoveries is a little bit of additional computational reducibility that lets us quickly get a little bit further in dealing with the world. A good way to summarize those pieces of computational reducibility is through language of some form. That's essentially what language is: a way of taking lots of details and just saying "it's a cat", without having to say "it's a thing which has these atoms arranged in this particular way". You just get to say it's a cat, and you can draw a bunch of conclusions about what it's going to do just by knowing it's a cat, without having to know that these particular atoms in the hair on the tail of the cat have this particular form.
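A minimal sketch of the kind of system being described here is the rule 30 cellular automaton, one of Wolfram's standard examples of computational irreducibility. This Python version is an illustration, not code from the conversation: the rule takes one line to state, yet no general shortcut to its long-run behaviour is known, so to see step N you essentially have to run all N steps.

```python
# Rule 30 cellular automaton: new cell = left XOR (centre OR right).
# Trivial to state, yet its evolution has no known general shortcut.

def rule30_step(cells):
    """One step of rule 30 on a tuple of 0/1 cells, with fixed-0 boundaries."""
    padded = (0,) + cells + (0,)
    return tuple(padded[i - 1] ^ (padded[i] | padded[i + 1])
                 for i in range(1, len(padded) - 1))

def run(steps, width=79):
    """Start from a single 1 cell and print each generation."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

run(30)
```

The centre column of the triangle this prints is famously random-looking; a variant of it has even served as a pseudorandom generator in Mathematica, which gives some sense of how irreducible the behaviour of a one-line rule can be.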
Sebastian David Lees: I've often thought, or hypothesized, that the reason storytelling evolved as such a central mechanism is that it has this amazing, almost fractal compression algorithm to it, where you can have a three-hour blockbuster that has some kind of parable or lesson in it, or you could compress that down to a five-minute oral story and probably retain 99% of the meaning. So this lovely mechanism of storytelling, I believe, is a manifestation of something like that, to a degree.
Stephen Wolfram: Yeah, and just to say: I think scientific laws, morals, principles, these are all this kind of compression. They're all making use of some piece of reducibility in the world. In other words, a principle about how something should be done requires the fact that the thing can repeatably happen. If everything were irreducible, you'd never be able to come up with a principle that would be generally applicable. So yes, I think that's true. What we humans try to do is abstract from all the complexity of the world things that are simple enough that we can talk about them, simple enough that we can think about them. The progress of civilization tends to be the exploration of more of these pockets of reducibility, having more places where we can reduce all these details, all this complexity of what's going on in the world, and actually say: oh, we can think about that in terms of, say, the dynamics of risk or something like this. That was a thousand different phenomena until someone figured out how to package them all together into an abstract concept that one can give a name to and start talking in terms of. So in a sense you can almost trace the progress of civilization in the kinds of things that we have been successful in reducing, giving words to, and so on.
Sebastian David Lees: Hmm, how should I phrase this? If we imagine these pockets of reducibility in some kind of space or plane: even though they're infinite, we know some infinities are larger than others. Do you have any sense or gut feeling about what the ratio of these pockets of reducibility to the larger irreducibility landscape may look like? Is it comparable to the universe, where there are vast expanses of irreducibility and then small pockets, or do you think there's a slightly higher concentration?
Stephen Wolfram: The vast majority of what's out there is irreducible stuff. We can see this in many, many different ways. One fun example: you can take a generative AI system that makes images, and you can tell it, make a picture of a cat. Okay, it'll make a picture of a cat. Then you look at the internal representation of its concepts, and you can tweak the numbers a bit, change the embedding vector a little. Pretty soon you don't have a picture of a cat; pretty soon you have a weird picture of something you don't know what it is. There's an island around cattiness of things which are close enough that we recognize them as a cat. So then you can ask: how big are these regions where we have human words for them, where we say that's a cat, that's a dog, that's an airplane, whatever else? And how much interconcept space is there between the things for which we have words? Well, even in a very simple generative AI system, you'll find that the interconcept space is everything except for maybe one part in 10 to the 600. So the things that we have words for, the things that we have so far populated in human civilization, represent, even in the simplest approximation, maybe one part in 10 to the 600 of what's out there, even among things generated by AIs that we've trained on human image data and things like this. So that's the vast majority of what's out there in the set of everything that's possible. I have this concept of the Ruliad, which is the entangled limit of all possible computations; it's kind of the universe of all possible universes. And the remarkable thing is that just knowing how we sample the Ruliad tells us that what we must perceive about it are certain things, like laws of physics that are like the ones we know: the human laws of physics. Were we different as observers of the Ruliad, we would conclude that there are different laws of physics operating. And what is not obvious is that observers will conclude that there are laws of physics operating at all; that they won't just say it's hopeless, the universe is a random place, it's irreducible all the way down. The way that we exist as observers of the universe has keyed into certain pockets of reducibility that allow us to talk about things like continuous space and so on. It's an interesting question for other fields: to what extent are the things that we care about in those fields things for which there are pockets of reducibility? For example, in biology, there's something I've sort of discovered in recent times: I now understand the extent to which biology is making use of blobs of irreducibility encapsulated within reducibility. It's not obvious that biological evolution, where you're saying "I want this creature to be able to reach to the top of the tree" or something, is really going to be able to work that way. I think I now understand why it's possible: it's leveraging computational irreducibility, but it is trying to achieve reducible objectives, like the describable goal of being able to reach the top of the tree.
I don't know yet for understanding society and economics and things like that. One of the things I consider quite interesting is the extent to which there are narrative things you could say about economic systems, social systems, and so on, and to what extent what happens there is pure irreducibility: to what extent one just says, well, there are 8 billion people and they do what they do, and we don't have any principles, any short narrative for what's going to happen. In fact, we have very clear anecdotal evidence that there are short narratives for a lot of kinds of things that can happen. But to make that systematic and more formal is interesting, and it's not obvious to what extent there is reducibility there. There will be pockets of reducibility, but they may be about things that we just don't care about. It may be that in an economic system I can tell you that some particular measure of price versus volume versus this versus that will have some particular form, or will satisfy some particular set of inequalities, and you'll say: well, that's nice, but it's not really relevant to anything of human importance, so to speak. That tends to be the issue: there are these regularities, but the question is, do we humans care about those regularities? Human activity and technology is typically making use of certain regularities in the way the world is. In other words, we couldn't make consistent technology if we hadn't found consistent things that happen in the physical world. And the question is whether we're going to be able to make use of those things. There was a time when somebody discovered magnets; there was a time when somebody discovered liquid crystals. Whether those things will have a use in the world is not obvious. And I think as we expand our domain of language, concepts, and so on, there's the question of how far we push it. The distance we push it will depend on what we can make use of. And that's an interesting question for irreducibility: can you predict the limit of human interest, so to speak? If you look over the past couple of thousand years, you'll see certain progressions in what humans have considered worth doing, interesting, and so on. What is the limit of that process? We don't know. And how much can we find laws which describe the progression of that process?
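For what it's worth, a number like one part in 10 to the 600 is roughly what falls out whenever a named concept only survives small perturbations in a high-dimensional embedding space, because the volume of a ball scales like its radius raised to the dimension. Here is a back-of-the-envelope sketch in Python; the dimension and radius are illustrative assumptions, not figures from the conversation.

```python
from math import log10

# If "still recognizably a cat" only survives perturbations of relative
# size r in a d-dimensional embedding space, the concept occupies roughly
# r**d of the space's volume (ball volume scales like radius**d).

d = 768           # assumed embedding dimension, typical for image models
r = 0.16          # assumed relative radius of the island of "cattiness"

exponent = -d * log10(r)                  # decimal exponent of the volume fraction
print(f"one part in 10^{exponent:.0f}")   # prints: one part in 10^611
```

Even naming ten million concepts instead of one only shifts that exponent by about 7, which is the sense in which almost all of interconcept space has no word attached to it.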
Sebastian David Lees: I once heard you say, related to rulial space and some of the concepts you're talking about, that the way we understand the universe is profoundly related to how we are and how we're made, and that you can't separate those two concepts. I think the example you gave was the fact that we evolved at sub-relativistic speeds, and that we tend to perceive the universe as the same thing happening to everyone else, in this kind of single-thread-of-time dynamic. And of course, if we happened to be entities that evolved where we see relativistic effects, we'd have a profoundly different view of the world. So let's talk about AI a little bit, since obviously that's a hot topic at the moment. I absolutely loved your book What Is ChatGPT Doing ... and Why Does It Work?, by far the best book I've read on the subject. But I don't want to talk about ChatGPT so much as an almost throwaway comment in the book, which I absolutely love, where you're talking about language. You mention, and I'm going to paraphrase you here and might butcher it, so I'm sorry, that it may be that language isn't actually as complicated as we thought: ChatGPT is amazing, but it may turn out that language is quite computationally simple. Do you think the reverse of that might be true? Are there things that we think are computationally simple, or should I say not signs of intelligence or higher-order thinking, that may actually be incredibly complex?
Stephen Wolfram: Well, for a long time, you could ask a computer to tell a cat from a dog and it was hopeless; it couldn't do it at all. That was something that to us seemed very simple, but as we tried to formalize it, it was increasingly difficult. In fact, even today, if you ask what the formal difference is between an image of a cat and an image of a dog, nobody knows the answer. All we can say is that we can make a thing that makes those distinctions in the same kind of way that humans make them. Now, to this question of whether there are things that seem simple but are actually more complicated than you think: one of the things you notice in technology is that there'll be some first version of how something was done. People will have some way of doing some basic kind of functional programming, or some such thing, and that will work for a while. But then, as one really digs into it, one sees there are many, many details, many things that require one to be more sophisticated. What usually happens in these areas is that there'll be some initial push where something works. For a long time, people will just use the thing that works, but then it will start showing cracks, because it's actually a more complicated story. And then, decades later, one can look at all those cracks and see that, actually, there was another level of abstraction we could have used, and then we can take the next step. That seems to be a repeated pattern. I've certainly noticed it in my own efforts at language design. You do something, it works. Over the course of five, ten, fifteen years you get used to it; you've used it tens of thousands of times. Now you can start to see that there are these limitations and so on, and it was a more complicated story. There are endless algorithms where somebody said: well, this particular thing will heuristically choose this and this and this, and it more or less works. But the true story is more difficult than that, and in the end it requires some higher level of abstraction, a higher level of thinking, to really nail down what should be going on. Trying to think of a not deeply technical example: one I happen to have been exposed to recently is pure functions, anonymous functions, lambda functions. You have this function and it has some variable x; the body of the function might be x plus one, the variable is x, you feed it something, that gets substituted in for x, and the result is that thing plus one. It all seems quite simple. And then you realize: well, what happens if you feed the function whose variable is x a function that is also using the variable x inside? And then, oops, there's a problem: these things are going to conflict. It isn't enough to just have the name; you have to disambiguate the name, and pretty soon you're on a slippery slope into a very complicated set of stories, even though at the beginning it looked like a pretty straightforward thing to describe.
And then, in the end, you realize the bigger picture is something that doesn't have to do with these variables and names at all. There's a more abstract way of thinking about it that never talks about those kinds of things. You can represent it diagrammatically, you can represent it a bunch of other ways, and you avoid those issues. But at the beginning it looked simple, then it looked really complicated, and then eventually you understand enough of those different examples that you can get to a higher level of abstraction later on.
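The name clash Wolfram describes is what the lambda-calculus literature calls variable capture. A hedged sketch in Python, using a toy term representation of my own rather than anything from the conversation, shows both the bug and the standard fix of renaming the conflicting binder; the "more abstract way that never talks about names" he alludes to is in the spirit of representations like de Bruijn indices.

```python
import itertools

# Toy lambda-calculus terms: a variable is a str; ("lam", var, body) is a
# function; ("app", f, arg) is an application. Illustrative only.

_fresh = itertools.count()

def free_vars(t):
    """Set of variable names occurring free in term t."""
    if isinstance(t, str):
        return {t}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst_naive(t, name, value):
    """Textbook-wrong substitution: ignores the capture problem."""
    if isinstance(t, str):
        return value if t == name else t
    if t[0] == "lam":
        if t[1] == name:                 # binder shadows the name; stop
            return t
        return ("lam", t[1], subst_naive(t[2], name, value))
    return ("app", subst_naive(t[1], name, value),
                   subst_naive(t[2], name, value))

def subst(t, name, value):
    """Capture-avoiding substitution: renames binders that would capture."""
    if isinstance(t, str):
        return value if t == name else t
    if t[0] == "lam":
        v, body = t[1], t[2]
        if v == name:
            return t
        if v in free_vars(value):        # this binder would capture a free var
            fresh = f"{v}_{next(_fresh)}"
            body, v = subst(body, v, fresh), fresh
        return ("lam", v, subst(body, name, value))
    return ("app", subst(t[1], name, value), subst(t[2], name, value))

# Substitute the free variable x for y inside (lambda x. y): the argument
# "uses the variable x" while the function also binds x, Wolfram's scenario.
body = ("lam", "x", "y")
print(subst_naive(body, "y", "x"))   # ('lam', 'x', 'x')    i.e. x was captured
print(subst(body, "y", "x"))         # ('lam', 'x_0', 'x')  renamed, correct
```

Naive substitution quietly turns a constant function into the identity function; the capture-avoiding version preserves the meaning by inventing a fresh binder name first.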
Sebastian David Lees: It's like getting lost in the computational complexity zoo and then eventually finding simplicity. A lot of very famous minds have talked about the final effort being finding the simplicity amongst the complexity. Stephen, I'm conscious I've only got you for an hour and there's so much more I want to ask you. But what I want to do for the last five or ten minutes is hand over to the Fat Tony's community. What we do at the end of each podcast is field questions that have been submitted by community members. These are always a little unpredictable, we never quite know what we're going to get, but they tend to be pretty rapid-fire. So let's see what we've got and pick a few. First: what do you feel is the most important problem that needs solving? And what are your thoughts on this being a scientific versus social problem?
Stephen Wolfram: I think for us humans, how about human immortality? That'd be cool to have. Mortality is the great driver of the human condition, and exactly what the human condition is like if you don't have mortality is an interesting question. It may not be as exciting as we think. But if you ask for a single big problem, that is the most obvious one for us humans. In terms of how to achieve it: that's an interesting question, and something I've thought about at a foundational level in terms of the computational thing that is a human. What do you do to keep the program running, so to speak? And it's hard. You kind of have to replace pieces of it. It's like having a mainframe computer that you want to just keep running: you have to gradually replace things, you have to graft things on. The big direction there, I think, is that we are the only example we know so far of molecular-scale computing. Biology does molecular-scale computing. As we understand more about how to do our own molecular-scale computing, the most obvious use case is interfacing to actual biology. And then there are a lot of questions, and this becomes more a societal kind of question, about at what point the engineered thing is no longer the human thing, and how we feel about things that have different levels of engineering and technology in them. Just last week, somebody had made another effort at making an AI clone of me, and I did a live stream talking to that thing. It's an interesting experience, thinking about what it means if there really is an accurate AI clone of me. This one wasn't that accurate. It's getting there, it's kind of interesting, but I can tell it's not me, so to speak; not just because I have an inner feeling of where the me is, but because it's saying things where it's like: I would never say that. I would never say that. But what is it like when it is something that could finish your sentences for you all the time? What's the human import of that? And what does that mean about the significance of us as humans? So I think that's an obviously central question for us humans. And then, if we look at society and the governance of society and all those kinds of things: the current systems of governance were invented at a time when the technology of 400 years ago or so was coming online. How that should work with the technology of today is another interesting question. That question is deeply entangled with the human condition and the unchanging nature of the human condition and so on. But yes, lots of interesting things there.
Sebastian David Lees:I think that's a very clever answer as well, because you avoid the scientific versus social choice, and I think there are huge ramifications for both fields. It reminds me of those student pub philosophical debates: does the Star Trek transporter kill you if it dematerializes you and reassembles you, or is it still you? I think we've got time for two more very quick questions. The next one follows on nicely from this: does AGI come from more training and more data, or is there a fundamental piece missing?
Stephen Wolfram:People who wonder about AGI should go back and read what people wrote about AI in the early 1960s, because it's exactly the same as what people write today, with only one difference: there are lots of things said in those days that today would not be considered politically correct. But other than that, the things that are said are identical. Over the time I've been paying attention to this, half a century or so now, there are things that computers manage to do, things that technology manages to do, things we manage to automate, and we're very proud of those things. If you ask whether there's going to be ultimate automation of everything, well, nature is an example of something which has, in a sense, automated everything and just does what it does. The thing that we humans add is the question of what we want to do. If you say there's going to be a box on my desk that figures out the perfect thing it wants to do, that doesn't really mean anything, because there are all kinds of things that can happen in the natural world, and the ones we care about are the ones that we humans, with our inner feeling of what we want to do, decide we want to do. That's something you can emulate, just as this AI clone of me can make suggestions about what I might do. But from my point of view, inside me, there's still a "well, I want to do this particular thing." In terms of the character of AI, it's worth looking at the history of this technology. What's happened is that every so often there's been a breakthrough and some new domain becomes able to be automated. So around 2011, with deep learning, image identification got automated. In 2022, I'm losing track of time here, language generation with ChatGPT became possible. It hasn't advanced that much in the last few years. Details have changed, tool use is better, models are faster, et cetera, but there was this threshold of "now we can do this," and the value now being added is mostly in how you harness that particular seed of technology to doing different kinds of things. I think that's the typical thing we see: we'll automate another level of thing. As for the perfect human, if we say, let's make a human-like thing, we'll keep on saying, well, it isn't really quite human, because it doesn't eat, it doesn't die, it doesn't whatever else. The only thing that will be a perfect human-like thing is a human. As we move away from there and ask what the generalized human is, it's more like: we'll nail this particular piece, language generation; we'll nail another piece, probably before very long, robotics and the physical actuation of things.
So the idea that you just feed in more training data and magically the neural net will wake up and be a human doesn't make a lot of sense, philosophically as well as technologically. We happen to be a producer of lots of training data, so it's good for our business that people want to feed more training data to AIs, but I don't think that's really the way to go. The thing to understand is that there's neural-net AI, which is doing very brain-like things, and there's computational AI, the kind of thing I've done for a really long time, which is doing things that brains can't do but computers can, and the natural world can too. Merging those two things together leads to something with a beyond-brain set of capabilities, just as the computers we have right now can do plenty of things that unaided brains can't. Sounds like we have time for basically one more question here.
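As a rough illustration of the merging Stephen describes, here is a minimal propose-and-verify sketch in Python. The division of labor is the point: a fuzzy, learned-style proposer on one side, cheap exact computation on the other. The proposer here is just a random stub standing in for a neural model; it is an illustrative assumption, not any real system of Wolfram's.

```python
import random

def propose_factor(n):
    """Stand-in for a learned proposer: guesses a candidate factor."""
    return random.randrange(2, n)

def verified_factor(n, tries=10000):
    """Computational layer: each fuzzy proposal gets a cheap exact check."""
    for _ in range(tries):
        p = propose_factor(n)
        if n % p == 0:  # exact arithmetic, no fuzziness
            return p
    return None

print(verified_factor(91))  # prints 7 or 13, found by propose-and-verify
```

The design choice this toy captures is that proposing can be sloppy as long as verifying is exact and cheap, which is one way the brain-like and computational pieces can complement each other.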
Sebastian David Lees:So one more quick one, yeah. I'm always amazed at how we see these leaps and levelings-off multiple times through our lifetimes, and people forget. Very final question then. I believe this is in reference to the Wolfram Physics Project: what benefit will we get if these computational methods do not lead to unification? And I'm guessing by unification they're talking about classical and quantum physics.
Stephen Wolfram:I think we're past that milestone, so that's not a good hypothetical. But a reasonable question would be: okay, so you know the universe is computational all the way down, and at some very tiny scale there are lots of things we can think about in terms of hypergraph rewriting or whatever else. So what? In other words, if something happens at a scale of 10 to the minus 100 meters, how does that affect my everyday life? That's an interesting question, and it's worth looking back at some history. When Copernicus was saying, actually, with these mathematical methods we can think of the earth as going round the sun rather than the other way around, and that's a useful way to think about things, why was that significant? It was significant because before that time, people had imagined that the things that happened were things it was obvious were happening. It was obvious the earth is standing still, because after all, our experience is that the earth is standing still. The idea that the earth goes around the sun was a case of being told: this mathematics implies that's a thing we can think of as happening, even though it's different from our common experience. It was the realization that there are things we can figure out mathematically that go against the things we can figure out just with our minds. So when we think about the computational paradigm, if we know the universe is ultimately computational, that forces us to confront lots of issues that come up when you think about things computationally, like computational irreducibility: the fact that there will be things where, even though you know the rules, you can't figure out what the thing is actually going to do without just running it. That's an important conceptual realization, and in the 40 years or so since I introduced the idea there has been slow understanding of it and its implications. It's important because one would otherwise say: as soon as I can go down and find the laws of physics, the underlying rules for things, we're done. Or if I can figure out the axioms for mathematics, we're done; you can just mechanically figure out all of mathematics, and you don't have to do any more work. Computational irreducibility tells us that's not the case. Knowing that the universe is ultimately computational forces one to confront these ways of thinking about things in computational terms. That's one thing. The other thing, a very practical thing that's really been surprising to me, is that in understanding how physics works, how quantum mechanics works, we use these things we call multiway systems, which have to do with the multiple threads of history that show up in quantum mechanics, and which I originally started studying for slightly different, more abstract reasons, but which seem to be very applicable to quantum mechanics.
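To make computational irreducibility concrete, here is a minimal sketch using the rule 30 cellular automaton, the example Wolfram himself popularized. The update rule fits on one line, yet in general the only known way to learn what row n looks like is to compute all n rows; there is no shortcut formula.

```python
# Rule 30: each new cell is (left XOR (center OR right)) of the cells above it.

def rule30_step(cells):
    """Apply one step of rule 30 to a row of 0/1 cells (fixed-0 boundaries)."""
    padded = [0] + cells + [0]
    return [
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 30 + [1] + [0] * 30  # start from a single black cell
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = rule30_step(row)
```

Running it prints the familiar chaotic rule 30 triangle: a rule you can state in a sentence producing behavior you can only discover by simulating it, step by step.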
The development of thinking about those things has now led us to a bunch of ideas about the foundations of mathematics, the foundations of biology, and actually the foundations of machine learning. It's been very fertile to take the methods that were developed for the Physics Project and apply them to other areas. The other thing one gets to do is, once you know that the same underlying methods can be used across these different areas, the successes physics has already had become spreadable to those other areas. We know in physics there are theorems about how black holes work. If we know that decidable theories in metamathematics are like black holes, we can import those theorems from physics into the foundations of mathematics and see what their implications are. That turns out to be a really powerful thing. I have to say, when I started working on the Physics Project in my more recent push that started in 2018, people asked: is this going to have applications? And I would say, well, maybe in 200 years it will. I realized I was quite wrong about that, because we very quickly started having all kinds of applications, to distributed computing and a bunch of other areas. The formalism that got built for the purpose of doing physics turned out to be strong enough to apply in a bunch of other areas. You always have to wonder: when Isaac Newton was coming up with calculus and the universal law of gravity, he could have known that you could launch artificial satellites. But if he'd been really pushing the artificial-satellites idea, he was 300 years too early. So sometimes you can have these abstract ideas about how the universe works, something which is, coming back to the beginning of our conversation, really only of academic interest, so to speak. But I think there are two reasons why it isn't. One, because it really gives one more of a grounding for how one has to think about things in general. And two, because the ways of thinking end up, and this wasn't predictable to me, being strong enough that they're applicable to all these other areas. That becomes the immediate application, even if one isn't able to do the experiments that can probe the structure of space at a length scale of 10 to the minus 100 meters.
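For a feel of what a multiway system is, here is a minimal sketch, assuming a toy string-rewrite rule set; the rules below are illustrative stand-ins, not the hypergraph rules of the Physics Project itself. The essential move is that every applicable rewrite is performed at every position, and all resulting branches of history are kept rather than choosing one.

```python
# Toy multiway system over strings: rules are applied everywhere they
# match, and all resulting states ("threads of history") are retained.

rules = [("A", "AB"), ("B", "A")]

def successors(state):
    """All states reachable by one rewrite at any position in the string."""
    out = set()
    for lhs, rhs in rules:
        start = 0
        while (i := state.find(lhs, start)) != -1:
            out.add(state[:i] + rhs + state[i + len(lhs):])
            start = i + 1
    return out

frontier = {"A"}
for step in range(4):
    print(f"step {step}: {sorted(frontier)}")
    frontier = set().union(*(successors(s) for s in frontier))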
Sebastian David Lees:Amazing. I think that's an exciting and tantalizing place to leave the conversation, given what's coming up. I'm conscious we're over time as well, but thank you so much for staying a little beyond. Before we close, are there any upcoming projects, books, talks, Wolfram features, anything you want to draw attention to or give a shout-out to?
Stephen Wolfram:Oh my gosh, so many things. There's lots of interesting upcoming stuff, but in the technology space the most relevant thing is probably this thing we call CAG, computational augmented generation: a computational backing for AI systems. We introduced an early version of that right after ChatGPT came out, but we have a more grown-up version coming soon. Expect your AI to have access to a CAG, a computational augmented generation system; if it doesn't, it's going to be a dumb AI. That's one of the things we have coming soon. There are a lot of projects going on, a lot of projects in biology and physics. I'm hoping to really tackle some things in economics and social science; we'll see how that works out. I've had a multi-year plan to do that, and it's slowly getting to the point where I think I'm ready. Hopefully those will be coming attractions over the next year or two. I think the next book of mine is a collection of things I've written about philosophy of various kinds. My mother was a philosophy professor at Oxford, actually, and when I was a kid I always said that if there's one thing I'll never do when I'm grown up, it's philosophy. And here I am, about to publish a book about philosophy. So that's another coming attraction.
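As a rough sketch of the pattern Stephen is describing, and assuming nothing about the actual CAG product: before the language model composes its reply, computable sub-questions are delegated to an exact computational engine, and the verified result is fed back into the prompt. Both stubs below (`llm_generate`, `compute_engine`) are hypothetical placeholders, not Wolfram APIs.

```python
def compute_engine(expression):
    """Stand-in for an exact computational backend (symbolic math, data, units)."""
    try:
        # Toy version: evaluate plain arithmetic only, with builtins disabled.
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception:
        return None

def llm_generate(prompt):
    """Stand-in for a neural language model."""
    return f"[fluent prose built from: {prompt!r}]"

def cag_answer(question, computable_part=None):
    """Delegate the exact part to computation, the fluent part to the model."""
    fact = compute_engine(computable_part) if computable_part else None
    prompt = question if fact is None else f"{question} (verified result: {fact})"
    return llm_generate(prompt)

print(cag_answer("What is 2 to the 64th power?", "2**64"))
```

The point of the pattern is that the model never has to guess at anything an exact engine can settle; the fluent prose is wrapped around a verified computational fact.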
Sebastian David Lees:Amazing. Thank you so much, Stephen. It's been an absolute pleasure talking to you. Bye.
Stephen Wolfram:Thank you, goodbye.