MindHack Podcast

Decoding the Universe with Stephen Wolfram: Bridging the Gap Between Man, Mind and Machine | Ep. 051

September 21, 2023 | Stephen Wolfram | Episode 51

A heady mix of philosophy, physics, and computational theory, this episode is not one to miss for those interested in the frontiers of scientific thought and the mysteries of existence.
Cody McLain sits down with Stephen Wolfram, the founder of Wolfram|Alpha and a pivotal figure in computational theory. With discussions ranging from the inherent complexities of the universe to the nature of consciousness, listeners are taken on a cerebral journey that challenges conventional wisdom.
Whether you're a tech guru, an aspiring entrepreneur, or just someone looking to expand your intellectual horizons, this episode offers something for everyone. Don't miss out on this captivating exploration of life's biggest questions.

More on Stephen Wolfram:
Website
Twitter
Instagram
YouTube, Q&A Videos
What Is ChatGPT Doing ... and Why Does It Work?
Other books here

Books and other interesting mentions:
Wolfram|Alpha
Podcasts
Wolfram Writings
A New Kind of Science by Stephen Wolfram
Mathematica
Double Slit Experiment
Principle of Computational Equivalence
Computational Irreducibility

Stephen:

Look, if people are gonna have a tool that they're gonna use for the rest of their lives, then you should educate them about it. To say you're gonna be able to use this for the rest of your life, but you shouldn't use it when you're getting educated is kind of silly.

CODY:

Welcome to the MindHack Podcast. Today we have the distinct honor of hosting a visionary thinker, an exceptional scientist, and an influential entrepreneur who has left an indelible mark on the fields of computer science, physics, and artificial intelligence. Our guest, Dr. Stephen Wolfram, is the founder and CEO of Wolfram Research, and you may know it as Wolfram Alpha, which has been instrumental in helping all the high schoolers and college students pass their math and physics classes. He is also the visionary behind the computational software system Mathematica, which has empowered countless researchers, scientists, and mathematicians to explore, analyze, and visualize complex data in unprecedented ways. He has also had a prominent impact across society, and most notably his latest book, What Is ChatGPT Doing ... and Why Does It Work?, looks at the underlying mechanics of this revolutionary new AI and provides insight into the incomprehensible amount of data and computational analysis that goes into every word that ChatGPT outputs. His thought-provoking TED Talks, interviews with Tim Ferriss, Guy Kawasaki, and Lex Fridman, as well as his prolific contributions through Stephen Wolfram Writings, have all solidified his role as a leading voice in the global discourse on technology and its implications. So please join me as we embark on a journey into ChatGPT, artificial intelligence, and its future impact on society. Please welcome Dr. Wolfram Alpha.

Stephen:

Ha. Thank you.

CODY:

Did

Stephen:

You said Dr. Wolfram Alpha, which is

CODY:

cool. Uh...

Stephen:

It kind of sounds like one of those, uh, you know, villains in some sci-fi movie. It's cool.

CODY:

Hmm. So I was gonna ask you a question about AI, but something else came up, which is: you are a computer science person first. You are a technical genius, as some might say. How did you come to build such an influential company, Wolfram Alpha, well before ChatGPT or any notion of AI really existed?

Stephen:

Oh, boy. Well, this is a long story. I mean, this is kind of the life story of me, but I can tell perhaps a short version of it. I grew up in England, as you could probably tell from my unamericanized accent, and I got interested in science when I was pretty young. So by the time I was like 11, 12 years old, I was kind of reading physics textbooks and trying to understand lots of stuff about physics. And then I kind of realized that I liked figuring out new stuff. And so I was not a do-the-exercise-in-the-book kind of person. I was a, is there a problem that is kind of suggested by what I'm reading about that I can go try and solve myself? So I started publishing papers about physics and suchlike when I was maybe 14, 15 years old, and managed to go through the kind of school system pretty young. Wound up getting my PhD in physics at Caltech, actually, when I was 20. And then I was a physics professor, basically. But my kind of secret weapon in doing physics was that I had learned to use computers. It always amazed me that other people weren't doing this, but it turned out to be a really good tool for figuring out things about physics. And then I got very involved in building my own software system for doing the things that I wanted to do, in physics and beyond. So I built my first big software system from 1979 to 1981, and that kind of backed me into starting my first company. I made many mistakes in that, although the company did okay in the end. Then, well, then I went into doing basic science for quite a while. And kind of the result of doing that basic science was the realization that I needed more computational tools. It was also a realization that, although I had been a pretty successful academic, there were better paths that I could go on in my life than running the kind of academic research center type thing I was running by that point. And so this is me, age 26 or something, deciding to start a company and build, well, what was Mathematica at first, what's now Wolfram Language, and on top of that later built Wolfram Alpha. The real origin story, I suppose, of these things is: when I was interested in doing physics, when I was like 13, 14 years old, I was like, physics requires doing all these math calculations. I don't like doing these math calculations. They're really dull. They're kind of mechanical. Can't I just automate them? That was what got me started on the path that eventually led to Wolfram Alpha and the automation of everybody's math problems, so to speak, not just mine. It was kind of a, this is a thing I need, so let me build it for the world. But then after I had built the first version of Mathematica, which came out in 1988, I started my current company, which I've been running, kind of boringly I suppose, for 36 years now. And I don't think it's boring. I think it's great. I spend a hundred hours a week doing it, and doing things related to it. I kind of grew the company for a while, then spent what ended up being about a decade really concentrating on basic science. That led me to this book called A New Kind of Science, which is really a book about science. But one of the implications of that book was this:
This idea of being able to make sort of the world's knowledge computable, which is an idea that I'd kind of had since I was a kid, that idea should be possible. It wasn't obvious that it should be possible, but I kind of realized that it should be, and so I started building Wolfram Alpha. And the big story of what I've tried to do in building it is about building computational language, building a way to represent how the world works computationally. And it's kind of like, you know, there's a long arc of history here. Back when our species was starting out, its sort of big innovation was this idea of language, where you can not just point at a rock, but you can abstractly say the word rock, so to speak, and other people know what you're talking about. And then a couple of thousand years ago, another innovation was the idea of logic: that you can have this kind of structure of arguments that is a formal structure, independent of whether you're talking about turtles or elephants or something else. And then, in that whole arc of the formalization of thinking, the next big thing I suppose was mathematics, and the introduction of mathematics into science, which happened mostly in the 1600s. And that's led to this whole kind of stack of ability to talk about the world formally and to be able to figure things out that are beyond what we can just figure out with our minds. And so that's the thing that has led to this idea of computational language, this idea of expressing yourself not in natural language but in this precise kind of formalism. It's a little bit like mathematical notation, but much more general than that. It has the feature that you can then just feed it to your computer, so if you say, what's the, you know, geo distance between New York and this or that, something you've specified in this precise computational language, then the computer can go compute things with it. So that's the thing that I've been interested in for a long time: the development of this computational language to let people express themselves in a precise computational way, and really use the power of computers and computation to figure out things they never would've been able to figure out before. Now, there's a different branch, which is doing the things that we humans find pretty easy to do. We humans don't find it very easy to build these big towers of computation, but we find it pretty easy to, you know, make up a sentence in natural language, in English or something, and do those kinds of things. And for a long time it was not obvious how to get computers to do that. Then came these ideas about AI, particularly ideas about neural nets. Neural nets are kind of a simple imitation of what our brains do, originally invented back in the 1940s, and progressively able to be deployed better as computers have gotten more powerful. And then the thing that happened with ChatGPT, for example, which was a big surprise to everybody, including people who worked on it, was being able to just take the content of the web, feed it into a neural net, and have the neural net be able to produce things where you start off saying some sentence and it will be able to continue that sentence in a way that's sort of typical of what it found on the web, so to speak.
And more elaborately, you can ask it a question and it will answer the question sort of in a way that's somehow typical of what it found on the web. It's kind of able to interpolate between what it found on the web. It's able to take all that stuff that it got from the web and produce something which takes the kind of conventional wisdom of the web and gives it back to us. The thing that was kind of a surprise, I think, to everybody is that it can produce meaningful essay-length things that don't wander off and start talking about irrelevant things and so on. And I think that's really telling us something quite scientifically interesting about the structure of language. We've known for a long time that language has this kind of structure with nouns and verbs, and there's a certain syntactic grammar to sentences; you know, in "the cat sat on the mat," there's a noun phrase, a verb, another noun phrase. We've known that there's that kind of structure. But what seems to be the case is that there's also a semantic grammar of language, a way of constructing sentences which actually mean something, as opposed to, I don't know, "the electron sang water" or something, which is grammatically correct but means nothing. There's a way of putting together things that mean something, and that's something that, in a sense, ChatGPT was able to extract by statistically analyzing, in some sense, the 4 billion or so webpages, or whatever it was, that it was fed as training data. So that's kind of the thing that's come out of that. As far as I'm concerned, the way one thinks about using these kinds of things is: ChatGPT provides this great kind of linguistic interface; it provides this way you can just say whatever you wanted to say. And the question is, can it translate that into something that you can, for example, do computations with? Or are you just having it respond in a way that's based on the wisdom of the web, so to speak? And a thing that seems to be the emerging workflow, which is pretty interesting, is you get it to write computational language. So you get it to write Wolfram Language code, and it does okay at that. Then you look at that code and you say, is that really what I meant? You know, you say, draw a circle that's half red and half green. What did you actually mean? Did you mean draw two semicircles that are filled in half red, half green? Did you intend there to be a vertical, you know, separating line between the green and the red? Did you intend it to be horizontal? Whatever. You ask ChatGPT or GPT-4 or some other large language model, an LLM, you say, write Wolfram Language code that does this, and it's gonna give you something that says, you know, Graphics of Disk of whatever else it is. Then you see what it produces. Maybe it produces what you wanted. Maybe you say, no, it's not quite right. You can read the computational language code and say, is that what I meant? And then either fix that code or try and tell the LLM to fix that code. So it's a thing that allows you to merge the very human-like linguistic interface that LLMs provide together with the kind of hard, formal computational power that the computational language provides.
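
To make the circle example concrete, here is roughly the kind of Wolfram Language code an LLM might hand back for "draw a circle that's half red and half green," under one reading (a vertical split). This is an illustrative sketch, not output from any actual model:

```wolfram
(* one interpretation: two filled half-disks with a vertical split *)
Graphics[{
  Red, Disk[{0, 0}, 1, {Pi/2, 3 Pi/2}],   (* left half *)
  Green, Disk[{0, 0}, 1, {-Pi/2, Pi/2}]   (* right half *)
}]
```

Reading code like this back is exactly the checking step described above: if the split was meant to be horizontal, you adjust the angle ranges and rerun.
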
I mean, a piece of this that's super convenient: Wolfram Alpha, which we released in 2009, was sort of the first example of a system where you could type natural language and ask questions about the world in natural language, and have the natural language be understood and the answers to the questions be computed. And it's working in a very different way from ChatGPT. What it's doing is translating the questions you ask, which might be in natural language, like, you know, how far is it from Riverside, California to Boston, and what fraction of the way around the Earth is that, let's say. And it will then translate that into a precise piece of computational language that might be, you know, GeoDistance of Riverside to Boston, divided by the radius of the Earth times two pi, or something like this. And then it will compute the answer based on the curated data that we've been collecting over the last few decades in our system, in our knowledge base. So what Wolfram Alpha has managed to do is take kind of short utterances, that kind of question, convert them to computational language, then compute the answer. What ChatGPT is doing is dealing with much larger chunks of text. But it's not intended to, or able to, be as successful in turning that into something which can be precisely answered and so on. So it's a slightly different objective, and the combination of ChatGPT together with Wolfram Language and so on is really powerful, 'cause it allows one to both have this sort of natural conversation, really a conversation, not just an ask-one-question, get-one-answer type thing, but really a conversation, and yet also tap into the precise knowledge of the world that exists in Wolfram Alpha and Wolfram Language. And also the ability to compute things that are far beyond what humans could compute, things that never showed up on the web. That wasn't something you can deduce from the things that were on the web. That's something we actually have to compute afresh, sort of new knowledge actually generated by computation.
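
As a sketch of that translation step, the Riverside-to-Boston question might come out as Wolfram Language roughly like this (the exact entity and property forms here are illustrative assumptions):

```wolfram
(* "How far is it from Riverside, California to Boston, and what
   fraction of the way around the Earth is that?" *)
dist = GeoDistance[
   Entity["City", {"Riverside", "California", "UnitedStates"}],
   Entity["City", {"Boston", "Massachusetts", "UnitedStates"}]];

(* divide by the Earth's circumference, 2 Pi times its radius *)
UnitConvert[dist/(2 Pi PlanetData["Earth", "Radius"])]
```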

CODY:

Have you asked OpenAI about a potential collaboration to allow Wolfram Alpha to be used within ChatGPT?

Stephen:

Oh, yeah. We started talking about that last December, and actually we built this thing with them that came out in March that allows Wolfram Alpha and Wolfram Language to be used within ChatGPT.

CODY:

And so ChatGPT is trying to fill the shoes of what might be considered, though it's probably not quite there yet, a general AI. What is Wolfram Alpha compared to that? It's not an AI, it's a computational model, as you describe it, right?

Stephen:

Yeah, I mean, so as I was saying, ChatGPT is a large language model, which is trying to give you a response that's based on kind of the average of what it saw on the web. It's not computing things; it's doing linguistic continuation. It's saying, you know, you say "the cat sat on the," what's the next word? Okay, so it's gonna say, I've seen lots of webpages; the most common next word is "mat," so that's the likely thing I'm going to say. Now, it's not quite as simple as that, because for most things you ask, there won't be an exact version of that on the web somewhere. So instead what happens, and this is kind of the surprise magic of neural nets in this case, is that the way it extrapolates what it's seen on the web is similar to the way humans seem to do it. So it's kind of imitating what we humans manage to do very quickly in our brains. If you said, okay, compute something about some combinatorial optimization, you know, visit all the capital cities of countries in Europe in a way that has the shortest path, okay, ChatGPT doesn't have a clue how to do that. That's just not what it does. Whereas in Wolfram Alpha and Wolfram Language, we can define that question, we know the locations of all those cities, and we can compute, using a kind of difficult algorithm, what the minimum path is, and we can say, there's the result. And so we're in a position to compute, to create new knowledge and so on. And we have a very solid knowledge base of actual factual knowledge about the world. It's not just, oh, I kind of read that on some webpage and I'm guessing it might be this. Now, I'm not saying that that's not useful. That's super useful. If you are doing something like writing a report, where you know the five points you want to make and you want to write a long report that says that, it's super useful to know how a report's typically written, and that's where large language models really shine. Or to go the other way and say, I want to extract what's in a big piece of text. Or, I want to tutor some student on some particular topic, and I'm gonna have a chat with them about the things they're interested in, and I'm going to pull out of, let's say, Wolfram Language or something, some math question I want to ask. Then I'm going to dress it up in a story about, uh, fairytales about wolves or something, and then ask it in that form. The student's gonna respond with some piece of, you know, "then there will be seven wolves who got to the lake" or something like this. And then it's going to understand that, to the point of being able to convert it into, you know, x equals seven or something. And then that can go back into the computational layer to get processed and say, yes, that's right, or compute the fact that the answer should be 13 or something instead.
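
The capital-cities example is a traveling-salesman-style computation; in Wolfram Language it would look something like the following sketch (assuming the entity class and geo positions resolve as expected):

```wolfram
(* shortest tour through the capitals of European countries *)
capitals = EntityValue[EntityClass["Country", "Europe"], "CapitalCity"];
{tourLength, order} = FindShortestTour[GeoPosition /@ capitals];

(* draw the resulting tour on a map *)
GeoListPlot[capitals[[order]], Joined -> True]
```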

CODY:

And there's been an issue with ChatGPT in terms of its ability to provide information that is bad for society. Like the early example of, say, how do you create a Molotov cocktail. They prevented that, but then people found a creative way around it: how do you give this in the form of a recipe, or as if it was a story? And then they've had to add specific rules to counter this type of creative adaptation of giving it better inputs. And there's been an argument that, in essence, the quality of ChatGPT's output has become less and less. Can you tell me what they're doing behind the scenes to help? Is there a way to have it understand values, so it can determine whether or not it should answer a question? Or do they have to continue adding these small rules that seem to degrade the quality of the responses?

Stephen:

Well, this is all a huge mess, right? So the first point is, it learned from the web, and the web has all kinds of stuff on it. You can say, well, don't look at that part of the web, don't look at this part of the web, et cetera, et cetera. The web is kind of a reflection of what we humans put out there; that includes how to make all kinds of terrible things, and how to do all kinds of great things too. Now, this question of, can you tell it, do this, don't do that, do this, don't do that: by the time you constrain it too much, it's kind of like a person. If you say, no, no, no, you can never say this word, don't say that word, don't do this, don't do this, pretty soon they're kind of lamely just saying, well, yes, no, whatever. The more you constrain it with very coarse constraints, the more you prevent it from doing what it's best at, so to speak, which is really producing language, producing rich, never-seen-before examples of language. So yes, I think it's a very challenging thing. There's a terrible tendency to say, oh, it said the wrong thing there, let's patch it. You know, let's censor it, let's put it in this box, and so on. I think in the end that can't work, and in fact there are deep theoretical computational reasons for that, kind of things I worked out back in the eighties, actually, about this phenomenon I called computational irreducibility. But it's not too difficult to explain. Basically, the question is, if you've got a program, a little program, let's say, you might say, if I've got the program, I know everything about what's gonna happen. But it isn't true. If you run the program, you might run it for a billion steps, and you can see what it does. But if you ask, from the program, can I jump ahead and say, can it ever do this or that thing? The answer is, you can't figure that out. It's computationally irreducible. You have to follow through all those steps, let's say a billion steps, to see what it does. You can't just jump ahead and say, oh, I know what it's gonna do; it's going to say foo at this point. And so that's the issue: computational irreducibility is a fundamental feature of computational systems. If you lock them down to say, you can't do arbitrary things, you can only do the things that I tell you to do, then they won't be able to do anything rich and computational, so to speak. So it's a trade-off. And I think as society goes forward, the phenomenon of computational irreducibility, this fact that even though you know the rules, you can't know what's going to happen, is going to be a more and more important issue. I mean, we see this in human legal systems. You set up a bunch of laws, and somebody says, with this law we're gonna make sure that society is wonderful in this way. And it turns out there are unintended consequences, and things go badly in some direction or another. And then you have to put in some other patch and keep going and so on. And that's kind of an everyday human example of the same phenomenon.
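
The canonical illustration of computational irreducibility from Wolfram's work is the rule 30 cellular automaton: the rule is trivial to state, yet there is no known shortcut for predicting, say, its center column a million steps in; you have to run it. A minimal sketch:

```wolfram
(* rule 30: each cell's next value depends only on itself and its two
   neighbors, yet the pattern from a single black cell has no known
   predictive shortcut; you must compute every row to reach row n *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]
```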
And then the other huge challenge is, okay, you say, I want the AI to only do good stuff. Okay, what do you mean by good stuff? You say, well, I want the AI to, let's say, follow the law. Well, the law in some particular country, et cetera, et cetera; fine, you can imagine implementing that. But there's a lot that people talk about that the law doesn't cover. The law doesn't say I can't put out a webpage that talks about how to make a Molotov cocktail. That's not something that is constrained by the law. There are other things I can't put out on a webpage that are constrained by the law. But the idea that we can make it only do the good stuff: it's very hard to define what you mean by the good stuff. 'Cause if you say, well, let's make it do the same kinds of things that people do, well, people put out webpages about, you know, how to blow things up and all kinds of stuff that you might think is bad. And then you have to say, well, okay, let's have a code of values for what the AI should do. And is that implementable? I mean, this is something that's become quite popular, I don't know how well it's working yet: to say, I've got some terms of service, I've got some principles about what should be talked about on this particular website or social media thing or generative AI thing or whatever. Let's have the AI be able to give a response. You know, I asked you to show this terrible scene of somebody killing a penguin or something, and it says, no, no, no, I don't do that. And it can give you an essay response about why it isn't gonna do that. That is probably gonna work up to a point. But here's one of the issues. There's a question of, when you connect an AI to something in the real world, like you have it driving a car, you have it making a decision about this or that thing in the real world, there's a certain amount of constraint you can put at the layer of actuation. You can say something like, the AI cannot drive the car into a brick wall at 50 miles an hour. We just won't let the car do that; as a matter of the mechanical operation of the car, it won't do that even if the AI tries to tell it to. So the things that are controlled by the AI, you can put constraints on those. That's much easier than putting constraints on the AI itself. 'Cause if you don't give the AI kind of freedom of thought, you're basically gonna prevent the AI from being useful. You have to give it a certain amount of freedom of thought; you can't box it in too much. Now, it's a very interesting question, what kind of constraints do make sense to provide. Which ones would we humans even think are worth providing? If you say the AI shouldn't tell people, oh, what's a controversial topic? I mean, pick any controversial

CODY:

Trump.

Stephen:

Okay. The AI shouldn't tell anybody to vote for or against Trump. Pick your side. Well, you know, people aren't going to agree. You could say, I'm gonna have an AI that's an anti-Trump AI, or I'm gonna have an AI that's a pro-Trump AI, and you as a human could pick: oh, I want the anti-Trump AI, for example. But people are not going to agree that every AI should be anti-Trump, or that every AI should be pro-Trump. People are not gonna agree. That's just the way that people are, so to speak.

CODY:

How long do you think it's gonna be before there is an equivalent AI or large language model to ChatGPT that you can simply download from GitHub and set up yourself, so that we have all of these independent AIs or large language models functioning that are influenced by their creators and owners? Do you think that's where we're headed, where we're taking this ChatGPT that's run by one company and it's going to become kind of individualized?

Stephen:

I think that would be a good outcome. You know, it's really a technical issue: how big does it have to be, how crunchy do the computers have to be? Some of the scientific things that I've worked on strongly point to the idea that it could be a lot smaller, and that it could be runnable on an individual computer and so on. My guess is that's where it's gonna go. It isn't there yet, and there are technical hurdles to making that happen, but my guess is that's where it's gonna go. And I think that's a pretty good situation. 'Cause it's kind of, do you have a totalitarian government, or do you have a more distributed situation? I mean, do you have something where all the AI-ness is in one box, so to speak, or does everybody have their own AI? If all the AI is in one box, that is something that can be very centrally controlled, and you might believe in the benevolent dictator or you might not, but that's the situation you are in if you've got it all in one box. If everybody has their own AI, I think that's a much better situation. I think it's also much better from the point of view of, you know, you say, well, the AI might start doing all sorts of terrible things in the world. But if there's a whole giant society of AIs, then you get a different set of values building up within that society of AIs, just as you do within human society. I mean, for example, one of the issues is, does the AI mind that it's switched off? If we don't give it some survival instinct, it's not going to care that it's switched off, so to speak. Yet if there's this giant society of AIs, and the AI that misbehaves, so to speak, is cut off from the other AIs, they say, we're not gonna talk to that AI, then whatever the AI might internally feel, which we don't know, it still is a practical matter that that AI is being removed from society and not able to cause trouble, so to speak. That's something one can imagine naturally happening when there are equal forces of many equivalently powerful AIs all interacting in this kind of big society. So that's a pretty good situation relative to, there's one master AI that runs the country, the world, whatever else.

CODY:

So you've used the term AI, but ChatGPT, it's a large language model, right? So it's something that just gathered all this information, and then it assigns a value to the input and determines, or guesses, what kind of response you want. So it's not really an AI; it's more like an early version of an algorithm in some ways, unless I'm wrong on that.

Stephen:

No, I mean, it depends on what you mean. Look, I've been paying attention to this for about 50 years now, and people have said, you know, when computers can do this, we'll know that we've truly achieved AI, whether this might be doing some math, or playing chess, or all kinds of different things. Then every time the computer's able to do it, or question answering, like we've done with Wolfram Alpha, every time the computer's able to do each of these kinds of things, people say, well, look, we can look inside. It's just engineering. It's not magic. Well, you know, here's the sad fact: we're not that magic either. We've got a hundred billion neurons in our brains, and they're little electrical devices, and they're connected to each other, and we learned a bunch of stuff from the experiences we've had in our lives. And we could in principle, and we are starting to be able to do this more and more, go in and see all those neuron firings, and we could say, look, it's not magic, it's just a bunch of neuron firings. So, in that sense, the question we could ask is, how close are the things that we are doing digitally, so to speak, to the kinds of things humans and human brains do? And the answer is, we're pretty close in a lot of areas. In some areas we've vastly surpassed what human brains can do; in other areas we're just about at parity with what human brains can do. And I think, if you say, but look, this thing can't do x, y, or z thing, which humans do well, eventually you're gonna find those things, 'cause the thing isn't actually a human, so to speak. If you want it to have all these experiences, and have a feeling of mortality, and have kind of a need to eat food and things like this, if you want all of those things, which are the bundle that come with being human, then you have to have a human. You don't get to have this thing that is a digital device, that's not a human, so to speak. So the thing is, it has a certain slice of being like a human-like intelligence that is artificial. You are always gonna be able to say, but it doesn't do this thing which is what humans do, 'cause it's just not actually a human.

CODY:

Can you see, from a computer science perspective, whether or not we can ever have a sentient AI?

Stephen:

Are we sentient? I mean, if we are sentient, you know, that's a complicated issue, because, look: I feel sentient, you say you feel sentient. How do I know that you are sentient? I know that I feel sentient internally, and I have this guess about other humans who I see, who kind of seem to act more or less like me; I extrapolate from my own feeling of internal sentience to say they must be sentient too. Now, right now we don't typically make the extrapolation, oh, and our computer is sentient as well. It's kind of interesting if you just think through, what does it feel like to be a computer? You say, well, what does it feel like to be you? What does it feel like to be another person? We don't know; we can guess. If we say, what does it feel like to be a computer: you know, the computer is sitting there, it's getting all this input, it's experiencing things, it's having inner thoughts, it's communicating with other computers. Eventually, you know, it crashes; it effectively dies. It gets restarted when you reboot the computer, and then it's learning a certain amount from what's left on its disk or whatever, and it's going to learn new things from the outside world. It's a very human-like experience in many ways, even for an existing computer, forget LLMs or anything sort of AI-like. So from inside the computer, if we imagine what it's like to be a computer, the experience is probably not that different, in many ways, from what it's like to be a human. So this question of what it means to have this experience that we can extrapolate from our own experience: it's not so hard to achieve. Now, you might say, what's the consequence of that? You might say, if it's sentient, then it should be treated like a person and have rights, let's say; you have various consequences. What does it mean operationally to be sentient? Does it mean that the thing has free will? Well, that's a complicated issue. A typical computer, you can't tell what it's going to do. This is the story of computational irreducibility. Just as you can't tell what a brain is going to do. If you're dealing with a lower animal, for example, you might be able to measure enough neurons in its brain, and you might be able to say, I kind of know what it's gonna do. You know, I kind of know this particular circuit in the brain of a songbird is going to cause it to sing this particular song. But as soon as we can get in and say, we know what it's gonna do, it doesn't seem to have free will. And as soon as it's sort of irreducibly complicated, it's like, we can't tell what it's going to do; we might as well just say it's figuring out what it's gonna do. 'Cause after all, it is figuring out what it's going to do. So, I mean, these are complicated stories. And one of the things that's happened in the other part of my life is working on the fundamental theory of physics and a bunch of things that come out of that.
Actually, on a lot of these questions about what is consciousness and how it relates to our perception of the universe and so on, we've really made a huge amount of progress in the last few years figuring out how that all works. And, you know, one of the things that's been most interesting to me is the fact that we are the way we are: that we are sort of observers who are bounded in the amount of computation we can do, and we believe we're persistent in time. Those attributes necessarily lead us to basically the laws of physics as we now know them, which is kind of exciting: it's possible to go from attributes of us as observers to a necessary deduction of the universe appearing to us the way it appears to us. So that's an application of what we mean by consciousness and so on: it means that we can mathematically show that the universe, to an organism, to an entity that has these attributes, must appear a certain way.

CODY:

Yeah, it's quite possible we might all be NPCs, you know, non-playable characters in somebody else's world. But that also brings me to computational universe theory. I know that's something that you've worked on before. Can you explain what that is?

Stephen:

Well, I mean, that name isn't a particularly common one to talk about. But the question is, how does our universe actually work? What's underneath the things that we perceive in the world? For example, people for a long time had wondered: you have a glass of water or something, it's like a fluid, it just flows around, and you say, what's inside? What is that water made of? And about a hundred years ago, maybe 120 years ago, people finally realized it's made of molecules. There are discrete little things that make up this glass of water. Then it was realized a little bit later that light is made of discrete things, photons. One of the things we haven't known about is space. We've always assumed that space is something continuous. One of the starting points for our theory of physics is that space is not continuous. Space is made of discrete points that are related to each other in different ways. All the points of space know about is their relations: points of space are kind of friends with other points of space, and the whole universe is just made up of this giant network of relations between points of space. And so all the things we know, you know, electrons, photons, atoms, whatever else, they're all just features of this giant network. Much as, if you look at a fluid like water, you can have a little eddy, a little vortex that's going through the water. It's made of lots of molecules. We can say, look, there's an eddy, it's going through there. But if you look underneath, it's just a bunch of patterns of motions of molecules. So similarly, in the physical universe, what seems to be the case is that the things we perceive, like electrons and so on, are just features of the details of these relations, this network that represents the structure of space. So this is kind of the starting point: that's the structure of space and the things that are in space. Then it turns out that one can derive the properties of gravity, the properties of relativity; one can then jump further and derive quantum mechanics. And in the end, it seems we can derive at least the general features of the general theories of physics, basically three such theories, and we can derive all of them. And so, to observers like us, it is inevitable that the physical universe must follow those laws. If we were different kinds of observers, if we were some kind of alien where we didn't believe in our persistence in time, for example, where we didn't think we were the same consciousness at another moment in time, where it was always changing, not something we can readily imagine, but let's say that was the case; or where we don't even think we have a single thread of consciousness, where we say we're constantly branching with many different threads of consciousness or something like this, we would conclude the universe works differently from the way that we conclude it works. But one of the things that's been, well, really cool as far as I'm concerned, and this gets deeply abstract at some level: the universe in a sense seems to follow all possible computational rules, but we are embedded within that thing that is following all possible computational rules.
But the attributes that we have, of the way we observe how the universe works, necessarily cause us to make these conclusions about how the laws of physics work. And so it's kind of a wonderful thing, which was not expected philosophically and scientifically, that there was a way to derive that the universe has to appear to observers like us the way it does, so to speak. So that's been something that's been very exciting to me. And there are just an unbelievable number of implications of that, across science and philosophy and lots of kinds of thinking about lots of sorts of things.
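
For the curious, the "giant network of relations between points of space" has a concrete computational form in the Wolfram Physics Project. Here is a sketch using the WolframModel function from the Wolfram Function Repository (fetching it needs internet access, and this particular rule is just one much-studied example, not the unique rule of our universe):

```wolfram
(* repeatedly rewrite relations between abstract points, growing the
   kind of network that plays the role of space in these models *)
ResourceFunction["WolframModel"][
  {{x, y}, {x, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}},
  {{0, 0}, {0, 0}},  (* initial condition: two self-relations *)
  12, "FinalStatePlot"]
```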

CODY:

And so, at the end of the day, everything can be reduced to math; everything can be calculated from a numerical perspective, including things like a sophisticated AI?

Stephen:

Well, I wouldn't call it math. It's really something very computational, something deep below math. When we talk about math, we're already talking about, you know, you learn algebra, it's like variables, x squared minus y squared or something. That's already a pretty high level: you've got variables, you've got powers, you've got subtraction, you've got all these kinds of things. The underlying structure of the universe in our models is something much lower level. It's just points and relations between points and so on. Actually, you can build math from the same stuff. I just recently wrote a book about the physicalization of metamathematics, which is all about this, and it's a way of understanding the foundations of math. Math is a funny thing, because there are many possible mathematics. We humans picked a particular mathematics, and we did that very consciously. We said, we're gonna have geometry, it's gonna have these axioms, and so on. That's a human choice. There are an infinite number of possible mathematicses, and actually, very much like physics, the nature of us as observers, as entities doing math, puts all sorts of constraints on the ways that we could have set math up. We picked a particular one. So math is another thing far above this underlying infrastructure of computation. It's a piece; physics is another piece. And in the end, does AI work using these kinds of principles? Yes, it does. Can we use the kind of formalism, the ideas about how computation works and plays out in the world, specifically to think about AI? I think the answer is yes, actually, and we have quite a project now to look at that. But that's a sort of exotic corner of this whole set of questions. I mean, it's the question of, you know, when ChatGPT packages up the knowledge and language of our species and turns it into a bunch of neural net weights, and then it's able to make use of that, how does that really work? Why does it really work? That's a question that is, in the end, a sort of science question. I'll give you an example. There's this parameter called the temperature, which doesn't have much to do with physical temperature; it's just a mathematical parameter. In something like ChatGPT, it's usually set to 0.8 for typical essay writing. If you crank that temperature up, eventually it will start talking nonsense. It'll get more and more bizarre, and above some critical temperature, it's kind of like water boiling into steam: the thing will just start prattling on and talking nonsense. Turn the temperature down, and eventually it gets very boring and monotonous, so to speak. How that works, and why, for example, there's a sharp transition as you increase that temperature to where it just falls apart and starts talking nonsense, we don't know yet. My guess is that's a thing that is derivable using methods that, you know, exploit mathematical physics and things like this, but it hasn't yet been really nailed down. But those are the kinds of connections that I think one can make.
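
The temperature parameter he describes is just a rescaling of the model's next-token probabilities before sampling. A toy Wolfram Language sketch, with a made-up three-word distribution standing in for real LLM logits:

```wolfram
(* hypothetical next-token scores after "the cat sat on the ..." *)
logits = <|"mat" -> 5.0, "sofa" -> 3.5, "moon" -> 1.0|>;

(* softmax with temperature t: low t sharpens the distribution,
   high t flattens it toward uniform randomness *)
sampleNext[t_] := RandomChoice[Values[Exp[logits/t]] -> Keys[logits]];

Table[sampleNext[0.8], 10]  (* typical essay setting: mostly "mat" *)
Table[sampleNext[8.0], 10]  (* high temperature: near-random output *)
```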

CODY:

Earlier you were talking about math and how it's kind of inherent in the universe; well, it's a question of whether it's inherent or whether it's a human creation. And that actually leads me to something: the double slit experiment, as you know, about light and matter and how we view it, and then it changes from a wave to something else, I believe. But that brings me, actually, just to my own curiosity in terms of science and the universe. I've seen a new theory that the universe is conscious, and everything around us has a level of consciousness to it, and that might explain the double slit experiment. I'm just curious if you've explored, I'm sure you have, thinking about the universe and our existence.

Stephen:

Oh, sure. I've written hundreds, perhaps even thousands by now, of pages about this, and yacked about this for gazillions of hours. Yes. I mean, this whole question about intelligence, consciousness, these kinds of concepts: lots of things are that way; they're just not that way the way that we humans are. In the end, our brains are physical constructs that have a bunch of electrical signals going around in them. Something like the weather is something that consists of a bunch of pieces of air and water vapor and so on, moving around according to certain rules, just as the neurons in our brains operate according to certain rules. So in some sense, there's the same level of computational sophistication in what's going on in the weather and what's going on in our brains. There's a thing I call the principle of computational equivalence, which captures this idea. But the issue is, to what extent are those things aligned with the human view of the world? The human view of the world has evolved over the course of the time that humans have been around. But there's a characterization, I talk about it as points in rulial space: different minds, different things that one might think of as mind-like, exist at different places in rulial space. And so two human minds might be pretty close together. A human mind relative to your average dog is a bit further away; relative to the weather, it's considerably further away; relative to the whole universe, it's also far away. But absolutely, the universe satisfies the principle of computational equivalence. It's kind of interesting, 'cause I invented this idea back in the beginning of the nineties, and it's something where at first it seems like, really, can this possibly be true? But now there's a generation of scientists who are younger, who kind of grew up knowing it, and for them it's sort of obvious that it's true. It's kind of obvious to me that it's true too, but it's interesting to see how that progresses over time. But the idea that there's this computational equivalence between lots of things in the universe, including the whole universe itself, is something that is, I think, pretty clearly understood at this point. Now, in terms of the relationship between that and things like quantum mechanics and the double slit experiment, that's a whole big technical stack of stuff. But, you know, to say one thing about that: quantum mechanics is kind of the theory of how small things, little tiny things in the universe, work. And its main feature is, when we think about ordinary classical physics, we think that definite things happen. You, I don't know, roll a ball, it's gonna roll in a definite direction. In quantum mechanics, what it says is, there are many paths that it could take, and we only get to know things about the probabilities for different collections of paths. So there are these many different paths of history that get explored, and that's the big story of quantum mechanics: many paths of history get explored. So now the question is, why do we think definite things happen, if there are all these paths of history being explored? Why is it that we humans think definite things happen? Well, I mentioned before, one of the features that we have is we believe we're persistent in time.
We have a single thread of consciousness. We believe that definite things happen. But in a sense, we are part of this universe that is branching through all these different paths of history and so on; that's what's happening in our minds. So it's kind of like the story of, how does a mind that is going through all these different branches of history perceive a universe that is also going through all these different branches of history? And the thing that is kind of the story of quantum mechanics is how that fits together. Well, for example, one of the mysteries in the double slit experiment is, you've got these two slits, and you have photons that can go through one slit or the other. And you look at some screen behind these slits, and you say, did a photon arrive in the middle? Well, a photon could go through one slit just fine. It could go through the other slit just fine. But you'll never find a photon in the middle. There's destructive interference between, well, the waves that correspond to these photons. But it's kind of weird, because you would think, with classical physics, if it can go through one side or it can go through the other side, well, by the time it could go through either side to get to this place in the middle, it's gotta be there in this place in the middle. It turns out that in our understanding now of how quantum mechanics works in our models, what seems to happen is that the photon is in this collection of different branches of history, and they're laid out in what we call branchial space. And effectively, going through each different slit corresponds to going to a different end of branchial space, a different place in branchial space. And when we humans, with our attempt to make the universe fit together, say, can we make it all fit together? Because these photons went to different ends of branchial space, there's no way to make them fit together like that. And so we say, well, there isn't a photon there. I mean, there's a big depth of technical stuff which I've just elided in that description. But that's maybe some flavor of how one starts to think about things like that.

CODY:

I'm curious about your perspective on ChatGPT and its role in education, as there have been some curricula and teachers that try to outright ban it, and then there have been some other professors and teachers that look at it as a tool, akin to something like Excel, that can be useful in the future. Where do you stand on the integration of AI in education?

Stephen:

Look, if people are gonna have a tool that they're gonna use for the rest of their lives, then you should educate them about it. To say, you're gonna be able to use this for the rest of your life, but you shouldn't use it when you're getting educated, is kind of silly. So my point of view is: integrate it into education just as you can expect to integrate it into life. And it's kind of like the tools I've built, we've built, for doing computation. Actually, in the 35 years they've been around, I would say that professors and teachers and so on adapted quite quickly: what can we now do, now that we have these tools that students can use, that we the professors can use, that the students can use for the rest of their lives, and so on? That's the place to adapt. Not to say, oh my gosh, the homework exercise I gave last week can now be done by a computer. It's like, well, pick a new homework exercise. Don't just try and prevent the student from using the computer. Now, I think it's the case that it looks very promising to use LLMs as a way to develop tutoring systems that can be very personalized to individual students, that can learn things about how students learn, what students know, and so on. I think there's a real potential to diffuse the kind of personalized education that most students never get a chance to have; there's a chance to have that much more broadly, which I think is quite exciting.

CODY:

And there have been discussions about the ability of ChatGPT to remove critical thinking. I'm curious, do you think it's going to cause a generation that lacks critical thinking, or do you think it's gonna help them in some way?

Stephen:

Well, people will learn to write better essays, 'cause if you want to prompt ChatGPT to do what you want, you've gotta explain yourself well. It's a good test of expository writing: if you can't explain yourself, the AI is gonna go off and do something completely different from what you thought it should do. So I think the thing that I hope will happen is people will realize that a lot of these almost mechanical tasks can now be automated, and the thing we really should be teaching is how to think about things. And I think what's happening with ChatGPT very much goes in the direction of: think more broadly. Actually think. It's like, well, I know the technicalities of how to answer this question, but the real thing is, well, what question should you be asking? Which is something that involves human choice and thinking, in a way that is sort of almost by definition not automatable. And that's the thing I'm hoping for: that there will be more breadth of education, because these tall towers of technical detail aren't as necessary to teach, 'cause they can be done automatically, so to speak. And I'm hoping that's the direction it goes in. I mean, we'll see. Education is a very slow-moving area in general. It's always a very frustrating area, because it's kind of like, when we first released Mathematica in 1988, there were very quickly a bunch of, you know, K-through-12 schools using it. And by the way, higher education is a different story; that's much more connected to research and so on. But in high school, middle school, whatever else education, it was very confusing, 'cause at the very beginning there were people using our technology very quickly, and it's like, this is really cool, this is great. But over the years it didn't expand as much as it should have done, because it's just such a complicated area, with so many forces and, you know, ways that change gets prevented and so on. It's something where I don't know exactly where that whole change is gonna come from. I mean, we put on these summer programs for high school kids and middle school kids and so on, which we've been doing for years, and it's really cool to see what these kids can do if you teach them computational language; it becomes this kind of superpower that they can apply, and they can do all kinds of interesting things with it. But delivering all of that, and how that connects to the existing institutional education mechanisms, is a huge challenge. And that's not my kind of challenge, I have to say. I'm maybe decently good at figuring out what's possible and what tools to build and so on, but I'm not the person to figure out how you move the giant institutional ship and turn it in some particular direction.

CODY:

And I know that we're short on time, so I'll just ask a final question: where can our listeners go if they want to see what you're currently working on?

Stephen:

Well, a good place is writings.stephenwolfram.com. I also do a whole lot of livestreams. At the beginning of the pandemic I thought, oh, there are all these kids who are not gonna be in school; I should offer a kind of Q&A about science and technology for kids and others. So I started doing that at the beginning of the pandemic, and I'm still doing it every week, and I find it a lot of fun. People ask all kinds of interesting things; it gets me to think about a lot of stuff that I wouldn't normally think about. I also do another weekly livestream about business, innovation, and managing life, and another one about the history of science and technology. And I also do another strange thing: if you really wanna know what I'm working on, we livestream a bunch of the internal software design meetings for our company, many times a week. That's really the front lines of what I do for a living every day, so to speak. Why not have as much openness as I can about the intellectual things I'm doing? So, actually, one of my most extreme things is video work logs, which are kind of just me; I just switch on the screen recorder when I'm trying to figure stuff out and doing work. I don't know if that makes for good television, but I figure I might as well do it. And when people read things I write and they say, why do you think that, why is that true, eventually you can just go back and figure out, okay, when did that goofy guy actually write that sentence? What was he doing? Why was that written that way? So that's another thing. And you can follow me on all the usual social media, so to speak. Those are some of the ways that I try to engage with the world. And occasionally I do podcasts like this one.

CODY:

Thank you. Well, I'm really grateful that you were able to spend your time with us here today, and I really loved the insights that you were able to provide. I'll definitely be checking out some of your streams. So thank you again for your time.

Stephen:

Thanks a lot.

Intro
About Stephen Wolfram
Wolfram|Alpha: the beginnings
Mathematica and The New Kind of Science
Computational Language and AI
Wolfram|Alpha + ChatGPT
The good vs. the bad: how values and morality fit in
Computational Irreducibility: what is it?
Values and contradictions: there will always be the other side of the coin
AI for everybody: is AI the new people's gadget?
AI, or just a larger, more sophisticated algorithm?
Sentient AIs: what is being sentient?
Things just got deep: what is computational universe theory?
And you thought math was boring
The Universe and existence as we perceive it, the principle of computational equivalence, and more
ChatGPT in education and for future generations: bane or boon?
How to get more of Stephen Wolfram and final thoughts