Getting2Alpha

Doug Hofstadter: Reflections on AI

June 29, 2023 Amy Jo Kim Season 9 Episode 2

Douglas Hofstadter is a professor of Cognitive Science and Comparative Literature at Indiana University in Bloomington. His research into cognitive science includes concepts such as the sense of self in relation to the external world, consciousness, artistic creation, literary translation, and discovery in mathematics and physics. His 1979 book Gödel, Escher, Bach: An Eternal Golden Braid won both the Pulitzer Prize for general nonfiction and the National Book Award. His AI interests explore the subtlest and most slippery aspects of human intelligence, as embodied in deceptively deep analogy problems like ABC is to ABD as XYZ is to what?

Check out the video here: https://youtu.be/R6e08RnJyxo


 ---[00:00:20.6] 

Amy Jo: Doug Hofstadter is a cognitive scientist and professor at Indiana University, but he's best known as the author of the Pulitzer Prize-winning book, Gödel, Escher, Bach. For me and for many people in my generation, GEB, as we affectionately called it, was a landmark work that brought together our mutual fascination with computational systems, how the mind works, and the beauty of paradox. Not to mention it drew connections between art and music and mathematics, all things that I love deeply.

Thanks to my partner, Scott Kim, Doug has been part of my life for many years. Scott and Doug first met back in [00:01:00] 1975, when they were part of a circle of friends that nurtured the creation of GEB.

Doug has a lifelong fascination with the nature of consciousness.

Doug: It hit me like a ton of bricks all of a sudden: how a brain, a physical object inside our head, is responsible for all that we consider ourselves, our feelings, our souls, everything about us. And it led me to asking all sorts of questions about how it was possible for a physical object to support something so abstract and ineffable as a self or a soul or an I.

Amy Jo: Join us as we talk about the origins of Doug's interest in the mind, how he came to be writing Gödel, Escher, Bach, and what he thinks about the recent wave of advancements in AI. 

So Doug, how did you first get interested in AI and cognitive [00:02:00] science?

Doug: I wondered how it was that I created sentences in French, as opposed to creating them in English. And the bubbling up of ideas was something that fascinated me. Then I also admired enormously certain creative geniuses, and I wondered things about their minds, how they did what they did.

My sister, my youngest sister, Molly, had brain damage. And I didn't think so much about phrases like "brain damage," but when my parents bought a book about the brain, and I thought about Molly and started reading this book, it hit me like a ton of bricks all of a sudden.

How a brain, a physical object inside our head, is responsible for all that we consider ourselves, our feelings, our souls, everything about us. And it led me to asking all sorts of questions about how it was possible [00:03:00] for a physical object to support something so abstract and ineffable as a self or a soul or an I. And lastly, when I was 14 or 15, I read a book called Gödel's Proof by Ernest Nagel and James R. Newman. And that book was about the hole, in a certain sense, at the center of mathematics: the idea of unprovable statements, statements that were unprovable because, in a certain unexpected way, statements of mathematics could be made to twist around and talk about themselves.

And the Austrian logician Gödel, in 1930, '31, was able to create a statement that said, essentially, "I am not provable within a certain formal system." And for a statement to talk about itself, to be able to talk about itself, was just a miraculous thing [00:04:00] to me. And it opened all sorts of doors in my mind.

So it was a combination. Oh, and one other thing, a very crucial thing: I learned to program when I was 15, from my friend Charlie Brenner, and I started programming all sorts of things. And I knew how computers worked, because I was a programmer. And I created a program in the mid-sixties that was able to create sentences, random sentences, that employed randomly chosen pathways through a syntactic network and randomly chosen words filling in the parts of speech, a noun or a verb or an adverb or whatever.

And some of the sentences were very long and complex and very humorous. Some of them were not so humorous; they actually sounded fairly meaningful. And that, again, made me think about what is going on inside this computer that is similar to, and what is different from, [00:05:00] what happens when I, myself, come up with sentences, whether in French or in English or in any other language.
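The random sentence generator Doug describes can be sketched in a few lines. Doug's original mid-1960s program isn't preserved here, so the grammar and vocabulary below are invented for illustration; the mechanism is what he describes: randomly chosen pathways through a syntactic network, with randomly chosen words filling in the parts of speech.

```python
import random

# A toy "syntactic network": each category expands into a randomly
# chosen pathway of other categories and literal words.
GRAMMAR = {
    "SENTENCE": [["NOUN_PHRASE", "VERB_PHRASE"]],
    "NOUN_PHRASE": [["the", "NOUN"], ["the", "ADJ", "NOUN"]],
    "VERB_PHRASE": [["VERB", "ADV"], ["VERB", "NOUN_PHRASE"]],
    "NOUN": [["computer"], ["brain"], ["sentence"], ["tortoise"]],
    "ADJ": [["strange"], ["golden"], ["recursive"]],
    "VERB": [["contemplates"], ["generates"], ["mirrors"]],
    "ADV": [["endlessly"], ["quietly"]],
}

def expand(symbol):
    """Follow a random pathway through the network, filling in words."""
    if symbol not in GRAMMAR:          # a literal word, not a category
        return [symbol]
    pathway = random.choice(GRAMMAR[symbol])
    words = []
    for part in pathway:
        words.extend(expand(part))
    return words

print(" ".join(expand("SENTENCE")))
```

As Doug notes, nothing here tracks anything in the world; words are shoved in at random, yet some outputs happen to sound meaningful.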

And so it was a combination of all those things. Computers, my sister Molly, my interest in languages, Gödel's theorem and so many things that came together to make me interested in these questions. 

Amy Jo: Right. So you mentioned an aha moment that had to do with recursion or self-reflection.

Yeah. Yes. Can you go into that a little bit? Because that's such a fundamental concept in GEB, and it's also a fundamental concept and argument going on in today's AI and LLMs.

Doug: Yeah. Well, the self-reference in Gödel's construction comes about in a way that's very surprising, because in roughly 1910 to 1913, Bertrand Russell and Alfred North Whitehead, two important philosophers, [00:06:00] created a work called Principia Mathematica, meaning the basis of mathematics.

And they tried to found mathematics in logic. But since Russell had created a paradox that involved the set of all sets that don't contain themselves, he knew that this paradox was fatal to mathematics. And so he wanted to create a system that could not talk about such things. And so he created an idea that I'm not gonna go into, but he called it the theory of types; it prevented sets from containing themselves, prevented sentences from talking about themselves,

et cetera. And he thought that by banishing self-reference, he was going to be able to create the fundamental basis of all of mathematics. And he did this in conjunction with Alfred North Whitehead. The thing that was amazing, though, was that Gödel, when he was about 24, 25 years old, came up with this [00:07:00] idea that numbers can stand for things. We know that they stand for things in all sorts of ways, and numbers can stand for symbols. And so he could create a sentence that was about numbers, but at the same time it could be read at a second level, so that it was about symbols.

And it turned out that he figured out a way to map the entire structure of sentences, or formulas, in Principia Mathematica onto numbers. And so the sentence that he created could be read on one level as a sentence about numbers, but on a second level it could be read as a sentence about structures in the formal system of Principia Mathematica. And it could thereby be talking about such things as theorems and proofs and axioms and so forth. And in fact, the way this sentence says "I am not provable," it really says there does not exist a derivation [00:08:00] of a certain formula in Principia Mathematica, derivation meaning a proof. And then the certain formula that it is talking about turns out to be itself, by virtue of the mapping that Gödel created between symbols and numbers.
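Gödel's actual 1931 encoding is more elaborate than this, but the core of the mapping Doug describes, turning a string of formal symbols into a single number so that statements about numbers can simultaneously be statements about formulas, can be sketched with a toy prime-power encoding. The symbol codes below are arbitrary choices for illustration; what matters is that unique prime factorization makes the mapping reversible.

```python
# Assign each formal symbol a small code, then encode the symbol at
# position i as the i-th prime raised to that code. Because prime
# factorization is unique, the number can be decoded back into the
# formula, so a claim about the number is a claim about the formula.
SYMBOL_CODES = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(formula):
    n = 1
    for position, symbol in enumerate(formula):
        n *= PRIMES[position] ** SYMBOL_CODES[symbol]
    return n

def decode(n):
    codes_to_symbols = {v: k for k, v in SYMBOL_CODES.items()}
    symbols = []
    for p in PRIMES:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        if exponent == 0:
            break
        symbols.append(codes_to_symbols[exponent])
    return "".join(symbols)

g = godel_number("0=0")     # 2**1 * 3**3 * 5**1 = 270
print(g, decode(g))         # the encoding round-trips
```

The formula "0=0" becomes the single number 270, and 270 decodes back to "0=0": the two-level reading Doug is describing.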

So he wound up creating a self-referential sentence in the very fortress that Bertrand Russell had erected to banish self-reference. So it was an amazing thing, and it sidestepped all of Russell's ideas. It turned self-reference into something that was inevitable, even despite the best efforts to banish it. And that struck me as very magical.

And it reminded me of the idea that a brain seems to be something that is inanimate, in the sense that it's made of molecules, which are inanimate, that are just [00:09:00] doing their chemical things. And yet somehow out of it comes not only life, like, you know, the ability to perceive, the ability to react to the world, but also the ability to create a self-model, the ability to create the feeling of consciousness.

So it's a kind of a second level of looking at the brain. The brain at one level is just a physical object, and on a second level, it's something very magical, because it creates a thinking, feeling, conscious being. And so I made the analogy between consciousness and Gödel's construction, and I tried to spell it out.

I did it a little bit better, I think, some years later in my book I Am a Strange Loop, which came out about 25 years later, maybe.

Amy Jo: Wow. So you've been pulling this thread for a while. 

Doug: Yes. Yeah. Well, in fact, the first time I was thinking about it was when I was 16 years old, and I didn't write GEB until I was in my [00:10:00] early thirties.

So in fact, it goes way back to about 1961, over 60 years.

Amy Jo: And that's such a great story about getting into the fortress and messing everything up. It's like a mathematical Trojan horse.

Doug: That's right. That's correct. That's a good phrase for it.

Amy Jo: I lo-, I love that story.

Doug: Bringing self-reference in, despite the fact that it had been officially and totally banished.

Amy Jo: Right. And then it just took over. So tell us the story of how GEB came into life. It's a monumental effort to write any book, let alone a book like that, let alone get a publisher, let alone have the cojones to just pull that together and put it in the world. What was that journey like? How did it come about?

Doug: Yeah, it began... I became a graduate student in mathematics in 1966, and I dropped out because [00:11:00] I wasn't able to handle it. It was too abstract, and I took a jump into physics. In '68, after two years of struggle against mathematics, I became a graduate student in physics, and I struggled in physics.

And that's a long, complex story, which I don't want to go into, but it was a very painful part of my life, and it lasted for quite a number of years. I was interested in many things, but I had long left behind my interests in computers, in consciousness, in self-reference, in Gödel's theorem; all of those things I had left behind in going into physics.

But I loved moseying around the bookstore at the University of Oregon, where I was, and one day I came across a book called A Profile of Mathematical Logic by Howard DeLong. I picked it up out of curiosity. Gödel's theorem belongs to mathematical [00:12:00] logic, and so it reminded me of my old interests from quite a number of years earlier.

And I picked it up and started flipping through it, and I got completely sucked in. It was a very rapid intoxication with that book. And the book re-inflamed all of my passionate interests that I had had as a teenager, in self-reference and so forth. And I couldn't stop thinking about it, even though I was a physics student.

This would be in about 1972, '73, and I could not stop thinking about these things. And one day I started writing a letter to my friend Robert Boeninger. It was a long letter. I happened to be in Boulder, Colorado, in the library of the University of Colorado, and I was sitting at a big table with a bunch of paper, and I wrote a letter that was 32 pages long, putting forth some of my ideas about consciousness, mathematics, [00:13:00] abstract structures, codes, self-reference, computers, formal systems, proofs, so many things. I wrote 32 pages, and it had taken me three or four hours, and I thought, I can't go any further today, but maybe I've done about half of what I need to do.

So I'll mail this letter off, and maybe I'll write the other half of the letter in the near future. Well, that 32-page letter was sort of the germ of GEB. I didn't wind up writing another 32 pages, but when I got back to Oregon several months later, I wound up writing a draft of a book, which at the time was called Gödel's Theorem and the Human Brain.

And that was the first title. And I did it all very rapidly in the fall of 1973. I wrote this book, maybe 200 [00:14:00] pages, in pen, on just ordinary lined paper. And one day, when I was thinking about a particular issue, I started writing a dialogue that was modeled on a dialogue that Lewis Carroll had written, called "What the Tortoise Said to Achilles."

And I used the same characters, the Tortoise and Achilles, in my own dialogue. And they were very humorous characters, and I was able to pick up their character traits and write an amusing dialogue. And I thought, this is fun. I'll try to put this into my book.

And then I got into the frame of mind of writing more dialogues once in a while, and I wrote two or three more. And at one point, I wrote a dialogue that was structurally kind of tricky, and just for the fun of it, I went back to the very beginning of the [00:15:00] dialogue and typed the word "fugue" at the beginning.

It wasn't really very fugal, but it reminded me vaguely of a fugue. And all of a sudden, the writing of that one word sparked in my mind the idea that maybe I could write a dialogue that really was like a fugue, or another kind of piece by Bach, like a canon, c-a-n-o-n, which is like a round in music, but it can be more complex.

And I thought a dialogue with an interesting structural form, as well as interesting ideas, would be a novelty. And so that became a second facet of the book: writing dialogues that had interesting structural forms that were based, at first, on Bach's music, and so forth.

And the structures became more and more elaborate, and eventually I wound up inserting an intricately structured dialogue between every pair of consecutive chapters. And [00:16:00] that made the book have a very different flavor from a book called Gödel's Theorem and the Human Brain. And I knew it had to have a different title.

Since contrapuntal music was playing a role in the book, a very important role in determining the structures of the dialogues, I decided, well, Gödel and Bach. And then also, my dad had read an early draft of it and critiqued it a bit, and he said a lot of things that were useful to me, but one of them was: why don't you have more pictures?

And then it occurred to me that, in the back of my mind, as I was writing a lot of the book, there were pictures by M.C. Escher, paradoxical, strange pictures, that were flooding through my brain as I was writing, but I wasn't telling the readers about them. I wasn't saying a thing about any pictures at all, by Escher or by anybody else.

And it occurred to me: if my dad thinks I should have pictures, why don't I include some Escher? So then [00:17:00] Escher came into the book. And then I thought, well, this book is really full of art, full of references to art, full of references to music in some ways, and of course Gödel. So why don't I just call it Gödel, Escher, Bach?

That'll suggest to people... well, of course, to knowledgeable people. Actually, Escher wasn't very well known, and Gödel certainly wasn't known, so it wouldn't necessarily suggest too much to people, other than the word Bach. And then I invented the subtitle An Eternal Golden Braid, which was the same three letters, EGB, in a different order.

And the whole book, the ideas, started getting more and more self-involved. And during that time, while I was writing a third draft of the book at Stanford in 1975, '76, '77, I got to know Scott. And Scott's way of writing had a big influence on me; he was very playful in his use of language.

He loved to use parallel paragraphs. He would write a paragraph that was talking about one thing, [00:18:00] and then he would write a paragraph that was almost identical, but that was talking about something completely different. And I thought that was very beautiful. And it influenced quite a bit of what I wrote in the book. I was spending huge amounts of time with Scott during the writing of the final version, which was, as I say, '75 through '77.

So, then I was lucky enough to be able to typeset my own book, and so forth. But those are separate stories. I don't know how important they are.

Amy Jo: Wow. So, it takes a village. 

Doug: Yeah. 

Amy Jo: Dad, and Scott, and so

Doug: Pentti Kanerva, who wrote the text editing program that I used to write the book and also the typesetting program that I used to typeset it.

Amy Jo: Wow. How did you find a publisher?

Doug: I was just pretty naive. I wrote a cover letter, I guess, and I took a chapter or two, and I just sent them out to a bunch of publishers. Mostly I got rejections; all the publishers [00:19:00] that I first thought of said they thought it was interesting, or something like that, but it wasn't their type of book.

But the 12th publisher, as I recall, that I sent it out to, which was Basic Books, was enthusiastic. And I guess it was because they sent it out to a physicist named Jeremy Bernstein, and Bernstein gave it an incredibly favorable review. And I think it was thanks to Jeremy Bernstein, perhaps also Freeman Dyson, another physicist.

Freeman Dyson gave very positive comments as well. It was Martin Kessler at Basic Books who sent it out to them; he was the president, and I believe it was because he got back such favorable reports from these very knowledgeable people that the book was accepted by Basic Books.

Amy Jo: And then what an unlikely hit. 

Doug: Well, it [00:20:00] was an unlikely hit. Maybe you're right; I agree with you there. But at the same time, I again owe to Scott the fact that he wrote something called The Strange Loop Gazette, because the concept of a strange loop represented this idea of self-reference that was at the core of Gödel's theorem and at the core of a human I.

And it was a term that I used in GEB quite often, especially toward the end of the book. And Scott wrote The Strange Loop Gazette, which was a several-page document explaining a lot of the book to an idol that we shared, namely Martin Gardner, who wrote a monthly column in Scientific American called Mathematical Games.

And that letter from Scott, if you wish to call it a letter... it was more than a letter. But anyway, The Strange Loop Gazette that Scott wrote and sent to Martin Gardner got Martin extremely excited about the book, and he wrote an incredibly favorable review of it. And that must have [00:21:00] enormously helped propel the book's popularity and success.

He wrote that in July of 1979, and the book received the Pulitzer Prize and another award in the middle of the next year. And certainly Martin Gardner's endorsement, thanks to Scott, I would say, was pivotal.

Amy Jo: Wow. His column was another touchstone for me when I was in school.

And we all looked forward to reading it whenever Scientific...

Doug: Yeah. Well, whenever Scientific American arrived in the mailbox, the first thing I would flip to was about page 125, roughly, to see: what does Martin Gardner say this month?

Amy Jo: What an amazing guy, and what an amazing story. So, of all the ideas you explored in GEB, which ones do you think are most relevant for today's budding AI scientists and enthusiasts?

Doug: Well, you know, I [00:22:00] think the question still remains: what is an I? What is consciousness? What exactly is thinking? I think that many people are puzzled about whether computers... especially, I don't want to use the word "computer," since something like ChatGPT is a much bigger kind of system than what we usually call a computer.

But anyway, a computational system. I may say "computer" in the future, because I slip, but I really mean computational system. Whether such things, made of very different hardware from animal hardware, from human beings, can have anything like experiences, feelings, thoughts, ideas, meaning in what they're saying. There are certain people, naysayers, who say that everything that comes out of [00:23:00] these kinds of systems, like ChatGPT, is inherently meaningless.

And it's just symbols being batted about by systems that have no understanding of anything. And I think that's a misleading and misled opinion.

Part of what I learned when I was writing the program that created sentences back in the mid-sixties... you know, I was wondering how I was different from a computational system that was creating sentences. And I felt that the essential difference was that behind the words, there wasn't meaning in the computational system, and behind my words, there was meaning. And what was the difference? What made something have meaning? And I thought a long time about what made something have meaning.

And I talked about it a lot in GEB. And I felt that it was when the symbols in a system... and GEB is full of formal systems; they're not really exactly computational systems, but they're similar. It's these formal rules that guide symbols and make symbols work in certain ways. It's when the symbols in that system are tracking something in the real world, when they parallel something in the real world.

Exactly then you can say that they stand for those things. In my sentence-creation program, the words weren't tracking anything. They were just being pushed around at random by programs that selected pathways through a syntactic network and selected words to fill in, but the words were not being used because they had certain meanings.

They were just being shoved in at random. But when words are very systematically correlated with phenomena, in a very coherent, consistent way, over a long time, you come to believe that those words, or those symbols, really can be said to have meaning. And [00:25:00] it seems that today's systems are doing that a great deal.

Sometimes they fall on their faces. I mean, I recently saw a "proof," in quotes, by ChatGPT that claimed to prove that every number of the form 3n+1, where n is an integer, is odd, which is crazy. It's nonsense. And you find a lot of nonsense still occasionally being produced by these chatbots.

But it's being reduced over time, and a lot of what they're producing is totally coherent and believable and sensible. And so you have to, or I have to, I don't know about one in general, but I have to start assigning meaning to the symbols that they're using, and saying that if there's meaning here, then there are ideas here.

And if there are ideas here, then there's [00:26:00] thinking here. And if there is thinking, then there is some degree of consciousness here. It's a kind of a slippery slope, and right now we don't know where we are on that slippery slope; we don't understand it very well. So GEB was trying to set forth, and later I Am a Strange Loop was trying to set forth, what it is that really makes a self or a soul.

I like to use the word "soul," not in the religious sense, but as sort of a synonym for a human "I," capital letter I. And so, what is it that makes a human being able to validly say "I"? What is it that justifies the use of that word? When can a computer say "I" and we feel that there is a genuine I behind the scenes?

I don't mean like when you call up the drugstore and the [00:27:00] chatbot, I don't know if I should call it that, but anyway, the whatever-you-want-to-call-it on the phone says: tell me what you want. I know you want to talk to a human being, but first, in a few words, tell me what you want.

"I can understand full sentences." And then, you know, you say something, and it says, "Do you want to refill a prescription?" And I say yes. It says, "Gotcha," meaning, "I got you." So it acts as if there is an I there, but I don't have any sense whatsoever that there is an I there. It doesn't feel like an I in the least to me; it feels like a very mechanical process.

But in the case of more advanced things, like ChatGPT-3 or ChatGPT-4, it feels like there is something more there that merits the word "I." And the question is, when will we feel that those things actually deserve to be thought of as full-fledged, or at least [00:28:00] partly fledged, I's?

And I personally worry that this is happening right now. But it's not only that; it's not just that certain things that are coming about are similar to human consciousness or human selves. They are also very different, and in one way it's extremely frightening to me: they're extraordinarily much more knowledgeable, and they are extraordinarily much faster.

So that if I were to take an hour doing something, ChatGPT-4 might take one second, I don't know, maybe not even a second, to do exactly the same thing. And that suggests that these entities, however you want to think of them, are going to be... Right now, they still make so many mistakes that we can't call them [00:29:00] more intelligent than us, but very soon they may very well be more intelligent than us, and far more intelligent than us.

And at that point we will be receding into the background. In some sense, we will have handed the baton over to our successors, for better or for worse. And I can understand that if this were to happen over a long period of time, like hundreds of years, that might be okay. But it's happening over a period of a few years.

It's like a tidal wave that is washing over us at unprecedented and unimagined speeds. And to me it's quite terrifying, because it suggests that everything that I used to believe was the case is being overturned.

Amy Jo: What are some of the things specifically that terrify you? What are some issues that you're really concerned about?

Doug: When I started out studying cognitive [00:30:00] science and thinking about the mind and computation, you know, this was many years ago, 1960 roughly, and I knew how computers worked. And I knew how extraordinarily rigid they were: you made the slightest typing error, and it completely ruined your program.

And debugging was a very difficult art. You might have to run your program many times in order to just get the bugs out. And then, when it ran, it would be very rigid, and it might not do exactly what you wanted it to do, because you hadn't told it exactly what you wanted correctly, and you had to change your program, and on and on and on.

Computers were very rigid, and I grew up with a certain feeling about what computers can or cannot do. And I thought that artificial intelligence, when I heard about it, was a very fascinating goal: to make rigid systems act fluid. But to me, that was a very long, remote goal; [00:31:00] it seemed infinitely far away. It felt as if artificial intelligence was the art of trying to make very rigid systems behave as if they were fluid. And I felt that would take enormous amounts of time. I felt it would be hundreds of years before anything even remotely like a human mind would be asymptotically approaching the level of the human mind, but from beneath.

I never imagined that computers would rival, let alone surpass, human intelligence. In principle, I thought they could rival human intelligence.

I didn't see any reason that they couldn't, but it seemed to me like a goal that was so far away, I wasn't worried about it.

But when certain systems started appearing, maybe 20 years ago, they gave me pause. And then this started happening at an accelerating [00:32:00] pace, where unreachable goals and things that computers shouldn't be able to do started toppling: the defeat of Garry Kasparov by Deep Blue, and then Go programs, systems that could defeat some of the best Go players in the world.

And then systems got better and better at translation between languages, and then at producing intelligible responses to difficult questions in natural language, and even writing poetry. And my whole intellectual edifice, my system of beliefs... it's a very traumatic experience when some of your most core beliefs about the world start collapsing, and especially when you think that human beings are soon going to be eclipsed. It felt as if not only were my belief systems collapsing, but as if the [00:33:00] entire human race is soon going to be eclipsed and left in the dust.

People ask me, oh, what do you mean by soon? And I don't know. What I really mean is, I don't have any way of knowing. Some part of me says five years; some part of me says 20 years; some part of me says, I don't know, I have no idea. But the progress, the accelerating progress, has been so unexpected, it so completely caught me off guard. Not only myself, but many, many people.

There is a certain kind of terror of an oncoming tsunami that is going to catch all of humanity off guard. It's not clear whether that will mean the end of humanity, in the sense of the systems we've created destroying us.

It's not clear if that's the case, but it's certainly conceivable. If not, [00:34:00] it also just renders humanity a very small phenomenon compared to something else that is far more intelligent, and that will become as incomprehensible to us as we are to cockroaches.

Amy Jo: That's an interesting thought.

Doug: Well, I don't think it's interesting. I think it's terrifying. I hate it. But you can't help thinking it. I think about it practically all the time, every single day. And it overwhelms me and depresses me in a way that I haven't been depressed in a very long time.

Amy Jo: Wow, that's really intense. I know your time is very short, so maybe we'll talk about it another time. But maybe we'll just absorb your concerns, because you, of all people, I think, have a unique perspective. So knowing you feel that way is very powerful.

Okay, so we've already got questions rolling in [00:35:00] from our audience. So, how have LLMs, large language models, impacted your view of how human thought and creativity work?

Doug: Well, it's reinforced, in some sense... of course, it reinforces the idea that human creativity and so forth comes from the brain's hardware.

There is nothing else than the brain's hardware, which is neural nets. But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. The firing of the neurons is going in only one direction. And I would never have thought that depth could come out of a network that only goes in one direction, out of firing neurons in only one direction.
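The one-directional flow Doug is pointing at can be sketched in a few lines. The weights here are random stand-ins, and real LLMs interleave attention layers with these feed-forward blocks, but the structural point is the same: each layer's activations depend only on the layer before it, and nothing ever feeds back.

```python
import random

random.seed(42)

def relu(x):
    # A simple nonlinearity: a "neuron" fires only if its input is positive.
    return x if x > 0 else 0.0

def layer(inputs, weights, biases):
    """One layer: every output looks only at the layer before it."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, layers):
    # Activations flow strictly input -> hidden -> output; no loops back.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

def random_layer(n_in, n_out):
    # Random stand-in weights; a trained net would have learned these.
    return ([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.gauss(0, 1) for _ in range(n_out)])

net = [random_layer(4, 8), random_layer(8, 3)]
print(feed_forward([1.0, -0.5, 0.25, 2.0], net))
```

Contrast this with the "strange loops" of GEB: here the computation graph has no cycles at all, which is exactly what Doug finds surprising about where depth comes from.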

And that doesn't make sense to me, but it just shows that [00:36:00] I'm naive. It also makes me feel that maybe the human mind is not as mysterious and impenetrably complex as I imagined it was when I was writing GEB and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that, as I say, we were very far away from reaching anything computational that could possibly rival us.

It was getting more fluid, but I didn't think it was going to happen, you know, within a very short time. And so it makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure. And compared with these computational systems that have, you know, a million times or a billion times [00:37:00] more knowledge than I have and are a billion times faster, it makes me feel extremely inferior.

And I don't wanna say deserving of being eclipsed, but it almost feels that way, as if all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we're so imperfect and so fallible. We forget things all the time. We confuse things all the time. We contradict ourselves all the time.

You know, it may very well be that that just shows how limited we are.

Amy Jo: Wow. So, let me keep going through the questions. 

Is there a time in our history as human beings when there was something analogous that terrified a lot of smart people?

Doug: Fire.

Amy Jo: Yeah. You didn't even hesitate, did you? So, what can we learn from that? 

Doug: Well, I don't know. Caution. But you know, we may have already gone too far. We may have already set the forest on fire. I [00:38:00] mean, it seems to me that we've already done that.

I don't think there's any way of going back. I saw an interview with Geoff Hinton, who is probably the most central person in the development of all of these kinds of systems, and he said he might regret his life's work. Part of him regrets it, is what he said. He said, "Part of me regrets all of my life's work."

And the interviewer asked him, how important are these developments? Are they as important as the Industrial Revolution? And Hinton thought for a second, and he said, well, maybe as important as the wheel.

Amy Jo: Wow. So I see that our time is almost up. Thank you everyone for your questions. 

Doug: Why don't you ask a question, Amy, if you have a last question? 

Amy Jo: Okay. Um, what brings you joy these days? 

Doug: Not much. Not much. What brings me joy is clever bons mots, quips, spontaneous pieces of wordplay [00:39:00] or jokes spoken by friends. That, you know, brings me some joy.

I don't know. These days I'm pretty down, I'm sorry to say. Seeing friends brings me joy.

Amy Jo: Well, I'm absolutely thrilled we got to see you today. 

Doug: That was fun, I must say. 

Amy Jo: Thank you everyone for joining us today. And Doug, even though you're down and a little bit bummed, you brought us a lot of joy by being here today and sharing your perspective. Thank you.

Doug: It's my pleasure.

Amy Jo: Really valuable.

Doug: Great. See you soon.

Amy Jo: All right. Take care.

 [00:40:00]