FOR WHAT IT'S WORTH with Blake Melnick

A.L.M.A - Journey into the AI Frontier with Eric Monteiro - Part 1

December 14, 2023 Blake Melnick Season 5 Episode 4

What if our future was shaped not just by human intelligence but also by artificial intelligence? This week on "For What It's Worth", we explore one possible future in a world dependent on Generative AI as we sit down with Eric Monteiro, analytics expert, robotics engineer, and author, through the lens of his science-based novel ALMA. We weave our way through Eric's fascinating journey from Brazil to Canada, his unexpected shift from engineering to writing, and his profound thoughts on AI's potential role in our future.

Imagine quantum mechanics influencing business decisions; sounds crazy, right? This episode takes a deep dive into this intriguing concept with our guest. We unravel the mysteries of the double slit experiment, quantum entanglement, and how observation shapes reality, exploring its implications for society. We also highlight the growing importance of analytics and AI in business, the need to question assumptions, and the value of data-driven decision-making.

Finally, we navigate the potential impact of AI on jobs and free will, and the risk of increased inequality due to AI's impact on the workforce. We strive to shed light on the importance of a fair distribution of AI's gains, a topic that could influence societal dynamics in profound ways ... For What It's Worth

Blog post for this episode

The music for this episode, "How Come I Gotta", is written and performed by our current artist in residence, #DouglasCameron

You can find out more about Douglas by visiting our show blog and by listening to our episode, #TheOldGuitar

Disclaimer: The views expressed in this episode are solely those of our guests. They do not represent the views of their organization or its affiliates.

Knowledge Management Institute of Canada
From those who know to those who need to know

Workplace Innovation Network for Canada
Every Graduate is Innovation-Enabled; Every Employee can Contribute to Innovation

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.


ALMA Part 1

[00:00:30] Blake Melnick: I'm Blake Melnick and welcome to this week's episode of For What It's Worth, called ALMA. In the past two episodes, we spoke with well-known social entrepreneur, mathematician, playwright, and founder of Jump Math, John Mighton, about the potential of mathematics to end hierarchies in the classroom and in learning.

[00:00:50] Blake Melnick: Mighton believes promoting intellectual equality can serve as a means for addressing other forms of inequality in our society. The research-based [00:01:00] approach to the teaching of mathematics developed by JUMP instills confidence and allows students to realize their full potential. In this episode, we continue the focus on the importance of research, science, and mathematics as we explore the implications of artificial intelligence through the lens of literature.

[00:01:19] Blake Melnick: My guest this week is Eric Monteiro, a robotics engineer by training, an analytics executive by profession, and a writer quite by accident. Eric is the author of ALMA, a science-based novel which explores one possible future in a society dependent upon generative AI ... For What It's Worth.

[00:01:44] Eric Monteiro: I am Eric Monteiro, the author of ALMA, a sci-fi novel. I'm a robotics engineer by training. I am an analytics business executive by profession.

[00:01:52] Eric Monteiro: And a writer by accident. I am not a professional writer, but I am certainly a curious mind and I'm very interested in the [00:02:00] future of humanity and what's going to happen, particularly with the advent of AI, but also what that means for us. And I've always been interested in the nature of reality.

[00:02:08] Eric Monteiro: And really what is life all about? 

[00:02:10] Blake Melnick: It's great to have you on the show, Eric. I really enjoyed the novel. I found it fascinating and very timely, given everything that's going on in the world with respect to artificial intelligence. And everybody's wondering what that's going to mean for our societies and for people in general.

[00:02:24] Blake Melnick: There's a lot of positive elements to it. There's a lot of negative elements. And we'll talk about that as we go through this interview. You were born in Brazil. You have a background and training, I'm assuming at university, in robotics. Is that correct?

[00:02:38] Eric Monteiro: That is correct. Yeah. I went to robotics engineering school in Brazil.

[00:02:41] Blake Melnick: So what does a robotics engineer actually do? What's that training all about? 

[00:02:47] Eric Monteiro: I can't say I really know very much, because I was only in it for about a year after graduation. But my sense of it, from the little bit I practiced, is that you get to design robots.

[00:02:56] Eric Monteiro: If you're lucky, you get to do a bit of actual [00:03:00] science, engineering and experimentation. But in most cases, which was my case, particularly in Brazil in the nineties, when I started my career, it was really all about execution. I would go into an industrial plant and implement robotic systems, whether they be actual physical robots or production systems that would automate the industrial production.

[00:03:19] Blake Melnick: And so was the motivation really efficiency? 

[00:03:22] Eric Monteiro: Efficiency, and I'd say safety too. There were lots of things that people did in industrial environments that really weren't safe for humans to do. And robots are much better and much safer to do it with. 

[00:03:32] Blake Melnick: What kind of background do you need to be a robotics engineer? Is it physics? Is it mathematics? What's the core competency? 

[00:03:39] Eric Monteiro: Yeah. And that's evolved a little bit from the time when I studied it. I was actually, I think, only the second class in robotics engineering

[00:03:44] Eric Monteiro: in my school, and it really was a pretty pioneering program, so they didn't really know what to do. What they did is they got together a mechanical engineering degree and an electrical engineering and computer science degree and put it all together. That made it a difficult program, as you can imagine, because they didn't take a lot out.

[00:03:58] Eric Monteiro: They just basically combined two [00:04:00] degrees. So to your point, a lot of math, a lot of physics, a lot of production theory as well, and a lot of control theory and computer science.

[00:04:07] Blake Melnick: You were born in Brazil, you studied robotics, and at some point you came to Canada. What motivated all of that?

[00:04:13] Eric Monteiro: Yeah, I always say it's the weather. Ha.

[00:04:16] Blake Melnick: What, you came here for the weather?

[00:04:16] Eric Monteiro: That's right. We got tired of the craziness in Brazil, whether it's the day-to-day traffic or the fact that you're not really safe in many places when you're going around. I was robbed a couple of times, and things like that.

[00:04:29] Eric Monteiro: And so we just decided we wanted a quieter, calmer, more structured place for our kids to grow up, right?

[00:04:35] Blake Melnick: What type of child were you? I mean, you've written this book, and it is a science fiction book, but it has solid science behind it, which is why I really liked it. Were you fascinated by science, by technology? What inspired you?

[00:04:46] Eric Monteiro: The first thing I would say is I was always curious about everything. I was pretty restless as well. I liked martial arts, for example; I did all of them. I did judo. I did karate. I did kung fu.

[00:04:57] Eric Monteiro: I actually did seven years of taekwondo. So I went through all of [00:05:00] them, because I was always very curious, and also very curious about life. My dad tells the story of when I was six: he says I came up one day and said, you know, how do we know we're awake when we're awake and sleeping when we're sleeping, and not the other way around?

[00:05:15] Eric Monteiro: Because you could be awake when you're sleeping. And I still remember he paused for a second and said, really? But, being the wise man he's always been, he said, it doesn't matter though, because you should be happy whether you're dreaming or awake, and you should pursue your dreams, whether they're sleep dreams or awake dreams.

[00:05:31] Eric Monteiro: And so that stayed with me, but I've always been very curious about life. I also always liked discipline a lot. Both my grandparents were in the military. One was an officer in Spain before he went to Brazil. The other one was a general in Brazil, and I always wanted to be a general.

[00:05:46] Eric Monteiro: And it's interesting. It wasn't until later in my life, I was probably 10 or 12, that my grandpa sat me down, because I told him I wanted to be a general. And he said, let me walk you through what a military career looks like. And I realized it's a pretty hard life. You [00:06:00] don't get any control over where you live.

[00:06:01] Eric Monteiro: You don't make a lot of money. It's a very disciplined, difficult life. And so I was like, okay, that's probably not for me. And so anyway, that was my childhood, pretty much. The one thing I will say is I was always very interested in technology. My dad used to work with technology, and I remember, this is the early 80s, when computers started to become a thing.

[00:06:19] Eric Monteiro: I still remember the Apple II and I remember getting interested. I ended up programming a bunch of stuff, programmed video games and got into all of that. 

[00:06:27] Blake Melnick: So you're responsible for Pong, right?

[00:06:32] Blake Melnick: The other thing that jumped out at me about your novel is there's a lot of quantum physics and analytics of all kinds. Did you learn about quantum physics on your own, self-taught, or did you learn it in some other way?

[00:06:47] Eric Monteiro: I went through engineering school. I loved it. I particularly loved the maths and physics side of it. And I actually became fascinated by a lot of the quantum physics concepts that we learned, very quickly actually, in engineering school, because it's not, [00:07:00] you're not really learning physics, you're learning physics for the purposes of using it for engineering.

[00:07:04] Eric Monteiro: Exactly. And I was particularly curious about some of the implications that nobody talked about. In fact, I raised it with a couple of my profs, and they were like, that's very interesting, but it's not what we do. And I'll give you a couple of examples, and I can talk more about them as we talk about the book.

[00:07:18] Eric Monteiro: You study the double slit experiment,

[00:07:19] Blake Melnick: The double slit experiment demonstrates that light and matter can satisfy the seemingly incongruous classical definitions of both waves and particles, which is considered evidence for the fundamental uncertain nature of quantum mechanics. In other words, light could in fact show both wave and particle characteristics.

[00:07:41] Eric Monteiro: Which demonstrates that your own observation of physical phenomena affects the outcome. That's the physical implication. The more interesting question from that is, what does that mean for the nature of reality? We're actually affecting what reality we observe; that's actually a very interesting question.

[00:07:56] Eric Monteiro: What if it wasn't a human? Do dogs cause that to happen? [00:08:00] Do cats cause that to happen? None of that is clear. Another one that I thought was very interesting is the idea of quantum entanglement, which is two particles that might have been created at the same time and might be traveling away from one another, even at the speed of light.

[00:08:12] Eric Monteiro: In fact, they've now demonstrated this, not quite at the speed of light, but at very high speeds. In practice, they actually continue to be synchronically connected, or there's synchronicity between them. So something that happens in one, the spin specifically, reflects on the other. Again, interesting physical concept.

[00:08:27] Eric Monteiro: The question to me is, what does it mean for the locality of reality? Because we've proven that nothing can travel faster than the speed of light. So the implication of the synchronicity must be that they're connected somehow beyond space. They're non-local. Those questions, typically, physicists tend to shy away from, because I think there's a bit of risk to their reputation as scientists if they ever go there.

[00:08:50] Eric Monteiro: But they're actually the things that interest me the most. 

[00:08:53] Blake Melnick: Those are big questions. I'm not sure there are answers to them either. So maybe that's why people steer away from it.

[00:08:58] Blake Melnick: Let's talk a little bit about your career. So you [00:09:00] have all this knowledge, you come to Canada, and you become an analytics expert.

[00:09:05] Eric Monteiro: My trajectory was, I started in engineering and did that for about a year and a half. Again, it was interesting, but we were mostly implementing these pre-designed, pre-defined solutions and software platforms and hardware platforms.

[00:09:18] Eric Monteiro: And I've always pursued something that would allow me to explore my curiosity, and frankly, something that was challenging and interesting and difficult. Then business came up as an idea. So I became a business consultant with McKinsey and did that for 15 years of my career; of course, lots of analytics.

[00:09:33] Eric Monteiro: Basically, every project was analytics driven. I then became an analytics executive for a few years, and now I work for an insurance company. So if there's a golden thread, it's curiosity. I've always been very interested in these difficult problems to solve, whether in academia, which was my first aspiration and didn't happen, in engineering, or in business as I do today.

[00:09:52] Blake Melnick: Tell me how you're applying all of this knowledge, all of these experiences, in the context of your work now?

[00:09:57] Eric Monteiro: Again, there is a golden thread in this [00:10:00] whole thing.

[00:10:00] Eric Monteiro: Curiosity is one of them. I'm always curious in business as well. So when something comes up, a problem, a solution, an idea, a request from a client or anybody else, I'm always curious. What problem are we really trying to solve? What elements of this matter? What elements don't, how do we get to the best answer?

[00:10:17] Eric Monteiro: The second one I'd say is just structured analytical thinking. One of the things that I love about engineering, and it's served me very well in a career in business as well, is this rigorous approach to breaking down a problem. Sometimes you get a very difficult, almost intractable problem; you break it down into much more manageable components, solve the individual components, or not, but get as close to it as you can, and then assemble them together and synthesize them into a solution.

[00:10:42] Eric Monteiro: That skill set, which is very much an engineering skill set, has been very helpful for me in business. And the other couple of things I'd say I've really learned, particularly from quantum physics, is to question every assumption. We make assumptions all the time. Some of them are good, some of them are not.

[00:10:56] Eric Monteiro: Of course. But until you really articulate and question them, [00:11:00] you don't really get to the right answer.

[00:11:02] Blake Melnick: And metrics and analytics help us really understand the root cause, if you want to call it that, from an engineering perspective. When I spent time in engineering, in aerospace, all the engineers were always focused on root cause analysis.

[00:11:14] Blake Melnick: So if something was wrong, something wasn't performing as expected, something was failing, it was always, we have to find the root cause. We have to do a root cause analysis; we can't just make assumptions. That's a very disciplined approach.

[00:11:26] Blake Melnick: You have all this background in analytics, and you now have this background in AI. Is this why the firm hired you, because of your perspective in these kinds of emerging fields?

[00:11:37] Eric Monteiro: Yeah. I think it's definitely helpful. It's also, frankly, where the world is going.

[00:11:41] Eric Monteiro: It's been going there for a while. I started my career in the late nineties, and I remember back then we used to make decisions in business very much on gut feel, 90 percent of the time. Nowadays, that's just not the way businesses run; we make decisions based on data and analysis.

[00:11:56] Eric Monteiro: And on logical conclusions. By and large, gut feel still plays a role, and in [00:12:00] fact it plays a big role when understanding, analyzing, and interpreting results from analytics. But the results have to be rigorous and analytically driven.

[00:12:07] Blake Melnick: You told me this great story where you were doing a presentation.

[00:12:10] Blake Melnick: You were trying to demonstrate where the future is going with respect to artificial intelligence, the existential threat, and we'll talk a little bit about this, but: what is the truth?

[00:12:19] Eric Monteiro: So yeah, today there's an app available on the internet that translates a video, any video you've done, from whatever language you've done it in into any other language.

[00:12:29] Eric Monteiro: And it takes your mannerisms, your expressions, your tone of voice, your voice itself, of course. And it really makes it perfect. So it really starts to raise the question of, how do you know Eric really said all that? Because it could have made me defend the Alberta pension plan idea, which I have no problem saying personally I think is a terrible idea.

[00:12:49] Eric Monteiro: But there could be a video of me out there on the internet defending that. So I think, to your point, we've got to start being really careful about assuming that what we see and what we observe is true.

[00:12:58] Blake Melnick: Yeah, absolutely.

[00:12:59] Blake Melnick: Let's [00:13:00] move to the book. Let's start with the name. So, ALMA, it's a bit of a double entendre, is it not?

[00:13:06] Eric Monteiro: Yeah, it is. It stands for Automated Lucid Machine Algorithm. At the same time, ALMA means soul in Portuguese and Spanish.

[00:13:14] Eric Monteiro: And I speak both languages, and I was always intrigued by this question of if, or when, an AI will ever have a soul, ever have consciousness. I'm not talking about the religious concept of a soul, but the concept of consciousness. So that's why I called it that: it's a double play on what it does, which is an algorithm, but also, could it become self-aware?

[00:13:35] Blake Melnick: Yeah, I love the title. And I understand it also refers to a young woman, which I thought was interesting given the context of the story. But let's talk about the story. What inspired you to want to write this book?

[00:13:47] Eric Monteiro: I've always wanted to write a book about the implications of quantum physics, right?

[00:13:52] Eric Monteiro: I am not a scientist by any stretch, and therefore I can take all the risks in the world and write all kinds of things about the [00:14:00] implications of the Heisenberg principle or non-locality, and not worry about whether that's going to undermine my scientific credibility. So I thought, I'm going to do that.

[00:14:09] Eric Monteiro: And I found myself in between jobs. I knew I was going to have two to three months of time to be able to do something. So I actually told my daughters and my wife at the dinner table that I was going to write a book about quantum physics and the implications to reality. Which we had talked about a number of times.

[00:14:25] Eric Monteiro: My daughters would have been teenagers at the time. And they both said, that's cool, but that's going to be a very boring book; no one will ever want to read that. So, what would make it more interesting? They both actually said it, though they had different points of view.

[00:14:40] Eric Monteiro: Why don't you make it a novel? Why don't you create a wrapper around it, create a plot? Of course, the natural plot would have been time travel, which I know is a bit cliché for sci-fi, but it worked very well for what I wanted to talk about. And then I added this AI, because I thought, 200 years in the future, we are definitely going to have an AI that's very intelligent. [00:15:00]

[00:15:00] Eric Monteiro: And then somehow, as I started writing the book, ALMA, which is the AI character, took over. It became the more interesting part of the book. I still wanted to get all the quantum physics stuff in there, which is why it's there, and I hope readers don't mind the technical aspect of it. But, of course, ALMA became the most important part of the book, because, frankly, I think that is one of the more interesting questions for us as a species right now.

[00:15:22] Blake Melnick: For our listeners, how would you describe your own book? 

[00:15:26] Eric Monteiro: I would say it's a book about two things, really, and they're interconnected. One is, what will the world look like 200 years from now, which is when the book takes place, when there is an artificial intelligence, or an intelligence, because the notion of artificial becomes a little bit irrelevant, as you see in the book, that is basically more capable than all of humanity combined.

[00:15:48] Eric Monteiro: And I really do think we're going to get there. 

[00:15:50] Eric Monteiro: And particularly, what's the nature of time, and our reality in relationship to time?

[00:15:53] Eric Monteiro: And again, those things are interconnected, but to me, those are the two things I wanted to explore.

[00:15:57] Blake Melnick: Did you go in directions you didn't [00:16:00] anticipate going when you first started?

[00:16:02] Eric Monteiro: For sure. I wasn't even planning to spend that much time on AI until I started writing, and that was very unexpected.

[00:16:09] Eric Monteiro: The other thing I would say, and this is more about what I didn't do in the book, is it raised a whole bunch of other questions that I chose to ignore, because they're very complex and would take a long time, and probably multiple books, to explore. For example, are we going to become a cyber species, where we start to incorporate elements of technology into ourselves?

[00:16:29] Eric Monteiro: I didn't explore that. I fast-forwarded the book, if you will, 200 years, right? That's actually quite simple, because you have that unifying artificial intelligence that can do a lot of things, that can make a lot of decisions, and therefore simplifies our existence as humanity.

[00:16:44] Eric Monteiro: But the process to get there, 3, 5, 10, 20 years, is going to be fraught with danger, and I decided to ignore all that as well. So it's more about the things I decided not to do. I don't know when I'll have time to write my next book, but when I do, it'll be a prequel, because I'm [00:17:00] very interested in thinking about what those 200 years could look like, what the different paths we could take to get there are, and what the implications are.

[00:17:07] Blake Melnick: I thought it was interesting that you chose to set the novel 200 years in the future, not a thousand years, as many futurists will do when writing about time travel. Why did you select 200 years?

[00:17:18] Eric Monteiro: Yeah, I thought of it in two ways. One, I wanted four or five generations,

[00:17:23] Eric Monteiro: so that there was enough distance between the generations to make it interesting. Two, I tried to think about how long it would take for an AI to get to that level of intelligence. And I don't know if it's 200, but I can almost guarantee it's not 20 years. It'll take longer than that.

[00:17:38] Eric Monteiro: Just on computing power alone. But I also don't think it's a thousand, because we have seen this in the past, as Moore's law, for example, shows in other parts of technology, and I think we're going to continue to see exponential growth there as well, especially when things like quantum computing kick in.

[00:17:55] Blake Melnick: I've been watching a number of interviews with Geoffrey Hinton, the grandfather of [00:18:00] AI. And his timeline seems to be shorter than 200 years.

[00:18:04] Eric Monteiro: I've heard 50 years. And it could be right. Yeah, it could really be.

[00:18:07] Eric Monteiro: In fact, you've got to remember, I wrote this book in 2016, so it was pre the latest breakthrough. In fact, it was pre the latest two breakthroughs I think we've had in artificial intelligence.

[00:18:18] Blake Melnick: That's what I loved about the book.

[00:18:19] Blake Melnick: I thought, boy, it's like you wrote it today. For our listeners, this is a great story. It's very readable. There are a lot of science and technical terms, but not too much. The story itself is captivating, and I think anybody who is a fan of science fiction writers, Asimov and Arthur C.

[00:18:36] Blake Melnick: Clarke and people like that, will like this book. It's very accessible, very readable, very enjoyable because of the story around it, the characters that you've created, and the love story that's in there as well. But who were your influences? You were, I'm assuming, a big science fiction reader when you were younger?

[00:18:53] Eric Monteiro: For sure. But I have to say, my biggest influences weren't from sci-fi. They were from physics and from science itself. There were three primary types of influences. On the physics side, of course, the classics: I love Einstein, Heisenberg, Schrödinger, who devised

[00:19:08] Eric Monteiro: the famous Schrödinger's cat thought experiment. There's one particular influence, a guy named Richard Feynman, who really inspired Richard Wiseman, the character in the book. And what I love about him is he really simplified quantum physics to its key tenets. There's actually a bunch of lectures from him online on YouTube.

[00:19:25] Eric Monteiro: They're from the fifties, so they're mostly black and white; some of them are in color. But they're very interesting, because he really simplifies the concepts. And the other thing I love about him: he questions every assumption. That's one group of influences. There's another group of influences more on the nature of reality itself.

[00:19:41] Eric Monteiro: And there's a couple of physicists who talk about it, some of them more openly. Three physicists in particular, Bohr, Heisenberg, and Born, wrote the Copenhagen interpretation, which is really the underpinning of my understanding of reality. As well as John Wheeler, who really fleshed out this delayed-choice [00:20:00] interpretation of time. And really, he didn't say it this way, but my articulation of what he said is basically that time is a psychological phenomenon, right?

[00:20:06] Eric Monteiro: It isn't inherent in the nature of the universe. It's us as humans who experience and create time in creating the experience, which is a lot of what's in the book. And then sci-fi, to your point: Arthur Clarke, and I read Isaac Asimov when I was a kid and loved it. There's a bunch of newer sci-fi authors as well that I really love, but there are too many to really talk about.

[00:20:26] Eric Monteiro: The other couple of things that have influenced me: I have read quite a bit about Eastern philosophies, in particular Buddhism. I'm not a religious or practicing Buddhist in any shape or form, but the philosophy of it is actually very interesting to me. And in fact, lots of people have written about this.

[00:20:40] Eric Monteiro: If you read quantum physics, it's hard not to see the parallels to some of the things Eastern philosophy says about the nature of reality: the fact that we play a big role, the fact that there's a deeper connection to things and a deeper meaning to reality than what we observe. All those things are part of Eastern philosophy. So those are the primary influences.

[00:20:57] Blake Melnick: I noticed that you included [00:21:00] Asimov's Three Rules of Robotics or Three Laws of Robotics in the book. The first law, a robot may not injure a human being or through inaction allow a human being to come to harm. The second law, a robot must obey orders given to it by human beings except where such orders would conflict with the first law.

[00:21:19] Blake Melnick: And the third law, a robot must protect its own existence as long as such protection does not conflict with the first or second law.

[00:21:27] Blake Melnick: When I'm reading about what's going on in the world around AI development, I'm thinking those rules should be in place.

[00:21:32] Blake Melnick: We should really be sticking to those rules, because what I'm reading is pretty scary, especially when you listen to somebody like Hinton, who does know what he's talking about, and we're going to talk about that in a minute. You created in the book a vision of a future with generative AI that I think is quite positive.

[00:21:49] Blake Melnick: You took the best-case scenario perhaps, although the ending, which I don't want to give away, surprised me and made me think. Oh, hold on a second. Everything was going really well until the [00:22:00] end.

[00:22:00] Blake Melnick: And then I had to reflect on everything that I thought, which is good. But you chose to take this very positive view. Why? 

[00:22:08] Eric Monteiro: Because, again, this could go a million ways, and in fact, I do think we have a bit of a say in how it'll go. But part of the reason is that there are so many AI books out there about the robo-apocalypse, machines taking over and all that. I said, first of all, I think there's an assumption there, back to questioning assumptions, which is that machines are going to have the same objective function we do.

[00:22:30] Eric Monteiro: We are the most destructive species ever to walk the earth, and we tend to want to dominate, control, and destroy everything that we can; that's what we do as a species. And to some extent our biological evolution has gone that way, right?

[00:22:43] Eric Monteiro: Survival of the fittest; we're not the first species to fight for our own survival. I don't necessarily think that assumption is going to hold, particularly if it's an AI, as is the case in the book, that is so capable it could wipe us out in a heartbeat. I actually use the analogy in the book, which I hope doesn't offend anybody, [00:23:00] of, we're a little bit like bees

[00:23:02] Eric Monteiro: to an AI of that nature, right? We don't understand it. It cares for us, it likes us, but it doesn't really need us for anything, and we're like a hobby. So I actually think that's not a crazy scenario for an intelligence that's that capable.

[00:23:17] Blake Melnick: When you imagined this future world dominated by an AI entity you covered off what the implications would be. 

[00:23:24] Blake Melnick: In the context of culture, in the context of farming, in the context of politics, which I thought was really excellent, because of course you could have taken that sort of broad blanket approach and left the reader going, I'm just not sure what the implications are. Instead, you imagined it in all of these different societal structures.

[00:23:41] Blake Melnick: And I thought that was really interesting. Was that intentional?

[00:23:45] Eric Monteiro: Yeah, very much so. Again, as I mentioned, I described the book as trying to understand what the world would look like with an AI like that. To your point, the one question throughout the book, and I think it's a real question for us as humanity, is: at what point do we lose [00:24:00] our free will?

[00:24:01] Eric Monteiro: And how do you define free will? There's a book, written way after my book, but it's brilliant, by Max Tegmark. The book's called Life 3.0. And he does a better job of describing the multiple scenarios that could happen.

[00:24:13] Eric Monteiro: He actually says exactly what I also worry about, which is that we wouldn't even know, at that point, if we had lost our free will, because that AI would be so capable of manipulating us, making us think we're in control while we're actually not, that we would never even know.

[00:24:29] Blake Melnick: I read something recently from Geoffrey Hinton about that very fact. He likened it to adults and young children, where adults can manipulate young children to do what they want. And he said that's entirely possible with an advanced AI, that it could manipulate us to do whatever it wanted us to do.

[00:24:51] Blake Melnick: And it was in reference to Sam Altman saying that he had a kill switch, that he could shut down his AI algorithm at any time. And [00:25:00] Hinton said, yeah, unless the algorithm manipulated him somehow so that he wouldn't do it.

[00:25:06] Eric Monteiro: That's right. I think that's probably true now. Ten years from now, probably not.


[00:25:10] Blake Melnick: Let's talk a little bit about AI, because of course it's in the news these days. Everybody's talking about it. People are experimenting with it. I am too, and I'm sure you are, and I'm sure your company is as well. There are obviously different viewpoints on all of this.

[00:25:24] Blake Melnick: There are certainly positive elements of AI as it relates to things like medicine, or to any kind of engineered product in terms of safety. There's a very positive outcome that can be associated with using AI in these contexts.

[00:25:39] Blake Melnick: Then there's the negative side, and we've touched on it. At what point do human beings get pushed to the side or become irrelevant? One of the big concerns Hinton has is that if we embrace AI wholeheartedly, people are going to lose jobs as a result. I think that's something we all [00:26:00] accept will happen. How do we manage that as a society?

[00:26:03] Eric Monteiro: It's obviously a very complex question.

[00:26:05] Eric Monteiro: So I'll try to tackle two or three different elements of that. I think the first one is to separate AI from general AI, because we've had AI for a long time. In fact, we've had expert systems, which were the first version of AI, for 30 years.

[00:26:21] Eric Monteiro: In university I was building a learning model with a neural network. So they've been around for a long time. The first implementations we had were all very task-oriented, right? It's: identify whether or not this is an anomaly in a pattern.

[00:26:36] Eric Monteiro: So you put in data one way, data the other way, you give the model the reinforcement training it needs, and eventually it gets to that point, which is a very targeted approach. We then started to see machine learning models that can actually spot the pattern themselves. So we don't just ask them a very simple question of, do you see a pattern or not?

[00:26:52] Eric Monteiro: We ask them, what patterns do you see? And you have the famous Google model that identified cats without knowing what a cat was. And [00:27:00] then, of course, you have learning models. And now we are getting to the point where we have general learning models that actually understand language and can start to look more like general artificial intelligence.

[00:27:10] Eric Monteiro: And this has been true of computing from day one, not just AI: computers are very good at doing very specific tasks, but they can't really do general tasks. The same thing is true of AI, right? Let's pick an example. Take a weather model.

[00:27:23] Eric Monteiro: A weather model can do weather forecasting very well. But ask it to do that while training a dog, or listening to a kid, or chewing gum, and it just can't. General AI has the potential to actually behave much more, or act much more, like a human intelligence.

[00:27:38] Eric Monteiro: And that's where I think you start to see bigger issues with potentially taking jobs away. Because I really don't believe the machine learning models we have today are in any shape or form going to take jobs away now. In fact, I don't even think general AI in the next few years is really going to take jobs away. It's going to take the grunt work out of the job, the stuff you don't want to do, right? It's that first summarization of the [00:28:00] meeting, it's the minute-taking. For the next few years, that's what we're going to see. And I truly do believe this is going to make us better, more productive, et cetera.

[00:28:09] Eric Monteiro: The challenge is, a few years from now, and I don't know if it's 5 or 10 or 20, there will be general AIs that can do basically our entire job, whatever your job is, whether you're a lawyer or an executive or a consultant. And that's really, I think, the tougher question.

[00:28:25] Eric Monteiro: Now, what I also believe is that we don't know how that's going to evolve. If you go back to prior industrial revolutions, or economic revolutions, I'll call them, because you can go all the way back to mechanized agriculture, it really hasn't taken jobs away.

[00:28:41] Eric Monteiro: It's taken specific jobs away. Go back a couple of hundred years and try to explain to somebody a world where almost no one actually produces a physical good, whether it's a piece of furniture or a car; a world where there's no more need to produce energy in the house because electricity does it all.

[00:28:58] Eric Monteiro: They couldn't visualize it. And yet [00:29:00] there's a whole series of professions, all of our service economy, the entire entertainment complex, the computer industry, which is actually where most of the value in society is generated today, that wouldn't even be in the picture. They couldn't possibly visualize or articulate what that could look like.

[00:29:15] Eric Monteiro: So it's the same thing. You ask today, what is the future going to look like, even back to ALMA, 200 years from now? I have no idea. I didn't even try to guess what other professions are going to be created.

[00:29:24] Eric Monteiro: Because I don't think we can imagine them. All that to say, over the short term, I'm not particularly worried about jobs being eliminated. Tasks are going to be eliminated, and jobs that are composed of very simple tasks are going to be more at risk. Over the very long term, I'm not worried either, because I think we're going to find other things to do that are more interesting.

[00:29:43] Eric Monteiro: In between is where it gets complicated, because, of course, on average we're going to be okay, but for the people whose job specifically was eliminated, that's still painful. And if they have to go retrain, if they have to go learn how to actually write code so they can be writing models instead of[00:30:00] doing the work that models are doing, that's the stuff that's hard and that we need to figure out as a society.

[00:30:04] Eric Monteiro: Now, I have a couple more thoughts. I think there are a few things we can and need to do to get this right. The first one is, as I mentioned, making sure that people are enabled to make that change. So if your job has been eliminated, and I'll pick one of the ones that I think everybody agrees is going to go quickly:

[00:30:18] Eric Monteiro: a paralegal. Most of their job is reviewing legal materials, summarizing, synthesizing, looking up prior cases, et cetera. That's the kind of work that gen AI is particularly good at, even today. Are we equipping them with different skill sets so they can find the next thing?

[00:30:33] Eric Monteiro: Are we thinking about how to re-educate them? Are we redeploying them? Are we thinking about how to help them? Just the same as we had to do for employees who did manual work in an industrial setting when the robotics revolution came in, in the 90s to 2000s. So that's one thing we've got to get right.

[00:30:50] Eric Monteiro: And I think we will; it's painful and it takes a bit of time, but I think we're going to get it right. The one I worry a lot more about is: are we going to be able to properly distribute the gains from AI? [00:31:00] Because the more you digitize things, and the more you create value from technology, the more you're returning

[00:31:07] Eric Monteiro: all that value to capital, not to labor, right? And there's a big risk, and we've seen this with the tech revolution of the last 20 years, that you continue to see even more of that. You really do get much more inequality, because the more these models are owned by businesses, the more the returns go to capital.

So what happens to the labor component of the economy?

[00:31:28] Blake Melnick: This concludes part one of Alma with my guest Eric Monteiro. We'll be back next week with part two, so make sure you hang in there for what it's worth. 


Exploring Mathematics, Robotics, and Science Fiction
Quantum Mechanics in Business
Exploring Artificial Intelligence and the Future
AI's Impact on Jobs and Free Will
Equity in the Age of AI