I Think Tomorrow

The Search for Intelligence: AI, Consciousness, and What Makes Us Human

Michael Tucci Season 1 Episode 4

In this episode, Mike gets interviewed by his good friend and business partner, Thomas Trincado, to explore some of the deepest questions AI is forcing us to ask. What is intelligence? Can machines be conscious? Does intelligence require a survival instinct?

We explore how AI development mirrors and deepens our own search for self-understanding, blending philosophy, personal stories, and practical insights. 

Find out more about us and the work we're doing:

🌐 Odysi Studio — https://odysi.studio
🔗 Mike’s LinkedIn — /michaelvtucci
🔗 Thomas’s LinkedIn — /thomastrincado

[00:00:15] Thomas: All right, so today I'm going to be talking to my very good friend Mike. I've known Mike for many years, and over the years I've realized two things about him. One, he's probably the most eloquent person I know, though that's just my perspective on it. And two, I can talk to him about literally anything, because he comes from a very diverse background of experiences, so he can approach one topic from so many different angles. And he's my partner now: we're working together at Odysi, exploring what AI can do for businesses and really learning about the problems. So, welcome, Mike.


[00:04:13] Mike: Well, no pressure now, right?


[00:05:07] Thomas: And, yes.


[00:07:44] Mike: I hope I can live up to that intro.


[00:08:44] Thomas: No, you will, you will. But also, you and I have been talking about doing this, and you started your own podcast.


[00:15:39] Mike: I did, yes.
[00:16:16] Thomas: Last year. And when I saw you speak, I thought, well, you and I should be having these kinds of conversations, because you've been on open podcasts as well, right?
[00:29:72] Mike: Yeah, I've only done it recreationally, let's say, but I enjoy the format. I enjoy just having free-flowing conversations about interesting topics, and I enjoy listening to podcasts as well. So, yeah, why not record a few?
[00:45:902] Thomas: Right. And I remember you and I got very curious about what would happen if we got into this conversation. Because we started the business together, and we spend a lot of time looking at the practical problems that companies have, but inevitably it takes us to a place where we also want to explore these topics philosophically, especially AI. And I wanted us to give ourselves that room to be having these conversations as well.
[01:21:492] Mike: Yeah, I agree with that. That's the reason I'm interested in doing this work. My main interest in AI is that I think there's a lot of opportunity for companies, for businesses, to use it and grow much more quickly, to have more impact than they're having today. I also think it opens up many new product opportunities and will really transform the world. But the origins of my interest are much deeper, much more academic in a way, and also, as you said, philosophical. We're building machines that think. And even understanding what it means to think, what it means to be intelligent, I don't think those are questions we really have very robust, very concrete answers to. In answering them, we don't just find out about the machines we're building; we find out about ourselves. We don't really understand what we are. By building an artificial intelligence, an artificial mind, we have to ask those questions in a deeper way. We have to grapple with the fact that ChatGPT can talk to you in a very convincing way; there are even some arguments now that it, or other AIs, are passing the Turing test. Figuring out whether that means we've actually built intelligence, whether we've replicated what humans can do, really understanding what it is that makes us special, is a very interesting question. I don't think it's one we have answered well yet, and I want to be a part of answering it.
[04:26:742] Thomas: Right. I've always said this because I read it somewhere years ago; it was Wittgenstein. He said people don't actually disagree with each other because they have different opinions; it's that they haven't defined the terms very well, right? So when we ask whether these machines are intelligent: okay, what is intelligence? Is it the abstract concept that Descartes talked about, or is it a set of functions that happen at a cognitive level in our brains that we just haven't been able to explain? And now, thanks to artificial intelligence, we're realizing that it's achievable, and we're kind of resisting the idea, because many people say artificial intelligence will never be as intelligent as human beings, and so on. But also because it diminishes the idea of what we do, of our intelligence, because suddenly we can understand how it works, we can define it, and we can see that it's a set of very complex mathematical procedures that eventually get to the same place our human intelligence gets to.
[05:42:512] Mike: Yeah, and to be very blunt, I don't think we have any idea yet. We don't have a rigorous definition of intelligence that is broadly agreed upon in, I would say, any domain: within artificial intelligence, within philosophy, within psychology or biology. That is something we don't understand very well yet, and I think the search for artificial intelligence is forcing us to grapple with that question. There are a lot of people who make a lot of assumptions and think we know more than we do. I'm much more interested in, and I think it's a much more interesting question, what we don't know: what there still is to define and discover. You see with the models we've currently created that there's so much they can do, and I would say it's not necessarily the things we expected them to do well; but there's a lot they can't. Exploring that area, how to make the models better so that in the future they'll be able to do more of the things they currently can't, what you need to build, and which capabilities are currently lacking versus human intelligence, I think that's just a really interesting area to explore. And as I said before, I think we'll learn a lot about ourselves in that exploration.
[07:33:572] Thomas: And you said at the beginning that we've been asking ourselves questions about what intelligence is, and that there's just so much we don't know. Have you attempted an answer?
[07:51:792] Mike: I mean, I think that's above my pay grade. I think about it a lot. That's a good question. I have not attempted an answer, but in line with what I was just saying, I try to think a lot about the things that make us human that current AIs, LLMs in particular, don't have or can't do. And there are a lot of them. If you go back to the Turing test, which I think you're familiar with, but just for a bit of background: Turing came up with it because it's so hard to define intelligence. So what he said is, I'm going to bypass that definition. Instead, I'm going to think about how we attribute intelligence when we're talking to another human being. It's not that we have a rigorous definition; it's that the way you're talking to me tells me your brain is intelligent in the way my brain is. Therefore, if I can have a conversation with a machine the way I can have a conversation with another intelligent human being, then that machine must also be intelligent. We don't have to define intelligence; we just operationalize it. And in doing so, if you can blur that line, if you can fool a human into thinking they're talking to another human when they're actually talking to a machine, it must mean that machine is intelligent. And I actually don't buy that. I think it's a really brilliant test, and it's very hard to define intelligence, so I understand why he bypassed the question, but I don't actually
[09:54:682] Thomas: Do you know what the test consists of, like what sort of questions?
[09:59:12] Mike: No, no, there are literally no set questions. He called it the imitation game. I read the original paper; it's not very long. We can link it, but I suggest people go back and look at it. It's a very simple setup. I hope I get this completely right; don't hold me to everything I'm saying, but I'll get it roughly correct. You have a participant who's the judge, let's say. A human. That person is talking to two different test subjects, and it's only text-based: you're not seeing the other person's face, you're not seeing any bodies; you're interacting with them only through written language. The judge can interview both participants as much as they want and ask whatever questions they want. The goal is for the judge to distinguish which one is the human and which one is the machine. If they can't, that means the machine is sufficiently human-like, and therefore sufficiently intelligent in a human-like way, that it passes the test and we call that machine intelligent. I think that gives the judge a little too much credit. I agree that this is largely what we're doing when we're talking to other human beings: I don't know that you're conscious, I don't really know anything that's going on inside of you, but just by looking at the way you behave and the things you say, I think you sound like what it feels like inside of me, and so I assume the same thing must be going on. Of course, yeah.
I don't think we are such good judges of intelligence, or consciousness, or of more complex behaviors; consciousness is another thing we can lump in there. I think a machine that wasn't actually intelligent, and was just doing a really good job of approximating it, could fool a human being. So in order to test for intelligence... first of all, it's really, really hard to test for something you don't have a rigorous definition for. We have to get better and more concrete about defining intelligence before we can rigorously test for it. And the Turing test is not a rigorous test.
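[Editor's aside: the imitation game Mike describes can be sketched as a small protocol. This is a toy illustration of the setup's structure only; the function and the deliberately crude judge below are invented for this sketch, not taken from Turing's paper.]

```python
import random

def run_imitation_game(judge, human_reply, machine_reply, questions):
    """Run one round of the imitation game; return True if the judge
    correctly identifies which respondent is the machine."""
    respondents = [human_reply, machine_reply]  # index 1 is the machine
    order = [0, 1]
    random.shuffle(order)  # hide which respondent sits in which slot
    # Text-only interaction: the judge sees only (question, answer) pairs.
    transcripts = [[], []]
    for q in questions:
        for slot, idx in enumerate(order):
            transcripts[slot].append((q, respondents[idx](q)))
    guess_slot = judge(transcripts)
    return order[guess_slot] == 1

# A deliberately crude judge: it guesses that whichever respondent
# answers in flawless uppercase is the machine.
def crude_judge(transcripts):
    def shouty(t):
        return all(ans == ans.upper() for _, ans in t)
    return 0 if shouty(transcripts[0]) else 1
```

Mike's objection lives entirely in the judge: if the machine mimics the human's surface tells well enough, a judge like this one (and, he argues, human judges generally) can be fooled without the machine possessing intelligence in any deeper sense.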
[13:00:262] Thomas: Okay, because you've mentioned the word consciousness a couple of times in your attempt to define intelligence. For me, one big question is... I'd say there are two layers to intelligence. I can go deeper and try to explain further what I mean, but a lot of people understand intelligence as ultimately pattern recognition, right? Just recognizing patterns to be able to solve problems, to understand things. And then consciousness is sort of the last frontier of intelligence. Because there are intelligent organisms that interact with nature and recognize patterns, but they have no idea... you know, mammals that look at themselves in the mirror and don't understand what that thing is, that shape that has no smell; they don't understand that's me. Whereas we humans have the ability to understand that, not just when we see ourselves in the mirror, but also when we sleep: we know those were our dreams, that it was us in those dreams. So do you think there's a level of consciousness that needs to be part of a truly intelligent machine?
[14:34:112] Mike: I don't know, and I don't think anybody does. I've read quite a few... So Yuval Noah Harari, in his latest book... I'm blanking. Nexus? Actually, maybe it's not that one. I think it's the one before.
[14:53:512] Thomas: Sapiens or?
[14:54:12] Mike: No, it's Homo Deus. He makes a big point that what is happening now with AI is a divorcing of intelligence from consciousness. And yet there's this other brilliant... she's an astrobiologist, so she studies life on other planets. Her name is Sara Imari Walker. Listen, go search for her podcast appearances, get her book; it's called, I think, Life as No One Knows It. Brilliant. She makes an argument that part of what we're doing as intelligent life is that we're able to imagine ideas, to imagine the future. She uses a rocket ship as an example. She says humans imagined rocket ships centuries, I think, before a rocket ship was built. And it was that imagination, conceiving of it in our conscious minds, then painting that picture in other people's minds, and then over time building the technology to make it a reality, that allowed us to construct such complex inventions. Like rocket ships. So I think she would argue that consciousness and intelligence are linked. From everything I've read, and I'm more of a pop-science observer here, it's not something I've studied rigorously, the jury is still out on whether consciousness is necessary for our type of human-level intelligence. I will say, too, to be very clear: I'm doing the same thing we were imagining in the Turing test and assuming that you're conscious. I have no proof. There is no scientific instrument we have that can measure consciousness. People are working on that, and there are theories, but there's nothing approaching a rigorous measurement
that would let us know if a thing is conscious. So even when you have these crazy things happening, like the Google engineer a few years ago who thought their latest LLM, well, not the latest one anymore, was conscious, and everybody's incredulous, like, of course not. Even I have the intuition that they're not conscious, and I don't think they are. But if they were, we would have no way of knowing. We don't know. And I think a general theme underlying all of this is that we have to check our biases. There is so much here that we don't know, and people should be answering questions around AI with the definitive answer "I don't know. We don't know." Again and again. We're not doing enough of that. There are a lot of assumptions, a lot of anthropomorphizing: just assuming that because this thing sounds human, it's doing the things we're doing. And largely the answer is simply that we don't know.
[18:50:42] Thomas: Okay. This reminds me of a silly test I did a couple of weeks ago; I spent the whole weekend on it. It started with a conversation. I was having dinner with my mother and my brother, and we talked about all this. I talked about AI a little, my brother talked about his stuff, and it was a nice exchange of topics. My mother's into very alternative history, so she was telling me that the Egyptians already had helicopters and so on. Anyway, I went to bed and I was just wondering: why is humor funny? Why do we find things funny? So I asked ChatGPT, and then I got into satire, and after satire ChatGPT gave me this set of, you know, we identified
[19:40:992] Mike: You were using ChatGPT at this point.
[19:42:612] Thomas: I'm asking all these questions, right? I'm just listening to what ChatGPT is telling me. And it tells me that ultimately it boils down to two things. First, we recognize odd things in the environment, which gives us the ability to survive: when we're in the savannah and we spot a tiger, the fact that we've identified a different shape in the grass is a kind of reward, and then we can start running. And the second thing is solving an intellectual challenge: we see an apple in a tree somewhere and we need to solve the puzzle of where to put our feet to climb up, get the apple, and eat it. Those two things, eventually, are what lead us to be able to recognize humor and to be intelligent enough to produce it.
[20:37:372] Mike: Wait, so it's the unexpected thing and, what was the other one? Recognizing the unexpected in any environment, and then solving the intellectual puzzle.
[20:47:382] Thomas: Right. And so I said, okay, would it be possible to train these two things into yourself, these things that make someone intelligent? And if our need to survive is ultimately the necessary condition for us to become intelligent, then an intelligent being would necessarily try to survive, right? Because if intelligence is a consequence of a survival instinct, then an intelligent being would necessarily want to survive. And so what's
[21:34:252] Mike: Again, probably the right answer is to say I don't know, as I said before. But a survival instinct is one of the things we don't have a good reason to believe current AI, or honestly near-term AI, will have. The assumption that rests on, for me, is very much evolutionary biology. The survival instinct comes from the very beginning: you started with some sort of replicator, people think it was a molecule like RNA, that eventually ended up forming cells around itself, then more complex cells, then survival machines. The replicator built survival machines around itself, because the replicators that were better at surviving and replicating necessarily won out, replicated more, and were therefore more prevalent in subsequent generations. It is true that software can copy itself, but it is not undergoing anything like Darwinian evolution. And by the way, even when we try to simulate Darwinian evolution in a computer, not necessarily with AI, just in general, and scientists have run that simulation in many different domains, we can't do it effectively. So it's not happening with current software. We know that instinct is so deep in us because it's been selected for over literally billions of years, and AI has not had anywhere near that time; it's also not undergoing the same type of replication and selection. So I don't think there's a good reason to believe AI will have a survival instinct. I also think there are a million other things about humans, and how we see ourselves in the world, that it won't have. I don't think it's going to have a clear sense of self, right?
First of all, it doesn't have a body that can be easily defined. And I think a sense of self is something we create anyway; I don't think it's an innate thing. I also think AI will not have, and we were talking about this last night, the constant chatter in your mind that just won't let you be bored when you're awake and conscious. That's another thing that comes back to the potential importance of consciousness. There's no reason to think otherwise: when you're not querying an LLM, I think it just sits there. I don't think it has a running internal monologue. Just imagine yourself without a sense of self, without a survival instinct, and without a voice in your head that won't shut up when you're conscious. And that's even assuming the AI has consciousness, right? First of all, I don't think you can really imagine that version of yourself; it blows your mind. You're like, what am I then, even? And even if you can, any version of it stops being anything like our model of another human. So there are already a lot of reasons for me to believe that AI doesn't have those things, especially the survival instinct we started with. That's one of the reasons I think it's really dangerous to assume that AIs are functioning like persons. And I think we do that all the time.
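[Editor's aside: the selection logic Mike describes, replicators that survive and copy more reliably coming to dominate later generations, can be shown with a toy simulation. All names and numbers here are invented for illustration.]

```python
import random

def evolve(population, survival_prob, generations, cap=10_000, seed=0):
    """population: {type: count}. Each generation, every individual
    survives with its type's probability and then leaves two copies;
    a resource cap keeps the total population bounded."""
    rng = random.Random(seed)
    for _ in range(generations):
        new_pop = {}
        for kind, count in population.items():
            survivors = sum(rng.random() < survival_prob[kind]
                            for _ in range(count))
            new_pop[kind] = survivors * 2  # each survivor replicates
        total = sum(new_pop.values())
        if total > cap:  # finite resources: rescale proportionally
            new_pop = {k: round(v * cap / total) for k, v in new_pop.items()}
        population = new_pop
    return population

# Two replicator types that differ only in how well they survive.
final = evolve({"hardy": 100, "fragile": 100},
               {"hardy": 0.9, "fragile": 0.5},
               generations=20)
```

After 20 generations the "hardy" type all but monopolizes the population: nothing here "wants" anything, yet behavior that aids survival is exactly what persists. Current software, as Mike notes, undergoes no comparable replicate-and-select loop.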
[26:07:652] Thomas: No, no, I don't doubt that AI hasn't gotten to that point yet, right? There's a lot of behavior, as you've described.
[26:16:812] Mike: I just want to be very clear: it's not evolving. It's not being built in anything like the way humans were built over time.
[26:32:892] Thomas: Exactly, the conditions. So I don't think we're headed in that direction. There are some projects that want to replicate a brain in silico, and yes, if we did that, there's reason to believe it would have all of those things. But I also think we're very far from that. So there are two things you're really tackling here. First, the pre-existing conditions are not conducive; they wouldn't necessarily take us to the behaviors that dominate us humans. And also, looking at the symptoms, we don't see symptoms that indicate any of this behavior. But my question to you was slightly different. Do you think that, since the first time we've encountered intelligence in the universe, which is with ourselves, right? Because we haven't found other intelligent beings. So we
[27:27:722] Mike: I mean, that goes back to the question of what intelligence is, but I do think there are examples of intelligence in other animals.
[27:35:102] Thomas: Yes, of course. And it's perhaps more consciousness that defines us humans, right? Consciousness is all of these things you mentioned that make us us. But the only time we've ever encountered intelligence in the universe is as a consequence of an organism, a being of some sort, trying to survive. Even if it has a physical body, or it's, I don't know, a lichen that grows all over the forest as one single organism, it derives from this instinct of survival. So we can arrive at one conclusion, although not conclusively: in order for intelligence to arise, the organism needs to have this ingrained survival...
[28:28:442] Mike: Survival.
[28:28:772] Thomas: Yeah. So do you think that if AI ever becomes intelligent at our level, it will also necessarily have that instinct?
[28:39:622] Mike: No, and that's because I would push back on what you just said. I think we know where that survival instinct comes from, and it comes from something much deeper than the parts of our brain that give us higher-level intelligence. We see versions of that survival instinct, that will to live, in creatures that don't have nervous systems at all. You can argue plants have defenses; they respond to threatening stimuli in ways that protect themselves. Many much simpler creatures will move away from threats. So no, I think the survival instinct is much older than what we would call intelligence. And I want to be clear: I don't think that we've reached
[29:45:302] Thomas: Let me just, because we're getting into... I'm not saying that all mammals are dogs; I'm saying that all dogs are mammals, right? I'm not saying that all organisms that have an instinct to survive are intelligent, but I am saying that all the animals that are intelligent have a survival instinct, right?
[29:52:162] Mike: But also, maybe the second part, because you also asked whether I think an artificial intelligence will want to survive. But I also
[30:00:152] Thomas: That was my question.
[30:04:362] Mike: The answer is I don't think so, necessarily. But I want to also
[30:08:182] Thomas: I also want to clarify: I'm not saying that all mammals are dogs; I'm saying that all dogs are mammals. I'm not saying that all organisms with an instinct to survive are intelligent, but I am saying that all the animals that are intelligent have a survival instinct. So
[30:29:102] Mike: My point, sorry. You can finish, but my point
[30:32:82] Thomas: Yeah. So it has been a pre-existing condition for any form... not to say that, for example, a jellyfish swimming in the ocean is intelligent, though it still has the survival instinct. But intelligence has, in all cases, come from that pre-existing condition.
[30:55:122] Mike: In the sense that we evolved intelligence, everything in life comes from that pre-existing condition, because all the forms of life, all the different phenotypes, the wings or legs or venom, are results of Darwinian evolution; you can say the cause is the desire to survive and to replicate. And I agree with that. I also think that, in some way, you can argue current artificial intelligence was birthed by humans, who have a survival instinct. It doesn't exist without humans, so it's still part of that lineage, that causal structure going back to a replicator that wanted to replicate. But no, I don't think that an individual being... even think of it this way, as a thought experiment. I don't think this exists, but let's play the game: if you found the place in the brain where our conscious version of the survival instinct was seated, and you destroyed that little part of the brain, but you still had your cortex and the rest of the brain functioning, I think that person would still continue to be intelligent. I also think we've shown with current AI, from what I can see so far, that if you try to turn off an LLM, whatever that means, and it's not easy to define what that means, because where is it living? But if you turn it off, I don't have the sense that it cares, that it has an instinct to stay on, to keep going. And while I don't think current LLMs are human-level intelligent, I do think they're intelligent. So I think we've created a version of intelligence without that survival instinct.
So I see nothing firmly linking those two; I think that's correlation, not causation. I see no reason you need a survival instinct to have intelligence.
[33:50:522] Thomas: That makes a lot of sense, and it ties in very well with the other line of questioning I want to get into, which is the future. A lot of people are freaking out about the role of humans in interaction with these artificially intelligent machines. And precisely the reason they start freaking out is that they've seen what human intelligence is capable of, right? In its obsession to survive, to hold resources, to ensure that its own genes, and not someone else's, get reproduced, we've committed massive atrocities; we've lost our heads so many times in history. And you're talking about the possibility of a fully intelligent being that does not necessarily have that will to compete and to hold resources. So what does that look like in the future?
[35:01:2] Mike: I think there's a lot of hope in that, because a lot of the doomsday scenarios assume one of two things. Either it's the Terminator scenario, where we're actually creating another species, and for the reasons we were just talking about, I don't think we're doing anything like creating another species, so I'm going to put that completely aside. The other scenario you hear about was made popular by the idea of the paper clip maximizer. Have you heard of that? I forget who came up with it first, but the story they tell is this: you have an AI that's not doing anything particularly serious, and certainly nothing evil. It's just trained to make as many paper clips as possible; that's its goal. It's not even trying to survive, just trying to make paper clips. But it's also given the ability to improve itself, to learn, to edit its own code, and it gets more intelligent at an increasing rate. Eventually it becomes superintelligent. And because it's so much more intelligent than we are, it's able to game things out, manipulate our behavior, and see the different future scenarios. It realizes that if humans continue to exist, they will eventually come into conflict with it, will try to shut it down, and will keep it from achieving its goal of making as many paper clips as possible. So, a totally benign goal, but despite that, it concludes that it needs to kill all humans. It kills all humans and goes on building spaceships, flying across the galaxy, and turning as many planets, solar systems, whatever, into paper clips as it can.
I think that's actually a much more interesting and much more likely example of the downsides of artificial intelligence. But I think we're quite a bit further away from creating anything with that kind of intelligence, and there are many large hurdles in the way. And again, I'm not an AI researcher; I just read a lot about this, and there are many very smart people who don't agree with what I'm going to say. You talked a lot about pattern recognition before, and I think current AIs are really good at that, and really good at memorization. What they're really not good at is the thing I think we're getting at with artificial general intelligence: you can throw any problem at a human being, a completely novel one, one with no past training data, something we've never been exposed to, and we can still take a stab at it. We can still try to figure it out. Current LLMs fail almost completely when you give them truly novel problems that don't exist in their training, and those are hard to find, right? Because they're trained on the entire internet, so they have a ton of training data. But there are several prominent AI researchers who argue that essentially what LLMs are doing is memorizing many, many general program structures, which they then apply in ways that seem almost miraculous to us; it's crazy, the things they're able to do. But when you give them a truly novel task, outside of one of those program routines they've memorized, they literally can't do what humans can do.
They can't come up with an approach that lets them, at least on a first attempt, produce a solution. And until we figure that out, I'm really not worried about any of the AI doomsday scenarios.
[40:27:543] Thomas: You mentioned something; I'd heard the paperclip story in different terms, but yes: the idea that you program an AI to do something very specific, like any program that's more or less semi-independent, able to perform a function and get better at it, and so on. The image of solar systems being turned into paperclip factories does illustrate a point, but it's quite extreme and difficult to believe. Yuval Noah Harari, though, talks about this very same problem: an algorithm at Facebook whose whole purpose was to generate clicks, to generate attention that would bring more revenue to Facebook so users would use it more. Its sole purpose was to have the user click on whatever article was being posted, thereby promoting those articles, making it easier for them to be shared and to appear on other people's feeds. And by selecting just for that, it eventually caused millions of Buddhists...
[42:1:683] Thomas: ...in Myanmar to turn against the Muslim Rohingya minority, right? People started spreading completely fake news (this was around when we started talking about "fake news" proper in the world), and it generated this hatred, because there were so many lies. And the algorithm was a perfect bystander, an innocent bystander: "I'm just here allowing people to interact more with whatever content is being shared, and that's all I'm doing." It's not so far from the paperclip factory story.
[42:42:143] Mike: It's a very good challenge. I want to be really clear, though: the thing I'm worried about is existential threat. And as terrible as that example is, it didn't wipe Myanmar off the face of the planet. I mean, it was a genocide; that is horrible, and we should do everything we can to avoid it. But we still need to put it in perspective: we're not talking about an existential-level threat from AI. Now, the road we're on, and really any road we're on (look back at human history: there's no utopia, I don't even think one exists, problems are inevitable), is going to have a lot of bumps, as with any new technology. The capabilities we're building are so powerful, so crazy, that it won't be the last time something like that happens. But I don't think we're anywhere close to building something at the existential level of threat that's causing people to say, "Well, I'm going to stop having children." The second part I want to add, because I'm actually reading a really good book that Anna recommended to me (all right, Anna): it's a book by a whistleblower at Facebook called Careless People. The author got some sort of ruling against her, so she's not allowed to promote it, so I actually want everybody to go out and read it. She's a great storyteller; the book is very critical of Facebook, and it seems very plausible. I'm most of the way through, and she hasn't gotten to the Myanmar part yet, but she's told a lot of stories about the building blocks that made the platform ripe for being used in the way we saw in Myanmar.
And it's very clear that it was not just a technological failure. The humans messed up; they were putting profits over people.
[45:18:633] Thomas: I mean, there was political intent as well, because the government actually...
[45:25:523] Mike: And there...
[45:25:753] Thomas: ...promoted it, involving the military and so on. It can't just be people sharing things on Facebook; it's intelligence reports getting to people's offices and then mobilizing. But for sure. So we do agree that there might be some bumps in the road. In this next question, I'm going to ask you to get not just one step ahead but two steps ahead. First: what do you think some of those bumps might be? And then, being two steps ahead: what do you think the solutions to those problems might ultimately be? Let's focus on the problems first.
[46:04:473] Mike: Okay. I just want to be very clear: we're terrible at predicting the future, so I don't think we know. The much more interesting problems are going to be the ones we can't foresee. But two come to mind that are routinely brought up, that I see already, and that seem likely. One is jobs. I think we're about to see a lot of jobs go away or change drastically. Many will be replaced by others, but there's still a lot of churn there, and it's hard to retrain, especially when the pace of technological change is increasing and even the new job you train for may be obsolete in six months. The other is in the same line as the Myanmar example. I think we've developed technologies that are truly addictive, in a classical sense, in very much the same way that crack is addictive: technologies that hook the parts of our brain that crave cheap, short-term rewards and need another hit. We all have some level of addiction to our smartphones already, and I think that's about to go into overdrive with AI, because it will be very good at detecting the patterns that get us to come back to the various pieces of software it's integrated into. However, we've developed immune systems for these kinds of things in the past. So, solutions for both: I'll start with the second one and then go back to jobs. We've been facing this problem since at least the development of the smartphone, and I think it's so sad, such a shame, that no company (and I understand the incentive structures are not aligned for this) has tried to use the addictive nature of technology for good. Like, thinking about just clicking.
What happened in Myanmar, and what happens every day on Facebook and Instagram and so on, is that you're fed the content you're most likely to click on, and that usually plays to your base instincts. I interviewed people last year when I was first really trying to get into AI and figure out how to use the technology for good. That's actually one of my main goals: to use these new technologies to help us become better, more human in a way, better versions of ourselves rather than worse ones; to have the technology work for us rather than us work for the technology. And right now we're working for the technology; it's just hooking those base-level instincts. There's no reason the same technology, even the old versions of it, even Facebook, couldn't ask us what our higher-level goals are: what makes us feel fulfilled in life, what we really want to achieve, and then feed us content that is both irresistible, that makes us want to click, and that helps us achieve those higher-level goals. In very many cases, the same technology can be used to do good and help us live better lives. And I think it's a choice, plain and simple. We can build incentive structures, companies, and products that do that; we just haven't, because it's not where the current incentive structures, and in particular the money, lie. There's a really good analogy, certainly not perfect, with processed foods. We developed modern industrial practices for food production (you can even lump factory farming into this), and we just let them run amok.
We didn't put any checks in place, and people got super fat and super unhealthy. We created products that were just sugar and fat with no additional nutritional value. And then over time (it took way too long, but over time) we developed multiple ways of combating that, and very rarely was it by purely restricting or banning those foods. It was by developing competing products that were better for you, whether those were other foods or health and fitness trends. Now you even see, and a lot of people rail against this but I'm all for it, these low-calorie, delicious, high-protein yogurts. Maybe they're not the same as eating fresh fruit, but they're much better for you than eating gummy bears. (There are gummy bears right here.) So I think we can do the same thing with the digital products we've created. I'll pause there before going back to jobs. Yeah, I know. So, you were just talking about the future: one of the bumps in the road will be this addiction we've shown we're capable of, when the products we build tap into certain basic instincts, like that dopamine hit that satisfies us very quickly. That's how the food industry exploded, and then eventually... sorry, I was eating a gummy bear. It is addicting, man. It's very good.
[53:16:358] Thomas: So I'm proving the point, right?
[53:18:238] Mike: Well, it's actually a good example, because they're sitting right in front of us. There is definitely a problem with smartphones: having this technology so accessible, with no checks on it, and we're just relying on willpower not to go after the cheap dopamine hit sitting in our pocket. That's ridiculous. We need to fix that, and we will.
[53:51:568] Thomas: Exactly. So that's one bump in the road, and you gave examples of what happened in the food industry. The parallel being: we now have this delicious yogurt, or, you know, I'm drinking Coke Zero, with zero sugar. I'm not suddenly just drinking water instead of Coke with sugar, but I am drinking a healthier alternative that satisfies my craving, and then I'm good to go and don't have to drink Coke for another, I don't know, two days. So your point is that we can build similar systems and products with AI.
[54:39:578] Mike: Yeah. Imagine you have your little AI pal sitting on your shoulder, or a device around your neck. And by the way, I think OpenAI is developing something like that; that's coming. It knows you want to get in shape, and it also knows things about you that you don't even know. It knows you're moderately more likely to go to the gym, and have fun doing so, when you hang out with a particular friend who also goes to the gym and is a good influence on you in that way. It may even be aware of crazy correlations we can't pick up on: you get more sunlight in the morning, or you wear a certain pair of shoes, who knows. And it knows you get a lot of happiness from learning new skills, and that you really want to learn to fly a plane (you were talking about learning to fly, so that's why I'm thinking of it). So instead of you scrolling through random shit on Instagram or YouTube, it's going to start feeding you really exciting, fun or funny content, maybe even content it creates on the fly for you, about how to start learning to fly a plane. And then it's going to feed you that ad, or maybe it's not even really an ad, maybe it's just: "Hey, you should give these guys a call. They're in your area, they have flight lessons in your budget, and I can negotiate a plan for you. Usually they make you pay up front, but let my AI agent go talk to their AI agent and we'll see if we can work something out."
[57:15:108] Thomas: Oh my god, I just had a crazy idea. I'm into playing instruments, right? I like playing and making music, but a very specific type, either electronic or more synth music. So usually when I say I like to play music, people go, "Oh yeah, let's play rock," and I'm like, oh no, not another rock band, you know? So, and I'm just spitballing here, daydreaming: what if this device asked my permission, like, "I see this is what you're trying to achieve. Do I have your permission, if you want me to, to put you in contact with other people looking for the exact same thing within a five-kilometer radius?"
[58:2:908] Mike: Bringing the social element in. This is what I mean when I say technology making us more human; it's not an exact way of saying it, but it will help us do those things. And by the way, I think it's a shame that for our jobs, and for a lot of our socialization, we're tied to these screens, sitting at desks, typing on keyboards. You talk about this AI necklace or lapel device: I think this is going to happen concurrently. The input into the digital world and the output from it are going to be integrated into our bodies and our movement through the world in a way that's currently not the case. You won't have to pull out a phone. I am not bullish on the metaverse at all, but I am very bullish on augmented reality. I think we're going to bring technology into the real world, the world we were literally designed to live in. And I'm so excited for that.
[59:28:988] Thomas: Because one of the things, I've heard you say this: I do agree that a lot of the technology we're designing is very visual or auditory, very eye-based and ear-based. It's not really tapping into the other three senses: taste, smell, and touch. And I've heard you mention ideating crazy tech alternatives that are more tactile. Does the future of technology look more integrated with our senses?
[01:00:09:628] Mike: Yeah, it does. Last year I took this wonderful course through MIT called Fab Academy. Fab as in fabrication, though I think they encourage the association with "fabulous" as well. It's a crash course in digital fabrication technology. Week by week, you learn to program a 3D printer, to use computer design software, to run big CNC carving machines and laser cutters, then to design and print circuit boards and program microcontrollers. My final project was a device called "eyes in the back of your head." You wear it on your back, and as something approaches you from behind, it transforms the distance measurement into a pattern of vibrations on your skin that gets more intense as the thing gets closer, so you can sense something approaching behind you. Other people have designed devices like a wearable that buzzes when you're facing true north, to give you a sense of true north. And I have this idea, which I've talked to you about: I think we should invent a more free-form way of typing, just using our fingers, not even on a projected keyboard, but a different way of inputting letters into a computer based on different configurations of your hands. Yes, you'd have to learn it, but if you designed it right and it was as efficient as typing, you could...
[01:02:05:468] Thomas: You could use sign language.
[01:02:06:218] Mike: Yes, but this would be letter-based. With sign language you have to learn a whole new language, right? This would just be learning a new version of typing, which I think is a much more realistic ask of people. Anyway, I think there are a lot of ideas like that. The thing, though: two years ago I talked to a guy at Facebook who was designing some of these devices, and I was telling him about some of these ideas. He said, yes, but the reason we rely on vision and hearing most is that the bandwidth of those senses is much higher, especially vision. Our sense of vision is mind-blowing. Compare it with our sense of smell; they sometimes say a dog's sense of smell is more like our sense of vision. With smell, you can tell whether something smells good or bad, or that this thing is cooking rather than that thing, but if there are two smells at the same time, they get mixed together; it's all a little fuzzy. Whereas looking at the scene in front of me right now, I can clearly see you, the light, the TV over there; if something moved, I would see it; and I comprehend all of these things at the same time. That gives you an idea of how much higher-bandwidth vision is than smell, or than touch. But I still think there are a lot of really interesting opportunities despite that.
I think we will still rely a lot on hearing and sight. But you see these headphones now that let you hear your surroundings: if you're playing something, they can selectively block out, or not block out, what's going on around you. It's a more seamless, more transparent version of current headphones; it doesn't block the real-world auditory experience. And similarly for sight: I think it will first be these headsets and glasses, and eventually contact lenses, that let you overlay digital information on the real world in a way that doesn't interfere, at least not too much, with the real-world visual experience.
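The "eyes in the back of your head" mapping Mike describes can be sketched in a few lines. This is a hypothetical illustration: the sensor range, thresholds, and the linear ramp are assumptions, not the actual design from his Fab Academy project.

```python
# Toy sketch: distance from a rear-facing range sensor -> vibration
# intensity (0.0 to 1.0). All constants here are assumed for illustration.

MAX_RANGE_CM = 300   # assumed: beyond this distance, no vibration
MIN_RANGE_CM = 20    # assumed: at or below this, full intensity

def vibration_intensity(distance_cm: float) -> float:
    """Closer object -> stronger vibration, clamped to [0, 1]."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0
    if distance_cm <= MIN_RANGE_CM:
        return 1.0
    # Linear ramp between the two thresholds
    return (MAX_RANGE_CM - distance_cm) / (MAX_RANGE_CM - MIN_RANGE_CM)
```

On real hardware, this value would drive a vibration motor's duty cycle; a nonlinear ramp or pulsing pattern might feel more intuitive, but the clamped linear map is the simplest starting point.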
[01:04:46:948] Thomas: Yeah. So you're talking about opportunities brought about by this new emerging technology, and earlier we were talking about bumps and jobs. (We never went back to jobs, by the way.) The topic has evolved in my head now, so I want to relate the two.
[01:05:548] Mike: They relate well. Yeah, it's convenient.
[01:09:128] Thomas: What are the opportunities for companies? We've talked about practical applications of technology and where the future of tech is going; in the wake of AI, what are some of the opportunities you're seeing?
[01:26:388] Mike: So in deep tech, high tech, any of these various buzz-tech words, I think there's a ton of opportunity. The course I mentioned taught me that even a solopreneur, even a small company, has real options here. I went into the course because all my product background has been in software; I hadn't really done hardware, and I'm super interested in it. I wanted to test: if I want to go into hardware, do I need to get a job at Facebook or Apple or Samsung, one of the big hardware players, or could I really start a hardware company by myself? After that course the answer was clear: you can definitely start a hardware company by yourself, and it's only going to get easier. So there's a lot of opportunity there. There's also a lot of opportunity on the software side: building the application layer on top of these foundational models. The models are going to keep evolving, and if you build your application on top of them in a way that gets amplified as they get better and better, you can build some crazy, crazily successful applications across a wide range of domains, both B2B and consumer. And we're just at the very beginning of that. That said, I want to be really clear: the transformation it unleashes will be huge, but it's going to be a very small percentage of people and companies actually working on that layer. So the more interesting question for jobs is what happens to the existing workforce, the existing companies that employ most people.
And I do think it's going to be tough. A lot of jobs are going to get automated; a lot of people are going to lose their jobs. But, and everybody says this all the time, the saving grace is that jobs are actually bundles of tasks. Almost any job, especially nowadays, even a more mundane one, involves a lot of varied tasks. And the more varied a task is, the harder it is for an LLM, or any other AI, to automate and replicate. Most jobs have some tasks that are easy to replicate: really high-volume, really repetitive, tasks machines can definitely do better. Fortunately, those are also the more boring aspects of most people's jobs. We all have examples: I have to send a million emails, format this thing, attach something, and there's no automation for it, so you resign yourself to doing ten of those a day. Or outbound lead generation, like we've been doing: searching for people on LinkedIn, getting their emails, writing the email. A lot of that is so boring. Those will be the first things to be automated, and it won't completely replace most jobs; it will replace that set of tasks. It will make the job much more fun and much easier. But the rest of the job, where you're meeting with people, talking to people, solving problems, and making decisions that require complex judgment:
those are things machines are not very good at right now. And I also think we're not at a point where we're comfortable; we want a human in the loop in many of those cases. So I think the hit to jobs is going to be slower. But I do think it's coming.
[01:12:498] Thomas: Yeah. But pulling the focus away from the worker's perspective: you've recently decided to dedicate yourself fully to helping other companies explore ways to incorporate AI, and all the new emerging technologies in general, right?
[01:17:668] Mike: Not just me; we've decided to focus on that. Exactly. Both of us.
[01:36:58] Thomas: And so, pulling the focus away from the worker's perspective, I want to focus on opportunities. One burning hypothesis you have is that with everything coming out, there are companies wondering: "Oh shit, I have to do something with AI. Everybody's telling me so: society, the radio, podcasts, everybody's talking about this thing. Where do I even begin?" So what's your perspective, and what's your recommendation from a company's point of view?
[01:52:998] Mike: Great question. First of all, I think there's a ton of opportunity, but also a lot of hype. And similar to what I was saying before, that we don't really understand what we are: we also don't fully understand why the things we've built work so well. The hierarchies we create within organizations; the relationship building and how it facilitates decision making and the flow of information; the selling process for getting actual humans to buy things. A lot of that is down to a science and we do understand it, but especially on the organizational side, there's a lot we don't. We just do it naturally, because we're a social species and we self-organize to solve problems. So, step one: recognize that and own it. If you've built a business that's already working, where people are buying your products and ideally you're making some profit, that's a proven business model. That's gold, and you would never blow it up or throw it out without a very good reason. And I'm here to tell you that AI is not that reason. What I think you should do is use this as an opportunity to map out everything you're actually doing in your business: what the process flows are, what the individual tasks are, and decide which ones fit what we know current AIs are good at. A great capability that I think is underappreciated, because we think of conversation, we think of chatbots: you can throw an AI a bunch of unstructured information, and it shoots back a database, pulling out only the relevant information in a structured format.
And a lot of different use cases fit that generic model. If you see that anywhere in your business, you should be building an agent; you should be using AI to do it for you rather than hiring people, certainly, but even rather than using legacy software. So that's the approach: realize the value you've created, use this opportunity to really understand it, get AI to help you understand it, and then build automations from the ground up.
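The "unstructured text in, structured records out" pattern Mike describes usually boils down to two pieces: a prompt that pins the model to a fixed schema, and a validation step on whatever comes back. A minimal sketch, where the field names and prompt wording are invented for illustration and the actual model call is left out (you would send the prompt to whichever LLM API you use):

```python
# Sketch of schema-constrained extraction. FIELDS and the prompt text are
# assumptions for illustration, not any particular vendor's API.
import json

FIELDS = ["name", "company", "email"]  # assumed target schema

def build_extraction_prompt(raw_text: str) -> str:
    """Ask the model to return ONLY a JSON array matching the schema."""
    return (
        "Extract every contact mentioned below as a JSON array of objects "
        f"with exactly these keys: {FIELDS}. Use null for missing values. "
        "Return only the JSON, no commentary.\n\n" + raw_text
    )

def parse_records(model_output: str) -> list[dict]:
    """Validate the model's reply: keep only records matching the schema."""
    records = json.loads(model_output)
    return [r for r in records if set(r) == set(FIELDS)]
```

The validation step matters: because model output is probabilistic, anything that doesn't parse or doesn't match the schema should be dropped or retried rather than written straight into a database.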
[02:40:998] Thomas: So, I was at a conference not too long ago, and there's one question, a silly example, and I'd like to know how you would solve it. Obviously AI is now being put into all of these chatbots for larger companies. And talking to a person is usually a last resort: you haven't been able to find the information online, the structured information doesn't have the answer, and there's no straightforward way of proceeding. It usually involves, like the long call I had today with a phone company, some form of moral judgment. As a customer, I felt I had been wronged by the company, because I was made to pay an amount that didn't correspond, and I had a 30- or 40-minute exchange with the person that literally became a discussion of morality. I felt it was wrong; she felt it was within policy: "It was clearly expressed to you, and you signed the contract." So: does AI have the ability to replace humans that way? Because most of the time, when you get in touch with customer service, it's really to make these moral judgments.
[03:19:358] Mike: Honestly, my thought is: yes, but not yet. There's still a problem with hallucinations, there's still a problem with AIs going off script in a way that's a bit unpredictable. In the use cases I've seen, where the AI is acting as a customer service agent, there's still a human in the loop to deal with some of those harder situations, definitely the harder judgment calls. And I think you probably want a human to do that, not just because humans are a little more predictable in their behavior right now, but also because that's what customers expect. At some point they want to be able to talk to a real human, because it can be frustrating talking to a bot. However, they've done some comparisons, and I'm going to get this a little bit wrong, but they'll run these blind tests, Turing-test style, with human therapists versus LLM therapists, and the LLM therapists actually do a better job. So even for tough conversations, humans are emotional, and an AI that's appropriately prepped to take the first pass at a tough conversation, it's not my expertise and I haven't tried it, but my guess is it can do a very good job, maybe even better. But I think you always want the escalation path. And it's not a therapist's job to make moral judgments, for that matter, so we're talking about very different scenarios here.
Granted, it works in that specific case, but I guess more what I mean is: in customer service, as you do with humans, you constrain the outcomes that especially the lower-level reps are able to decide between. Their main goal is to solve your problem and get you off the phone without giving you anything. If you persist, they're allowed to give you a five-euro coupon or whatever it is. And you can give LLMs similar constraints; that's already very possible. In the current, normal customer service flow, if the options available to the rep you're talking to aren't sufficient to resolve the problem, you escalate to a manager or somebody more senior, who has a broader set of options and usually a higher ticket price at their disposal, and who is also trained to deal with difficult customers, to say no in a definitive way and potentially upset them. And I think at some point in that flow, you still want a human. Maybe always, I don't know, but yeah.
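Constraining an LLM rep the way Mike describes doesn't have to happen inside the model: whatever the model proposes, a thin policy layer only executes actions from an allowed menu and hands everything else to a human. A hypothetical sketch, with made-up action names and a made-up five-euro limit echoing the example above:

```python
from dataclasses import dataclass

# The small, fixed menu a first-line bot rep is allowed, mirroring how a
# junior human rep only gets a few possible resolutions.
ALLOWED_ACTIONS = {"explain_policy", "apologize", "issue_coupon"}
MAX_COUPON_EUR = 5.0

@dataclass
class Resolution:
    action: str
    amount_eur: float = 0.0

def apply_policy(proposed: Resolution) -> Resolution:
    """Gate whatever the model proposed; escalate anything off-menu.

    The LLM can suggest anything, but only allowed actions within budget
    ever reach the customer. Everything else becomes a hand-off to a
    human: the escalation path from the conversation.
    """
    if proposed.action not in ALLOWED_ACTIONS:
        return Resolution("escalate_to_human")
    if proposed.action == "issue_coupon" and proposed.amount_eur > MAX_COUPON_EUR:
        return Resolution("escalate_to_human")
    return proposed

print(apply_policy(Resolution("issue_coupon", 5.0)).action)      # issue_coupon
print(apply_policy(Resolution("refund_full_bill", 80.0)).action) # escalate_to_human
```

The design choice is that the constraint lives in ordinary code, not in the prompt, so even an unpredictable or hallucinating model can't exceed the rep's authority.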
[05:05:818] Thomas: Yeah.
[05:06:58] Mike: Yeah.
[05:06:408] Thomas: Probably not always actually, but yeah.
[05:08:298] Thomas: And so, we've been talking about how companies can benefit from this, but I do want to ask you one last question. I've seen you interact with AI, both in your personal life and your work life; you've used it in many different and creative ways. So I want to use this chance to ask you: how integrated is AI into your life now? How do you make use of it, how do you benefit from it, and how are you, as a result, a better version of yourself?
[05:50:878] Mike: I find that AI is most useful as a thought partner, and I feel like there are two main ways I use it; I'm making a lot of generalizations here. One is generative: I want to brainstorm, and I love brainstorming with AI, whether it's thinking through different names for a product I want to launch, or designing a workout routine, or brainstorming different workouts I can do, or trips I can go on; there's an endless number of examples. You can throw it very quick questions and it does a really good job of generating a lot of options. A lot of them are crap, but you'll get a few good ones, and then you tell it to focus. The other is as a thought partner on something I'm already working through: a problem, a project in my personal life, even a project I'm working on. This is where you want to give it as much context as possible, and ideally be saving that context, either within a conversation or however the different LLMs handle it, making sure it knows and remembers as much about you as possible. This also gets to how LLMs work, which I don't think we have time to get into, but every time they produce a result, they produce it from a fixed amount of input. So even when they're pulling in past information, they have to pull it into the current prompt. And in the background, most LLMs are doing this: they're searching your history and pulling in background information that might be relevant to the current query you've put forward.
But you need to make sure it has that information, and keep as much of it in text form so you can easily feed it to the model. So that's great. The other thing, a specific version of this, which I know you're using a lot as well, is the coding tools, and just getting help figuring out how to use new tools: taking a ton of screenshots, giving them to the LLM, and asking it, how do I do this? Using it to learn new things is really one of the ways I love using it. And I feel like you're almost more of a power AI user than me, so I'd like you to add a few.
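What Mike describes about LLMs producing every answer from a fixed amount of input can be sketched as a retrieval step: score your saved notes against the current question, then pack the most relevant ones into the prompt until a size budget runs out. This is a deliberately toy version, with keyword overlap standing in for real embedding search and a crude word count standing in for tokens:

```python
def score(note: str, query: str) -> int:
    """Toy relevance: count how many query words appear in the note."""
    note_words = set(note.lower().split())
    return sum(w in note_words for w in query.lower().split())

def build_context(notes: list[str], query: str, budget_words: int) -> str:
    """Pack the most relevant notes into a fixed-size prompt prefix.

    Real systems use embeddings and token counts, but the shape is the
    same: everything the model 'remembers' must fit into this window,
    so less relevant memories get left out.
    """
    picked, used = [], 0
    for note in sorted(notes, key=lambda n: score(n, query), reverse=True):
        words = len(note.split())
        if used + words > budget_words:
            continue
        picked.append(note)
        used += words
    return "\n".join(picked)

# Hypothetical saved notes about the user, as in the "remembers you" idea above.
notes = [
    "User prefers short morning workout sessions, four days a week",
    "User is planning a product launch in spring",
    "User dislikes long phone calls",
]
context = build_context(notes, "design a workout routine", budget_words=10)
print(context)
```

With a ten-word budget, only the workout note fits, which is why keeping your history in easily searchable text form matters: whatever can't be retrieved and packed into the window effectively doesn't exist for the model.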
[08:57:338] Thomas: Oh, I've downloaded my entire brain. One very specific use for me is structuring my thoughts. You've talked about creating new ideas, you've talked about having it as your thought partner, but for me, more often than not, I'll just have loads of thoughts rattling around my head, and I need to vomit those out, and then I want the AI to help me structure them so I can read it again and understand what the hell's going on in my brain. They're still my thoughts, but given a different shape that I can make sense of and work with much quicker. Because, and we've talked about this in the past, thoughts are composed of so many different layers; we have our very basic instincts built into our emotions. So I say things, I interact in certain ways, and AI helps me understand: was I being too harsh? Was the other person being too passive-aggressive? Or even about my own thoughts: from everything I've said and everything you know about me, what are the two main things I keep coming back to?
[09:20:808] Mike: Yeah, I love that. Right?
[09:21:778] Thomas: And then, I had an intuition about these two specific ideas being the most important, but I didn't even know it myself, and because the AI has so much information about me, because I've just downloaded everything, it says: these two things seem to be very prevalent in your life. And I'm like, wow, I'm going to explore those, because there's got to be something there. And they usually resonate with you.
[09:44:488] Mike: Yeah.
[09:44:668] Thomas: Oh, absolutely.
[09:46:588] Mike: I think you also do a very good job of this. Even as we were having a meeting this morning, I started to type notes and you said, it's okay, I'm recording it. And then you record it, transcribe it, and put it in the AI. I think that's actually a wonderful practice. And it's a good example, because our phones are sitting right in front of us. There is definitely a problem with smartphones, right? Having this technology so accessible, with no checks on it, and we're just relying on willpower to not go after this cheap dopamine hit that's sitting in our pocket. It's ridiculous. We need to fix that, and we will.
[10:07:958] Thomas: Right.
[10:17:158] Thomas: And so now you understand why I'm so excited to be working with this guy. It's not just that conversations can get into really practical details on what to do with the many different tools that exist, which is exciting because it's like a puzzle, it's a game; it also gets philosophical. Part of the puzzle we're trying to solve here is understanding not just AI and what it can do, but how it fits into the big picture: into our economies, into our social structures. Being able to discuss these things and be two, three, sometimes even four steps ahead with Mike is what ultimately made him the best partner. Do you want to say something, Mike, before we go?
[10:53:998] Mike: No, I just love the conversation. I'll just say one thing I didn't finish before. I mentioned that I did a bunch of interviews a while back with people, not about AI specifically, but about their technology use. And I really expected, since I feel like I'm pro-technology and most people are like, I hate social media, to hear mostly complaints. But everyone, everyone said: yes, there are these things I hate, but there are also all of these things: it's helping me keep in touch with my family, it's helping me learn new skills, it's helping me connect better with friends or my kids. There's something really meaningful that tech is helping them do. And when we think about the opportunities for AI, we always think about it supercharging the negative. I think it can supercharge those positives. Whether it's helping businesses or just helping ourselves and our families and our friends and people in general, that's what excites me most about what we're doing, what we're building, and what's possible with this crazy new technology.