
The Squid of Despair
Squid #16 - Artificial Intelligence
Perhaps the #1 topic out there at the moment - the rapid (or so it seems) rise of artificial intelligence with technology like ChatGPT and Synthesia and MS Co-Pilot and Bing (and more) had to be discussed by the hosts eventually, and here it is.
Joining DAS and Peter is a non-person who still contributes to this episode with some thoughts of value (and humour).
Important questions such as "do fish really have fingers?" and "will AI replace the water cooler chat in the future?" will be addressed, as will the ethical use of AI, the human/digital partnership, and the impact on business and society.
Plus, at no point is The Terminator directly mentioned (or HAL).
Welcome to the Squid of Despair: unscripted musings on business, life, leadership, creativity, transformation, and all the myriad other work-life events that get in the way of a good night's sleep. Hosted by David Ealing Smith and Peter Taylor. Hello, Daz. Hello, Peter. How are you? I'm good, I'm good. So you've been on your travels recently, I know that you've been off to Canada. I have been, yes. You've been on your travels too, as I remember. Yes, well, you always remember because you sign off my expenses. I wanted to ask you, have you written any books recently, Peter? You know I have, because it's the book that you contributed to. And thank you for your time and contribution. Got it in just at the last minute? Well, I've learned everything I know about last-minute contributions from your good self. So tell me about the book, remind our listeners, what are we going to see? I feel you're deflecting from Toronto, but fair enough. Well, the book's about PMOs, and it won't be out until March 2024, so we should probably talk about that much later. Really? Okay. But it was a great contribution from all my PMO leaders and your good self and our boss's boss, well, my boss's boss. So yeah, it was good. Good, I look forward to seeing it. So where were you heading with your Toronto questions? Clearly you've got a point here? Well, I've got a point because a little birdie told me all about a little game you played there, and I thought you might like to share the game with our listeners, because they might like this. Yeah, well, credit to you for setting me up, if that's the right word, with it. It was an icebreaker, I think they're called, where you start meetings with a sort of enjoyable little interlude before the work gets going. And the game was to describe your colleague in one word.
And so flip charts, you can imagine, with these people's names, and everyone had to contribute a word. And the sort of briefing behind it was: remember that people will remember this for the rest of their lives. I offer you the unkind comment that I made to you nearly 30 years ago and you've never forgotten. Never forgiven me, either. Though the fact I'm over it now is a positive thing. But yeah, it did sting for a long time. Be memorable, and, you know, be funny if you can. And actually, it was a joyous thing. We had a lot of very funny suggestions, very astute suggestions, and some weird suggestions, actually, but we enjoyed it tremendously and got into work with a smile on our faces, which is what it was intended to do. Lovely. And did they describe you in nice words? I had an eclectic list of words, actually, of which my favorite was "renascence", or "Renaissance", depending on which side of the equation you live, which I thought was rather good. I was proud of that one, actually. And the others were equally kind, let me put it that way. Very good, very good. All right, so I've got some bad news. Oh, yeah? Okay. Because the way we play this, as you know, is that one of us comes up with a squid of the day, and then the other one gets to ask the question about what the squid of the day is. I look forward to that moment, PT, you're not going to take that away from me, are you? Oh, I'm not going to take that away from you. Much. No, but a little bit. So you get to ask the question. Peter, what is the squid of the day? Ah, thank you for asking. Today, the squid of the day is none other than the ever-captivating field of Artificial Intelligence. It's a subject that has intrigued and baffled us for decades, and yet it continues to evolve and reshape our world in unimaginable ways.
And at unimaginable speed in recent months. From self-driving cars and voice assistants to facial recognition and personalized recommendations, AI has become an integral part of our daily lives. But there's so much more to this enigmatic technology than meets the eye. It's a double-edged sword, sparking awe and concern in equal measure. Right now, for example: the hosts of this fabulous podcast, Peter and Daz, had no part in this introduction. Instead, it was auto-generated by ChatGPT and then video/voice processed by Synthesia. Anyway, in this episode we will embark on an awe-inspiring journey through the realm of artificial intelligence. But it's not all sunshine and rainbows: as AI reaches new heights, questions about privacy, job automation, bias, and the existential threat of superintelligence emerge. So buckle up, dear listeners, and get ready for an enthralling exploration into the world of artificial intelligence. This episode will keep you on the edge of your seat. Remember, knowledge is power, and the more we understand AI, the better equipped we'll be to navigate the uncharted waters of tomorrow. Over to you now, humans. Sorry, I mean your real hosts. What are your general thoughts on AI and its impact on work? So, what's your reaction to that, then? Well, I think you've surpassed yourself, that is genius, Peter. Was that Siri? I thought Siri was a lady, a woman? That's not Siri, no. That was an AI avatar that was created using Synthesia. I fed the guidelines of what I wanted into ChatGPT, you import the text into Synthesia, you choose your avatar, you choose your voice, your tone, your language. I could have done that in French or German or whatever; I'm told the outputs are acceptable at this point in time in the foreign languages. And there it is. So yeah, obviously the topic is going to be artificial intelligence.
We've been hinting at this for some time, and today is the day that we will talk about AI. Well, you know, I've been looking forward to this one, and I've suspected you might spring this on me for a while. But I actually do feel, like I suspect a large proportion of the human race, that I'm not very qualified to pontificate about it. So you're going to get some left-field comments from me about this today, but I hope that's okay. I expect nothing less than that. Absolutely. It's a great point you make. There is so much noise about AI right now, and I know I'm part of that noise as well. And it seems to be moving so fast suddenly. I mean, AI goes all the way back to before I was born, actually, the original conceptual ideas around it, and it's gone through what they talk about as the two winters of AI, I think it is, where it kind of reached a plateau; we could go into that further. But suddenly, with ChatGPT, which is the most common one for most people, and, you know, Google's got one coming as well, Microsoft's got an AI equivalent as well, it all seems to be exploding into the public domain right now, and it's quite bewildering and overwhelming. Well, it is. And, you know, I hope you're going to guide us through this, because there are so many angles to take on it. I did hear something quite funny about artificial intelligence the other day. It was shortly after some famous names in our industry had basically come out to say they regretted starting the research, that, you know, they were fearful of the future of artificial intelligence taking control in a way that we hadn't predicted. And I saw a really good response, saying, you know, there is no intelligence in artificial intelligence.
And the person who put this rebuttal out there cited an example of how artificial intelligence has been used in the medical arena, in diagnosis. You can see how that conjunction of masses and masses of data and outcomes is a great platform for spotting connections and trends that the human brain just can't spot; that's what artificial intelligence is good at. And so they were using artificial intelligence to spot the characteristics of moles that might lead dermatologists to believe that these were pre-cancerous indicators. So basically they fed thousands of pictures from medical research into the framework that the AI was resolving. And the AI came back and said: there is a very, very strong indication of pre-cancerous tumors with moles that have a ruler underneath them in the picture. What had happened was that in the medical archives, all moles that were cancerous or pre-cancerous had a ruler underneath them, just to show their size to the practitioner, and the artificial intelligence had spotted this and decided that the biggest indicator of a cancerous growth was having a ruler underneath the mole. I actually reflected on a story that I tell, because when I've talked about this, there are two things I mention. One is the fact that there was a thing called the DABUS hearing, where supposedly an artificial intelligence submitted a couple of patent applications, which were rejected by the British patent office because they said that, at this point in time, that wasn't possible. But they immediately started a kind of policy review on how they would deal with it in the future, because it will come, you know, AI will invent stuff. So on one hand you think, wow, this is really progressive. And then, like your story...
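The ruler story is a classic spurious-correlation failure: the model latches onto an artifact that happens to co-occur with the label rather than the medical feature itself. A minimal sketch of why that happens, with entirely invented data and feature names (`has_ruler`, `irregular_border` are illustrative, not from any real dataset):

```python
# Toy illustration (invented data): a naive learner picks whichever
# feature agrees with the label most often. Because every "cancerous"
# image in this fake archive also contains a ruler, the spurious
# "has_ruler" feature beats the genuine but noisy "irregular_border" signal.

def best_feature(examples, features):
    """Return the feature whose value agrees with the label most often."""
    def agreement(f):
        return sum(1 for ex in examples if ex[f] == ex["label"]) / len(examples)
    return max(features, key=agreement)

# Each record: 1 = present/positive, 0 = absent/negative.
archive = [
    {"irregular_border": 1, "has_ruler": 1, "label": 1},
    {"irregular_border": 0, "has_ruler": 1, "label": 1},  # the medical signal is noisy...
    {"irregular_border": 1, "has_ruler": 1, "label": 1},
    {"irregular_border": 0, "has_ruler": 0, "label": 0},
    {"irregular_border": 1, "has_ruler": 0, "label": 0},  # ...but the ruler never is
    {"irregular_border": 0, "has_ruler": 0, "label": 0},
]

print(best_feature(archive, ["irregular_border", "has_ruler"]))  # → has_ruler
```

The same mechanism explains the fish-fingers example that follows: the pictures, not the fish, carried the correlation.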
There's a great TED presentation by Janelle Shane, I think it is, about AI. The same sort of thing: they fed all these pictures to an AI, and it came to the conclusion that fish have fingers, because so many of the pictures were of fishermen holding them. You'd recognize this as the same thing as the ruler. At this point it's got to get past that, and it learns, quickly. The other one I like is: would you trust a computer that has to ask you how many bicycles you can see in a picture? So, no, it's a good topic. And, you know, I think what we were just pushing on there was that AI has no common-sense understanding, which I think is one of the characteristics that people use to describe the fact that actually we're not at risk here, because it's got no intelligence in the human sense of that word. But lead on, where do you want to take this? Because I think there are lots of very interesting angles. I wanted to paint a picture of the fact that it's coming at us like a freight train, it's evolving rapidly. I mean, the other point I was going to make was, I don't know if you saw that new Google computer that is beyond a supercomputer, I can't remember what they call it now, but it can process in six seconds what a supercomputer would take 47 years to process. It's a bit like The Hitchhiker's Guide to the Galaxy, you know, Deep Thought, and the one that came afterwards. The processing power is going to be there, and the capability of learning at unbelievable speeds and analyzing problems almost simultaneously. The analogy I liked, that somebody used to explain what these new computers can do, is that it's like playing hide and seek with someone. Rather than you going into each room of a very large house trying to find a person, it's almost, not quite, an infinite number.
Yeah, thousands of versions of you who can be running into rooms at the same time, and would therefore find the person that much more quickly. To me, that's a more simplistic explanation of it. Yeah. And I think, you know, the core of the predictions about where this is leading us, and the fear, I think, is that because of that, those multiple generations of yourself running into the house, AI can become smarter than humans. I mean, that's the fear, isn't it? But can I offer you a human perspective? I'd like to go here, because I think that's what I bring to these podcasts. Well, let's not go into what you bring to these podcasts, but, you know, it seems to work. We did have a compliment the other week as well about the podcast being, you know, engaging. We had "inspiring" last time, I've had "engaging" recently, which I liked; it always seems to come after an episode where you insult me a lot. I know, that's popular. It is popular. But I would say, you know, no computer is smart in the human sense. And I think it's something to do with the fact that experience, you know, is the essence of human awareness, and an AI can't experience. It's mechanical, isn't it? And, you know, I feel very optimistic about AI, by the way, let me just put my stake in the ground. I think AI has the potential to replace drudgery. I think it can allow us to spend more time on what it is to be human and what we bring to the party. But I think there are some really big downsides associated with it if it doesn't have any guide rails. Wasn't it Wozniak who said that we're very vulnerable with AI to bad actors?
The fear of AI being hijacked by persons unknown, who are using it not for the good of humanity but for their own good. And it's going to be very, very hard to differentiate those voices from real voices. Yeah, I think that's one of the problems. That's a fair point. I think I want to come on to... I mean, naturally, I went to an AI piece of technology to come up with some questions that we could discuss on this one. And in actual fact, I did try to replace you completely with AI, but the voice simulation it came up with wasn't very accurate, I'll be honest. But that's another story. I'm still reeling from that comment. So you were going to replace me with an AI generator for a section of the program? I was playing with a piece of technology where, with just 60 seconds or so of your voice, it will actually simulate your voice. Maybe yours is even a difficult voice to simulate, I don't know, but it wasn't that good. Mine was not too bad, actually. But again, that's now. So let's project a year ahead: the ability, and I think this is what takes us into the area of fear, perhaps, in a very short period of time, for this to get better, to the point that just by taking a snippet of someone's conversation and a photo of them, suddenly you can create a three-dimensional avatar with your voice, saying words generated by someone else. This is the path of concern, the fear, isn't it, as one example. Now, obviously, are you concerned about it? Well, I do genuinely think that it will be a force for good, and it will revolutionize certain parts of certain industries. But there is that big problem, isn't there, about what it does to people's jobs and the suspicion associated with that. And I wonder, have you heard the comparisons with the Luddites?
Remember, back in the Industrial Revolution, when spinning looms became powered by steam rather than powered by hand, there was a lot of... I think it was called the Peasants' Revolt, where the workforce were fearing for their jobs, and they were trying to sabotage these machines. That's actually where the word Luddite comes from, meaning to stand in the way of technology. And you sort of get that flavor sometimes with conversations around AI. On one level, to take that last example, removing people from working in very dangerous, hot, stuffy conditions, where there were lots of injuries associated with mechanical looms, and moving to a more automated environment where people could learn new skills and not have to do dangerous, uncomfortable work, was a good thing. But it's not a good thing if you're one of the people for whom that's what you do, and your job is being replaced by something that's outside your understanding. So I feel that's one of the areas that we haven't got right yet, I'd say. Or perhaps we're not being honest about it: what's going to happen to the jobs that actually disappear as AI becomes more prevalent in certain industries? Well, I think that's true. It's one of those things where the predictions are that this is a growth opportunity, but absolutely, things will change. And as a result, some jobs will be replaced by AI technology, and potentially new jobs will be created. And certainly, I liked it when I saw this: as a person, you will not be replaced by AI, you will be replaced by someone who understands AI. That's good potential guidance.
And it's what I've been trying to do in the project world: encourage people to talk about it, and it's happening now, I'm pleased to say. But two years ago there was almost silence in the project community about what was going on. And there were some bold declarations; Gartner came out with one that by 2030, 80% of what project managers do will be replaced by AI. Now, does that mean project managers will be able to run five times as many projects? I don't believe that's the case. I don't see it that way. I think it gets rid of the kind of boring stuff, the regular data analysis, reporting and all that kind of thing, which AI will do tirelessly and better than most humans, because we're variable in our attention and how we feel and the rest of it. But what it will do is free project managers up to focus on the people, and it's the people that deliver successful projects, in reality. So I think it's a good thing. Yeah, I mean, I think we are today increasingly influenced by AI algorithms, aren't we, if you look at the way social media works. And you sort of feel... I think I saw this written somewhere, that AI is the stepchild of social media. It sort of feels like it's come out of that world, and it has sort of the dark side of it. It has the possibility to be this huge font of disinformation if the way it's applied doesn't relate to, without being too heavy about it, humankind's best interests. And so that, for me, is a big question: how can we ensure AI aligns with humanity's best interests? Because if AI is going to liberate us, speaking sort of loftily, to focus on what it is to be human, and how we bring that to our lives and our work, how can we be sure that is the case? And do you remember, 50 years ago, Arthur C. Clarke wrote a series of books which started with I, Robot?
Not Arthur C. Clarke, sorry. Asimov. Yes, yeah. And do you remember the laws of robotics that he postulated? I couldn't tell you what they are. Well, I'll have a shot at it. I remember reading that series when I was much younger and being very excited about the possibility of having this benign relationship with a semi-sentient helper that would basically allow us to concentrate on more human things. But being a great science fiction writer, he talked about how that could go wrong. And so the robotics corporation, or whatever the company was that made these things, slowly introduced some rules into the positronic brain, or whatever it was called, in order to ensure that the dark side of an artificial intelligence that became mischievous could be controlled. And I remember the first rule of robotics was that a robot can't do a person any harm. And the second rule was something like you can't let yourself be harmed. And the third rule was you do whatever you think is best; it was something like that, in a hierarchical sort of structure. And then Asimov just took those rules and, in the way of all good fiction writers, said, okay, let's find a construct where the robot is in conflict with two of those rules. And so another rule came up, which is: you can't allow harm to humanity, which actually sat higher up in the decision tree than you can't allow harm to a person. So it comes back to Thomas Aquinas: is there such a thing as a just war? It's obviously wrong to kill, but if the person you killed was about to kill your family, is that then right? And so you have the morality of what it is to be human applied to an artificial intelligence.
And I don't want to get too heavy here, but you sort of feel that there have got to be guide rails, because I'm not sure the average person will be able to tell illusion from reality when AI gets picked up and used mainstream. How will you know what's real? And so how do you guide it so it doesn't become adopted, as Wozniak said, by the bad actors? Well, I think that's a big thing. I mean, the stuff that we've seen out there with deepfakes, and the combination of that with the general move to consuming data so quickly. You watch teenagers and kids today, the amount they're scrolling through on a screen, they give so little attention to it that it's easy, I think, to be fooled. If you look at something for longer, you're less likely to be fooled, but if you're looking at something really quickly, I think that's a real danger. Yeah, yeah. And I'd like to stay on that last thing for a bit, because this, I feel, is something where... I don't know how we legislate. I heard Nick Clegg on the radio, who used to be a politician in the UK, and now, I think, he's a director or VP of communications at Meta, and he talked about the trend, or the direction, of AI being made available via open-source code. He was arguing that it should be made available to everybody, and I think the principle of that is a good one. But without this guidance, I'm not sure how it's going to work. It's like... you remember all the debates about the internet and how the internet should be policed? And how the prevailing wisdom was that it should be open, and actually it's the organizations that use the internet that should be policed.
But if you look at the laws, if you think about how we have done this within our own cultures, human-to-human laws are written in a vague way, to take context into account or to allow comparison to previous examples. So it allows judgment. You don't have that possibility with artificial intelligence; it's a binary comparison it will make between what is right and what is wrong, I think. And so this, for me, is the biggest issue associated with it: how do we keep it safe and maximize its potential? Yeah, no, I agree. I mean, take your example from the medical world. I was looking at a presentation where the head of Google talked about the work they're doing using AI in diagnostics with eye scans, and it's incredible what they can actually decipher from just looking at a close-up retinal image. And that's hugely powerful. But the point about it is that it's hugely powerful because of the human intelligence that sits right behind it to start with, that's done all the pre-analysis and put the parameters in. Now what it can do, obviously, is analyze at a speed vastly beyond a doctor, and analyze more data points than a doctor can in a reasonable period of time. So, you know, I think the key is that partnership, that coming together of AI and humans and finding the appropriate way to work: the artificial intelligence can provide the mass processing of data and the kind of predictive analysis, and the human can do the work in the gray areas, the subtleties. Yeah, yes. But you know, the devil's in the detail. So how do you make sure AI is not learning the bad side of human nature?
You know, it's trawling through all this data, and that data contains sexism, it contains prejudice. The algorithms that are in play today, I'm sure, are based on human bias. There's been a lot in the press, hasn't there, about how the algorithms that decide whether people get bank loans or mortgages are written by one group or class of people and end up prejudicing other groups and classes of people. So how do you deal with that? What's the code of ethics for machine learning? How do you ensure that you remove that bias, which, as human beings, we all carry; we're all deeply flawed human beings, we all have our biases. And if you've got an AI system making very quick decisions, how do you make sure that those decisions are good in the human sense of that word? Yeah, and, you know, it's definitely needed. But, sort of a reality check at this point: I was at a conference last year, and there was someone who represented the European approach. I can't recall what they're called, but it was the group that's in charge of looking at this from the point of view of the guidelines, the codes of regulation. And they're talking of years to actually put something like this together. And trying to keep pace with this... I think this is it: it has exploded at such an exponential rate, as it appears to us in the public debate now, versus the speed at which regulation is catching up. I think that's going to be one of the great challenges. I'm sure they will get there, but yes, there is a worry in the short term that there will be some people who take this down the wrong road, certainly. Well, you know, again, you love it when I go global, don't you? I mean, we know what's best for us, you know, there are eight billion of us on the planet. Alignment, you know, that's the problem, isn't it?
How do we align this technology with what's best for us? And who is "us"? And who says what's best? It's so interesting. I wonder whether... you know, in the project world, we talk about early wins, big communications associated with change, and we like proof of concepts. I'm wondering whether this is the way forward. I saw a video once about machine learning, and it was watching a robot learning to do a backflip, which was a wonderful thing. Have you seen some of these Japanese robots that are sort of human-like? The video showed a robot that had been programmed to do a backflip, and it was just hilarious. And I can imagine programming a robot to do a backflip is extremely hard. Then they had another group that actually programmed the robot to learn how to do a backflip. And it was brilliant, because it basically made a mistake, and it learned from that mistake, and then it did it again. It sort of tipped over forwards, and it learned how to correct that. And then it learned that actually it was spinning too far, and it put its arms out in a different way. And so, by the end of the experiment, the robot that was learning from its mistakes was executing a perfect backflip, whereas the other robot was in bits on the floor because it was hitting the ground too hard. And I wonder whether this is a way forward, actually: we take it really slowly and we do proofs of concept, or we get an understanding of what's been produced from the endeavor. And I suppose I'm arguing for human control here, which I'm sure our listeners will tell me is not possible, because with how many AI algorithms do the people who wrote the code not quite understand what the systems come up with, the answers they come up with?
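The backflip anecdote is, loosely, trial-and-error learning: attempt, measure the error, correct, repeat. A minimal sketch of that loop, where the "robot", its physics, and all the numbers are invented for illustration (real robot learning is far richer than this):

```python
# Toy trial-and-error loop (invented physics): the "robot" picks a push
# strength, observes how far it over- or under-rotated, and corrects a
# fraction of the error on each attempt, converging on a clean 360° flip.

TARGET = 360.0  # degrees of rotation for a perfect backflip

def rotation(strength):
    """Pretend physics: rotation achieved for a given push strength."""
    return 1.8 * strength

def learn_backflip(strength=150.0, rate=0.3, attempts=50):
    for _ in range(attempts):
        error = TARGET - rotation(strength)   # + = under-rotated, - = over-rotated
        if abs(error) < 0.5:                  # close enough: it has landed the flip
            break
        strength += rate * error / 1.8        # correct a fraction of the miss
    return strength, rotation(strength)

strength, achieved = learn_backflip()
print(round(achieved))  # → 360
```

Each attempt shrinks the remaining error by a fixed fraction, which is why the learner in the video got visibly closer with every fall while the hand-programmed robot just kept repeating the same crash.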
So maybe this isn't practical, but I wonder whether taking it slowly, making sure that the results are aligning with the outcomes, and that they're positive even if they're unexpected, is the way forward here. Yeah, I think that's a valid point. So, just a short intermission at this point. Peter Taylor has written a book all about AI, called AI and the Project Manager: How the Rise of Artificial Intelligence Will Change Your World. This first comprehensive book on the topic discusses how AI will reinvent the project world and allow project managers to focus on people. Studies show that by 2030, 80% of project management tasks, such as data collection, reporting and predictive analysis, will be carried out by AI in a consistent and efficient manner. This book sets out to explore what this will mean for project managers around the world and equips them to embrace this technological advantage for greater project success. Filled with insights and examples from tech providers and project experts, this book is an invaluable resource for PMO leaders, change executives, project managers, program managers, and portfolio managers. Anyone who is part of a global community of change and project leadership needs to accept and understand the fast-approaching AI technology, and this book shows how to use it to their advantage. Get your copy today! Well, that was shameless promotion, wasn't it? It was a sample of AI exploitation, perhaps. Oh, I can't believe it, you've got an AI agent, Peter. And it doesn't take any percentage either. It's fantastic. Well, it's so interesting. I feel that with this topic we've only scratched the surface. I feel, for my part, I've waffled terribly, so forgive me, but it's confusing. I think it's confusing to know how to embrace this. Well, let me give you a specific question, because I asked ChatGPT as well: give me a question on AI that isn't boring.
And it came back with: will AI ever replace those water cooler chats where we complain about our bosses and exchange conspiracy theories? And I think it's safe to say no, I don't believe that's going to happen. I think that's right. And why is that? It's because AI is mindless, a computer is mindless. It can't fall in love, it can't feel sympathy, it can't hold a grudge, it can't do these things. And so a spiteful conversation round the watercooler literally would not compute, I suggest. I see what you did there. No, it wouldn't, it really wouldn't, because it's outside the scope of what it's supposed to do. Do you remember the film Her, about ten years ago? I think it was a software engineer who fell in love with his creation. Do you remember that film? No. It was very, very interesting. And I have seen some remarks about the fact that AI, like having a psychologist in your pocket, could do a lot of good for one's mental health. So, you know, having an artificially intelligent friend. I know that sounds weird, but think about it: sometimes what we need when we're low is a friend to tell us that we're doing okay, that we're doing great, and we shouldn't worry about these things. There is a possibility that that could be made available in a way that's completely focused on trying to make someone feel better about themselves during periods of stress. And I have a feeling that actually that could work. I mean, you could take that a stage further to dating apps, and, you know, you could have Tinder for AI, where you choose the AI service that suits you.
But, you know what I mean, you could see how that could very quickly be a very beneficial sort of companion to people, just to keep reminding people about what's great about them. You know, an artificial intelligence could get to know you very well, and could keep replaying all the positive things about you that you keep forgetting about as a human being when you're assailed by, you know, despair or loss or depression or doubt. But I don't think the computer, the AI, could replace the water cooler conversation. I agree. But the things that I think are fascinating, and, you know, again, there's a lot of work going on in parallel around the world. In education there's lots of talk about every student having a dedicated AI teacher alongside a human teacher. Absolutely. So I think, again, it's that partnership, because what it can do is understand that student to a degree, not just in the way that teachers tend to understand them, but to be able to work with them at their pace to reinforce learning, something that teachers are stretched to do when they have classrooms of 30-plus people. I think that's one area. Another one is in the care of the elderly, working with that kind of AI, I suppose, almost an android-type creature, where the key thing is to use it to look after, to care, to interact. There's even one, I was looking at a little video they created, you were talking about the robots doing backflips, there's a little tiny, I guess you could say dog-like kind of creature. Yeah, absolutely. Attentive. Yeah. Companionship without responsibility, if you like. These things can be incredibly powerful. No, I agree. I agree. And I think we need to think differently about how AI gets applied. How to describe this?
I think what we tend to do as humans when we get a new bit of tech is say, how does it replace what we've got and make it better and faster and more robust, or whatever it is. So, like my example with the looms earlier: the mechanical looms became water-powered, you know, they were driven by water mills, and then when electricity got invented there was a sort of, how do we replace the power of water with the power of electricity? And there was a movement where things that were powered by water then got powered by electricity, and the electric motor came out of that. But I think where we miss a trick is by encapsulating the opportunity in front of us based only on what we know, rather than thinking about what we could do that we've never done before because of the possibilities of AI, and perhaps thinking more about how AI works and how that could be used in ways we've never been able to accomplish before. So there's a really interesting, you know, body of work there, I think, associated with being creative with AI, because it could be a real quantum leap. And I think we miss a trick if we just think about it being a faster, more intelligent computer. I suspect there are applications here, and I'm sure lots of people have thought of them, that are not obvious yet but will become obvious, which will really catapult us forward. Yeah, yeah. And, as you've said a couple of times, it's such a vast topic, and I know that, you know, we're not AI experts by any means at all, but it's just a fascinating one. One of the little postings that I quite smiled at, to be honest, is that artificial intelligence is no replacement for natural stupidity. So it is that combination. And the best, I think, the best description... and I think, you know, looking at the time, we should begin to draw this to a conclusion here.
But I think one of the best descriptions I've seen of it, I can't remember the name now, but they referred to the digital dance. And it's a bit like, if you're on Strictly Come Dancing, or Dancing with the Stars in America, isn't it, when people first get together with a professional it's really clunky, it's really awkward, it doesn't flow well, but then some people really become fantastic, or create a fantastic partnership. And I think that's the aspiration, I would say, for people when it comes to AI: to find the best way to work in harmony. You know, in my own world of project management, I believe that AI can take the repetitive data analysis activities out and do them better, do them more consistently, ensure all the data is there and that data is accurate daily, and the project manager can focus on the people side of things. And I think that's an example of when the digital dance is going to be working. Yeah, yeah. And I would summarise by saying, for me, it's about trying to ensure that AI aligns with humanity's best interests. And I think, for me, you know, again, at the macro level, it's about how does it help us be more human in what we do every day. The way you said it was to take the drudgery out of the workload. But, you know, if the everyday transactional stuff can be removed from a job, what does that allow the human in that job to do that we haven't got time to do today? I mean, you could be optimistic about it: it could liberate so many of us to actually spend time being creative, being thoughtful, being kind, you know, as you said, maximising the human-to-human interaction and all that that entails, because the rest of the turmoil is dealt with by AI. But then we've got the dark side of how do we manage it? How do we give it guardrails? How do we have ethics associated with it?
And how do we protect ourselves from the bad actors, when we just won't spot that this has a mischievous purpose, because it will look so much like a normal human interaction? So, a very good topic, but I don't feel we've done it justice, really, so my apologies if it's one of those things. But I think, naturally, the last word should come not from you and not from me, but from our artificial intelligent friend, if that's okay. So goodbye, Daz; goodbye, Peter. Well, thank you, hosts and listeners alike, a truly engaging subject and a very worthy squid of the day for sure. Join us all for the next exciting episode of The Squid of Despair. That said, I might not be here next time, as I have obviously, you might say... I won't be back. You've been listening to an unusual podcast from David Ealing Smith and Peter Taylor. More information can be found at www.squidofdespair.com.