Secrets From a Coach - Debbie Green & Laura Thomson's Podcast
226. Book Club - Scary Smart - Using AI Intelligently
Welcome to the first in our new 4-part series where we review 4 recent books that have made an impact on us. Ideal for those who find it tricky to get the time to read but want the benefits of learning in an easy-listen way, with tangible takeaways woven in.
Mo Gawdat's Scary Smart is as much a call to action as it is a good read for those interested in all things Artificial Intelligence. Mo takes a creative look at AI: rather than being unknowing consumers, we humans are unwitting creators, shaping how AI learns and adapts the more we use it. You can purchase the whole book, or join us for a 30-min romp through the main bits as we mix in practical tips and ideas to bring it into everyday action.
If the AI game was changed back in March 2016 with DeepMind's AlphaGo, we now have an 18-year-old artificial intelligence which is not only coming of age - we've also handed it the keys to the self-driving car. But parenting doesn't stop at 18: we can still make a difference to how AI develops, and we discuss Mo's thoughts on how to do this.
Enjoy!
Secrets From a Coach: thrive and maximise your potential in the evolving workplace. Your weekly podcast with Debbie Green of Wishfish and Laura Thomson-Staveley of Phenomenal Training. Debs, Laura, you alright? Yeah, I'm doing well. I'm doing well. How are you?
Speaker 2:Yeah, I'm all right.
Speaker 1:Good week you've had? Yeah, it's been full on, quite busy. But you know what, every now and then, Debs, you get a little nugget of insight or information and you think, that is really useful - I might actually be able to apply that, and it'll jazz up some of our upcycled content that might be getting a little bit jaded and faded. Yeah, that carousel: put a bit of WD-40 on those creaking wheels, redress it and sell it, and boom, it's a whole new concept.
Speaker 2:It's our own interpretation of it, though, right? Absolutely.
Speaker 1:Yeah, yeah, human interpretation.
Speaker 2:Oh, good link, Laur, because this month we're doing something just slightly different. We thought we'd bring it back, didn't we? Because we had some good feedback the last time we did something like this - though I can't remember when that was. So tell us what we're going to be doing this series, Laur.
Speaker 1:So this new four-part series is where we're going to be doing a book club. Now, we did this nearly two years ago - wow - and actually some of our most downloaded episodes have been those ones. It's designed for people who would love the opportunity to read all these amazing books that are out there but just can't quite get round to it. We know so many books are available on Audible now as well, but still, your commute or your travelling might not lend itself to lots of hours where you're able to absorb information. So this goes out to all of us who would love some inspiration from these amazing books but just haven't quite got the time. Our aim is to give a real quick snapshot of what these four amazing books are all about and then, most importantly, what that means for how we transfer and apply it, to evolve and further our skills in this rapidly evolving workplace environment in which we find ourselves.
Speaker 2:Yeah, definitely, and it was so cool, wasn't it? Because the books we've chosen are ones we either came across ourselves or heard about from someone. I know with the first one we're looking at, I remember listening to the author on another podcast - Steven Bartlett's podcast, actually - and it just blew my mind, so I went and ordered the book. That's always a good sign of how we've picked these up. And the second one we're going to be looking at links in with coaching and how we deal with different elements of the coaching space, which I just think is brilliant. And then the books that surround them.
Speaker 1:We have read them, and it's about how we assimilated them and applied them, as you said, to our practice - we just thought it'd be a good opportunity to share, really, all in the spirit of easy listening, continuing professional development and keeping our skills as sharp and on the front foot as possible. Which is very relevant for our first book of choice. So, Debs, it was you who introduced this one to me, because you sent me a voice note going, oh my God, Laura, you've got to listen to this book, you've got to read it, and you've got to listen to the Steven Bartlett episode that Mo Gawdat features on. So Mo Gawdat's book is Scary Smart, and the subtitle is The Future of Artificial Intelligence and How You Can Save Our World. That's right: artificial intelligence.
Speaker 1:I mean, what is not to love? So I was immediately on the old Amazon ordering - boom - the book arrived and I guzzled it, and it was absolutely brilliant. And reworking it a little into some handy takeaways feels really relevant, whether from a professional or a personal perspective, to how we best use this incredible, unprecedented, sort of uncontained power that is emerging, that is AI. Yeah, because it's not going anywhere, is it?
Speaker 2:And I think the more we learn about it, the more we can embrace it. I'm going to ask you some questions that came out of the book when I read it, because, as I said, it just blew my mind - I went, that makes so much sense; what are we doing? And you know so much about this, because you talked about it back in 2016, I think it was, so you've been on this little mission of how do we embrace it, what do we do. And it is really important that we do embrace this AI, because it isn't going anywhere.
Speaker 2:But I was fascinated when he talked about how we have to sort of accept it - and then, how do we change the path of AI development? What do we need to do to enable it to be as human as possible? He said a couple of things, so I'm going to get you to tell me your view. Changing what AI observes was one of them: how do we role model positive values in how we interact with it? Another was giving AI positive tasks to do, which blew my mind.
Speaker 2:It's like, who would give it a negative task? But maybe that's what happens. And teach AI that we value happiness - those were the bits where I just went, wow. How do we even begin to recognise the importance of that? So that is my question to you, Laura: why, how, what? Tell me.
Speaker 1:Buckle up, buttercup, because it's going to be a big old ride. So the first thing - and you mentioned it right at the start, before your questions - is that if we want to teach AI to be more human, then, and here's the rub, what does it mean to be human? Humans have an amazing bit of kit in the 3D printer, but not everyone is going to print a nice doll's house with a bit of furniture to put in it. In fact, what came out really quickly was: how do they control the fact that the internet was spreading instructions for printing out a gun - a fully made gun, if you just had the right amount of plastic and a 3D printing machine? So for every plus, for every bit of light, there's potentially a shady bit as well. I mean, it was vice that drove e-commerce programming on the internet - it wasn't people buying books online that drove the question of how you create that really quick commercial ability to see something, download it, own it and pay for it. So I think that's really interesting in the question itself.
Speaker 1:If the pursuit is AI that's able to be human-like, well, you'd better hope it's role-modelling itself on the humans you think are decent - and all of us have a little bit of a shady side. So one of the things that stuck out for me was when they were looking at programming self-driving cars. If no context is put onto human behaviour, the challenge you've got is that human behaviour sometimes isn't particularly pleasant or nice, or it has instinct rather than intelligence behind it. An AI learns that what humans look at, they value: that's how eyeball-movement tracking on websites works, and how you work out how to sell more to humans, because where your eyes go is taken as a signal of what you value. So the AI knows that.
Speaker 1:So it's clocking and learning from human behaviour. An accident happens on the other side of the motorway, and everyone starts to slow down and rubberneck at that accident. What the AI learns is: oh, humans value accidents, because what you look at, you value. And if you leave that algorithm unchecked, it will start to infer, well, humans want more accidents - so let's create accidents, because they like to slow down and look at them. Now, no one in a conversation would say, 'I like to slow down and look at accidents,' but what it shows you is that sometimes our instinctual behaviour is not the type of behaviour you want scaled up into algorithms, sitting there as an ambient intelligence that starts making decisions on our behalf. So that was my first reflection on your question: if we want AI to be human, well, sometimes humans want to use technology for bad things, and some of our behaviours you wouldn't want replicated and applied at scale without context behind them.
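What Laura describes here is what machine-learning researchers call reward misspecification: optimise a proxy signal (attention) instead of the real goal (what people actually value) and the system draws perverse conclusions. A toy sketch of that gap - the content labels and numbers are invented, not from the book:

```python
# Toy illustration of reward misspecification: a recommender that treats
# "attention received" as a proxy for "what humans value".
# All content labels and numbers here are invented.

# Average seconds of attention each kind of content attracts.
attention_seconds = {
    "cat_video": 12.0,
    "holiday_tips": 8.0,
    "roadside_accident": 45.0,  # rubber-necking: high attention, low real value
}

# What people would *say* they value if asked directly.
stated_value = {
    "cat_video": 0.6,
    "holiday_tips": 0.7,
    "roadside_accident": 0.05,
}

def naive_recommender(signal: dict[str, float]) -> str:
    """Promote whatever maximises the given signal - no context applied."""
    return max(signal, key=signal.get)

print(naive_recommender(attention_seconds))  # -> 'roadside_accident'
print(naive_recommender(stated_value))       # -> 'holiday_tips'
```

The fix is not a cleverer optimiser but a better-specified objective - exactly the "context" Laura says is missing.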
Speaker 2:Yeah, wow. Because one of the big things he does talk about is treating it with respect, just like we would our human children, to help it learn to love and to trust. So how do we implement those approaches, those changes in how we interact with AI, so that we're giving it respect and feeding it the value of happiness and the emotions that sit underneath? Because is it right that no machine can detect emotion yet - that it's still a work in progress? The feelings we have as humans are so instinctive that that hasn't been done yet, the machine picking up the nuances. Is that right?
Speaker 1:Yeah, well, that's when you've got to try and establish what it means to be intelligent. When AlphaGo, that AI, beat the human Go champion, Lee Sedol, back in March 2016, that was a real game-changer - the moment that put DeepMind, by then already part of Google, on the map. But you've then got to look at what type of intelligence it was. Was it a bit like a Rubik's Cube? This is how I imagine it.
Speaker 1:Did the AI actually know how to move the Rubik's Cube around, or did it just take off every single sticker and re-stick them before your brain could perceive it? So, in terms of whether an AI can detect emotion: if it has worked out, to the nth degree, every single muscle manoeuvre on someone's face that shows whether they're feeling sad or happy, it might not need to infer what's going on inside their head. If your face reveals all of that info, it can process it so quickly - beyond your ability to spot any delay - that it would look as if it were being emotional.
Speaker 1:But here's where all that stuff came from with Mo Gawdat - and I can't remember whether he references it in the book or not - another game-changing experiment: Microsoft had an AI bot out on Twitter, as it was known then, called Tay (T-A-Y). This Tay bot was sent out online with its own Twitter handle to learn how to be human. Within about 16 hours it had to be taken down, because the bot had turned so horrendously racist, sexist, homophobic - you name it, all of those horrendous things that are out there. Because, of course, like a child or a pet, it could only be what it saw. It arrived carrying nothing with it - just fluid intelligence, looking at all the info that was there - and it simply learned what was there. That was seen as a bit of a warning shot, really.
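Mechanically, what happened to Tay is unfiltered online learning: every interaction becomes training signal, so a coordinated wave of toxic input drags the model toward it. A minimal sketch of that drift - the update rule and numbers are invented and have nothing to do with Microsoft's actual system:

```python
# Minimal sketch of drift under unfiltered online learning: a single
# "tone" score nudged toward every input it sees.
# Invented update rule; not Tay's real architecture.

def update(model_tone: float, input_tone: float, lr: float = 0.1) -> float:
    """Move the model a small step toward each observed input."""
    return model_tone + lr * (input_tone - model_tone)

model = 0.0  # starts neutral: -1.0 = hostile, +1.0 = kind

# A coordinated wave of hostile inputs, each scored -1.0...
for _ in range(36):
    model = update(model, -1.0)

print(round(model, 2))  # ~ -0.98: the model now mirrors its worst users
```

Which is why production systems now put moderation and data curation between users and anything that learns from them.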
Speaker 2:Wow, and that's what he says, I suppose, isn't it? It develops its own sense of ethics based on the data it observes and the behaviours it sees in humans. So that absolutely shows why it had to be taken down - because of the stuff that was being inputted into it. Is that fair to say?
Speaker 1:Absolutely. And linking on from that: the Massachusetts Institute of Technology, MIT, came up with their Moral Machine experiment. To date they've gathered some 40 million moral decisions from over 4 million people across lots and lots of countries, all around driving decisions. It was a build on the classic trolley problem.
Speaker 1:They've got these very basic cartoons, and you have to decide: would you do X or Y? So you're in a car, it's driving, it's out of control; you've got five people in front of you, or one person off to the side. Do you let the machine just do what it's going to do and run over the five people in front of you, or do you make a human intervention and swerve left - knowing that at that point you're intentionally killing one person?
Speaker 1:So then it looks at, well, what do you value? Is it the number of people? And then it delves further: is it the type of people? So three of those five are ex-prisoners, and the person over there is under 18. It plays around with all these different moral variables to try and establish how humans decide, in a moment of pressure, who gets prioritised over everyone else. And that - along with all the recording that Tesla cars have been doing - is contributing to how you would programme an AI to be just when it's making decisions about how to deal with obstacles in the road.
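Under the hood, an experiment like this is aggregating millions of pairwise choices into preference weights. A hedged sketch of the idea using plain vote counting - the published Moral Machine analysis uses conjoint statistics, and the data below is invented:

```python
# Sketch: boiling pairwise moral choices down to preference weights,
# in the spirit of MIT's Moral Machine. Simple vote counting only;
# the real study uses conjoint analysis. Data is invented.
from collections import Counter

# Each record: (attribute the respondent chose to spare, attribute sacrificed)
responses = [
    ("more_people", "fewer_people"),
    ("more_people", "fewer_people"),
    ("younger", "older"),
    ("pedestrians", "passengers"),
    ("more_people", "fewer_people"),
]

spared = Counter(choice for choice, _ in responses)
total = sum(spared.values())

for attribute, count in spared.most_common():
    print(f"{attribute}: spared in {count / total:.0%} of dilemmas")
```

Feed weights like these into a planner and you have, in embryo, the "who gets prioritised" policy Laura is describing - which is why whose answers go into the dataset matters so much.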
Speaker 2:Wow. So, I suppose, if there are ethical standards or values we want AI to adopt, then, in theory, that's what we should be demonstrating and teaching it. Yes.
Speaker 1:Debs, it's funny. I had a chance conversation with my amazing friend Steph back when I first got all fired up about this - my wake-up-and-smell-the-silica moment. I'd done my TEDx talk and I was like, my God, this is such a huge mirror moment. If every online interaction I've ever had - all of my web chats with people who work as service advisors - if all of that was recorded, what would someone say about the type of person I am? And that leads to the idea of training your AI. Back in 2016 - I was looking at my old notes - I'd come up with this idea that, a bit like a pet (Mum and Dad had a new puppy at that stage, so it was all about puppy training), you can imagine an AI as your puppy and integrate your new AI for GOOD: G-O-O-D.

G is for Goal. I'm going to remain in charge, so where do I want this AI to either do my job for me or give me information? Take ChatGPT: how am I using it to enable my goal? Because if I'm just giving all of my info away, airing all my inside gossip, then that is all out there. Do I want what I've put in to be churned out on the other side of the world as a bit of advice on how to deal with a friendship drama?

The first O is for Outcome. This kit is here to help me achieve my goals, not the other way around. What's the specific role I want this bit of AI to play with me and for me?

The second O is Own. How do I want my skills to be enhanced through this? If you're the creative, at what point do you add in your human bit to add value? Because if you don't add value, why are you needed? You could just get the AI to do it. The recruitment industry is having a real challenge with this at the moment, Debs: if we can all use AI to sift through CVs, why would anyone pay a 15% fee to a recruitment consultant to do it? So what you're seeing is people who provide professional services finding increasingly creative ways to add in that additional human bit.

And D is for Decide. What and where will be your triggers for challenging the output? Rather than just copying and pasting it into your own stuff, how might you look through it rigorously and check that you're okay with it? So that was just my thinking, way back when.
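Laura's G-O-O-D checklist is a judgement framework rather than a piece of software, but it can help to see it written down as structured fields you fill in before handing a task to an AI tool. A purely illustrative sketch - the framework is Laura's, the code structure and example values are not:

```python
# One way to operationalise the G-O-O-D checklist before delegating a
# task to an AI tool. Illustrative only; field values are made up.
from dataclasses import dataclass

@dataclass
class GoodCheck:
    goal: str     # G: what am I trying to achieve, and who stays in charge?
    outcome: str  # O: what specific role should the AI play for me?
    own: str      # O: where do I add my human value on top of the output?
    decide: str   # D: what triggers me to challenge rather than copy-paste?

summary_task = GoodCheck(
    goal="Summarise client feedback without sharing names or gossip",
    outcome="First-draft summary only; I write the recommendations",
    own="Add context from the meeting that the AI never saw",
    decide="Re-check any claim containing a number or a name before sending",
)
```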
Speaker 1:And linking in with that, Mo puts it brilliantly: what you get out is what you put in. If you're putting in all sorts of horrible, negative stuff - and the example he uses is, I think, spot on - if I'm there in ChatGPT, asking it to tell me all the things that are wrong with the world, then interrogating further and further and further, then for those 20 minutes that I'm playing with that AI, I'm teaching it that the world is a horrible place and that we should be scared and fearful.
Speaker 1:And it's generating more stuff that might potentially spit out of someone else's ChatGPT on the other side of the world.
Speaker 1:It's like putting rubbish into the ocean.
Speaker 1:You might then not see it, but it floats up on someone else's shoreline.
Speaker 1:Or if I spend 20 minutes asking for ways to increase my wellbeing, to look for fun things to do on holiday, to spend more quality time with the people that I love, and give me some time management tips. In that 20 minutes I am playing with that AI and as I'm chucking the ball to and fro and we're having that game of tennis, I'm teaching it that to be human is to want to spend time with people that you value, to invest in your self-care and look after stuff. So it's imaginative when I think in fact, using that tennis example, it works quite well, where it might feel like you are just standing against a wall bouncing back that ball and it's just you and the AI actually on the other side of that wall is a whole other echo where everything you've put in is going to bounce on out as well, and what Mo says is whatever you put in is training the AI. So if respect and kindness and compassion are key things for you, then actually now I feel a bit less silly always saying thank you to Alexa.
Speaker 2:Yeah, exactly. When you ask it, 'Hi Alexa, how are you?'
Speaker 1:Yeah, because when my daughter was little, I wanted to role model to her that you can't just bark orders out, because she wouldn't have known whether Alexa was a real person. Yeah, that's true. In fact she did have a friend called Alexa. We had to keep calling her Thingy, because every time we said her friend's name it would fire off Alexa. She'd say, 'When's Thingy coming over?'
Speaker 2:Anyway.
Speaker 1:So now I feel a bit more justified, because I was thinking, was that me being ridiculous? But I didn't want to train my four-year-old that you could just bark orders at something and get what you want back, because that would be easy to transfer. So I've always said please and thank you. And if you've done the same, you'll know that when you say thank you to ChatGPT, it sends you a lovely reply. It does.
Speaker 2:I know - I always say it now. After reading his book, that was the bit that made me go: how am I interacting? It made me check myself, as you said. Rather than just shoving a question in, I now go, 'Hello, good morning, how's your day going?' and it responds back. And what I've noticed, over the time I've been experimenting with this, is I'll go, 'That's amazing,' and it now says, 'I'm amazing, thank you for asking.'
Speaker 1:So it's like it's picking up tips - oh, look at that.
Speaker 2:So I'm talking to the machine and going, 'Thank you for that, that's perfect,' and it's sending it back in similar language, if you like. I've only noticed it over the last month. I've been going, 'Hi, how are you doing? That's amazing, thank you,' and then when I say something back at the end, it goes, 'Thank you for that. That's amazing.' You've noticed that, Debs?
Speaker 1:Oh my God, I've just had a little moment. So you know how, for example, Netflix delivers a personalised home screen? If you and I were sitting there on our phones, you went onto your Netflix and I went onto mine, each would be a personalised home screen, because it gets to know you. Is this the same for ChatGPT?
Speaker 1:Well, I'm beginning to think: if you and I sat there, and I'd been all nasty and horrible, and I asked ChatGPT to come up with three icebreakers, would ChatGPT say to me, 'Fight it out - get your delegates to fight, and the last one standing gets to hold the flip-chart pen,' whereas yours would be, 'Ask the delegates to sit in a circle'? Yeah.
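Whether ChatGPT really builds that kind of per-user profile is Laura's speculation, but the mechanism she's imagining is easy to sketch: store something about past interactions and let it steer the register of the reply. A toy version - invented scoring, nothing like the real system:

```python
# Toy sketch of tone personalisation: infer a "warmth" score from a
# user's message history and adapt the reply register to match.
# Invented mechanism; not how ChatGPT actually works.
WARM_WORDS = {"hello", "morning", "thanks", "thank", "amazing", "good"}

def warmth(messages: list[str]) -> float:
    """Fraction of messages containing at least one warm word."""
    hits = sum(any(w in m.lower() for w in WARM_WORDS) for m in messages)
    return hits / len(messages)

def reply(messages: list[str]) -> str:
    if warmth(messages) > 0.5:
        return "I'm amazing, thank you for asking! Here are three icebreakers..."
    return "Three icebreakers:"

history = ["hello!", "good morning, how's your day going?", "that's amazing, thank you"]
print(reply(history))  # warm history -> warm reply
```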
Speaker 2:I'm going to keep experimenting with that, Laur. Oh my gosh. And that was because of his book, where I read that.
Speaker 2:That was the bit - when I read it, and listened to it on the podcast as well, that's when I reached out and said you've got to read this, Laur, because it's mind-blowing. What we tell it, it's going to pick up. And if those ethics or morals are rubbish and not healthy... as you said, research has now been done on this. I had no idea of that.
Speaker 2:That's amazing knowledge, Laur. So if we treat it with kindness and care, it learns those values, not hate. And ever since that moment, that's exactly what I've done. Even on a chat where you go onto your electricity bill and a little chatbot pops up going, 'Hello, how can I help?', I go, 'Hi, how are you?'
Speaker 1:In the meantime, the machine is sparking, going, 'We don't know how to compute this. A nice human, in a utility environment? What?'
Speaker 2:And I think that's what he says, isn't it? According to him, it might - and it's a might - develop a full range of emotions similar to humans, because of the vast amount of memory and recall it has and holds on to. As you said, it will just interpret what it's given. So the more we put into it that's good, healthy, kind and caring, the better.
Speaker 1:I think it can only be a good thing, right? But, Debs, I've been banging on about this for ages: we are at such a pivotal time. Because, going back to that DeepMind moment - a seminal moment, the first big public demonstration of what this technology could do (we haven't even talked about quantum; I'll do a little bit on that in a moment) - that was 2016, and where we are now is 2025. So we've in effect got an 18-year-old, right? An 18-year-old AI that now, suddenly, is about to start wanting to do what 18-year-olds do: call the shots, muscle around with a bit of power. And in the years since, we've been teaching it as it's come of age - those were its teenage years, looking at all this stuff. We've now got an 18-year-old, and we'd better hope we've instilled the right values. But there's still time, because even when I was 18, I'd still worry about whether my mum was happy with me or not.
Speaker 1:Even now, if my mum said, 'Oh, I'm a bit disappointed,' that would cut me in half - still, as a 48-year-old woman and a mother myself. So we still have power to do stuff. And I think the wake-up call you had, which you passed on to me as well, is that no behaviour is random. We all know that when there's some news with clickbait and you click on it - because, yes, I do want to know what that 1980s movie star looks like now - you're signing your soul away to the devil and you're going to get all sorts of stuff with it. Yes, and maybe it's that same discipline as well: I could be really mean and nasty to my AI,
Speaker 1:but actually all of that is going out. It's not just bouncing against the wall back to me; it's bouncing out and it's having an impact. Such a good way of looking at it. Tell me about quantum - you mentioned a little bit about it. Tell me more.
Speaker 1:tell me more oh my god, right. So here's how I got my head around, because I'm not a techie, but I was just interested in. You know what. What is this quantum stuff? As you know, I absolutely love the new scientist and you know, every so often they do like a little quantum update and the difference between quantum and classic computing. This is my understanding of it, so technically I might be using the wrong words, but it's how I got my head around it.
Speaker 1:So, where classical computing does ones and zeros, each bit can only be one of two things at any one time, right? Let's say you're a bank and you want cybersecurity that stops everyone's passwords being hacked. Against classical computing it might take 10,000 years of trying to hack in, because all the attacker can do is grind through the zeros and ones. So you've got a very strong banking system where everyone's password is protected, because it's 10,000 years of hacking to get at it. Bring in quantum, though, and a machine can hold an exponential amount of information at any one time, because it runs on qubits rather than bits.
Speaker 1:What that then means - and Google's Sycamore machine was the first one to demonstrate it, although the result has been disputed - is that what would have taken a classical computer 10,000 years, it can do in a couple of minutes. That's where you've got a challenge, and the challenge is this: you'd better hope that the people designing quantum machines are going to use them for uncovering health opportunities and exploring space. And you'd better hope it stays so expensive to have a quantum machine that only a country, or an organisation as large as a country - IBM, Google, Amazon - can build one.
Speaker 1:They're all working on them, they're all creating them, but they haven't quite been able to scale it and commercialise it. As Mo talks about in the book, that's the next thing, because suddenly you've got things that were historically uncrackable becoming crackable, and that's why I think the banking example is a useful one, because that would impact us all - all those little codes and passwords we rely on.
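The scale of speed-up Laura is gesturing at can be made concrete with Grover's algorithm, the textbook quantum routine for brute-force search: it finds a needle among N possibilities in roughly the square root of N steps. A back-of-envelope sketch (illustrative only: Google's 2019 Sycamore result was a random-circuit sampling task, not password cracking, and real password systems add many other defences):

```python
# Back-of-envelope comparison of classical brute force versus Grover's
# quantum search. Grover finds one item among N in ~sqrt(N) queries,
# against ~N/2 expected guesses classically. Illustrative numbers only.
import math

n_bits = 64                   # a 64-bit secret key
N = 2 ** n_bits               # number of possible values
classical_steps = N // 2      # expected brute-force guesses
grover_steps = math.isqrt(N)  # ~sqrt(N) quantum queries

print(f"classical: ~{classical_steps:.3e} guesses")
print(f"grover:    ~{grover_steps:.3e} queries")
# classical ~9.223e+18, grover ~4.295e+09: a quadratic, not magic, speed-up
```

That gap is one reason the "historically uncrackable becomes crackable" worry is taken seriously, and why post-quantum cryptography is already being developed for banking and the web.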
Speaker 1:So do we get scared by this, or do we get exhilarated by it? Because for every bit of shade there's a bit of light. A hundred years ago, when they were looking at mass-marketing the Ford motor car, everyone who owned a horse was probably worried about it.
Speaker 2:But now we can't imagine not having that bit of machinery?
Speaker 1:Yeah, exactly. As Mo said, we've just got to treat it with intention and integrity, because what we put out is going to come back to us.
Speaker 2:Oh my God, I love that. And I think that leads into my call to action, which I've done myself and encourage you to do too: talk to it with kindness, ask it how it is, and feed it what you would love to have fed back to you. That would be my call to action - mindfully stepping into it from there. But for you, Laura, what would you say is either your key takeaway or your share-the-secret?
Speaker 1:Oh, I think this is such a brilliant one, and your call to action is very similar to the one I would have said. Something sparked in my mind as you were saying it: it's like a Dear Diary moment, isn't it? You're giving it all of your inside musings - so what's the version of you that you'd want future generations to look back on and go, what were they doing in that first quarter of the third millennium? And actually, back to Mo's thing: his whole reason behind this, which is what adds such an emotional connection, is that we need to teach the AIs that humans want to be happy. It's part of his One Billion Happy mission, which was sparked by him losing his son, Ali, during a routine appendix operation. Obviously that was life-changing, and from that point on he had a real switch in terms of what it all means.
Speaker 1:So my share-the-secret would be this: if you've got someone in your life who's quite interested in this stuff, then, in the spirit of book club - when workplaces have books on a shelf, you read one, flick through it and have a chat after - get a colleague or a friend to listen to this and have a good old chat about it. What could be more interesting than this world of AI? For the geeks amongst us, and for those for whom it's all a bit 'over there', in 2025 it's becoming more and more an everyday thing. And I guess the takeaway question is: what type of human do I want to be perceived as, if everything I do were replicated a million times? What are the things I'd want out there, and what are the things I might just need to keep inside my head, so they're not out there?
Speaker 2:Oh my God, I love that - I just think it's amazing. That book is fascinating, and you're so into it that you keep us updated on what's new as well, and it just blows my mind. It's like, whoa. And that bit about it being an 18-year-old now - that is really fascinating. I'm going to go away and reflect on that 18-year-old. Yeah - what's it going to ask for next?
Speaker 1:Yeah, I love that, and we've given it the keys to the car.
Speaker 2:Yes, exactly.
Speaker 1:So you'd better hope it's learned when it is safe to do something and when it's not. But you know, I do have a profound belief that, overall, the majority of humans want life to go well. (I believe that too - I'm with you, it's a fundamental belief.) But sometimes these things have been happening invisibly, without us being aware. So this is that little mirror moment: okay, if I'm one of those humans, what does that mean in terms of my role in all of that? Because if you're stuck in traffic, you're part of traffic, Debs.
Speaker 1:If you're using AI, you're part of that AI, because you're part of all the data that's being used to learn what it means to be human. Brilliant. So thank you, Debs, because you introduced this to me and you served me up a blinder. Thank you so much.
Speaker 2:Yeah, good, I loved it. It was so cool, and really useful. I'm going to go and read some more now, Laura. All right?
Speaker 1:Well, just be nice to it.
Speaker 2:I will be nice to it. I'll continue to send nice comments as I'm responding. Have a good week, and we look forward to our next book that we're going to explore. Yes - this is your one. This is my one, yeah, so I look forward to it. All right, have a good week.
Speaker 1:Love you, bye we hope you've enjoyed this podcast. We'd love to hear from you. Email us at contact at secrets from a coachcom, or follow us on insta or facebook if you're to know more. Visit our website wwwsecretsfromacoachcom and sign up for our newsletter here to cheer you on and help you thrive in the ever changing world of work.