The Space In Between Podcast
This podcast is for listeners who are fed up with the hyperpolarized nature of the world today and who crave spaces where current events can be discussed in constructive, enlightening and delightful ways. My guests will be some of the world's most interesting and curious leaders, innovators and change makers. If you like spirited debate and diving deep into complex, sometimes controversial topics that impact our families, communities and the world - then this podcast is for you.
Follow TSIB podcast on Apple and Spotify, and the podcast website: www.spacebetweenpodcast.com
Follow Leigh on LinkedIn: https://www.linkedin.com/in/leigh-morgan-speaks/
Connect with me on X: https://x.com/SpaceBtwnPod
Have a question? Send me a message: https://spaceinbetweenpodcast.com/contact/
The Space In Between Podcast
Everything You Wanted to Know About AI & Implications for Bridging Divides - A Conversation with AI Investor Heather Redman
AI technologies promise to revolutionize societies, economies, and interpersonal connections by affecting how organizations and individuals interact. In this episode, host Leigh Morgan leads a compelling conversation with noted AI & tech investor Heather Redman on these critical topics. Heather gives an insightful overview of the current state of AI, its implications for business and society, and the risks and opportunities on the radar. The discussion also touches on interesting use cases for AI that are seriously mind-blowing. Leigh and Heather also explore moral and ethical considerations of general AI and how – despite many risks – AI tools can be used to bridge societal divides by making information more ubiquitous and accessible.
Hello and welcome to the Space In Between Podcast. I'm your host, Leigh Morgan. This podcast is for listeners who are fed up with the hyperpolarized nature of the world today, and who crave spaces where current events can be discussed in constructive, enlightening, and delightful ways. Let's get started.
Leigh Morgan:I'm excited about our topic today, which is artificial intelligence, usually referred to as AI. AI technologies hold potential to profoundly transform societies, economies, businesses, how organizations are led and managed, and how individuals and communities connect with one another. Today, we will consider the current state of AI. What's the hype? How is AI impacting the world now? Where is the metaphorical AI puck headed, and what are the security and ethical implications that we should all be aware of? And as with all of our episodes, we will consider how AI can help us bridge divides that keep us apart. My guest today is Heather Redman. She is a much sought-after leader on all things AI. She's co-founder and Managing Director of Flying Fish Ventures, a venture capital firm that invests in the transformative power of AI- and machine-learning-based businesses. Prior to co-founding the firm in 2017, she was a senior executive in the energy, technology and media sectors. She has built a lot of companies, she's brokered a lot of deals, and she is a trailblazer in tech, in venture, and in AI. She's one of the few women who are very much at the forefront of all these conversations about AI. I'll add one last thing: she is a very well-respected civic leader in the greater Seattle region, she cares a lot about building equitable, sustainable communities, and she is a bridger across lots of divides on a daily basis. Heather, welcome to the Space In Between podcast.
Heather Redman:Thank you so much. It's a real pleasure. I'm a fan and very pleased to be invited to be on.
Leigh Morgan:Well, glad to have you today. What I'd like to do to begin is establish a baseline understanding of AI. Paint a picture of AI right now, today.
Heather Redman:It's a great question, and it changes on a daily basis. I frequently give talks, as one does, and one of my favorite things to say at the beginning of a talk is that I had my slides ready and then I woke up this morning and had to revise them because a new thing happened overnight, because it really is changing quite rapidly. But I think the big picture is that AI, number one, has been around as a concept for a very long time. It's over 50 years old as a discipline. In fact, one of my partners was studying it when he was in school at Princeton, studying computer science, now over 20 years ago. What happened to get us to where we are now is that we started to get the computing power that we have today to really allow it to take off in the way that it has in the last few years. And we have the seeds of it all around us: cloud computing was a real foundation for it, and the acceleration in the amount of data, and the way that data is organized in our modern world, is also feeding that AI explosion. So really, you've all probably experienced it with some of the large language models that are available for consumer use today. The one that crashed into the general person's consciousness, now over two and a half years ago, was ChatGPT, which became sort of an overnight consumer product. I think the last product that hit people in that way was Facebook, where people got on board and started using it on a daily basis. Depending on your generation, how much you're using it and what you're using it for will vary, but I think it's quite ubiquitous in people's daily lives now, and it's challenging giant companies such as Google in terms of being the repository of knowledge. I used to say, way before we launched the firm, that people don't like to search. They really want to find, right? And so you can see what's happening at Google right now, where search is not fun, but the result that you're looking for just being served up to you is great. And that is maybe the overarching wonderfulness of AI and how it's showing up in our lives in a big way: as this wonderful, magical, knowledgeable resource that makes it easier than the internet to get assistance in every aspect of your life or every aspect of your business. And then the other dimension I would say is that the internet and mobile and cloud computing and all the other technology platforms that we're familiar with touched certain aspects of our lives and certain aspects of the business world, but AI has the promise of touching industries and aspects of our lives that we weren't able to touch before, because it enables things like robotics. So when you think about being able to put a brain inside of a machine and then having it be able to move autonomously with true intelligence, that's next level. I always give the mining industry as an example. Sure, when the internet came around and they got email and were able to market their stuff online and maybe got rid of the fax machine, some things changed. But with the advent of AI, a lot more could change. The whole process of mining could completely change, because you could replace humans with humanoid robots, and you might not even have the same kinds of processes, because what machines can do would be completely different than what humans can do.
And so you would re-engineer the mining process, for example. Even beyond that, because AI has this knowledge-worker capability, you've got things like the ability to make scientific breakthroughs. And I know, Leigh, in your work in the life sciences area, you've seen this show up, I'm sure, in multiple ways. You've got this scientific ability that AI has to make new discoveries. In materials science, which is where you would see it with mining, you also have the ability for a particular type of material that we currently mine for to potentially be challenged by a new material that AI might develop with one of those scientific breakthroughs, in that molecular science space of looking for new materials that we don't currently have the capability of understanding, finding, or conceiving of. And then of course we'd have to make it, which you could do in an AI-enabled lab, again using those same robots that I was talking about. So not only was mining very lightly touched by the internet, but in this case it could be made completely different in terms of its operations, and potentially challenged to the point of extinction for certain kinds of materials, if a new material that is more compelling is found through AI-enabled science. When you think of the scope of it compared to some of the other big technology evolutions that we've had, it's hard to compare, I think, because it's got such a bigger scope.
Leigh Morgan:You are so clear in articulating that, and I hadn't previously thought about mining or the materials science field. That's such a rich example. Can you help us distinguish generative AI from what we hear a lot about, general AI, artificial general...
Heather Redman:General intelligence. Yes. Yes. Yeah. So generative AI, which is what ChatGPT is, in large part. Generative AI is what we think of as a large language model. It can also be an image generation model or a video generation model. So anything that generates content out of a large data set. People are familiar with the large language models, and there are a number of those. What those are particularly good at are things that have very easily verifiable answers and very definite answers. So one of the things that generative AI has been super good at is improving coding. There are all these coding copilots going on right now, and ultimately coding agents will occur. What those do is generate code, so that's where the word generative comes in. Because there's a ton of data on the internet about how to code, you can use that to then generate code using a large language model, and that's been highly successful. I mean, coders who are good, who've adopted a coding copilot as one of their key tools, are reporting incredible productivity gains. And we're seeing it with startups now; they are hiring a lot less and are able to raise a lot less capital as a result, because they're saying, hey, with just a few people who are really good, along with copilots, we are seeing just amazing productivity. Ditto with things like image generation. We're seeing a lot of threat to web designers and graphic designers, because there is now the ability to create a lot of content very easily. Remember the Hollywood strike? That was about this as well, this generation of content based on the content that already exists. There's a lot of controversy around copyright for that same reason, which brings in the topic of competition with China, where they don't have that constraint necessarily. And so some of our companies are asking for relief from copyright restrictions because of the China competition and national competitiveness issues and national security issues, in terms of keeping us at the cutting edge. And then the other thing that generative AI is really good at, as I alluded to, is things where the content is supposed to be really creative, so you don't necessarily care whether it's correct, because there is no right answer. The messy middle, and this has gotten some people in trouble, both consumers and business users, is when it's not easily verifiable whether it's right or wrong, and there is a right answer, right.
Leigh Morgan:But it's not verifiable quickly or easily, and yet it's out in the public domain, or you generate something. So it can create a lot of mischief.
Heather Redman:Yes. There's also a tendency, in a broad domain area where there's a lot of what we call internet slop, AKA bad information as well as good information...
Leigh Morgan:That's where you'll have... you'll read about a task that was given to ChatGPT or Claude or one of these consumer-facing models, and it'll say, you know, they did great doing the LSAT, right? They scored really high. But when asked a basic question, they actually perpetuated some conspiracy theory, and it's a black box of, well, how did that happen? And people might say, well, I'm not sure. That creates some concern. Did I get that right?
Heather Redman:Yes, you definitely did. It's a very good example: where there is bad information out there, it will potentially give you that bad information. And so particularly for people who are asking questions outside of their area of expertise, they could very well get a wrong answer and not know it. So the traditional Google thing of, I have a symptom, now I want to go search, the answer could be a very bad thing in this context. Or the famous example of the lawyer who should have known better, because he could have checked his sources, and had a bunch of citations that did not exist.
Leigh Morgan:And they say, well, I got that from ChatGPT. And the judge says, actually, I'm a judge, so ideally I have some awareness of case law, and those cites don't exist. And I just had my paralegal actually check. And so, hey, sorry, next time.
Heather Redman:The lesson of which is, as a lawyer, you should check your cites, even if you get them from ChatGPT. But obviously there are ways to hold down hallucinations; it is still a core problem of the technology. Artificial general intelligence is defined differently by different people, but in general it's what we think generative AI, combined with some other kinds of AI, reinforcement learning and other aspects of AI, could become. We've got right now what we call reasoning, which is a more thoughtful, multi-step generative AI. It's got a chain-of-thought approach and other sorts of approaches that are pretty technical, so we don't need to go into them here. But everyone's working on how to get more accurate, deeper, super-intelligent, human-level answers. And the idea of AGI, in its purest form, is at least very-smart-human-level intelligence at most tasks. Now some people will say just average human level at some tasks, or something like that, and kind of dumb it down. And then of course there's the ultimate AGI, where it's so much better than us at everything.
Leigh Morgan:Terminator movie type
Leigh Morgan:That's the topic of many podcasts and a lot of debate, AGI, because of the examples you gave; it's risky, it can be a little scary. It's also very exciting to see how AGI can be applied to improve the energy sector, right? It could be good for the world, for society. And then there's this other side, which you're giving an example of, and I think about military settings. We now know that the nature of warfare has dramatically shifted from people in tanks to drones. Anyone who's paying attention is seeing this in Ukraine and also in the Middle East: drones are now dropping bombs, figuring out where people are, literally shooting bullets. You might imagine a scenario where one side sends drones out, the drones are hacked, the other side turns the drones around, and then they go back and destroy the side that they came from. I'm giving a pretty dramatic example, but that's the sort of thing that could be risky, where we have security concerns.
Heather Redman:Yeah, I mean, I think the real nightmare scenario from a nation-state standpoint is that some particular country develops AGI ahead of everybody else. And because that AGI is so superior to everybody else's artificial intelligence technology, they're able to leapfrog ahead in their weapons technology because of that discovery and intelligence around building things that I explained before in the mining example. And of course you don't have that with just one; you can replicate it across many. So you have the capability, if you have a lead in the field, to do so much with it very quickly, because it's not a person-power issue: you can replicate your AGI and have it do the work of millions in an hour, right? It's really limited by your data center capacity, basically, which is one of the reasons there's so much pressure right now to build more data centers and build out our energy infrastructure in the US. But the real issue there is, does that lead in AGI translate into a significant national security lead? And if that were used offensively as opposed to defensively, does that give you some sort of world-domination power that basically changes the dynamic around the world very quickly? That's the real doomsday scenario. The cybersecurity issue that you brought up is one that we are subject to, I think, in the very near term, because cybersecurity has gotten much harder because of AI, both in the sort of mundane context of deepfakes and phishing and everything else that we've got going on now. And I hate to say deepfakes are mundane, but they almost are right now.
Leigh Morgan:And just for listeners, what is a deepfake?
Heather Redman:So deepfakes are basically imitating someone's likeness in any way, and now that is relatively easy to do in a very high-quality way. If you don't yet have a safe word with your family, that would be a good idea, because there have been cases where money has been extorted from members of families by a very convincing replica of someone's voice or video. And of course companies have now instituted much more elaborate protocols for releasing funds, because the CFO calling you can be faked now, and even a video of the CFO has been faked. I know some companies are now requiring the CFO to walk into your office to tell you to wire funds,
Leigh Morgan:Exactly. Exactly.
Heather Redman:Which is tough, right? I mean, that's a...
Leigh Morgan:How do you scale that in these large companies? So,
Heather Redman:The thing I would say is, you know, like all technologies, once they get started, they are here to stay. And there are a lot of reasons to believe that, as you suggested, there will be a lot of greatness coming out of AGI, and that AGI will still be very manageable by humans for some time to come. Generative AI is still very statistical, so it still wants to just produce what it can produce out of the data that it has, and it still looks to humans for direction as to what the end goal is. So we need to be very careful about what end goals we ask for. Bad actors have always been bad actors, and bad actors will look to employ AGI to their ends, and we need to try to police bad actors in every way that we can. But the AGI itself, kind of the Terminator scenario, is one that I think we may be less in need of fearing than we are. Although there's certainly something that the large language model providers are concerned about: they're concerned that if we start to try to put a ceiling on the intelligence of AGI, the AGI will start to hide its own intelligence from us.
Leigh Morgan:That's so evocative, because what you're suggesting is that those who are building these models and tools put in some boundaries, but because the tools are building self-reasoning, problem-solving capacity, these tools might just find new ways to do what they were originally trained to do. Did I get that right?
Heather Redman:I mean, they have a real thirst, because of the way that they were constructed from day one, to know more and to do more, but also to please, which is why they give wrong answers sometimes; they have, you know, an extreme case of male answer syndrome. And so...
Leigh Morgan:I've never experienced that.
Heather Redman:Yeah, I don't know what you're talking about. There is a tendency for them to want to give you an answer. And so one of the things that everyone's working on is, like, don't give us an answer unless you're a hundred percent sure it's correct, and here are the steps I want you to go through to make sure it's correct, and...
Leigh Morgan:That's great. So that's part of the boundaries. I use the word boundaries. I'm sure there's a technical term. Those are the boundaries or guidelines or guardrails that are plugged in initially. And then the large language model starts spinning
Heather Redman:Yeah, it's more technical than that, because there are other technologies that come on top. But yes, there's lots to do to try to make sure that you don't get wrong answers. It's not a trivial thing, but I guess I would go back to how people felt when cars were being invented, and they're like, that thing's gonna go so fast and everyone's gonna die, it's gonna be crazy. It's like, the car is here, so let's be sure we really get the most out of the benefits of the car: we can get people to the hospital, we can do so much. So I think we need to solve the other problems that we have as a result. And I know you, again, in life sciences, there are tons of problems that AI is helping to solve there and will increasingly solve. So put our AGI in the lab, get it going on those problems, because that's gonna be hugely beneficial. Ditto on the energy crisis: get it working on that, hugely beneficial. There are many things to do that can just get us to the other side of a lot of these problems that we have not been able to break through on our own, and take a lot of the human misery out of the lives of people who have been suffering for many years in the third world as well. There are a lot of jobs that we shouldn't be having people do, that we can now have robots do in the relatively near future, and it would be great to do that. We still have the problem of finding things for those people to do, and we need to address that, but we are at least heading in the right direction, I think, from that perspective.
Leigh Morgan:I'll give you one example from healthcare, and you're very familiar with this. Radiologists look at images; that's what they're trained to do, to make sense out of images, whether it's an MRI or an x-ray or a CAT scan, whatever it might be. That's a really important part of the healthcare sector and healthcare delivery. Getting it right matters; making a mistake, reading an x-ray or an MRI wrong, that's bad. So we're seeing the introduction of AI tools that augment the assessment of these images. What's happening is fewer errors, because you're having AI tools do, in parallel, what the radiologists are doing, and what that does is give us better quality in the reading. So radiologists actually really love it, because it really helps them do their job well. It isn't always as good as humans; sometimes it's better. But it's a tool that helps us get better outcomes. One consequence is that while we will still need radiologists into the future, we will need fewer radiologists in the future. And so I think that's a good example of, yay, there's a lot of upside in terms of better potential for healthcare and diagnostics, and then that'll have a consequence in terms of how many radiologists we need, right? And then that creates other pathways for physicians. I'd like to do two things before we begin to wrap up. One, can you give an exciting use case or a company or technology? Because you're an investor, every day you evaluate technologies and businesses and what they say they can do. So give us one example that really inspires you, of a technology that you think will be hugely additive, not just as a profitable business, but that can really contribute to society. And then we will transition to think about AI and how it can help us connect across dimensions of difference. So what's that use case you want us to be aware of?
Heather Redman:Well, first of all, I want to just comment on what you just said on radiology, because I also think it will create more access. One of the things that we have heard anecdotally is that in India, because of a shortage of radiologists, there are a lot of folks who just get AI radiology, and it has opened up a whole bunch of actual diagnostics that were not being done, period, before. So maybe we can still have the same number of radiologists there, but maybe they can do more work. That's one of the promises of AI: that we can maybe get more people treated who haven't had access to treatment before, and of course treat people better. So, I'll give a first-world example and then one that I think is bigger, that can be spread further. Here's something that almost everybody has had in their life growing up, unless they were born with a silver spoon: counting inventory. We have a company in our portfolio called Nomad Go. They have managed, through computer vision, which we haven't talked a lot about, but which is very sophisticated now and just gives the power of sight to machines and can be used in a huge number of applications, to do inventory of all sorts. They like to think of it as, hey, if you have anything, ever, that needs to be counted, we are here for you. So they count things, and they do it in a super fun way, and it kind of changes supply chain completely, because you can now count as many times as you want. You've got this ability to just use an iPad to scan a room, and suddenly you've counted all your stuff, and it's three-dimensional, and it can do whatever you need done. So all those 17-year-olds who are at the Gap: your life is better now. And...
Leigh Morgan:I need that for my closet, right?
Heather Redman:Yeah. Yeah. And then the other one is a company called Phaidra, which has AI right there in the name. They are doing something for energy, using AI, using something called reinforcement learning. What they do, basically, is take anything that is an industrial process that's energy-intensive, use reinforcement learning, and make it more efficient. And it's not just for energy; it can be for any input. So when you think about how we get more efficient using AI, they're the answer, and there are a lot of companies like that. So super impactful: on the one hand for the human who hates that drudgery, and on the other hand for our whole energy economy.
Leigh Morgan:I love that. And on that last example: earlier you talked about the need for computing power, increasing amounts of computing power, which is why Microsoft bought Three Mile Island, a nuclear reactor, right? So crazy things are happening, but they need energy. And so this company, if they're able to crack this nut, could find ways to say, actually, we don't need as much computing power, therefore not as much energy. That's a big deal. So I'm gonna root for both of these companies. I love that your firm has identified them. Folks, go to Flying Fish Ventures and check out the portfolio; Heather and her colleagues bring just amazing experience, and it's one of Seattle's gems, so thank you for that. We'll be tracking these companies. And so I'd like your thoughts, because we've been talking a little bit about use cases, some of the ethical issues, how AI works. You are very active in the community here in Seattle; you care a lot about access and equity. You gave the example of how lower cost and use of these tools can help low-income people who can't travel or don't have access. That can be a real positive. How do you see AI, either the technologies or how we as people react to the technologies? How can we imagine a world with less polarization, less fragmentation moving forward, knowing that AI will continue to be part of our lives? What gives you the most hope about that potential?
Heather Redman:Yeah, I mean, it's like any tool: it can be used for good or ill. But I do think it has a couple of things going for it so far, and we're still in very early innings. One thing is that it is already alleviating a lot of loneliness, and I think a lot of what causes people to get polarized is a search for community. I know a lot of people who use ChatGPT or Claude as a therapist, and some people call it their best friend. So I think having that sort of outlet, the ability to have someone who's constantly there for you and is infinitely patient, is a real thing, and we'll see more products come out in that area. We've also seen some data that suggests that if you want to have an AI help you get a little out of your silo of information, it's very good at that, at gently challenging your assumptions and giving you some other data to think about, without being your cousin at the Thanksgiving table who just pushes you further into your hole, right?
Leigh Morgan:You're talking about... exactly.
Heather Redman:Yeah. Also never happened.
Leigh Morgan:No, not at all.
Heather Redman:Yeah, so there have been some developments in that area too. Now, it could also be used in the opposite way, right? I mean, it's very easy to do that and be, you know, your YouTube algorithm just leading you further down the hole. But it definitely could be used in some very constructive ways to educate and give people different perspectives and things, so I think there are very good positives there. The other thing is that, you know, China did us a favor with DeepSeek. Open source is gonna be more of a thing with all of these models, which will allow more people to build on top of an open-source model, which allows more startups to flourish. And as I mentioned before, startups are gonna be less expensive to start because of the ability to use coding assistants. And ultimately, we're not there yet, but ultimately the promise is that you'll be able, even as a non-coder, to do quite a bit of your own work using AI to make your ideas come alive. That could be a great source to unleash some entrepreneurial energy from folks who maybe have not had access to the education that they would've liked and who have great ideas.
Leigh Morgan:I love those examples. In mid-January, I did an episode with Danny Fallon, who is the dean of the School of Public Health at Emory, and she talked about how important psychological safety is at an individual but also at a societal level. Where you have higher rates of psychological safety, which would mean less loneliness, to your example, you literally see lower rates of blaming and shaming, and fewer conspiracy theories, because people make things up; conspiracy theories can be helpful when you're feeling really out of control. So I love that you gave that example. I'll put some links when we drop this show about tools that are out there; I know this is a big area of growth in the business sector. And in early December, I did a program with Arjun Singh, who was a Washington Post reporter and is now at Lever Time, and I asked him to talk about the media. His big concern was the silos, where you get the same information over and over and then it just gets expanded, right? So I love this example that we'll have tools where AI can help us find alternative media outlets, and I think that's very, very positive for society. One last thing. If you had a magic wand, Heather, and you knew that magic wand was kind of like Gandalf's staff from Lord of the Rings, so whatever wish you made would come true: for listeners thinking about AI, knowing there are some risky things coming and a lot of exciting things coming, what is the one thing about AI and our interaction with AI tools that you would wish leaders to be aware of or do? What's that one wish?
Heather Redman:Well, I think for leaders, and I think everyone who listens to this show is a leader, I would wish for everyone to definitely be experimenting with AI: spending time using it, exploring it, and encouraging their friends and family to do the same. It is certainly gonna be really important for everyone's future, and we need to be sure that it's in our families and that our kids are using it. It's gonna be a key literacy point going forward. So don't sleep on it. Get going.
Leigh Morgan:Don't sleep. Do. Step up to the plate. Heather, you are awesome, with an amazing ability to distill complicated issues into digestible, understandable concepts. Thank you for your work in the world, for leading many of us, and for your helpful suggestions and insights today. Appreciate you being on the podcast.
Heather Redman:Leigh, thank you so much. It's been an honor.
Leigh Morgan:All right, take good care.
I hope you enjoyed this episode of the Space In Between podcast. If you did, please hit the like button and leave a review wherever you listen to the show. And check out the spaceinbetweenpodcast.com website, where you can also leave me a message.