The Mindset Economy

How To Create Hope with Sarah Housley

Sarah Housley Episode 3


What if the biggest risk in the age of AI isn’t losing jobs, but losing our ability to imagine the future?

In this episode of The Mindset Economy, Jean Gomes and Scott Allender speak with futurist Sarah Housley, author of Designing Hope, about why our collective imagination about the future seems to have stalled. As AI advances and change accelerates, many people feel less agency, not more. So how do we reclaim the ability to imagine and design better futures?

Building on ideas such as protopia (futures that improve gradually through experimentation, human ingenuity, and collective action), the conversation explores how leaders can create more hopeful visions of the future, why rapid technological change can undermine our sense of agency, and why developing more non-linear ways of thinking may be essential as AI and other forces reshape the world.


Reading from Sarah Housley:

Designing Hope: Visions to Shape Our Future (2025, Indigo Press) 

 

Reading from Jean Gomes and Scott Allender: 

Leading In A Non-Linear World (J Gomes, Wiley, 2023) 

The Enneagram of Emotional Intelligence (S Allender, Baker Books, 2023)


Social:
Instagram: @mindseteconomypodcast
LinkedIn: The Mindset Economy Podcast
Bluesky: @mindseteconomy.bsky.social
YouTube: @TheMindsetEconomy

The Mindset Economy Podcast is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside Consulting Ltd production.

Sarah Housley:

Hope has become such a difficult, controversial mindset to have, partly because if you are hopeful, you are seen as unrealistic. Our society really appreciates and valorises this mindset of being cynical or being pragmatic or being realistic in the sense that nothing can get better and we just have to keep going with what we have.

Scott Allender:

In 2021, researchers surveyed 10,000 young people across 10 countries about the future, and 75% said it was frightening. More than half believed that humanity was doomed. And when those young people were asked what the future looks like, most of them could only describe versions of collapse. That finding should disturb us, but maybe not just for the obvious reasons. The problem is not that young people are pessimistic; it's that we've given them almost nothing else to imagine.

Jean Gomes:

There's something stranger going on beneath that, though. It's not just that we lack positive images of the future; it's that we've grown comfortable inside the dark ones. Dystopia, as our guest puts it today, has become cosy. We see it so often in our films, our feeds, our headlines. It no longer functions as a warning. It has become familiar, almost reassuring. The stories where everything falls apart are the ones that feel most realistic to us now, and that's a problem not of information but of imagination. When imagination narrows, so does our agency. If the only futures you can picture are the ones you would not want to live in, the rational response is to retreat into the present, which is precisely what many of us are doing: scrolling through a permanently replenishing feed, locked into a slot-machine relationship with our now.

Scott Allender:

In the face of our polycrisis and the ping of our devices, it's easy to take our eyes off what could be when we're just focused on getting through today. In this conversation, we are exploring what it would take to break that cycle with Sarah Housley, a design futurist, trend forecaster and the author of Designing Hope. Sarah, it is great to be with you. Your new book starts with the sentence, "People are not excited about the future anymore," and that kind of hit me right in the chest as I read it. Let's start there, if we could, in this conversation. Why is that, and why is this important?

Sarah Housley:

Oh, I love hearing that it really got to you. That's fantastic. Thank you for sharing that, and thank you for inviting me on, by the way. So I think this could be a very, very long answer, almost an endless answer. I'll just pick out a few factors, maybe, and we can go from there. I think one of the reasons we're not excited about the future anymore is that we are living with the future visions that we previously created. In the last century, we spent a lot of time making futures visions and making futures to work towards. Some of these became very famous, and some of them less so. That was a big focus; that was a way that we really excited people about what was to come. And we had a lot to work with in the last century. Things felt new. Things felt like they were opening out. There was a huge golden age of innovation and possibility, particularly post-war, and those futures images and visions are still very strong. A lot of them we've actually achieved and got to; some of them we haven't. And the ones that we have got to have had consequences; they've unfolded or mutated in ways that maybe we didn't see coming, and ways that have been quite damaging. So one of the really big leaps forward we made over the past few centuries was obviously in using fossil fuels. That opened so much up to us, and now we see the consequences of using fossil fuels very obviously, with climate breakdown. And we've had this wonder material that has made so much else possible, which is plastic. Plastic used to be seen as just a universal positive, this material of innovation, and now we know the consequences of plastic in waste and pollution and the health crisis. So some of these futures that people had previously been so excited about have diminished or have actually damaged us. And we also don't have the same appetite for replacing those futures that we worked towards and achieved.
We've not created new images of the future, or new futures visions, to kind of move on from them. And to some extent I think we don't want to, and to some extent I think we can't, and there will be lots of reasons for that. But one of those really big reasons that we do need to reckon with is social media, and particularly the news feed. So not necessarily lovely posts from your friends and family, but this constantly replenished news feed that we now all face on our phones and on our screens, and that keeps us in this kind of slot-machine mentality of always reaching for the next new thing and living in this permanent present. And I think when you live in this absolutely engulfing amount of news and information and hot takes and stories and images, our brains find it very difficult to make space to think about the past and to make space to think about the future. And so we don't have this long-term view in either direction, because we're now so stuck in the present. And then the final factor that stops us being able to think about the future, or wanting to, is that we increasingly feel like we don't have a future. And the obvious reason for that is climate breakdown. But there are lots of other factors, and political factors as well, that are limiting our imagination about how we might get through the polycrisis that we find ourselves in, and also the kind of possibilities or excitement that could potentially lie ahead.

Jean Gomes:

Wow, you've just laid a very rich agenda on our plate. Thank you, that was brilliant. I love this insight around the present and how addictive that's now become. We all know why we're here; you can see all the things that are here. Why haven't we stopped that? Why have leaders not stepped in and said, look, we need a better vision? Because this seems to be happening at so many different levels. It's happening at an organisational level, with businesses, even charities, and certainly political leaders. Why is there this kind of system-level inertia around the future?

Sarah Housley:

I think to some extent because there is a feeling that it's not possible, and that anything that was promised would be unrealistic and potentially not achieved. And leaders know these days that if they don't immediately achieve an ambitious vision that they set out, they will be attacked. They will be shouted down. They will get the negative news headlines. They will get the accusations of greenwashing. This take-down culture will come for them, rightly or wrongly. So there is more reticence about being the really ambitious, moonshot person. Also, we now have a lived history of Silicon Valley becoming famous for these really ambitious visions that have then caused problems, haven't happened, or have happened in different ways. So there is a lot of reticence about being that ambitious visionary person now that we perhaps didn't used to have. But I think also hope has become such a difficult, controversial mindset to have, partly because if you are hopeful, you are seen as unrealistic. Our society really appreciates and valorises this mindset of being cynical or being pragmatic or being realistic, in the sense that nothing can get better and we just have to keep going with what we have. It rewards people who really stick to the status quo and stick within the systems that exist. And it's very suspicious of people who offer something genuinely different, because something genuinely different is revolutionary. It's a huge amount of change. It's a huge commitment to the possibility that things could be different, radically different, and we don't make space for those kinds of thinkers in the way that we perhaps once did. So we've encouraged leaders to become pragmatists over everything else. I think possibly that's the way the system has worked, rather than any one person or group of people planning this or thinking, oh, we should actually disincentivise hope. It's kind of happened as a result of the way these systemic forces have come together.

Scott Allender:

That's really interesting, because it would seem to me that, even at a more practical level, you'd have to have a bit of hope about what the outcomes could be to motivate people to action, wouldn't you?

Sarah Housley:

Yeah, I think hope is actually really practical, and I think hope is really realistic. And I'm perhaps a bit unusual in that I have some problems with this really controversial position that hope has now taken up, partly because I published a book called Designing Hope. And so people kind of expect me to be this persona that is a representation of hope, and to solve all their problems and be really, really sunny and optimistic and positive. And I don't think that's what hope is. I don't think it's relentless positivity. I think it's so much more nuanced. And actually, I get this question so much, around what is hope, that I carry around definitions to throw at people. So the really relevant one, I think, and I'm quoting Seamus Heaney here, is that hope is not optimism, which expects things to turn out well, but something rooted in the conviction that there is good worth working for. It's as simple as believing that things could be better, not necessarily that they will be better, and that the role of people is to be ingenious and work together and try to make things better. So if hope isn't realistic, that's an incredibly bleak society that we've created.

Jean Gomes:

You're not a futurologist in the classic sense, but there are a lot of people in your field who are creating futures that have quite a lot of dystopia in them, and there's a lot of fear at the heart of those things. And they're kind of arguing that that's essential, you know; they're taking realism to that next level, that people only change at the last moment, when the facts completely outweigh the resistance. What are your thoughts on that?

Sarah Housley:

I think the central challenge is that dystopias are great, and we love them, and we're really drawn to them, and they make for really, really good stories. So the last-minute change, the hero coming in and saving the day or making a really last-minute decision, is an incredibly good narrative, and we love it. And so one of the things I do in the book is immediately break down this binary we have around the future: utopia and dystopia. Utopias, we think we want. We think utopias are perfect. But I think the reality is that not only is utopia impossible, definitionally impossible, it's going to be really boring to live in. So is it worth aiming towards? And then dystopia is the opposite, and that's the one we love and go to. That's the one we see in our favourite TV shows, in our favourite films. We're so familiar with dystopia, and that's another problem: it almost feels comfortable to think about dystopia, because it's such a familiar image of the future that we feel comforted and almost cosy within it. And that's really kind of perverse, if you think about it, because dystopia is meant to be something we avoid. It's meant to be a warning of what not to go for. It creates an immediate impact, which is why, in futurism, it's a really strong way of telling the story of the future, but very often dystopias are meant to be warnings of what we don't want to get into. Only then do we transform them into things that we think are aspirational. So all of the dystopian TV shows have then inspired tech companies to go out and make the things from the dystopian TV shows, which is kind of the opposite of what they were intended to do. All of this to say that I think we need a third option that goes beyond that binary, and that third option has been really usefully provided to us by two really great futures thinkers. It's called protopia, and I know you've spoken to both of them before.
So Kevin Kelly coined the word protopia, or proposed it, back in 2011, and I always want to get his definition exactly right. He says protopia is a state that is better today than yesterday, although it might be only a little better. And then another futurist, Monika Bielskyte, came along in 2021 and kind of expanded and reworked and challenged this definition. She says protopias are proactive prototyping of hopeful futures. So my takeaway from both of these approaches is that protopia is something messy and human and achievable and realistic, but also ambitious and radical. Being achievable doesn't mean you can't be ambitious, you can't be radical, but it does mean that you're not aiming for utopia, and you're not settling for dystopia. You're going for something that is more meaningful, in a way.

Michelle Beagley:

The world is evolving beyond the previous industrial, service and knowledge economies. A new multi-trillion-dollar global economy is emerging: the mindset economy, an economic driver in which our beliefs, resilience and adaptability are the new currency for an automated world. Welcome to The Mindset Economy.

Scott Allender:

Is the speed of change in our world right now impacting people's ability to feel that they've got much agency over the future? In other words, it feels like change is coming so quickly. AI, you know, feels overwhelming. Everything's just moving so rapidly. I'm wondering, in your estimation, or your research and your work, if the speed of change is partly what's stealing away people's ability to feel hopeful, optimistic and even practical about taking some agency over the future.

Sarah Housley:

Yeah, I think absolutely. The speed of change now feels very, very fast, and the way change is presented to us is so short-form and bite-size and cut into little chunks that it feels even more fragmented and disjointed than perhaps it would have done in the past, when there were also fast times of change. But the other big issue is that this change is packaged and parcelled to us to feel inevitable. And the reason that we don't feel we have this power or this agency is that it is sold to us in this way: that it's absolutely coming, it's absolutely unstoppable, and all you can do is either do it or panic because you're not doing it. And this is particularly true in the case of AI. We've become so polarised into either you're all in on something or you hate something and you just refuse to participate. And in reality, the vast majority of us are somewhere between those two states. It's a very stark view to think you can only be on or off in those positions. Part of that is the way that we now feel that society is run. I think the average person is becoming much more inclined to think that they don't have any choices they can make, and the average company is even starting to maybe feel that way as well. Because the technology companies are so powerful, we have an increasing number of billionaires in the world who are very obviously now exerting a political influence in a way that they were not seen to in such a public way in the past. And so the power dynamics have really shifted away from the average person towards people who are perceived as elites, and also the systems that those people create.
And so us having no power and no agency is a very, very real feeling. But countering that is that we are seeing a wave of collective action in multiple spheres, and obviously in the political sphere as well, and we are seeing people come together not only to fight against the changes that they don't think are good, or helpful or positive for society, but also to offer alternatives. So they're not just opposing change that they think is too fast or they feel uncomfortable with; they are proposing alternative ways of going about that change. And I personally think that's a really important form of innovation, that kind of social innovation, and I'm really excited and happy to see it coming through.

Jean Gomes:

When you look at covid and what happened with the pandemic, what do you think is transferable from that? Because we did do some remarkable things in a very short period of time when we were faced with this vast amount of uncertainty.

Sarah Housley:

Yeah, I think it's a really good lesson in what the right conditions were to encourage fast change and fast acceptance of change. And obviously no one would want to create another pandemic to bring back those circumstances again. One of the differences, for example, in the challenges we face today is that there's not one super-distinct challenge, a deadly virus, and there's not one super-distinct, obvious, singular solution, which was, first of all, keeping people in their homes to stop it spreading, and then developing a vaccine at super speed. There still are several identifiable problems today, and there still are a whole range of very well understood, identifiable solutions to those problems, but people are very good at thinking, okay, one problem, one solution. A polycrisis and a portfolio of solutions doesn't come together as easily. It's a harder message for leaders to convey and a harder message for people to understand and take on board.
So a lot of the problem is in accepting that there's a plurality there: a plurality of issues, a plurality of strategies and ways forward. But I think covid is a real object lesson in the Overton window and how to shift public opinion and public appetite quite quickly. I think you two will be aware of the Overton window, but as a very brief recap for anyone who isn't: it's a political theory of change, and it describes the window of different policy possibilities that could be introduced by a politician with them still keeping their popularity, still holding their power and staying in office. If you suggest a policy option that's outside of the Overton window, that would be a radical or unthinkable idea, and the general public would reject it; a policy option inside the Overton window would be acceptable, popular or even mainstream. These are acceptable ideas. And the Overton window is very, very open in times of fast change and uncertainty, and that's why it became very, very open during covid, because that was a time of immense change and immense uncertainty, and also crisis and fear and the feeling that we needed to act. And so ideas that were previously unthinkable could go immediately through that window to become policy. And what I always say when I'm doing talks and workshops about futures thinking is that we are currently in a time that is also very open. The Overton window is very open, and attitudes are changing really quickly, and ideas that were previously unthinkable are becoming mainstream very quickly. And it's not always ideas you support, like stopping the spread of a deadly virus. Sometimes it's ideas that you might find repugnant; mass deportation, for example, is an idea that has moved into the Overton window relatively recently. But the takeaway that I always focus on, and that I always try to amplify and spread, is that in times of change, there is more possibility.
People are open to more possibility. And so if you really focus on that, and you look at the ways to move ideas into the Overton window, and there's a whole long list of ways, everything from lobbying and activism to the role of media and pop culture and the role of social media trends as well, to normalise ideas, then you can start to see how we could actually galvanise the ideas we want and move them into the Overton window quite quickly.

Scott Allender:

That's super interesting. Let's stay with this idea of exploring plural futures. You lay out four potential futures: more-than-human, degrowth, Solarpunk and the metaverse. I'd love for you to take us through some of that, if you would.

Sarah Housley:

Absolutely. So the premise of the book is that we think we are not interested in futures. We're not thinking about futures; we don't know what they could be. And as someone who works in futures thinking professionally, I perhaps have an insight, or an insider's view, into the futures that are being made. These are all real futures that are being constructed, that are being pursued by change-makers all around the world in different ways. And as we go through each of the four futures visions, I'm actively inviting you to interrogate them and critique them and think about what could be good and what could be bad. Who's being excluded in this future? Who is this future being designed for, and by whom? Why is this future being proposed? And at the end of the book, if none of these four really captures your interest, and you don't think, okay, there's a really good combination here between these futures, I'm fully satisfied with what's being developed, then absolutely there's a gap there where your futures vision should fit in, and you should be thinking about what you want from the future and actively putting together your own vision of the future as well. So it's meant to be very participatory. And as we go through the four futures, it's not necessarily that anyone thinks these are the four optimum end-state futures; it's that they're starting points for discussion about what we might want. So, more-than-human futures challenge us to rethink our relationship with nature and actually think of ourselves as nature. Rather than having this human dominance, or this human supremacy, that we currently live within in quite a few different countries around the world, although not every culture has this way of thinking, we really think: okay, we are part of nature. We need to work with nature, and as nature. What I like about more-than-human is that it sounds very cosy.
It sounds like, oh, I'll just get a garden and I'll get a pet, and I already like going for a walk in the forest, and this is fine. But if you actually really go into what that changing worldview would mean, it's incredibly radical and incredibly different, and it becomes quite challenging to think it through. The next future is more economic in origin, and it's degrowth: the economic theory that we should put in place a planned reduction in our use of energy and resources in the global north, and also redistribute those resources, live more communally, reuse things and live more responsibly as a society, and put policies and infrastructure in place to make that happen. The third future is Solarpunk. This is the newest future in the book, and when we talk about the fact that we're not making futures anymore, Solarpunk is the only futures vision we have made this century, which I think is really interesting. Solarpunk rose in tandem with the internet and with the early days of social media. So platforms like Tumblr and blogging were really, really important for helping Solarpunk spread, and hashtags were the technology that really helped it take off. It's kind of a crowd-sourced futures vision, and it's a movement based around climate hope and approaching climate breakdown with a feeling of possibility rather than a feeling of doom and despair. And it's kind of tech and nature in parallel, with a big dose of community as well. And then the final future is the most recent in terms of how brightly it burned for a while, and that's the metaverse. Obviously, its roots go back decades in technology and virtual reality, but it really became popular and hit its peak in 2020, and that's in large part due to the pandemic and the fact that people were at home wanting to escape into virtual worlds, and then it saw a huge investment from companies like Meta.
We'll all remember what a buzzword the metaverse was, and all of the problems people had with it, but also all of the ideas that it generated, and then how quickly it dissipated and faded. The hype cycle moved on, and the technology industry's hype cycle is now very firmly focused on AI, and the metaverse is regarded in some circles as kind of a has-been as a futures vision. But no future ever really dies. They might hibernate for a while, and they might change shape, and then they'll come back. So the book looks at the possibilities for the metaverse, how it rose and how it fell, and what's next for it.

Jean Gomes:

So this is incredibly interesting, because, as you say, there's this Overton window where people are willing to consider new possibilities that they may have found repugnant or impractical even months beforehand, once a certain set of things comes into place. Part of that may be exposure; part of it may be a trusted person; it might be having a conversation around it. What do you think are the mechanisms at the social level where these futures translate into possibilities, where people can start to go, well, maybe the future could look like that? What do you think is the social mechanism?

Sarah Housley:

I think that's a really interesting question, particularly from someone of your background; you'll be thinking about behaviour change and the psychology of behaviour change. I'll maybe come at it from a slightly more practical standpoint, because my background is in design and physical making and physical creativity. Other than just other people doing it and presenting that possibility and making it look good, which is the basic thing, if you hear about an idea often enough, quite simply, it just becomes familiar to you, and you stop thinking of it as weird and start thinking of it as something you're hearing about quite a lot. At that point, it becomes normal, and possibly it becomes interesting. Maybe it doesn't. But the really practical way of spreading these ideas and seeing their potential is to make systems demonstrators. There's lots of names for this. You could call them prototypes; you could call them small-scale examples of alternative ways of living. So for Solarpunk, this might be someone opening a repair cafe, or someone having a DIY weekend at their house where you can just go and make something cool. You make something community-based; let's say you make your own social network. So you code your own social network, and it's made really simple for you to do that in a few hours, and suddenly you see, oh, this is part of Solarpunk, and Solarpunk sounds like this quite cool movement that I want to be part of, something bigger, and see what else this could be.
So just kind of inviting people in with these small-scale systems demonstrators that show, maybe physically or maybe emotionally, what this future would be, and then getting them to see if they want more of it. And coming from an art background, I'm probably very biased towards things being visually shown. I personally find images of the future, literal images of the future, really inspiring when they feel fresh and new. So I'm always looking at architectural renderings, for example. In the UK, there's a new proposal for forest cities, and there's an artist who was commissioned to draw the forest cities. I think architectural renderings have a very high bar to clear in what actually looks new and what actually looks futuristic, because, going all the way back to Metropolis, we're so used to seeing cities of the future, and there are so many cliches. But the forest city this artist has captured genuinely does look quite innovative and quite futuristic and quite new, and, as someone from a creative background, it just really drew me in to thinking about being in this forest city and what would be the reality of living there. So small-scale systems demonstrators, and then visual images of the future, are two ways forward that I think are really powerful and effective at igniting people's curiosity.

Jean Gomes:

I feel drawn to living in that image now. I'm going to have a look after the show. That sounds amazing. Where do I sign up?

Scott Allender:

I'm curious to stay on this. Jean touched on this already a little bit: what are the mindset shifts that we need to start adopting as leaders in our workplaces and our communities that can help us anchor to a practical hope? You mentioned mass deportation earlier, and, sitting here in America, I feel like this last year I've lost a lot of hope for where we're going, and it's hard to find the anchor back to a more protopian perspective. And I watch that kind of seep out, not just when I'm feeling a little bit absent of hope, but I watch others, my neighbours, my colleagues, feeling a little bit of that, and it kind of seeps out into everything. You start to almost adopt this sort of pessimism about a lot of things, right? And it can impact your work, and it can impact your family life. I'm not even sure where my question is in this, if I'm honest. I guess I'm asking you: in the face of all that, Sarah, and you'd also mentioned billionaires, so the power distribution has changed so radically, it can feel like, gosh, I don't have much agency to do much of anything. So how do I find a practical hope when that's the sentiment or the perspective?

Sarah Housley:

Yes. Well, I'm going to borrow someone else's words to start off with. First of all, I should say, when we're talking about futures, there's always an s; it's always plural. There's never one singular future. So multiple, plural futures, and they will be specific to your context. I'm speaking in Europe, and even more specifically in the UK, and I think this is a country that is really navigating its relationship to hope at the moment; it's in a very particular place with it. I think America is in a totally different place with hope right now, and you have a very particular context that you're navigating. And there's a limit to which I can personally speak to how to find hope in, for example, the political context or the economic context; it has to be on a broader basis. But I absolutely empathise with this feeling and this kind of depression of hope, and I think that is fairly universal. Everyone will have experienced it before. So I was listening to George Monbiot on a podcast recently, and he's much more active in activist circles than me, and I kind of cede to him on where we find hope in collective action or political action. He said there's three words for what you need to do, and the three words are: mobilise, mobilise, mobilise. You will never find action, you will never find agency, individually. It's a multiplayer exercise. It's a collective exercise, and coming together will immediately start to build that momentum and create that feeling. One of the things I talk about in the book is building these networks of action. So not necessarily that you scale up your action; it doesn't always have to be about scaling up. It can also be about networking out and building these webs.
So if you are running one of these systems demonstrators I talked about, that might be in one city, and you might connect with people on the other side of the world who have similar aims and similar goals, and can share some of their best practice and create momentum and motivation. One of the really simple examples in the book is a litter-picking app. Litter picking is a very positive exercise, but it can feel so futile and so pointless, because you'll just see someone throwing some litter on your way back from your litter picking, and you'll think, well, what was the point of that? And does anyone else care at all? And just through this litter-picking app having a map of where other people are active with their litter picking around the world, it can scale very quickly from local to global. This very simple way of showing other people doing it will immediately restore some of your faith: oh, there are other people who feel like this, and other people who are doing this. And if you can build connections between these groups, between these people, then that's the motivator, that's what keeps you going, and that's what creates the momentum. So hopefully there's something useful in there. Tonnes useful. I love the reminder of the collective, and what I hear in there is the coming back to that non-duality, right? You can have fear and sadness and all these other things about what's going on, but also the importance of hope as a motivator. Yeah, and hope is always going to be part of this cocktail of feelings. We seem to expect that we'll just feel one thing, and we can keep that just being a positive feeling, so we can perpetually be either hopeful or excited or, you know, dynamic, whatever it might be. But the reality is that, no, we all feel lots of different things that are intersecting constantly.
And the quote-unquote bad emotions, not to sound too much like a therapist, but we do need those negative emotions, the challenging emotions; they're completely valid. One of the things I always come back to when I'm doing my futures work is that I think climate anxiety and climate grief are going to be the absolute background emotions of this century. So that's going to be a really long time period to have climate anxiety and ecological grief always in our psyche. But that's the reality of what we now face: we are living through climate breakdown. That's going to be in that cocktail of whatever else happens in the future. So we can focus on technological innovation, or we can focus on the next cool thing that's happening that might catch our attention, but there will be some sorrow, some grief, some anxiety there as well. And it's learning to be comfortable with that mixture of feelings, and learning how hope can balance alongside other emotions, that I think is really important.

Michelle Beagley:

Mindset is the interplay of how we feel, think and see. In this rapidly changing world, where machines can think, our mindset will profoundly influence our economic success, well-being and our capacity to embrace uncertainty. Welcome to The Mindset Economy.

Jean Gomes:

This is such a wonderful conversation for The Mindset Economy, because it's spot on to the agenda that we want our listeners to be thinking about. This ability to hold the tension between feeling anxiety and fear and sadness, and hope and optimism that the future can be better despite it being difficult, is exactly what we need people to think about, rising to another level. But when we think about AI, it takes the whole thing up another level, because climate anxiety and so on is obviously real, and it is material, but AI is staring me in the face in terms of my paycheck, and people tend to get a bit more motivated when things are that immediate. In your work, what have you envisaged in terms of the future of the workplace, or life generally, with AI becoming more pervasive? What are your thoughts?

Sarah Housley:

So many places to start. And I bet you come across this all the time: you say the word AI, and people just go, oh my god, I don't know where to start. I mean, even starting with defining what that word means is kind of impossible. But I really like that you use the term machine intelligence, by the way, when you're talking about your podcast. I think that immediately gives it a different tone, and it takes it away from that sense of complete panic, and the headlines saying that you're going to be killed by an AI within your lifetime, because a machine intelligence feels like something a little bit more technical, a little bit more understandable, maybe. So I think it does feel very real. I always say that AI is a suitcase word, because there's a lot to unpack. It has lots of different meanings to lots of different people. But in the workplace it's very real and present, and we can never escape the implications of it, because we are reminded a thousand times a day of the implications of it. So AI is going to be a general-purpose technology. It's not going to be one thing. It's a technological breakthrough, and it's going to be the foundational layer for lots of different products and use cases and ways of living. And with a lot of these, you're not going to think, that's an AI whatever. You won't think, that's an AI-powered sofa, for example; you'll just think, that's the sofa, I'm going to use it like a sofa, and this is what sofas do now. They've got this amazing new feature because of AI, but that has very quickly become normal, because it's still just a sofa. To use a silly example. Yeah, so it's analogous to the adjustment to electricity in the 20th century, isn't it?
Yes. I think the very present way it's showing up is being used as a tool, the kind of chatbot functionality of feeding it information and getting summaries in return, and that's what the average person might think of as AI, thinking of ChatGPT. So the first point, I think, is: do we see it as a tool, or do we see it as a collaborator? When you're thinking about different possibilities for the future, the words or the terms that you're starting with can be really important. If you see it as a tool, you're going to use it like a pencil. If you see it as a collaborator, you might work with it more like you might work with a colleague, and immediately that's more relational; it's a relationship between you and a machine. I think the most interesting way to think about AI, or the most interesting research I've come across, is the latter view: thinking of it as a collaborator, thinking of it as a co-intelligence, and even as a new form of intelligence, a new species of intelligence. Because that takes us away immediately from this perhaps unhelpful way of thinking of it as a human-like intelligence, or a service that is trying to emulate or perhaps even replace a human, when at its best, at the best of what it could be, it's something else. It's a different type of thinking, a different type of intelligence, a different type of life, even. So, some of the most interesting projects I've seen: Vaughn Tan released something called Confidence Interval. Are you familiar with this? Yes, yes, he's been on our show as well. Okay, did he talk about it? No, I think I was a little bit early for it. Okay, so I'm sure he would explain it much better than me, of course. But Confidence Interval is a thinking tool, so you work with it to think through the process of critical inquiry that you're embarking on.
And it's a structured way of scaffolding your thinking and your process of inquiry, and your process of coming to different parts of your thinking and your conclusions. And it completely skirts around: okay, well, AI is going to stop me thinking critically, AI is going to make me stupid, I'm going to produce slop because I'm not using my brain anymore, because it's acting as a partner for thinking. And I think Anthropic have started to do this quite well too, with Claude, and Claude being a thinking machine. Another really interesting avenue that I'm tracking: there's an AI company called Sakana. Have you come across them? No? So they're Japanese, they're a startup, they're still quite secretive. They are pursuing AI models that use other types of intelligence as their starting point. So not human intelligence, but animal intelligence. For example, studying the way that bees operate as part of a hive, or the way that birds might fly in formation, or the way that ants work when they're working together, and different ways of embodied intelligence, essentially. What's interesting is that our obsession with machine intelligence is emerging at the exact same time as more-than-human futures are emerging. And more-than-human futures are really thinking about different types of intelligence beyond the human: the intelligence that might be held in water, or in soil, or in mycelial networks, or in animals. For example, octopuses: their brains extend all the way into their tentacles, so it's a very embodied type of intelligence that an octopus has. And the fact that Sakana are focusing on that kind of intelligence, the more-than-human intelligence, as the basis for how a machine could think, I find that not only really poetic but really generative for what that could be.
And if they were to bring out collaborators who think in this genuinely more-than-human way, the possibilities of how we would work and how we would live alongside that intelligence are a lot more exciting to me than an intelligence that's just trying to cosplay as a human, which is leading to all of this panic we see at the moment about them being too human, and us treating them as humans, and is that an issue, and where is the line there? So it's those genuinely fresh lines of inquiry that I'm tracking, and that I think will have the most interesting impact. That's fascinating. I mean, the possibilities of being able to think alongside an octopus are so much wider, aren't they? I know some people might be thinking that's crazy, but from an embodied perspective, the ways in which you could see problems completely differently, and what that might unlock, is fantastic. Excellent.

Scott Allender:

I'm thinking now about our virtual worlds and sort of that experience. I've got teenage girls who are, you know, on social media and things like this, and I'm watching a generation deal with the associated well-being challenges as part of that. How do you see us being able to deal with those challenges? How should we be thinking about that?

Sarah Housley:

Well, I think the metaverse is one of those futures that really lends itself to dystopia, and that makes sense, because it emerged really in novels and in science fiction, and we've already established they have this bias towards dystopia because it's a better story. So virtual worlds have always had this context of: if humans have moved fully into virtual worlds, then the real world must not be that good anymore. In novels like Snow Crash, people have retreated to virtual worlds because their physical reality is so degraded, and there's so much inequality that people on lower incomes just have no alternative but to strap the headsets on. And that's a very dominant perspective of the metaverse. So that narrative doesn't necessarily help the concerns that we have around social media, and children spending too much time on their phones, and us spending too much time on our phones. And there are risks. It's one of those futures where the critical thinking is very easy, because the risks are kind of flashing at you. They're very, very obvious; they present themselves to you very quickly. And there are scientific studies that prove and validate those risks, and set these guardrails for us. For example, when people spend a long time in virtual reality, they completely lose their ability to distinguish between what is virtual and what is real, and the memories become very jumbled up. So something that you experienced in virtual reality, you might experience in your memory as something that happened in physical reality. And that complete blurring of the lines can be dangerous, right? Or it could be really exciting; there could be a lot of possibility to explore there in a creative context.
So the medical context can be quite alarming, and the creative context can be quite generative and exciting. I think what we're learning now, as we have this conversation about AI: virtual worlds are obviously not the focus anymore, it's very much on AI. But there are a lot of conversations around the ethics, the morality behind AI, how it should be programmed, how it should be used, how it should be funded, all of these big, chewy questions. And as we come to some conclusions, or to some evidence-based stances and strategies on using AI, a lot of that will be transferable to how we regulate, how we design and how we use virtual worlds as well. At the same time, we're having a kind of societal-level conversation around how we want to use our smartphones: what age should people access smartphones, should they be allowed in schools? And a lot of adults are trying to rebalance their relationship with their screens as well. So as virtual worlds become more of a proposition, and more of a popular movement or development, we will have had those society-level conversations, we will hopefully have reached some positive conclusions, and we can apply that to our development of virtual worlds as well. But I will say a lot of people are not interested in virtual worlds at the moment, because that hype cycle has moved on. Coming from a design background, though, with virtual worlds offering this chance to redesign reality, that's still an incredible proposition for anyone with any creative experience or interest, and I don't think we can let that lie. I think we really need to start exploring: if we can redesign reality, what could we make it? How could we make it better, and how can we make it more exciting? So I think that will come back more into the spotlight over time. And I think virtual worlds are not done.
They're still very much developing, and as with any future or any technology, it depends entirely upon the direction we decide to develop them in. So it is a choice. It's always a choice, and it's a choice we can influence and steer going forward.

Jean Gomes:

So can we shift the conversation to leadership and strategy for a moment? Many people listening to the show will be leaders, and they will be thinking about their strategy and so on. And we know that a lot of strategy isn't really strategy at all; it's past and present goals extrapolated into the future, and then people get disillusioned with that, because it makes them a bit more vulnerable and not very competitive. So if I'm a leader listening to this show, and I want to build hope into my strategy in a very practical way, and I want to make it tangible, what would you advise are the steps to start thinking like this?

Sarah Housley:

Yeah, there are maybe two distinct approaches that I think could be immediate starting points. The first would be: what is the vision? And I know vision is a very overused word within business and strategy, but it's perhaps not always meant as literally as this. Think again about images of the future, and this big meta-challenge we face, that we lack these images of the future. One of the reasons we have images of the future is that they're very, very motivating, and they bring groups of people together towards a shared goal, and they really inspire people to work together and be excited about their purpose and their impact and what they're working towards. So every organisation will have values and purpose and a vision, but in most organisations that vision is not fleshed out. It's not fully understood by all of the employees, it's not something they've participated in, and it's perhaps not something very meaningful or tangible. So go back to the absolute basics of: what do you exist for, and what is your vision of the future? One of the things I do is work with organisations to work out what these possible futures are and what these preferable futures are, and then how they fit into them. How does your organisation fit into that preferable future? What role does it play? How are you part of this bigger change? Are you a systems demonstrator? What are you demonstrating? All of these things can feed into what your futures vision actually is. And is that communicated well, and is it something people are really buying into and participating in themselves? The other thing about hope is that I do always think about this tangible, grounded hope: so perhaps thinking about what the most important steps are for you within your strategy, what the things are that you're absolutely hitting, and why you're doing it.
I think a lot of the time we're not explaining well enough why our strategy is a certain way and what the point of it is. And then perhaps even going more big-picture: what does hope mean to you as a leader? How are you listening to hope, or creating hope? And more importantly, honestly, what does hope mean for the people you lead and the organisations you are running? Because, as we've said, it's a very complicated word; it can be quite a divisive word. If you're in this situation that Scott talked about, with this real lack of hope and this real feeling that there can't be hope, then if you as a leader are working really hard on this beautiful vision that has a lot of hope in it, is that going to land with the team you are leading? Are they in a place where they're ready to just take on this hopeful future? Probably not. They probably need to be a really strong part of participating in it, and of creating what it should be themselves. So I think hope can't be something that's just forced on people. It has to be co-created, it has to be agreed on. It's not always the right emotional mindset to go straight in with; sometimes it has to be earned and it has to be cultivated. But also, realise that hope can be part of a strategy. I worked with someone once who was always saying to me, hope is not a strategy. It's a very famous quote, right? It's not a strategy, but it's always underlying a strategy, and it's always feeding into strategy. And I think when strategy becomes too devoid of hope, and when it is focusing on cynicism because cynicism seems realistic or practical, that becomes a real issue. So this may be a really stupid question, but I'm going to ask it anyway: can hope be measurable? It's such a qualitative thing. The leader says, well, how do we know when we've created hope? Yeah, I think it can be measurable. I don't know if you can go so far as to make it a KPI, or to get a return on investment from it in a very tangible way. But definitely, emotions are perceived and felt, and you can measure the emotions that people perceive and are feeling. So something I've done in the past during futures-thinking workshops is start off the day with a measurement tool of how people are feeling, and then end the day with that same measurement tool, and the ratings for things like hope and agency and feeling creative and feeling inspired always shoot up. Because thinking about the future, and planning towards a better future, is a really effective way to make people feel inspired and motivated.

Scott Allender:

I love this conversation so much for this show, because one of our central purposes is that we need to be investing in uniquely human capabilities, things that AI doesn't possess and can't, right? Creativity, emotional intelligence, ethical judgement and hope. AI can't hope. And I'm just interested in building on this conversation around what leaders can be doing. Is there a practical way to start developing hope as a capability in ourselves and in our teams?

Sarah Housley:

That is a tricky question, and I think it's increasingly hard to say what is a human skill or a human trait, and what a machine can take on, because I think all of our expectations around that have been blown out of the water. So, for example, in some studies (not by everyone, and not in every study), AI is now preferred as a mental health companion or a therapist, because it's seen as less judgemental. And can a machine show empathy? No, but it can give the impression of showing empathy. And quite a lot of the time, that's what people are doing anyway: they're just trying to give the impression of being supportive or of showing empathy. So I did think you might ask me what is a uniquely human skill, and I did think I would have to say I don't know, because we can't gatekeep those in the way we used to. We used to think a machine can never do these soft skills of talking to people, and they can never be an artist, and all of that is being blown up now. So I don't know that there are set-in-stone, uniquely human skills. I think what's more important is thinking about: what do we want to do? What do we see as uniquely human skills, because if we didn't do them we would feel lost, and we would feel bereft? And coming back to the future of work, I think it would be about what work you find meaning in, and what work actually adds to your life, rather than feeling like...

Scott Allender:

I know we're nearing the end of our conversation with Sarah, but I think it's worth breaking into it for just a moment. Firstly, Sarah's insights are so profoundly useful, and I felt more personally anchored in a truer sense of hope after speaking with her, as she reminded us that hope is not optimism, which just expects things to turn out well, but is rooted in the conviction that there is good worth working for. At this point in the conversation, we traversed into a space that I think is plaguing the minds of many, and it's a topic that we will surely explore in depth on future episodes of The Mindset Economy. It touches on a question that is perhaps the most defining riddle of our decade: where will the machine end and the human begin? We are in a somewhat disorienting time. We can see AI acting as a therapist, providing a judgement-free zone and space for people, and people start to wonder if the empathy we receive from the screen is true, or at least good enough if it feels true. And Sarah rightly observes that in our busy, distracted lives, we often settle for the impression of support anyway. So if a machine can mimic the "I understand" or the "I'm here for you", does the distinction even matter? But I wanted to pause here for precisely this reason: because while a machine can brilliantly replicate many skills, it cannot embody them. AI doesn't feel anything. It doesn't truly understand what we understand, because it doesn't care.

Jean Gomes:

That's right. An AI can analyse the emotional frequency of a voice, but it doesn't feel the weight of that emotion, and because we do, we're uniquely positioned to read a room, make sense of nuanced organisational politics playing out in real time, or offer empathy. I think that's Sarah's point: AI is mimicking those behaviours, but often, so do we. Even though we have the capacity to feel, we often only replicate and perform the proper responses in a given context, without sincerely feeling them. As AI gets better at replicating human capabilities, our invitation is to become more human in every way, and that's what The Mindset Economy is all about.

Sarah Housley:

Your question was about cultivating that hope, and cultivating that mindset of hope, right? I think you have to be open to possibility, because hope can come from very unexpected places, and it's a difficult thing to plan for. Because of that, it's also difficult to be open to possibility, because, just as you've mentioned to me before in our conversations leading up to this, we're really trained in linear thinking. It's A to B. And futures thinking is not an A-to-B line; it's a radar, it's a circle. It's not circular thinking, it's radial thinking. You think out from the centre to what might happen next, and what might happen next, and then what might happen as a result of all those different things that are happening. So it's already a messy way of thinking, and our brains might fight back against that a little bit, because they're thinking: I'm used to just going A to B and then being done and closing the book. But actually we have to think, well, what if not only A but every other letter of the alphabet, and they're all intersecting, and they're all talking to each other, and we're kind of going quantum with our thinking. That is teachable. People like me do teach it. It might sound really daunting, but it is exciting, it is empowering, it is energising, and it can be quite transformative. And that will then give you this mindset, or this capability, of being open to possibility. I think people often find it quite liberating to be told: no, we don't have to go from A to B anymore. We're going to do radial thinking, we're going to think outwards, and it's going to be better, it's going to be more interesting. Because it is inherently limiting to do linear thinking, and it's inherently liberatory to think outwards and to think in terms of possibilities. But our education systems are not particularly good at non-linear thinking. Yeah, definitely.
And one of the things that you discover if you embark on this practice is that you go from one thing that isn't quite fit for purpose, and then you discover this other thing that's not quite fit for purpose. If you're asking, what are the drivers of this trend, what are the drivers of this future, you're thinking: well, that won't work, because the education system is not set up for that. Oh, well, that won't work, because that's not how we currently design products. So yes, a lot of things. One of the reasons we're in polycrisis is that most of these systems that we have constructed around ourselves, or that the people before us have constructed around us, are not fit for purpose anymore. We do need a new century of thinking that we haven't really embarked on yet, and we're now 25 years in. So redesigning systems has to be part of it; it is inherently big thinking. So if our education system is not fit for purpose, that can be exciting; that can be an opportunity to redesign our education system, which a lot of leaders are already starting to do, because there is a general feeling that our education system is not future-fit, right? It hasn't thought about the challenges of the century and how we should be educating people to prepare for them. So I don't necessarily see the fact that our current education system is not preparing us for that kind of messy thinking as a problem. I see that as maybe an opportunity to redesign it. I mean, you've learned a lot about non-linear thinking because of what you do and your motivation to do that, and you get a huge reward intellectually from doing it. If I'm sitting here thinking, how do I, as a twenty-something, develop that non-linear approach? Or if I'm a parent, and I can see my kids potentially being stonewalled by a very linear way of looking at the world, how would you encourage that?
What are the practical things you can start to do to embrace it? So there are some very simple, practical tools you could start with. I'm really not meaning to plug the book here, but I do have to say that my work on this is at the end of the book, so it's in the sixth... I thoroughly recommend it. I love the book; that's partly why you're here, apart from the fact that our agendas are very convergent. It's a brilliant book. Well, I'm not meaning to say the only way to do this is to buy the book, absolutely not. But there are six exercises at the end of the book, and they also shout out some of the other thinkers in my field who are doing this kind of work, people like the Institute for the Future, for example, who are amazing at this kind of scenario planning. I think it's something that needs to be taught at every level. It's called futures literacy, very simply. I think it needs to be taught in school, at university, and then, as lifelong learning, through our careers. It's only going to become more important. People are going to be retraining, upskilling, reskilling, not just because of AI, but because of things like the clean energy transition, and all of these other factors that are changing what kind of work will be needed and what kind of jobs people will do. It's relatively easy to encourage children to do it, because it's not been completely trained out of them at that point. I think sometimes it could be about the parents taking their cues from their children about how to think in a non-linear way, because their children will already be doing it. So it can be not necessarily teaching your children, but learning from them, and doing some unlearning of your own.
But there are lots of short courses and information online. If you look up systems thinking online, that will shake you out of linear thinking straight away, because systems thinking is never linear and straight-line; it's always much more nuanced and 3D than that. So there are lots of resources to get you started, and fun exercises you can do, and then you just keep going from there. Excellent. So when you think about the last few years, what has surprised you most about what has moved into the Overton window that you weren't necessarily expecting, or that gave you pause for thought? I think overall, something that has shocked me, although you could see it coming, was the speed at which social media has sped up, and therefore the speed at which social media can introduce different ideas into the Overton window, and how quickly the public can be activated, both by those ideas and by those formats, into believing things that they previously would not have believed, or wouldn't even have thought about. This became particularly prominent with video apps, so obviously TikTok, and things like Instagram Reels before that. Before, when we had flat media, or when we had longer-form video on platforms like YouTube, the Overton window was still moving at the speed we were perhaps accustomed to. Social media made it faster, but not the breakneck speed it is at now. And because TikTok has this very addictive and very fast format, where videos just have a couple of seconds to hold your attention and then you move on, information is really condensed, it's really made bite-size, and it's really sensationalised to hook you in. And I'm saying all of that, which perhaps we all know, because the format has changed not only how the ideas are transmitted, but what kind of ideas are transmitted: it's the most attention-grabbing, sensationalist stuff.
And of course this leads to the polarisation we now see, but it also leads to things coming into the Overton window that seem absolutely bizarre. I'm thinking specifically about all of the wellness trends that take off on TikTok. The really obvious one is drinking raw milk. If you were to say to someone, drink some raw milk, their obvious reaction would be disgust. We're really conditioned to think: you don't drink raw milk that's come straight from the cow, that's really disgusting, it's going to be dirty. But it was packaged into this somehow aspirational idea by the people who were conveying the message, and how they were conveying the message. And I think we can extrapolate that outwards to a lot of the other ideas that are portrayed or peddled by influencers. And really, if you can make raw milk into something aspirational and normal, then we shouldn't be thinking that a four-day week is impossible to sell as an idea, or any of the other policies that we might think would have positive implications for society. And I think we are starting to see people who believe in these better or more protopian futures learning from that messaging style, learning how to package their ideas into really bite-sized pieces and sell them; taking the helpful lessons from influencers, and perhaps not taking on board the less helpful ones about how to sensationalise ideas. And as a result, because these protopian thinkers are learning about these new formats and these new ways to hook interest, we are now starting to see more radical but, I think, more positive ideas starting to gain a lot of traction as well. And particularly in politics, we've got a leader of a political party now in the UK, on the left, who has learned how communication can be really effective on social media, and is having such a strong impact by selling hope very literally.
He's always using the word hope; it's his slogan. He's selling positive images of the future very quickly and very effectively, and really getting a mass surge of interest and popularity behind him. So that's been one of the more wild-card aspects of the Overton window in recent years, for me.

Jean Gomes:

I'd like to pick up on this thought that a lot of the stuff that seems to take fire very quickly is disinformation. It's interesting, isn't it? You listen to something you disagree with and have whatever reaction you have to it. But this idea of mobilise, mobilise, mobilise, and actually harnessing that to our advantage. Scott, I'm expecting a slew of TikToks from you over the next couple of weeks. But let me put you on the spot for a second and see if you can mobilise us with something that you think is going to give us more hope. What would your most condensed thought be around that?

Sarah Housley:

My most condensed thought about how to have more hope...

Jean Gomes:

You can think about this. You don't have to immediately give it to us.

Sarah Housley:

Well, I'm always a little bit hesitant to sell one overtly positive idea that I'm completely behind, because my role as a futures thinker is to evaluate, to work with other people to see what they think, and to co-create our images of the future together. So getting into the headspace of one thing I'm 100% behind is going to be difficult for me. But one thing I'm really, really behind is exposing yourself to protopias. Because, as we've previously covered, dystopias are everywhere. Utopias are not so helpful; they're actually a lot rarer, but they're not so hopeful even when they exist. Protopias are so hard to find because they don't have that amazing narrative that hooks you in. And because we think hope is a little bit naff or a little bit silly, they can seem unrealistic; they might seem like stories we're not so interested in. But there are protopian outlets out there. There are protopian stories, protopian books and films. My favourite protopian author is Becky Chambers, a science fiction writer. Do you know her? If you read Becky Chambers, it is immensely, radically protopian. It will open your mind to the possibilities of how humans relate to each other and to other forms of life.

There's also a really nice series that I always shout out, on Grist, the climate website. Grist have responded to this idea that we don't have new images of the future, or images of the future beyond climate breakdown, images of the next century. They publish a series of speculative fiction short stories called Imagine 2200, and they're very human and real and messy. They're about things like people starting mushroom farms, and people going to climate resilience hubs, and people falling in love, and they're really beautiful short stories about possible futures. So that's another part of the protopian diet that you could take on.
So widen your protopian diet and start looking for protopias to counterbalance all of the headlines and all the dystopias that you're otherwise going to be bombarded with.

Do you need 30 a week?

I don't know that there's a vegetable-esque guideline. I don't think you want 100% protopias, because you need to know the reality as well; you need to know what is happening either way, good or bad. But a decent proportion. We're always looking for balance between these plausible futures, these possible futures and these preferable futures. So, a decent proportion.

Excellent. Sarah, has hope always come easily to you? Have you always been a hopeful person?

This is a great question, because I think some of it is personality. I can sit here and tell you how to have hope, but first of all, your personal circumstances will dictate how much hope you're able to have at any given moment, and some of it will be personality. I think I've always been a hopeful person. I've always exposed myself to radical and interesting ideas about what's possible, and then I studied design. Designers are known as people who set out to solve problems: we see a problem, we try to solve it. But designers are also people who ask questions, and that, I think, is why I became a researcher, because I'm always asking questions about what could be possible. So absolutely, there's some bias here.

Scott Allender:

I love that though.

Jean Gomes:

Well, as we bring this to an end, I would love to get a sense of what's going on inside your mind as you're doing this work. It's a big philosophical question, but how do you think pondering hope, thinking about protopias, thinking about the balance between reality and optimistic futures, is changing you? How's it changing your mind?

Sarah Housley:

I think specifically since publishing the book, because so many of these conversations are really about hope, there is some pressure to have new examples of hope to give to people. But it's also shifted my mindset massively, to just always be thinking about hope in a way that I was not before. It might sound completely obvious, but it's made me so much more hopeful, partly because I'm looking for it, and partly because that's the mindset I've moved into. So it has made me hugely more hopeful in this present moment. And perhaps that's why I think the UK is entering a more hopeful place: it's just because I'm entering a more hopeful place and I'm projecting it.

The other aspect of hope, or of the work I do: I had a really interesting question the other day after a talk I gave, and a woman said to me, can you still live in the present, in the moment, or are you always in the future, because you're thinking about it so much? Completely honestly, I'm quite guilty of always being in the future and not being in the present. But my answer to her was that I have two young children, and young children have this amazing ability to just drag you into the present and keep you there. So there's always that balance between living in the present and being hopeful about what could come next, if we design it in the right way.

Scott Allender:

Thank you, Sarah. As we conclude this episode, I want to sit with something that Sarah said about the cocktail of emotions: that hope is rarely, if ever, going to exist on its own. It will be mixed with grief, with anxiety, with frustration. And that's not a failure of hope; it's just the condition of being alive, particularly during periods of profound change and uncertainty.

Jean Gomes:

What struck me was her distinction between linear and radial thinking. We're trained to go from A to B: identify a problem, find a solution, close the book. Sarah described futures thinking as the opposite. You stand in the centre and think outward in every direction at once, holding multiple possibilities without collapsing them into a single plan. She called it liberating. I think it's also deeply uncomfortable, because it requires you to stay in uncertainty longer than most of us would want to. That connects directly to what we talk about as mindset capital: the ability to resist premature certainty, to hold the tension between competing futures, to stay curious when your brain is begging for closure. And this is exactly what leaders, parents and young people need most right now.

Scott Allender:

Sarah also said something about hope that I think is easy to miss. She said hope cannot be imposed; it needs to be co-created. A leader who builds a beautiful, hopeful vision and simply hands it to a team that's not ready for it will fail. Hope has to be grown and cultivated together. Her final challenge was collective: mobilise, mobilise, mobilise. You won't find agency alone; it is a multiplayer exercise. So the question we leave you with is not whether you're hopeful, but who you are building hope with, and whether you're giving them the space to imagine something worth building. Our imagination is part of our most valuable currency in the mindset economy.