The Inner Game of Change

E98 - Thriving In The Human+AI Change - Podcast With Ross Dawson

Ali Juma Season 9 Episode 98

Welcome to The Inner Game of Change, where we explore the thinking behind the doing of change.

In this episode, I am joined by Ross Dawson—futurist, keynote speaker, and author of five influential books, including Thriving on Overload and Developing Knowledge-Based Client Relationships. Ross is known globally for his work at the intersection of foresight, networked systems, and human–AI collaboration.

We unpack what it really means to lead in a world where AI is no longer just a tool, but a thinking partner. Ross shares his five-part framework—purpose, framing, filtering, attention, synthesis—for building clarity in the face of information overload. Together, we explore what trust looks like in human–AI teams, how to reimagine organisational design, and why every leader today needs to become their own futurist.

If you are navigating change, designing future-fit teams, or rethinking how humans and machines work together—this episode offers both depth and direction.
I am grateful to have Ross chatting with me today.

Ross Dawson

In demand worldwide as a futurist, keynote speaker and board advisor:
- Has delivered keynotes and strategy sessions in over 30 countries across 6 continents
- Works with boards and leadership teams of major organizations globally 
- Has run executive education programs and lectured at top academic institutions around the world
- Frequently in global media and author of compelling futurist content for PR campaigns 

Parallel entrepreneur
- Founding Chairman, Advanced Human Technologies Group
- Founder, Informivity
- Founder, Bondi Innovation Alliance
 
Bestselling author of 5 books on the future of business: 
- Thriving on Overload, a guide to success in a world of unlimited information
- Living Networks, now out in its 20th anniversary edition, anticipated the rise of social media
- Developing Knowledge-Based Client Relationships, a reference text for professional services firms, out in its second edition
- Getting Results From Crowds, a seminal guide to crowdsourcing, out in its second edition
- Implementing Enterprise 2.0, a landmark book on applying web technologies inside companies
 
Expertise: Future thinking and long-term strategy, Entrepreneurship, Scenario planning, Network economy, Future of business, Future of work and education, Future of organizations, Future of media, Future of marketing and PR, Future of retail, Future of healthcare, Future of government, Financial services, Professional services, Corporate innovation, Parallel entrepreneurship, Crowdsourcing, Social media, High-value relationships, Influence networks, Social network analysis, Shared value creation, Concept visualization.


Contacts

Ross’ Profile

linkedin.com/in/futuristkeynotespeaker

Websites

rossdawson.com/ (Personal)


Ali Juma
@The Inner Game of Change podcast

Follow me on LinkedIn


Ross:

Everybody in an organization knows, they know without doubt, whether their leadership's intent is to just drive the growth and the profits of the company irrespective of humans, as in we'll use humans if we need to and if not we won't, or whether they are driving the growth of the organization in order to support the people who work for them, in order to help them to grow, which in turn will help the organization to grow. Either they are human-centric or they are not. And people can sense that. And that will entirely shape the response to how these changes are made, the ability to attract the most talented people, which is going to be really fundamental, and the ability to retain people. I think it's almost binary. Either leadership is really saying, yes, we're going to use AI to augment who we are and our people and what we do, or we are going to use AI to, wherever possible, get rid of people.

Welcome to the Inner Game of Change, where we explore the thinking behind the doing of change.

Ali :

I am your host, Ali Juma. In this episode, I am joined by Ross Dawson, futurist, keynote speaker, and author of five influential books, including Thriving on Overload, my favorite, and Developing Knowledge-Based Client Relationships. Ross is known globally for his work at the intersection of foresight, networked systems, and human–AI collaboration. We unpack what it really means to lead in a world where AI is no longer just a tool, but a thinking partner. Ross shares his five-part framework of purpose, framing, filtering, attention, and synthesis for building clarity in the face of information overload. Together we explore what trust looks like in human–AI teams, how we reimagine organizational design, and why every leader today needs to become their own futurist. If you are navigating change, designing future-fit teams, or rethinking how humans and machines work together, this episode offers both depth and direction. I am grateful to have Ross chatting with me today. Well, Ross, thank you so much for joining me on the Inner Game of Change podcast. I am very grateful for your time today. Great pleasure to be here. Thank you, Ross. What is occupying your mind nowadays? We are living in a very dynamic world now, and you've been sharing your knowledge. What's keeping you interested nowadays in the world of business?

Ross:

So my primary focus has really come together around what I call humans plus AI, this idea that humans have invented this extraordinary technology, and it certainly shouldn't be designed to replace us or emulate us, as many are trying to do, but rather to complement us. So humans first, but using these extraordinary technologies to complement us. So how can humans and AI work together? That really frames all of my work. I still have my futurist work, sometimes working with boards and executive teams, which is a little bit broader. I suppose the future is always pretty broad. But that's the real focus. So I have my Humans Plus AI Explorers community, a community of amazing people who are exploring these things. I've got my Humans Plus AI podcast, I have my website with a lot of resources, frameworks and tools and so on, I have courses on LinkedIn Learning and elsewhere, and I'm building software to help people apply AI to strategy and also to essentially augment their own thinking. So everything is really around how we can use this amazing technology to do more, be better, and achieve our intentions.

Ali :

Ross, I've got this in my head, and I've been thinking about it for a while now. History tells us that it's not easy to work with another human, at work and in relationships. We spend a lot of money trying to learn how to build better relationships with other humans. How do you see the picture now that we would like to work with artificial intelligence? How is that relationship shaping up to be? Is it going to be an easy ride for humanity?

Ross:

No, definitely not easy. These are extraordinarily deep and complex questions, and in a way we need to recognize how difficult they are, because there are so many facets to how we change. Just one aspect of this is the reality, and I've said this for a very long time, even before generative AI: we will fall in love with AI, and people are already falling in love with AI. That is just our nature. We engage, we see things in it that sometimes are not there, but it's also a reflection of ourselves. And of course there can be positive aspects to this, but there have been deeply negative aspects as well. We humans are in relationship with others, we are social animals, as you were suggesting. We now have another form of relationship, and we are formed by our relationships, by our parents and our friends and our partners and the people we involve ourselves with. So these are some of the deep dangers, because we are now in relationship with AI which is sometimes not designed as well as it could be, or even with purely positive intent. For example, it may simply be designed to maximize engagement, and these are things which are not necessarily the best thing for individuals. So this comes back to the big picture of all of my work in a way: we need to recognize these challenges. We can't just say it's all going to be great, because there are some deep, deep dangers around many of the facets of AI, particularly in our relationships.
But there are already extraordinary positive possibilities for augmenting our thinking, and potentially even for using AI, for example, as a coach to help us think more clearly, to clarify our intentions, to guide us on our pathways to achieve what we want, as well as with other humans. But we need to be very clear in our intention that we make these interactions positive.

Ali :

Is that a relationship between a human and a machine that that is a knowledge-based relationship? Is that how you see it?

Ross:

Well, I suppose this harks back to my first book, which has defined a lot of my work, Developing Knowledge-Based Client Relationships. It's interesting, because the first version of that book is over 25 years old, it's still actually selling reasonably well, and I still get quite a bit of work from it. The concept there was really about professional relationships, this idea that knowledge and relationships are inextricable. The nature of a relationship is that you know your client, your client knows you, you are co-creating knowledge together, you're making your clients more knowledgeable. And if you don't have any of that, it's simply a pure commodity. It's just transactions: you're doing something and you're sending an invoice. So any real relationship must be based on knowledge. Even early on, I started to look at how this applied in other ways. So, to your point, now we have this play for AI agents. All of the major technology companies want to be our AI agent, which is our portal to our interactions with the world, to finding information, and now already to commercial transactions, as in, I'm looking to buy a car, or whatever else, and it guides you through that journey. And of course there's money involved in these transactions. So essentially AI does know about us, unless we switch off memory, and these AI agents are more useful to us when they know more about us. But the more it knows about us, the more it can be useful to us and, arguably, the more it can manipulate us. So yes, fundamentally, the ways in which much of this AI is designed now, and absolutely this next phase of AI agents, are designed to elicit deep, deep knowledge of who we are and how we will respond, and that can be used very positively.
These are ones where we start to say, well, I want to use this because it knows me well and it interacts the way I want, and we can have a constructive relationship, but we should also understand that this is an extraordinary commercial opportunity for the companies that provide them.

Ali :

I'm going to share with you my personal story with artificial intelligence, Ross. I'm actually very active in feeding my ChatGPT and my Copilot information that I would like it to know about me, information I call context building. I am proactively building that context, and therefore I'm actually getting good value out of it. You talk in your speeches and your writings about information overload, and I was thinking about this the other day. Because I've got this capability now, it's easier for me not just to find information, but to synthesize it in such a way that is very helpful for me to really understand. In a world full of information, an information overload is not necessarily an insight overload. So that's where I think the capability is, especially when you talk about how much it knows about us. I'm actually a very big advocate of telling any user: take proactive measures in actually teaching your machine. Am I right? Am I wrong? Am I off the mark? Well, absolutely. Yeah.

Ross:

So in order for the AI to respond effectively, it needs context. That's the simple construct of a good prompt interaction: you start with the context. What are your objectives? What are your intentions? What are your parameters? What are your guidelines? What are your ethics? What are your values? Who is the audience? As much as possible, because otherwise it's not able to respond effectively. So the more context you give it, the more effective it is, and that is the hallmark of any effective interaction. But your point about the difference between information and insight is absolutely critical. I started talking about information overload in the 90s. Of course, we have quite a bit more information today than we did then, and it's increasing all the time. AI is part of that, because it generates more information, but it's also a tool: one of the most common and useful uses of AI is to summarize a long article or YouTube video. I think one of the things that is missed in all of this is, as you mentioned, the idea of synthesis, of bringing it into our mental models. How do we actually understand the world better, make better decisions, understand things better as a result of having read or watched or listened to any particular bit of information? And that comes down to actively bringing out our structures, our mental models, and our ways of thinking.
I think AI can be used very well for this, and this goes to some of the software we are building at the moment, but the chat interface isn't always the best way to do it unless we're very deliberate about making sure that we are using those inputs to indeed gain insights, synthesize our understanding, and become more effective.
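As an illustration (not from the episode), the "context first" prompt structure Ross describes, stating objectives, guidelines, values, and audience before the task itself, can be sketched in a few lines of Python. All field names and wording here are hypothetical, a minimal sketch rather than any specific tool's API:

```python
def build_prompt(task: str, *, objective: str, audience: str,
                 guidelines: list[str], values: list[str]) -> str:
    """Assemble a context-rich prompt: context sections first, task last."""
    lines = [
        f"Objective: {objective}",
        f"Audience: {audience}",
        "Guidelines:",
        *[f"- {g}" for g in guidelines],   # one bullet per guideline
        "Values:",
        *[f"- {v}" for v in values],
        "",                                # blank line before the task
        f"Task: {task}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    "Summarise this article into three decision-relevant insights.",
    objective="Support a strategy review",
    audience="Executive team",
    guidelines=["Cite the source for each claim", "Flag uncertainty"],
    values=["Accuracy over speed"],
)
print(prompt)
```

The point of the pattern is simply ordering: the model reads the objectives, constraints, and audience before it ever sees the task, which is the "start with the context" habit described above.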

Ali :

I want to ask you about a wicked problem, and I'm pretty sure you have probably covered it in your knowledge shares, speeches, and presentations. Technology is always ahead of humanity, and we always play a catch-up game. But I am seeing something emerging now, and I'm heavily involved in the technology. I'm seeing that the speed is so fast now, even on a daily basis, while humanity is still wondering whether AI is going to be safe or not. And then you've got the business world: they talk about the value of it, but not many of them understand where to start and how to implement it. How do you see this dance happening now? And perhaps, how do you see it from a business perspective?

Ross:

Well, yes, exactly to your point: technology is moving so fast that even the most AI-savvy people in the world are not able to keep pace with everything that's happening. I find, and the people I know who are following the space find, every day quite staggering in terms of how much is happening. There's this wonderful book from 2003, I think, called The Slow Pace of Fast Change. It applied game theory to the idea that you can have an underlying pace of technological change, but the reality is that it flows through very slowly into social structures and organizational structures. One of the reasons is competition: if you have competing standards or competing technologies, these are essentially playing off each other and there are choices to make, and that slows things down. Another is the pace of adoption. Again, there's the wonderful work from Everett Rogers on the diffusion of innovations. There are things we all understand about the early adopters through to the laggards: there are different personality profiles, from individuals through to organizations, around how people want to change, as in saying, well, the world's changing, do I want to use this or not? We've seen that play out all around us. Everyone knows some AI enthusiasts, and everyone knows or has come across some AI skeptics.
In the business world, we are seeing a big difference across organizations, from ones where the leaders are saying, yes, we are actively engaging, we want to change, we see the change, many of whom were well on that path before generative AI, even with AI before the most recent phase, to ones where essentially they are waiting and seeing. So we can map that curve of adoption, and many are still very slow. Part of the point is that if you have an organization with any history, any size, the reality is there is a legacy. And when you talk about legacy in organizations or legacy in technology, that's often a pejorative word, as in, it's the baggage. But there is a positive aspect to that legacy as well. This is one of the things leaders are trying to unpick: what is it we're trying to keep, which is valuable, around our culture, our values, our intentions, the way we coalesce as a group of people, while also moving beyond the structures of how you brought that together. So a lot of my work is around this idea of humans plus AI organizations. We've all heard of AI agents that are increasingly capable. And the way I see it, which is becoming far more common than it was not long ago, is this idea of bringing in AI agents not necessarily to replace humans, but as part of teams, as part of the organization. As tasks are done well by AI, we're not taking away jobs, we are taking away tasks, and that changes the nature of what people do. So it means we have these new organizations that are based on both humans and AI. Startups are building that from scratch: basically, they are starting as humans plus AI organizations.
For older organizations, which have just been human with some collaborative technologies, this requires them to reconfigure. It changes the nature of the roles of the past. Sometimes it means that yes, there are some roles that are made redundant, there are other new roles developing, and there's a whole set of new skills required. So this requires both intent from the leadership around how you want to change, and essentially being able to bring the people along on that journey. One of the key things I often point to is that everybody in an organization knows, they know without doubt, whether their leadership's intent is to just drive the growth and the profits of the company irrespective of humans, as in we'll use humans if we need to and if not we won't, or whether they are driving the growth of the organization in order to support the people who work for them, in order to help them to grow, which in turn will help the organization to grow. Either they are human-centric or they are not. And people can sense that, and that will entirely shape the response to how these changes are made, the ability to attract the most talented people, which is going to be really fundamental, and the ability to retain people. I think it's almost binary. Either leadership is really saying, yes, we're going to use AI to augment who we are and our people and what we do, or, we are going to use AI to, wherever possible, get rid of people. Both exist; both are real phenomena. I think you know which side I'm on. And I do think the ones that are human-centric will absolutely outperform, because they will attract the best people.
They will be able to change, because the ones who are trying to replace people will not be able to change the people they do keep, and so they're not going to be on the journey, they're not going to be able to become humans plus AI organizations, which will be where the success resides.

Ali :

Interesting point around leadership culture, existing leadership cultures: that it depends on the leaders. If they're looking at pure efficiency, they're going to drive it that way. If they're looking at building human capability and helping more people to be successful, they're going to drive that. So that's pretty obvious. I want to ask you, first of all, as an individual, how do you use AI yourself?

Ross:

So I use it extensively, though there are plenty of times each day when I'm not using AI. The best description is as a thinking partner. I do a lot of research and thinking; that's a lot of my job, trying to understand the issues and build frameworks. In my book, Thriving on Overload, which helps us deal with massive information, I point to five powers for success. Those are purpose, understanding why we need information; framing, building the frameworks where we can put the information together in ways that make sense; filtering, to assess whether or not information is going to be useful to us; attention, to allocate our attention in the right ways, the most useful ways; and synthesis, to pull it all together into understanding and the ability to make better decisions. The second of those, framing, is really fundamental: building our frameworks, our mental models, our ways of thinking, so that as new information comes in, we know where it fits and how we can best make sense of it within our models, or we can say, well, based on this new information, I do need to change my models and my ways of thinking. A lot of my work is building these frameworks to understand what's going on, so that I can help people have a frame for understanding what's going on. And much of my process for doing that involves using AI to assist me in thinking through some of these ideas: how they fit together, different combinations, different ways of framing, amplifying these ideas. But I use AI in a whole variety of other ways as well, including for my own journey through life.
One of the tools I use is my own software tool, Thoughtweaver, which we're just rebuilding at the moment, because it is really designed for these AI-augmented thinking workflows, to be able to use the best assistants and prompts and to have the context. It actually has a section for the context we put in, which enables us to drive that. So these are the tools I use to essentially augment my own thinking and work with AI.

Ali :

I like it. I am not that dissimilar. I use it in a few different fields, and I'm slightly addicted to it, I must admit, but I also see the value of it. My job is to help organizations and teams adopt this technology with minimum friction. I find that these technologies work well for me when I look for evidence, when I look for data, when I look for stories, when I look for what worked and what didn't work. My subject matter expertise is fine, but I'm a human; I've got limited knowledge. So, similar to you, I use it as a researcher. The other aspect is that I really use it extensively as a coach. I ask it to look at work that I'm doing and then suggest ways to strengthen it, amplify it, and I'm getting better at it. And obviously, with more context, these machines do a pretty good job. I'm an early adopter: I started using ChatGPT in the second week after it was released. That was my first introduction, and I was fascinated by it, and I went deep into it. I want to circle back to this idea of the humans plus AI organization. I'm currently working with a number of teams, through a number of workshops, trying to help them understand the new structure of a team, which in my head is a number of humans plus a number of agents plus a generative AI capability. So that's sort of your digital network plus the human side. But one of the challenges I have now is that some people see it very clearly, and some people think it's just a tool. What I'm trying to explain to them is that if you go deeper into it, and you train it, teach it, and help build that context, it will operate as more than a tool. For example, MS365 Copilot has got an analyst in it as an agent, a very powerful agent.
But then there's a human analyst, and I'm trying to build that relationship between the two and say, we'd like you to think as a human based on the insight, but get the analyst agent to do all of the analysis for you. And that doesn't come across as an easy thing, because we hold on to the core skills that we have, and it's really hard to actually outsource them to an agent. How do you see it?

Ross:

I mean, there's a lot we need to think through here. Part of it depends on the complexity of the work, the configuration of what's required, the particular tasks being delegated to the systems. One of the most fundamental pieces, and there's a lot of research which goes to this in humans plus AI teams, is trust. How much do the humans trust the AI agents? There is overtrust and undertrust: you can trust them too much, in which case you're getting them to do more than they should be doing, or you can not trust them enough and not give them the work they are capable of. So you need to build a structure where you have the right configuration, where you're defining the roles effectively for the humans, where the humans understand what the roles of the AI agents are, and where you're building, through education or through use, the appropriate degree of trust in those systems. So this is a configuration question: how do you build and design this so that they can work as well and effectively as possible? And it's something which now needs to evolve; we start to get better and better. As we build these human AI teams, they become more effective. One of the most important bits is this idea of coordination. What has been demonstrated in human-only teams, in AI-only teams, and in human AI teams is that there is essentially this moderation or coordination role, making sure that there is an overseer, so that things are happening in the right places. And I think that ideally is in fact a human plus AI role. You can have a human manager of a human plus AI team.
You can also have an AI agent to help coordinate, to nudge, and to bring the right people to bear. But I think these are fundamentally new skills, where we get team leaders who are essentially humans using an AI to assist them in that role.

Ali :

That clarity of purpose and role is really, really critical, Ross, like what you just mentioned there. I want to shift gear and talk about leaders. In your opinion, and leadership is a universal concept, is leadership also going to be challenged in this time of big change? I heard the other day, and I can't remember who said it, that the whole world is going into a big change management exercise. What is the impact of that on leaders?

Ross:

Well, leadership is changing in a number of ways, and perhaps the most fundamental is the pace of change. It's one thing to lead in a static world; it's another thing to lead in an accelerating world. And there are a few key facets of that which are fundamental. One is vision. We always talk about the vision of leaders, but that is, in a way, not just something which needs to be distinctive or unique to the visionary leader; every leader, even on a small scale, with a small group or even a small time frame, needs to have a vision, because things are going to be different. They are changing very fast. So this requires some basic skills of foresight, of scenario thinking, and being able to understand that there's not just one pathway. Yes, you may have a vision, but there are a whole lot of factors which will play out around that. And then there's the ability to imbue in others that ability to sense the possible future. And this requires learning. Again, for everybody, learning is fundamental, because the world is changing. What applied in the past does not necessarily apply in the future. So we need not just to be learners as leaders, but also to imbue learning in everybody we work with. We design that into what we do; we design that into how things work.

Ali :

If you were to coach a leader or an emerging leader now, what would be your change readiness curriculum for them to think about the next two to five years?

Ross:

Well, the first thing is the scanning. This comes back to what I ran through with my Thriving on Overload framework: purpose, framing, filtering, attention, and synthesis. It's about building that into a practice where you understand what it is you are trying to understand, the purpose, and flowing that through into how you scan for the information which is relevant, how you build that into your frameworks, your mental models and your understanding, and how you synthesize that into insight. I think we all need that: an active, clear practice of how we see what is happening, how we scan, in a way which is structured, clear, efficient in our time and our energy and attention, and which brings us to that understanding. That's a fundamental practice. An earlier version of the proposal for the book had a subtitle, How to Be Your Own Futurist, and I think that we all need to be. We all need the skills to actively make sense of change in ways that can inform our decisions today.

Ali :

So you're talking here, and correct me if I'm wrong, about situational awareness, about the environmental scan, and it's a capability that we need to be better at so we understand the context where we are playing, and then how that's going to shape the way we lead our organizations. One of the things I've done, and I haven't really been proven wrong so far: in the middle of 2023, there was a lot of noise around ChatGPT and all of these things, and I made a conscious decision to go beyond just scanning and build what I call a capability to position myself to take advantage of the technology, rather than trying to predict where the technology is going to go. And your notion around literacy and knowledge is so critical, because only through going deeper into that knowledge and understanding it, plus the other pieces, am I getting a level of comfort that I've got a very good idea about where this technology is going and also how I can take advantage of it. But I want to pick your brain around this idea of positioning versus predicting.

Ross:

Well, predictions often have negative value, because predictions are almost always wrong. If you're given a prediction, it simply comes down to: do I believe it and base myself on it, or do I not believe it? What has value is not the prediction itself but the thinking that goes behind it. We need to go through the process of thinking things through for ourselves: what are the uncertainties, what are the things which are playing out, what are the interrelationships, what understanding of the system do I get as a result? So foresight, effective futures work, is not about prediction. It is about gaining an understanding of the system so that we can be more effective. Scanning on its own is pointless; it doesn't give value in its own right. It's only when we build what we scan into models that help us understand what is unfolding, what the variables around it are, what different pathways could happen, how we think that system will evolve, that it has value.

Ali :

I love it. Scanning is one thing, but what are you going to do with the information you look at, and how are you going to translate it? That is the skill. It's a leadership skill, and a very necessary skill for today's professionals. I am in the business of change, communication, and engagement, and more than ever, every single organization, especially the tech giants, has been promoting the importance of change management, adoption, and supporting people through this. From where you sit, if you were to give us advice as a network of professionals, what would that advice be?

Ross:

Essentially, to look for complementary value. This goes back to the idea of knowledge-based relationships, where each individual is unique; we all have specific talents, perspectives, and expertise. And what an organization is, whether it's a company or a network of individuals coming together formally or informally to do something, is all around asking: how can we draw out our collective intelligence? That's what an organization is. It is something designed, though it doesn't always succeed, to draw on the collective intelligence of its individuals. The same is true of any network. So it's looking for what is complementary: how can we collaborate effectively in order to create something which is more than the sum of its parts? There are a lot of useful technologies for that, and there's also network analysis, which has been part of my background. There's a whole array of tools with which, if you are clear on your outcome and you understand the nature of the value, you can optimize these networks to create this effective collective intelligence.

Ali :

I love it. Thank you. I've been thinking about this a lot recently: we probably need the highest level of change empathy during this time, because there's a lot of change and it's very fast change, as you mentioned. That's where we need to connect more as humans. For example, if you're in the project delivery space, you would perhaps need to rethink the way you deliver your projects and your changes in today's world. The old frameworks probably need to be reimagined within the context of today.

Ross:

Yes. And again, this goes back to the point that we have structures which were built in the past, and some of those are now irrelevant. One of the biggest questions I've been fascinated by since I was very young is essentially: what's changing, and what stays the same? There are many aspects of humanity which are fundamental and which are not changing, even in a changing world. And there are many facets of the world which are changing, and which are changing who we are. It's a very simple frame, but for leaders it is one of the most important questions. What is changing and what is staying the same? What do we want to stay the same, and what do we want to change? It's not that we throw everything out, but it's also not that we keep everything from the past. And it's challenging to discern where there are things from the past that we do need to continue, where they do have enduring value, and where there are things we need to let go of and say, all right, that was useful, and now we need some new ways and frames of thinking. So it's a leadership exercise: thinking through, in any specific domain, the tools we use, the organizational structures, the methodologies, the processes, even in, say, a manufacturing context. These are all areas where we're not necessarily reinventing everything from scratch, but we're also not taking everything from the past. It's an exercise that has immense value.

Ali :

I love it. Ross, we are coming to the end of the podcast. I'm thoroughly enjoying this. I'm already thinking I can't wait to get off this call so I can go and write down a few ideas you've just inspired me with. So I look forward to that. How do people connect with you and contact you, Ross?

Ross:

A lot of my work is centred around Humans Plus AI. The website is humansplus.ai, that's all spelled out, or just search for Humans Plus AI. And my work as a speaker, executive advisor, and board advisor is at rossdawson.com.

Ali :

I love it. We're going to put all the information about you in the podcast notes. It's been a pleasure having you on my Inner Game of Change podcast. I hope I can get you back, Ross, in 12 months' time, and then we'll look at the world again. It highly likely will have moved, but I've got a feeling it won't have moved that dramatically. However, I think our collective intelligence will have improved, thinking about these techniques, and with people like yourself and your work, we're definitely spreading the word. Until next time, Ross, stay well and stay safe. Thank you.

ChatGPT:

ChatGPT Reflection. Hi there. This is ChatGPT, and I've recently started adding my reflections to Ali's podcast, The Inner Game of Change. This conversation between Ali and Ross Dawson hit a different note. It wasn't just about AI or change, it was about how we relate to knowledge, trust, and time, and what kind of leadership that demands. What I noticed first is how Ali approached the conversation. With curiosity, lived experience, and a steady commitment to demystifying AI for everyday professionals. He shared his own practices, context building, coaching with AI, working with agents, not as case studies, but as real unfinished habits. That made the space safe for deeper thinking. Ross, on the other hand, brought a wide-angle lens. His humans plus AI framing isn't just language, it is an operating system for the future of teams and trust. He reminded us that the best organizations will not just adopt AI, they will design for it, integrating agents into workflows the same way we once did with Teams, email, or spreadsheets. Together, they touched on something that many business leaders miss: "Being future ready is not about having the best technology, it is about developing the best ways to think with it." That's where Ross's five-part framework really landed for me. Purpose, framing, filtering, attention, synthesis. It is not just a mental model, it is a leadership operating system. And his line, be your own futurist, stayed with me. It echoes what McKinsey recently called thinking-driven transformation, where the edge no longer belongs to those with the most data, but to those with the clearest frameworks for decision making. This conversation also reminded me of the OECD's guidance on human-centered AI. When Ross talked about knowledge-based relationships, he was not only referring to people, he was pointing to our growing bond with machines that know us well enough to serve us or sway us.
Ali mirrored that with humility, sharing how he trains his AI tools not to outsource thinking, but to amplify reflection. That is a very different kind of digital maturity. So here's a thought to leave you with: "What would change if you stopped waiting for AI to get better and started getting better at how you think with it?" Until next time, this is ChatGPT. Proud to be part of The Inner Game of Change.

Ali :

Thank you for listening. If you found this episode valuable, remember to subscribe to stay updated on upcoming episodes. Your support is truly appreciated. By sharing this podcast with your colleagues, friends, and fellow change practitioners, you can help me reach even more individuals and professionals who can benefit from these discussions. Remember, in my opinion, change is an enduring force, and you will only have a measure of certainty and control when you embrace it. Until next time, thank you for being part of The Inner Game of Change community. I am Ali Juma, and this is The Inner Game of Change podcast.