The Company Road Podcast

E29 Sarah Kaur - AI for good: The healthy intersection of AI and human

January 30, 2024
Chris Hudson

“The level of power that consumer facing Gen AI offers with such intuitive accessibility for such essentially a low cost for that high powered tool… is unprecedented.”

Sarah Kaur

In this episode, you’ll hear about:

AI as an enabler: How AI can be used to enable and empower unique work processes and scenarios, including key examples of how it can enable design thinking and transformation work

Digital divide and inequity: Confronting the challenge of digital inequity, and strategies for levelling the playing field of technology across the workforce

AI's efficiency vs human value: Balancing the use of AI technology with traditional human work and where and how both are most valuable

ChatGPT in design workflows: Analysing how ChatGPT has changed the approach to design, how it will continue to do so, and how human participation and perspective must be recentered for continued value

AI upskilling and ‘Shadow AI’: How ‘Shadow AI’ is allowing for expanded creativity and creating opportunities for upskilling the workforce and why you should be taking advantage of it

Key Links

Max Kalis episode: https://www.youtube.com/watch?v=2_fTrmruTPk

Shadow AI: https://www.forbes.com/sites/delltechnologies/2023/10/31/what-is-shadow-ai-and-what-can-it-do-about-it/?sh=1b4aa39b7127

ChatGPT: https://chat.openai.com/

Data61: https://research.csiro.au/data61/

Copilot: https://copilot.microsoft.com/

Lorraine Finlay, Australian Human Rights Commissioner: https://humanrights.gov.au/our-work/commission-general/human-rights-commissioner-ms-lorraine-finlay

About our guest

Sarah Kaur (https://www.linkedin.com/in/sarah-tamara-kaur/) is a Strategic Designer and Human-centred Design Researcher. Her passion is supporting teams across disciplines to collaborate and come up with smart ways to create impact through participatory design. She now works as a Design Thinking Practitioner at CSIRO's Data61, where she focusses on research into Responsible AI and on embedding human insight in machine learning and AI research and product development.

She holds a Master of Business Analytics and a Bachelor of Fine Arts, and has 12 years of professional experience helping NFP, government and private organisations realise their business objectives by placing humans and quality data at the centre of their decisions and investments.

About our host

Our host, Chris Hudson (https://www.linkedin.com/in/chris-hudson-7464254/), is a Teacher, Experience Designer and Founder of the business transformation coaching and consultancy Company Road (www.companyroad.co).

Chris considers himself incredibly fortunate to have worked with some of the world's most ambitious and successful companies, including Google, Mercedes-Benz, Accenture (Fjord) and Dulux, to name a few. He continues to teach with Academy Xi in Innovation, CX, Product Management, Design Thinking and Service Design, and mentors many business leaders internationally.

For weekly updates and to hear about the latest episodes, please subscribe to The Company Road Podcast at https://companyroad.co/podcast/

Transcript

[00:00:00] Chris Hudson: Hello, happy Wednesday everyone, and welcome to the Company Road podcast. This week's show is for all of those that want to do more good in the work that they do, in one way or another. As intrapreneurs, we're often wondering about the impact we can make for the teams that we work with, for the companies we work with, and for ourselves.

But today we'll also be talking about ways in which we can impact the society in which we live, the community, and the wider planet. And it gives me great pleasure to introduce my next amazing guest, Sarah Kaur. You're very welcome to the show. Thank you very much for coming on.

[00:00:26] Sarah Kaur: Hello, how you doing?

[00:00:28] Chris Hudson: Yeah, good, good.

Sarah, you're a strategic designer and human-centred design researcher. We're looking forward to, really looking forward to hearing about some of the work that you do. You're currently a design thinking practitioner at CSIRO's Data61, where you're focusing on research into responsible AI, embedding human insight into machine learning, and you undertake AI research and product development, and some really cool stuff.

Just applying that technology. You've got a bit of a background in business analytics, but also in fine arts, and you've spent your incredibly successful career supporting nonprofit, government and private organizations alike. Welcome to the show. I'd really love to hear about some of the things that you've done. Tell us a bit about your journey and how you've found yourself in this position where you're bringing together the worlds of analytics, maybe a bit of art, but maybe not. I don't know. Tell me about it.

[00:01:15] Sarah Kaur: That was a really awesome introduction. I think how I got here, if I look back, whether it's been being interested in arts, or in digital production and what transformation means in an organizational context, or human-centred design, or artificial intelligence.

I think the through thread is just being incredibly curious about the creative potential that happens between humans when you are given a meaty enough problem. So whether that's doing a community arts project where they want to tackle an issue that's close to their heart, or whether that's being in an organization wondering, well, what is a good use case for AI now that everyone's jumping on the bandwagon? And how do we do that responsibly?

And right now, in my role at CSIRO, I work in the engineering and design group within Data61, so I'm lucky enough to be working with data scientists and software engineers who have been producing a lot of the original research around safety guardrails and the evaluation of good practices around AI, AI ethics, and how that is implemented and governed in practice.

And so that's a really meaty problem, and it doesn't not touch anyone. It's very multidisciplinary. So right now I'm just very immersed in that facilitation of creative collaboration to tackle a critical and emerging issue.

[00:02:46] Chris Hudson: Amazing. I mean, that sounds so cool. I think the title Data61 says something in itself, right? It feels like that's a top secret organization doing some clever things, you know.

[00:02:56] Sarah Kaur: I haven't found out why it's called Data61. I really should.

[00:03:00] Chris Hudson: No, I mean, it just sounds like a code name for something that people shouldn't know about. But maybe, maybe there's, there's amazing work going on. What's some of the stuff that's keeping you busy at the minute?

[00:03:09] Sarah Kaur: We've just released a report, like an interim report, with some initial findings around what investors might want to pay attention to when they're looking at a sector or a company, where they're looking not just, I guess, at the financial considerations, but starting to look at ESG, and now there's AI.

This report documents some findings from thinking about an ESG-plus-AI framework for investors to consider. So a lot of it's around identifying material use cases of AI in particular sectors and identifying good management practices that lead to responsible use, whether that's thinking really strategically about AI, or managing it right from a policy, compliance and regulations perspective.

So that's been released, and that's exciting. The other thing that's keeping me busy and really focused and interested is the idea of an, I'm doing quotes with my hands, "AI footprint." The idea is similar to carbon and how we've developed an idea of a footprint: how do we start understanding the external impact of the way companies use artificial intelligence in their systems, in their supply chain, in their workforce, on consumers?

And I don't think footprint is going to be the name of the concept forever, because it implies that it's a negative thing that you want to reduce. I think the idea is that actually artificial intelligence offers us amazing opportunities to have a good, positive net impact. The framework is aimed at ASX companies, and it's trying to encourage them to think about impact at the board and CEO level and not kind of tunnel into the internal governance mechanisms. But, you know, that space of impact is really nebulous: what is it? Impact evaluation, what is that? And so developing all those metrics and ways to have a conversation around it has been keeping me busy, if not up at night.

[00:05:27] Chris Hudson: I'm sure. Yeah, I'm sure. It feels like that sort of awakening around AI and using it as an enabler has happened particularly in the last 12 months, I'm going to say, where it went mainstream. I know it's been around for a long time. Machine learning has been around for a long time.

It just feels like that sense of possibility is very present. Everyone's very aware of it. You're probably finding the conversations within organizations are a lot easier in and around AI, you know, the acceptance around it, the fact that we can all use it for good. I'm wondering, in those areas that you described, to what extent it has really been an enabler, from the point of view of assessing different datasets probably, and then really understanding how it can translate into meaningful outcomes.

You mentioned impact, but if you were to describe how AI has been an enabler in your world, what would you say?

[00:06:12] Sarah Kaur: Several facets, right? I've been thinking, and kind of talking with other design practitioners for the last probably 12 months, around Gen AI specifically, you know, with the release of ChatGPT, and how that has started to be integrated into design team workflows. And I think where we've landed is that it's an amazing tool. For me specifically, I find it really great at generating speculative scenarios that I couldn't have come up with on my own. So how I've been using that, for example, is if I'm preparing to go into a co-design workshop, what I might do is take a concept and work with a GPT to kind of push that concept into a few different variations.

And then take them into the co-design workshop to say to real people, hey, look, this is one concept. These are a couple of different ways it could play out. And I am pretty transparent. I'll be like, I generated this with GPT. This is inspo material, just to kind of broaden our thinking together about what shape this solution might take.

And now it's time for us to kind of do our own thing. And I find that that's useful. It's useful as a tool to generate, I guess, ideas that are honestly more diverse than I could have come up with as a solo practitioner.

But it's also useful to take the output back into the human realm of co-design, and also to acknowledge the presence of GPT, because what I'm finding is some people are starting to get worried, with online and remote co-design workshops: are my participants generating input themselves, or are they also going to GPT?

Like, are we in some massive recursion loop? So it's almost like saying, look, we're in this together. The point is that we're humans, that you've got rich, rich experiences and that you're here for a reason. We've already brought GPT into the conversation. It's an enabling tool for us to move on with together. So it's almost like recentering the value of that human participation and perspective. That's one way.
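For anyone who wants to try the workflow Sarah describes here, below is a minimal sketch of using a GPT model to push one concept into a few divergent variations before a co-design workshop. It assumes the OpenAI Python client; the model name, prompt and concept are illustrative placeholders, not anything Sarah specifically uses.

```python
# Minimal sketch: generate divergent variations of a design concept to take
# into a co-design workshop as clearly labelled, AI-generated "inspo" material.
# Assumes the OpenAI Python client; model and concept are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

concept = "A community tool library where neighbours lend and borrow equipment."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are helping a design facilitator prepare "
                       "speculative scenarios for a co-design workshop.",
        },
        {
            "role": "user",
            "content": f"Here is a concept: {concept}\n"
                       "Generate three deliberately different variations, each "
                       "pushing the concept in a direction a solo practitioner "
                       "might not think of.",
        },
    ],
)

# In the workshop these would be presented transparently as GPT-generated
# provocations for humans to react to, not as finished designs.
print(response.choices[0].message.content)
```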

[00:08:24] Chris Hudson: Yeah, maybe. I mean, just on that point, it feels like we as humans are kind of biased by the way things have always been. It feels like we've always been in these sort of meeting, workshop-y type scenarios, particularly in the corporate environment, the organizational environment.

That's run a certain way for a long time. So introducing something like GPT as a generator for ideas feels like a bit of a cheat. But actually, if you think about echo chambers, the echo chamber was always there, because you were in the four walls of your own organization, talking about the stuff and the assumptions that everyone already had, using the data sets that everyone had, you know. They'd been peddling those things for a long time, and it was always resulting in non-breakthrough thinking, in a way. It was always resulting in confirmation bias of what you already had there to begin with, in one way or another. So I think the advance of GPT and the advance of AI is actually giving us the opportunity to more easily access an outside point of view, because research and user testing is obviously a little bit daunting in some people's cases. It feels like you have to get that over the line.

You have to business-case it, you have to budget for it, you have to spend lots of money and time doing it. So in terms of generating ideas, probably more qualitatively I'd say to begin with, it's quite a good way of just bringing new points of view in. And arguably it would be more wide and varied, like you were saying, than what you'd be able to come up with in a room full of 20 people anyway.

So it might just start you at a higher level than you'd be able to begin with in the first place. Has that been your experience?

[00:09:58] Sarah Kaur: Yeah, and you made so many interesting points there, right? Like around diversity, or divergent thoughts. It's a tool to help us get beyond the people in our little world, because it's true: no matter how carefully you try and curate the people who come into your workshops, there's inevitably a level of bias or limitation.

So I have found it really helpful, like you've said. But I do think it's worth looking at where I draw a line on what I won't use GPT for; for example, for synthesis and insight-making.

[00:10:35] Chris Hudson: Hmm.

[00:10:36] Sarah Kaur: Several reasons for that. Maybe some of it is really old school thinking. I'm like, what do I value about myself?

I value, I guess, my immersion in a content area, or I value my human perception of who was in the room when we recorded or got outputs, or got interview raw details and commentary. And I don't find that when I've tried using GPT to summarize or synthesize a lot of information; I kind of feel like it takes it to the lowest common denominator.

And it's not great at unpicking the tensions or the insights in a dialogue in an interview. So that's something that I still prefer to do myself. 

[00:11:24] Chris Hudson: Yeah, I'm with you. I agree with that. One of the areas that you mentioned was ESG in particular, which I feel has been talked about a lot but quite loosely defined, and I'm wondering how you've found the experience of using AI, or maybe not using AI, but how you're trying to evolve the conversation around ESG in a sense, and what's been useful in that regard.

[00:11:45] Sarah Kaur: In the conversation around ESG and AI, what we've been finding is that it's really early days for most companies to start thinking about how to get signals back from their use of AI. There are clearly sectors, or specific companies and tech startups, where that's a very clear value proposition and it's attracting a lot of investment money.

But then there are, you know, really established companies that are starting to integrate AI use cases. I would say in Australia they're still mostly internally facing, and we're still a little bit hesitant, and maybe that's a wise thing, to deploy external use cases like consumer-facing AI directly. But the trick is, if we think about the ESG journey, and again we look at the environment and the massive journey that we've had to go on, from looking at carbon usage, and then scope 1, scope 2, scope 3 emissions, I found it really fascinating to go, gosh, we did get there. Those are really complex areas to try and wrap your head around, and now, as investors and as company directors and leaders, it's part of the conversation. We're not allowed to not have that conversation. What I really want to encourage investors to do is hold companies to the same degree of accountability when thinking about AI, and what's really interesting is that it's such early days.

I think the potential for conversation and behavioral change and accountability can really be driven by the investment community asking smart questions. So in the framework that we're developing, I guess we're trying to give investors a little bit of a sense of how AI might interact with existing ESG components.

So is it that the materiality of risk is increased, because the scale of what could go wrong, or right, dramatically increases? Or can it be thought about as part of an existing category? We might think about the use of AI as part of tracking your environmental electricity or carbon usage, but there are also areas that are distinct, where we need analysts to be able to go, oh yeah, I do need to engage the company on how they're managing a technical governance process for AI that might be new.

[00:14:19] Chris Hudson: Hmm. Before AI, there were always ways in which organizations would set up their systems and processes, and the policies basically, all the things that would surround an unknown topic with some level of safety, so people would understand how to engage with it and what to do. Is there something from having to now bring AI in that has a close precedent in the way things have been set up or run previously?

[00:14:43] Sarah Kaur: I think this is interesting, because I don't know if there is one answer in terms of a precedent. I think there have probably been several tools like that in the past, but the level of power that consumer facing Gen AI offers with such intuitive accessibility for such essentially a low cost for that high powered tool, like, that's pretty unprecedented. But I wonder if we should spend a bit of time talking about the idea of shadow AI, or shadow Gen AI.

Have you come across that concept? 

[00:15:17] Chris Hudson: No, tell us about that.

Sounds cool. Sounds like something you do at Data61. It's very, very covert. Once more, yeah. Let's talk about shadow AI. Yeah.

[00:15:27] Sarah Kaur: Nah, shadow tech in general refers to tech that's not been approved by IT. There's not necessarily yet a policy of use for it.

[00:15:38] Chris Hudson: Mm. 

[00:15:38] Sarah Kaur: You might use it in your personal life, and it might bleed over into your professional life, but the idea is it's kind of hidden from the perspective of your corporate governance processes, policies and environments.

What we do know is that tools like ChatGPT and others are being used by plenty of employees around corporations, and what I wanted to do was think about it less as a massive risk and think about the opportunity that it brings, right? Because I feel like you'd be familiar with this in your org design and strategy work: employees really know a company, they know the company values, they know their job really well, they know their workflows really well. And what you could do is go, not only do I have a workforce that knows me and their job and is really competent, but I also have a workforce that needs to be upskilled in AI.

And I also have a fair proportion of people that I can assume, and should assume, are already using it. Oh my god, I've actually just got a levelled-up workforce. How do I make it possible for my workforce to say, yeah, turns out I'm using Copilot for this and it's amazing, I'm doing it, like, ten times faster, or

[00:17:01] Chris Hudson: Mm.

[00:17:02] Sarah Kaur: turns out I'm using GPT in my personal capacity, and I've worked with it and come up with a more efficient way to design this process. So maybe it's less about saying, slap on the wrist, don't use it, don't shadow-AI. Maybe it's about saying, tell us about all of these creative things.

And that's a really great way to move forward into the new kind of upskilled workforce.

[00:17:28] Chris Hudson: It reminds me a little bit of a previous episode that I recorded with Max Kalis; I think you know him from your time at Portable as well. He was talking about his time at Lloyds, the fact that he set up this Dead Pony Club, which was all about innovation and basically creating a safe environment, an okay environment, for these types of thoughts and experiments to actually exist and flourish.

I think creating the acceptability around it is the start: making it a known phenomenon that people can actually jump in and use, and then setting even just very loose or basic guidelines around it to begin with, will give people that level of comfort, because I think the first step is a really important one, particularly with AI.

It's really about taking that leap of experimentation into the unknown for the first time, where you're experimenting with technology that you've never used before, but you're also thinking about how you like to work and how personally you would like to use it as well. So that's part of it.

And then I was coming on to say that the part that is maybe problematic is around tech adoption more generally. It feels like there's always going to be a leading cohort who are way ahead and want to get into it, and there's always going to be a lagging cohort that want to know that it's a bit more safe and established, and see that other people are doing it before they jump in themselves. And I think in an organizational capacity, that presents quite an interesting problem. You were talking about AI being a really amazing levelling tool, around how to create equitability, and it's like an egalitarian way of working because everyone's got access to the same stuff. It was like when Google first came out and you could just search things up for the first time.

Now there's AI or ChatGPT and people have all got access to the same things. So that presents a newer opportunity. Because if it's easy to use, then actually it does become flatter. And it's not just about tech adoption and some people knowing how to code and other people not knowing how to code.

It's actually something that everyone can use. So I'm just wondering, within that context, whether you feel, and have seen, that that can work a lot more effectively, particularly with this advanced technology. And is this going to be the way in which tech developments now continue to surface, do you think?

[00:19:31] Sarah Kaur: Oh, I'm really torn on this one, Chris, because I think a lot of the people that you and I will work with, either as our clients or our peers, will have no problem accessing it. But there are huge parts of our population where the digital divide has already existed for very many years.

And I think this probably will accelerate inequitable outcomes, which is really worrying. But it's not even just about, do I know about the tool? So it's not just about awareness. It's not just about connectivity, like, do I have a way into the infrastructure that allows me to access this?

It's almost about the capacity to imagine how to work with a tool of the Gen AI category, to dream big and be able to do the thing that you want to do. There's a story from Lorraine Finlay, the Australian Human Rights Commissioner, and she was making us aware that in schools, the rollout of Gen AI has been really interesting. She was observing that in private schools, or schools in suburbs in high socioeconomic status groups, students are doing things like using GPT to do coding projects or to help them get ideas for how to launch a robot into space, which is amazing. But then she was hearing anecdotes about students in different circumstances and different schools, and the questions they were asking GPT were like, can you write my resume for McDonald's?

They're kind of constrained in, what is my goal? What is my ambition? And I'm going to use the tool for that. So there are all kinds of layers of inequitability in terms of access, but also imagination and capability. And I don't know how to get on top of that, but I think that is a very good challenge that the design community could lean into.

[00:21:37] Chris Hudson: Yeah, definitely. I mean, I want to make a wild leap now into your background, which is around art and creativity, because I think there are some interesting things to explore here around that very point, which is to do with the fact that you can have all the tools at your disposal. You can have all of the paints, all of the acrylics, you know, you can have the finest 300-pencil set of Derwent pencils, right?

But not everyone can do the same things with it. So from that point of view, it is limited in a way by the possibility that you see. And just on that point, it's pretty popular now to think of innovation or creativity as something that can be taught, rather than something that's within you, within your nature, as you're born. You know, there are lots of frameworks for lateral thinking; we think about all the amazing corporate innovation frameworks and things that exist and that people use very frequently now. But what we're talking about now is not necessarily a framework that's prescribed like that; it's something that's much more open, and it would diverge, and it wouldn't be limited by your capacity. So in a sense it's probably more like using paints or pencils or Lego or something, as opposed to having a fixed kind of creative playground and expected way of working. So how do you think people are going to deal with that?

[00:22:52] Sarah Kaur: Oh, it's a big question. I mean, I think your portrayal is right. It is a toolkit, right? Of colors or palettes. I think it's also maybe up to educators, if I'm thinking about school children, or us as parents. I'm thinking it might be up to us to, in a way, try to demonstrate the breadth of what, at least, we know to be possible to our kids and even our colleagues, and say, hey, I've been trialing this thing, or check out this quick experiment. Because what I'm finding a lot now is that it's quick to make prototypes with Gen AI, and it's quick to share them.

And then there's this moment where people I've been working with, like my colleagues at Data61, might show me something and I'm like, whoa, how did you do that? You did that really quickly. I've got a very similar use case. I wonder if I could learn that thing from you, or even reuse some of your tooling and code and do that for myself. And the next level up is saying to others, hey, I built this thing, it's on this link, go and have a play yourself. I think it's maybe a combination of demonstrating, I don't want to say the art of the possible, just demonstrating what we have been trialing, and putting it forward as, this is an experiment, not a finished product.

Like, can you go have a play and see what you think? I've been finding that inevitably that leads to a creative moment for others, where they're like, oh, you can do that? Well, I've been thinking about this; I'm going to do that thing as well, and share it back with you. So I think we're in this moment of creative exchange with it.

[00:24:29] Chris Hudson: Yeah, yeah. I'm just thinking about other parallels. You can see it in the world of marketing, for example, where marketing practices and the expression of creativity through that medium have taken many twists and turns. And I think there's the familiarity within that industry with other people's work, but also with how propositions and brand, and layers upon layers of campaigns and experiential thoughts and activations and all of that sort of stuff, evolve based on familiarity, really, because people see that somebody's done that. Okay, well, there's a new technology; we can bring it together with this idea and we can then do it. So I think there's a degree of peer-to-peer comparison, really, that makes it familiar and then safe, and then it's out in the world.

You've got to be able to see it to be able to do it. And even us, we're probably quite frequently in talks and webinars and at events where people are talking about the possibilities of AI, and you've got to assume that we're in a minority group; really, not a lot of people know about the possibilities of this or are using them day in, day out. But it's going to get to that point.

So it's going to be interesting to see what people can come up with. It's a bit like TikTok or Instagram or anything like that. It's then handed over to the masses and we can take it on. There'll be people that see it and just do amazingly creative things with it, and then other people that choose to leave it behind and not engage with it.

[00:25:52] Sarah Kaur: Yeah, it's huge. I guess it's one of those classic, like, this is a disruptive technology moments.

[00:25:58] Chris Hudson: Yeah.

[00:25:59] Sarah Kaur: Speaking of marketing and brand and stuff, I was speaking to a friend who is a brand and content strategist. And I was saying, hey, are you worried at all about your job changing in a way that you don't like?

And he actually offered that companies are coming to them now to say, hey, I really need you to write 200 examples of very well-written, on-brand copy about a variety of topics, and I'm going to take that and feed it into our large language model, like, knowledge source, and train the AI to do that in future.

[00:26:37] Chris Hudson: Wow. 

[00:26:38] Sarah Kaur: He was saying, actually, I would do that at the moment. I would produce those kinds of examples or templates about how to use a brand voice; I'd just be doing it for human copywriters. So it might not be for humans, but actually the role and the expertise that I need to bring doesn't really change.

And that brings up so much for me, right? I'm like, oh gosh, what is that? What are all the layers between those of us who have skills and expertise that can, for now, be used to train AI, and what happens to the workers down the chain who might be displaced?

[00:27:12] Chris Hudson: Hmm. 

[00:27:12] Sarah Kaur: But I'm generally really hopeful that it's a matter of going, okay, if that's happening, what is the human value that I bring to my job? What is the nuance that I bring that is unique to me and my experience of being human? And how does that become present in my role? I think that's going to be a real meaty challenge for us.

[00:27:32] Chris Hudson: It will be. Yeah. I mean, that's the next layer of self-awareness, where you'll basically compare yourself to an evolving technology that's moving so fast that you don't know what you're comparing yourself to, in a way. It's really hard.

[00:27:43] Sarah Kaur: So existential. It is. I was going to say, do you have anything where you're like, I will not use AI in my design workflow for this, because it's not good or because I don't trust it? Where are your no-gos at the minute?

[00:28:00] Chris Hudson: I think the one that you mentioned around synthesis, and the insight generation process. Obviously you can use it for that, but I remember times at school where I was taught how to summarize things in an English lesson, right?

You have to look at a chapter or a piece of work, read a chapter of a book, and then say, well, what are the key themes in this chapter? And you've got to bring all that together. That's a process that's taught, but everyone's answer to it might be slightly different.

The fact that, you know, it is being used to kind of force very base connections, you're right: it's down to the base level, down to the common denominator, in almost the safest way. And the work that I do is never really in that realm. It's more, how do we look for unpredictable patterns, really? I'm looking for quite a clear hierarchy of information that evolves from very base, raw data into something that's a bit more clustered, a bit more groomed, observed as a theme, but then looking at a constellation of themes and how all of that works together and what it then means.

And I'm actually trying to break theming more than I am grouping things naturally, as a way of bringing new thought into that process. So I'm struggling to think how that could work from an AI point of view. Obviously you can train it to break things too, but it's almost the counter to what you're trying to get it to do, in a lot of ways.

[00:29:24] Sarah Kaur: No, I hear you, because if you think about the large language model, it's really trying to predict the statistically most likely next phrase, word, or word fragment. And you're saying, actually, in my job it's about not just going to the most intuitive or predictable kind of sensemaking. It's actually looking for those almost jarring moments of insight and then making sense of that.
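To make Sarah's point concrete, here is a minimal sketch of what "predict the statistically most likely next token" looks like in practice, using the Hugging Face transformers library with the small GPT-2 model. The model and prompt are illustrative choices, not anything discussed in the episode.

```python
# Minimal sketch: inspect a language model's probability distribution over
# the very next token. Uses the small GPT-2 model from Hugging Face as an
# illustrative stand-in for the much larger models behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The key insight from the workshop was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Softmax over the final position gives the next-token distribution;
# print the five most likely continuations and their probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(p):.3f}")
```

Sampling repeatedly from this distribution is what produces fluent but statistically "safe" text, which is exactly why it tends toward the common denominator rather than the jarring insight.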

[00:29:51] Chris Hudson: Or unexpected pairings, you know. If you think about innovation, you can spend a lot of time looking at trends, or you can spend a lot of time looking at successful service products and business models, and you strip it right back to whatever the mechanical model is that sits beneath that.

And then you compare that with something else. You can either create the sum of those mechanics to create something fresh, or you can bring in, if this was the mechanic but it only works in financial services, what would it mean if we applied it to a social enterprise, or another scenario, or a not-for-profit? How can you bring the learnings through from other places to essentially solve the problem that you're trying to solve?

And I think that sense of connection just feels like it's wildly unpredictable, at the minute anyway. Maybe the machines will be able to do that too. What do you think?

[00:30:34] Sarah Kaur: I mean, I definitely think it is really wild and really unpredictable, but there's something in what you said that made me want to reiterate that even if it's not a wild use of the technology, but something that we might consider quite familiar and basic, such as summarization, I think one of the cool things about the way I'm seeing organizations adapt and employ large language models is when they take a foundation model and point it at internal corporate knowledge sources, so it becomes a knowledge management tool. And going back to equitability and access, if you think about what that means: all of a sudden it's not Chris or Sarah who is the knowledge holder, it's a web of knowledge. It really is like the next level of intranet, and a way to query an intranet, which I think is pretty amazing. If you think about who should get access to what knowledge, that's a hard problem, both technical and data governance. But let's say that all of a sudden more people can query more things in a more intuitive way and have that corporate knowledge.

I think that's quite interesting, and it's also quite interesting when you think about really under-resourced organizations like not-for-profits, and all of the traditional administrative, accounting or other knowledge management system burden, and what that could offer, even if it is just, hey, you can ask a question.

You can pull from a trusted knowledge source and it will come back to you in a summarized form. That's a pretty compelling use case. I feel that's not too wild. 
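Sarah's "point a foundation model at a trusted knowledge source" pattern is commonly implemented as retrieval-augmented generation. Below is a minimal sketch under that assumption, using the OpenAI Python client; the model names, documents and question are illustrative placeholders, not anything from CSIRO's or Data61's actual tooling.

```python
# Minimal retrieval-augmented sketch: embed a small trusted knowledge source,
# retrieve the closest document for a question, and ask the model to answer
# from that context only. All content here is illustrative placeholder text.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "Travel claims must be submitted within 30 days of the trip.",
    "All new vendors require a data governance review before onboarding.",
    "Volunteers are reimbursed for mileage at the standard internal rate.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

question = "When do I need to submit a travel claim?"
q_vector = embed([question])[0]

# Cosine similarity picks the most relevant document as context.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = documents[int(scores.argmax())]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. If the context "
                    "doesn't cover the question, say so."},
        {"role": "user",
         "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

A real deployment would add the access-control and data governance layer Sarah flags as the hard part: deciding who may query which documents before anything reaches the model.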

[00:32:09] Chris Hudson: Yeah. So putting definition around it obviously helps. It's like packaging, it's like productization of what's there, because you could say, this is the use case, this is the product. It's not ChatGPT, which is basically this scary panel that you've got to put things into without knowing what you're telling it to do; it's actually framed up.

So do you want this? Do you want that? I think Canva Magic does a good job of that. If you've seen it, it's basically an AI application for filling in all sorts of useful things, brought in under that banner. So you can get it to do some pretty cool stuff from a design point of view, and from a writing point of view as well now.

I think that context feels like it's important for people to be able to adopt it at scale.

[00:32:49] Sarah Kaur: Absolutely. The context of application is key. I think it's key to everything, and it's most of all key to trusting how people use it, and trusting how I can use it for the benefit of my stakeholders or people. One of the biggest fears that I still have, even though I feel very much immersed in it, is having a terrible impact. Could I accidentally be being influenced and I just didn't even know it, and now I've got a biased opinion just because I've been interacting with this? So there's always a part of me being like, if I zoom out and come back to an earlier point: even if I have 20 people in the room, it's still kind of a limit.

We live our lives immersed in bias. And a lot of my recent thoughts have been, how do I make peace with sitting with my bias, while also going, bias does not have to be a limiting factor. We can acknowledge it, be aware of it, but sometimes bias is good, and I'm trying to work with that as raw material too.

[00:33:41] Chris Hudson: I mean, it's a very interesting area, particularly with chatbots, just as one example. If you're thinking about how it's set up, because it's tech, it feels like it's now encouraging a competitive landscape with more rivals, you know; it's like Google and Bing or Yahoo. People will use different things for different purposes, and they'll get different things out of that.

And to your point around creating the source of truth: where does the source of truth lie? And who's going to win in that race? It feels like some people will center themselves in one area and other people will center themselves in another, or maybe not at all. And that's going to fragment knowledge, essentially, because its application is going to be so varied.

So is it going to get really tribal?

[00:34:20] Sarah Kaur: Maybe it will get tribal. I mean, arguably we're already in a very tribal, social-bubble environment in general. At the other end of the spectrum, I don't know if it's any better to have one view of the world either, right? Like one knowledge source for everyone to draw on. That's kind of scary too. So I have no idea. I feel like this is entering the realm of philosophy very quickly. Yeah, it's funny how so many of these conversations go that way.

[00:34:49] Chris Hudson: On a previous episode I was talking to Marco, Dr. Marco. He's a data analyst, a data scientist, but he's also got a doctorate in philosophy. And it was kind of the mixture of philosophy and facts and what's truth.

And, you know, it was a very good conversation, but yeah, it's in those sorts of themes, what it all means. We're thinking about the world of business today, and there are going to be paths that people are going to have to take, it feels like, consciously or unconsciously; you're either aware of your choices or you're just sort of led into them. If you join a company and, say, you become an intrapreneur, you're part of that culture, and a lot of it is sort of dictated to you. If there's a culture of possibility, then that's great, but sometimes there is a culture of impossibility, where you feel like there is constraint you can't really work around. So these discussions we've been having, I think, are really interesting from that point of view of actually being able to push, even on a very small level, like summarizing texts that we were talking about before, a very small use case, but that sort of thing could be incredibly efficient in some organizational processes. And by becoming aware of how you can use it, it'll become easier. So from an intrapreneur's point of view, what are some of the things you'd encourage people to try out as a way of getting onto the journey of figuring out what they can do with it and how it's right for them?

[00:36:06] Sarah Kaur: I reckon there's a tried and true question that you and I would probably ask a lot of people that we work with, and the question is: if you had a magic wand, what would you do? But now I feel like the magic wand is almost at our fingertips, you know what I'm saying? I'm starting to develop workshops to identify use cases for AI that basically go: describe your current workflow, and then at each point, tell me about the pain points, tell me about what you wish was different. And often what people wish were different is something quite achievable with AI tooling, assuming that you've got infrastructure and good data governance in place.

But I think there's a follow-up question as well, which is: if you had the magic wand, what would you do? If AI was doing that for you, what questions would you have about the AI? What would you need to be able to work with it confidently? And are you worried about anything? I think if we're able to ask the magic wand question, plus a few follow-ups that really centre on the relationship between the human and the AI tooling, we're going to get a lot of signals back about, this is how we should use it, but also, this is how we need to take ourselves along on the journey, and this is how we need to design principles into the way we use it, either in policy or guidance or the interface itself. And I think the cool thing is that it doesn't need to be executives. It doesn't need to be cross-functional teams.

An intrapreneur could sit there, solo almost, and do that exercise, and hopefully find a little wedge where they can say, hey, I've been thinking about this, could we try this? And I really hope that all the business leaders are starting to keep their ears open and reward or incentivize that kind of thinking.

[00:37:58] Chris Hudson: That's a really valid point around transformation and what can be made possible if some of those steps really are knitted together. Because what I see in the world of transformation, organizational design, and culture change that we're looking at is that so often, and I'm going to say it's like 95 percent of the time, there are standalone initiatives or activities that are run in workshops, almost done in isolation, and always removed. It's very focused on a particular task or an outcome, but it's never really that well connected to the implication for the business more broadly, or to the strategy or the vision more broadly. It doesn't ladder up.

It doesn't have a consequence, and it doesn't step naturally into action. So what I'm hearing from what you're saying, which is fantastic, is that previously there was probably quite a heavy bias towards strategy and leadership and governance, all of the things that focus on getting the course right, the path that people are taking within their teams.

But actually, what this is creating is the possibility of a much more granular set of steps that will result in action. So it's creating a bias for action rather than a bias towards strategy, which I think is going to be helpful.

It's going to help in maneuverability, you know, in businesses and how they need to respond. In the world of change that exists today, that's going to become much more important. That feels like a good thing. What do you think?

[00:39:14] Sarah Kaur: Yeah, I think it's a great thing. I think there's still a massive role for leadership in being explicit about how they want to steward that change and steward that transformation. So I guess it's the classic, what does leadership mean in terms of setting the sights of everyone to be traveling in a direction with appropriate guardrails? Such an overused term at the minute, but you know what I'm saying. It's kind of like, everyone, we're heading there. If you're in the red zone on either side, come back in.

[00:39:41] Chris Hudson: You're out. 

[00:39:42] Sarah Kaur: You're out, like. But within that, there's an amazing spectrum of possibilities.

And, you know, in the Cynefin framework, there's the emergent, and in the complex space, one of the things you can do is start to probe and sense. I kind of feel like the opportunity in companies is that every single person is probably already probing and sensing. How are you going to listen for the signals about what's happening, and then use that to influence, or at least interact with, the strategic direction that you're setting?

[00:40:12] Chris Hudson: I also think that the hierarchy of an organization is going to be challenged. There's also a massive move towards a flatter, collaborative structure within a number of...

[00:40:20] Sarah Kaur: Oh, can you tell, tell us more about that? Tell us what you're seeing there.

[00:40:24] Chris Hudson: Just in terms of practice around the evolution of a business, or the evolution of a product or service, where no particular team or discipline or departmental lead, or leader even, has the answer or the direction. It's part of the evolution of the company, and you almost owe it to the teams to be able to involve them in some ways.

So the co-design of the future blueprint, or the product or the services, is actually better figured out together in a number of cases. And that doesn't mean you have to invite everyone, like hundreds or thousands of people to come in, but particular points of view just give diversity of perspective, and it involves people at the right time. But that has to be engineered in a way. So I'm thinking that there's a step towards collaborative, flatter structures, there's obviously a tool set that's coming in around AI, and I reckon there'll be a sort of hierarchical identity crisis of sorts, where some people are using the tool and some people are not.

It's more about maturity of usage, really, within this tool set. And it's going to be interesting, because if two people are inputting into ChatGPT, and one's a CEO and one's a grad, how does that resolve? Is it that my input was better than yours, or my experience is more valid? And how will that play out?

Because in theory, if all of these inputs are going into the system, from an AI point of view, the truth that comes back will probably become more rounded, based on everyone's opinion, in the end anyway. But it just feels like in the beginning it could cause a tension, because you've got access to the same tools.

Anyone can look at this definition on Google and compare it to that definition on Google. It'll be the same with AI and the ideas that then follow. So yeah, it could cause a heated debate, I reckon.

[00:41:56] Sarah Kaur: It could. It's also interesting around not just where the source of truth is that the AI might be trying to draw from, like, what is the information that it's trying to prioritize. But I think, coming out the other end, there's a very necessary, and as yet not established, best practice of auditing the output.

So who gets to say what information is the source of truth, that's one, but then who gets to qualify or validate that the output on the other side is verified, is trustworthy? And maybe that still requires that same nuanced thinking about whether there should be a hierarchy before comms go out when it's Gen AI. I still assume that lots of companies are practicing a quality assurance process, and there will be the equivalent for Gen AI output that still needs the CEO's eyes across it, where the CEO goes, hmm, yes, I'm willing to step in and be accountable for that output.

[00:42:55] Chris Hudson: Hmm. Yeah. It's around confidence, it's around risk. What are you taking on when something else has given you the answer, or a contribution, contributing facts or some element within the solution that you're creating? Because it's not attributable to your people alone, is the point.

[00:43:10] Sarah Kaur: Yes. And oh gosh, you know what? This makes me so worried about the role of like management consultants.

[00:43:15] Chris Hudson: Yeah. Okay. Go on. I was going to mention McKinsey, but go on.

[00:43:18] Sarah Kaur: Oh, well, I guess putting everything aside apart from just, what is the role of the service of a management consultant? If you think about it, we ask questions, we try and do enough research, quant, qual, background, competitive kind of scans, we try to develop insights and recommend strategies, and we pull multiple perspectives together.

But really, if all of that is in one place already, then anyone, anywhere could be like, tell me what you think my next three strategic priorities should be, given everything you know about my company, because I've fed it annual reports and you've also got access to all of my, I don't know, financial documentation and strategy ideas. Put that together with five trend reports from the internet.

What do you think my priorities should be? And AI could do that in a minute, where we would take two months and cost so much more. I think that's quite a compelling prompt for us to think about what we need to do, really. What do we need to focus on to retain our value to organizations?

[00:44:24] Chris Hudson: Yeah. What should we do? I don't know. What should we do? Maybe we leave it there. No. 

[00:44:29] Sarah Kaur: Yeah. Definitely,

[00:44:30] Chris Hudson: I wonder whether, OpenAI is obviously one thing, but whether a more proprietary way of running it, you know, from a machine learning point of view, will become easier as well. If you're a large supermarket and you had your own data set, and you didn't want to just learn from the whole world but just from what you fed it, to your point, you could just put your consultancy papers in through the letterbox at the front of the robot.

And it would just then provide you with the details after that. I'm exaggerating, but you get the point: you could actually create a closed system whereby you do maintain control of it in some way, and it's still giving you certain capabilities in any case.

Do you think that's possible?

[00:45:06] Sarah Kaur: Not only possible, but practical, and probably we're going to see huge uptake of that next year or sooner. I really feel like we're going to see that being enabled, not because you're getting a service provider or a vendor; you won't have to. I'm speculating, but I think very soon we won't have to get AWS or Microsoft to help us create that infrastructure.

It's going to be a very consumer-facing kind of thing to set up your own instances and knowledge sources.

[00:45:39] Chris Hudson: Yeah. Awesome. All right.

We're on the verge of technology just being really unhealthy. Do you think it's still healthy, or is it just getting too far down? What do you think?

[00:45:47] Sarah Kaur: I am still enjoying using it, and I don't think I'm a complete hedonist, so maybe personally I think it's still healthy. Because, what do I consider healthy? It seems to integrate with my practices in personal and professional life. I feel like it allows me to do more of the things I'm interested in.

What about for you? Is it healthy?

[00:46:13] Chris Hudson: I think it's right over the edge. I think it's getting to a point where it's probably not. The possibility is basically a drug; you can find yourself doing more and more, trying to do more and more with it, because it's so exciting. And if you're a bit of a, if you love technology, then that's where you can end up.

It's hard to have personal restraint when the possibility is basically the payoff and it can give you so much more. I was watching a documentary on ABC last night about the Australian workforce. And you know how the ABC loves to just cut together bits of stock footage; it was different interviews, different points of view, and they were talking about the workforce. Most of the stock footage was people staring at laptops, people staring at computer screens, and the points they were making were around productivity, around how people were doing amazing things at work.

But if you take a step back from that, what you're seeing from the outside is just hundreds and thousands of people staring at computer screens for most of the waking time that they have in their days. So, you know, if you compare that to WALL-E and the scenes that you see in that, it's not that different.

So I'm just wondering when and how people get taught to moderate some of this for themselves.

[00:47:19] Sarah Kaur: That is really interesting. I can relate to that a little bit personally, because I've just said I think it's healthy, but if I reflect on my last month of late nights...

I've been working with GPT, trying to force it to write a script for a dance film. And now I'm trying to actually make it, because I can't draw. You could give me the beautiful Derwent pencils and I couldn't do anything with them. I'm like...

[00:47:48] Chris Hudson: You got the fine art degree.

[00:47:50] Sarah Kaur: I know, it was all just like sculpture and photography.

I was like, how can I get through my degree with as little drawing as possible? And you're right: I've been feeling enabled to do a little bit of creative output, but I also haven't been sleeping before 2 a.m. most nights. I wanted to show you something. Can you see that?

We should describe it.

[00:48:12] Chris Hudson: Is it hosted? Is it a link? I can probably put it into that.

[00:48:15] Sarah Kaur: Oh yeah, true, I'll give that to you. It's an image of an experiment I was doing as an artist about ten years ago. It is a black box, but it builds on the camera obscura idea: a box with a pinhole on one side that lets light through, and a flat plane on the other side that collects the image.

I started thinking about making really lo-fi ones: you put a massive cardboard box covered with a garbage bag over your head, with the pinhole behind you. Because you're wearing this box, you have to navigate the world just by looking at the image projected from the pinhole camera behind you. It's a very fun, embodied experience of black box thinking. And I hadn't really considered until very recently whether it might have an application as a kind of artistic facilitation, helping people think through what a black box means in an algorithm, or what it means to have your view of the world drastically filtered and have to navigate it anyway.
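For anyone who wants the algorithmic version of that exercise, here's a toy sketch: a function whose internals you can't see, which you can only understand by probing it with inputs and watching the outputs, much like navigating by the pinhole's projected image. The hidden rule below is invented purely for illustration.

```python
import random

def _hidden_rule(x: float) -> str:
    # Pretend this is a trained model whose internals you cannot inspect.
    return "approve" if (x * 3.7) % 2 < 1 else "decline"

def black_box(x: float) -> str:
    """All you get: an input goes in, an output comes out."""
    return _hidden_rule(x)

# "Navigating" the black box: vary inputs and observe how outputs shift,
# building a mental model without ever seeing the rule itself.
for x in (random.uniform(0, 10) for _ in range(5)):
    print(f"input={x:5.2f} -> {black_box(x)}")
```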

[00:49:22] Chris Hudson: That's a great open question. There are some interesting thoughts around that. It's a bit like any content platform where you're asked about your preferences up front and then served things in relation to that.

You could say, well, I want to live my life within these constraints from a tech point of view, and the preferences could be what you control. You could say, I want to shut off the rest because it's not so important to me.

So that level of preference probably does exist right now, but it's not exercised by a lot of people. I don't think many of us say, yeah, I'm all in, I'm going to see it all.

[00:49:55] Sarah Kaur: Yeah. Give it...

[00:49:57] Chris Hudson: I don't want to miss anything either. I want to see whether it's the right thing or the wrong thing and make my own mind up. But I don't think we can keep consuming at this rate for many more years.

[00:50:06] Sarah Kaur: No, we can't consume enough. 

[00:50:07] Chris Hudson: Feels like we're just going to burn ourselves out and nobody will be happy, so it'll be interesting to see what happens. Maybe we record another episode in the future where some of these things have developed a little more. But hey, Sarah, I've really loved having a chat today, and it's been super interesting seeing where the conversation would go, touching on themes of data, privacy, AI, creativity, many things. It's just been fun, so thank you very much for your time. Is there any piece of advice you'd want to end on, in terms of how people within organizations can navigate some of the things that you've navigated in your work?

[00:50:40] Sarah Kaur: Yeah, it's very basic, but creating healthy and respectful containers for conversations

[00:50:47] Chris Hudson: Hmm.

[00:50:48] Sarah Kaur: with your colleagues is probably the thing that I've been anchoring myself in, because it could be a conversation about anything: interpersonal work styles, ways of working, or how you're giving and receiving feedback.

It could be around how you're using artificial intelligence to do part of your workflow. But I think there's something in being able to lean into that humanness and those interpersonal relationships and say, hey, how do I have a really healthy conversation about this topic? Because what I'm seeing more and more is that we're having lots of remote meetings.

We're having fewer interactions, and I feel like I'm observing a more transactional kind of relationship between colleagues in workplaces. I'd like to encourage us to be aware of that and, where we can, go a bit deeper. That's really meta, a completely different track from the rest of the conversation, but I've been thinking about it lately.

[00:51:50] Chris Hudson: Yeah, it's about balance, isn't it? You can use technology very prescriptively and find yourself down several rabbit holes at once. But it's probably time to withdraw, step back, reconsider, and reconnect with the people, places and things that are important to you.

[00:52:06] Sarah Kaur: Exactly. And to your point, have those healthy limitations around technology, see others as a source of energy, and don't burn out.

[00:52:15] Chris Hudson: That's it. Maybe AI has the answer for how to manage some of that too, but we'll find out. Really appreciate your time. Thanks so much, Sarah. If people want to reach out with a question, or just want to find out more about the mystical world of Data61, how would they get in touch?

[00:52:30] Sarah Kaur: Probably LinkedIn is the best way: linkedin.com/in/sarah-tamara-kaur. I'll pop a link through.

[00:52:37] Chris Hudson: All right. Well, thank you so much for your time. Appreciate it. Thanks so much, Sarah.

[00:52:40] Sarah Kaur: Thanks, Chris. 

[00:52:41] Chris Hudson: Okay, so that's it for this episode. If you're hearing this message, you've listened all the way to the end, so thank you very much. We hope you enjoyed the show. We'd love to hear your feedback, so please leave us a review and share this episode with your friends, team members, and leaders if you think it'll make a difference.

After all, we're trying to help you, the intrapreneurs, kick more goals within your organizations. If you have any questions about the things we covered in the show, please email me directly at chris@companyroad.co. I answer all messages, so please don't hesitate to reach out.

To hear about the latest episodes and updates, please head to companyroad.co to subscribe. Tune in next Wednesday for another new episode.