The Company Road Podcast

E32 Gareth Rydon - The AI revolution: Unleashing AI for productivity & personal freedom

February 20, 2024 Chris Hudson

“When it comes to AI, we talk about augmenting not replacing. We want this to be about how do I augment the individual and give them superpowers so they can do amazing things and liberate their time? How do we make 5 people worth 10 people?”

Gareth Rydon

In this episode, you’ll hear about:

  • AI adoption in 2024: Exploring the scale and speed at which AI is being adopted, the different approaches companies are taking & the key implications for organisations in 2024
  • Evolution of AI use cases: How AI applications have evolved from basic tasks to more advanced functions and how opportunities for use are expected to continue to evolve into this year 
  • Organisation challenges in AI: The key barriers that companies are facing in adopting AI and strategies to simplify the process for efficiency & augmentation
  • AI vs humanity: Understanding where and how AI technology still falls short compared to human abilities and where the key value of humans still lies 
  • Engaging kids with AI: The ethics of introducing kids to AI and how to do so in a way that prepares them for the future and harnesses their creativity but protects the formation of their in-person skills & human connections

Key links

Ep. 29 with Sarah Kaur https://www.youtube.com/watch?v=5GEMbbWssyo

Friyay.AI https://www.friyay.ai/

RambleFix https://ramblefix.com/

Otter AI https://otter.ai/

Ethan Mollick https://mgmt.wharton.upenn.edu/profile/emollick/

Llama by Meta https://llama.meta.com/

DALL·E 3 https://openai.com/dall-e-3

Napster https://www.napster.com/

Ep. 23 with Dr. Gus https://www.youtube.com/watch?v=25jctgmeEVI

About our guest

Gareth Rydon (https://www.linkedin.com/in/garethrydon) works as a service designer, applying this to building generative AI tools for clients. His happy place is when he is talking to and learning from people so that he can help them solve their problems in sustainable, creative ways.

Having previously held roles at Stone & Chalk, Rightpoint and the ATO, he is now the co-founder of Friyay.ai, a generative AI studio which identifies the human experience to augment and then design generative AI solutions.

About our host

Our host, Chris Hudson (https://www.linkedin.com/in/chris-hudson-7464254/), is a Teacher, Experience Designer and Founder of business transformation coaching & consultancy Company Road (www.companyroad.co).

Chris considers himself incredibly fortunate to have worked with some of the world’s most ambitious and successful companies, including Google, Mercedes-Benz, Accenture (Fjord) and Dulux, to name a small few. He continues to teach with Academy Xi in Innovation, CX, Product Management, Design Thinking and Service Design and mentors many business leaders internationally. 

For weekly updates and to hear about the latest episodes, please subscribe to The Company Road Podcast at https://companyroad.co/podcast/

Transcript

[00:00:07] Chris Hudson: Hello and welcome to the next episode of the Company Road podcast. We're live and kicking in 2024. We've already had some really amazing guests on the show so far, and they've been offering their perspectives on what it will take to change or influence business and organizations.

And this year is really predicted to be the year of change, where everything changes, where adoption of AI technologies continues to rise seismically. And I really wanted to go deeper into this topic because it's such an important one for 2024 and beyond, and because we had such a positive response to our previous AI episode, episode 29 with Sarah Kaur, who's leading responsible AI design at CSIRO.

And if you've not had a listen to that one, I'd urge you to check it out, as it was a fun and exploratory chat around AI and there was lots of good stuff mentioned in that one too. So, let's move on to today. I think our next guest is one of those people that some may call crazy for what he and his company are helping people do.

But this is someone who's really leading the change when it comes to AI adoption and delivery within organizations. And to quote the late and great Steve Jobs, the people who are crazy enough to think they can change the world are the ones who do. So first up, a huge and very warm welcome to you, Gareth Rydon.

[00:01:11] Gareth Rydon: Thanks Chris. Thank you for such a wonderful introduction. I've never been put in the same sentence as Steve Jobs. What a great way to build my ego up before we have our conversation today.

[00:01:20] Chris Hudson: That's the idea. So Gareth, I'll just explain a bit about what you do. So you're the co-founder of Friyay.ai, a generative AI studio, and you're an accomplished service designer. You've helped many large organizations in Australia, such as the Australian Taxation Office and AMP, to transform their service experiences in your previous roles.

And I want to start with possibly an easy question. When we last chatted, you were saying that you were using Otter's AI technology to attend meetings, where you'd simply be listening in and then getting AI to summarize the conversation, and that really helps you free up your time.

So, question for you today is, are you really here? And how has AI helped you this week in preparing for the show? 

[00:01:57] Gareth Rydon: Yes, Chris, I am physically, mentally, emotionally present today. My assistant has got the night off, so my other assistant isn't joining us. And so I'm really looking forward to the conversation being physically present today.

[00:02:11] Chris Hudson: All good. And were you able to use AI just on the lead up to the show in some way or another? Or how has it helped you this week in the broader sense? 

[00:02:17] Gareth Rydon: Yeah, that's a great question. I'm augmenting myself every day, and a case in point is the great conversation we're going to have today. Really specifically, I did use a couple of AI tools to help me prepare. One in particular, which I use on and off and whose name I think you'll find quite funny, is called RambleFix.

And as the name suggests, it's an AI-powered tool that helps fix your ramble. Coming from that service design background, I'm someone who thinks when I talk. Well, I like to call it thinking; other people call it rambling. But the way I used that tool today: I've been talking to myself the last couple of days.

I've been listening to and watching some of your other podcasts, and that's been giving me ideas. And I used the tool where I basically talked into RambleFix, and I got it to produce some notes and speaker points based on my ramble. So I think the key thing there was that it wasn't a case of it creating something for me, more of an autopilot perspective where you say, write me something, come up with it from scratch and do it all for me. Rather, it took my ideas and insights and helped me organize them. And for me, that was fantastic. I then used it to further iterate and refine a couple of things that I'd love to talk with you about today.

[00:03:25] Chris Hudson: Yeah, wonderful. That's fantastic. And I love the preparation for the way that you present, really, because I explain this to a lot of people that haven't been involved in a podcast before: when you get used to hearing the sound of your own voice, episode after episode and edit after edit, and you're listening to it back so many times, you realize what words you're using, you realize how you come across. And it can be bad; it can be very awkward and quite confronting for people to hear that back. But I love the way that the AI can just take a lot of the emotion out of that.

It just plays back what you said, summarizes it, and it probably gives you a few pointers along the way. So that's a great point. So thank you.

[00:03:59] Gareth Rydon: Yeah, exactly. And I think the other thing that I'm finding, in this space in particular, is the use cases, the ways of using this, that haven't existed before. A great example, as you said, is that it's helped me fix my ramble and give me some structured points. But then, further to that, what I did was give it a bit of context about Company Road and this podcast. And I said, thinking about the audience, what they value and the context of what you talk about, Chris, critique what I'm talking about as if you were an audience member. Does this resonate? And it was great, because it gave me some really useful feedback, and I had a bit of a structure, and it said, well, here's really how you might talk about what you need to talk about in the context of the listener.

Even I try to challenge myself to go, what if it did this? And then, what's the next potential use case that wouldn't even have been possible? I get blown away almost every couple of days, challenging ourselves, my team, my co-founders, even our clients, to say: great, you can use AI in a more skeuomorphic fashion, which is doing things how you did them before and just getting AI to do them for you. So we often talk about this with my co-founder Ben: when the radio was first invented, what the radio presenters would do was sit in front of the microphone and read the newspaper.

So a great example of skeuomorphism, where they were just repeating what they were doing before, but through this new medium. But then, only after radio had been around for a while, did they say things like, I can go to India and interview the Dalai Lama, and have the Dalai Lama talk and my listeners can hear him talk.

So that is a fundamentally new experience that hadn't existed, but came about because of this technology. Our first step was, let's do what we did before using this different technology. And then, as we started to use it, we go, well, actually, I could do something else that wasn't even possible before.

[00:05:53] Chris Hudson: Yeah, I mean, it's a massive point, obviously, for tech adoption and everything else that goes with it, at a macro or a granular level; it just feels like you're stepping on. And in this podcast we're always talking about what it takes to make change possible and how change comes about. You often think about the small battles and the things that you can win on a more incremental basis, but this just feels like a radical shift. Trace it back to a point in time when everything within an organization was done by the letter, probably documented by hand, a very manual process, all the way through to the evolution of information and the sheer possibility that interaction design and two-way conversation and clicks and websites brought about, where information is just fragmenting. So the fact that we can now bring that back together feels like a natural step on from Google, being able to look up something and get information back.

But the use case you were just describing was more than that. It was actually presenting the opportunity for the AI to represent a persona, an audience, somebody you'd be sharing information with, and getting it to critique. Do you see that evolving, and do you see any other areas being really interesting in that space?

[00:07:04] Gareth Rydon: Absolutely. And to touch on what you said about the evolution from Google, it's such a good point, because what we're seeing with some of our clients is that in their first interactions with generative AI, say ChatGPT, the one that everyone knows about, a lot of people are using it like Google, because we've almost been trained to ask questions in a certain way to a search engine. Being a service designer, I love contextual inquiry and watching how people do stuff, and I'm often seeing, when I'm working with some of my clients, that a lot of the time people are using it like Google, the search engine.

But then to your question around what other use cases might be possible, it's a great point about the fragmentation of data. For years and years in my consulting days, it was always a case of, oh, Telstra has so much data and somehow they're going to unlock that and take over the world.

But up until now it was really a case of, well, let's just keep cleaning the data, putting it in big data lakes, and something's going to happen. Whereas now, even when you talk about the fragmentation of data, right now we're creating data in the form of what we're talking about, and we have the ability to overlay a generative AI transcription tool on our conversation and create something from that. So there are examples where we're doing quite a lot to really challenge ourselves. You mentioned Otter, and that's one of many really great transcription tools in a skeuomorphic sense: you get it to take your notes and capture your actions.

So you're replacing some poor person who's probably scribbling and typing and then emailing the notes, and it does that much faster and pretty accurately. The extension of that use case is, well, if I thought about this as a super-powered assistant that I have available, what if I invited it to a brainstorm and got this AI assistant to identify the patterns and see things that I didn't see, so I can talk freely with my team? As an example, a couple of weeks ago we had a meeting with the chairman of a big construction company. Really interesting meeting. And it just so happened that it finished five minutes early.

So we're like fantastic. Normally you'd have that five minutes. I'd run back to my desk or walk back, walk around my home office, that is, and sit back down at the same desk and start typing up the notes and then maybe ping one of my other team to say, what do you think about this email? I might get distracted, go and look in the fridge.

A couple of hours later, or maybe at the end of the day, I've gotten around to an email and sent it back. Well, in this context, what we did is we said, well, what if we thought about how we might use Otter now in this five minutes. So we plonked my co founder's phone down the table, started Otter recording, and we just talked, we jammed and we're like, oh what did we think about that meeting?

What was the chairman really focusing on? What are our ideas? And it was that really free-flowing conversation, when you're in that situation and it just feels like you're starting to build, there's the core of an idea forming, and you're not having to break that flow. So we did that for about six minutes, stopped the recording, and then we chatted with Otter, because it has a function where you can chat against the transcript. So imagine that conversation we had is our data, and we asked Otter: what was the core of the idea that we were talking about? How might that best resonate with the person we were talking about the idea for?

And then: craft an email in a formal tone for this chairman containing that core idea, packaged in a way that's appropriate for that audience, on behalf of Gareth. Within those five minutes, we'd had our jam and we had an email, which Ben and I then reviewed, and before we walked out of the room it was back in the chairman's inbox.

A great example of asking ourselves, what if we could do this now? If we'd just said, okay, the meeting transcription tool will give us our notes, so in the next meeting we'll abide by that, we would have kept that old mindset instead of asking what else might actually be possible.

[00:10:46] Chris Hudson: Yeah, I mean, that's just another powerful example. There are so many of these AI use-case stories coming out; you've only got to look at your LinkedIn feed, or anywhere else, to see people trying to bend it and morph it into different parts of their lives and show what they can do with it. They want to show that it can do something amazing, and obviously they want to be the first to showcase that in some way or another, and it's great they were able to use it for that. I'm wondering whether there's a similar story that inspired you to set up Friyay as a consultancy in and around AI, or whether it was out of observation that something in the world of work had to change. Was it something from the past that triggered it, or is it the technology that triggered it?

[00:11:25] Gareth Rydon: It was more something in the past, and the way we were working, that triggered it. And I think it really goes to our core purpose at Friyay: we want to democratize access to AI and help people liberate their time. That's really what we see the opportunity as. All of our co-founders, myself included, had that experience where there's been a lot of innovation and new tech and new solutions, but it doesn't seem like we can get off the treadmill.

It felt like we were being more productive, we were being more effective, but we weren't working less. And when we looked at that, we said, well, there's just something within that cycle. I absolutely love the concept of the four-day working week, and I'm doing as much reading as I can of the great stuff coming out about how a lot of businesses in New Zealand, and even companies in Australia, are doing it and finding the benefits.

But it still felt, purely from my observation, that there were compromises there. People were either trying to fit five days into four, or they were coming back refreshed so they could do almost five days' worth of work in four days because they had three days off. And I was like, oh, I can see how that works, but it doesn't seem sustainable. And that's where we saw, with the advent of generative AI, how we might apply that technology, which is available to everyone on the planet with a smartphone and a data plan. So there's true bottom-up-revolution type of territory happening right now. If it's available to 6 billion people, a rough guess at how many people have a smartphone and a data plan, and it's available in a chat function, and we can apply it in our work context...

Surely, at the intersection of those three points, this is a chance to liberate people's time. And, selfishly for us as founders, we want to work the four-day week. We want to be able to work the true four-day week, do four days' worth of work and have five days' worth of outputs, and then, long term, you know, two or three days a week.

So I think, coming back to what you're saying, it's really that sense that we just couldn't get off the treadmill. And Ethan Mollick is a great writer that I follow; he puts out his blog, One Useful Thing, and if anyone wants a great read about generative AI that can demystify it, written so clearly, I highly recommend jumping on there and reading some of his blog posts.

In one of them in particular he talks about how, up until this point, if you wanted to add more knowledge into a system, you either had to add more people or you had to make those people work longer. We're in a state now where we theoretically have access to infinite knowledge. So how we relate that back to Friyay is that a 20-person small-to-medium enterprise can, in effect, compete with a 500, 1,000, 1,500-person organization with the right use of the tools.

That really inspires us, being able to help those businesses as well.

[00:14:13] Chris Hudson: Yeah. I mean, I see that empowerment and enablement message as being really strong, and we've talked a little bit about how often you see individuals pop up and showcase what they've been able to do with the platform or the tool. I think there's another part to it, which is kind of interesting and maybe something we could just talk about, which is around how the adoption is received within an organization. Because it's not just the people using it: it might be a professor who thinks their students are cheating in some way because they're using AI to come up with the answers and write their thesis for a university submission or dissertation. Or within the work context, in architecture or legal practice, it's just perceived to be a bit of a shortcut, but not in the right way. So is there something in that, from conversations you've had, that has really helped the environment in which it's being received to evolve with the practice of those individuals actually using it?

[00:15:08] Gareth Rydon: I think what I'm seeing is two or three buckets of where companies are landing at the moment in terms of how they're receiving AI.

One clear one that's emerging is the CIO/CTO, tech-led, "we're going to implement this technology" group of companies, and it's a common pattern in the clients we're talking to that fit in that bucket.

The way they're approaching it is, first of all, banning it completely until they have a completely laid-out strategy: risk, governance, guardrails. And then, once that's approved through their normal traditional process (steering committees, teams, working groups, project management office), their strategy gets approved.

Then they have their roadmap and they start to do it. I'm not saying that's right or wrong; it's just that's definitely a pattern. And the value I'm seeing in that is that they're thinking about guardrails, they're thinking about the ethical use of AI, they're setting up an ethics committee.

So those elements of it are really good. The massive risk there is that that's 6 or 12 months before they do anything, and with how fast things are moving, in two or three months their business could be made completely obsolete by another company just jumping into their market. So there's a big risk trade-off there.

On the other end of the spectrum, you've got clients where the leader or one of the key leaders in the business is saying, we're going full steam ahead. First of all, everyone should be experimenting. And if you're not experimenting, help us understand why, because we need to be on this pathway. And they're more thinking about, well, we want to be an AI native business, so I'm going to change my business and how do I do that?

So they're getting the benefits of moving super fast and getting ahead, but sometimes risks crop up too. We've spoken to a couple of those businesses that, by say mid last year, had already built their own "Chat[CompanyName]", insert their company name instead of GPT; there are two or three of them.

The problem was that they went straight to building it and didn't involve their teams and staff, so no one ended up using the product. So that was the trade-off. And then you've got the companies sort of in the middle, which are almost saying, I can't see yet how it applies to me.

I'm stressing out because, to your point before, every day I'm getting pounded on LinkedIn by the latest thing. What do I do? Where do I start? How does it apply to me? And they're almost getting a little bit stuck, and because they're in that cycle, they don't act, because they don't know where to start.

So they're neither at the far left, completely stop it and wait till we've got everything set up, nor at the other end, go nuts, experiment, we'll become AI-native. And it's often the ones in the middle that we're working with the most, because they're asking for the help and they're really open to taking a bit out of both camps: be safe and responsible, but also be willing to experiment.

[00:17:58] Chris Hudson: Yeah. I mean, it's super interesting. I think the rate of change and the maneuverability within organizations is always a question, really. And the sense of there being a threat was always there, but now it just feels much more present: another business could come into your market, suddenly powered by other technology, reinvented from the ground up and set up in a way that's basically there to disrupt, and that could happen very easily.

So the reality of that situation is that there are many organizations out there, who I work with a lot as well, where you go in trying to help them implement a new technology, but actually they're still trying to fix the things that have been around for 20 or 30 years, and the tech architecture and the stack look a certain way, and it just feels impenetrable. That was one government client I worked with.

There's another one which is a kind of group company, a holding company, with 12 brands or so; each of those brands had several arms and legs, sub-brands, lots of different digital properties, and teams were fragmented across all of those at a group level. How do you navigate some of that?

It feels like it would be possible, but even for the decision to be made that you had to do it would take, like you say, months and months of campaigning, evangelism and proof, and everything has to be crossed off to the letter for that kind of endorsement to go through, particularly from the board.

So for the companies out there, at an individual level, at a team level, within a discipline, is there something they can start with? In the world of design thinking we also talk about low-fidelity versions, de-risking, and ways of getting experiments running that don't involve the kind of sea change we were just describing.

But is there a natural starting point?

[00:19:40] Gareth Rydon: Good question, and I'm going to be very biased here because, like you, coming from service design and human-centred design, I love starting with people: the humans, either your staff or your customers. It sounds clichéd, but what are the key problems or the challenges they're facing?

Arming yourself with that deep understanding, and then thinking about how you might solve for that with the AI tools.

And we found that when we talk about that approach, there's almost a sigh of relief. I can almost see the team around the table going, oh okay, I can do that. The tools are so new and moving so fast that it's hard, first of all, to keep up, but people are finding it hard even to start.

When you take a bit of that pressure off and say, before you look there, look here at your people, even ask them. We often run a simple session where we're just sitting around with the team, a dream session: tell us what you hate doing. What are the things that you hate doing day to day?

And then if you had more time, what would you love to be spending your time doing? And then we bring AI into the conversation and we just look for the patterns across that and we say, there's a clear space here in this moment. So, six times out of ten, in those conversations, it's 

[00:20:57] Chris Hudson: Hmm.

[00:20:57] Gareth Rydon: I hate the time I have to spend in meetings, or I've got too many meetings, or I've got meetings on top of meetings. So where would you want to be spending your time if we can liberate that? Oh, well, I might be talking to my customers. I could be spending time with my family.

We haven't been empowered to be able to make those choices.

So then, framing that for these companies: if they start, they go, okay, great. Let's understand that. What is it? Okay, so how do we collaborate? Let's look at how we currently do it. What is that experience? What are we doing when we're collaborating? Where is the time being absorbed? And then there are two things we often recommend next. The first is to start with internal use cases, because it takes all the pressure off.

A lot of clients would jump to, say, like, sales, external, do stuff with customers. We say, that's awesome. Rather, learn with internal use cases, because the fundamentals about generative AI, once you learn the fundamentals, you start to see other use cases. And then when you think about external use cases, you're approaching them from the voice of experience because you've been doing it to yourself.

That's probably the best way is like, let's start with your people, understand what they're doing, what they want to be doing, think about some internal use cases, and don't jump to building a custom solution straight away. 

Once you've got those first two pieces understood, you can look at what tools currently exist that meet your needs and start using, say, Otter as an example.

Great. Or even elements of Copilot in Teams now.

Play around with those for a while. Test the limits. If it can't do everything you need it to do, start to document what it can't do, and then your requirements for your custom solution can start to emerge.

[00:22:31] Chris Hudson: Yes. It sounds quite scientific in a way; you're in this sort of lab at school and you've got this little task to do, you've got two hours or whatever, and you're going to try it out. And it's a no-risk, totally safe environment. It's about setting up those safe environments within the organization, without judgment really, because people are just trying to find their way with it.

Like you point out, there's a degree of fluency that you probably have to build up, which you can't just get by looking up a list of AI tools. You actually have to work with them to understand it. It was a bit like the onslaught of social media when it first landed.

And then it sort of moved away from a monopoly into quite a few different other versions as soon as Insta and a number of others popped up. Web 2.0, where you probably remember the chart that was on everyone's presentation screen back then, with all of the logos of the things that people could do with user-generated content.

All of a sudden that was a thing. And now there are similar charts being produced with logos of all the Gen AI tools as well. So people don't understand that landscape naturally. And I'm also thinking that it has the possibility to be fully democratized, but actually it probably will fall more into the laps of those that are, for one, interested, and that want to take on that degree of specialism and really champion it within the organization. So are you seeing that, or do you feel like everyone should have a crack?

[00:23:50] Gareth Rydon: I think that's where the opportunity, but also the challenge, is: everyone has potential access to this tool through their smartphone and internet connection, but that isn't quite enough.

We've got to help them understand how it can help them in their own individual use cases. It's sort of challenging that ingrained mindset of, oh, when technology comes out like this, it's only accessible to the privileged few people in corporate, in knowledge-working roles. It can't just be for me, to help me be a better dad or a better mum, or spend more time on my hobbies.

So that's sort of what we're constantly challenging ourselves on: how do we help do that? And we believe where we want to start is just to start with those businesses, help them think about more and more use cases, and then come out in great conversations like this, hoping that more people can hear it and start taking pictures of their fridge and writing shopping lists.

[00:24:42] Chris Hudson: I mean, it's snowballing. It feels like people would go to different places for their information, but would there be any kind of repository or point of reference that you would say would be useful for people to check out if they didn't know where to start?

[00:24:54] Gareth Rydon: That's a great question. And so I think there are

[00:24:57] Chris Hudson: Google? 

[00:24:58] Gareth Rydon: really good free courses by the big players that are out there. So Google,

[00:25:03] Chris Hudson: Microsoft, 

[00:25:04] Gareth Rydon: Meta, because all of them have large language models, which they ultimately want you to start using, but they're pretty good. I was having a look last week at one of the latest Google "what is a large language model, and how does it apply to me" type courses.

So accessing some of those free educational resources, and then also GPT-3.5 is free to use. Probably the best thing you can do is have a go, even if it's starting with something fun. My first foray into ChatGPT was, I think, with my brother at a barbecue around March or April last year; we were writing funny songs.

I think we were replacing Eminem song lyrics with my dad's name; that was my foray into it, just mucking around. And then it sort of led to where I am now, and that was less than a year ago. Since then I've founded a generative AI studio, I've gone full time in it, and I'm helping clients build their own solutions. That's in the space of less than a year.

So I'm seeing, I wouldn't say it's surprising me, but even this morning I had an initial client conversation, and I have to keep reminding myself that I have the luxury of being in this day to day, every single day, as much as possible. And that's a privilege, realizing that there are conversations I'm having even today where people have never used it, aren't even thinking about it.

And because my view is, well, it's this amazing transformational thing, everyone should be doing it. But, you know, there's a whole lot of people, as you said, who are like, oh, what's that app? I'm like, oh, ChatGPT. Oh, what's ChatGPT? I'm still having those conversations, and I'm not saying that in a negative way at all.

That's purely their context. So there's that camp of people who, going back to that treadmill example, have got so much on in their life, so much happening, that it's: I'm just trying to keep my head above water, delivering my day to day, delivering my job, keeping on top of my family commitments. And then there's the other camp: trying to learn as much as I can, because I think there's a ton I can learn and there's a lot coming up now. I was reading this morning about what Meta's doing in labeling AI-generated images and content, whether it's being produced by Llama, which is Meta's model, or content that's being put onto their platforms like Facebook and Messenger.

Then there's the other side as well: if it's learnt off the work of all of these artists and content creators, screenwriters, visual designers, whatever, isn't that potentially just ripping off their IP? I'm digging into that more, but the positive side is that there are now companies, and a couple of universities and research firms, trying to create AI solutions that help protect artists' content from being used in the learning, by altering images in a way that the generative AI tools can't pull them in and learn from them.

And then the other camp of people is asking, well, how do we ensure that the internet, the data that's being trained on, isn't full of bias? Purely because the weight of data that we've created over decades carries tons and tons of bias. The great example is if you go onto DALL·E 3, which is the OpenAI ChatGPT image generator, or onto Midjourney, which is the other big image generator, and you ask for a photorealistic image of an entrepreneur, it's most likely going to give you a white male. To get the picture of a female, you have to say, give me an image of a female entrepreneur. Purely because of the data: if you just take the default, automatically it's going to produce the male. On your question around my view: we need people to keep challenging and keep pushing to ensure that we are thinking about it. It's probably an extreme example, but I think about when Facebook first came out; it had a wonderful noble purpose, connect the world.

That was amazing. I think about what it is now, and in my experience, I'm not going to let my kids go anywhere near it, that or Instagram, because of the addiction and the toxicity it creates in young people. And there are court cases happening now where Facebook, Instagram and other social media giants are being taken to court because of the addictive design principles that are embedded in those tools.

It started out with such a noble purpose, and generative AI is that on steroids, if we don't take a responsible, ethical approach to it. 

[00:29:20] Chris Hudson: A lot of people will obviously use it recreationally. Other people will try to monetize it in one way or another, using its power as well. So it feels like there's quite a lot of responsibility there. And I'm wondering, we've talked a bit about the experimentation, the low risk, the safe environments that you can set up at work, and all of that seems a little bit innocent, in a way; it's kind of, we're going to try it out and see what happens.

We're going to try and understand it for ourselves so that we can have a point of view. But of course there's the other side, which is: let's just use it to the max and milk it for what it's got. It could be for commercial gain, or it could be presenting certain minority groups within an organization with an unfair advantage that other people don't have. From an inclusive work design and organizational culture point of view, that could present some interesting challenges, where the technology in theory is flat and everyone can use it, but actually not everyone is.

And so that's creating some level of separation and segregation, which is going to be problematic potentially. So, in the work that you're doing, are some of these sort of longer term evolutions and consequential thought around, where it might go is some of that discussion being had, or is it really just quite focused on the now and what they need to set up and what they need to get going?

[00:30:28] Gareth Rydon: Very focused on the now. And I think I put the responsibility on companies like mine and others to help bring the longer-term thinking, and obviously there's the role of government from a regulation perspective. Absolutely, all of the thinking, all the conversation, is the now, and from a human behavior perspective I can't blame the organizations that we're speaking to, because, to quote The Castle, it's the vibe at the moment: it's moving so fast.

There are these amazing things happening every couple of weeks. New companies are being built all the time. New tools are coming out. Why aren't you there? In my brain there's this visual of a massive, fast-flowing river, almost as if the dam's been released, and you've got all these people standing on the side with their little inflatable donuts going, I've just got to jump in there. I'm not even thinking about whether it's going off a cliff; I've got to jump in. But that's the state that we're in. I think it's a really good point that you raise: how do we think longer term? How do we not start with the noble purpose of democratizing AI, similar to connecting the world, and end up just concentrating the wealth and the power of the few, or with some disastrous consequences? It's a really, really good question and a good challenge.

[00:31:49] Chris Hudson: Yeah. I mean, it just feels like, with Zuckerberg, you know, and his mates, and there was Napster and other things going on, I remember seeing the film; I should remember now what the story was. But it was basically the preserve of a few nerdy people getting together, and he obviously then brought the thing about, and it was a student community, and then it ended up launching into something much bigger. But it felt like at the time that not everybody could do that. Whereas now it's a bit like placing that capability into the hands of anybody. So if I'm not a designer, I can design and visualize things all of a sudden, if I just ask it what to do.

And I need to be trained in that in some way, but pretty much, if you're a designer, if you're a creator of some sort, a consultant, a lawyer, if you're working in an advisory capacity where your job is to be a subject matter expert in something, all of that could be undermined by somebody else just popping in from the side with a pre-prepared presentation that has been gathered from the many sources around the world.

And it's been fed by everybody else as well. So it feels like it's kind of egalitarian in a way, but it's also, it's kind of scary to think that if everyone is jumping into that fast flowing river with a donut inflatable and not knowing where it's going, that it's going over the cliff edge, that there's a bit of that leap of faith, obviously, but it just feels like there's an unpredictability to it, which I think doesn't sit too comfortably with everybody.

Do you think that's the case?

[00:33:09] Gareth Rydon: I definitely think there's that, what does this mean, that can feel uncomfortable. But also what I'm starting to see now is that it can amplify the insight and the expertise of the people that have it. So totally agree. And I'm seeing it a lot: whole lots of people jumping into certain spaces saying, oh, I can now create a whole business from scratch in an area that I have no idea about, and I'll just get GPT to build my business end to end, create a whole marketing campaign, create our business model, everything. But what it's going to do, in my view, is show who the people are that can win. It's like, well, who are the people that have the unique insight and expertise in that space? How does generative AI augment them so that you can see the quality and the insight, what you're paying for? Also, where I see this playing out at the moment is LinkedIn. Just because I'm constantly on LinkedIn at the moment, I can spot an AI-generated LinkedIn post in about one second. I'm not saying I'm not using generative AI to help me write content, but it helps me build on my insight. It's the people that are using it to enhance themselves where you're like, wow, that is even better. You've taken that insight, and you're either able to give that insight to more and more people, or you're building on top of it. There was a really good piece that Ethan Mollick put out, I think maybe yesterday or the day before, about how everyone can generate images. He did this really interesting chart where he was plotting what the main types of AI-generated images are. And Spider-Man is showing up heaps, so people are just going nuts on Spider-Man images, marbled statues of celebrities, all this random stuff. The point of what he was talking about was: you get an artist to prompt in terms of how they talk, how they think about art, and what they produce is absolutely amazing.

I could produce something pretty good, but nothing compares to that. And that's an example of: they've got that layer of expertise. So I'm here, putting AI on top of me, and I'm making something at this level; they're starting here, putting AI on top of their expertise, and they're up here. So I think that's where the opportunity is as well.

[00:35:13] Chris Hudson: That sort of riverbed of expertise that you could take into the program is a compelling one, but I guess it's only compelling if that advantage remains the same, in a way, because an artist will think of things, like you're describing, differently to an eight-year-old or anyone else that's trying to come up with a similar picture. Give me the next award-winning film concept, or whatever it is; it's just going to give you what it thinks. But if you actually gave it certain prompts and you then brought that constellation of ideas together in some way or another, that might be more compelling. But I'm also wondering, if that does all go into the pipe, whether at some point it then flips.

So all of the artists, all of that originality in the way of prompting and in the way of content, and the iteration that goes with it, is then just absorbed right into the engine. So if somebody then wanted to turn the key in the future and come to the same artistic output, they probably could, because it had been told that before. So that kind of evolutionary output becomes an interesting one. And then does the artist with 20 years' experience still have the advantage, or are they now more level with the eight-year-old who is putting in the same prompts? It's going to be interesting to see.

[00:36:17] Gareth Rydon: That's such a good question space as well, because what's really interesting with the image generators, Midjourney or DALL·E 3, is phenomenal. If you give it a brief where you say, you know, 1950s-style imagery, a brief about how art was produced previously, it does it pretty amazingly. But what's really interesting is, if you say, create a future concept, it all looks the same, which is super interesting. When you think about how it's been trained, it's been trained to really understand generations of art, so it can produce art in that form. But creating something new: it is creating something new, but it all looks the same, no matter what tool we use to generate.

It's really interesting, and I tried it with all sorts of different ways of prompting, but anything where you say, design art for the future in a style that might exist in 2030, and then say the same prompt but for 2060, it kind of looks the same. So, to your point around whether the 8-year-old and the artist end up being able to create the same thing with the same prompt: probably yes, which is a scary future, and my daughter can probably draw better than me today anyway. But creating something new, something that hasn't been thought about, still lies within our capability, because if you go to the fundamentals of how a large language model works: you type an email in Gmail and it's completing the words for you. In principle that's the same thing, trained at a massive scale. So it's being generative, but it's saying, I'm predicting, based on this massive amount of data, what you're going to do next, based on what you're talking to me about. So it's only ever giving you an answer that it's creating based on what already exists.

So it can't create something fundamentally new; it uses the themes and patterns in the data it's been trained on. It's just an interesting space, and if you get to try it, have a look at how, when you try and create future works, it's like, oh, that's AI. It just looks like an AI thing that is a mishmash of everything that came before. But maybe that's how all art was created. I don't know; I'm not a big artist anyway.
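The autocomplete analogy Gareth uses can be sketched as a toy next-word predictor. This is purely a hypothetical illustration (the corpus and function names are made up, and real large language models learn statistical weights over tokens rather than counting word pairs), but the principle he describes is the same: the model only ever predicts continuations that follow the patterns in its training data.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently seen continuation, like autocomplete."""
    if word not in counts:
        return None  # never seen in training: the model has nothing to offer
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigrams(corpus)
print(predict_next(model, "the"))    # "cat": the most common continuation seen
print(predict_next(model, "zebra"))  # None: never appeared in the training data
```

In a real model the counts are replaced by learned parameters and whole words by tokens, but the limitation Gareth points to holds: the predictor can only recombine what its training data already contains.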

[00:38:21] Chris Hudson: But it feels like it's definitely in its learning stage, it's still in its infancy. In 50 years, or even in 10 or 5 years, it'll probably have the ability to prompt you on things, rather than you having to come up with these very carefully crafted questions that people are running training courses on at the minute. So it feels like that will all be there, you know: did you want this, did you really mean that? It's doing all the stuff that Google probably does, and everyone's racing to evolve that technology and that interface, probably from a UX design point of view.

The future point is also interesting, from the perspective that if you look back through the decades, evidence of what would become, say, now presents itself in one way or another through different social communities, pockets of society and enterprise; points in history kind of point to the future, and you can see that in retrospect. I've done some other work, more in the rail and infrastructure area, where we were looking at putting in rail and new infrastructure for an urban environment, spatial design, where you're thinking, what do we need in a hundred years' time, and how do we work that out? As an example, you can usually find, or extrapolate, a way of getting to who the people are, what their interests and values will be, where they'll live, what they'll need to use and how they'll live. And you can usually then find evidence of some of that happening in the world today somewhere.

So coming back to the prompting, it could be that if you had a good enough grasp of where evidence of the future sat in the world today, then you would be able to paint that picture, but obviously it would just take a bit of effort.

But yeah, thinking about the next 10, 20 years and future generations, and taking a step back from what you were saying previously: what do you feel you're personally or collectively contributing towards, and how do you describe that and rationalize it in your own head?

[00:40:07] Gareth Rydon: I think we really want to think in the Australian context, and we want to really double down on helping the small to medium businesses in Australia that our economy's built off the back of. We want to help them grow, because 10, 15, 20 years from now, if we've got a thriving small and medium enterprise economy in Australia, that's going to benefit us all. How we've got here today, other than digging stuff up out of the ground, a lot of the success for us as a nation, has been built off the back of small and medium businesses. We're really wanting to focus on how we bring the tools and capability of AI to really help them grow, because that future-proofs our economy. So it's a hardcore commercial capitalist sort of approach, but in some way it contributes to success for all Australians, and to us having that really sustainable economy where we're not having to constantly rely on digging stuff out of the ground to pay for all that critical infrastructure and make our economy survive. We can move to 2, 5, 10 person businesses that are massively successful locally and ultimately globally as well.

[00:41:12] Chris Hudson: I suppose the possibility of more output with fewer resources is kind of where you're landing there. And do you see there being a tension? If everyone had a four-day week, or a three-day week, or a two-day week, and all of a sudden there were a lot of people out there with skills that had been replaced by AI in one way or another, do you talk about that or look at that in any more depth?

[00:41:33] Gareth Rydon: We talk about augmenting, not replacing. Where we've come from with AI and machine learning, robotic process automation, that was all about, how do I RPA (robotic process automation) this and sack 100 staff? I've been in the big banks; I've seen how that works. We want this to be about, how do I augment the individual, give them superpowers so they can do amazing things, liberate their time to spend doing things they love? So output increases. We're not saying let's just cut as much out as we can and get rid of as many people as possible. Rather, how do we make those 5 people worth 10 people, not get rid of 5 people? So that's really important for us. And that's the first thing we often say; we've said no to one or two clients where it's that process optimization efficiency:

I need to get rid of these four teams; we're going to do it with AI. And it's like, great, but that's not where we want to play. If you want to focus on growth, on how to grow and augment, we're happy to work with you. But that's not the space we're playing in.

[00:42:34] Chris Hudson: That would be a practical one. And probably somebody who has used one of the big consultancies to come in and do that kind of work would be expecting to have a normal conversation about the fact that there needs to be a restructure, because it looks like a cost inefficiency, and therefore margin will improve if we do X, Y, Z. It's all very cold, rational and businesslike, but that sort of thing can happen.

That's probably been the domain of a lot of technological discussions at a board level in the past, where a CTO has said, we've got this thing, it's basically a workflow management tool, or it's this, or it's that, and it's going to help us with productivity. They're having to justify it, and cost cutting is probably one of the most powerful numbers that you can throw onto the boardroom table, so I could see all of that happening.

I think what you're suggesting there is a noble way of looking at it, which is around enablement and the possibility of what people could do, should the power of the technology be placed into their hands. And it feels like one that you could probably quantify in the end. It's probably getting to that stage now where you can say, based on previous work that you've done, or that teams have been able to look at, what the productivity is, and you can probably put metrics, outcomes and a business case to that.

I'm sure it wouldn't be quite as compelling as the 2 million saved in salaries and whatever else, but you could still make quite a good case for it, I should think. And I think it is interesting to tie that to a parallel conversation that's going on in the space of employee experience, and obviously mental health within the workplace, and hybrid working, and all these things.

And there's a massive conversation around the fact that if you do more for yourself outside of work, then you bring more of yourself into your work, and that ties in a little bit to what you were saying. But are those connections also being made, that you're also helping with the wellbeing of workforces, to essentially liberate them from the things that they hate doing, as you were describing, but also give them the time to do what they love doing? And then what are we seeing as the big, I suppose, positive news stories from having made some of those changes?

[00:44:58] Gareth Rydon: Yeah, absolutely that. You're giving people a feeling of being truly empowered to deliver awesome things and still have their time liberated.

And we're seeing that it's hard, though, because, as you said, we've lost work because we've said that's not what we're here to do, and the other way would have been a much easier path. But on that point, I can speak from personal experience, for us, for our studio. We keep saying how we want to liberate our time; we want to work a four-day week. And I think three weeks ago it was pretty great when my co-founder Tom, at a stand-up on Friday, said, why are we meeting on a Friday? And we all went quiet and said, what are we not doing in how we work that we're having to do this on a Friday, a Friyay as we say? It was a fantastic challenge. And then we took that on and said, okay, well, what do we need to do? How can we continue to augment ourselves to liberate our time on Friday?

So since then, we've started to really think about it. First of all, we said, great, we shouldn't be meeting on Fridays, so we don't have meetings on Friday. We try to do other things, use our liberated time to do things that matter to us, and there's value that comes back on a Monday when we're ready to be connected. But we're also challenging ourselves, because it's not just that we won't meet or work; we're a startup, and there's a thing about getting cash flow in the door as well, so it's a pretty big trade-off for us. But we are seeing the ability to do things at such a high quality, so much faster, and that is really helping us get through. I reflect on how much work I can get through in an average week now, say when I'm doing customer research. Customer research can consume a lot of time, and the amount of research synthesis I can get through at a high quality, compared with five or six years ago, when that'd be a team of four for a month, is what I can now do in about a week and a half with the right use of those tools.

So we're making sure we're applying it to ourselves. Otherwise it's a hollow message when we say it to our clients: like, well, you guys are working nonstop, is it really true? We're seeing it and experiencing it. And I think about, for me, I want to spend more time with the family.

I want to be able to cook dinner on a Friday. I want to be home, ready to hang out with the kids, do school pickup, go to their swimming lessons. So that's how it's manifesting for me. And I feel a lot more balanced as a human now, a lot more like what it should be like: not, I'm going to grind myself for the next 30 years, then I'm going to retire, and then I'm going to live.

It just doesn't seem like the right way to go about it.

[00:47:20] Chris Hudson: And I know you mentioned at the start that you were using it to prepare for this interview, but what has it enabled you to find out about yourself, in the way that you work, that you wouldn't otherwise have known?

[00:47:30] Gareth Rydon: I think what was interesting, when I got it to critique what I was going to talk about, was that the critique came back and showed me I should talk about how I normally talk: people want to hear the stories, and I love telling stories. Because of what I was asking, it was saying, look, I need to emphasize what I know, what I'm good at, so that I can appeal to the audience and they can actually get something out of it. And what I learned through that was how useful storytelling is, because that was a lot of the critique that came back from a couple of prompts I did with the AIs: focus on the stories, people love to hear stories. I'm like, yeah, I love to hear stories as well. Don't just go through a case study of a company; talk about it as a story.

So I think that's what I learned about myself: storytelling is a powerful tool, and I should try and lean into it more and more.

[00:48:14] Chris Hudson: And more broadly for you in the way that you work now compared to say three years ago, how do you think it's changed you? And what have you learned from the process of taking on the technology and embracing it so fully?

[00:48:25] Gareth Rydon: How important spending time with people is, because so much more of my time is freed up from sitting in front of a PowerPoint deck, sitting inside a Word document, sitting inside my emails, and how much I can create when I'm working with people. And maybe that's just my style, but I'm finding I'm on the phone so much more.

I'm having walking meetings. I'm going to meet people. I'm able to listen, not: okay, I've got seven meetings on today; in this meeting I've just got to wait for a break in the conversation so I can say what I need to say, get out what I need, and then move on. I can really immerse myself in a conversation, like that conversation we had last week or so. It was awesome, because I really felt like I could be present. I was like, this is great, I've got the time to sit down and learn from you and the stuff you're doing, and talk about what I'm doing, and how much I could take away from that. I just didn't have that luxury before; I would have to force it, and there were just a couple of moments when I could do it, and then it was just cancelled out.

The volume of the other type of work just completely overpowered that.

[00:49:27] Chris Hudson: It's a precious thing, right? Meeting up with people and you'd know it if you only had one meeting a week or one meeting a month and you're in a privileged position to be jetting around the world and doing other things maybe with the rest of your time.

But actually, you'd make it count if that's all you had. And I think that element of being present, actually listening actively, but also being able to participate, probably with more focus and, I'd say, more sincerity, more authenticity as well, is a great thing. And it's a really rare thing, particularly in the grind of work. If you're working in a large corporate somewhere, that could be quite difficult to get to, because people's heads are in different places and you've got 20 people around a table; it's hard to navigate at an individual level. So I totally respect that, and I love that you've been able to do it.

You're kind of staying true to your ethos and your direction, and you're pretty much pushing yourselves to cross-examine what each other is doing and how you're practicing as well, which I think is healthy. It's inviting more self-reflection, but it's also giving other people the impetus to try and learn a bit more about themselves as well. So it sounds like, from what you're saying, if you were doing this within your team at work, within your organization, you could almost create a bit of a culture and a bit of transparency around how you were using it and how you were feeling as a result of using it.

And that sort of thing too, because it can be a shared experience of learning rather than just an individual one.

[00:50:49] Gareth Rydon: You articulated that so nicely. Where the companies we're working with have the most success is when they get a group of people working and experimenting together, because they have these sessions where they're like, oh, I did this, what do you think of that? That shared experience, especially in this space, is massively powerful.

So what we often do is, if we demo a tool, we always say, please have at least three or four people from your company come along. And then, as a tip and a word of advice for the company, we say, after the meeting, have a debrief with the group, not with us, with your team, because they leave the demos going, oh, what if? And then they have that moment after, and it's so powerful. We often see the next day they're like, oh, we were chatting and we thought about this, and then someone's going to do this. It's that shared experience. There's a reason it's called generative AI, so do it with others.

If you're tapping around on ChatGPT to start, great. But it's better if you're doing it with colleagues, sharing the latest thing that you did. Oh, by the way, I took a photo of my bike and I got ChatGPT to write me a cycling training program, how cool is this? And I was like, I wonder if it could do this. And then you start to riff off each other and you get all these use cases coming out.

But yeah, that's spot on. That's probably one of the most important things. If you're in a company, have one or two people there with you on your journey and share what you're doing. Build on each other, because back to that skeuomorphic example, we're only going to uncover the radio moments, the go to India and interview Gandhi moments, by testing it out. Test it out with more people and we'll uncover those opportunities.

[00:52:23] Chris Hudson: So I think curiosity as well, and the time that you gain back would obviously afford you more curiosity too, because otherwise you're just stuck in the grind. If you've got the time, then you can be thinking about all of these slightly obscure, tangential ideas that pop into your head when you're on the way back from your walking meeting, or on the Friday that you've got off.

But it feels like otherwise it's back to back, there's no room for any of that, and creativity just doesn't flow. There was another podcast episode that we had a few months back with Dr. Gus, and he was talking about his pilgrimage between Melbourne and Sydney and how creativity just flows from boredom, and you have to kind of embrace that a little bit for the real sparks to fly. So I think if we had a bit more time, and then we had the technology to quickly convert that into tangible outputs, things that can be of value for the businesses but also for your personal life, as you were describing, that seems incredibly constructive.

So it's really good. Have you worked with kids at all? Have you seen how they've been using any of the tools and how younger generations are kind of interacting with it?

[00:53:28] Gareth Rydon: Yeah, there's a couple of interesting spaces I'm seeing. First of all, I'm working with my kids, getting them familiar with generative AI, because it's going to be their new world. And the sooner they can understand what it means, how to use it and its context, the more successful they're going to be.

So for my kids, we've recently relocated, and they're youngish, so we're having to get them excited about the move. One way was that they got to design a poster for each of their rooms. So I showed them how to use DALL·E and took them through it, explained what it was. I said, I want you to imagine that you've got these wonderful ideas in your brain, and DALL·E is going to help you bring those ideas to life.

And they're like, okay. But what was so fascinating about it was, A, what they came up with just blew my mind, because they had this unrestricted creativity and it was harnessed into something that could be created, which was just amazing. Like, my daughter created a poster of the New York marathon, but all the runners were white English bulldogs wearing Nike trainers.

[00:54:39] Chris Hudson: Okay. I can picture that. 

[00:54:40] Gareth Rydon: But then the other thing is it's turned out to be an incredible teaching tool, because my son's a little bit younger, and he would say something and I'd say, oh well, what's their face doing? Why is their face doing that? How are they feeling? And he's like, oh. It's really getting him, at the age he is, to express his imagination.

But the beauty is that as he expresses it, he gets a result. So he's seeing, oh, if I say that, that's what it actually means. Whereas before they were abstract concepts. I'd ask, does that person look happy? Oh yeah, they're smiling. How can you tell? And then he'd see it. But the other side of it, and my wife's a psychologist, the big risk side of it, is that there are quite a number of instances now where, because it's hard making friends as a kid, especially as a teen, there are gen AI solutions that are like your friend, and you chat with it like a friend. In one case I'm like, okay, it's someone you can confide in. But you're missing out on learning the skills to interact, and the feedback from these things is, well, why would I? People are horrible, whereas my AI buddy is awesome. It's great to chat with, it always says nice things.

It's always helping me out, giving me ideas. Why should I bother going and talking to another person? So that has me a bit concerned, especially for a developing brain. How do you learn how to make human connections, going through the challenges but also the rewards of making a true, meaningful human connection?

Well, with an AI buddy you don't have to do that. You just get all the rewards and you don't deal with the consequences. So do you lose your innate ability to connect and communicate with other humans? That side of it, for me, is the concerning side.

[00:56:18] Chris Hudson: I think that's particularly true as it gets closer to a real interaction. Right now, it feels like it'd be noticeable. It's a bit like sitting a driving theory test: you're in a sort of simulator environment, a cat runs across the road, you're going to hit the brakes. You understand what you have to do in that situation, but a social context is obviously a lot more intricate than that.

And you'd like to think that you'd be able to spot the bot from the real person now, but obviously deepfakes and other things are already everywhere; the amount of fake content and misrepresentation that exists online is real. So for kids, and increasingly younger kids, having to navigate some of that, it's getting a lot harder. Even for adults, it'd be a lot harder to distinguish one set of content from another.

It's a question around trust, and around experience and knowledge, knowing what you're getting into a little bit. So it feels like if we are skilled up, we have to help other people open their eyes to what the possibility is, but go in with an understanding, make it safe, make it trustworthy in one way or another.

Because there's probably an element of risk out there as well. Like any kind of design or creative tool, it could be used for bad things as well as for good things. So there'll be that side of it too, which would be the other part.

Maybe we'll discuss that as things come up on a future podcast episode as well. But my mind's still buzzing about all the things that we've talked about today: the future, the children, the generations, the world of work, life at home.

Yeah. Is there anything big in your locker of stories related to AI that you still want to say, about the amazing things it can do, or any words of caution, anything like that that just popped into your head?

[00:57:55] Gareth Rydon: The one thing that I see a fair bit of, and this is more just to help with how people might think about it: 90 percent of the business world is on Microsoft, and Microsoft has Copilot coming out, which is built off OpenAI's models. We're using Copilot. It's super cool.

There's some really cool stuff it does, but I'd caution people against thinking that once you've got Copilot, problem solved. The example that I learned from Ben, one of my co-founders, is that when vacuum cleaners came out, people were talking about a revolution in cleaning. It's going to free up time from housework.

Everyone's going to have all this time back because of the vacuum cleaner. All that happened was the standard of cleanliness went up, so there was no change in time; you were just expected to maintain your house to a higher degree of quality. So, not so much a word of caution, but I'd say to people: if you think Copilot's the answer, that once you've got it, it'll do everything you need and you can just keep working how you're working, you'll miss out. If you want to really take that next step, if you want to really see what's possible, create new things, augment yourself and free up the time, still take the time to push what the tools can do beyond just having it across all your platforms.

There's so much more it can do. I do see the attitude of, we've got Copilot, it's all good, problem solved. But so does 90 percent of the business world. So do you just want to stay where you are, or do you want to use it to take that next step? That's probably one thing that I'm observing. If you do get Copilot and it's switched on, awesome, but maintain the rage: keep taking pictures of your fridge, writing your shopping list, playing around with it, because there's so much more opportunity. We've barely scratched the surface of these tools.

[00:59:33] Chris Hudson: That feels like an ever-present theme when it comes to technology, because on your phone, in your car, wherever you are, 95 percent of the technology is not being used by most people. It feels like it's just being used for the basics. I still need my phone, and a smartphone can do everything.

But it's like, iPhone 13, I'm just going to use it to make a few calls and send some text messages, and that's about it. The level of comfort around possibility is the part that's usually in question. And I think the testimonies, the stories like you've been kind enough to share today, are all part of it.

It's understanding what other people have been able to do and it might spark another thought and it might then generate something that hasn't been done before. So, we can all be pioneers. It feels like the power is in our hands.

[01:00:12] Gareth Rydon: Yeah, absolutely. Actually, one last thing I was thinking: for the Company Road podcast, there are tools that exist right now, and I know Verbatim AI is a great Aussie startup doing this, that can convert all of what we talked about into any language and change our lips to sync with the language we're speaking. So maybe there's an opportunity for Company Road to try it out and expose more of the world to the great stuff you're doing on your podcast as well.

[01:00:44] Chris Hudson: That's interesting, because I've got a map of the world which I look at from a dashboarding point of view, and you can see who's popping up and who's listening where. Through Asia and Africa and different countries, you've got to wonder, is it inclusive enough for them?

So that's a great suggestion. Thank you.

[01:00:58] Gareth Rydon: Of course, you're welcome.

[01:01:00] Chris Hudson: Awesome. Well, I want to end with a slightly cryptic, but also maybe slightly philosophical question, which is: what's one thing that you personally, as a human being, can think, feel, say or do that AI will never be able to do for you?

Do you think?

[01:01:16] Gareth Rydon: I believe it's my ability to make my wife and kids feel truly special and cared for. AI is never going to be able to do that.

[01:01:27] Chris Hudson: It's a good answer. Even if you've used it to fill the fridge.

[01:01:30] Gareth Rydon: Yeah, it can definitely save a lot of time from shopping.

[01:01:33] Chris Hudson: Brilliant. All right, well, I really appreciate your time tonight. Thanks so much, Gareth, for sparing the time to have the chat and tease out some of the things that I know people are thinking about a little bit. Often we're swept into the day-to-day and the norm of work, and some of these more future-focused and speculative conversations get parked for another time.

And now really feels like the time for everyone to know about it, understand it a little bit, experiment with it comfortably, and just figure out what it's all about, within your own capacity, within your own world, within your own sphere of influence and connection as well.

So I really appreciate the clarity with which you've been able to walk us through some of the stories, and some of the other more far-out and left-field conversations that we were able to have today as well. So thank you so much.

[01:02:21] Gareth Rydon: Welcome. Thank you, Chris.

[01:02:23] Chris Hudson: Okay, so that's it for this episode. If you're hearing this message, you've listened all the way to the end. So thank you very much. We hope you enjoyed the show. We'd love to hear your feedback. So please leave us a review and share this episode with your friends, team members, leaders if you think it'll make a difference.

After all, we're trying to help you, the intrapreneurs, kick more goals within your organizations. If you have any questions about the things we covered in the show, please email me directly at chris@companyroad.co. I answer all messages, so please don't hesitate to reach out.

And to hear about the latest episodes and updates, please head to companyroad.co to subscribe. Tune in next Wednesday for another new episode.