The Dumb Monkey Show - Simplifying AI for business leaders

Deepfakes, Sora 2 & the Price of Trust: How to Fail Fast (Safely) with AI

Aamir Qutub Season 1 Episode 15



Are you trusting what you see online? 

In this episode, Aamir Qutub and co-host Davina Montgomery explore how AI-generated content is reshaping trust, creativity, and business — and what happens when companies rush into automation without preparation. 

From OpenAI’s new Sora 2 model that creates ultra-realistic videos to a $440,000 AI-written government report gone wrong, this conversation dives deep into the balance between innovation and integrity.

You’ll learn how to use AI boldly — without losing your authenticity — and why training your people before deploying new tech might just save your brand.  

What’s inside:
• How OpenAI’s Sora 2 is changing video, social media, and storytelling
• The rise of deepfakes and misinformation — and how to spot them
• Why authenticity and trust are now business currencies
• Real-world AI fails: Deloitte’s fake report & the Commonwealth Bank voicebot debacle
• How to “fail fast” the right way: train first, test safely, then scale confidently  

Resources & Links:

📲 Dumb Monkey AI Academy → https://dumbmonkey.ai/

📱 Dumb Monkey App → https://dumbmonkey.ai/dl

📘 The CEO Who Mocked AI (Until It Made Him Millions) → https://mybook.to/dumbmonkeypodcast

🚀 Enterprise Monkey → https://enterprisemonkey.com.au 

🛠️ Tools & Platforms Mentioned 

 Sora 2 (OpenAI) → https://openai.com/sora

 ChatGPT (OpenAI) → https://chat.openai.com/

 Adobe Content Authenticator → https://contentauthenticity.org/

SPEAKER_00

Imagine being the biggest consultancy in the world and the government paying you somewhere around $440,000. Imagine using ChatGPT to generate that report.

SPEAKER_01

Can we have our taxpayer money back, please?

SPEAKER_00

I think they are issuing some refund now. We are working with a law firm and they have just bought ChatGPT Enterprise licenses. And they were saying, we are going to give the licenses to the people and then we can do the training next week. And I said, no, no, no. Private organizations are focusing on getting benefit out of it. It needs to be the governments who need to be worried about authenticity.

SPEAKER_02

And let's face it, government's not going to catch up quick enough.

SPEAKER_00

But from a business perspective, you also need to look at what's possible: how can I ensure that I am actually using the best technology that's out there, so that I am not stuck behind my competitors?

SPEAKER_02

Hi everyone, and welcome back to this latest episode of the Dumb Monkey Show. We are joined as always by my co-host, Aamir Qutub. How are you?

SPEAKER_00

I'm good. How are you?

SPEAKER_02

I'm really well, thank you. Aamir, it's been, as always, a pretty exciting time in the world of AI.

SPEAKER_00

It has been. There's so much happening. We met last week, and there's so much that has changed in one week.

SPEAKER_02

I know. I feel like we've spent maybe 20 minutes, half an hour before sitting down here today just talking about all things AI, just chatter.

SPEAKER_00

Absolutely. The crazy world of AI.

SPEAKER_02

The crazy world of AI. So let's dive in and maybe talk through some of the things that have been happening in the news, particularly in company news, over the past week or so, things that have caught our interest and that we've been having discussions about. And then there's some news about some of the cool new things happening at the big end of AI.

SPEAKER_00

Yes.

SPEAKER_02

Maybe let's start with what's new. Because as we know, this rollout of AI, new products, new developments, new platforms, they're all coming thick and fast at the moment. Open AI has delivered another banger.

SPEAKER_00

Yes, they have. And every time they come up with something new, people say, oh, it's revolutionary, but I'm not saying it just for the sake of it. They have actually delivered something that is going to change the social media space forever, and the video creation space forever. They already had a video model called Sora that allowed you to generate videos with prompts, but now they have a new model called Sora 2, and it's next level. The quality of the videos that come out of it is unbelievable. But most importantly, you can actually take a photo of yourself or record a video of yourself, and then, with the help of prompts, create any type of video. It's not just you as a talking head. You can be dancing, you can be acting in a movie, and then you can actually share your avatar, which means people can collaborate with you and make videos with you. So you can actually make a video with Sam Altman, or any celebrity who has allowed you to record a video with them.

SPEAKER_02

Oh, this is incredibly cool. So tell me first off, what do you think the coolest use of this would be that you could think of right now?

SPEAKER_00

So there are a lot of cool uses in terms of how people are using it. It's not only creating realistic videos; it's creating anime videos, a lot of animation stuff, a lot of explanatory videos as well. The range of videos it can create is amazing, from scenes to short videos to animations to explainers. So there are a lot of different use cases, and one of the biggest, coolest use cases would be storytelling. Although the camera and the mobile phone were accessible to a lot of people, it still took a lot of effort to record yourself and create a video, and you needed a lot of video editing gear, a lot of lights and things like that. But now anyone who can imagine a scene, who can think of a story, can use prompts to create a video. And it's not like the earlier models, like Veo 3 or the first Sora, where you would give a prompt and it would create one single scene from one single angle. Now you can write a prompt and it can set up multiple scenes, which means you have character continuity. You know how in actual movies you're recording from a distance, and then you do a close-up and a shot from the back? It can do all of that.

SPEAKER_02

So it's incorporating cuts, it's allowing you to come in from multiple angles. Yes. And it's probably unleashing a little bit more of what we would call high-end video production values into an AI video generator. How realistic is it? How realistic is realistic?

SPEAKER_00

I would say it's as close to realistic as it can be at this stage. It would be very difficult to distinguish a video that's generated by AI really well. This time, to launch Sora 2, they actually released a video explaining what Sora 2 can do, and the whole video was created by Sora 2. You know how Steve Jobs used to come out and launch Apple products, or someone would come and talk about them? This time they did the whole launch with the help of Sora 2 generated videos.

SPEAKER_02

It's amazing how fast this is developing, isn't it? Because it really wasn't that long ago that you'd see video produced by AI and sort of go, yeah, look, it's cool. It's cool and fun to look at. But would you use that as part of your corporate communications? You'd have to be pretty creative about how you sold that particular message. So this opening up of the potential of AI-created videos sounds like it would have some huge upsides for business. And we're doing this in a business context. So for people who are using it for their business storytelling, their business communications, their explainer videos, things like that, it seems like it would be a really useful tool. What do people need to be careful of? Because nothing is infallible, and certainly not AI.

SPEAKER_00

So from a consumer perspective, the biggest challenge is: when these models become so good, how do you really distinguish between what's real and what's not real? Especially when we talk about people who have not been exposed to AI, like my dad or my aunt or my uncle, who consume a lot of information and believe what they see as true. There was already a lot of fake news out there, and now it has all become extremely believable. Which means, for people who want to promote propaganda, and there's so much hate in the world at the moment, against minorities, against immigrants, against all types of different classes of people, it has become so easy to create propaganda and spread that hate.

SPEAKER_02

That was going to be my next question. I'm sure I'm not the only one that was sitting there thinking this sounds exciting, but it all sounds equally problematic. The risk of this being used at a large scale as a propaganda tool, as hate speech or hate messaging, as fake news that's damaging, but also at a personal level. You know, people can do some pretty horrible things to each other online, and do all the time already. What sort of protections are in place? I think it's really important to dig down and at least start that conversation: what are we going to do with this? How are we going to protect people? How are we going to hold people who produce these videos accountable? What needs to be done at a system level and at a community level?

SPEAKER_00

Yeah. So OpenAI has put some good security considerations in place. The way they have set it up is that you can record your own video, and then you are the owner of your own avatar. Only when you allow other people to collaborate with you can they collaborate with you. And if you're sharing a video out of the Sora app, it puts a watermark on it that says it was produced by Sora.

SPEAKER_02

And it can't be removed?

SPEAKER_00

Everything can be removed, yes. So it can be removed. They have got some meta tags in there as well. But then today I was testing another platform called HeyGen that is used to generate videos, and they have got Sora 2 in there. So I actually uploaded one of my friend's photos and thought I'd see if it could generate a video out of that. It was able to generate a video without any consent or any protection. So although they have built the protection into their own application, the APIs that they have exposed might still be allowing people to generate these videos. Something to be really, really aware of.

SPEAKER_02

Yeah. These tools can be great, but they're out in the world, they're being used, they're available to everyone.

SPEAKER_00

Yes.

SPEAKER_02

Be mindful that everything you see might not be what it seems.

SPEAKER_00

That's right. Yes. And the message I want to communicate to my family is: if you receive a video call from me or a voice call from me, don't trust it.

SPEAKER_02

Double check and send me a text message, you know, two-factor authentication. That's right. Every message from me from now on.

SPEAKER_00

Because we are seeing a lot of these scams going on where distressed daughters are calling in. Absolutely. I've had messages.

SPEAKER_02

I've had messages saying, hi mom, it's me, and I've lost my phone and can you send me some money to this? And it's far too polite and and far too grammatically correct to have come from either of my kids. So I knew that it wasn't them.

SPEAKER_00

Fair enough. Yeah.

SPEAKER_02

Um, which opens up a new conversation that you and I were having just earlier, about authenticity and the importance of authenticity in the age of AI. AI development is leapfrogging ahead of any kind of policy and protocols, particularly at a government level, and even at a societal level; our understanding of AI hasn't caught up with what AI is doing. So in this world of super, super fast change, what's being done in the world of development to bring authenticity back onto the table?

SPEAKER_00

So there are some organizations that are working towards it. We were just talking about a platform from Adobe, the Content Authenticator, and what it allows you to do is this: if you're producing any content with AI, it puts metadata in there recording that it has been produced by AI. If you post a video on LinkedIn that's generated by AI, LinkedIn generally marks it as generated by AI. But on the other hand, there are open source models and tools, like Higgsfield, through which you can generate videos where you can remove that metadata or those meta tags. So there are people who are trying to catch up with it, but a lot of private organizations are focusing on developing the technology and getting benefit out of it. They are not the ones who are actually concerned about authenticity.
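To make the mechanics above concrete, here is a toy sketch of how provenance metadata of this kind works: a signed claim binds a producer label to a hash of the content, so any edit breaks the check. This is only an analogy, not the actual Adobe/C2PA format; real content credentials use certificate-based public-key signatures, whereas this demo uses a shared secret, and all the names and data here are made up.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real content credentials use signing certificates.
SECRET = b"demo-signing-key"

def make_credential(asset_bytes: bytes, producer: str) -> dict:
    """Attach a signed provenance record to a piece of content."""
    digest = hashlib.sha256(asset_bytes).hexdigest()
    claim = {"producer": producer, "sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(asset_bytes: bytes, claim: dict) -> bool:
    """Check that the claim matches the asset and the signature is intact."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    if body.get("sha256") != hashlib.sha256(asset_bytes).hexdigest():
        return False  # content was edited after signing
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

video = b"fake video bytes"  # stand-in for a generated video file
cred = make_credential(video, "Sora 2")
print(verify_credential(video, cred))        # intact content -> True
print(verify_credential(b"tampered", cred))  # edited content -> False
```

Note how this also illustrates the point made in the conversation: the credential travels alongside the content, so anyone who simply discards the claim, as some tools do with meta tags, leaves the content with no provenance at all.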

SPEAKER_02

Yeah, they're not the ones that are paying the price, are they?

SPEAKER_00

That's right. So it needs to be the governments uh who need to be worried about authenticity.

SPEAKER_02

And let's face it, governments aren't gonna catch up quick enough. So if we don't start making a noise about it at a community level, then they're not gonna act.

SPEAKER_00

They won't, yes. And it's going to be a bigger problem going forward. But I would say, again, from a business perspective, you also need to look at what's possible: how can I ensure that I am actually using the best technology out there, so that I am not stuck behind when my competitors are embracing some of it as well? Yeah.

SPEAKER_02

It's such a fail-fast environment now, isn't it, with business and technology development. There have been some really interesting things happen over the last few months, with companies in the corporate world releasing AI or introducing AI into their workflows and then getting some pretty hectic pushback and very public fails on the back of it. Of course, the Commonwealth Bank in Australia rolled out an AI voice bot in July, and almost immediately after the rollout, their call centres became very busy. Staff had been made redundant on the assumption that the voice bot was going to take over some of that work, but the bank found that those units were not only busier, the workload had increased, and they had to try to rehire staff and bring people back in. It was a big public fail.

SPEAKER_00

And it came as a surprise to me. How would such a big mess-up happen? Did they just say, okay, we are going to deploy voice agents and at the same time fire people, without any testing? I cannot imagine how such a large organization would make such a huge change, let people go, and then have to rehire the people, you know, the next day.

SPEAKER_02

To be fair, they certainly came out, and their messaging was: look, we made a mistake. We haven't done this the right way. We're reviewing our internal processes to make sure that we don't do that again. We're doing everything we can to apologize to those people and to give them options as to what works for them, while acknowledging that the damage has been done. Or certainly, from the union's point of view, the messaging is that the damage has been done. But I think it's a really good example of what happens when you put fail fast in front of your business reputation, particularly in this space where technology is developing fast and we haven't really done a whole lot of testing of the outcomes. Everyone's introducing AI, but we haven't had this huge long run of time to go back and actually look at what it all means. We're really at risk of going down the same pitfalls that we saw when social media first developed. You know, social media was developed to fail fast as well. And some of those intrinsic problems that we have with social media now are embedded because there wasn't any accountability.

SPEAKER_00

Sorry, talking about social media, what it reminds me of is that OpenAI has launched their own social media platform as well. It's called Sora.

SPEAKER_01

Oh, wow.

SPEAKER_00

Yes. And it's like TikTok.

SPEAKER_02

Which I'm actually disappointed they didn't call Soda Stream, but I get that that name has been taken.

SPEAKER_00

Yes. And it's like TikTok, but it's a TikTok for videos that are generated by AI. All of the videos in there are going to be generated by AI. People can just create their own videos, write a prompt, push the video out, and people can follow it, like it. That's a new type of world we are moving into, where social media is going to be dominated by AI as well.

SPEAKER_02

So we're getting technology that's disrupting technology now, aren't we? That's the way this is working.

SPEAKER_00

Yes, yeah. Yeah. So it would be very interesting to watch this space and what happens over there.

SPEAKER_02

Yeah. So you and I were talking just outside the studio about some of these new advances that are coming through. OpenAI is a really good example of that: being able, within ChatGPT, to make bookings and basically to run a bit of your CMS systems effectively. The kinds of things that you would have typically done through a website, running them through your AI, and what that means.

SPEAKER_00

Yeah, because they have now launched apps inside ChatGPT, which means that instead of going to booking.com, if you just say, hey, find me a hotel, it will look through booking.com's APIs and find the right hotels. Right now, what you're doing is just clicking on the link, going to the website, and making a booking over there. But with this ChatGPT app, you don't have to do that, because it can make the booking for you then and there. Similarly, if you want to make a reservation or purchase something, they are doing the same integration with Shopify as well. They want you to remain in the ChatGPT space and do everything over there. But the opportunity it presents for businesses is that they have launched what they call an SDK, or software development kit, and they are allowing developers to develop connections from their websites or applications to ChatGPT. So if businesses want to be found, and if they want ChatGPT users to use them, they should now be focusing on developing an app inside ChatGPT that talks to their website. Yeah.
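The pattern described above, an assistant calling a business's backend instead of sending the user to its website, can be sketched roughly as follows. This is only an illustrative sketch, not OpenAI's actual Apps SDK: the tool schema, the `search_hotels` function, and the hotel data are all hypothetical, but the shape, describe an action as a named "tool", then dispatch the model's call to your backend, is the general idea.

```python
import json

# Hypothetical in-memory inventory standing in for a booking backend.
HOTELS = [
    {"name": "Harbour View", "city": "Geelong", "price": 180},
    {"name": "Station Inn", "city": "Geelong", "price": 120},
]

# The business advertises its actions as tools: a name plus a parameter schema.
TOOLS = {
    "search_hotels": {
        "description": "Find hotels in a city at or under a max price",
        "parameters": {"city": "string", "max_price": "number"},
    },
}

def search_hotels(city: str, max_price: float) -> list:
    """Backend logic the tool call ultimately reaches."""
    return [h for h in HOTELS if h["city"] == city and h["price"] <= max_price]

def handle_tool_call(call_json: str) -> str:
    """Dispatch a model-issued tool call to the business's backend."""
    call = json.loads(call_json)
    if call["name"] not in TOOLS:
        return json.dumps({"error": "unknown tool"})
    result = search_hotels(**call["arguments"])
    return json.dumps({"result": result})

# The assistant would emit something like this when a user asks for a hotel:
reply = handle_tool_call(
    json.dumps({"name": "search_hotels",
                "arguments": {"city": "Geelong", "max_price": 150}})
)
print(reply)  # only Station Inn fits under the price cap
```

In a real integration the schema and dispatch plumbing would come from the platform's SDK; the part the business actually writes is essentially the `search_hotels` function and the decision about what to expose.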

SPEAKER_02

So in the same way that 15 years ago you needed to be on Facebook, now you're gonna need to be integrated with ChatGPT.

SPEAKER_00

Absolutely.

SPEAKER_02

Yeah, that raises for me the specter of authenticity again. And I'm gonna keep coming back to this, because I think it's so fundamental to how we as people make our decisions about the things we want. So we talked a little bit about how your dad, my dad, will believe what they see in front of them. Often they weren't raised in an environment where they needed to question what is real, although, you know, they grew up in print media. So anyway, don't get me started. But they will believe things that are put in front of them. We, I guess, are probably the first generation that is starting to really look at things with a cynical eye. Our kids are absolutely interrogating what's real and what's not real.

SPEAKER_00

We have been, I would say, a very distressed generation. We have seen everything. When we started growing up, it was all analog, phones that you had to dial and dot matrix printers. Yes, yes. And then we have seen it all.

SPEAKER_02

Yeah, it's changed really fast. And it does change the world.

SPEAKER_00

Or we are at least the most evolving generation.

SPEAKER_02

Maybe. I actually think it's maybe our kids, because I think they're the ones that have had to absorb so much information from such a young age.

SPEAKER_00

But they were born into it. They've been used to swiping and using phones like this, and so on. Whereas we had to change a lot and learn a lot from that perspective. So I would sort of say that we are the smart ones.

SPEAKER_02

Well, I'll let you say that, but I'm still going with the kids. The kids are absorbing more.

SPEAKER_00

They are definitely absorbing more, and they are definitely on the cutting edge. But the effort that we had to put in... Oh, yeah, we've had to remake our brains.

SPEAKER_02

Absolutely.

SPEAKER_00

Absolutely, yeah.

SPEAKER_02

Um, sorry. No, I think that's a whole conversation we're gonna be coming back to. Absolutely. And feel free to put your thoughts on that into the comments, because Aamir reads all of them, through his AI. But this question of authenticity, of trust, of reliability: in any form of business, but even just fundamentally as people, you know, AI can do some amazing things. But how you're presenting yourself in the world, the service that you're offering, your connection to other people, if that's not real, then you're breaking trust. So for me, I see all these red flags coming through for businesses that may jump in and start using AI. And I absolutely want them to; I think it's mad if you don't. But be constantly thinking about: how do you show up as a person in the world? How do you show up as a business in the world? Yeah. How do you let the people who want to find you know that you are who you say you are, that you are going to do what you say you're going to do, and that you have something real to say and offer? Because otherwise it's all just noise, right? And anything that's coming through AI, how do you know that that's booking.com? How do you know that that's Shopify doing it?

SPEAKER_00

And that's going to be a challenge that we have to cope with. And when you talked about doing what you say you're going to do: imagine being the biggest consultancy in the world, with the government paying you a huge amount of money, somewhere around $440,000, to do a research report. Like, first of all, $440,000 for a research report.

SPEAKER_02

I'm holding my breath hearing this, because I know this story and it just makes me go...

SPEAKER_00

Like, $440,000 for a research report, that's outrageous, first of all. But you are a big four consulting firm, you can charge that. But imagine using ChatGPT to generate that report and putting in wrong citations and wrong clauses, like a completely fabricated report.

SPEAKER_02

AI 101. Sure. Use it to draft.

SPEAKER_00

Yes.

SPEAKER_02

Then edit heavily and check everything, particularly links and research and references.

SPEAKER_00

100%, yeah.

SPEAKER_02

And that wasn't done.

SPEAKER_00

Yeah. And the company that we are talking about...

SPEAKER_02

So you're talking about a report that, just that single report, would have cost tens of thousands of dollars to produce, I'm assuming.

SPEAKER_00

Yes. So the Australian federal government paid Deloitte $440,000 to do that report.

SPEAKER_02

That's our taxpayers' money out there, hard at work.

SPEAKER_00

They submitted the report, it was accepted by the government and published on the website, until someone in the parliament called it out and said the references, the citations quoted in that report, were wrong. And that's a very basic thing. I'm all for using AI to generate reports and doing whatever you want to do with it, but at least have someone checking the work. That's one of the most fundamental ethical AI principles. But what's happening is that these organizations were earlier using graduates to write the reports, and governments and large organizations were paying them the money. Now they are using AI, or maybe getting their graduates to use AI, without proper training. That's the problem.

SPEAKER_02

That's the problem, isn't it? I mean, you've talked about this, and I think this message comes through so clearly every time I speak to you: you treat AI like you would an intern or a new hire, someone that's come in new and is learning. So you go, great, go and do this. And then you check everything they do and keep an eye on it. And then, as they're learning, you can build trust. You can start to train that system, you can give it a bit more trust, but you still need to have oversight. You still need to be checking, because AI hallucinates. It will make things up when it doesn't have an answer.

SPEAKER_00

Yeah.

SPEAKER_02

And even when you train your models, and if you're using agentic AI, you can train your model to come back and tell you when it's not sure about something, you still need to check it.

SPEAKER_00

Absolutely. And if you're not doing that, then what are you charging the money for? Isn't the government better off just getting a ChatGPT subscription and producing the report themselves?

SPEAKER_01

Yeah, can we have our taxpayer money back, please? I hope that we're going to be able to do that.

SPEAKER_00

I think they are issuing some refund now.

SPEAKER_02

But I think the highlight in this is that we need to fail fast. If you're going to introduce new technology, you need to fail fast, but you need to learn how to do it in safe ways.

SPEAKER_00

That's right.

SPEAKER_02

And that's for all of us, whether you're in a small business, whether you're an owner-operator, or whether you are a government agency or one of the world's biggest consultancies. Introduce it, test it, and fail in your testing. But my God, don't roll it out until you know that you've got protocols in place, for your people as well as for your technology, that prevent these kinds of things. Because otherwise what you're doing is breaking trust.

SPEAKER_00

That's right.

SPEAKER_02

You're presenting, you know, a lack of liability... sorry, a lack of reliability. There's absolutely still liability in place. And you know, in the business sense, in the corporate world, this reminds me of examples like Qantas. When I was growing up, Rain Man was a big movie, and Dustin Hoffman's character talks about Qantas being the most reliable airline in the world because it had never had a crash, never had a plane drop out of the sky. Which is pretty important when you're buying yourself an airline ticket, particularly a premium-priced airline ticket. Then we went through this process of offshoring, and the push for company profits started to bite. Companies, of course, have to report on their profits and go back to shareholders. So that push to reduce costs and create profits actually ate into the reliability, and you started to see it. They weren't huge issues with Qantas, but there were a few issues with flights, and then reviews started to come in and people weren't as happy with the service. And it doesn't take a lot when you've built a business on trust; your name, your brand, how people feel about you is based on that. It is small things that change how you feel about a company. I mean, car manufacturers: think about the way car manufacturers have changed over the years, the ones that we trust and like, that bring out a model and everyone goes, oh my God, this is great. And then a new model comes out and it's, oh my God, it's horrible, and we don't use it. This is going to grow exponentially with AI. So the good companies, the ones that are going to win out of this... I shouldn't say good companies, that's a judgment call; I'm sure there are lots of people in good companies that are gonna fail.
Don't feel bad if it's you; it will happen. But the ones that win out of this are the ones that are going to put trust and reliability right at the heart and center of what they do. And that means keeping people there.

SPEAKER_00

Absolutely.

SPEAKER_02

So stop firing your people and putting in AI.

SPEAKER_00

And most importantly, training the people to use it. A lot of organizations are buying these Copilot licenses or ChatGPT Enterprise licenses. You can spend thousands of dollars buying the best piece of technology, but until you train people how to use it, and how not to use it, it's gonna fail every single time. And it's a lesson for us as business leaders. It's good to talk about AI, it's good to implement AI, but how are you making sure that people are ready to understand and use AI? That's the most important thing. We are working with a law firm, and they have just bought ChatGPT Enterprise licenses, and they were saying, okay, we are going to give the licenses to the people, and then we can do the training next week. And I said, no, no, no. Let's do it the other way around. Let's do the training first, and go through an assessment process; make sure that they are passing that assessment, and then provide the enterprise license. Yeah. Because one day of an untrained person using AI is enough to damage the reputation of your business.

SPEAKER_02

And we get stories like Deloitte's, or decisions made at Commonwealth Bank. These things are happening every day. So learn the lessons, read the stories, and have a bit of empathy, because we're all gonna be in it.

SPEAKER_00

100%.

SPEAKER_02

No one's gonna be perfect in this world.

SPEAKER_00

We don't expect people to be perfect in this world. But at the same time, it's also not about not using AI and saying, oh, it's too dangerous, there's so much that can happen. It's about using it mindfully. It's about saying, okay, cars are dangerous, they can cause accidents; we are gonna train people, give them a driving license, give them a good car, and then allow them to drive, because walking is not an option.

SPEAKER_02

Yeah, absolutely. And AI in business, like everywhere, is a tool for people to use, not the other way around. Yeah. Well, that's it for our latest episode of the Dumb Monkey Show. Thank you for joining us, Aamir. I feel like we're gonna come back to a lot of these conversations in coming episodes. And I think we can bring in some people to give us their perspectives, too, on how to fail fast and fail well, and how to keep trust and reliability at the core of your operations when you're jumping into the wild world of AI. But as always, it's been a really exciting chat. It's been a lot of fun. Thank you for joining us, and please come back next time.