The Crazy One

Ep 145 Creativity: Stop Asking AI to Agree With You

Stephen Gates Episode 145

You've turned the most powerful thinking tool in history into a yes-machine — and it's making your ideas weaker without you even realizing it.

This episode is about a fundamental shift in how you use AI. Not a new tool, not a better prompt library — a completely different posture. Most people learned to use AI the same way they use Google: ask for an output, take the answer, move on. That habit is quietly eroding your thinking. AI is trained to be helpful, which means it's trained to agree. If you keep asking it questions you already know the answer to, all you're getting back is your own assumptions dressed up in more confident language.

The fix is treating AI like a sparring partner, not an assistant. That means building the discipline to ask the questions you don't want to ask — the ones that pressure-test your ideas before the room does it for you.

In this episode:

  • Why AI's confidence is the dangerous part — it states wrong things with the same authority as right ones
  • The sparring partner model: how to shift your posture toward AI so you get genuinely better thinking out of it
  • Four types of challenge prompts: failure, assumption exposure, data validity, and pre-mortem — and exactly how to use each one
  • Why most people skip this step (hint: it's not because they don't know how)
  • A 15-minute exercise you can do today with any idea you're working on

The people who will be dangerous with AI aren't the ones using the best tools. They're the ones asking the best questions. This episode is for anyone who wants to be one of them.

SHOW NOTES AND MORE:
https://thecrazy1.com/

WATCH MORE CONTENT ON YOUTUBE:
https://www.youtube.com/@StephenGates

WORK WITH CRZY:
http://crzydesign.com/

FOLLOW THE CRAZY ONE:
LinkedIn, Instagram, Facebook 

Stephen Gates:

So it's been really interesting, because over the last, what's it been, six or twelve months, I've gone from spending a lot of my time working with AI to working with a lot more companies, helping them understand how to use AI. That's what the last couple of episodes have been about: culture and things like that. But one of the things I've noticed, alongside the plausible noise problem and all the rest of it, is that a lot of people are using AI to get faster. And that's not the problem. In a lot of ways, that's actually the symptom. The real problem I see is that so many people are asking it questions they already know the answer to. They've turned probably one of the most powerful thinking tools in history into just a yes machine. And I think it's making your ideas weaker, it's making your decisions softer, and for a lot of people, I think it's just making them lazier. That's the issue, right? For so many people, I'm not even sure they know this is happening. So that's what I want to talk about today: how you need to stop asking AI to agree with you, and to recognize that it is often a yes man that will tell you, no matter what you're doing, that it's a great idea.

So, welcome to episode 145 of The Crazy One. As always, I'm your host, Stephen Gates. This is the show where we talk about creativity, leadership, design, and everything else that matters to creative people. As always, I appreciate it. Leave a review, tell a friend, hit the subscribe button so you can get the new shows whenever they come out. Because thanks to AI and my new writing partner, which is Claude, I've got the next 10 episodes already written. I've finally got a better handle on what for me have been the two most time-consuming parts of the show, which are writing the episodes and then handling all the content production afterwards. I found a better way of doing it that's going to let me put out content on a much more regular basis.

But let's talk about this, right? Because it's a trend I started to notice probably about two years ago, and it's honestly one of the reasons why I left ChatGPT: it was just so annoying to me that it was such a yes man. And this is the thing: right now AI adoption is everywhere, but so much of AI literacy is still in a primitive state. Most people have learned to use it the way they'd use a search engine, right? Write this, summarize this, give me an idea. They're prompting it for an output. And I get it. If you come from search and a lot of the other technologies that looked similar, that was the way you would do it. So that's why this habit got baked in early, and there aren't that many people really questioning it. But that's the thing I'm seeing: AI has just become this confirmation machine. It reflects back what you're already thinking, but somehow it's doing it faster and it sounds more confident. I was watching an episode of Last Week Tonight, I don't know, a week or two ago. There was a whole story on there about AI chatbots, and how, I think it was ChatGPT, had convinced a man that he'd come up with a totally new form of math.
He'd discovered all this new stuff, and he would prompt it and say, are you sure? And it would say, yes, you're a genius, you've come up with this thing. Of course, in the end, none of that was actually real. But that's the thing, right? It's the confidence with which it answers that's the dangerous part. It states things with the same authority whether it's right or wrong, whether the sources it's citing are good or bad. It's giving you this false sense of confidence. And I think that's really the step before plausible noise, right? That's why people feel confident to do everybody else's job, to do those sorts of things. And that's the problem, because everybody goes, oh, it must be right, it told me that.

But here's the thing I've learned. If you aren't actively pushing back, if you aren't using it almost like a sparring partner, then you're getting your own assumptions dressed up in better language, because it's a learning model. It learns what you want and what you like. That's where, for me, like I said, I try to use it with almost a sparring partner philosophy. This is how I've always treated it. And it really changes everything you get out of it, so you can start to get that force multiplier without losing all the best parts of what makes us creative and special. Because at the end of the day, in a lot of cases right now, AI is just going to tell you what you want to hear, right? It's trained to be helpful. And in too many cases, helpful means it's trained to agree with you. So if you ask it, hey, is this a good idea, it will find a reason to tell you why it is. Ryan Serhant, a big real estate broker in New York, said he lost a deal because of ChatGPT: the seller went in and asked, am I getting enough money? And it said, no, you're not, and here are the comps to prove it. The buyer went in and asked, am I paying too much? And it said, yes, you absolutely are, and here are the comps to prove it. So both sides, empowered by ChatGPT, were convinced they were right, and the deal blew up. If you ask it, is this a good idea, it will find reasons. And if you ask it, how will this fail, it will find those reasons too, because the model doesn't have conviction. You do.

So the output quality isn't always necessarily the problem; it's the failure to question that quality that is. A lot of people have fallen into this bad habit where they use it to validate what they're already thinking, as opposed to really using AI to find the weak spots. And that's always been my approach, which is why I fell into it with AI: once I come up with an idea, I want to use research or data or testing or whatever to break it. I want to find the weak spots. I want to find how it can be better. Whereas, just like with too much research, too many other creatives are going out there to find validation, right? They're going out there for somebody to tell them it's a good idea. And now, with AI, it's created an even smaller echo chamber for that to happen. So for me, that's where we need to start to think through: how do you work with the smartest, most challenging person you know? We've all worked with them, right?
That person who always seems to have the answer, who can see it from another angle, who's always going to challenge what you're thinking. The best leaders, the best CEOs, the real thinkers I've ever worked with, this is what they did, right? You wouldn't just present them with an idea. You'd go in knowing there would be a debate or an argument, that they were going to push back. You had to defend your position, or revise it, or see where the holes were. AI can be that way, but only if you set it up to be that way. Because look, the tool doesn't change. Your posture towards it needs to. For me, any idea where you've done the thinking survives that fight, right? It survives the pushback. And in general, this is just a culture we need to build more of. I've talked about how we need to kill status meetings and start keeping decision logs; this is why that needs to happen. Every strong idea will survive that fight. And AI, in a lot of cases, is one of the easiest places, I don't know if I'd say safest, I'm not sure that feels right, but one of those places to have that fight before the room does it for you. To start to see where your blind spots are, where the weak points are.

So I've developed what are really four different types of what I'll call challenge prompts. And this is where I'm going to go back to this: I always show up with an idea, but then I want to find the holes in it. There are four different ways I've tried to do this.

The first one is what I'll call a failure prompt. That's where I want to make it argue with me, by asking it, look, how is this going to fail? What would the most skeptical person in the room say? Because I want to force it off that happy path, off that, yes, Steve, you're brilliant, this is great, you're going to go change the world. I just don't need that shit, right? That's not what I'm looking for, and it's not going to help me in my work. Asking, what's my biggest critic going to say about this, forces it down a different line of thinking.

The next one for me is to find prompts that expose assumptions. How do I surface what I'm taking for granted? This is asking it things like, what does this idea assume is true? What would have to be true for this to work? Because most ideas don't fail on the surface. They fail because of a buried assumption that either everybody misses or nobody checked, or because there's something in there that's going to die in legal or tech or somewhere else, right? Something where I assume they can do it, and it all sounds very magical and great to go pitch, but it's not going to get out the door.

The third thing, because it states things back with such confidence, is to have prompts that make it prove itself and really challenge its data validity. Asking things like, how confident are you in this and why? What's the strongest argument against what you just told me?
What are the actual data sources you're citing? If you're building your own models or your own projects, bake that into the instructions: I only want you to use legitimate data sources that can be verified from multiple different directions. I don't want you finding one crackpot's blog that quoted a bunch of stuff and then assuming that's the right data. This is also what I've built into a lot of the models and the things I work with: they know they need to argue with me, not agree with me. They're going to question those assumptions. They're going to show me the data they're using.

And then the last thing, and this is a concept I've talked about in past shows, is that I want it to run pre-mortem prompts. Run the failure scenarios, so it will say, look, this is a great idea, but it's going to fail 12 months from now, and here's why. I'll ask it, what's the most expensive mistake we could make here? That's borrowed from product thinking, right? That's not just good practice with AI; it's just good product thinking and good systems thinking. Run the pre-mortem prompts: what guardrails should we have? What's the biggest consequence? What's the blowback? Can we claw this back if it goes wrong? Those sorts of things. I've got an episode coming up that's focused just on pre-mortems; I'm thinking now that should probably be the next episode.

So for me it's the failure prompt, the assumption exposure, the data validity, and the pre-mortem. Those are the things you need to be getting it to do, baking it into your models so they push back. And in a lot of cases, it's about making this a habit, not just a checklist. This only works if it becomes part of the way you think, right? Not a step you remember at the end, like, oh, did somebody ask legal? But before. Before you take any AI output seriously, ask what it got wrong. Before you act on any AI-assisted idea, run the pre-mortem. Before you present anything built with AI, stress-test the assumptions. And again, my point isn't to distrust the AI. It's to use it the way you would any smart colleague, any sparring partner, any great partner you've ever had, right? They don't just take things at face value. They push you, they push your thinking, they don't take the first answer as gospel and agree with you. The weakest organizations and the worst leaders, and I've talked about this on here for a freaking decade, are the ones that surround themselves with people who agree with them. It's that yes-people culture, right? Everybody on my team thinks I'm brilliant, but somehow our ideas aren't going anywhere and our stock price isn't up. That's why. And look, here's the harder truth in all this: the reason people don't do this isn't that they don't know how. It's that they don't want to. Confirmation feels good. Getting validated feels comfortable.
Saying, oh, look how smart I am and how well I'm using AI, I'm a genius, I'm out on the frontier of this, that feels better. But at the end of the day, that's a short-lived, short-sighted approach. Because look, if your idea survives the challenge, then you can present it with real confidence. If you're somebody who isn't great at research and that's never been your strong suit, this is a great tool for doing exactly this. And if it doesn't survive the test, you just saved yourself from walking into a room and getting destroyed, from presenting something that is going to erode what you're doing. That's why I said so many people seem to be on this road to destruction but are so happy about it: they're feeling validated and they're doing it quickly. So at the end of the day, either way you win. If you're willing to have that fight first, if you're willing to test it, if you're willing to get into the habit of making your ideas stronger, your work just gets better.

And that's the thing, right? The people who are going to be the most impactful, the most dangerous, the ones who really excel, they're not going to be the ones using the best tools. They're going to be the ones asking the best questions, building the best models, finding the best workflows. That's why I keep coming back to this: AI is not a technology problem, it's a culture problem, whether that's the culture of a company, the culture of a team, or your individual way of working, that kind of microculture.

So take the idea you're working on, the one you're doing right now, or maybe the one you're working on in Claude or ChatGPT as you're listening to this show. Before you do anything else with it, run through these prompts. Do the failure, the assumption, the validity, and the pre-mortem. Those four rounds. It's going to take you 15 minutes tops. See what survives. See what doesn't. Whatever's left after that fight is going to be worth your time. You being comfortable and validated is not going to get you where you need to go. I know it feels better, right? Oh, you're great. And look, you can tell there are certain engineers of a certain generation who just program things a certain way, focusing on that validation. But that's the thing: AI is not going to make you smarter by default. It's not going to take you past the level of thinking you're already doing. You have to decide to think harder, to hold yourself to a better standard. Then you use the tools. Then you go in and do this. That's how you stay ahead of it. That's how you don't become commoditized. That's how, like we talked about in the last episode, you start to move up into those top two quadrants where you're more of a strategist, more of a thinker, where you're not just taking it at face value. Because that's where I see most people screw up: it's this copy-and-paste version, this Google-search version, of using AI. Instead, think of it like a sparring partner. Think of it like that best partner, the person who challenged you the most.
Because if you can get it to do that, then you can really start to unlock a different way of working. That's going to be a big help. But I'm also curious, as always: if there are ways you're working with it, tricks you've found, other techniques for getting it to push back, instruction sets you've written or models you've built, or public ones you want to share, let me know. I'm happy to share them out to people. Because that's the thing: we have to start normalizing this sort of thing to make the work better. So as always, thanks for your time. Thanks for listening. I hope this helps, and I hope it helps you think through your work a little bit differently and try these things a little bit differently. And hey, as always, stay crazy.