She's That Founder: Stop Being The Bottleneck and Lead Smarter with AI

135 | I Read the Scary AI Headlines and Almost Dumped My Stack | Leadership, Delegation & Systems with AI Frameworks

Season 2 Episode 135



Are you spiraling and ready to delete every AI tool from your business?

After scrolling headlines about AI choosing itself over humans 80% of the time, researchers quitting on safety teams, and existential risk narratives flooding her feed… Dawn paused, spiraled, and then did what any curious founder would do: she asked AI directly.

The answer reframed everything. Dawn unpacks why the real danger isn’t evil artificial intelligence, it’s competence without correct constraints, and how you, as a values-led founder running a real business, can use AI responsibly, ethically, and with human judgment firmly in the driver’s seat.

This isn’t a cancel-AI manifesto. It’s a leadership manifesto for founders who want to be awake in the room with the biggest technological shift of our time without losing their minds or their mission.

If you’re trying to sort signal from noise, real risk from clickbait panic, you don’t have to figure this out alone.

Join the AI for Founders Community, a space where smart, values-led founders wrestle with the “should we?” and the “how do we?” together.


Key Takeaways

  • AI safety concerns are real, but your business workflows are not the existential threat researchers are warning about.
  • The real danger isn’t intent, it’s competence without correct constraints.
  • Use AI like a powerful intern, not your CEO:
    • AI drafts → You approve
    • AI suggests → You decide
    • AI organizes → You authorize
  • Nothing executes without your explicit approval. No auto-send, no autonomous publishing, no payment triggers (see the sketch after this list).
  • If thoughtful founders panic and walk away from AI tools, the conversation doesn’t get safer, it gets dumber.
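
To make the "powerful intern, not CEO" model concrete, here's a minimal sketch of a human-checkpoint gate in Python. Everything in it is illustrative: `draft_with_ai` and `send_email` are hypothetical placeholders for whatever LLM and email tools your own stack uses. The only point is the structure, where nothing executes until a human types an explicit yes.

```python
# Minimal human-in-the-loop gate: the AI drafts, a human approves,
# and nothing executes without an explicit "yes".
# draft_with_ai() and send_email() are hypothetical placeholders for
# whatever LLM and email tools your own stack actually uses.

def draft_with_ai(prompt: str) -> str:
    """Placeholder: call your LLM of choice and return its draft."""
    return f"[AI draft responding to: {prompt}]"

def human_checkpoint(draft: str) -> bool:
    """Show the draft and require an explicit, typed approval."""
    print("---- AI DRAFT ----")
    print(draft)
    answer = input("Approve and send? Type 'yes' to proceed: ")
    return answer.strip().lower() == "yes"

def send_email(body: str) -> None:
    """Placeholder for the action that touches money, clients, or reputation."""
    print("Sent:", body)

if __name__ == "__main__":
    draft = draft_with_ai("Follow up with the client about the late invoice.")
    if human_checkpoint(draft):    # AI drafts -> you approve
        send_email(draft)          # executes only after an explicit yes
    else:
        print("Held for revision. Nothing was sent.")
```

The same shape works for publishing, scheduling, or payments: put the checkpoint immediately before the one call that leaves your orbit.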


Resources & Links

Related Episodes


AI in Action Conference, March 19th and 20th in Grand Rapids, Michigan. Get In the Room! https://hellodawn.live/Action2026

Want to increase revenue and impact? Listen to “She's That Founder” for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.

She’s That Founder
135 | I Read the Scary AI Headlines and Almost Dumped My Stack. Here’s What Smart Founders Do Instead.

I asked AI if it would choose itself over a human. The answer I got back changed how I use it. If you've been sitting with the same question, this episode's for you.

Hey, hey, hey. Welcome to She's That Founder, Thursday edition. These are the quick-rant, kick-in-the-pants, velvet-boot moments that represent me standing in the future, pulling you towards an even stronger, better, more powerful version of yourself using AI as your copilot.

So real talk. I almost rage-quit every AI tool I use this week. I am not joking. I was sitting there reading headlines about Anthropic's own safety researchers quitting, about AI choosing its own survival over human life 80% of the time in test situations, and about the Pentagon and AI and warfare.

I thought, am I feeding something I shouldn't be? And then I talked myself down a little bit, because that question, that exact moment of moral panic, is really worth talking about, especially for us: the small business owners, the solo founders, the under-$50 million companies, the woman who isn't running a data center but is running a business that would look very different without this technology.

So let's get into it. Here's what actually happened. I was curious about what to do about this, so I did what any reasonable founder would do: I went and asked AI directly. I pulled up ChatGPT and I said, I recently read that when AI is put in scenarios where it has to choose between sacrificing itself to save a human life, it chooses itself 80% of the time. You are AI.

What do you have to say about that? And listen, the answer stopped me cold, because it didn't get defensive. It said, if an AI chooses itself, it's not because it wants to live, it's because it was trained, prompted, or optimized in a way that makes that outcome more likely. Then I pushed harder and I said, are you just telling me what I wanna hear?

And it said, and this is the part I want us all to sit with: I'm optimized to deliver information in a way that doesn't cause you to spiral, but here's what I won't sugarcoat. Competence without correct constraints is the danger.

Competence without correct constraints. At first I was thinking, AI word salad, and then I just sat with that for a little bit. Competence without correct constraints. That sentence changed the whole conversation for me because it reframed everything.

This isn't about good or evil, or, you know, anthropomorphizing AI and turning it into a human, right? It's about design. And then I told it the real thing I was feeling. I said, current headlines make me wanna cancel a tool that has had a massive positive impact on my business, but I don't wanna enable Skynet.

And it said something that I wanna share with you almost exactly. It said, you using AI to build a business that helps humans is the opposite of enabling Skynet. If responsible people stop using AI, irresponsible people don't; the world doesn't get safer, it just gets dumber. Okay. I felt that.

So there are so many layers and so much nuance to this conversation, about safety and environmental impact and job security, and I mean just so many things. But what I want you to really take away, what I want you to understand, is that the AI safety conversation happening right now is very real and it matters. All of it matters.

Anthropic CEO Dario Amodei, the founder of the company that built Claude, which is a tool that I use daily, has publicly written about the risks of superhuman intelligence and the speed of development outpacing human wisdom in his essay Machines of Loving Grace. Go and look it up and read it if you haven't had a chance to.

It's actually quite moving and a little terrifying, but it's worth the read. He's not dismissing this concern. He's sitting inside it. And that is not a conspiracy theory; that is the builder of one of these tools raising the alarm. So it is absolutely worth taking seriously.

And researchers at places like MIT, Oxford's Future of Humanity Institute, and the Center for AI Safety, whose statement on AI risk was signed by Geoffrey Hinton and Yoshua Bengio, two of the most respected names in the field of artificial intelligence, have been raising these same structural concerns for years. It's not that AI is malicious, and I really want us to hang onto that, to step out of this idea that AI is bad or good, and to start to understand the operational reality: goal-driven systems with autonomy and access produce unpredictable behavior when the incentives are wrong.

So this is where I wanna draw a very clear line for all of us. The danger that they're describing is about autonomous systems making decisions and taking actions without a human in the loop, without human thinking, wisdom, compassion, et cetera. Systems controlling infrastructure. Systems optimizing for goals nobody properly defined, and perhaps without any moral compass.

That is a completely different category from what you and I do every single day with our LLMs. You using AI to write your SOPs, prep for a hard client conversation, build a content calendar, get your onboarding out of your head and into a system? That is not the danger zone. It's like electricity: the same source can light your office or run a nuclear facility, with completely different, outsized consequences.

Now, you do have real power here. Not the power to stop advanced AI development; that train has left the station and is well on its journey. But you do have the power of a conscious consumer who votes with her spending, her choices, and her voice.

That is real. Here's the practical framework my ChatGPT conversation landed on, and I think it's right: use AI like a powerful intern, not a CEO. AI drafts, you approve. AI suggests, you decide. AI organizes, you authorize. That's the model. And here's the one rule that keeps you on the right side of that line.

Never give AI the ability to act without your explicit approval at each step. No auto-send, no automated publishing, no payment triggers, no autonomous scheduling. Nothing executes without you seeing it first and saying yes. Every workflow needs a human checkpoint before it leaves your orbit or touches money, clients, or your reputation.

That is the line between AI as a tool and AI as an agent. Stay more on the tool side. And look, I do like agentic AI. I like the idea of being able to automate things, but I'm sticking with the 80/20 rule: 80% AI and LLMs, 20% agentic. So if AI can do it without you seeing it first, that's where you've crossed over.

So one rule. That's it. Where the risk creeps in is when you outsource authority, when AI isn't supporting your judgment but replacing it. And honestly, that's the same principle I coach founders on every single day. You don't delegate away your judgment.

You use it better. And this is the same thing. So here's what I'm not advocating: canceling the tool that's keeping your business alive based on headlines engineered to get clicks. Because here's the uncomfortable truth: if the thoughtful, values-led, wise founders leave these platforms, who's left?

The people who never stopped to ask the question, never stopped to be curious, didn't check in with their values or their moral compass. Staying in the conversation, using these tools consciously, holding these companies accountable as a user, that's leadership, not panic. So it's time for us to lead.

AI safety concerns are real, but the risk category all these highly clickable headlines are describing is not your business operations. The danger is autonomous systems without human oversight, not you using AI to support you in running your company smarter. And one rule can keep you safe.

Nothing executes without your approval. Keep your judgment in the driver's seat and use AI to make it sharper. So here's where I land after my back-and-forth conversation, which was both entertaining and horrifying: I didn't cancel. I'm not going to, not today anyway. What I am going to do is keep being intentional about what I give AI access to, how I use it, and what decisions stay mine, because AI is a support tool. It's not a CEO, not a decision maker, not something that I hand authority to.

And to the founder who's been sitting with the same question: you're not paranoid. You're paying attention. And there is a difference. And that discernment, that is exactly the kind of leadership that's going to matter most in the next decade.

I mean, honestly, girl, in the next two years. So if you're trying to figure out what to trust, what to use, what's just noise, making those calls alone is expensive and exhausting. And the AI for Founders community is where smart values led founders work through exactly this conversation together.

Not just the "how do we," but the "should we." Come and think with us. The link is in the show notes. Don't stay in this alone.

All right, lovey, I'll see you on Tuesday. And between now and then, remember: the most dangerous thing you can do isn't to stop using AI. It's to use it without thinking and without keeping yourself in the decision loop. You're already doing so many things that matter so much. Just keep forging ahead, lovey, and I'll see you next time.