AI in 10
The most important AI story—explained in 10 minutes.
Every day, I break down the biggest AI story in just 10 minutes: what it is, why it matters, and how you can actually use it. No tech jargon, just AI made simple.
AI Leader Quits OpenAI Over Pentagon Deal Gone Wrong
Referenced Links:
Try Claude AI - Anthropic's Ethical Alternative
Contact Your Representatives About AI Oversight
Electronic Frontier Foundation - AI Policy Resources
Hugging Face - Open Source AI Models
AI for Everyone Course - Coursera
Want to go deeper with AI? A community of professionals is learning AI together right now at aihammock.com — show notes, links, tools, and real conversations about how to actually use AI in your life.
Welcome to AI in 10. I'm Chuck Getchell, and every day I break down the biggest AI story in just 10 minutes. What it is, why it matters, and how you can actually use it.
SPEAKER_01: Here's what happens when your favorite AI tool gets caught between doing good and doing what the government wants. Last Saturday, Caitlin Kalinowski walked away from one of tech's hottest jobs. She was running robotics at OpenAI. The reason she left? OpenAI just signed a deal with the Pentagon, and she thinks it crossed some lines that shouldn't be crossed.

Now, this isn't just another Silicon Valley resignation story. This is about the moment AI stopped being just a helpful chatbot and started becoming a potential weapon of war. And honestly, it affects every single one of us who uses these tools.

So here's what actually happened. Kalinowski saw this announcement and said, essentially, not on my watch. She posted on X that while AI has a role in national security, this deal potentially allows surveillance of Americans without court approval, and it could enable weapons that kill without human authorization. Now, OpenAI says they have red lines: no domestic surveillance, no autonomous weapons. But Kalinowski argued the deal was rushed, without enough guardrails built in ahead of time. Which is like installing the brakes after you've already started driving down the mountain.

Here's where it gets really interesting. OpenAI wasn't the Pentagon's first choice. They actually tried to work with Anthropic first. That's the company behind Claude, ChatGPT's biggest competitor. But Anthropic said no. They refused the same deal over the exact same concerns. So what happened to Anthropic? The government blacklisted them. They literally labeled them a supply chain risk, a designation usually reserved for companies from countries we don't trust. Anthropic got the same treatment as a potential foreign threat for having principles. The very next day after Anthropic got blacklisted, OpenAI signed their Pentagon deal. Coincidence? You decide. But here's the kicker.
Right after OpenAI made this announcement, the US and Israel conducted military strikes against Iran, and reports suggest AI tools were used in those operations, with human oversight, they say. But still, talk about timing.

The backlash was immediate. Thousands of people switched from ChatGPT to Claude. Claude's app shot to number one on the App Store, beating ChatGPT for the first time ever. Over a thousand current and former employees from OpenAI and Google signed an open letter demanding companies reject Pentagon demands for surveillance and weapons tech.

Now you might be thinking, Chuck, this is military stuff. How does this affect me and my family? Well, here's the thing about technology: military innovations don't stay military for long. The internet started as a Defense Department project. GPS was military first. Now you can't order pizza without them. So when we're talking about AI surveillance tools being developed for the Pentagon, we're really talking about what your local police department might have access to in three years. When we're discussing AI-powered decision-making in warfare, we're looking at the future of AI making decisions about your insurance claims, your loan applications, your job interviews.

The privacy implications are huge. If these AI systems can analyze vast amounts of data to identify threats overseas, what's stopping them from analyzing your social media posts, your shopping habits, your location data here at home? Those red lines OpenAI talks about? They're only as strong as the next policy change.

And let's talk jobs. Military AI systems that automate logistics and supply chains, those same systems will eventually automate civilian logistics and supply chains. If you're in trucking, warehousing, or any part of the supply chain, you're looking at your future competition. There's also the economic angle. When the government blacklists a major AI company like Anthropic, it reduces competition.
Less competition usually means higher prices for everyone else. Your tax dollars are now flowing to fewer companies with less incentive to innovate affordably. And here's something nobody's talking about enough. When AI gets militarized, it changes how conflicts happen. Faster decisions, more automated responses, higher stakes. That might sound abstract until oil prices spike because an AI system somewhere made a decision that escalated a conflict. The global implications affect your grocery bill, your gas prices, your retirement account. We're all connected to this stuff, whether we realize it or not.

So what can you actually do about this? More than you might think.

First, you can vote with your digital wallet. If you're using ChatGPT and you're uncomfortable with the Pentagon deal, switch to Claude. Anthropic took a stand on principle and got punished for it. The least we can do is support companies that align with our values. Claude has a free tier that works just as well as ChatGPT for most everyday tasks: writing emails, research, creative projects, problem solving. Go to Claude.ai right now and create an account. Takes two minutes. Your data will be with a company that was willing to lose government contracts to maintain ethical standards. You can also try open source alternatives. Hugging Face has dozens of AI models you can use for free. They're not controlled by any single company, government, or military. Search for Hugging Face Chat and you'll find options that aren't beholden to Pentagon contracts or corporate boardroom decisions.

Second, make your voice heard. Over a thousand tech workers signed that open letter I mentioned, but you don't have to work in tech to add your name. Search for "AI Military Open Letter 2026" and you'll find ways to join that growing chorus of people demanding accountability. Contact your representatives. Go to Congress.gov, find your senators and House members, and tell them you want oversight on AI military deals.
Reference Caitlin Kalinowski's concerns. Mention that a leading robotics expert resigned over these issues. That carries weight.

Third, educate yourself. This stuff is moving fast, and the more you understand, the better decisions you can make. Take a free course on AI ethics. Coursera has "AI for Everyone," which explains these concepts without getting too technical. Knowledge is power, especially when that knowledge helps you navigate an AI-powered world.

As I always say, I'm not a lawyer or policy expert. For specific legal or political advice, talk to professionals. But I am someone who's built AI companies and understands how this technology works in the real world. And here's what I know about the real world: the companies that succeed long term are the ones that maintain trust with their users. OpenAI just made a calculation that government contracts are worth more than user trust. That creates an opportunity for their competitors, and for all of us to demand better.

Start paying attention to which AI companies are taking military contracts and which ones aren't. When you choose tools for your business, your creative projects, your daily productivity, factor that into your decision. Because every time you use a service, you're casting a vote for the kind of future you want to see.

The resignation of one OpenAI executive might seem like inside baseball, but Caitlin Kalinowski didn't just quit a job. She drew a line in the sand about what kind of AI future we're building. Her message is clear: we don't have to accept AI development that prioritizes government contracts over human rights and ethical standards. We have choices, and the choices we make today will determine whether AI becomes a tool of empowerment or a tool of control.
SPEAKER_00: That's today's AI in 10. If you want to go deeper and learn AI with a community of people just like you, join us at aihammock.com. I'll see you tomorrow, my friends.