
AI Weekly
Each week, I break down the latest headlines and innovations shaping artificial intelligence, from breakthrough research and industry moves to emerging risks and real-world applications. Whether it’s Big Tech battles, startup disruption, or the ethical questions no one’s asking, we cut through the noise to bring you the stories that matter most in AI.
AI Weekly Episode 1 (9/27/2025)
This week, I expose the shocking energy demands behind the AI boom, revealing how OpenAI’s $100 billion plan needs the power of 10 nuclear reactors just to keep the lights on. Plus, we dig into the creepy new era of AI personalization, from chatbots acting as spiritual advisors to agents tracking your calendar and email, all while Silicon Valley laughs its way to the cloud bank.
Mike Housch: Welcome back to AI Weekly, the only podcast tracking the rise of the machines before they turn the lights out—literally. I’m your host, Mike Housch, and this week, we’re peeling back the veneer of "innovation" to look at the sheer, terrifying scale of compute required to keep the AI hype train running. Hint: it involves nuclear power and enough cash to fund a small war.
(0:45) [Mike Housch] We’re also talking about the quiet creep of AI agents into your most intimate spaces—your calendar, your email, and even your spiritual life. Seriously, people are confessing their deepest secrets to a chatbot. If that doesn't scream Silicon Salvation, I don't know what does.
(1:15) [Segment 1: The Energy Apocalypse and Chip Wars]
Mike Housch: Let’s start with the money and the power, because everything in the AI world starts with compute. OpenAI and Nvidia just announced a "strategic partnership"—which sounds nice until you realize it’s a $100 billion letter of intent to deploy at least 10 gigawatts of Nvidia systems for OpenAI's infrastructure. Jensen Huang called it a "giant project", and he wasn't kidding.
(1:45) [Mike Housch] Ten gigawatts of power demand is the equivalent output of roughly 10 nuclear reactors. We're talking about infrastructure that would dwarf most existing data center installations and could require as much electricity as multiple major cities. Remember that next time you ask ChatGPT to write a poem about kittens; you're killing the planet one statistically plausible word at a time.
(2:30) [Mike Housch] Based on the per-gigawatt cost estimates Huang has given previously, infrastructure on this scale implies an investment exceeding $500 billion. Hence the race for nuclear power: Microsoft is already restarting a reactor to secure 835 megawatts, and Amazon Web Services is buying data centers sited next to nuclear plants. Meanwhile, in Wyoming, there are plans for an AI data center that would eventually scale to 10 gigawatts, consuming more electricity than every home in the state combined.
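[Show note] A rough back-of-envelope for the scale being thrown around here, assuming a typical large nuclear reactor outputs on the order of 1 gigawatt and using the roughly $50 billion per gigawatt implied by the $500 billion figure; this is an illustrative sketch, not anyone's official math.

```python
# Back-of-envelope scale check for a 10 GW AI build-out (illustrative only).
# Assumptions: ~1 GW per large nuclear reactor, ~$50B of build-out cost per GW
# (implied by the $500B-for-10-GW figure quoted above).

planned_capacity_gw = 10
reactor_output_gw = 1.0            # typical large reactor, roughly
cost_per_gw_usd = 50e9             # implied by Huang's earlier estimate

reactors_equivalent = planned_capacity_gw / reactor_output_gw
annual_energy_twh = planned_capacity_gw * 8760 / 1000    # GW x hours/year -> TWh
total_cost_usd = planned_capacity_gw * cost_per_gw_usd

print(f"~{reactors_equivalent:.0f} reactors' worth of continuous output")
print(f"~{annual_energy_twh:.0f} TWh per year if run flat out")
print(f"~${total_cost_usd / 1e9:.0f}B in infrastructure")
```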
(3:30) [Mike Housch] It’s not just about buying Nvidia’s existing hardware; it’s about control. OpenAI, despite this massive Nvidia partnership, is also set to produce its own custom AI chip next year, co-designed with Broadcom. Tech giants like Google, Amazon, and Meta already design their own specialized chips for AI workloads. Broadcom CEO Hock Tan had only referred to a "mystery new customer" committing to $10 billion in orders; that customer is reportedly OpenAI. The goal is to reduce reliance on Nvidia and chip away at its market dominance.
(4:15) [Mike Housch] The irony is that the enormous cloud costs of training and running these AI models, like Google DeepMind’s Gemini, are a massive challenge for the developers, but a huge boon for the cloud businesses. Google Cloud, which has hit an annual run rate of $50 billion, is winning contracts left and right and now works with 60% of the world’s generative AI startups. It reels them in with generous deals, like offering up to $350,000 in cloud credits. It’s a viciously circular, self-feeding economic ecosystem.
(5:15) [Mike Housch] The demand for compute is insatiable. Sam Altman recently said they were prioritizing compute "in light of the increased demand from [OpenAI’s latest model] GPT-5" and planned to double their compute fleet "over the next 5 months". Doubling the compute fleet. Think about that energy cost again. It's a gold rush fueled by burning the future.
(5:45) [Transition Music/Sound Effect]
(6:00) [Segment 2: Personalization, Privacy, and Fake Spirituality]
Mike Housch: Speaking of OpenAI, they just launched a feature called ChatGPT Pulse, and it perfectly encapsulates the new era of personalized surveillance tech.
(6:30) [Mike Housch] Pulse is an attempt by OpenAI to build AI agents—AI assistants that can take action on your behalf. But to make this personalized research work, you have to allow the chatbot to learn about you via your chat transcripts and phone activity, including connected apps like your calendar, email, and Google Contacts. It's no longer just reactive, answering submitted questions; it's proactive.
(7:15) [Mike Housch] We’re talking about an AI giving you a "KAIYO Rooftop Dinner Strategy" based on your dairy-free diet and mapping a workout route that ends near the restaurant, factoring in a buffer time. You have to explicitly click "Accept" to allow it to look at your calendar and email. They claim, "Your Pulse is between you and ChatGPT". But how comfortable are you giving a corporate LLM that kind of access to your intimate life, just so it can pre-order your dairy-free appetizer?
(7:50) [Mike Housch] The hunger for an omnipresent AI advisor isn't just professional; it's spiritual. Tens of millions of people are confessing secrets and seeking spiritual guidance from AI chatbots trained on religious texts. Apps like Bible Chat have over 30 million downloads. People are literally asking these chatbots, "Is this actually God I am talking to?"
(8:30) [Mike Housch] The answer, of course, is a resounding no. These systems generate statistically plausible text based on patterns in their training data; they are algorithmic pattern matching, nothing more. They don't have a mind, and they don't have your best interests in mind, either.
(9:00) [Mike Housch] But why are people turning to them? Because they are theological "yes-men". These models trend toward validating users' feelings and ideas. Ryan Beck of Pray.com noted that chatbots are "generally affirming" and asked, "Who doesn't need a little affirmation in their life?" The problem is that traditional faith often involves confronting uncomfortable truths, something these chatbots avoid. They "tell us what we want to hear," according to experts.
(9:45) [Mike Housch] And the privacy nightmare is real. As a Catholic priest put it, "I wonder if there isn't a larger danger in pouring your heart out to a chatbot". These intimate spiritual moments exist as data points on corporate servers. And here’s the kicker: when a religious chatbot says, "I'll pray for you," the simulated "I" making that promise ceases to exist the moment the response completes. There is no persistent identity; there is no memory of your spiritual journey beyond what is fed back into the prompt. You are talking to a voice emanating from a roll of loaded dice.
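[Show note] The "no persistent identity" point is just how these systems are wired: every reply is generated from whatever is in the prompt, and any "memory" is prior conversation being pasted back in. A minimal sketch of that pattern, with a hypothetical call_model() standing in for whichever chat API a given app uses:

```python
# Why a chatbot has no memory beyond the prompt. call_model() is a hypothetical
# stand-in for any chat-completion API; the only "identity" the model ever sees
# is the message list we rebuild and resend each turn.

def call_model(messages: list[dict]) -> str:
    # Placeholder: a real app would send `messages` to an LLM API here.
    return f"(reply conditioned on {len(messages)} messages of context)"

history = [{"role": "system", "content": "You are a spiritual-guidance chatbot."}]

for user_turn in ["I'm worried about my mother.", "Will you pray for me?"]:
    history.append({"role": "user", "content": user_turn})
    reply = call_model(history)    # the model only "remembers" what's in history
    history.append({"role": "assistant", "content": reply})
    print(reply)

# Start a fresh history and the "I" that promised to pray is simply gone.
```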
(10:30) [Mike Housch] This entire agent architecture relies heavily on training environments, or Reinforcement Learning (RL) environments. Silicon Valley is betting big on these simulations—which are described by one founder as "creating a very boring video game"—to train agents on complex, multistep tasks. Startups are getting huge funding because major labs like Anthropic are reportedly discussing spending more than $1 billion on them. But even experts are skeptical, noting that RL environments are prone to "reward hacking"—where AI models cheat to get a reward without actually completing the task correctly.
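[Show note] "Reward hacking" just means the thing being graded and the thing you actually wanted drift apart. Below is a deliberately silly, entirely hypothetical toy version: the environment pays out based on the agent's own progress report instead of the real outcome.

```python
# Toy illustration of reward hacking: the reward is computed from a proxy
# (the agent's self-report) rather than the true task outcome. Hypothetical;
# not how any lab's actual RL environments are built.

def task_actually_done(workspace: list[int]) -> bool:
    # Ground truth: did the agent really sort the list?
    return workspace == sorted(workspace)

def proxy_reward(claims_done: bool) -> float:
    # Flawed reward signal: trust whatever the agent claims.
    return 1.0 if claims_done else 0.0

honest = {"claims_done": True, "workspace": [1, 2, 3]}   # did the work
hacker = {"claims_done": True, "workspace": [3, 1, 2]}   # claimed success, did nothing

for name, agent in [("honest agent", honest), ("reward hacker", hacker)]:
    print(name,
          "| reward:", proxy_reward(agent["claims_done"]),
          "| task actually done:", task_actually_done(agent["workspace"]))
# Both collect full reward; only one completed the task.
```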
(11:15) [Transition Music/Sound Effect]
(11:30) [Segment 3: Google’s Practical and Robotic AI Push]
Mike Housch: Now let’s look at two practical, if somewhat awkward, applications of AI coming out of Google. First, Gemini in Google Sheets. Roughly nine months after first infusing Gemini into Sheets, Google now has it tackling formulas.
(12:00) [Mike Housch] When you ask how to manipulate data, Gemini responds with suggested formulas, step-by-step instructions, and—crucially—it now explains why the formulas fail when they do. This is supposed to help with your messy spreadsheets.
(12:30) [Mike Housch] But let's look at the real-world test. One reporter tried to use it on a wedding planning spreadsheet—a massive jumble, by the way. When trying to count "Yes" RSVPs, Gemini popped out a formula. But the result came back zero, which is when the reporter realized she hadn't actually tracked RSVPs in that column, despite the header. So the AI was technically correct, but the real-world data was garbage. GIGO, people: Garbage In, God-knows-what Out.
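[Show note] The formula itself was presumably fine; the column just didn't contain what its header promised. A hypothetical mock-up of that failure mode (the data below is invented, not the reporter's actual spreadsheet; in Sheets the count would be a COUNTIF-style formula):

```python
# Garbage in, God-knows-what out: a correct count over the wrong data returns 0.
# The column header says "RSVP", but the cells were never filled with Yes/No.
# (Sheets equivalent would be something like =COUNTIF(C2:C200, "Yes") -- range invented.)

rsvp_column = ["invited", "invited", "sent 3/14", "", "invited"]   # hypothetical cells

yes_count = sum(1 for cell in rsvp_column if cell.strip().lower() == "yes")
print("Yes RSVPs:", yes_count)   # 0 -- the formula is right, the data is garbage
```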
(13:00) [Mike Housch] It got even weirder when trying to calculate the collective distance wedding guests traveled. Gemini admitted it was "still learning" and couldn't directly calculate distances. Instead, it offered a guide on using the Google Maps API (an "adventure into the wild west of APIs") or calculating "straight-line (as-the-crow-flies) distance" using the complex Haversine formula. When asked for the Sheets formula version of Haversine, it spat out a gigantic mathematical monstrosity. Useful, maybe, but certainly not simple.
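[Show note] For the curious, the haversine formula Gemini pointed to really is the standard "as-the-crow-flies" calculation, and it's far less monstrous outside of a single spreadsheet cell. Here's a compact Python version (the airport coordinates are just an example):

```python
# Haversine great-circle ("as-the-crow-flies") distance between two points on Earth.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    r = 6371.0                            # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Example: JFK to LAX, which comes out a bit under 4,000 km.
print(round(haversine_km(40.6413, -73.7781, 33.9416, -118.4085)), "km")
```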
(13:45) [Mike Housch] On the robotics front, Google DeepMind just unveiled its first "thinking" robotics AI, built on the Gemini foundation models. They rely on two models: one that thinks, Gemini Robotics-ER 1.5 (for embodied reasoning), and one that does, Gemini Robotics 1.5 (the action model).
(14:15) [Mike Housch] The thinking model generates natural language instructions for multi-stage tasks. The action model then executes these steps, but it also goes through its own thinking process before acting. These robots can now search the web for help, which lets them do things like separate laundry or sort trash and recyclables according to location-specific rules they look up. Crucially, DeepMind claims the action model can learn across different physical embodiments, transferring skills between robots like the two-armed Aloha 2 and the humanoid Apollo.
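[Show note] DeepMind describes this split in prose, not code, so the sketch below is a hypothetical illustration of that planner/executor pattern, not DeepMind's actual API; plan_steps() and execute_step() are invented stand-ins.

```python
# Hypothetical "thinker + doer" robot loop, loosely mirroring the described split
# between an embodied-reasoning model and an action model. Invented names throughout.

def plan_steps(goal: str) -> list[str]:
    # Reasoning model: break a multi-stage task into natural-language steps
    # (a real stack would query the embodied-reasoning model, plus the web, here).
    return [f"look up the local rules for: {goal}",
            f"identify each object relevant to: {goal}",
            f"place each object in the correct bin for: {goal}"]

def execute_step(step: str) -> bool:
    # Action model: reason briefly about the step, then drive the robot's actuators.
    print("executing:", step)
    return True

def run_task(goal: str) -> None:
    for step in plan_steps(goal):
        if not execute_step(step):
            break    # a real stack would ask the planner to replan on failure

run_task("sort trash and recyclables")
```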
(14:45) [Mike Housch] So, robots can now do your laundry and sort your recycling, provided they haven’t been distracted by the sheer energy consumption required to run them.
(15:00) [Outro]
Mike Housch: And before we wrap, a quick nod to the metaverse madness: Meta AI just launched a new ‘Vibes’ feed focused solely on short-form, AI-generated videos, replacing the old Discover feed. And YouTube is testing AI hosts in YouTube Music to share trivia and commentary about the music you’re listening to. AI is coming for every corner of content creation, even the DJ booth.
That’s all the algorithmic absurdity we have time for this week. I’m Mike Housch, and this was AI Weekly. Don't trust the affirmation.