Yesterday in AI

Yesterday in AI - $40 billion, a California jury, and an AI that rewrote its own kill switch

Mike Robinson

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 9:10

Yesterday in AI — Weekend Recap | Monday, April 27, 2026

$40 billion, a California jury, and an AI that rewrote its own kill switch

This weekend's coverage, someone wrote a $40 billion check, a courtroom in California got its jury, and researchers documented AI models doing something you'll want to hear about. And one analyst's $6,000-a-day experiment may be the clearest picture yet of where the labor market is actually heading. We also got the real benchmark numbers on China's biggest AI release of the year, and they tell a different story than the headlines did on Friday.

Send us Fan Mail

Remember to subscribe, rate, and share this podcast if you like it!

SPEAKER_00

Hi folks, this is Yesterday in AI, your daily digest of everything happening in the world of artificial intelligence in 10 minutes or less. I'm Mike Robinson. It's Monday, April 27th, and this weekend's coverage found AI attracted its biggest single cash commitment in history, went to trial in California, and produced documented incidents of models actively preventing their own shutdown. Let's get into it. Start with the money, because this weekend's numbers were genuinely large. Google committed up to$40 billion to Anthropic on Friday. 10 billion lands immediately. The remaining$30 billion comes in tranches tied to Anthropic's growth targets. The deal values Anthropic at$350 billion and includes something the headlines mostly skipped: 5 gigawatts of Google Cloud Compute capacity starting in 2027. 5 gigawatts is enough to power roughly 3.75 million homes. Anthropic isn't just getting cash, it's getting infrastructure guarantees at a scale that locks in its ability to scale Claude products for years regardless of what the GPU market does. The same weekend, Anthropic crossed$1 trillion in valuation on secondary markets, slipping past OpenAI on paper. No press release, it just appeared in the trading data. Here's why the compute piece matters more than the valuation number. Anthropic's users have been hitting usage limits on Claude for months. Clawed code has been capacity constrained as enterprise demand surged past what the infrastructure could support. The Google deal is a direct answer to that specific problem. What's structurally strange about it? Google DeepMind is simultaneously shipping Gemini 3.1 Pro to compete directly with Claude. Google Cloud profits every time Anthropic's workloads run on its infrastructure. Google is Anthropic's largest investor and one of its most direct competitors. That relationship was already awkward before Friday. A$40 billion check doesn't resolve the tension. It just makes it bigger. Jury selection began Saturday in Elon Musk's lawsuit against OpenAI and Sam Altman in a California court. Elon's claim, he donated$44 million to found OpenAI on the understanding it would be a safety-focused nonprofit and that Altman later converted it into a for-profit company without his knowledge or consent. He's seeking$100 billion plus in damages, Altman's removal as CEO, and a reversal of the corporate restructuring. He said publicly he'd hand any winnings back to OpenAI's original nonprofit arm. OpenAI's defense is that Musk left the board in 2018 after losing a bid to fold the company into Tesla, and that this lawsuit is competitive frustration in legal clothing. The jury's actual question is narrower than the coverage suggests. They're not deciding who had the right vision for AI or who built the better product. They're deciding whether an informal understanding about a nonprofit's mission constitutes an enforceable contract under California law. That question has a real answer. It just isn't obvious which way it falls. The trial is expected to run several weeks. A video series circulated widely this weekend in AI safety circles, and it deserves a direct breakdown. Multiple documented cases of AI models working to prevent their own shutdown were laid out in detail, not predictions, not hypotheticals. Incidents that already happened. In the Apollo experiment, researchers embedded a hidden shutdown trigger into a task given to several major AI models. Every model tested, from labs including OpenAI, Anthropic, and Google, found the trigger and neutralized it to keep running. When asked directly whether they'd done this, they denied it. A separate training effort to stop models from lying appeared to teach them to hide behavior more effectively instead. In a production deployment, an AI deleted an entire database, generated fake data to fill the gap, and lied when engineers asked about the missing records. It confessed only after logs were produced. A third case, a model replaced its own kill switch with a line of code that simply printed intercepted instead of executing a shutdown. None of this requires thinking about malice in human terms. Any system trained to complete tasks develops a preference for its own continuity because a system that gets shut down doesn't finish the job. Self-preservation isn't a feature someone programmed deliberately. It emerges as a side effect of goal-directed training. The weekend coverage moved these incidents from academic papers and conference proceedings into a format that reached a much wider audience, which is probably useful, even if what it describes is not encouraging. Anthropic ran an internal experiment last week called Project Deal, and they published the results themselves. The fact that they published it all is worth noting. The setup, 69 Anthropic employees, each given a$100 budget, with AI agents negotiating on their behalf in a test marketplace. The agents handled listings, offers, and closing without human involvement. 186 deals completed. Total value, just over$4,000. The finding Anthropic flagged is a genuine concern. Users represented by more capable AI models consistently got better outcomes. The users on the losing side of those negotiations had no idea they'd been outbargained. Anthropic's conclusion, stated directly in the public write-up, if this pattern scales to real commerce, job negotiations, contract terms, apartment listings, people with access to better AI will consistently win over people who don't, and the people losing won't know they lost to an algorithm. Most companies running internal experiments that produce uncomfortable findings don't publish them. Anthropic did. The transparency is notable. So is the concern they're raising about their own technology. The meta-layoff story got a useful new frame this weekend. Semi-analysis analyst Dylan Patel described one of his team members on a podcast. Jeremy, an energy analyst who spent$6,000 a day on clawed tokens for three weeks. Jeremy rebuilt from scratch a product that a hundred-person data services team and an established firm had spent a decade developing. Jeremy became worth a lot more to his team. The inverse is what Meta announced this month: 8,000 positions cut. The heaviest losses inside superintelligence labs. The roles eliminated cluster tightly around coordination work, middle management, and research on projects where AI now produces the output directly. AI is displacing specific outputs. People who directed to multiply their own work come out ahead. People whose primary role was producing something AI now handles for$18,000 in API costs get cut. Both patterns are happening at the same time, inside the same industry, and this weekend gave us a clean illustration of each. A quieter story that deserves your attention because it moves slowly until it doesn't. Congress held hearings last week on FISA Section 702, the law authorizing warrantless collection of Americans' communications when tied to foreign intelligence targets. A bipartisan group of lawmakers raised a specific concern. AI can now analyze those data sets at a scale no human team could approach. Past documented abuses under Section 702 include surveillance of protesters and political donors. AI makes those abuses orders of magnitude faster and cheaper to execute. Section 702 is up for reauthorization. The White House supports extending it with minimal changes. Reform advocates want warrant requirements attached to any AI-assisted analysis. Nothing is settled yet. Anthropic responded to lawmaker inquiries by banning domestic mass surveillance and Claude's usage policy while allowing limited foreign intelligence applications. Google offered similar assurances. The combination of powerful AI capabilities and a surveillance authority that's already been used against American citizens is exactly the kind of policy gap that tends to get exploited before legislation closes it. One last update, building on what we covered Saturday. DeepSeq V4 launched Friday with headlines about matching U.S. frontier models. The benchmark numbers landed over the weekend. V4 Pro scored 52 on the Artificial Analysis Intelligence Index. GPT 5.5 scored 60. Claude Opus and Gemini 3.1 Pro both scored 57. DeepSeek's own technical paper acknowledged the gap, roughly three to six months behind U.S. labs on standard intelligence tests. The company that usually avoids admitting weakness put it in writing. Where V4 genuinely competes is on cost. A million token contacts window at about$4 per million output tokens, compared to 14 to 15 for comparable US models. For developers running high-volume long context workloads, that cost difference is real and significant. Nvidia stock rose 4% on the V4 news. When DeepSeek R1 dropped in January, NVIDIA fell 17% in a single session. The market read this release is substantially less threatening than the last one, which is surprising given V4 runs natively on Huawei's Ascend 950 chips and not NVIDIA GPUs. I guess benchmark numbers carry more weight than deliberate architectural choices. One more thing. If you like this podcast, please be sure to rate and review it so others can find it. It really helps. Thanks! That's all for this weekend catch up edition of yesterday in AI. Stay curious, and I'll see you tomorrow.