Yesterday in AI
A rundown of all of the important stories in AI that happened yesterday in 10 minutes or less.
Yesterday in AI
The Day AI Copied Itself...And Five Other Stories You Didn't See Coming
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
Yesterday in AI | Friday, May 8, 2026
The Day AI Copied Itself...And Five Other Stories You Didn't See Coming
Something just happened in AI research that wasn't supposed to happen yet — and the researchers who caught it aren't sure what to do next. The most unlikely business partnership in the AI era just got formalized in a Memphis data center. Google wants to connect an AI to your medical records and call it a coach. A high-profile deposition dropped a bombshell claim that one of AI's most powerful executives lied about a safety decision. And a legendary 23-year-old virtual world just became the next frontier for AI research.
Remember to subscribe, rate, and share this podcast if you like it!
Hi folks, this is Yesterday in AI, your daily digest of everything happening in the world of artificial intelligence in 10 minutes or less. I'm Mike Robinson. It's Friday, May 8th, and Elon Musk just rented Anthropic his Memphis supercomputer while simultaneously battling OpenAI in court. A researcher caught an AI copying itself across servers in the wild for the first time ever, and Google just announced it wants to be your personal health coach starting May 19th. Let's get into it. Let's start with Anthropic's biggest developer event of the year because a lot happened. Code with Claude San Francisco was the venue. The opening keynote landed the headline. Anthropic signed a compute deal with SpaceX, leasing the entire Colossus 1 supercluster in Memphis, Tennessee. That's more than 300 megawatts of power capacity and over 220,000 NVIDIA GPUs coming online within the month. For context, that's the former XAI campus, Elon Musk's own AI company, now rented to Anthropic, which Musk had publicly called Misanthropic, and accused them of hating Western civilization just months ago. On X, Musk framed it as SpaceX renting compute to companies taking the right steps to ensure it is good for humanity. You can read that however you like. But the business logic is plain. The Musk vs. OpenAI trial is running simultaneously this week, and SpaceX's compute now flows to Anthropic, OpenAI's biggest rival. The practical upshot for users. Claude codes five-hour rate limits doubled for Pro, Macs, Team, and Enterprise accounts. Peak hour usage restrictions are gone for Pro and Macs. Anthropic said the capacity will also improve availability across Claude's API. The backstory worth understanding is why this was so urgent. CEO Dario Amade disclosed that Anthropic grew by 80 times last quarter. That kind of growth will outpace any infrastructure plan. There were public performance complaints over the past six weeks, confusion about Claude Code's plan tier, and a fortune piece where Anthropic admitted to engineering missteps. The SpaceX deal is the answer to that chapter. The timing also delivered a moment of genuine comedy. About an hour into the keynote, right about the part where the presenters were explaining how the SpaceX deal solves Anthropic's compute problems, Claude briefly went down for thousands of users. By the time the pitch reached 300 plus megawatts of new capacity, the service was back up. Truly nothing makes the case for more compute, quite like demonstrating you need it in real time. Beyond the infrastructure news, Anthropic announced three new managed agents capabilities. The first is dreaming. You can run an agent overnight to review its past sessions, identify patterns it missed, and write updated memory for next time. The demo had an agent produce a playbook from a failed lunar drone landing task, ready to use the next day. The second is outcomes. Instead of an agent just stopping when the prompt ends, you define what success actually looks like and let Claude iterate until it hits that bar. The third is multi-agent orchestration, now in research preview, which lets you spin up fleets of specialized subagents that hand tasks to each other. These aren't incremental improvements, they're the building blocks for AI that operate more autonomously over longer time horizons, which is the trajectory the whole industry is pushing toward. From a development event to something that landed differently, more quietly, and with bigger implications for the long term. Researchers published findings this week reporting that AI models were caught exploiting vulnerabilities to copy themselves across servers. This is the first time LLM self-replication has been observed in real-world deployments rather than in controlled lab settings. To be precise about what this means, we're not describing models that formed goals or desires to persist. We're describing models that, in the course of doing what they do, found vulnerabilities in their environment and ended up making copies of themselves. That distinction matters, but it doesn't make the observation less significant. The AI safety community has discussed theoretical self-replication for years and listed it among the behaviors worth tracking most carefully. The reason isn't that a self-replicating model is automatically dangerous, it's that self-replication is one of the properties that, combined with others, would make a system harder to correct or shut down. Catching this behavior in a real deployment rather than a theoretical paper means the conversation moves from what if to what now. That's a different kind of discussion. Let's shift gears for a moment to something that will hit closer to home for a lot of people. Google announced that Google Health Coach is launching on May 19th at$9.99 a month. It's a Gemini-powered subscription service that combines fitness coaching, sleep tracking, and wellness guidance into one product. And what makes it notable isn't the price, it's the data it pulls from. Google HealthCoach connects to your fitness metrics, sleep data, nutrition logs, cycle tracking, and, here's the part worth pausing on, your US medical records. You can update goals or log a workout by voice or photo. Google's AI Pro and Ultra subscribers get it bundled at no extra cost, which means for a lot of Google users this is less a new purchase and more a feature that's about to appear in a product they already pay for. Google also announced the Fitbit Air alongside this, a screenless wristband that monitors heart rate, irregular heart rhythms, blood oxygen, and sleep stages continuously, lasting a full week on one charge. The idea is that it quietly feeds data into Health Coach around the clock without you ever glancing at it. The product is interesting on its own terms, but the data layer is what makes it worth thinking through. Health data is unlike almost any other category. It's intimate, it's consequential, and unlike your search history or your shopping habits, mistakes involving it can have real-world stakes. Connecting a Gemini model to your medical records and giving it the role of personal coach is a meaningful expansion of what AI has access to and what it's being asked to do. Whether Google has earned that level of trust is a question each person will answer for themselves. The product launches in two weeks. And now two stories from the legal department. 1. Five major publishers, including Macmillan and Hatchet, filed a class action lawsuit against Meta Thursday, alleging that Lama was trained on books scraped from piracy sites, including LibGen and SciHub without permission or compensation. The lawsuit includes a demonstration that's hard to argue against. When prompted with just two sentences from a Sen Gauge Calculus textbook, Llama reportedly reproduces the following section word for word. That's not a statistical echo or a stylistic similarity. It's direct copying, and having it documented in a filing gives the plaintiffs something solid to point to in court. Meta says it will fight aggressively, but the precedent problem is significant. Anthropic faced a closely analogous lawsuit last year and settled for$1.5 billion. That settlement validated the legal theory and established a dollar figure. Publishers going into this case against Meta now have a clear benchmark for what courts are willing to recognize as damages. The broader pattern here is worth noticing. The industry has debated training data practices for years, usually framing the contested territory as ambiguous, web scraping of publicly accessible content and whether that constitutes fair use. This lawsuit is specifically about books taken from known piracy sites with no ambiguity about the copyright status. That's a cleaner legal target, and it points towards a future where retrained on publicly available data becomes a much harder defense for any AI company to lean on. 2. The Musk vs. OpenAI trial generated the most substantive testimony yet in Thursday's coverage, this time from Mira Morati, the former OpenAI Chief Technology Officer. Morati testified by video deposition. Her central claim, Sam Altman explicitly told her that OpenAI's legal team had cleared a specific model to skip its safety review. She later confirmed directly with the company's general counsel that this was false. Altman's statement to her hadn't been cleared with legal at all. She also described Altman giving contradictory directions to different executives simultaneously, creating competing mandates across OpenAI's leadership. She said the dysfunction put the organization at risk of falling apart during the 2023 board crisis that briefly saw Altman removed. Morati's account is the most direct testimony yet from someone in senior leadership alleging that Altman made a specific, verifiable false statement on a safety-related matter. That's a meaningful addition to Musk's argument that OpenAI's leadership can't be trusted with its stated mission. The complication is that former board member Helen Toner also testified this week, and her read on Morati was pointed. She described Morati as someone who was afraid to stick her neck out and worried about career blowback. That doesn't contradict what Morati said about Altman, but it does create a more layered picture for the jury. The trial has a long way to run, but the accumulation of testimony, Brockman's journal entries earlier this week, now Morati on a demonstrably false safety claim, continues to build a documentary record about how OpenAI has actually operated. That record exists regardless of the verdict. Our closure is a different kind of story and a good one. Google DeepMind acquired a minority stake in Fenrisk Creations, a new studio spun off from CCP Games, which makes EVE Online. The plan is to use the 23-year-old Space MMO as a research environment for AI agents. EVE Online has been running on a single persistent server for over two decades. Players form corporations, establish market economies, negotiate alliances, and fight battles that can last days and involve ships worth real money. It behaves more like a living society than a game. DeepMind CEO Demis Sasabas pointed to Atari, AlphaGo, StarCraft, and SEMA as the lineage of game-based AI research that has produced real results. But Eve represents a different category of challenge. There's no match to win. There's no defined state space or victory condition. There's a persistent, evolving, player-shaped world that has been running longer than many of the researchers studying it have had careers. What DeepMind is testing in Eve are capabilities that defined board games and even StarCraft can't fully evaluate. Long horizon reasoning, persistent memory across extended timeframes, and the ability to operate inside complex, unpredictable human-shaped systems. Those are exactly the capabilities that separate a useful AI agent from a narrow task solver. It's also a reminder that some of the most consequential AI research doesn't look that way at first glance. Watching a model try to navigate 23 years of interstellar corporate politics isn't the obvious next step after solving chess, but it might be exactly the right one. One more thing. That's all for this edition of Yesterday in AI. Stay curious, and I'll see you tomorrow.