Yesterday in AI
A rundown of all of the important stories in AI that happened yesterday in 10 minutes or less.
Someone just leaked Google's hand before Google I/O.
Yesterday in AI — Weekend Recap | Monday, May 11, 2026
A developer digging through a routine app update found 7 hidden Gemini models Google hasn't announced yet — and the names point to something big coming later this month. Meanwhile, a French startup's robotic hands went viral for all the right reasons, a major American company quietly handed most of its coding to AI, and a data center approval in Utah ended with local commissioners fleeing their own meeting. Plus: the personal AI agent wars are heating up, and Nvidia is making moves that go well beyond selling chips.
Remember to subscribe, rate, and share this podcast if you like it!
Hi folks, this is Yesterday in AI, your daily digest of everything happening in the world of artificial intelligence in 10 minutes or less. I'm Mike Robinson. It's Monday, May 11th, and this weekend's coverage includes someone finding seven hidden Gemini models inside a Google app update ahead of Google I/O, a French startup's robotic hands playing piano on camera and going viral, and Airbnb saying AI wrote 60% of its code last quarter. Let's get into it.

Let's start with something that's going to matter a lot in the next few weeks. A developer digging through Google app v17.18.22 found a hidden model selector inside Gemini Live and pulled out seven unreleased model names that Google hasn't announced publicly. Among them: the codenames Copybara and Nitrogen, a personalization variant, two models labeled Release Candidate 2, and a thinking variant that points to enhanced reasoning capabilities. Google I/O is happening later this month. The timing is not a coincidence.

A few things make this worth paying attention to. Release Candidate 2 is production-ready language. These aren't experimental models buried deep in a test branch. They're sitting in the shipping layer of a live app, which means Google is far enough along that it's choosing when to announce, not whether to ship. The thinking variant is the piece worth watching most closely. Google's existing Gemini lineup has lagged on this specific capability. A dedicated thinking variant in the Gemini Live family would close that gap, and doing it inside a voice-native interface would be genuinely new territory. The selectable model option is also interesting for everyday users. Right now, picking between faster and smarter responses requires switching apps or paying for a different tier. If Google rolls out a model selector inside Gemini Live, that becomes a consumer-level choice inside an interface hundreds of millions of people already have on their phones. Google I/O is where this gets confirmed or quietly shelved. 
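For the curious: finds like this usually come from unpacking the app bundle and scanning its files for telltale strings, since Android APKs are ordinary zip archives. Here's a minimal sketch of that kind of dig in Python — the function name and the model-name pattern are illustrative assumptions, not the actual method or string format used in the Google app:

```python
import re
import zipfile

def find_model_strings(apk_path, pattern=r"gemini[-_a-z0-9.]+"):
    """Scan every file inside an app package (APKs are zip archives)
    for byte strings matching a hypothetical model-name pattern,
    and return the deduplicated, lowercased hits."""
    rx = re.compile(pattern.encode("ascii"), re.IGNORECASE)
    hits = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            data = apk.read(name)  # raw bytes of each packed file
            for match in rx.findall(data):
                hits.add(match.decode("ascii", errors="replace").lower())
    return sorted(hits)
```

In practice a researcher would also decode the compiled resources and config flags, but even a plain string scan over the archive is often enough to surface unreleased model identifiers.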
Either way, someone has just handed us the preview.

From a leak that may have legs to something that says a lot about where AI actually is inside major companies right now. Airbnb disclosed last week that 60% of the new code it shipped in the first quarter of 2026 was written by AI. That's not a pilot program or an engineering experiment. It's the production output of a company with over 6,000 employees, reported on an earnings call, covering a full quarter. Two years ago, AI-assisted code generation meant autocomplete suggestions. A year ago, it meant drafting functions that an engineer then rewrote. Now it means 60% of Airbnb's shipped code never had a human author at the first draft. The more interesting question that number raises: what are the engineers doing? Based on what Airbnb and other companies at this stage of adoption have said, the answer is review, architecture, and judgment calls. The AI writes the implementation; humans decide what to build and whether the output actually solves the problem. Airbnb isn't alone. Google's CEO said last month that AI writes 75% of new Google code. The direction is consistent across companies, and Airbnb's 60% shows it's moving along the same curve. The signal worth tracking: when AI writes the majority of production code at major tech companies, the constraint on software output stops being how fast engineers can type. It becomes how fast the organization can decide what to build and verify that it works. That changes hiring, team structure, and how engineering value gets measured.

Something more immediate for anyone who's been following the personal AI agent space. We've covered OpenClaw several times. It defined the personal AI assistant category when it launched over Christmas and became one of the most discussed product releases in months. Last week, a competitor called Hermes Agent shipped version 0.13, and it's picking up real ground. 
About 30% of OpenClaw users have switched to Hermes, according to Reddit sentiment surveys. The product has more than 135,000 GitHub stars. Last week's release included 864 commits from 295 contributors and closed eight critical security vulnerabilities. It's MIT-licensed; runs on a $5 VPS, a GPU cluster, or serverless; and works with OpenAI, Anthropic, or your own endpoint. You can reach it through Telegram, Discord, Slack, WhatsApp, Signal, or your CLI.

The design difference that actually matters: OpenClaw organizes around a messaging hub, a central interface that connects to your tools. Hermes puts the learning loop at the center. After a complex task, it enters what it calls a reflective phase. It analyzes what worked, extracts reusable patterns, and writes a skill file that encodes the solution for next time. The compounding is automatic. You don't have to retrain it or update your prompts. The honest version of the comparison? Hermes isn't strictly better across the board. OpenClaw still has more messaging integrations, more security scrutiny, and a more transparent memory structure. A lot of power users are running both, but if you want a self-hosted agent that gets measurably better at your specific work the more you use it, Hermes is becoming the cleaner answer.

The data center story of the weekend is worth your attention, both for what happened and for what it signals. Kevin O'Leary, the Shark Tank investor, just got county approval to build a 40,000-acre data center campus in Box Elder County, Utah. To give you a sense of scale, that's two and a half times the size of Manhattan. The campus would eventually draw 9 gigawatts of power, more than double Utah's current total statewide electricity use, running off-grid via a private natural gas pipeline. Projected impact on Utah's total carbon emissions? Roughly a 50% increase. About 1,100 locals showed up at the county fairgrounds to oppose it. 
When 1,800 written objections came in on the water-rights change, one commissioner told the crowd, quote, for hell's sake, grow up, end quote. People yelled cowards and people over profit back. The commissioners walked out while the room was still shouting, then projected the rest of the meeting back into the room from a separate space. O'Leary's response on X: most of the protesters were paid activists bused in from out of state, and some of the online opposition was being amplified by AI. Locals pointed out that the AI doing the amplifying was the one O'Leary is building. One data center commission vote doesn't set national policy, but communities across the country are going to face similar proposals, with similar math, over the next few years. This one just established a pretty clear baseline for how some local governments intend to handle the pushback. It's worth keeping an eye on.

One more to close out the weekend wrap-up, and it's a quick but useful piece of context for how the AI industry is actually wired right now. Nvidia has already committed over $40 billion to AI equity investments in 2026. The biggest single piece: a $30 billion bet on OpenAI. The rest has gone into public companies, including Corning and Iron, plus roughly two dozen private startup rounds, following 67 venture deals Nvidia made across all of last year. The obvious observation is that Nvidia is investing in the companies that buy its chips, which makes the returns circular by design. Critics call it circular dealing. Wedbush analyst Matthew Bryson argues it could help Nvidia build a durable competitive position. Both can be true at the same time. The more interesting thing this tells you: Nvidia is no longer just the infrastructure layer under the AI industry. It's one of the biggest investors in the sector it supplies, which means its strategic interests are now tied to the success of specific companies and specific architectures in ways that go well beyond selling GPUs. 
The competitive maps of these companies are genuinely strange right now. Nvidia owns a piece of OpenAI. OpenAI competes with Anthropic. Anthropic leases compute from SpaceX. SpaceX is run by someone in an active lawsuit against OpenAI. That's not a competitive market in any traditional sense. It's something else. And understanding that context matters for understanding why decisions get made the way they do.

A couple of things before I go. If you have any feedback about this show, you can email Mike at yesterdaynai.news, or you can find me on LinkedIn, X, or Bluesky. If you like this podcast, please be sure to rate and review it so others can find it. It really does help. Thanks. That's all for this weekend catch-up edition of Yesterday in AI. Stay curious, and see you tomorrow.