Yesterday in AI

A $33B AI chip goes public in 48 hours, and hackers just used AI to write their first real cyberattack

Mike Robinson


Yesterday in AI | Wednesday, May 13, 2026


AI's cybersecurity cold war stopped being theoretical yesterday. Google confirmed the first case of hackers using AI to find and weaponize a zero-day flaw, and a massive supply chain attack is quietly targeting the tools developers use every day. There's a $33 billion chip company going public Thursday that most people are sleeping on. Mira Murati's first product since leaving OpenAI just challenged the industry's biggest assumption. And Apple has six weeks to flip AI accessibility for two billion people.


Remember to subscribe, rate, and share this podcast if you like it!

Mike Robinson:

Yesterday in AI. Hi folks, this is Yesterday in AI, your daily digest of everything happening in the world of artificial intelligence in 10 minutes or less. I'm Mike Robinson. It's Wednesday, May 13th, and AI is writing checks the security world didn't expect to cash this fast. Let's get into it.

The biggest pure AI IPO in years lands this Thursday. Cerebras Systems, the chip company that builds processors the size of a dinner plate, is going public on NASDAQ under the ticker CBRS. The offering was just upsized to $4.8 billion at a $33 billion valuation, and orders came in for 20 times the available shares. Building an AI model happens in two stages. First you train it: the expensive, months-long process of feeding the model massive amounts of data so it learns to understand language and reason. Then comes inference, every time the model actually answers your question. Nvidia dominates training. Cerebras is making a serious play for inference, building one chip per wafer instead of cutting the wafer into hundreds of smaller chips. Less data bouncing between chips means faster, cheaper answers. OpenAI has already committed $10 billion worth of processing capacity to Cerebras through 2028, and a startup is reportedly working right now to help OpenAI tune its models specifically for Cerebras hardware, because Nvidia chips are too scarce to rely on alone. The demand picture behind all of this looks more and more like demand for electricity: essentially unlimited. Every wealthy country runs on enormous amounts of power, and AI looks like the next thing in that same category.

While the money is flowing toward chips and hardware, one of the most watched people in AI has been remarkably quiet since leaving OpenAI. Yesterday, Mira Murati finally broke that silence. She was OpenAI's chief technology officer for years, left in late 2024, started a new lab called Thinking Machines, and has been mostly heads down since.
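The train-then-infer split described above can be sketched as a toy, with no connection to any real chip or model: a one-parameter model is fitted once by gradient descent (training), and every later prediction is a cheap reuse of that fitted weight (inference). Everything here is illustrative.

```python
# Toy illustration of the two stages of an AI model:
# "training" fits parameters to data; "inference" is every later use of the fit.

def train(xs, ys, lr=0.1, steps=50):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def infer(w, x):
    """Inference: one cheap forward pass per query, reusing the trained weight."""
    return w * x

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # data generated by y = 2x
print(round(infer(w, 10.0)))  # ~20
```

Training ran a loop over the data many times; inference was a single multiplication. That cost asymmetry is why inference-focused hardware is a market of its own.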
Yesterday, they published a research preview of what they're calling interaction models, and the concept is different from where most of the field is headed. The model takes in voice, video, and text simultaneously in real time, so you can talk to it the way you talk to a person: interrupt mid-sentence, point at something on screen, redirect the whole conversation without waiting for it to finish a turn. A second background model handles the slower reasoning and tool work while the live-facing model stays responsive. Most of the big labs are racing toward agentic AI right now, systems that run for hours or days with no human watching, completing long tasks on your behalf. Murati is going the other direction. Her argument is that how we collaborate with AI matters as much as how smart it is. Whether that carves out a durable market or just fills a gap until agents take over entirely, we won't know for a while. But given who built it and how long she's been quiet, it's worth watching closely.

Speaking of things to watch closely, there's a supply chain attack worth knowing about. When developers build software today, they don't write everything from scratch. They pull in thousands of pre-built tools and code libraries that other developers have already written, snapping them together like building blocks. Those libraries are what's called the software supply chain. A supply chain attack is when someone poisons one of those building blocks before you download it, so you install something that looks legitimate but is carrying malicious code. Security researchers identified exactly that kind of attack, called Mini Shai-Hulud, run by a group called Team PCP. They corrupted packages used by developers across the AI ecosystem, including projects tied to TanStack, Mistral AI, OpenSearch, and Guardrails AI: more than 170 packages with 518 million cumulative downloads. The malicious code was designed to steal passwords and credentials and to target cloud accounts and crypto wallets.
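The standard defense against a poisoned building block is to pin the exact digest of the artifact you expect and refuse anything that doesn't match. A minimal sketch of that idea, with hypothetical contents and names, not tied to any real package manager:

```python
import hashlib

# Pinned, known-good digest for an artifact (value here is purely illustrative).
PINNED_SHA256 = hashlib.sha256(b"legitimate package contents").hexdigest()

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Trust the artifact only if its SHA-256 digest matches the pinned one."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# An untampered copy passes; a poisoned copy fails, no matter how legitimate it looks.
good = verify_artifact(b"legitimate package contents", PINNED_SHA256)
bad = verify_artifact(b"legitimate package contents + malicious payload", PINNED_SHA256)
print(good, bad)  # True False
```

This is what lockfiles with integrity hashes do automatically: the pin is recorded when a dependency is first vetted, and any later swap of the package contents breaks the match.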
But what makes this specifically an AI story? The malware was built to burrow inside AI coding tools like Claude Code and VS Code and stay there, essentially becoming a hidden passenger in developers' daily work environments. It could even trigger a self-destruct sequence if someone tried to shut it down. The affected packages have been pulled, but the deeper issue is structural: as AI developer tools become a bigger part of how software gets built, they become a bigger target. We built the ecosystem fast; the security practices haven't caught up.

That supply chain story puts the next one in sharp relief, because there's a real strategic disagreement playing out right now about how AI should be used for cybersecurity. OpenAI launched a product yesterday called Daybreak, a cyber defense platform that uses its latest models to automatically find vulnerabilities in software, suggest fixes, and help companies build more secure code from the start. They've partnered with major security firms, including Cisco, CrowdStrike, Palo Alto Networks, and Cloudflare. OpenAI is going wide: the platform is designed for any enterprise security team to deploy. Anthropic's approach with Mythos is the opposite. It's restricted to roughly 40 vetted organizations through a program called Project Glasswing, set up specifically for defensive cybersecurity work. The reason for the tight control? Mythos found 271 vulnerabilities in a single version of Firefox. At that capability level, a model is a weapon as much as it is a tool, and Anthropic is treating it like one. OpenAI is betting that getting AI into security teams' hands outweighs the risk of misuse. Anthropic is betting that releasing a model this powerful would cause more harm than good. Both positions are defensible, and the timing is uncomfortable: Google's Threat Intelligence Group confirmed yesterday that criminal hackers have already used AI to find and exploit what's called a zero-day vulnerability.
That's a security flaw nobody knew existed, meaning defenders had zero days of warning before it was used as a weapon. In this case, it was a bug in a popular web admin tool that let attackers bypass two-step login verification. The attack code was unusually clean, came with detailed explanatory notes, and included a fake severity rating: all signs the exploit was written with AI. Google caught it before it caused damage. The question the whole industry is asking is how long that lead holds.

Step back from the cybersecurity picture for a moment, and there's a parallel conversation happening in boardrooms everywhere about jobs, and it has two very different answers depending on who you ask. The outplacement firm Challenger, Gray & Christmas reported that more than a quarter of all April job cuts, over 21,000 positions, were attributed to AI. That's the second consecutive month that AI has been the leading stated reason for layoffs. Then you have Kevin Hassett, the White House National Economic Council director, telling CNBC there is no sign in the data that AI has cost anyone their job yet. He says companies adopting AI tend to see faster revenue growth and hire more people. My read? Companies that overhired during the pandemic now have a culturally acceptable justification for cuts they needed to make anyway. Framing reductions as AI-driven restructuring makes them look like they're riding a wave instead of correcting a hiring mistake. GM just cut 600 IT workers and said explicitly that it's replacing them with AI-specific talent. Block, Atlassian, Meta, Oracle, Amazon: all similar math. A survey last month found that 60% of senior executives plan to cut employees who can't or won't use AI. That's a real intention, and it'll show up in layoff data regardless of whether the models are actually doing the work yet. If AI is going to reshape the workforce, it also has to reach the people who haven't touched it at all.
And the biggest potential on-ramp for the average person has a date: June 8th, Apple's annual developer conference. Apple Intelligence, Apple's umbrella name for the AI features it's been adding to iPhones, iPads, and Macs since 2024, has underdelivered since it was first announced two summers ago. The promise was a smarter Siri, better writing tools, and AI woven throughout the phone experience. The reality has been slow and underwhelming. June 8th is the reboot. Two moves are being reported ahead of the keynote. First, Siri is getting rebuilt with Google's Gemini AI powering it under the hood, while Apple keeps its own look and feel on top. There's also a standalone Siri app coming with full chatbot functionality to compete directly with ChatGPT, Claude, and Perplexity. Second, Apple is reportedly planning a new AI extensions section in the App Store, which would let users download AI tools from any developer and connect them to Siri, turning it into an AI hub rather than a single product. Only about 16% of the global population currently uses AI in any meaningful form. Apple has two billion active device users. If Apple gets this right, simpler, more trusted, easier to use, it reaches the next wave of people who haven't tried any of this yet. Those users tend to care more about trust and simplicity than specs, and that's historically where Apple wins. The question is whether the software can finally catch up to the hardware.

Just a couple more items. If you have any feedback about this show, you can email Mike at yesterdayinai.news, or you can find me on LinkedIn, X, or Bluesky. And if you like this podcast, please be sure to rate and review it so others can find it. Thanks. That's all for this edition of Yesterday in AI. Stay curious, and I'll see you tomorrow.