Yesterday in AI

Yesterday in AI - Your boss might be training an AI with your mouse clicks right now.

Mike Robinson


Yesterday in AI | Wednesday, April 22, 2026


Something shifted on Wall Street Tuesday, and the language executives are using to describe it is worth paying attention to. A reader poll of more than 3,000 AI newsletter subscribers lit up the AI world with a 2-to-1 result nobody predicted. An open-weights model from China just threw the pricing math for frontier AI into question. And Apple made a CEO announcement that tells you exactly what bet it's making on its own future. All that, plus what's happening inside OpenAI's product roadmap that should make GitHub Copilot users ask some questions.


Remember to subscribe, rate, and share this podcast if you like it!

Mike Robinson:

Yesterday in AI. Hi folks, this is Yesterday in AI, your daily digest of everything happening in the world of artificial intelligence in 10 minutes or less. I'm Mike Robinson. It's Wednesday, April 22nd, and while the AI labs were busy counting users and launching new models, Meta spent the day telling its own employees that their every click, keystroke, and screenshot is now part of the training set. Let's get into it.

Meta confirmed Tuesday that it's installing monitoring software on US-based workers' computers. Mouse movements, clicks, keystrokes, and periodic screenshots: all of it captured, all of it going into AI training data. The internal framing from the company is that it needs authentic examples of real computer use so its agents can learn how software actually works. The drop-down menus, the keyboard shortcuts, the exact sequence of actions a human takes to complete a task in a messy enterprise app that no synthetic dataset would ever replicate correctly. Meta says the data won't be used for performance reviews and that there are safeguards around sensitive content, but a policy memo is doing a lot of work there. The real driver is a specific bottleneck: computer-use agents fail on real software because they've been trained on simulated software. Watching actual humans work at scale is the fastest path to fixing that. What comes next is the more interesting story. If major employers normalize this kind of collection, the collision with privacy law, labor agreements, and compliance frameworks is going to be loud. EU regulators didn't write GDPR with "we need your mouse movements to train robots" in mind. And the line between training data and performance surveillance is going to be extremely contested the first time an employee gets managed out and their keystroke logs turn up in a model's training set.

Next, the Wall Street story that I think signals something bigger than finance. The New York Times reported Tuesday that major U.S. banks have collectively cut roughly 15,000 jobs, and something has changed in how executives are describing it. The banks in question: JPMorgan, Citigroup, Bank of America, Goldman Sachs, Morgan Stanley, and Wells Fargo. Bank of America CEO Brian Moynihan specifically attributed reduced headcount to AI when he talked about rising earnings. His phrase: "eliminating work and applying technology." That language matters. For the past two years, the standard industry line was augmentation: AI helps people do more with the same headcount. The new line is replacement: AI does the work, so fewer people are needed. Finance is a useful leading indicator because it has strong process discipline, measurable outputs, and expensive labor. When a bank can cut a position and point to a specific automation that covers what that person did, the economics write themselves. Profits at these same banks are going up as headcount falls. That's the incentive structure every other professional services sector is watching right now. Law, consulting, accounting: these industries have identical cost structures and similar process-heavy workflows. The transition from AI as tool to AI as worker is happening in finance first. Give it another 18 to 36 months, and you may see the same conversations in other fields.

Three things happened Tuesday that are all Anthropic stories, and the fact that they landed on the same day is worth pausing on. The Neuron published the results of a reader poll: 3,143 votes from an AI newsletter audience, self-selected and sharp. Claude won at 46%; ChatGPT got 25%, almost exactly 2-to-1. The write-in explanations are the most interesting part. People cited Claude's code quality constantly. They mentioned writing that "gets them" for long-form work. And then there's a category that came up more often than you'd expect: ethics.
Readers specifically named Anthropic's stance on the Pentagon deal, personal discomfort with OpenAI's leadership, and wanting to support a company they believe is trying to do things right. For a non-trivial slice of Claude's lead, values moved the vote as much as features did.

Hours after those numbers hit, Amazon announced it's expanding its Anthropic commitment by up to $25 billion more, bringing the total to $33 billion. They're also securing up to 5 gigawatts of compute capacity. Anthropic's revenue run rate has more than doubled since the original deal. Amazon isn't betting on potential here; it's chasing confirmed demand.

And then The Information reported that Sergey Brin is personally leading a new internal strike team at Google DeepMind, built specifically to close Gemini's gap with Claude on agentic coding. Brin has been more hands-on over the past year, but taking personal ownership of a competitive response to a specific rival is a different level of involvement. When a co-founder steps back into an operational role to personally run the catch-up effort, it tells you where the threat is being felt. Three bets on the same thesis from three different angles, all on the same Tuesday.

Now here's the story that should make Anthropic at least a little uncomfortable, because it came out of China on the same day. Moonshot AI shipped Kimi K2.6, a 1-trillion-parameter open-weights model with 32 billion active parameters and a 262,000-token context window. On coding and agentic benchmarks, it claims to match or beat GPT 5.4 and Claude Opus 4.6. The weights are on Hugging Face with day-zero support on Cloudflare Workers AI, so you can start using it right now without any special access. The pricing is where this gets sharp: Kimi K2.6 runs at roughly 76% less than Claude, with cache savings on top of that. Moonshot also shipped Kimi Code CLI, a command-line coding tool that can coordinate up to 300 subagents running 4,000 steps on a single job overnight. The pitch? Send it a legacy codebase at 9 p.m., wake up to a refactored version at 7 a.m., and spend a fraction of what Claude Code would have cost.

The pattern here is the real story. The capability gap between closed premium models and open-weights alternatives keeps shrinking. Each time an open model can credibly make that comparison at a fraction of the cost, the business case for the premium product needs a sharper answer. Anthropic's answer is probably safety positioning, enterprise trust, and continued model improvement. Whether that's enough when the price differential is 76% is the question we'll keep watching.

OpenAI had an extremely busy product day, and the user numbers keep getting harder to process. ChatGPT Images 2.0 rolled out to all users Tuesday; Plus and Pro subscribers get a thinking mode that actually reasons through the layout before generating. It can produce up to eight coherent images from a single prompt with character continuity across the whole set. If you've tried any of the image tools over the past year, you know character consistency across multiple generations has been the persistent failure point. This is a direct response to that, and to Google and Adobe on professional image quality. Codex, OpenAI's coding agent, crossed 4 million active users; it hit 3 million less than two weeks ago. Sam Altman reset rate limits again Tuesday as a thank-you to the community, which is also a way of saying the growth curve hasn't flattened. And OpenAI is reportedly building a persistent agent platform inside ChatGPT: a studio for assembling agent skills, scheduling triggers, and running workflows that operate without you at the keyboard. That's a significant architectural shift, from ChatGPT as a conversational tool to ChatGPT as something that works on your behalf around the clock.

GitHub Copilot got harder news. Leaked internal documents show Copilot's weekly operating costs have nearly doubled since January.
Microsoft has paused new signups for Pro, Pro Plus, and student plans. They're moving toward token-based billing and tightening rate limits. It's a real-world lesson in what happens when you price AI generously for adoption and the usage curve exceeds your model-cost assumptions. Every AI product team right now is working through some version of that problem.

And last, a story that's been building for months but landed officially Tuesday: John Ternus becomes Apple's CEO on September 1st, 2026, and Tim Cook steps back to executive chairman. Reuters confirmed it Monday, and the analysis coverage landed throughout Tuesday. Ternus is a hardware person. He led the Mac revival, oversaw AirPods and iPad development, and unveiled the iPhone Air. He's not an AI researcher and not a software-first executive. Apple made a deliberate choice to put its chip-and-device expert in the chair at the exact moment every competitor is sprinting toward AI services. The bet behind that choice, presumably, is that Apple's real advantage is the physical layer: on-device processing, Apple Silicon running models locally, and an ecosystem of devices that people already trust with their most sensitive data. Apple recently struck a deal to put Google Gemini behind Siri improvements. They've lost the top market-cap spot to NVIDIA, and Ternus's first job, whether it's stated plainly or not, is to answer the question Apple has been quietly sidestepping: what is Apple's actual AI strategy when you set aside the hardware story? He's got until September 1st to figure out his opening answer.

That's all for this jam-packed edition of Yesterday in AI. Stay curious, and I'll see you tomorrow.