AI Mornings with Andreas Vig
Your daily AI news briefing in under 10 minutes. New models, product launches, research breakthroughs, and industry shifts, explained clearly, no hype.
SpaceX's $60B Cursor Gambit & ChatGPT Images 2.0
Hey, welcome to AI Mornings with Andreas Vig. It's April 22nd, 2026, and we have a massive story to start with today. SpaceX just announced a deal with the AI coding startup Cursor that includes an option to acquire the company for $60 billion later this year. If that sounds staggering, it is. For context, Cursor was valued at just $2.5 billion in January 2025. By November, it had jumped to nearly $30 billion. Now SpaceX is talking about potentially paying double that. Under the deal, SpaceX can either buy Cursor for $60 billion or pay $10 billion for their joint partnership work if the acquisition doesn't happen. The two companies are building what they're calling a next-generation coding and knowledge-work AI, combining Cursor's product expertise with SpaceX's Colossus supercomputer.

This is clearly tied to SpaceX's anticipated IPO: investors are looking for every angle of value in Elon Musk's sprawling empire. But it also reveals a weakness. Neither Cursor nor xAI has proprietary models that can match what Anthropic and OpenAI are putting out. In fact, Cursor still sells access to Claude and GPT models even as those companies roll out their own competing coding tools. Two of Cursor's senior engineering leaders already left to join xAI last month. This deal could be an attempt to finally escape that dependency.

OpenAI also had a big product launch yesterday. They released ChatGPT Images 2, a major upgrade to their image generation capabilities. The standout feature here is text rendering. The model is surprisingly good at generating readable, accurate text within images, and it works across a wide range of languages and scripts, including Japanese, Arabic, Korean, Chinese, and Devanagari. That's been a persistent weakness in AI image generation, and OpenAI seems to have cracked it. The model handles everything from photorealistic portraits to manga-style comics, editorial magazine layouts to children's book illustrations.
It can generate complete infographics, posters, and presentation-ready materials with proper typography. It also integrates with ChatGPT's thinking mode, so you can have the model do research and reasoning, then turn that into a polished visual output. If you've ever tried to get an AI to render a chart with accurate labels or a poster with clean typography, you know this has been a frustrating gap. Images 2 looks like it finally closes it.

Moving to Google, there's an interesting internal story emerging. Sergey Brin is personally leading a DeepMind strike team focused on closing Gemini's coding gap with Claude. According to The Information, DeepMind researchers internally rate Claude's code writing above Gemini's, and that assessment has triggered Brin's direct involvement. He's framing this push as the shortest path to self-improving AI, the idea being that coding is the capability that lets AI systems improve themselves. Sebastian Borjo, who previously led DeepMind's pre-training work, is heading the group under CTO Koray Kavukcuoglu. Gemini engineers are now required to use Google's internal agent tools on complex tasks, with usage tracked on a company leaderboard called Jetski. This is less a product response and more an internal mobilization: Brin wants to automate Google itself.

On the security front, there's a concerning story about Anthropic's Mythos cybersecurity tool. Members of a private online forum managed to gain unauthorized access to Mythos through a third-party vendor, and they got in on the very day the tool was announced. Bloomberg reports the group made an educated guess about the model's online location based on Anthropic's naming format for other models. The unauthorized users are part of a Discord channel that hunts for information about unreleased AI models, and they claim they're just interested in testing new technology, not causing damage. Still, this is awkward for Anthropic.
Mythos was released to a select group, including Apple, under a program called Project Glasswing, specifically to prevent misuse. Anthropic has warned the tool could be weaponized by bad actors. Now it turns out the access controls weren't quite as tight as intended.

In a related theme about data and privacy, Meta has announced it will start capturing mouse movements and keystrokes from its own employees to train AI models. The company says this is about building agents that can help people complete everyday computer tasks: clicking buttons, navigating drop-down menus, and moving through applications. A Meta spokesperson said safeguards are in place to protect sensitive content. But this trend is worth watching. Last week, reports emerged of startups being scavenged for their Slack archives and Jira tickets to feed AI training. Internal corporate communications are increasingly becoming raw material for the AI supply chain. If you work in tech, your everyday digital activity might be training tomorrow's AI agents.

GitHub also made some significant changes to Copilot's individual plans. They're pausing new signups for the Pro, Pro+, and student tiers and tightening usage limits. The company says agentic workflows have fundamentally changed its compute demands: long-running, parallelized sessions now consume far more resources than the original pricing structure was designed to support. GitHub explicitly noted that some requests now incur costs exceeding the plan price. Opus models are no longer available in Pro plans, though Opus 4.7 remains in Pro+. If you're a heavy user of coding agents, it's worth checking your plan; the limits are tighter now.

A few more things worth knowing about today. A startup called Neocognition just emerged from stealth with a $40 million seed round to build AI agents that learn like humans.
Founded by an Ohio State professor, the company argues that current agents only succeed at tasks about 50% of the time, which makes them too unreliable for truly autonomous work. They're building agents that can self-learn to become experts in any domain. Also, Adobe launched CX Enterprise, an agentic platform for coordinating marketing and content across businesses. And OpenAI rolled out a Codex feature called Chronicle that captures screen content to build persistent memories for the coding assistant.

Finally, a broader industry note: the memory chip shortage that's been pushing up electronics prices is now expected to last through 2027. Production is growing at 7.5% annually when 12% is needed to meet demand. Smartphones are getting pricier, with average selling prices hitting a record $523. But there's a potential bright spot: Google has a technique called TurboQuant that cuts LLM memory consumption by six times with no accuracy loss. If those kinds of efficiency improvements scale, they could ease the pressure.

That's all for today. Thanks for listening, and I'll see you tomorrow.