AI Mornings with Andreas Vig

Anthropic's "Mythos" Leak & Pentagon Injunction Win

Anthropic accidentally leaked details about "Claude Mythos," its most powerful model ever, while also winning a major legal victory against the Trump administration. Plus Google's new chatbot switching tools, Wikipedia's AI ban, and more.

Hey, welcome to AI Mornings with Andreas Vig. It's Friday, March 27, 2026. Anthropic had quite a day yesterday. The company confirmed it's testing a new model called Claude Mythos after an accidental data leak revealed its existence. The leak happened because a misconfigured content management system exposed unpublished blog drafts and other assets in a publicly accessible, unencrypted database. Anthropic is calling Mythos its most powerful AI model ever developed, a step change in capabilities, and the company notes it poses what they're calling unprecedented cybersecurity risks. The model is already in early-access testing with select customers. It's an embarrassing security lapse for a company that positions itself as the safety-focused AI lab. But the bigger story here is that Anthropic clearly has something significant in the pipeline that they weren't ready to announce yet.

Also on the Anthropic front, a federal judge just handed the company a major legal victory in its battle with the Trump administration. Judge Rita F. Lynn of the Northern District of California granted Anthropic a preliminary injunction against the government's order labeling the company a supply chain risk. That designation, typically reserved for foreign actors, would have required federal agencies to cut ties with Anthropic. The judge said during proceedings that the government's actions looked like an attempt to cripple Anthropic and violated free speech protections. The whole dispute started because Anthropic wanted to limit how the government could use its AI models, specifically banning use in autonomous weapons systems and mass surveillance. The Defense Department apparently didn't like those restrictions. Anthropic's CEO Dario Amodei called the agency's actions retaliatory and punitive, and now a federal judge agrees.

Google made a competitive move yesterday that's worth paying attention to.
They launched what they're calling switching tools for Gemini, essentially making it trivial to transfer your chat histories and personal memories from ChatGPT or Claude directly into Gemini. You can export your data as a zip file, upload it, and pick up conversations right where you left off. There's also a feature where Gemini suggests prompts you can enter into your current chatbot to extract relevant personal information, which you then paste back into Gemini. The strategic logic here is clear: ChatGPT has about 900 million weekly active users, while Gemini has 750 million monthly active users, despite Google's massive distribution advantages across Android and Chrome. Google's betting that the friction of retraining a new AI assistant on your preferences is one of the things keeping people from switching. Now they've removed that friction.

Alright, a few more things worth knowing about today. Wikipedia implemented a new policy that bans the use of AI to generate or rewrite article content. The vote among site editors passed 40 to 2. The policy still allows editors to use AI for basic copy-editing suggestions, as long as the changes are reviewed by humans and the AI doesn't introduce new content. It's a pretty clear line in the sand from one of the internet's most important knowledge repositories.

OpenAI has indefinitely paused plans to build an erotic mode for ChatGPT. CEO Sam Altman first floated the idea back in October, but it faced intense criticism from tech watchdog groups and even internal staff. One advisor at a January meeting reportedly warned the company could be building what they called a "sexy suicide coach." The feature's release had already been delayed multiple times. This is now the third project cancellation at OpenAI in the last week, following the Sora shutdown and the deprioritization of instant checkout. The company is clearly in pivot mode, focusing on business users and coders.
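Circling back to Google's switching tools for a moment: the zip-based migration flow can be sketched roughly like this. This is a hypothetical illustration, not Google's actual importer; it assumes a ChatGPT-style export containing a top-level conversations.json, which may not match every assistant's layout, and the sample data is made up.

```python
import json
import zipfile
from pathlib import Path
from tempfile import TemporaryDirectory

def load_conversations(zip_path):
    """Pull the conversation list out of a chatbot data-export zip.

    Assumes the archive holds a top-level conversations.json, as
    ChatGPT-style exports do; other assistants may differ.
    """
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open("conversations.json") as f:
            return json.load(f)

# Demo with a synthetic export, since we can't ship a real one here.
with TemporaryDirectory() as tmp:
    export = Path(tmp) / "export.zip"
    sample = [{"title": "Trip planning",
               "messages": [{"role": "user",
                             "content": "Best time to visit Kyoto?"}]}]
    with zipfile.ZipFile(export, "w") as zf:
        zf.writestr("conversations.json", json.dumps(sample))
    convos = load_conversations(export)
    print([c["title"] for c in convos])  # → ['Trip planning']
```

The point of the sketch is just that the "moat" Google is attacking is small: once histories sit in a portable archive, replaying them into a new assistant is a parsing problem, not a lock-in problem.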
MetaResearch released an open-source framework called HyperAgents, which enables self-referential, self-improving AI agents. The agents can modify their own code and improve through iterative cycles. The project has already picked up over 1,400 GitHub stars. It comes with a safety warning that executing model-generated code carries inherent risks, which is probably an understatement when you're talking about agents that rewrite themselves.

And finally, an open-source project called Atlas is getting attention for achieving 74.6% on LiveCodeBench using a frozen Qwen 3 model on an RTX 5060 Ti with 16GB of VRAM. That beats Claude 4.5 Sonnet's 71.4% score. The cost per task is about $0.004 in electricity, versus roughly $0.066 per task for Claude via the API. It's fully self-hosted, with no data leaving your machine and no API keys required. The pipeline uses structured generation, energy-based verification, and self-verified repair, without any fine-tuning.

That's all for today. See you tomorrow.
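Postscript for the curious: the generate-then-verify-then-repair pattern that Atlas reportedly uses can be sketched in miniature like this. Everything here is a stand-in of my own invention, not Atlas code: the generator is a stub where a real system would sample from a frozen LLM, and the verifier is a stub where the real pipeline would run tests or score candidates with an energy-based verifier.

```python
def generate(task, feedback=None):
    # Stand-in for structured generation from a frozen model.
    # For illustration, "repair" just applies the feedback directly.
    return feedback if feedback else task["first_attempt"]

def verify(task, candidate):
    # Stand-in for energy-based verification: check the candidate
    # and return (passed, feedback-for-repair).
    if candidate == task["expected"]:
        return True, None
    return False, task["expected"]

def solve(task, max_rounds=3):
    # Self-verified repair loop: generate, check, feed failures back.
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(task, feedback)
        ok, feedback = verify(task, candidate)
        if ok:
            return candidate
    return None

task = {"first_attempt": "def add(a, b): return a - b",
        "expected": "def add(a, b): return a + b"}
print(solve(task))  # repaired on the second round
```

The notable claim is that this loop needs no fine-tuning: all the improvement comes at inference time from the verifier catching failures and routing feedback back into generation, which is what makes a frozen model on consumer hardware competitive on the benchmark.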