AI Mornings with Andreas Vig

LiteLLM's Supply Chain Attack & OpenAI's Sora Shutdown


A major supply chain security incident hits LiteLLM users, OpenAI shuts down its Sora social app, Arm releases its first-ever in-house chip, and Claude Code gets autonomous decision-making powers.
SPEAKER_00

Hey, welcome to AI Mornings with Andreas Vig. It's the 25th of March 2026. We're starting with a major security incident that every developer should know about. The popular LiteLLM library, which is used for orchestrating calls to various LLM APIs, was compromised on PyPI. Versions 1.82.7 and 1.82.8 contain malicious code that steals credentials from any system where they're installed. And here's the scary part: the malicious payload executes automatically when Python starts up. You don't even need to import the library. It goes after everything: SSH keys; AWS, GCP, and Azure credentials; Kubernetes secrets; crypto wallet files; shell history; and all your environment variables, including API keys. The data gets encrypted and sent to an attacker-controlled server. If you installed either of these versions, you need to rotate every credential that was on that machine. The library has over 40,000 GitHub stars, so this is a significant supply chain attack. The issue was discovered and reported yesterday.

In other news, OpenAI is shutting down Sora, its TikTok-like AI video social app, just six months after launch. The app peaked at 3.3 million downloads in November but fell to 1.1 million by February, and it only generated about US$2.1 million from in-app purchases. A planned US$1 billion investment and licensing deal with Disney has also collapsed; apparently, no money had actually changed hands before the shutdown. The underlying Sora 2 model isn't going away, though. It'll still be available through ChatGPT's paywall. The app had faced ongoing criticism for deepfake content, including videos of deceased public figures and copyrighted characters that bypassed guardrails.

Arm Holdings has done something unprecedented in its 35-year history: it released its own chip. The Arm AGI CPU is designed for AI data center inference and was built in partnership with Meta, which is also the first customer. Arm has always licensed its designs to partners like Nvidia and Apple, but now it's competing directly with them. The company started developing this chip back in 2023, and it's already production ready. Launch partners include OpenAI, Cerebras, and Cloudflare. This is a significant shift in the semiconductor landscape.

Anthropic has introduced a new feature for Claude Code called Auto Mode, now in research preview. It lets the AI autonomously decide which actions are safe to execute without waiting for user approval. The system uses AI safeguards to review each action, checking for risky behavior and prompt injection attacks; safe actions proceed automatically, while risky ones get blocked. It works with Claude Sonnet 4.6 and Opus 4.6 and is rolling out to enterprise and API users. Anthropic recommends using it in isolated environments, since they haven't fully disclosed what criteria determine whether an action is safe or risky.

Alright, a few more things worth knowing about today. Kleiner Perkins announced it raised US$3.5 billion across two funds: US$1 billion for early-stage and US$2.5 billion for growth investments. That's a big jump from the US$2 billion they raised less than two years ago. The firm has made early bets on Together AI, Harvey, OpenEvidence, Anthropic, and SpaceX. They're joining a wave of mega raises from other top VC firms.

Spotify is testing a new feature called Artist Profile Protection that lets artists review and approve releases before they appear on their profiles. It's aimed at stopping AI-generated music from being misattributed to real artists. The announcement comes after Sony Music requested the removal of over 135,000 AI-generated songs that were impersonating its artists.

Databricks launched a new AI security product called LakeWatch, which does threat detection and investigation using Claude-powered AI agents. To build it, they acquired two startups: Antimatter, which focused on secure agent deployment, and SIFD.ai, which built collaborative notebooks for humans and agents.

And OpenAI released a set of open-source prompts designed to help developers make AI apps safer for teens. The prompts cover things like violence, sexual content, dangerous challenges, and age-restricted goods. They worked with Common Sense Media and everyone.ai on the project.

That's it for today. See you tomorrow.
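Following up on the LiteLLM story: a minimal sketch of how you might check whether an environment has one of the compromised versions installed. The version numbers (1.82.7 and 1.82.8) are as reported in the episode; this is an illustrative check, not official remediation guidance, so confirm against the upstream security advisory before acting.

```python
# Sketch: flag the compromised LiteLLM releases named in the episode.
# The version list is as reported on the show; confirm against the
# official PyPI/GitHub advisory before taking action.
from importlib import metadata

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}


def is_compromised(version: str) -> bool:
    """Return True if this litellm version is on the compromised list."""
    return version in COMPROMISED_VERSIONS


def check_installed() -> bool:
    """Check the litellm version installed in the current environment."""
    try:
        return is_compromised(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # litellm is not installed here


if __name__ == "__main__":
    if check_installed():
        print("Compromised litellm release found - rotate ALL credentials on this machine.")
    else:
        print("No compromised litellm release detected in this environment.")
```

Remember that merely detecting a clean version isn't the whole story: if a compromised version was ever installed, the payload may already have exfiltrated credentials, so rotation is still required.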