AI Mornings with Andreas Vig
Your daily AI news briefing in under 10 minutes. New models, product launches, research breakthroughs, and industry shifts, explained clearly, no hype.
Meta's Muse Spark Arrives & OpenAI's Child Safety Blueprint
Hey, welcome to AI Mornings with Andreas Vig. It's April 9th, 2026. Meta just dropped its first major AI model since hiring Alexandr Wang from Scale AI, and it marks a significant strategic shift for the company. It's called Muse Spark, and it's the debut release from Meta Superintelligence Labs, the division Zuckerberg built last year with the goal of delivering what they're calling personal superintelligence for everyone.

What makes this interesting is that unlike Meta's Llama family, Muse Spark is closed source. It's designed to live inside Meta's ecosystem: it's available now at Meta.ai and in the Meta AI app, with rollout coming to WhatsApp, Instagram, Facebook, Messenger, and AI glasses in the coming weeks. There's a private API preview for select users, but this isn't an open model you can download and run yourself.

The model is natively multimodal, with support for tool use, visual chain of thought, and what Meta calls contemplating mode. That's their approach to multi-agent orchestration, where multiple agents reason in parallel, similar to what you see with Gemini DeepThink or GPT Pro. In benchmarks, Muse Spark hit 58% on Humanity's Last Exam and 38% on Frontier Science Research.

Meta is also making some bold efficiency claims. They say Muse Spark achieves its capabilities using over an order of magnitude less compute than Llama 4 Maverick. They rebuilt their entire pre-training stack over the last nine months, with improvements to model architecture, optimization, and data curation. One notable area of investment is health reasoning: they worked with over a thousand physicians to curate training data, which enables things like interactive nutritional analysis and personalized health recommendations.

The closed-source shift is worth watching. Meta has been the champion of open AI models, but Muse Spark suggests they're now building proprietary moats around their most capable work.
Speaking of safety frameworks, OpenAI published a child safety blueprint yesterday that's worth noting. This is separate from the economic policy blueprint they released earlier this week; this one focuses specifically on combating AI-enabled child sexual exploitation. The blueprint has three main components: modernizing laws to address AI-generated and altered abuse material, improving how providers report to and coordinate with law enforcement, and building safety-by-design measures directly into AI systems. OpenAI developed this with input from the National Center for Missing and Exploited Children, the Attorney General Alliance's AI Task Force, and Thorn. What's notable is that observers are calling this the first detailed architectural framework from a major AI lab specifically designed to prevent AI-enabled exploitation before it happens, rather than just reacting to it.

Alright, a few more things worth knowing about today. A startup called Poke just launched an AI agent that works through iMessage, SMS, and Telegram, no app installation required. The company is positioning it as "OpenClaw for the rest of us," making agent-based automation accessible to people who aren't comfortable installing software through terminals or managing dependencies. You can set up what they call recipes with a single click for things like daily planning, calendar management, health tracking, and smart home control. The 10-person startup has raised 25 million US dollars total and is valued at 300 million. The angel investor list is pretty notable: it includes Stripe founders John and Patrick Collison, Logan Kilpatrick from DeepMind, Joanne Jang from OpenAI, and Cognition founders Scott Wu and Walden Yan.

In research news, a paper called MegaTrain just dropped that shows how to train large language models with over 100 billion parameters on a single GPU.
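To make that concrete, here's a hypothetical PyTorch sketch of this kind of layer-streaming setup: parameters and optimizer state stay in CPU memory, and the GPU acts as a transient compute engine that each layer visits only for its own forward and backward work. Everything here (the toy model, names, and structure) is illustrative, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-in for a model too large for GPU memory: all layers and their
# optimizer state live on the CPU; only one layer occupies the GPU at a time.
torch.manual_seed(0)
layers = [nn.Linear(64, 64) for _ in range(4)]
opts = [torch.optim.SGD(layer.parameters(), lr=1e-2) for layer in layers]

def train_step(x_cpu, y_cpu):
    # Forward: stream each layer's parameters to the GPU, compute, stream them
    # back out, saving each layer's input on the CPU for recomputation later
    # (checkpointing-style, so no full autograd graph is kept on the GPU).
    saved, h = [], x_cpu.to(device)
    with torch.no_grad():
        for layer in layers:
            saved.append(h.to("cpu"))
            layer.to(device)            # parameters in
            h = torch.relu(layer(h))
            layer.to("cpu")             # parameters out

    # Backward: walk the layers in reverse; for each one, recompute its local
    # forward with autograd enabled, backpropagate, then stream the gradients
    # back to the CPU (Module.to moves .grad along with the parameters) and
    # take the optimizer step there.
    grad_h, loss_val = None, None
    for i in reversed(range(len(layers))):
        layer = layers[i].to(device)
        inp = saved[i].to(device).requires_grad_(True)
        out = torch.relu(layer(inp))
        if grad_h is None:              # last layer: attach the loss
            loss = F.mse_loss(out, y_cpu.to(device))
            loss_val = loss.item()
            loss.backward()
        else:
            out.backward(grad_h)
        grad_h = inp.grad               # incoming gradient for the next layer down
        layer.to("cpu")                 # gradients out, with the parameters
        opts[i].step()
        opts[i].zero_grad()
    return loss_val
```

This is only the core streaming-plus-recomputation idea; a real system like the one described would overlap the host-to-device transfers with compute and manage pinned host memory, which this sketch deliberately omits.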
The trick is storing parameters and optimizer states in CPU memory and treating the GPU as a transient compute engine, streaming parameters in and gradients out for each layer. On an H200 GPU with 1.5 terabytes of host memory, they can train models up to 120 billion parameters, and they're getting 1.84 times the throughput of DeepSpeed ZeRO-3 with CPU offloading. This could be meaningful for researchers and smaller teams who don't have massive GPU clusters.

Databricks co-founder Matei Zaharia just won the 2026 ACM Prize in Computing for creating Apache Spark. In his interview with TechCrunch, he made a striking comment: AGI is here already, it's just not in a form that we appreciate. He argued we need to stop applying human standards to these models. He also called OpenClaw a security nightmare, because it's designed to act like a trusted human assistant with access to passwords and bank accounts.

And in quick hits: Tubi became the first streaming service to launch a native app inside ChatGPT, letting users browse and watch content directly through the AI assistant. Atlassian added visual AI tools and third-party agent support to Confluence. And Astropad launched Workbench, a remote desktop solution specifically designed for monitoring AI agents running on Mac minis, which have become popular platforms for autonomous agent experimentation.

That's it for today. See you tomorrow.