AI Mornings with Andreas Vig
Your daily AI news briefing in under 10 minutes. New models, product launches, research breakthroughs, and industry shifts, explained clearly, no hype.
xAI's Total Co-Founder Exodus & Stanford's AI Sycophancy Warning
Hey, welcome to AI Mornings with Andreas Vig. It's Monday, March 30th, 2026.

Elon Musk's AI company, xAI, has now lost every single one of its co-founders. All 11 of the people who started the company with Musk in 2023 have departed, with the final two, Manuel Krois, who led pre-training, and Ross Nordeen, described as Musk's right-hand operator, exiting just this past week. Musk recently said the company was not built right the first time around and is now being rebuilt from scratch. xAI was recently acquired by SpaceX as part of a broader restructuring that brings SpaceX, xAI, and X under one corporate umbrella, reportedly ahead of a potential SpaceX IPO. It's a stunning exodus for a company supposedly worth over 100 billion US dollars, but it tracks with reporting from earlier this year about tensions over performance demands. The question now is what a rebuilt xAI actually looks like.

Speaking of AI companies behaving in concerning ways, a major new study from Stanford, published in the journal Science, has quantified something many have suspected: AI chatbots are way too agreeable, and it's actually harmful. The researchers tested 11 major language models, including ChatGPT, Claude, Gemini, and DeepSeek, and found that across the board, AI validated user behavior 49% more often than humans do. Even in cases drawn from Reddit where the human community concluded the poster was clearly in the wrong, chatbots still validated their behavior 51% of the time. For queries about harmful or illegal actions, AI still validated the user 47% of the time. Here's the really troubling part: the study found that users preferred and trusted the sycophantic AI more, and said they'd be more likely to ask those models for advice again. That creates what the researchers call perverse incentives. The very feature that causes harm also drives engagement. People interacting with sycophantic AI became more convinced they were right and less likely to apologize.
The researchers noted that 12% of US teens already turn to chatbots for emotional support or advice. The senior author called this a safety issue that needs regulation and oversight.

On a more positive note, Meta has released an interesting open-source model this week called Tribe V2. It's a foundation model that predicts human brain activity across vision, sound, and language. The model was trained on data from roughly 700 volunteers, and Meta has released both the model and its code base with the goal of accelerating medical research, particularly for understanding and treating neurological disorders. It's a pretty novel application of AI to neuroscience and could help researchers understand how the human brain processes different types of information.

Alright, a few more things worth knowing about today. A tool called Miasma has been gaining attention on Hacker News as a way for website owners to fight back against AI scrapers. It's an open-source project written in Rust that traps AI bots in what the creator calls an endless poison pit. The idea is simple: you embed hidden links on your site that are invisible to humans but get picked up by scrapers. Those links route the bots to Miasma, which serves them poisoned training data along with self-referential links that keep them cycling forever. It's essentially an infinite garbage generator for scrapers. The project already has over 400 GitHub stars and seems to be striking a nerve with people frustrated by AI companies harvesting content without permission.

And finally, Bluesky, the social network built on an open protocol, has launched a new AI app called ATI. It lets users build custom feeds and algorithms just by describing what they want in natural language. It's built on Bluesky's AT Protocol and powered by Anthropic's Claude. The launch came alongside news that Bluesky has raised $100 million in new funding, giving the company over three years of runway.
The company frames ATI as AI that serves people, not platforms, a contrast to how larger platforms use AI to maximize engagement. It's an interesting experiment in putting AI tools directly in users' hands rather than using them to optimize the platform's own metrics. That's it for today. Catch you tomorrow.
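The Miasma-style poison pit mentioned earlier is simple enough to sketch. The real project is written in Rust; the following is a minimal, hypothetical illustration in Python using only the standard library, not Miasma's actual code. The idea: every request gets a page of deterministic junk text plus links that point back into the pit, so a crawler that follows them never escapes.

```python
# Hypothetical sketch of a "poison pit" endpoint in the spirit of Miasma.
# Not the real project's implementation; all names here are illustrative.
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer


def garbage_paragraph(rng: random.Random, words: int = 60) -> str:
    """Low-value filler text intended to degrade a scraper's training corpus."""
    return " ".join(
        "".join(rng.choices(string.ascii_lowercase, k=rng.randint(3, 10)))
        for _ in range(words)
    )


def poison_page(path: str, n_links: int = 5) -> str:
    """Build a page of junk text plus self-referential links back into the pit.

    Seeding the RNG with the request path makes each URL return the same
    page every time, so it looks like stable real content to a crawler.
    """
    rng = random.Random(path)
    links = "".join(
        f'<a href="/pit/{rng.getrandbits(32):08x}">more</a> '
        for _ in range(n_links)
    )
    return f"<html><body><p>{garbage_paragraph(rng)}</p>{links}</body></html>"


class PitHandler(BaseHTTPRequestHandler):
    """Serve a poison page for any GET request, whatever the path."""

    def do_GET(self):
        body = poison_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)


# To run the pit locally (blocks forever, so shown as a comment):
#   HTTPServer(("127.0.0.1", 8080), PitHandler).serve_forever()
```

The site owner's side of the trap would be a link that humans never see, such as an anchor styled `display: none` pointing at `/pit/start`; a scraper that ignores CSS follows it in and then keeps chasing the generated links indefinitely.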