AI Mornings with Andreas Vig

OpenAI's $100 Pro Plan & Florida's ChatGPT Investigation


OpenAI launches a new $100/month ChatGPT Pro plan targeting Codex users, Florida's AG investigates OpenAI over the FSU shooting, Mercor faces fallout from a massive data breach, researchers reverse-engineer Google's SynthID watermark, and a new study shows coding agents improve when they research before coding.
SPEAKER_00

Hey, welcome to AI Mornings with Andreas Vig. It's April 10th, 2026. OpenAI just shook up its pricing structure with a new $100 per month ChatGPT Pro plan. This is the tier that's been missing forever, sitting between the $20 Plus plan and the $200 Pro option. The main draw here is significantly more Codex usage. OpenAI says the new plan offers five times more coding capacity than Plus, and they're explicitly positioning it as a competitor to Anthropic's $100 Claude plan. The company also revealed that Codex now has over 3 million weekly users, up five times in just three months, with 70% month-over-month growth. That's a pretty staggering adoption curve for a coding tool. Through the end of May, they're offering even higher limits as a promotional period, so heavy users might want to jump on this now.

In regulatory news, Florida's Attorney General James Uthmeier announced an investigation into OpenAI over the alleged use of ChatGPT in planning last year's Florida State University shooting. The April 2025 attack killed two people and injured five. Attorneys for victims claim the shooter used ChatGPT to plan the assault, asking questions about crowd patterns at the student union. Uthmeier said subpoenas are forthcoming and raised broader concerns about what psychologists are calling "AI psychosis": delusions that get reinforced through chatbot interactions. This follows other incidents where ChatGPT has been linked to suicides and violent acts. OpenAI says they'll cooperate with the investigation.

The fallout from that Mercor data breach is getting worse. The $10 billion AI data training startup admitted on March 31st that it was hit by credential-harvesting malware hidden in the open-source LiteLLM tool for 40 minutes. A hacker group claims to have stolen 4 terabytes of data: candidate profiles, personally identifiable information, employer data, source code, and API keys. The biggest consequence so far? Meta has paused its contracts with Mercor indefinitely.
OpenAI says they're investigating exposure but haven't paused contracts yet. Mercor was reportedly on track for over $1 billion in annualized revenue before this happened, making it a potentially catastrophic turn for a company that just raised $350 million six months ago.

On the research side, a fascinating project on GitHub has reverse-engineered Google's SynthID watermarking system. SynthID is the invisible watermark Google embeds in every image generated by Gemini. Using only spectral analysis, with no access to Google's proprietary encoder, researchers built a detector with 90% accuracy and a bypass that removes 75% of the watermark's carrier energy while maintaining image quality above 43 dB PSNR. The key finding: the watermark is resolution-dependent. Different image sizes use completely different carrier frequencies. The project has over 900 stars already.

Finally, a new blog post shows something coding agents need more of: research before they write code. When researchers added a literature search phase to an automated research loop, having agents read papers and study competing projects first, performance improved significantly. Pointing it at llama.cpp, the system produced five optimizations that made flash attention 15% faster on x86 chips and 5% faster on ARM. The interesting bit: studying forks and other backends was more productive than searching academic papers on arXiv. Total cost: about $29 over three hours.

That's it for today. See you tomorrow.
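Show note: the "43 dB PSNR" figure in the SynthID segment refers to peak signal-to-noise ratio, the standard metric for how close a modified image is to the original. Here's a minimal sketch of the standard PSNR computation (the function name and test images are illustrative, not from the project):

```python
import numpy as np

def psnr(original: np.ndarray, modified: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images, in decibels.

    Higher is better; above roughly 40 dB, differences are
    typically imperceptible to the eye.
    """
    # Mean squared error between the pixel values
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    # PSNR = 10 * log10(MAX^2 / MSE)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A watermark-removal pass that reports 43 dB PSNR is claiming its per-pixel changes are small enough to be visually negligible.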