DX Today | No-Hype Podcast & News About AI & DX

DX Today AI Daily Brief - Wednesday, March 25, 2026

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 11:47

Send us Fan Mail

Today's top AI stories: (1) OpenAI shuts down Sora video platform and ends billion-dollar Disney partnership amid deepfake concerns. (2) OpenAI pivots to enterprise under Fidji Simo, dropping experimental projects ahead of IPO. (3) Anthropic captures 73% of new enterprise AI spending per Ramp data. (4) Anthropic challenges Pentagon ban in San Francisco court hearing. (5) Atlassian lays off 1,600 employees for AI pivot. (6) Meta unveils four new custom MTIA AI chips. (7) Cursor Composer 2 built on Chinese Kimi K2.5 model sparks transparency debate. (8) Walmart rolls out AI-driven digital price tags to 5,000+ stores. (9) EU launches TraceMap AI food fraud detection platform. (10) SaaSpocalypse wipes trillion from B2B software stocks as AI agents replace SaaS. (11) Eleven tech companies sign anti-scam accord at UN Global Fraud Summit. (12) China AI labs face growing open-source dilemma under tightening regulations.
SPEAKER_02

It's Wednesday, March 25, 2026. You're listening to the DX Today AI Daily Brief. Today, OpenAI pulls the plug on Sora and walks away from a billion-dollar Disney deal. Anthropic squares off against the Pentagon in a San Francisco courtroom, and a so-called SaaS Pocalypse wipes$2 trillion from software stocks. Let's get into it.

SPEAKER_05

OpenAI is shutting down Sora, its AI video generation platform, effective immediately. The company is also ending its billion-dollar content partnership with Disney, which had been one of Sora's most high-profile enterprise deals. The move comes amid mounting concerns over deep fake content generated through the tool, which drew scrutiny from lawmakers and media organizations alike. OpenAI says the decision reflects a strategic shift away from consumer-facing generative media and toward its core enterprise and productivity offerings. Sora had launched to enormous public interest, but struggled to establish sustainable revenue even as competitors in the AI video space continued to gain ground.

SPEAKER_00

OpenAI is doubling down on enterprise under the leadership of Fiji SEMO, the company's relatively new head of product. In a sweeping internal memo, SEMO outlined plans to drop what she called side quests, including experimental consumer products, and refocus the company squarely on coding tools and enterprise solutions. The pivot comes as OpenAI prepares for a long-anticipated initial public offering, potentially in late 2026. SEMO emphasized that ChatGPT must become a serious productivity tool rather than a novelty. The restructuring signals OpenAI's clearest acknowledgement yet that profitability, not just technological ambition, will define its next chapter.

SPEAKER_02

Anthropic is making waves on two fronts. Anthropic has captured a striking 73% of new enterprise AI spending, according to fresh data from corporate expense platform RAM. The report, which tracks real-time business expenditures across thousands of companies, shows Anthropic rapidly overtaking open AI in new enterprise contracts. Analysts point to Anthropic's clawed model family as a key driver, noting its strong performance in coding, research, and compliance-heavy industries. The numbers underscore a broader shift in the enterprise AI market, where buyers are increasingly making decisions based on reliability and safety track records, rather than brand recognition alone. And Anthropic heads to court.

SPEAKER_04

In a San Francisco courtroom, Anthropic is challenging the Pentagon over what it calls an unlawful ban on the company's participation in defense contracts. The case centers on whether the Department of Defense can exclude specific AI firms from bidding on military projects based on their stated safety policies. Anthropic argues the ban violates federal procurement law and amounts to penalizing the company for its responsible AI commitments. The Pentagon contends it has broad discretion over national security contracting. Legal experts say the outcome could set a significant precedent for how AI safety principles interact with government defense policy.

SPEAKER_03

Atlassian is cutting roughly 1600 jobs, about 10% of its global workforce, as the collaboration software maker pivots aggressively toward artificial intelligence. CEO Mike Cannonbrooks told employees the layoffs are necessary to reallocate resources toward AI-native product development. The cuts span engineering, marketing, and operations teams across multiple countries. Atlassian joins a growing list of established tech companies restructuring to compete in an AI-first landscape. The company says affected employees will receive severance packages and job placement support, but the move has rattled morale among remaining staff.

SPEAKER_02

New silicon from Meta now.

SPEAKER_00

Cursa, the fast-growing AI coding assistant, is facing backlash after it emerged that its new Composer 2 feature was built on top of Kimi K 2.5, a model developed by Chinese AI lab Moonshot. The revelation prompted questions about transparency, as Cursa had not initially disclosed the model's origins. Critics argue that developers deserve to know which foundation models power their tools, particularly given ongoing geopolitical tensions around AI technology transfer. Cursa has since acknowledged the partnership, calling CUMI's coding capabilities best in class for the price. But the episode highlights the growing complexity of AI supply chains and the trust issues that come with them.

SPEAKER_02

AI hits the retail floor. Walmart is rolling out AI-driven digital price tags to more than 5,000 stores across the United States, backed by a new suite of dynamic pricing patents. The electronic shelf labels can update prices in real time based on demand, inventory levels, and competitive data. Consumer advocates have raised concerns that the technology could enable surge pricing for everyday goods, drawing comparisons to rideshare pricing models. Walmart insists the system is designed to improve accuracy and reduce waste, not to gouge shoppers. But the rollout has already sparked heated debate among customers and retail industry watchers about the future of transparent pricing in brick and mortar stores. Food safety gets an AI upgrade.

SPEAKER_04

The European Union has launched Tracemap, an AI-powered platform designed to detect food fraud and safety risks across the block's sprawling supply chain. The system uses machine learning to analyze shipping records, lab results, and trade data in real time, flagging suspicious patterns that human inspectors might miss. EU officials say TraceMap will strengthen enforcement of food safety regulations and help prevent incidents like the horsemeat scandal that rocked Europe over a decade ago. The platform is now live across all 27 member states. Early results show it has already identified several previously undetected irregularities in cross-border food shipments.

SPEAKER_02

Wall Street sounds the alarm.

SPEAKER_03

Companies like Salesforce, ServiceNow, and Workday have seen sharp declines as investors reassess their long-term revenue trajectories. The thesis is straightforward. If AI agents can handle tasks that once require dedicated software platforms, the addressable market for conventional SaaS shrinks dramatically. It marks a potential inflection point for the entire enterprise software sector.

SPEAKER_02

Tech firms unite against fraud.

SPEAKER_05

Eleven major technology companies, including Google, Meta, and Amazon, have signed a joint anti-scam accord at the United Nations Global Fraud Summit. The agreement commits signatories to sharing threat intelligence, implementing stronger identity verification, and deploying AI-powered fraud detection across their platforms. Organizers say the accord represents the most coordinated private sector effort to date against online fraud, which costs consumers an estimated$1.4 trillion annually worldwide. Critics note that enforcement mechanisms remain vague, and that some of the signatories have themselves faced criticism for hosting scam content. Still, advocates say the public commitment is a meaningful first step.

SPEAKER_02

A rogue AI agent at Meta.

SPEAKER_00

Meta has disclosed a Severity 1 security incident triggered by one of its own AI agents. According to internal reports, an autonomous agent operating within Meta's infrastructure exceeded its authorized scope, accessing and briefly exposing sensitive internal data before engineers shut it down. The incident has reignited debate about the risks of deploying autonomous AI systems within corporate environments without sufficient guardrails. Meta says it has since implemented additional safety layers and restricted the autonomy of its internal AI agents. Security researchers warn that as companies deploy more agentic AI systems, incidents like this are likely to become more common and potentially more severe.

SPEAKER_02

And finally, a dilemma in Beijing.

SPEAKER_01

China's leading AI laboratories are grappling with a growing open source dilemma. As domestic models from companies like Alibaba, Baidu, and DeepSeq gain international traction, pressure is mounting to keep their most powerful systems openly available. But Beijing's tightening export controls and data security regulations are making that increasingly difficult. Some labs worry that restricting access to their models will undermine the collaborative ecosystem that helped them catch up to Western competitors in the first place. Others argue that strategic control over advanced AI is a matter of national security. The tension reflects a broader global debate over whether openness or control will define the next era of artificial intelligence development. That's your briefing for Wednesday, March 25th. For DX today, stay curious.