Crestvale Newsroom

Nvidia’s GTC 2026 puts AI infra in focus




Nvidia opened GTC with a clear message: the next stage of AI will be shaped by infrastructure, not models. This episode breaks down how Nvidia’s new chips, software stack, and inference‑focused roadmap signal where compute, cost, and constraints are heading next.

For operators, these shifts define the practical limits of AI adoption. They influence data center planning, software investment, and the level of control a business can maintain as AI becomes part of its daily operations.

We also cover Microsoft’s growing AI software engine, Alibaba’s push into agentic AI, and a new regulator‑backed risk framework that will guide how financial institutions deploy AI responsibly.

Plus, updates from Frore Systems, BlackSky, AGCO, SK Telecom, and Shopify.

Learn more at crestvale.io


SPEAKER_00

Welcome to Crestvale. This is a daily briefing breaking down what's happening across business, technology, and automation, and why it matters. Today we're looking at how artificial intelligence infrastructure is shifting as Nvidia sets its roadmap for the next year. Nvidia just used GTC to show where the next wave of compute is going, and the message is clear: the company wants to own the full stack as AI moves from training to nonstop real-time use. That shift will change how data centers are built, how much they cost, and who keeps control.

Markets closed higher in the previous session. The S&P 500 moved up, and the Nasdaq also ended the day higher. The 10-year yield drifted lower, Bitcoin continued its climb, and the overall mood leaned optimistic.

Nvidia opened GTC with a strong push to define the next stage of AI infrastructure. Jensen Huang laid out new accelerators, new networking gear, and more software that sits deeper in the stack than before. The message was not subtle: Nvidia wants to be the default platform for anyone building or running large-scale AI.

The company placed heavy weight on its software base. CUDA has been around for years, but Huang framed it as the moat that keeps developers loyal. Most AI workloads are already tuned for Nvidia hardware, and that tuning becomes a form of friction: once your systems depend on it, switching becomes a major job.

The hardware roadmap got attention too. Nvidia previewed new chips built for faster memory access, tighter interconnects, and better energy use. These may sound like engineering details, but they decide how big an AI cluster can get, how many tokens it can push, and how much power a data center needs to keep it alive.

The biggest shift is the focus on inference. Training gets the headlines; inference pays the bills. As more companies use agents, copilots, and real-time AI systems, the real load happens after the model is trained.
Nvidia showed new designs meant to cut latency and make production AI cheaper to run, which is where most companies now feel pressure. The full-stack story ties this together: chips, systems, cloud partners, robotics, and software, all wired so Nvidia controls the layers between developers and the machine.

This matters because infrastructure choices ripple into budgets, timelines, and strategic freedom. Once a company commits to an AI platform, it shapes how flexible the business can be. These GTC announcements hint at what data center planning, hardware spend, and AI-driven services will look like over the next two years. For operators, this is not a tech event; it is an early view of next year's constraints.

Microsoft is also pushing forward, but from a different angle. Its heavy spending on AI hardware is now converting into recurring software revenue. Azure AI demand is strong enough that Microsoft is using much of its own compute internally. Copilot adoption continues to climb, helped by a large installed base inside Microsoft 365. The upcoming E7 tier and Agent 365 tools show how Microsoft plans to sell AI into nearly every workflow. For operators, this signals a world where AI is bundled into the software companies already pay for, which raises expectations for productivity and internal automation.

Alibaba is taking a more agent-focused path. It is consolidating its AI groups into a division called Token Hub, with CEO Eddie Wu guiding the shift. The company plans to launch a new enterprise agent service built on its Qwen models. Chinese firms are moving fast toward AI that takes action, not just produces text, and Alibaba wants to lead that shift. If you operate in markets touched by China, this is a reminder to treat agents as core infrastructure: the companies that prepare early will be able to hand over more repeatable work to automated systems.
Financial regulators in the United States released a detailed AI risk framework designed for banks and large institutions. It includes hundreds of controls and a maturity model that helps teams understand how governance should scale with adoption. It separates the risks inside foundation models from the risks created when firms build prompts, systems, or data pipelines around them. This gives operators something concrete: a playbook they can follow as they bring AI into production while staying aligned with regulators.

Here's what else is worth knowing today. Frore Systems raised new funding to expand its solid-state cooling tech for advanced AI chips, highlighting how heat is becoming one of the biggest barriers to denser compute. BlackSky is rolling out more satellites with sharper and more frequent imaging; demand for live geospatial data is climbing across defense and commercial sectors. AGCO launched a single parts-ordering platform for dealers, giving real-time visibility into inventory and prices, a shift many industrial firms are now making. SK Telecom is expanding internal AI tools and custom agents across the company; large incumbents are starting to operationalize AI rather than run pilots. Shopify outlined a future where AI agents drive shopping decisions, and merchants may soon optimize for machine buyers as much as for humans.

Here's the operator takeaway.