unDavos Summit

Creating Abundance and Avoiding the Zero-Sum Trap | unDavos 2026

If the majority of AI’s value comes from cutting jobs, only a select few will become billionaires while the rest of us become jobless. SandboxAQ’s Doron Amir argues we’re at a fork in the road: Large Language Models optimize efficiency, but Large Quantitative Models — trained on physics and mathematics rather than internet text — can create entirely new sciences, new industries, and new jobs. This keynote reframes the AI conversation from reduction to creation.

WHAT THIS KEYNOTE COVERS

  • Why LLMs hit a “semantic dead end” in high-value science — confidently giving wrong answers to physics questions unless connected to quantitative models that get the math right
  • How SandboxAQ’s Large Quantitative Models serve as a “rigorous computational engine” that has already identified novel drug candidates targeting obesity, ready for synthesis
  • Why the AI industry skewed too heavily toward LLMs after ChatGPT, and why leaders like Yann LeCun, Fei-Fei Li, and Ilya Sutskever are now calling for new architectures
  • How quantum sensing — boosted by AI to increase signal and reduce noise — is accelerating quantum technology applications today, without waiting for quantum computers
  • Why the strategic framework should prioritize discovery over efficiency, resist short-term thinking, and augment humanity rather than replace it

SPEAKER

• Doron Amir — SandboxAQ (spun out of Google/Alphabet)

unDavos is a community-driven summit that runs during WEF week in Davos, democratizing the conversation around global challenges.

🌐 undavos.com

Tags: SandboxAQ, Large Quantitative Models, LQM vs LLM, AI and physics, quantum sensing, drug discovery AI, artificial general intelligence, AGI, AI architecture, Google Alphabet, zero-sum trap, AI job creation, abundance economics, responsible AI, quantum technology, unDavos, Davos 2026, WEF