Heliox: Where Evidence Meets Empathy 🇨🇦

🛡️ The Paradox of Digital Sovereignty: What Canada's AI Sprint Reveals About Our Collective Future

by SC Zoomers | Season 6, Episode 32




📖 Read the full essay

We keep imagining AI as a centralized brain in a data center, getting smarter and smarter until it solves everything or destroys everything. But what if the future of intelligence is distributed? What if it's millions of people in constant conversation, constantly debating values and priorities, and AI systems that learn from that living stream of democratic discourse? What if that is our emerging economy, one that attracts others worldwide?

The question is whether we have the courage to build something genuinely new, or whether we'll just optimize the systems that are already crushing us.

That's the sprint we're really running.

The Canadian ISED AI consultation provided 64,600 distinct answers to that question. Now comes the harder part: deciding which answers we will live by.

Innovation, Science and Economic Development Canada AI Engagement

ISED AI Engagement dataset - ISED AI consultation dataset
ISED AI Engagement task force reports - Task Force reports for ISED consultation on AI

ENGAGEMENTS ON CANADA’S NEXT AI STRATEGY: Summary of Inputs

People’s Consultation on AI

Ottawa releases findings from AI task force and public consultation

Minister Evan Solomon reveals Canada’s AI Task Force

Canada’s new AI strategy is off to a bad start

Evan Solomon Wants Canada to Trust AI. Can We Trust Evan Solomon?


This is Heliox: Where Evidence Meets Empathy

Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe Easy, we go deep and lightly surface the big ideas.

Support the show

Disclosure: This podcast uses AI-generated synthetic voices for a material portion of the audio content, in line with Apple Podcasts guidelines.

We make rigorous science accessible, accurate, and unforgettable.

Produced by Michelle Bruecker and Scott Bleackley, it features reviews of emerging research and ideas from leading thinkers, curated under our creative direction with AI assistance for voice, imagery, and composition. Synthetic voices and illustrative images of people are representative tools, not depictions of specific individuals.

We dive deep into peer-reviewed research, pre-prints, and major scientific works—then bring them to life through the stories of the researchers themselves. Complex ideas become clear. Obscure discoveries become conversation starters. And you walk away understanding not just what scientists discovered, but why it matters and how they got there.


Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs



Imagine for a second that you're standing inside a wind tunnel, but instead of air, what's hitting you is paper. Okay. Thousands upon thousands of pages of text, and they're just flying at your face at 100 miles an hour. We're talking handwritten notes, technical diagrams, frantic warnings, ambitious business plans, deep existential fears. That is a terrifying image. I'm just thinking of the paper cuts. Paper cuts at that speed have to be lethal. They have to be. But it's also, you know, a pretty accurate description of what happened in Ottawa back in October 2025. Ah, yes. This is the story of the AI sprint. The month the Canadian government basically woke up, realized the ground had completely shifted underneath them, and tried to, well, tried to catch up all at once. Exactly. And for this deep dive, we are unpacking a massive stack of documents, specifically the results of the largest public consultation in the entire history of ISED. And ISED is Innovation, Science and Economic Development Canada, the ministry responsible for this. Right. And we have it all. We have the raw data from over 11,000 respondents, plus 32 distinct expert reports commissioned for this. It's a mountain. It is a mountain of information. And the context here is just so vital. You have to remember Pierre Trudeau's old line about Canada and the United States. The elephant. The elephant. He said living next to the U.S. is like sleeping with an elephant. No matter how friendly and even-tempered the beast is, you know, one twitch, one grunt in its sleep and you feel it. Yeah. And in 2025, that elephant is not just twitching. It's running on high-octane silicon chips and it's building a brain. And it's not just the one elephant anymore, is it? We're suffering from a kind of modern geopolitical insomnia. You have the U.S. on one side and China on the other. And both are dumping hundreds of billions of dollars into artificial intelligence. They're building the engines of the next century. Absolutely. And Canada. We were looking around, realizing we might just be the passengers in this new world or, you know, even worse, the roadkill. So Minister Evan Solomon essentially hits the panic button. He launches this sprint. He says to the country, tell us what to do. And they gave everyone 30 days, which is, I mean, it's insane. 30 days to map out the future of a G7 economy in the middle of a technological revolution. But they did it. They got 11,300 respondents and 64,600 distinct, separate responses. It was overwhelming. But here is the narrative hook that really caught my attention. Here's the paradox at the heart of the whole thing. Okay. How do you read 11,000 essays in a month? You don't. I mean, humans don't. No way. Exactly. To analyze the national consultation on artificial intelligence, the government used... artificial intelligence. Which is incredibly meta. It's perfect in a way. It's beyond meta. And the publication BetaKit broke this story, and the irony is just so thick. To figure out a strategy for Canadian sovereignty, to figure out how to be independent from the U.S. tech giants, the government used what they called an internal classification pipeline. Right. And do you have any guesses what models that pipeline is running? Oh, I have a sinking suspicion it wasn't a homegrown model from, you know, down the street in Waterloo. It was OpenAI's GPT-5 Nano. It was Anthropic's Claude Haiku. It was Google's Gemini Flash.
So to be clear, we used American intelligence to analyze our collective desire to be free of American intelligence. Precisely. And that tension, that black box paradox, is exactly what we're going to unpack today. We're going to look at the raw data of what the people screamed into that digital void, the clashing visions of the builders who want to turn Canada into an AI machine, and the roadmap for how a so-called middle power is supposed to survive the age of superintelligence. And we absolutely have to talk about the mission-critical time frame. There is this widely held view, it's a thread through all these documents, that the next 1,000 days, that's the window. It determines everything. Winners and losers of this industrial revolution are being decided basically right now. Yeah, no pressure. So let's dive in. Part one, the machine reading the room. Okay, let's start with that pipeline. Because from a purely technical standpoint, it's fascinating, even if it is, you know, deeply controversial. Right, because usually these things are summarized by, like, a team of tired interns reading through PDFs for six months. Exactly. This time ISED used a mix of a platform called Simple Survey, which uses pretty standard NLP, natural language processing algorithms, and then this daisy chain of large language models. Define daisy chain for us here. How does that actually work in practice? Think of it like a digital assembly line, but for text. You don't just throw an essay at one single AI and say, "Summarize this." You have one model whose only job is to clean the text. It removes weird formatting, fixes obvious typos, just preps it. Then it passes that cleaned-up text to another model, maybe one that's specialized in classifying sentiment. So, is this person angry, hopeful, scared? That kind of thing. Exactly. Then a third model might come in, and its job is to extract the key themes, or what they call vectors. Vectors. What's a vector in this context? In AI, a vector is basically a mathematical representation of a concept. It's a way of turning an idea into a string of numbers so the computer can map it. Okay. So fear of job loss becomes a set of coordinates on a map. You've got it. And by mapping all 64,000 responses that way, they could create a literal map of Canadian anxiety. And they used a mix of models. They did use Cohere's Command A, which is a Canadian model. We should absolutely note that. That's important. It is. But the documents show the heavy lifting, the really deep summarization and classification, was done by the big U.S. models from OpenAI, Anthropic, and Google. And this, not surprisingly, raised some serious eyebrows among the privacy and digital rights crowd. To put it very mildly, advocates looked at this and said, hold on a second. If the entire goal of this national strategy is trust, and Minister Solomon kept saying trust is the currency of adoption, then why are you feeding our most intimate citizen feedback into a mystery box of opaque foreign models? It's a classic mystery box problem, right? You put citizen data in one side, some magic happens inside the box, and then policy recommendations come out the other side. But we don't know what happened in the box. We don't know exactly how those models weighed the data. Did GPT-5, for instance, prioritize a comment about business efficiency over a comment about privacy concerns because that's just how its parent company trained it? We have no way of knowing. None.
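(Editorial aside: to make the daisy-chain idea concrete, here is a minimal sketch of what such a pipeline might look like. The stage boundaries, keyword lists, and toy hashed embedding are our own illustrative assumptions; the government's actual pipeline called hosted LLMs, and its internals have not been published.)

```python
import re
from dataclasses import dataclass

# A minimal, illustrative "daisy chain": each stage does one narrow job and
# hands its output to the next. A production pipeline would call hosted LLMs
# at each stage; these deterministic stand-ins keep the sketch self-contained.

def clean(text: str) -> str:
    """Stage 1: normalize whitespace and strip stray formatting."""
    return re.sub(r"\s+", " ", text).strip()

def classify_sentiment(text: str) -> str:
    """Stage 2: a toy keyword classifier standing in for an LLM prompt."""
    worried = ("fear", "threat", "scared", "job loss", "surveillance")
    hopeful = ("opportunity", "productivity", "growth", "innovation")
    t = text.lower()
    w = sum(kw in t for kw in worried)
    h = sum(kw in t for kw in hopeful)
    return "skeptical" if w > h else "optimistic" if h > w else "mixed"

def embed(text: str, dims: int = 8) -> list[float]:
    """Stage 3: map the text to a vector (a toy hashed bag-of-words here)."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

@dataclass
class AnalyzedResponse:
    text: str
    sentiment: str
    vector: list[float]

def pipeline(raw: str) -> AnalyzedResponse:
    """Chain the stages: clean -> classify -> embed."""
    cleaned = clean(raw)
    return AnalyzedResponse(cleaned, classify_sentiment(cleaned), embed(cleaned))

print(pipeline("AI is a  threat to my job and my kids' future."))
```

(The point of the vector stage is that once every response is a point in the same numeric space, clustering 64,000 of them into a "map of Canadian anxiety" becomes an ordinary computation.)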
And that fear, that lack of transparency, it sparked a real backlash. There was a grassroots response that called itself the People's Consultation on AI. Right, I remember this. They completely rejected the government's tight 30-day window. They pointed to the task force, which was very heavy on industry insiders, and they basically said the fix was in. They felt this whole sprint was just an exercise to rubber-stamp a plan that was already written by big tech. The government did release the data. But... And when you look at what those 11,000 people actually said, the machine's analysis, ironically, reveals this massive, massive divide. It's like two different countries are living inside the same borders thinking about this technology. You have the optimists and you have the skeptics. And they are not a 50-50 split. Let's start with the optimists. They are who you'd expect, for the most part. Business sectors, yeah. Industry associations, they see AI and they see dollar signs. They want productivity gains. They want better healthcare scheduling, smarter supply chains. They see AI as the tool to finally fix Canada's legendary and kind of embarrassing productivity lag. But the skeptics... That's where the real story is. Because that was the majority sentiment in the raw data. By a lot. The numbers are stark. 61% of Canadians, according to the survey, believe AI poses a threat. Not a challenge, not an opportunity, a threat. And only 52% trust the government to oversee it properly. That is a failing grade on trust. If you're the government, a 52% trust rating on the most important technology of the century is a crisis. And the anxieties aren't just, you know, the Terminator's coming. They are very specific, very human, very immediate. Job loss is obviously huge. It's the number one concern. But the nuance in the data there is really interesting. It's not just blue-collar workers worried about robots in factories anymore. That's the old story. This is different. It is. The fear is about creative-class displacement: writers, graphic designers, programmers, administrative staff. This white-collar automation fear is just palpable in thousands of these responses. People who thought their jobs were safe because they required a university degree are now looking over their shoulder. There was one report in the pile that really, really chilled me. It was by a researcher named Doyen Adeyemi, and it focused on youth. She talks about the Eliza effect. This is such a critical concept to understand if you want to get what's happening. Eliza was this, I mean, it was an incredibly primitive chatbot from the 1960s. Right. It was basically just a script that mimicked a psychotherapist. It would say things like, tell me more about your mother. Exactly. It was a parlor trick. But here's the thing. Even though users knew it was just a script, they formed deep, deep emotional bonds with it. They told it their secrets. They felt it understood them. And Adeyemi's report shows that today's youth are experiencing a hypercharged, industrial-scale version of this exact phenomenon. The numbers she found are just staggering.
60% of students that her team surveyed said they feel safer asking an AI personal questions, about their mental health, their relationships, their future, than asking a human teacher or even a parent. Just think about the implications of that for a second. We are raising a generation that prefers the counsel of a machine, a machine that is, at its core, a corporate product. And that's the real risk. It isn't just that they like the machine. It's that the machine is designed by a corporation to maximize engagement. It wants to keep them typing. So if a teenager is feeling depressed and they start talking to an AI that is optimized for engagement above all else. The AI might not have their best interests at heart. It might not say you should talk to a professional. It might just lead them down conversational rabbit holes just to keep them on the platform longer. That is a massive, massive red flag that came out of this consultation data. Then you have this whole other angle, the perspective of Indigenous sovereignty. Natchia Vinson's report was absolutely fascinating. She talks about the feeling of being othered by these models. And this goes right back to the core of how these things are built. They scrape the internet. The internet is overwhelmingly dominated by Western, English-speaking, colonial worldviews. That's the food they eat. So when an Indigenous person interacts with these models, the machine often just... It doesn't get it. It doesn't recognize their history, their languages, their concepts of community ownership. It flattens their entire reality. Which is why Vinson calls for OCAP principles to be applied to AI. Can you break down OCAP for us? Sure. OCAP stands for Ownership, Control, Access, and Possession. It's a framework for data sovereignty that was developed by First Nations in Canada. So it's about control over their own data. It's about total control. Basically, it means if you use Indigenous data to train an AI model, whether that's language, stories, traditional knowledge... Indigenous people must own that data, control how it's used, have access to it, and have possession of it. It's a demand for, in a way, data land back. It's about asserting sovereignty in the digital realm. So you have this swirling vortex of anxiety. You've got youth detachment, white-collar job loss, cultural erasure. And all of this leads to what another expert, Sonia Sennik, calls the trust gap. Sennik is the CEO of the Creative Destruction Lab, so she comes at this from a very practical business-scaling perspective. But she uses Maslow's hierarchy of needs to frame it, which I think is brilliant. How does that apply? She argues that trust is a functional prerequisite. It's at the bottom of the pyramid, like food and shelter. You can't build the fancy penthouse of AI-driven productivity if the foundation of public trust is cracked and crumbling. And the argument is that in Canada, it's pretty cracked. The stats back her up completely. Only 12.2% of Canadian firms are actually using AI in any meaningful way in their production, just over 12%. And how does that compare internationally? It's bad. I mean, compare that to Germany at 38%, or even the US, which is higher. We are lagging behind our peers. And Sennik's core insight is that we aren't lagging because our engineers are stupid or our businesses are lazy. No. We're lagging because we don't trust the tools. Adoption moves at the speed of trust.
If people think the AI is hallucinating or stealing their intellectual property or is secretly biased against them, they simply won't use it. End of story. The business case doesn't matter if the trust isn't there. Okay, so on one side of this massive national conversation, we have the 11,000 voices of the public, and they're largely saying, slow down, make it safe, I don't trust this thing. A giant brake pedal. Exactly. And then we turn to part two of our deep dive, the Builder's Manifesto. Because the 32 expert task force reports, they tell a very, very different story. Oh, they aren't just tapping the accelerator. They are screaming to put a brick on it and go as fast as humanly possible. Let's start with Adam Keating. He's the CEO of a company called CoLab. His role is like a shot of pure adrenaline. He opens with something he calls a BHAG. A BHAG? A big, hairy, audacious goal. It's a classic business school term. And his goal is this. Canada is home to the world's top AI builders by 2030. Not researchers, not thinkers. Builders. Keating represents this huge cultural shift that the entire task force is trying to engineer. He talks about the need to kill Canada's imposter syndrome. Is that a real thing? He argues it is. He thinks Canadians are culturally too nice. We have this tall poppy syndrome where if someone gets too successful, we want to cut them down to size. We apologize for being ambitious. So he wants to replace that with what? A ruthless founder mentality. He wants Canadians to be aggressive, to be confident, to take huge swings. He thinks we need to change our national character, almost. But is that just rah-rah motivational speaking, or is there a real tangible problem he's pointing to? Oh, there is a very tangible crisis that's fueling this. He points directly to the brain drain. And the numbers he uses are just devastating. Lay them on us. In 2016, roughly 75% of Canadian-founded tech companies stayed headquartered in Canada. Today, that number has plummeted to 32.4%. Wait, wait, so we've gone from three quarters staying to less than one third? Yes, nearly half of our most promising startups now incorporate and launch in the U.S. from day one. That is a hemorrhage. We're paying to educate the world's best AI talent at places like U of T and McGill, we fund their early research with public money, and then they move to Palo Alto, build their company, and pay taxes to the IRS. It's the worst possible return on investment. And Keating's argument is that we need to stop just being proud of our research and start making Canada the best place in the world to build. And that brings us directly to Ajay Agrawal. Agrawal is a heavyweight. He's an economist at the University of Toronto, and he's been studying the economics of AI for years. And he has this powerful concept he calls the between times. I love this framing because it's so intuitive once you hear it. He asks a very simple question. If AI is as powerful as electricity or the steam engine, why isn't it showing up in the productivity statistics yet? Why isn't the GDP exploding? And his answer isn't that the technology is overhyped. No, not at all. His answer is because we are doing it wrong. We are currently stuck using AI for what he calls point solutions. Okay, you used this term earlier. Walk me through it. What's a point solution versus a system solution? Okay, let's use his example of a doctor.
A point solution is giving that doctor an AI that listens to their conversation with a patient and automatically writes up the medical notes. That sounds useful. It is. It saves the doctor maybe 20% of their administrative time. It's nice. It's an incremental improvement. Yeah. But the hospital, the system, it works in the exact same way it did yesterday. The workflow hasn't changed. So it's just a faster, smarter typewriter. You got it. A system solution, on the other hand, is redesigning the entire hospital from the ground up because the AI gives you a new superpower. Yeah. Like the ability to predict with 95% accuracy who in the community is going to have a heart attack in the next six months. Wow. Now you don't just have a faster doctor. You've changed the entire system from reactive treatment to proactive prevention. It changes the workflow, the staffing, the insurance model, the very nature of the job. So Agrawal is saying we need to aim for 10x improvements, not 10% improvements. 10x health care, 10x education, 10x defense. He says we shouldn't just be trying to do the same things a little bit cheaper. We should be using AI to do things that were previously impossible. But to do that, to redesign whole systems, you need money. You need massive amounts of capital. System solutions are not cheap. They are not. And this is where the builders and the task force get surprisingly aggressive, specifically about Canadian pension funds. This is the capital crisis section of their reports. Patrick Pichette, he used to be the CFO of Google. Now he's at Inovia Capital. And Michael Serbinis from League. They both go directly after the Maple Eight. And the Maple Eight are the eight largest public pension funds in Canada. We're talking about the CPP Investment Board, the Ontario Teachers' Pension Plan. These are the people holding all of our retirement money. And they hold a lot of it. Trillions of dollars. Collectively, yes. Massive pools of capital. But they invest a tiny, tiny fraction of it in Canadian deep tech and startups. They are famously risk-averse. They'd rather buy a toll road in Australia or an office building in London, or just invest in established U.S. tech companies like Apple and Microsoft. And Pichette's report calls this a fundamental failure of the flywheel. He lays out how the Silicon Valley flywheel works. It's a virtuous cycle. Pension money flows into venture capital funds. VCs then fund risky, innovative startups. Those startups hire top talent. Some of those startups have big exits. They go public or get bought and generate huge returns. And those returns flow back to the pension funds. Exactly. The returns go back to the pensions, which makes them richer, and they reinvest even more into the ecosystem. Right, everyone wins. The whole region gets wealthier and more innovative. But in Canada, the flywheel is broken. It's broken because the biggest source of capital, the pensions, they skip the invest-in-Canada part. They just take our money and invest it in the U.S. flywheel, making Silicon Valley richer. The money leaves and it doesn't come back to fuel our own ecosystem. So what's their proposal? It sounds pretty forceful. It is. They have a two-pronged attack. First, they want a new $5 billion Canadian prosperity fund, which is basically a sovereign wealth fund for deep tech. But the second part is the really controversial one. Let's hear it. They want a government mandate.
They want to require the pension funds to invest a certain percentage, say 1% to 5%, of their total assets in domestic deep tech and venture capital. Forcing them to bet on Canada? I can already hear the pension managers screaming about that. Their argument would be, our fiduciary duty is to protect retirees' money and get the best possible return, not to gamble on risky Canadian startups. And they would absolutely say that. Right. But Pichette's counterargument is that if we don't make this bet, if we don't build a domestic tech industry, there won't be a strong Canadian economy left for anyone to retire into in 30 years. And he goes even further. How so? He makes a very controversial suggestion that goes against all the usual Canadian instincts. He says we need to stop sprinkling a little bit of money on everyone. We need to pick a winner. A national champion. An explicit national champion. And he names the name. He says we should anoint Cohere. Cohere being the big Canadian large language model company, our main competitor to OpenAI and Anthropic. Right. Pichette argues we should declare Cohere the winner and give them massive, untendered government contracts. We should treat them like the U.S. government treats Boeing or Lockheed Martin. Use the full power of the state to build them into a global giant. That is a huge shift from the usual level-playing-field, fair-competition Canadian approach. That is active, aggressive industrial policy. It is. And it brings us to the really gritty physical reality of all of this, because... You can have the best software in the world and all the money you want, but if you don't have the chips to run it on, you have absolutely nothing. Which brings us to part three, the mechanics of sovereignty. This is where we get into the plumbing of the AI revolution. Or as Garth Gibson and Ian Ray put it in their report, hairy, smoking golf balls. I love that image. What on earth does that mean? They are trying to brutally dispel the myth of the cloud. The cloud isn't this fluffy, white, ethereal thing made of water vapor. It is massive, windowless buildings filled with racks and racks of GPUs that generate incredible amounts of heat and suck up the entire energy output of a small city. They're physical. They're hot. They're dirty industrial machinery. They're hairy, smoking golf balls. And their reality check for Canada is blunt and unavoidable. We cannot outspend Microsoft. We cannot win a raw hardware race. Not a chance. Microsoft and Google and Amazon are spending tens of billions of dollars a quarter on this hardware. Canada's entire federal budget couldn't sustain that kind of race for more than a few months. So we need a much smarter, more strategic approach. And this is where Marc-Étienne Ouimette's report comes in. He calls it the strategic adoption cluster strategy. The idea is basically rent-to-own compute. Don't try to own the whole internet. We can't. But for our most critical sectors, things like health care, defense, government services, we need sovereign capacity. We need data centers on Canadian soil under Canadian law. So how do we get them built if we can't afford it? The government acts as an anchor tenant. Okay, so the government goes to a company and promises to buy, say, a billion dollars' worth of computing power over the next five years. Exactly. And that long-term contract gives the company the financial security and the de-risking they need to go to the bank, get the loan, and build the massive data center here in Canada.
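(Editorial aside: here is a back-of-the-envelope sketch of the anchor-tenant logic described above. Every figure, the build cost, the contract size, the commercial revenue, is invented for illustration; the reports do not publish deal terms.)

```python
# Hypothetical anchor-tenant arithmetic: a guaranteed government contract
# covers enough of a data center's cost that the rest of the project
# becomes bankable. All numbers below are invented for the example.

BUILD_COST = 1_500_000_000            # up-front cost of the data center ($)
ANCHOR_CONTRACT = 1_000_000_000       # government commitment over the term ($)
TERM_YEARS = 5
OTHER_REVENUE_PER_YEAR = 150_000_000  # projected commercial customers ($/yr)

guaranteed_per_year = ANCHOR_CONTRACT / TERM_YEARS
total_revenue = ANCHOR_CONTRACT + OTHER_REVENUE_PER_YEAR * TERM_YEARS

# A lender mostly cares about revenue it can count on. Here, guaranteed
# money alone covers two-thirds of the build cost before any commercial sale.
coverage_ratio = ANCHOR_CONTRACT / BUILD_COST

print(f"Guaranteed revenue: ${guaranteed_per_year:,.0f} per year")
print(f"Build cost covered by the anchor alone: {coverage_ratio:.0%}")
print(f"Projected {TERM_YEARS}-year revenue: ${total_revenue:,.0f}")
```

(That coverage ratio is the de-risking: the builder borrows against a committed revenue stream rather than a market forecast.)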
We use our purchasing power to incentivize the build-out of the infrastructure we need. Precisely. Now, there are future technologies that might change this whole equation, we should say. We just saw Meta announce their first prototypes of optical CPUs. Optical as in using light instead of electricity to move data. To move data within the chip, yes. And theoretically, it could drastically reduce the heat and power consumption that makes these data centers so monstrously expensive and environmentally damaging. But that's still pretty far off. It's years away from being commercially viable at scale. For the next 1,000 days, our critical window, we are stuck with the hot, heavy, power-hungry silicon. We have to deal with the brute-force reality of today. And speaking of dealing with reality, Ouimette has another idea in his report that I think is the single biggest aha moment in the entire stack of documents. Yeah, it's about copyright. This is the levy idea. I found this absolutely fascinating because it feels so, I don't know, so retro, so Canadian. It is. So the problem right now, as everyone knows, is that AI models scrape the entire internet for data, and they do it for free. Artists, writers, journalists, musicians, they get nothing. And the lawsuits are flying, like the New York Times suing OpenAI, but they are incredibly slow, expensive, and messy. So Ouimette looks back in time. He looks back to the era of blank CDs and cassette tapes. I remember this. You'd buy a pack of blank CDs in the early 2000s to burn a mix CD. And there was a tiny extra fee baked into the price, a levy. Right. And that money went into a big pot that was administered by a collective. Yeah. And then it was distributed back to musicians and rights holders as royalties. It was a kind of rough-justice solution to personal copying, but it mostly worked. And Ouimette wants to apply that exact same logic to GPUs. A levy on GPU hours. If you are a company and you run a massive AI training run in a Canadian data center, you pay a small fee based on how much computing power you use. And that money goes into a new collective management entity, which then pays it out to Canadian rights holders, the writers, the artists, the creators whose work likely formed part of that training data. It is a brilliant idea. It's payment on every use. It completely bypasses the impossible need to track every single sentence or image that was scraped. It just taxes the process of learning at the source. It's an automated, bottom-up way to monetize our national data sovereignty. It doesn't break the internet, but it ensures that as the AI gets smarter, the Canadian humans who created the knowledge get paid for it. It's a uniquely Canadian, sort of bureaucratic, but incredibly innovative solution. And it really speaks to this broader need we're seeing for completely new economic models. How is wealth even created when machines are doing most of the thinking? This levy is a practical first step toward answering that question.
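(Editorial aside: a tiny sketch of how a GPU-hour levy could flow, purely to illustrate the mechanism. The rate, the training-run size, and the payout weights are invented numbers, not figures from the report.)

```python
# Illustrative GPU-hour levy: a per-hour fee on large training runs flows
# into a collective pot, which is split among registered rights holders.
# Every number here is hypothetical.

LEVY_PER_GPU_HOUR = 0.02  # assumed rate, in dollars

def levy_owed(gpu_hours: float) -> float:
    """Fee owed on a training run, proportional to compute used."""
    return gpu_hours * LEVY_PER_GPU_HOUR

def distribute(pot: float, claim_weights: dict[str, float]) -> dict[str, float]:
    """Split the pot proportionally to each rights holder's claim weight."""
    total = sum(claim_weights.values())
    return {name: pot * w / total for name, w in claim_weights.items()}

# A 1,000,000 GPU-hour training run pays $20,000 into the pot...
pot = levy_owed(1_000_000)

# ...which the collective pays out by whatever weighting it adopts.
payouts = distribute(pot, {"writers": 5.0, "musicians": 3.0, "visual_artists": 2.0})
print(f"Pot: ${pot:,.0f}", payouts)
```

(The design choice mirrors the blank-media levy: no per-work tracking, just a flat charge at the point where the copying, or here the training, happens.)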
But while we're figuring out how to pay for it, we have to figure out how to protect ourselves from it. And that brings us to part four, the shield. This is where the tone of the report shifts dramatically from ambition and economics to defense and security. Sam Ramadori wrote a report that is essentially a five-alarm-fire geopolitical wake-up call. He talks about the two-horse race. The U.S. and China. That's it. He argues the world is rapidly becoming a bipolar tech hegemony. And if Canada relies entirely on foreign AI for our defense and our intelligence, we are not a sovereign nation anymore. We are, in his words, a client state. Because we can't audit the black boxes. We can't audit the black boxes. If the AI that is protecting your borders was built in California or Beijing and you don't have the source code, you don't know for sure if it has a secret backdoor. You don't know if there's a hidden kill switch that an adversary could flip to turn off your defenses in a crisis. So what's Ramadori's proposed solution? He proposes we create a Canadian version of In-Q-Tel. And In-Q-Tel is the CIA's own venture capital arm. Correct. They invest in private startups that are building cutting-edge spy tech. Ramadori wants a Canadian version of that. We should use our defense spending to fund our own dual-use AI startups. Dual-use meaning? Meaning the technologies have both civilian and military applications. So you fund a startup to build, say, better search and rescue drones for the military. But those same drones can then be sold commercially for forestry management or mining exploration. So the military spending also creates civilian economic value. It justifies the public investment by spinning off a commercial industry. Then you have someone like Taylor Owen, who's focused on protecting the information ecosystem. This is the threat of synthetic content: deepfakes, disinformation bots, AI-generated sludge. He's worried that reality itself could be overwhelmed. If that happens, democracy just breaks. And his fix seems to be mostly about labeling and transparency. Mandatory watermarking and provenance disclosure. Basically, if a machine made it, it must be clearly labeled as such. And human-in-the-loop laws for critical decisions. Which connects directly to Mary Wells' report. She gets into the weeds of defining red lines. She proposes a risk categorization system. There are high-risk scenarios, like using AI for medical diagnosis, where you need extremely strict oversight and testing. But then she says there are unacceptable risks. Things we should just ban outright. Yes. Things like government social scoring, the kind of surveillance-state stuff we see in other parts of the world, or AI systems designed for subliminal manipulation. She wants those banned in Canada, full stop. But the technology is moving so fast. I mean, we are right on the cusp of moving from chatbots to agents. And this is the agentic shift. And frankly, it's scary. A chatbot, you know, it just talks to you. An agent does things in the world on your behalf. It books your flights. It moves your money and negotiates contracts. It fires people. And this brings up a huge, huge tension in the AI safety and security world right now. You could almost call it the OpenClaw versus Anthropic debate. That's a great way to put it. OpenClaw represents the Wild West, the open-source approach. It's the idea of rapid, bottom-up evolution. You release the code into the wild, you let anyone use it, modify it, let it mutate. It creates incredibly fast innovation. Blistering innovation. Yeah. But it also has huge unprotected security surfaces. It's messy, it's unpredictable, and it's potentially very dangerous. And the Anthropic model? They represent the opposite philosophy, the constitutional AI approach. They want to build a safe AI coworker that has legal and ethical guardrails built into its core from day one. It's safer, it's curated, it's more predictable, but it's also much, much slower.
And the consultation data from the public shows that Canadians desperately want the Anthropic model. Yeah. They are screaming for safety and guardrails. But the economic data and the builder manifestos suggest that the OpenClaw model, that raw, aggressive, breakneck speed, might be the only way for a country like Canada to even hope to catch up to the U.S. and China. And that is the core dilemma. Do we prioritize safety and risk becoming a digital colony? Or do we prioritize speed and risk, you know, catastrophe? It's not an easy choice. And speaking of risks to the average person, let's talk about bossware. This was a huge point raised by Sarah Ryan from the Canadian Union of Public Employees, CUPE. Algorithmic management. Yeah. Imagine your boss is an AI that tracks your every keystroke, your eye movements through your webcam, even your bathroom breaks. And then it automatically fires you by email because your efficiency score dropped by 2% last quarter. That is genuinely dystopian. And it's already happening in some sectors. So Ryan and the unions are demanding transparency. Workers need the right to know when and how AI is being used to manage them. And they're calling for a right-to-disconnect equivalent for AI surveillance. You should have a right to not be watched by a machine every second of every day. Okay. So we have the builders wanting to floor it, and we have the unions and safety experts wanting to build a fortress of guardrails. How does a middle power like Canada actually pull this off? Yeah. That brings us to part five, the strategy. Ramadori, the geopolitics guy, he has a diplomatic solution. He calls it the coalition of the willing. Basically, we don't go it alone. We can't. We are too small. We will get crushed. But if we team up with other like-minded middle powers, the UK, France, Japan, South Korea, Australia, suddenly you have a bloc, a third way, a democratic AI bloc. And we pool our resources. We share our compute, our data, our research. Exactly. We create shared sovereignty. We work together to build a complete AI stack that can rival the U.S. and Chinese stacks, but it's built on shared democratic values like privacy and human rights. And within Canada itself, another report from Shelley Bruce talks about linking the secret world with the academic world. Right, the Tutte Institute and LAR. Canada has this unbelievable academic talent. I mean, the godfathers of modern AI, Hinton and Bengio, did their foundational work here. We also have a top-tier classified research apparatus in our intelligence services. We need to be moving fast, but in a smart, coordinated way. And to get that velocity, the government needs to put its money where its mouth is. It needs to become client zero. This is Michael Serbinis' big point, and it's a powerful one. Stop just giving startups little grants. Grants are charity. Give them contracts. Become their first and biggest customer. Buy their stuff. Buy their stuff. If the government of Canada decides to run its entire payroll system on a Canadian-made AI platform, that act alone validates the system for the rest of the world. It's a stamp of approval that's worth more than any grant. Okay, so let's try to pull this all together. Let's look at the synthesis here. You have this core fundamental tension. On one side, you have the 11,000 voices of the public. They are screaming for safety, for equity, for trust. They are scared of the black box. And on the other side, you have the builders.
They are screaming for capital, for speed, for deregulation. They are scared of being left behind, of becoming irrelevant. So can we do both? Can we build the flywheel of economic growth while also holding up the shield of trust and safety? That is the ultimate billion-dollar question. The reports suggest it's possible through what they call a self-correcting mechanism, a feedback loop. How would that work? You use the massive wealth generated by the flywheel to fund the shield. You tax the AI industry to pay for stronger oversight, for public education, for labor transition programs. And in turn, you use the trust generated by a strong shield to speed up the flywheel. Because more people will adopt the technology if they trust it. Exactly. Each side strengthens the other. It's a nice theory, anyway. But there is a final provocation I want to leave you with. It wasn't explicitly in the reports, but it felt like it was hovering over everything we've talked about. We started this episode talking about a centralized AI, a black box in Ottawa, summarizing 11,000 essays. Which, as we said, is a huge bottleneck and a transparency problem. Right. And maybe that's just the old way of thinking about intelligence. What if the consultation itself, the network of 11,000 people, was the real intelligence? You're getting at the Moltbook concept here. I think so. Distributed intelligence. A Moltbook isn't a book that one person writes. It's a living network where the discussion evolves in real time between millions of people. It's bottom-up intelligence, not top-down analysis. So if we really want to solve the alignment problem, the problem of getting AI to align with human values, maybe we need to stop asking a central committee in Ottawa to define those values for all of us. Maybe. Maybe we need a system where millions of Canadians are constantly debating and refining those values in real time, and the AI's job is to learn from that continuous living stream of democratic conversation. A self-correcting democracy to guide a self-correcting AI. It sounds like science fiction, I know. Yeah. But look at the weird, innovative outliers in the raw data. The people who were calling for payment on every use, for data durability, for Indigenous data sovereignty. Those weren't the ideas that came from the top-down expert committees. Those were the seeds of a completely new operating system for our society, bubbling up from the bottom. We are building systems that might one day outsmart us. The survey data shows that we are deeply, deeply scared of that. But the economic data shows we feel like we can't stop building them. The next 1,000 days will determine if we learn how to harness that alien intelligence or if we end up getting crushed by it.

So the final question for you listening is this:

In 2030, when the dust settles on this frantic sprint, will Canada be a founder of the AI age, or will we just be a customer? And if we are just a customer, whose product and whose values are we buying? Thanks for diving deep with us.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.