Heliox: Where Evidence Meets Empathy 🇨🇦‬
Join our hosts as they break down complex data into understandable insights, giving you the knowledge to navigate our rapidly changing world. Tune in for a thoughtful, evidence-based discussion that bridges expert analysis with real-world implications. An SCZoomers Podcast.
Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe Easy, we go deep and lightly surface the big ideas.
Curated, independent, moderated, timely, deep, gentle, evidence-based, clinical & community information regarding COVID-19. Running since 2017, the project has focused on COVID-19 since February 2020, publishing multiple stories per day, hence a sizeable searchable base of stories to date: more than 4,000 stories on COVID-19 alone, and hundreds of stories on climate change.
Zoomers of the Sunshine Coast is a news organization with the advantages of deeply rooted connections within our local community, combined with a provincial, national and global following and exposure. In written form, audio, and video, we provide evidence-based and referenced stories interspersed with curated commentary, satire and humour. We reference where our stories come from and who wrote, published, and even inspired them. Using a social media platform means we have a much higher degree of interaction with our readers than conventional media, and it provides a significant, positive amplification effect. We expect the same courtesy of other media referencing our stories.
👉 The Invisible Hand That's Actually Pushing You
We like to think we’re in control. That’s the comfortable fiction we tell ourselves every morning when we’re standing in front of the open refrigerator, weighing whether to grab the leftover pizza or the sad container of spinach that’s been judging us for three days.
“I’m making a rational choice,” we whisper to ourselves. “I have free will.”
But here’s the uncomfortable truth that Cass Sunstein and Richard Thaler want you to understand: You were never really in control. Not completely. The refrigerator itself—the way it’s organized, what’s at eye level, what’s hidden in the back—is already choosing for you. Someone designed that space. And whether they meant to or not, they influenced what you’d reach for.
Nudge: The Final Edition (Revised & Updated)
This is Heliox: Where Evidence Meets Empathy
Thanks for listening today!
Four recurring narratives underlie every episode: boundary dissolution, adaptive complexity, embodied knowledge, and quantum-like uncertainty. These aren’t just philosophical musings but frameworks for understanding our modern world.
We hope you continue exploring our other podcasts, responding to the content, and checking out our related articles on the Heliox Podcast on Substack.
About SCZoomers:
https://www.facebook.com/groups/1632045180447285
https://x.com/SCZoomers
https://mstdn.ca/@SCZoomers
https://bsky.app/profile/safety.bsky.app
Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs
Okay, let's just dive right in. We've all been there, right? You're staring at some form, maybe for a retirement plan, or you're trying to figure out a health insurance deductible. Oh, absolutely. Or even just, you know, trying to pick the fastest line at the supermarket. Exactly. Yeah. And you tell yourself you're making this perfectly rational choice, that it's all about free will, and you've weighed all the options. Right, that you're in complete control. But the uncomfortable truth is that we're... Well, we're constantly being guided. Yeah, guided by these sort of invisible hands that are shaping the very moment we make a decision, sometimes for our own good, but not always. And that, in a nutshell, is the huge idea we're getting into today: choice architecture. It's this really revolutionary concept that the environment, the whole context where you make a decision, is never neutral. And the person who designs that context, the choice architect, they have this... this immense power to influence our health, our wealth, our happiness. A power that a lot of the time we don't even recognize is there. And the architects who really brought this idea out of the, you know, the academic ivory tower and into global policy are the two guys we're focusing on today: Cass Sunstein and Richard H. Thaler. Right. Sunstein is the Robert Walmsley University Professor at Harvard Law School, and Thaler is a Nobel laureate in economics at the University of Chicago. Their book Nudge was like a roadmap for governments and companies to solve these huge, intractable problems. Things like low retirement savings, or poor diet choices, or energy consumption, all without having to resort to, you know, big expensive taxes or heavy-handed laws. And the book landed at the perfect time, didn't it? Right in the mid-2000s, when governments everywhere were really struggling with these massive societal issues. It was an immediate political hit. And what was so surprising is how it just completely crossed the ideological divide.
Yeah, that's one of the most fascinating parts of its early life. You had figures from the left and the right embracing it at the same time. In the U.S., you had President Barack Obama, a Democrat. And in the U.K., Prime Minister David Cameron, a Conservative. Exactly. And they both cited the book as an inspiration. They both set up these dedicated government teams. People started calling them nudge units, to apply these behavioral insights to policy. It showed this kind of nonpartisan appeal of just making government work better. But it wasn't all smooth sailing. That popular success came with some immediate friction, didn't it? Oh, definitely. We have to be clear about that. Early on, there was significant pushback. So this wasn't seen as a panacea by everyone. No. Critics jumped on it pretty quickly. They argued that these were often, you know, short-term, politically motivated initiatives. Just designed to get a quick win. Right. And not necessarily based on good evidence for long-term behavior change. There were some real worries that governments were just using behavioral tricks instead of, you know, tackling the fundamental structural problems. That's a crucial point. And we have to report that critique impartially. That tension between a quick behavioral fix and a lasting policy solution is still a core debate today. For sure. But regardless of that criticism, the idea itself proved incredibly sticky. It just spread globally to the World Health Organization, the UN, governments from Australia to Germany. So our mission today is to trace the deep intellectual journey of these ideas. We're looking at the original book and the more recent final edition from 2021. And we're going to unpack not just the famous nudges, but also the core psychological machinery, like how our brains actually make decisions. And we'll see how the whole framework has evolved to tackle new challenges, including its dark twin, sludge. Right.
So let's start with the foundation, with one of the architects, Cass Sunstein. The power of Nudge really comes from the psychology, but I think the depth comes from Sunstein's career. I agree. He spent decades tackling these massive systemic legal and social questions, and there's this thread that ties all his ideas together. What is that, do you think? It's this systematic focus on how the state structures our reality. Which, when you think about it, is exactly what choice architecture is all about. That's a perfect way to put it. Right. His work before and alongside Nudge shows this relentless focus on the underlying systems. He doesn't just analyze laws. He wants to rewrite the operating system of society. Take his work on the First Amendment. He wasn't just talking about free speech in the abstract. He was worried about the architecture of information. Right. He was deeply concerned about what happens when like-minded people speak or listen mostly to one another. I mean, that sounds terrifyingly relevant today, with our social media echo chambers and filter bubbles. And he was writing about this decades ago. He saw the danger of group polarization, this idea that when a group of people only talk to each other, they tend to move to more extreme positions. Long before the technology made it a daily reality for billions of us. Exactly. So that worry about the architecture of public discourse really set the stage for his later work on the architecture of choice. And that same kind of thinking, that focus on system structure, led him to something
even more radical: his critique of the government's role in marriage. Oh, that definitely made headlines at the time. His proposal was pretty provocative. He argued that the government should just stop recognizing marriage altogether. The word marriage should be removed from the law. In its place, the state would only offer a civil union, basically a domestic partnership agreement that would be available to any two people. His goal was to get the state out of the business of endorsing one kind of relationship over another. Precisely. And this wasn't just some academic thought experiment. In 1996, he actually addressed the Senate, arguing against the Defense of Marriage Act. He was committed to reforming this very basic legal architecture. So you go from something as huge as that to something that sounds almost mundane, like tax day. But he finds profound architecture there, too. Yeah. His argument to celebrate tax day is a fantastic thought exercise. He basically says, look, without government, without police, courts, insured banks, you have no liberty or property. So taxes aren't this punitive thing being taken from you. You know, they're the necessary condition for your freedom. It's a powerful reframing, a nudge in itself, really, toward seeing taxes as a civic investment. But then we get to what is probably the most controversial application of this kind of thinking: the 2008 paper he co-authored on conspiracy theories. Right. This is where the academic idea of information cascades, how people adopt beliefs based on what others are doing, collided with national security. And the paper was about the risk from conspiracy theories that were hostile to government anti-terrorism policies. And how to counter that rapid spread of misinformation. The specific suggestion that became a flashpoint was what they called cognitive infiltration of extremist groups. So what did that actually mean?
They proposed that government agents could, say, enter chat rooms, online social networks, or even real-space groups, and try to undermine these theories by raising doubts. Yeah. It showed a really acute awareness that the information environment itself is a choice architecture that can be influenced. And it sparked this huge, necessary debate about the government's role in public belief. It really just underscores his whole approach. He takes an established concept, free speech, marriage, taxes, and looks at how to reformulate it based on how people actually behave, not how laws assume they'll behave. Which is the perfect mindset to have when you're collaborating with a behavioral economist like Richard Thaler. Yeah, we have to share one last story before we move on. The story behind the revised edition, Nudge: The Final Edition. Oh, this is great. It's the perfect real-world example of one of their own concepts. So despite the book's massive success, they were really reluctant to revise it. Thaler admits he's famously lazy, and Sunstein could have just written a whole new book in that time. And they only did it when the contracts for the paperback editions expired, which forced them to act. It's a perfect demonstration of the status quo bias, the very bias they dissect in the book, affecting the architects themselves. The ultimate proof that the architects are still human. Exactly. They're subject to the same inertia they're trying to help the rest of us overcome. So let's pivot now from the architects to the foundational problem Nudge is trying to solve. This brings us to the central conflict that really underpins all of modern behavioral economics.
The battle between Econs and Humans. Right. So for decades, economics was built on this idea of the Econ, Homo economicus, this theoretical, perfectly rational being who responds only to incentives and always makes the optimal choice to maximize their well-being. The Human, on the other hand, well, we're a lot messier. We respond to incentives, sure, but we also respond to nudges. We're influenced by all these factors that an Econ would say are irrelevant. Like the order of items on a menu, or whether a box on a form is pre-checked. And we often fail to do what's best for ourselves because our thinking is flawed in these very predictable ways. And the big insight that explains why we're so messy comes from the work on dual systems of thinking, which Daniel Kahneman really popularized. This is the absolute core of why nudges work. We're talking about System 1 and System 2. The authors prefer more descriptive names. So first you have the Automatic System, which is System 1. This is the ancient, fast part of your brain. It's uncontrolled, it's effortless, it's intuitive, it runs on heuristics, or rules of thumb. It's the part of your brain that knows how to drive home without you consciously thinking about it, or the part that yells "Duck!" when a ball flies at your head. Right. It's impulsive. It's fast. The authors use the analogy of Homer Simpson. Ha, yes. Quick to react, always hungry, and definitely not checking the math. Then you have the Reflective System, or System 2. This is the part we think of as being smart. It's controlled, effortful, slow, deductive, and logical. It's what you use to solve a complex math problem or fill out your tax forms. So if System 1 is Homer Simpson, System 2 is Mr. Spock from Star Trek. Cold, hard logic. But it takes a lot of time and energy. And that's the tension. System 1 generates these fast, easy, intuitive answers. And System 2 has to spend energy to step in and say, "Wait a minute," and correct the mistake.
And most of the time, System 2 is just too lazy or too busy, and that's where we blunder. There's a great test for this, from Shane Frederick's Cognitive Reflection Test, that perfectly shows how the automatic system makes these predictable errors. Okay, let's try one. I'm ready to be the Human. All right. A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost? Okay, my brain is screaming $0.10. It's immediate. And that's your automatic system. It's the simplest, easiest split. It's a very attractive answer. But it's wrong, isn't it? It is, because if the ball is $0.10 and the bat is $1 more, the bat would be $1.10, which would make the total $1.20. So you have to force your reflective system, your System 2, to kick in and do the algebra. And it finds the correct answer, which is five cents. Because then the bat is a dollar and five cents, and the total is a dollar ten. Wow, that takes real effort to override that first impulse. It really does. And these aren't random mistakes. They are systematic, predictable errors that come from our reliance on that fast, automatic system. And these systematic errors are exactly why choice architecture matters so much. Right. Because if we make mistakes predictably, then the architect has an ethical duty to design an environment that helps us avoid those blunders.
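The bat-and-ball algebra can be written out as a few lines of Python, as a quick sanity check (the variable names are ours, not from the book):

```python
# Bat-and-ball puzzle: total = $1.10, and the bat costs $1.00 more than the ball.
# Let ball = x, so bat = x + 1.00, and x + (x + 1.00) = 1.10.
total, difference = 1.10, 1.00
ball = (total - difference) / 2   # solve 2x + difference = total
bat = ball + difference

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}, total = ${ball + bat:.2f}")
# prints: ball = $0.05, bat = $1.05, total = $1.10
```

The intuitive $0.10 answer fails the same check: it makes the bat $1.10 and the total $1.20.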
Exactly. So let's dig into a few of the core behavioral biases that are driven by that fast and frugal System 1. Let's start with anchoring. Anchoring. So this is when we latch on to the first piece of information we get. Yeah, that's the anchor. And then we just fail to adjust far enough away from it, even if the anchor is totally random or irrelevant. The population example they use is fantastic for this. If you ask someone from Chicago, where the population is around 3 million, to guess the population of Milwaukee, they're anchored high. So they might guess a million, maybe nine hundred thousand. Way too high. But if you ask someone from Green Bay, where the population is about a hundred thousand, they're anchored low. So they'll guess maybe three hundred thousand. Way too low. Both are wrong, but in predictable, opposite directions, all because of that starting number. And you see this play out in real life all the time, like with those payment screens that suggest a tip. Oh, this is a perfect modern example. There was a study of two cab companies. One terminal suggested default tips of 15, 20, and 25 percent. Okay, pretty standard. The other one used a higher anchor. It suggested 20, 25, and 30 percent. And no surprise, the screen with the higher anchors led to significantly higher average tips. But there's a really important nuance there that the authors point out. Yes. The higher defaults also led to an increase in the number of people who left no tip at all. So why would that happen? It's a phenomenon called reactance. When the nudge feels too heavy-handed or too aggressive, it can trigger this defensive, oppositional response. People feel like their freedom is being threatened. Exactly. The automatic system sees that 30% suggestion and feels like it's being pushed around. So some people react by forcefully exercising their freedom and choosing the exact opposite: zero. It's a huge lesson for choice architects.
Be gentle, or the nudge can blow up in your face. Okay, so next up is availability, or accessibility. This is our tendency to judge how likely something is based on how easily an example comes to mind. So if an event is really vivid, or is just in the news, our System 1 goes, oh, that must happen all the time. Mm-hmm. We overestimate the risk of things like plane crashes or terrorist attacks because they're emotionally charged and easy to recall. And we underestimate the risks that are less dramatic, like asthma attacks or strokes, even though they happen far more often. You see this play out perfectly in the insurance market. The book notes that right after a big flood, everyone is scared, the memory is vivid. And sales of flood insurance go through the roof. Exactly. But as that memory fades, inertia kicks in and the purchasing rate just drops off. The lack of a recent scary anchor actually nudges people away from being protected. Okay, then we have what might be the most powerful force of all, the status quo bias. It is so powerful. It's just our deep-seated tendency to stick with whatever our current situation is, just because changing requires effort. It requires a System 2 intervention. And this brings us back to that fantastic and slightly embarrassing story about Cass Sunstein himself. Yes, the ultimate proof that even the experts aren't immune. He was paying for magazines he hated for over a decade. It's such a classic example. He signed up for these free three-month trials through American Express. But the catch, and this is a classic piece of financial sludge, is that unless you actively canceled, it would automatically roll over to a full-price subscription. And the sheer effort of canceling, finding the number, waiting on hold, was enough to keep him paying for 10 years. Just for magazines he wasn't even reading. It's pure inertia. And he only canceled when he started working on the book and thinking about these very mechanisms.
It took a professional nudge to overcome his personal inertia. That's amazing. Okay, finally, let's talk about framing effects. This is all about presentation. The exact same information, presented differently, can lead to completely different choices. So a surgeon could tell a patient, of 100 people who have this procedure, 90 are alive after five years. And that sounds pretty reassuring. Your System 1 feels good about that. But the doctor could also say, of 100 people who have this procedure, 10 are dead after five years. Whoa, that sounds terrifyingly risky, even though the information is identical. The second frame highlights the loss, the deaths, and that triggers a much stronger negative emotional response. This has huge implications for policy. Like telling people they will lose $350 a year if they don't conserve energy, instead of telling them they will save $350. Right. That loss frame is often way more powerful because of loss aversion. We hate losing things more than we enjoy gaining them. Okay, so we've laid out the problem. We're Humans, not Econs. We make these predictable mistakes. So now let's get to the solution. How do you build a system that accounts for these flaws? And this brings us directly to the concept of choice architecture. The core realization here is that you can't escape it. It's simply unavoidable. Because the environment has to be structured in some way. Exactly. The book opens with this great example of Carolyn, a cafeteria director. She has to decide how to arrange the food. She could put the healthy stuff first to make students healthier. Yeah. Or she could arrange it randomly. Or she could arrange it to maximize her profit, or to maximize the profit of whichever vendor gives her the biggest bribe. The point is, even if she chooses a random arrangement, she still has to decide where the salad dressing goes. She cannot avoid influencing what people eat. Simply put, everything matters. And that's the ethical justification for the nudge.
Since some arrangement is inevitable, the choice architect might as well pick the one that's most likely to make people better off, as judged by themselves. Which gives us the formal definition of a nudge. A nudge is any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. And the key part is it has to be easy and cheap to avoid. And that is the heart of libertarian paternalism. It sounds like an oxymoron. It really does. But it's paternalism because the architect is trying to steer you toward a better outcome. And it's libertarian because you are always completely free to choose otherwise. So banning junk food is not a nudge. That's coercion. Right. But putting the fruit at eye level, that's a nudge. So let's talk about some of the iconic examples. The most famous one has to be the fly in the urinal. A true legend of behavioral design. At Schiphol Airport in Amsterdam, they etched the image of a black housefly into the men's urinals. And this tiny little detail. It gave men an instinctive target, and it reportedly reduced spillage by 80%. It's the ultimate proof that the smallest details can have a massive impact. And that fly is a great example of what they call channel factors. Which are these little things that remove small obstacles to a behavior. Like the Yale students and the tetanus shot. They all heard a lecture. They all intended to get the shot. But only 3% actually did. Just inertia. But a second group got the same lecture, plus a map with the health center circled, and they were asked to plan their route and when they'd go. And that tiny little action, that bit of concrete planning, cleared the channel. Compliance jumped to 28%, almost a tenfold increase, just by making the path a little bit clearer. Good choice architecture also means you have to design for error.
You have to expect that humans running on autopilot are going to make mistakes. Like the signs in London. They know tourists from the U.S. and Europe are used to looking left for traffic. So they just paint on the street, look right. It's a cheap, simple design choice that stops a potentially fatal System 1 error. But then there's bad architecture. The most common mistake is violating stimulus-response compatibility. Which just means what you see should match the action you're supposed to take. The story from the University of Chicago is perfect. The doors had these big, handsome vertical handles. Which just scream, "Pull me!" Right. But the doors opened outward. You had to push. And people would just pull on them over and over again, getting frustrated, because the visual cue was totally wrong. We see it everywhere. Confusing stovetops, bad websites. Good design has to work with human psychology, not against it. And speaking of working against ourselves, let's talk about self-control, our long-term planner versus our short-term doer. The planner, our System 2, knows we should save for retirement. But the doer, our System 1, wants that expensive new gadget right now. The solution is what they call commitment strategies. Basically, you do something now to constrain your future impulsive self. The classic example is Ulysses tying himself to the mast so he couldn't steer toward the sirens. And Thaler used this brilliantly with a colleague who was procrastinating on his PhD thesis. The colleague, David, just couldn't get it done. So Thaler had him write a series of $100 checks, one for every missed chapter deadline. But the crucial part was the consequence. If Thaler cashed a check, he would use the money to throw a party. To which David would not be invited. That is beautiful. The emotional pain of knowing you paid for your friends to party without you was a much stronger motivator than some abstract career goal.
It was a perfect, salient commitment device. Okay, let's shift to where these ideas have had, I think, the biggest global impact: in the worlds of money and mortality. Starting with retirement. Saving for retirement is an incredibly hard task for a Human. It's complex. It's abstract. It's far in the future. System 1 just hates it. And the original problem with things like 401(k)s was that you had to actively opt in. You had to fill out a form. And because of inertia, participation rates were terrible. So the first, most powerful nudge was just switching to automatic enrollment. And participation rates soared. But that created a new problem, a problem of bad choice architecture. Right. More people were saving, but they were saving way too little. The default savings rate was usually set at just 3%. And people just passively accepted it. And the authors discovered that this 3% number wasn't based on any sound financial advice. No. It became the global standard because of an accidental numerical example in an early government ruling. Someone just wrote, suppose a firm enrolls employees at a 3% rate. And that number just became this incredibly sticky anchor for millions of people. It's a devastating example. But it led to the elegant solution from Thaler and Shlomo Benartzi: the Save More Tomorrow program, or SMarT. SMarT is just a masterclass in applying all these psychological principles. It has three key parts. First, it deals with our present bias, our desire for now over later, by having the savings increases start at a future date. It's the St. Augustine prayer: God give me chastity, but not yet. We're always willing to be virtuous tomorrow. Second, to overcome loss aversion, the savings increases are tied to pay raises. So your take-home pay never actually goes down. You just save a portion of the increase. You never feel the pain of a loss. And third, it uses inertia. Once you're in, the automatic increases become the new default, and you just stick with it.
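The raise-linked escalation can be sketched numerically. This is a toy Python model with made-up figures (a $50,000 salary, 3% annual raises, a one-point rate bump per raise, and a 10% cap are our illustrative assumptions, not the program's actual parameters):

```python
# Toy Save More Tomorrow simulation -- all parameters are illustrative.
salary, rate = 50_000.0, 0.03
CAP, RAISE, STEP = 0.10, 0.03, 0.01  # rate cap, annual raise, rate bump per raise

take_home_history = []
for year in range(8):
    take_home_history.append(salary * (1 - rate))
    salary *= 1 + RAISE              # the pay raise arrives...
    rate = min(rate + STEP, CAP)     # ...and the savings rate steps up with it

# The loss-aversion trick: because each bump is smaller than the raise,
# take-home pay never goes down, even as the savings rate climbs to the cap.
assert all(b >= a for a, b in zip(take_home_history, take_home_history[1:]))
```

The key design property is visible in the assertion: the saver's paycheck is monotonically non-decreasing, so there is never a felt loss to trigger System 1.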
In the original study, people stayed in for four straight raises, often until they hit the maximum contribution limit. It's just an incredibly powerful combination. And you see the power of these defaults in other countries, too, like with the Swedish premium pension plan. Yeah. Sweden went the other way. They encouraged active choice. They offered people 900 different investment funds to choose from. But they wisely had a very good, low-cost default fund for everyone who didn't choose. And what happened with the active choosers? They made classic Human mistakes. They suffered from a massive home bias. They invested almost half their money, 48.2%, in Swedish stocks. Even though Sweden is only about 1% of the world economy. It makes no financial sense, but it feels safe and familiar to System 1. And they paid way higher fees than the people in the default fund. The big lesson is that the default option, if it's well designed, is often better and safer for most people than what they would choose for themselves. The default is incredibly sticky. It lasts almost forever. Okay, so moving from money to mortality, let's tackle the really emotional subject of organ donation. There's that famous graph that shows countries with opt-out systems, where you're presumed to be a donor unless you say otherwise, have massively higher donation rates. So the default seems to be everything. But the authors make a really crucial, nuanced point here. They do not support presumed consent. Right, and it's because in practice, it's not a hard rule. Even in those opt-out countries, they almost always still consult the family. So if the person just forgot to opt out, their preference is seen as ambiguous. And the family's wishes can override it. Exactly. So the authors argue that the better nudge is prompted choice. So you ask people to make an explicit choice at a key moment, like when they're renewing their driver's license. It removes the ambiguity.
And the evidence is that the U.S., which uses prompted choice, actually has one of the highest donation rates in the world. And again, you have the risk of reactance. If you tell Americans the government presumes consent, a lot of them will actively opt out just to spite the government. Okay, one more financial blunder before we move on. Insurance. The authors say the rule of thumb should be: always choose the highest deductible you can comfortably afford. Because you should self-insure for small risks. But people have this powerful deductible aversion. They choose low deductibles, paying higher premiums every year just to avoid the potential pain of paying out of pocket. And in one company they studied, this led to a situation where some of the low-deductible plans were actually dominated by the high-deductible plans. Dominated. That's a really strong economic term. What does that mean in practice? It means the low-deductible plan was literally more expensive for the employee no matter what happened. Let's say it costs you $500 more a year in premiums just to have a lower deductible. Okay. Even if you have a medical event and have to pay that deductible, the amount you save on the deductible is smaller than the extra premiums, so you're still worse off than if you'd just taken the cheaper plan and paid the claim yourself. You are paying hundreds of dollars for literally nothing, just to soothe that System 1 fear of an immediate out-of-pocket cost. That is a perfect example of System 1 fear just completely overriding a System 2 calculation. So as this world of choice architecture grew, the authors realized they needed a name for the negative side of it. They needed a word for intentional friction. And in the new edition, they introduced a really powerful concept: sludge. Sludge is any aspect of choice architecture consisting of friction that makes it harder for people to obtain an outcome that will make them better off. It is the dark twin of the nudge. It's complex forms. It's bureaucratic red tape. It's impossible-to-cancel subscriptions.
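The dominance logic is easy to make concrete. Here is a minimal Python sketch with illustrative plan numbers of our own (not the figures from the study in the book): when the extra premium for the low-deductible plan exceeds the entire difference in deductibles, the low-deductible plan loses in every possible scenario.

```python
def annual_cost(premium, deductible, claims):
    """Total yearly cost: premiums plus out-of-pocket spending up to the deductible."""
    return premium + min(claims, deductible)

# Hypothetical plans: the low-deductible option charges $500 more in premiums
# but only lowers the deductible by $300, so it can never come out ahead.
low_ded = {"premium": 2000, "deductible": 200}
high_ded = {"premium": 1500, "deductible": 500}

# Check every claim level from $0 to $10,000.
dominated = all(
    annual_cost(**low_ded, claims=c) >= annual_cost(**high_ded, claims=c)
    for c in range(0, 10_001, 50)
)
print("low-deductible plan dominated:", dominated)  # prints: True
```

In this toy setup the best case for the low-deductible buyer (no claims at all) still costs $500 extra, and the worst case still costs $200 extra: there is no outcome where it wins.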
And the private sector is full of this kind of intentional sludge. The authors talk about the unsubscribe trap. Oh, we've all been there. It takes two clicks to sign up for a service online. But to cancel, you have to call an office in another country during their business hours and give 14 days' notice. That asymmetry is totally intentional. It's sludge designed to profit from your inertia. We see it with gyms, cable companies. It's basically a form of price discrimination. They're charging the highest prices to their most inert customers. But the flip side of this is the immense power of sludge reduction. Just removing friction can be an incredibly powerful intervention. There's this fantastic experiment with high-achieving, low-income students in Michigan. The goal was to get more of them to apply to the University of Michigan. And they knew these students were often put off by the complex, scary financial aid forms. That was the sludge. So for one group of students, they just removed the sludge. They sent them a letter guaranteeing them financial aid up front so they wouldn't have to fill out the forms. And the result was incredible. The application rate for the control group, who still had to deal with the sludge, was 26%. For the group with the sludge removed, it jumped to 68%. All without changing the actual financial incentive. They just removed the friction, the administrative burden. Which brings us to tax sludge. Ah. The U.S. tax code is a world champion of sludge. Form 1040 had 108 pages of instructions in 2019. Americans spend, on average, 13 hours and $200 a year just to file their taxes. Compare that to a country like Sweden, where 80 percent of people file in minutes on their phone, for free, because the government already has all the information. Adopting a pre-filled tax return system would be one of the single biggest sludge reduction policies imaginable.
So if sludge is the problem with complex processes, what's the solution for complex choices? Like picking a cell phone plan out of hundreds of options. The answer is smart disclosure. Which is all about the timely release of complex data in a standardized, machine-readable format. And a key part of this is making sure the metrics themselves are easy for humans to understand. The example of fuel economy is perfect. Right. In the U.S., we use miles per gallon, MPG. But our brains don't process it correctly because the gains aren't linear. Going from 18 to 28 MPG saves you way more gas than going from 34 to 50 MPG, even though the second jump looks bigger. Our brains just can't do that math intuitively. The European metric, liters per 100 kilometers, is actually better because it is linear. Smart disclosure means using the metric that a human can actually understand. And the ultimate goal of all this is to power choice engines. Like the travel sites we use to book flights. But for everything. For energy, insurance, banking. Which leads to their boldest proposal in this area. That we, as consumers, should own our own usage data. You should own your Netflix history, your phone data history, your banking history. Why? So that you can give it to a third-party choice engine that can analyze it and give you a personalized recommendation for a better, cheaper plan without you having to do all the work. It's the ultimate anti-sludge tool. It gives our reflective system a powerful tool to fight back. So beyond money and health, there are these incredibly powerful social nudges. We are all fundamentally influenced by what we think other people are doing. And just informing people about the actual social norms can be a huge nudge, like telling taxpayers in Minnesota that more than 90 percent of their neighbors pay their taxes in full.
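Returning to the fuel-economy point for a moment: the MPG illusion is easy to verify numerically. A minimal sketch, assuming a 10,000-mile year (the distance is arbitrary; the asymmetry holds at any mileage):

```python
MILES = 10_000  # assumed annual driving distance

def gallons_used(mpg: float) -> float:
    """Gallons burned driving MILES at the given fuel economy."""
    return MILES / mpg

def l_per_100km(mpg: float) -> float:
    """Convert US miles-per-gallon to the European litres-per-100-km metric."""
    return 235.215 / mpg  # standard MPG <-> L/100km conversion constant

# The "small-looking" jump saves roughly twice the fuel of the "big-looking" one.
save_low = gallons_used(18) - gallons_used(28)   # ~198 gallons saved
save_high = gallons_used(34) - gallons_used(50)  # ~94 gallons saved
print(f"18 -> 28 MPG saves {save_low:.0f} gallons over {MILES} miles")
print(f"34 -> 50 MPG saves {save_high:.0f} gallons over {MILES} miles")

# In L/100km, the same comparison reads the way intuition expects,
# because fuel consumed is directly proportional to the number shown.
for mpg in (18, 28, 34, 50):
    print(f"{mpg} MPG = {l_per_100km(mpg):.1f} L/100km")
```

Because gallons burned is 1/MPG, equal-looking MPG gaps hide very unequal fuel savings; L/100km puts consumption itself on the label, which is exactly the smart-disclosure point.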
And that simple piece of information made people less likely to cheat because it corrected their misperception that everybody was cheating. The UK's Behavioural Insights Team used this brilliantly. The most effective letter they sent to people with tax debts had this two-part social nudge. First, it established the positive norm: nine out of ten people in the UK pay their tax on time. Then it applied a negative identity framing. It said: you are currently in the very small minority of people who have not paid us yet. That one-two punch was so effective, it increased payment rates by five percentage points for almost no cost. And another amazing use of identity was the Don't Mess With Texas anti-litter campaign. Right. The traditional ads weren't working on the main culprits, who were young men. So instead of lecturing them, they appealed to Texas pride. They got Dallas Cowboys players to growl the slogan. And it completely reframed the social norm: a real Texan does not litter. And they reduced roadside litter by 72%. You also see this with massive shifts in public opinion, like on same-sex marriage. The change between when the book first came out in 2008 and the Supreme Court ruling in 2015 was just incredibly rapid. And it wasn't driven by new laws or economic incentives. It was fueled by informational and reputational cascades. As more and more people came out of the closet, it created these powerful small-scale informational nudges within social circles. Exactly. It accelerated the change in social norms far faster than anyone predicted. OK, so a philosophy this influential is bound to have critics, and the authors rightly include a chapter for the complaints department. And the biggest fear, the most common metaphor, is the slippery slope. The fear that gentle nudges will inevitably lead to heavy-handed mandates. Justice Scalia's famous fear that if the government can make you buy health insurance, it can make you eat broccoli.
And isn't sludge an admission that this can be abused? If companies can use sludge to trap you, isn't that already on the slope? That's the key distinction the authors make. They say the real target of our concern should be mandates and coercion. As long as you can easily say "no thanks" to a nudge, the risk is minimal. But sludge is the bad side of choice architecture. They gave it a name precisely so we can identify and fight against that manipulative, freedom-reducing friction. Sludge is the problem, not the gentle nudge. What about the critique of manipulation? That nudges only work because you don't know you're being nudged. The authors argue that most good nudges aren't manipulative. Things like reminders or calorie counts are just helping your System 2 achieve its own goals. Manipulation is when the design is hidden or opting out is hard, which, again, is sludge. And they stress that transparency is key. And the research actually shows that telling people about a nudge doesn't make it less effective. In fact, sometimes it can even enhance the effect, because it builds trust. Okay, so let's end with a really interesting, more recent application of these ideas. It's almost an inversion of the whole theory: applying choice architecture to heritage management. Right. So we usually think of heritage as this passive thing, old buildings, museum artifacts. But this new research paper looks at the historic environment itself as a form of choice architecture that is actively influencing us. So heritage becomes a verb, something that is done, not just a product. It's a really interesting parallel to nudge as a verb. And instead of using nudges to change visitor behavior, the paper uses nudge theory as an explanatory tool to decode how heritage is already affecting society, often unintentionally. For example, it analyzes how the way artifacts are arranged in a museum, the choice architecture of the display, shapes social norms and our interpretation of the past.
It helps explain why the public might draw unexpected conclusions, all because of how the curator structured the experience for their System 1. We've covered a huge amount of ground today, from the intellectual journey of Sunstein and Thaler to the nuts and bolts of our own psychological flaws. And I think if there's one core takeaway, it's that libertarian paternalism is not an oxymoron. It's a framework that lets us respect freedom of choice while also acknowledging that as humans, we often fail to choose what's in our own best interest. And by making it easier to do the right thing with low-cost nudges, and by removing friction with sludge reduction, we can improve human welfare as judged by the people themselves. And the widespread adoption of these ideas across the political spectrum, all over the world, really proves their utility. Which brings us all the way back to that extreme stickiness of our choices. Those defaults we picked years ago, that 3% savings rate, that low-deductible insurance plan, they can last almost forever because of inertia. So if inertia is that powerful, what's the ultimate tool for a benevolent choice architect? How often should we be forced to do a mandatory reboot? A restart. Something that forces us to actively re-examine those old choices before inertia locks us into a suboptimal future. The authors even suggest this for investment plans. Periodically force people to re-choose without reminding them of their current holdings, so it's a genuine, reflective choice. The default is the king of the status quo. But maybe, just sometimes, the choice architect needs to take the default away and force our System 2 to wake up and take the wheel. It's a really provocative final thought. What choices in your own life right now are being run by a 10-year-old autopilot setting? And what might it be costing you?