
Heliox: Where Evidence Meets Empathy
Join our hosts as they break down complex data into understandable insights, giving you the knowledge to navigate our rapidly changing world. Tune in for a thoughtful, evidence-based discussion that bridges expert analysis with real-world implications. An SCZoomers Podcast.
Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe Easy, we go deep and lightly surface the big ideas.
Curated, independent, moderated, timely, deep, gentle, evidence-based, clinical and community information regarding COVID-19. Publishing since 2017 and focused on COVID-19 since February 2020, with multiple stories per day, it has built a sizeable, searchable base of stories to date: more than 4,000 stories on COVID-19 alone, and hundreds of stories on climate change.
Zoomers of the Sunshine Coast is a news organization with the advantages of deeply rooted connections within our local community, combined with a provincial, national and global following and exposure. In written form, audio, and video, we provide evidence-based and referenced stories interspersed with curated commentary, satire and humour. We reference where our stories come from and who wrote, published, and even inspired them. Using a social media platform means we have a much higher degree of interaction with our readers than conventional media, and it provides a significant, positive amplification effect. We expect the same courtesy of other media referencing our stories.
Heliox: Where Evidence Meets Empathy
🧠Your Brain’s Secret Saboteurs: How Hidden Biases Hijack Your Decisions
Check out this episode's Substack to go deeper
You think you’re in control. You weigh pros and cons, mull over options, and make choices you’re sure are rational. But what if your brain is quietly betraying you?
What if the very machinery of your mind—those lightning-fast instincts and gut feelings—is steering you wrong, and you don’t even notice? Welcome to the unsettling world of cognitive biases, where your brain’s shortcuts can lead you into traps you never saw coming.
This isn’t just academic fluff; it’s the invisible scaffolding of every decision you make, from picking a job to trusting a news headline. And the stakes? They’re higher than you think.
Thinking, Fast and Slow by Daniel Kahneman
This is Heliox: Where Evidence Meets Empathy
Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe Easy, we go deep and lightly surface the big ideas.
Thanks for listening today!
Four recurring narratives underlie every episode: boundary dissolution, adaptive complexity, embodied knowledge, and quantum-like uncertainty. These aren’t just philosophical musings but frameworks for understanding our modern world.
We hope you continue exploring our other podcasts, responding to the content, and checking out our related articles on the Heliox Podcast on Substack.
About SCZoomers:
https://www.facebook.com/groups/1632045180447285
https://x.com/SCZoomers
https://mstdn.ca/@SCZoomers
https://bsky.app/profile/safety.bsky.app
Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs
Welcome to the Deep Dive. We sift through, well, a whole lot of information to bring you the core insights, the things you really need to know to understand the world maybe a little bit better. And today, yeah, we're tackling a really big one. How we actually think. Huge. Our mission today is to explore... the kind of hidden architecture of our minds. You know, those mental shortcuts, the biases that shape our judgments, our decisions, often without us even realizing it. Right. We've got a really fascinating collection of work here that dives deep into the science of thinking, and hopefully by the end of this, you should have a much clearer picture of your own sort of internal workings. Absolutely. We're going to unpack some truly mind-bending stuff. Like, did you know your brain basically has two different operating systems? Or that it often answers a simpler question than the one you're actually asking? Yeah, the substitution thing. Exactly. And maybe most surprisingly, we'll explore why our intuition, which feels so powerful, why it can sometimes lead us, well, pretty far astray. Okay, let's dive in. So one of the most useful frameworks, I think, for understanding how we think involves this idea of two systems. Okay. Think of System 1 as your brain's fast lane. It's intuitive, emotional, operates largely automatically. Right. The gut feeling. Exactly. System 2, on the other hand, that's the slow, deliberate, logical part of your mind. Right. It kicks in for more complex tasks. So System 1 is doing most of the heavy lifting day to day, this quick, almost automatic thinking. Yeah, pretty much. When does our slower, more analytical System 2 actually take the wheel then? Well, System 2 gets engaged when we run into something that needs focused attention, real effort. Like what? Okay, take a multiplication problem, like, say, 17 times 24. Okay. You can actually feel your concentration sharpen as you work through the steps. Definitely. That's System 2 kicking into action. Right, that deliberate mental effort. What else triggers it? Surprise is a big one. Surprise. Yeah. Think about that famous experiment, the one with the basketball game and the gorilla. Oh, yeah, where people totally miss the gorilla suit walking through. Exactly. Their attention is so focused on counting passes, which is a System 2 task, by the way. Right. So System 1's expectations about what should be in the scene aren't really challenged until something truly bizarre happens, like a gorilla. Wow. And that jolt of surprise, that activates System 2 to try and figure out what on earth is going on. That's amazing how focused attention can just blind us like that. And System 2 isn't just about math problems and gorillas, right? Yeah. It's also key for self-control. Absolutely, yeah. Resisting a tempting impulse, like reaching for that extra cookie. Or maintaining focus on a difficult task, or even just being polite when you'd rather say something else entirely. These all require the deliberate effort of System 2. But here's a really key insight. System 2 has limited resources. Limited? Yeah. Think of it like a mental energy bar that can get depleted. Okay, limited resources. How does that affect us day to day, like our thinking and decisions? Well, imagine trying to solve that multiplication problem again, 17 times 24. Okay. But this time, you're also trying to walk down a crowded sidewalk and maybe keep up a conversation at the same time. Yeah, that's not happening. Right.
You'll probably find that one or more of those tasks will suffer. It shows how these demanding mental activities, they all draw on the same limited pool of System 2 resources. So a key takeaway there is? Our capacity for deliberate thought, for self-control, it's finite. So we're much more prone to relying on that quick, intuitive System 1 thinking when we're mentally tired or distracted. So we have these two systems, the fast one and the slow one. Now let's get into when the fast one, System 1, takes over using these mental shortcuts. Okay. Heuristics, you call them. Exactly. Heuristics. So when we're faced with a complex or difficult question, our minds often pull this clever switcheroo. A switcheroo. Yeah. We unconsciously substitute the hard question with an easier related one. This is really the underlying mechanism behind a lot of our intuitive judgment. Can you give us a clear example, like from everyday life? Sure. Let's say you're thinking about investing in a startup. The real question is, you know, what's the long-term financial viability of this company? Right. Which is super complicated. Incredibly complex. Requires serious research. But your System 1 might just jump to answer a much simpler question, like, do I like the founder's energy? Do I like their vision? Ah, OK. If you get a good gut feeling about the person, that feeling might subconsciously sway your investment decision, even if maybe the underlying financials aren't that solid. That's substitution. That makes total sense. Our brain just wants the easier path, doesn't it? It really does. You mentioned the availability heuristic earlier. What's the core idea there? Right. Availability. That's a mental shortcut where we estimate how likely or frequent something is based purely on how easily examples of it come to mind. A lot of the early research in this whole area really highlighted just how much this one heuristic influences, say, our risk assessments, other judgments too. So if I've just seen a bunch of news reports about, I don't know, plane crashes, I might overestimate the actual risk of flying, even though statistically it's very safe. Exactly. The vividness, the ease with which those stories come to mind. That can inflate your perception of the probability. Right. Now, let's look at how System 2 sometimes just fails to check System 1 properly. Okay. Think about the classic bat and ball puzzle. You've probably heard this one. A bat and a ball cost $1.10 total. The bat costs $1 more than the ball. How much does the ball cost? Okay. The first thing that jumps into my head is 10 cents. Almost everyone says 10 cents. It's the intuitive answer. But it's wrong, isn't it? It's wrong. Because if the ball is 10 cents, the bat is a dollar more. So that's a dollar ten. Which makes the total a dollar twenty, not a dollar ten. Right. System 2, with just a tiny bit of deliberate thought, can verify that. The fact that so many people get it wrong shows what we call the law of least effort. The law of least effort. Yeah. We often just settle for the first plausible answer System 1 spits out without engaging System 2 to do the double checking. It's amazing how often our brains are just happy with good enough. Yeah. Now, you mentioned associative activation. What's that about? Ah, associative activation. It's fundamental to how System 1 works. Yeah. Imagine you read the word doctor. Okay. What other words or ideas immediately pop into your head? Uh... nurse, hospital, sick, medicine, stethoscope. Exactly.
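A quick aside for readers following along: the bat-and-ball arithmetic from a moment ago is easy to check for yourself. A minimal sketch (not from the episode, just a reader's verification, working in cents to keep the numbers exact):

```python
# ball + (ball + 100) = 110  ->  2 * ball = 10  ->  ball = 5 cents
ball = (110 - 100) // 2          # 5 cents
bat = ball + 100                 # 105 cents
print(ball, bat, ball + bat)     # 5 105 110 -- checks out

# The intuitive answer fails the same check:
wrong_ball = 10
print(wrong_ball + (wrong_ball + 100))   # 120 cents, not 110
```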
It happens super fast, automatically. Seeing or hearing one word or idea activates this whole network of related concepts in your mind. It creates what we call associative coherence. So it's like our thoughts are all linked up in this giant web, and activating one node lights up others nearby. That's a great way to put it. Yeah. And this leads directly to the phenomenon of priming. Priming. Okay, how does that work? Priming is when exposure to one thing, a word, an image, an idea, unconsciously influences how you respond to something else later on. Example? Okay. There was this experiment where people were briefly shown words related to being elderly, things like wrinkled, gray, Florida, slow. Afterwards, in what they thought was a totally separate task, these participants were timed walking down a hallway. Okay. And guess what? They walked slower. They walked slightly slower than a control group who hadn't seen the elderly words. They weren't aware of the connection at all, but that earlier exposure subtly influenced their physical behavior. Wow. That's subtle. It is. It shows our actions, even our emotions, can be influenced by these subtle cues we're not even conscious of. There's also this fascinating related idea called the Lady Macbeth effect. Lady Macbeth? Like Shakespeare? Kind of. The idea is that feeling guilty about something can trigger a specific desire for physical cleansing, often related to the part of the body involved in the sin. Huh. So if you lied, you might feel an urge to wash your mouth out. Something like that, yeah. The desire for cleansing is specific. Now, a lot of people hear about priming studies and are skeptical. Yeah, it sounds a bit like science fiction sometimes. I get that. But the effects, while not always huge, are pretty robust across many studies. We're just not always the rational, fully conscious actors we think we are. Clearly. Okay, let's move on to cognitive ease. What is that feeling and how does it affect our judgment of truth? Cognitive ease is that subjective feeling of fluency. It's the sense that information is being processed smoothly, effortlessly. Okay, like something just feels easy to grasp. Exactly. And interestingly, we tend to equate that feeling of ease with truth, with believability. So if something feels easy to understand, we're more likely to think it's true, even if it's not. Precisely. Think about a simple statement. If it's written in a really clear, easy-to-read font, versus a font that's kind of fuzzy or difficult to decipher, people are more likely to believe the statement written in the clear font, even if the actual content is identical. No way. Yeah. That feeling of fluency, that ease, it signals to System 1, everything's okay here, no need to bother the boss. You know, System 2. So System 2 doesn't get called in to scrutinize it. Right. Okay, so what happens when we encounter information that's actually false, but it's presented in a way that gives us cognitive ease? Well, consider the Moses illusion. The Moses illusion. Yeah. If I ask you, how many of each animal did Moses take on the ark? What's your first reaction? Uh, two. But wait, it wasn't Moses, it was Noah. Exactly. But many people automatically answer two without even noticing the name is wrong. Why? Because the question fits the familiar biblical context. Moses, ark, animals. Right. It creates cognitive ease. System 1 just sort of smooths over the incorrect detail because the overall story feels right. Wow.
Now, if you hear something truly jarring, like, "Abraham Lincoln invented the telephone," your System 1 will likely register that clash much faster. Okay. So cognitive ease can make us kind of gullible if something fits a familiar pattern. It can, yeah. It's like a mental shortcut for checking plausibility. You also mentioned the illusion of causality. What's that? Right. Experiments by a Belgian psychologist named Albert Michotte way back showed we have this incredibly strong tendency to perceive cause and effect, even in really simple animations. Like shapes moving on a screen. Exactly. You see one square move, bump into another square, and the second square moves off. We don't just see two separate movements. We perceive the first square causing the second one to move. We see pushing. Yeah. We see causality directly. It seems to be a fundamental aspect of how System 1 operates, automatically, and it's even present in infants. So our brains are basically hardwired to look for cause and effect, even if it's not really there. Seems like it. And related to that, System 1 is generally quite gullible. It's biased towards believing. Believing first, asking questions later. Pretty much. Doubting, questioning, that requires the effort of System 2. And as we said, System 2 can be, well... lazy or busy with other things. Right. So when our System 2 is engaged elsewhere, or maybe we're just tired, we become more susceptible to persuasive messages, even weak ones. That's a bit worrying, thinking about how easily we might be influenced when our guard is down. Okay, let's move to this principle you mentioned. What you see is all there is. WYSIATI. WYSIATI, yeah. It's a really core feature of System 1. It basically means that our immediate understanding, the judgments we form, they're heavily based on the information that's readily available right now. And we don't really think about what's missing. We often fail to account for missing information. This reliance on just what's in front of us, it contributes to that cognitive ease we talked about, and it lets us make quick decisions even with incomplete data. So we build a story based on the pieces we have, and that story feels complete even if it's not. Precisely. And WYSIATI is kind of the foundation for several really common biases. Like what? Okay, take overconfidence. Ah, yes. The confidence we feel in our beliefs often has more to do with the coherence, the quality of the story we can tell ourselves based on the available information, than it does with the actual quantity or quality of the evidence supporting it. Right. We just neglect the possibility of crucial missing pieces because, well, what you see is all there is. That explains why someone can be so incredibly certain about something even when they know very little. What other biases come from WYSIATI? Framing effects are a huge one. Framing, like how you present information? Exactly. Different ways of presenting the exact same facts can trigger drastically different emotional responses and therefore different choices. The example? The classic one is medical treatments. Saying 90% of patients survive this procedure. Sounds pretty good. Right. Much more reassuring than saying 10 percent of patients die from this procedure, even though it's the same statistic, identical statistics. But the frame, survival versus mortality, changes everything. Same with food labeling. 90 percent fat free sounds way better than contains 10 percent fat. Wow. So the key insight is?
The way information is framed massively impacts our perception and decisions because we focus on what's presented, what we see, and we tend to neglect what's left out or framed differently. It really is all about perspective. Yeah. And how does WYSIATI connect to base rate neglect? Right. Base rates. Remember the description of Steve? Shy, tidy, loves order. Yeah. The librarian or farmer question. Exactly. When asked if Steve is more likely a librarian or a farmer, most people jump to librarian because the description fits the stereotype. It's representative. But there are way more farmers than librarians. Vastly more. That's the base rate. But the vividness of the description, the individual information, what we see, completely overshadows the statistical base rate, which we don't immediately see or think about. WYSIATI again. So the compelling details just push aside the probabilities. Okay. Let's loop back to substitution, but specifically how it works in making judgments. Okay. So as we said, substitution is answering an easier question instead of the hard one we were actually asked. Right. In judgment, we talk about the target question. That's the complex assessment we're supposed to be making. And the heuristic question, that's the simpler related question our System 1 intuitively answers instead. Can you give another concrete example of that in judgment? Sure. Imagine looking at a picture of someone who looks really happy and successful. If you're asked, how satisfied is this person with their life overall? That's the target question. Pretty complex. Yeah. Hard to know from a picture. Exactly. But your System 1 might automatically answer the easier heuristic question. How happy does this person look right now? Oh, OK. That current expression of happiness gets substituted for the much broader, harder assessment of overall life satisfaction. It's like our brain grabs the most obvious piece of info and uses it as a stand-in for the harder evaluation. Okay, let's shift gears slightly. How we perceive randomness, specifically this law of small numbers. Right, the law of small numbers. It describes our tendency really to have excessive confidence in conclusions drawn from small samples. So we think small groups should perfectly mirror the larger population. We kind of intuitively expect them to, yeah. We underestimate the role of random chance in small samples. How could that lead us wrong? Can you give an example? Okay, consider kidney cancer rates across different counties in the US. You might notice that the counties with the very highest rates and also the counties with the very lowest rates are often those with really small populations. Huh. Why would that be? Well, it's tempting to start looking for specific causes, right? Yeah. Maybe something in the water or a particular lifestyle factor in those specific counties. Yeah, you'd want an explanation for the extremes. But the much more likely explanation is just statistics. Yeah. Random variation. In small populations, random fluctuations naturally lead to more extreme outcomes, both high and low. These extremes are often just statistical artifacts, not proof of some underlying cause. So the key insight is, be really careful about drawing strong conclusions from small amounts of data. Exactly. Random chance plays a much bigger role than we intuitively think. We seem hardwired to see patterns and causes, even when it might just be random noise. What about the "hot hand" idea in sports? Is that related?
Oh, absolutely. The belief that a basketball player who's just made, say, three shots in a row is hot, and therefore more likely to make the next shot. Yeah, everyone believes that. It's a classic example. It stems from misperceiving randomness and this law of small numbers. When statisticians analyze huge amounts of basketball data, they consistently find that sequences of makes and misses are largely random. A player's chance of making the next shot is generally independent of whether they made the last few. But it feels so real. It feels real because we see a pattern and immediately assume a cause, the player got hot. We struggle to accept that random sequences often contain streaks just by chance. Fascinating how our intuition clashes with statistics there. Okay, let's talk about anchoring effects. How do those mess with our estimates? Anchoring is this bias where an initial piece of information, the anchor, has this disproportionately large influence on our subsequent judgments and estimates. Even if the anchor is totally irrelevant. Even if it's completely arbitrary or irrelevant. Can you give a real-world example? How does anchoring affect us? Okay, imagine you're buying a house. The seller's initial asking price, even if it's way higher than the house is actually worth, that number acts as an anchor. It unconsciously pulls your own valuation, your own offer, upwards. Even if you know it's too high. Even if you consciously try to ignore it. Research involving real estate agents, people who should know better. Yeah, experts. They were shown houses, given an asking price, which was the anchor, and asked to estimate the house's value. The asking price massively influenced their valuations, even though they insisted they weren't affected by it. Anchoring is incredibly robust. Wow. So even professionals fall for it. Are there ways to fight back against anchoring? Yes, thankfully. Yeah. One effective strategy is to actively, deliberately search for arguments against the anchor. Okay. So if you're given that high asking price for the house... consciously think of all the reasons why it might be worth less. What are the flaws? What did comparable houses sell for? Actively thinking the opposite helps break the anchor's grip. Makes sense. Like pushing back mentally. Exactly. And in negotiations, if you can, try to make the first offer, because that first number often becomes the anchor for the rest of the discussion. Good tip. So the key insight is just being aware that any number you hear can act as an anchor. Pretty much. Be aware. And if the stakes are high, you need to mobilize System 2 to consciously challenge that anchor and consider alternatives. Right. OK, you mentioned revisiting the availability heuristic and fluency. How did those connect again? OK, so we first said availability is judging frequency by the ease of recall. Right. Easy recall, more frequent. But then researchers found this interesting twist. Sometimes, if you struggle to recall examples, it can paradoxically make you judge the thing as less frequent or less characteristic of you. Wait, struggling makes you think it's less likely? How does that work? Okay, imagine I ask you to list, say, 12 times you acted assertively. 12? That sounds like a lot. That would be hard. Exactly. It would likely feel quite difficult for most people. And because the difficulty of recalling is surprisingly high, you might subconsciously conclude, hmm, maybe I'm not actually that assertive after all.
So it's not the number you recall, but the feeling of difficulty while trying. Precisely. The subjective experience of retrieval fluency, or lack thereof, becomes a piece of information itself. It's sometimes called the unexplained unavailability heuristic. That's a really subtle but important point. The feeling itself is data. Okay, let's quickly contrast how the public and experts might look at risk differently. Generally, yeah, the public's perception of risk tends to be more swayed by emotional factors, vivid stories, things that are easily available in memory. Like we discussed with plane crashes. Exactly. Whereas experts tend to lean more on statistical data, numbers, probabilities. But, and this is important, experts aren't immune to biases either. They just might be more aware or use different heuristics. So a scary but rare disease outbreak might loom larger for the public than, say, the baseline risk of heart disease, even if the stats show heart disease is a much bigger threat. That's often the case. Yes. The vividness and emotional impact outweigh the numbers for many people. OK, let's move to another big heuristic. So, representativeness. How does this guide our probability judgments? Right. Representativeness. So when we have to judge the probability that something, a person, an event, whatever, belongs to a certain category, our minds often use a shortcut. We assess how similar or how representative that thing is of our stereotype or mental image of that category. So we judge based on resemblance to a type. Exactly. Think back to Steve, the shy, tidy guy. Librarian versus farmer again. He seems more representative of our stereotype of a librarian than a farmer. So intuitively, we think librarian is more probable, even though we're ignoring the base rates. And that brings us to the famous Linda problem. Linda is described as smart, outspoken, deeply concerned with discrimination and social justice in college, and a participant in anti-nuclear demonstrations. Okay, it paints a pretty clear picture. Right. Then you ask people which is more probable. A, Linda is a bank teller. Or B, Linda is a bank teller and is active in the feminist movement. Option B sounds more like Linda based on the description. Exactly. And a huge number of people choose B because the description is highly representative of someone active in the feminist movement. But wait, logically. Logically, option B is a subset of option A, right? Everyone who fits description B also fits description A. So B cannot be more probable than A. Right. It's a basic rule of probability. It is. But the representativeness heuristic is so powerful here, the similarity judgment just overrides the logic for many people, even people trained in statistics. Wow. So the key insight is that representativeness can make us violate fundamental logic because the stereotype fit feels so compelling. Absolutely. You mentioned that how you ask the question matters. Yes. If you frame the question differently, maybe asking people to estimate frequencies, like, out of 100 people like Linda, how many are bank tellers? How many are feminist bank tellers? People are much less likely to make the error. Why does that help? It seems like thinking in frequencies, maybe visualizing groups of people, makes the logical relationship, the fact that the subset can't be larger than the whole set, much clearer. It helps engage System 2. Interesting how format changes thinking. Okay, let's revisit base rate neglect again, but add this idea of causal stereotypes. Right. So we know vivid descriptions can make us ignore base rates.
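An aside on the Linda problem above: the frequency reframing the hosts mention makes the conjunction rule almost impossible to miss. A minimal sketch with invented counts, purely for illustration:

```python
# Imagine 100 women who fit Linda's description (the counts are invented).
bank_tellers = 5              # how many are bank tellers
feminist_bank_tellers = 4     # the subset who are also active feminists

# Every feminist bank teller is, by definition, a bank teller, so the
# conjunction can never be the more numerous (i.e. more probable) group.
assert feminist_bank_tellers <= bank_tellers
print(feminist_bank_tellers / 100, "<=", bank_tellers / 100)   # 0.04 <= 0.05
```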
Now think about a cab accident. Imagine one version just gives you statistics. 85% of cabs in the city are green, 15% are blue, and an eyewitness identified the cab in an accident as blue, but the witness is only 80% reliable. Okay, standard base rate problem. Now imagine a second version. Same statistics, but it adds a causal story. The green cab company is known for its safe drivers, while the blue cab drivers are often reckless and have more accidents. Ah, adding a reason. Exactly. When there's a causal explanation linked to the base rates, people are much more likely to actually use the base rate information in their judgment. So we pay more attention to the statistics if they fit into a plausible story. It seems so. Our minds just love causal explanations. Which also ties into this resistance we often have to changing our minds about, say, human nature, even when statistics contradict our beliefs. How so? Well, if some statistical finding clashes with our intuitive, often causal theories about why people behave the way they do, we tend to find ways to dismiss the stats or explain them away rather than update our fundamental beliefs about people. Our pre-existing stories are sticky. Right. Okay, let's tackle regression to the mean. This sounds statistical but important. It is. Regression to the mean is simply a statistical fact. In any sequence where measurements aren't perfectly correlated, which is almost always, extreme values tend to be followed by values closer to the average, or the mean. So if something is unusually high, the next time it's likely to be lower. And if it's unusually low, it's likely to be higher. Generally, yes. And crucially, this happens purely due to statistics. It doesn't necessarily mean there's a specific cause making it happen. Can you give a clear real-life example? Sure. Think about student test scores. Because extreme scores usually involve some element of luck, good or bad. Ah, okay. You had a really good day, guessed well on some questions, or a really bad day, blanked on things you knew. That luck factor tends not to repeat exactly the same way, so the score naturally drifts back towards the average. So the key insight is, extreme performances often have a luck component. And subsequent performance will likely regress towards the average, even without any intervention. Exactly. Don't be too quick to attribute cause when you see regression. You mentioned a classic misinterpretation of this in flight training. Yes. Flight instructors noticed that if they praised a student pilot after an exceptionally smooth landing, the student's next landing was often not as good. But if they yelled at a student after a really rough landing, the next landing was usually better. So they concluded? They concluded, wrongly, that praise makes pilots worse and criticism makes them better. But it was just regression to the mean. Almost certainly. The super smooth landing probably had some good luck involved, which wasn't likely to repeat. The very rough landing probably had some bad luck or unusual error, and the next attempt was simply more likely to be closer to the student's typical performance, which was better. A great reminder to be careful about assuming causality. You also said correlation and regression are two sides of the same coin. Yes. Whenever the correlation between two measures isn't perfect, meaning not plus one or minus one, you will observe regression to the mean. The weaker the correlation, the more regression you'll see.
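Looping back to the cab problem at the top of this segment: under the standard Bayes-rule reading of "80% reliable" (the witness is right 80% of the time, whichever colour the cab really is), the base rate turns out to matter more than the testimony. A minimal sketch of the arithmetic:

```python
# Base rates and witness reliability from the transcript.
p_blue, p_green = 0.15, 0.85
p_says_blue_given_blue = 0.80    # witness correct
p_says_blue_given_green = 0.20   # witness mistaken

# Bayes' rule: P(cab was blue | witness says blue)
numerator = p_blue * p_says_blue_given_blue
posterior = numerator / (numerator + p_green * p_says_blue_given_green)
print(round(posterior, 2))   # ~0.41 -- still more likely green, despite the witness
```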
If there's no correlation, the best prediction for the second measure is just the average, regardless of the first measure. Okay, that makes sense. Imperfect relationships, a regression. Now, how can we try to correct our intuitive predictions? You said they tend to be overconfident and too extreme. Right. Correcting them requires deliberately engaging System 2. It's effortful. The basic steps are, first, identify a relevant reference class. What are similar situations or cases we have data for? Okay, find a comparison group. Second, get the baseline statistics for that reference class. What's the average or typical outcome in those cases? Find the average. Third, use any specific information you have about the current case to make an adjustment from that baseline. But, and this is key, make that adjustment less extreme than your initial intuitive prediction. Pull it towards the mean. So anchor on the average, then adjust modestly based on specifics. That's the idea. Anchor your intuition in the data. And you said this takes effort, so it's most important when? When the stakes are high. When accuracy really matters. But there's a catch. Making statistically unbiased predictions often means you'll almost never correctly predict the truly rare extreme events. Predicting closer to the average is more accurate overall, but less exciting, maybe less satisfying than nailing that one long shot based on a gut feeling. Right, a trade-off between overall accuracy and capturing outliers. Okay, let's discuss the halo effect. What is that? The halo effect is this bias where our overall impression of someone or something, like a company, influences how we judge their specific traits or qualities. So if we generally like someone, we assume all their traits are good? Pretty much. We tend to like or dislike everything about a person, including things we haven't even observed. It creates a halo, positive or negative. Can you give an example? Well, the statement, Hitler loved dogs and little children, often feels jarring, right? Yeah, it seems contradictory. Because any positive trait feels inconsistent with the overwhelmingly negative halo we have for him. Conversely, think about a highly successful company like Google early on. The positive halo around their success might lead us to assume everything they did was brilliant, overlooking maybe mistakes they made or the role luck played. So the overall feeling colors the details. Key insight? Our general feelings about something create a bias, making us see individual aspects in a way that's consistent with that overall feeling, even if it's not objectively true. It's like our brain wants that consistent story again. OK, moving on to the illusion of skill, especially things like stock picking. Yes. Well, study after study shows that most individual stock traders actually do worse than if they just bought and held a simple index fund. Really? All that trading doesn't help. Often, the more actively people trade, the worse the results are, usually due to trading costs and timing errors. There's a strong illusion of skill. People think they can pick winners or time the market. What about the professionals, the fund managers? Even most professional investors fail to consistently beat the market average over the long term. It strongly suggests that a lot of what looks like skill in investing is actually just luck in a given period. That's pretty sobering for the finance world. You also mentioned Philip Tetlock's research on expert political forecasting. Yes.
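The prediction-correction recipe described just above (anchor on the reference-class average, then move toward your intuition only as far as the evidence warrants) can be written as one line of arithmetic. A minimal sketch, with invented numbers for illustration; the correlation value is something you would have to estimate:

```python
def corrected_prediction(baseline, intuitive, correlation):
    """Regress an intuitive prediction toward the reference-class average.

    baseline    -- average outcome in the reference class
    intuitive   -- your gut prediction for this specific case
    correlation -- how well the evidence actually predicts the outcome (0..1)
    """
    return baseline + correlation * (intuitive - baseline)

# Invented example: average GPA in the reference class is 3.0, your gut says
# 3.8, but the evidence you have only correlates ~0.3 with final GPA.
print(corrected_prediction(baseline=3.0, intuitive=3.8, correlation=0.3))  # 3.24
```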
Tetlock did incredibly extensive research tracking the predictions of political experts over decades. And what did he find? He found that while experts obviously know more than lay people, their ability to predict future political events was often, well, not much better than chance. Barely better than dilettantes in some cases. Seriously. And even more interestingly, the experts who were more famous, more confident in their predictions? They tended to be more overconfident and not necessarily any more accurate, sometimes less so. So more knowledge doesn't always mean better prediction and can even lead to more overconfidence. Exactly. Their knowledge can sometimes help them build more convincing but ultimately wrong stories, strengthening that illusion of skill. Wow. This leads perfectly into the power of algorithms. How do they stack up against human experts? Well, this goes back to the work of Paul Meehl in the 1950s. He reviewed studies comparing clinical predictions, made by expert psychologists or doctors based on interviews and intuition, with statistical predictions made using simple algorithms or formulas. And the results? Overwhelmingly, the algorithms did as well as, or often significantly better than, the human experts. Better than the experts? Algorithms are consistent. They apply the same rules and weigh the same factors every single time. They don't get tired, or biased by irrelevant details, or swayed by a compelling story like humans do. Even simple formulas based on just a few key predictors can be surprisingly accurate. Can you give a concrete example of a successful algorithm? A classic, really important one is the Apgar score for newborns. Oh, yeah, I've heard of that. It's a simple algorithm developed by Dr. Virginia Apgar. Nurses score newborns on five factors, heart rate, breathing, muscle tone, reflex response, color, right after birth. It gives a quick, objective measure of the baby's health and need for immediate care. It's saved countless lives by standardizing assessment. So a simple checklist basically outperformed subjective judgment. What about that attempt to use a formula for interviewing soldiers you mentioned? Right. This was in the Israeli army. They tried to implement an interview system where interviewers rated candidates on specific factual questions about their past behavior on six dimensions, and those scores were put into a formula to predict success. How did the interviewers react? They hated it at first. They felt it ignored their intuition, their ability to get a feel for the candidate. They much preferred making a global, intuitive judgment at the end. But did the formula work better? Yes. When they compared the predictive accuracy of the formula based on the specific trait ratings versus the interviewers' final intuitive judgments, the formula was significantly better at predicting which soldiers would perform well later on. Another win for the algorithm. It really seems like we should trust formulas more often. Yes. Now let's talk about something we all probably do. The planning fallacy. Ah, the planning fallacy, yes. This is our incredibly common tendency to underestimate the time, the costs, the risks involved in future projects, and to be overly optimistic about how well things will turn out. We focus on the best case scenario. We tend to focus on our specific plan working out smoothly. Okay. And fail to adequately consider all the things that could go wrong, the potential delays, the unexpected obstacles. We take an inside view. I feel seen.
I'm sure we've all started projects thinking, oh, this will only take a weekend, and it ends up taking weeks. Can you share that personal anecdote about the textbook? How did the planning fallacy play out there? Right. Years ago, I was part of a team writing a curriculum and textbook. Our initial timeline for finishing? Well, looking back, it was wildly optimistic. Based on? Based on our own enthusiasm and estimate of our productivity, the inside view. We knew, rationally, if we looked at the outside view, how long similar academic projects usually take. It would be much longer. Much, much longer. We even had data on that. But did we significantly change our optimistic plan? No. We fell right into the planning fallacy. And predictably, the project took years longer than initially planned and caused a lot of stress. So the key insight is, our initial plans are probably too optimistic. We need to actively seek out that outside view, look at data from similar past projects. Definitely. And actively think about potential pitfalls. Try to imagine ways it could fail. That sounds related to the premortem technique you mentioned later. Before we get there, what about that research on inventors? How does optimism fit in? Right. Thomas Åstebro's research. He studied thousands of inventors who had submitted their ideas for commercial assessment. Okay. Even when these inventors received strongly negative feedback from experts about the market potential of their invention, basically... being told it was unlikely to succeed commercially, a significant number of them remained highly optimistic and persisted with their projects anyway, often investing a lot more of their own time and money. So optimism kept them going, even against expert advice. Exactly. It highlights how optimism, while often essential for starting ventures, can also lead to a reluctance to cut losses and abandon failing projects. So optimism is a double-edged sword. It motivates but can also blind us. You also connected optimism to competition neglect. What's that? Competition neglect is when we focus too much on our own plans, our own strengths, our own product. And we fail to properly consider what competitors are doing or how many of them there are. Like we're the only player on the field. Sort of. Yeah. Think about someone opening a new restaurant. They might focus entirely on their great recipes, their chef, their decor. Right. And neglect to realistically assess how many other similar restaurants are already competing in the same neighborhood, or what those competitors might do in response. It's like planning in a vacuum. What happens when lots of people do this? It can lead to what economists call excess entry into markets. Too many businesses jump in, driven by individual optimism, which can drive down average profits and lead to higher failure rates overall. So bad for the average business owner, maybe? Potentially, yes. Though interestingly, this collective optimism might actually benefit consumers through more choice and lower prices. They become sort of optimistic martyrs for the economy. Huh. An interesting perspective. Now, how can organizations fight this optimism bias and planning fallacy? You mentioned Gary Klein's premortem technique. Yes, the premortem. It's a really clever and practical technique. How does it work? Okay. Imagine a team is about to launch a major project or make a big decision.
Before they finalize everything, the leader says, okay, let's imagine we're a year into the future, and this project has been a complete disaster, a total failure. Ouch. Okay. Now, everyone take five minutes and silently write down all the reasons why you think it failed. Ah, so you're looking back from a hypothetical failure. Exactly. It legitimizes doubt and criticism. By framing it as explaining a past failure, it overcomes the usual pressure to be optimistic and supportive. It encourages people to surface potential risks and problems they might have otherwise kept quiet about. That sounds really useful for getting potential problems out in the open beforehand. It really is. A simple way to inject realism and counteract overconfidence. Okay, let's shift again now towards how we value things and make choices involving risk. You mentioned Bernoulli's error and prospect theory. Start with Bernoulli. What was his key observation? Right. Daniel Bernoulli, way back in the 18th century, noticed that people generally don't value gambles based just on their expected dollar value. They're usually risk averse. Meaning they prefer a sure thing over a gamble, even if the gamble might pay out more on average. Exactly. He proposed that people evaluate outcomes based on their subjective psychological value. He called it utility, not just the money amount. And crucially, he said this utility has diminishing returns. Diminishing returns. So the difference between having $0 and $100 feels bigger than the difference between having $1,000,000 and $1,000,100. Precisely. That extra $100 means less psychologically when you're already wealthy. That was Bernoulli's big insight about utility. Okay, but you called it Bernoulli's error. What did he get wrong, according to Kahneman and Tversky? The key flaw they identified was that Bernoulli's theory didn't account for a reference point. A reference point. How does that change things? It changes everything. Prospect theory, which Kahneman and Tversky developed, argues that we don't evaluate outcomes based on absolute levels of wealth, like Bernoulli thought. Instead, we evaluate them as gains and losses relative to a specific reference point, usually our current situation or maybe an expectation. So it's not about how much money I have, but whether I'm gaining or losing compared to where I am now. Exactly. And prospect theory introduced two other huge concepts, loss aversion and diminishing sensitivity for both gains and losses relative to that reference point. Okay, loss aversion. That's a famous one, right? Losses hurt more than equivalent gains feel good. That's the one. The pain of losing, say, $100 is psychologically more powerful for most people than the pleasure of gaining $100. You can feel that intuitively. Like, if I offered you a coin flip, heads you win $100, tails you lose $100. Most people would turn that down. The potential gain doesn't feel like enough to compensate for the potential pain of the loss. That's loss aversion. Okay. And what about diminishing sensitivity for losses? Similar to gains, the difference between losing $10 and losing $20 feels bigger than the difference between losing $1,000 and losing $1,010. Right. And this combination, loss aversion and diminishing sensitivity for losses, explains why people sometimes become risk-seeking when faced with losses. How so? If you have a choice between a sure loss of $900, or a 90% chance to lose $1,000 and a 10% chance to lose nothing. Hmm. The sure loss feels really bad. Right.
Because of diminishing sensitivity, the extra potential loss of $100 in the gamble doesn't feel proportionally worse than the sure loss of $900. So many people will actually choose the gamble. They become risk-seeking to avoid the certain loss. Fascinating. It's all about how things are framed relative to that starting point. Let's dig into that more. Reference dependence and the status quo bias. Absolutely. Reference dependence is just fundamental. Our preferences aren't fixed and stable. They're heavily shaped by our current reference point or status quo. This is a huge departure from traditional economic theories that assume your preferences are constant regardless of your situation. Can you give some everyday examples of reference points shaping choices? Sure. Think about salary negotiations. Both the employer and the employee usually see the current salary or the previous contract as the reference point. The whole negotiation is then framed in terms of gains or losses relative to that point. Right. A raise is a gain. A pay cut is a loss from that baseline. Exactly. Or think about moving to a new city. You constantly compare the weather, the cost, the lifestyle to your old city. That's your reference point. And loss aversion often makes the downsides of the new place feel more significant than equivalent upsides compared to what you left behind. That definitely rings true. And this contributes to the status quo bias. Yes. Because losses loom larger than gains, we often develop a preference for just keeping things as they are, the status quo, even if an alternative might be objectively a bit better. Changing involves potential losses from the current state, which feels risky. Like we're anchored to what we currently have. There's an example with twins who get different inheritances. One gets cash, one gets stock. Even if they could easily trade to match their preferences, they often stick with what they initially received. Right. The status quo feels like the neutral reference point. We get attached to what we're given. Okay, what about fairness and entitlements in our decisions? Do we always act purely selfishly? Not at all. People have really strong intuitions about fairness. And these intuitions often lead us to act against our narrow economic self-interest. Like punishing someone even if it costs us. Exactly. Think about the ultimatum game. Or even simpler. Imagine your employer cuts your wages during a highly profitable year for the company. That would feel incredibly unfair. Right. Even if the reduced wage is still better than quitting and having no job, employees are likely to see it as deeply unfair. Right. They might retaliate by reducing effort, quitting, or protesting, actions that could harm them economically, because the violation of fairness feels so strong. Our sense of what we're entitled to, what's fair, is heavily shaped by reference points and social norms. And we treat actual losses differently than just not getting a gain, right? In terms of fairness. Yes, exactly. Imposing a loss, like cutting wages, is generally seen as much more unfair than failing to provide a gain, like not giving an expected raise. The reference point matters immensely for fairness judgments. Fairness is a powerful motivator. Now, how do framing effects fit back into prospect theory? Prospect theory provides the perfect explanation for framing effects.
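For readers who like to see the shape of the argument: a toy prospect-theory value function with diminishing sensitivity and loss aversion reproduces both choices discussed a moment ago, rejecting the even-money coin flip but preferring the gamble over the sure $900 loss. The parameter values below (an exponent of 0.88 and a loss-aversion factor of 2.25) are the commonly cited Tversky-Kahneman estimates, used here purely for illustration:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x relative to the reference point.
    alpha: diminishing sensitivity; lam: loss-aversion coefficient."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# 1. Even-money coin flip: win $100 or lose $100.
coin_flip = 0.5 * value(100) + 0.5 * value(-100)
print(round(coin_flip, 1))                    # negative -> the flip gets rejected

# 2. Sure loss of $900 vs. a 90% chance of losing $1,000.
sure_loss = value(-900)
gamble = 0.9 * value(-1000) + 0.1 * value(0)
print(round(sure_loss, 1), round(gamble, 1))  # the gamble comes out (slightly) less bad
```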
Because we evaluate outcomes as gains and losses relative to a reference point, and because losses hurt more than gains feel good, the way you frame the exact same options, emphasizing the gains versus emphasizing the losses, can completely flip people's preferences. Can you walk us through that disease outbreak example again using prospect theory? Sure. Remember, the choice is between programs to combat a disease expected to kill 600 people. Okay. Frame one, the gain frame. Program A saves 200 lives for sure. Program B has a one-third chance of saving all 600 and a two-thirds chance of saving no one. Here, people usually choose program A, the sure gain. Yes, they're risk averse for gains. Now frame two, the loss frame. Program C, 400 people die for sure. Program D, a one-third chance that nobody dies and a two-thirds chance that all 600 die. The exact same outcomes as A and B, just described differently. Exactly. But now, framed as losses, people tend to choose program D, the gamble. Why? Because the sure loss of 400 lives feels really bad, loss aversion. And diminishing sensitivity makes the risk of losing 600 instead of 400 seem less daunting, while the chance of losing no one is tempting. They become risk-seeking to avoid the sure loss. That's incredible. The same choice, just worded differently, leads to opposite preferences. It really highlights the power of the reference point, saving lives versus lives lost, and loss aversion. You even see different brain activity. Different brain regions light up depending on the frame. Yeah, neuroeconomic studies have shown that. Gain frames tend to activate areas associated with reward, while loss frames activate areas associated with negative emotion and risk. And we usually just accept the frame we're given, right? We don't automatically reframe. Reframing takes effort. It's a System 2 task. So most of the time, we passively accept the frame presented to us, which means whoever controls the frame can significantly influence the choice. That's a powerful realization, which leads nicely into nudges and choice architecture. Exactly. Nudges use these insights from prospect theory and behavioral economics to design choices, the choice architecture, in ways that gently steer people towards beneficial decisions without taking away their freedom to choose otherwise. It's called libertarian paternalism, right? Helping people without forcing them. That's the idea. You make the desired option the easiest or the default, leveraging biases like the status quo bias. What are some really common examples of nudges? Organ donation defaults are a huge one. Opt-in versus opt-out. Right. Countries where you have to actively opt in to be an organ donor have very low donation rates. Countries where the default is that you are a donor unless you actively opt out have dramatically higher rates. Just changing the default makes that much difference. Massive difference. Yes. People tend to stick with the default due to inertia and the status quo bias. Another big one is retirement savings plans. Automatic enrollment. Yes. Automatically enrolling employees into a savings plan, but giving them the option to opt out, leads to much higher participation rates than requiring them to actively sign up. It makes saving the easy default path. It shows how small design choices and how options are presented can have huge impacts on behavior. Okay, let's shift to a really fascinating distinction. The experiencing self versus the remembering self. Ah, yes. This is a crucial distinction really highlighted by Daniel Kahneman.
It's about the difference between our moment-to-moment experience of life, what it actually feels like to be alive from second to second. That's the experiencing self. Right. And then there's the remembering self. That's the part of us that looks back, evaluates past events, tells the story of our lives and makes decisions based on those memories. And these two selves, they don't always agree. They often don't. What we remember about an experience and how we judge it overall isn't necessarily a simple sum of how we felt moment by moment. How does the cold hand experiment show this? Okay, in this experiment, people put their hand in painfully cold water for two different trials. Ouch! Trial one, 60 seconds of painfully cold water, then hand out. Trial two, same 60 seconds, plus an extra 30 seconds where the water temperature was raised just slightly, still unpleasant, but less painful than the peak. So trial two was longer and involved more total pain overall? Yes. Objectively, more total discomfort. But when asked later which trial they would prefer to repeat? They chose the longer one, trial two. A significant majority chose trial two. Why? Because their remembering self didn't care so much about the total duration of pain. It focused heavily on two things, the peak level of pain, which was the same in both trials, and the end of the experience. Because trial two ended on a less painful note, it left a better final impression in memory. So the memory is dominated by the peak and the end. Exactly. This is called the peak-end rule. And the remembering self also exhibits duration neglect. The length of the experience had surprisingly little impact on the overall memory and evaluation of it. So our memories aren't like recordings, they're more like edited highlights, focusing on the intense bits and the finish. That's a great analogy. The remembering self represents experiences by prototypes, like an average of the peak and end moments, not by summing up all the moments. You mentioned examples like an opera or someone's life story. Right. Imagine a long opera that's mostly wonderful but ends with a terrible screeching sound. The remembering self says the ending ruined the whole evening, even though most of the moments were actually wonderful. And something similar happens with life circumstances. Once a change stops being new, it fades into the background of moment-to-moment experience. So even major life changes become less prominent in our daily thoughts. They become part-time states in terms of their impact on our mood and attention. While a paraplegic's life is profoundly changed, their experiencing self isn't necessarily unhappy every single moment. They adapt. They focus on other things. The same happens with positive things like marriage. The initial bliss fades into a new normal that doesn't dominate every conscious thought. So the experiencing self lives in the present and adapts, while the remembering self tells a story that might miss some of that adaptation. That's a good way to think about it. Okay, let's bring it all together. Final thoughts on these two selves and the two systems we started with. Well, this potential conflict between the remembering self and the experiencing self is really important. Sometimes the choices we make, guided by our remembering self trying to create good memories or stories, might actually lead to objectively worse experiences for our experiencing self, like choosing the longer cold hand trial. It raises questions about what we should prioritize when making decisions. A tricky question. And the two systems? The two-systems framework, System 1 fast, System 2 slow. It's a simplification, a metaphor, but a really useful one.
It helps us understand the interplay between automatic, intuitive thought and effortful, deliberate reasoning. And the key takeaway is that System 1, while efficient, isn't always reliable. Exactly. That feeling of cognitive ease from System 1 doesn't automatically mean something is true or right. And System 2, our checking mechanism, is often lazy or busy, too willing to just accept System 1's plausible-sounding answers. And all those features of System 1, WYSIATI, associative coherence, the heuristics. They all work together to generate these predictable biases, these cognitive illusions we've been talking about, which systematically influence our judgments and choices in ways we're often unaware of. So ultimately, understanding all this, how does it help us? I think just having this deeper awareness of how our minds work, the shortcuts, the biases, the illusions, can empower us. It allows us to potentially recognize these patterns in our own thinking, maybe pause, engage System 2 more deliberately, and ultimately make more informed, more rational choices. This has been an incredibly illuminating deep dive, seriously fascinating stuff about how we actually think. We've covered so much ground. The two systems, all those heuristics and biases. Representativeness, availability, anchoring. Right. And framing effects, prospect theory, loss aversion, and ending with this really profound distinction between our experiencing and remembering selves. Yeah. Our aim really was to give you a kind of framework, a set of tools for understanding these mental shortcuts we all use and the potential pitfalls that come with them. Because if you become more aware of these cognitive mechanisms... Hopefully, yeah, you can start to recognize them when they pop up in your own thinking, in how information is presented to you, and in the world around you. Definitely food for thought. And on that note, here's maybe a final thought for you, the listener, to ponder. Think about a recent, maybe significant, decision you made. Now, reflecting on everything we've talked about today, do you see that decision any differently? How might those two systems have been interacting in your brain? Were any particular biases, maybe anchoring or framing, playing a role? What might your experiencing self say about the moment-to-moment process, versus what your remembering self tells you about the outcome now? Yeah, it's definitely a question worth exploring as you continue to navigate the fascinating complexities of your own mind.
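One last aside for readers: the peak-end rule and duration neglect from the cold-hand experiment can be made concrete with a toy second-by-second discomfort model. The ratings below are invented purely for illustration:

```python
# Toy moment-by-moment discomfort ratings (0-10), one value per second.
trial_1 = [8] * 60                   # 60 s of very cold water
trial_2 = [8] * 60 + [6] * 30        # same 60 s, plus 30 s of slightly warmer water

for name, trial in [("trial 1", trial_1), ("trial 2", trial_2)]:
    total = sum(trial)                        # what the experiencing self endures
    peak_end = (max(trial) + trial[-1]) / 2   # roughly what the remembering self keeps
    print(name, "total discomfort:", total, "| peak-end score:", peak_end)

# Trial 2 has more total discomfort (660 vs 480) but a better peak-end score
# (7.0 vs 8.0) -- which is why most participants said they'd repeat it.
```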