Peter: Michael.


Michael: Peter.


Peter: What do you know about Blink? 


Michael: All I know is that my first impression of this book was that it was very dumb. And I'm looking forward to hearing about how first impressions are always true. 


[If Books Could Kill theme]


Peter: Alright. The book is Blink by Malcolm Gladwell. 


Michael: Heard of it? Heard of him. 


Peter: Big bestseller, at least a few million copies sold. It came out in 2005. It was the second big Gladwell. His first one was The Tipping Point, and later you had Outliers, which we did an episode on. And in between, you get Blink: The Power of Thinking Without Thinking. This is also the first Gladwell book, I think, just based on my casual perusal, to get some very negative early reviews. I think there was a good number of people at this point who were getting wise to the shallowness of the pop science airport book genre. 


Michael: Yeah, this is at the time when four of the top ten TED Talks are just thoroughly debunked. [Peter laughs] I'm sorry, grit is not a thing. 


Peter: I guess it's perfect for me because I am ever a contrarian. And I guess the twist here is that I like this book. 


Michael: Yeah.


Peter: [laughs] I think this book is pretty good. 


Michael: I think we should talk about your complicated feelings on this because you've been agonizing about this episode for weeks. 


Peter: I expected this book to be pseudoscientific trash. 


Michael: Yeah, same. 


Peter: There is a lot of pseudoscience in this book and we'll talk about it. But I don't think it's quite as bad as some of his other work, even though he makes a lot of these narrative mistakes. He touches on a bunch of really interesting science. And I'm not an expert. I might have missed something, especially because this book is just a torrent of anecdotes. [laughs] There are so many anecdotes. It's wild. But I didn't find too many significant cases of him misrepresenting the science. 


Michael: Or your brain is so cooked from hosting this podcast that you're like, only 30% of the anecdotes are fake, rookie numbers.


Peter: [laughs] Before we get into it, we have housekeeping, right? For the first time ever, we've decided to do one episode where we do a little bit of sellout shit because we wanted to plug our merch. 


Michael: We have merch.


Peter: T-shirts, sweaters, mugs, the whole usual stuff. ifbookspod-re.com, we'll put a link in the show description. And then also, because we're already doing sellout shit, subscribe to our Patreon.


Michael: Lean in, go for it.


Peter: We have monthly bonus episodes. 


Michael: Get Peter's newsletter. 


Peter: Oh yeah, my newsletter stringinamaze.net. What do you want to sell? Anything? 


Michael: [laughs] Nothing. I'm against this entire thing. But we're doing it because you wanted to mention it on the show. 


Peter: This is why you put out teasers of our bonus episodes that are 85% of the episode. 


Michael: [laughs] I mean, whatever. I feel like there is a huge contingent of listeners who just think that some episodes cut off with a little sound because nobody reads the descriptions of podcasts. So, I feel like some people are just like, well, that was an hour-long podcast that ended abruptly and have no idea that we have a Patreon or anything else. 


Peter: That's because Michael does teasers for these episodes that consist of just an unbelievable amount of the episode. 


Michael: I'm trying to get the word out. 


Peter: You don't do it for every episode, which is the only reason that we're still friends. 


[laughter] 


But I swear, this last one where you did 60 minutes of an 80-minute episode. 


Michael: I want to tell the people. 


Peter: I honestly thought that you were trying to pull some kind of prank on me. I thought that this was a joke that was specifically targeting me where you're like, let's see how much of this episode I can release for free without Peter getting mad. 


Michael: We did not talk about it specifically, but I just said, I'll schedule a teaser and then. [laughs] 


Peter: Yeah, you're just like, “I'll publish a teaser.” And I was like, cool. And then I saw that it was 60 minutes long. 


[laughter]


All right, enough sellout shit. That's it forever. We will never be mentioning our Patreon or our merch again.


Michael: And the teasers will get even longer until morale improves. 


Peter: People who don't listen to the Blink episode will never know about this shit.


[laughter]


Let's get into the book here. I am going to send the opening paragraph to you.


Michael: Okay. He says, “In September of 1983, an art dealer by the name of Gianfranco Becchina approached the J. Paul Getty Museum in California. He had in his possession, he said, a marble statue dating from the 6th century BC. It was what is known as a Kouros, a sculpture of a nude male youth standing with his left leg forward and his arms at his sides. There are only around 200 kouroi in existence, and most have been recovered, badly damaged or in fragments from grave sites or archaeological digs. But this one was almost perfectly preserved. It stood close to 7ft tall. It had a light-colored glow that set it apart from other ancient works. It was an extraordinary find. Becchina's asking price was just under $10 million.” We did spoil this one in the Outliers episode.


Peter: Yeah. And look, this is now a very famous anecdote because of this book.


Michael: Oh, yeah. 


Peter: So, the museum conducts a 14-month investigation into the statue's provenance. And everything looks great. The ownership records are consistent. A geologist says that the rock looks right, the style looks accurate to the period, but Gladwell goes on.


Michael: “The Kouros, however, had a problem. It didn't look right. The first to point this out was an Italian art historian named Federico Zeri. When Zeri was taken down to the museum's restoration studio to see the Kouros in December of 1983, he found himself staring at the sculpture's fingernails. In a way he couldn't immediately articulate, they seemed wrong to him. Evelyn Harrison was next. She was one of the world's foremost experts on Greek sculpture, and she was in Los Angeles visiting the Getty just before the museum finalized its deal with Becchina. Arthur Houghton, who was then the curator, took us down to see it, Harrison remembers. He just swished a cloth off the top of it and said, “Well, it isn't ours yet, but it will be in a couple weeks.” And I said, “I'm sorry to hear that.” What did Harrison see? She didn't know. In that very first moment when Houghton swished off the cloth, all Harrison had was a hunch, an instinctive sense that something was amiss.” Her first impression, Peter. Her blink was correct.


Peter: For Freakonomics, I did Freakonomics. And for this one, it's just Blink.


Michael: [laughs] You got blinked.


Peter: Every title we just do a stupid voice for to make fun of it. [Michael laughs] Several other experts have this same immediate reaction. The museum gets concerned. They convene a symposium on the issue. And then everything starts to fall apart. Not only are the experts skeptical, eventually some of the documentation used to verify the past ownership proves fraudulent. Other experts explain how certain forgery techniques could have fooled the geologist. It turns out the statue is very likely a fake. And the initial instinct was proven correct. Gladwell says, “In the first two seconds of looking, in a single glance, they were able to understand more about the essence of the statue than the team at the Getty was able to understand after 14 months.” Blink is a book about those first two seconds. 


Michael: I mean, we talked about this in the Outliers episode that these are people with specialized expertise. 


Peter: This is actually part of Gladwell's thesis. One, intuition is a very powerful thing. Two, it's something that is most effective when someone has an expertise in the subject. And then three, there's this ancillary idea that even though their intuitions are telling them something, these experts can't quite articulate the source of it.


Michael: Okay. 


Peter: For all of the faults of this book, and there are quite a few, all of these basic ideas have a lot of truth to them.


Michael: Also, because my brain is also cooked from hosting the show so much: like, discourse now is about egghead elites and how we should disregard anything that “experts” say. A book that is like, “Hey, experts are right about something, even if they can't describe it all that well,” is fine overall.


Peter: Right, right. 


Michael: Yeah, maybe. Maybe listen to somebody who knows more about Greek sculptures than you. 


Peter: I actually was surprised by this because I think a lot of the popular understanding of the book is just that intuition is magic. Not at all what Gladwell is saying. And in fact, there are various parts of the book where he basically says that untrained intuition is very dangerous and precarious. So, the heart of the book is the concept of thin slicing. Gladwell defines thin slicing as the ability of our unconscious to find patterns in situations and behavior based on a very narrow slice of experience. He's not making this up. This is a term coined a few decades back by a couple of researchers, Nalini Ambady and Robert Rosenthal. They conducted a study, which Gladwell talks about, where people were exposed to silent video clips of teachers teaching. 


The clips were very short, 2, 5, and 10 seconds. And then they were asked to rate the teachers across different metrics based on what they saw. The researchers took those ratings and then compared them with ratings that students gave those same teachers at the end of a whole semester. They found that the ratings based on even the two-second clips were strongly correlated with the ratings that the students gave teachers after an entire semester. People were basically observing these very basic cues, body language, posture, etc., and coming away with the same impression that someone did after actually being taught by these teachers. 


Michael: So, from this one study, I can make broad, sweeping conclusions. 


Peter: Absolutely. 


Michael: I am thin slicing from the research of thin slicing. 


Peter: I think what you can say based on this is that, one, our brains are making determinations very quickly, and, two, those determinations are lasting. So, in this particular case, there's a caveat here, which is that teacher ratings are subjective. It's not like the statue, where it's either fake or it's not. It's not like the unconscious mind here is identifying some objective truth in a matter of seconds.


Michael: Right. Because you could also make the argument that the kids in those classes were also basing their impression on body language, these kinds of superficial cues. 


Peter: Right. 


Michael: But then they just kept those impressions throughout the course of the semester, regardless of new information, yeah. 


Peter: So, that's the seminal thin slicing study. He tells another story. 


Michael: He says, “Imagine that I were to ask you to play a very simple gambling game. In front of you, there are four decks of cards, two of them red and the other two blue. Each card in those four decks either wins you a sum of money or costs you some money. And your job is to turn over cards from any of the decks one at a time in such a way that maximizes your winnings. What you don't know at the beginning, however, is that the red decks are a minefield. The rewards are high, but when you lose on the red cards, you lose a lot. Actually, you can win only by taking cards from the blue decks, which offer a nice, steady diet of $50 payouts and modest penalties. The question is, how long will it take you to figure this out? 


A group of scientists at the University of Iowa did this experiment a few years ago, and what they found is that after we've turned over about 50 cards, most of us start to develop a hunch about what's going on. We don't know why we prefer the blue decks, but we're pretty sure at that point that they are a better bet. After turning over about 80 cards, most of us have figured out the game and can explain exactly why the first two decks are such a bad idea.” Interesting. 


Peter: And the twist here is that researchers then hooked participants up to some machinery to measure their stress responses, like sweaty palms, and found out that people started to have stress responses to the red decks only 10 cards in, well before they know something's wrong, and even further before they can articulate what's happening.


Michael: So, it's like your body starts to notice, and then your instincts start to notice, and then finally your brain notices, and you're like, “Fuck those red cards.” 


Peter: Right. This isn't quite thin slicing in the same sense as I originally understood it. But it's a cool example of how your subconscious is pretty powerful and ahead of you at times. Before you know what's going on, some part of your brain is building the case and you're actually acting differently before you know why.


Michael: Right. Although, racing through my brain is a million examples of when that is not the case, right?


Peter: Right, right. 


Michael: I always think of those studies that find that conventionally attractive people are considered more honest. That's not us accurately assessing a situation. That's us believing that we're accurately assessing a situation, but we're actually operating on something else. 


Peter: You're going to think of all these different examples of irrationality and stuff. And I guarantee you Gladwell, somewhere in this fucking book, has this example because there are just so many anecdotes. He has a whole chapter on what he calls the Warren Harding error. Because Warren Harding was tall and handsome and people thought that made him presidential. And it turns out he's sort of a dipshit.


Michael: Okay, yeah. 


Peter: That is a good example of where this irrationality bubbles up. 


Michael: Right. 


Peter: The next big anecdote is about a psychologist, John Gottman, who specializes in analyzing couples. 


Michael: Oh, yeah, he's University of Washington. He's famous in Seattle. He does this marriage lab stuff.


Peter: Yeah, you know about the marriage lab. All right, perfect. Gladwell tells a story where a young couple visits Gottman's lab, and he hooks them up to some equipment to monitor their movements. They have a videotaped counseling session where they talk about little things like whether they want a dog for about 15 minutes. So, this is what Gladwell says, “Gottman has developed a coding system that has 20 separate categories corresponding to every conceivable emotion that a married couple might express during a conversation. Disgust, for example, is a 1, contempt is a 2, anger is 7, defensiveness is 10, whining is 11, on and on.” So, Gottman has his coding system. He runs all the interactions through it and then this gets translated into a row of 1800 numbers.


And then he has a system for analyzing those numbers, and “On the basis of those calculations, Gottman has proven something remarkable. If he analyzes an hour of a husband and wife talking, he can predict with 95% accuracy whether that couple will stay married 15 years later. If he watches a couple for 15 minutes, his success rate is around 90%.” This is very cool. This is very interesting. It's also definitely not thin slicing, right?


Michael: Yeah, yeah. 


Peter: Thin slicing, in the opening anecdote, is about these very fast, almost split-second reactions we have to certain things. And then here Gottman is using a ton of prior experience and information to create complex heuristics for evaluating marriages in a short period of time. And so, I'm reading this and I'm like, well, this is just a different thing.


Michael: Because a thin slicing version of this would be like someone who doesn't know anything watches a couple and then they predict it. 


Peter: If an expert watched a couple for 15 seconds and was able to be like, they're going to get divorced with 75% accuracy, that would be thin slicing in my mind.


Michael: But this is actually quite scientific. You're actually doing a discourse analysis of the way that they're interacting with each other. And you're testing it over time and you're developing a model that presumably becomes more predictive as you refine it. 


Peter: So, I'm going to send you what Gladwell says about this before. 


Michael: [laughs] It's funny. This all comes back to something I was telling you about before we were recording: I met a guy online, and he gave me his phone number, and I was putting it in my phone, and it turned out he was already in my phone as Martin Bad Vibes.


[laughter] 


Now I feel good about making that determination. 


Peter: That's how I know that your marriage will not work. 


Michael: [laughs] I was thin slicing. I was like, “Something's off man.” 


Peter: Using the incredible power of my brain, I figured out that the giant spider tattoo on this man's neck makes [Michael laughs] him unlikely to be a winning candidate. 


Michael: He says, “Thin slicing is part of what makes the unconscious so dazzling, but it's also what we find most problematic about rapid cognition. How is it possible to gather the necessary information for a sophisticated judgment in such a short time? The answer is that when our unconscious engages in thin slicing, what we're doing is an automated, accelerated unconscious version of what Gottman does with his videotapes and equations. Can a marriage really be understood in one sitting? Yes, it can. And so can lots of other seemingly complex situations. What Gottman has done is to show us how.” Oh, yeah. He's basically saying, like, “Isn't it the same thing when you're like, Martin has bad vibes?” And also, when you spend years refining a model to predict marriage dissolution.


Peter: I guess what Gladwell is saying is, like, maybe your brain can also condense a ton of information down, right? In ways that we can't really see, which I guess is true. But like many Gladwell anecdotes, it sort of feels like he was like, I'm going to put this anecdote here because I love it. And then later, he's like, “And here's my reason at the very end. I'll add my reasoning afterwards.” [laughs]


Michael: Yeah, they're both good stories. 


Peter: Only a few of the examples that he gives in this book are really thin slicing. So, before we keep going, I want to do a little dive into the idea of thin slicing and specifically into, like, what I think is really the thesis of the book, which is that intuition is very powerful, especially when combined with expertise. 


Michael: Oh, okay. 


Peter: So, somewhat ironically, some of the clearest work I found on this was written by another airport book author, Daniel Kahneman, who's a psychologist who wrote Thinking, Fast and Slow.


Michael: Future IBICA episode. 


Peter: He wrote a paper in 2009 alongside another researcher, Gary Klein. They basically conclude that it's true that experts can develop extremely reliable intuition even when handling very complex problems, but only under certain conditions. First, the environment has to be what they call high validity. In layman's terms, it means that there are reliable cues that an expert can depend on.


Michael: Right. The statue is fake or the statue is not fake. That's something that you can confirm. 


Peter: And then second, there needs to be an adequate opportunity for the expert to actually learn those cues, like practice. Right? 


Michael: Right. 


Peter: You need your 10,000 hours, folks. It's all one book. To give some examples, a lot of the early research on expert intuition involved chess masters. Chess has both of these qualities. It's very high validity. The player can see the entire board. The pieces can only move in very specific ways. And also, experts can play many, many games over their lifetimes, allowing them to learn all of the relevant patterns. 


Michael: Right. And there's a clear outcome too. You win or you lose, yeah. 


Peter: That's right. A less obvious one is firefighters. Gladwell touches on this too and there's a good amount of research on it. There's research showing that experienced firefighters make very effective, quick decisions about where fire will spread, whether a structure might collapse, etc. And part of that is because the environment is high validity, there are reliable environmental cues for them to use, even though a lot of researchers were initially baffled about what exactly the firefighters were picking up on, and the firefighters were not always able to articulate it. 


Michael: I think a counterexample would be something like, you often hear cops say something like, “I know when someone's guilty.” But the problem is the way that you measure someone's guilt versus innocence. You can measure whether they went to jail. That's an objective metric. But it might just be you're good at telling the kind of person who will go to jail later, which is not the same as somebody who's guilty. 


Peter: Yeah, cops don't have blink. Cops have [Michael laughs] something else. It's called bonk. And it's where your brain barely works. There are actual examples of environments that are not high validity. A good one is politics. There was a researcher who tracked the long-term forecasts of political and economic experts and found that they were no more accurate than, like, casual newspaper readers over a long period. This is probably because politics and economics are low validity environments. There are too many variables, which means there are very few reliable cues that experts can learn from. There was a review from the early 90s showing that reliable expertise was found among livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, mathematicians, accountants, grain inspectors, photo interpreters, and insurance analysts. 


In contrast, there was poor performance among experienced professionals like stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, personnel selectors, and intelligence analysts. 


Michael: It is a little bleak that the blink professions are like livestock inspectors and then the bonk professions are like who goes to college and who goes to jail. 


Peter: You could think of more broadly applicable scenarios where you are an “expert” because you encounter these situations all of the time. 


Michael: Like gay men on the Internet. [Peter laughs] I know the vibes. I know about Martin. 


Peter: To simplify all of this a bit, intuitions stem from pattern recognition. If you have the skill to recognize a pattern and the opportunity to practice identifying it, over time you can develop this seemingly magical intuition. You're blinking. Bad intuitions can be the result of having inadequate information or inadequate practice, right?


Michael: Yeah.


Peter: Which results in your brain drawing on unhelpful or irrelevant information. So, like a lot of the biases that we know are real, like anchoring biases. The idea that, like, if I put a certain number into your brain and then ask you how much a car costs, you're slightly more likely to move toward that number, right? That bias is because you're not pulling on accurate information in your brain. And so, your brain is just sort of reaching around, right? 


Michael: Right. 


Peter: If I tried to do that same experiment with someone who is an expert in cars, it wouldn't have any effect on them because they have accurate information to draw on. 


Michael: This is a really interesting example of how science is supposed to work because the TED Talk airport book version of this idea is like, “Did you know that your first impression is always correct,” but then once you get into the actual data, it's like, “Well, your first impression is correct under very specific circumstances.” So, first of all, you have to be an expert. Second of all, you have to be an expert in a domain where you can reliably measure over the course of your career whether or not your expertise is correct. There is like this broad fun fact, like a cocktail party fact. But then the actual rule or the actual science tells us it's far more conditional. 


Peter: Right? 


Michael: It's like, well, sometimes. 


Peter: Another early chapter is about the concept of social priming or behavioral priming in this context. 


Michael: A concept I'm actually kind of skeptical of. 


Peter: Oh.


Michael: Yeah.


Peter: You just blinked the shit out of this right now. 


Michael: [laughs] Just because like, you come across so many of these studies and some of the findings are pretty implausible. 


Peter: Let's talk about it. 


Michael: All right. 


Peter: So, priming is the idea that you can influence behavior by exposing people to an idea or even just a word, right? Gladwell talks about an experiment devised by a psychologist named John Bargh. I'll send you something. 


Michael: He says, “Imagine that I'm a professor and I've asked you to come and see me in my office. You walk down a long corridor, come through the doorway and sit down at a table. In front of you is a sheet of paper with a list of five-word sets. I want you to make a grammatical four-word sentence as quickly as possible out of each set. It's called a scrambled sentence test.” Ready? I feel like you're going to make me do this right now and I don't want to do it. 


Peter: I'm not going to make you do it. I thought about it and then I was like, that's too much of a dumbass podcasting gimmick even for me. [Michael laughs] I do feel like you'd make me do it and I want to say that.


Michael: I would. Absolutely. 


[laughter] 


This is why I thought it was coming, because I would make you do it. 


Peter: So, I'm just going to read a bunch of them. Again, these are five-word sets, and the idea is that you lose one word and develop a sentence from the other four. Right? 


Michael: Okay. 


Peter: Him was worried she always, from are Florida oranges temperature.


Michael: Okay. 


Peter: Ball the throw toss silently, shoes give replace old the, he observes occasionally people watches, be will sweat lonely they, sky the seamless gray is, should now withdraw forgetful we, us bingo sing play let, sunlight makes temperature wrinkle raisins. 


Michael: Are you priming me to think of old people? 


Peter: Holy shit. You got it, dude. 


Michael: Are you, okay? 


Peter: Got it. 


Michael: Because it's like wrinkle Florida. 


Peter: Wow. You're fucking blinking me crazy right now. 


Michael: Boom. Got him. You can't blink me, son. 


Peter: Alright. I just sent you that. 


Michael: This seems straightforward, right? Actually, it wasn't. After you finished that test, believe it or not, you would have walked out of my office and back down the hall more slowly than you walked in. With that test, I affected the way you behaved. How? Well, look back at that list. Scattered throughout it are certain words such as worried, Florida, old, lonely, gray, bingo, and wrinkle. You thought that I was just making you take a language test, but in fact, what I was also doing is making the big computer in your brain, your adaptive unconscious, think about the state of being old. It didn't inform the rest of your brain about its sudden obsession, but it took all of this talk of old age so seriously that by the time you finish and walk down the corridor, you acted old, you walked slowly.


This is why I'm a little skeptical. [laughs] 


Peter: No, dude. 


Michael: The concept of old people being introduced to that. I don't know that that would make me walk slower. 


Peter: He references a bunch of comparable experiments. There's one where priming people to think about aggression makes them more likely to interrupt a conversation, for example. Similar to you, I was blinking my ass off reading this part, [Michael laughs] like, it just feels a little too stupid. People walk more slowly after hearing the word Florida. 


Michael: Yeah, yeah, yeah, yeah. 


Peter: And lo and behold, the science here is very shaky. Now, I will say this is not really a Gladwell error.


Michael: Yeah.


Peter: From what I can tell, at the time Blink was published, this was not particularly controversial. In Thinking, Fast and Slow, the Daniel Kahneman book from 2011, he said that the evidence for this was such that, “Disbelief is not an option.” 


Michael: Oh.


Peter: Since then, social priming has gotten caught up in what's often called the replication crisis. The replication crisis refers to a situation in many scientific fields, but particularly in psychology, where there's been a widespread inability to replicate the findings of prior research. In 2011, there was an experiment by a guy named Daryl Bem where he brought some students into a room. There were two doors. Behind one was an erotic picture. And he said that they were able, with statistical significance, to predict which door it was behind. 


Michael: Fuck. Yes. 


Peter: A lot of people saw this and said there are some results bad enough, weird enough, that you should assume that there has been a methods error somewhere. 


Michael: Yeah. There's also the one about how hurricanes with female names do more damage. That was another famous one where people were like, “No, [laughs] what the fuck are we doing here?”


[laughter] 


Peter: I mean, look, when a psychologist is just like, “Yeah, I can see a pussy through a door. What are you talking about?”


Michael: This hurricane's named after my bitch ex-wife. That's how you know it's bad. You're like, “Whoa,” [laughs] I don't think this is real. 


Peter: [laughs] I never looked into the hurricane thing. I always thought there was some plausibility in that. We were maybe just naming the bad ones female names. But are they alternating, though? Did they alternate male and female? I do not know.


Michael: Yeah, they alternate. Yeah, they alternate. 


Peter: Okay, so then it can't be that.


Michael: They go up through the alphabet by letter, the first letter of a name, and they alternate. 


Peter: Got it, got it, got it. 


Michael: It's totally at random. 


Peter: So, then in 2012, some researchers wanted to highlight the potential for errors in priming studies. And they conducted a study intended to lead to a ridiculous result, where they found that people who listened to When I'm Sixty-Four by the Beatles became literally younger than the control group. 


Michael: What? 


Peter: They basically just looked at the data in different ways, found a statistically significant result by fluke and they said, “Look, this approach is common in the field. And we just used it intentionally to come up with a result where you can show like, oh, people who listen to this group are going to be younger on average.” 


Michael: Right, right. 


Peter: This causes people to perk up. Some researchers tried to replicate John Bargh’s original study, the original one where people walk slower when they're being primed to think about age. They could not-- except when the people observing the experiment were told about the expected result.


Michael: Oh, okay. 


Peter: Which of course suggests that they are probably, without really realizing it, manipulating the data to produce the result. On top of this, around the same time, a well-known social psychologist, Diederik Stapel, was found to be faking data, including in studies involving priming. From there, you had researchers undertaking larger scale efforts to replicate social priming studies and finding that very few of them actually did replicate. To give you a sense of where the consensus is now, one researcher put it this way, and it's a good example: people can be primed with words like “diet” on a menu to make lighter food choices, but only if those people are trying to eat lighter. It's not like this magic thing. It's just sort of like serving as a reminder. 


Michael: The real version of this that I've heard is in the order of polling questions. 


Peter: Yep. 


Michael: Where if you ask people like, “What do you think about immigration? Do you think immigration is out of control? How do you feel about immigration?” And then you're like, “What are your top 10 priorities for the United States?” Like, people are more likely to put immigration at the top because you've like reminded them that that is an issue. 


Peter: [laughs] I keep thinking about walking slower after saying Florida. [Michael laughs] Why am I-- I'm just like, I'm wearing fucking bifocals going 15 miles an hour down the road, slow down. 


Michael: You're like all of a sudden doing anal after it's pride month. You're like, “I don't know why, I just have the urge to do this.” [Peter laughs]


Peter: Did you know that for two hours after watching Fire Island, a straight man will walk like a little fairy? [Michael laughs]


Michael: You become 6% more likely to go, “Yasss,” in your daily life.


Peter: Girl.


[laughter]


Michael:  Damn, I've been primed. 


Peter: That's priming, baby. [Michael laughs] There is another theme that Gladwell taps into. That's basically the idea that people who are making these split-second decisions and determinations are generally unable to articulate why they're making them. So, I'm going to send you a little bit. 


Michael: He says, “Not long ago, one of the world's top tennis coaches, a man named Vic Braden, began to notice something strange whenever he watched a tennis match. In tennis, players are given two chances to successfully hit a serve. And if they miss on their second chance, they're said to double fault. And what Braden realized was that he always knew when a player was about to double fault. A player would toss the ball up in the air and draw his racket back. And just as he was about to make contact, Braden would blurt out, “Oh, no, double fault.” And sure enough, the ball would go wide or long or it would hit the net. 


One year at the big professional tennis tournament at Indian Wells, near Braden's house in Southern California, he decided to keep track and found he correctly predicted 16 out of 17 double faults in the matches he watched.” Okay. 


Peter: We don't have to dig into this particular example. This guy's claim has not been rigorously tested. I'm also a little bit confused about why he can catch double faults but not faults, since they're basically the same thing, but it's not totally impossible. Maybe something about the player's body, maybe the player realizes and he can see that. I don't know.


Michael: Low confidence, something. Yeah, sure. 


Peter: But there's a degree to which we all know that our brain makes decisions without our conscious input. If I throw a ball at your head, you're not thinking like, “Oh, no, there's a ball coming towards me. I should get out of the way.” You just duck, right? 


Michael: Right. 


Peter: If I asked you why you ducked after I threw the ball, you would probably say I ducked because the ball was going to hit me, right?


Michael: Yeah. 


Peter: But there's a real question of whether you know that or whether you're backfilling a plausible explanation for what you did, right? 


Michael: Right. 


Peter: So, back in the 70s, there's a very famous paper published by Richard Nisbett and Timothy Wilson called Telling More Than We Can Know: Verbal Reports on Mental Processes. They do a series of experiments where they ask the subjects questions but then manipulate some of their responses. So, for example, they ran an experiment where they asked consumers to rate several identical products based on quality. People would choose one, and then, when asked for their reasoning, they would provide a reason despite the fact that the products were identical. They consistently found that people were unable to reliably identify the sources of their decision making. 


Michael: Yeah, I think that's true all the time. People are very bad narrators of, like, why they do the things that they do. 


Peter: Right. The thing about that experiment with the identical products is that they found people were frequently choosing the rightmost product when they were looking at them.


Michael: Nice. Yeah. 


Peter: And then they would ask people, are you just choosing that because that's on the right? And they were like, “No, no, no.” 


Michael: Yeah, of course. Yeah. 


Peter: They said, “Well, hey, they don't know it, but they're choosing it because it's the rightmost. And we don't know why, but that's what they're doing.” And then some other folks postulated, actually what's probably happening is that they're going through these left to right, and because they're identical, they're just choosing the most recent one that they looked at. Because they trust themselves more or whatever. So, it's not that it's to the right, it's that it's the most recent one that they looked at. 


Michael: Do you know the thing about nuclear submarines they do on public polls? 


Peter: I don't think so. 


Michael: There's a thing where, when you take online polls that pollsters use, where people will just fill out the first bubble of every question. And so, now they include a question “Are you licensed to pilot a nuclear submarine?” And the first answer is yes. And that way, if somebody just clicks the first answer for every single one, it'll come up with that and then they can throw out the rest of the test because they're just clicking the first one. 


Peter: That's interesting. Yeah, yeah. So, in 2005, there's some good follow-up on this idea by some researchers who use the term choice blindness. They showed people two pictures of women and asked them to choose the one that they felt was more attractive. Because in 2005, even scientific research was doing hot or not. [Michael laughs] Most of society was just hot or not in various forms. 


Michael: But the twist is they're both behind a door. You can't tell how the women look so you just have to pick one. 


Peter: So, then what they would do is show them the picture again and ask them to explain their choice. Except they would show them the wrong picture. They would show them the one that they didn't choose. Less than a quarter of people caught the error, and the ones who didn't catch it provided an explanation for the choice that they did not actually make. 


Michael: So, you show them the less hot one and then you're like, “Why do you think she's the hotter one?” And they're like, “Well, I just love her hair.” Like whatever. 


Peter: Right.


Michael: That's so [crosstalk]


Peter: This is something that Gladwell pretty consistently pokes at, and he's basically right about it: there's this disconnect between our subconscious decision-making process and our conscious mind. There's also evidence, which Gladwell talks about a little bit, that actually trying to articulate the reasons for your decisions can make your decision making worse. This is called verbal overshadowing. Gladwell talks about Jonathan Schooler, who published about this in 1990. He showed people video of a robbery, and he found that they were less likely to pick the robber out of a lineup accurately if they were asked to verbally describe him first. There's another very famous experiment by Schooler that Gladwell references. I'm going to send you this. By the way, I know I'm sending you brutally long Gladwell excerpts, and I promise you that each one of these is edited down. [laughs] 


Michael: I feel like you're including excerpts with as many proper names as possible to catch me in a mispronunciation. 


Peter: No, this is just what he does. He loves a character. So, he won't just be like there's the study and do a dry rendition of the study. He's like, “It was a windy day in Chicago.” 


Michael: Yeah, yeah, yeah.


[laughter] 


Peter: All right, here you go. 


Michael: He says, “Consumer Reports put together a panel of food experts and had them rank 44 different brands of strawberry jam from top to bottom according to very specific measures of texture and taste. Wilson and Schooler took the 1st, 11th, 24th, 32nd and 44th ranking jams and gave them to a group of college students. Their question was how close would the students' rankings come to the experts'? The answer is pretty close. The students put Knott's Berry Farm second and Alpha Beta first, reversing the order of the first two jams. The experts and the students both agreed that Featherweight was number three. Overall, the students' ratings correlated with the experts' ratings by 0.55, which is quite a high correlation. 


What this says, in other words, is that our jam reactions are quite good. But what would happen if I were to give you a questionnaire and ask you to enumerate your reasons for preferring one jam to another? Disaster. Wilson and Schooler had another group of students provide a written explanation for their rankings. And they put Knott's Berry Farm, the best jam of all, according to the experts, second to last, and Sorrell Ridge, the experts' worst jam, third. The overall correlation was now down to 0.11, which for all intents and purposes means that the students' evaluations had almost nothing at all to do with the experts' evaluations.” So, basically all of the kids are tasting the jam, but some kids just taste it and immediately are like, 1, 2, 3, 4, 5.


And the second group tastes the jam and then they're like, “Okay, this one's like a little bit more sweet, this one's like a little bit more sour.” And the ones that are describing the jams end up scrambling it compared to the expert assessments. 


Peter: That's right. 


Michael: So, what is this explanation of this effect? 


Peter: It's not entirely clear. We don't know exactly why this happens. I'm sort of prone, intuitively, if I'm just blinking it, to the idea that the verbalization is interfering with your memory. We don't really know why we do or feel things. And so, being asked to describe it is just like, it's fucking you up. It gets you thinking about these different factors that you weren't thinking about before. And then you're trying to map your instinct onto this framework and it just gets cluttered in your brain. 


Michael: Right. It's like hearing somebody try to describe why a joke is funny. Ultimately, you're talking about an involuntary response. You're talking about something totally subjective. And by trying to intellectualize it, you can convince yourself of anything or you can talk yourself out of your gut level response. 


Peter: I was a little bit skeptical of this stuff. Probably the most established version of this phenomenon is in that facial recognition context, like the robbery example. And after the replication crisis drama unfolded, they attempted a replication of that study, which actually did manage to replicate an effect, though I think it was smaller than in the earlier studies. So at least in certain circumstances, it does seem like this is a real, replicable phenomenon. Gladwell does dedicate a good amount of time to talking about how our intuitions can fail us. I mentioned that Kahneman had said, in situations where you don't have experience, your intuitions are not necessarily going to be reliable. And Gladwell basically arrives at the same place. He does not do this whole intuition-is-magic thing. And he talks about a bunch of areas where your intuitions might lead you astray. 


Some very serious and dark, some not at all. One of his simplest explanations is the Pepsi Challenge. Now, you remember the Pepsi Challenge. 


Michael: Yeah. They blindfold you and they give you Coke and Pepsi, and you're like, “Which one tastes better?” And allegedly more people liked Pepsi. 


Peter: So, this is right. So, they started doing this as a marketing ploy in the 80s, and then it continued into the 90s. Pepsi is getting people on the street giving them a little sip of Coke, little sip of Pepsi, and they are choosing Pepsi. Coke does internal testing and they find the same thing. People prefer Pepsi by 15% margins-- [crosstalk]


Michael: That's [crosstalk]


Peter: -Crazy margins. Coke freaks out. Very famously. This leads to New Coke. 


Michael: Oh, yeah. 


Peter: They change the formula, make it a little sweeter. Everyone gets mad. Grotesque humiliation for the company, right? [Michael laughs] What happened here? It turns out the advantage that Pepsi had only lasted a couple of sips. 


Michael: Right. 


Peter: People's initial reaction was to prefer it. But for drinking entire cans or bottles or whatever, the advantage dissipates. Again, I thought this was interesting. Are we talking about intuition or is this about sugar? 


Michael: Also, I've noticed it's like every single thing they give you at Larry's Market. You get a little thing of smoked salmon, a little tiny dollop of it. And then you buy the whole thing of smoked salmon, and it just isn't as good when you have more of it. 


Peter: Yeah, it's every little fucking snack at Trader Joe's one bite and you're like, “Nice.” 


Michael: Yeah. Exactly.


Peter: Three bites. You're like, Jesus Christ, [Michael laughs] Whole bag of this shit. 


Michael: It's too much, yeah. 


Peter: I don't think that's Blink.


Michael: Yeah, that's not okay. 


[laughter] 


That's a cute little story about Coke and Pepsi. 


Peter: Blink, I ate too many S'mores. 


[laughter] 


What are you talking about? God damn you, Gladwell. So, he points out how intuition can result in stereotyping. His big example is the murder of Amadou Diallo in 1999, a Black immigrant murdered by the NYPD. He was mistaken for a rape suspect or a robber; the officers used both justifications. He's confronted in a hallway. Officers say he didn't comply. They mistook his wallet for a gun. He is shot many times. Gladwell basically posits a theory of the case where the officers were probably not explicitly racist, but made a series of misjudgments that may well have been influenced by Diallo's race. The more problematic portion of this is just a complete curveball. Gladwell starts talking about autism. 


Michael: Okay. 


Peter: His basic premise is that most people are able to intuitively read people's faces and gestures very quickly, Blink style, while people with autism struggle with that. He calls it mind blindness because they are purportedly unable to read the intentions of other people.


Michael: I don't think that's true, anyway. 


Peter: It's not true. It's a grotesque oversimplification.


Michael: Is this in the last couple chapters of the book, Peter? Is this where he's putting it? 


Peter: We're in the last chapter. I was halfway through the book, and I was envisioning the episode. And I was like, “Yeah, you know, we'll probably talk about some of the science that's outdated, some of the things that I think he oversimplifies. But a lot of it's just going to be science anecdotes.” And then you hit the last chapter, and it's Amadou Diallo. And I was like, “Here we fucking go.” [Michael laughs] Now, I'm going to send you this. In this context, he has just told a story about an autistic guy named Peter. I almost changed the name just to avoid you taking a little stab. 


Michael: I'm not doing any stabs. 


Peter: I know you're too woke-- [crosstalk]


Michael: Exactly.


[laughter]


Peter:  But our listeners are like, an autistic guy named Peter, huh?


Michael: He says, “But I can't help wonder if, under certain circumstances, the rest of us could momentarily think like Peter as well. What if it were possible for autism, for mind blindness, to be a temporary condition instead of a chronic one? Could that explain why sometimes, otherwise normal people come to conclusions that are completely and catastrophically wrong?” Well, if they would ban the vaccines, Peter, we wouldn't have this problem. 


Peter: So, he then goes on to theorize, loosely referring to some science, that certain very high stress situations, like confrontations involving firearms, cause us to lose our ability to read people's faces and gestures and movements, a phenomenon that he calls temporary autism. 


Michael: Oh, my God. 


Peter: And then he says that an important element of police training is to, “Avoid the risk of temporary autism.” 


Michael: Dude, he made up a term, mind blindness. Just say temporary mind blindness. He created a fake concept, but then he used a real one in his term. Just use the fake one. 


Peter: Of course, he throws in temporary autism at the end of the book. He's like, “Final chapter. It's Malcolm time, baby. Let's go.”


Michael: He's like, [laughs] “Everyone stop reading. It's just you and me.” 


Peter: He's got this notepad of his most problematic ideas, and he's just like, “Final chapter, baby.”


Michael: But also, is he using this to exonerate the guys that killed Amadou Diallo? Like, “Well, this is a normal human thing,” rather than just rank racism, which I think is probably a much stronger factor. 


Peter: I mean, I will say that he seems to be implying that training can fix this. And it's like, “All right, yeah, they should be better trained, sure.” But the idea that they were unable to sense Diallo's intentions is built on an assumption that I don't think you can make. And we've seen a lot of body cam footage now where it's not so much that they couldn't read the intentions as that they were looking to pull that trigger. Now, do we know what happened in the Diallo case? No, there's no body cam footage. There are no eyewitnesses besides the cops. So, we don't really know what happened there. But a story that is told based on police testimony where it's just like, yeah, they were in this really high stress situation, and those are really tough to function in. 


Michael: Yeah, they became autistic suddenly. 


Peter: Yeah. And they actually got autism for a little bit. 


Michael: It's like the perfect TED Talk brain thing where it's like, “You might think this was racist policing, but actually there's a more, I guess, ‘interesting’ scientific explanation of it.” But it's like, I think it was probably just the racism. [laughs] This is a department with, like, a long history of racist policing.


Peter: Oh, you think I'm racist? Actually, you are being ableist. Because for that one and a half seconds, I was autistic. Ever think of that? 


Michael: I was as bad at reading faces as Koreans are at flying planes. Malcolm Gladwell's like, “Yes, we got it. Ship it.” 


Peter: Oh, God. 


Michael: Final chapter, baby. 


Peter: That's where I was basically going to end this episode. And then I encountered something even worse. 


Michael: I want to know what your rabbit hole this morning was. 


Peter: [laughs] Yeah, I had a little bit about this, and then I made a quick discovery at, like, 9 PM last night that sent me down a rabbit hole. And my wife was like, “Hey, are you coming to bed?” at 12:30 last night. And I was like, “I need a little more time.” [Michael laughs] In the Diallo chapter, in the parts about being able to read people's expressions and such, there is an interview with a guy named Paul Ekman. Ekman is a psychologist who claims to be an expert in micro expressions, little expressions that cross our faces. And he has mapped out different types of expressions. 


Michael: Oh, no. 


Peter: That he believes correlate to different emotions and so forth. 


Michael: This feels like some TikTok body language expert shit. 


Peter: I am going to send you a very brief excerpt. 


Michael: Ekman recalled the first time he saw Bill Clinton during the 1992 Democratic primaries. I was watching his facial expressions, and I said to my wife, “This is a guy who wants to be caught with his hand in the cookie jar and have us love him for it anyway.” I mean, that's kind of true. 


Peter: Yeah, it is true. Except it's a pretty convenient little story to tell after Bill Clinton got caught cheating. 


Michael: Also, he had numerous sex scandals during that campaign. It's not like the Lewinsky thing was totally impossible to predict. 


Peter: This guy's face looks like he's going to murder Vince Foster. [Michael laughs]


Michael: And endorse a sex pest in New York for some reason, 25 years later. 


Peter: Ekman claims that he can use this approach not just to see someone's emotions, but to detect deception to see when someone is lying. And he got extra popular after the publication of Blink. The TV show Lie to Me revolves around a character that can read facial expressions and so forth. That's based on Ekman. He starts to sell training modules primarily to cops and other government agencies. 


Michael: Oh, my God. 


Peter: He lands a large contract with the TSA. This ends up being TSA's SPOT program, Screening of Passengers by Observation Techniques, which launches in 2007. In 2010, the Government Accountability Office looks into SPOT and they put out a report where they conclude with this: “TSA deployed SPOT nationwide without first validating the scientific basis for identifying suspicious passengers in an airport environment.” 


Michael: Yes. 


Peter: “A scientific consensus does not exist on whether behavior detection principles can be reliably used for counterterrorism purposes.”


Michael: Unbelievable. 


Peter: This results in congressional hearings. Ekman himself testifies and he claims that his methods can detect deception with 90% accuracy. Maria Hartwig, a psychologist who specializes in deception detection, testifies that not only is there no research indicating that you can detect lies through analyzing micro expressions, there's no evidence that people consistently exhibit micro-expressions at all. 


Michael: Perfection. 


Peter: Ekman's 90% accuracy claim is not verified because he has never published a study testing the efficacy of his training. When asked about that, he claims that he cannot publish it for national security purposes.


Michael: Hell, yeah. 


Peter: Because now that we're using it, terrorists could try to beat it.


Michael: That's actually how you know it is good, because there's no evidence for it. 


Peter: In 2015, someone leaked to The Intercept the criteria that TSA was using. 


Michael: Yes, yes. 


Peter: It is a 92-point checklist-


Michael: Fuck yes.


Peter: -where different behaviors get you either 1, 2, or 3 points, depending on how risky they are. If you get to four points, you get additional screening. I'm sending it to you now.


Michael: Yeah, Please God, yes, yes, yes. 


Peter: The very first one, which gets you one point out of the four needed for additional screening, is arriving late for your flight.


Michael: [laughs] That doesn't mean you're lying. 


Peter: As an early-to-the-airport guy, I say, “Go get him, folks. Lock him up.” 


Michael: Exaggerated yawning as the individual-


Peter: Exaggerated yawning.


Michael: -approaches the screening process. How would you even know? Also, oftentimes for flights, you're getting up at 3 in the morning to make your flight. 


Peter: Excessive fidgeting, clock watching, head turning, shuffling feet,- [crosstalk]


Michael: Clock watching at the airport. [laughs] 


Peter: -leg shaking. Imagine looking at your watch at the airport. 


Michael: My favorite one, Peter. Those were the ones that are one point each; those aren't even major stress factors. These are three points each, the deception factors. One of them is “appears to be in disguise.” [laughs] 


Peter: That's one of my favorite ones because again, you need four points for additional screening as long as everything else is smooth. 


Michael: Yeah. 


Peter: If the TSA knows that you're in a full ass disguise, you still can't get screened. 


[laughter] 


I got the big fucking fake mustache and they're like, “God, we want to screen him, but he's not yawning.” 


Michael: It also has wearing improper attire for location. But if somebody has flown in from Hawaii, they're probably in shorts and flip-flops. They might be in Detroit. 


Peter: The location's an airport. I'm trying to think of what would be improper attire for the location, because you could be going anywhere. 


Michael: Yeah, literally, there's no such thing as improper attire for an airport. It depends on where you are going and coming from. 


Peter: [laughs] You don't know where they're going. 


Michael: Powerful grip of a bag. 


Peter: Cold, penetrating stare. I will never get through a security line. Bag appears to be heavier than expected. 


Michael: Than expected, sure. 


Peter: You haven't found the most overtly racist one? 


Michael: Oh. Face pale from recent shaving of beard. 


Peter: Recently shaved beard is one of the indicators. 


Michael: What the fuck? 


Peter: He's trying to pretend he's not Muslim. 


[laughter] 


Jesus Christ.


Michael: Nice try. 


Peter: The TSA, by the way, revised its criteria around this time. And then in 2017, the Government Accountability Office put out a third report saying that there's no evidence for the revised criteria either. 


Michael: [laughs] Are they still using this? Do you know? 


Peter: I think that the program still exists in some form but it's like, not in this form anymore. That's my recollection. 


Michael: Okay. 


Peter: By the way, I was so close to being like, “You know, this book is pretty harmless at the end of the day,” but then it turns out that it indirectly launched a completely fraudulent and possibly racist TSA screening program. Dude, Malcolm Gladwell is so powerful. 


Michael: [laughs] I know. 


Peter: There had to be congressional hearings because in 2004, when Malcolm Gladwell heard this guy say that he could read people's faces, he wasn't like, “I don't think so, buddy.” 


Michael: That's got to be Gladwell thinking it's a better story if this guy is an expert rather than just debunking his claims. Because debunking his claims doesn't really fit into Gladwell's book. 


Peter: This is why it's so exhausting doing Gladwell shit. I told you there are 100 anecdotes in this book. I didn't do a deep or even a medium dive into most of them because it's just so time-consuming. But it's basically a guarantee that Gladwell didn't look closely enough into any of these. 


Michael: It does make me question the double fault guy, to be honest. 


Peter: Before we wrap, I want to talk about some of the bad reviews. 


Michael: Oh, yeah.


Peter: Because I actually thought that a lot of the reviews were sloppy in the same exact way that they accuse Gladwell of being sloppy.


Michael: Yeah.


Peter: The big one was Richard Posner, who wrote a review in The New Republic called “Blinkered.” And Richard Posner is a judge. Famous conservative libertarian guy, a revolutionary in introducing economic analysis into judicial opinions, in case you thought the law wasn't stupid enough.


Michael: These are all of my deception factors. These are three points each: libertarian, conservative, judge. He passes the threshold. That's more than four points. 


Peter: If I see a deceptively heavy bag, this guy's fucked. He's going in the cage. [Michael laughs] He writes this scathing review. He hits on some... well, let me send you some excerpts, hold on. 


Michael: Posner says one of Gladwell's themes is that clear thinking can be overwhelmed by irrelevant information, but he revels in the irrelevant. An anecdote about food tasters begins, “One bright summer day, I had lunch with two women who run a company in New Jersey called Sensory Spectrum.” The weather, the season, and the state are all irrelevant. And likewise, that hospital chairman Brendan Reilly “is a tall man with a runner's slender build,” or “Inside, JFCOM looks like a very ordinary office building. The business of JFCOM, however, is anything but ordinary.” These are typical examples of Gladwell's style, which is bland and padded with clichés. Yeah, I mean, to be honest, I also find descriptions like this annoying, but they're fairly standard for this kind of nonfiction writing. 


Peter: I'm not saying you have to like this style of writing. I agree that it's a little cliche, but these are just examples of Gladwell painting a picture for his audience, right?


Michael: Yeah, yeah. 


Peter: Sorry that it's not written like a judicial opinion. Otherwise, the book is just a series of descriptions of studies. 


Michael: Right, exactly. 


Peter: You have to have some of this. 


Michael: And journalists constantly do this. They're like, “We met in his office, where he was wearing a blue polo shirt with his hair in a ponytail.” Like, this is just part of nonfiction storytelling. You set a scene. 


Peter: The color of her hair is not relevant to the final conclusion here. [Michael laughs]


Michael: You love that voice. That's your Posner voice. [Peter laughs]


Peter: Yeah. That was actually a recording of Richard Posner that I played as a trick. [Michael laughs] Gladwell talks at one point about an experiment by a law professor who sent a different mix of white and black men and women to car dealerships to negotiate deals for cars. And he found that black people, especially black men, ended up being quoted higher prices even after negotiation. He says, “Look, this probably isn't conscious racism because why would the salesman purposefully price something inefficiently?” That's Gladwell's thesis. And so, guess what Posner's objection to this is?


Michael: It's not exonerative enough. 


Peter: He says it would not occur to Gladwell, a good liberal, that an auto salesman's discriminating on the basis of race or sex might be a rational form of the rapid cognition that he admires. If two groups happen to differ on average, even though there is considerable overlap between the groups, it may be sensible to ascribe the group's average characteristics to each member of the group, even though one knows that many members deviate from the average. 


Michael: So, Gladwell's being like, low key racist. And he's like, “No, no, it should have been high key racist.” 


Peter: Gladwell is like, “Look, these guys are accidentally doing racism,” which definitely does not give enough credit to how racism operates in reality. And then you get the objection from Posner, which is like, “Well, maybe black people are just worse at negotiating.” 


Michael: Yeah, they were racist. And that's chill. 


Peter: I love this mostly because, to go through this entire book and pick this out as, like, the specific thing that irritates you to the point where you dedicate three paragraphs to it, that's the most libertarian judge thing you could possibly do. [Michael laughs] Remember when you were talking about people thinking that tall and handsome people are more honest? And I told you that Gladwell sort of pokes at that too. Here's Richard Posner addressing that.


Michael: The average male CEO of a Fortune 500 company is significantly taller than the average American male, and Gladwell offers this as yet another example of stereotypical thinking. That is not very plausible, because a CEO is selected only after a careful search to determine the candidate's individual characteristics. Gladwell ignores the possibility that tall men are disproportionately selected for leadership positions because of personality characteristics that are correlated with height, notably self-confidence and a sense of superiority, perhaps derived from experiences in childhood when tall boys lord it over short ones. Has he considered that tall people are better? 


Peter: The idea that decision makers at large companies might be subject to biases rather than being perfectly efficient market machines is offensive to Posner. He's like, “How dare you question the wisdom of Fortune 500 companies?” This is such good insight into the conservative brain. Because someone is just like, “People have an irrational bias,” and then he's like, “Maybe there's a slightly more rational explanation. Maybe society is ordered exactly as it should be. Do not question it.”


Michael: Also, it's like, why even do this? Just admit that there's a little bit of bias in favor of tall people. You know me, I'm a miniature gentleman. I don't particularly care that much, but it's like, “You have to be able to at least admit, like, yeah, discrimination exists.”


Peter: Also, I poked around at this a little bit. It seems pretty well established. I have to say I didn't dig too deep, because this is just fucking Richard Posner, who gives a shit? But, like, there are various contexts in which tall people seem to have an irrational advantage. The idea that they're just, like, cooler. [laughs]


Michael: Yeah. [laughs]  


Peter: Part of the reason I wanted to talk about the Posner review is because I hate Richard Posner. I don't care that he got more liberal in his old age. He wasted his entire life. So, it doesn't really matter to me. In a vacuum, it’s good that you saw this reaction to pop science, right? Where people are like, “Hey, this stuff is sort of bullshit,” right? 


Michael: Yeah. 


Peter: But the fact that The New Republic is like, “Well, why don't we let a federal judge have at it?” You couldn't find a psychologist? This is so bizarre, and it speaks to how, even in our rejections of anti-intellectualism, we don't really know what intellectualism is. 


Michael: It's also part of this era of The New Republic and liberal magazines writ large. 


Peter: Absolutely. 


Michael: You might think your liberal values are correct, but have you heard from the worst person you've ever met? 


Peter: Next time you theorize, why don't you show a little respect to Fortune 500 companies? 


Michael: I know. [laughs]


Peter: I wanted to put this at the end because even if Malcolm Gladwell accidentally got thousands of Muslim-presenting people detained, I still feel a little bit angrier towards Richard Posner. 


Michael: [laughs] The enemy of our enemy is not our friend. 


Peter: We're a hater podcast. That doesn't mean you can get on our good side by hating on a book that we hate. 


Michael: [laughs] Nice try, Richard. 


Peter: Still hate you, nerd. 


Michael: He's in my phone as Richard Bad Vibes. [Peter laughs]


[If Books Could Kill theme]


[Transcript provided by SpeechDocs Podcast Transcription]