Think It Through: the Clearer Thinking Podcast
Episode 44--Misinformation, Disinformation, Bots, and Trolls
While it's always been the case that "A lie can travel halfway around the world while the truth is still putting its boots on," it's exponentially more true since the rise of the internet, particularly social media. In this episode, April looks at how and why misinformation and disinformation (which are not the same thing, btw) travel so quickly around our media landscape and affect the way we view the world.
Episode 44 Show Notes
(I have so many sources I can't put descriptions on them because I would go over the character limit, but they're generally in the order they are found in the episode):
https://theonion.com/planned-parenthood-opens-8-billion-abortionplex-1819572640/
https://campaignlegal.org/results-lawsuits-regarding-2020-elections
https://shorensteincenter.org/research-initiative/the-hks-misinformation-review/
https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308
https://www.imperva.com/resources/resource-library/reports/2024-bad-bot-report/
https://www.clrn.org/how-much-of-the-internet-is-ai-generated/
https://mediabiasfactcheck.com/
Music + Greeting:
“Hello, and welcome to Think It Through.
If you were active on social media back in 2011, you might remember seeing this news article that was posted and shared frequently on Facebook: “Planned Parenthood Opens 8-billion-dollar Abortionplex.” The story began with the lede “TOPEKA, KS—Planned Parenthood announced Tuesday the grand opening of its long-planned $8 billion Abortionplex, a sprawling abortion facility that will allow the organization to terminate unborn lives with an efficiency never before thought possible.”
Well, you can imagine the immediate outcry of people from all walks of life, but especially from those affiliated with conservative and religious organizations. Planned Parenthood was called evil, disgusting, satanic—you know, the things it usually gets called by such people. Yeah, this was like that, only on steroids.
The thing was, many millions of other people thought this was not only NOT a big deal, but it was pretty funny, if a little cringy. Those people were the ones who knew that the media entity publishing this article was The Onion, a satirical news website that parodies the tone and format of mainstream media organizations, either just for laughs or, as in this case, for biting social commentary. Its humor often depends on presenting events in a surreal or alarming fashion. There was no Kansas Abortionplex; the article was completely false. But man, it caused quite a stir, and while most people finally figured out it was satire, I’m sure there are still some people out there who remain convinced it was an actual thing that was shot down because of a large public outcry.
This event occurred when social media was really coming into its power, and it was an early example of the speed at which news travels on these platforms. Now, one of the benefits of social media is how quickly relevant information can be disseminated--for instance, breaking news like weather emergencies or other events that are important for the public to know. But that ability to quickly transmit information to a wide audience is also one of its main drawbacks— “news” that may not be correct, like the opening of a Kansas Abortionplex, can also be spread rapidly and be believed, inflaming people whose worldview might make it easy for them to buy into it, and adding to the polarization we’re already experiencing.
So today we’re diving into a topic that affects every one of us who’s ever scrolled through social media or simply searched for information online — yes, I’m talking about misinformation and disinformation — and how it spreads online, not just by us humans, but by armies of bots and trolls. Let’s get to it, shall we?
music
First, I think it’s important to distinguish between the terms satire, misinformation, and disinformation.
The primary difference between all these things is intent.
Satire uses what’s supposed to be obviously false and exaggerated information for humor, entertainment, and social critique. Satire relies on the audience being savvy enough to immediately recognize that the information is “made up.” This is not to say there isn’t an element of truth underlying satire, because there usually is, and it can provide a viewpoint that both exposes and criticizes the status quo. And if indeed the audience understands that’s what’s happening, it’s effective. While The Onion is considered the “gold standard” of satire sites, there are a lot of others, including the Babylon Bee, the Borowitz Report, and ClickHole. Duffel Blog is a good military satire site, and TV shows like Saturday Night Live, The Simpsons, and South Park also rely heavily on satire for their humor. These media sources are dependent on the audience having an understanding of what satire is and recognizing it when they see it.
Misinformation and disinformation, however, both involve content that is genuinely false or misleading, but that is meant to be believed. A lot of people use those terms interchangeably, but there is a difference. According to the Freedom Forum, a non-profit organization dedicated to educating people about the freedoms contained in the First Amendment, misinformation is false information that’s shared without the intent to deceive. Usually the person sharing it sees it somewhere on the internet and believes it to be true, like when your aunt sends you an article from an anti-vax website because she thinks you really need to read it, it’ll change your mind about everything. She’s not lying to you, but she may not have the necessary background or context to recognize that it’s false or misleading information. Now, something that was initially meant as satire can actually become misinformation when a satirical piece is taken seriously. In the case of The Onion’s Kansas Abortionplex article, one of my dearest friends didn’t realize it was satire and shared it on her feed, expressing her dismay about it. She was a very smart person, but for whatever reason she didn’t catch on that it was satire and spread it as if it were real. Eventually someone commented on her post and told her what it was, and she did admit to being fooled by it. But the majority of people who spread misinformation online are blissfully unaware that they’re the unwitting purveyors of falsehoods.
Now, Disinformation also obviously involves false information, but the intent with which it is shared is different. In this case, false information is deliberately spread in order to deceive, manipulate, or even to cause harm. The people or entities spreading it are doing it with full knowledge that it is deceptive, and it’s usually done for political or financial gain. Disinformation often comes from large, coordinated campaigns that are designed to deceive. Here are some examples that may be familiar to you:
Pizzagate—this completely fabricated story claimed a popular Washington, DC pizzeria called Comet Ping Pong was actually ground zero for a child sex-trafficking ring involving Hillary Clinton and other Democratic politicians. The story was started by members of the alt-right, conservative journalists, and others who were looking to vilify Clinton, and it was amplified through social media sites such as 4chan, 8chan, Reddit, and Twitter. Eventually, one determined guy who truly believed this story took his gun and busted into the crowded pizzeria. He fired off a couple of shots and searched around looking for a secret entrance to where the children were supposedly being held. He couldn’t find it because it didn’t exist. But his arrest and conviction on weapons charges were VERY real.
Another major disinformation campaign involves the 2020 election. Although it has been confirmed that that election was one of the most secure ever held, many people continue to believe that Joe Biden somehow stole the presidency from Donald Trump. Many organizations, including the Brennan Center for Justice, the Center for Election Innovation and Research, the American Bar Association, the National Academy of Sciences, the National Association of Secretaries of State, and the Brookings Institution, along with the judges in over 60 court cases that were dismissed, thrown out, or withdrawn due to lack of evidence, are firm in their assertion that there was NO evidence the election had been stolen or that the results were somehow tainted. But some right-wing entities and major Republican elected officials have continued to make this claim, and a large percentage of the population, somewhere around 30%, has doubts about the legitimacy of that election.
And of course there’s Alex Jones’ disinformation campaign surrounding the Sandy Hook shootings. He repeatedly claimed that the 2012 shootings were fake, and that “crisis actors” were hired by the government to stage the tragedy, which would supposedly lead to confiscation of all our guns. He did this knowingly for the express purpose of gaining followers and selling products to them through his InfoWars website. And he continued to repeat these claims until several lawsuits against him culminated in a 1.5-billion-dollar judgment.
These are just three of many disinformation campaigns floating around out there. Those who traffic in disinformation know that a certain number of people who see these stories will believe them and spread them as misinformation. Remember, once someone believes something incorrect and repeats it without intending to deceive, just to inform, it becomes misinformation. You might have noticed that all of the disinformation campaigns I mentioned originated from right-leaning sources. That’s because, as a study from the Shorenstein Center on Media, Politics, and Public Policy found, and I quote, “online misinformation sharing is strongly correlated with right-leaning partisanship.” While they found that liberals are also vulnerable to misinformation, the relationship isn’t symmetric: the association is stronger among conservatives. You can see it in the kinds of misinformation linked to left-leaning partisans, like the idea that Ivana Trump is buried on Trump’s golf course because he hid incriminating documents in her casket, or that the assassination attempt on Trump during the 2024 campaign was a setup. There is no solid evidence to suggest that either of those things is the case, yet many left-leaning people believe them.
So dis- and misinformation are inextricably linked, and they are a big factor in the proliferation of conspiracy theories. The Pizzagate conspiracy led to the criminally liable but possibly well-intentioned shooter’s arrest; the claims of a stolen election were a major factor in the January 6th attack on the Capitol and the continued mistrust of our election system; and the Sandy Hook conspiracy led to years of online abuse, stalking, and death threats toward the families of the murdered children and teachers from people who believed Alex Jones’ deceptions.
Music
Now you know that misinformation is usually spread unintentionally, but disinformation is both intentional and strategic. Because these types of false information are closely linked, they can spread quickly and go viral in seconds. Researchers at MIT’s Sloan School of Management found that false news stories spread far more quickly on Twitter (now X) than factual stories. Their study showed that misinformation was 70% more likely to be shared than things that were true, and factual stories took six times longer to reach people than the false stories.
There are several mechanisms at play that explain what causes this:
- The first one has to do with the concept of “novelty:” the researchers say that humans tend to like things that are new and interesting; on social networks, attention can be gained by posting something novel (whether it’s true often isn’t the most important thing). A report by the National Academy of Sciences says that a mere 15% of habitual social media users are responsible for spreading around 40% of false information. Users who post and share frequently are often incentivized for doing so (they gain attention, followers, money); and the report shows that the more they post, the more likely they are to spread misinformation.
- The second has to do with emotional appeal: false stories are very likely to trigger the kinds of emotions that cause us to want to hit “share”: surprise, fear, anger, disgust, or outrage. This is a key reason disinformation campaigns are so successful; as I’ve said many times, we act on feelings far more often, and far more quickly, than on logic and reason.
- The “echo chambers” of social media also play a part: Back in episode 25 I did a deep dive into algorithms and the ways that they push us into echo chambers, showing us what we already agree with. So, misinformation that fits with our worldview and aligns with our personal identity feels believable, and we are more likely to share it without checking to see if it’s actually true. The Shorenstein Center study corroborates this, noting that misinformation spreads very efficiently within those echo chambers, and a lack of diverse perspectives in them can make it difficult for correct information to be heard.
- However, there’s one mechanism that we don’t really think enough about, but it’s an increasingly large part of how mis- and disinformation are spread--bots and troll farms. Automated accounts constantly amplify online content to make it seem popular or credible. Imperva’s 2024 Bad Bot Report noted that about half of all internet traffic comes from bots, and that roughly a third of all traffic comes from what they refer to as “bad bots,” malicious programs designed to cause harm. Now, not all bots are bad; that nice little bot that pops up whenever you log onto your banking website to ask if it can do something for you? That’s a perfectly fine use of a bot. But “bad bots” created by cybercriminals can do nefarious things like steal login credentials, hack accounts, and of course spread disinformation. The California Learning Resource Network estimates that around 40% of postings on social media platforms are from bots. Yeah, you might be arguing with some stranger on Reddit who isn’t a real person; in fact, the odds are pretty good that’s what’s happening. That’s a huge amount of non-human activity, and we don’t generally stop to ask ourselves if what we’re reading on social media is coming from these sources.
An article in The Conversation by psychologist Sophia Ricciardone says bots that are used for this kind of psychological targeting could covertly exploit weaknesses in our character and persuade us to take action against our own best interests.
Troll farms act similarly, but they are composed of actual people (although they also use bots to spread disinformation). They are usually organized and run as businesses, and most of them are funded by governments. Russia’s Internet Research Agency is probably the most well-known, but China, Saudi Arabia, Brazil, the Philippines, and Nicaragua are among the countries that use them as well. They pay their employees to post on social media platforms. Researchers at Carnegie Mellon’s Heinz School of Public Policy and Management describe these businesses as “groups of organized online agitators who identify grievances in other countries and then insert themselves into those debates with the aim of inflaming them.” Their job is to find, and amplify, tensions that already exist, and then fan those flames to get people to turn on each other. This “divide and conquer” technique is very effective and has sadly been successful in influencing some important political outcomes.
So ALL of these mechanisms—the novelty aspect, the emotional content, the algorithmic echo chambers, and the proliferation of bots and troll farms—work together to spread mis and disinformation quickly and effectively, affecting the social and political landscape.
music
Now we get to the part of the episode where I give you some information that might conceivably help you.
How can you tell if you might be interacting with a bot?
There are certainly some key indicators that could let you know whether someONE you’re talking to on the internet is actually a “someTHING.”
- Strange usernames, or profile pictures that look generic or AI-generated, are one clue. On Facebook or Instagram you can certainly look at someone’s profile to see what kinds of photos they have posted. If there aren’t a lot of personal or family photos, just photos of things or places or memes, no casual comments, no genuine interaction—that might be a red flag.
- Posting frequency: Here’s the thing—bots don’t sleep. They often post hundreds of times per day, at all hours. That’s another clue.
- Repetitive content: If you can see that someone is posting the same link or message or meme across multiple threads, like on Reddit, that’s possibly a bot.
- Tells in language: If their interactions are full of awkward phrasing, inconsistent grammar, or sudden topic shifts—probably a bot.
Now, all that being said, while sometimes bots are easy to spot, they’re getting more sophisticated at mimicking real people, and as time passes it’s going to be even harder to determine whether they’re real or not.
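For listeners who like to see ideas made concrete: the four indicators above can be combined into a rough scoring heuristic. This is purely an illustrative sketch. The profile fields, thresholds, and point values are my own invented assumptions for the example; they aren’t drawn from any real platform’s API or from actual bot-detection research, which weighs far more signals than this.

```python
import re
from collections import Counter

def bot_likelihood(profile):
    """Score an account 0-4 against the four tells from the episode.

    `profile` is a hypothetical dict with keys "username",
    "posts_per_day", and "recent_posts" (a list of strings).
    Higher score = more bot-like. All thresholds are illustrative.
    """
    score = 0

    # 1. Generic username: a word followed by a long string of digits,
    #    like "patriot84731962".
    if re.fullmatch(r"[A-Za-z]+\d{6,}", profile["username"]):
        score += 1

    # 2. Posting frequency: bots don't sleep; hundreds of posts a day.
    if profile["posts_per_day"] > 100:
        score += 1

    # 3. Repetitive content: the same message pasted across many threads.
    counts = Counter(profile["recent_posts"]).most_common(1)
    if counts and counts[0][1] >= 3:
        score += 1

    # 4. Language tells: very low vocabulary variety can hint at
    #    templated, machine-generated text.
    words = " ".join(profile["recent_posts"]).lower().split()
    if words and len(set(words)) / len(words) < 0.3:
        score += 1

    return score
```

A real detector would combine dozens of signals and still misfire, and as noted above, bots are getting better at evading simple tells like these, so treat any such score as a hint rather than a verdict.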
But the most important thing to determine is not whether the person providing the information is real, it’s whether the information itself is real. And that’s where our critical thinking and media literacy skills come in. And of course the tips for fact-checking I’m sharing here are things I’ve said before in earlier episodes. So yes, it might feel like I’m repeating myself, because I am:
- Check the source: Who said it? Who’s behind it? Is it from a reputable organization? If you’re not sure, use a website like Media Bias Fact Check to see what kind of reputation a particular news source has.
- Look for corroboration: Do multiple, independent reputable sources report the same thing?
- If it’s a picture, do a reverse image search: This helps identify photos that are actually from different, earlier stories or are totally lacking context.
- Check the date: Old news stories often resurface as “new.” So you might see something that looked like it just happened, but after checking closely you discover that it happened years ago.
- Use fact-checking sites: Snopes, PolitiFact, AP Fact Check, and Reuters Fact Check. And you can also use Media Bias Fact Check to see if a source has failed any fact checks.
I can’t impress this upon you strongly enough--falsehoods spread faster than truth. This fact is leading to the erosion of our shared sense of reality. It divides us, breeds distrust, and makes it harder to solve the real problems that we face.
It truly is our personal responsibility to do our best to provide good, solid, factual information to the people around us. Each time we share something, we’re not just passing along information — we’re shaping the world’s conversation. We need to make sure it’s an honest, thoughtful conversation on our part.
And that’s it for this episode. If you found it helpful, share it with a friend. You can share this with impunity: I’m not a bot, and I back up everything I say with evidence from good, solid sources. You might just want to check out the show notes and look at those sources, because they go into far more depth than I can in this episode. So share away, and I hope you use the information here to help you think it through.
Music outro.