Think It Through: the Clearer Thinking Podcast
Episode 25: Algorithm Literacy
Yes, she's back!!! In this episode, April discusses "algorithm literacy" as a critical part of overall media literacy. It's important to understand that algorithms, while they are a necessary and useful part of the online universe, also play a big role in online polarization and the normalization of extreme viewpoints. The more you know about them, the more effectively you can control what you see online.
Episode 25 Show Notes:
Here's the article discussing the results of the study by Project Information Literacy:
https://www.edsurge.com/news/2020-01-16-report-colleges-must-teach-algorithm-literacy-to-help-students-navigate-internet
Pew Research Center's discussion of the need for algorithm literacy:
https://www.pewresearch.org/internet/2017/02/08/theme-7-the-need-grows-for-algorithmic-literacy-transparency-and-oversight/
The Algorithm and Data Literacy Project, a great source for kids to learn about algorithms (you'll find the YouTube video I mentioned in the podcast here):
https://algorithmliteracy.org/
A LibGuides page from the National University of Singapore's website on the topic of algorithm literacy:
https://libguides.nus.edu.sg/digitalliteracy/algorithm
Paper by Harvard professors Cetina Presuel and Martinez Sierra on the problems caused by social media platforms' reluctance to see themselves as news publishers and distributors:
http://www.scielo.org.pe/pdf/rcudep/v18n2/2227-1465-rcudep-18-02-261.pdf
PBS Nova investigates the spread of radical extremism on social media through algorithms:
https://www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms/
Troubling information about the ways that Russian troll farms used Facebook algorithms to spread disinformation before the 2020 election:
https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/
Financial Times op-ed advocating for more accountability of social media algorithms:
https://www.ft.com/content/39d69f80-5266-4e22-965f-efbc19d2e776
Some helpful articles with tips and tricks about how individuals can limit the influence of algorithms:
From Mashable: https://mashable.com/article/how-to-avoid-algorithms-facebook-youtube-twitter-instagram
From LinkedIn: https://www.linkedin.com/pulse/how-reduce-effect-algorithms-your-behavior-worldview-guide-mikko/
From BBC: https://www.bbc.com/news/blogs-trending-38769996
Episode 25: "Algorithm Literacy"
Yes, I know, you probably wondered where I was. Well…I've been teaching six classes AND completing an online teaching certificate (a very intense seven-month process). I'm done with that now, so hopefully I can get back to some kind of regularity in my podcasting, but I didn't want you to think I had fallen into a black hole or something.
Anyway, on to the subject matter for today's podcast. One of the big goals here at Think It Through has always been helping listeners improve their media literacy. In the past few months, I've started to hear more about something called "algorithm literacy," and while I know that algorithms are an integral part of the way we as individuals get information on the internet, I never really considered that having a deeper understanding of how they do that (in other words, being more "algorithm literate") might be an important part of overall media literacy. And as a college communication professor, it's part of my job to help my students understand how to find the best, most current, most accurate information out there. So I was disturbed to learn that students really don't think college instructors are doing a very good job on that front. A 2020 study conducted by Project Information Literacy looked at the ways that college students find, evaluate, and use information. One of the findings of that research was that fewer than a third of the students said their professors had given them any information about how algorithms work, and that they had pretty much dismissed faculty as a source of information about them. And when asked, the professors themselves agreed that they really didn't teach much, if anything, about algorithms, though most of them were aware of their existence. And, frankly, even though many of us have heard of algorithms, we don't really know how we are being affected by them. In fact, a survey by the Pew Research Center found that 74% of Facebook users said they weren't aware of the ways that Facebook determines their "interests." They know it happens, they just don't know how it happens. So today I'm going to give you a general understanding of what algorithms are and how they work, how they can be helpful, and ways in which they can be not only unhelpful but very problematic. Then we will look at a couple of ways to "tame" algorithms whenever we are online. Alright, let's get started!
MUSIC
So, "what is an algorithm?" and "how does it work?" Ummm…ok, I'm not an IT professional and I don't code, so I can't explain them to you in that kind of scientific or technological detail. But I can give you a basic explanation using some good sources, which (guess what) algorithms helped me find! Let's start with a definition from the website "The Algorithm and Data Literacy Project." It's an organization with the stated goal of raising awareness and educating people, especially kids, "about the presence of algorithms and how they influence our digital experience." Because it's focused on younger digital users, it's probably a good place for newbies to begin to understand algorithms. The site describes them as "step-by-step plans or instructions to perform a task or solve a problem," and compares them to recipes that coders use to take information and produce something or achieve a result. There's a YouTube video on the page that I'll link to in the show notes that gives a very basic explanation of algorithms (and maybe should be something you show your kids!). In it, they explain that when we like, search for, or share something online, we are creating data. Algorithms in the programs we're using take that data, make connections between those things and other similar things, and then look for opportunities to show us more of the same, like the results you get when you search for something on Google, or the ads that show up on your social media pages.
There's also a very good description of algorithms on the National University of Singapore's Digital Literacy pages. They describe algorithms as "prediction machines" that use existing data to predict missing data. So when you feed them data by liking, sharing, or typing into a search engine, they use that to predict what you're going to want to see in the future. And because they are using the very data you've given them over time, they tend to be at least somewhat accurate, sometimes disturbingly so. Have you ever thought about something, or talked about some topic with someone, and then the next time you go on social media or a website there's an article or an advertisement about that very thing? It can be pretty creepy and make you wonder if your phone is stalking you; but it's very likely just those sneaky algorithms putting two and two together and coming up with exactly what you were talking about with your friend. That's how accurate the predictions of algorithms can be.
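By the way, if you're curious what that "recipe" might look like in practice, here's a minimal sketch in Python. Everything in it is invented for illustration (the catalog, the tags, the scoring), and real recommendation systems are far more complex, but it captures the basic loop the video describes: your likes create data, and the algorithm matches that data against similar content to predict what to show you next.

```python
# A minimal, hypothetical sketch of the "recipe" idea: every like creates
# data, and the algorithm predicts what to show next by matching your
# likes against items with similar tags. All names and data are invented.

from collections import Counter

# Each piece of content is labeled with topic tags (invented examples).
CATALOG = {
    "video_a": {"cooking", "baking"},
    "video_b": {"baking", "desserts"},
    "video_c": {"politics", "news"},
    "video_d": {"cooking", "news"},
}

def recommend(liked_items, catalog, top_n=2):
    """Predict what a user wants next from the data their likes create."""
    # Step 1: collect the tags from everything the user has liked.
    liked_tags = Counter()
    for item in liked_items:
        liked_tags.update(catalog[item])
    # Step 2: score every unseen item by how many liked tags it shares.
    scores = {
        item: sum(liked_tags[tag] for tag in tags)
        for item, tags in catalog.items()
        if item not in liked_items
    }
    # Step 3: present the highest-scoring items first.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"video_a"}, CATALOG))  # baking/cooking content ranks first
```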
MUSIC
Creepy or not, algorithms are neither inherently good nor inherently bad. They certainly do one important and helpful thing for us, so let's talk about that first. Those of us of a certain age can remember looking for information before the internet, whether for a school assignment or just for general knowledge. If I wanted information I didn't have on hand, I'd have to go to an actual brick-and-mortar library. If I were to use today's research terms to describe yesterday's research, my fingers would be the search engine, and the card catalog would be the database. I'd have to look through lots and lots of little cards trying to find a listing for a book or a magazine or a journal article with the information I was looking for, and that could take a great deal of time. And even after I thought I'd found something helpful, I'd have to locate it on the shelves and look through it to see whether it was actually useful, and if it wasn't, I'd have to go back and start the process over again. Now algorithms on search engines like Google do all of that work for you: they filter out the things you're not looking for and give you descriptions of articles or books or whatever, so you can immediately see if those are what you want, and that makes your research time much more efficient.
Or so you'd think…One of the great things about the internet is that there's soooo much information out there and it's easy to find; but that great thing is also a not-so-great thing. Why would having lots of information be potentially bad? Because the sheer volume of data means that when you type something into a search engine, the algorithms spend about three hundredths of a second grabbing everything they can find that they think you want, and then present it in the order they think you want it. Even if they find millions of articles on that topic (which they often do), they present what they believe are the most important things to you, based on everything you've ever typed into Google's search bar. Well, that's not completely true: often the thing at the very top of your results list is (you guessed it) an advertisement from a company that paid Google for that position. And so many of us just automatically believe that those first results are the best, most relevant, and most accurate results on that topic, and of course we start clicking on them. You might assume that the further you go down the list, the less helpful the results will be, but depending on what you're looking for, that's not necessarily true. I've sometimes found the best, most relevant, most accurate information ten or even twenty pages into a Google search result. Most people don't go that far, though, preferring to stick to the results on page one, maybe page two. Even Google Scholar has a problem with an overabundance of potential results. In 2018 it was estimated that there were over 110 million scholarly documents written in English on the internet, but critics complained that Google Scholar's algorithm makes no distinction between refereed (aka peer-reviewed) articles and non-refereed articles. And unlike college databases, there is no way to filter out non-refereed papers.
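To make that concrete, here's a small, hypothetical Python sketch of the ranking behavior just described: paid placements pinned to the top, organic results ordered by overlap with your past searches, and no peer-review filter anywhere. The field names and data are invented, and real search ranking uses vastly more signals than this; it just illustrates the pattern.

```python
# A hypothetical sketch of the ranking behavior described above. Nothing
# here reflects Google's actual code; all fields and data are invented.

def rank_results(results, user_history):
    """Ads first, then whatever best matches your past searches."""
    def relevance(result):
        # More overlap with your search history -> higher predicted relevance.
        return len(set(result["keywords"]) & user_history)

    ads = [r for r in results if r["paid_placement"]]
    organic = sorted(
        (r for r in results if not r["paid_placement"]),
        key=relevance,
        reverse=True,
    )
    # Note: there is no "peer_reviewed" check anywhere, so refereed and
    # non-refereed sources are interleaved purely by predicted relevance.
    return ads + organic

results = [
    {"title": "Refereed study",    "keywords": ["vaccines", "trial"], "paid_placement": False},
    {"title": "Sketchy blog post", "keywords": ["vaccines", "cure"],  "paid_placement": False},
    {"title": "Sponsored result",  "keywords": ["vaccines"],          "paid_placement": True},
]
print([r["title"] for r in rank_results(results, {"vaccines", "cure"})])
# -> ['Sponsored result', 'Sketchy blog post', 'Refereed study']
```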
This means that when someone searches Google Scholar for papers on a particular topic, questionable research is often presented in the same results as peer-reviewed, high-quality research, and it may be difficult for someone unaware of the distinction to tell good, solid, trustworthy information from possibly sketchy and unhelpful information. Which leads me to some of the potentially problematic things algorithms do: if you let them, they can limit your options, influence your choices, and get you stuck in a preference bubble that may even lead you down the rabbit hole to crazytown.
MUSIC
It's no secret that many people get the bulk of their news from social media. Harvard professors Rodrigo Cetina Presuel and Jose Martinez Sierra published a 2019 article about how social media platforms are really news publishers and news distributors, even if that's not what they were initially intended to be. Here's what they say: Quote: "Social media platforms use algorithms to perform functions traditionally belonging to news editors: deciding on the importance of news items and how they are disseminated. However, they do not acknowledge the role they play in informing the public as traditional news media always have and tend to ignore that they also act as publishers of news and the responsibilities associated with that role." End quote. They go on to argue that social media platforms need to acknowledge and embrace this role, and once they do, they must also embrace the editorial responsibilities that come with it. Sadly, that's not happening right now, and a lot of the problems with how news travels online have to do with algorithms. A 2019 PBS Nova article discussed the ways that algorithms actually promote extreme content. While the article assures us that algorithms aren't designed to be malicious, that doesn't mean we shouldn't be wary of them. According to data scientist Elisa Celis, the companies that use them are simply trying to optimize their metrics, to get the clicks and engagement that ultimately lead to more revenue. So if the goal is to keep you coming back, and if you start clicking on what the algorithm thinks you might want to see, what starts out as personalization can eventually turn into polarization. Algorithms are just programs. They're not sentient; they don't have a conscience or a thought process that causes them to consider whether the thing they're showing you might be incorrect, disturbing, or even dangerous. All an algorithm knows is that that's what you're clicking on, so it's going to provide more of it. And over time, your opinions tend to be swayed toward whatever you're looking at, and you require more and more of it, and the algorithms are only too happy to oblige. As you become immersed in these viewpoints and interact with other people who believe the information, or rather misinformation, being provided to you, you start to feel a kinship with those who hold the same beliefs, a camaraderie and empathy toward them. In fact, many people who've gone down those rabbit holes have cut themselves off from their own friends and family who don't think the same way, preferring instead the new friends they've met through Facebook pages or Twitter feeds or message boards on radical websites. Computer scientist Nisheeth Vishnoi says, and I quote, "Hate is a relative thing. If you see it from a hundred kilometers, it's different than when you're right in the middle of it…this may be one reason people are becoming more extreme without even realizing it."
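To see how "more of what you click" compounds, here's a toy Python simulation under invented assumptions: the feed serves whichever category its current weights favor, the user clicks whatever they're served, and every click nudges the weights further in that direction. No real platform works exactly like this, but it shows how a feed that starts out balanced can drift to an extreme without anyone intending it.

```python
# A toy feedback-loop simulation: engagement is the only metric, and
# each click shifts the future mix of content toward what was clicked.
# All numbers are invented for illustration.

import random

random.seed(1)  # fixed seed so the run is reproducible

# The feed starts perfectly balanced between two kinds of content.
weights = {"moderate": 0.5, "extreme": 0.5}
LEARNING_RATE = 0.05  # how strongly one click shifts future servings

for _ in range(200):
    # Serve whichever category the current weights favor.
    served = random.choices(list(weights), weights=list(weights.values()))[0]
    # The user clicks what they're served; the feed learns from the click.
    weights[served] += LEARNING_RATE
    # Renormalize so the weights stay a probability distribution.
    total = sum(weights.values())
    weights = {k: v / total for k, v in weights.items()}

print(weights)  # typically far from 50/50: small nudges compound
```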
Even though the algorithms themselves aren't malicious, they can be exploited by groups with nefarious agendas. According to a report in the MIT Technology Review, in the runup to the 2020 election Russian troll farms used Facebook's algorithms by posting disinformation on some very popular pages targeting Christian and Black communities. That disinformation would then end up in people's newsfeeds, not necessarily because they were following those pages, but because Facebook's algorithms would automatically insert the content into their feeds as "recommended" posts, based on other posts those people had liked or shared in the past. So the algorithms were helping troll farms spread disinformation to people who might otherwise never have seen it.
MUSIC
The NOVA article makes it clear that the spread of mis- and disinformation was really an unintended consequence of using algorithms to get more engagement on platforms. But now that we've let the genie out of the bottle, so to speak, is there anything that can be done to limit the potential damage that algorithms are capable of creating?
Well, while there isn’t really a fix for this quite yet, there are a lot of ideas floating around out there. For instance:
1. There is a call for more transparency about how algorithms work. Most companies that use them consider the code that runs their algorithms to be a "trade secret" and don't share it willingly. A June 2021 op-ed in the Financial Times on this topic says that these algorithmic systems are not subject to community review; it's not just that we don't know how they work, we don't even know WHO the people are that know how they work. At this point they are unchecked, and they wield enormous power over billions of people worldwide. But there are a lot of security researchers, human rights organizations, and academics advocating for these businesses to make their algorithms open, transparent, fair, and accountable. There are also people creating open-source algorithms and sharing them freely so they can be studied and used.
2. Researchers at Yale University are working on ways to place limits on the amount of "personalization" that algorithms can give a user, and to provide more diversity in the kinds of content presented to that user (there's a sketch of the idea in code right after this list). They postulate that simply seeing that other viewpoints still exist could help someone who would otherwise be trapped in an online bubble, seeing only content that supports a radical viewpoint.
3. Facebook's response to the troll farm pages I mentioned earlier has been hit-or-miss; although they did shut down most of the pages the Russian trolls were using, it took until well after the 2020 election for them to do it. So while they say they're trying to keep crap off of Facebook, it might also be on you to do your part to reduce the effect of Facebook's algorithms on your feed. One way to get around an algorithm: next time you're on Facebook on your computer, click See More under Suggested on the left side of the page, then click Most Recent. That will show posts from all your friends in the order they posted them, which is entirely different from your regular, algorithm-personalized feed. Of course, it goes back to that feed when you click Home again, so you'll probably want to select Most Recent on a somewhat regular basis. You can also reduce pesky ads by clicking the three dots at the upper right of an advertisement to hide or report it, and even opt never to get recommendations from that company or others like it.
4. And then there's YouTube. Those algorithms are probably even more damaging than Facebook's, because they're very quick to decide that you like problematic content. Once you've viewed one video by, I don't know, let's say Jordan Peterson, even if you clicked it by mistake and after a minute or two you go, "Oh crap, I didn't mean to do that!", guess what: now you're gonna get more Jordan Peterson whether you want it or not. Sorry if you like him, I was just using him as an illustration; but I do think he's a misogynistic jerk. But that's just me…anyway, the point is that you can go into the Up Next list and turn off Autoplay, so you can keep selecting the content you want instead of having YouTube start playing something else for you. That doesn't stop YouTube from recommending things to you, but at least if you fall asleep to a nice boring TED Talk video, you won't wake up in the middle of some whack job explaining how the earth is really, truly flat. Those are just a few suggestions; there are other ways to limit algorithms, and I'll link to some articles about that in the show notes.
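As promised under idea number 2 above, here's a minimal Python sketch of what capping personalization might look like: most of the feed comes from personalized picks, but a reserved share is always filled from a diverse pool outside the user's bubble. The 70% cap and all the item names are invented for illustration, not taken from the Yale research.

```python
# A minimal sketch of capping personalization: reserve part of every
# feed for content outside the user's bubble. All values are invented.

import random

random.seed(0)  # fixed seed so the run is reproducible

PERSONALIZATION_CAP = 0.7  # at most 70% of the feed is personalized

def build_feed(personalized, diverse_pool, size=10):
    """Fill most of the feed from personalized picks, but always
    keep slots open for content the algorithm wouldn't normally show."""
    n_personal = int(size * PERSONALIZATION_CAP)
    feed = personalized[:n_personal]
    feed += random.sample(diverse_pool, size - len(feed))
    random.shuffle(feed)  # mix the two kinds of content together
    return feed

personalized = [f"for_you_{i}" for i in range(10)]
diverse_pool = [f"other_viewpoint_{i}" for i in range(10)]
print(build_feed(personalized, diverse_pool))
```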
MUSIC
In the end, algorithms can be helpful, and since there's just soooo much content out there, they are necessary to help us find the things we need. But until there's a fix for the problems that algorithms create, and until social media companies take more responsibility for the way news spreads online, we as individual consumers of online content must take responsibility for ourselves. You don't have to read or watch everything the algorithms suggest to you. Of course, you have to WANT to avoid particular types of content, and you sometimes have to try really hard not to encounter it; if you're already down the rabbit hole, it's probably too late for that. But if you're listening to me, I'm going to assume you have the wherewithal to take responsibility for what you consume online. You've already taken the first step by learning about algorithms, how they work, and how they affect us. Just being aware that they're out there may help you put those critical thinking skills to good use and take back some control of your online content. And I hope that what I've told you today will help you Think It Through.