The Just Security Podcast

Free Speech and Content Moderation in Missouri v. Biden

July 08, 2023 | Episode 32 | Just Security

On July 4th, a federal judge restricted the Biden administration from contacting social media companies about their content moderation policies. The court found that federal agencies, including the Department of Health and Human Services and the FBI, could not flag specific posts to social media platforms like Facebook and Twitter to encourage them to remove content. The order does, however, provide exceptions allowing the government to contact or notify social media companies about posts that involve crimes, national security threats, foreign attempts to influence elections, and other similar risks to public safety.

While an appeal in the case, Missouri v. Biden, is pending, the decision is a major development in the legal fight over online speech and the First Amendment. Some elected Republicans have accused social media sites like Facebook, Twitter, and YouTube of disproportionately silencing conservative viewpoints, while others argue that content moderation is necessary to prevent the spread of misinformation and hate speech. 

To unpack the initial decision in Missouri v. Biden, and what it means for the First Amendment and online speech, we have Mayze Teitler. 

Mayze is a Legal Fellow at the Knight First Amendment Institute where they focus on the surveillance of incarcerated people, spyware, and government transparency. 

Show Notes: 

Paras Shah: On July 4th, a federal judge restricted the Biden administration from contacting social media companies about their content moderation policies. The court found that federal agencies, including the Department of Health and Human Services and the FBI, could not flag specific posts to social media platforms like Facebook and Twitter to encourage them to remove content. The order does, however, provide exceptions allowing the government to contact or notify social media companies about posts that involve crimes, national security threats, foreign attempts to influence elections, and other similar risks to public safety.

While an appeal in the case, Missouri v. Biden, is pending, the decision is a major development in the legal fight over online speech and the First Amendment. Some elected Republicans have accused social media sites like Facebook, Twitter, and YouTube of disproportionately silencing conservative viewpoints, while others argue that content moderation is necessary to prevent the spread of misinformation and hate speech. 

To unpack the initial decision in Missouri v. Biden, and what it means for the First Amendment and online speech, we have Mayze Teitler. 

Mayze is a Legal Fellow at the Knight First Amendment Institute where they focus on the surveillance of incarcerated people, spyware, and government transparency. 

Hey, Mayze, welcome to the show. Thanks so much for joining us today.

Mayze Teitler: Thanks so much for having me.

Paras: So, on the Fourth of July, as many of us were enjoying the fireworks, a federal district court judge in Louisiana did something pretty remarkable by issuing this preliminary injunction, which is an order to stop the Biden administration from contacting social media companies about their content moderation policies. And just to help us orient a little bit, can you tell us about this case? What's going on here?

Mayze: Sure. So, Missouri v. Biden is this lawsuit that was brought by two states, Missouri and Louisiana, as well as several individual plaintiffs. And the plaintiffs in this case are claiming that federal government agencies and officials violated their First Amendment rights by pressuring social media platforms to censor a variety of content. That content spans several different topics. It includes information about the coronavirus pandemic, the security of mail-in voting, the Hunter Biden laptop story that's been mentioned in the Twitter Files, some alleged parody content, content about the census -- really a broad range of information. And the plaintiffs have alleged that the defendants put pressure on the companies through a variety of interactions. 

These range pretty widely. They include meetings and communications between the officials and some platform employees, as well as some independent entities like the Stanford Internet Observatory. They also include the use of these post-flagging portals that the platforms came up with to help identify violations of their own policies. And then there are some statements by government officials that range from general condemnation of different types of misinformation to specifically identifying some accounts that had inaccurate information on them. So that's sort of the general background.

Paras: What did the judge decide this week? 

Mayze: As you mentioned, Judge Doughty issued this sweeping preliminary injunction on Tuesday, and that injunction restricts various federal government agencies and officials from communicating with the platforms. This order is extremely broad. And if it's left in place, it will bar much of the administration from engaging with the platforms in pretty much any way on matters relating to content moderation, with a limited number of exceptions. Some of those exceptions are a little bit surprising, because they seem to encompass communications that the court described as problematic. So, there's an exception for information that could be harmful to the public. You might think that health information that's not accurate could be harmful to the public and fall within that exception. And it's pretty likely that if Judge Doughty doesn't narrow the order, the appeals court will step in and do it for him. 

Paras: Okay, so as you mentioned, this is a pretty sweeping injunction. But this case also sits within this larger debate around content moderation and the role of hate speech online and how the government should regulate it. So, how do we understand that debate and what the First Amendment says about it?

Mayze: That's a great question. I think the judge's opinion really touches on a couple of different areas of law that are in development right now. So, to support the preliminary injunction, the judge first found that the plaintiffs were likely to succeed in arguing that the interactions between the platforms and the government violated their First Amendment rights. But the judge's opinion doesn't actually provide a coherent explanation of how the government violated the First Amendment in those interactions. Some of the facts that the judge describes might actually raise some difficult First Amendment questions. But it's hard; the constitutional lines here are pretty blurry. To offer an example, Judge Doughty describes a parody account for one of Joe Biden's relatives. I took a look at the record, and it's actually a little hard to tell if this was a true parody account or if it was someone impersonating a member of the president's family, which might violate the platform's Terms of Service. But someone working in the White House reached out to Twitter about this account and asked the platform to remove it. We might say that if the White House is pressuring social media companies to remove parody content, that's obviously concerning from a First Amendment perspective. You can think back to 2019, when Chrissy Teigen tweeted critically of Donald Trump and the White House contacted Twitter asking for the post to come down. 

If there was evidence that the Biden administration took the same kind of actions, that might seem problematic to the average legal listener. But there are a lot of other interactions described in this lawsuit that seem far less troublesome, and simply demonstrate the challenges and frustrations that private platforms faced as they sought guidance from the government in responding to an emergent global health crisis and this perceived influx of misleading information. And because the opinion doesn't have a clear theory of when these interactions violate the First Amendment, there are just all of these unanswered analytical questions. And it really leaves the government with very little guidance, going forward, to understand how it can provide information to the platforms without potentially violating Judge Doughty's theory of the First Amendment here.

Paras: Can you just give a sense of what the First Amendment landscape looks like here? What does the First Amendment have to say about hate speech, generally?

Mayze: I think this is an issue that extends beyond just the context of hate speech. It sort of falls into a series of legal challenges concerning what's known as government jawboning. That term describes informal efforts by government officials to pressure private entities in relation to speech, sometimes carrying an express or implied threat of regulation. And the core question that's presented in these cases with online platforms is a difficult one because of the competing principles that are at play. So on the one hand, the government needs to be able to govern, and governing requires speech, including speech that might be directed at private platforms. If a newspaper, for instance, were publishing a story that the government believes to be false, we might want the government to be able to call that newspaper out and say, "Hey, that's not an accurate story."

On the other hand, we don't want the government to be able to escape the First Amendment's prohibition against censorship by relying on these informal mechanisms. And that might be particularly important in the context of social media, where the platforms are hosting speech that is not entirely their own, but also where their speech rights are implicated because they have an interest in deciding what content is included on their platforms. And I think it's important, also, to note that these incidents aren't really limited to one side of the political spectrum. 

You know, we've seen lawmakers on both sides of the aisle reach out to try to change what's on the platforms. After Roe v. Wade was overturned last year, some Democratic lawmakers asked Google whether it would adhere to this pledge it made to take down results for crisis pregnancy centers -- so, fake abortion clinics -- and Republican state attorneys general sent their own letter to Google, basically stating that if Google changed its content moderation policies, they would conduct investigations into antitrust and religious discrimination issues and consider legislation.  

So again, I want to paint a broader picture: lots of government entities are interested in controlling what speech is on these platforms, and they also have their own interests in being able to express the government's position on emerging issues. There are sort of two leading Supreme Court cases on this topic. One is Bantam Books v. Sullivan, a case where a Rhode Island commission reached out to these bookstores that had books that the commission saw as obscene for minors, and threatened prosecution if the books weren't removed from circulation. The Supreme Court ultimately held that the letters threatening prosecution were a scheme of informal censorship prohibited by the First Amendment, because the commission set out to intimidate the booksellers. So that's kind of one general principle in this space: the government can't indirectly censor through an intermediary. Like I mentioned before, on the other hand, we have the government's own interest in speech. We have the platforms' interest in deciding what content remains on their own services. And then also, there's this area of law listeners might be familiar with called the state action doctrine, which helps decide when the activities of a private actor can be attributed to the government. 

The leading case -- it's a few decades old now -- is Blum v. Yaretsky, which involves some complicated facts about Medicaid recipients who were getting state-subsidized care through a private nursing home and who sued the state Medicaid administrators, attributing the home's decisions to discharge them to those administrators. And the Supreme Court rejected the idea that those private actions could be attributed to the government, because the government had not provided such significant encouragement, or exercised such coercive power, that it could be held responsible for the decisions of the private actors. So, with all that context, these jawboning cases have been bringing forward claims where litigants want to sue platforms or government officials under state action theories for consulting with the government on content moderation decisions. And something that's really notable about this case is that it's the first time that a court has accepted one of those arguments in the social media jawboning context. So I think it's an expansion of existing state action doctrine in the content moderation space.

Paras: Just as a reminder, state action is important here because the Constitution applies to government conduct, not private or individual conduct. And so there needs to be that state action nexus in order for the constitutional claims here to reach the private entities. And I know you mentioned a number of these difficult questions that were unanswered by Judge Doughty's opinion here. So, what are some of those questions that we should be looking for?

Mayze: Sure. So, a clear theory of when interactions like these violate the First Amendment might address a range of different questions that help us understand when something is coercive. You might wonder whether it makes a difference if a government official encourages or asks a social media platform to enforce a content moderation policy that the platform came up with on its own. So, take the facts of Missouri v. Biden: a lot of these platforms created their COVID-19 misinformation policies early, in the first months of the pandemic. Some people might think that that makes the communications from the federal officials a lot less coercive. The policies pre-existed; federal officials were, just like any other user, identifying violations of a policy that they did not come up with. So the policy would be the platform's own speech. Another question that's left unanswered is whether it makes a difference if the government makes a very specific request or a very general request. You might think that the government targeting a specific user or a specific post is more troubling than the government generally saying, "Hey, we're worried about misinformation on your platform," particularly when there are concerns about viewpoint discrimination.  

So again, in this case, some of the communications between the White House and the platforms were really broad. In one email, Rob Flaherty reached out to Facebook to ask generally about their content moderation policies. And he said, 'we're really concerned that your service is one of the top drivers of vaccine hesitancy.' That seems like a pretty general expression of concern that's originating from the government and just expressing the government's position on the prevalence of misinformation. On the other hand, Flaherty reached out to Twitter about what might have been a parody account linked to a relative of President Biden, stating, 'please remove this account immediately.' And if that really was a parody account, this seems less like a government position and more like pressure for a specific action by the platform. Another of the many questions that are still open in this space is whether there's any difference between contact from different types of government officials or entities. You could have elected officials making these types of requests, administrative agencies, law enforcement agencies. 

You might think that communications from elected officials are maybe less troubling; they might echo the sentiment of their constituents and be more accountable to the public. On the other hand, you might think law enforcement reaching out to these platforms is particularly troubling because law enforcement has the power to arrest and prosecute. In the Missouri case, a lot of the examples from the lawsuit involve outreach from the platforms to the CDC, eliciting the agency's recommendations about whether specific pieces of information are true or false. That seems really distinct from communications, say, between White House officials and the platforms. Or even, there's this leading case from the Seventh Circuit, Backpage v. Dart, where a sheriff reached out directly to some payment processors to pressure them to stop processing certain sex-related transactions, and referenced adverse legal consequences. Because a sheriff is a law enforcement official, that might be a lot more concerning than the CDC providing neutral information.

Paras: This case may or may not be the vehicle to tee up a lot of these questions, based both on the facts and on the decision that we have currently from Judge Doughty. But what are the implications of the fact that we even have a sweeping injunction like this? What does that tell us about the state of the First Amendment here?

Mayze: I think it's hard to generalize from this decision, because the justification and the theory are so thin. And I think for that reason, it's pretty likely that the Fifth Circuit is going to step in and potentially change the scope of this. But I think, you know, these competing principles are showing up again and again in recent decisions. We've seen cases grappling with similar questions in the Ninth Circuit and the Second Circuit. And this is really just the first case to grant one of these really sweeping injunctions. So, I think developing a coherent theory of when these interactions violate the First Amendment -- what is too coercive -- is a really important challenge for the courts to be taking seriously. And we should just hope that the Fifth Circuit will take up the pen on creating that kind of functional framework on appeal. 

Paras: It'll be interesting to see, especially because in two other content moderation cases this term, the Section 230 cases, the Supreme Court really punted on the question of platform liability. I wonder if you have a sense of how the courts are thinking about these questions, and if they feel like they might have the expertise to be able to address them -- or does this require action from Congress? 

Mayze: That's a great question. I think that courts so far have been pretty deferential to the government's ability to step in and communicate with private entities. You know, the Ninth Circuit recently decided this case, Kennedy v. Warren, which had to do with Elizabeth Warren's public letter to Amazon about this book, "The Truth About COVID-19," which was a COVID-skeptical book. And the court, in that case, looked at factors like word choice and tone, whether the government official had regulatory authority, the perception of the recipient, and whether the contact referenced adverse consequences. 

So, I think courts are like reluctantly taking up this challenge, but not with the kind of analytical attention to detail -- especially this recent decision -- that we might hope for. And I think, ideally, here, whether it came through the courts or through Congress, we would have an outcome where there really were clear tests that, you know, gave latitude to the government to advise and promote its view; that gave the platforms the ability to enforce and elicit input on their own policies; and that still appreciated the speech interest on behalf of the public. So, I think it's a challenge, but it's a challenge that, you know, we will all be better off if we take seriously.

Paras: Yeah, we definitely will be looking for what happens in this First Amendment landscape. There's a lot to watch both in this case and with these trends, generally. Mayze, thanks so much for joining the show. We really appreciate it.

Mayze: Thank you so much for having me.

Paras: This episode is hosted by me, Paras Shah, with co-production and editing by Tiffany Chang, Michelle Eigenheer, and Allison Mollenkamp. Our music is the song, “The Parade” by Hey Pluto. 

Special thanks to Clara Apt and Mayze Teitler. You can read all of Just Security’s coverage of Missouri v. Biden on our website. If you enjoyed this episode, please give us a five star rating on Apple Podcasts or wherever you listen.