STAND with Kelly and Niki Tshibaka

Navigating the New Frontiers of Free Speech in the Digital Era - Mike Matthys on STAND

March 06, 2024 Kelly Tshibaka and Niki Tshibaka

Join the conversation as we sit down with Mike Matthys, a Stanford alumnus and Silicon Valley expert, to tackle the ever-evolving challenge of protecting free speech in our digital world. Drawing on his telecom insights and experiences with global censorship, Mike helps us unpack the work of the Institute for a Better Internet and brings to light the alarming trends of government interference in media. Our discussion delves into the stark figures from a recent survey, revealing the public's strong disapproval of such meddling and emphasizing that the battle for free speech transcends political lines, reminding us that it's a vital right for everyone to uphold.

As we navigate the complex terrain of content moderation and social media censorship, we take a hard look at how these practices are shaping user engagement and trust in platforms. Comparing the contrasting strategies of the EU and US, we underline the significance of advocacy and education in finding a balanced approach to regulation. We also explore the role of significant political events, like Brexit and the 2016 US election, in eroding confidence in media and official narratives, highlighting the necessity for a nuanced approach to handling information online.

Wrapping up, we critically examine the potential and pitfalls of AI in the context of censorship and discuss the promise of the Online Media Regulatory Authority (OMRA) as a solution to preserve online freedom of speech.  Matthys shares his insights on the importance of transparency, neutrality, and accountability in the age of information manipulation, and we ponder the partisan divide on censorship awareness. With Kelly and Niki Tshibaka, we encourage our listeners to participate in this crucial discourse and keep pushing for a future where diverse voices can flourish online without fear of unreasonable restraint.

You can learn more about the Institute for a Better Internet here: https://4betterinternet.com.

Subscribe to never miss an episode of STAND:
YouTube
Apple Podcasts
Spotify

STAND's website: StandShow.org
Follow Kelly Tshibaka on
Twitter: https://twitter.com/KellyForAlaska
Facebook: https://www.facebook.com/KellyForAlaska
Instagram: https://www.instagram.com/kellyforalaska/


Kelly Tshibaka:

Welcome to Stand, where courage advances faster than the AI revolution. I'm Kelly Tshibaka, joining you from Alaska's last frontier with my best friend, husband and co-host, Niki Tshibaka. A shout-out to our community of Standouts who help make the show possible. Let's invite more folks to join us: make sure to share this episode with friends or family members and subscribe to our podcast on your favorite podcast platform. We are also on YouTube at The Stand Show, and you can find us on social media under Kelly for Alaska. And remember, if you leave us a review of STAND with Kelly and Niki Tshibaka on your favorite podcast platform, you could be selected as our audience member of the week and receive a free Hydro Flask sticker. So make sure to leave your review; we will whisk one awesome sticker out to a favorite audience member.

Kelly Tshibaka:

We are so excited today to have a fascinating conversation with our guest, Mike Matthys. Mike holds a master's degree from Stanford University and is a 30-year veteran of Silicon Valley tech companies and the telecom industry, as well as a known subject matter expert on online content moderation and censorship. In other words, Mike is a champion of free speech and freedom of the press. In 2021, he co-founded the Institute for a Better Internet. This is a nonpartisan, solutions-oriented entity that's laser-focused on, quote, "fighting to protect citizens and news organizations from government encroachment on their right to free speech."

Kelly Tshibaka:

Let's remember, and I know our audience knows this: a threat to one person's free speech rights is a threat to everyone's free speech rights. You can't cancel one person's voice without setting up the justification for canceling your own. So, fortunately, Mike is one of those bold voices out there who's fighting to protect everyone's freedom of speech, regardless of their political, religious or other views. He understands that freedom for some ultimately ends up as freedom for none; it's got to be freedom for everyone. So today we're going to talk with Mike about big tech, big government, censorship and what we can do about it. What can just one person do? So, Mike, it's great to have you on the show. Welcome to Stand.

Kelly Tshibaka:

We're so happy to have you, and we know that you take a bold stand for us, so let's start off with some background. We're particularly excited about having you on today. I spent nearly 17 years holding government agencies accountable, protecting civil liberties, and overseeing and auditing IT operations, so I have some idea of the challenges you're facing. Niki was in private practice as a telecom attorney, so he's got a little bit of an idea of what you do. But you've spent 30 years in Silicon Valley tech companies and the telecom industry during an IT revolution, so tell us about that. What did you do, and how does it lead to where you are today?

Mike Matthys:

Well, thanks for having me. It's great to be here. I spent most of the 90s and 2000s living and working overseas, a couple of times in Europe, but particularly in Japan and Asia, building platforms for telecom companies. It was kind of the plumbing for the new internet services. I also worked on new wireless services before the Apple iPhone, for example, putting email and internet on a phone, which at that time was considered a completely radical idea. I have experienced cultures like China and Singapore with very pervasive government censorship on all sorts of topics that the government found inconvenient or didn't like. More recently, I started and run a small venture capital fund that invests in Silicon Valley startups and we also help these companies go to market overseas, particularly in Japan and Asia.

Kelly Tshibaka:

That's really extensive, awesome experience.

Niki Tshibaka:

Yeah, Mike, if we could just jump right in. Taking that experience, you are leveraging it now to ensure that our freedoms in this generation are protected. The responsibility of each generation is to ensure that we pass on the freedoms we've enjoyed, untainted, uncorrupted, to the next generation. We're going to talk about this organization. You've started Institute for a Better Internet in a moment, but tell us a little bit more about what led you and your partners to launch this organization and get involved in the fight for free speech. You referenced some of your experience in Singapore and China. Were there other things that sort of inspired you to move forward and get this organization going there, sure was.

Mike Matthys:

My representative in Congress is Representative Anna Eshoo, whom you may or may not know. Normally she's a backbencher who doesn't really make a lot of waves, but in 2021, Anna and another Northern California rep named Jerry McNerney wrote letters as members of Congress to highly regulated companies in cable TV and satellite TV, suggesting they ought to reconsider whether they should carry conservative news channels like Fox or Newsmax. Coming from telecom, I recognized that letters like this from Congress are actually pretty serious for these companies; they're very dependent on the government for approvals, for spectrum and all sorts of things like that.

Mike Matthys:

As you know, Northern California and Silicon Valley, where I live and work, is a very deep blue, liberal part of the country, and I asked some of my friends and colleagues what they thought about this, and it was universally viewed as a pretty negative thing by Anna Eshoo, which was surprising to me. But rather than just kind of whinge and complain, I decided to hire a professional survey company, and we created a survey of our area to find out what voters really thought. It was demographically set up: we are about 45% Democrats, about 23% Republicans, and the rest say they're independent, though they tend to be left of center. The results of this survey were as expected: 60-plus percent majorities of this very deep blue, liberal area thought it was wrong for the government to try to penalize or incentivize media about what content they should carry, with similar results for questions about online content.

Mike Matthys:

My two co-founders are longtime friends. We are nonpartisan, as you mentioned at the beginning. One of them is a very active Democrat who works on Democratic campaigns; he describes himself as a blueberry floating in the red sea of Texas. He lives in the Dallas area.

Mike Matthys:

He had some news stories from overseas doctors about COVID that he shared on Facebook, and they were blocked, so that triggered him. The other co-founder is an insider who's worked 15 or 16 years inside Google and Facebook, and at the same time he was a little unhappy about Amazon blocking a book called "When Harry Became Sally." So all three of us, at the same time, decided to work on this. That was now two and a half years ago, and we've done quite a few things since we started.

Niki Tshibaka:

That's fascinating, and it's great to see people coming together across ideological and party lines to stand together for one of those basic, foundational principles that has held our country together since the very beginning, without which we really wouldn't have the freedoms that we enjoy. I mean, speech is an overflow of thought, and to the extent you censor speech, you're censoring thought. So this is a very dangerous development, and I'm glad that people across party and ideological lines are recognizing it for what it is. Can you tell us a little bit more about the Institute for a Better Internet? What is it, and what does it do? We've got about two more minutes or so before the break.

Mike Matthys:

We're nonpartisan; we're a think tank here in Silicon Valley. We're focused on online content moderation and censorship and, more recently, on AI issues such as ChatGPT. We self-fund our activities; there's no donate button on our website. We talk quite regularly with the companies here for two reasons: one is to get a sense of what they would find workable as a solution, and the other is because we recognize that, with the way Congress is set up right now, basically neither of the two political parties would get much done. So we're also looking to the industry to see whether they could put together a solution.

Mike Matthys:

Even independent of the government, we've continued to do professional surveys, for example of social media users, focusing on their experiences with censorship and how they reacted.

Mike Matthys:

And, by the way, it turns out, surprisingly to me, that half of users have personally experienced a form of censorship, and a small majority of those said it made them reduce the number of hours they spend on a platform after they experienced it. We've also been investigating Europe. The EU is kind of ahead of the US in implementing laws around content moderation; they have something called the Digital Services Act, so the companies are faced with different sets of rules between the US, Europe and other countries. Over time we've developed some policy proposals, and now I visit Washington about four times a year to meet with members of Congress and their offices, some of the people at the FCC, and a few other groups related to this industry, to continue to refine the proposal with feedback and also to educate and advocate for a solution.

Kelly Tshibaka:

That's fantastic. We know that in order to have the courage to do what's right, you have to be informed, and, as you said, that's the education piece of what you're doing. And then to advocate: to have people who come from different backgrounds and political ideologies, all on the same page, with subject matter expertise from the front lines of the IT wave, saying, this is where we need to go if you're going to be out here on the frontier of social media platforms and emerging information technology. This is what needs to happen to walk that tightrope between the free exchange of ideas, the Constitution, and the fact that we're on other people's platforms, because I think legislators need to know how to strike that balance. We'll be back after this break on Stand with Niki and Kelly, talking to Mike Matthys about how big tech, big corporations, the media and Congress can take a stand without censoring us. Stay with us.

Weka Tactical specializes in combat-effective weapon systems and prides themselves on the best prices in the state of Alaska. Weka Tactical sells firearms, ammunition, gear, body armor, night vision and much more. They offer a price-match guarantee as well as a discount to all first responders. Visit Weka Tactical at 5630 B Street in Anchorage. Weka Tactical: Alaska's premier store for combat-effective weapon systems.

Kelly Tshibaka:

We're back on stand with Mike Matthys, the co-founder of Institute for a Better Internet.

Kelly Tshibaka:

Mike, I want to kick off with reading the institute's vision to our audience, because Niki and I found it both inspiring and sobering.

Kelly Tshibaka:

So your vision is working toward a day when fairness and progress flourish in America, because its citizens can access and discuss ideas and principles without worrying about government efforts at censorship and at shutting down public debates and discussions. This is beautiful, but we've got to tell you, we never imagined that in our country, which we call the land of the free, we'd ever even need to talk about a vision like this. This vision is essentially supposed to be captured in our Declaration of Independence and guaranteed by the Constitution; didn't we all learn this in grade school? And yet, according to the Twitter Files, the FBI has now assigned 80 of its agents and a task force to monitor social media and send suggestions for content moderation, also known as censorship, to executives over at Twitter, Google, Facebook, et cetera; the Washington Examiner published this in February 2023. We're also living in a country where the State Department funded the Global Disinformation Index, a British entity that has been blacklisting and working to defund conservative media. How did we get here, and how did this happen so fast? What are your insights on all this?

Mike Matthys:

You know, I think there were three events that kind of shocked the so-called elite establishment, that is, the insiders of media, government and academia. The first was Brexit, the vote in which UK voters shocked their establishment and government elites by voting to actually exit the European Union, partly because of its many rules on immigration and its general government by bureaucracy rather than by elected officials. The second big event, which was clearly the earthquake, was Trump winning the election in 2016, which deeply shocked both political parties' establishments and especially the insiders of our government and media institutions, the East Coast establishment, if you will. And so this created a big push to find a scapegoat, someone to blame.

Mike Matthys:

They ended up blaming Facebook for enabling Trump to win. And then there was a third thing that isn't as well known but I think was also influential, and that was an episode called Climategate, where academics were shocked to find out that many of their own most famous climate scientists were purposefully misleading the public and blocking studies by other scientists who had different results or different opinions about climate alarmism, whether it's right or wrong. So these three episodes shocked our media, academic and government institutions. How could Trump win 70 million votes? And they started making up a lot of crazy stories, like Russiagate, which assumed Russia and Putin were somehow controlling Trump and his campaign.

Mike Matthys:

As you mentioned, we now see the US government, and the UK government as well, funding efforts by some academic groups, the Atlantic Council, a group at Stanford called the Internet Observatory, and some other groups at the University of Washington, to kind of outsource the idea of censorship and influence what citizens are able to read and, as you said, to think, through what they read online. A whole network of mom-and-pop, kind of liberal media fact-checkers became an industry, led by these media institutions, to jawbone and influence Facebook, Google, Twitter, the social media platforms, to censor certain types of inconvenient content and even to ban some writers and content producers who regularly disagreed with whatever the current consensus narrative from our government and media might be. And so all these efforts get in trouble, because basically they're trying to arbitrate, for all the rest of us, which information is true and which is false.

Kelly Tshibaka:

Just to throw in there: why is that a problem? You hear these arguments out there: hey, I support free speech, I just don't want misinformation or disinformation. What would your response to that argument be?

Mike Matthys:

Well, the simple response is: who decides? The social media platforms have worked very hard, and I think have done a good job, to eliminate many types of harmful content, especially child porn and violence, spam, computer viruses, all this stuff. But under pressure from the left, they've really taken a wrong turn and started accepting these requests to moderate or block content based on whether it's true or false, which inevitably leads to a question: is it just that somebody disagrees with the content? How do you decide if something is true or false when you have two academic writers, or two people from different viewpoints, disagreeing about it? You know, I would like to be the decider of what's true and false.

Mike Matthys:

How did we end up where a liberal media fact-checker from the Washington Post or CNN gets to be the decider on behalf of a monopoly social media network like YouTube or Facebook? So whether something is just a disagreement or a debate, or whether it's somehow important to decide if it's true or false, we believe at our Institute for a Better Internet that the better test is not whether something is true or false; the better test is whether it's imminently harmful or not. Is it imminently harmful to a person or a group of persons? This dramatically simplifies and reduces the problem for these companies, but it also eliminates the tendency, and the opportunity, to censor perfectly non-harmful content based on disagreeing with it, because I have a different political viewpoint or a different scientific viewpoint.

Niki Tshibaka:

Just following up on all of that: what trends are you seeing in terms of online censorship right now?

Mike Matthys:

So, interestingly, very recently Twitter seems to have actually led the others to reduce their censorship of political speech. Facebook, Google and some others have followed Elon Musk and Twitter's lead, but quietly, laying off thousands of content moderator positions and shifting more to rely on AI and algorithms to moderate content. I think they may have concluded that just continuing to follow left-wing news censorship, especially in the lead-up to 2024, may not be good for their business or for their PR reputation. There still is rampant censorship, particularly on certain topics. For example, YouTube is still openly censoring any medical content that doesn't conform to the WHO, and the WHO is a flawed political organization of the United Nations that suffers the same political pressures as every other government-run organization.

Mike Matthys:

Some of these platforms are still censoring information on climate science that doesn't conform to what we might think of as climate alarmism, or just because it doesn't help the government promote its efforts on massive spending, or its push to kill off the energy industry before new green technologies might be ready.

Mike Matthys:

Even this week, we read that Biden has canceled whatever is left of the Alaska energy leases and the ability to explore for energy out there, in the parts of Alaska where the federal government still owns the land; I forget what it's called.

Mike Matthys:

And Google, today, if you read the rules, describes that search results are based partly on reputation scores of the news sites or media the results come from, and the sources with high reputation scores tend to be longer-running, mainstream media sources. That means they discriminate against younger, more right-of-center sources. I spoke a few months ago with a former very senior executive at Google, who said: Mike, we get an absolute fire-hydrant-sized torrent of requests from left-wing media and left-wing academics, and more recently from the government, to please remove content they don't like, as opposed to really a trickle from the right suggesting we remove content from the left that they don't like. As a management team, they proactively try to lean to the right, but they recognize that, inevitably, whenever they make any accommodation to these requests, they shift more and more toward the left, simply because they're dealing with this torrent of requests coming in.

Niki Tshibaka:

It's amazing that there are that many requests flooding into these social media platforms to silence or shut down people who have opposing views. I assume some of these requests are related to what you spoke about, the concern about imminent harm and injury and things like that, but I remember, during the whole pandemic, talking to a lot of different people whose jaws just dropped: what is this? What's going on? What's happened to our country? This isn't what we're about. This isn't the country that produced people like Thomas Paine and the founding fathers, who were willing to sacrifice their lives and their sacred honor for this fundamental principle of liberty. So, anyway, we're going to take a break. What I'd like to do when we come back on the other side of the break is talk to you about the impact, or potential impact, of artificial intelligence on censorship. I think that's going to be a fascinating discussion.

Niki Tshibaka:

Folks, don't go away. We'll be right back with Mike Matthys, talking about his pursuit, with his partners, to restore and protect freedom of speech and freedom of the press online. You can subscribe to our show on YouTube at The Stand Show or on your favorite podcast platform. Be sure to follow us at KellyForAlaska, and remember to leave a review so that you can get that awesome Hydro Flask sticker. We'll have one lucky winner each week. All right, stand by.

Kelly Tshibaka:

Welcome back to Stand with Niki and Kelly Tshibaka. We've got Mike Matthys with us, talking to us about censorship and big tech. Before the break, Niki teed up some questions about the AI revolution. Niki, take it away.

Niki Tshibaka:

Yeah, I'm really fascinated by this, Mike. Elon Musk has been sounding the alarm, he's been on the forefront of this, about the dangers of artificial intelligence. Surely there are a lot of benefits, but there are dangers in how rapidly AI is advancing while our ability to regulate and manage it falls way behind. You referenced earlier the bias in Google searches in terms of the algorithms, but the advances we're seeing in artificial intelligence could take that kind of bias to a whole new level and make big tech and government censorship even worse, so it is a very real concern. Could you explain in layman's terms, and I mean I'm no techie myself, how AI could be used to censor or suppress free speech in a more expansive and unregulated way than we're already seeing right now?

Mike Matthys:

Yeah, this is a great question, and it's what everyone is talking about on Capitol Hill and, of course, as you mentioned, what Elon Musk and leaders here in Silicon Valley are talking about. For me, the problem with AI is less about censorship and more about privacy, transparency and especially bias. ChatGPT and most of the others, what they call large language model AIs, are more like a publisher, more like the New York Times, meaning they get information from the wild, in this case the public internet, and then they create new content with their software.

Mike Matthys:

This is different from what Facebook or Twitter do. Those social media platforms simply share content generated by one user and make it available to other users, and that's where you see the censorship coming into play. AI engines need to "scrape," the verb they use, which means to gather online information in order to train their AI engines, that is, to teach the AI software how to answer questions based on content out there on the internet. So they literally go out and scrape and read data every single day from all sorts of sites, and especially from social media sites like Twitter, where all of our posts are public.

Mike Matthys:

Facebook's a little different; posts tend to be more restricted to your so-called online friends. These AI computers try to read posts, information and news sites at a very fast, machine-level scale, which is why Elon Musk recently announced a limit on the number of tweets someone can read in a period of time if they're not a blue-check user. He was specifically doing that to limit the massive scraping of Twitter by all these AI companies. There are literally 500 AI startup companies here in Silicon Valley, probably many more. So privacy is an issue: when AI scrapes my and your personal posts and tweets, it's invading your privacy without your permission. This is a very big part of what the EU laws are dealing with when they regulate AI. And the bias issue comes in because there's no transparency. We don't really know where ChatGPT or these other engines get their training data. If they gather their data primarily from right-wing news sites, you can expect the answers will generally reflect right-wing points of view, from Fox or Breitbart or Newsmax or wherever. If they gather data primarily from the New York Times and CNN and the Washington Post, you'll get more of a left-wing bias. But when they don't transparently publish where they're gathering the training data, it's mysterious. The engineers, of course, don't know exactly how the software curates it, but they do know where they're getting the training data, and that is something that should be transparently published and known. So that's kind of on the input side.

Mike Matthys:

On the output side, these AI platforms may also introduce very blunt tools that bias the output. For example, when I was in Washington DC earlier this week, I learned that apparently ChatGPT doesn't allow you to create an image of the Clintons that includes blood on their hands; it simply won't let you do it. But it will absolutely let you make an image of Trump with blood on his hands. So that creates an obvious, blunt type of bias. And, by the way, it turns out you can get an image of the Clintons with strawberry jam on their hands, which apparently is how people get around these kinds of things.

Mike Matthys:

But you can see how these biases are not very transparent, and if any AI platform, for example ChatGPT, does become a big monopoly and takes over, if you will, the village town square where all of us go to learn and have conversation, this will start to become a real issue. Today there isn't quite any one dominant monopoly; in fact, there's ChatGPT, apparently there's Free GPT, there's GOP GPT, Left GPT, so today it's more like the New York Times versus Fox News. But if somebody becomes a monopoly, then absolutely bias, neutrality, transparency, these pillars we've talked about, apply to AI just as well as they do to social media.

Kelly Tshibaka:

Well, can you talk to us about those four pillars and why they're important? These are some of the solutions you've put forward for protecting against big tech and government censorship.

Mike Matthys:

Exactly. We sometimes call them the four guardrails, because for the social media platforms and these other big companies, the ability to innovate content moderation rules is very important for them and for the industry. The four pillars are safety, transparency, neutrality, and accountability. Let me just hit each one of those four. Safety means that not all content should be published. There is some content that is truly harmful, and we can all think of examples: exhorting people to violence, child pornography, spam, computer viruses, doxxing people by sharing their real home address, stuff like that. That should be blocked. Transparency is kind of what it sounds like, and this is the low-hanging fruit the proposals, at least from Congress, focus on. It means platforms have to publish their content moderation rules, and they have to publish the enforcement actions that go with them: first strike, second strike, third strike, which rules you broke. The enforcement actions might be we'll label your content, we'll ban you for a week, we'll demonetize your site, whatever the enforcement might be. So if someone receives a notice that says, hey, your content is blocked or censored, at least you will know exactly why and what rules you broke. That has to be transparent.

Mike Matthys:

Neutrality means platforms need to avoid taking sides in content disputes. They should boost, share, or deboost equally, and not discriminate based on viewpoint or differences of opinion. This becomes very important. As you mentioned, our slogan is "Fairness and progress are achieved only when all voices are heard," and if you don't have neutrality, you're going to miss voices, and harmful things come from that. And accountability means that these companies, which are effectively monopoly platforms, need some independent third party to hold them accountable and make sure they stay within these guardrails, or pillars. This accountability entity would be outside of Facebook, for example, or outside of Google, but it should not be a government agency. It needs to be a non-government entity, because as soon as you have a government agency trying to regulate what should be blocked or censored, you've started an even bigger problem: constitutional issues with our free speech rights, where the government gets involved with its natural incentives toward political partisanship.

Mike Matthys:

So anyway, those are the four pillars, or guardrails. They all work together and still allow these companies to innovate their content moderation.

Niki Tshibaka:

That's fascinating, and I'd like to dig into that accountability guardrail a little more, because I think it's going to be of significant interest to our audience. You kind of anticipated a question I was going to ask. When you talked about needing a third-party regulatory body, I was thinking, oh no, I hope you're not thinking government, because wouldn't that be like the proverbial fox guarding the henhouse? Or even within these tech companies themselves: if they had their own quote-unquote accountability, it would be the same kind of thing, the fox guarding the henhouse. So we need a structure and a system in place that protects our freedoms and our ability to engage on a level playing field in that cyber town square, so to speak, while also making sure these companies are able to do what they need to do to enforce their content moderation rules, et cetera. Those guardrails are fascinating, but I really want to dig into the accountability piece.

Niki Tshibaka:

We're going to get into that on the other side of this break. So, audience, hold on, because what you're going to hear in this next segment is a really innovative, fascinating idea that Mike and his partners have for how we can protect each other's freedom of speech and freedom of the press online. You're going to love this, so stick with us. And while we're on break, make sure to subscribe to the STAND show on YouTube or on your favorite podcast platform. Leave us a review if you want to be eligible for a Hydro Flask sticker, and be sure to follow us on Kelly For Alaska. See you in a few minutes.

Kelly Tshibaka:

Welcome back to STAND with Kelly and Niki Tshibaka. We've got Mike Matthys with us. Today we're talking about the importance of protecting freedom of speech online, and of course that matters to us because we, courageously, won't be censored here on STAND. Mike, before the break we were asking you: these are all great solutions and ideas, but government can't really regulate it and we don't trust the Big Tech companies to regulate it. So what exactly is the solution here? It turns out your team actually has some ideas it's been talking to our national leaders about. Can you share that idea with us?

Mike Matthys:

Yeah. Well, just as a quick intro, it's very clear the government has its own agenda. The political party in power has its own preferred narrative on most of the big topics of the day, and I love your characterization: having our politically partisan government be the regulator of online content is exactly the fox guarding the henhouse, and it introduces irresistible temptations to muzzle the government's political opponents and people who lack power or resources. And, by the way, our Supreme Court has clearly stated multiple times that falsehoods and information that go against the government narrative are absolutely protected free speech, and the government can't block that. The reason, the Supreme Court has clearly said, is that yes, sometimes falsehoods are harmful, but it's far, far more harmful for the government to try to decide for us what is true and what is false. That is effectively North Korea; it's what puts us on the road to authoritarian government. And we've already seen evidence of that from the Twitter Files investigation, where people from the White House and government agencies specifically identified not only misinformation and disinformation but also, in their emails that became public, something they called malinformation, which they knew was true, but they still asked these platforms to block or censor it because it was inconvenient for their narrative, particularly regarding COVID policies. So, outside of government, we've explored quite a bit how other industries have solved this type of issue, from my time in venture capital.

Mike Matthys:

One model that I found is an organization called FINRA, which stands for the Financial Industry Regulatory Authority. It's a non-government entity. It certifies and approves financial advisors, stockbrokers, wealth planners, and the companies that provide these services, and, very importantly, it provides an independent mechanism to arbitrate disputes between millions of investors and these financial advisors or companies. FINRA has access to 8,000 arbitration judges, basically retired judges and lawyers who do this part-time. All FINRA arbitrations are conducted online using Skype- or Zoom-type calls to reduce time and cost; there's no travel involved. And clearly these judges are very capable of assessing whether harm occurred to an investor and whether the wealth advisors followed the rules and principles they're supposed to follow.

Mike Matthys:

FINRA is not funded by the government; it's funded by the industry using a very simple mathematical formula. And FINRA has real teeth. It has fined companies millions of dollars. It has banned many dozens of advisors and employees of these companies from the industry, or suspended them for a year. The government has no role whatsoever in appointing FINRA's leadership, hiring or firing anyone, approving its budget, et cetera. It is independent of the government.

Mike Matthys:

So we have developed a proposal, which we've been test-driving with the companies in the industry, with the FCC, and with Congress: an entity very similar to FINRA, which we call OMRA, the Online Media Regulatory Authority, and it would operate in a similar way. It would certify these large platforms, particularly for having transparency, publishing all their content rules, and ensuring that those rules fit within the four guardrails. Then OMRA would provide an independent, non-governmental place to go to handle appeals of the disputes that will inevitably come up between users and these platform companies. It could be financed and funded by the industry; the top five companies in this industry generate over $500 billion in annual revenue, so cost isn't really a fundamental issue for setting up OMRA.

Niki Tshibaka:

Interesting. This idea makes me think of 1984, the book a lot of us read in high school, George Orwell's 1984, where you have Big Tech and Big Brother trying to basically determine what is truth for everybody, and OMRA comes in and says, no, that's not going to happen. The people get to decide the truth for themselves.

Kelly Tshibaka:

Make Orwell fiction again.

Niki Tshibaka:

Exactly. So if you could, very briefly, give us a sense of how you would get these companies to submit themselves to OMRA's regulatory authority to begin with, and give us a practical example of how it would work, if we could spend a couple of minutes on that.

Mike Matthys:

Let me take the second one first and then get to how we get the companies to participate, which leads into what kind of government mandate or change to Section 230 would cause that. As an example of how OMRA might work: let's say a user, which might be a business, a news media site, your podcast, or even an individual citizen, has some content blocked, or they notice their content is no longer visible. You notice your podcasts are no longer visible to your normal-sized audience, so something is going on.

Mike Matthys:

The platform would be required, as part of these guardrails, to provide an online mechanism so that you can dispute this enforcement action and ask the platform to stop or undo it. Honestly, this is pretty much available today from most of the major platforms: there is a way to dispute, and you can describe why you think they made a mistake. Most enforcement actions today are initially done by a computer algorithm, and when you file a dispute, what that normally triggers at Facebook or YouTube is that a human employee will look at the content, and that employee will then render some decision. So in this case, say the platform looks at your content, refuses your request, and continues to block or censor it. The user can now opt to appeal their dispute to OMRA. They would have to pay a nominal filing fee, because we have to prevent frivolous appeals when we're talking about millions and millions of users posting every day, and describe why they are appealing the enforcement action. OMRA would have someone look at that and assess whether OMRA should accept it.

Mike Matthys:

Should OMRA arbitrate this dispute? For example, if it's clearly child pornography, OMRA will probably say, sorry, this is just not something we're going to get involved with; the platform was correct, and here are the content rules you broke. However, if OMRA does accept the appeal, then it would be assigned to an arbitration judge through some process where random judges are offered to the user and to the platform. The platform would definitely want to participate, because if it loses the arbitration it would face serious, material financial penalties, the same way FINRA imposes them in the finance industry. Then the arbitration would be conducted: the user and the platform submit information.

Mike Matthys:

The judge may decide to set up a real-time Zoom or Skype call to hear a debate about this, and then that arbitration judge would render a verdict based on whether the platform, remember, the platform is certified by OMRA, followed its already certified, published content rules and stayed within the four guardrails, and then the judge can render a penalty.

Mike Matthys:

Or the judge might go the other way and tell the user: they followed their rules, they're within the guardrails, and your content being blocked is acceptable. What's important is for OMRA to be able to handle the pressure of what the industry calls massive scale, because not only are there millions of users posting every day, but, if you think about it, there are literally thousands or tens of thousands of new topics being created every day online. So the scale is massive. OMRA needs a way to accept only the appeals that aren't already well dealt with through precedent, but are new or interesting areas that need to be more clearly defined, and there needs to be a process to prevent frivolous appeals, some kind of nominal filing fee, that sort of thing.

Kelly Tshibaka:

Yeah, all good points. I was thinking while you were talking: I wonder how many times users currently appeal decisions that are made by algorithms, AI, and bots. I can't even tell you how many times people I know say, oh yeah, I got put in Facebook jail for three months. It does raise the question of how large this enforcement agency would have to be.

Kelly Tshibaka:

It's a really great proposal, and it seems like the challenges of getting it set up would definitely be worth the benefit to our free speech rights, our First Amendment rights. I love the Supreme Court language that you captured. Did you have anything else on that?

Niki Tshibaka:

No, we've only got about a minute left at this point, so we'll probably need to wrap up here. But real quick, Mike: you've been to Congress. In 30 seconds, what's the reception been like?

Mike Matthys:

As we've said, it shouldn't be a partisan issue, but it has absolutely been a partisan issue. I have struggled and struggled to get Democrats to engage on this topic and to articulate to them that there is a risk that in the future they might feel put upon by censorship policies. Generally I focus on the Judiciary and Commerce committees in the Senate and the House, and the reception from Republicans is quite good, except people are a little bit down because they just don't expect anything really can be done, while the Democrats are just not ready to engage on this topic. It's almost like they can't imagine the world might turn over if the White House changes parties, or that things might just shift; a few decades ago, in the 1980s, it was liberals who were getting censored by a more conservative government establishment.

Kelly Tshibaka:

So thanks for being on the show, Mike. This is STAND with Kelly and Niki Tshibaka. Remember to subscribe to the show and share it with a friend. We'll see you next week.

Digital Age Free Speech Protection
Government Censorship and Media Surveillance
Online Censorship and Freedom of Speech
AI Threat and Censorship Solutions
Protecting Online Free Speech With OMRA
Partisan Divide on Censorship Awareness