The Decentralists

Episode 2: An Interview with Dr. Chris Rowell

August 20, 2020 · Mike Cholod, Henry Karpus & Chris Trottier

On this episode of The Decentralists, we are joined by Dr. Chris Rowell from the University of British Columbia. Dr. Rowell teaches at the Sauder School of Business and is an expert on digital trust. His research is focused on strategy and innovation, with a particular focus on blockchain and other technologies that have the ability to disrupt established information technology architectures.

What does trust have to do with all of this?

The digital world has changed the nature of trust. Companies like Uber and Airbnb have convinced many of us to share cars and houses with strangers—and we’re not just trusting strangers, we’re trusting Internet companies (which collect our data) with all of our personal information.

On the Internet, how does trust work? How does anyone create a digital system of trust? Who should we trust?

If the business model of most Internet companies depends on collecting our data and then selling that data to the highest bidder—how trustworthy can they possibly be?

Dr. Rowell notes that as decentralized ledger technologies (such as blockchain) become more important, the nature of digital trust is bound to change.

Henry: Hey everyone. It's Henry, Mike, and Chris of The Decentralists. Well, we've got a pretty interesting podcast for you this week. It's not just the three of us barking and yelling at each other. We actually have an interview subject, and not just any interview. We've got Dr. Chris Rowell from the University of British Columbia, and he's an expert on digital trust. I'll tell you a little bit more about Chris. Chris is a post-doctoral research and teaching fellow at the Sauder School of Business at the University of British Columbia in Vancouver. Dr. Rowell received his doctorate in science, in the field of technology strategy and venturing, from Aalto University in Helsinki. Chris's research is in strategy and innovation, with a particular focus on blockchain and other new technologies that hold the potential to fundamentally disrupt established ways of organizing. He is currently following the emergence of the blockchain field in British Columbia and, more broadly, studies how blockchain technologies can change the way we organize and interact, plus the implications of all of this for companies and industries worldwide. Chris, welcome to The Decentralists.

Dr. Chris Rowell: Thanks for having me.

Henry: So, we've established, Chris, that you study blockchain technologies and how they can change the way we organize and interact. But what does trust have to do with all of this?

Dr. Chris Rowell: It's a good question. I mean, that's something I've been trying to figure out for a little while now. My background is as an organization theorist, looking at how companies organize and create and capture value, especially in a digital context. I got really interested in trust because the digital world that we've been moving towards for the last 20 years has changed trust fundamentally, in a lot of different ways. It's allowed us to trust other people to an extent that we wouldn't have before. So, think about things like Uber and Airbnb: all of a sudden you're getting into a car with a stranger, staying in a stranger's house, all that kind of stuff.
 
So, the ability to now interact with other people in a much broader context than the smaller worlds we used to live in has been enabled by the digital age. But relating this to blockchain, I started to question: what are the externalities of this? A lot of the ways that these organizations secure trust between different individuals, connecting people online so you can jump in their car and that kind of stuff, involve collecting a ton of information. So, we're also trusting that company to use that information in ways that benefit us, and to secure that information in ways that protect our privacy, that protect the integrity of the company, all those kinds of things. It's a long-winded explanation, but it becomes fairly complex when you think about all the different ways that we're actually trusting.
 
Mike: So, Chris, let me jump in here and ask you a question. Let's step it back a level, because I know we talked a little bit in the preparation for this call, and I'd just like to ask you: what is your definition of trust? What is trust itself? You just talked about some great examples with Uber and Airbnb and folks like this, where in effect what they are acting as is trust brokers. I mean, you can trust Airbnb to properly assess that the person whose house you're going to stay at is not an axe murderer or something like that. Or take Uber: I can download Uber right now. I've never used it; I have no idea who any of the drivers are. I press a button, somebody shows up at my house, and I trust that they're going to drive me safely to the airport. So, go ahead, Chris, I'm interested. What is trust?
 
Dr. Chris Rowell: So, an easy definition comes from Rachel Botsman, who's been studying this (I think she's at Oxford) and has talked about it quite a bit. Her definition of trust is that it's a confident relationship to the unknown. So, it's the ability to not know what's going to happen next, but trust that the thing that you're hoping will happen will eventually happen. For a long time, we just placed trust in individuals; that was the analog context, outside of the digital. We also place a lot of trust in institutions: knowing that people are going to follow the laws and things like that, and that those institutions will monitor and ensure that happens. And now we're seeing a crisis in institutional trust in particular, a massive crisis in terms of trust in governments and large companies.
 
The digital age has brought us another kind of trust, which is that we now have intermediaries that allow us to broaden who we are trusting. So, we can be confident in interactions with people online or in person, facilitated by things like reputational systems. But at the same time, we're still trusting those intermediaries with the information that's put there: that our ratings, the things that we post, are accurate and haven't been misrepresented, or that those people are real in the first place. Like I say, I have been listening to some of your episodes in preparation for this as well.
 
I think you've done a fantastic job of highlighting those questions: who are we interacting with? You start to talk to someone on Facebook; are they even real? I mean, that's a good question. So, we're actually trusting these organizations now. The interesting thing for me is that trust is always subjective. Trust is the way that we interact with others, and it's always contextual. I'm interested to an extent in how that's changed between individuals, but I'm really interested in the externalities of that. What are we giving up, and who are we actually placing trust in? On this call right now, we're trusting a bunch of different platforms and whoever's running the conduits for this.
 
Kind of implicitly. But we're also trusting that the data that's produced is going to be used to serve us and not disseminated in ways that we don't want it to be. So, that's what I'm really interested in: this trust model that we've got to now, this intermediary model. When I interact with other people online, I sort of know what I'm doing, but the externality of that, all the things I give up to do it, is something we haven't really thought about a lot in the last 20 years. Over the last five years in particular, we're starting to see that there's a surveillance economy that's been built up around it, and it's creating some significant problems. These externalities are actually the core business model of a lot of these platforms: to collect way more data on me than I intended to give in the first place.
 
Chris: So, Chris, I just have a question here: how are blockchain and trust related to each other?
 
Dr. Chris Rowell: So, that's a good question. I'll use the term blockchain as a proxy for all sorts of distributed ledger technologies (it might not be a blockchain that does this in the future), but blockchain as a distributed ledger technology is something that can enable us to interact with a network in a digital context and trust the output of the system. So, we trust the information that's put there without necessarily trusting the individuals, or all the individuals, in the system. When you add data to the blockchain, whatever it is, you can verify that it hasn't been changed. You can look at the network, you can see how it's working, and you can see that the information is still accurate.

You can also interact and process transactions with others in the network without necessarily relying on third-party intermediaries. So, the thing that I'm really interested in is: how does that change what it means to trust in this digital context? Because now we can place more trust in technology. I'm not saying we can only trust the technology; we still need to trust other people in the system to an extent. But the balance has shifted quite a lot, from placing huge amounts of trust in intermediaries to trusting a lot more in this technology that allows us to interact in different ways, basically.

Mike: I want to go a little bit deeper on that point. One of the interesting things I took note of when we prepped for this call, Chris, was a comment you made about trust, where you said that trust is in effect something that will happen as the result of an action. So, in effect, that is a cause-and-effect relationship: if you trust somebody, you will do A; if you don't trust them, you will do B. And there's this idea of blockchain being an arbiter of that trust. The original Bitcoin white paper was essentially proposed as a solution for two people, who maybe did not even know each other, to be able to transact in a trusted way. So, I guess what I'm trying to figure out is this: blockchain, to me, seems like a very simple, elementary, and public way to ensure that two people can transact in some way where one cause creates a specific effect. Is that the form of trust you're referring to?

Dr. Chris Rowell: I think so: to know that the transaction's been processed and not have to worry about trusting a platform to authenticate it, which goes back to the externalities I talked about before. There are definitely positives to that. Blockchain enables you to program certain types of transactions and then know that they can be verified by a distributed network, know that that's happened, and trust the information that's there. Obviously, you can only do the things that you've pre-programmed, and that's why I said you can't just trust technology. When Bitcoin and those early blockchain systems came out, people were talking about trustless networks, which I think is a bit misguided, in the sense that you can only program something very specific. If anything unexpected happens, then of course you're going to have to trust someone in the world to work around it. We can't decide exactly what's going to happen beforehand, so we're still reliant on others in the network. But blockchain provides a different kind of architecture for securing trust, for knowing that information's accurate without having to place as much trust in third parties.

Chris: Speaking of which, Chris, one of Silicon Valley's famous mottos is "move fast and break things." Unfortunately, one of the things they've broken is trust.

Mike: Yeah. That's fun.

Chris: Specifically, Facebook has done quite a lot of that, but I'm also seeing a pendulum shift. Why is the pendulum shifting towards trust right now?
 
Dr. Chris Rowell: So, I think there are a few different things here. To be honest, like I said, the core business model of Facebook is to collect as much information on you as possible, to aggregate and analyze it, and then sell access to it or do different things with it, and people are starting to become aware of that, because now when there's a data breach or a hack, there are negative consequences. So, there are stakes to them collecting all this data. And now we're seeing regulatory shifts as well. You see GDPR, and different regulators are looking at how we can actually ensure people have real privacy and have the right to be forgotten.

So, I think the discourse is shifting and people are becoming increasingly aware. For a long time, we were kind of in the dark about how much these platforms actually knew about us, and now we're starting to figure it out and thinking, actually, this kind of thing matters. If you were on Ashley Madison, for example, you were on a platform to cheat on your spouse, and then the whole world knows about it all of a sudden, and not only do they know that you were on that platform, they know that you were probably talking to bots most of the time. So, these are the kinds of stakes when you think you're just going into an interaction with others, but you're actually trusting this platform.

That platform is not only collecting its own data on you; it's buying data from all sorts of other places to build this picture of you, and then selling others access to you so that they can advertise. So, I think it just took a while for us to figure out what that model is, and we're starting to see these different repercussions of it. The thing that really interests me about blockchain (and perhaps it might not be blockchain, but blockchain in combination with other technologies, or some other kind of distributed ledger technology) is that it holds the potential to shift the point of aggregation from these companies, these intermediaries, to the individual that produces the data in the first place. I think that's what's really interesting: now the individual becomes not only the owner of their data but the custodian, and they can decide what to do with it. That's what's super interesting for me, because it can shift what kinds of models can work there.

Mike: I think that's an interesting point, Chris, because one of the things that has always fascinated me about the study of trust and the definition of trust is that trust is, to me, something subjective. Who you trust may be different than who I would trust, and who Chris would trust, and who Henry would trust, and our levels of trust can be different. Yet what's happening is that we are all being forced into a kind of vanilla format of trust, because there's no mechanism in a centralized system like Uber, Airbnb, Facebook, Google, the entire centralized Internet, that can accommodate that. It can only function on basically one trust model.

Enter your data, subscribe, and then share; you can change your settings and things like this, but by the time you're changing settings, you've already trusted the platform to take your data and set up an account. So, in this type of scenario, it seems to me that one of the things that needs to happen relates to what we talked about: trust models. A centralized system is inherently objective, right? So how do we make the objective subjective through using, say, trust modelling or things like this?

Dr. Chris Rowell: So, when I talk about trust models, I think about the organization of trust: how is trust produced? Like I said, we went from this peer-to-peer trust, where you had to know someone and we gossiped about each other (Dunbar said there were about 150 people you could possibly keep track of at one time), to institutional trust, where we trust institutions: we know there are laws and norms and all those kinds of things, and we can predict people's behaviour based on that. And then to this intermediary type of trust, where we know that an intermediary is authenticating people's identity, managing those reputation systems, those kinds of things. That said, the intermediary model is highly centralized, as you pointed out.

So, we're really placing a huge amount of trust in that intermediary: not only to authenticate the individuals interacting with the system, where they're acting as a conduit for trust in one another, but we're also trusting the intermediaries themselves, as I mentioned, and the information that we put there. If you upload a video, we want to know that the video hasn't been changed, and that's going to become increasingly important when we have things like really credible deepfakes, and news services and publishers are facing the same issue: we want to know that things actually happened. So, I think there's a relationship with trust in the information itself as well, and then it links to truth. Trust is subjective.

We can decide to trust more or less, but I think there is truth. I think truth is objective to an extent. We can say that things happened; we can say that we have a certain number of confirmed COVID cases in an area. That's a fact, right? So, we need to be able to authenticate facts as well, and authenticate that this is information that's been put there by someone we do trust and that it hasn't been changed. That also links in with things like blockchain, because if we can authenticate who actually put information there, we know that once it's there, it hasn't been changed. It all kind of gets messy, and I wish I had a cleaner vocabulary for talking about all this stuff.

But basically, like I said, the question is: what do these trust architectures look like? Where are we getting trust from? Is it through other people who provide reputation? Is it through our direct interactions? Is it through institutions? Is it through intermediaries? Or is it through some kind of technology that can track and provide the provenance of data, authenticate identities, all these kinds of things? All of this is a combination of technological, informational, and social trust, if that makes sense. They're always there in different amounts, and I don't think you can ever purely have one or the other. It's always a combination of things.

Mike: So, an interesting point there, Chris, is what's starting to happen now if we look at some recent phenomena. Let's take the last six to twelve months, where basically the COVID pandemic has exposed the lack of trust that people have in institutions, and you could combine that with some of these more extreme things like QAnon, which is this phenomenon of an anonymous intelligence officer who is apparently inside the government establishment, and it's a big conspiracy against Donald Trump and his presidency. So, there are people out there who trust QAnon, and they've never met this person, because nobody knows who it is. Social media has basically created a platform for this, I guess because we've got a history of relying on them as arbiters of trust.

Now what we've got is that we're questioning that trust, we're questioning what we see, and so, to me, that trust model is broken. So, what is the risk to our society of this broken trust model continuing the way it is?

Henry: Yeah. If it fails.
 
Mike: You know, because we've now got people who are complaining about being censored, or not being censored. We've got people who say the platforms aren't doing enough, and the Stop Hate for Profit and Black Lives Matter movements are saying that these platforms are not doing enough to control the dissemination of information, which is inherent to trust.

Henry: And on top of that, you've got some people who trust one particular platform, others who trust another, and others another still. So, which is right? How do you make sense of it, Chris?

Dr. Chris Rowell: So, this kind of gets into the weeds of what a good trust model is, I guess. Who should we trust? We have reputation systems for trusting individuals, for trusting an Uber driver, something like that, but we don't have reputation systems for trusting a platform. And I think that changes when you change the point of data aggregation, or where and how information is stored and presented. At the moment that's highly centralized: you put information somewhere, they own it, they present it in ways they see fit, and they're the ones who decide, well, this is maybe not true. You can see Twitter starting to do that.
 
Facebook maybe resists a little bit, but it's starting to do that as well: deciding what's true and what's not. With a more decentralized model, in the same way you could decentralize the point of data aggregation back to the individuals, you could have a network of trusted actors with reputational systems for pieces of information as well, and for platforms and for actors, just like we do now. At the moment we have this sort of aristocracy of information, where these huge platforms just get to decide; they're the ones showing us or curating this information and deciding what's true and what's not, and it's pretty difficult for them to do that as well.
 
They spend a ton of money on it, as you guys have pointed out before. So, I'm not sure what could happen in the future, but what I'd like to see is something around this: like I said, we know that truth exists, or I think truth exists. I think there are facts, and if we can figure out a way to crowdsource what the facts are, then different informational points will point to whether something's actually true or not. If it's only QAnon saying something, then we can pretty much disregard it. If lots of different people are saying something, lots of these various actors that have reputations themselves, then perhaps we can point to something and say, well, that's probably correct, or that probably actually happened. That shifts the responsibility from one single organization to a network. So, I think there are different ways.
 
Chris: So, Chris, it's interesting that you're speaking about all this, because we can't really talk about trust without talking about social capital. To give a concrete example: the way Twitter establishes social capital on its platform is by offering blue checks to certain individuals. At first it started as a way of account verification, but now it's a way to establish trust on the platform: who is deserving of a platform on Twitter, and further than that, who deserves space on the newsfeed. But last week, if you recall, there was a problem with the blue checks: basically, a scam artist got hold of blue check accounts and started tweeting out Bitcoin scam links, so Twitter had to shut down every blue check account for almost an entire day. Which, in my mind, basically compromises the whole point of blue checking. So, clearly, I think this is a problem with centralization. If Twitter gets to decide who is worth trusting, and it's so easy to compromise, how do we move away from that model?

Dr. Chris Rowell: Well, I think you guys have mentioned this before, but Twitter is looking at more decentralized models for the underlying trust architecture of their platform. They maybe don't want to hang onto the data themselves, and they could operate on top of a more distributed system. There are different ways to verify identity, and if that's what you want the blue check for, there are various ways to do it. If Twitter were to decentralize the underlying architecture for how information is collected and stored, they could then provide some kind of curation service on top of that. It would look like Twitter, but then you could pick Twitter 2, Twitter 3, Twitter 4, whatever else.

And each of them could have their own way of issuing blue checks. Some might come from the provider of the interface themselves; some might be, like I mentioned before, reputation-based, so they might be voted on by different actors. And then you might have something like we saw in Black Mirror, where you end up with a reputation system in which your rankings are worth more than others', depending on where you sit. I'm not sure; I think there are lots of different ways we can do it. But I think we are moving away from that model already. We're seeing a shift from some organizations that are realizing there's a risk in these highly centralized models, because they open themselves up to all sorts of legal and economic risks from holding that data.

And huge costs: increasing costs based on what's happening in the regulatory context in different places. Then there are others that are doubling down: these massive companies, like Google buying all this healthcare data, and Amazon and Microsoft, etcetera, and Facebook doubling down on data aggregation. So, maybe there's a shift in both directions. But what I'm hoping is that, like I said, the discourse has shifted, so people are starting to value personal privacy. And when these new models come out, it's like: actually, I have control over the data that I produce and can decide whether to engage in these different networks. Maybe I can carry my data across different social networks pretty seamlessly, because all I'm doing is plugging my underlying data into a different interface.

I could even do it simultaneously; the switching costs have dropped significantly in this new architecture. So, I'm hoping something like that becomes available and working. I have the luxury of not worrying about the technical side of things because I work at the business school, so I'm talking 10, 15 years ahead here, hoping that someone's going to figure it out by then. But what I'm hoping is that the discourse will continue to shift, so people will value that and they'll just move over; they'll just say, that kind of makes sense. There might be legal pressures, regulatory pressures, for them to do that too. Companies like Facebook, at the moment, are continuing to disregard things like personal privacy to a large extent, continuing to aggregate massive data sets and leaving themselves open to these vulnerabilities.

If we see more and more hacks, these large actors that are persisting with this centralized model, a really ramped-up, on-steroids version of it, are kind of driving themselves off a cliff, because they're repeatedly showing us what the downsides of that model are: getting hacked, breaking laws, and flying in the face of this shifting discourse.

Chris: And further to that, even when companies like Twitter and Facebook try to establish trust by doing things like fact-checking the president's social media posts, they get themselves into hot water, because now the president wants to enact legislation that blocks them from doing so.

Dr. Chris Rowell: Well, I think you guys have covered this quite a bit: the publisher-versus-platform debate. Should they be doing this in the first place? I don't know. These are companies, so I think they should be able to decide who gets to say what, and they're running a business, so you can see why they want to minimize hate speech: it's going to drive people off the platform. But they only want to manage hate speech to an extent, because some of it gets clicks and some of it goes too far. So, they're still trying to find that balance, and I think that balance is broken, as you guys have pointed out previously as well.

Mike: It seems to me that this is kind of the heart of it, right? One of the challenges with trust, especially in the digital realm: there's the idea of trusting somebody, like those workplace team-building exercises where they tell you to fall backwards into your colleagues' arms and just trust that they're going to catch you. There's that type of trust, and then we've got this challenge right now where these platforms become the arbiter of trust. So, you trust Facebook to give you a good experience for you and your Facebook friends and not have it filled with hate speech and stuff.
 
The truth of the matter is that the real problem is that trust in the digital platform is bound to the platform, and not to a digital version of me, my digital identity. I have a separate Facebook digital identity that has a certain level of trust with Facebook. I have a different digital persona that has a trust relationship with Airbnb, and so on, and these trust relationships, on a centralized scale, are not portable between organizations. So, you get into this situation where trust for the average person is hard to define anyway in a digital environment, at least in a centralized one, where my ability to trust somebody is directly related to how I interact with them and what that platform is.
 
 Dr. Chris Rowell: Yeah, absolutely. I think

Mike: Right. So, what do we need to do to take trust away from a third-party actor and take it back to ourselves?

Dr. Chris Rowell: So, in my work, I look at technical solutions to that: shifting that point of data aggregation so you have control over your identity and your data. I know you've worked a lot on this, Mike, as well. I think there's a relationship there between trust and accountability, and accountability depends on the type of interaction, the type of platform, as you mentioned. So, I guess it depends what the stakes are for you as an individual: how trustworthy do you want to be on these different platforms? And like you said, perhaps accountability is lower if there's no portability of your identity across platforms, because you can get away with something in one area and it doesn't affect the rest of your life. I don't really have a good answer to how we build digital accountability for people. There are ways to do that; there are ways to link people's identity across platforms. But I think it depends on what the stakes are, and some people are getting away with a lot.

Mike: Let's try to put the stakes in context then. For the average person on social media, there's an influencer model where they can make money. Recently you've had 200 million people in India, for example, many of them building influence networks on TikTok because of that particular platform, and they're actually monetizing their TikTok influence. It's the same with Facebook and Twitter and all of these folks, right? The model the social media companies have is to drive more traffic, which drives more advertising: if you drive more traffic, you'll get more ads on your website, and we'll give you a cut.

Okay. And now, because trust is basically broken on a geopolitical level, I would argue, you have the weaponization of social media, right? India had a border skirmish with China three or four weeks ago, I don't know if you recall, up in some remote part of the mountains, and as a result India banned TikTok and 58 other apps made in China from being used and sold in India. So, now you have a geopolitical spat causing a tit-for-tat response that is affecting not just trust but the platforms themselves and people's ability to trust those platforms to deliver the service they want, because even the platform now has no ability to guarantee delivery. TikTok can't say, you can trust us to provide a decent platform for you to exchange goofy videos and make some money.

But now all of a sudden somebody unplugged TikTok. So, when our trust, because it's connected to our data and our networks, is now the subject of a data cold war, where do we even start?

Dr. Chris Rowell: So, I think there are multiple levels. With something like TikTok, you can trust that if you put up a video and it gets a bunch of likes, however you're graded on TikTok, you're going to get the money, because that's core to its business. I can't really trust what else TikTok is going to do with my data behind the scenes, in terms of selling others access to my data, or collecting different types of data on me from other platforms and third parties. I think this goes back to the original ideas of online trust, which were like, oh, I can now trust this other person.

Or I can trust this business; I can put my credit card online. Isn't this cool, because we have these different ways to trust other parties, because we have these new conduits for securing those interactions. But those conduits have their own agendas, and what we're seeing increasingly is that these conduits aren't neutral; that's the problem. They come from a particular political jurisdiction, from particular countries. So, TikTok is Chinese, and it's collecting huge amounts of data on people. Beyond that, it has its own agenda as a company, but it very likely has its own political agenda too, and it can be used by other jurisdictions, like India: if we ban TikTok, it's going to hurt the Chinese government in some way.

So, that's why I advocate this new trust architecture. We have conduits for doing stuff that we could never do before, and that's creating these different types of interpersonal trust around the periphery: users, complementors, different companies engaging. That's awesome, but the conduits are broken. The conduits are the things we're placing too much trust in, and they have their own agendas. So, what I'm looking at in blockchain's trust architecture is a more neutral set of conduits, basically, because you can have them distributed as well. If we decentralize the conduits, it becomes more like the traditional internet, where no one owns or controls it. Of course, some people own different parts of the internet; we have Amazon hosting most of the websites and servers and all that kind of stuff, but it's still more of a distributed system.

Henry: So, you agree that probably the only way to get away from the mess that we're in with centralization is to have a decentralized system, and I guess inherent in that is that each of us owns and controls our own data and information.

Dr. Chris Rowell: I think that's the end game. I really think so; I'd love to see that happen.

Chris: I just want to bring this up, Chris, because while India might be blocking TikTok and China might be blocking Facebook, one thing I noticed is that they're not blocking the email protocol, or the HTTP protocol, or the WordPress CMS. And I'm seeing a key trend here, which is that if it's an open standard, governments and people around the world tend to trust it more, but if it's owned by a singular institution, naturally, governments want to remove access.

Henry: And corporations want to make money.
 
Chris: Exactly. So I just wanted to ask, Chris, why do you think people, institutions, and governments tend to trust the email protocol more than they trust things like private social networks?

Dr. Chris Rowell: Well, I think it's back to a similar thing. The email protocol, at least, is fairly agnostic, right? It's something that can be picked up, and then you can build security around it. So, your emails are being secured by a particular vendor, and if I'm on Gmail, then Google's reading my email, fine, but it's using the same underlying protocols, so I can send emails to people with other accounts, like Outlook or whatever else. So, I think there's a lot to be said for having the underlying protocols open-source, because then we can use them in different ways and trust that. I guess it comes down to transparency as well: you can look inside and see how it works, versus a company that's vertically integrating all of that, where you really have no idea how things are getting sent around or where information's going.
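To illustrate that protocol agnosticism: the email message format is an open standard (RFC 5322), so a message composed by one implementation can be parsed by any other, regardless of vendor. A small sketch using Python's standard library (the addresses are made up):

```python
from email.message import EmailMessage
from email import message_from_string

# Compose a message in the open RFC 5322 format; no particular vendor is involved.
msg = EmailMessage()
msg["From"] = "alice@gmail.com"
msg["To"] = "bob@outlook.com"
msg["Subject"] = "Open protocols are vendor-neutral"
msg.set_content("Any compliant client or server can parse this.")

wire = msg.as_string()  # the text that actually travels between providers

# An independent implementation can read it back from the wire format.
parsed = message_from_string(wire)
print(parsed["Subject"])  # -> Open protocols are vendor-neutral
```

Because the wire format is standardized, Gmail, Outlook, or any homegrown client can interoperate without asking each other's permission.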

Chris: So, you would say that transparency is a key metric for establishing trust.

Dr. Chris Rowell: It depends how you define trust. Transparency almost negates the need for trust in many cases, because if you can see exactly how something works, the full history of something, then you don't really need to trust anyone around it. So, I think transparency is a good thing in many cases; it's really important, and it reduces the need for us to place trust in parts of the process that we can't see. Trust, I think, comes up when you don't have transparency, if that makes sense. But there are also instances where you don't want transparency, right? Like in a bidding process. We want to trust that the bidding process works, but if you had full transparency, then bidders could just collude.


So, we want to trust in the process, or in whoever's facilitating the process, or in the technology we do it through. Public procurement is a good example of this. We need all this PPE, we need ventilators; who's doing this? Governments are doing this. Are these governments corrupt? In many cases, what does the process look like? So, you want transparency in some parts of that process, so you can see it's working, but you don't want transparency in the bidding, like what the bids specifically are. So, there's a bit of nuance there: there are certain things you want to make transparent, and other things you want to conceal, because that's how it should work.
 
But if you had a bidding process on a distributed technology like blockchain, you could make bids, they'd be kept private, and then the result would be revealed. We can see how the technology is set up, we can see how the bids have been made, and post hoc we have transparency around exactly what happened during the process, because it's all timestamped transactions, but during the process the bids stay concealed. So, I think there are different ways you can set it up: transparency inserted in some places, which brings accountability, while maintaining opacity elsewhere, because there are certain things you don't necessarily want to reveal at the time.
 
Mike: If we want to take that to the next level, the question I would have is this: when you combine the sheer subjectivity of trust with this weaponization of trust, how does decentralization help us change this disconnect? Because at the end of the day, if you look at what's happening with social media, this intersection of fake news, the world falling apart, and the weaponization of everything, platforms are now being forced to change or alter their trust models with their users in order to accommodate geopolitical realities.
 
Right? I mean, one of the things happening with TikTok, and it was funny, we had this discussion the other day, is that TikTok is now allegedly looking furiously for an American company to buy them, because they know the only way they potentially survive is to remove the doubt that the Chinese government is colluding with them, by being owned by Facebook, or Twitter, or Google. So, the platforms that you trust to establish trust are now able to be coerced on a business-model level. Is there any solution but decentralization for trust on an individual level?


Dr. Chris Rowell: Yeah, I think decentralization does a couple of things. Firstly, it lessens the trust that you have to put into any particular provider like TikTok, because if you could own your own data, you could choose between different providers at any point in time. This is a kind of future world where there are no switching costs and you can carry your data across platforms. But if you're able to do that, then you can choose which provider, whether it's TikTok or something else, you want to engage with. It gives us more options, basically, and I'm hoping it creates more of a meritocratic, competitive environment. If you eliminate those switching costs, then these organizations like TikTok or Twitter that sit on top, providing curation, putting in the blue checks and all that kind of stuff, have to provide additional value for you to want to use them, and there's more competition between them. As it is now, if you're on Facebook and Facebook has 10 years of your history, there's a massive switching cost to pull that data and take it across platforms, so there's far less choice. I don't think anyone really consciously makes a choice to trust a platform at the moment.
 
I think we're starting to get there, but only now that some of these platforms have shown they're untrustworthy are we actually starting to think about it. No one really asks, do I actually trust Facebook to manage my data and to secure itself against breaches and all that kind of stuff? There's been a transition. At first, we didn't realize how much data they had on us; people weren't thinking about that kind of stuff because they perhaps didn't understand what the business model was. Now we're realizing how much data they have, and we're in the stage of, oh well, maybe I've got nothing to hide.
 
I don't care too much. And I think we're starting to get to a point in the broader discourse where it's like, okay, they have a shitload of data on me, this matters, there's a real downside if it gets out, and maybe I can't trust them, because we keep seeing evidence that I can't trust them. So, I think it's a longer transition; it's been happening for 10 years or so, and it's going to continue for the next 10, and that's what's going to push us to more decentralized models. On one hand, you have users becoming aware of this; on the other hand, you have regulators saying, all right, if you're going to hold onto this data, you have to be really sure it's not going to get into the wrong hands, you have to use it in correct ways, and you have to delete it, and show us you've deleted it, if users ask you to. So, I think we're starting to see pressures from different places toward a more decentralized model, and I'm hoping those decentralized models will give us more choice.
 
Henry: Well, that's a good thing, because people need to be more selective, and over time, when they get used to these technologies, they will decide: which should I trust, or can I trust any of them? So, Chris, I've got an open-ended question for you, and I know this is what you spend your days thinking about. If you could envision an ideal platform, or a way to change what we have now, or to completely start something new, can you lay it out for us? What do you think the ideal path forward would be for social media in the future?
 
Dr. Chris Rowell: Yeah, sure. For social media, I would say something, like I said, maybe blockchain, maybe not, that could allow you to own, control, and even have custodianship of the data you produce. So, below the platform level, we'd have a protocol that allows you to control the data that's about you and that secures your identity. On top of that underlying protocol, which is connected to your identity and your data, you would have the platforms. These would be different providers that come in and say, all right, I'm going to provide an interface.

That's good for tweeting in 140 characters or whatever, and you can jump on this, and this is the value proposition of engaging with my platform. They would provide this service that you plug into: okay, I'm going to use this service now because I know the network there, I know how they curate stuff. And then I'm going to go over to this other platform with the same underlying account on this distributed protocol and plug into that for something else, some other kind of social media where I want to watch videos or whatever. So, you would have these providers that work on top of an underlying protocol.

In the current model, the platform is doing all of this: it's securing your identity, it's allowing you into the platform, it's allowing others into the ecosystem, and it's facilitating these transactions and interactions. So, this is a breaking up of that. You've got an underlying account that no one owns or controls except you, and you can plug it into different interfaces, different applications, that let you do certain types of things with that profile.
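A rough sketch of the separation Dr. Rowell describes: a self-owned profile living on an underlying protocol, with interchangeable application interfaces plugging into it on top. Every name here is hypothetical, and a real system would involve cryptographic identity rather than a plain object:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """A self-owned account on the underlying protocol; no platform owns it."""
    user_id: str
    data: dict = field(default_factory=dict)

class MicroblogApp:
    """One of many interchangeable interfaces that plug into the same profile."""
    def post(self, profile: Profile, text: str) -> None:
        profile.data.setdefault("posts", []).append(text[:140])  # 140-char limit

class VideoApp:
    """A different provider, plugging into the very same underlying account."""
    def upload(self, profile: Profile, title: str) -> None:
        profile.data.setdefault("videos", []).append(title)

# The same underlying identity moves between applications; the data stays with it.
me = Profile("did:example:mike")
MicroblogApp().post(me, "hello world")
VideoApp().upload(me, "goofy video")
print(me.data)  # both apps wrote to the user's own data store, not their own silos
```

The point of the sketch is the inversion of ownership: the applications are stateless services that operate on the user's data, instead of silos that hold it.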

Mike: Well, just to interject here, the other thing you would do with something like that is address this issue you brought up, Chris, about portability between platforms. And what would be portable would not just be your identity but also your connections, right? Because part of the challenge is that you build a network on LinkedIn, you build a network on Twitter, on Facebook, on TikTok, and none of those connections are actually yours either. You know what I mean? So, the idea of portability isn't just about me owning my data, taking trust out of the platform and putting it with me; that portability includes my connections. You could argue that the difference between LinkedIn and Facebook is that one's social and one's work-related, and TikTok is videos and Instagram is pictures, but the reality is that the people you're sending that info to are all your connections.

Chris: Chris, I have one final question for you, and it's something you alluded to previously about establishing trust based on merit. To give you an analogy, the finance market has something called bond ratings, where they evaluate your trustworthiness and assign a rating to you: for example, a triple-A rating if you're so completely trustworthy that the banks know you're going to repay your debts every single time.
 
Henry: Or like a credit rating.
 
Chris: Exactly, a credit rating, and this is largely built on merit: if you're untrustworthy, you go into junk-bond status. What I want to ask is, how would merit and trust work with technology and social networks?

Dr. Chris Rowell: So, in the new system I'm describing, or?

Chris: Yeah. Like how would we evaluate trust based on merit?
 
Dr. Chris Rowell: Yeah. Merit, I think, would have to be a similar kind of self-regulating system, where the users evaluate one another, similar to the Uber and Airbnb examples. So, these different actors would have to... there has to be some kind of reputational system; there have to be costs to not having merit, and perhaps it's a reputational cost. That's what I can think of currently.
 
Mike: That's a very interesting point. Think about it: merit is, in effect, my rating. Do I get three stars, four stars, or five stars? And arguably, when you're talking about merit and trust, trust is like merit on steroids: trust would be the amalgamation of all your merit. So, I could be somebody who has a five-star rating on Uber, because I get from the airport to the Airbnb just fine, but a two-star rating on Airbnb, because when I get there, I trash the place. Let's say that's what I do. Right now, that daisy chain of trust doesn't follow me off the platform, you know what I mean?
 
So, I would argue that if I was somebody who, every time I got in an Uber, was respectful and nice, because I wanted to get from point A to point B, and then the moment I got to point B I trashed the place, I'm not an inherently trustworthy person. But Uber would tell you I am, and Airbnb would say I'm not. This merit-type system applied by the reviews on these sites, which usually go both ways, right, between the person who provides the Airbnb and the person who stays in it, is what's supposed to affect my ability to continue having a convenient experience on that platform. Whereas I would argue that if we could find a way to decentralize trust and make it portable with individuals, then, A, you would have something that allows us to go between platforms, but you would also have a trust score, like the credit score you mentioned, Henry, that comes with us, and we wouldn't be able to dodge trust deficits that we may have in our character. If that makes sense.
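To make Mike's Uber/Airbnb example concrete, here's a minimal sketch of one way a portable, cross-platform trust score could be aggregated. The weighting scheme (by number of interactions) and all the numbers are purely illustrative:

```python
def portable_trust_score(ratings: dict) -> float:
    """
    Aggregate per-platform star ratings (0-5) into one score that travels with
    the person, weighting each platform by how many interactions back it up.
    """
    total = sum(r["stars"] * r["count"] for r in ratings.values())
    n = sum(r["count"] for r in ratings.values())
    return round(total / n, 2) if n else 0.0

# Mike's hypothetical: five stars on Uber, two stars on Airbnb.
ratings = {
    "uber":   {"stars": 5.0, "count": 80},   # many short, well-behaved rides
    "airbnb": {"stars": 2.0, "count": 20},   # fewer stays, but trashed the place
}
print(portable_trust_score(ratings))  # -> 4.4
```

Note that even a crude aggregate like this surfaces the Airbnb deficit that a single platform's score would hide; the hard design questions, like how to weight contexts and resist manipulation, are exactly the nuances discussed next.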
 
Dr. Chris Rowell: Yeah, I agree. Like I said before, trust is contextual, so if you are great in a car but terrible in a house, maybe that's still okay; if I'm an Uber driver, maybe I don't care too much. But there are certain advantages to having an identity that's carried across platforms, where you are ranked across a bunch of different things you do. I think there are also risks that introduces. We have a cancel culture, so if someone really wanted to ruin your ranking across every platform, they could probably organize and do it. At the moment, you might get ostracized on Twitter, but the rest of your life is intact, right?

But we're moving to a world where you could really be ruined if you upset one particular community that can mobilize against you, because they can then reach every part of your life. So, there are a lot of nuances, and there are a lot of different ways you could organize this, depending on what information you share across platforms and how information is transmitted across platforms. But basically, the argument I've been trying to push, because I mainly study the organizations that facilitate these interactions, though the user experience is an important part of it too, is: what does merit mean for the platforms themselves?

I think they've had pretty free rein so far, based on the trust model that we have, because they are the arbitrators. I mentioned a more meritocratic system: if it were decentralized, we could have different vendors putting up different interfaces, different applications, that you could plug into with very low switching costs, so all of a sudden there's competition. As it is now, we have this plutocracy of one or a few massive companies that collect your data and keep collecting it, and because there are huge switching costs, the rich get richer in terms of the data they have. They become better at analyzing that data, better at making the experience better for you, or at least at keeping you on the platform longer and longer. So, that's what I mean when I say the system's broken, and merit for the platform itself becomes really hard to judge. We're addicted to Facebook, we're addicted to TikTok or whatever else, but does that mean they have merit?

Mike: That's very interesting. Okay, Chris, this has been an unbelievably fantastic discussion; it's fascinating to get into the idea of trust. What I want to do is take us to a wrap-up with a technical question: what's the best digital technology to share trust, an open-source protocol or a private API?

Dr. Chris Rowell: It's such a tricky question, because like I said, trust is contextual; trust depends on what you're doing, what the action is. So, I'm just going to say "it depends," like a real researcher. If you want a system whose trust comes from being censorship-resistant, where you can see how it works and all that kind of stuff, you're more likely to trust an open system. Something like Bitcoin is meant to be open, censorship-resistant, and permissionless; anyone should be able to participate. So, the open system makes sense for that. If we want to trust governments to look after us, to secure national secrets, to not give away nuclear codes and things like that, maybe you don't want an open system. So, it really depends on what we are trusting and what the information is supposed to be doing. Sorry I can't be more specific than that. I don't know if you want to talk about examples.

Henry: I can tell you, Dr. Rowell, that, as Mike said, it was fascinating. It was great to get those insights. All I can say is thank you so much. Perhaps we can chat with you again in the future, but you've really made a difference here on this episode of The Decentralists. Thank you.

Mike: Absolutely. Thank you, Chris.

Dr. Chris Rowell: Awesome. Thanks so much for having me.