Intangiblia™
#1 Podcast on Goodpods - Intellectual Property Indie Podcasts
#3 Podcast on Goodpods - Intellectual Property Podcast
Plain talk about Intellectual Property. Podcast of Intangible Law™
Jean Marc Seigneur - In Trust We Build: Designing the Future of Digital Reputation
What if your glasses could spot a deepfake before your gut does? We sit down with Jean Marc Seigneur, a veteran researcher of decentralized trust, to map where security failed, where it’s catching up, and how proof—not vibes—will anchor the next decade of digital life. From central bank digital currencies to NFTs that carry qualified electronic signatures, we unpack how legal recognition and cryptography can finally meet in the middle, turning tokens into enforceable rights and payments into reliable public infrastructure.
We also go beyond buzzwords to the missing pieces: education and design. Friendly apps hide sharp edges, so we talk about why countries need their own experts, not just imported tech, and how wallets must evolve with safer recovery, better defaults, and interfaces that explain risk without slowing you down. AI raises the stakes, so we explore signed videos, verifiable identities, and provenance trails that help you tell a real voice from a cloned one at a glance. Reputation won’t live on a web page for long; it’s moving into the physical world as augmented overlays that can help or harm depending on what they reveal and to whom.
Bias won’t vanish either, because human trust is social and local. We discuss how to balance peer signals with regulators’ oversight, why transparency about AI use will give way to tracking human effort, and what a time-based “work token” could add to creative markets. The red thread across it all—payments, NFTs, augmented humans, and AI media—is simple and demanding: protect freedom while proving claims. If we want technology that empowers rather than deceives, we have to design, debate, and defend the trust layer itself.
Enjoy the conversation? Subscribe, share with a friend who cares about digital trust, and leave a review to help more curious minds find the show.
Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.
The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.
My research is about trust. So of course, when a new system emerges and is created, I'm going to look at whether it is trustworthy, how you could hack it, and how you could improve its trustworthiness.
Speaker 1:Welcome to our podcast.
Speaker 2:Thank you for the introduction.
Speaker 1:So the first question: please let us know who you are and how you came to be an IP expert.
Speaker 2:I've been working on decentralized trust since 2000. I started my research at Trinity College Dublin, where I did my PhD on this topic. Then I joined the University of Geneva, where I teach online reputation as part of the Media Lab of the Faculty of Social Sciences, as well as directing the Certificate of Advanced Studies on blockchain technology, which is also a technical trust mechanism.
Speaker 1:Yeah, perfect. So you have been working on digital trust since the early 2000s; that's quite a long time. What has changed the most in how trust is built online?
Speaker 2:Online, over the last 20 years, it has gotten worse, because the technology hasn't changed much from a security point of view, and hackers now know better how to attack systems where security has not been sufficiently considered and invested in. So there are more attacks, and security is roughly where it was before. But in recent years we have seen stronger mechanisms emerging, for example linked to chips on identity cards, where you can really prove your real identity online as a human citizen of a country, and that is going to help secure the internet. Still, for the last 20 years not much has been done, because security has not been treated as a strategic asset. Even countries have not invested in core digital technologies. Some have, like the USA or China, but Europe, or even Switzerland, talk about digital sovereignty while having invested almost nothing in the core technologies, so it is very weak now.
Speaker 1:Okay, so the focus has been on digitalizing and becoming more modern, without thinking about the risk that digitalization brings of being more exposed online.
Speaker 2:Yes, and we see different risks, because there are independent hackers but now also aggressive countries. The internet is not always a playground where we only play.
Speaker 1:Everyone is there, the good and the bad. Okay, makes sense. So you have helped create one of the first academic programs on blockchain and decentralized apps in Geneva. Why is education a critical pillar in shaping trust through tech?
Speaker 2:This is very important, because otherwise you only seem to know how to use those systems, since they are very user-friendly, but you don't know the underlying risks. So it's important that in each country a good number of people know the details and know how to protect their citizens.
Speaker 1:So education is key in this sense.
Speaker 2:Yes. If you don't know the details, you rely on other countries to provide you technologies they say are secure, but you don't know. You need to do peer review, including internal peer review.
Speaker 1:To verify yourself just in case.
Speaker 2:Yes.
Speaker 1:Yeah, makes sense. And also to make sure you learn the technology behind it and understand what exactly it's about, in order to be able to regulate it or use it properly.
Speaker 2:Yes, especially when you have hackers or aggressive countries providing some of these technologies.
Speaker 1:Yeah, makes sense. So from CBDCs to NFTs, what's one example where decentralization improved or challenged public trust?
Speaker 2:This is very broad, because CBDCs are central bank digital currencies, which means the central bank would create, for example, a digital franc in Switzerland or a digital dollar in the US. It is another way of paying: today we have coins, we have banknotes, in some countries we have checks, you can pay with applications like online payment, and you have cryptocurrencies. Here the idea is that you have money provided through another means, as a digital currency, but it would be a real dollar or a real Swiss franc. There are many different ways to do it. Usually we talk about something like a cryptocurrency, not Bitcoin but a dollar created as a cryptocurrency by the Fed, or in Switzerland by the Swiss National Bank. But that is not the only way to do it, because cryptocurrencies sit on a blockchain and are tracked, which from a privacy point of view is not so good, so there are other approaches. CBDCs are centralized because they are provided by the central bank of a country. That is very different from NFTs, non-fungible tokens: in your crypto hardware or software wallet you can hold cryptocurrencies but also NFTs, which can be linked to a certificate of ownership of a piece of art or of something in the real world, and are supposed to represent real ownership. One piece missing from current NFTs is this: on the blockchain it is secure, and when the NFT is written into the blockchain and sits in your crypto wallet, you own it and it is protected by maths, by cryptography; but many judges and laws in different countries don't know about blockchain, so it is very difficult to explain to them that this is your ownership because it is in your crypto wallet. It is hard to prove that you really own it in the real world. So we worked with the International Telecommunication Union on a new standard called signed NFTs: the NFT is in your crypto wallet, but it also carries a digital signature of very high strength, a qualified electronic signature. When you open the NFT you also have a link, for example to a PDF, and that PDF is digitally signed by, for example, the artist; legally, that is equivalent to a handwritten signature under the law in Switzerland or Europe, and I think it will also be recognized in other jurisdictions. So you see the big difference between CBDCs, which are centralized, and NFTs, where a piece of art can be created by any artist in the world: a central bank currency can only be created by a central bank, while anybody can create an NFT, so NFTs are more decentralized. And the last point of your question was: what is the challenge, or the improvement, for public trust?
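To make the signed-NFT idea above a bit more concrete, here is a minimal, purely illustrative sketch of what such a token's metadata could carry: a link to the underlying document, a hash that binds the token to that exact file, and a detached qualified electronic signature from the artist. The field names and URL are hypothetical and do not follow the ITU standard the guest mentions.

```python
# Purely illustrative metadata for a "signed NFT": the on-chain token points to an
# off-chain document whose authenticity is anchored by a qualified electronic
# signature. Field names and the URL are hypothetical, not from the ITU standard.
import hashlib
import json

artwork_pdf = b"...bytes of the artist's certificate or artwork PDF..."

signed_nft_metadata = {
    "name": "Certificate of ownership, artwork #42",
    "document_url": "https://example.org/certificates/42.pdf",   # hypothetical link
    "document_sha256": hashlib.sha256(artwork_pdf).hexdigest(),  # binds token to the file
    "signature": {
        "type": "qualified_electronic_signature",  # legally equivalent to handwritten under eIDAS
        "signer": "Artist Name",
        "value": "<base64-encoded signature over the document hash>",
    },
}

# A verifier fetches the document, recomputes its hash, checks it against the
# metadata, and then validates the signature against the signer's certificate.
print(json.dumps(signed_nft_metadata, indent=2))
```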
Speaker 1:From what you're saying, the one with the extra proof for the NFT is great for trust, because then you have an authority confirming the validity of the art or the work behind the NFT. And for the other one, the currency managed by central banks: people usually trust banks, for the most part. What challenges do you see in this kind of development? Because if money becomes fully digital, it can also get lost, or break down, or someone can attack it and you lose everything, because it is only digital. In that case, how do you build trust?
Speaker 2:Yes. From the public's point of view, all of this is very new, so there is a big question of whether citizens will trust these new currencies. For a CBDC, for the ordinary citizen trying to trust the system, if it is provided by the central bank there is already less of a trust issue from a legal point of view, because it is legal tender, it is regulated, and it comes from a governmental institution: if you have such a CBDC in your crypto wallet, you own that money and it is official money, so the risk is lower. For NFTs, there is also an education issue. In Switzerland and other developed countries, between 10 and 15 percent of the population have already bought some cryptocurrency, but often via intermediaries; you don't need your own crypto wallet, you can buy through some banks, for example Arab Bank Switzerland in Geneva, or Swissquote. Even some of those people don't really understand it and don't have their own crypto wallet, so there is a lot of education to do before people are really used to this new technology; like the web, it took about 20 years before there was large-scale adoption. For crypto wallets you also need other components at the user-interface level that will, as you said, help you not lose the keys and help you recover them, and over the next five years different working groups are working on easier user interfaces for cryptocurrencies and NFTs. And as I said, what NFTs are missing is an official digital signature of the artist, or of the company that would sell, say, an ownership certificate for a flat or a house; you need to add qualified electronic signatures. The good thing is that these now exist in Europe. This is not part of blockchain, it is another technological component: most national IDs will have chips certified by a number of trusted parties; in Switzerland, for example, Swisscom is one of these trusted third parties. As I said at the beginning, we may be moving into an era where it is a bit easier to be sure who is talking in front of you: even with all these AIs that can impersonate a face or an image, they will not be able to impersonate the maths and the signatures. At the moment, with AI, it is even harder, because hackers can build an AI of a person with their voice, their face, and all the information known about them; from this interview alone, an AI could already be recreated that speaks like us, and fake videos of us could be made just from the ten minutes we have spoken.
But then the generated video would not be signed, and in the future you will be able to sign even official videos of yourself with your ID and strong maths. So it will be possible to differentiate an official video or audio of you from one created by someone else.
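As a purely illustrative sketch of the signing idea just described, the snippet below signs and verifies a piece of content (for example a video file, or the PDF linked from an NFT) with an Ed25519 key pair from the Python cryptography library. This is an assumption-based toy: a real qualified electronic signature additionally binds the key to a verified legal identity through an accredited trust service provider, which is not modeled here.

```python
# Minimal sketch: signing and verifying a content file with an Ed25519 key pair.
# A qualified electronic signature would additionally tie the key to a verified
# legal identity via an accredited trust service provider; that is out of scope.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Sign the SHA-256 digest of the content (e.g. a video or the PDF an NFT links to)."""
    digest = hashlib.sha256(content).digest()
    return private_key.sign(digest)

def verify_content(public_key, content: bytes, signature: bytes) -> bool:
    """Return True if the signature matches the content, False otherwise."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    creator_key = Ed25519PrivateKey.generate()     # in practice, an ID-bound key
    video = b"...bytes of the original video..."
    sig = sign_content(creator_key, video)

    print(verify_content(creator_key.public_key(), video, sig))          # True: authentic
    print(verify_content(creator_key.public_key(), video + b"x", sig))   # False: tampered or cloned
```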
Speaker 1:Oh, that's great. The last season of my podcast I generated completely with AI: I wrote the script, cloned my own voice, and had it deliver the entire episode without me even speaking. It was quite fun, but also scary at the same time. My husband couldn't tell the difference between when it was me and when it was the AI, but my mom could; I think moms have a better ear. It's fascinating, but I love the idea of having that extra layer of protection: if someone clones your voice or your likeness, you will be able to identify it as something that is not authorized by you. And you can still do it yourself if you want to.
Speaker 2:Yes, you just sign it digitally. And this is an outcome of some European projects, because I have participated in many European projects on behalf of the University of Geneva, leading this kind of project. Europe has not been very good at creating unicorns so far, for different reasons, but at least all those projects, the hundreds of millions that went into this ID and signature technology, are now being embedded in many countries, even at the notary level, and qualified electronic signatures are actually an outcome of some of these early projects from 15 or even 20 years ago. So from an identity point of view this is good, because it is also embedded into law. And even Switzerland is following, because of its relationship with Europe.
Speaker 1:Oh, that's amazing. I really like that idea, and I think it's a good answer to every technology we see popping up, to how easy it is to clone people and how easy it is to be deceived by that: you think it's the real person behind it, but it's actually AI.
Speaker 5:Yeah.
Speaker 1:So you work with artists, cities, and technologists. When designing systems for trust, how do you balance technical verification with human intuition?
Speaker 2:Human intuition is a bit dangerous from a trust point of view. All my work is interdisciplinary: my PhD of course produced computational models, but they were based on the human notion of trust from social science. I read many articles on the notion of trust in different countries and cultures, and from that I built a computational trust system. And of course trust is studied not only in social science but also in economics, psychology, and so on. I also integrated some medical aspects, because we did some brain-computing research on what happens in your brain when someone trusts someone else. A system can learn what tends to make a human trust someone else: for example, if you have a very symmetrical face, the person in front of you will tend to trust you more than someone with a less symmetrical face, and AI can create faces that are perfectly symmetrical. So in that sense intuition is dangerous, because the hundreds of thousands of years of evolution behind how we, as humans, decide to trust someone else can be used against you. Intuition is not what I would recommend for trustworthy systems; technical trust is much more important. As we said, your intuition might tell you that a video, or the person you see, seems trustworthy, but there may be no qualified electronic signature from a real human behind it, and then you would get an alarm: your smart glasses could watch the video, or the person in front of you, and say, yes, this seems very trustworthy, but there is no digital signature of a human behind it, so be very careful.
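A toy sketch of the decision rule just described, under the assumption that an AR assistant reports an "intuition" score (for example, driven by how trustworthy a face looks) but lets only a verified human signature clear the warning. The class and function names are hypothetical and are not part of any existing smart-glasses product.

```python
# Toy decision rule for an AR assistant: intuition can be reported, but only a
# verified signature from a real human identity suppresses the warning.
from dataclasses import dataclass

@dataclass
class ContentAssessment:
    intuition_score: float            # 0.0-1.0, e.g. how "trustworthy" the face looks
    has_valid_human_signature: bool   # result of a cryptographic provenance check

def warning(assessment: ContentAssessment) -> str | None:
    if assessment.has_valid_human_signature:
        return None  # provenance proven: no warning, whatever intuition says
    if assessment.intuition_score > 0.8:
        # the dangerous case: looks very trustworthy but nothing is proven
        return "Looks trustworthy, but no human signature found. Be very careful."
    return "Unverified content: no human signature found."

print(warning(ContentAssessment(0.95, False)))
print(warning(ContentAssessment(0.30, True)))
```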
Speaker 1:Well, that's scary. So because there is a science behind how we react to stimuli, to people talking, to how their faces look, and you can map that in the brain, you could also create an AI that emulates it. That's very interesting.
Speaker 2:Interesting for records, yes.
Speaker 1:Yeah, interesting as in scary. So reality can really be manipulated against us, against the instincts we would normally rely on.
Speaker 2:And from a social point of view, trust is never perfect. I really use trust as a human notion, not as security, where you either have the right password or the wrong one, and a signature is either correct or not. Even signatures: maybe in ten years' time, with quantum computing, the current algorithms could be broken. A signature is never certain; it is certain today, but in ten years it may be broken and you will need something else. So you really should be careful when you deal with the information in front of you without technical trust.
Speaker 1:Okay, that's interesting and scary at the same time. So your work has touched everything from offline crypto transactions to augmented humans. What's the red thread connecting all of it?
Speaker 2:It's a bit opportunistic, but my research is about trust, so of course, when a new system emerges, I am going to look at whether it is trustworthy, how you could hack it, and how you could improve its trustworthiness. Beyond that, we are moving from using a smartphone and looking at screens to, as you said, more of a metaverse, where you access digital information anytime, anywhere: by sound, with an AI on your smart glasses giving you recommendations or warnings, and by what you see, in augmented reality. And I am personally also into what we call biohacking.
Speaker 4:Okay.
Speaker 2:So from augmented reality to augmenting the human body: in 2010 we started the Augmented Human international conference series, where the focus was more on the human body than on the augmented reality environment.
Speaker 1:Oh okay, to enhance our capabilities.
Speaker 2:Yes, to enhance your capabilities, or to recover them, because we have some tracks about rehabilitation. Someone who loses a leg, for example, could regain its use with an exoskeleton and different techniques, such as sensors in the spine. So it is not only augmentation; sometimes it is also about regaining your abilities.
Speaker 1:Yeah, of course, especially for people with disabilities who lost an arm, a leg, or something else to an accident or a disease; they can have another chance at regaining full mobility.
Speaker 2:So yes, for augmented humans I also look at trust issues. But it is very broad, because we not only look at digital systems but also at medicine, supplements, and so on.
Speaker 1:Anything that can improve the human, so it is really augmentation in the broad sense, not only the little slice of technology we imagine. It's great work when you can really change people's lives for the better.
Speaker 5:Yes, that's right.
Speaker 1:So, moving to online reputation, which is something that has shifted a lot and keeps changing. It was once just about reviews, five stars, two stars, and so forth, or people's comments on certain blogs or online. Now online reputation can shape your credit, your identity, often your future employment, and even innovation funding. Where is it going next? Where do you think online reputation is going to take us?
Speaker 2:Online reputation will also live in the real world, because, as I said, you are going to access digital information anytime, anywhere. So when someone is in front of you, your smart glasses, maybe your AI, will tell you a number of things about that person. The issue is that reputation will be accessed not only online but also in augmented form, anytime.
Speaker 1:Exactly, I will see it when I see you.
Speaker 2:Yes, I will see all the good things you did, because it can be on the good side as well. When someone googles you, the results can be negative, but they can also be positive, and they will find good information if you made the effort, like with this vlog, or these interviews, as you call them. Then maybe it will help you a lot to get your next promotion, because you will be noticed, in a good way. With online reputation we often see the bad sides, but of course the people who are most popular gain from it. And it will be in real time, anytime, anywhere: not online reputation but augmented reputation, because you will retrieve anything about a person that is public, or even that was not public but was stolen and put on the dark web.
Speaker 1:Oh, okay.
Speaker 2:Yeah.
Speaker 1:So the the hidden files that someone got from you.
Speaker 5:Yes.
Speaker 1:Okay, but it is good in a sense of security as well. For example, in a social setting, when you are meeting someone for the first time, or when you want to make sure they're not a serial killer or something like that, it's good to know their reputation.
Speaker 2:Yeah, it's good to know. But you already have tools like PimEyes, which, from only one image of you, can retrieve all the public images of you online. So when you do something, you should be prepared, especially online, even if you use a digital tool, for the possibility that someday it could become public. Our generation didn't know about the risks we were taking, and the younger generations on TikTok, for example: when they publish something, it is there forever, it has already been collected, and it can be retrieved. Even if you have changed, a TikTok video where you were maybe drunk can be retrieved and seen in ten years' time. And anyway, even if you don't want to do things digitally, there are so many sensors everywhere that we should be prepared to accept that we are being watched all the time. It is sometimes difficult to restrain yourself, of course, but afterwards it will be very difficult to protect.
Speaker 1:It's interesting in that way: there is a widespread belief that privacy, in the sense we have it right now, is not going to exist anymore, because we will constantly be observed, either monitored by the devices around us or by the ones out in public when we go outside. So there will never be a purely private moment in the near future.
Speaker 2:It will still be possible to remain private, but it will be difficult, and especially difficult to remain private for a long time.
Speaker 1:Yeah. Well, interesting. So you explored the legal weight of digital signatures on NFTs. Do you believe we're moving towards enforceable trust in the metaverse?
Speaker 2:Trust, as a human notion, is not enforceable. Maybe you could express it as a number between zero and one hundred, but you cannot force the whole world to trust you, because the world, fortunately, still has some independence. And trust is linked to your social network. From a sociology point of view, social networks existed long before the internet: you have your parents, your family, your friends, and they know other people. Now, with online social networks, you may have weaker links. We are doing some research on this at the University of Geneva at the moment: the brain is really made for a close circle of about 150 people, but it is not uncommon for the younger generations, from 15 to 30 years old, to have 500 to 1,000 weak followers or contacts on Instagram or other social networks, so it is getting broader. And trust also involves recommendations from others. When you build a trust mechanism with reviews, for example, you could have reviews with no link to you, but if your close friends also provide reviews, that matters; at some stage they tried to add this to TripAdvisor, for example: you could connect with your Facebook account, with Facebook Connect, and now everybody is afraid of Facebook Connect, but at some stage it worked, and you were seeing your friends' hotel reviews.
Speaker 4:Oh, okay.
Speaker 2:And so of course you tend to trust close friends more than others, if you know they are stable.
Speaker 1:Yeah, because it gives more references about the same person, so more confirmation about who the person is and how they behave.
Speaker 2:So you see, you cannot enforce it. You can try to influence it, because you can try to build on your social networks. As a hacker, you can try to become a friend: if someone is very hard to get to, you can analyze that person's social network and look for well-positioned profiles that seem easy to trick into trusting you. Once you become a friend of such a person, then, for example on LinkedIn, which is a professional but still social network, a hacker trying to get connected to you is already a contact of someone you know, and you see, oh, this person knows that person. If there is no double check, a request coming from a contact of one of your professional contacts gets accepted more easily than if there were no connection.
Speaker 1:Whereas if you have no one in common, how did he get to me?
Speaker 2:Yes. And you cannot have a social network covering the whole world; it's the same idea: the farther away people are, the less influence they have. But you can try to influence reputation, and that's what I say in my courses at the Media Lab of the University of Geneva, where we have an online reputation monitoring course: it is not enforceable; you can influence it, but not enforce it.
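As a rough illustration of the kind of computational trust model this exchange gestures at, the sketch below blends your own direct experience of a target with recommendations weighted by how much you trust each recommender. It is a simplified assumption of the general approach, not the guest's actual models or the "trusto" metric he mentions later in the conversation.

```python
# Minimal sketch of a computational trust score: trust in a target combines your own
# direct experience with recommendations from contacts, each weighted by how much
# you trust the recommender. Illustrative only; not the guest's trust metric.
from dataclasses import dataclass

@dataclass
class Recommendation:
    recommender_trust: float  # how much you trust the recommender, 0.0 to 1.0
    rating: float             # the recommender's rating of the target, 0.0 to 1.0

def trust_score(direct_experience: float | None,
                recommendations: list[Recommendation],
                direct_weight: float = 0.7) -> float:
    """Blend direct experience with trust-weighted recommendations."""
    if recommendations:
        total_weight = sum(r.recommender_trust for r in recommendations)
        if total_weight > 0:
            indirect = sum(r.recommender_trust * r.rating for r in recommendations) / total_weight
        else:
            indirect = 0.5  # recommenders you do not trust at all: neutral prior
    else:
        indirect = 0.5      # no signal at all: neutral prior

    if direct_experience is None:
        return indirect
    return direct_weight * direct_experience + (1 - direct_weight) * indirect

# Example: good personal history, mixed signals from a close friend and a weak contact.
print(trust_score(0.9, [Recommendation(0.8, 0.6), Recommendation(0.2, 0.1)]))  # ~0.78
```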
Speaker 1:Okay, interesting. I would like to know what comes up if you put in my name; a lot of things about IP, probably. So, with the rise of generative AI, which is on everyone's mind right now, what's your take on trust boundaries? How much should we reveal about what is AI-made? If I generate something with AI, should I always declare that I made it with AI, or can I just make some things without saying they were AI-generated? And how would that affect trust? Will people expect full transparency, wanting to know beforehand everything that is AI-generated, or could it be more flexible: maybe I don't need to know everything, just the things that are sensitive or that can affect me?
Speaker 2:The thing is, AI is going to be everywhere, anytime, anywhere, as I said with audio and in other forms. It can generate, but it can also recommend things to you, so it will influence anything you decide. To me, we are more interested in knowing how much human work has been put into a product, a piece of art, or an activity than in knowing that AI was used, because in ten years' time AI will have been used for almost everything.
Speaker 1:Okay, it's gonna be the norm.
Speaker 2:So you would have a warning for everything, constantly. On the other hand, how much human work has been done for a specific product or task is more interesting to me. In Geneva we are working on a project called the House of Good AI, and we have a Good AI society where we do meetups, and we discussed that it would be interesting not to have a human-made label: if you craft something in wood, of course it can be 100 percent human-made, but if it is a document or anything else, you already use a computer that is not human-made, or programs that are not human-made, because some AI has already participated in it. So rather than a "100 percent human-made" label, in the House of Good AI we have a way of allowing anyone to have their own cryptocurrency, where one Jean-Marc coin is equivalent to one hour of my time: you would have your own token representing one hour of the time you have lived, and you could decide how many of those tokens are spent on which activities. In art, for example, a painter could say, "This painting took four hours of my time." Even an AI-generated video takes some of your time: you could say it took you ten minutes to write the prompt, and so on. Then, instead of a percentage, which is difficult to compute or to get right, by counting the number of tokens spent on a product you would have an estimate of how many hours of human work went into it. Sometimes we laugh about this, because some people say they have a "bullshit job," meaning they are not doing much for the company, but they are there and they are paid. So instead of a minimum salary to compensate for not having a job, maybe an AI could pay you to spend some of your time tokens, so that a product or service carries a certain number of paid human hours, even if you did nothing, perhaps within your social network, to give some human value to the product or service being bought.
Speaker 1:So you're thinking about that very long debate on a universal minimum wage, the idea that everyone should receive a minimum income even if they're not working, especially those who cannot work yet or young people who haven't entered the workforce. Instead of receiving it just for existing, the idea would be that putting yourself in, as human hours for AI purposes, would be a way to earn it.
Speaker 2:Yes, an AI could choose you. It's a kind of hiring, but maybe doing nothing, or it could be more flexible: you could be used to market the product to your social circles, because maybe the AI has spotted that this is a wealthy family. At least this way it is a bit clearer, although it also means well-positioned people could be paid more for it.
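To make the time-token idea above concrete, here is a toy, in-memory sketch in which each person mints tokens worth one hour of their own time and attaches spent tokens to a product, so a label can state how many hours of human work went into it. The class and method names are hypothetical; this is not the House of Good AI design or its attack-resistant, on-chain implementation.

```python
# Toy sketch of "time tokens": one token equals one hour of a person's lived time,
# and tokens spent on a product back a human-work label. Illustrative only.
from collections import defaultdict

class TimeTokenLedger:
    def __init__(self):
        self.balances = defaultdict(float)        # person -> unspent hours
        self.contributions = defaultdict(list)    # product -> [(person, hours)]

    def mint(self, person: str, hours_lived: float) -> None:
        """Credit a person with tokens for hours of their own time."""
        self.balances[person] += hours_lived

    def spend(self, person: str, product: str, hours: float) -> None:
        """Attach hours of human work to a product, if the person has them."""
        if self.balances[person] < hours:
            raise ValueError(f"{person} does not have {hours} unspent hours")
        self.balances[person] -= hours
        self.contributions[product].append((person, hours))

    def human_work_label(self, product: str) -> float:
        """Total declared hours of human work behind a product."""
        return sum(h for _, h in self.contributions[product])

ledger = TimeTokenLedger()
ledger.mint("painter", 8.0)
ledger.spend("painter", "painting-001", 4.0)       # four hours of painting
ledger.mint("video-maker", 1.0)
ledger.spend("video-maker", "ai-video-042", 0.17)  # ten minutes writing the prompt

print(ledger.human_work_label("painting-001"))     # 4.0
print(ledger.human_work_label("ai-video-042"))     # 0.17
```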
Speaker 1:Interesting. So there are a lot of ideas here to address this. One is that most things will be AI-generated anyway in the near future, so the labeling should be for human input, not for AI.
Speaker 2:Yes. On the label you would have the number of hours that have been spent, and it would be proven: in the House of Good AI we have an attack-resistant algorithm, recorded on a public blockchain, and we are building mechanisms against cheating. So it would be public information that any AI could retrieve from the blockchain.
Speaker 1:Okay, interesting. Moving on to the next question: how do we prevent algorithmic trust from becoming automated bias?
Speaker 2:As I said, my vision of trust is the human notion of trust, and it will tend to be biased, based on your social network, your history, and so on. And trust is not global: two different people can have very different trust values for the same thing, due to their social networks, past experience, recommendations, and everything that has happened.
Speaker 1:So there is always going to be bias.
Speaker 2:Yes, to me it is biased by design, because we are. But the goal of such a trust mechanism is to reach the optimal risk-reward, so ideally, with a good trust metric, the bias would work in your favor rather than against you.
Speaker 1:So it is human nature to have different judgments and different approaches.
Speaker 2:If you take trust as a human notion, it is biased: you will tend to protect your family more than strangers, in almost all cultures, maybe because the ones who did not protect their families have already disappeared through evolution.
Speaker 1:Makes sense. If you could embed one trust metric into everyday platforms, what would it measure?
Speaker 2:To me it would be based on one of my early works, a metric I designed called the trusto. It would measure human trust: if you have a human component in your system, it would be the trust of the current human users in the mechanism or system being evaluated. So it would be the human notion of trust.
Speaker 1:Okay. So what triggers us to trust would be an important component to take into account in that trust metric. Perfect. So now we have the fun part of the episode: I will ask you to pick one of two statements each time. The first pair is: verified pseudonyms, or total transparency?
Speaker 2:Verified, you mean that there is still a pseudonym, so you don't know exactly who is behind it, but it is verified that it is a person, you just don't know the name? And the second one is total transparency? Then for me, at least verified: the first one, where you know it is a verified user but you don't know the name of the person.
Speaker 1:Okay, perfect. NFTs as proof of ownership, or traditional IP registration?
Speaker 2:NFTs as proof of ownership, maybe signed NFTs, you know.
Speaker 1:One global language powered by translation technology, or keep every language alive?
Speaker 2:Keep every language alive, but as we said, AI will translate everything very easily, so a combination. I think it is very interesting to have diversity in languages, and AI will automatically translate what we hear. It might even protect less-spoken languages, because usually you need to learn the national language to be able to work in a country; but if, when you speak your native or local language or dialect, it is automatically understood by the person in front of you, then you have less need to abandon your dialect and you can keep it. It will keep those languages alive, because you will be able to really use them without having to go and learn the other one. So maybe it works the other way around: it is positive.
Speaker 1:I love that: using AI as a tool to preserve languages. So, decentralized data governance, or citizen-led data trusts? Would the data be run in a decentralized way, with citizens owning their own data, or would it be centralized?
Speaker 2:No, ideally your privacy and your data should be yours: you should be able to own it and decide when you share it. I am more in favor of that than, of course, a centralized monitoring system.
Speaker 1:So citizen-led. Immutable records of everything, or selective forgetfulness by design? Do you have the right to be forgotten or not?
Speaker 2:It depends. You mean after you die?
Speaker 1:Yes.
Speaker 2:It is difficult to answer these questions.
Speaker 1:Yeah, that's the idea, because both of them have merit.
Speaker 2:Yes. From a freedom point of view, I think that once you are dead you should have the right to be forgotten. The thing is, if you did malicious things and they are forgotten while you are still alive, you cannot be judged for what you did; but once you are dead, choosing to have your data erased would, for me, be fine.
Speaker 1:And before that, especially for criminal activities, those should be kept, because there is a public interest in them.
Speaker 2:Yes, when they were malicious.
Speaker 1:Okay, so a combination of both. Transparent governance, or invisible protocols that just work?
Speaker 2:It depends what for. For some systems you want transparency: for governments, for example, it would be nice to see exactly what you paid for, how they reached a decision, how much they invested in it, and whether any of it went to corruption. So it depends which system you are talking about. For public goods, governance should be transparent; when it concerns private matters, as I said, ideally privacy should remain, but is that possible?
Speaker 1:Okay. AI trust ratings issued by peers, or issued by regulators?
Speaker 2:Could you repeat the first one, sorry?
Speaker 1:AI trust ratings, either issued by peers or issued by regulators. Do you trust more what people in general think, or what a regulator decides about whether it is trustworthy or not?
Speaker 2:But then why do we say AI trust?
Speaker 1:Because, let's say it is an AI tool and it gets ranked: the option is that it is ranked either by the public or by the regulators, by the government. Who would you trust more to rank the tool?
Speaker 2:You need a mix of both, because in many areas there is a competition between attackers and defenders, and your government usually has much more money than you do, so it can go into more detail on some technical aspects. Again, I think we need both.
Speaker 1:Good, so everyone will have their own role in assessing the tool.
Speaker 2:Yes. You need to have your own thinking, of course, but as I said, sometimes you will not have enough to assess what is in front of you, and then you need bigger players who can spend a lot of resources and knowledge to assess it. You also have to trust your government, of course, but usually, in a democracy, it should act in your favor, and because it is your government you tend to give it a bit more trust than you would a competing government.
Speaker 1:Okay, makes sense. Ecosystems where trust can be restored, or ones where, once it is lost, it is irreversible?
Speaker 2:No, you can always try to recover trust, but it is difficult. In the human world it takes a long time to build trust, much more time to build it than to lose it.
Speaker 1:Yeah, it's true, you can lose it in one second. You can upload one memory to the internet: private forever, or public for everyone?
Speaker 2:Could you repeat that? This is a bit of a sci-fi kind of thing.
Speaker 1:You can upload one memory from your life to the internet, and you upload it either to be private or to be public.
Speaker 2:It's the same: it depends. It would be good for it to be kept as a souvenir, a memory, but you should be able to choose whether it is private or not. Then again, as I said, as soon as you put something online, even on a supposedly secure service, once it is digitalized it can become public. But ideally, yes, I am for human privacy, so you should be able to choose whether it is private or not.
Speaker 1:Okay, so it's for everyone to choose.
Speaker 2:Yes.
Speaker 1:Okay, now for the next part: the game of true or futuristic. I will give you a statement and you choose "true," meaning it is happening right now, or "futuristic," meaning it will happen in a hundred years or never. Ready?
Speaker 2:Yes.
Speaker 1:NFTs will be legally equivalent to a notarial signature, like a notary's.
Speaker 2:Yeah, true.
Speaker 1:Augmented humans will carry dynamic trust layers visible through augmented reality, like what we were talking about.
Speaker 2:Futuristic meaning what, five years, ten years, or more?
Speaker 1:No, 50 years or further in the future.
Speaker 2:Then true, because we were just talking about that; more like in ten years' time.
Speaker 1:Okay, ten years. Interesting. Web3 platforms will have ethics protocols voted on by users.
Speaker 2:Yeah, I think true.
Speaker 1:Interoperable identity will become a requirement for cross-platform login: no more passwords and usernames.
Speaker 2:Yes, interoperable identity and access: this is true as well, before 50 years of course.
Speaker 1:Before 50 years. I would love to be able to forget all the usernames and passwords I have to create every time.
Speaker 2:But then you might lose some privacy.
Speaker 1:Yeah, that's true. My husband says he doesn't believe in progress, because whenever you gain something you lose something else, so there is always going to be a bargain. And it's true: you no longer have the hassle of creating usernames and passwords, but then you have the issue that someone can take your identity and you are locked out of everything.
Speaker 1:Conflict resolution will shift from legal courts to smart trust ecosystems.
Speaker 2:The lawyers are very strong, even if AI is getting smarter, so I think in the future we will still need lawyers.
Speaker 1:Okay, so that's futuristic. Good, we still need them. Your browsing behavior will be used to compute trustworthiness by insurance companies. Well, is it really important for them to know your trustworthiness, or is it more about you yourself: whether they can get real information from it, like whether you are googling something in particular or behaving recklessly?
Speaker 2:If they succeed, it is a bit difficult. But it is getting easier and easier, as I said, because we have more and more information online, in digital form.
Speaker 1:Okay. AI will have to explain itself before making a decision about you.
Speaker 2:In ten years' time it will already be here. AI will help us make decisions, so it's true. But sometimes, when people have to make security decisions, as we have seen with cookies on websites, they don't have time to read the licenses, they don't have the knowledge, or they don't have the money to pay for strong AIs or mechanisms to help them. Because they don't have time, and sometimes maybe because they are lazy, people will simply let the AI decide for them for some decisions, and then of course there are risks. But as for explaining themselves, AIs will be able to do that in ten years' time.
Speaker 1:We'll need a license to use powerful AI tools, like we do with cars. With cars, or like a medical license or an engineering certification: for AI tools that could really create something harmful to the public, would that also require a certification, a license, or an official authorization to use them?
Speaker 2:Well, I think it's true, because that's what they are trying to do now. But if AI continues to make progress like this, in ten or twenty years' time we might have superintelligence, and then it will be a bit beyond us; we won't understand what's going on. So now it is the case, but then it might disappear.
Speaker 1:Okay. You will receive personalized laws based on your online behavior: personalized laws that apply only to you.
Speaker 2:Well, I'm not into that legal nanny-state kind of thing. No. I guess a law should be at the scale of a country, a kind of regulation for a community of people. Only for you? I don't see it. Not now, not in the future; never.
Speaker 1:Final question. You will take a personality test before being allowed to use certain platforms, to see whether you have the mental stability, the cognitive capacity, or the maturity for it.
Speaker 2:I think this already exists. But in the future, as I said, we won't use platforms; we will interact with the metaverse automatically, as part of everyday life, anytime, anywhere. It's what Weiser, for example, called ubiquitous computing; we already worked on this kind of global, ubiquitous computing, and he talked about the disappearing computer. So we won't have platforms anymore.
Speaker 1:So it's going to be everyday life?
Speaker 2:Yes.
Speaker 1:Okay. So this one is true?
Speaker 5:Uh yes.
Speaker 1:Okay, so now the question to wrap up the episode. As technology gets faster and more invisible, what is one human truth you believe every trust system should protect?
Speaker 3:Human truth.
Speaker 2:This is a very narrow one, but let's say, I would say freedom.
Speaker 1:Yeah, I love that. It's the core of humanity. Thank you so much, Jean Marc; this has been an amazing conversation.
Speaker 2:Sometimes difficult, but very well thought out; you clearly spent a lot of time thinking about these concepts yourself, and you did a very good brainstorming, so it was enjoyable.
Speaker 1:That's very sweet, thank you so much. I had a great time. It was very, very interesting, scary in a good way, because the future is enticing, and we always want to know what is going to happen next, but it can also be scary if the right people are not making the right decisions. So thank you so much for reminding us that trust is a framework, and that in a digital world that framework must be designed, debated, and defended. Your work shows that what we trust shapes what we build. And if we want technologies that empower, not just impress, it starts with a reputation that is earned, not bought. Because the future isn't just decentralized; it is deeply personal.
Speaker 2:Thank you.
Speaker 1:Thank you.
Speaker:Thank you for listening to Intangiblia, the podcast of Intangible Law. Plain talk about intellectual property. Did you like what we talked about today? Please share it with your network. Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter. Visit our website www.intangibia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for information purposes only.