Response-ability.Tech

Why Human Rights Law is AI Ethics With Teeth. With Susie Alegre.

Dawn Walter Season 1 Episode 42


Our guest today is Susie Alegre. Susie is an international human rights lawyer and author. We're in conversation about her book, Freedom To Think: The Long Struggle to Liberate Our Minds (Atlantic Books, 2022). Susie talks about freedom of thought in the context of our digital age, human rights, surveillance capitalism, emotional AI, and AI ethics.

Susie explains why she wrote the book and why she thinks our freedom of thought is important in terms of our human rights in the digital age. We explore what freedom of thought is ("some people talk about it as mental privacy") and the difference between an absolute right and a qualified right, and why absolute rights are protected differently.

Susie shares some historical examples, including witch trials, as well as the work of Ewen Cameron, a Scottish psychiatrist in Canada who experimented on ordinary people without their consent to explore ways to control the human mind.

Facial recognition technology is a modern attempt to get inside our heads and predict such things as our sexual orientation. Susie explains why researchers shouldn’t be experimenting with facial recognition or emotional AI: you’re “effectively opening Pandora’s box”.

Susie explains the difference between targeted advertising and surveillance advertising, which uses data captured about our inner lives, sold and auctioned on an open market, to manipulate us as individuals.

Over the past few years there's been a great deal of focus on ethics, and Susie suggests we need to move away from the discussion of ethics "back to the law, specifically human rights law". She explains that human rights law is being constantly eroded, and says "one way of reducing the currency of human rights law is refocusing on ethics". Ethics are simply a "good marketing tool" used by companies.

The inferences being made about us, the data profiling, and the manipulation mean it's practically impossible to avoid leaving traces of ourselves; it's beyond our personal control, and privacy settings don't help. In her book Susie suggests that by looking at digital rights (data and privacy protection) in terms of freedom of thought, "the solutions become simpler and more radical".

It’s a point that Mary Fitzgerald, in her review of Susie’s book in the Financial Times, suggested was a "unique contribution" to the debates about freedoms in the digital age, and that "reframing data privacy as our right to inner freedom of thought" might capture "the popular imagination" in a way that other initiatives like GDPR have failed to do. Susie explains for us how this approach would work.

Follow Susie on Twitter @susie_alegre, and check out her website susiealegre.com.

Watch the interview on our YouTube channel.

Dawn Walter: [00:00:04]

So, thank you so much, Susie. Really delighted to see you here today to talk about your amazing book, Freedom To Think.

Susie Alegre: [00:01:35]

It's a pleasure. Thank you so much for having me back.

Dawn Walter: [00:01:36]

Yes, I was looking it up: the podcast that you were on last time was September 16th, 2020, and we were just talking about your book. I think at that point you were already writing it. So, 18 months later, and here it is.

Susie Alegre: [00:01:52]

Absolutely, a labour of love. 

Dawn Walter: [00:01:57]

Thanks so much. So, let's start off with why you wrote the book. And also, why is our freedom of thought important in terms of human rights in the digital age?

Susie Alegre: [00:02:08]

So the reason that I decided to write the book was really a fairly long way around, in that I've spent the last 20-25 years working in human rights in various different capacities, including working on issues related to privacy and the digital world, and looking at human rights and counter-terrorism. I knew the law, I knew the arguments, I delivered the arguments, I wrote tons of policy briefings and press releases, and all those things that you do as an activist.

But I suppose I'd never really felt something so viscerally as I did when I first read about the Cambridge Analytica scandal, and what I read about political behavioural micro-targeting. This idea that companies say they can understand who you are, what your personality is as an individual, based on your data scraped off the Internet, off social media sites, and use that information to then target and manipulate you, to change how you might behave on voting day. Not necessarily changing you from a Remainer to a Leaver, but, if you are a Remainer, for example, maybe making it less likely that you're bothered to get off the sofa, or, if you're a Leaver, giving you that extra push to get out there, if that is the desired outcome.

And when I read about it, it wasn't that it was Cambridge Analytica. It wasn't about the personalities, which is what the story became a year later. It was this idea that our minds could be for sale, wholesale, in an election environment, and that this is actually very common. It goes far beyond Cambridge Analytica, far beyond Brexit or Trump. It's something that is increasingly used in an election environment.

And to me, this idea that my mind might have been hijacked in that way, whether or not it was, I'll never know. But the idea that it could have been, and that maybe it had made me complacent, that I had curated complacency through my Facebook newsfeed where everybody agreed with me and everything was going to be great, just felt so viscerally disturbing that I thought: this is something that goes beyond privacy.

Now, privacy is crucial, privacy is a gateway right. But to me, it felt like something more intimate, that this was about freedom of thought. And when I first started to look at freedom of thought, I discovered that there'd been very little written about it, certainly nothing at that stage written about freedom of thought in relation to Big Data.

So, of course, you know, being a woman, my initial thought was, oh well, I must be wrong, I must have got this wrong somewhere, but luckily, I had friends and colleagues who I spoke to and people started saying no, you're absolutely right, yes, no, this is what it's about, it's about freedom of thought. And so I wrote my first article about it [Rethinking Freedom of Thought for the 21st Century] in 2017 in the European Human Rights Law Review. But I couldn't let it go. 

I couldn't just leave it at that. So I slightly went on a one-woman mission for the next couple of years, talking to people about it, raising the idea everywhere I could, and trying to chip away to get traction. Because I thought, if this is something that's upset me so much that it set me off on this one-woman campaign, maybe it's something that will cut through and make other people understand why these issues matter and why we need to worry about them in our digital future.

And the more that I looked at it, the more that I realised that it's not just about Cambridge Analytica, it's not just about politics. It's this tendency for technology to be designed to understand how we think and feel, how we operate on an individual level, but escalated to a global platform.

So understanding and potentially manipulating us as individuals en masse, in our billions, to change society is something that you see in all areas. You see it in criminal justice, you see it in predictive policing, in the security field, but you also see it in your daily shop, and how your daily shop, or your weekly shop, is being used to analyse what kind of a person you are, and how that data is then going into a massive soup of data that may tell companies how risky a person you are, which might have an impact on your ability to get credit, on your ability to get insurance, on the kind of jobs you're served up when you go online looking for a job search, and what recruiters might think about you.

It really goes across all aspects of our lives, and that's why I felt that I wanted to go beyond academic articles, beyond policy papers, and write a book that would be accessible to anyone, not just people working in this domain. It doesn't have huge surprises, in terms of the things I'm talking about, for people who have spent the last 20 years working on privacy. But what I hoped was that, by focusing on this idea that it's our minds, all our minds, that are fair game in this environment, maybe that would shift how we, as societies, look at and think about digital futures, and that way we could bring about change.

Dawn Walter: [00:08:11]

It's a very readable book, I must say. For a book on human rights, it's a very readable book. 

Susie Alegre: [00:08:16]

Thank you. That's what I hope. But I mean, one of the reasons why I made it quite personal is because I want everybody to realize and think about how this reflects on their lives. You know, it's all of us. It's not other people. 

Dawn Walter: [00:08:34]

So let's talk about freedom of thought. What is it? I know there's a difference between an absolute right and a qualified right, so could you explain for us what freedom of thought is, and the difference between an absolute right and a qualified right, as well?

Susie Alegre: [00:08:50]

So, to start with, the right to freedom of thought and the right to freedom of opinion have two aspects. There's an inner aspect, what goes on inside our heads, and an outer aspect, which is where we express ourselves or manifest our beliefs. And what I'm talking about in the book is that inner space, and that inner space is treated quite differently to the expression of our opinions.

We hear a lot about free speech and freedom of expression in the digital world, and that is a really serious issue. But freedom of speech can be limited for things like protecting public health, protecting the rights of others, protecting national security. So there are justifications for saying, actually, you can't say that, or you can't say that in this particular environment. 

But what goes on inside our heads is treated differently. It is, as you say, what's called an absolute right: the right to have freedom of thought and freedom of opinion inside our heads. And that's crucial to what it means to be human. And there are very few absolute rights.

The two classic absolute rights are the prohibition on torture, inhuman and degrading treatment, and the prohibition on slavery. And if you think about slavery, torture, and the idea of losing your right to freedom inside your own space, inside your own head, all of those things are really fundamental to what it means to be human. And all of them have that quality that, once you've lost them, you can almost never get them back.

And so absolute rights are protected differently. Absolute rights are protected in a way that says that, if something amounts to an interference with that right, you can never, ever justify it. So you can never have a reason for interfering with an absolute right. 

Having said that, many of the issues that I talk about in the book are flagging a direction of travel; they wouldn't necessarily amount to a violation of the right in the strict legal sense. But what I wanted to do was to scope out how technology is trying to get inside our heads, and to understand where the limits to that right should be, where the limits are, where you can say: actually, this far and no further, technology cannot reach any further inside the mind.

Dawn Walter: [00:11:15]

And what I like about your book is that you give us lots of examples from history of those in power using their power to break or read minds through various means. I'd love you just to take us through some of those examples, to show what's happened in the past and how it's potentially happening again in the digital age.

Susie Alegre: [00:11:41]

Absolutely. When I started to look at the history, I realised that while there've been developments in philosophy about what freedom of thought was, and ideas, particularly coming out of the French Revolution and the Enlightenment, great ideas about what freedom of thought might mean, in practice most people in history didn't have freedom of thought, and they certainly didn't have a legally enforceable right to freedom of thought.

And so, a couple of the areas I looked at affect ordinary people, if you like, rather than martyrs of thought and philosophy. One of the areas that is hugely shocking is witch trials and witch hunts through the ages, even up to the present day in some places: the thousands, or millions, of people around the world who have been accused and found guilty of witchcraft, not based on anything they've done, but on inferences about what's going on inside their heads. I mean, how on earth you can prove you're not a witch is a very big question.

And that's one of the problems, I think, that stretches through into technology. And the drafters of international human rights law recognised this problem: violations of your right to freedom of thought and belief don't necessarily need to get your thoughts or beliefs right.

You know, if the witchfinder decides, based on your black cat and your wart, that you are a witch, that you are a believer, and that you are doing spiritual things which are affecting other people, it doesn't matter if that's complete rubbish; you're still going to be burned at the stake.

And that is one of the big problems with the inferences being drawn from our data: these inferences about our inner lives are based on outer signs that are drawn together and categorised as reflections of what's going on inside our heads.

A less religious and more prosaic area, and a more recent example that I looked at, was the way psychiatrists have used science to try to get inside our heads and to change the way our heads work. One example, which is particularly shocking, was a Scottish psychiatrist operating in Canada, Ewen Cameron, who was connected to CIA thought-manipulation research, looking at ways to control the human mind as part of the Cold War battle to get inside our heads, in order to be able to extract information and question spies under duress, research which spilled over into the war on terror and the way that terrorists have latterly been interrogated.

But Ewen Cameron wasn't looking at CIA operatives. He wasn't looking at international spies. He wasn't looking at suspected terrorists. He was carrying out experiments on ordinary people who had been brought into a psychiatric institution for depression, for postnatal depression, for anxiety, for other psychiatric illnesses, and he used the power that he had over these people to carry out experiments to see if they could essentially wash away what was going on inside their minds and replace it with something else, using things like sleep deprivation and white noise, these kinds of techniques, to really get inside the head, wipe it away, rearrange the furniture.

But as I say, what's really shocking is that this was happening to ordinary people walking into a psychiatric institution, hoping for help, looking for help. And many of them have talked about the devastating consequences that this had on their lives and on their minds for the rest of their lives. 

Dawn Walter: [00:16:09]

It's a really shocking example, actually. I think that's one of the most shocking examples in your book. That actually is a nice segue into the next question. If we're talking about trying to get into people's heads, one of the current threats to our human rights, to our freedom of thought, is facial recognition technology, particularly when it claims to read our minds and personalities through our faces, and predict such things as our sexual orientation. Can you explain for us why researchers shouldn't be experimenting with facial recognition AI? 

Susie Alegre: [00:16:40]

I think there is a difference, potentially, between facial recognition that just recognises you as you, and facial recognition that's being used to interpret who you are on a deeper level, if you like. Whether that is emotional AI trying to work out whether you're lying, or whether it is, as you say, this supposed research that claims to be able to tell what your sexual orientation is, or what your political opinions are, based simply on a photograph of you.

When you look at that kind of research, the researchers often say that the reason they're carrying it out is to show how dangerous this technology is, which is possibly laudable. But when you do this research, you are effectively opening Pandora's box.

When you say, I can show that you can tell the sexual orientation of a person from a photograph run through an algorithm, which will tell you with a certain percentage likelihood whether or not this person is gay or straight, it doesn't really matter whether that research works. The technology that you have researched and developed can then be used around the world, in places where homosexuality may still be a capital offence; it's certainly illegal in many countries around the world.

Just saying that you're doing it to show how dangerous it is, while essentially putting this technology out into the world, doesn't absolve you of the consequences of that research. And there really is a need for a much more serious ethics review to be done around research to look at the longer-term consequences of what that research can do. 

You know, researchers can't design a project that's going to explore torture techniques to see whether or not they work. And if you're essentially designing research in order to violate a person's right to freedom of thought, particularly when that could have discriminatory consequences, or existentially devastating consequences for democracy worldwide, then that research really shouldn't be happening in the first place.

Dawn Walter: [00:19:19]

Nice segue into ethics. You suggest that we need to move away from the discussion of ethics, back to law, specifically human rights law, and that ethics are simply a good marketing tool. And I did wonder, and I'm probably playing devil's advocate here a little bit, but has the focus on AI ethics been a distraction, do you think, when we should have been focusing on human rights law the whole time?

Susie Alegre: [00:19:42]

I think it has to a degree, and that's not to say that there aren't fantastic ethicists working in AI, or that it's not important. And as you say, my view is that human rights law is ethics with teeth. But one of the dangerous things in the current environment is that human rights law is itself being undermined and eroded, both in legal terms and in the way we think about it.

So we're constantly being told that human rights law is a problem, that human rights law is for other people. And one of the things I wanted to do with the book was to show how we all have human rights, we all need them, and we mustn't ever be complacent about them.

For human rights law to function as it should, it needs to be able to work in practice. And as I say, it's one of those things where, by focusing on ethics, you make it appear optional, and you take away the power, potentially, of saying: well, actually, there are these legal constraints and we need to be serious about them.

For human rights law to work in practice, we need to make sure that the institutions that enforce it are able to do their job, and that's something that is very much under threat around the world today.

But one of the examples that I used in the book, and it's the only example I know of to date, is where a constitutional right, the right to ideological freedom in Spain, which is the equivalent of the right to freedom of thought or the right to freedom of opinion in international law, was used in a legal case, in a legal discussion which also talked about privacy, striking down a law which allowed for political micro-targeting, for voters' data to be used for targeted advertising and profiling in Spain.

But that was able to be used, that discussion was able to be had, because Spain has a human rights institution that can bring constitutional challenges about the law in its own right. And it has a constitutional court that can consider whether or not all laws are in line with the Spanish Constitution, which is where you find human rights law in Spain. 

And so that is an example of a court being able to consider human rights arguments in detail, and at speed, because there's a human rights institution that can bring that case to court. But where it becomes problematic is if you're relying on individuals in countries where litigation is expensive or difficult to bring, or where human rights institutions don't have the same powers.

And so while I say human rights law is ethics with teeth, we need to make sure that those teeth are not rotting in the mouths of our democracies, if you like; we need to make sure that it's used. And one way of reducing the currency of human rights law is refocusing on ethics, talking about voluntary self-regulation, things that are nice to have, rather than things that we need to have, to make our societies work properly.

Dawn Walter: [00:23:13]

As you note, with the inferences that are being made about us, the data profiling, the manipulation, it's practically impossible to avoid leaving traces of ourselves. It's beyond our personal control, and privacy settings certainly don't help. You suggest that by looking at digital rights, data and privacy protection, in terms of freedom of thought, the solutions become much simpler and more radical. I'd love you to explore that a bit more for us.

Susie Alegre: [00:23:39]

It goes back to that question of freedom of thought being an absolute right. Another way of looking at it is that the absolute core of privacy is mental privacy, so some people talk about it as mental privacy, and mental privacy is essentially that right to freedom of thought.

And so when you think about this absolutely protected core, you can start to think about things that must never be allowed, rather than looking at ways of regulating things that are happening and that will develop, which is the way that privacy and data protection law has worked historically.

So if you look at something like surveillance advertising, the fundamental oil of what Shoshana Zuboff calls the surveillance capitalism business model, that is based on the idea that it should be okay for advertisers and marketers to gather so much information on us that they understand not only what kind of a person we are, what our demographic is, but our psychological vulnerabilities, how we're feeling minute by minute. And for that to be sold, auctioned on an open market, so that advertisers can tailor and direct advertising to us to manipulate how we're thinking, based on a momentary analysis of how we're feeling.

Now to me, that whole idea, whether or not it works in practice, is a violation of our right to freedom of thought. And so, if I'm right in that, and you look at this question from that perspective, you're not looking at what specific bits of data you can use to inform surveillance advertising; you're just saying that surveillance advertising which relies on an analysis of what is going on inside your head, in order to manipulate you as an individual, should never be allowed.

That's not the same as targeted advertising based on context. So saying, okay, here's somebody who's looking at a camping website, we're going to sell them a tent. That is completely different to advertising that says: well, here's someone who is clearly feeling a bit low, because it's 3:00 in the morning and they're scrolling through a miserable Facebook feed, so maybe that's a good time to sell them online gambling, for example, because they're feeling vulnerable.

Those are two very, very different propositions. So, if you look at it from freedom of thought, rather than from how you're managing the data, it's about saying that there are some things your data can never be used for. And that turns it around. Rather than saying there are some bits of data that can't be used, or some ways of processing the data that can't be done, or that you've got to explain how someone's data is being used, it's just saying you can't use data to interpret someone's feelings and manipulate them for advertising, whatever it is you're advertising.

Dawn Walter: [00:26:52]

I think Mary Fitzgerald, who reviewed your book in the Financial Times, picked up on that particular point, which you make later on in your book, and she said that reframing data privacy as our right to inner freedom of thought might capture the popular imagination in a way that other initiatives like GDPR have failed to do. So let's hope.

Susie Alegre: [00:27:14]

Absolutely. That is my hope and that is the bottom line for why I wrote the book. 

Dawn Walter: [00:27:19]

So to finish on a note of hope, what can we all do, what are the concrete actions that we all as citizens can take? 

Susie Alegre: [00:27:27]

Well, as you've said, you know, messing with your privacy controls will only take you so far. But I think it helps to become more aware of the information that there is on you, and I listed some examples in the book of apps and websites where you can go and get a readout of what the tech is inferring about you. And what's important there is to remember that, even if it gets you completely wrong, those incorrect inferences may still be informing the kind of job adverts you see, the kind of opportunities you have, the kind of opportunities your kids might have, or the people that you know. So becoming more aware of what is being read about you is, I think, a first step.

But another thing is really taking on board the importance of human rights to our lives and to our futures, supporting human rights organizations who are already doing this heavy lifting. And again, I've cited a lot of them through the book. You know, really supporting the people who are taking the fight to protect our rights for the future. Voting for people who say they're going to protect our rights, voting out people who aren't protecting our rights. Asking your MP what they are doing to protect our rights, but also trying, in whatever small way it might be, to recognize how your interaction with your technology may be affecting how you think, how you feel, and trying to put it down.

It sounds like a small thing, but even just putting your phone down at the end of the day, stepping away, switching it off, giving yourself a few hours' breathing space, instead of scrolling through a newsfeed while watching Netflix at the same time. Those kinds of things can actually really bring home how much your technology is affecting the way every single one of us thinks and feels.

Dawn Walter: [00:29:39]

That's great. That's a good list of things for us to do. Thank you so much, Susie. It’s been lovely talking to you today. Thank you so much.

Susie Alegre: [00:29:47]

Likewise. Thank you so much for the opportunity.