Response-ability.Tech

What data scientists can learn from feminist social scientists in India. With Radhika Radhakrishnan.

Dawn Walter Season 1 Episode 43

In this episode, we're in conversation with feminist scholar and activist, Radhika Radhakrishnan. Radhika is a PhD student at the Massachusetts Institute of Technology (MIT) in the HASTS (History, Anthropology, Science, Technology & Society) programme. This programme uses methods from history and anthropology to study how science and technology shape – and are shaped by – the world we live in.

Trained in Gender Studies and Computer Science engineering in India, Radhika has worked for over five years with civil society organisations to study the intersections of gender justice and digital technologies using feminist, qualitative research methodologies.

Her research focuses on understanding the challenges faced by gender-minoritized communities with emerging digital technologies in India and finding entry points to intervene meaningfully. Her scholarship has spanned the domains of Artificial Intelligence, data governance pertaining to surveillance technologies and health data, and feminist Internets, among others.

Radhika shares with us what she'll be researching for her PhD and why she moved away from computer science to social science.

In 2021 Radhika’s paper, “Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India” was published in the journal, Catalyst, and we explore her findings, as well as why she was drawn to artificial intelligence in healthcare.

We also discuss her experiences of studying up (see Nader 1972) as a female researcher and some of the strategies she used to overcome these challenges.

Lastly, Radhika recommends Annihilation of Caste by B. R. Ambedkar, and explains why it's important that we openly discuss caste. (Check out this article in WIRED about caste in Silicon Valley.)

Follow Radhika on Twitter @so_radhikal, and connect with her on LinkedIn. Check out her website, and read her blog on Medium.

Dawn Walter: [00:00:04]

Today we are delighted to be talking to Radhika Radhakrishnan whose interests and research lie at the intersection of gender justice and digital technologies, from a feminist perspective. Previously the Gender Research Manager at the World Wide Web Foundation, Radhika is now pursuing her PhD at MIT in the HASTS (History, Anthropology, Science, Technology & Society) programme. This programme uses methods from history and anthropology to study how science and technology shape – and are shaped by – the world we live in. In 2021 Radhika’s paper, “Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India” was published in the journal, Catalyst.

 

Dawn Walter: [00:01:29]

So welcome, Radhika. Congratulations on starting your PhD this year at MIT in the HASTS programme. As part of your PhD you're working with Dr. Catherine D’Ignazio at the MIT Data + Feminism Lab. So is your PhD building on the research you've done on the problems with AI deployed for healthcare in rural India?

 

Radhika Radhakrishnan: [00:02:00]

Thank you so much, and I want to first thank you for having me on the show. I'm really excited to be here, and thank you also for the wishes on my PhD. My previous research on AI in healthcare in India was actually my Master's dissertation research. So this was something I worked on a couple of years ago. I continued it after my Master's as well, and then I wrote it up and published it. So I feel like I want to work on a new project now in my PhD, but it is still within my, you know, area of interest. My broad research interests are within the field of feminist technoscience, which is an emerging interdisciplinary field at the intersection of gender justice and digital technologies.

So my current research interests are within feminist technoscience, but not relating to AI in healthcare. I'm currently looking at feminist surveillance studies, critical algorithm studies, and participatory action research, which all sound really broad. But you mentioned the MIT Data + Feminism Lab that I'm working in with Dr. Catherine D’Ignazio. With her, I'm currently working on a community-driven project focused on the Indian government's Safe City project. Now, this is a multi-billion-dollar project that's been funded by the Indian government, in collaboration with some US companies actually, and the aim is to provide safety for women in public spaces through the installation of urban surveillance infrastructure. So that includes things like drones, CCTV cameras with facial recognition, etc. And I am very critical of projects that promote the idea that surveillance produces safety, because I think, from feminist perspectives, surveillance actually produces its own kind of violence. So quite the opposite of what the government has in mind.

And so I'm working with local grassroots communities of women in India to really understand what the experience of surveillance is, and how these vast resources that are being mobilized towards surveillance can be better utilized to actually help women with, you know, public safety. So it's a participatory, community-led project. We're using all the resources we can to try to amplify the voices of people on the ground, and to see the best ways we can resist this surveillance infrastructure.

 

Dawn Walter: [00:04:45]

You looked like you were heading towards a career in computer science, because you've got a very strong sciences background and you graduated with a Bachelor of Engineering in Computer Science. And then you did a Master of Arts in Women's Studies, and now you're doing your PhD at MIT in the HASTS programme, which is obviously history, anthropology, science, technology and society. So I was really curious about what drew you away from computer science and toward social science?

Radhika Radhakrishnan: [00:05:10]

Absolutely, that's a great question. So in India there is actually this, you know, cultural understanding that people who do computer science are smarter than the people who do social sciences and humanities. So I got questioned a lot when I made this shift, actually. But I was in the tech space for two years after my bachelor's in Computer Science Engineering, and I faced many personal experiences of sexual harassment, gender discrimination, and a lot of patriarchal attitudes. I actually spent almost a year, you know, in a legal case against sexual harassment in the workplace, and by the end of that experience, that's when I found feminist writing.

It was a turbulent time in my life, and I feel like feminism really gave me a language to make sense of and to articulate my experiences. So I wanted to not only study it further, I also wanted to work towards building a more gender-just world. And when I began my Master's in Gender Studies, that's when I came across feminist science studies as a course. And that led me to then, you know, find the emerging field of feminist technology studies, or feminist technoscience. Through this, I really was able to look back on my undergrad training with more of a critical lens. So the systems that I was building in the labs, I was able to apply an understanding of feminist theory and, you know, gender perspectives to really understand what the problems with them were.

And so I found that that would be the area where I was able to work best, because I was able to bring in my background in tech and also apply this new social sciences perspective to it, and work at the intersection of both. You know, in undergrad in India they don't really teach computer scientists how to think critically about the things that we learn in the classroom and build in the lab. We are not taught how they impact the world outside the lab. It's a lacuna in our education system, I think. In the US I've noticed that you can take courses, you can minor in a social science subject while you, you know, major in computer science, so you sort of have that interdisciplinary thinking right from the start. That's unfortunately not something that our education system in India provides. So for me to be able to even take courses on any social science subjects, I had to do an entire Master's in that field.

 

Dawn Walter: [00:07:55]

In your 2021 paper, “Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India”, you critically question and destabilise the “AI for social good” narrative that is being used in the healthcare industry in India, given that, in reality, AI technologies are disempowering (rather than empowering) underserved communities in accessing equitable, quality healthcare. I was wondering if you could summarise your paper for us, and particularly what drew you to look at artificial intelligence in healthcare; I know you did your Master's on it.

 

Radhika Radhakrishnan: [00:08:36]

Absolutely. So my motivation for really looking at this field was, as I briefly mentioned earlier, that I had taken courses on AI during my undergrad; I had learned how to design AI systems. And, as I mentioned, my Master's coursework really helped me look at some of those AI applications more critically. What was particularly distressing was how we were taught about AI systems as being really neutral, objective, and unbiased. Given the hype around AI applications, I decided to use feminist theory, which was questioning whether AI really is that neutral and objective, and I wanted to apply that feminist theory to study AI systems in India.

I think there was a huge gap even at that point in India for this kind of work, because feminist technoscience is not really an established field at all yet in India. It's barely actually emerging. And a lot of Global North companies were, and are, building AI tools that are being tested on Indian populations, that are being deployed in India, without really an understanding of what the Indian context looks like. So I was concerned by some of these developments that I was reading about in the news around me, and I thought, well, I seem to have an understanding of the AI systems and now I'm, you know, also learning about feminist theory, so let me bring these together.

With healthcare, well, I was actually first mapping out where all AI is being used in India, and it's being used in all domains, ranging from healthcare to agriculture to the military. And the ‘AI for social good’ narrative seeps across all of these domains. I ended up choosing healthcare, I think, at least partially because my Master's dissertation adviser, Dr. Asha Achuthan, specialized in healthcare and the medicalization of women's bodies. And, well, as a student who was new to the social sciences at that time, I wanted to learn as much as I could from the expertise of those around me. But I think some AI applications in other domains are also just as worrying and should get this kind of critical attention.

So that's sort of my motivation, in terms of how I came to this research subject. And you asked me to also summarise some of my main points in the paper. So, very briefly: in the paper I'm critical of the dominant narrative of, you know, AI for social good, which has been very widely adopted by many stakeholders in the healthcare industry and the tech industry. In the healthcare industry specifically, the problem that's being identified is the fact that in India we don't have a lot of medical professionals. There's a huge shortage of medical expertise in India. And so AI applications are being designed that are targeted towards the sick and the poor, so that areas that don't have access to medical care can now receive it through these applications. Which all sounds great in theory and on paper.

But when you really apply a feminist lens to this, you start looking at what the gendered implications are. I spent a year doing fieldwork in Southern India for this project, and I noticed a lot of issues that were coming up on the ground. I saw three main reasons for why India was being used as a testing ground for these AI diagnostic systems. So I was focusing on healthcare diagnostic systems that used AI applications, and these were built in collaboration between Global North tech companies and Indian healthcare providers. The healthcare providers would provide the medical records, which would act as the input data and the training data for these AI algorithms.

But “medical records” seems like a euphemism. What are medical records, and where do medical records come from? They come from the bodies of people. And who are these people? They are largely the sick and the poor in India. This is not happening largely in the urban areas, but in the semi-rural, peripheral areas where this AI for social good initiative is targeted.

So I analyze that there are three main reasons for why this scenario is emerging. The first is the diversity of Indian populations, which contributes to a diverse data set for the AI applications. The second is the reduced cost of building these AI applications in India, because they have been combining patient treatment with these experimental trials for building the system. So you don't have to fund the AI training separately, and therefore there's a reduced cost. Of course that brings up a lot of ethical issues, but from the perspective of the deployers it's efficient for them. And the third is the fact that in India we have a really unregulated ecosystem for these kinds of technologies; we have a technocratic government, which is very, you know, uncritical of the social impacts of these technologies.

So all these factors put together really make India an attractive destination for some of this. And in my paper I argue that this is a form of experimentation upon people. And it's a form of re-colonization in a different way. So I use my fieldwork observations to point to various ethical issues that arise on the ground when such systems are developed in an ecosystem of this sort, and then I offer some social and policy recommendations for how we can improve the scenario going forward. And what we need to keep in mind when we are building systems for underserved populations so that their interests are prioritised above the market logics of, you know, deployment and regulation of AI systems in healthcare in India.

 

Dawn Walter: [00:15:18]

Some of the stories you were bringing back from the field, in terms of how, you know, the people in the rural areas were not really understanding what they were being asked, in terms of the consent. I think you made a point somewhere else that, you know, the people who were designing the systems really just saw these people as data points, not as human beings, and so there was very little active consent going on; they weren't really consenting to it. But because they weren't objecting to it, it was sort of deemed consent.

 

Radhika Radhakrishnan: [00:15:57]

Absolutely. So consent is something that is already murky when it comes to healthcare. Because, you know, any person, when they walk into a clinic, is already distressed. You're already worried about the situation and you're already giving consent under some amount of duress, at least. That situation really compounds when you have significantly reduced bargaining power with respect to the medical establishment. So when we're talking about the sick and the poor, people who hold positions in society that don't have the kind of social capital or resources to question, resist, or fight back against some of these applications, that's when these issues become problematic.

So in the areas where I did this fieldwork, in Southern India, where this is really being tested currently, I noticed that most of the people whose data was being collected couldn't read and write. So what is the point of giving them a consent form, because they can't really consent to anything that's written on it unless it's explained to them. But, you know, for the healthcare providers and the tech companies, the consent form is more like a formality that they can just tick off and say, well, from our end we've done our bit. But it's not translating into the people on the ground actually understanding what they are consenting to.

And because all of this is happening under this grand narrative of AI for social good, when you ask the medical practitioners and the tech companies, you know, whether they are aware of how consent is being obtained on the ground, their cop-out is to say that, well, we're all in this boat of good intentions, and so if something goes wrong, it's not intentional. Which is a massive evasion of ethical responsibility on the part of experts. So I would definitely say a lot more should be done in terms of holding people accountable, and holding systems and structures accountable here, and finding more meaningful ways of engaging with people if we are going to be building systems that claim to benefit them in turn.

 

Dawn Walter: [00:18:21]

You also suggested reframing the question, sort of, from ‘how can AI solve a problem?’ to ‘what problems can AI solve?’

 

Radhika Radhakrishnan: [00:18:32]

Absolutely. And I think this is because of the, you know, massive hype around AI that we see today, which is also what motivated me to work on this project in the first place. We want to apply AI in every sphere around us, and, you know, I don't want to come across as someone who is critical of all technology. The fact that this podcast is happening right now online, and people can listen to it on their phones, their devices, that's something beautiful that tech provides us. And I think it has absolutely wonderful applications that we should benefit from, but we should also think twice about where we are really applying certain kinds of technological solutions.

So the example that I generally give is that of the law of the hammer, which is that, you know, if the only tool that you have is a hammer, then you're going to treat everything around you like a nail. And that's what we are doing with AI, unfortunately, I think. Today we want to treat all the social development and policy problems around us as nails that can be fixed through AI. And some of them can definitely be helped with these technologies, but the problem arises when we don't understand the context in which this technology is deployed. And so, to foreground that context and the experiences of people, at the end of the day I have proposed that we first ask ourselves the questions of what problems even need to be solved, whether those problems can even be solved using AI, and whether there are better ways of doing it.

Sometimes you don't need a complex multi-billion-dollar project to solve certain problems. Sometimes, you know, you just need people on the ground; you need better mobilization of resources and changes to structures and policies that don't necessarily involve technological interventions of this complexity at all. At the end of the day, I think we need healthcare for all, and not AI for all, and so I'm trying to see how best we can do that.

 

Dawn Walter: [00:20:56]

So what were some of the problems of studying up that you encountered as a female researcher and how did you overcome them? 

 

Radhika Radhakrishnan: [00:21:02]

There are a lot of issues with studying up that I encountered in the field. First was the issue of access to the institutions that I wanted to study, the corridors of power, so to speak. Many of the tech companies that I interviewed haven't actually published public research on the tech that they are currently testing and deploying in India. And when I approached them for interviews, they either outright rejected the invite, or they only allowed me to speak to their engineers in the presence of a PR person, who would then intervene in the middle and say, oh, this is not a question that's appropriate, you can't ask this, this is, you know, confidential, this is proprietary, etc. So it's almost like, what are you hiding, right? So there is that one huge issue with respect to just access.

But then there are also unique issues that come up when you focus on the positionality of the researcher. So in my case, as a queer woman in India, accessing and speaking to people who were, you know, representatives of global tech companies, medical practitioners who were in positions of power in large medical establishments, there was a clear power dynamic in our interactions, and there were a lot of patriarchal attitudes. I mean, people would just assume that, as a woman, I wouldn't even understand the kind of complex tech that they were building. So there was this infantilization, almost, you know, where they would often try to, like, dumb it down for me while talking to me. So those kinds of attitudes I definitely faced.

But also, I think, more troubling issues as well. In the field I was stalked at one point, and there was really not a lot of recourse available. So these are the kinds of challenges that you do face in the field, and thank you for asking about that, because often we don't see what goes into the process of doing some of this research when we are reading about the findings. And I think it's important for more people who work on it to also understand the problems that one navigates when one is actually in the field.

Radhika Radhakrishnan: [00:23:10]

I do ethnographic research, and I think there are a lot of issues in navigating this kind of rich research. With studying up and down, and I want to put the “up” and the “down” in double quotes, of course, I have found that it actually is really helpful to build connections in the field in order to gain the trust of people when you talk to them. This doesn't necessarily apply as much to... actually, it does apply to studying up as well, because, you know, union workers, for example, will be able to talk to you a lot more about these issues than, for example, someone who's actually having their hands tied because of a lot of these obligations.

I made a lot of these intermediary connections in the field in order to reach these corridors of power and to even access some of these spaces that I otherwise would not have had access to. So union workers, grassroots workers, I would say, are actually excellent connections of that sort.

But also, more generally, I would say, I offered the option of speaking anonymously to a lot of the research participants. And I did speak to quite a few representatives of tech companies who were themselves worried about what their labs were building in this space, but who couldn't officially be on the record to disclose this. And so I would, you know, meet with them and offer them the option of keeping their identities and their affiliations anonymous, so that they would be able to talk to me about what's happening inside. So I'd say that these are sort of commonly used tactics where you can balance the ethics of the research and still study the space that you've gone into.

 

Dawn Walter: [00:25:10]

So you’ve worked at organisations such as the Internet Governance Forum at the United Nations, the Centre for Internet and Society, and the World Wide Web Foundation. I wanted to know, what would you like to do once you’ve finished your PhD (which I know is another six years away). Do you think you’ll return to policy work? Where do you see yourself?

 

Radhika Radhakrishnan: [00:25:27]

That is truly a farsighted question. I suppose life will happen in the next six years in ways I cannot possibly foresee now, but I do have an idea of where I would like to be. I don't know if it will stay the same at the end of the six years, but I don't see myself going back into policy spaces. Actually, I don't think I really liked policy spaces in India very much. In India, I think, they really are dominated by lawyers and, you know, focused more on legal desk analysis, with little to no space for social scientists and the kinds of questions and grassroots methods that we are trained in using. Which is not to say anything bad about the policy space in general; it's more about my fit in those spaces.

And I am not the kind of person who works exclusively at a desk. I like working with people, with communities, on the ground. It's also in line with my politics. So I would actually like to set up an action research centre in India after my PhD. I'm really interested in using social science methodologies, and in particular feminist methodologies and participatory action research, to work with local communities of women and solve for the social challenges that they face with emerging digital technologies.

And I want these to be, you know, solutions that are community-driven, not implemented top-down. And as far as I know, I don't think there are any such research centres in India along these lines as of now. There is a lot of technological intervention that's happening, that's going uncontested, that's not being focused upon, because I think the political climate in India has not been conducive to doing such critical work in the past few years, which was also one of my justifications for coming to the US for my PhD. It simply became unsustainable to continue this kind of work in India, because the government was, you know, passing laws that were cutting off funding for NGOs, organizations working on these issues were being shut down, dissent was being criminalized.

So all of this is currently ongoing in India, and it's extremely difficult to be an activist or an academic working on issues that are critical of the government and the political landscape in India right now. But I'm hopeful that things will change in the next six years, and I'm hoping research of this sort contributes to those things changing. And for me it's important, as an academic, that my research actually informs grassroots realities, and I also see myself as an activist.

So it's important for me that my academic research informs my activism and that my activism informs my academic research as well. So, whatever I end up doing after my PhD, I hope to stay true to that.

 

Dawn Walter: [00:28:34]

So, before we go, is there anything you want to leave us with that we haven't covered today?

 

Radhika Radhakrishnan: [00:28:39]

A book recommendation that I'd love to offer is Annihilation of Caste by Dr. B. R. Ambedkar. Dr. Ambedkar was one of the founders of our Indian Constitution, but more importantly, he was a people's leader and an anti-caste intellectual. He was himself a Dalit, which is a person who is outside of the caste system and in India considered, you know, quote-unquote “untouchable”.

And he spent his whole life really working towards the upliftment of Dalit communities in India, and is today, very rightly so, worshipped as one of the most prominent leaders of the anti-caste movement in India. He has an incredible book, written in English and written very accessibly, called Annihilation of Caste, which is based on an undelivered speech that he had prepared in India. So if anyone wants an accessible introduction to understanding caste, I would definitely recommend this book. It's an incredibly powerful book, and very relevant to understanding how caste works and plays out in India even today. And I would really recommend that people read Dr. Ambedkar.

And, in general, I would also maybe leave the listeners with some advice, which I suppose you could take or leave. This is something that I noticed when I moved to the U.S. just, you know, a few months ago. Already I see that in the Global North there's a huge silence over certain means of social stratification we observe in India, most importantly that of caste. I have caste privilege based on my identity. So I think it's important to openly discuss issues around caste and tech in India.

We have a lot of interesting work that's coming up that is not reaching across the borders. I think most of the people we read in the Global North, when it comes to India, are diasporic Indians. And, not to take anything away from diasporic Indians, but I think Indian life and experiences are best captured by people who are actually living there and who, you know, really go through the grind of everyday life in India.

And we have amazing work coming up, not just in academic circles but also in activist spaces and civil society spaces, that is questioning some of these developments that we spoke about today, and a lot more. And I think these voices should also be heard in these spaces. So we need to really go beyond the, you know, obvious picks for the people we read, and diversify our reading of India to include people from India.

And I would also say, in that aspect, we should discuss things like caste, which is quite invisibilised in all our discussions around social stratification in the US. Just as race is a huge axis of social stratification in the U.S., we have caste as a dominant axis of social stratification. And it is not true that when you move to a place in the Global North you transcend those categories; I think you bring those categories with you. So it becomes really important to visibilise them, to talk about them, and to find ways to resist them by building cross-border solidarity, by really being able to support people on the ground. And the first and best way to do that is to start talking about it. So I would encourage people here to read more about some of these issues that we're facing in India.

One thing I have noticed since coming to the U.S. is that there is a lack of understanding of what the politics in India today really looks like. India is not the largest democracy in the world that we think of here; today there is really no democracy in India. We have a far-right Hindu nationalist government in power that has implemented a project of absolute violence against minorities, against women, against Muslims, against Dalits, and people in India are unable to continue their day-to-day lives. They are unable to continue their jobs, their work. And so much is happening that requires critical focus from the Global North, that requires solidarities from the Global North. And I would urge everyone here in positions of power to highlight some of the issues that we're facing in India right now that we cannot highlight from there, because dissent is being curbed so heavily that it's impossible to speak up without being criminalized today in India.

So those who have the freedom of expression here to be able to speak out against these atrocities, I would urge them to use that power and privilege to please amplify the voices of activists on the ground in India who are fighting for basic human rights and are not only being denied them, but are being killed, are being thrown in jail, and are facing mass, you know, discrimination and violence. So please focus on that as well.

 

Dawn Walter: [00:35:05]

Thank you so much for highlighting that, Radhika; really, really important, and an important note to end on. So thank you.

 

Radhika Radhakrishnan: [00:35:14]

Thank you so much for having me on this. This was so interesting for me to speak about as well, and it's really rare, I feel, that podcasts in the Global North even engage with scholars who come from the Global South, or activists from the Global South. So I really appreciate that; thank you so much for having me.