
Response-ability.Tech
Understanding Data and Privacy as a UX Researcher. With Laura Musgrave
Our guest today is Laura Musgrave. Laura was named one of 100 Brilliant Women in AI Ethics™ for 2022. Laura is a digital anthropology and user experience (UX) researcher. Her research specialism is artificial intelligence, particularly data and privacy.
Laura gave a short talk at the inaugural conference in 2019 on privacy and convenience in the use of AI smart speakers. And at the 2021 event Laura chaired the panel, Data: Privacy and Responsibility.
We start our conversation by exploring Laura’s interest in data and privacy, and smart assistants in particular. During her research on smart speaker use in homes, she's noticed a shift in people’s attitudes and a growing public awareness around privacy, technology, and the use of AI. This shift, she feels, has been aided by documentaries like The Social Dilemma (despite well-founded criticisms, such as those raised by Ivana Bartoletti in the Huffington Post) and Coded Bias.
Laura talks about where the responsibility for privacy lies — with the technology companies, with the users, with the regulators — and how, as a user researcher, she has a part to play in helping people understand what’s happening with their data.
I ask Laura what drew her to anthropology and how she thinks the research methods and lens of anthropology can be used to design responsible AI. She says, “The user researchers that really stood out to me very early on in my career were the anthropologists and ethnographers” because “the way that they looked at things…really showed a deep understanding of human behaviour”. It “set the bar” for her, she explains, and she wanted to know: “How do I do what they do?”
Laura shares the book she’d recommend to user researchers, like her, who are starting out on their ethnographic journey, a book which helped her “make sense of how ethnography fitted into my everyday work”.
Because Laura’s been named one of the 100 Brilliant Women in AI Ethics™ for 2022, I ask her to share what the AI ethics landscape, with respect to data and privacy, looks like for 2022. As she explains, “in some senses it is much the same as last year but it's also a constantly developing space and there are constantly new initiatives” before sharing some of the key themes she thinks we are likely to see in 2022.
Lastly, Laura recommends two books, both published by Meatspace Press: Fake AI, and Data Justice and Covid-19: Global Perspectives. (The former we picked for our 2021 Recommended Reads and the latter for our 2020 Recommended Reads.)
You can connect with Laura on LinkedIn and on Twitter @lmusgrave.
An edited version of our conversation is available to read online and to download as a PDF.
Dawn Walter
Today we're delighted to be talking to Laura Musgrave. Laura was named one of 100 Brilliant Women in AI Ethics™ for 2022. Laura is a digital anthropology and user experience researcher and her research specialism is artificial intelligence, particularly data and privacy. Laura gave a short talk at the inaugural conference in 2019 on privacy and convenience in the use of AI smart speakers. And at the 2021 event Laura chaired the panel, Data: Privacy and Responsibility. So we're really pleased to have the opportunity to chat to her. Welcome, Laura. Thank you for joining us. How are you today?
Laura Musgrave [00:01:29]
Thank you, Dawn. Thanks for having me.
Dawn Walter [00:01:33]
So my first question is: I'm really intrigued as to why data and privacy, in particular, is your research interest. Where does that interest stem from?
Laura Musgrave [00:01:40]
I'd started off conducting some studies on smart assistants and became particularly interested in the relationship between people and smart home technology, and it's sort of continued from there, really. Around the time I was doing those studies, sort of 2018/2019, ownership of smart home devices was growing really rapidly. Around one in ten homes in the UK had at least one smart speaker, and in some cases many, many more smart devices as well.
And even since then, up until last year, it's continued to grow, and now one in two homes in the UK has at least one smart speaker. So you can see the rapid rise that smart home tech was following. It seemed a very interesting area to explore, particularly given how fast it was growing.
And for me, it was really interesting to think about that in the context of a home and what that might mean. So thinking particularly about boundaries: boundaries between what's public and what's private, and also how space is used in the home and how smart home devices might play a part in that. So what's shared space, what's private space?
And it also led me to think and wonder about the boundaries between the corporations that make the devices and the consumers, the people living in the smart home. So there was lots of really interesting stuff to dive into there, and it was one of those situations where one question led to another question, which led to another, and it was almost like pulling on a ball of yarn and suddenly going, oh, this thing goes for miles, you know. It just was endlessly interesting to me.
Dawn Walter [00:03:48]
You gave a lightning talk about AI smart speakers at the 2019 conference. The key takeaway, for me at least, is that the general public tend to trust companies to keep their data private, and that they are, generally speaking, willing to forgo sharing their data for convenience. Privacy, by and large, isn’t something the general public is particularly interested in. How can people working on responsible tech, like you, get users to care?
Laura Musgrave [00:04:13]
That's a really interesting area to explore. It was a really interesting area when I was studying smart speaker use, in the sense that some of the participants I worked with were perhaps more concerned than I'd anticipated, but there were also some participants who were a lot more comfortable with the idea of privacy concerns and smart home technology. So there was a real spectrum, I guess, in terms of people's attitudes and approaches to having smart home technology.
It was an interesting time around then because in the same year Shoshana Zuboff brought out her book, The Age of Surveillance Capitalism, and the public conversation about privacy and corporate digital responsibility started to shift a little bit. There had been a lot of conversations around all of these types of themes before, certainly in the technology industry and in academia, but this was the point where it started to become, you know, a real public conversation.
And certainly from that time onwards, there's been this growing public awareness around privacy and technology, and wider questions about, you know, what socially responsible AI is, for example. It was interesting to me because, as well as hearing reports from other people, be it journalists, academic researchers, or industry researchers, talking about seeing the same theme of growing privacy awareness, I was seeing it in my own interviews and participant sessions, where participants would bring up some of these things unprompted. So, for example, the Cambridge Analytica scandal was mentioned when people were talking about the security and privacy of their own personal data, GDPR was becoming a commonly-used term, and conversations were coming up around the use of algorithms.
So what had been quite a technical conversation for a limited audience, around how algorithms were used, for example, in social media feeds or in decision-making processes, was suddenly coming out in conversations and interviews that weren't really around that theme; they were around other things. It really interested me that some of these things were actually starting to be publicly discussed, and there was a lot more growing awareness.
I think some of the documentaries that have come out have played a part as well. There have been various ones on those types of themes. Some of the ones that have been mentioned to me quite a lot by members of the general public have been things like The Social Dilemma, which was controversial in some circles but did become a public talking point, and similarly Coded Bias, which looked at Joy Buolamwini's work. And of course, we had a viewing of that at the conference as well, didn't we.
So all of these types of things seemed to really bring up conversations around not only privacy, but also the use of AI and how it should be used. And there seemed to be a real shift there in terms of how people were talking about technology in public and talking about privacy.
Since then, as well as the conversation moving forward, there's also been a real uptake of privacy controls from the general public. And the pandemic, of course, has played a part in that, because so many more people now are using more technology, more often, than perhaps they did previously.
So there's a lot more consideration and thinking happening around this. Some recent examples: 96% of iPhone users in the United States have opted out of app tracking, according to one recent finding, and similarly DuckDuckGo, the privacy-focussed search engine, had a 55% increase in search traffic in recent years. So there seems to be a bit of a shift happening here in terms of the conversation around privacy, and certainly in terms of public attitudes to it and what it should look like.
I think, as well, in terms of the tech industry itself, more conversations are starting to happen. Certainly when I was looking at the smart speakers work, there was very much a focus on privacy in exchange for convenience, and it was almost a binary thing: you have a choice, one or the other, which is it going to be? The conversation is moving on a little bit now, perhaps as a result of the public awareness that has grown around this, and more people are starting to ask, well, why can't we have both, and what would that look like?
And even when you're looking at different technology companies and organisations, there seems to be more of a realisation now that privacy can be profitable, certainly as a brand pillar or as part of what your brand represents. So there's a bit of an interesting shift. I think underpinning all of this is a massive question, which is: where does the responsibility lie for privacy and other responsibilities when we're talking about technology? And different people may give different answers to that.
Some folks have talked about the idea of the privacy responsibility sitting with the general public, the end-users, and them having some sort of choice in that. Other people have talked about it sitting with the companies, the organisations that are making the technology or building it and deploying it. And still other folks have said, well, actually, the regulators and the lawmakers have got a part to play in this. It's not particularly straightforward, but the truth of it is probably a little bit of all of the above, you know.
But I think the key thing for the public is really knowledge, awareness, and having control. So as user researchers working on these types of projects, that's something that we need to bear in mind and try to support: how do we help people understand what's happening with their data? How do we help them understand what they would like to opt into or not?
Dawn Walter [00:11:48]
You help organisations to build and use technology in a socially-responsible way, particularly AI and emerging tech. As a senior user researcher, from your work and your perspective, how can the research methods and lens of anthropology be used to design responsible AI, and tech more broadly?
Laura Musgrave [00:12:07]
This is an area that I find really interesting because my career journey went from user research into anthropology, and most people that I know have had the opposite journey: they've studied anthropology and gone from that into user research. So there are probably some things that we have in common in terms of perspectives, but for me it's really interesting because I've done and seen user research both without and with anthropology. I'm coming to it from that perspective, in case that helps anybody listening make sense of how I'm describing this.
So, the user researchers that really stood out to me very early on in my career were the anthropologists and ethnographers. And the way that they commented on technology developments, the way that they interpreted what was happening in the headlines, there was something about the way that they were able to look at things and understand them that really showed that sort of deep understanding of human behaviour.
And I knew straight away that they had something different from what I had, and it wasn't just differing levels of experience, although that certainly was true at that time. They also had this lens or perspective that helped them to pinpoint the deeper meaning and the cultural currents, and I hadn't seen that anywhere else. It really gave me pause for thought.
And if I was scrolling my Twitter feed and saw things that they were commenting on, I would always stop and read what they'd written. And more than read it, I'd actually sit there and think about it for a few minutes and ponder it. That sort of set the bar for me at that early stage, because I was reading what they were writing and seeing the talks they were doing, and I was like, what is that magic sauce, you know, how do I do what they do? That was where I wanted to go.
And the thing that they all had in common was anthropology and ethnography. So I was like, well, I guess that's where I'm going, you know, I guess that's where I want to be. There are different aspects to it, of course. Ethnographic methods, you'll agree, are really powerful, and they give you such rich data that is so invaluable when you're trying to understand the relationship between people and technology, and even more so if you're trying to design something new, or if you're aiming to have something innovative as an end product or an end result. There is very little that can compare to ethnography and ethnographic methods. So that's something that I'm really passionate about.
The other part of it, which I don't think always gets the focus it deserves, is the social theory and the philosophical side, which is really crucial. Understanding that not only makes your research process more thorough, in that it helps you to explain why you've chosen certain research methods, but it also helps you to make connections more quickly in your analysis and to understand in a deeper way, because you can make sense of patterns and you've got some sort of framework or context from the literature. You're not on your own. It's not just your project in isolation; you're actually part of a bigger picture, and you're helping yourself to make sense of that and understand what it means as part of that bigger picture.
So that then enables you as a researcher to provide a much deeper level of insight and clarity for the people you're working with and the people you're working on behalf of, whether that's your stakeholders or your client or whoever it is that you're ultimately sharing the research with. That actually helps you to deliver more for them, really.
The other part that I will just add as well is that when you are a researcher, and it depends on the context, whether you're an in-house researcher or a consultancy researcher, it helps you to understand organisational culture in a more in-depth way too, both for your own organisation and any others that you're working with, like partners or clients. And in that way, it helps you to be more effective with what you do with your research.
Dawn Walter [00:17:00]
One of the questions I do get asked is: what book should I read if I want to learn more about anthropology? One of the books I recommend is Think Like An Anthropologist. I wondered what books you read when you were starting your journey that you'd recommend to people who might be in the situation you were in.
Laura Musgrave [00:17:19]
It's a tricky one. The first one that comes to mind is probably one of Sam Ladner's. She's not an anthropologist, but she is an ethnographer. She's written a book called Practical Ethnography, which is a very slim volume, but every single word is really powerful. It's very well written, it's very concise, it sums up how you incorporate a lot of these types of things in your work, and it gives you pointers as to where you can go to find out more. When I was trying to read and understand lots about this, it was a very practical way, as the title suggests, to actually make sense of how it fitted into my everyday work.
Dawn Walter [00:18:12]
It is a very good book, and I think she articulates quite clearly why social theory is so important to help you understand what you're discovering. Like you said, and as I think Rosie Webster put it quite nicely in the last podcast, you're standing on the shoulders of giants: there are people who've done work before you whom you can draw on. Thank you for that. That's a good book tip.
You were named one of the 100 Brilliant Women in AI Ethics for 2022. The award aims to highlight women working in this largely male-dominated space. So I wanted to know, from your perspective, what does the AI ethics landscape, with respect to data and privacy, look like for 2022?
Laura Musgrave [00:18:55]
This is quite an interesting one, because in some senses it's much the same as before (as in last year), but then it's also a constantly developing space, and there are constantly new initiatives. Just from what we were talking about earlier, from year to year, even probably every six months, there are changes in terms of the technology itself and changes in terms of public attitudes shifting. So there's a lot that's the same and a lot that's probably different, and even what I say now will probably be quite different by the end of 2022.
So I think probably one of the key themes that we're going to see is conversations about regulation being an ongoing area that needs to continue to be assessed and to develop, and, as it progresses, looking to see how well it fits the use of AI now. There are already conversations around whether, for example, deepfake technology, when it's used in a negative way, is already covered by certain regulation, or whether we need new regulation to cover it. So that's just one example of the areas where we might need to rethink things.
I think it's also worth saying, as I don't think you can say it too many times, that the legal requirements are obviously only part of the picture of responsible tech. I don't think you can park everything at regulation and just say, oh well, we'll wait for the regulators to get around to it, because there are areas of socially responsible technology, some principles, which aren't necessarily covered by the legal aspect but are still very important for making sure that we're maximising the benefits of technology and reducing the risk of harm to people.
There's also the fact that regulation obviously takes quite a lot of time, in terms of the law-making side, and there are very good reasons for that. But at the same time, we also want to address things sooner. So regulation is still a very important part of the picture, but it is only part of it, which I think is probably the key thing.
The other aspect is that there have been a lot of conversations in the tech industry and academia about AI governance, for example, and what that might look like at different levels. Even focusing down at the company or organisation level, there are real opportunities for building in responsible AI and responsible technology processes, both at the strategic decision-making level and lower down at the project or day-to-day design level.
And there are a lot of really switched-on organisations doing that now, trying to stay ahead of what the public or their consumers expect of them and, where relevant, their competitors as well, because this is an area, as we've just been saying, that is continuing to develop. There are organisations out there that are actively putting these processes in place and have real long-term plans for how they're going to address this moving forward.
I think there will be a lot of conversations as well around acceptable use of AI and devices. So, for example, last year there were conversations around smart home technology, which, as you know, is the particular area I'm very interested in. For example, smart doorbells that are looking out onto the street, or onto neighbours’ properties, and things like that. There was a court decision made last year around smart doorbells, in terms of overlooking other people and recording people who haven't necessarily consented to be recorded, and that was ruled to be a breach of the Data Protection Act and UK GDPR. So there are various things like that where we haven't quite ironed out yet what the boundaries are, what is acceptable in terms of this use. And that's use by private individuals, by the public; there's also thinking about corporations and how they're using smart technology in workplaces, for example. So there are a lot of things like that for us to try and unpick.
Other areas that have been an ongoing theme, and that I think are going to continue, are questions around children and AI and their relationship with it. Obviously, they're more vulnerable, I guess, to the influence of technology in their lives, particularly if they're growing up alongside these technologies coming in. There's a lot of work happening already on how we make sure that they have a healthy relationship with technology and that they do get the benefit from it, while at the same time making sure that we try to avoid any unintended consequences or any potential harms.
And I think the final one that I've seen quite a lot on is around how data is used, in terms of data collation and profiling, the use of sensitive data, particularly in things like advertising, assumptions that may or may not be made about people's identity, and how all of that is managed and governed as well.
So those are my top things. Hopefully that gives a little bit of a flavour of some of the things that, you know, are going to continue to be a focus.
Dawn Walter [00:25:02]
That's wonderful. Thank you so much. So before we go, is there anything you want to leave us with that we haven't covered today?
Laura Musgrave [00:25:07]
I thought I might bring along a few book recommendations to the party, because I know we've got a lot of book enthusiasts in the community. So, of the recent ones that I've been looking at, the first one is Fake AI. That's got a real range of different perspectives on responsible AI; some of them are focused on privacy, but some are focused on other aspects, so that one's really interesting if anyone's wanting to dive into the AI side.
For anybody that's interested in more of the data side, and maybe more around privacy not specific to AI, the same publisher actually has an interesting one called Data Justice and Covid-19: Global Perspectives, which has got some really good essays around privacy in the context of the pandemic and the technology that's being used. The thing that I like about it is that there's a series of essays on different themes, and then it goes into a series of essays from different countries around the world, so there's a real selection there. So, yeah, I got both of those from Meatspace Press, and I've enjoyed both of them recently.
Dawn Walter [00:26:30]
Is there anything else you'd like to say, before we go?
Laura Musgrave [00:26:32]
Just to invite anybody to get in touch with me if they've had any thoughts off the back of this podcast. You can contact me on Twitter or LinkedIn; on Twitter I'm @lmusgrave. So, yeah, please message or drop me a line. I'd be really interested to hear what your thoughts are around privacy and AI in particular.
Dawn Walter [00:26:57]
Thank you so much Laura. It's been an absolute pleasure talking to you today.
Laura Musgrave [00:27:00]
It was very enjoyable. Thank you for having me.