Design As

Design as Trust | Design as Doubt

Season 3 Episode 2

Design as Trust | Design as Doubt features Lee Moreau in conversation with TB Bardlavens, Ryan Powell, Terry Williams-Willcock, and Ellie Kemery.

Follow Design Observer on Instagram to keep up and see even more Design As content. 

A full transcript of the show can be found on our website.

Season three of Design As draws from recordings taken at the Shapeshift Summit, hosted in Chicago in May 2025.

Lee Moreau: [00:00:01] Welcome to Design As, a show that's intended to speculate on the future of design from a range of different perspectives. And this season, like everyone else, we're talking about AI. I'm Lee Moreau, founding director of Other Tomorrows and professor at Northeastern University. This past May, I attended the Shapeshift Summit at the Institute of Design in Chicago, where designers and technologists came together to try to get a handle on what responsible AI is and what it could be. In this episode, we're going to be talking about trust and doubt. This is a round table in four parts. On this episode you'll hear from TB Bardlavens— [00:00:39][38.0]

TB Bardlavens: [00:00:41] Right, and I talk to a lot of folks about what does it mean to build trust versus automatically have trust? What does it mean to move at the speed of trust?  [00:00:50][9.2]

Lee Moreau: [00:00:51] Ryan Powell—  [00:00:51][0.3]

Ryan Powell: [00:00:51] We sort of peel that back a little bit, okay. Well, how do we design for trust? What does it actually mean?  [00:00:56][4.8]

Lee Moreau: [00:00:57] Terry Williams-Willcock— [00:00:57][0.2]

Terry Williams-Willcock: [00:00:58] So kindness is a thing, safety is a thing. Yes, trust is a thing. But if you've got a commercial model that doesn't allow trust to be in that, then be honest about it.  [00:01:06][8.4]

Lee Moreau: [00:01:07] And Ellie Kemery.  [00:01:07][0.4]

Ellie Kemery: [00:01:08] Everybody's aware that without trust there's no adoption, but there is something that people aren't talking about as much, which is that people should also not blindly trust.  [00:01:16][7.8]

Lee Moreau: [00:01:22] Who do you trust? How can you trust? For designers and for makers, so much of the confidence in our design outcomes is based on the fact that we worked through the problem, that we iterated and we challenged ourselves. And ultimately, the final result was the embodiment of our efforts. We knew where it came from. There's always gonna be a bit of subjectivity around, is it good or bad, could we have done something differently— but there's little doubt about how we got there. For many designers, and I include myself in this category, the leap that AI makes as a tool just feels bigger than the quote unquote new tools and technologies that were introduced in the past. Some of the gen AI outcomes that we're seeing right now appear so far from the source code of previous technology that it feels untraceable, that it feels untrustworthy. Where did it come from? Whose ideas did it borrow? Would this have happened without me? And is that a problem? These are the questions that I think we need to be asking ourselves right now as designers and as people who are charged with defining how the future will look and feel and how all this technology will impact human life. So to continue that conversation, or at least advance it, let's hear what people had to say at the Shapeshift Summit in Chicago earlier this year.  [00:02:36][73.7]

Lee Moreau: [00:02:41] Right now I'm here with TB Bardlavens at the ID. It's Friday, May 30th. Hi, TB.  [00:02:46][5.1]

TB Bardlavens: [00:02:47] Hey.  [00:02:47][0.0]

Lee Moreau: [00:02:47] Welcome. TB Bardlavens is general manager and director of product equity at Adobe. To be honest with you, TB, that sounds too simple. I just saw you on stage, uh, in probably the most interactive presentation we've had so far here at the summit. Would you mind describing yourself and what you do to our listeners?  [00:03:05][17.5]

TB Bardlavens: [00:03:07] So first, I always like to start with what is product equity, right? So product equity is a state in which every person, regardless of human difference, can access and harness the power of digital products without harm, bias, or limitation. And so the work that I do is very different than what one would expect. So it's not really around people in the sense of HR, it's not, you know, sort of diversity, equity, inclusion through the lens of who's hired, how they're promoted, things like that. A lot of the work that we do is around what is the human impact of our products on our customers. And so that work is really centered on how do we ensure, particularly at Adobe, that we are meeting the three sort of CSR pillars that exist, which is creativity for all, Adobe for all, and technology that transforms. And so we say, hey, we're doing great when it comes to some of our CSR efforts, but what does that mean for our actual products? What does it mean for Photoshop, and Express, and Acrobat, and even Workfront, and Marketo? Like, how do we make these things feel like they're for everyone? Particularly when a lot of these are really geared towards enterprises and very niche creatives. And so that's where my work lives, is really bridging that gap to say, how do we create products for the future? How do we focus on market expansion, market penetration, business growth, and brand love? And how do we ensure, at the end of it all, that the outcomes for anyone who uses Adobe products are equal and equitable.  [00:04:34][87.3]

Lee Moreau: [00:04:34] So you've worked at several different large companies, some of which are tech firms. Over that time, how did that prepare you?  [00:04:41][7.0]

TB Bardlavens: [00:04:45] It didn't. I mean, it is in many ways new territory, but I will say, I was talking to some students earlier and they asked me, hey, how did you get into this line of work? And I was like, well, you know, it's always sort of been a thing. I've just started having to really think about how do we create the actual space more concretely. And it's not just me, I'll say, I'm not the only person in the field. I'm not the first either. But my approach, I believe, is very different than others. And so I'll start with, like, Microsoft was one of the first tech firms that I ever worked for. And even before that, you know, I thought a lot about diversity of teams and how does that lend itself to how we even think about some of the things that we're building. And at the time, everyone was talking about organizational diversity. This is like during the boom of the chief diversity officer and all that stuff— [00:05:37][52.0]

Lee Moreau: [00:05:39] Organizational diversity would be like who's-who's in the room.  [00:05:41][2.0]

TB Bardlavens: [00:05:42] Exactly.  [00:05:42][0.0]

Lee Moreau: [00:05:42] Like who is on the staff. Okay.  [00:05:43][0.8]

TB Bardlavens: [00:05:43] Exactly. And at some point, I just paused and said, man, there's a lot of noise about this, but are we actually talking about the things that are being built? You know, are we actually thinking about, for example, when I was at Microsoft and we were working on the browser, I was on the Edge team for a little bit. And it was, well, you know, most of the people who used our product were actually middle-aged white women in the middle of America, right, and that was most of those folks. Outside of tech workers, and even most tech workers were still using Internet Explorer, even though it was all but dead. And so, even in that space, you know, thinking through, well, what does it actually mean? Like, if we have this community of people, if they're confused about these things, if they have certain expectations of how a browser should operate, then how do we meet those expectations? How do we sort of think about how we scale some of this work? I think about the work that's sort of been coming out of Stanford around targeted universalism. And that's where I really started building my thoughts around this. I also am the chair of the board for an organization called Creative Reaction Lab. And so one of my really good friends, Antionette Carroll, who is the founder of the organization, as well as the creator of Equity-Centered Community Design, between those two things, between working with her and just thinking about this space, it's like, oh, if we actually applied equity to this, what would that actually do? How would we actually expand and transform our products? And so that's a very long-winded way to say, it's been a journey of thoughts and ideas. And then when I came to Adobe, it was really interesting because even my introduction to the role was a really close friend of mine saying, hey, do you want to come and be my boss? And I was like, well, what do you mean? And he said, well, we've had this head count for this leader for some time, but we wanted to wait to find the right person. We've been sitting on it for almost two years. And as I talked to them more about it, they actually wrote the role around me. They wrote the role around the work that I do, which is a really amazing experience, particularly for, I'm sorry, for a gay Black man from the South who never thought he'd be in tech in Seattle at the time, let alone— my goal was to make $60,000 and have a Nissan Altima. That was success for me growing up. And so this was a completely different space. And so coming into this, taking all of that into context and leveraging my lived experience, where a lot of times you've been told to erase it, has all sort of played its role in where I am now. And so now I feel it's my responsibility to do things like this, where we talk about the work, we talk about the impact, the business side of it, and the human impact, to say this is a growing discipline and we need more people who want to be involved in it, and we need businesses to understand that it is an imperative just like product development, just like engineering, just like design and other disciplines.  [00:08:45][181.9]

Lee Moreau: [00:08:45] One of the implications, at least my understanding of this product equity that you're referring to, is bringing more people into the product, right? And so it has a significant impact or implication on market share.  [00:08:58][12.5]

TB Bardlavens: [00:08:59] Yeah.  [00:08:59][0.0]

Lee Moreau: [00:08:59] That narrative must go over very well. Um, but it might not be the most obvious and immediate thing that people come to, right, when you walk in the door.  [00:09:06][6.4]

TB Bardlavens: [00:09:06] Yeah, and that's actually, so I usually talk to leaders about two things when it comes to product equity. The first is, and I talked about this a bit in my talk, that I'm no longer looking for values-aligned partners initially. And it's not to say that values aren't important, it's to say that, especially in turbulent times, values can shift. If we can align on the outcome we hope for, which for me was, hey, we all want people to use and love our products. So if that is the outcome we want to drive towards, then all we need to figure out is the tactics for getting there. And through that, we also build trust over time, right? And I talk to a lot of folks about what does it mean to build trust versus automatically have trust? What does it mean to move at the speed of trust? And so that's a lot of the thoughts from that end. The second part is that market penetration piece, right? It is to say that if you focus on communities of people that were previously ignored, then that will not only build greater market, particularly for a company like Adobe that has all but saturated the market, particularly in the global North. Like, that's really a huge deal to say, we have a way to find more business and more opportunity. But then also we have an opportunity to build brand love. Because when you focus on communities that you previously ignored, you build brand love because they feel seen. Like, we talk about net promoter scores all the time, right? That is the NPS you want, right. And so those are the two big chunks when I talk to leaders about this, is we can actually do more. Actually, there's a third one, which is around product innovation, and it's to say that equity has never existed in this country or in this world. You can't point to a thing and say, oh, that is an equitable outcome and we know we did that. Right? And so if you start to think about how you build products in that way, in a way that has never been done before, you are driving innovation and you're changing the face of innovation. So no longer do we have the Apple and the Google sort of big showcases where we reveal all the new things, but instead... there's actually a route that Twitter was starting to go down for a little bit before their takeover. They started using community spaces to get feedback on features. They talked with folks that were using the platform to say, what do you think about this? What are your thoughts on that?  [00:11:29][143.4]

Lee Moreau: [00:11:30] A much more subtle approach.  [00:11:31][0.6]

TB Bardlavens: [00:11:31] It was community co-creation.  [00:11:32][0.8]

Lee Moreau: [00:11:33] Right.  [00:11:33][0.0]

TB Bardlavens: [00:11:33] It's saying, hey, instead of us just creating things to pump out and basically becoming a feature shop, which makes apps feel overloaded, like Facebook, how do we do it in a way that actually feels useful for the people who use it every day? And in doing that work, you start to understand the nuances, right? And then you take that and you say, well, who are the detractors? Who are the people who don't want to use this, and why? What's preventing them? So how do we solve for them? And so when I go back to targeted universalism, this is a big piece of it. And so I talked to some of the design and business majors here. And I say, hey, the thing about business school that sucks is that they always talk about this bell curve. And it's always about focusing on that 80%. But if you focus on the detractors and the folks who either can't afford it or can't access it, you say, hey, who are the most marginalized people in this? Who has the hardest time accessing this product or resource, whatever? You solve for them, then everyone else gets that benefit, because you're solving for a person that has an extreme lack of resources. So as you expand out to the rest of the community of people, then all of a sudden, resources start to increase. So those who have greater resources see great benefit from it, but those who had no resources or very few resources also see benefit from it. And so it actually helps create a much more sustainable business practice because you're really scaling from one to many. And that's what most businesses want to do, they want to scale.  [00:12:59][85.6]

Lee Moreau: [00:13:00] Right.  [00:13:00][0.0]

TB Bardlavens: [00:13:00] But they are trying to scale the 80%, and where do you go if you already have the 80%? And so it's really about rethinking even how you build businesses in this way.  [00:13:10][10.3]

Lee Moreau: [00:13:10] TB, thank you so much for your time. It was great having you with us.  [00:13:12][2.0]

TB Bardlavens: [00:13:13] Thank you.  [00:13:13][0.1]

Lee Moreau: [00:13:19] Right now I'm here with Ryan Powell at the Institute of Design. It's Thursday, May 29th. Hi, Ryan.  [00:13:24][5.4]

Ryan Powell: [00:13:24] Hi, Lee. Thanks for having me.  [00:13:25][1.2]

Lee Moreau: [00:13:26] Thanks for being here. Ryan Powell is the Director of UX Research and Design at Waymo, a self-driving technology company. Prior to joining Waymo, Ryan led teams at Google, Samsung, Xbox, and Motorola, and is a graduate of the Institute of Design. You're a graduate of this program.  [00:13:43][16.4]

Ryan Powell: [00:13:43] I am. Yes. Yes, it was a little while ago, but yes, I am a graduate, a very proud graduate.  [00:13:47][4.3]

Lee Moreau: [00:13:48] So tell us why you're here. I mean, Waymo, self-driving car technology, I remember seeing it on the streets of Mountain View maybe 10 years ago in a very early phase. Growth has been substantial. What brings you into a conversation around responsible AI and the future of design?  [00:14:06][17.5]

Ryan Powell: [00:14:06] Yeah, what attracted me when I got the invitation to come and join the summit was this idea that as we bring our autonomous driving technology to more and more people, we are laser-focused on building trust. And that's what we see as sort of the key to wide-scale adoption. Around the world every year, there are about 50 million people that are injured in vehicle crashes, and there are about 1.35 million lives that are lost due to vehicle crashes. And when you look at those incidents, most of the time you have human error or inattention that is a root cause of that situation. And so that's a big motivator for us at Waymo. And so of course what we're doing is innovative. We're working on self-driving cars, but we also kind of think about this more as an imperative to really get this technology out there. And so we're in this scaling moment now where we're trying to get into more cities and have more people be able to experience and use us day-to-day as they integrate us into their sort of transportation options.  [00:15:12][65.9]

Lee Moreau: [00:15:13] That's interesting. I would have thought the natural instinct, and it's kind of a flip, right, the natural instinct to do a self-driving vehicle thing is to move people. It's a mobility play, but it's also a life safety play if framed in the right way. Are there moments where you kind of wear one hat and you're thinking about mobility, and then wear the other hat and think about safety? Or, how do you approach that?  [00:15:33][20.1]

Ryan Powell: [00:15:33] Safety is at the heart of everything that we do. We've been at this for a long time, over a decade, and we've taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean when we think about our ride-hailing experience and getting people to give us a try? So that's where we really spend a lot of our time.  [00:16:10][36.7]

Lee Moreau: [00:16:10] And what does that mean? 'Cause as a designer, I know you can't design trust. You can't just, like, make it. You can wish it to happen, but you've got to earn it. What are the things that you do at a platform level to kind of build that?  [00:16:22][12.0]

Ryan Powell: [00:16:22] So I look back to my time at ID and the work that the school does here to really take a human-centered design approach. And what that means in practice is being very rigorous about understanding people's needs. And so of course, part of that toolkit is going out and asking people what they need. But a big part of it is also doing a lot of observation and taking the time to really try to tune in to what's happening in a particular moment. And so we spent a lot of time riding around with people in cars that are driven by humans to really understand what's happening. And what we noticed in those moments is there's a lot of communication that happens between a passenger and a human driver. It can be explicit or direct, for example, asking the driver which route they're taking or why they might not be moving when the light's green. But more often than not, there are these subtle cues where you might kind of feel the car start to slow down and you might be on your phone in the backseat. You might glance up to sort of see what's going on. And what you'll most likely notice is that the human driver might be noticing something outside the window. And in that moment, it's like, OK, there's a cyclist in front of us, therefore he's slowing down, and that's enough information for me to know why the vehicle is doing what it's doing. And then I'll go back to my phone or something. But in the absence, of course, of a human driver, we have to think about what's a proxy for that type of communication when you're riding in a car that's driving itself. And so that really grounds a lot of the work that we do at Waymo.  [00:17:52][90.1]

Lee Moreau: [00:17:53] That sounds like absolutely fascinating research, and slightly terrifying, being in all these cars with other people driving and the kind of sense of trust. I wonder, having spent as much time as you all have spent with drivers, especially as you're seeing them transition from controlling their own vehicle to being in a self-driving car or autonomous vehicle, whatever you want to call it, do you think the aspirations of drivers are changing as a result of having these experiences? Like what they seek from the driving experience — is that subtly changing over time?  [00:18:27][34.0]

Ryan Powell: [00:18:28] I think that it is. I meet lots of people that say that they love driving and they can't imagine giving up driving. I think that's true. I certainly enjoy driving as well, but I think there's this universal pain point that people have, which is even if you love driving, you probably don't like traffic. And so I live in San Francisco. I often will commute down to Mountain View and I spend a lot of time sitting in traffic. And there are all sorts of tricks, right, that we all do, like listening to podcasts. I'll take, you know, certain phone calls and try to be somewhat productive during that time. But I think the big realization that a lot of people have when they get into a self-driving car for the first time is that I have this environment all to myself and I can choose how I wanna use that space. So if you're in a city like Los Angeles, where it's not uncommon to maybe spend 30 minutes getting from one side of the city to the next, how do you use that time? You might just want to chill out and sort of listen to music. You might want to take a phone call. I used to catch up with my mom sometimes, just check in on how she's doing. And you don't do those things when you're with another person, particularly a stranger. And so not having that social pressure there by being a guest in somebody else's car, it really kind of opens up all these possibilities. And so we also think a lot about that as designers, where we want to be careful not to intrude on that space, meaning that we want people to be able to use it the way they want to. So of course, we think a lot about music services and integration, and we want you to listen to your music. And that's a big part of it. But we also want to make sure that the user experience inside the vehicle is one where we aren't intrusive and that we're kind of respectful of how people want to use the space.  [00:20:17][108.7]

Lee Moreau: [00:20:17] This notion of free space, free time. I mean, actually, this is one of the great hopes people have for AI, that it will create this space of free time that you can put to other tasks or just be. We're not hearing a lot about the applications yet of AI having these kind of beneficial, almost fanciful outcomes, but this is one of them. Do you talk about this internally?  [00:20:45][27.1]

Ryan Powell: [00:20:45] Yeah, we do because we think a big part of the value proposition is that you do get some time back in the sense that you can use it the way that you want to. So I've heard countless stories from parents that will use us to commute with their kids, maybe to soccer practice or, you know, picking kids up from school and coming home and just having that, you know 12 to 18 minutes of time to sort of catch up on what- how was your day, you know, what kind of happened? Or even just listening to some music together and doing that, there is some quality time that happens there. That's how I like to think about it, is the quality of the time is much higher in a self-driving car than it is if you're the parent and you're driving that car and the kids are in the back and you're talking, you know using the rear view mirror and you are having a conversation, it's just not the same.  [00:21:38][53.2]

Lee Moreau: [00:21:39] So I wanna shift the conversation a little bit, selfishly, because I'm trained as an architect and I think about the built environment a lot. And I'm wondering, what Waymo seems to do, and you talked about being a good road citizen, is be kind of responsive and acknowledge the conditions that are already there. But I think there's a world in which, if this takes off, we have these autonomous vehicles that can potentially use the built environment in fundamentally different ways than a human driver uses it. And we can maybe reshape space. As a business, I mean, obviously there's a business model to all of this. There's a hope for profitability and a lot of ambition about what this kind of platform can do. Do you see a world in which the built environment can be altered or start to evolve as a result of these new technologies?  [00:22:24][45.4]

Ryan Powell: [00:22:25] I do. A good example of that, as a small step towards maybe a greater future, is in San Francisco. We have Market Street, which is one of the main thoroughfares that goes downtown, and forever that was closed to personal cars. And so you could only have public transit basically use Market Street to get in and out of the city core. Recently the new mayor of San Francisco is now allowing Waymos to use Market Street. And so it's a small example of having a street that's dedicated to public transportation plus autonomous vehicles. So I think we'll see small steps like that at first. But the other thing I personally believe, too, is that over time, what you might start to see is instead of having a second car or a third car in a household, you might begin to use a service like Waymo where it's more on demand, I can get somewhere when I need it. And again, unlike traditional ride hailing, it feels a little bit more like your own time and space when you are in the car by yourself. And so you might start to see second-car and third-car numbers start to kind of come down a little. As a teenager gets to driving age, instead of doing what I did, saving up and having my first car at age 16, you might start to see people depend more on a service like Waymo for some of those use cases. And so I think as you do that, you can then begin to imagine, like, street parking. You might need less of that if there are fewer personal cars on the road, or parking garages. I think that's where it gets really fun. But I would say it's probably a little bit too early and I probably have some bad guesses on how all that might evolve. But I do think it'll be small steps that we'll see over time, like the Market Street example.  [00:24:11][105.9]

Lee Moreau: [00:24:12] Fair enough. You just brought a terrifying vision to me, which is the 16-year-old driving. And it struck me that actually a 16-year-old's driving is more terrifying to me than an autonomous vehicle. Like if I'm just kind of using my hands to weigh these two options, and suddenly I'm more accepting of autonomous vehicles and self-driving cars than I am of 16-year-olds behind the wheel. Ryan, this was a fascinating conversation. Thank you so much for spending time with us.  [00:24:40][28.5]

Ryan Powell: [00:24:40] Thanks for having me.  [00:24:41][0.5]

Lee Moreau: [00:24:45] I'm here with Terry Williams-Willcock at the ID in Chicago. It's Thursday, May 29th. Hi, Terry.  [00:24:51][5.8]

Terry Williams-Willcock: [00:24:51] Hello. Nice to meet you.  [00:24:52][1.3]

Lee Moreau: [00:24:53] Nice to meet you too. Terry Williams-Willcock is the chief customer officer at Rush Digital in New Zealand, a design and technology company. Now this is the Shapeshift Summit here in Chicago, but there were several convenings. I think they were referred to as salons.  [00:25:08][14.9]

Terry Williams-Willcock: [00:25:08] Yeah. Yeah.  [00:25:09][0.6]

Lee Moreau: [00:25:09] One of those was in New Zealand. Tell us a little bit about that experience.  [00:25:12][2.5]

Terry Williams-Willcock: [00:25:12] Yeah, look, that experience actually originated from a friendship with Albert Shum that came about because I just loved a lot of his, I suppose, responsibility angles on design generally. So he reached out because he'd started a new role with Anijo and he said, look, I'd love you guys to be part of a salon event that focuses on, you know, pretty large existential problems that we could be facing in society, and then thinking about how AI might help solve those problems. And then we had the daunting task of choosing the problem. It's like, okay, what do we want to focus on, right? And we went away and had some thinking, and we started thinking around bias and how AI, you know, does inherently have bias, and the data that we put into it will inform that bias. But we kind of moved away from that through another colleague called Roger Dennis, who's a bit of a futurist, um, and you really need to look this guy up, he's a special kind of individual, um, understands a lot of stuff that I have no idea about. And he mentioned that that is part of the problem, but one of the biggest threats that he saw, that we all kind of got very excited about, was the threat to democracy. In a world where AI has the power to accelerate the amount of information we consume and the types of information, and it has the power to make it, I suppose, disinformation or misinformation, and help you generate that really quickly, it was starting to concern us that that actually could be a really big issue. And one other thing that I thought was very interesting around why we would talk about this in New Zealand was, when I asked Anijo, why did you think about New Zealand, apart from the fact we had some friendship? He mentioned that from afar, you seem a very open society, and we think you could have those types of conversations that we might not be able to have in our country at the moment. And I think that galvanized us to kind of focus on that topic. Yeah, so that was the kind of the key reason.  [00:27:23][131.0]

Lee Moreau: [00:27:23] I don't know if this makes me feel good or bad or how I'm supposed to feel about this, but the idea that you're feeling this notion of a threat to democracy from some of this technology somehow makes me feel good, given that I'm coming from an American context. And there's a lot that is common about some of these technologies in different places in the world. But what's particularly unique in New Zealand that might have been revealed in some of these conversations?  [00:27:48][24.2]

Terry Williams-Willcock: [00:27:48] Yeah, I think, I mean, it was an international problem. So we did recognize it as an international problem. What we did notice through the COVID pandemic, actually, was that there was a lot of coming together as a nation, believing in the same values at the start of the pandemic. And it was all based around the idea of kindness first, right, like, look after your citizens. Take the clinical advice and trust in it, and then make decisions based on that. They were hard decisions, right? Locking the country down.  [00:28:25][36.3]

Lee Moreau: [00:28:25] Right, you were very aggressive.  [00:28:26][1.3]

Terry Williams-Willcock: [00:28:27] We were seen, I think, as very aggressive, and what that did, and I was part of building tools to help with that whole kind of pandemic response, was we did see that we helped save a lot of lives. We helped stop the virus from spreading as quickly as it possibly could have. But you had to make those sacrifices around business, and you had to make the call, you know, we need to tell people that they can't open their businesses, so there's gonna be economic kind of repercussions. But everyone galvanized around that to start off with. And the information was clear, clinical professionals were put on TV at one o'clock and we were told about these things. And then the vaccination conversation started, and the vaccination needed to be rolled out pretty quickly. And then a lot of misinformation was spread at that time, and it created a real fracture in what was a really unified response. So it kind of showed the power of that information and the speed at which it could be pushed out and believed. That kind of gave us a motivation and a bit of a wake-up call that actually technology has kind of fueled this. So how can we respond to that? But that was happening globally. You know, the vaccine was a massive, controversial rollout and it, I don't know, kind of broke down some of the trust foundations we had with large kind of clinical institutions, which was a real shame.  [00:30:05][98.0]

Lee Moreau: [00:30:06] And we're still feeling the repercussions.  [00:30:07][1.0]

Terry Williams-Willcock: [00:30:08] We are totally feeling the repercussions. I had conversations with friends around this that I thought would be reasonably logical around these decisions, but suddenly it's like, I don't trust these, you know, these large pharmaceutical companies are all driven by these agendas, da, da, da, and that suddenly leads you to, I'm not going to put a vaccine in my body that might protect me from a virus, that has been proven by X, Y, and Z, because I've been hearing all this other stuff. And then what we saw is that that was able to spread through certain social media platforms. And content was then generated by people that was then spread even faster and faster.  [00:30:47][39.6]

Lee Moreau: [00:30:48] And in the world of AI, where models are learning off of whatever they can get. I mean, we're running out of data, I think, to train models on right now, I think that's something I'm starting to hear.  [00:30:57][9.3]

Terry Williams-Willcock: [00:30:56] Well, yeah, labeling data is a big, big thing, but I think the interesting fact is the models, the commercial models that are put around these platforms, and I'm going to go there.  [00:31:06][9.8]

Lee Moreau: [00:31:08] Yeah, please.  [00:31:08][0.2]

Terry Williams-Willcock: [00:31:08] Are around that attention economy. And so, you know, there's a fundamental need to keep people engaged in the feed. Therefore, the content you're going to present to them, or what the algorithm is optimized for, is more engaging content, which is gonna be more inflammatory every time you see it, because that's how the algorithm is produced. So if misinformation is fed into that pool of content, into that pool of data, it's probably gonna rise to the top because it's more engaging. It's more interesting, even though it might be false.  [00:31:40][31.7]

Lee Moreau: [00:31:40] It's unique, basically.  [00:31:41][0.9]

Terry Williams-Willcock: [00:31:41] It's unique, so more people will see it. And we know what that does in the advertising world. You say a thing enough times, people will believe a thing. So that, yeah, that was a real motivator, and something we felt when it comes to democracy, we might be able to think about ways or create a discussion that would generate thinking and ideas. We wanted to actually generate ideas that could potentially make a difference.  [00:32:07][25.4]

Lee Moreau: [00:32:07] I want to come back to something you said. And you're, you're a designer, right?  [00:32:10][2.6]

Terry Williams-Willcock: [00:32:11] Yeah in my heart.  [00:32:11][0.2]

Lee Moreau: [00:32:11] So like you come to the world, and you confront the world, with the mind of a designer and the approach of a designer. And you said something, I put it in quotes here, kindness first. You don't hear that a lot, uh, in conversations around technology, generally around AI. Talk about that as a motivator, and how that's different from everything else going on.  [00:32:38][27.3]

Terry Williams-Willcock: [00:32:38] Yeah, it's a great question, thank you. I think the motivator, I suppose at the heart, it comes down to being a parent, if I'm honest. Like, I was probably very frivolous in my early days of design, and I came into design when the internet had just started and I was a graphic design student and the web suddenly appeared and I could start moving my design around, cutting it up. David Carson was a big inspiration for me, and then studios like Tomato came out and.  [00:33:09][30.8]

Lee Moreau: [00:33:12] Oh, love them.  [00:33:15][2.6]

Terry Williams-Willcock: [00:33:16] Yeah, love that. And they were playing with composition and typography and being kind of, I don't know, rebels, I suppose. And I was like, oh, I'm a rebel.  [00:33:23][6.8]

Lee Moreau: [00:33:25] Maybe, let me try that out.  [00:33:26][1.4]

Terry Williams-Willcock: [00:33:26] But it was all kind of frivolous and playful. And I love that spirit. And so I don't want to dismiss that at all. It's a huge motivator. And then I think stuff's got a bit more complicated in the world since I was a youngster. And then you have kids, and you feel a lot more responsible for their futures. And they're dealing with, you know, climate change, technology, you know, growth, there's still inequities happening, you know, the gender pay gaps, and there's so many things. And I think they're brought to the surface a lot more now through social media. So there's some very good things that social media is doing. So as a designer, when I naturally start to confront a problem, I would really want to understand what the human emotional response or reasons for those problems are, and when you start to interview people and you talk to them and you see them in situations, then you can't help but build this empathy. And I know it's part of the process, but it's kind of obvious as well, right? Like, go and see how people are responding to something. And if they've got a tear in their eye, there's probably a problem there. Like, you know, and you want to uncover that, you want to really understand where that's coming from. So I think that's a huge thing in design. And when it comes to the technology and the approach to technology, I think they're kind of doing what I did when I was a younger designer. They're just enjoying it and they're not necessarily thinking of the consequences and observing what those consequences are, because it's hard, it's intangible, it's AI, where is it? It's not that visible.  [00:35:11][105.2]

Lee Moreau: [00:35:12] I think it's even more than that, because it's not just technology that you can play with and not worry about the unintended consequences, but I think there's also a sense that there's money to be made there, whereas when you're referring to 20 years ago or 25 years ago, that early internet time, I don't think it was entirely clear that designers were going to make a lot of money on this, but now there's just so much conversation about that.  [00:35:35][23.0]

Terry Williams-Willcock: [00:35:36] Yeah, it's interesting, isn't it? Like back in those days, I think there were a lot of venture capitalists that were going to make a hell of a lot of money.  [00:35:43][7.2]

Lee Moreau: [00:35:44] And they understood it.  [00:35:44][0.2]

Terry Williams-Willcock: [00:35:44] Like the dot-com bubble and burst, you know, but the consequences of that weren't as profound because it wasn't at the scale that it is now. Like this scale of platforms, you know, billions of users, it is insane, like on one platform, in a way that shouldn't be a thing. So yeah, I think that scale's a thing. And I think there's always a commercial reality to it. We live in that environment. That's the structure that we put around us. But it doesn't mean that we shouldn't put human values right at the top of the chain, right? Because if you put human values right at the top, they're going to inform the commercial values. When are we going to make money? When are you going to decide that we're not going to make money? Like, when are we gonna put more cost into this? When am I going to take more time? And then they start to inform the principles that you apply to your design, right? So what am I gonna value? Am I gonna value trust? I might say trust, but trust starts right at the top of the chain, I suppose, of the human values. So kindness is a thing. Safety is a thing. Yes, trust is a thing. But if you've got a commercial model that doesn't allow trust to be in that, then be honest about it.  [00:36:58][74.0]

Lee Moreau: [00:36:59] Right.  [00:36:59][0.0]

Terry Williams-Willcock: [00:36:59] Like, I think that's what I'm hearing a lot is, I don't know, dishonesty. And that's the frustrating thing that I hear around a lot of the big tech. It's like, you're trying to make it out like it's for people, but you've kind of lost my trust already because you've made sure that people stay in your platform, and you've got a commercial model that fundamentally doesn't enable trust to happen. So I think those are the things that I think people need to be a lot more honest about and straight up about. And I can't call out companies because I'm on a podcast.  [00:37:29][30.2]

Lee Moreau: [00:37:30] That's fine. But I think the idea that that's a conversation that, first of all, needs to happen and second of all can happen right now at this event.  [00:37:39][8.4]

Terry Williams-Willcock: [00:37:39] At this event, absolutely.  [00:37:40][0.8]

Lee Moreau: [00:37:41] I'm quite sure this is not the last time we'll have this conversation here at this event, and hopefully as we move into the future, we'll continue to have this conversation.  [00:37:48][7.0]

Terry Williams-Willcock: [00:37:48] Yeah, yeah, absolutely!  [00:37:49][0.4]

Lee Moreau: [00:37:50] Terry, this was fantastic. Thank you so much for being with us.  [00:37:52][2.3]

Terry Williams-Willcock: [00:37:52] Thank you.  [00:37:52][0.2]

Lee Moreau: [00:38:01] Right now, I'm here with Ellie Kemery at the ID. It's Friday, May 30th. Hi.  [00:38:05][4.0]

Ellie Kemery: [00:38:07] Hello, how are you, Lee?  [00:38:07][0.9]

Lee Moreau: [00:38:08] Wonderful. Ellie Kemery is principal research lead, advancing responsible AI at SAP. Very relevant to our conversation here at Shapeshift.  [00:38:15][7.0]

Ellie Kemery: [00:38:16] Yep.  [00:38:16][0.0]

Lee Moreau: [00:38:16] Tell us more about what you do.  [00:38:17][0.8]

Ellie Kemery: [00:38:17] Well, so I lead a practice of researchers. We are horizontal across the company. So we are really looking at this from a global perspective. And we are working very closely with the AI ethics team to advance responsible AI in the product development process. So end to end, we are focused on bringing humanity into technology and humanity first. So looking at things like the implications on society, sustainability, other things, but really prioritizing humans in the process, end-to-end and trying to deliver outcomes that materialize in value for the world.  [00:38:58][41.2]

Lee Moreau: [00:38:59] I know this is probably obvious to you, but what does end-to-end mean for our listeners?  [00:39:04][4.9]

Ellie Kemery: [00:39:05] Yeah.  [00:39:05][0.0]

Lee Moreau: [00:39:05] How do you think of end-to-end? Because the product suite is pretty broad.  [00:39:08][3.3]

Ellie Kemery: [00:39:09] Totally, I think of the end-to-end product development lifecycle, but even beyond that. So once something goes into the wild, for example, it's still our responsibility to monitor the performance of that thing and make sure it's delivering the outcome that we intend. And again, if there's anything amiss, we can then go dive in and make those necessary improvements. So it really is an ongoing effort. But when I say end-to-end, I am really referring to the product development lifecycle.  [00:39:40][30.9]

Lee Moreau: [00:39:41] So, Ellie, yesterday you were on the panel that was called Beyond Silicon Valley,.  [00:39:44][3.7]

Ellie Kemery: [00:39:45] Yeah.  [00:39:45][0.0]

Lee Moreau: [00:39:45] And I think there were many people, myself included, kind of struggling with what Beyond Silicon Valley really means.  [00:39:51][5.3]

Ellie Kemery: [00:39:53] Yeah, right.  [00:39:53][0.0]

Lee Moreau: [00:39:53] But that's also an opportunity, right? So for you, Beyond Silicon Valley could be about, yes, AI is being generated in lots of different places, not just by the tech bros in Silicon Valley. I'm using air quotes here. But also that the users of these technologies are also beyond Silicon Valley, right?  [00:40:09][16.2]

Ellie Kemery: [00:40:09] Yes.  [00:40:09][0.0]

Lee Moreau: [00:40:10] So actually, if we speak to kind of the richness of where that framing could go, what were your reactions coming out of that panel? I mean, there was a lot of talk about responsible AI, a lot of talk about trust. And what were your impressions?  [00:40:25][14.8]

Ellie Kemery: [00:40:25] So my impressions are, and this is the work that I do, is that you know, I mean Silicon Valley, these tech bros as you put it.  [00:40:35][9.3]

Lee Moreau: [00:40:35] Sorry, tech bros, the tech bros are going to hate me, but I can move with that.  [00:40:38][2.9]

Ellie Kemery: [00:40:39] But all the rage right now is AGI, right? That is really not a focus of ours. We are focusing on people and creating value for people. Generative AI, like a lot of other tools, needs to be used in a way that it's going to deliver- you know, solve a problem. And it should be the right tool for that problem as well. So we are really less focused on the hype of this technology and more so on the practicalities. How do we help people be more productive, but also how do we deliver that value? Productivity is one aspect, but there's also helping to unlock new potential that has yet to be realized. Other technologies need to be leveraged and we need to identify the problems and center our work there, so that really is our focus.  [00:41:37][58.5]

Lee Moreau: [00:41:38] The word that really struck me yesterday, that came through loud and clear from your remarks was the word dignity.  [00:41:43][4.8]

Ellie Kemery: [00:41:44] Yeah.  [00:41:44][0.0]

Lee Moreau: [00:41:44] Talk a little bit about that. At one point you actually said, you know, people don't hate work.  [00:41:49][5.1]

Ellie Kemery: [00:41:50] That's right.  [00:41:50][0.2]

Lee Moreau: [00:41:51] And everybody sort of was like, oh, that's right, yeah, let's not fix that. Yeah, go ahead.  [00:41:55][4.3]

Ellie Kemery: [00:41:55] People are showing up to their workplace for a reason. They need to earn a living, but they also have a sense of purpose about what they're doing. We need to make sure that we're amplifying that and that we are leveraging these technologies in a way that bolsters them and helps them achieve new things that are also in their scope but have maybe never been accessible before. Right? So it's not about automating away all the roles, you know. It's really finding those places where, okay, what can we take away so they can continue to do their passion and maybe open them up to new potential, right? So it's about understanding why they show up for work, right, and then, like I said, helping them achieve more in that area.  [00:42:46][50.5]

Lee Moreau: [00:42:47] And increase human value, right? Ultimately, that is the goal.  [00:42:49][2.8]

Ellie Kemery: [00:42:49] Yeah.  [00:42:49][0.0]

Lee Moreau: [00:42:49] Ultimately, that is the goal. [00:42:49][0.1]

Ellie Kemery: [00:42:49] Yeah, exactly, ultimately increasing the value for their organizations, for potentially the world. So I am an AI optimist in the sense that I believe that this technology has profound capabilities and can actually help us realize things we've never been able to realize before. But we have to really take a hard look at how it's being deployed, how it is being put into product experiences. And we need to think about ethics as a part of this, because the unintended consequences, especially at the scale that we operate, are just too big, right? Supply chain is a core part of our business. You can imagine what harms could be caused if somebody bases a forecast on something that is not accurate or that they can't have confidence in, right, and they blindly trust the system. So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output that they're then leveraging, right? So, um, you know, we talked a little bit about this yesterday, but making sure that people are in control of what's happening at all times, because at the end of the day, um, they need to be the ones making the call, right. They need to be able to check the work of the AI, right, they need to be able to understand where these things are coming from so they can leverage their expertise. So these things are all baked into how we think about this. And you know, on the topic of trust, which was also part of that panel discussion yesterday, I feel like everybody's talking about trust. Everybody's aware that without trust, there is no adoption. But there is something that people aren't talking about as much, which is that people should also not blindly trust a system, right? And there's a huge risk there because, as humans, we tend to, you know, we'll try something a couple of times and if it works, it works, you know. And then we lose that critical thinking, right? We stop, you know, checking those things, and we simply aren't in a space where we can do that yet. And so making sure that we're focusing on the calibration of trust, right, like what is the right amount of trust that people should have to be able to benefit from the technology, while at the same time making sure that they're aware of the limitations.  [00:45:10][140.3]

Lee Moreau: [00:45:11] That has very large behavioral implications. This notion that we're now starting, and I haven't thought of this until just now, as you said it, we're introducing this new technology, which means that systems learn. And the assumption is that they're gonna learn and get better, but we can actually train things to learn and get worse.  [00:45:28][16.6]

Ellie Kemery: [00:45:29] Yeah.  [00:45:29][0.0]

Lee Moreau: [00:45:29] And if we are blindly trusting all of these systems and not just double checking, just being a little bit focused,.  [00:45:36][6.3]

Ellie Kemery: [00:45:37] Right.  [00:45:37][0.0]

Lee Moreau: [00:45:37] We can slip into a very dark place. I've not been hearing that.  [00:45:43][5.3]

Ellie Kemery: [00:45:43] Yeah, I mean, it's really important. It's something I know that is top of mind for anybody working in the responsible AI space. And I don't think anybody's figured it out yet, but the components include things like reliability, transparency, things like consistency, which is super important and hard to do in probabilistic environments. And also, I keep pounding the drum on humans being in control and not just in the loop, because even though the most popular thing that people are saying these days is, like, make sure humans are in the loop, that's a very passive approach and it takes the control away, actually. So, I mean, visibility is important, but control is more important. I feel like they're all important, though, when it comes to this. And then, you know, we are always concerned in user experience about cognitive load, right? You don't want to burden people with all the details at once, so figuring out what those thresholds are in terms of explainability, right? So what's the right level of transparency to provide at the top versus allowing them to dig in deeper? And that's gonna be especially important with agentic AI. You have potentially thousands of agents, you know, at your fingertips, if you will, through an orchestrating agent, and you're still gonna need to figure out how that happened, right? Like, how did that output happen? You still need to be able to dig in there, right, in a way that doesn't, you know, overtax you but provides you with enough information to be able to make an executive decision as a human in control of the outcome.  [00:47:21][98.0]

Lee Moreau: [00:47:22] Ellie, thank you so much for being with us. This was great.  [00:47:24][1.8]

Ellie Kemery: [00:47:24] Oh, my pleasure.  [00:47:24][0.3]

Lee Moreau: [00:47:27] Design As is a podcast from Design Observer. For transcripts and show notes, you can visit our website at designobserver dot com slash designas. You can always find Design As on any podcast catcher of your choice. And if you liked this episode, please let us know. Write us a review, share it with a friend, and keep up with us on social media at Design Observer! Connecting with us online gives you a seat at this roundtable. Special thanks to the team at the Institute of Design, Kristen Gecan, Rick Curey, and Jean Cadet, for access and recording. Need design training for you and your team? The Institute of Design Executive Academy offers work-integrated learning that can power your organization forward in our brave new world. Ford, Steelcase, United Airlines, and more have partnered with ID to think like designers, act like strategists, and lead like systems changers. Learn more at institute dot design. Special thanks to Design Observer's editor-in-chief, Ellen McGirt, and the entire Design Observer team. This episode was mixed by Justin D. Wright of Seaplane Armada. Design As is produced by Adina Karp.  [00:47:27][0.0]

