
Design As
Who does design belong to, and who is it for? How does it serve us—all of us—and how can we learn to better understand its future, and our own?
On Design As—a podcast from Design Observer—we’ll dig into all of this and more, in conversation with design leaders, scholars, practitioners, and a range of industry experts whose seasoned perspectives will help illuminate the questions as well as the answers. In our first season, we considered the topics of Culture, Complexity, and Citizenship in terms of their impact on the design practice and also in terms of how they themselves are being shaped by design today. In season two, recorded at the Design Research Conference 2024 in Boston, we gathered new round tables to discuss Design As Governance, Care, Visualization, Discipline, Humanity, and Pluriverse. Plus, our bonus episodes are exclusive recordings of conference panels!
Design Beyond Silicon Valley
Design Beyond Silicon Valley features an exclusive panel from the Shapeshift Summit with Ruth Kikin-Gil, Responsible AI Strategist at Microsoft; Liz Danzico, VP of Design for Microsoft AI and Founding Chair of the MFA in Interaction Design at SVA; and Ellie Kemery, Principal AI User Research Lead at SAP Business AI. The panel examines how companies are realizing the potential of AI and explores how businesses are breaking out of Silicon Valley to consider AI’s implementation and impact around the world. It is moderated by Kevin Bethune, CEO of dreams.design+life and author of Nonlinear: Navigating Design with Curiosity and Conviction.
Follow Design Observer on Instagram to keep up and see even more Design As content.
A full transcript of the show can be found on our website.
Subscribe to With Intent from the Institute of Design for even more bonus content from the Shapeshift Summit.
Season three of Design As draws from recordings taken at the Shapeshift Summit hosted in Chicago in May 2025.
Lee Moreau: [00:00:01] Welcome to Design As, a show that's intended to speculate on the future of design from a range of different perspectives. And this season, like everyone else, we're talking about AI. I'm Lee Moreau, founding director of Other Tomorrows and professor at Northeastern University. This past May, I attended the Shapeshift Summit at the Institute of Design in Chicago, where designers and technologists came together to try to get a handle on what responsible AI is and what it could be. This episode is a bonus episode, where we're sharing exclusive access to some of the panels from the Shapeshift Summit, where we recorded earlier this year. The panelists you'll hear from, Ruth Kikin-Gil, Liz Danzico, and Ellie Kemery, are all featured this season. And the moderator is also featured and is a long-time Design Observer collaborator and contributor, my friend, Kevin Bethune. What I really liked about this conversation, and frankly the set of voices: Liz and Kevin obviously are familiar to our listeners, but Ruth and Ellie also participated in an ongoing fashion throughout the remainder of the conference. So these were voices that were not only present and really important during their own panel, but you could see them engaging the conference broadly speaking for the rest of the event. And I think it speaks to the nature of this conference, a true conversation among leaders in technology and design, trying to confront this challenge of what are we going to do with AI and where is it going? I look forward to sharing this with you. Enjoy! [00:01:25][83.6]
Kevin Bethune: [00:01:28] So Ruth, Liz, Ellie, definitely want to thank you for making yourselves available for this conversation. Really excited. So this panel, for everyone's benefit, is examining how companies are realizing the potential of AI and really getting beyond the Silicon Valley sort of notion to consider AI's implementation and impact all around the world. I'm also just excited by this morning's conversations and the global convening that has happened here. Just talking with everyone in the earlier sessions and over lunch, I'm really intrigued. So I wanna just ask that we give a round of applause to Albert and Anijo for what you've pulled together. Amazing. Thank you. So just a few things to prime the conversation. Personally, I think, like everyone, I'm learning about AGI and trying to do my best to experiment and keep up with the pace of things. And in parallel, I've been on this sort of book tour with Nonlinear. And the thesis for Nonlinear is really, it's not an AI book, per se, but it does address the topic. And what it really tries to galvanize is the importance of creative intuition that we all need, especially as the world undergoes nonlinear change. And for the purpose of this conversation, there's just a few things that I want us all to take a moment to recognize, and then we'll get into the questions. Let's recognize that I think the complexity of our times requires sort of multidisciplinary collaboration more than ever before. I think hopefully we can all agree with that. We're dealing with a much smaller world thanks to digitization. And so the need for multidisciplinary teaming, and to ensure that our teams are evolving to be diverse, to respond to the needs of the world, is paramount. So change is hard. Non-linear change is even harder. And I think uncertainty and ambiguity have to be part of our conversation. We have to embrace those variables for what they are. And while AGI has potential to do so much for us, as well as appropriate concern for it, ideally, through making and experimentation, we can develop evidence that will inform our convictions and give us some guiding principles to move this forward. And ideally, we keep humans involved; I think we all agreed in our prep call that humans need to stay in the loop to guide where this is going. And I get the sense that this notion of making as leadership, experimentation as leadership, building evidence, informing convictions, that's sort of the work that I understand that you're doing in each of your respective organizations. Very excited to get into it. So for the first question, let's sort of pick on Silicon Valley for a little bit. I would say there's a lot of just very loud narratives, I think they were mentioned in earlier talks, coming out of the valley, notions from like everyone's going to have their personal flock of agents forming their friend circle, to computer vision, to autonomous driving, but I will say Ryan's, you know, given me some comfort about getting into a Waymo. So, but what does Silicon Valley, I hate to say, get wrong about AI's potential and even its consequences? And what perhaps are some clear differences of the work that you're trying to drive within your respective organizations? How do we actually move from this sort of techno-optimist attitude to like real applications that address the human systems, as Anijo pointed out earlier? And anyone can— [00:05:07][218.7]
Ellie Kemery: [00:05:10] I mean, when it comes to the work that we're doing at SAP, we are operating at a scale that is very global. And the implications of the unanticipated harms at that scale are very, very high with technologies like this. So we're less about the hype. And we're not focused on AGI. We're focused on actually delivering value and identifying what is going to be materially valuable to people and to these businesses that are our customers, but starting with people and what is going to help them do their best work, make a difference in their organization, all those kinds of things. So we are very much not in the hype cycle that Silicon Valley is, and focused on human centered principles at the core. We're also like very pragmatic when it comes to ethics, and it shows up in how we work, but we can talk more about that. I don't know if you guys have— [00:06:14][64.3]
Liz Danzico: [00:06:19] Yeah, I think that. You know, a study comes to mind that we did in April as we were considering the future of search, as we do, I think monthly, if not daily. I will adjust my microphone in the process. The work that we were doing kind of considered, you know, does a human recognize the difference between search and AI? And in doing so, 1,000 participants recognized that they would like some degree of AI in their search. Over 60% of people said that. And then over 25% of the people said, no, I would not like any degree of AI in my search experience. And then the rest of the folks were sort of undecided or inconclusive. But when we dug deeper into those findings, what we found is that people really didn't understand what AI in search really meant. And so as I thought about this question and sort of keeping the human centered in the experience, I think it's really important to truly understand the jobs to be done, the human experience, and what we really are after, instead of getting swept up into what is possible, really understanding what is needed and what truly matters, instead of the kind of tricks and sort of possibilities of what we can do. That's my point from a recent study. [00:07:50][90.9]
Ruth Kikin-Gil: [00:07:52] I want to say that in my mind, there's something about this, I don't know, techno hubris, right? No one did it before, therefore we must do it, no matter what, because we want to be the first. So I think that it's a very strong drive. If you're in academia and you really want to get to the place where you're unlocking new knowledge, that's one thing, but someone else also referenced this earlier: technology is not positive, it's not negative, nor is it neutral, right? And it's all about what we're doing with it. So, okay, we unlock this new knowledge. We can do great things with AI right now, but what exactly are we doing with it? And I see that there is a lot of, even today, right, even in the last panel, we talked a lot about, oh, all the new things that it's unlocking and the new ways that one person can create something that is so amazing and would take whole crews a long time and a much bigger budget to produce. But the question that I think, and this goes into the centered on humans, is what do we want for ourselves? Like, what kind of future do we want for ourselves, for our kids, for the next generations? And we can't predict the future, right, but we can make the future. So if we're making it one way or making it another, this is a way to predict. So also going back to what you said before, the human in the loop, I think that's the wrong way to talk about it. It should be the human in control, right? And leading with humans, not just saying, oh, thank you, great AI, for keeping us in the loop while you're doing your important things. So I think that this is also part of the thinking, right, that is different. [00:10:09][136.9]
Kevin Bethune: [00:10:10] That's a powerful reinforcement. And I guess what I gather from these early thoughts that you're sharing is getting our mindset beyond just the West Coast sort of thinking and the hubris, and asserting human control, which I agree with, this notion of design dignity, or digital dignity actually, and just understanding the consequences, the unintended as well as intended consequences of every design and business decision. What expectations are we actually building around these new technologies? And I guess, how do we communicate those expectations? With whatever changes we're making, are we actually respecting humans and their efforts, their contributions, in terms of fair labor, fair compensation, some of the hidden costs that lurk behind the scenes, as you were alluding to, Ruth? And celebrating actual human ingenuity and talent in the right way, what are we even making that aligns to a positive societal mindset? And what is the role for design in that picture as we sort of provoke these questions? Maybe Ellie, start with you. [00:11:26][75.3]
Ellie Kemery: [00:11:27] Yeah, when it comes to dignity, I mean, this is something we think about a lot because, you know, with AI, there's this temptation to automate everything. And with agentic systems, you know, all that work can happen in the background, and the human can just show up and verify, because we do also deeply believe in human control. The other thing that we want to keep in mind, when it comes to value, is that people don't hate work. People actually are passionate about what they do. And we have to find out where those passions are, what gets them fulfilled, and then automate away the other stuff, right? So that's really a big focus. And I think we have to keep in mind that, yeah, when it comes to dignity specifically, what gives somebody dignity is very personal too. And so we have to account for those differences. [00:12:28][61.0]
Ruth Kikin-Gil: [00:12:32] And another perspective on that is, how do we create that technology? It feels like everybody is talking about the magic of AI. And there is a sense of magic in there. But actually, this magic is built on a lot of exploitation. If you're looking at how the data is being gathered, how is the data being processed? What kind of data? What do those data sets look like? What do they represent? Who do they represent? Who are the people that are doing all the kind of grunt work to make it available for the large language models to ingest all that? There's a lot of human labor in it that is barely acknowledged or barely compensated. So I think this is also part of the dignity. What are we building this big, amazing technology on? And are we doing it in a way that respects us as humans? So this is all also part of the dignity. And I agree with you, Ellie, you know, not everybody hates their jobs. And not everybody hates all the parts of their jobs. And actually, the more that the technology is growing and building up and can do more things, it actually takes some of the fun parts of the jobs, not just the grunt work, right? So again, it's on us as those who build the products, those who build the systems, those who design the systems to think about that and say, okay, am I giving all the fun stuff to the AI, and now we have human supervisors? Right, at first, I think I was very excited about the vision of human and AI working together and the centaur metaphor of, yeah, we're better together. And then people as curators, and they do have the good taste, they have the intent, they have the intention, and then here's the AI that is doing all the rest, and then somehow the tables are turning, right? And we kind of get to be, okay, yes, no, yes, no. That's not really fulfilling. So to go back to your dignity point, we should dignify ourselves first, right, and then build the technology on top of that. [00:15:29][177.0]
Kevin Bethune: [00:15:30] I hear a lot of, oh sorry— [00:15:32][1.9]
Liz Danzico: [00:15:33] Oh, well, I was just going to add that there's been a thread of trust and transparency throughout the day. And I think something about what you both said made me think that, magic is the lack of, I don't know, I've not thought about this sentence before — magic is the lack of transparency. I don't know if that's fair to say, but. And so, and what you're saying is part of the automation is not being let in on the process. And I think that there could be a new sense of dignity just with the part that you're describing, where if the automation is done and productivity is increased in the way that people are describing, then there might be a new form of seeing that we're able to have, a new form of productivity, such that we are able to see things that we weren't able to see before. A new connection, new materiality. And someone had said that the artifacts that we once did that we were focused on doing were nouns. And so this process allows us to kind of begin to turn things into verbs and our jobs are changed from nouns to verbs. And there's something about that that feels more active rather than passive. And I think the combination of transparency, building trust and passive to active has to happen, but we are nowhere near that point yet. And I think it's our job, you know, from where we sit, our collective job, to help make that connection, and we're not there yet. [00:17:19][106.7]
Kevin Bethune: [00:17:21] Not there yet. [00:17:21][0.2]
Ruth Kikin-Gil: [00:17:21] I like that as a vision. [00:17:21][0.1]
Ellie Kemery: [00:17:23] I would just add one more thing, because I think this focus on automation and, you know, agentic systems in the background taking most of the tasks. If you, and this is what we're doing at SAP too, is if we focus on the value, and I know that's been brought up a lot, and then what you're saying, Liz, about being able to help them make new connections, it's like help them amplify what is valuable. So, I think that's a big part of dignity too, and the positive potential of it. [00:17:58][34.9]
Ruth Kikin-Gil: [00:17:59] And, you know, you asked, okay, how does design fit into this notion? And I think it has two levels. One is, you know, when we're talking about design, what should we do, and then how do we do it? So I see our value in both areas, right? What should we do with this thing that we are trying to make? Is it fit for its purpose, right? Is it doing the right things that it should be doing? Are we building it with the right intent, right? Are we paying attention to all the things that need to be paid attention to? And then the other part is, how do we do that? And this is where the tools, kind of the design tools, come into place, right? What are the principles, what are the guidelines that we need to work with in order to make sure that the thing that we already decided is more positive than negative, how do we make sure that we are thinking about all the edge cases there, right? That we are providing the transparency, that the experience is not causing people to over-rely on the AI? That they can actually make their own decisions in there? Are we setting the right expectations for what is going to happen in that experience? Are we collecting the right type of feedback in the right type of way to feed back into our system? So all of these things are really important when we are creating this product. But all of this comes after our analysis and harms analysis of should we even release that. [00:19:56][116.9]
Kevin Bethune: [00:19:58] This provokes almost like a multi-layered question then. If we're talking a lot about the could or should around the what, and then how you do it, how does that roll up to a larger question around ethical AI imperatives that not only affect your company and your target audiences, but the broader world? And that's a high-level question. And then at a more brass-tacks, concrete level, are there best practices, publications, benchmarks that design folks in particular can look to as sort of early references to inform their work in a concrete sense? [00:20:35][37.9]
Liz Danzico: [00:20:37] Yeah. [00:20:37][0.0]
Kevin Bethune: [00:20:37] I know you come from a security standpoint, Ruth. [00:20:39][1.7]
Liz Danzico: [00:20:41] From a product design standpoint, when we first started this work, of course we had things within Microsoft that we could draw upon, but there was not much in particular that we were able to draw upon because things were being done in, let's just say, secrecy. And so early on we started working on something that we are calling the AI fluency scale. And as we started launching and shipping products in public, we started making that fluency scale more sophisticated. And so we've now been building it up for a little over two years. And so every time we launch a study, we build our repository of these AI fluency scales so that we have this baseline that we're building upon. That's just one reference point, but it's sort of an internal baseline that we work with. And that's just one, but that's a small piece based on some of the tools that these folks have. [00:21:42][60.4]
Ellie Kemery: [00:21:43] Yeah, I was just going to say that, like, you know, we started at the governance level back in, I think, 2018, when we started to put into place principles around AI ethics. And then, obviously, those have really evolved since GenAI. And we have put in place this infrastructure. So there's actually a mandate that the company has that no products that incorporate GenAI go out the door without a thorough assessment. We want to evaluate the use cases, and we have multiple levels of evaluation. I also sit on a review board for these use cases, and we're really trying to pressure test them in every possible way all throughout the process as well. So we are taking those principles and the guidelines, and there's actually an AI ethics handbook that has been operationalized all across SAP globally, that we apply in the trenches. And so that's the other thing. I mean, it isn't an overnight solution by any means, but we're really trying to make ethics part of the fabric of how work happens within SAP, especially in the context of AI. I think it's a long game. Like I said, a lot of people are just getting up to speed on the technology too. But that's just one example. [00:23:12][88.3]
Ruth Kikin-Gil: [00:23:13] Yeah, and Microsoft has similar processes. Microsoft has the six AI principles, reliability, accountability, privacy and security, inclusion, transparency. And sorry, I forgot the last one. It will come to me. But— [00:23:33][20.0]
Liz Danzico: [00:23:35] Human in control? [00:23:36][0.2]
Ruth Kikin-Gil: [00:23:36] No, that's not one of them actually. [00:23:38][1.8]
Kevin Bethune: [00:23:39] It should be. [00:23:39][0.0]
Ruth Kikin-Gil: [00:23:42] Well, it could be. And based on that, any AI product or feature that goes out the door goes through an extensive review. And on the design side we have these, I happen to carry them with me everywhere, so these are the guidelines for human-AI interaction. They're also publicly available. So you can also just go on the website, look for the HAX tools, and you'll find them there. And they are based on 30 years of white papers and academic papers, and we were a group of people that sifted through all of that and created, based on that, these 18 guidelines. And as things evolved and we used more and more AI in security, so I'm part of the security organization, we distilled them even further to the five main things, because 18 is sometimes a lot to remember. So we actually have a process of going through every feature that goes out and reviewing it for alignment with these principles. And that really helps us not just to make sure that the model is good and that we are putting all the right mitigations where they should be, but we are looking at the user experience as a very important mitigation in our tool set when we are coming to the how, right? So, again, sorry, that's a little bit of selling, but we also have a new framework for preventing over-reliance on AI that we are working with in Microsoft, but we're also sharing it with everyone who wants it, just to think about this very important risk of people kind of forgetting all their expertise, and this is based on research, they're forgetting their expertise. And they're kind of saying, oh, there must be something there that I can't see. And they believe it. And there are so many cases that you read and you're saying, really? People didn't think to check? And so sometimes these things are really obvious, but sometimes it's very subtle. And sometimes, especially as the AI is getting better, it is like the 1% that is incorrect, and that 1% could be detrimental, right? So what are we giving our users? How are we building these experiences in a way that helps the user give the appropriate amount of trust? Not a blank check, but we're giving them the tools that they need so they can say, OK, I need to go and look deeper and make sure that it's right. [00:27:19][217.1]
Kevin Bethune: [00:27:20] These concerns are real when we have a conversation around AGI, generative AI, but more recently there's been an explosion of conversations, right, wrong, or indifferent, around agents. [00:27:30][9.7]
Ellie Kemery: [00:27:32] Yeah. [00:27:32][0.0]
Kevin Bethune: [00:27:32] So can you speak to what agents are in your mind? Is it a marketing construct? Do you see utility? And where are you starting to see regulation come in in terms of how we steer and control these notions of an agentic capability? [00:27:48][15.6]
Ellie Kemery: [00:27:49] So we recently did, actually in 2024, a big body of research around multi-agent systems, really taking a human-centered approach to understanding the implications of these systems on the end-user experience. And I think it really highlighted the need for transparency and ethics. I have a lot of optimism about the concept of agentic systems and the potential value they can realize for people, because imagine, like, a lot of specialized agents, you know, that are focused on a certain domain and have developed a deep expertise, being able to help and aggregate, right? But at the end of the day, you know, there has to be a very high degree of transparency in terms of how the AI that we're calling the orchestrating agent, the AI the end user is interfacing with, surfaces how it got there, right? And really breaks down what agents were involved and makes all of that visible to the end user at some level, maybe multiple levels in fact, is what we're exploring. And I love what Ruth is saying about appropriate trust, because at the end of the day, it's about this calibration of trust where people are able to benefit from the technology but realize that ultimately they need to leverage their expertise, and they need to know when to leverage their expertise. They need to be made aware of the potential inaccuracy, the potential harms that can result if they don't. [00:29:40][111.0]
Kevin Bethune: [00:29:44] Let's move to this notion of efficiency. It's been mentioned before, and again, we shouldn't over-rotate to that. But there have been, again, loud narratives around some of the West Coast mindsets around efficiency. Even a few CEOs from the tech sector have declared that for any open head count req, validate that AI can't do it first before you open that req. And it's like, okay. So I guess, how do you think about your own teams with this new paradigm, in terms of the talent that are under your care as servant leaders, which I respect that each of you bring that important competency to the table? How are you caring for your teams? Are there specific nuances that design as a field needs to think about? Whoever would like to start? [00:30:34][50.5]
Liz Danzico: [00:30:35] Yeah, I can say that I have an academic background, as an academic as well as a person in practice. And so I showed up on the scene at Microsoft sort of bringing that to bear, and have brought to Microsoft a principle of a learning mindset as we, I think as we all do, watch this concept emerge. The efficiency concept that you're talking about, I think we became curious and brought curiosity to it, both the individual contributors but also leadership, and tried to take a kind of an early approach to a learning mindset. So how could we take the concept of all designers should code and try to get ahead of that with what's now become, you know, vibe coding, but before that was different kinds of prompting, and before that was early experimentation with AI, and try to understand what future careers were going to be possible for, we'll take designers and researchers and writers on our team. What does the future look like when we have these tools available, in a kind of optimistic, positive kind of way? And what might our future teams look like? And when we started working on the tools and working on products ourselves, both the processes and the products, we started using those to build the products. And so we were sort of dogfooding the products as we were kind of experimenting with these new ways of working. And I don't have any metrics to reveal, although we do have them, about how many people we were able to, like, you know, use three people instead of 14 and this kind of thing. Oh, we do have those. But one thing that I can share that was pretty surprising as an outcome is that the role of a person matters so much less than it did before. In other words, we're all blending the kinds of roles that we had. And so one of the outcomes that we're kind of leaning toward now is that we're looking to kind of collapse roles, collapse sounds so negative, but sort of learning to expand the thinking. I don't know, we're learning to, but we're collapsing, you know, two kinds of roles such that you are truly expanding your skill set as a designer and an engineer, such that if those collapses are beyond our control, then we will be prepared. And we're anticipating changes that are beyond our control. So that's one of the ways that we have approached it. Number one, curiosity. Two, learning mindset. Three, creating the conditions so that people can be ready to approach that change. And four, sort of building products in that mindset and applying the learnings back again. And it's been quite successful and quite invigorating. [00:33:52][196.8]
Kevin Bethune: [00:33:53] I want to stay with what you're describing, maybe double-clicking some more. You mentioned earlier as well, we may be able to see new things and make new connections. As a creative, I think a lot of us approach making our work in very abductive ways; we're pulling together maybe disparate pieces of information and coming up with creative stories from that. I guess, what ways are you seeing AI agents, or whatever these capabilities are, positing? How are you seeing roles expand? How are we seeing new skill sets? Are there any tangible examples where you're excited, excited about what could unlock from this? [00:34:33][40.1]
Liz Danzico: [00:34:33] Yeah, well, there are some things I can't talk about, but of the things that I can talk about, I wish I could give specific examples, because there are a lot of things that we're really excited about. I think it's more like, well, some tangible examples that I think might be predictable. It's like, you know, the team that was designing ad experiences typically couldn't have built out the full ad experience end to end. And now they're building that out in order to share it, whereas before they would have had to rely on these multiple people. But now we do a one-day workshop and they're able to share that experience end to end. And that's not just one person who's kind of enterprising or has the time to do it. It's an entire team who is able to do those end-to-end experiences, and so you're getting that kind of blending of capability. And so they're able to expand that thinking in that way, and that's one kind of thing. Another kind of thing is, in the same way you have this associative lateral thinking, you have someone who's working with generative AI to build experiences that they wouldn't have been able to build before, because you have that associative thinking that might be more akin to a creative writing process than anything else. I can only say that much for now. [00:35:51][77.5]
Ellie Kemery: [00:35:53] I can share some perspectives. [00:35:53][0.2]
Kevin Bethune: [00:35:54] Please. [00:35:54][0.0]
Ellie Kemery: [00:35:54] So from a research perspective, we've really been leveraging AI to help augment our process and actually help us accelerate insight, get faster insight for teams. And I think it's really important because we've actually shifted our whole way of working, in light of how fast things are moving, to be much more continuous and iterative about the insight that we're generating and giving to teams. I'm still grounded in the rigor. I mean, we don't fully outsource our analysis or anything like that, but we're leveraging it in ways that can help us speed up the process. One example is like secondary research, getting a sense for a domain, those kinds of things. And that's been really, really powerful, I think. And the other thing I would say is, because you mentioned this, prototyping. So in this space, you know, and especially being, you know, leading researchers, because I lead a small team of AI-focused researchers, we have typically relied on design to create stimuli for us before we do research, or if we're trying to do something like Wizard of Oz style, you know, like bring something to life that isn't there. And we can now do that, to some degree of competency, you know, just enough to get the kind of insight that we're trying to gather. So there are a few ways in which it's really helped us, but I would also say that it's been a forcing function for teams, like colleagues, to work cross-functionally together more. So that's been really exciting, especially, I would say, research and data science and AI development. And all of a sudden, the research that we're doing isn't just informing design direction or product direction, it's informing systems and the technology investments that the company is making, which is a shift as well. [00:37:55][120.9]
Kevin Bethune: [00:37:57] We're running a little tight on time. I want to open it up for the audience to ask a couple of questions. We probably have time for that before we break. Do we have microphones that can run to people? Up here we have two. [00:38:05][8.7]
Ruth Kikin-Gil: [00:38:09] And if I like your question, you'll get one. [00:38:12][2.8]
Kevin Bethune: [00:38:18] No sale, right? [00:38:19][0.7]
Audience Member 1: [00:38:21] Hello, thank you. That was really, really great. I've been thinking a lot about the word value over the last couple of days. And as we talk about and hear about responsible and ethical AI and the imperative for that, there's also the realistic imperative for maximizing shareholder value. And I'm just wondering if you could speak to what happens when you see an AI pathway that feels ethical to follow, but it's in conflict with shareholder value? [00:38:50][29.0]
Ellie Kemery: [00:38:57] I have it easier than a lot of companies because we're based out of the EU, so we are held to a high standard when it comes to the EU Act and other things, and so it's less of a sell, I guess, on my side, but yeah, exactly. It's not always the case, yeah. [00:39:18][21.2]
Ruth Kikin-Gil: [00:39:18] Yeah, I can say that I can't give you specific examples, but I can say that if we see things, we are definitely poking into them and trying to figure out what's the right solution. So the whole point about responsible AI and doing this is our recognition that things will go wrong. Like if there's one thing that we, as designers, need to work from, it's the mindset that things will go wrong, there will be mistakes. Somehow things are going to go into directions that we cannot predict. What can we do beforehand to make sure that this delta is as small as possible, and have a plan B? So we know that when things go wrong, right, when, not if, when things go wrong, we have a plan B, right? And we are ready with it. So, um, not a direct response, but it's the best I can do. And I like it. So here we go. [00:40:33][74.4]
Ellie Kemery: [00:40:34] And I will also add that it's one thing to have governance, but like Ruth is saying, it's really important. Those everyday decisions that are being made in the trenches are where things can really go off track. And so one thing that we're doing as an example is we're educating designers on the role that they have to play in all of this. And there's actually a list of what we call design considerations, but they're questions that designers should have in their pocket to just be asking at all times of their colleagues, right? As they're going through the process, because even one of those questions could stop the entire process potentially, you know? And you have to go back to the drawing board, so. [00:41:21][46.7]
Kevin Bethune: [00:41:22] Time for one more question. Here comes the mic. [00:41:27][4.5]
Audience Member 2: [00:41:31] I see three brilliant leaders on AI here at the stage. And I know that one of the biggest challenges is the gender bias when it comes to AI. What kind of measures, what kind of actions do you guys see to mitigate yet another gender bias in technology? [00:41:54][22.8]
Ellie Kemery: [00:41:57] That's a great question. So one of the things that is foundational to our approach with responsible AI is that we are making sure, especially when it comes to research, but really the whole process, that the folks that we're learning with, the people that we're involving, represent a diverse and global lens, and that includes gender. So we take an inclusive research approach to things. And I would say that's true for design as well, making sure that we're prioritizing, you know, that equity in terms of representation. But again, it's in the trenches. So it's like, you know, everybody has to be doing it. I won't say that we are perfect at that by any means. And you know, AI in a lot of ways is a black box because we don't understand, you know, I mean, we know it's the internet. So we know we have that going for us. But in terms of the transparency and how to mitigate those harms, I think we're all working on it. [00:42:56][58.9]
Liz Danzico: [00:42:57] And I would agree with that from my perspective as well. And it's not just one thing. So it has to happen on so many different levels, from so many angles. I'll just give another example, in addition to red teaming, hiring, education of the team, support of initiatives, et cetera: on our team, we have a writing team, a UX writing team. And half of the UX writing team actually is on the fine-tuning and evaluation team, which does fine-tuning and evaluation of the model itself, which then looks at all the things you might expect, from positive to negative things that are happening on the model, and evaluating things that are happening on the model side to then reinforce what's happening in the experience. So it's sort of everything from kind of like leadership to the model behind the scenes. And thank you for asking that question. [00:43:51][54.9]
Ruth Kikin-Gil: [00:43:53] And responsible AI is both a mindset and a tool set, right? So you have to think about things slightly differently, right? Currently, when we're creating products, we think about the vanilla scenario, right? We say, oh yeah, it's going to be great and we're going to make all this money and retire at the age of whatever. And it doesn't work that way, right? And the mindset of responsible AI is, one, yeah, design for being wrong, always, and try to be proactive about how you're thinking about what you're doing, right? Knowing that you will be wrong. So what are the harms? And we do a thing that is called the impact assessment, right? We're trying to figure out beforehand what could go wrong, what are all the things? This is kind of the mindset part. We also want to make sure that people understand that it's not once and done. It's not a thing that you do once. It's how you do it. And Mallory, who sits here and is on my team as well, talked about how we move things from process to practice. So it's not something that is imposed on the things that you're doing, but really this is how we work. This is the new way of thinking, the new ways of working. And then on top of that, there are all the mitigations, and mitigations, if we're talking about collaborative thinking and holistic thinking, are happening throughout the entire product life cycle, from the initiation part to after you ship, right? You keep measuring and you keep assessing and you keep learning and getting things back into the loop. But also collaboration between all of the disciplines together, and each one of them knows how to mitigate harms in some way. And there's no one thing that you can do and say, okay, I'm done, right? It's kind of a layered approach, and each discipline is bringing their own set of tools into the mix and saying, okay, I'm going to do this, and you're going to do that, and you are going to do that. And hopefully, after you have like six, seven layers like that, nothing comes through. Of course it does, but that's that. [00:46:31][158.2]
Ellie Kemery: [00:46:31] So one other thing to add, and I love what you brought up in terms of mindset, because we actively talk about this problem-seeking mindset, right, that you need to have. And honestly, we should've always had that mindset. I actually just don't like that word validation, because it's confirmation bias, essentially. But in any case, the other thing that I was going to say is, when it comes to giving humans control, there's that post-delivery of something, when it goes into the wild, right? There's opportunity from a design perspective to, in the context of the experience, empower them to flag bias, right? Like empower them to provide that feedback. And then that can go back into the loop and hopefully improve the experience. But we need to know about it. So how can we make that part of the person's role? [00:47:25][53.7]
Kevin Bethune: [00:47:26] On that note, give it up to these wonderful leaders. [00:47:28][2.2]
Lee Moreau: [00:47:32] Design As is a podcast from Design Observer. For transcript and show notes, you can visit our website at designobserver dot com slash Design As. You can always find Design As on any podcast catcher of your choice. And if you liked this episode, please let us know. Write us a review, share it with a friend, and keep up with us on social media at Design Observers. Connecting with us online gives you a seat at this round table. Thanks to the team at the Institute of Design, Kristin Gecan, Rick Curaj, and Jean Cadet for access and recording. Subscribe to With Intent for even more bonus episodes from the Shapeshift Summit and events and content from the Institute of Design, available on Spotify and Apple Podcasts. Special thanks to Design Observer's Editor-in-Chief, Ellen McGirt, and the entire Design Observer team. This episode was mixed by Justin D. Wright of Seaplane Armada. Design As is produced by Adina Karp. [00:47:32][0.0]