Design As

Design As Fast | Design As Slow

Design Observer Season 3 Episode 1

Design As Fast | Design As Slow features Lee Moreau in conversation with Ruth Schmidt, Albert Shum, David McGaw, and Ruth Kikin-Gil

Follow Design Observer on Instagram to keep up and see even more Design As content. 

A full transcript of the show can be found on our website.

Season three of Design As draws from recordings taken at the Shapeshift Summit hosted in Chicago in May 2025.

Lee Moreau: [00:00:02] Welcome to Design As, a show that's intended to speculate on the future of design from a range of different perspectives. And this season, like everyone else, we're talking about AI. I'm Lee Moreau, founding director of Other Tomorrows and professor at Northeastern University. This past May, I attended the Shapeshift Summit at the Institute of Design in Chicago where designers and technologists came together to try to get a handle on what responsible AI is and what it could be. In this episode, we're going to be talking about the space between moving fast and going slow. This is a round table in four parts. On this episode you'll hear from Ruth Schmidt.  [00:00:41][39.3]

Ruth Schmidt: [00:00:41] Friction is usually something that we try to take out of experiences. Sometimes we want to add friction. Like positive friction can be really good. It means that we pause and we reflect.  [00:00:51][9.3]

Lee Moreau: [00:00:53] Albert Shum.  [00:00:53][0.3]

Albert Shum: [00:00:53] Oftentimes, we're so focused on creating the thing, we don't get a chance to actually step back and say, wow, what about the people that's gonna be impacted?  [00:00:59][5.5]

Lee Moreau: [00:01:00] David McGaw.  [00:01:00][0.3]

David McGaw: [00:01:01] I feel like the scale of the opportunity is mind-boggling, and it's unsettling in some ways, because AI does some things a little faster and then other things it does tremendously faster.  [00:01:12][11.2]

Lee Moreau: [00:01:13] And Ruth Kikin-Gil.  [00:01:13][0.3]

Ruth Kikin-Gil: [00:01:14] I would put myself into the side that says let's slow down and have a look.  [00:01:21][6.7]

Lee Moreau: [00:01:27] A few months will have passed by the time you hear this episode. And that might mean this is old news. What I can tell you is that the biggest takeaway from Shapeshift was that we need to slow down and think. As designers, as a community, we need to take a beat. I think a lot of what was driving that was the sense that the blazing acceleration of computation and tech development is radically outpacing our perceived human ability to assess, to plan, and even to imagine. And that's scary. Now, speculative thinking and future envisioning are wonderful tools that we as designers have in our bag of tricks, but it feels like things are moving so fast that we're at risk of playing catch-up in the years ahead. To reflect on that, let's hear how this conversation unfolded at Shapeshift.  [00:02:11][43.8]

Lee Moreau: [00:02:15] I'm here right now with Ruth Schmidt at the Institute of Design. It's Thursday, May 29th. Hi, Ruth.  [00:02:21][5.9]

Ruth Schmidt: [00:02:22] Hi, Lee.  [00:02:22][0.2]

Lee Moreau: [00:02:22] Thank you for being here.  [00:02:23][0.6]

Ruth Schmidt: [00:02:24] Thanks for having me. This is terrific.  [00:02:25][1.2]

Lee Moreau: [00:02:25] Ruth Schmidt is an associate professor at the Institute of Design, focused on behavioral design. Before joining ID, Ruth held several design consultancy and leadership positions. And from 2009 to 2017, she served as a senior leader at Doblin, which is now part of Deloitte, where she developed applied behavioral design methodologies and led teams to advance innovation efforts within client organizations. Ruth, thank you so much for being here. I admire several parts of your background, so I wanna kind of unpack each one. You were at Doblin. This is one of the legendary places where design matured into the type of practice that started to really have a significant impact on the world. Can you talk a little bit about that experience, kind of your work there, and what you witnessed as part of that organization?  [00:03:15][49.3]

Ruth Schmidt: [00:03:16] Absolutely, and thanks for having me and even starting back then because in several ways my time at Doblin actually informed heavily what I do now. So Doblin was an innovation firm, still is part of Deloitte. My work there was initially as a design strategist. So I came in, I had design training that I got at ID, had a master's from here at ID, and my work was primarily around how to help companies become more innovative. So innovative, innovation are all kind of horrible words because they mean a lot of different things to different people. But essentially, we were helping them try to imagine who they could be down the road. So sometimes solving today's problems, but more often than not, trying to equip them to think about what they wanted to be three, five years down the line. My particular specialty is, as you mentioned, behavioral design. So what I tended to do was to bring a behavioral sensibility to what we did. An example that came up frequently, companies want to be innovative, but it's really hard to get their employees to be innovators if the organization doesn't allow them to be. So frequently we would go, a company would say, well, we want people to be more innovative in healthcare or financial services, whatever it was. And then you'd find out that the entire organization didn't really support that.  [00:04:37][80.6]

Lee Moreau: [00:04:37] So the companies were aspiring to be innovative, but did not actually support it internally.  [00:04:41][4.4]

Ruth Schmidt: [00:04:42] Exactly, it was the structure.  [00:04:43][1.1]

Lee Moreau: [00:04:44] Yeah.  [00:04:44][0.0]

Ruth Schmidt: [00:04:44] Of the organization that sometimes made it difficult. So if you're being measured, for example, on ROI on a monthly or quarterly basis, that doesn't really help you become innovative, because oftentimes those timelines are much, much longer, for example. It could be harder sometimes to see where your career could go. Innovation wasn't always a dedicated department, so being innovative could feel very risky to people. And if you failed, that often also made you feel, you know, you were the one who was whispered about in the hallways, for example.  [00:05:13][29.5]

Lee Moreau: [00:05:13] You can innovate yourself right out the door.  [00:05:15][1.3]

Ruth Schmidt: [00:05:16] Yeah, exactly. Exactly. And so companies on the one hand would say be innovative; on the other hand, they sometimes made it hard for people to do that. So that was kind of an example of how we were bringing a design sensibility. And I particularly could also bring this behavioral design understanding, so that we could help the companies think about how do they organize better? You know, how do they design new career paths? That kind of thing. So yeah, I did that work for about eight and a half or nine years, mostly in financial services, health care, and education.  [00:05:48][32.1]

Lee Moreau: [00:05:50] There's an exhibit behind you, literally behind the glass, which is documenting the history of the ID and telling stories. Behavioral design probably isn't represented if we analyze it at the very beginning, the foundation, but emerges at some point. And behavioral design is tricky because it's not always something that you can see or take a picture of and document. It doesn't necessarily feature well in a magazine. Talk about the role of advocacy of what you do within some of these conversations in the history of design. There's no question that it's important, but you have to maybe do a little bit more.  [00:06:25][35.3]

Ruth Schmidt: [00:06:25] Yeah, behavioral design is— what makes it trickier, I think, is that it really is sort of this amalgam of two fields. So there's behavioral science, which is a lot of smart people understanding why people do the weird things that they do. And then bringing a design lens to it says, well, how do we apply these things? You're absolutely right that, yeah, there isn't necessarily a tangible thing that we put out there. Arguably, a lot of the work that ID does actually suffers from the same challenge, where it's hard to say what service design sometimes is, or systems design too. But I think with behavioral design, the one thing that I find really, really useful is that people recognize it in themselves. So when we are talking about certain concepts or certain types of solutions... people recognize it. It's hilarious, you can almost see the light bulb going on over a student's head when they realize, oh, that's why I do that weird thing that I do. Like we'll be talking about whatever it is in class and then suddenly they have a word, they have a concept for sort of understanding themselves better.  [00:07:29][64.0]

Lee Moreau: [00:07:30] What's an example?  [00:07:30][0.4]

Ruth Schmidt: [00:07:31] One of the examples I sometimes use, there's an idea of friction. Friction is usually something that we try to take out of experiences, right? Usually we want to make things as easy as possible. Sometimes we want to add friction, like positive friction can be really good. It means that we pause and we reflect. There's another term that sometimes gets used, though, and this is for the bad kind of friction, called sludge.  [00:07:56][24.9]

Lee Moreau: [00:07:56] Ooh, sludge, okay?  [00:07:58][1.3]

Ruth Schmidt: [00:07:59] And basically, sometimes you go through experiences where it is unnecessarily difficult. And so one example that often comes up when we're talking about this idea is anyone who has ever tried to quit a gym membership has experienced this. Some student, I remember, talked about having to literally write a letter, print it out, like people barely have printers, let alone a stamp and envelopes, and mail it in in order to quit this gym membership.  [00:08:27][28.2]

Lee Moreau: [00:08:27] That's horrible.  [00:08:28][0.3]

Ruth Schmidt: [00:08:28] So they'd had, yeah, they'd have this horrible experience that was very memorable because they hated it and, you know, it dragged out for months and months, kind of thing. And now suddenly they had a way to actually frame that: oh, I can design that differently. Understanding behavioral design, I actually have the ability to recognize not only that this is a bad experience, but now I have words for it, I could make a case for it. I have data that actually supports that people don't come back when you treat them badly, that kind of thing. [00:08:55][27.0]

Lee Moreau: [00:08:56] I love that story. And this notion of friction has come up already in a couple of conversations around topics of convenience or making things easy or seamless or whatever. But you said friction can be useful, it allows us to pause and reflect. We're here at this summit where we're hopefully pausing and reflecting. I would love to say it's a pause on this AI surge, right? Can we just pause it for a second, think about what we want, and move forward? But how are you already sort of taking it? We're in day one of the conference. How are you processing what's going on downstairs?  [00:09:30][34.5]

Ruth Schmidt: [00:09:31] I think it's hard to turn off the behavioral lens, so everything that I'm hearing I think is being filtered through that. And actually the topic of friction in an AI context I think is really essential. It's a great example of making a lot of things super easy, right? All you have to do is like literally speak something and you get all this content. I mean it's magical, right, it sort of feels like technology as magic. Unfortunately, what that also means is that we don't always fact check. We don't always understand what it is that's coming back to us. We hear about the hallucinations. So yeah, when it comes to friction in an AI context, part of it is about just the systems that allow it to very easily just, you know, we all know that misinformation is all over the place. It's much easier to be irresponsible about that kind of thing than to be responsible. One thing that we try to teach as a school is around, like, how to take a critical lens on all of these kinds of technologies, but it's work. It's extra work that we need to do. And recognizing where friction is the responsibility of the human who's getting the information, of the people who are disseminating the information, right, it's not a single question about sort of who should be taking charge of this. But, yeah, when it's that easy to get wrong information out there, that's something we need to be super skeptical about.  [00:11:03][91.7]

Lee Moreau: [00:11:03] As a behavioral designer, is this something that you've been, this notion of responsibility, has this been increasing over time if you think of the arc of your career? Is this something you're more mindful of or was this always there and we just need it more than ever?  [00:11:17][14.0]

Ruth Schmidt: [00:11:18] I think it's always been there. I think design actually has often been at the forefront. I think design kind of got burned by recognizing that, oh, it's really, really easy for people to infinite scroll, or to create single-serving packaging that creates a ton of waste.  [00:11:35][17.6]

Lee Moreau: [00:11:37] Right.  [00:11:37][0.0]

Ruth Schmidt: [00:11:37] Design, we got fingers pointed at the field, right, for not being thoughtful about that.  [00:11:41][3.5]

Lee Moreau: [00:11:41] Yeah.  [00:11:41][0.0]

Ruth Schmidt: [00:11:42] So we sort of got our hands slapped. And there have been lots of conversations in the field of design, I think, that are maybe ahead of certain other fields. It also probably helps having a human-centered background, too, where we can put ourselves in the shoes of a consumer or a human on the receiving end, as well as somebody who's creating stuff and recognizing where we need to kind of put the brakes on things. In behavioral design, I have to say, though, there's always been a recognition that the more we understand about how people digest and comprehend information, you know, it can quickly turn into manipulation and puppet mastering. And there's always been a real recognition that we don't want to go there. So even within the behavioral field, that has been part of the conversation. But there's also the idea that designing for other people means we always need to be super aware of where our sensibilities are dictating what we think a good idea is or who gets access to it. There are different levels of being thoughtful in design about even what do we solve for in the first place.  [00:12:49][67.1]

Lee Moreau: [00:12:49] I'm so glad you're here at this summit, participating in this conversation, bringing your expertise to a conversation about technology, which quite often, I think, ignores behavioral design or behaviors of humans and what we need. Thank you so much for being with us.  [00:13:04][14.9]

Ruth Schmidt: [00:13:04] Thank you so much for having me.  [00:13:05][0.9]

Lee Moreau: [00:13:10] Right now, I'm here with Albert Shum at the Institute of Design. It's Wednesday, May 28th, and the conference is about to start tomorrow. Things are already underway. What's going on, Albert?  [00:13:19][9.0]

Albert Shum: [00:13:20] Hey! Thank you for having me, Lee. And I'm really, actually really excited. We're actually doing this conference. It's been a lot of hard work by a lot of people to bring all these amazing people together. [00:13:32][12.1]

Lee Moreau: [00:13:34] I'm really excited for the attendees because I can sense the enthusiasm that you have for this. I know, you know, you're really starting in earnest tomorrow with the event, but maybe we can step back and talk a little bit about your background and how that led you here. So by way of sort of an introduction, could you kind of tell us how you got to this point and how you built this network?  [00:13:54][19.8]

Albert Shum: [00:13:54] Yeah, I feel like now we're going to go to way back time machine.  [00:13:57][2.5]

Lee Moreau: [00:13:57] That's totally fine.  [00:13:58][0.5]

Albert Shum: [00:13:59] So my background, how far do we go back? Well, I have a career in design, a very fortunate career. And if I go all the way back to just education, because we're at the Institute of Design: I was fortunate even in the high school I went to, and this sounds really weird now if I talk about it, I went to a vocational high school. So I actually studied drafting in high school. That's a thing, because I was really interested in architecture and design and making things and taking things apart. And then I quickly realized that, wow, drafting is a means, and maybe this ties to how we're talking about technology today: it's really a tool. And I was more interested in, wow, what can I go create? The creativity part. And I grew up in the middle of Canada, and when I tried to explore that field, there wasn't necessarily design. It was like, oh, you want to make things, you should be an engineer. So my undergrad was in mechanical engineering. That seems like the natural thing: if you want to make products, you go into mechanical engineering. And then I quickly realized that engineering is a great foundation for problem solving, but usually you're given the problem and then you have to come up with the answer. And I felt that I was missing the creative part of making things, which, I've learned by now, is actually what design is.  [00:15:23][84.3]

Lee Moreau: [00:15:24] Or even identifying the problem.  [00:15:25][1.1]

Albert Shum: [00:15:25] Right, exactly. And understanding what users' needs are and translating that to hopefully create better solutions. And so that led me to continue my education. I went to Stanford's product design program, working with David Kelley, and that was a total eye-opener. For me, it was the first time I felt like, oh, this is my tribe. This is my kin. And that led me to amazing people like Tony Hue and Maria, who were my roommates when I was in grad school. And they've gone on to do amazing things.  [00:15:59][33.3]

Lee Moreau: [00:15:59] Colleagues at MIT at this point, yep.  [00:16:01][1.6]

Albert Shum: [00:16:01] Exactly, and so I feel like that allowed me to have this more expansive view of design. It's not just about making things. I mean, it's really important to actually make the thing, but also, like you said, to identify what the problem is. And also this idea of shaping the meaning. Meaning is the hard part, especially with AI and a lot of technology: we can do so much, but what does it mean for us as people? And I think that, to me, is really bringing empathy into technology. And maybe that's what I really learned at Stanford. The first thing is that you have to have empathy for the end user, but also all the different stakeholders. So much of my work is to serve others. And I know it sounds a bit like a calling, and very holy, but no, we have tremendous responsibility. We get to shape things that impact literally millions, if not billions, of people. And it is a responsibility. And I do feel like, at least in some of our work, we've been entrusted. I feel like we're more like stewards of technology now. And that's probably the reason for these conversations.  [00:17:29][88.0]

Lee Moreau: [00:17:30] So I'm a designer, you're a designer. Tell me more about this "Shapeshift" concept. Where did this come from?  [00:17:35][4.9]

Albert Shum: [00:17:35] Yeah, Shapeshift. As a member of the board of advisors for the Institute of Design and an advisor to Dean Anijo Mathew, we had a conversation, I think about two years ago, where AI was really happening to us, in some ways I felt like. There was so much going on. And I'm sure you remember that moment when ChatGPT launched and everyone's like, hey, have you tried this out? There was so much excitement, but also so much, I would say, hesitancy, if not anxiety: wow, how's this going to impact us? And what's the future? And I felt, at least for me, there was a lot of conversation happening, especially in tech, working in tech and being part of big tech. Oftentimes, we're so focused on creating the thing, we don't get a chance to actually step back and say, wow, what about the people that are going to be impacted? And so Anijo and I thought, how do we create a space for us to have a conversation that's outside of big tech, that reaches even outside of the United States, outside our own kind of natural community, and start looking at broader areas around the world, but bring more of an outside-in perspective rather than inside-out of technology? We build a technology and then we go ask people to make something with it and then hopefully we'll solve some problem, versus, hey, what are the issues that are really confounding a lot of different communities around the world, and then have those conversations on how technology can enable better solutions. And so we took that outside-in approach, and we had the fortune to kick off the series with different salons around the world. We identified some amazing people through our networks, people who similarly were interested in having these conversations. So we went to places like Auckland, New Zealand, where they were looking at different issues around how democracy and technology can enhance the process, especially for elections. That was a really interesting workshop. Then to Tokyo, where, especially with an aging population and a lot of changes, mental health was an issue they wanted to tackle, and we explored that topic and brought a lot of different insights. We also learned that AI is so focused on language, and there are issues when you're actually using AI in a different language, like in Japan. And it was eye-opening to go to Pune, India. The topic the group was interested in exploring in Pune was upskilling with AI: there are going to be new opportunities, and at the same time traditional roles might change. What are the new skills required to help the current or next generation of office workers, information workers, confront the new realities? So that's the kind of global perspective we're trying to bring.  [00:20:43][187.6]

Lee Moreau: [00:20:45] These are huge topics that you're talking about, so upskilling, democracy, housing, like just the things you're naming off. You don't think of those aligned typically with an AI conversation. And so like, how does this all come together at the summit?  [00:20:58][13.3]

Albert Shum: [00:20:58] At the summit here, we're actually going to have each of the salon groups present their work and share out what their findings are, what their interests are, how they're leveraging technology like AI, or not, to address some of these challenges. But more importantly, and maybe this is kind of core to the theme of the conference and Shapeshift, it's making sure it's the human at the center of the conversation, and not technology first. That's the shift. And having the summit, to me, it's not about, hey, we're gonna create this event and just talk to each other. To me, it's really about getting that network, and how do we build that knowledge together and share the learnings. And maybe just the aspirational part: to hopefully create a platform, if you like, where we feel like, hey, there are areas of opportunity that we can all work on together and share amongst all of us.  [00:22:10][71.8]

Lee Moreau: [00:22:11] Maybe to paraphrase a little bit, the way that you're seeing this conference is about serving human performance at scale and at speed, which is sort of a synopsis of your career journey, but then what are some of the challenges that we face as we actually do roll this out to millions and billions of people? That's what you want to be talking about.  [00:22:30][19.0]

Albert Shum: [00:22:31] Exactly. And even when I was at Microsoft, I was really inspired by the team's work around inclusive design with Kat Holmes, a lot of folks like August de Los Reyes, amazing people who taught me that the role of design is to address exclusion. Like people who might not have access to the technology, who are excluded. And I feel like sometimes when we talk especially about emerging technology like AI, we tend to forget that not everyone has access, and not everyone can have the benefit of these amazing tools. So how do we address those gaps? I think that's really important.  [00:23:16][44.2]

Lee Moreau: [00:23:16] And I think it's particularly important that you're addressing some of those questions here at a university. We can't assume that every corporation and large company is going to be thinking about that, but we certainly can in this dome.  [00:23:27][11.3]

Albert Shum: [00:23:28] Yeah, and I like to think, again, and maybe I'm way too optimistic, that companies actually want to know what some of the potential blind spots are. And when I was working at Microsoft, or working with technology partners, I think back to that empathy part. I think we have to show, not tell. Don't tell people, hey, you're doing something wrong, but actually show the impact, both positive and not so positive. And when you actually show it, I think you bring that empathy to the customer, but you also understand the company's needs, and that's when you create change. You cannot just create change by saying something is wrong. You actually have to show what the impact is.  [00:24:13][45.2]

Lee Moreau: [00:24:13] Albert, thank you so much for your time. This was a wonderful conversation. I'm going to take some of this back for me and my personal life. It was great spending time with you and I look forward to the conference.  [00:24:21][7.8]

Albert Shum: [00:24:22] Yeah, thank you! And yeah, look forward to continuing the conversation and hanging out.  [00:24:26][4.2]

Lee Moreau: [00:24:32] I'm here right now with David McGaw at the ID. It's Friday, May 30th. Hi David.  [00:24:36][4.0]

David McGaw: [00:24:37] Hey, how's it going?  [00:24:37][0.6]

Lee Moreau: [00:24:38] Great. David McGaw is the Design Strategy and UX Research Lead at Google DeepMind. Now David, that could mean a million different things. You're talking to an audience of designers, design students, design professionals. What specifically is your role at Google DeepMind? And it sounds like a slightly terrifying thing. So unpack it and make it feel less frightening.  [00:25:00][22.4]

David McGaw: [00:25:01] Yeah, I'm still trying to figure out what I'm doing. And I have the privilege of working within a team that is inventing a lot of AI technology. So Google DeepMind, founded a number of years ago, acquired by Google. And it's largely technologists and engineers who are envisioning the future of AI. I'm within a team that thinks about what's the human side of that. Not that they're not, but we have some special skills to bring to the table. And so I tend to be thinking two, three, even five years into the future. It's kind of a product strategy role, to figure out what should we be making, for whom, and why. Some of the specific skills would be familiar to those who know the title of UX research. That's part of what I do. And then I also try to think about how it fits into an existing ecosystem of products. And I have a lot of great colleagues on my team who are more traditional designers, focused on how things look on the screen, plus conversation designers, research scientists, and prototype builders. So we all work together. I don't know how much I lead exactly, but there's a lot of collectively envisioning the future.  [00:26:10][69.6]

Lee Moreau: [00:26:11] Oh, I love that. So design research and UX research and UX strategy, these kind of disciplines, sub-disciplines, et cetera, can be applied to almost anything.  [00:26:19][8.0]

David McGaw: [00:26:19] Yeah.  [00:26:19][0.0]

Lee Moreau: [00:26:20] You've spent time in your career, probably exploring many, many different arenas. How is this space of Google DeepMind different for you as a practitioner?  [00:26:30][10.6]

David McGaw: [00:26:31] I feel like the scale of the opportunity is mind-boggling, and it's unsettling in some ways, because AI does some things a little faster and then other things it does tremendously faster. It has the potential to swap in for humans in some kinds of roles but not in others, and so it doesn't just plug in and, like a word processing app, make the process of writing a little bit faster. It could write the thing for you. And so now, as we think about what AI could do, we have to think about what we want to do. And what are we willing to entrust to AI? How do we rein it in or verify what it's doing? And if everyone else is doing that also, and if AIs are not only working with humans but working with each other in ways that we can't see, there are so many permutations. And while I think there's no reason to be alarmed... I think there's an opportunity to say, what's the world we would like to create? And because everything is moving faster and faster, that means the process of design, which typically is good at being deliberative and thoughtful, also has to move faster.  [00:27:45][73.8]

Lee Moreau: [00:27:45] And making designers move faster is not an easy thing to do necessarily.  [00:27:50][4.9]

David McGaw: [00:27:51] Oh, completely. And, you know, much of what I do is anchored in user research. Doing that thoughtfully, rigorously, carefully, respectfully may mean that you spend several months trying to understand who are the users we should speak to, what are the issues on their mind that we don't even know how to ask about, so we need to research before we research, and we want to make sure we cover different parts of the world, different types of people. And so a thorough foundational research effort that tries to uncover unmet needs or understand a new space, I used to think of it in terms of a two-, three-month effort. And that might need to happen in a week now. And there are ways to get faster. And there are ways to figure out, well, what's not absolutely essential. Sometimes AI can help, but it's also making some really quick gut judgments. And that can feel uncomfortable. And then I also need to figure out how I double-check what I've done, to make sure I didn't just have an opinion and start believing it.  [00:28:51][60.6]

Lee Moreau: [00:28:52] I saw your face as you said that word. And it was like, okay, that's clearly something you've experienced. My fairly pedestrian understanding of DeepMind is that it's not a singular thing, that it's actually many things and many efforts. It's a broad package of research and understanding. For our listeners, could you unpack what DeepMind really is within the bigger portfolio of Google and Alphabet?  [00:29:17][25.6]

David McGaw: [00:29:18] Yeah, well, and I should clarify, I'm not speaking officially on behalf of the company, just in my own personal capacity. I think there's a lot of foundational research to understand what AI could do, and then some experimentation to build models and see how you might apply those models in the world. There are teams that are working on science questions. Our founder, Demis Hassabis, recently shared in a Nobel Prize for figuring out how AI could solve some problems related to protein folding, which I don't understand. But I understand that it's valuable. And so there's a lot of basic science applications of AI. Then there are questions about how AI becomes useful for everyday people. Large language models of the kind that we're all becoming familiar with exist, but how they become useful depends on nuances: how much data has it been trained on? How good is that data? How has the model been tuned so that, of the possible answers it could give, it gives the most helpful answers? Neither too long nor too short, properly sourced. And then there are questions of design and presentation. Do you want it verbally? Do you want it in a chat? Do you want it multi-modally? So there are teams that work on all those parts of it. And to be honest, I've only been part of DeepMind for about a year. So that's my broad understanding.  [00:30:40][81.9]

Lee Moreau: [00:30:40] Well, that's a year longer than me.  [00:30:41][1.0]

David McGaw: [00:30:42] Yeah, fantastic. Well, and I will also say that, you know, some of the work is in products that are on the market, like Gemini. And when I say on the market, I mean it's available to anyone; it's out there and you can use it. Other efforts are a little bit more experimental. So we've talked about Project Astra, which is us basically saying, how might we apply an AI, a large language model, but give it the ability to see the world through your phone camera as you turn it on and point it around your world? It can interpret what it sees, and then you can use that to ask further questions or take actions. And that's not publicly released, but we've shared some YouTube videos. And this is a way we try out ideas, and things that work then later become part of other products like Gemini.  [00:31:31][49.2]

Lee Moreau: [00:31:32] So, as someone who runs a design studio, I guess I will confess that we spend a fair amount of time struggling over how we should incorporate AI into design workflows.  [00:31:45][13.5]

David McGaw: [00:31:46] Yeah.  [00:31:46][0.0]

Lee Moreau: [00:31:47] Especially because we're not very big, right? A dozen people. And so part of that is a conversation about cost and about speed and about return on investment for our clients. There are so many things about it. And also, you know, what is the balance between what we want to learn as human beings with our own minds, through the tools that we have in our bodies, and how do we want to learn through these other augmented tools, AI and so forth? What do you think someone in my position should be thinking about? You run design research teams and so forth. [00:32:24][37.3]

David McGaw: [00:32:25] Whatever they are today, they're going to be more capable in the future. And I think there's an opportunity to just start learning by doing whatever you can. There are many AI tools out there. I've played with a number of them, not all of them. They're helpful for distilling large amounts of information, even qualitative information. So it used to be that design researchers would speak to a lot of people. Probably there's a video or audio recording. Someone transcribes the conversation, with great effort. And then you look through those conversations and try to identify, are there patterns? Are there interesting outliers? Are there intriguing new questions? That's a fairly manual process, usually helpful if multiple people do it. Now today, I can throw a raw recording into a tool, NotebookLM is one that Google makes, throw a bunch of things in, and then have it summarize for me: what are key themes, what are unusual things that stood out, can you give me some quotations that I can track down with a timecode? So that's helpful in getting a quick summary. And why not try that? It's a free tool right now. So why not try it and see what happens? The problem is you also have to be a little cautious to make sure you're not over-relying on it, because there's always, you know, that special leap of intuition that you make as a human. When you're hearing something, you're like, wait a second, you just mentioned bringing your lunch into the office, but then there's this particular part of the packaging that doesn't quite work. Say more about that. I worked on a project for Ziploc, and we were observing people eating lunch, and it was the side comments that were the spark of really interesting innovation opportunities.  [00:34:21][116.5]

Lee Moreau: [00:34:21] And these are the stories you hear design researchers tell from the field, right?  [00:34:24][3.1]

David McGaw: [00:34:25] Oh, completely.  [00:34:25][0.4]

Lee Moreau: [00:34:25] Like about somebody and their lunch. Yeah.  [00:34:27][1.1]

David McGaw: [00:34:26] Yeah, and by the way, when you do field research with other trained people, colleagues, and, if it's a consultancy, clients, you go have a conversation with someone in their home or office or whatever, and then in the car ride back, you talk about what you saw. Like, yeah, that confirms a hypothesis. It seems like we're seeing a lot of this. Did you notice that? That was weird, what was that? Or, we never thought about that. We never once asked anyone else that question. And sometimes in that reactive debrief, your brain is turning over and spotting inconsistencies, or honestly, you're kind of looking for something funny to say. Like, wasn't that funny? It's crazy.  [00:35:12][46.1]

Lee Moreau: [00:35:13] That was amazing.  [00:35:13][0.3]

David McGaw: [00:35:14] Yeah. So the point is, when you're experimenting with AI doing a quick summary, the AI doesn't notice that kind of thing. And might it in the future? If you could spot a pattern in the patterns, if there's a meta-pattern, you could train your AI to look more specifically for things like that. But I like letting it do that first pass, and then I look for the random stuff. Or another way of thinking of it is that maybe my behavior in gathering stuff in the first place needs to change. Because I used to focus on trying to take notes about every single thing. And now I don't, because I don't have to.  [00:35:50][36.8]

Lee Moreau: [00:35:51] Right.  [00:35:51][0.0]

David McGaw: [00:35:52] But what I can do is whenever there's something unusual, I'm like, mm, make a star.  [00:35:55][3.9]

Lee Moreau: [00:35:56] That's the thing.  [00:35:56][0.5]

David McGaw: [00:35:56] Or write down that time code. Because we need to come back and think about that. And it frees me up to focus on those moments.  [00:36:02][5.7]

Lee Moreau: [00:36:03] David, thank you so much. This was wonderful speaking with you. Hope to see you again out at the summit.  [00:36:08][5.2]

David McGaw: [00:36:09] Fantastic.  [00:36:09][0.0]

Lee Moreau: [00:36:18] Right now I'm here with Ruth Kikin-Gil at the ID and it's Friday, May 30th. Hi Ruth.  [00:36:24][5.3]

Ruth Kikin-Gil: [00:36:24] Hello, hello.  [00:36:25][0.4]

Lee Moreau: [00:36:25] Ruth Kikin-Gil is a design strategist who leads the responsible AI practices for Microsoft's security organization and is a co-creator of the Guidelines for Human-AI Interaction. In addition, she teaches design in the Human Centered Design and Engineering department at the University of Washington in Seattle. Ruth, tell us a little bit about what you've been hearing so far at the conference, the summit.  [00:36:46][20.6]

Ruth Kikin-Gil: [00:36:47] I think that so far my big impressions, I don't know if to call them takeaways because maybe it's too soon, but there are basically two camps. One camp is looking at everything through a very optimistic lens, looking at all the good things that AI can bring into our lives, the enhancements that it can create for us, thinking through, okay, how do we productively and happily live with this technology? And there have been some very interesting things around that. On the other hand, there's the other camp that looks at AI with, let's say, caution and curiosity, looking at the ways in which we really deeply think about the meaning of incorporating AI into every part of life, as some people might want to imagine the future, right? And say, okay, what do we have here? What are the things that we are doing okay with? And what are the things that we need to really deeply think about? Because if this thing is going to shape how we live and the way our world works, what do we need to do to make sure that we're leaving a space for us as human beings, as people with values, as people with aspirations, with creativity, and making sure this remains, right? That the essence of our humanity is being amplified rather than degraded. And I think that when I'm listening to all the talks, and there have been so many inspiring talks on both ends, I'm trying to tease out, you know, what are the things that I can take away. And my personal thinking is very cautious optimism, or maybe realism. Let's call it realism, right? There are things that we can change, there are things we can do, there are things that probably we'll need to live with for a while. And what does that mean? Not just for myself, but for the ways that I'm trying to make that dent in the universe, as we all do, right?  [00:39:50][182.6]

Lee Moreau: [00:39:50] I think if I was going to ask you which camp you are situated in, based on having seen you around the conference for the last couple of days and seeing the panel discussion yesterday, I think I would know, but I also sense that you're kind of floating above this conversation a little bit. Where would you situate yourself in all of this?  [00:40:11][21.0]

Ruth Kikin-Gil: [00:40:12] I would put myself on the side that says, let's slow down and have a look, right? I really like to compare this with the slow food movement, right? It's about enjoying the moment. It's about not thinking about food as something that you want to consume, but as something that helps you live better, right? And this is where I would like to see that technology going, right? Something that brings people together, something that really is built with human values. And I think that right now we're going to McDonald's, and it's way too soon, right?  [00:41:03][51.0]

Lee Moreau: [00:41:03] That's brutal. Yes, that's how it feels. We're going to McDonald's. Absolutely. This notion of slow food, of slow anything, related to the conversation about AI and its deployment and evolution: there's nothing slow about that.  [00:41:18][15.0]

Ruth Kikin-Gil: [00:41:20] You know, living in a business reality the way I am, I recognize everything, right? Like, the entire ecosystem, I can understand where things are coming from. But I also want to say that we need to cross those thinking spaces. And especially, I think that this conversation is not about one company or one country or one continent. It's about how we as people, as product creators, as culture creators, want to see our world, and want to shape our world for our children and the next generations. We need to take the long view here.  [00:42:09][49.7]

Lee Moreau: [00:42:10] You sound like a philosopher. That's a compliment. But you also work at Microsoft, which is one of the largest and most significant and influential tech companies in the world. Tell us about what you're doing on the ground in your own organization to foster this kind of conversation.  [00:42:31][21.4]

Ruth Kikin-Gil: [00:42:32] So I think that Microsoft is one of those companies that actually cares, right? We have a very robust and evolving program around responsible AI, right? And if I'm looking at what we had a few years ago and at what we have now, I think there is a lot of growth, a lot of thinking, a lot of looking at things in the right way. Nothing is perfect. The technology is changing so fast, all the time. When I started to work on responsible AI, we were facing the machine learning issues, right? And that was the big thing. And now it looks naive, right? But the truth is that everything that we have around machine learning is still there, right? A lot of the issues with it are just now manifesting themselves in different ways. And some of them are exacerbated with these gen AI technologies. So we need to be agile. We need to work with what the new developments are. And in some ways we have to be reactive to them, and in some ways we need to be proactive, right? We need to think beyond what we have now and try to think, okay, if the technology goes that way, what can happen? You know, what should we do? What should we be prepared for? And specifically, our organization is one of Microsoft's divisions. Every division has a program around responsible AI. Each program has different responsible AI champs that are making sure that our products are going through the right processes, right? And as a designer, that's my background, right? My goal is to enable design to shine and to be recognized as a very important mitigation for some of the problems that cannot be solved through, I don't know, fixing the algorithms, because we can't really do that, right? Technology has its limitations. So when I'm talking about responsible AI, it's really both a mindset and a toolset, right? So when we are looking at a product, we need to look at it from a product perspective, one that has more than one model, because this is how all products are built. I don't know of any product that is just the pure model facing a customer. It's a bunch of different models working together, moving information and actions from one to another, and there is the layer of the user experience on top of that. So when we are looking for ways to find what the harms are, we need to look at that product level. And then when we're looking at mitigations, we need to look at things like the model, of course, the user's experience, the architecture, right, from the engineering side, how things work. How do we talk about the product and the AI in the product? What's the communication like? What's the legal side of it? How can we mitigate some of the things through that? So really thinking holistically about the entire picture. And when I say holistically, it's not just about, okay, here are all the disciplines and here's how each one of them can contribute. It's also, when we're looking at it from the product lifecycle, what are the things that we need to think about and to integrate from the responsible AI perspective in each part of that product lifecycle, including after we ship, right? So it never stops.  [00:47:02][269.6]

Lee Moreau: [00:47:03] So you talked about thinking holistically. You're obviously spending time and investing time in building the tools and really just having the conversation. When you describe the kind of shift from machine learning to gen AI, there are a couple of things that happen along that journey, right? So obviously massive scale and vastly increased accessibility to these tools. Suddenly, these tools are in the hands of lots of folks. How does that influence the rate of change that is happening, right? How does that influence the work that you're doing? Do you feel like you're having to play catch-up as an organization, or do you personally feel like you're having to rush out and play an extremely active role as things are being created?  [00:47:51][48.4]

Ruth Kikin-Gil: [00:47:52] I agree, there is a big change, right? But like in any change, I would want to say, there are two aspects: one is the technology, and the other aspect is people, right? And it's really about how you change people's perception of that technology, both the benefits and the limitations, right? And it goes both for the internal people I work with and the external people I work with. And in a way, I'm thinking about my role as someone who changes culture, right? And it is done through changing the way people are doing their work. And it means that it's not about specific processes, it's about how you integrate a certain way of thinking about things, integrating that responsible thinking throughout the entire process. I always say that my ultimate goal is to get myself out of my job, right? The moment that no one needs a responsible AI person is the moment where it's going to be: this is how we think, right? This is the human-centric, benefit-centric point of view, right? So I want everyone to be responsible. I don't want to be the responsible adult in the room.  [00:49:41][109.2]

Lee Moreau: [00:49:42] Tell us a little bit about the panel discussion you had on stage yesterday, which was fairly provocative. You said something, and it got a fair number of laughs, something along the lines of: let's stop talking about having humans in the loop, maybe we should be thinking about having humans in control. And that was kind of provocative. We've almost taken for granted this notion that, yeah, humans in a loop are enough, and as long as we ensure that, we'll be fine. Clearly you have a different take.  [00:50:14][31.4]

Ruth Kikin-Gil: [00:50:15] I don't know, just try to imagine a future, right? Like, sure, it's semantics, and when you're reading about responsible AI practices, human in the loop is just a thing that you say, and we need to do that, and there are some regulations also connected to it. But really what you want is to have a future where you still have a place, right? And some of the conversations, in my opinion, are just taking humans and putting them at the margins. And one of the things that we're talking about when we're talking about shifting how the workplace looks, for example: there are the loud voices that are talking about how AI is going to make us more creative. And this is great. This is something that I would subscribe to, right? I'm not against AI at all. I'm just thinking, if we are creating those experiences as a society that elevate humans, then make sure that they are elevating humans. And this is where I'm coming from, right? And some of the things that I'm hearing in the world, right, like you're reading things in the media, and the way that people are talking about what AI will do is talking about it in a way that, yeah, people will still have the role of making sure that AI doesn't make mistakes. And I don't think that's a very aspirational future, right? In the olden days, we talked about the metaphor of a centaur, right, how AI makes us better, helps us tap into our inner creativity, our inner capabilities. And we talked about the person as a curator, as someone who kind of shifts things and directs things into the right places, and I subscribe to that. But I also see a lot of the actual AIs being built, or products being built, that don't give that role to the human, and it's more like quality control. So this is why I'm saying it's not about a human in the loop.  [00:53:09][173.8]

Lee Moreau: [00:53:09] We want something greater. Ruth, this has been fantastic chatting with you. Thank you so much.  [00:53:13][3.8]

Ruth Kikin-Gil: [00:53:14] You're welcome, that has been a pleasure.  [00:53:15][1.2]

Lee Moreau: [00:53:19] Design As is a podcast from Design Observer. For transcripts and show notes, you can visit our website at designobserver dot com slash designas. You can always find Design As on any podcast catcher of your choice. And if you liked this episode, please let us know. Write us a review, share it with a friend, and keep up with us on social media at Design Observer. Connecting with us online gives you a seat at this round table. Special thanks to the team at the Institute of Design, Kristen Gecan, Rick Ciurej, and Jean Cadet, for access and recording. Admissions for spring and fall are now open at the Institute of Design, home of the only design school in the U.S. devoted completely to graduate students. Master design and responsibly build a future that keeps humans at the center. Start your application now at institute dot design. Special thanks to Design Observer's editor-in-chief, Ellen McGirt, and the entire Design Observer team. This episode was mixed by Justin D. Wright of Seaplane Armada. Design As is produced by Adina Karp.  [00:53:19][0.0]