Design As

Design As Control | Design As Collaboration

Design Observer Season 3 Episode 4

Design As Control | Design As Collaboration features Lee Moreau in conversation with Liz Danzico, Jai Shekhawat, Liz Gerber, and Kevin Bethune. 

Follow Design Observer on Instagram to keep up and see even more Design As content. 

A full transcript of the show can be found on our website. 

Season three of Design As draws from recordings taken at the Shapeshift Summit hosted in Chicago in May 2025.

Lee Moreau: [00:00:01] Welcome to Design As, a show that's intended to speculate on the future of design from a range of different perspectives. And this season, like everyone else, we're talking about AI. I'm Lee Moreau, founding director of Other Tomorrows and professor at Northeastern University. This past May, I attended the Shapeshift Summit at the Institute of Design in Chicago, where designers and technologists came together to try to get a handle on what responsible AI is, and what it could be. In this episode, we're going to be talking about the space between controlling and collaborating. This is a roundtable in four parts. On this episode you'll hear from Liz Danzico.  [00:00:39][37.6]

Liz Danzico: [00:00:40] There's new craft in being able to see the details in the new way of making. That is human. It's dynamic. It's, here's a technical term, re-queryable.  [00:00:51][10.9]

Lee Moreau: [00:00:51] Jai Shekhawat [00:00:51][0.0]

Jai Shekhawat: [00:00:53] Humans have five senses. Only one or two of them are being invoked by the present interface. What might a future interface look like?  [00:01:01][8.2]

Lee Moreau: [00:01:01] Elizabeth Gerber [00:01:01][0.0]

Liz Gerber: [00:01:03] AI, as an example of a technology, has been promised to do whatever we want. The one thing it can't do is tell us what we want.  [00:01:11][8.8]

Lee Moreau: [00:01:12] and Kevin Bethune,  [00:01:13][0.7]

Kevin Bethune: [00:01:14] I'm looking for ways that design can intervene to make how we leverage these new models and platforms relevant, ethical, responsible, and also get some of the productivity benefits that these platforms can potentially give us.  [00:01:29][15.7]

Lee Moreau: [00:01:35] One of the first keynotes we heard at the Shapeshift Summit was from Rohit Prasad from Amazon. And obviously Amazon is an organization that's developing this new technology, is pushing it out in the world, and has a really vested interest in the success of where this technology will go. And one of the things he talked about was the relationship between personal computing from the past and collaborative computing in the future, which AI and new technology will enable and is pushing us toward. And he described the need to create the next interface. How will we interact with this new technology? What is it? How will it work? What will it look like? And this is a really exciting thing, I think, for designers to think about. But it's not just about the new interface, it's also about what we do with it and how we interact. I think these are the bigger questions that we should be talking about. One of the defining questions, I think, for a lot of designers that I've talked to recently is: will they be able to, or willing to, make the leap from a world in which they control all the creative tools and outcomes to a world in which they work in a collaborative mode, where they have limited or even very little relationship to the creative tools and outcomes, pushing them further and further away from the design outcome? This really gets to the question of purpose. What is our purpose as designers? What are we here to do? Are we here to manage a suite of tools? Or are we here to define how society interacts with technology and with each other? This is perhaps the most tangible contemporary crisis that we have as designers when it comes to the development of this new tech, and it's something that a lot of people are talking about. Let's listen to how some of these conversations played out at the Shapeshift Summit.  [00:03:25][110.3]

Lee Moreau: [00:03:28] Right now I'm here with Liz Danzico at the Institute of Design. It's Thursday, May 29th. Hi, Liz.  [00:03:33][5.6]

Liz Danzico: [00:03:33] Hi, Lee.  [00:03:34][0.2]

Lee Moreau: [00:03:34] Thank you for being with us. Liz Danzico is VP of Design at Microsoft AI and the founding chair of the MFA Interaction Design Program at the School of Visual Arts. And for our Design Observer listeners, you'll remember that Liz was a frequent guest host on The Futures Archive. In season one, she introduced us to her dog, Harriet, with all of Harriet's special abilities and magic. And then in season two, she was also my co-host as we focused on objects at the intersection of communication and human-centered design. Liz, it's great to have you back in the booth.  [00:04:08][34.2]

Liz Danzico: [00:04:09] Thanks, it's great to be here. And I'll just mention that Harriet is still talking about it.  [00:04:12][3.3]

Lee Moreau: [00:04:13] Wonderful! Liz, you were just on a panel, you literally just came off stage in conversation with a few of our other attendees, some of whom we'll be hearing from on the show. What are your impressions so far?  [00:04:23][10.2]

Liz Danzico: [00:04:24] I'm so impressed by the conference program so far. And we're on day one, just to put us in context. The diversity of content is what I'm most impressed by. Our panel was about sort of shifting away from Silicon Valley and getting away from big tech, even though the members of the panel were squarely in big tech. I've used the past tense for no apparent reason, but we are in big tech. And the broader summit has representatives from urban planning and civic design and film. So far, it's just been so invigorating to see the possibility space of AI, because, and I do have high respect and lots of admiration for the media, but I think what we typically hear on the media side is about what big tech is doing with AI. So it's quite inspiring.  [00:05:30][65.9]

Lee Moreau: [00:05:31] What I think you're referring to is that the conversation has shifted. I mean, it goes with the name Shapeshift, but at the summit, you sense the conversation is intentionally being shifted.  [00:05:42][10.7]

Liz Danzico: [00:05:43] The conversation is supposed to shift away from the typical conversation about what the technology can do, toward centering the human. So centering the human and what the human needs. And Albert Shum, who's one of the co-founders of this event, said something like, and I'm paraphrasing, are we being slightly rebellious in even daring to think that we might be centering the human in this conversation? But what if we did? And what would be possible if we do that? So that's sort of the thesis here.  [00:06:18][34.8]

Lee Moreau: [00:06:18] And Albert was formerly a director of design, or head of design, at Microsoft, which is where you are now. Talk about the massive change. We've seen different eras of the company, Microsoft, and different offerings over time. They have more of the front page than they've had in a bit because of some of the work that's happening.  [00:06:38][20.2]

Liz Danzico: [00:06:40] Yeah.  [00:06:40][0.0]

Lee Moreau: [00:06:40] What has been your role in that and what have you seen? What can you actually share with us right now?  [00:06:44][4.0]

Liz Danzico: [00:06:44] Yeah, it's been an interesting journey from my perspective, in that I've been there three years now, and in those three years everything has changed from my vantage point. I think other people might say different things, but just after I started, not even three years ago, came the introduction of the consumer-level ChatGPT. And so from the time that I joined through now, our journey in the org that I'm in, which is called Microsoft AI, the org that's consumer facing, different than, say, Windows or Azure, Xbox or LinkedIn, or these other products that are more enterprise focused or business focused or even behind the scenes. We are the ones who are working with consumers, future consumers, humans, and launching and shipping products directly with them, for them, and alongside them. And that journey has gone from not having something called ChatGPT, working in the beginning alongside OpenAI on what would be possible, to now launching, week to week or month to month, Copilot products integrated throughout our entire ecosystem of products. I heard recently that we may have something along the lines of 150 Copilots throughout Microsoft, probably not a number we want to stay with. We probably wanna reduce that to maybe one or two. But throughout the life cycle of my time there, we've been working on what is the experience that we want people to have with this kind of Copilot that has grown up over the course of two and a half or three years. And so it has been quite remarkable what we've done: org changes, process changes, the way that we've approached design, language, research. I mean, in any category, it is likely that it's changed. And I'll just say that that's for Microsoft AI and a bit of the other orgs that we're working closely with. Microsoft itself is 220,000 people, triple the size of my hometown. It's a little hard to speak for all of Microsoft.  [00:09:04][139.9]

Lee Moreau: [00:09:05] Hard to get your head around that.  [00:09:05][0.4]

Liz Danzico: [00:09:05] Yeah, but it's likely true across the company in many different ways.  [00:09:10][5.0]

Lee Moreau: [00:09:11] It's my belief that designers, and we're both designers, do a better job of asking the why than anybody else. Or maybe often we're the only people in the room that ask why. Technologists tend to just like to see what technology can do, and that's good enough. But that's why we need you at Microsoft, Liz.  [00:09:30][19.4]

Liz Danzico: [00:09:31] That's right, yeah.  [00:09:31][0.6]

Lee Moreau: [00:09:32] And I'd love you to talk a little bit more about that role and what design can do to make people feel safe and confident. Because we design experiences.  [00:09:41][8.9]

Liz Danzico: [00:09:43] Yeah. Well, you know, when my students say we design experiences, I've always said we design the possibility for experiences. I think that's related. I mention that because we can only do so much. I mean, we can't actually force people, you know, most of us anyway. And so I think it's a related point, in that the role for us is going to change from creating things that are passive, or our role being passive, to creating things that are active. And it's a working theory, so I'm kind of working this through with you now. But you know, if one of the jobs of AI, and it is just one of probably close to infinite, is to improve efficiency, say in a workflow: typically in a workflow, you would come out with some artifact and you would present that artifact at the end. Even if we're talking about an animated artifact or an interactive artifact, it's still an artifact. But that artifact would now be able to be created with AI or augmented by AI. If that is the case, and it would be supplemented or assisted by AI, our role as designers is now changing from passively accepting the artifact to being active in that process, by being able to see new things, being able to see details that we wouldn't have been able to before, because we were so consumed by the process of making. In a good way. Nothing bad about that. [00:11:28][105.3]

Lee Moreau: [00:11:30] Craft is good, yep.  [00:11:30][0.5]

Liz Danzico: [00:11:30] Craft is excellent, but there's new craft in being able to see the details in the new way of making. That is human, it's dynamic, it's, here's a technical term, re-queryable. Not above your head or anything, but it's just, it feels...  [00:11:48][17.5]

Lee Moreau: [00:11:48] It's a little ugly as a word.  [00:11:49][1.0]

Liz Danzico: [00:11:49] It is ugly.  [00:11:49][0.1]

Lee Moreau: [00:11:49] But yes, okay.  [00:11:50][0.2]

Liz Danzico: [00:11:50] So I don't know another word for it, but you can re-query that artifact, you can keep going back again and again, ask more questions, and it can still kind of recreate itself. It's regenerative.  [00:12:02][12.3]

Lee Moreau: [00:12:03] It's alive, basically.  [00:12:04][1.0]

Liz Danzico: [00:12:04] It's alive, thank you. So I think that that is a new way of thinking for us that we haven't had before in the way that we work. And so if our role had been, as a consultant say, more cyclical, you can imagine our role being more, I don't know, choose your own adventure. Where that choose your own adventure book never ends.  [00:12:29][25.1]

Lee Moreau: [00:12:30] Liz, it's always inspiring. Thank you so much for spending time with us.  [00:12:33][3.1]

Liz Danzico: [00:12:34] My pleasure, anytime. Thank you for having me.  [00:12:36][2.2]

Lee Moreau: [00:12:40] I'm here with Jai Shekhawat at the ID. It's Friday, May 30th. Good morning, Jai.  [00:12:45][4.8]

Jai Shekhawat: [00:12:45] Good morning.  [00:12:45][0.2]

Lee Moreau: [00:12:46] Jai Shekhawat is the founder and former CEO of Fieldglass, a cloud platform for the management of contingent labor and services. He's an advisor at Madison Dearborn Partners, StarVest Partners, and Chicago Ventures. He's also a founding member of the Fire Starter Fund, which mentors and invests in Chicago startups. It's lovely to have you here, and it was great to see you on stage yesterday. Tell me a little bit about why you're here specifically to talk about Responsible AI.  [00:13:13][26.7]

Jai Shekhawat: [00:13:13] Well, thank you for having me, firstly. In a sense, the world of AI really developed, really became a public thing, after I had sold my company. I sold my company in 2014. And I think if I had to do the entire entrepreneurial journey again, it would be a very different experience. We'd probably not hire as many people, wouldn't have to raise as much money. And now that I do a lot of investing in tech firms, a lot of advisory work, just knowing the implications of AI, and it seems to be a technology that is proceeding at breakneck speed, the idea of Responsible AI has just become very important, for all sorts of reasons that we can discuss and that we heard spoken about at the conference yesterday.  [00:13:58][45.0]

Lee Moreau: [00:13:59] Yesterday, you were in conversation on stage and you were talking about, and you kind of just alluded to it about how with these new technologies, you would have approached this completely differently. You would have staffed differently, ironically, because that's part of the emphasis of what your offering was. When you look at what's happening in AI right now, what are you most excited about in terms of the kind of opportunity that you didn't maybe have to take advantage of in the past? And what are the challenges that you're seeing and really mindful of?  [00:14:31][32.5]

Jai Shekhawat: [00:14:32] The thing I'm most excited about is that the way AI is developing, especially large language models, it allows an individual to extend their range. It becomes a range extender, a reach extender, because at my fingertips now is essentially the world's knowledge, and not just the broad set of knowledge, but private knowledge that I might be allowed entry into within a corporation. So a customer, for instance, can put all of their contracts or all of their supply agreements into a custom LLM that they develop using some sort of a copilot tool, and the software can do so much more for the customer than it could in the days that we built it out. And conversations of that sort are going to be very powerful as we go forward. One implication is something I see now in some companies where I'm on the board, where agents are replacing employees. And this is going to be a near-term concern, but technology has always replaced people, and then the people are repurposed and they're retrained. Therein lies a concern: there is a disruption coming at the intersection of AI and robotics and XR, basically various forms of alternative, augmented, and virtual realities. And how that is going to play out is something we'll have to wait and see.  [00:15:52][79.8]

Lee Moreau: [00:15:52] It's interesting, given the scale that your business had operated at. A lot of times, when we refer to the workforce, you think of someone who's moving something in a factory, right? That automation will transition that worker. The idea of workforce kind of gets to the mental model of a single person. But the way that you're describing it, it's plural, it's workforces. The scale, I think, is just completely different. You don't have a mental image for, I think, the kind of disruption that you're referring to. And disruption of an individual human is tricky enough. Disruption of massive amounts of people is another scale. What are the sort of political implications of this as well? Sure, the businesses are gonna go there, but this ladders up.  [00:16:42][50.1]

Jai Shekhawat: [00:16:43] It does ladder up. Your question reminds me of something one of the speakers said yesterday where they said that in our organization, this is a large organization, before we can hire a person we have to make the case that an AI can't do that work. So you're already seeing examples where in active recruitment cycles the companies are saying, can it be done by AI?  [00:17:04][21.6]

Lee Moreau: [00:17:05] Right.  [00:17:05][0.0]

Jai Shekhawat: [00:17:05] And if it can, then why do you need to hire this person? So it's a very direct challenge to work for which generations of people have been trained: as coders, as programmers, as people who work in warehouses. Each of these jobs is now, at the very least, gonna be modified by some form of AI or robotics. And when we say robotics, we might picture a robot in the sense of a humanoid figure with some arms. But everything that is automated is in a sense a robot: a conveyor belt, an escalator, a smart escalator. These are all serving functions that were previously done by other things. Things that move objects, you know, a human with a forklift might've moved something, but now it's done by a machine. So, as you said, it's gonna be hard to visualize the extent of this, but it's going to require massive retraining of people. And I'm confident that, as with all these big tech waves, there will be an alternative reality. And, you know, one can only strive for the optimistic view. And it can be very much better.  [00:18:13][68.0]

Lee Moreau: [00:18:15] But we have to sort of plan, I'm watching your face, we have to plan for all those various outcomes.  [00:18:18][3.8]

Jai Shekhawat: [00:18:19] We have to plan for the various outcomes, because there are in fact negative outcomes possible. There are, you know, not just the bad actors; even with the best of intentions, sometimes you can end up with negative outcomes here. And that's the whole theme of the idea of Responsible AI.  [00:18:34][15.4]

Lee Moreau: [00:18:35] Jai, yesterday you were in conversation, a sort of fireside chat, with Rohit Prasad from Amazon, and you invoked an image that I think, for those of us in May 2025, will seem very familiar: in my mind, it was the photo of Jony Ive and Sam Altman together, kind of introducing this new partnership between OpenAI and Jony Ive, the legendary designer from LoveFrom. Talk to me about why you wanted to invoke that image and tell that story.  [00:19:07][32.0]

Jai Shekhawat: [00:19:09] When I saw that picture, it was a black and white picture, and the two of them were almost cheek to cheek, or close enough to convey that impression. Someone skilled took that photo. It looked like a bridal announcement, like a wedding announcement. And part of me suspects that it was intended that way, because then they produced the object of this relationship, which was a little gadget that looked like an iPod Shuffle. One of the use cases was that it's going to follow you everywhere. You'll wear it like a necklace. It has a little camera and a microphone and will record everything you say and do. And then that gets fed into, I guess, a personal AI or the broad AI. That's what I thought might lead to dystopian outcomes. Rohit, who's a professional optimist, said it might lead to utopian outcomes; both, of course, are possible. And I would hope that he's right in this case, but it led to me asking the question: are we settled now on the form factor of the user interface for modern AI, or is it still up for grabs? And of course Amazon has the idea of ambient computing, which he proceeded to describe. It's still an open question in my mind what forms this interface will take, and whether it will be multiple forms.  [00:20:26][76.8]

Lee Moreau: [00:20:26] We're not hearing a lot of conversation about the interface to this new technology. I think it's appropriate that in a design school, that is one of the central questions. So I'm really grateful that you brought that into the conversation early at this event, so that it could kind of linger in everyone's mind. But it is interesting, when we step outside of the design school, how little we're hearing about the implications of the human interaction with some of these tools and forces. Where do you think we need to go, and I presume it's urgent, to unlock this? And who do you think might be brought into that conversation?  [00:21:03][36.8]

Jai Shekhawat: [00:21:04] I think where it's headed, without someone significantly altering it, is that the AI capability seems to be disappearing into existing interfaces. And this very much suits the purposes of large organizations who have already locked up those interfaces. So AI will now be in your phone in various forms, with Siri and with Alexa and so on. Or it'll be in your washing machine that'll order its own replacement pods, et cetera. It'll disappear into those sources, and maybe in some ways it should; that's logical. But that is the invisible AI, where I don't have any agency over what it does. The part where I have agency, it appears today, will be the same form factor: I will engage an LLM, at least that's the form factor today. Those models are also shifting. But I think about the future: humans have five senses. Only one or two of them are being invoked by the present interface. What might a future interface look like? If I had to hypothesize, it would invoke more of those senses to be a true AI.  [00:22:11][66.3]

Lee Moreau: [00:22:11] I love that, yes.  [00:22:12][0.8]

Jai Shekhawat: [00:22:12] And today you only see examples of that in sci-fi movies, perhaps, or in old versions of movie theaters that would have haptic effects in your chair and so on. It felt hokey back then, but maybe all that will be back. Beyond that, I can't say; my guess is as good as anyone's.  [00:22:28][15.6]

Lee Moreau: [00:22:29] No, but that's really provocative, because if we just allow the limitations of the current technologies and interfaces to be the de facto thing that AI plugs into, we are going to literally repeat the past. There's a tremendous opportunity, and it's exciting to be in this conversation. How do the various scales that you're interacting with affect your perceptions of where this technology is going?  [00:22:54][25.0]

Jai Shekhawat: [00:22:54] Speaking on behalf of the smaller firms where I sit on the board or am involved, most of their AI work is embedding AI capabilities in the products that they sell to their larger enterprise customers. And here I have a software and technology bias, so I speak in those terms. Their pitch to the customers is: if you use this agent or this capability, it will save you cost and resources on your side. And that's always well received. That's what a startup should do. At the other end of the scale, with Amazon and others, the capabilities are first embedded into the stream of products and services that they're already selling to us, the individual customers. And so you'll start to see it in a consumer sense first there, and in a business sense you'll see it in startup communities that are embedding it in the software that they're selling you. My main observation is that the frameworks for managing AI, such as who is managing all these agents that are talking to each other and communicating, will have to come from the large organizations. This is not a place where new startups can easily play. You had an OpenAI that was once a startup, and an Anthropic, and so on. But the world of foundation models, and the world of standards to manage, say, agentic AI: that game is going to be over soon, and it'll be a game of the big players for the most part.  [00:24:19][85.4]

Lee Moreau: [00:24:20] We will see how this plays out. Jai, thank you so much for being with us. This was fantastic.  [00:24:24][3.6]

Jai Shekhawat: [00:24:24] Thank you very much.  [00:24:24][0.1]

Lee Moreau: [00:24:29] I'm here with Liz Gerber at the ID. It's Thursday, May 29th. Hi, Liz.  [00:24:33][4.0]

Liz Gerber: [00:24:34] Hi, how you doing, Lee?  [00:24:35][1.2]

Lee Moreau: [00:24:35] I'm doing so great. I'm so excited you're here. Liz Gerber is a professor at the Segal Design Institute at Northwestern University. Liz, I've known you for a few years. We got kind of pulled into these different conversations around extreme design and what the future of design is. And I know that you talk about the topics of design and AI, intertwined, interchangeably, all the time; it's on all of your feeds. And so I'm so excited to bring you into this conversation and that you're here at this summit. Tell us a little bit about what you're working on right now.  [00:25:07][31.8]

Liz Gerber: [00:25:07] Awesome. Thank you, Lee. Thrilled to be here. What I'm working on right now, really, is humanity. I feel like I'm working on humanity. I'm trying to help us be the people we want to be, be the society we want to be. And then the secondary question is: what role does technology play in that? I think the conversation has gotten flipped upside down a bit, and has been for several years.  [00:25:29][21.8]

Lee Moreau: [00:25:29] For sure, yeah.  [00:25:30][0.5]

Liz Gerber: [00:25:30] What's the role of tech? We're putting technology first. We've been doing that for probably a century. And I think it's never been more critical to say, well, what do we actually want? AI, as an example of a technology, has been promised to do whatever we want. The one thing it can't do is tell us what we want. And so I think we need to get very clear on what we want individually, communally, and as a society in order to manage and design the future we want to live in.  [00:26:03][32.4]

Lee Moreau: [00:26:03] And as a designer, how are you sort of testing out hypotheses or trying to create this world?  [00:26:11][7.2]

Liz Gerber: [00:26:12] Yeah! So this is where futures thinking comes into play, because the tools that we've always used to imagine what's tomorrow are no longer really working for us, because it's just such a big leap. So it's about really envisioning big, audacious futures that we want to live in, and then figuring out how to get there from here. I feel like we've worked really incrementally in the past, like, what's the little thing we can do tomorrow to improve a little bit? And that's just not gonna cut it anymore with the speed of technology. I think we need to imagine: what do we want our relationships to ourselves and to each other to be? What do we want those relationships to be like? Although I've trained in engineering my entire career, I increasingly feel like I'm really relating to the philosophy professors. In fact, my philosophy colleagues said, you're actually a closeted philosophy professor. You're asking the deep questions, like: what does it mean to be human? What does it mean to relate to each other? What does that mean to me in this world? I'm asking those more and more, because if we're not clear on those, we don't have that North Star. We're directionless, and we're gonna let a lot be dictated.  [00:27:21][69.1]

Lee Moreau: [00:27:22] And technology is ultimately a means to an end.  [00:27:24][2.4]

Liz Gerber: [00:27:25] It is a means to an end.  [00:27:25][0.0]

Lee Moreau: [00:27:25] So by letting technology drive the answers to those questions, we're, yeah, we have it all flipped around.  [00:27:28][3.4]

Liz Gerber: [00:27:30] We have it all flipped around. And so it's never been more important to look people in the eye, and not just look them in the eyes and intuitively know this is important, this feels good, but articulate what feels good about it, because the convenience of having a relationship with AI is so seductive. This is what humans have struggled with for a long time. It's why we're in the environmental situation we're in right now, and a lot of other situations. It's short term versus long term. Like, I'm so enamored by the short-term reward, I can't see the long-term consequence. And I think that's the situation we're in right now. And then, just to add onto that, the economic incentives are just lined up for short-term convenience. I mean, just absolutely lined up. And not just for short-term convenience, but also lined up for not thinking about systems and the impact of systems.  [00:28:20][50.2]

Lee Moreau: [00:28:21] So one of the things that's kind of interesting about this particular rise, emergence of new technology is that a lot of the times this has happened in the past it's been in a different vertical, right?  [00:28:30][9.6]

Liz Gerber: [00:28:31] Yeah.  [00:28:31][0.0]

Lee Moreau: [00:28:31] So like, you know, you are a mechanical engineer; there was a time when the new technology was emerging as mechanics, new mechanisms, and now we're seeing it in computer science. And it's interesting that the ethics conversation has had to transition into new territories, and it evolves over time. I think this particular summit is a reflection of that, but it doesn't happen on its own. You brought up philosophy earlier, and philosophy has been there forever. It's been in that other building on campus, right? It hasn't always moved into these new emergent fields at the same rate. How is this kind of playing out for someone who teaches mechanical engineering and design and is moving into this new conversation about tech and AI? The flow of this conversation and who's invited?  [00:29:19][48.3]

Liz Gerber: [00:29:20] So I teach human-computer interaction, and really that's about understanding how it is that people interact with computer interfaces. That could be on a screen, it could be verbal, it could be through touch. There are just many different ways, obviously, now that we can interact with computers. And so the question is: how has ethics been integrated into the formal education, or not?  [00:29:45][24.6]

Lee Moreau: [00:29:46] Yeah.  [00:29:46][0.0]

Liz Gerber: [00:29:46] Is that the question?  [00:29:46][0.2]

Lee Moreau: [00:29:46] And how is it transmitted too, like, a lot of this is about communication and the flow of this dialog.  [00:29:51][4.7]

Liz Gerber: [00:29:52] Well, first of all, it astounded me that ethics had been sidelined for so long, because people, I would say technologists, of which I identify as one, just felt like it was out of their lane. Like they weren't equipped to talk about it because they hadn't read all the greats. But the transition I've seen in the last five years, which has been really refreshing, is people saying, you know, I can't wait until I can get a PhD in ethics. I just need to speak from who I am as a human being. And that, I really appreciate. They still, of course, defer to professionals, but I think it was almost like we, as technologists, were forgetting that we were human when we were designing the technology. We were forgetting our humanity, that we had a voice and a lived experience we could speak from. And so I'm seeing a lot more, in the educational setting, of people speaking from their experience, even if they feel like they're not professionally trained. In my classroom, I'm doing a lot more reflecting. So, for example, we'll do an ideation session, and we'll do it in kind of three conditions, if you will. First do it solo, then we'll do it with other people, and then we'll do it with AI, and then we'll kind of compare and contrast those experiences. And really get people to think about what worked and what didn't. And, you know, I just did this this winter. One of the obvious responses is: I got a whole lot of ideas with AI, but it was a lot less fun. And it's like, okay, well then this is an interesting question. How many ideas is enough? And how much are you willing to trade off, right? If you want to prioritize connection with others and sharing, there's so much that happens during a brainstorming session beyond generating ideas; to think of it as just the output, just as generating ideas, is false. It's an opportunity for sharing knowledge with each other, understanding what the other person knows.
It's an opportunity to have a collaborative shared experience that will serve you later when working with that person, right? There's... [00:32:05][133.1]

Lee Moreau: [00:32:05] Mutual discovery [00:32:05][0.9]

Liz Gerber: [00:32:06] Mutual discovery. There's so many things that come out of brainstorming beyond how many ideas. And yet we have regressed in our evaluation of generating ideas with AI to what numbers we come up with. And that's one thing I'm hearing that's a little disconcerting to me, kind of our obsession with AI around numbers and speed. And it's like, yeah, we can get a lot more ideas and we can do it a lot faster. But do we want that? I joked recently that, I don't know about you, but I think we're in information overload and product overload already. When I go into the store, I am not dying to see more—  [00:32:48][42.5]

Lee Moreau: [00:32:50] More stuff.  [00:32:51][0.3]

Liz Gerber: [00:32:51] More stuff. I mean, I am interested in seeing things that are designed for broader populations, things that are designed for more diverse populations, things that meet the needs of people who have historically been underrepresented in product. Those are the things I'm looking for. I'm not looking for more generic goods. And so that concerns me: if we're speeding up the pace, why, and do we really want what we're gonna get?  [00:33:22][31.0]

Lee Moreau: [00:33:22] Yeah, are we not happy with what we have now?  [00:33:25][2.2]

Liz Gerber: [00:33:25] Yeah, here's where I think AI could come in very well. And one adjustment, because I feel like I've been a little bit of a the-world-is-all-bad voice: one thing I've seen that I do really like is the use of AI to personalize, and I think on-demand manufacturing. Like if we could get better at getting people specific, personalized products that work for them. That they keep for longer, that's a critical part. I don't know that we have that down. I think we have the personalization, but not necessarily the longevity. Then I think we are using AI in a good way. But just to create more units to ship and sell, it's just kind of disgusting to me, honestly.  [00:34:07][41.6]

Lee Moreau: [00:34:08] So if you can embed more love or more, I don't know, that attachment into an object that allows it to stay meaningful and relevant to my life longer, that's a breakthrough.  [00:34:22][13.6]

Liz Gerber: [00:34:22] I think that could be very interesting. But we have to take time to reflect on that, not just move on to the next thing. So I think reflection is gonna be part of our key future, which yeah, how do we do that? How do we reflect more?  [00:34:39][16.8]

Lee Moreau: [00:34:39] I'm grateful to have had this time to reflect with you. Liz, this was wonderful. Thank you so much.  [00:34:44][4.5]

Liz Gerber: [00:34:44] Thank you, Lee. Take care! [00:34:44][0.5]

Lee Moreau: [00:34:54] Right now, I'm here with Kevin Bethune at the Institute of Design on Wednesday, May 28th. Hi, Kevin.  [00:35:00][5.9]

Kevin Bethune: [00:35:01] Hey, Lee, good to see you again.  [00:35:02][0.8]

Lee Moreau: [00:35:02] Good to see you again. Indeed, you're a longtime collaborator and partner and we do so much together in the Design Observer ecosystem. It's great to see you again.  [00:35:12][9.6]

Kevin Bethune: [00:35:12] Love the team, love any way that I can contribute.  [00:35:14][2.0]

Lee Moreau: [00:35:15] You're a Design Observer legend, founder and Chief Creative Officer of dreams design + life. You're also an author and we'll talk a little bit about that in a second. But first, tell us a little about what you're gonna be doing here in the summit.  [00:35:28][13.2]

Kevin Bethune: [00:35:28] Sure, no, thank you for the welcome. Tomorrow, Thursday, I'm gonna moderate a panel called Beyond Silicon Valley with a couple of esteemed leaders from Microsoft and SAP. And we're just trying to think about and really explore generative AI's application within the corporate space, especially for organizations that serve the rest of the world, like large global target audiences. And ideally, we wanna bring to life in the conversation some concrete use cases, stories of wins, some trials and tribulations that extend beyond the typical rhetoric that we hear from Silicon Valley around some of this stuff around AI.  [00:36:08][40.2]

Lee Moreau: [00:36:09] And these are big representatives of really big companies, in particular for you as a designer and someone who's like working in craft, what are you listening for when you're talking to some of these leaders?  [00:36:20][10.4]

Kevin Bethune: [00:36:21] I'm looking for ways that design can intervene to make how we leverage these new models and platforms relevant, ethical, responsible, and also get some of the productivity benefits that these platforms can potentially give us. So I wanna hear some real use cases where large organizations are serving diverse, extensive audiences, hear some stories that are far beyond just the hype of AI.  [00:36:48][27.7]

Lee Moreau: [00:36:49] I mean, I think it seems like a big aspect of this summit is the convening of many different voices, the sort of network effect of people talking about, yes, responsible AI, but AI in general and what it should be doing for us. You have a tremendous network. How is this playing out in your networks right now? These conversations around AI, using these tools, et cetera.  [00:37:13][23.5]

Kevin Bethune: [00:37:14] You know, if anything, I'll first say that I'm learning just like everyone. I'm trying my best to experiment with some of these tools and platforms. But I have sort of dimensionalized in my mind's eye almost like a two by two, and forgive the consulting speak when I say two by two, but-  [00:37:28][14.4]

Lee Moreau: [00:37:28] I'm drawing it right now as you speak, so.  [00:37:30][1.8]

Kevin Bethune: [00:37:32] Among all the peer-to-peer conversations that, you know, we have in forums like this, I would almost put on one axis, the Y axis, this notion of observable behavior. Oftentimes we are either in reactive, fight-or-flight mode and we're coping, or, you know, some folks might go on offense and wanna be proactive and find ways to constructively utilize this stuff, generative AI being one example. On the horizontal axis, I might describe the nature of change, maybe from the predictable at one extreme, things that we've been able to track for years, to runaway change that's non-linear. Think about the observable behaviors when you talk about runaway change: head-in-the-sand behavior if you're on the reactive side, zero-sum thinking, risk aversion. And I'm hoping that conversations like the one we're about to have over the next couple of days skew us toward the upper right quadrant, where we figure out proactive ways to understand this runaway change, if AI is one of the paradigms that feeds it. You know, could we see different forms of making and experimentation that inform new leadership paradigms and opportunities? Could making be a form of evidence creation that informs new values and stoicism in choppy waters? Does leadership look different? Do we think about pluralistic opportunities versus zero-sum thinking? Can AI be something that we fear less, something we develop real, concrete use cases for that are believable and credible? These are the dimensions I'm thinking about.  [00:39:17][104.5]

Lee Moreau: [00:39:17] That quadrant you're describing is a pretty powerful one, but it's, I think maybe the way you're describing it is it's both enabled and also engaged.  [00:39:24][6.8]

Kevin Bethune: [00:39:25] Mhm yes.  [00:39:26][0.3]

Lee Moreau: [00:39:26] And maybe conversations like this allow people, or maybe systems and communities, to be both enabled and engaged and get to the best possible outcomes in that realm. You mentioned non-linear, and that's the title of your new book. You know, in the transition, you've had a couple of years since you published Reimagining Design and you've written this new book. Tell us a little bit about what your journey has been. I think Reimagining Design was sort of a reflection on your career and what you had done up until the point of writing that book and where you saw things going. Where does Nonlinear go from that point forward?  [00:40:04][38.5]

Kevin Bethune: [00:40:06] Nonlinear is not so much a sequel to the first book; it stands on its own. But I do think there was more to say around, like, what does the actual journey, the actual approaching of the work, feel like and look like to find that next new and novel innovation? And I will say that Nonlinear is not so much a process book. It's more of a book that hopefully garners intuition in the reader to think about perhaps some courageous creative acts that they can take, and to understand the vectors of choice, more choices than they maybe thought they had, when they're in moments where they're navigating ambiguity. And that failing forward, establishing vectors to learn, not necessarily positing a hypothesis or an answer right out of the gate. There's all types of learning vectors that we can take, making vectors that we can posit, to not just assert an answer, but maybe even create speculations that create more questions. The learning journey is what really entices me and what I bring to life in the new book.  [00:41:10][63.3]

Lee Moreau: [00:41:10] I love that idea of vectors of choice. It's 2025 now. We're in a moment where vectors of choice has a different ring to it than it did in 2024. Like, let's be blunt about this. And I think being able to have a conversation both about the powerful technology that we have to engage, because it's there in our face, but also about what we do as designers navigating what you described as these vectors of choice, that's a really powerful conversation that we need to be having right now.  [00:41:39][29.0]

Kevin Bethune: [00:41:39] Unfortunately, I've also noticed the last handful of years, even with the rise of generative AI, if companies applied a method, maybe from the design thinking playbook, they may want their teams to codify that practice and apply it to the next 10 projects. But when it breaks, when the 11th project ends up being a weird one, or change around that opportunity ends up being runaway and different, the teams are sort of left to their own devices to figure out: am I failing if I'm doing something different than the method? Am I violating something? And if I do take off the beaten path, will I be dinged for it later because I took that risk? So in this emerging exponential time that we're living in, change is hard already, but nonlinear change is even harder. And I think we're wrestling with that in this present moment. And we'll continue to feel that and experience that moving forward. So that's why I believe this intuition that's required is so much more important. And hopefully we can bring to life some of these ideas in the next couple of days.  [00:42:38][58.6]

Lee Moreau: [00:42:38] I think I'm reading between the lines here a little bit, but what I think I heard was design and risk go together. You can't have design without risk.  [00:42:46][8.3]

Kevin Bethune: [00:42:47] To create anything new and novel, you have to be prepared to do something different to get after it. Like not to diminish methods that have been useful in the past, but the world has changed so much in the last 12 months.  [00:42:59][11.6]

Lee Moreau: [00:42:59] And risk feels a little scarier now than it perhaps did before. And those are my words, not yours, but I'm kind of reading the room a little bit. I wanna talk a little about an article that you published on Design Observer, I wanna say 18 months ago, maybe. It got a lot of traction about Adobe. And this was one of the first times the creative community had exposure to the way information was being sort of mobilized, used, archived within one of these big tech companies, especially in the creative community. Can you talk a little bit about your realization there and the kind of conversations that you were having in your community? And where do you think things are gonna go in the future? What have been the repercussions of that article?  [00:43:47][47.5]

Kevin Bethune: [00:43:48] I do think the jury's out in terms of what that conversation, starting with an article like that, will eventually lead to. I share a healthy dose of constructive concern and maybe some optimism that design can have a hand in shaping an ethical way forward, almost like a new playbook, a rule book for how to go about this. But back to the original part of your question: in art and design, originality is almost like a deeply implicit, understood value. We all sort of can look at each other, blink, and say, thou shalt not steal another person's work or creative originality. And I think respect as an artist or respect as a designer is rooted in originality. So for a company like Adobe, where the creative community has used their tools forever, to have them have a hiccup, whether they were well-intentioned or not, a hiccup where you force your consumer base to question: are you using my stuff to inform your models and how they behave, and am I gonna see mirrorings of my original work in what your models derive generatively? That's a tremendous level of concern. It's like a core violation of creative principles. And how they handled it could have used some work as well; many parties might have handled that differently. So it just leaves everyone guessing in terms of, is it all about the almighty dollar? Are you still just gonna do what you wanna do and leverage what you're allowing these models to leverage as the sources of their generative capabilities? Are we just gonna continue to allow that to happen? I mean, there was a recent headline where, I forget which leader said: oh, if we require almost intellectual property tracking and tagging, this is gonna put the AI industry out of business. Well, that's an extreme point of view to take.  [00:45:45][116.6]

Lee Moreau: [00:45:45] Right, right.  [00:45:45][0.3]

Kevin Bethune: [00:45:46] That without ethics, this capability can't survive or function. I don't wanna believe that.  [00:45:51][5.7]

Lee Moreau: [00:45:53] No, I think that's a relatively insane position to think that we must abide by the systems that are emerging if we want to remain creative and have control of our own authorship.  [00:46:05][12.6]

Kevin Bethune: [00:46:06] It's crazy.  [00:46:07][0.4]

Lee Moreau: [00:46:08] I have to say, for our listeners, you should read the article again. I read it in preparation for our conversation here today and it still feels fresh and timely, and I hope some of that conversation takes place here at the summit. I don't know if you're bringing it into your conversations tomorrow specifically, but I think that has to be in. We're in a design school. We're having an AI conversation in a design school. If we're not having these types of conversations, we're not doing it right, I would think.  [00:46:34][25.6]

Kevin Bethune: [00:46:34] Totally agree, totally agree.  [00:46:35][0.6]

Lee Moreau: [00:46:35] Kevin, as always, you're basically part of the family here. Wonderful to chat with you. I can't wait to see your panel tomorrow and thank you so much for your time.  [00:46:44][8.4]

Kevin Bethune: [00:46:44] Thank you, Lee. Always a pleasure.  [00:46:45][0.9]

Lee Moreau: [00:46:49] Design As is a podcast from Design Observer. For transcripts and show notes, you can visit our website at designobserver.com slash design as. You can always find Design As on any podcast catcher of your choice, and if you like this episode, please let us know. Write us a review, share it with a friend, and keep up with us on social media at Design Observer. Connecting with us online gives you a seat at this round table. Thanks to the team at the Institute of Design, Kristen Gecan, Rick Curaj, and Jean Cadet for access and recording. Special thanks to Design Observer's Editor-in-Chief, Ellen McGirt, and the entire Design Observer team. This episode was mixed by Justin D. Wright of Seaplane Armada. Design As is produced by Adina Karp.  [00:46:49][0.0]