
Design As
Who does design belong to, and who is it for? How does it serve us—all of us—and how can we learn to better understand its future, and our own?
On Design As—a podcast from Design Observer—we'll dig into all of this and more, in conversation with design leaders, scholars, practitioners, and a range of industry experts whose seasoned perspectives will help illuminate the questions as well as the answers. In our first season, we considered the topics of Culture, Complexity, and Citizenship in terms of their impact on the design practice and also in terms of how they themselves are being shaped by design today. In season two, recorded at the Design Research Conference 2024 in Boston, we gathered new round tables to discuss Design As Governance, Care, Visualization, Discipline, Humanity, and Pluriverse. Plus, our bonus episodes are exclusive recordings of conference panels!
Design As Short | Design As Long
Design As Short | Design As Long features Lee Moreau in conversation with Anijo Mathew, Grace Colby, Brandon Schauer, and Ashley Lukasik.
Follow Design Observer on Instagram to keep up and see even more Design As content.
A full transcript of the show can be found on our website.
Season three of Design As draws from recordings taken at the Shapeshift Summit hosted in Chicago in May 2025.
Lee Moreau: [00:00:00] Welcome to Design As, a show that's intended to speculate on the future of design from a range of different perspectives. And this season, like everyone else, we're talking about AI. I'm Lee Moreau, founding director of Other Tomorrows and professor at Northeastern University. This past May, I attended the Shapeshift Summit at the Institute of Design in Chicago, where designers and technologists came together to try to get a handle on what responsible AI is and what it could be. In this episode, we're going to be talking in the space between taking the short view and the long view. This is a roundtable in four parts. On this episode, you'll hear from Anijo Mathew. [00:00:40][39.4]
Anijo Mathew: [00:00:40] I don't think progress by itself is bad. I just think that we should be having larger conversations about what is it that we are building. Just because we can build it, should we build it? [00:00:52][12.1]
Lee Moreau: [00:00:53] Grace Colby, [00:00:53][0.2]
Grace Colby: [00:00:54] You stop and think of everything a human does with intelligence. It's amazing what humans do and we certainly have not encapsulated all of it. [00:01:04][10.1]
Lee Moreau: [00:01:05] Brandon Schauer, [00:01:06][0.3]
Brandon Schauer: [00:01:06] If we want to solve the wrong problems that lead to the wrong intentional or unintentional consequences down the line, then we've just given over this responsibility for cognition to something that's going to come up with the answers we don't want. [00:01:22][16.4]
Lee Moreau: [00:01:24] And Ashley Lukasik, [00:01:24][-0.1]
Ashley Lukasik: [00:01:24] And think about how you design for actual outcomes, for actual impacts. What are you actually trying to address, and what do you need to be aware of when you're designing and deploying these technologies? [00:01:35][10.8]
Lee Moreau: [00:01:38] The decisions that we make now matter. Just throwing stuff on the wall to see if it sticks or how it sticks, et cetera, is just far too risky with these new powerful technologies. The potential for this technology to improve lives by eliminating unnecessary and potentially dangerous labor and work is extraordinary. The promise is there. But the danger that it could also reduce human agency is great. I support automation. I believe in it. I've worked in industrial settings and job sites. I worked at a grill at McDonald's in the 90s, which was an extraordinary space of automation, but not one in which I felt particularly empowered. And I teach in engineering schools like Northeastern and MIT. So I personally see many sides of this topic. I understand it. But there's no simple answer here. At the summit, we heard a lot of talk about humans in the loop, and that's great. It makes everyone feel better— better about these technologies, what they can do, where we sit within them. But that will only lead to better outcomes if the role of having humans in the loop is meaningful. I think as a community, designers have lately been caught in tension, trying to carve out a space, and it's gonna take a long time to get this right. Let's hear how this conversation played out at the Shapeshift Summit. [00:02:54][75.5]
Lee Moreau: [00:02:59] I'm here with Anijo Mathew at the Institute of Design on Wednesday, May 28th. Hi, Anijo. [00:03:03][4.6]
Anijo Mathew: [00:03:04] Hi, Lee. [00:03:04][0.4]
Lee Moreau: [00:03:05] Anijo Mathew is Dean and Professor of Entrepreneurship and Urban Technology here at the Institute of Design. So this is a unique event. I've been referring to it as a conference, but actually you described it as a summit. It's called a summit. You've convened a bunch of people. Tell us what you're trying to do and why you've gathered everyone here at Illinois Tech. [00:03:25][20.4]
Anijo Mathew: [00:03:26] Let me take that one at a time, right? So one is, why are we calling it a summit? It's not a conference. In a conference, you are talked to. In a summit, you're talking with the people, right? So we intentionally tried to keep it small, so the people who come in are talking to each other as much as listening and engaging with the speakers on the stage. And this is one of the founding ideas of this event, is that we wanted to have a conversation. My co-host, Albert Shum, who was the former CVP of design at Microsoft and is now an advisor to the team and me at ID, and I started to brainstorm. What does it mean for designers in an age of AI? What conversation should we have? And we realized that almost all the conferences and summits that we were seeing at the time, and this was a year and a half ago when we were brainstorming, were focused on technology. It was focused on, to put it simply, how many GPUs you need or how many teraflops you need to create a certain type of model, or what is the UX of AI. And nobody was really addressing what we thought was the biggest problem. What will happen to human systems when artificial intelligence becomes a reality? What happens to democracy? What happens to media? What happens to technology, societies, all of the things that we take for granted as these new ascending technologies come and shape our lives in the future? [00:05:01][95.4]
Lee Moreau: [00:05:02] I know from your background that you spent time at the Harvard Graduate School of Design, you have an architecture background, spatial thinking, right? Thinking about the city. How does that kind of inform or inflect the way that you're approaching the topic of AI and this emerging technology? [00:05:16][13.8]
Anijo Mathew: [00:05:16] Lee, one of the ways that I would ask you to think about AI is not from AI itself, but its positioning or its juxtaposition with other technologies. So we are seeing this new trifecta of technologies come to shape and form. AI is just one of these technologies. The other two are robotics and XR, or AR and VR, right? The last time we saw a trifecta of this shape and form was in the 1990s, with the mobile phone, the internet, and the desktop computer coming to shape our lives, right? That trifecta led to trillion-dollar industries, company names that we take for granted today: Amazon, Google, Uber, things like that. And our cities and our homes and our lives, our necks, our bodies were reshaped by the coming together of these technologies, right? We don't look the same as our predecessors in the 1950s because we are hunched down looking at our phones while we are walking through the city. We used to pick up a telephone and say, hello, how are you? We pick up our cell phones and say: Hey, where are you? Because on a cell phone, you know that if they pick up the phone, they're fine. We just don't know where they are in geolocation. This is the transformation that we saw with that trifecta coming together. What we are seeing right now is the coming together of this new trifecta. Can you imagine what our cities will look like? What our nation states will look like? What our schools will look like? Hospitals? Anything you can imagine, it's going to radically transform and change in this coming together of these new technologies. This is what I think we should be talking about today. As we assembled our speakers for the summit, we were intentionally trying to understand: what are people thinking about when AI hits civic systems, networks, and organizational systems? What are people thinking about when AI hits human systems? Democracies, nations, cultures, all of that.
And this is the idea that we used to build out the agenda for the summit. [00:07:46][149.2]
Lee Moreau: [00:07:46] One of the key topics here at this conference is responsible AI. It's central to the way that you framed the summit. And I'm wondering, it's 2025, middle of the year, when you see so much of the world being irresponsible, what is the responsibility that you have to be responsible or to at least have the conversation around that topic? How does that fit into this whole conversation? [00:08:13][26.4]
Anijo Mathew: [00:08:14] I love how you picked on responsible as an objective of AI rather than a state-of-the-art term. Responsible AI is a term that companies use to talk about guardrails and systems that they're putting in place to make sure that AI systems are controlled and not going beyond what human beings would like the AI to do. However, when we took the term responsible AI, we used it as an objective. We are arguing that, with implementations of AI around the world, we have a responsibility to make sure that the AI implementation is actually taking a human-centered approach rather than a techno-optimistic approach. I'm a techno-optimist. I believe that technology is good and will change humans for the better. But I also want to be cautious that not all leaders, not all systems are benevolent, and they may not actually use these technologies to benefit humankind. And in many ways, the conversation that we want to have at the Institute of Design is to talk about what happens when systems are not designed to be benevolent. What happens when systems are not designed to be human-centered? What happens when technology is designed for technology's sake, rather than for human advancement or societal benefit? I am very aware of the fact that we live in a capitalistic world and the money that these organizations generate pays my salary, right? So I'm not ignorant of that. And I don't think capitalism by itself is bad. And I don't think progress by itself is bad. I just think that we should be having larger conversations about what is it that we are building? Just because we can build it, should we build it? That's really the focus of responsible AI. [00:10:14][120.0]
Lee Moreau: [00:10:14] And that's the role of design too, is to ask those questions. [00:10:16][2.1]
Anijo Mathew: [00:10:17] That's exactly right, right? So a good designer never walks into a room and says, I know the answer, we're gonna build it. A good designer walks into a room and says, I don't know what the answer is, but I know there are ways that we can work together to figure this answer out. What is the way forward? We'll figure it out together. [00:10:37][20.1]
Lee Moreau: [00:10:38] I'm excited about this. And I look forward to seeing you out there in the summit. [00:10:41][3.5]
Anijo Mathew: [00:10:41] Thank you, Lee, and thanks to your team for coming over. [00:10:44][2.6]
Lee Moreau: [00:10:48] I'm here right now with Grace Colby at the ID, and it's Friday, May 30th. Hi, Grace. [00:10:53][4.7]
Grace Colby: [00:10:53] Hi, how are you today? [00:10:54][0.6]
Lee Moreau: [00:10:55] I'm doing great. [00:10:56][0.5]
Grace Colby: [00:10:57] Great. [00:10:57][0.0]
Lee Moreau: [00:10:57] Thank you so much for being here. Grace Colby is a filmmaker, a designer, a technologist, and a former board member here at the ID. And she's been exploring a lot of the topics that have been live in the conversation here at the Summit. So Grace, tell me a little bit about the work that you've been doing recently, specifically about AI. I know that you have been doing this project in education, testing some hypotheses. [00:11:17][19.8]
Grace Colby: [00:11:18] Yeah, I developed a course with the purpose of giving designers foundational knowledge about artificial intelligence so that they could participate in the development of AI products and services. I have a background in AI through my master's degree at the Media Lab at MIT. I also have a background in design. I have an undergraduate degree from the Institute of Design here in Chicago. So I've always been combining design and technology my entire career. [00:11:50][32.4]
Lee Moreau: [00:11:51] Describe the kind of students that you had — I mean, you just said foundational knowledge, and that can mean a lot of different things in a lot of different contexts. Describe some of the students that you had and what some of the projects or project outcomes were. [00:12:03][11.2]
Grace Colby: [00:12:03] Yeah, so the students were graduate students in the Master of Design program here at the Institute of Design. Typically, those students come from different walks of life, different disciplines when they come into this program, which is great. The way I structured the course, I had each student select an existing AI product or service to use as a case study throughout the entire course, and then examine it against the topic of the week, whether it was AI ethics, or what kind of hardware is going to be used, or the other foundational topics that I had selected. It was really great because the students chose the case studies from things that really resonated with them. So we had a great variety. There was someone who did Khan Academy, so that's an educational function for AI. Someone did a social robot called LEQ, so we had that kind of parasocial relationship with a physical entity or robot. Someone did Tesla Autopilot, so this student examined autonomous driving and the aspects of that. A couple of more business applications: IBM watsonx and Microsoft Copilot. Another one had to do with using AI to help with disabilities; the product is called Be My Eyes, where if you're vision impaired you can take your phone, take a picture of something, and then you'll get an audio description of what that is. So a really great variety of products to kind of show the breadth that AI can be applied to. [00:13:56][113.2]
Lee Moreau: [00:13:57] And the students were then kind of taking those case studies and advancing them with adding technology or understanding the technology implications? [00:14:05][8.2]
Grace Colby: [00:14:06] The case studies already had AI technology, so each was an existing AI product or service. Then the students were examining the case study on, for example, intelligence. We spent a whole week on intelligence separate from technology, to really stop and think about how much intelligence these products really have. And I used Howard Gardner's theory of multiple intelligences as an exercise for them, so it has eight types of intelligence, including emotional intelligence. So the students examined their AI product or service against Gardner's list to point out what types of intelligence it does have and what types it doesn't have. And that, I think, ended up being one of the favorite assignments among the students, because they got the choice to highlight their favorite of all the assignments at the end of the class, and almost all of them chose that one. [00:15:11][64.6]
Lee Moreau: [00:15:11] It strikes me that we've been talking about AI constantly these last couple of days, but an interrogation of the meaning of the word intelligence is not happening very often in these conversations. I think we think we're maybe past that point, and we're just looking at how do we deploy it, how do we mobilize it, et cetera. You're really saying, take a step back here. [00:15:34][22.5]
Grace Colby: [00:15:34] Yeah, and I wanted the students to kind of come to their own realization that the functionality of, for example, ChatGPT is not the beginning and end of AI. It hasn't been solved. Again, if you step back and think of everything a human does with intelligence, it's amazing what humans do. And we certainly have not encapsulated all of it. [00:15:59][25.1]
Lee Moreau: [00:16:00] So Grace, as you were talking about the class, it brings up the academic journey that you had, right? You were here at the ID, got a degree, then you went to the MIT Media Lab. Fast forwarding from that point to where we are now, talk about what you see and kind of where you're focusing most of your energy right now. [00:16:21][21.0]
Grace Colby: [00:16:21] Well, whenever I kind of dip back into design and technology, I realize how exciting the combination is to me personally. And going back to this class that I taught, I impressed upon the students in the summary remarks of the class that this combination of AI and design is what's happening right now for them. But in the course of their career, there are gonna be other new technologies that they are gonna have to figure out: how do design and that technology fit together, what are the opportunities? And I gave them a timeline showing my trajectory. So I worked with desktop publishing, and that was, do you remember the desktop publishing revolution? That was like the new technology that was changing everything. So I did that. I worked in the early days of the internet. I didn't work in mobile computing, but you can look at that as the next thing. I put AI on the timeline. And then I put these other dots on the timeline in the future. Like, we don't know what those are, but you young designers are gonna encounter these, and you should do what we did in this class. Learn about the technology. What are the core aspects of it, from the software point of view, the mechanics of it, how it works? What are the design implications? Because for every single assignment I devised in this class, the final question was, what are the design implications of this? The software, the hardware, the intelligence or lack thereof. So I said, moving forward, I'm hoping I gave you kind of a template to explore and master and understand and lead the way for future technologies, incorporating design. I think it's a credit to my education, both at the ID and at MIT, that I was taught to look at these technologies and design in a fundamental way. So that at age 60, I can come and teach a class on the latest, greatest technology, and I am an expert of sorts at how to approach it.
It's the mindset from these academic institutions that shaped me, and is hopefully shaping the students now graduating, to approach what's happening and not be afraid of it. Just say, what is this thing? What does it do? How can it help people? What's the designer's role? [00:18:59][157.7]
Lee Moreau: [00:18:59] What is the designer's role? That is the question. I mean, we're here in the School of Design. What is the designer's role? [00:19:05][5.5]
Grace Colby: [00:19:05] Well, I think it's the same role it's been. I think some things are new. For example, something just as straightforward as conversational AI: designing conversation is something you would do now if you are designing the interface for something that's LLM-based, right? But if you think about typical foundation courses for designers, they're doing visual design or product design or system design, but designing the verbal back and forth, that's not part of those foundational things. So I think there are new types of expertise or understanding that designers would need to have. And that's just one example. We talked at this conference yesterday about screenless interfaces, so that's a whole new world. I think another interesting topic is humans interacting with robots, sometimes called co-bots. [00:20:07][61.6]
Lee Moreau: [00:20:07] Co-bot, okay, cool. [00:20:08][0.9]
Grace Colby: [00:20:09] You might be working with a robot, or you might be working with a robotic arm, or the people in Amazon warehouses are working with robots. [00:20:17][8.0]
Lee Moreau: [00:20:18] Right. [00:20:18][0.0]
Grace Colby: [00:20:18] So again, thinking of that as an extension of the product design discipline I think is fascinating. So yeah, I think there are new ways of thinking that need to be figured out. [00:20:31][13.7]
Lee Moreau: [00:20:32] Grace, it was wonderful spending time with you. Thank you so much for sharing with us. [00:20:35][2.9]
Grace Colby: [00:20:35] Thank you. [00:20:36][0.3]
Lee Moreau: [00:20:42] I'm here right now with Brandon Schauer at the ID and it's Friday, May 30th. Hi, Brandon. [00:20:47][4.3]
Brandon Schauer: [00:20:47] Hey. [00:20:47][0.0]
Lee Moreau: [00:20:48] Brandon Schauer is the SVP of Climate Culture at Rare and serves on the Board of Advisors here at the ID. So what have you seen so far, Brandon? [00:20:56][8.3]
Brandon Schauer: [00:20:57] Oh, I've seen an interesting glimpse of AI from across the world, and how this shapeshift has actually begun much earlier in other countries: what's going on in Japan, what's going on in New Zealand, what's going on throughout the world. And it's really been great to see those different vantage points, and the differences in how people are exploring. But it still seems like such early days, like, hey, we don't actually know where this is going. It reminds me of other early trends, in the world of digitization, of moving to the web, of these other things: oh, there are some gimmicky uses of it right now, but where's this really going? And I feel like this is getting explored here. [00:21:49][51.9]
Lee Moreau: [00:21:49] That's one of the interesting things about this conference, and maybe it's just any time that you focus on a topic, but we've been so focused on AI and the current hype cycle that it feels like it's been forever, like we've been talking about AI in everything. It's actually relatively recent. And so one of the things that keeps coming up over and over is that we still have time to get this right. This morning really highlighted that technology has this long tail of being mobilized and deployed into our society, but we've got to really be thinking about those things right now. How does this sense that now is the moment to start getting organized about this, to be really thoughtful and make important decisions, implicate us as designers? [00:22:36][47.0]
Brandon Schauer: [00:22:37] Yeah, I think as designers we have to think through, okay, what is our responsibility? Over time, designers help businesses see through a wider lens. Businesses may be looking at next quarter, next year, or the fiscal budget for the next three, but sometimes we can help sense what's larger. So designers in the early days went, hey, maybe products hurt you, right? Maybe they should be safe, or maybe they should be usable, or maybe they should be delightful, and brought in these other aspects: maybe they should be accessible or inclusive. And I think these are some of the questions and topics we bring to AI: okay, but what do we want from it that's going to be better for business over the long term? It may not be the choices that automatically shape next month's revenue model, but over time we're going to build trust with a customer, or we're going to build a better community because of this, and it's going to have the impacts we want to see as a business long term. And I think without design, you sometimes miss that lens of what do we really want. [00:23:49][72.2]
Lee Moreau: [00:23:50] It's one of the analogies that Anamitra Deb, who was speaking this morning, was using: we can't imagine driving a car on unsafe roads, without seat belts, without some sort of speed limits and regulations, but those are all acts of design. He didn't say this, but basically he's saying these are all things that we have designed into this landscape over time. What are some of the things that you think designers need to bring to the conversation around AI, equivalent to those kinds of things about cars and automobiles? [00:24:22][32.6]
Brandon Schauer: [00:24:22] Yeah, when you think about it, one way of thinking about AI is like cognition. Okay, well, we need to do more cognition more quickly, and so we're going to incorporate AI to take over or automate some of that. And I think we have to think about what we give up when we give that over, right? And a lot of discussion over the past day has been, okay, well, it comes down to what problems we really think we should solve with this. If we want to solve the wrong problems that lead to the wrong intentional or unintentional consequences down the line, then we've just given over this responsibility for cognition to something that's going to come up with the answers we don't want. And so how do we as designers help make sure the businesses, the organizations, the communities are going to start thinking about what are the real questions we want to be answering with this? And when we ask it to do some of this cognition work, we heard a lot about transparency or traceability, like, oh, how did you come up with this answer? What sources did you use? [00:25:31][69.0]
Lee Moreau: [00:25:31] Where did this come from? [00:25:32][0.7]
Brandon Schauer: [00:25:33] Right, and I think we have to think through that as well as designers: what are the questions that drive the inputs, where does the data come from, and what is the interface of the AI that we provide to humans as well. [00:25:46][13.7]
Lee Moreau: [00:25:47] So Brandon, we're here at the ID, the Institute of Design at IIT, Illinois Tech. This is the site that can be credited in many ways with expanding the role of design. It's not just about form generation and making that cool thing that everybody lusts after. It's also how new technologies and social forces interact with the world around us, and design has played a role there too. As you look from your seat at the table on the Board of Advisors, where do you want students to be thinking or playing or intervening in the world as we move into the future? [00:26:22][35.2]
Brandon Schauer: [00:26:23] It's an important question. I think, to build on your sense of the Institute of Design, one of the things beyond really bringing that human perspective to solving problems is also a systems perspective: being able to look not just at the surface level, but at all the systems going on behind it. What do we want from those systems? How do we change those systems to get, yes, elegant experiences, but also better outcomes down the line? And that's outcomes for business, but it's also outcomes for people and society, and being able to bring that big a lens. And so for designers today, for the students today, I really want them to think about and take seriously the soft power that they have as designers. We talk a lot about the seat at the table with design. And yes, there are great design executives who take that seriously and are molding that and working that forward. But I think even entry-level designers or graduate school alums who are entering the workforce now need to take seriously that when they're working with gen-AI or with their hands to create the possible futures, the possible concepts that we are going to ship next week, next month, next year, they are rendering the future of where that organization is going. I remember one CEO saying, my favorite place to go in the organization is to where the designers are, because that tells me where our company is going to be. It's showing me a picture of where our tech company is going to be, you know, quarters from now. [00:28:03][99.6]
Lee Moreau: [00:28:03] I have not heard that term you used, soft power, as it pertains to design very much. And it's almost like maybe there's a discomfort in our community in using a term like that. There's a bit of honesty in it too, or at least a recognition that if we don't acknowledge on some level that we have a fair amount of influence, we're not going to really mobilize it and use it properly. So I love the way you're talking. [00:28:28][25.2]
Brandon Schauer: [00:28:29] Yeah, you think about it. I think it was Henry Dreyfuss who learned to draw upside down so he could help his clients feel like they were thinking of the idea as he unveiled it to them, right? And so when you think about the digital design world, it's often tech, product, and designers working together at the table to solve things. And while tech has the most staff, and product controls maybe the budget and the shipping cycle, and then you have designers, we're choosing even what futures we're considering by showing, you know, are we going to nudge the consumer to one side or the other based on where we place a button? These little tiny decisions still have big influence over future outcomes, but you can maximize that too, you know, designing a whole customer journey or designing a widget that's going to go out into the world. [00:29:28][58.6]

Lee Moreau: Brandon, thank you so much for sharing with us.
Brandon Schauer: [00:29:28] Thanks for the time. [00:29:28][0.3]
Lee Moreau: [00:29:39] It's July 11th and I'm speaking with Ashley Lukasik. Hi Ashley. [00:29:43][3.9]
Ashley Lukasik: [00:29:44] Hi, how are you? [00:29:44][0.6]
Lee Moreau: [00:29:46] Good. So it's been a while since we were in Chicago. Shapeshift was maybe two months ago at this point. Time flies. And I know you've been busy in the interim, but let's talk a little bit about you and maybe get a brief introduction. [00:29:59][13.6]
Ashley Lukasik: [00:30:00] Sure, I'm happy to. So I'm the founder and CEO of Murmur Ring. We're an experience design and storytelling agency based in Chicago. The work we do with clients is sort of threefold. We'll work in a strategic capacity, applying a lot of human-centered design principles and approaches, including qualitative research, in our process to help organizations define and drive strategy. And then we use our immersion model and the storytelling components to essentially catalyze those strategies and objectives that organizations have. [00:30:38][38.5]
Lee Moreau: [00:30:39] Well, that's a great introduction to the collaboration we'll talk about here: the Shapeshift conference on responsible AI. So at Shapeshift, your immersion basically kicked things off. It was the day before the conference officially started, and the immersion that you hosted had a sort of echo effect on many of the people, particularly the speakers, who were there for the next couple of days. Can you talk about how you curated that and what your conception was going in? And then I'm going to ask, obviously, what you think the net effect of that was afterwards for the rest of the conference. [00:31:13][34.3]
Ashley Lukasik: [00:31:14] Yeah, so the immersion was limited to 40 people, whereas the conference was about 200. So it was sort of first-come, first-served who could participate in the immersion. It was one day, but one very long day, from about 8 a.m. to about 8 or 9 p.m. The conference theme was responsible AI. So the goal that Anijo Mathew, the Dean at the Institute of Design, and Albert Shum, who co-curated this program, had for this was to look beyond features and functions of technology, to go further upstream and think about how you design for actual outcomes, for actual impacts: what are you actually trying to address, and what do you need to be aware of when you're designing and deploying these technologies? So what we wanted to do with the immersion was give people a much more intimate, firsthand experience with the implications and applications of AI, both what we're experiencing right now, what's already happening, and what is coming. So we started the experience with a what's-now moment. And we brought people to the National Public Housing Museum, which is a beautiful new museum. It was, I think, about 18 years in the making. And we had a very powerful conversation with Lisa Lee, who's the founder and executive director there. She's an author, and she's an expert on housing and other topics. That was a deliberate choice in the sense that, of course, housing is something that we can all relate to. It's an essential need. But what was interesting about that visit is that we were grounded in this conversation around public housing that the museum is teeing up for you in the curation of that space. But what we did was bring in a group called Open Communities, which is a housing advocacy group that had discovered that AI and machine learning were being used to discriminate against renters. So as a housing advocacy group, they then filed a lawsuit.
That lawsuit got settled, but they took their work a step further and actually engaged some young technologists to develop kind of a counter-technology: using AI to find discrimination in housing listings that they could then go after. So it was a really interesting way of looking at something that's happening right now that has real implications for people's lives, that's something we can all kind of relate to, but where AI was sort of for good or for evil, if you will. So that was our first site visit. The second visit we did was at Mindworks, which has a relationship with the Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business. And we visited with a researcher there, his name is Dr. Alexander Todorov, who is an expert in behavioral science and facial recognition. And he's developing essentially an understanding of how facial recognition works and how it relates to things like trustworthiness and attractiveness. What we explored there at Mindworks — and Mindworks functions as a kind of lab, it's on Michigan Avenue. You can go in there and engage with research as it's going on, you basically contribute to research, but also see what that looks like as a passerby in the city of Chicago — but the conversation there was really, okay, so you're developing this research and this understanding, but when that then gets commercialized or taken up by, say, a law enforcement officer or an employer, what are some of the implications of this around facial recognition? And so at that point, I think, in the course of our immersion, people were starting to have kind of a little bit of a doomsday sort of a reaction to what we were hearing and seeing, and were concerned about these potential implications. 
And we're not talking about people who are anti-tech. We're talking about a lot of people who are developing strategies — not necessarily the folks who are doing the coding, but definitely people who are trying to figure out how large organizations use this stuff. [00:35:31][257.0]
Lee Moreau: [00:35:32] There were a number of conversations that I heard where they were going back to those initial conversations in the immersion saying: Oh, remember that time that we saw this or the conversation we had about these practical experiences that are gonna affect everyday lives. Just taking it down from the sort of lofty conversation around responsible AI to something that was super tangible, very meaningful and related to everyday life. [00:35:56][23.7]
Ashley Lukasik: [00:35:56] Right. [00:35:56][0.0]
Lee Moreau: [00:35:57] That was happening a lot. [00:35:58][0.7]
Ashley Lukasik: [00:35:58] Yeah, yeah. I mean, and that was the goal, right? It was a way to very quickly make this real for people and catalyze that understanding so they could then bring those inputs into the conference and hopefully get more out of it. So essentially at this point, this is like midday, we've gone to what is happening now and what's likely soon to be happening that will have implications. We then took that kind of exploration of some of this dystopian potential further. And we spent time with the speculative artist duo Parsons & Charlesworth, and their work is very kind of Black Mirror. They've focused a lot on the contingent labor force and have taken this very well-researched kind of exploration of what's possible around things like biohacking, and they've developed objects that are meant to spark a discourse and, again, help us imagine some of the most far-reaching possibilities, which might not be as far-reaching as we think. You know, some of what seems kind of shocking and crazy actually could be right around the corner. When they actually display this work — they've done that at the Venice Biennale, they won a Lumen Prize for this — sometimes people are confused as to whether these objects they've produced are real products that could be purchased. [00:37:15][76.9]
Lee Moreau: [00:37:16] Right. [00:37:16][0.0]
Ashley Lukasik: [00:37:16] So it's fun, it's satirical, it's disturbing. Again, it helped push people in terms of what they could imagine as a possibility. And then our final kind of main stop on the immersion for the day was at Salesforce, where we then were kind of trying to come back to, okay, what is one organization, one of the big tech giants, doing about this, and how do they view responsibility? So we visited with their philanthropy and climate folks, who are developing partnerships and deploying AI for some of those efforts. And they have a number of partnerships with not-for-profits; some of it has to do with disaster relief, to help not only increase kind of the scale at which these nonprofits can serve and have a positive impact, but they're also trying to help those organizations increase their own internal capacity. So a lot more sort of learning and development, as opposed to only giving them the free technology to use. At Salesforce, I think it became very clear, and they were candid about this, that any seemingly altruistic move a company is making to have a positive impact or take responsibility also has a business case. So I think a really important key takeaway from that is, okay, first of all, we should make sure that we create economically viable models for things that are good for people, good for the environment, et cetera. And we also can't leave all of those decisions up to corporations to make. There is this really kind of urgent need for more regulation in this space, and really more alignment, I think, first and foremost, on what we want this technology to do, so that we don't find ourselves in a situation where we let it run wild and then we're trying to undo and clean up some potentially very substantial messes. [00:39:24][128.0]
Lee Moreau: [00:39:25] And that was a huge tension in the rest of the conference. To what degree should we self-manage, self-regulate, self-monitor, et cetera, all of these new technologies that are being deployed by us, by our companies, and by our competitors, versus should we just kind of have a sort of laissez-faire attitude to this and see what happens? I think having that conversation in one of the top half-dozen major tech companies, who are the big deployers of AI right now, is a really important conversation to be having. [00:40:00][35.5]
Ashley Lukasik: [00:40:01] Yeah, absolutely. And I also can't help but wonder, how different would some of these conversations be, say, three years ago? We're in a really specific political and cultural moment right now, and politics and tech are in bed with one another. And I think there's a real lack of trust about all of these kinds of systems and institutions. So it makes it really, really hard to kind of determine, like, who holds that accountability and who should? And, of course, there's a lot, you know, the sort of country as a whole, and the United States at least, is very polarized around these issues. [00:40:42][41.0]
Lee Moreau: [00:40:42] Ashley, it's been wonderful chatting with you. Thanks for being here. [00:40:45][2.5]
Ashley Lukasik: [00:40:45] Thank you so much for having me. [00:40:46][1.0]
Lee Moreau: [00:40:48] Design As is a podcast from Design Observer. For transcripts and show notes, you can visit our website at designobserver dot com slash designas. You can always find Design As on any podcast catcher of your choice. And if you like this episode, please let us know. Write us a review, share it with a friend, and keep up with us on social media at Design Observer. Connecting with us online gives you a seat at this round table. Special thanks to the team at the Institute of Design, Kristen Gecan, Rick Curej, and Jean Cadet, for access and recording. Tomorrow's leaders need a new skill set. Consider a creative alternative to the MBA. The Institute of Design's MS in Strategic Design Leadership is for remote part-time learners. Learn more at institute dot design. Special thanks to Design Observer's editor-in-chief, Ellen McGirt, and the entire Design Observer team. This episode was mixed by Justin D. Wright of Seaplane Armada. Design As is produced by Adina Karp. [00:40:48][0.0]