MINDWORKS

Mini: What’s the worst that can happen?! (Julie Shah and Laura Majors)

May 09, 2021 Daniel Serfaty

Famous last words: “what’s the worst that can happen?” When introducing automation, artificial intelligence, and robotic devices into our lives, what should we be worrying about? MINDWORKS host Daniel Serfaty asks Prof. Julie Shah, associate dean of Social and Ethical Responsibilities of Computing at MIT, and Laura Majors, Chief Technology Officer at Motional, to learn more.


Listen to the entire interview in Human-Robot Collaboration with Julie Shah and Laura Majors


Daniel Serfaty: Julie, I know that part of your job as associate dean of the School of Computing is to consider, or to worry about, the ethical and societal dimensions of introducing automation, artificial intelligence, and robotic devices into our lives. What are you worried about? What's the worst thing that can happen when introducing these new forms of intelligence, some of them embodied, into our lives?

Julie Shah: There's a lot to worry about, or at least there's a lot I worry about. I was delighted to take this new role as associate dean of social and ethical responsibilities of computing within MIT's new Schwarzman College of Computing. I was predisposed to step into the role because much of my research has focused on being intentional about developing computing that augments or enhances human capability rather than replacing it, and thinking about the implications for the future of work: what makes for good work for people? So it's not about deploying robots in factories that replace or supplant people, but about how we leverage and promote the capabilities of people. That's only one narrow slice of what's important when you talk about social and ethical responsibilities.

But the aspects that worry me are the questions that are not asked at the beginning, and the insight, the expertise, the multidisciplinary perspectives that are not brought to the conception and design stage of technologies, in large part because we just don't train our students to be able to do that. And so the vision behind what we're aiming to do is to actively weave social, ethical, and policy considerations into the teaching, research, and implementation of computing. A key part of that is to innovate and figure out how we embed this different way of thinking, this broadening of the languages our students need to speak, into the bread and butter of their education as engineers.

On the teaching side, our strategy is not to give them a standalone ethics class. Instead, we're working with many dozens of faculty across the institute to develop new content as little seeds that we weave into the undergraduate computing courses they're taking, the major machine learning classes, the early classes in algorithms and inference, and show our students that this is not something separate, not an add-on that they think about later to check a box, but something that needs to be incorporated into their practice as engineers.

It's sort of applied, almost like medical ethics. What is the equivalent of a medical ethics education for a doctor, but for a practicing engineer or computer scientist? By seeding this content through their four years, we essentially make it inescapable for every student we send out into the world, and we show them through modeling, through the incredibly inspiring efforts of faculty who, at a different stage in their careers, also work to bridge these fields, how they can do it too. A key part of this is understanding the modes of inquiry and analysis of other disciplines and building a common language, so you can leverage the insights of others beyond your discipline to even just ask the right questions at the start.

Daniel Serfaty: I think this is phenomenal. By introducing this concept to our engineers and computer scientists today, we're going to create a new generation of folks who, as you say, ask many questions before jumping into coding or writing equations, and who understand the potential consequences or implications of what they're doing. That's great. Rather than worrying ourselves crazy about Skynet and the invasion of the robots, I think it's a much better thing to understand this introduction of new intelligences, in the plural, into our lives and into our work, and to think about it almost like a philosopher or a social scientist would. That's great.

Laura, I want a quick prediction, and then I'm going to ask both of you for some career advice, not for me, though perhaps for me too. Laura, can you share your prediction with the audience? You've been in different labs and companies, and you're lecturing all over the world about this field. What does human-robot collaboration look like in three years, and maybe in 15 years?

Laura Majors: That's a big question. I know it's a broad question too, because there are robots in many different applications. We've seen some really tremendous progress in factory and manufacturing settings and in defense settings. I think the next revolution, and really why we wrote the book the way we did and when we did, is going to be in the consumer space. We haven't really seen robots take off there. There are minor examples. There's the Roomba, which is a big example, but it performs very limited tasks. We're seeing robot lawnmowers, but I think the next big leap is going to be seeing delivery robots and robotaxis start to become a reality, not everywhere, but I would say in certain cities. I think it's going to start localized and with a lot of support in terms of mapping and the right infrastructure to make that successful.

I think that's the three-year horizon. On the 10-year horizon, you start to see these things scale and become a little more generalizable and applicable to broader settings, and again, start to be more flexible to changing cities, changing rules, and the types of things that robots struggle with. They do very well with what we program them to do. And so it's us, the designers, who have to learn and evolve and figure out how to program them to be more flexible, and what some of those environmental challenges are that will be especially difficult when we move a robot from one city to another, whether it's a sidewalk robot or a robotaxi.

But after the deployments in a few years, when we start to see these things in operation in many locations, we'll start to see how we pick a robot up and move it to a new city, and how we can better design it to still perform well around people who have different norms, different behaviors, and different expectations of the robot, and where there are different rules and other kinds of infrastructure differences that may be hard for robots to adapt to without significant technical changes.

Daniel Serfaty: Thank you. That's the future I personally am looking forward to, because I think it will change us as human beings, as workers, as executives, and as passengers, and that change is something I'm looking forward to.