MINDWORKS

Mini: Who is in control: man, or machine? (Julie Shah and Laura Majors)

Daniel Serfaty

When designing the systems that would take man to the moon, Apollo engineers had to figure out how much control to give the guidance computer versus the astronauts. Sixty years later, engineers are having a similar debate: how much control should humans relinquish to machines and robots? The debate only gets harder as time goes by, because machines and robots are becoming more intelligent and can do more.

MINDWORKS host Daniel Serfaty speaks with Prof. Julie Shah, associate dean of Social and Ethical Responsibilities of Computing at MIT, and Laura Majors, Chief Technology Officer at Motional, to get their take.

Listen to the entire interview in Human-Robot Collaboration with Julie Shah and Laura Majors

Daniel Serfaty: …let's jump into that because I want to really dig deeper right now into the topic of human-robot collaboration. And my question, and any one of you can answer, is this: humans have been working with machines and computers for a while. Actually, Laura, you said you walked into a human-machine interaction class at Georgia Tech a couple of decades ago, or at least that long ago. So we've been teaching this for a while. Isn't human-robot collaboration just a special case of it? And if not, why not? What's unique here? Is there a paradigm change, and because of what? Any one of you can pick up and answer.

Laura Majors: I remember one of my first projects at Draper was to work on an autonomous flight manager for returning to the moon. I was so surprised to find a paper written back in the Apollo era. I think Larry Young, now an MIT emeritus professor, was one of the authors. And even back then, they were talking about how much control to give the guidance computer versus the astronauts. So you're right, this discussion and debate goes way back. And how is it different now? I think it's only gotten harder because machines and robots have become more intelligent, and so they can do more. And so there's this balance: how do you figure out what it is they should do? How are they going to be designed to be that puzzle piece, as Julie described, to fit with the people they interact with or interact around?

Julie Shah: I fully agree with that. And maybe the additional thing to add is that I don't think human-robot interaction is a special case or subset of human-computer interaction. There are different and important factors that arise with embodiment and thinking about interaction with an embodied system. Maybe to give two short examples of this: I'm not a social robotics researcher. I started my career working with industrial robots that work alongside people in factories. They are not social creatures; they don't have eyes, they're not cuddly. You don't look at them and think of them as a person.

But we have this conference in the field, the International Conference on Human-Robot Interaction. And up until lately, when it got too big, it was a single-track conference. There's a foundation of that field that comes from a psychology background. And so in this conference, you'd watch all these different sorts of papers from all different sorts of backgrounds. I remember there was this one paper where they were showing differences in behavior when a person would walk by a robot, depending on whether the robot tracked the person with its head camera as they walked by, or whether the robot just stared straight ahead. And if the robot tracked the person as the person walked across the room, the person would take this very long and strange arc around the robot.

I just remember looking at that and thinking to myself, "So I'm working on dynamic scheduling." Like on a car assembly line, every half second matters. A half second will make or break the business case for introducing a robot. I'd say, "Oh, it's all about the task." But if you get these small social cues wrong, if you just think, "Ah, maybe the robot should be social and watch people around it as they're working," that person now takes a second or two longer to get where they're going, and you've broken the business case for introducing your robot.

And so these things really matter. You really need to understand these effects, and they show up in other ways too. There is an effect on trust related to the embodiment of a system. The more anthropomorphic a system is, or if you compare a physical robot with computer decision support, the embodied and more anthropomorphic system can engender inappropriate trust. You might engender a high level of trust, but one that's not appropriate to its capabilities. And so while you might want to make a robot that looks more human-like and more socially capable, you can actually be undermining the ability of that human-machine team to function by engendering an inappropriate level of trust in it. That's a really important part of your design space, and embodiment brings additional considerations beyond an HCI context.

Daniel Serfaty: So what you're sending us is a warning: do not... think first before you design a robot or robotic device in a way that looks or sounds or behaves or feels more like a human. It's not always a good thing.

Julie Shah: Yeah. Every design decision needs to be intentional, with an understanding of the effects of that design decision.

Daniel Serfaty: Now I understand a little more. Robots, unlike classical machines of the '70s, say, have the ability to observe and learn and, as a result of that learning, change. Is that also changing the way we design robots today, or is that something more for the future, this notion of learning in real time?

Julie Shah: So there are a few uses of machine learning in robotics. One category of uses is that you can't fully specify the world or the tasks for your robot in advance. And so you want it to be able to learn to fill in those gaps so that it can plan and act. And a key gap that's hard to specify in advance is, for example, the behavior of people, various aspects of interacting with a person; a human is like the ultimate uncontrollable entity. And it's been demonstrated empirically in the lab that when you hard-code the rules for how a system works with a person, or for how it communicates with a person, the team will suffer compared with an approach that's more adaptable, that's able to gather data online and update its model for working with that person.

And so this new ability of machine learning, which has really transformed the field over the last 5 to 10 years, certainly changes the way we think about designing robots. It also changes the way we think about deploying them, and it introduces new critical challenges in the testing and validation of the behavior of those systems, new challenges related to safety. You don't get something for nothing, basically.

Laura Majors: On that point of online learning: machine learning is, I would say, core to the development of most robotic systems today, but online learning and adaptation has to be designed and thought through very carefully, because most robotic systems are safety-critical systems. And so you need to go through rigorous testing for any major change before fielding that change in a new software release or software update, for instance. I think online learning and adaptation can also create some unexpected interaction challenges with people. If the system they're using is changing underneath them, it can have negative impacts on effective collaboration.

Daniel Serfaty: Yes, that makes total sense. We'll get back to this notion of mutual adaptation a little later, but your book is full of beautiful examples, I find them beautiful, of basically the current state of affairs as well as the desired state of affairs. Many people in the field tend to oversell the capability of robots, not because they're lying, but because they aspire to it, and sometimes they confuse what is with what could be or will be. You describe different industries in the book, with beautiful examples. Laura, I would like you to take an example, maybe in the world of transportation in which you live, to show what we have today and what we will have in the future in that particular domain, whether it's autonomous cars, which everybody obviously is talking about, or any other domain of your choice. And Julie, I'd like you to do the same after that, perhaps in the manufacturing or warehousing domain.

Laura Majors: In our book, we talk a lot about air transportation examples and how some of the innovation we've seen in that space can also yield more rapid deployment and improvement for ground transportation robotics. One example that I really love is what's called TCAS, the Traffic Collision Avoidance System, where the system is able to detect when two aircraft are on a collision course and can recommend an avoidance maneuver. I think the beauty is in combining that system with the other layers: there's air traffic control, which is also monitoring these aircraft, and then there are, of course, the pilots on board. And when you look at air transportation, there have been these layers of automation added over time, not just automation within the cockpit, but automation across aircraft; TCAS is an example of that. That's really enabled us to reduce the risks where errors, catastrophic errors, can happen.

And so I think we see some of that happening in ground robotics as well, and in the future, ways for robots to talk to each other. TCAS is a little bit like the aircraft talking to each other; if we could imagine future robots talking to each other, to negotiate which one goes first at an intersection, or when it's safe for a robot to cross a crosswalk... When we look into the future, at how we enable robots at scale, it's that type of capability that we'll need to make it a safe endeavor.
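The TCAS-style negotiation Laura describes can be sketched in a few lines. This is a hypothetical illustration, not Motional's actual protocol: each robot broadcasts its state, and both sides apply the same deterministic rule so they independently agree on who crosses first. The `Robot` fields and tie-breaking rule are assumptions made for the example.

```python
# Minimal sketch (assumed, for illustration only) of two robots negotiating
# right-of-way at a shared intersection, in the spirit of TCAS: both sides
# run the same rule on the same broadcast state, so they agree without a
# central controller.

from dataclasses import dataclass

@dataclass
class Robot:
    robot_id: str
    eta_seconds: float   # estimated time to reach the intersection
    is_stopped: bool     # a stopped robot yields to moving traffic

def right_of_way(a: Robot, b: Robot) -> str:
    """Return the id of the robot that should cross first.

    Assumed rule: a moving robot beats a stopped one; otherwise the robot
    closer to the intersection goes first; ties break on the smaller id so
    both sides compute the same winner.
    """
    if a.is_stopped != b.is_stopped:
        return b.robot_id if a.is_stopped else a.robot_id
    if a.eta_seconds != b.eta_seconds:
        return a.robot_id if a.eta_seconds < b.eta_seconds else b.robot_id
    return min(a.robot_id, b.robot_id)

car = Robot("car-7", eta_seconds=2.0, is_stopped=False)
sidewalk_bot = Robot("walker-3", eta_seconds=3.5, is_stopped=False)
print(right_of_way(car, sidewalk_bot))  # car-7 crosses first
```

The key design property, as with TCAS resolution advisories, is symmetry: `right_of_way(a, b)` and `right_of_way(b, a)` must return the same answer, or the two robots would each believe they have priority.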

Daniel Serfaty: So you've introduced this notion of a progressive introduction of automation and robotics, not a step function but more of a ramp, in which the system eventually evolves into something like the one you described. What's the time horizon for that?

Laura Majors: I think you have to get to a core capability, and then there are improvements beyond that that we learn from things that happen, not necessarily accidents, but near accidents. That's the way the aviation industry is set up. We have this way of collecting near misses, self-reported incidents that maybe didn't result in an accident but could inform a future automation improvement or procedure improvement. If we just look at air transportation as an example, this automation was introduced over decades, really, and so I think that's maybe one of the misconceptions, that it's all or nothing. We can get to a robotic capability that is safe but maybe has some inefficiencies, or has certain situations it can't handle, where it stops and needs to get help from maybe a remote operator. We learn from those situations and we add in additional automation. Again, some of this automation may not even be onboard the robot; it may be across a network of robots communicating with each other. These types of capabilities, I think, will continue to enhance the effectiveness of robots.

Daniel Serfaty: So the example that Laura just gave us is maybe not mission-critical, but lives are at stake when people are flying if you misdirect them. There are situations that people may not think of as dangerous, but that can become dangerous because of the introduction of robots, perhaps. Julie, you've worked a lot on understanding what happens when I press the Buy Now or Order Now button on Amazon, the chain of events that eventually leads the package to show up on my doorstep the next morning, or situations in a manufacturing plant in which robots on the assembly line interact with humans. Can you pick one of those examples and do a similar thing? What do we have today, and what will we have once you're done working on it?

Julie Shah: Sure. Yeah. In manufacturing, maybe we can take the example of automotive manufacturing, building a car, because most of us probably think of that as a highly automated process. When we imagine a factory where a car is built, we imagine the big robots manipulating the car, building it up. But actually, in many cases, much of the work is still done manually. About half the factory footprint and half the build schedule is still people, mostly doing the final assembly of the car: the challenging work of installing cabling and insulation, very dexterous work.

So the question is, why don't we have robots in that part of the work? Up until very recently, you needed to be able to cage and structure the task for a robot, separate the robot from the person, and put a physical cage around it for safety, because these are dangerous, fast-moving robots. They don't sense people. And honestly, it's hard, and a lot of it is manual work. Same thing with building large commercial airplanes. There are little pieces of work that could be done by a robot today, but it's impractical to carve out those little pieces, take them out, structure them, and then cage a robot in to do them. It's just easier to let a person step a little bit to the right and do that task.

But what's been the game changer over the last few years is the introduction of a new type of robot, the collaborative robot. It's a robot that you can work right alongside, without a cage, relatively safely. If it bumps into you, it's not going to permanently harm you in any way. And so what that means is that these systems can now be elbow-to-elbow with people on the assembly line. This is a very fast-growing segment of the industrial robotics ecosystem. But what folks noticed, including us as we began working to deploy these robots a number of years ago, is that just because you have a system that's safe enough to work with people doesn't mean it's smart enough to get the work done and add value, to increase productivity.

And so, just as a concrete example, think of a mobile robot maneuvering around a human associate assembling a part of a car, and the person steps out of their normal work position just to talk to someone else for a few moments. The robot that's moving around just stops. It stops and waits until there's a space in front of it so it can continue on to the other side of the line. But everything is on a schedule. So you delay that robot by 10 seconds, the whole line needs to stop because the robot didn't get to where it needed to be, and you have a really big problem.

So there are two key parts of this. One is making these [inaudible 00:31:52] systems smart enough to work with people: seeing people as more than obstacles, as entities with intent, being able to model where they'll be and why. A key part of that is modeling people's priorities and preferences in doing work. And another part is making the robots predictable to a person. So the robot can beep to tell people they need to move out of the way. Well, actually, sometimes people won't, unless they understand the implication of not doing that. So it can be a more complex challenge than you might initially think as well.

So the key here is not just to make systems that are safe enough. The way this translates to the real world is that we increasingly have systems that are getting towards safe enough to maneuver around people. There are still mishaps, like security guard robots that make contact with a person when they shouldn't, and that's very problematic. But we're moving towards a phase in which these robots can be safe enough; yet making them safe enough does not mean they're smart enough to add value and to integrate without causing more disruption than benefit. That's the leading edge of what we're doing in manufacturing, and some of that can very well translate as these robots escape the factory.