MINDWORKS

Mini: Is your AI teammate just your AI replacement? (Jared Freeman and Adam Fouse)

March 02, 2021 Daniel Serfaty

As Artificial Intelligence (AI) matures out of its infancy, is it going to cause a mass displacement of human jobs? Is this sentiment just Luddite fear-mongering, or is it something we should be concerned about? Luckily, MINDWORKS host Daniel Serfaty knows just who to ask: Dr. Jared Freeman and Dr. Adam Fouse!


Listen to the entire interview in The Magic of Teams Part 4: Human-AI Teams with Jared Freeman and Adam Fouse.

Daniel Serfaty: So we know, and a lot of studies have shown, that automation and robots and AI are going to displace a lot of jobs over the next 10 years. Some people even think that 25 to 30% of the jobs on the planet are going to be disrupted by the introduction of those technologies. In some areas we're already seeing that. So why is it still important to understand human performance, if humans are going to be replaced, perhaps taken over, by all these technologies? Or is there something else there? Should we go even deeper? Jared, what do you think?

Jared Freeman: So I think there's a fallacy lying in there. Every introduction of a new technology begins by changing a job or eliminating one, speeding up the tasks, making task work more accurate. But the really significant technologies, right? Think of the weapons of war, think of medical discovery and invention. They change everything, they change the tasks, they change the tactics by which we string tasks together. They change the strategies by which we choose which missions we're going to execute. So we have to think larger than simply displacing individual jobs. We have to be able to look at an incredibly dynamic future in which there are new jobs, new missions, and figure out what we can do with that to enrich human life.

Daniel Serfaty: So you're saying that it's not really about elimination, but rather transformation?

Jared Freeman: Yes.

Daniel Serfaty: And that transformation forces us to double down, in a sense, on our understanding of human cognition and human performance, because it's going to be transformed by those very technologies that we introduce. It's almost like the introduction of the car, for example, or the introduction of the airplane: it eliminated a bunch of things, but primarily created new jobs and transformed old jobs into something else. Do you agree, Adam?

Adam Fouse: Completely. I think that it's not so much about displacement as it is about transformation. And going along with that transformation, understanding human performance, and understanding it in the context of technology, is really important to help us avoid bad things and make the good things better. There's potential when you introduce technology for all sorts of negative unforeseen consequences, and we want to make sure we avoid those. But there's also potential for really amazing, great things to happen. And we can't do either of those things, we can't avoid the bad things or ensure the good things, if we don't understand what this transformation does to what humans are able to do when new forms of technology are introduced.

Daniel Serfaty: Yes. And we are already witnessing several of those examples today; we will explore them in a little while. The job of a radiologist, for example, has changed with the introduction of Artificial Intelligence that can interpret MRI or ultrasound pictures. We are not eliminating the job of the radiologist; it's just that radiologists have to adapt to that new reality. Hopefully, as a result of that, patients will get better service. So let me back up for a second, because I think our audience deserves an explanation. Humans have been working with computers for a while. Adam, you said that as a young kid you were already banging on a Macintosh or something like that. Would you explain two things for me? First, what is human-computer interaction engineering? What is that, and what does it consist of? Then the second question I'd like both of you to explore is: isn't AI just a special case of this? Just yet another computer, another technology with which we have to interact?

Or is there something different about it? Maybe, Adam, you can start by telling us a little bit about what a human-computer interaction engineer does for a living. And then maybe the two of you can explore whether AI is exceptional.

Adam Fouse: Sure. A human-computer interaction engineer fundamentally looks at what the interaction between a person and a machine should look like. And "computer" broadly construed, because these days a computer isn't just a box sitting on a desk; it's a tiny little box in your pocket, or it's a refrigerator. Fundamentally, we want to ask: what are ways to make that interaction work well? What are ways that can let us do things we weren't able to do before? Part of that is user interface design: what should buttons look like? How should things be laid out? But part of that is also: what different types of interaction might you have?

And so a great example of that is the touch-based devices that we're all familiar with these days, smartphones and tablets and things like that. Years before those came out, say five years, a decade before, human-computer interaction engineers were building big touch tables to study how you might do things with touch. The idea of pinch-to-zoom is something that showed up in that research field years before you saw it happen on an iPhone. And that's trying to understand and invent the future of what those interactions look like.

Daniel Serfaty: And we know what happens when this is not well designed. When that interaction is left to random acts or improvisation, we witness many accidents, many industrial accidents, sometimes fatal accidents, that happen when that interaction is not well engineered. But Jared, building on this notion that there is a discipline that deals with how we use the computer as a tool, how we engineer the interface so the intent of the human is well understood by the machine and the machine's actions are well understood by the human: when we introduce AI, Artificial Intelligence, into that machine, is there a paradigm shift here, or just an extension?

Jared Freeman: I think there's a paradigm shift. I think that AI is qualitatively different from most computing systems, and certainly most mechanical systems, that we use now. And it's different in two ways. First, it's enormously more complex internally; and second, it has the potential, which is sometimes realized, to change over time, to learn new behaviors from its experience. So this has a couple of effects. It makes it harder for humans to predict how AI will react in a given circumstance, because we don't understand the AI's reasoning. Often we don't even understand its perception. Additionally, current AI, at least, is quite fragile. We have all seen the examples of putting a piece of duct tape onto a stop sign, and suddenly the AI can't identify what that object is. Yet in human teams, we put immense value on the reliability of our teammates: that they be competent, right? That they not fail when the stop sign has a piece of duct tape on it, and that their behavior be fairly predictable.

These are exactly the attributes in which AI is weak, and yet there's huge potential there, right? Such that it's really in a different domain from classic computing systems.

Daniel Serfaty: We'll explore that notion of trust and reliance on a teammate, and certainly the metaphor, sometimes disputed in our field, of the AI as a teammate. And primarily because I want to explore this notion that Artificial Intelligence, unlike other machines that may be very fast or can accomplish many tasks, can actually learn, learn from working with a human, and change as a result. Adam, can you tell me about a recent project in which you are trying to do exactly what Jared is describing, combining human expertise with Artificial Intelligence's computational power? And any insight you could provide to our audience about what you're learning so far: what is hard in that new engineering, in a sense?

Adam Fouse: An interesting example is a project that I get to be a part of. It's a project for DARPA, the agency that we mentioned earlier, that is looking at trying to identify software vulnerabilities. This is the idea that we've all seen all sorts of hacks that happen out in the world: software where there's some weakness, where you have to update your phone once a week to patch some vulnerability that was found. And it's a really hard thing to do, to find where those vulnerabilities exist. So what we're trying to do is ask: are there ways that we can bring together AI-based methods to look for these vulnerabilities with both human experts who know a lot about this, and also human novices, who may know something about computer programming or about different aspects of technology, but aren't hackers?

Daniel Serfaty: Those human experts are what? What kind of expertise do they have? They [inaudible 00:20:27] or what do they do?

Adam Fouse: They spend their lives looking for software vulnerabilities. Major companies like Apple or Microsoft will hire companies and say, find vulnerabilities in our software. And so they're people who know things like: here are common ways that a software engineer might introduce a flaw in a system, and we can look for places where those might occur. Or: here's a recently discovered vulnerability, and we can look for variations on it. So their job is to keep track of what the common vulnerabilities are, what new ones have appeared, and, from a very practitioner perspective, what tools they can use to look for these things. How can they look at things like computer memory addresses and say, is there something here that might let me cause some trouble? So what we're trying to say is: that process I just described is very time consuming for those people. It's also a very specialized skill that isn't widely distributed. And so as more and more software gets created, less and less of that software has that process applied to it. And so we need to be able to scale that.

Daniel Serfaty: At that point, can the AI help, or can the AI replace that need for very fast reaction or very complex insight into those systems?

Adam Fouse: A good way to answer that is to say that at one point, people thought the AI could replace them. And so there was another project by DARPA that funded an effort to do exactly that: to create fully automated systems to find software vulnerabilities. And that was semi-successful. They were able to create tools to find this kind of thing, but they definitely didn't reach the point where they could replace what the humans do. But the interesting insight they had in the process of creating those systems, which were supposed to be hands-off, came from watching what those systems were doing. They realized that if they could just get people in there to help guide things, to provide a little bit of initial guidance to cut off paths that weren't going to be fruitful, the systems could be much, much more effective.

In this project we're trying to figure out how we can do that. How can we incorporate this insight from experts? How can we find places where someone has less experience, but has human insight, and might be able to look at something? Say, looking at some image output and seeing whether it looks correct or not, which might be hard for an automated system to do without the right context, but which a human will pick up pretty quickly. And so one of the nice questions from this is: how can we design this human-AI system to bring in input from multiple different sources, multiple different people who have different skill levels, multiple different Artificial Intelligence systems that might have different strengths and weaknesses, and bring that all together?

Daniel Serfaty: Thank you, Adam. Jared, this is an example, I think, of what you said earlier, whereby the job of that cyber defender, that vulnerability specialist, has changed: from tediously looking at enormous systems for small vulnerabilities, to being a guide who lets the AI look at those tedious, extremely complex systems and guides the AI here and there. So that's a job transformation, isn't it?

Jared Freeman: I think it is. And in some sense, that work is quite similar to work that we're conducting on the test and evaluation team for another program, in which the goal is to build AI that detects deep fakes. A deep fake is a piece of text, a photo, a video, a piece of audio, or a blend of all of them, that is either generated by AI or perhaps elegantly edited with machine help. In the worst case, these may be designed to influence an election or a stock market decision. And so you can look at this challenge in SemaFor in two ways. One is simply building machinery which throws up a stoplight: this is a fake, this is not a fake, this is legitimate, right?

Or you can look at it as changing the task, right? Finding some way, as Adam said, to prioritize for an overworked analyst what to look at, but more deeply, giving them the opportunity to infer the intent of those fakes. Maybe even aiding them in making that inference about whether this fake is evil, or comical from The Onion, or an accident. This is the deep qualitative issue that analysts don't have the time or energy to address. And it's what the AI will give them time to do. And the AI must also help with that.

Daniel Serfaty: Well, I hope they will succeed, because certainly society, with this extraordinary amount of information we are asked to absorb today, doesn't let the user or consumer of that information easily distinguish between what is real and what is fake.