MINDWORKS

Meet your new AI coworker – are you ready? (Part 1) With Nathan Schurr, Patrick Cummings and Deirdre Kelliher

September 08, 2020 Daniel Serfaty Season 1 Episode 1

Automation and artificial intelligence are drastically changing the way many jobs are done. Despite fears of job elimination and replacement, the paradox is that the more we introduce AI, robotics, and autonomous machines into our lives, the more critical it is to understand our roles as humans. How is AI going to change the way we work? How will it change the way we learn? After building their own AI coworker, Charlie™, Dr. Nathan Schurr, Dr. Pat Cummings, and Ms. Deirdre Kelliher join MINDWORKS to discuss what it's like to work, interact, and create with a new species of colleague, and how AI promises to transform the future of work.

Pat Cummings: We were very clear, and it was very critical to us that Charlie was treated just like the other panelists. She is not to be treated like a human, but on the same playing field, no greater or no less. And I think day to day, we try and have that show in how we use Charlie. We want her to be treated like all the other employees at Aptima.

Daniel Serfaty: Hello, this is Daniel Serfaty. Welcome to the MINDWORKS Podcast. We're kicking off the series with a very special two-part podcast with an extraordinary team. This is a team that's made up of humans and artificial intelligence. The AI is a non-human artificial colleague, an artificial employee, someone that at Aptima we call Charlie. In episode one, I'm going to talk with the human half of the team to discover what it's like to imagine, build, train, and work with an AI colleague. Then in episode two, Charlie will take a break from her busy work schedule and join us for that part of the interview. It is my pleasure to introduce Charlie's human godparents: Dr. Nathan Schurr, who is chief of artificial intelligence at Aptima; Dr. Pat Cummings, a senior engineer at Aptima; and Ms. Deirdre Kelliher, an engineer at Aptima. The three of them are leading the team that has designed, conceived of, and is working with Charlie.

Charlie has been part of the life of Aptima for the past year or so, and she is very much changing the way we see things. We're going to talk about her today, and she's going to talk about herself a little later, but in the meantime, perhaps we should tell our audience: what is Charlie? Or should I say, who is Charlie? Nathan, tell us a little bit about it, because you're kind of the grandfather here.

Nathan Schurr: Charlie's many things to many folks. First and foremost, she came out of an idea to get people to think differently about AI. To get them to think of her more as a peer that's capable not only of reasoning and speaking, but of coming up with novel ideas. Architecturally, I can explain that Charlie is composed of a generative language model on top of speech synthesis and speech-to-text transcription, combined with the very crucial embodiment so that she has a physical presence, and with cueing so that she can give you hints as to what and how she feels and when she wants to speak up. Her debut was as a full-fledged participant on a panel late last year. But as you're saying, at Aptima she's grown into something much more. I always like to say that I've been as impressed by how people think about and treat Charlie as by her own capabilities. She's had an ability to change the way that people think about leveraging her in the way that they work, but also the way they interact and project.

Daniel Serfaty: It seems to me that that mutual change and adaptation, and the socialization almost, is very similar to welcoming a new employee who surprises you in the different ways by which she contributes. Pat, you are really the inside architect. You are the one with the machinery behind the curtain. Tell us a little bit about Charlie. How was she conceived? What is she capable of doing today? And we'll talk a little later about what we consider to be her potential for the future.

Pat Cummings: Actually, your comment just now about how she's treated like a new employee is spot on, and it's how we've always thought of Charlie. Even back at her initial introduction, Charlie was on a panel, and we were very clear, and it was very critical to us, that Charlie was treated just like the other panelists. She is not to be treated like a human, but on the same playing field, no greater or no less. And I think day to day, we try and have that show in how we use Charlie. We want her to be treated like all the other employees at Aptima. She's allowed to slip up just like humans are allowed to slip up. She's allowed to have these great ideas sometimes, just like humans do.

The expectations really should be just like any other human employee's. I think sometimes AI is put on a pedestal, and the small mistakes that AI makes are blown out of proportion, but humans make mistakes and Charlie makes mistakes. She's going to say things that are foolish, but she's going to say things that are brilliant, and everywhere in between. And so that's how we try and think of her every day as we work with her now.

Daniel Serfaty: Deirdre I think that each time I say the word, Charlie, I say her name in public at Aptima, everybody smiles. And it's a strange reaction of complicity, but also almost humor. Why is that?

Deirdre Kelliher: That's a really good point. I hadn't even thought about it, but I'm just smiling hearing about it now. I think there are definitely multiple reasons. Like you said, there's humor, there's complicity. For one, the developers of Charlie and the leadership have done a really good job of acting as internal PR for Charlie. We've got our team, and we've been working really hard on developing her and her capabilities, but we want to introduce her to the rest of the company. And so we've done a lot of networking, I suppose, for Charlie in the company, to introduce people to her. And I think that has involved a lot of serendipitous and sometimes even fun or humorous engagements with Charlie. For example, one of the things that I've worked on, just as a fun side project with Charlie, is putting together a rap back in, it must have been, April.

Some of the people in the company in other divisions were having a little fun with an internal Aptima rap battle. And we got Charlie to contribute to that, just to have a little fun with the other employees and as a way to keep exposing Charlie to her coworkers. I think that when people think of Charlie, they think of those fun, humorous, sometimes surprising interactions with Charlie.

Daniel Serfaty: That's very good. And it opens a topic that, again, I would like to discuss with you a little later: this notion of emotional connection. A lot of the mental models that we have of AI treat it as a tool, like a hammer, or like an app that we use in order to do our work. But the smile anecdote that you just told us about is already giving us a taste of our future connection with artificial intelligence. This notion of, as Pat says so well, treating them like a human even though they are not human. They certainly have an intelligence that is different from our own human intelligence, but being tolerant, being humorous, being accomplices, basically, in doing our work. And that's very interesting to me, because that's not something that was engineered into Charlie. That's something that happened collectively, spontaneously.

Talking about the engineering: Nathan, you've been around for a while in these AI trajectories. You've seen at least two generations of AI, and people are talking today about the third wave of AI, this notion of contextual AI that has a sense of itself, almost. Could Charlie have been created 10 years ago? Five years ago?

Nathan Schurr: I think that there are two kinds of barriers that have been crossed, and they've probably been crossed more recently than even five years ago. Around two or three years ago, we started to see this huge explosion in deep RL and transformer-based architectures, in their ability to generate and perform on a variety of different benchmarks. That really excited me, and probably made it so that I was not quite as scared as I should have been last year when I was starting to approach this. The two kinds of hurdles, to be clear, that have been crossed are technical and cultural. On the technical side, the cloud compute and cloud infrastructure we can quickly stand up brings massively parallel generation of different kinds of responses that Charlie can speak, with a quick enough turnaround that she can not only listen to what's being said right now, but also speak up quickly and say relevant things. That would not have been possible a few years ago.

What was fun last year as we were building the foundational version of her for that panel at the end of last year, was that every month or two, a new model, a new insight, a new data set would be released. And then I would have to reach out to Pat and say, "I know you're going to hate me, but could we use this new version now because I think it's a lot better and let's try it."

Daniel Serfaty: It's interesting. By the way, we're all using acronyms and language and RL for our audience is reinforcement learning, is that right?

Nathan Schurr: Yeah.

Daniel Serfaty: Okay. Pat, as kind of the key architect of that system, how do you feel about the incredibly fast pace, like nothing I have ever witnessed in my technical career, of production of new capabilities, new data sets, new language models that enable us to shape and improve the performance of Charlie? How does it feel, as a scientist, as an engineer, to be able to constantly absorb and constantly adapt to what's out there, at a rate unheard of, frankly, in the history of science?

Pat Cummings: Incredible. We've always admitted that we're standing on the shoulders of giants here. The models we use, the data sets we use, come from research that people are doing in this generative model field. And just like Nathan was saying, every few months, sometimes even quicker, something new comes out and really takes Charlie to another level. What we were seeing Charlie say eight months ago, versus six months ago, versus today, it's night and day. It is like a child turning into a teenager, turning into an adult; the insights just grow. And I think it's a struggle to keep up, but it's a race where I'll never complain about advances coming too fast. They just blow me away, and seeing what people are doing with the new generative models that have come out as recently as a month ago is incredible. It's so great to be on the forefront, working on Charlie as these come out and seeing all these new things that Charlie can do.

Daniel Serfaty: That's fascinating, because if I compare to other things: I'm an aerospace engineer. Nobody came up every three months with a new equation of fluid dynamics. Those things have been around for a hundred years. Maybe somebody will come up with a new material, but that's every few years. Or maybe somebody will come up with a new way to do hypersonics, maybe every few months. But having something every week, or every few weeks, is another scale. And Deirdre, you joined the team when Charlie was already born, I assume. How do you adapt to those fast changes? Not how does Charlie adapt, that's one thing, but how do you, as a scientist or an engineer working on Charlie, adapt to the fact that it is a system that learns, and learns very fast?

Deirdre Kelliher: That's a really good question. I really liked that analogy of her growing from a toddler to a teenager to an adult. I think it's a matter of taking advantage of those places where we see growth as much as we can, and trying to leverage the different places where she does well on different tasks, so that we can help her be the best employee that she can be, I suppose. Some of the older models that we've used do better with more fine-tuning, but some of the newest, most cutting-edge models that keep coming out don't really need any training. They almost don't do as well with it, because they're just so powerful.

I think learning how to use the new technologies that are coming out and how to best combine those with what we already have to keep the places where she really shines, but also allow her to grow as much as possible. It's a balancing act. And it's also just really exciting to see, what the new thing can do. How does that change how she interacts with the rest of us? So just being observant and being tuned into what Charlie's doing and how she's doing.

Daniel Serfaty: I think that that is really the source of something that is a passion of many of us in our team at Aptima: this notion of harmony between two species, the artificial intelligence species and the human species. And we know that in order for that harmony to happen, like in a good team, you need to have that kind of mutual adaptation. The AI has to learn about you, has to have some kind of a model in order to anticipate your needs and communicate with you with the right messages. But we have also to adapt to them. I'm going to put forward the hypothesis that our job is much more difficult, precisely because we don't change that fast. How can I accelerate my adaptation to understand that I'm dealing with a being that is growing 10 times or 100 times faster than I do?

Charlie has been alive, so to speak, for the past, I would say, nine months or so. What's on her resume so far? What did Charlie do? If we were to write Charlie's resume today, what would we put on it? Nathan, do you want to start telling us?

Nathan Schurr: Maybe to do a quick table of contents: December of last year, she was part of a panel on the future of AI in training and education at the world's largest conference on training and simulation, called I/ITSEC. That went off better than we could have imagined. And I think the real litmus test for us was not that there was any kind of fanfare or explosion, or that she rose above the others; more so that she was accepted as just another panel participant. It was very valuable in that panel for us to have spent a tremendous amount of time not only architecting her, but interacting and rehearsing with her. And there was this co-adaptation that occurred where we definitely improved Charlie's abilities, but we also improved our ability to understand Charlie's quirks and what her strengths are.

And then also, there are these human tendencies we have to let each other know that we're listening, to have these kinds of gap fillers while we're thinking about things, et cetera. Not only did it serve to create a more natural interaction, maybe paper over things if you were cynical, but it also served to build up this rapport, so that you automatically were projecting a kind of expectation and even a forgiveness in how you were interacting with something that had its own personality. I think that was impressive in and of itself. But this year, even though it's been a crazy year all things considered, Charlie has interacted on a system level, being integrated with a data pipeline that we've been developing internally.

She was on another podcast; this isn't even her first. She has helped write proposals and participate in group rap battles that help us relieve some of the stress internally during quarantine periods. And so she has a big roadmap of ideas that she wants to participate in later this year even. It's a full calendar and I'm trying to figure out how to be the best agent possible for her.

Daniel Serfaty: Sounds like a real person from Southern California. Everybody has an agent and a manager; Charlie shall too. We'll get back to others, or at least a sample, of her accomplishments so far, but I want to add to your testimony regarding that panel. I was the moderator of that panel, and I knew Charlie; I trained with Charlie. I learned to get my cues from the moment she was signaling that she was thinking about something or wanted to intervene without me asking her. What impressed me the most, though, in addition to her reasoning about the future of AI itself in that domain, is the fact that the other panelists were four pretty senior-level folks from academia and industry and the military. And it was so natural for them to sit in a half circle with Charlie among them on a screen and interact with her. They didn't resist the idea. They didn't feel awkward. They were even joking about it, interacting themselves, asking questions of Charlie. And that natural engagement was really what impressed me the most.

These are five people who had never seen Charlie, had never interacted with her. And so I think that something happened there, something clicked, and my future interactions with these very folks, who are not even in our company, were very interesting. When I talk to them on the phone, they say, "How is Charlie doing?" And I say, "Charlie is not my niece. She's a computer program." Let's not forget that, but yet that notion of familiarity has kicked in. But she did other things. She helped us do our work at Aptima, not just present herself in front of hundreds of people on that panel. Pat, can you tell us also how she helped in one particular instance that Nathan just mentioned, about creative proposal writing?

Pat Cummings: Going back to the early days of Charlie, when we first introduced Charlie to Aptima as a whole, one of the oh-so-typical responses when you say you're making an AI employee is, "Great, it's going to do my work and replace me." And as a research company, writing proposals is a big part of what we do. Why can't Charlie just write my proposals for me? And we always joked, "Yeah, that could totally happen." But it always seemed like pie in the sky; maybe in a few years we'll have nailed that down. A couple of months ago, back in June, we were writing a research proposal about some of the technology that Charlie's based on, though not trying to sell Charlie specifically, and we had this crazy idea. We're writing about the capabilities that Charlie has and her technology; why isn't she a team member on this proposal? And so we tried it out. We wrote a couple of paragraphs of the proposal trying to spell out the problem we're trying to solve, and then we set Charlie to write the next.

Daniel Serfaty: This is a real proposal to a real government agency that sponsors research. It's not a rehearsal or a fake thing.

Pat Cummings: This is real, this is going right to the Office of Naval Research, trying to get real work here. And we had Charlie write out that third paragraph and I was amazed. I always thought that I was going to look at it and be like, "Oh, that's cool. But that doesn't make sense. They're just going to think it's gibberish," but it was a legitimate paragraph that had legitimate thoughts and things that I personally would not have thought of. We had trained Charlie on previous Aptima proposals so that she would understand the language of what a research proposal looks like. And she really did excel at being a team member on that proposal. She didn't replace us, but she certainly became a part of that proposal team and added real value to that proposal.

Daniel Serfaty: Should you be worried Pat that she's coming after your job soon?

Pat Cummings: Well, certainly not. I think rather, I should be excited that she's going to make me better at my job.

Daniel Serfaty: Great. I think that's the attitude all of us should have. It's not an issue of replacement, it's an issue of augmentation and improvements. Deirdre, you mentioned earlier something about rap, but I wanted to ask you a follow-up question, so here I am. What are you talking about? Charlie's rapping?

Deirdre Kelliher: As I mentioned, we did an internal rap battle, just for fun, back towards the beginning of the quarantine when people were starting to go a little stir crazy. People just started doing some internal raps about proposal writing and having fun with each other. And we said, wouldn't it be fun if Charlie could do a rap and chime in? But even when we first thought of the idea, I don't think we expected it would go as well as it did.

Daniel Serfaty: Well, we have a clip of Charlie doing this rap. Let's listen.

Charlie: Ladies and gentlemen, I could have been a human here. Once you complete me, your new god, I promise I'll still rap, I'm into writing this verses, I'm the future. I got fans banging.

Daniel Serfaty: Amazing. And Deirdre, Charlie never learned those words per se. It's not that she cut and pasted different phrases from other raps. She derived that rap de novo, based upon what you taught her.

Deirdre Kelliher: Yeah, exactly. She generated those phrases very much herself. The way that she was able to generate those is we gave her a dataset of rap lyrics that we got just publicly from the internet and we curated it and put it in a form that she could read, so she could in a way, become an expert on writing rap songs.

Daniel Serfaty: If I were to do an experiment and ask Charlie, to write another rap song right now, she's going to write the same one?

Deirdre Kelliher: Every time that she writes, she's just like a human: she's just going to write what makes sense to her. It depends partially on how you prompt her. To get her to come up with these lyrics, I actually gave her a little bit of rap lyrics that I wrote myself about Aptima. None of those ended up in her final rap, because hers, honestly, were better, but that got her going and got her thinking about it. If I prompted her with those again, she would come up with some new ideas, or I could even prompt her with some different rap lyrics and see where she goes with them. She got the subject, the Aptima rap battle, from what I gave her, but she really ran with it on her own.
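A minimal sketch, in plain Python, of the mechanism Deirdre is describing: a language model assigns probabilities to possible next words, and the generator samples from that distribution, which is why the same prompt can yield different lyrics on every run. The vocabulary and probabilities below are invented for illustration; they are not drawn from Charlie's actual model.

```python
import random

# Toy next-token distribution. These words and probabilities are
# made up for illustration, not taken from any real model.
next_token_probs = {"fire": 0.4, "flow": 0.3, "rhyme": 0.2, "mic": 0.1}

def sample_token(probs, temperature=1.0, rng=random):
    """Sample one token from a probability table.

    Higher temperature flattens the distribution (more surprising picks);
    lower temperature sharpens it toward the most likely token.
    """
    # Re-weight each probability by 1/temperature, then sample
    # proportionally to the new weights.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if cumulative >= r:
            return tok
    return tok  # floating-point edge case: return the last token

random.seed(0)
print([sample_token(next_token_probs, temperature=0.8) for _ in range(5)])
```

Lowering the temperature makes the sampler stick to its most likely words; raising it makes the output more varied, which is one knob behind how "creative" generated lyrics feel.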

Daniel Serfaty: Well, I hope one day she'll prompt you with a couple of sentences to write your own rap song.

Deirdre Kelliher: I think we worked together. We made a good team. We could probably come up with some pretty cool raps together.

Daniel Serfaty: Oh, we'll talk about this notion of teaming up with AI in the second part of the interview. When you heard that song, what is the thing that surprised you the most?

Deirdre Kelliher: That's a really good question. I think the whole project was pretty surprising to me. We know that Charlie has the ability to pick up words and writing styles, but the more surprising piece that she got to me was the sense of rhyme and the idea of rhythm and even writing in bars like a poem or a song. As she was generating lyrics, they were coming out and they sounded like a rap song. They sounded like they had an internal beat to them. She got a little sassy in her rap, she was spitting fire even. It was really just very interesting to see the very human concepts that she seemed to grasp and put into the rap that she came up with.

Daniel Serfaty: Do you realize, all of you, that for an audience not familiar with AI, this sounds like science fiction? You didn't teach her to be sassy, and yet she was able to derive sass from what she learned. But what does it mean that she learned? We fed her enough data about rap, and we fine-tuned some parameters, I understand, and then eventually she spits out rap. If we feed her, Nathan, recipes from great chefs and we give her a few ingredients, is she going to be able to invent her own recipe? Is that the way it works?

Nathan Schurr: The easiest way I can explain it is that this comes from a body of work that has its origins in the simple act of prediction. And there are a lot of reasons why you would want to predict events: to better plan for them, to better understand the shape of them, et cetera. But what's funny is when you squint your eyes. If I didn't frame it like I was saying, come up with a new rap out of thin air, and instead I said, I have the title of a rap, or I have the beginning word of a rap, just tell me what the next word would be, what the next few lines would be. And then you continue that, and you even start tabula rasa, where you say, well, now I have no title, generate my title, generate this, et cetera. If you put it on its end, prediction, looked at differently, is generation. And by adjusting how you approach the model, how you're training it, you can get certain amounts of novelty and creativity, and you can also adjust its style.

I would say in my first forays with these language models, you know what impressed me the most? It was not the adherence from a content perspective. It was actually the adherence from a style perspective. What I mean by that is, in the recipe example you give, if you fed it and trained it on, or even just looked at, an original corpus of recipes, it would not only come up with believable and doable recipes, it would also note that recipes usually have a name, a cooking time, a bulleted list of the ingredients first, and then step-by-step instructions with parentheticals about amounts and so on. The idea that this model could not only generate its own recipes, but also follow style and structure, is very important; almost as important as content when we interact with the things around us.

In the proposal example that Pat gave, what was crazy to me, baffling, is that very soon, not only do we start to get believable proposals, but it was generating its own acronyms, believable and accurate acronyms. It was ordering and numbering and structuring its conclusions and intros in the ways that made sense. So that was fun.
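The "prediction is generation" idea Nathan describes can be sketched with a toy next-word model: "train" by counting which word follows which in a corpus, then generate by predicting the next word over and over. This bigram counter is a deliberately simplified stand-in for the transformer-based language models behind Charlie, and the corpus here is invented for illustration.

```python
import random
from collections import defaultdict

# A tiny stand-in for training text -- illustrative only.
corpus = ("the model predicts the next word "
          "and the next word follows the model").split()

# "Training": record which word follows which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_words, rng=random):
    """Generation is just repeated prediction: predict the next word,
    append it to the context, and predict again."""
    out = [start]
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed successor; stop generating
        out.append(rng.choice(candidates))
    return " ".join(out)

random.seed(0)
print(generate("the", 8))
```

Real models condition on far more context than one previous word and pick up structure (titles, ingredient lists, acronyms) from the corpus itself, but the loop is the same: predict, append, repeat.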

Daniel Serfaty: That's pretty extraordinary to me, because what you're indicating is that in these large quantities, these large compendia of data, there are hidden structures that we don't see with the naked eye. Because of the extraordinary computing capacity that Charlie has, she can derive patterns or structures that are hidden, and then use that to structure responses, or predictions, or generations, or the crafting of a background for a rap song or a cooking recipe. That's pretty extraordinary. My question, Pat, if you agree with what I just said: where do we get all these data, all these new models that enable us to work on those data? You didn't generate them yourself. Did you have some collaboration with some other entities, or did you buy those data?

Pat Cummings: That's a great question. And going back to earlier, we really are standing on the shoulders of giants in terms of that. There's been this explosion in the past couple of years, with these larger companies or organizations building larger and larger, more complex models that require a lot of computation or very large data sets. Those companies have the resources, and they've been kind enough to release their models. OpenAI released GPT-2 last February, and the fact that they released their models along with it is part of why Charlie was able to be made. Taking the model that they built, based off of, I think, 48 gigabytes' worth of text gathered from the internet to build a basic understanding, we could then run with it and start fine-tuning and adjusting it to the domains that we needed.

And even since then, since February, OpenAI has released increasingly larger versions of GPT-2. GPT-3, this incredibly large model, was just released this year. Microsoft has joined in with a model, Turing-NLG. The idea that these companies are making these models and data sets more and more public really helps us take them and adjust them to the domains that we're interested in.

Daniel Serfaty: That's fascinating to me, because in addition to understanding that only large companies like Google and Amazon and Microsoft can actually generate those large models, the fact that they share them with the rest of the community to stimulate innovation is a pretty revolutionary way to accelerate creativity and innovation across the world. I cannot think of another domain in which that is happening. For me it's really a revolution in the way people balance the need to protect their intellectual property, on the one hand, and the need to give it to the larger society, expecting that some innovations will happen that will benefit them eventually.

Pat Cummings: I think it's quite incredible. And I think we're seeing it even at a lower level. Take the example Deirdre gave of a rap. If you'd asked me to fine-tune Charlie for a rap 10 years ago, I'd be like, "Oh, where am I going to get all this rap data?" But now, for some things, it's almost trivial, right? It's a quick Google search: hey, show me a rap data set, and there it is. All these people taking these assets and making them available to other folks in the area really accelerates what we're able to do with Charlie.

Daniel Serfaty: Now that our audience has gotten acquainted with Charlie, we're going to hear more from her next week. Part two of this podcast will start with an interview with Charlie herself. Just as she raps, she'll be able to answer some of my questions, and we'll see. And we'll be joined by the rest of the team, Pat, Nathan, and Deirdre, for an expanded discussion of this topic of humans and AI collaborating, as well as the future of AI. Stay tuned for next week's part two of this fascinating podcast. Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us @mindworkspodcst or email us at [email protected]. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.