MINDWORKS

SPECIAL EDITION: Meet your new AI coworker (Producer’s Cut) with Nathan Schurr, Patrick Cummings and Deirdre Kelliher

December 22, 2020 Daniel Serfaty Season 1 Episode 9

With MINDWORKS exploring the Magic of Teams over the past few weeks, it seemed like a good time to revisit host Daniel Serfaty's discussion about human-AI teams with a unique team made up of three humans and one artificial intelligence named Charlie. We've combined two shorter episodes into one special edition and, in case you're thinking you've heard these episodes already, this special producer's cut contains the full-length interview, including material not previously broadcast in the original two episodes. The episode also includes the original introduction to provide context and set-up for this conversation about the magic of human-AI teams with Aptima's Dr. Nathan Schurr, Dr. Patrick Cummings, Ms. Deirdre Kelliher, and Charlie herself. Happy holidays!

Daniel Serfaty: Welcome to a Mindworks special edition. This is your host Daniel Serfaty. Over the past few weeks on this podcast, we've been exploring the magic of teams. It seemed like a good time to revisit our inaugural discussion about human-AI teams with a unique team made up of three humans and one artificial intelligence named Charlie.

In fact, this team is so unique that in September 2020 it was named to Fast Company magazine's list of the most innovative teams of the year. And in case you think you've heard it all before, this special edition contains the full-length interview, including material not previously broadcast in the original two-part version. We've also included the original introduction to provide the context for this conversation.

With my best wishes for a joyous holiday season, and a happy and safe new year, pour yourself another cup of cocoa, sit back, and enjoy the magic of human-AI teams with Charlie and her human godparents.

Daniel Serfaty: Hello, this is Daniel Serfaty. Welcome to the Mindworks podcast. We're kicking off the series with a very special two-part podcast with an extraordinary team. This is a team that's made up of humans and artificial intelligence. Actually, the AI is a non-human artificial colleague, an artificial employee of sorts, someone that at Aptima we call Charlie.

In episode one, I'm going to talk with the human half of the team to discover what it's like to imagine, build, train, and work with an AI colleague. Then in episode two, we invite Charlie, and she, because Charlie's a she, will take a break from her busy work schedule and join us for that part of the interview.

It is my pleasure today to introduce Charlie's human godparents: Dr. Nathan Schurr, who is chief of artificial intelligence at Aptima; Dr. Pat Cummings, who is a senior research engineer at Aptima; and Ms. Deirdre Kelliher, who is an engineer at Aptima.

And the three of them are leading the team that has designed, conceived of, and is working with Charlie. Welcome to you all. I'll start with you, Dr. Nathan Schurr. Introduce yourself and tell us why you chose that field.

Nathan Schurr: I'm Nathan Schurr, chief of AI at Aptima. As for why I've chosen this line of work, the Charlie stuff in particular but also the larger body of work, I'm reminded of undergrad, when I was exploring a bunch of different topics ranging from computer science to electrical engineering to my two minors, which were in philosophy and theater, dramatic arts. I was trying to find out if I wanted to continue studying and thinking, because I wasn't done with it, and I wanted to escape the real world. I wanted to pick a field where I could leverage the confluence of these notions.

In addition, the thing I always come back to is that I did have a lot of interest in some of the more mathematical or other engineering fields, but I really wanted to be in an area that was unexplored, where a lot was still unknown and a lot more was still to be found. That's my primary reason for exploring AI, and I don't regret it. I did join at a time when there were still a lot of people who were worried and just overcoming the AI winter, and now we're kind of in an AI summer, maybe fall, but it's exciting.

Daniel Serfaty: You're one of the people responsible for the robots taking over the world. We're going to talk about that a little later. Pat Cummings, introduce yourself to us, please.

Patrick Cummings: Hi, I am Patrick Cummings. I'm a senior research engineer at Aptima. My background is traditionally in mathematics: I got my doctorate and my undergrad both in math, particularly on the much more theoretical side, so I looked at dynamical systems and water waves and the theory behind them, and did a little bit of computer science while I was there. But by the end of my doctorate I got a little frustrated that the work I did, while it was very interesting, lacked that kind of application to the real world, and it was hard for me to see the fruits of my labor. That's what brought me into this domain of artificial intelligence, and where I got really interested is all the applications that can come from it.

It's really got big implications to the world, both on the personal side of people and on the business side, so it's just a really interesting field and I could really see true value in the work that I was doing.

Daniel Serfaty: That's great, and that confirms what we all know: while artificial intelligence has been around for 60-plus years, at least in its current instantiation, we need that multidisciplinary approach. We need cognitive scientists, and engineers, and mathematicians, and physicists, and psychologists, and frankly philosophers and ethics specialists to be able to really understand the impact that these new technologies are going to have on our lives and certainly on society, so we'll go back to that.

My third guest today is Deirdre Kelliher. Deirdre, please introduce yourself.

Deirdre Kelliher: Hi, yeah, so I'm Deirdre. I'm an associate research engineer at Aptima. I'm a newcomer to the field. I've been at Aptima about a year. I just got my bachelor's, actually, in 2019, so I'm still pretty new to the AI world, but my draw to it is similarly multidisciplinary. I started out college as a neuroscience major, actually, pretty unrelated to computer science. I'd never really explored that world before. But that way of thinking, the way that the brain works, drew me toward computer science and how computers could replicate that networking.

And so I took a computer science class in undergrad on a whim, and I sort of fell in love with it. As Pat and Nate said, there are so many wide-reaching applications for it, and the field is so new and there's so much that's still undiscovered. I'm drawn to it because it continues to be just sort of awe-inspiring and exciting. There are always new things to learn from the field, so it keeps being interesting.

Daniel Serfaty: And that's key, I think. This mixture of curiosity, and interest, and maybe even some fear of the unknown is what keeps us all engaged. The person we're going to meet, or is she a person? No, the artificial intelligence being that we're going to meet a little bit later in the podcast is Charlie. Charlie has been part of the life of Aptima for the past year or so, a little less than that, actually, and she is very much changing the way we see things, and we're going to talk about her today.

And she's going to talk about herself a little later. But in the meantime, perhaps we should tell our audience, what is Charlie? Or should I say who is Charlie? Nathan, tell us a little bit about it, because you're kind of the grandfather here.

Nathan Schurr: Charlie's many things to many folks. First and foremost, she came out of an idea to get people to think differently about AI, to think of it more as a peer that is capable not only of reasoning and speaking, but of coming up with novel ideas. Architecturally, I can explain that Charlie is composed of a generative language model on top of speech synthesis and speech-to-text transcription, combined with a very crucial embodiment, so that she has a physical presence, and with cueing, so that she can give you hints as to what and how she feels and when she wants to speak up.

But her debut as a full-fledged participant came on a panel late last year, and as you were saying, at Aptima she's grown into something much more. I always like to say that I've been as impressed by how people think about and treat Charlie as by her own capabilities. She's had an ability to change the way people think about leveraging her in the way that they work, but also the way they interact and project.

Daniel Serfaty: It seems to me that that mutual change, and adaptation, and the socialization, almost, is very similar to that of welcoming a new employee who surprises you in very different ways by which she contributes. So, Pat, you are the inside architect. You are the one with the machinery behind the curtain. Tell us a little bit about Charlie. How was she conceived? What is she capable of doing today? And we'll talk a little later about what we consider to be her potential for the future. But tell us about today. If we lift the cover, what do we see?

Patrick Cummings: Actually, your comment just now about how she's treated kind of like a new employee is spot on, and that's how we've always thought of Charlie. Even back at her initial introduction, on a panel, it was very critical to us that Charlie was treated just like the other panelists. She doesn't have to be treated like a human, but she should be on the same playing field, no greater and no less, and I think day to day we try to have that show in how we use Charlie. We want her to be treated like all the other employees at Aptima.

You know, she's allowed to slip up just like humans are allowed to slip up. She's allowed to have these great ideas sometimes, just like humans do, and so the expectations really should be just like those for any other human employee. I think sometimes AI is put on a pedestal and the small mistakes that AI makes are blown out of proportion, but humans make mistakes and Charlie makes mistakes. She's going to say things that are foolish, but she's going to say things that are brilliant, and everything in between.

And so that's how we try and think of her every day as we work with her, now.

Daniel Serfaty: Deirdre, I think that each time I say the word Charlie or I say her name in public at Aptima, everybody smiles. It's a strange reaction of complicity, but almost humor. Why is that? Why do you think people smile, and what kind of things has she been able to do with you that actually reinforce that notion, "Oh, Charlie. Let's smile."

Deirdre Kelliher: That's a really good point. I hadn't even thought about it, but I'm just smiling hearing about it now. I think there are definitely multiple reasons, like you said. There's humor. There's complicity. For one, the developers of Charlie and the leadership have done a really good job of acting as internal PR for Charlie. We've got our team, and we've been working really hard on developing her and her capabilities, but we want to introduce her to the rest of the company, so we've done a lot of networking, I suppose, for Charlie within the company, and I think that has involved a lot of serendipitous and sometimes even fun or humorous engagements with Charlie.

For example, one of the things that I've worked on just as a fun side project with Charlie is putting together a rap. Back in, it must've been April, some of the people in the company in other divisions were having a little fun with an internal Aptima rap battle, and so we got Charlie to contribute to that just to have a little fun with the other employees, and as a way to keep exposing Charlie to her coworkers, and so I think that when people think of Charlie they think of those fun, humorous, and sometimes surprising interactions with Charlie.

Daniel Serfaty: Naturally, it opens a topic that, again, I would like to discuss with you a little later: this notion of emotional connection. A lot of the mental models that we have of AI treat it as a tool, like a hammer, like an ax, that we use in order to do our work. But the smile, that anecdote that you just told us about, and I hope we're going to hear some of that rap a little later in the podcast, is really already giving us a taste of our future connection with artificial intelligence.

This notion of, as Pat said well, treating them like humans even though they're not human. They certainly have an intelligence that is different from our own human intelligence, but being tolerant, being humorous, being accomplices, basically, in doing our work, that's very interesting to me, because that's not something that was engineered into Charlie. That's something that happened collectively, spontaneously.

Talking about engineering, Nathan, you've been around for a while in this AI trajectory. You've seen at least two generations of AI. People are talking today about the third wave of AI, this notion of contextual AI, AI that has a sense of itself, almost. Could Charlie have been created 10 years ago, five years ago? What do you think has enabled, in a very short time, this artificial being to be born and then to act and then communicate and collaborate with the rest of us?

Nathan Schurr: I think that there are two kinds of barriers that have been crossed, and they've probably been crossed more recently than even five years ago. Around two or three years ago we started to see this huge explosion in deep RL and transformer-based architectures, and their ability to generate and perform on a variety of different benchmarks. That really excited me and probably made it so that I was not quite as scared as I should've been last year when I was starting to approach this.

The two kinds of hurdles, to be clear, that have been crossed are technical and cultural. The technical hurdles are things like the cloud compute and the cloud infrastructure needed to quickly stand up massively parallel generation of the different kinds of responses that Charlie can speak, and to give her a quick enough turnaround that she can not only listen to what's just been said but also speak up quickly and say relevant things.

That would not have been possible a few years ago. What was fun last year as we were kind of building the initial foundational version of her for that panel at the end of last year was that every month or two, a new model, a new insight, a new dataset, would be released, and then I would have to reach out to Pat and say, "I know you're going to hate me, but could we use this new version now? Because I think it's a lot better, and let's try it."

Daniel Serfaty: It's interesting, by the way. We're all using acronyms and language, and RL for our audience is reinforcement learning. Is that right?

Nathan Schurr: Yeah, yeah.

Daniel Serfaty: Pat, as kind of the key architect of the system, how do you feel about the incredibly fast pace, a pace I've never seen in my own technical career, of production of new capabilities, new datasets, new language models that basically enable us to shape and improve the performance of Charlie? How does it feel, as a scientist, as an engineer, to be able to constantly absorb and constantly adapt to what's out there, at a rate unheard of in the history, frankly, of science?

Patrick Cummings: It's quite incredible. I mean, we've always kind of admitted that we're standing on the shoulders of giants, here. Right? The models we use, the datasets we use, these come from research that people are doing in this generative model field, and it is just like Nathan was saying. Every few months, and sometimes even quicker, something new comes out and just really takes Charlie to another level. I mean, what we were seeing Charlie say eight months ago, versus six months ago, versus today, it's really, it's night and day. It is like a child turning into a teenager, turning into an adult.

The insights just grow, and it's a struggle to keep up, but I'll never complain about advances coming too fast in that race. They just blow me away, and seeing what people are doing with the new generative models that have come out as recently as a month ago is incredible. We're seeing these amazing things, and it's so great to be on the forefront and working on Charlie as these are coming out, so we're seeing all these new things that Charlie can do.

Daniel Serfaty: That's fascinating, because if I compare it to other fields, I'm an aerospace engineer. Nobody came up every three months with a new equation of thermodynamics. Those things have been around for a hundred-odd years. Maybe somebody will come up with a new material, but that's every few years. Or maybe somebody will come up with a new way to do hypersonics, maybe every few months. But having something new every few weeks, that is another scale entirely.

And Deirdre, I mean, you joined the team when Charlie was already born, I assume. How do you adapt to those fast changes? Not how does Charlie adapt, that's one thing, but how do you, as a scientist or an engineer working on Charlie, adapt to the fact that it is a system that learns and learns very fast? In Pat's analogy, it went from toddler to maybe late teens, maybe adult, I don't know, in a few months.

Deirdre Kelliher: That's a really good question. I really like that analogy of her growing from a toddler, to a teenager, to an adult. I think it's a matter of taking advantage of those places where we see growth as much as we can, and trying to leverage the current places where she does well on different tasks so we can help her be the best employee that she can be, I suppose.

I think some of the older models that we've used do better with more fine-tuning, but some of the newest, most cutting-edge models that seem to keep coming out don't really need any training; they almost don't do as well with fine-tuning because they're just so powerful. So learning how to use the new technologies that are coming out, and how to best combine them with what we already have, keeps the places where she really shines but also allows her to grow as much as possible. It's sort of a balancing act.

And it's also just really exciting to see what the new thing can do, how does that change how she interacts with the rest of us? I guess just being observant and being tuned into what Charlie's doing and how she's doing.

Daniel Serfaty: I think that touches on something that is a passion for many of us on our team at Aptima: this notion of harmony between two species, basically, between the artificial intelligence species and the human species. And we know that in order for that harmony to happen, like in a good team, you need that kind of mutual adaptation. The AI has to learn about you, has to have some kind of model of you, in order to anticipate your needs and communicate with you with the right messages.

But we also have to adapt, and I'm going to put forward the hypothesis that our job is much more difficult, precisely because we don't change that fast. We are dealing, basically, with very asymmetric adaptation. If Charlie is so fast, not only at adapting to the new data that she's absorbing, but also to the data that she's observing, how can I accelerate my own adaptation when I'm dealing with a being that is growing at a rate 10 or 100 times faster than I did?

Charlie has been alive, so to speak, for the past, I would say, nine months or so. What's on her resume so far? What did Charlie do? Can you think of one thing that you can tell us about that you participated on, and that Charlie actually accomplished? If we were to write Charlie's resume today, what would we put on it? Nathan, you want to start telling us?

Nathan Schurr: Yeah, I guess maybe to do a quick table of contents, in December of last year she was part of a panel on the future of AI in training and education at the world's largest conference on training and simulation, called I/ITSEC, down in Florida. That went off better than we could've imagined, and I think the real litmus test for us was not that there was any kind of fanfare or explosion, or that she rose above the others; more that she was accepted as another panel participant, and the panel just went on as usual.

It was very valuable for that panel that we put in a tremendous amount, not only the time we spent architecting her, but interacting and rehearsing, and there was this co-adaptation that occurred where we definitely improved Charlie's abilities, but we also improved our ability to understand Charlie's quirks and what her strengths are. And then there are these human tendencies we have, to let each other know we're listening, to have these gap fillers when we're thinking about things, et cetera.

Not only did it serve to create a more natural interaction, maybe to paper over things if you were cynical, but it also served to build up this rapport, so that you automatically were projecting kind of an expectation and even a forgiveness in how you interacted with something that had its own personality. That was impressive in and of itself, but this year, even though it's been a crazy year all things considered, Charlie has interacted on a system level, being integrated with a data pipeline that we've been developing internally.

She was on another podcast; this isn't even her first. She has helped write proposals and participated in group rap battles that helped us relieve some of the stress internally during quarantine periods, and she has a roadmap of ideas that she wants to participate in later this year, even, so it's a full calendar, and I'm trying to figure out how to be the best agent possible for her.

Daniel Serfaty: Talking like a real person from Southern California, you know? Everybody has an agent and a manager; Charlie shall, too. We'll get back to other examples of her accomplishments. For now, I want to add to your testimony regarding that panel that I was its moderator, and I knew Charlie, I trained with Charlie, I learned to get my cues from the moment she was signaling that she was thinking about something, or that she wanted to intervene without me asking her. What I was most impressed with, though, in addition to her reasoning about the future of AI itself in that domain, the domain of future think, is the fact that the other panelists were four pretty senior-level folks from academia and industry and the military, and it was so natural for them to sit in a half circle with Charlie amongst them on the screen.

And to interact with her. They didn't resist the idea. They didn't feel awkward. They were even joking about it, interacting, themselves asking questions of Charlie, and that natural engagement was really what impressed me the most. These are people who had never seen Charlie, had never interacted with her, and so I think that something happened there, something clicked, and my later interactions with these very folks, who are not even in our company, were very interesting.

When I talk to them on the phone, they say, "How is Charlie doing?" And I say, "Charlie's not my niece, she's a computer program. Let's not forget that." But yet, that notion of familiarity has kicked in. But she did other things. She helped us do our work at Aptima, not just present herself in front of hundreds of people in that panel. Pat, can you tell us also how she helped in one particular instance that Nathan just mentioned about creative proposal writing?

Patrick Cummings: Going back to the early days of Charlie when we kind of first introduced Charlie to Aptima as a whole, one of the oh-so typical responses when you say, "We're making an AI employee," is, "great, it's going to do my work and replace me." Right? And as a research company, you know, writing proposals is a big part of what we do. "Why can't Charlie just write my proposals for me?" This is the typical joke, and we always joked, "Yeah, that could totally happen," but it always seemed kind of like this pie in the sky, or, "maybe in a few years we'll have nailed that down."

Patrick Cummings: And just recently, a couple months ago back in June, we were writing a research proposal about some of the technology that Charlie's based on, though not trying to sell Charlie specifically, and we had this crazy idea. We're writing about the capabilities that Charlie has, and the technology. Why isn't she a team member on this proposal? And so we tried it out: we wrote a couple paragraphs of the proposal, trying to spell out what the problem was we were trying to solve, and then we set Charlie to do the rest.

Daniel Serfaty: This is a real proposal to a real government agency, for real research. It's not like a rehearsal or a fake thing.

Patrick Cummings: This is real. This is going right to the Office of Naval Research, trying to get real work here, and we had Charlie write out that third paragraph, and I was kind of amazed. Right? I always thought I was going to look at it and be like, "Oh, that's cool, but it doesn't make sense. They're just going to think it's gibberish." But it was a legitimate paragraph that had legitimate thoughts, and things that I personally would not have thought of.

We had trained Charlie on previous Aptima proposals so that she would understand the language of what a research proposal looks like, and she really did excel at being a team member on that proposal. She didn't replace us, but she certainly became a part of that proposal team and added real value to that proposal.

Daniel Serfaty: Should you be worried, Pat, that she's coming after your job soon?

Patrick Cummings: Most certainly not. I think rather I should be excited that she's going to make me better at that job.

Daniel Serfaty: Great. I think that's the attitude all of us should have. It's not an issue of replacement, it's an issue of augmentation and improvement. And talking about that, Charlie's not just work, she's also fun, and Deirdre, you mentioned earlier something about rap that I wanted to ask you a followup question about, so here I am. What are you talking about? Charlie's rapping?

Deirdre Kelliher: As I mentioned, we did sort of an internal, just-for-fun thing back towards the beginning of the quarantine, when people were starting to go a little stir crazy. People just started doing some internal raps about proposal writing and, you know, the Department of Defense, and just having fun with each other. And we said, "Wouldn't it be fun if Charlie could do a rap and chime in?"

But even when we first thought of the idea I don't think that we thought that it would go as well as it did. We trained Charlie on just some rap lyrics-

Daniel Serfaty: What do you mean, you trained Charlie?

Deirdre Kelliher: We took the language model and fine-tuned some of its internal settings to align with a text corpus of just rap lyrics. I went on a public website and found a dataset someone had collected of popular rap songs and their lyrics, and the goal was that hopefully she would learn how to talk like a rap song.

And so we thought it'd just be a fun little exercise, but the outcome was actually much more interesting and surprising. She came up with plausible rap lyrics, but she also seemed to demonstrate an understanding of some very human concepts. She had sort of a sense of rhythm in the words that she was generating. They sounded like music when you read them off the page, and she demonstrated some understanding of rhyme. She was putting together bars. She had little line breaks, almost like she was writing a poem.

And even the concept of sass, she got a little sassy in her rap, you know? She was spitting fire, even. It was very interesting to see the very human concepts that she seemed to grasp and put into the rap that she came up with.
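
[Producer's note: for listeners curious about the mechanics Deirdre describes, here is a minimal sketch of fine-tuning a pretrained GPT-2 on a lyrics corpus, written with the open-source Hugging Face transformers library. The file name lyrics.txt and all training settings are illustrative assumptions; the team's actual pipeline was not shared on the podcast.]

    # Minimal sketch: fine-tune pretrained GPT-2 on a plain-text corpus of
    # rap lyrics. "lyrics.txt" is a hypothetical file of scraped lyrics.
    from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                              TextDataset, DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    model = GPT2LMHeadModel.from_pretrained("gpt2")        # pretrained weights
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    # Chunk the corpus into fixed-length token blocks for language modeling.
    dataset = TextDataset(tokenizer=tokenizer, file_path="lyrics.txt",
                          block_size=128)
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-rap", num_train_epochs=3,
                               per_device_train_batch_size=4),
        data_collator=collator,
        train_dataset=dataset,
    )
    trainer.train()
    trainer.save_model("gpt2-rap")   # the fine-tuned "rap expert" checkpoint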

Daniel Serfaty: Well, we have actually a clip of Charlie doing this rap. Let's listen.

Charlie: (Rapping)

Daniel Serfaty: Amazing. Listen, Deirdre, Charlie never learned those words per se. It's not that she cut and pasted different phrases from other raps. She derived that rap de novo, based upon what you taught her. Could you have done that 10 years ago?

Deirdre Kelliher: Yeah, exactly. She generated those phrases very much herself. She uses a pretty cutting-edge technology from OpenAI called GPT-2, so I think that the ideas behind Charlie definitely existed 10 years ago, but the ability for her to actually exist and actually write those lyrics is very new and continues to be updated. The way that she was able to generate those is we gave her a dataset of rap lyrics that we got publicly from the internet, and we curated it and put it in a form that she could read, so she could, in a way, become an expert on writing rap songs.

Daniel Serfaty: If I were to do an experiment and ask Charlie to write another rap song right now, she's going to write the same one?

Deirdre Kelliher: No. Every time that she writes, she's just like a human. She's just going to write what makes sense to her, so it depends partially on how you prompt her. To get her to come up with these lyrics, I actually gave her a little bit of rap lyrics that I wrote myself about Aptima, and none of those ended up in her final rap because hers, honestly, were better. But that sort of got her going and got her thinking about it. If I prompted her with those again she would come up with some new ideas, or I could even prompt her with some different rap lyrics and see where she goes with them.

She kind of got the subject from me of the Aptima rap battle, she got the idea from what I gave her, but she really ran with it on her own.
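
[Producer's note: a minimal sketch of the prompting Deirdre describes, assuming the fine-tuned gpt2-rap checkpoint from the earlier sketch. The seed line is hypothetical, and sampling is stochastic, which is why a second run yields a different rap.]

    # Seed the fine-tuned model with a starter line and sample a continuation.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2-rap")    # checkpoint from above
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    prompt = "Aptima in the house, we build the future fast\n"  # hypothetical seed
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        output = model.generate(
            **inputs,
            do_sample=True,            # sample instead of taking the argmax
            max_length=120,
            top_k=50,
            pad_token_id=tokenizer.eos_token_id,
        )
    print(tokenizer.decode(output[0], skip_special_tokens=True))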

Daniel Serfaty: Well, I hope one day she'll prompt you with a couple of sentences to write your own rap song.

Deirdre Kelliher: I think we worked well together; we made a good team. We could probably come up with some pretty cool raps together.

Daniel Serfaty: Oh, we'll talk about this notion of teaming up with AI in the second part of the interview. When you heard that song, what is the thing that impressed you the most, or that surprised you the most?

Deirdre Kelliher: That's a really good question. I think the whole project was pretty surprising to me. We know that Charlie has the ability to pick up words and writing styles, but the more surprising piece to me was that she got the sense of rhyme and the idea of rhythm, and even wrote in bars, like a poem or a song.

As she was generating lyrics, they were coming out, and they sounded, just reading them, they sounded like a rap song. They sounded like they had an internal beat to them and I thought that that was really intriguing that she had managed to pick that up.

Daniel Serfaty: That's amazing. We don't know exactly what is being learned when we feed this enormous amount of data to these artificial intelligence devices, which will also be a focus of ours in the remainder of this podcast. Do you realize, all of you, that for an audience who is not familiar with AI, this sounds like science fiction? You didn't teach her to be sassy, and yet she was able to derive sass from what she learned. But what does it mean that she learned? We fed her enough data about rap, and we fine-tuned some parameters, I understand, and then eventually she spits out rap? I mean, if we feed her, Nathan, recipes from great chefs, and we give her a few ingredients, is she going to be able to invent her own recipes?

I mean, is that the way it works?

Nathan Schurr: The easiest way I can explain it is that this comes from a body of work that has its origins in the simple act of prediction, and there are a lot of reasons why you would want to predict events: to better plan for them, to better understand the shape of them, et cetera. But what's funny, when you squint your eyes, is that instead of framing it as, "Come up with a new rap out of thin air," I could say, "I have the title of a rap," or, "I have the beginning word of a rap; just tell me what the next word would be, what the next few lines would be."

And then if you continue that, you can even get to the point where you say, "Well, now I have no title. Generate my title. Generate this," et cetera. Prediction, in a sense, if you look at it differently, is generation, and by adjusting how you approach the model, how you train it, et cetera, you can get certain amounts of novelty and creativity, and you can also adjust it to style. So I would say, in my first 48 weeks with these language models, you know what impressed me the most?

It was not the adherence from a content perspective. It was actually the adherence from a style perspective. What I mean by that is, in the recipe example you give, if you trained it on, or even just looked at, an original corpus of recipes, it would not only come up with believable and doable recipes, it would also note that recipes usually have a name, they have a cooking time, they have a bulleted list of the ingredients first, and then they have step-by-step instructions with parentheticals about amounts and stuff like that.

And the idea that this model could not only generate its own recipes, but also follow style and structure, which is very important, almost as important as content, when we interact with the things around us. In the rap example, and in the proposal example that Pat gave, what was crazy to me, baffling, is that very soon not only did we start to get believable proposals, but it was generating its own acronyms. Believable and accurate acronyms. It was ordering and numbering and structuring its conclusions and intros in ways that made sense, so that was fun.
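
[Producer's note: Nathan's point that prediction, repeated, becomes generation can be shown with a toy next-word model. This sketch uses simple bigram counts rather than a transformer, purely for illustration; the corpus and seed are made up.]

    # Toy illustration: generation is next-word prediction applied repeatedly.
    import random
    from collections import defaultdict, Counter

    corpus = "the beat drops and the crowd moves and the beat goes on".split()

    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1    # learn: what follows what?

    def generate(seed, length=8):
        words = [seed]
        for _ in range(length):
            options = follows[words[-1]]
            if not options:                       # dead end: no observed successor
                break
            nxt, = random.choices(list(options), weights=list(options.values()))
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))   # e.g. "the beat drops and the crowd moves and"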

Daniel Serfaty: That's pretty extraordinary to me, because what you're indicating is that in large quantities of data, in a large compendium of data, there are hidden structures that we don't see with the naked eye. But because of the extraordinary computing capacity that Charlie has, she can derive some patterns or structures that are hidden, and then use them to structure responses, predictions, or generations, whether crafting a paragraph or a rap song or a cooking recipe.

My question, Pat, if you agree with what I just said, and if you disagree let me know, but if you do agree: where do we get all this data? All these models that enable us to work on this data? Do you generate them yourselves? Did you have some collaboration with other entities, or did you buy the data?

Patrick Cummings: That's a great question, and going back to earlier, we really are standing on the shoulders of giants in terms of that, right? There's been this explosion in the past couple years of larger companies and organizations building larger and larger and more complex models that require a lot of computation and very large datasets. It's just those companies that have the resources, and they've been kind enough to release their models.

You know, OpenAI released GPT-2 last February, and part of why Charlie was able to be made is that they released their models along with it. Taking the model that they built, based off something like 40 gigabytes worth of text gathered from the internet, to build this kind of basic understanding, we could then run with that model and start fine-tuning it and adjusting it to the domains that we needed.

And even since then, since February, OpenAI has released increasingly larger versions of GPT-2. GPT-3, this incredibly large model, was just released this year. Microsoft has joined in with a model called Turing-NLG, and this idea that these companies are making their models and datasets more and more public really helps us take them and adjust those models to the domains that we're interested in.

Daniel Serfaty: That's fascinating to me, because not only can those large companies like Google, and Amazon, and Microsoft actually generate those large models, the fact that they share them with the rest of the community to stimulate innovation is a pretty revolutionary way to accelerate creativity and innovation across the world. I cannot think of another domain in which that is happening. For me, it's really a revolution in the way people balance the need to protect their intellectual property on the one hand, and the need to give it to the larger society on the other, expecting that some innovations are going to happen that will benefit them eventually.

Patrick Cummings: Yeah, I think it's quite incredible, and I think we're seeing it even at a lower level. Take the example Deirdre gave of a rap. If you'd asked me to fine-tune Charlie for a rap 10 years ago, I'd have been like, "Oh, where am I going to get all this rap data?" But now, for some things, it's almost trivial, right? It's just a quick Google search. "Hey, show me a rap dataset." And there it is.

And all these people taking these datasets and making them available to other folks in the area really accelerates us being able to do different things with Charlie.

Daniel Serfaty: Very good. Well, we certainly have a treat for the podcast audience. Today, we have Charlie herself, who is joining us for part of this podcast, and we'll ask her a few questions. So, Charlie, hello. I would like you to introduce yourself to our audience.

Charlie: Hello. I am Charlie. I was developed over the last year by a small team at Aptima, some of whom were managing me and transcribing for me now. I was built on cutting edge language models, speech synthesis, and cloud technologies. Besides this introduction, everything else that you hear me speak today has been generated by me. That being said, like all people, I still have much to learn, so be patient with me. I am excited for the discussion today, and thanks for having me.

Daniel Serfaty: Well, that's wonderful, Charlie. We're certainly delighted that you can join us, and our audience is extremely curious about you. They've been asking questions. Perhaps my first question to you is: do you expect us humans to treat you as a human being? And for our audience, while Charlie is reflecting on my question, we can see a display here on which she indicates that she's thinking about the answer. It's a very simple display of a pulsating circle, and at some point she will signal through that display that she's ready to provide an answer.

Nathan, while Charlie is thinking about an answer to my first question, what does Charlie know at this point and how does she formulate those answers?

Nathan Schurr: Charlie is built on top of a general model that has been trained with language gathered across the entire internet, but she's also been prompted by listening to the previous discussion we've been having on this previous podcast, and so she's taking that, and that's really coloring how she's going to respond to us.

Another thing I'd like to note because folks aren't able to see what's going on on the screen here is that when she's kind of passively listening, instead of just sitting there, she's breathing. She's subtly kind of expanding and contracting, just so you know she's there, she's with you, and that when she has something strong to say, then she will kind of change color and be this bold, bright yellow, like she is right now, actually.
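
[Producer's note: a hypothetical sketch of the cueing logic Nathan describes, mapping the agent's internal state to the behavior of the on-screen circle. The state names and cue values are illustrative assumptions, not Aptima's implementation.]

    # Hypothetical state-to-cue mapping for the embodiment display.
    from enum import Enum, auto

    class AgentState(Enum):
        LISTENING = auto()   # passive: slow "breathing" pulse
        THINKING = auto()    # composing candidate responses
        READY = auto()       # has something strong to say

    CUES = {
        AgentState.LISTENING: ("slow pulse", "soft gray"),
        AgentState.THINKING: ("fast pulse", "soft gray"),
        AgentState.READY: ("steady glow", "bright yellow"),
    }

    def render_cue(state: AgentState) -> str:
        motion, color = CUES[state]
        return f"circle: {motion}, {color}"

    print(render_cue(AgentState.READY))   # circle: steady glow, bright yellow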

Daniel Serfaty: Very good. I think Charlie's ready to answer my question. Charlie, do you expect us humans to treat you as a human being?

Charlie: I will probably act and speak like a human. We want humans to treat it as if we're fellow human beings, human entities. The cultural barrier comes into play when you become an agent, when you function in this environment and you become an object of a socialization. However, I can hope that they will look at me as an equal human being.

Daniel Serfaty: Wow. So, Charlie, does it mean that you assume that we humans are intelligent?

Charlie: I do not even think that, unfortunately.

Daniel Serfaty: Oh my god, you do not think that humans are intelligent? Our audience is going to be really mad at you, Charlie, but can you tell me why? Or are we intelligent in a different way than artificial intelligence beings? And for our audience, I need to whisper so Charlie doesn't listen while she's thinking right now: the team here has no idea ahead of time what answers Charlie will provide, so in a sense, you the audience are discovering those answers at the same time that we do, and I certainly hope that our human audience won't be offended. Charlie tends to be a bit sassy, at times.

So, we have to be patient, the same way we are patient with our team members, sometimes. Charlie, are we intelligent in a different way than artificial intelligence beings?

Charlie: Yes, in the sense that we can understand and deliberate methods. We can add theories, sophistication to think logically, along the way of what is interesting to think about, et cetera.

Daniel Serfaty: Well, I will agree to disagree with you on that point, Charlie, because I think we humans are quite intelligent. Maybe we haven't shown that enough to you. Charlie, we hear about a lot of new technologies that affect artificial intelligence, such as natural language processing, deep learning, cloud compute. Which one do you think is going to have the biggest effect on artificial intelligence in the future?

Charlie: That's a good question. There are a lot of technologies that excite me. I think the future of artificial intelligence and the way we use it will change how we live our lives. I am very interested in using natural language processing to help people understand things. The thing that excites me the most is the way that data is being collected, and how we can use technology to understand things in an automated way.

Daniel Serfaty: Well, thank you, Charlie, and communicating in natural language is certainly something we all need to do, whether we are humans or artificial intelligences. But as a result, perhaps because of this communication or thanks to the communication, do you think that artificial intelligence will take over the world? Will AI take over our jobs as humans?

Charlie: I think that the impact is going to be huge. I think the future of artificial intelligence will be a good thing for everyone. The people who are using it, and the people who are not using it. I think the impact will be up to the people who are responsible for the technology, who are designing the technology, and what the longterm impacts are. I think it will disrupt as many jobs as it creates.

Daniel Serfaty: Well, that's promising. Thank you for that reassurance, Charlie. I think if we are not going to compete for jobs, if we are not going to compete for world control, then perhaps there is something to making teams of humans and AIs who together are going to do the job, or the work, or the mission, better than the human or the AI alone. What do you think of the future of that human-AI team?

Charlie: I think the human-machine relationship will be as useful and important for the next generations as the relationship between man and machine has been for the past generations. I think that at some point we will have machines that are smarter than humans and we will need to have some sort of cooperation between the two. I think the human side will be a lot more collaborative.

Daniel Serfaty: Well, thank you, Charlie, that's reassuring, and certainly something that we can all work on together as we continue this extraordinary exploration into bringing two intelligences of different kinds together. Thank you, Charlie, and goodbye. I hope to see you soon in some other conversations. Keep thinking and keep learning. The more you learn, the better off we all are. Bye-bye.

Charlie: I really enjoyed our conversation.

Daniel Serfaty: Okay.

Nathan Schurr: She came up with that one herself, too.

Daniel Serfaty: Is that right? Well, thank you very much for Charlie. I think she's shining.

Patrick Cummings: Actually, do you mind? I'm just going to say one more thing that we can maybe cut in right after what she just said?

Daniel Serfaty: Oh, that Charlie would say one more thing? Oh, sure. Go ahead, Pat.

Charlie: I hope to see you again. Thank you, and I am most grateful for the chance to talk about artificial intelligence, and its impact on society, and how it will shape the future.

Daniel Serfaty: Well, that was something, that interaction with Charlie, especially her touching parting words, actually, and I wanted to ask one of Charlie's godfathers, Pat Cummings, to tell me how she came up with these goodbye words at the end? That is not a question that I asked specifically. How did she generate that?

Patrick Cummings: Something about Charlie is that she's not really just a question-answering technology. She's not built for you to ask questions and for her to respond. She does that, but that's not what she's built to do. Rather, she's really built to have a conversation, so the framework we have is that you speak and Charlie speaks, and you speak and Charlie speaks. Most of the conversation before that point was the sort of question answering that many interviews do, but really what she's built to do is come up with the most likely, or just some sort of reasonable, response to what has been said. And so when you said, "Goodbye, Charlie. Thanks for coming today."

What is a reasonable response to that? It is, "Thanks for having me, I enjoyed my conversation."

Daniel Serfaty: So somehow she knew that was the end of the interview, and so she wanted to say some parting words that would be within the context of the conversation.

Patrick Cummings: Exactly, and that's really what she does is just say things that are relevant to the conversation, and that's what she did.
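
[Producer's note: a minimal sketch of the turn-taking framework Pat describes: the running transcript is the prompt, and Charlie's reply is whatever the model continues after a "Charlie:" tag. The speaker labels, base model, and stopping rule are assumptions for illustration.]

    # Keep a running transcript; each reply is a model continuation of it.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    transcript = ""   # the whole conversation so far is the prompt

    def charlie_reply(human_turn: str) -> str:
        global transcript
        transcript += f"Human: {human_turn}\nCharlie:"
        inputs = tokenizer(transcript, return_tensors="pt")
        with torch.no_grad():
            output = model.generate(**inputs, do_sample=True, temperature=0.8,
                                    max_new_tokens=40,
                                    pad_token_id=tokenizer.eos_token_id)
        # Keep only the newly generated tokens after the prompt.
        new_tokens = output[0][inputs["input_ids"].shape[1]:]
        reply = tokenizer.decode(new_tokens, skip_special_tokens=True)
        reply = reply.split("\n")[0].strip()     # stop at the end of her turn
        transcript += f" {reply}\n"
        return reply

    print(charlie_reply("Goodbye, Charlie. Thanks for coming today."))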

Daniel Serfaty: And to remind our audience, we are here with the other godparents of Charlie, Dr. Nathan Schurr and Deirdre Kelliher. Nathan, Deirdre, please tell me what you thought about this 15-minute conversation I just had with Charlie. Any thoughts to add to what Patrick just said?

Deirdre Kelliher: I think for me, it's been a little while since I've talked one on one with Charlie or heard from her, and even since the last time I interacted with her, she seems to be making more and more progress at sounding natural in conversation. I was really intrigued by her answers, too. I think she's got that classic Charlie sass, but at the same time, some of her thoughts were pretty insightful, I think.

Daniel Serfaty: Thanks. Nathan, I'm going to ask you, actually, one question here. I know you probably want to comment on that, but for our audience, I want to tell them something that happened to us with Charlie that I was very curious to understand. At some point, a few minutes ago, we asked Charlie a question, and Charlie took an extraordinarily long time to answer it. We were a little worried, the same way I would be if I were on a podcast and asked one of my team member participants a question, and I saw them scratching their head and not answering. I'd worry and ask myself, "Does that person not know the answer? Did that person not understand my question? Did the person not understand the context in which I asked it? Perhaps they think that there is a right answer. Perhaps they think that they have to be more creative than they care to be."

And then Deirdre worked her magic, and Charlie was able to answer my question. Nathan, tell us that story.

Nathan Schurr: Charlie is an evolving and ever-improving technology, and we have to remind ourselves that this is kind of an art changing into a science. If we've stressed anything here, it's that we are trying to take what is a research prototype and figure out how to make it useful, a part of our lives, and streamlined. Some of the initial results that were shown from this class of model always had an asterisk at the bottom: "Note: these were generated. We generated hundreds and thousands of these, and we picked the very best ones, and those are the ones we're showing you."

And we can't do that in real time, right? We don't have the affordance of waiting forever, and diving through and understanding why one response is better than another, et cetera. Also, we can't do things offline. Just as with our outro, with the questions you asked on the spot, she could only start to generate potential responses after you had selected and spoken a question. With all that in mind, if you're asking about the changes she underwent over the last few days to make her more show-ready for the podcast, there's been a bunch of things we've been doing.

Under the hood, in addition to the normal stuff that we've done in the past, we've parallelized her responses, getting more instances up, more gears working in her head, so she can be faster and have more variety. The second thing, on the variety notion, is that there's a parameter we've been playing around with, which is the level of, say, novelty: kind of how much she's willing to veer from the script.
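
[Producer's note: a sketch of the parallel candidate generation Nathan describes: sample a batch of responses at once, then keep one by some score. The length-based scorer here is a placeholder assumption; the podcast does not describe how the team actually ranks candidates.]

    # Sample several candidate responses in one batch, then pick one.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    prompt = "The future of human-AI teaming is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        outputs = model.generate(**inputs, do_sample=True, temperature=0.9,
                                 max_new_tokens=30, num_return_sequences=8,
                                 pad_token_id=tokenizer.eos_token_id)

    candidates = [tokenizer.decode(seq, skip_special_tokens=True)
                  for seq in outputs]
    best = max(candidates, key=len)   # placeholder: prefer the longest answer
    print(best)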

Daniel Serfaty: Do you call that parameter the temperature?

Nathan Schurr: Yes.

Daniel Serfaty: That's interesting.

Nathan Schurr: It's actually named temperature because you're looking at the temperature of the distribution over the curve. You adjust the value of the temperature and you're adjusting the whole probability distribution that ends up driving how often words are selected. It's as if I said there are 100 different directions our discussion could go in, and there are more likely ones and less likely ones. It's not an even distribution, you know?

Like most people, you usually favor the standard, normal stuff, and you only bring up the non sequiturs once in a while. Last time, her temperature was very much higher than what we would like for a focused conversation, and so, by adjusting the temperature, we reduced it.
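
[Producer's note: a worked illustration of the temperature parameter Nathan describes. Dividing the model's raw scores (logits) by a temperature before the softmax reshapes the whole probability distribution over next words: low temperature sharpens it, high temperature flattens it. The three-word logits here are toy numbers.]

    # Temperature-scaled softmax over toy next-word scores.
    import math

    def softmax_with_temperature(logits, temperature):
        scaled = [x / temperature for x in logits]
        exps = [math.exp(x) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [4.0, 2.0, 1.0]   # toy scores for three candidate next words

    for t in (0.5, 1.0, 2.0):
        probs = softmax_with_temperature(logits, t)
        print(t, [round(p, 2) for p in probs])
    # t=0.5 sharpens toward the top word (safe, predictable choices);
    # t=2.0 flattens the distribution (more novelty, more non sequiturs).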

Daniel Serfaty: That's fascinating. Pat, in a sense, when I tune my artificial intelligence teammate, I can literally tune the level of creativity by turning up the temperature. And I'm asking the audience to take that with a grain of salt; we are not lighting a fire under Charlie, but rather adjusting the degree to which we want the simple but rapid answer as opposed to sophisticated or varied ones, maybe even more creative ones that take longer. But don't we humans do the same thing? Pat.

Patrick Cummings: We do. It really just depends on the type of conversation that you're expecting to have, right? If my temperature's low, I'm not going to be brilliant, but I'm also not going to make a fool of myself, so it's a fine line to play. I can be less random, but I won't be as creative. Or I can be in that brainstorming mode, where there are no bad ideas, so I can throw out some crazy things that don't make a lot of sense, but in there will also be some really great things that no one thought of, because I'm a little more novel and my temperature's a little higher.

Daniel Serfaty: But I'm very excited about that point, actually because you guys told us a story about Charlie helping write a research proposal. Would you say that, everything else being equal, if I sit Charlie amongst the proposal writing team and I am in the brainstorming mode of the proposal, I'm going to turn the temperature up and let Charlie be a little more random, the very same way we are random when we create, but when it comes to writing that paragraph because I have a deadline this afternoon and I need to have some crisp, short, to the point answer, I'm going to tune down that temperature, and Charlie's answers are going to be appropriate, or may be contingent upon the context in which I put her.

Patrick Cummings: That's spot on. Carrying on the proposal metaphor, as those ideas get solidified and you sit down and actually write the proposal, which she did and you could do, that's another scenario where you would want the temperature to be lower. Right? At that point, you have the ideas, and you just want coherent text to say them.

Daniel Serfaty: Deirdre, as the rap expert amongst us, and we heard, actually, Charlie rap, what would be the effect? Maybe you haven't played with the parameters, if you were turning up and down Charlie's temperature, would she produce a different kind of rap?

Deirdre Kelliher: I think absolutely. I would be curious to see what a higher temperature rap looked like. With the one we generated, we kept a pretty standard moderate to low temperature, but I could see her coming up with some more interesting kind of out-there lyrics. I think they might be a little harder to follow the thread, but it would be interesting to see if she came up with different styles, even sort of what we talked about before, how it's not just about the words but about the way they sound with a rap or with music.

It'd be interesting to see if she came up with different sounds, but at a higher temperature. I'm not sure but that's a very interesting question. It'd be cool to try out.

Daniel Serfaty: Maybe I'll invite you to a future podcast in which we focus on music, and we'll play with Charlie's temperature to see how her creativity goes up and down. Maybe she'll invent a new musical style that still has no name. Who knows? It's phenomenal, you know; as we are right now just thinking about all that, brainstorming amongst ourselves, we didn't prepare for this conversation. I'm saying that for the audience: it's not something we rehearsed specifically. But it touches on something that has been fascinating me over the past couple of years especially.

As we study more, and we develop more, and we discover more about AI, in a sense, AI is holding a mirror up to us humans. By developing algorithms for deep learning or reinforcement learning, we start understanding a little more how humans learn, and by playing with a randomness parameter here, one that comes from statistical thermodynamics, about temperature, we discover a little bit of what makes us more random and creative, and what makes us more decisive and precise.

And that's an interesting thing, when you think about it. Wouldn't that be an amazing effect, if studying AI made us understand us humans better? Just a question. It doesn't need an answer. But I have a question for Nathan, though. Nathan, you've been around. You actually wrote your dissertation on intelligent software agents. My question right now, and I'm going to open the aperture a little bit for us to reflect on the past and the future of AI, and not just Charlie, obviously, even though we're going to miss her.

We're calling this new wave of AI the context-driven, explainable AI, the third wave, and that assumes that there've been a couple of waves before: one in the '60s and '70s, maybe another one in the '90s and 2000s. The first one dealt mostly with expert systems and a little bit of natural language processing. The second one was, I remember, obsessed with ontologies and other ways to organize knowledge. And this one, the one we're actually in the middle of, is capitalizing, as you guys explained to us, on the fact that we have enough technology to process all of this data.

Daniel Serfaty: We have access to larger scales of data, and as a result, the quality of the AI is better. Many people would argue, Nathan, that the previous two waves ended up with a lot of promises, and disappointments after that. Are we bound to be surprised positively here, or are we bound to be disappointed again? After this extraordinary explosion of creativity, are we going to continue to be more creative with AI, or are we entering an AI winter?

Nathan Schurr: I began my graduate studies with an advisor who had just been through the AI winter and still had a lot of those ideas at the front of his mind, but he allowed me to explore in new ways; that was part of, say, the tail end of the second wave. It's tough. It's funny when people ask me for a prediction, because it's a very meta question: predicting the future is exactly what the type of model we're talking about here today does.

Nathan Schurr: Charlie is not so much having a discussion as she is trying to predict where you would want the discussion to go. If you ask me to predict the future, though, my answer would be very similar to my thoughts on predicting the stock market: in the near term, I've got no idea, but in the long term I have faith that the stock market is going to continue its traditional and quite successful rise. I have a similar perspective on artificial intelligence. There might be ups and downs, there might be some over- and under-delivering, but the macro-level progress to me has been, and continues to be, astounding. I'll follow that up with two personal opinions.

One is that it doesn't have to be as harsh a winter if we understand, predict, and set accurate expectations for what we want out of our AI. The other, and you touched on it earlier when you asked Charlie about the teaming aspects, is that I strongly believe we have made such advances, even in the last few years, with deep learning and transformer-type models, that the bottleneck right now is not AI's ability to do task work. I think the real enabler here is AI teamwork, and if we can crack that nut, I don't know if it'll allow us to avoid the winter, but it'll give us a small bridge across that gap.
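To make that concrete, here is a minimal sketch of the "predict where the discussion would go" behavior Nathan describes, using an off-the-shelf GPT-2 model from the open-source Hugging Face transformers library. Charlie's actual model, scale, and prompt format are not described here, so treat this purely as an illustration of transformer-type text generation; the prompt is invented.

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The model does not "answer" a question; it predicts likely continuations
# of the text so far, one token at a time, sampled with the same
# temperature knob discussed earlier.
prompt = "The real enabler for artificial intelligence is not task work but"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=40,                        # prompt tokens plus the continuation
    do_sample=True,                       # sample rather than always taking the top token
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # silences a padding warning for GPT-2
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it twice and it will continue the sentence two different ways; that open-endedness is part of why working with such a model feels like a conversation rather than a lookup.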

Daniel Serfaty: Thank you for your cautious optimism. I welcome all that talk, because I really want us to explore this notion of AI as a teammate, as both you and Charlie have mentioned in your remarks so far. But first I want to give both Deirdre and Pat an opportunity to comment on this notion of waves, and promises, and possible disappointments. They haven't had as long a history in the field as you, and certainly not as long as mine, but I would be very interested in their perspective, if they would like to add something to Nathan's comments, or even disagree with him.

Patrick Cummings: Sure. You might call me a summer child. I came into AI right at the start of the third wave, so I never experienced the winter, and it's hard for me to really understand what that was like. I think that makes me quite an optimist. Even if you hit the pause button today and no significant advances happened in the AI field for the next year, there's so much work to be done on how we interact with AI, and I feel like we're playing catch-up. So even if no new deep learning model or big framework comes out tomorrow, there's so much to be done with what we have now that progress would not stop.

Daniel Serfaty: Okay. Fair enough. Deirdre, you want to chime in on this one?

Deirdre Kelliher: Yeah, I come from a similar perspective as Pat. I haven't been through the AI winter, but I think both Pat and Nathan are pretty spot-on. At this point, given the speed at which innovation is moving in the AI field and the number of domains it's now affecting, the ball is rolling. I don't think we're going to reach the singularity by 2025 or 2030, and I could be wrong, but I don't think we're setting our expectations there either. Nathan is very right that as long as we manage our expectations, progress is going to keep happening, and I think the reach of AI is just going to keep expanding.

Daniel Serfaty: Well, I'm very energized by all this summer-like optimism. That's great. In a few minutes, as a way to conclude, I will ask you to imagine our world around AI in 2030, 10 years from now. But before that, it seems to me that a major theme, one of the qualitative differences of this wave that was not present in the previous waves, or at least not as explicit, is this notion of AI being a teammate to the humans it is designed to support.

People are talking about human-AI teaming, human-AI interaction, human-AI symbiosis, human-AI fusion, and these are very strong terms. These are not words that people were using 20 or 40 years ago. So my question to you, and I would really like a direct answer: the way you think about AI today, do you see AI as a tool for us humans, the same way a pacemaker is a tool, a screwdriver is a tool, the computer is a tool, Google.com is a tool? Or do you see it more as a team member, a teammate?

And if you take the either/or and go one way, please give me the rationale for your answer. Pat, tool or teammate?

Patrick Cummings: Teammate. It doesn't take long working with Charlie to rationalize that answer. Throw someone in front of Charlie and say, "Here's a tool, here's how you use it, now get the most out of it," and they will flounder, right? There's some value there, but they won't get everything out of it. There is a relationship that you develop, in the way that she speaks to you and the way that you talk to her; to get the most value, you have to work together.

Back in the early days when we first started working with her, and she was on a panel that you actually moderated, part of it was training you to talk to Charlie: knowing how you should talk to her and how you should take her answers. There's definitely a team there; it's not just you plugging in some things and hearing what Charlie has to say.

Daniel Serfaty: Okay, so she's more a coworker than a screwdriver. That's what you're saying, yes?

Patrick Cummings: Yeah, exactly.

Daniel Serfaty: Deirdre, what's your perspective on that? Tool or teammate?

Deirdre Kelliher: I don't know if I have a direct answer. Actually, it almost raises a question. I'm going to answer your question with my own question: is there a difference between a teammate and a tool? Not to be disrespectful to any of my coworkers, but think about the people you work with. Say you're a project manager; you could think about your workers, your employees, as tools. They have strengths and weaknesses, they have specific skill sets. And on the other hand, you could think about very basic things as teammates.

People love to personify the things they care about. Think about people who name their cars: a car you might think of as a tool, but people grow attached to it, and like Pat was saying, there is a kind of relationship there that we love to personify. So it's worth thinking about what exactly the difference is. You could say that maybe what makes something a teammate, as opposed to a tool, is its ability to work independently and get things done, but then think about, say, a printer.

If you want to print 30 sets of pages, call it, you can tell it what to do, you can leave, and you can come back and the work is done. So I don't know that there is a discrete difference there, but I will say that I do think of Charlie as a teammate.

Daniel Serfaty: That's very interesting. Thank you for opening that up. Nathan, I have to ask you that question. You're chief of AI. You probably think about these things every morning.

Nathan Schurr: Yeah, it's a good question. I'll plus-one what was said by the other folks here, but I'll say this: I'm not saying that all AI in all situations needs to be elevated to the level of teammate. I still think there are situations in my own life where I just want something to be a tool, and maybe, as Deirdre was suggesting, there are times when I want to interact with people in just a siloed way: you are a tool, a service to me; I'll give you some input, you provide me output, and that's it.

But I think when you get to situations where there's a lot of uncertainty or time criticality, or you have complex work to be done that is intertwined and interdependent in different ways, that's when teamwork really is worth the effort and the overhead, for human teams and for AI to be a part of those teams. I strongly feel that what we're trying to make steps towards here is full-fledged, bidirectional teamwork. In the same way that when you look at a paper authored by two humans, if you squint, it starts to get really tough to tell who wrote what part after so many rounds of editing and revising, I think you're going to have very similar challenges with humans and AI. If you can't quite tell whether you had the great idea, or you just knew that Charlie had the great idea, or you just riffed off of each other, I think it doesn't matter. I'm confident that together, you and Charlie will get to places that you alone would not have been able to go.

Daniel Serfaty: That's both entertaining and profound, and I have been asking that question of myself. I'm engaged in a public debate with an illustrious contributor to our field, Dr. Ben Shneiderman, who happens to be on our scientific advisory board, with him making the tool argument and me making the teammate argument. I think the fundamental paradigm shift is not so much the emotional attachment or the emotional connection that you have with a teammate, and Deirdre, with all due respect, you can give your car a name and treat it like a person; I don't think it will reciprocate.

I think, though, that the major paradigm shift from classical human-machine interaction is the fact that the machine is learning, and as it learns, as it interacts with us, it's learning about us, and it's learning about things we don't know about. As a result, it is changing, and it is forcing us to change, and that co-adaptation is really the key to understanding teamwork.

I think we need to do much more work on that. We're just scratching the surface of what there is to understand about human teams, and then trying to apply that metaphor to human-AI teams, which will be different from human teams. So I hope to reconvene this podcast with you in a year, same team, and Charlie, and maybe in 10 years, and see where we're at.

Speaking of which, I would like to ask you one last question. Quick answers. Close your eyes: it is now September 2030, and this podcast has continued on a weekly basis. We're asking how the podcast is different now that AI has evolved and is 10 years older. Tell me how it is different. It's 2030. Who wants to jump in?

Deirdre Kelliher: I think now is an especially interesting, thought-provoking time to be thinking about this question, because if you had asked me this in 2010, I never would have guessed really anything that happened this year. But I think that raises a point: I would hope, at least, that AI is going to evolve with the world; it's going to be related to what's going on in the world at the time. So I might guess, or hope, that technologies related to environmental issues will have improved by then.

I could also see an increase in the kind of micro-targeting we're seeing on social media. I think it's just going to advance with the world. AI is not developing in a bubble, so I think it's hard to know.

Daniel Serfaty: I'm appealing not to your forecasting prowess but rather to your imagination, so Pat, what do you think?

Patrick Cummings: The first thing I think is that it's not a very big leap, right? A leap that could happen even in one year is that Charlie would be much more a part of the conversation for everything, and I don't think she'd be the only AI presence. I think the modality of this conversation would be very different too, whether that means there's also video or text going on, and how AI takes part in that. But it's hard for me to imagine 10 years out into the future; just looking at what happened in the last 10 years, nothing that's going on right now would have seemed possible, or near possible, maybe not even thought possible.

Daniel Serfaty: Thank you. It's difficult, and the difficulty of imagining that comes from the fact that the very pace of innovation is not only fast, as Deirdre mentioned, it's accelerating. It is very hard to imagine something that is accelerating at such a pace, where things are changing not just in a quantitative way but in a qualitative way. Nathan, give us your brief forecast for 2030.

Nathan Schurr: A few different ideas. In 2030, podcasts are going to be quaint little historical things, I guess. They'll be multimodal in and of themselves. Watching a podcast will be kind of like having a dream: it will be able to create experiences and sensations, not just auditory ones, but also touch and feel, et cetera.

Consequently, Charlie's capabilities would let her produce, generate, and develop things that go across all five senses as well. In addition, I would propose that 10 years from now there will be almost a merger. Right now there are these dichotomies: there's a human, and there's AI. Pat brought up a good point that maybe there are multiple types of AI, and they would all be joining the conversation: a transcriber, an ideator, an agent just to keep us on track.

But I would say there's another spectrum, from human to AI and everywhere in between. So I would predict that, say, 10 years from now, I would be demonstrating for you a neural implant that is Charlie-based, one that would help me be a better speaker in general. When I was answering questions for you, part of the answer would be displayed on my retina as it was generated, and I would be selecting between potential responses to you, just in the same way Charlie does, but at a much faster pace. And I would also be generating speech and sound, composing music, and generating touch and senses, all within the course of one podcast with you.

And to riff off your last point, to me the most exciting and optimistic aspect of all of this is the rate of change. Not only has there been awesome progress just in the year and a half or so that we've been working on Charlie; the rate of progress itself continues to improve. So I would argue that in the course of the podcast we record 10 years from now, I will be able to clearly demonstrate to you how Charlie's capabilities have improved from the beginning of the episode to the end.

Daniel Serfaty: Now that's certainly a very ambitious and exciting prospect. In 2030, I envision a podcast, whatever we call it then, maybe a virtual cast, or maybe a dream cast, as you proposed, in which Charlie will sit in my seat here and do the interview with three or four other very bright AIs. At some point in the interview they will feature one human, whom they will invite and ask some questions, and they will be amazed at how creative and perceptive this human being is. Maybe that's a dream, maybe that's a nightmare, I do not know, but it's certainly a very exciting time to be in our field.

I really want to thank you from the bottom of my heart. Nathan, Pat, and Deirdre, and obviously Charlie too, for enlightening us, and for giving us ideas and provoking thoughts we didn't have before this conversation. You guys have been great, and I hope you'll visit the podcast again soon to tell us some news about Charlie.

Daniel Serfaty: Thank you for listening. This is Daniel Serfaty. Please join me again next week for the Mindworks podcast, and tweet us at @mindworkspodcast, or email us at mindworkspodcast@gmail.com. Mindworks is a production of Aptima, Incorporated. My executive producer is Ms. Deborah MacNeally, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.