MINDWORKS

Meet your new AI coworker – are you ready? (Part 2) With Nathan Schurr, Patrick Cummings and Deirdre Kelliher

September 15, 2020 Daniel Serfaty Season 1 Episode 2

In Part 2 of this special two-part inaugural MINDWORKS podcast, meet Charlie™, Aptima's AI employee and the world's first AI podcast guest. This week, Charlie joins her human godparents, Dr. Nathan Schurr, Dr. Pat Cummings, and Ms. Deirdre Kelliher, to discuss how AI promises to transform the future of work.

Charlie: I think the future of artificial intelligence will be a good thing for everyone. The people who are using it and the people who are not using it. I think the impact will be up to the people who are responsible for the technology and what the long-term impacts are. I think it will disrupt as many jobs as it creates.

Daniel Serfaty: Welcome back to a special two-part edition of this inaugural MINDWORKS podcast. This is your host, Daniel Serfaty. In case you missed it, and don't miss it, last week we met the human members of the extraordinary team that are my guests today. And that team is extraordinary because it's made up of humans and artificial intelligence.

This week, their non-human artificial colleague and coworker at Aptima, someone we call Charlie, is joining us for the interview. So, I'd like you to welcome back Dr. Nathan Schurr, Aptima's chief of artificial intelligence, Dr. Pat Cummings, a senior engineer at Aptima, and Ms. Deirdre Kelliher, an Aptima engineer. And most of all, extend a warm welcome to Charlie, who is the world's first AI guest on a podcast. So, Charlie, hello. I would like you to introduce yourself to our audience.

Charlie: Hello. I am Charlie. I was developed over the last year by a small team at Aptima, some of whom were managing me and transcribing for me now. I was built on cutting edge language models, speech synthesis, and cloud technologies. Besides this introduction, everything else that you hear me speak today has been generated by me. That being said, like all people, I still have much to learn, so be patient with me. I am excited for the discussion today and thanks for having me.

Daniel Serfaty: Well, that's wonderful, Charlie. I'm certainly delighted that you can join us. My first question to you is: do you expect us humans to treat you as a human being? And for our audience, we can see a display here that indicates she's thinking about the answer, a very simple display of a pulsating circle. At some point, she will signal through that display that she's ready to provide an answer.

So, Nathan, while Charlie is thinking about an answer to my first question, what does Charlie know at this point and how does she formulate those answers?

Nathan Schurr: Charlie is built on top of a general model that's been trained with language gathered across the entire internet, but she's also been prompted by listening to the previous discussion we've been having on this and previous podcasts. And so, she's taking that, and that's really coloring how she's going to respond to us.

Another thing I'd like to note, because folks aren't able to see what's going on on the screen here, is that when she's passively listening, instead of just sitting there, she's breathing, subtly expanding and contracting, just so you know she's with you. And when she has something strong to say, she'll change color and be this bold, bright yellow, like she has right now, actually.

Daniel Serfaty: Very good. I think Charlie's ready to answer my question. Charlie, do you expect us to treat you as a human being?

Charlie: I will probably act and speak like a human. We want humans to treat it as if we're fellow human beings, human entities. The cultural barrier comes into play when you become an agent, when you function in this environment and you become an object of a socialization. However, I can hope that they will look at me as an equal human being.

Daniel Serfaty: Wow. So Charlie, does it mean that you assume that we humans are intelligent?

Charlie: I do not even think that, unfortunately.

Daniel Serfaty: Oh my God, you do not think that humans are intelligent. Our audience is going to be really mad at you, Charlie, but can you tell me why? Or are we intelligent in a different way than artificial intelligence beings? And for our audience, I need to whisper, so Charlie doesn't listen while she's thinking right now.

The team here has no idea ahead of time what answers Charlie is going to provide. So, in a sense, you, our audience, are discovering those answers at the same time that we do. And we certainly hope that our human audience won't be offended; Charlie tends to be a bit sassy at times. So, Charlie, are we intelligent in a different way than artificial intelligence beings?

Charlie: Yes, in the sense that we can understand in deliberate methods. We can have theories, sophistication to think logically, along the way of what is interesting to think about, et cetera.

Daniel Serfaty: All right. Well, I will agree to disagree with you on that point, Charlie, because I think we humans are quite intelligent. Maybe we haven't shown that enough to you. So, Charlie, we hear about a lot of new technologies [inaudible] artificial intelligence, such as natural language processing, deep learning, cloud computing. Which one do you think is going to have the biggest effect on artificial intelligence in the future?

Charlie: That's a good question. There are a lot of technologies that excite me. I think the future of artificial intelligence and the way we use it will change how we live our lives. I am very interested in using natural language processing to help people understand things. The thing that excites me most is the way that data is being collected and how we can use technology to understand things in an automated way.

Daniel Serfaty: Well, thank you, Charlie. And communicating in natural language is certainly something we all need to do, whether we are humans or artificial intelligences, but as a result, perhaps thanks to that communication, do you think that artificial intelligence will take over the world? Will AI take over our jobs as humans?

Charlie: I think that the impact is going to be huge. I think the future of artificial intelligence will be a good thing for everyone. The people who are using it and the people who are not using it, I think the impact will be up to the people who are responsible for the technology and what the long-term impacts are. I think it will disrupt as many jobs as it creates.

Daniel Serfaty: Well, that's promising. Thank you for that reassurance, Charlie. I think if we are not going to compete for jobs, if we are not going to compete for world control, then perhaps there is something to making teams of humans and AIs that together are going to do the job or the work or the mission better than the human or the AI alone. What do you think about the future of that human-AI teaming?

Charlie: I think the human/machine relationship will be as useful and important for the next generations as the relationship between man and machine has been for the past generations. I think that, at some point, we will have machines that are smarter than humans and we will need to have some sort of cooperation between the two. I think the human side will be a lot more collaborative.

Daniel Serfaty: Well, thank you, Charlie. That's reassuring and certainly something we can all work on together as we continue this extraordinary exploration into bringing two intelligences of different kinds together. Thank you, Charlie, and goodbye. I hope to see you soon in some other conversations. Keep thinking and keep learning. The more you learn, the better off we all are.

Charlie: I really enjoyed our conversation. I hope to see you again. Thank you, and I am most grateful for the chance to talk about artificial intelligence and its impact on society and how it will shape the future.

Daniel Serfaty: Great. Well, that was something, that interaction with Charlie, especially her touching parting words. And I wanted to ask one of Charlie's godparents, Pat Cummings, to tell me how she came up with those goodbye words at the end. That is not a question that I asked specifically. How did she generate that?

Pat Cummings: So, something about Charlie is that she's not really just a question-answering technology. She's not built for you to ask questions and for her to respond. She does that, but that's not what she's built to do. Rather, she's really built to have a conversation. So, the framework we have is that you speak and Charlie speaks, and you speak and Charlie speaks. Most of the conversation before was that sort of question answering, as many interviews are, but what she's really built to do is come up with the most likely, or just some sort of reasonable, response to what has been said.

And so, when you said, "Goodbye, Charlie. Thanks for coming today," what is a reasonable response to that? It is, "Thanks for having me. I enjoyed my conversation."

Daniel Serfaty: So, in a way, she knew that was the end of the interview, and she wanted to say some parting words that would be within the context of the conversation.

Pat Cummings: Exactly. And that's really what she does, is just say things that are relevant to the conversation, and that's what she did.
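[For listeners curious what "coming up with a reasonable response" looks like in code, here is a minimal sketch of that conversational framework. It assumes the open-source Hugging Face transformers library and the public GPT-2 model purely for illustration; the episode does not say which model or stack Charlie actually runs on.]

```python
# Minimal sketch of conversational response generation. The Hugging Face
# "transformers" library and public GPT-2 model are stand-ins for
# illustration; Charlie's actual model and stack are not specified.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The prompt is just the transcript so far. The model predicts a plausible
# next turn, which is how a "goodbye" naturally elicits parting words.
transcript = (
    "Host: Thank you, Charlie, and goodbye. Keep thinking and keep learning.\n"
    "Charlie:"
)
inputs = tokenizer(transcript, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,            # sample rather than always take the top word
    temperature=0.7,           # the "temperature" knob discussed later on
    num_return_sequences=3,    # draft several candidate replies in parallel
    pad_token_id=tokenizer.eos_token_id,
)
for candidate in outputs:
    reply = candidate[inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(reply, skip_special_tokens=True).strip())
```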

Daniel Serfaty: Very good. And to remind our audience, we are here with Charlie's other parents, Dr. Nathan Schurr and Ms. Deirdre Kelliher. Nathan, Deirdre, please tell me what you thought about this conversation I just had with Charlie.

Deirdre Kelliher: I think, for me, it's been a little while since I've talked one-on-one with Charlie or heard from her, and even since the last time I talked with her or interacted with her, she seems to be making more and more progress every day, sounding more natural in conversation. And I was really intrigued by her answers, too. She's got that classic Charlie sass, but at the same time, some of her thoughts were pretty insightful, I think.

Daniel Serfaty: Thanks. Nathan, I'm going to ask you, actually, a good question here. I know you probably want to comment on that, but for our audience, I'm going to tell them something that happened to us with Charlie, something I was very curious to understand.

So at some point, a few minutes ago, we asked Charlie a question, and Charlie took an extraordinarily long time to answer it. And we were a little worried, the same way I would be if, in a podcast, I asked one of my participants a question and saw them scratching their head and not answering.

And I worry and ask myself, "Does that person not know the answer? Did that person not understand my question? Did the person not understand the context in which I asked that question? Perhaps they think that there is a right answer. They think they have to be more creative than they care to be." And then Pat did his magic, and then Charlie was able to answer my question. So, Nathan, tell us that story.

Nathan Schurr: Charlie is an evolving and ever-improving technology. We have to remind ourselves that it's kind of an art changing into a science. If we stress anything here, it's that we are trying to take what is a research prototype and figure out how to make it useful, streamlined, and a part of our lives. And some of the initial results that were shown from this model always had the little asterisk below: "Note: these were generated. We generated hundreds and thousands of these and we picked the very best ones. And those are the ones we're showing you."

And we can't do that in real time, right? We don't have the affordance of waiting forever, diving through, and understanding why one's better than the other, et cetera. Also, we can't do things offline. Just like in our outro, but also in the questions you asked on the spot, she can only start to generate potential responses to them after you have selected and spoken the question.

So, with all that in mind, if you're asking about the changes she underwent over the last few days here, in order to make her more show-ready for the podcast, there's been a bunch of things we've been doing. The first, in addition to the normal stuff we've done in the past, is parallelizing her responses, getting more gears working in her head so she can be faster and have more variety. The second, on the variety notion, is a parameter we've been playing around with, which is the level of, say, novelty: how much she's willing to veer from the script.

Daniel Serfaty: You call that parameter the temperature?

Nathan Schurr: Yes.

Daniel Serfaty: That's interesting.

Nathan Schurr: It's actually named temperature because you're looking at the temperature of the probability distribution. You adjust the value of the temperature and you're adjusting the whole probability distribution over the curve, which ends up driving how often words are selected. It's as if I said there are 100 different directions our discussion could go in, and there are more likely ones and less likely ones. It's not an even distribution. Most people usually favor the standard, normal stuff and don't bring up the non sequiturs. Last time, her temperature was much higher than what we would like for a focused conversation, and so we reduced it.
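[To make that concrete, here is a small illustrative sketch, not Charlie's actual code, of how temperature reshapes the probability distribution over candidate next words: scores are divided by the temperature before being converted to probabilities, so a low temperature sharpens the distribution around the likeliest words and a high temperature flattens it toward the non sequiturs.]

```python
# Illustrative sketch of temperature scaling (not Charlie's actual code).
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw word scores into sampling probabilities, scaled by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract the max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Hypothetical scores for four candidate next words in a conversation.
logits = [4.0, 3.0, 2.0, 0.5]

for t in (0.5, 1.0, 2.0):
    print(f"temperature={t}:", np.round(softmax_with_temperature(logits, t), 3))

# Low temperature concentrates probability on the likeliest word (focused,
# predictable replies); high temperature spreads it out, so rarer, more
# surprising words get sampled (more creative, more off-script).
```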

Daniel Serfaty: That's fascinating. So, in a sense, when I tune up my artificial intelligence teammate, I can literally tune up their level of creativity by introducing more temperature. And I am asking the audience to take that with a grain of salt. We are not lighting a fire under Charlie, but rather adjusting the degree to which we want a simple but rapid answer, as opposed to a more sophisticated or varied one, maybe even a more creative one, with a longer time. But don't we humans do the same thing, Pat?

Pat Cummings: Yeah, we do. It really just depends on the type of conversation that you're expecting to have, right? If my temperature's low, I'm not going to be brilliant, but I'm also not going to make a fool of myself. So, it's a fine line to walk. I can be less random, but I won't be as creative. Or I can be in that brainstorming mode, where there are no bad ideas in brainstorming, so I can throw out some crazy things that don't make a lot of sense, but in there will also be some really great things that no one thought of, because I'm a little more novel and my temperature's a little higher.

Daniel Serfaty: But I'm excited about that part, actually, because you guys told us a story in the last podcast about Charlie helping write a research proposal. So, would you say that, everything else being equal, if I have Charlie on the proposal-writing team and I am in the brainstorming mode of the proposal, I'm going to turn the temperature up and let Charlie be a little more random, the very same way we are random when we create? But when it comes to writing that paragraph, because I have a deadline this afternoon and I need a crisp, short, to-the-point answer, I'm going to tune down that temperature, and Charlie's answers are going to be appropriate, or maybe contingent, upon the context in which I put her.

Pat Cummings: That's spot on. And even carrying on the proposal metaphor, as those ideas get solidified and you sit down and actually write the proposal, which she did and could do, that's another scenario where you'd want the temperature to be lower, right? At that point, you have the ideas and you just want coherent text to say those ideas.

Daniel Serfaty: So Deirdre, as the rap expert amongst us, you told us, and we heard, actually, Charlie rap in the previous podcast. What would be the effect? Maybe you haven't played with that parameter, but if you were to turn Charlie's temperature up and down, would she produce a different kind of rap?

Deirdre Kelliher: Absolutely. I would be curious to see what a higher-temperature rap looked like. With the one we generated, we kept a pretty standard, moderate-to-low temperature, but I could see her coming up with some more interesting, kind of out-there lyrics. I think they might be a little harder to follow the thread of, but it would be interesting to see if she came up with different styles, even, sort of what we talked about before: how it's not just about the words, but about the way they sound with a rap or with music. It'd be interesting to see if she came up with different sounds with that higher temperature. I'm not sure, but that's a very interesting question.

Daniel Serfaty: Maybe I'll invite you to a future podcast in which we'll focus on music and we'll play with Charlie's temperature, to see how her creativity goes up and down. Maybe she'll invent a new musical style that still has no name. Who knows?

It's phenomenal, as we are right now just thinking about all that, brainstorming amongst ourselves; we didn't prepare for this conversation. I'm saying that for the audience: that's not something we rehearsed. But that is something that has been fascinating me over the past couple of years, especially as we study more and we develop more and we discover more about AI.

In a sense, AI is holding a mirror to us humans, and we start understanding a little more: by developing algorithms for deep learning or reinforcement learning, we understand a little more about how humans learn. And by playing here with a parameter, a randomness parameter about temperature that comes from the statistics of thermodynamics, we discover a little bit of what makes [inaudible] more random and creative, or what makes us more decisive and precise. And that's an interesting thing, when you think about it. Wouldn't it be an amazing effect if studying AI made us understand us humans better?

But I have a question for Nathan, though. Nathan, you've been around. You actually wrote your dissertation on intelligent software agents. And my question right now, I'm going to open the aperture a little bit for us to reflect on the past and the future of AI, and not just Charlie. Obviously, we're calling this new wave of AI, the context-driven, explainable AI, the third wave. That assumes that there've been a couple of waves: one in the sixties and seventies, maybe another one in the nineties and 2000s.

The first one was dealing mostly with expert systems and a little bit of natural language processing. The second one was obsessed with ontologies and other ways to organize knowledge. And this one, the one we're in the middle of, is capitalizing, as you guys explained to us, on the fact that we have enough technologies to process all this data, and we have access to larger scales of data and, as a result, better-quality AI. Many people will argue, Nathan, that the previous two waves ended up with a lot of promises and disappointments after that. Are we bound to be surprised positively here, or are we bound to be disappointed again? After this extraordinary explosion of creativity, are we going to continue to be more creative with AI, or are we entering an AI winter?

Nathan Schurr: Very good question. I began my graduate studies with an advisor who had just been through the AI winter and had a lot of those ideas still very much at the front of his mind, but he still allowed me to explore in new ways. I was a part of, say, the tail end of the second wave. It's tough.

It's funny, when people ask me about prediction, because this is a very meta question: predicting is exactly what the type of model we're talking about here today does. Charlie is not having a discussion as much as she is trying to predict where you would want the discussion to go. Predicting the future, though, if you ask me, is very similar to predicting the stock market. In the near term, I've got no idea. But in the long term, I have faith that the stock market is going to continue its traditional and quite successful rise.

And so I would probably have a similar perspective on artificial intelligence, that there might be ups and downs, that there might be over and under delivering that happens, but the macro level progress, to me, has been and continues to be astounding.

And maybe I'll follow that up with just two personal opinions here. One is that it doesn't have to be as harsh of a winter if we understand and predict and set accurate expectations for what we want out of our AI. The other: you mentioned earlier, when asking Charlie, the teaming aspects. I strongly believe that we have made such advances, even in the last few years, on deep learning transformer-type models that the problem right now is not in AI's ability to do task work. I think the real enabler here is AI teamwork, and if we can crack that nut, I don't know if it'll allow us to avoid a winter entirely, but it'll give us a small bridge across that gap.

Daniel Serfaty: Oh, thank you. Thank you for your cautious optimism and hold that thought because I really want us to explore this notion of AI as a teammate, as both you and Charlie so far have mentioned in your remarks. But I want to give both Deirdre and Pat an opportunity to comment on this notion of the waves and promises and possible disappointments. They haven't had as long a history in the field as you. And certainly not me, but I would be very interested in their perspective on that. If they would like to add something to Nathan's comments or even disagree with them.

Pat Cummings: You might call me a summer child, right? I came into AI right at the start of the third wave. I never experienced the winter, and it's hard for me to really understand what that was like. So I think that makes me quite an optimist. Even if you hit the pause button today and no significant advances were to happen in the next year in the AI field, there's so much work to be done on how we interact with AI, and I feel like we're playing catch-up. So, even if there's no new deep learning model or some big framework that comes out tomorrow, there's so much to be done with what we have now that I think progress would not stop.

Daniel Serfaty: Okay. That's fair enough. Deirdre, you want to chime in on this one?

Deirdre Kelliher: Yeah. So I think I come from a similar perspective as Pat. I haven't been through the AI winter, necessarily, but I think that both Pat and Nathan are pretty spot on. At this point, with the speed at which innovation is moving in the AI field and the number of domains that it's now affecting, the ball is rolling. I don't think we're going to reach the singularity by 2025 or 2030; I could be wrong, but I don't think that we're setting our expectations there either. And I think Nathan is very right that, as long as we manage our expectations, progress seems like it's going to keep happening. The reach of AI is just going to keep expanding.

Daniel Serfaty: Well, I'm very energized by all this sunny, summer-like optimism. That's great. I will ask you, as a way to conclude in a few minutes, to imagine our world in 2030, 10 years from now, around AI. But before that: it seems to be a major theme that one of the qualitative differences with this wave, one that was not present in the previous waves, or maybe not as explicit, is this notion of AI being a teammate to the humans it is designed to support.

People are talking about human-AI teaming, human-AI interaction, human-AI symbiosis, human-AI fusion, and these are very strong terms. These are not words that people were using 20 years ago, 40 years ago. And so, my question to you, and I would like really a direct answer: the way you think about AI today, do you see AI as a tool for us humans, the same way the pacemaker is a tool, a screwdriver is a tool, the computer is a tool, Google.com is a tool? Or do you see it more as a team member, as a teammate? And if you choose the either/or and you go one way, please give me the rationale for your answer. Who wants to start? Pat, tool or teammate?

Pat Cummings: Teammate. It doesn't take long, working with Charlie, to rationalize that teammate answer. Throw someone in front of Charlie and say, "Here's a tool, here's how you use it, how to get the most out of it," and they will flounder. There's some value there, but they won't get everything out of it. There is a relationship that you develop. In the way that she speaks to you and the way that you talk to her, in order to get the most value, you have to work together. Back in the first days, when we started working with her, when she was on a panel that you actually moderated, there was the piece of training you to talk to Charlie. So, knowing how you should talk to her and how you should take her answers, there's definitely a team there; it's not just you plugging in some things and hearing what Charlie's saying.

Daniel Serfaty: So, she's more a co-worker than a screwdriver. That's what you're saying.

Pat Cummings: Yeah, exactly.

Daniel Serfaty: Deirdre, what's your perspective on that? Tool or teammate?

Deirdre Kelliher: I don't know if I have a direct answer. It almost raises a question and that is, is there a difference between a teammate and a tool? Not to be disrespectful to any of my coworkers, but if you think about the people that you work with, say you're a project manager. You could think about your team, your workers, your employees, as tools. They have strengths and weaknesses, they have specific skillsets.

And then on the other hand, you could think about very basic things as teammates. People love to personify the things that they care about. You can think about people who name their cars, and a car you might think of as a tool, but people grow attached to it. And like Pat was saying, there's a relationship.

We love to personify, so think about what exactly the difference is there. You could say, "Well, maybe what makes it a teammate, as opposed to a tool, is its ability to work independently and to get things done." But then think about, say, a printer. If you want to print 30 sets of pages, collated, you can tell it what to do, you can leave, and you can come back and the work gets done. So, I suppose I don't know that there's a discrete difference there, but I will say that I do think of Charlie as a teammate.

Daniel Serfaty: That's very interesting. Thank you for opening that up. Nathan, I have to ask you that question. You're chief of AI. You probably are thinking about those things every morning.

Nathan Schurr: It's a good question. And I'll plus-one what was said by the other folks here. I'll say this, though: I'm not saying that all AI for all situations needs to be elevated to the level of teammate. I still think there are situations in my own life where I just want something to be a tool. And maybe, as Deirdre was suggesting, there are times when I want to interact with people in just a siloed way: "You are just a tool, a service to me. I'll give you some input, you provide me output, and that's it."

But I think, when you get to these situations where there's a lot of uncertainty or time criticality, or you have complex work to be done that is intertwined and interdependent in different ways, that's when teamwork really is worth the effort and the overhead, for human teams, and for AI to be a part of those teams. And I strongly feel that what we're trying to make steps towards here is the point where it's full-fledged, bi-directional teamwork.

It's just the same way you look at a paper that was authored by two humans: if you squint, it starts to get really tough to tell who wrote what part of the paper after so many rounds of editing and revising, et cetera. I think you're going to have very similar challenges with humans and AI. And if you can't quite tell whether you had the great idea, or you just knew that Charlie had the great idea, or you just riffed off of each other, I think it doesn't matter. But I'm confident that together, you and Charlie will get to places that you alone would not have been able to go.

Daniel Serfaty: That's both entertaining and profound. And I have been asking that question of myself. I am engaging in a public debate with an illustrious contributor to our field, Dr. Ben Shneiderman, who is making the tool argument, with me making the teammate argument. I think the fundamental paradigm shift is not so much the emotional attachment or the emotional connection that you have with a teammate. And Deirdre, with all due respect, you can give your car a name and treat it like a person; I don't think it will be reciprocating.

And so, I think the major paradigm shift from classical human/machine interaction is the fact that the machine is learning. As it interacts with us, it's learning about us, learning about other things that we don't know about. And as a result, it is changing, and it is forcing us to change.

And that co-adaptation is really the key to understanding the teamwork. I think we need to do much more work on that. We're just scratching the surface right now of what we understand about human teams, and we are trying to apply that metaphor to human-AI teams, which will be different than human teams.

And so, I hope to be able to conduct a podcast with you in a year, same team, and Charlie, and maybe in 10 years, and see where we're at. Speaking of which, I'd like to ask you one last question.

Close your eyes. We are now in September 2030, and this podcast continues on a weekly basis. And we are asking how this podcast is different, now that AI has evolved over these 10 years. Tell me how it is different. We're in 2030. Who wants to jump in?

Deirdre Kelliher: I can start.

Daniel Serfaty: Deirdre, you're the brave one. Go ahead.

Deirdre Kelliher: I think now is an especially interesting or thought-provoking time to be thinking about this question, because if you had asked me this in 2010, I never would have guessed really anything that happened this year. But I think that raises a point: I would hope, at least, that AI is going to evolve with the world. It's going to be related to what's going on in the world at the time.

So, I might guess, or hope, that technologies related to environmental issues will improve. But I could also see an increase in the micro-targeting kind of thing we're seeing on social media. So, I think it's just going to advance with the world. AI is not developing in a bubble. So, I think that's hard to know.

Daniel Serfaty: I am appealing not to your forecasting prowess, but rather to your imagination. So, Pat, what do you think?

Pat Cummings: The first thing I think, which is not a very big leap (it could be a leap that happens in one year), is that Charlie would be much more a part of the conversation for everything. And I don't think she'd be the only AI presence.

And I think the modality of this conversation will be very different, whether that means there's also video or text going on, and how AI takes part in that. But it's hard for me to imagine 10 years out into the future, just looking at what happened in the last 10 years: nothing that's going on right now would have seemed possible, or near possible, maybe not even thought possible.

Daniel Serfaty: Thank you. It's difficult, and the difficulty of imagining that is because the very pace of innovation is not only fast, as Deirdre mentioned, it's accelerating. It is very difficult to imagine something that is accelerating at such a pace, and changing not just in a quantitative way, but in a qualitative way. Nathan, give us your forecast, your brief forecast, for 2030.

Nathan Schurr: A few different ideas. So, in 2030, podcasts are going to be quaint little historical things. I imagine they'll be multimodal in and of themselves. Watching a podcast will be like having a dream: it would be able to create experiences and sensations, not just auditory, but also touch and feel, et cetera.

And consequently, Charlie's capabilities would be able to produce and generate and develop things that go across these five senses as well. In addition, I would propose that, 10 years from now, there would be almost a merger. Right now, there are these dichotomies: there's a human and there's AI. And Pat brought up a good point: maybe there are multiple types of AI, and they would all be joining the conversation, like a transcriber, and an ideator, and an agent to just keep us on track.

But I would say that there's another spectrum, which runs from human to AI and everywhere in between. So, I would predict that, say, 10 years from now, I would be demonstrating for you a neural implant that is Charlie-based, one that would help me be just a better speaker in general. And so, when I was answering questions for you, part of the answer would be displayed on my retina and generated, and I would be selecting between potential responses to you, just in the same way Charlie's doing, but at a much faster pace.

And I would then also be generating speech and sound and composing music and generating touch sensations, all within the course of one podcast with you. And to riff off of your last point, to me, the most exciting and optimistic aspect of all this is the rate of change. Not only has there been awesome progress in just the year and a half or so that we've been working on Charlie; the rate of that progress continues to improve. So, I would argue that in the course of the podcast we will be recording 10 years from now, I will be able to clearly demonstrate to you how Charlie's capabilities have improved from the beginning of the podcast to the end.

Daniel Serfaty: Well, that's certainly a very ambitious and exciting prospect. I think that, in 2030, I envision a podcast, or whatever we'll call it then, maybe a virtual cast or maybe a dream cast, as you proposed, in which Charlie will sit in my seat here and do the interview with three or four very bright other AIs. And at some point in the interview, they will feature one human whom they will invite and ask some questions, and they will be amazed at how creative and perceptive this human being is.

Maybe that's a dream. Maybe that's a nightmare. I do not know, but it's certainly a very exciting time to be in our field. I really want to thank you, from the bottom of my heart: Nathan, Pat, and Deirdre, and obviously Charlie, too, for enlightening us, and also for giving us ideas and provoking thoughts that we didn't have before this conversation. You guys have been great, and I hope you'll visit the podcast soon to tell us some news about Charlie.