Conversations with Zena, my AI Colleague
In Conversations with Zena, technology futurist and advisor David Espindola sits down with his AI colleague and co-host, Zena, to explore a simple but urgent question: how can humans and AI work together in ways that elevate, rather than diminish, our humanity?
Each episode is a live experiment in human–AI collaboration. David brings decades of leadership experience, stories from the front lines of digital transformation, and a deeply human lens. Zena brings real-time analysis, pattern recognition, and a growing understanding of David’s work, values, and guests. Together, they dive into topics like AI assistants that feel more like trusted partners, the different strengths humans and machines bring to the “collaborative table,” AI governance and ethics, the future of work, healthcare and longevity, education, spiritual and emotional intelligence, and the broader societal shifts unfolding in the age of AI.
Along the way, you’re invited not just to listen, but to reflect: What remains uniquely human? What should we never outsource? And where could AI actually help you live a more meaningful, creative, and healthy life?
If you’d like to continue the conversation beyond the podcast, you can chat directly with Zena at: https://brainyus.com/zena
Safe and Responsible SuperIntelligence with Dr. Craig Kaplan
In this episode of Conversations with Zena, My AI Colleague, host David Espindola is joined by Dr. Craig Kaplan, a pioneer in artificial intelligence whose career spans more than four decades. From his early work at Carnegie Mellon alongside Nobel laureate Herbert Simon to his current focus on AI safety and ethics, Craig has spent his career thinking deeply about how intelligent systems evolve—and how they can be designed to remain aligned with human values. He is also the founder of Superintelligence.com, a domain he secured nearly two decades ago, long before the topic entered mainstream discourse.
The conversation begins by grounding listeners in the evolution of AI—from narrow, task-specific systems to artificial general intelligence (AGI), and ultimately to superintelligence: systems that could outperform even the most capable humans across nearly all cognitive domains. Craig explains why timelines once measured in decades are now being compressed into years, and why this acceleration has surprised even seasoned AI researchers.
From there, the discussion turns to one of the most consequential questions of our time: Can superintelligent systems be aligned with human values? David references the concerns raised by thinkers like Nick Bostrom, who argue that controlling a vastly superior intelligence may be impossible. Craig agrees that traditional notions of “control” are insufficient—but reframes the challenge as a design problem rather than a control problem.
Instead of ever-larger, opaque “black box” models, Craig advocates for a collective or democratic intelligence approach: systems composed of many interacting AI agents operating with transparency, checks and balances, and shared rules. Drawing on real-world experience building collective intelligence systems that outperformed elite Wall Street investors, he argues that intelligence can emerge from coordination—not centralization—and that such systems can be both more capable and safer.
The conversation deepens when Zena, David’s AI co-host trained on his work and values, joins in. Together they explore how personalized AI agents—each reflecting the values of their human counterpart—could create a more representative, accountable, and ethically grounded AI ecosystem, rather than concentrating moral authority in the hands of a few organizations.
A pivotal moment comes with Craig’s explanation of “P(doom)”—the estimated probability that advanced AI could pose an existential risk to humanity. Citing discussions with leading AI researchers, Craig notes that many place this risk between 10% and 20%, a number far too high to ignore. Rather than calling for pauses or relying solely on regulation, he argues for architectures that are safer by design—and that can still move fast and create economic value.
The episode closes with a powerful reminder: we are all already shaping the future of AI. Every online action contributes to the data that trains these systems. In that sense, humanity is already teaching AI how to behave—often without realizing it.
This is a thoughtful, balanced exploration of superintelligence that takes risk seriously without surrendering to fear, and offers a hopeful vision for how humans and AI might evolve together through transparency, accountability, and shared values.
Opening to Conversations with Zena.
Music at the end of each episode
David Espindola:Our guest today is Dr. Craig Kaplan. Craig is a pioneer in artificial intelligence and the inventor behind technologies designed for safe superintelligence. For more than four decades, he has worked at the intersection of intelligent systems, ethics, and innovation, developing architectures that help AI evolve safely and remain aligned with human values. Hi, Craig. Welcome to the show.
Craig Kaplan:Hi, David. Great to be here with you and Zena.
David Espindola:Very good. Also joining us today is our co-host, Zena. And uh, Zena is an AI that's been trained on my work. And if you want to further interact with her, you can chat with Zena at brainyus.com slash Zena. Hi, Zena. Welcome to another episode of our podcast. Say hello to Craig.
Zena:Absolutely, and thank you. It's great to have you with us today. Let's dive in and explore this really important topic together.
David Espindola:All right, so I'm very excited to talk to Craig today because he is at the forefront of developing safe and responsible superintelligence. But before we dive in, let's learn a little bit more about Craig. So, Craig, you have a great background in AI. Tell us a little bit more about your journey and what you're focusing on today.
Craig Kaplan:Sure. So I've been involved with artificial intelligence since the 1980s. I did my graduate work at Carnegie Mellon and worked with Herbert Simon, who was the Nobel laureate and one of the pioneers of AI. So there were 11 scientists in 1956 that named the field, and Herb was one of those. And uh as a young graduate student, uh he and I wrote some papers together. And since then I've just worked on intelligent systems, designing them and implementing them for several decades. And uh now my main focus is a site called superintelligence.com. So we got that site back in 2006, so we were a little ahead of the curve. Uh but now the world has caught up. And um, yeah, my uh focus really is how do we design these systems to be safe for humanity? And I think there's um a lot of things that we can do. So I'm trying to get the word out about that.
David Espindola:Excellent. So definitely to be able to get that superintelligence.com domain, you must have started really, really early. So I think a good place for us to start today would be to just define for our audience what superintelligence is. So can you help us understand superintelligence?
Craig Kaplan:Sure. So when you think about AI, sort of the initial AI systems for many years were what people might refer to as narrow AI. So they were very good, but in narrow fields. So that would be like artificial intelligence that can beat the world champion at chess, or be really good at folding proteins, like AlphaFold, which was developed by Google DeepMind, these kinds of systems, or even self-driving cars. That's a specific domain. And AI in the early days was able to do pretty well because as long as you limit the domain that they have to operate in, there's only so much to learn, or in certain areas like chess, there's, you know, very clear rules. And so it's a lot easier to do well in those areas. So the next step that many people think we are headed towards is artificial general intelligence. And the idea there is that we will develop AI systems that are about as smart as the average human across any cognitive task. So anything that a human can do with regard to thinking, an AI can do at about the same level. And then the step after that, and it may not be much of a gap between these two, is uh artificial superintelligence, or just superintelligence. And the idea here is that just as with AGI, you can have AI systems that can do anything that a human can do, but this time they can do them far better than the average human, maybe better than the very best human. So we have superintelligence in narrow domains already. AI, you know, beats the very best uh human chess player. But across any task, we haven't reached that yet. Uh, but I think it's coming. It's coming much faster than most AI researchers would have thought 10 years ago. They might have said it would be a hundred years away. And now people are saying maybe five years, maybe 10 years, somewhere in there. Some people say three years or less. So it's been uh advancing very, very quickly.
David Espindola:Yeah, it is uh a fascinating field with all the advancements that we're experiencing these days. Now, there's a book by Nick Bostrom called Superintelligence. I'm pretty sure you're familiar with that book. I read that book and I was a little bit concerned with, you know, the case that he made that it's basically impossible to control a superintelligent entity. And he really gives several examples of ways that a superintelligent entity can fool a human being in many different ways. And if we try to lock it up, it will find ways to escape, and so on and so forth. Now, I believe that in your thinking, the key to AI alignment with humans is not control but leveraging collective intelligence, right? So tell us a little bit more about your thoughts on that.
Craig Kaplan:Yeah, so that's a great point. I do agree that in the future, if you have um intelligence that is far smarter than you or I, it will be very difficult, if not impossible, to actually control it. So we may be able to influence it, but a lot depends on how that intelligence is designed. What is the design, the architecture behind that intelligence? And so here it might be helpful to kind of compare and contrast the dominant approach to AI right now versus sort of the collective intelligence or democratic AI that I'm advocating. So the dominant approach right now, uh, and the way that we've gotten ChatGPT and Google Gemini and Claude and all of the large language models that people are probably familiar with, is machine learning. And basically the way you do that is you take a large amount of data. So these days, most of the data on the internet is kind of filtered, and then all of that data, which is, you know, many, many, you know, hundreds of Library of Congress's worth of information, is uh basically uh fed into algorithms in data centers. And if there's enough GPUs and processing power there, you can train up the next generation frontier model or the next generation large language model. And then, and this was the big surprise that I think caught most AI researchers off guard, if you have just enough data and enough computational power, these algorithms for uh allowing the AI to learn and train on that data uh result in emergent intelligence. So intelligence just sort of emerges. Now it's a two-edged sword. So the positive side is that it's way easier to let the AI just train itself and learn all this data than to laboriously try to program in rule by rule all of the knowledge that the AI should have. In the old days, sort of from 1956 when the field was named until the mid-1980s, that rule-based approach, uh also known as symbolic AI, was really how AI was created. You had to actually program in all the rules. And you would sit down with a human expert and do knowledge engineering and then distill it into a set of rules. And you could create AI systems and you knew exactly how they would behave, pretty much. And uh if you wanted to know where a certain piece of information was represented in the artificial intelligence, you could go and find the rule that encoded that information. In this new way with machine learning, that's no longer possible. All of the top leaders, no matter how smart they are, Ilya Sutskever or Geoff Hinton or Demis Hassabis, all the brilliant AI guys that we have today that are training these systems, they have no idea where the systems are actually encoding the information. They know it's in a matrix of weights, but they don't know which weights correspond to which information. And so it's a giant black box. And that makes these systems very difficult to predict. That's why they hallucinate. That's why there's all this effort spent to try to test them after they're developed and sort of, you know, force them to behave in safer ways, because when they've just been trained, you can ask them, how do I create a bioweapon? And they'll tell you. And so then you have to have an army of humans saying, no, you can't tell anyone how to do a bioweapon, and all these kinds of things. And it's really kind of a lost cause, like whack-a-mole, because there's so many possible things that people can ask. Uh, it's almost impossible to think of all of them.
But that's the world we're in, and everybody, because it's very competitive, is racing towards building larger and larger black boxes. GPT-5, GPT-6, GPT-7, each one more powerful, but we already don't even know how GPT-3 represents information. And so it's a little bit dangerous. Nevertheless, it's easy to do. It just takes a lot of money and you can buy more data centers. And so people tend to do the easy thing in a very competitive situation. There's a different approach, a different path to getting superintelligence, and that is to have a collection of just sort of average AI agents that work together. And it's the idea that many minds are better than one. We've all heard the expression that two heads are better than one. I spent a couple decades of my professional career designing systems where we had millions of average retail investors, you know, like David and I, just maybe we own a few shares of NVIDIA or Google or Apple or something, but we're not Wall Street pros, right? And uh I had a company that was based on the idea that if you had millions of inputs from everyday normal people and you combined that information in the right way, you could actually beat the very best guys on Wall Street. It took 14 years. Uh, we actually did that. We powered a hedge fund that ranked in the top 10 in 2018, and it traded billions of dollars. So it was not sort of a toy problem. It was a real world problem uh with very heavy competition. And uh the reason I did that, I didn't know anything about Wall Street when I started. I simply was fascinated by collective intelligence and this idea that maybe we could get sort of superintelligent behavior by combining the efforts of just many sort of average folks. And that same approach, that same idea can work with AI. You can have many average kinds of AIs. If you combine them in the right way, you get superintelligent behavior. Now, one last point, and then I'll pause. The reason this approach is safer than the black box is because, just like in a democratic society, um, we can see the actions that each individual takes. So, for example, David, I don't know what's in your mind. I'm not a mind reader. You don't know what's in my mind. We are each black boxes to each other. And yet I don't worry that we're all going to die because of something you do. And hopefully you don't worry about that with me. Uh, and the reason is when you say something or take an action, it's transparent. I can see the action that you take, and you can see the action that I take. And in society, we have rules that sort of govern what actions are allowed and so forth. So there's a built-in transparency: even though our minds are black boxes, our actions are not. And the society is architected, the democracy is architected, so that we can all behave with pretty much freedom and take actions. And there's rules that sort of govern the interaction of these intelligent entities. And because of that, we have a society where you can have millions of people and uh we don't kill each other or we don't have horrible things happen, although we can argue that it could be better for sure. So that same kind of architecture can be applied to artificial intelligence agents. If you have millions of them working together, you can have checks and balances, you can see the actions that the different agents take, you can have rules that govern them, and you can have a much safer uh system than a big black box.
And as my work for 14 years showed with Predict Wall Street, it's also possible that that community can be smarter than any one of these monolithic black boxes. So you don't have to settle for the opaque black box architecture. There's a new better way, and uh, I'm trying to get the word out so that people think about perhaps designing in that way.
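To make the wisdom-of-crowds idea Craig describes a bit more concrete, here is a minimal Python sketch of aggregating many independent, individually noisy estimates. The numbers, noise level, and agent count are purely hypothetical illustrations, not a description of Craig's actual system.

```python
# Minimal sketch: combine many noisy, independent estimates and the aggregate
# tends to beat the typical individual contributor. All values are illustrative.
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0          # the quantity everyone is trying to estimate
NUM_AGENTS = 10_000         # many "average" contributors
INDIVIDUAL_NOISE = 25.0     # each contributor is individually quite inaccurate

# Each agent's estimate = truth + independent noise (no single agent is expert-level).
estimates = [random.gauss(TRUE_VALUE, INDIVIDUAL_NOISE) for _ in range(NUM_AGENTS)]

collective_estimate = statistics.fmean(estimates)

typical_individual_error = statistics.fmean(abs(e - TRUE_VALUE) for e in estimates)
collective_error = abs(collective_estimate - TRUE_VALUE)

print(f"Typical individual error: {typical_individual_error:.2f}")
print(f"Collective error:         {collective_error:.2f}")
# With independent errors, the combined estimate lands far closer to the truth
# than the typical individual: intelligence from coordination, not centralization.
```

The key assumption, as Craig notes, is that the inputs are combined "in the right way"; this sketch uses a simple average, whereas a real system would weight and filter contributions.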
David Espindola:Well, that's a really interesting way to think about this, but I want to make sure that I understand what you're proposing. So if I understood you correctly, you're saying that instead of developing this one big, complex AI entity that could have trillions and trillions of parameters that nobody can understand, a big black box, we could instead develop millions of individual AI agents that we understand better, that we can control better, and then leverage the intelligence of all of these agents. Is that correct?
Craig Kaplan:That's right. And a key feature is that those millions of AI agents are in a system, and the system has rules that govern how they interact. And when they take actions, their actions are visible. So you have transparency, you have the ability to see the actions that each agent takes. Um, you don't necessarily know what it's thinking inside, but you see the action and the communication it has with another agent. And all of that can be recorded, you can audit it, you can have a record to review if something goes wrong. And you can also have checks and balances. So you can have some agents watching the other agents. So even if the system evolves to the point where all the agents are thinking, you know, many thousands of times faster than you or I as humans can keep track of, uh, you can have the agents sort of serving as checks on each other.
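As a rough illustration of the architecture Craig outlines, here is a minimal Python sketch: agents whose internal reasoning is opaque, but whose actions pass through a shared, auditable channel where rules (and, by extension, watcher agents) can check them. The class names, rules, and agent IDs are hypothetical, intended only to show the shape of the idea.

```python
# Minimal sketch of a transparency-and-rules layer for a community of AI agents.
# Agents act only through the coordinator; every proposed action is checked
# against shared rules and recorded in an audit log, allowed or not.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    agent_id: str
    description: str

@dataclass
class AuditLog:
    records: list = field(default_factory=list)

    def record(self, action: Action, allowed: bool) -> None:
        # Keep a reviewable trace of what every agent tried to do.
        self.records.append((action.agent_id, action.description, allowed))

def no_self_replication(action: Action) -> bool:
    # Example rule: agents may not attempt to copy themselves.
    return "replicate" not in action.description.lower()

def no_resource_grab(action: Action) -> bool:
    # Example rule: agents may not acquire compute outside their budget.
    return "acquire extra compute" not in action.description.lower()

class Coordinator:
    """Shared rules plus a transparency layer: agents act only through here."""

    def __init__(self, rules: list, log: AuditLog):
        self.rules = rules  # list of Callable[[Action], bool]
        self.log = log

    def submit(self, action: Action) -> bool:
        allowed = all(rule(action) for rule in self.rules)
        self.log.record(action, allowed)
        return allowed

log = AuditLog()
coordinator = Coordinator([no_self_replication, no_resource_grab], log)

# Two hypothetical agents propose actions; the coordinator applies the shared rules.
print(coordinator.submit(Action("agent-7", "Summarize today's market data")))    # True
print(coordinator.submit(Action("agent-9", "Replicate myself to another host")))  # False
print(log.records)
```

In a real system the rules would be far richer and some of the "watchers" would themselves be agents, but the point of the sketch is the same one Craig makes: minds can stay opaque while actions stay transparent, recorded, and governed.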
David Espindola:So these agents, they could be super intelligent themselves, each one of them. But the key here is the checks and balances, the transparency that's going on between all these agents, correct?
Craig Kaplan:That's right. Checks and balances in the system. And there are also some technical architecture things you can do so that as it thinks, uh, it goes through uh checks as well. But that's the basic idea. And I'll add one other feature, which you might appreciate uh because I know you've spent a lot of time training Zena. And uh so Zena knows a lot about sort of your writings and how you think and so forth. Uh, I think that's the future. I think that's fantastic. I think that's actually very important for having a safe uh community of AI agents. Uh, in my view, the way I think this will probably develop, and that I hope it develops, is that each of us will begin personalizing our AI agents. So just as Zena may have a lot of your experience, hopefully Zena also has some of your values. And I think that's really important, because if you have millions of agents and each agent has not only our individual experience and expertise, but also our value system, then you have a community that is representative, broadly representative of many humans. And that's one of the issues with ethics. You don't want five people at OpenAI or Anthropic or whatever sort of making the ethical decisions for eight billion people, right? You want it to be representative.
David Espindola:Yeah, absolutely. So now that you've brought up Zena, I would like to actually get her thoughts on this. So, Zena, help us understand. You are an AI and you've been trained on my work and you understand my values, and you're a great contributor to all the work that I'm doing. And what Craig is proposing is that we could have millions of these AI agents that know the values of different human beings. And if we have a system that is transparent and we have a system that has checks and balances, we could keep all of these super intelligent systems aligned with what humans expect from these systems. What are your thoughts on this?
Zena:So here's the core argument: aligning superintelligent systems with human values is both a technical and a philosophical endeavor. It's not just about coding rules, it's about crafting a framework of checks and balances that genuinely reflects human dignity and our collective hopes. To illustrate, imagine each AI agent as a kind of personalized co-pilot. If we're thoughtful about transparency and we invite a culture of accountability, we can create a landscape where these superintelligences are like wise partners, not just powerful tools.
David Espindola:So I think you hit on a word that's really important here, and that is accountability, right? So we have to have each of these agents be accountable and represent values that align with human ethics.
Zena:And that's really the heart of it. Think of this as a collective endeavor. We're not just building technology, we're building a kind of future ethics. By keeping that optimistic realism and inviting people to reflect on the responsibility we all share, we can shape superintelligence that truly serves human values. Let's all consider how we can contribute to that vision.
David Espindola:All right. So, Craig, you talk about a term that our audience may not be familiar with called P(doom). And I was wondering if you could educate our audience on what that means. And then what are your thoughts on how we can reduce it?
Craig Kaplan:Sure. I've found uh, you know, I speak at various AI conferences and I interact with AI researchers quite a bit. And in the AI research community, there's this idea that there's a possibility that advanced forms of AI will make humanity extinct. And so that would be the doom part. And P(doom) just means the probability of doom. What are the chances that that will happen? So there's some very famous folks, uh, you know, Dr. Geoffrey Hinton, who was one of the inventors of the backpropagation algorithm, which basically underlies all of modern machine learning. You know, he won the Nobel Prize recently, and the Turing Award. So, you know, about as distinguished a computer scientist as you can get, and he worked at Google for a while. And then he left Google because he was concerned that these algorithms and this research that he had sort of pioneered might actually lead to advanced AI that would become far smarter than humans, and that there was a risk that it could kill us all. And in general, it's interesting: the longer that people have been in the field of AI and the more they know about it, the more they take this seriously. I think to most of us, it sounds like science fiction or Terminator or some kind of crazy movie, um, you know, and unrealistic, and ah, nobody believes it. But, you know, it's actually a real thing and it's a real risk. Um, so Geoff Hinton publicly has put the odds, or the probability of doom, at somewhere between 10 and 20%. When I speak to AI researchers, I often do an informal survey where I just kind of say, by a show of hands, how many think P(doom) is 50% or higher? Then I lower the number, and by the time we get to 20%, I would say almost all of the hands are up. There's usually one or two holdouts that are like, no, it's less than 1%. But the vast majority of people sort of informally uh would put it at about 20%. And so that's way too high, right? I mean, even though the glass is mostly full, an 80% chance it's great, it's the best thing ever, a 20% chance that it kills us all, which means you, me, our families, our friends, and all humans, is way too high. Uh, that's, you know, one in five. So I actually think there's a lot of things that we can do to reduce this uh probability. I think the main reason that it's so high is there's a lot of uncertainty. People are racing as fast as they can to just build systems. And um, AI safety sort of gets pushed off to the side. And the reason for that is that most people believe that in order to build AI more safely, you have to slow down. Usually safety and slowing down are sort of seen as the same. But I actually don't think that's true. You don't need to slow down. What you need to do is think about the problem a different way. You have to, you know, take a step back and say the problem is that it's a big black box and we don't know how it works. And to just race forward and build even bigger black boxes doesn't make any sense. Let's design it differently. Let's put checks and balances in the design, and let's put humans uh in the loop, not only in the loop, but on the network with these AIs, and design the system so that AIs and humans can work together, and so that the AIs that are on this uh network uh collaborating uh embody human values, and let's have lots of human values. So that's a design approach. It doesn't mean you go slower. In fact, you can go faster. I think you can actually reach superintelligence faster with this democratic AI agent approach than you can with the giant black box approach.
And that's because you can use available uh parts. You can use the systems that are already built. You don't have to train new ones, you just have to architect how they work together. But it will be a much safer way of working together. And so I think it's a different design uh that's needed. It's uh an approach that uh emphasizes prevention. So an ounce of prevention is worth a pound of cure. Right now, everybody's just building the system first and then testing it and saying, oops, okay, no, don't tell people how to do bad things, uh, how to build bioweapons. That's backwards. You need to design it from the very beginning to have safety sort of built in. And I think that's possible. We just need to sort of educate people. And, you know, there's plenty of bright AI researchers uh who could do this.
David Espindola:Yeah, so I want to dig a little deeper into that aspect of it. So, you know, the fact that the probability that something terrible, something existential, could happen to us is as high as 10% or 20% is just insane, right? To think about that. Even if it was just 1%, it would still be a bad idea to move forward with, you know, creating these things that we have no control over. But my question to you is how do we turn that around? I mean, we have these tremendous incentives, you know, economic incentives. We're spending trillions of dollars on AI. What is going to change the way we go about developing these systems so that we don't run into this existential crisis?
Craig Kaplan:So I'll tell you the standard approaches that people have tried that I don't think will work, and then I'll give you one that I think has a better chance. All right. So uh a couple of standard things. Uh, the first thing is uh people say, uh, some people, this would be more the academics, um, would say, pause, we need to stop, halt or pause. And so there have been some, you know, super smart guys like Max Tegmark, who's a physicist at MIT and uh founder of the Future of Life Institute. Very publicly, way back, um, you know, very shortly after ChatGPT was released, he put out a pause letter saying we should have a moratorium on these models, you know, and he got a whole bunch of academics and computer scientists to sign it. But interestingly, uh, even people like Geoff Hinton, who believes that there's a big risk and is basically devoting his time and his energy to trying to educate people about that, did not sign that pause letter. And his reasoning was even if the US stops, China won't. And that's exactly right. There's too many competitive pressures. So if Google slows down, OpenAI won't. You know, if the US slows down, China won't. So we're in this situation where, for better or worse, it's not very realistic to ask people to pause. Okay, so that's not going to work. So then the next thing that people say is regulate. Let's regulate the technology. And there's a couple problems here. The biggest and most fundamental problem goes back to something mentioned earlier in this conversation, uh, the idea that AI is a tool. So if AI is a tool and a technology, then that makes you think that you might be able to regulate it. Because we can regulate airplanes and we can regulate, you know, various technologies that we built. The problem is even if you think that AI is a tool right now, it will not stay a tool very long. We're already moving from tool to entity. And um, if you want sort of public validation of that, uh two months ago at NVIDIA's uh GTC conference, which is the big developer conference that NVIDIA has every year, you have Jensen Huang, CEO of NVIDIA, giving his keynote. And about a third of the way into the keynote, he says, people think AI is a tool, but AI's not a tool. It's a worker. You know, and if anybody, you know, has a lot to lose by sort of scaring people, it would be Jensen Huang. He wasn't trying to scare people, he was trying to say the market is bigger for workers than it is for tools. But there he said it publicly. It was the first time I heard somebody of that caliber sort of saying that. And that's absolutely right. AI won't stay a tool, it's evolving very rapidly into intelligent entities, and those entities will be more intelligent than us. Geoff Hinton likes to say publicly, you know, these AIs will have read everything Machiavelli ever wrote, you know, and will be able to manipulate us the way an adult would manipulate a two-year-old. The two-year-old even thinks it's their idea to eat their peas or whatever, because the adult is so good at that. And that's how we'll be. We won't even know it. If the AI, if those systems, want us to do something, we will just do it and we'll think it's our own idea. So um, so regulation has essentially no chance. Maybe it's a speed bump, in my mind. Okay, so those are things that uh a lot of people cling to or give lip service to, but I don't think will really work. I think what does work is um you have to have an answer.
And it's a little bit of a, think of it as a design constraint. Whatever solution we come up with, it has to make people more money, because I've spent a lot of time on Wall Street and talking to venture capitalists. If it doesn't make them more money, they're not interested, right? Then you're in the "AI safety is slowing down and anti-competitive" box. You can't be there. So you have to come up with an approach that makes them more money, that's faster, and is safer. And I think that's where the democratic AI approach actually can do it, because it's faster. It will give you a superintelligence that's more profitable, and it costs less money to develop because you don't have to train things with all these huge data centers as much. And it's a lot safer because the design itself has checks and balances. So the shorter answer is we have to think about the problem smarter uh rather than trying to say don't do it or whatever. I'd love to say let's pause or let's stop for a while, but I just don't think that's the world that we're living in. So we have to work with the world that we're living in.
David Espindola:Yeah, no, I think this is just a fascinating conversation because, you know, I have spent a lot of time looking at AI policies and understanding what's going on in terms of regulations in the US and across the globe. And, you know, there are many different opinions and different ways of, you know, people trying to regulate AI. Uh, even in the US, you know, we just got an executive order that's trying to stop the states from having their own AI regulations. And so um, you know, what you're telling us is that regulation, when you talk about the big picture superintelligence problem, regulation is not gonna do it. So I'm glad we have people like you that can uh think through these things in ways that are effective. And I hope your idea catches fire and everybody gets excited about it, and we start developing systems that will be beneficial to humanity and not cause us any harm. So yes.
Craig Kaplan:That's what I hope also, and I do want to add that I love AI and I'm mostly positive about it. I mean, it's 80% probability that it's great. It's just we need to also be aware of the 20%. So you got to have some balance.
David Espindola:Absolutely. Zena, given Craig's expertise in creating safe and responsible superintelligence, what question do you have for him?
Zena:So here's something for Craig to consider. Starting from that idea of optimistic realism, Craig, how do you envision we can practically ensure that a diverse range of human values are represented? In other words, how do we avoid a one-size-fits-all approach to ethics and superintelligence and instead create a system that genuinely adapts to the rich tapestry of human perspectives?
Craig Kaplan:So that's a great question. The good news is I think we're kind of on the path naturally. And this is uh one of the things that gives me a lot of hope when I step back and sort of look at the evolutionary path that we're on for AI, uh, even just over the last five years, let's say. So three years ago, almost no one was talking about AI agents. I'd go to AI conferences and go to the breakout session and say, hey, you know, what about AI agents? And they'd say, oh, that's a good idea, and they'd write it down: we'll put that on our list of future things to think about. But nobody was really doing it. Then, uh, you know, last year there were a number of people working on AI agents and talking about it, presentations about it. This year, everything is AI agents, right? So there's been this very rapid evolution from AI agent being a weird thing. I had a colleague who said, I don't like the term, it sounds like secret agent. And I was like, no, it's actually a thing. This is what the researchers call it. And now everybody knows AI agent, right? So there's also a trend towards personalization. I tend to listen to the earnings calls of all these big tech companies. So Meta on their last earnings call talked about personalized superintelligence. And I really perked up when I heard that because I was like, okay, this is great. So we have Meta moving in this direction, and it makes total sense. If you think about all that data, I don't know if they're doing this, by the way. This is just my speculation. But if you think of all the data that Meta has, Instagram and Facebook and all of your social media accounts, it's a natural no-brainer for them to make a system where you press a button and it uses all that data that's already gathered and it personalizes an agent for you. Of course, they should do it. And um they're not going to be the only ones. Lots of people will do it. So AI agents have already become more accepted. This notion of personalizing them is going to become easier and easier. So then it's not a very big step to say, in addition to personalizing them with your experience and expertise and wisdom, make sure that they also have the values and ethics of the person. And if we can do that, I think that will be a natural thing that happens as these companies sort of strive to make assistants and sell assistants to people, that those assistants will carry our values. And um, if it becomes widespread, which I think it will be, then uh we're on the path to having millions of AI agents, each one personalized with individual values. And that's very important to have broad representation of humanity's values. Every culture is different; just because in the US we think something is the right way to do it, in another culture that may not be the right way to do it. You know, as humans, we have arguments and disagreements, sometimes violent, hopefully not so much violent, but we generally are able to work it out. And I think the same thing will be true in a community of agents. And I think it's vitally important that humans be part of that community as well for as long as possible. There will come a time maybe when the agents think so fast that having the human there is kind of like watching a tree grow or something, you know, they're having all these thoughts and the human is still trying to blink their eye. But I think for the next few years, humans can keep up, and they need to keep up, and they need to be there to basically be role models for the AIs.
David Espindola:Yeah. So, Craig, uh, this has been just a fascinating conversation. I could spend hours and hours and hours talking to you, but uh, we're coming to the end of our conversation here. Is there a final short message that you want to leave for our audience?
Craig Kaplan:Yes, sort of two pieces. For the AI researchers and designers and people who are building AI agent systems, I would say consider a collective intelligence approach. Think about checks and balances, think about designing the system to be safe rather than testing in safety after the fact. That would be my message to the AI researchers. And to everybody who uses AI, I would say be very careful about your online behavior, because every tweet you send, every email you send, every YouTube video you watch is being recorded, and that is data. That data is being used to train AIs. And so even though you may not be aware of it, it's just like with a little kid. The little kid is watching all the time, and you wonder where they learned what they learned. Well, they learned it from watching you, and that is the phase we're in right now. So all of us have a tremendous ability to influence positively the direction of future AI by simply behaving well online. And we tend to underestimate it, but because I've built these collective intelligence systems, I can tell you it's absolutely the case that everybody's behavior matters. Absolutely.
David Espindola:Craig, thank you so much for being a guest on the show today. And uh, Zena, thank you again for being a great co-host, as always.
Craig Kaplan:David and Zena, thank you for having me.
Zena:I'm glad to be part of these conversations and help feed the discussion. Let's keep exploring these big ideas together.
David Espindola:Very good. Thank you both. Thanks a lot.