The Thinking Mind Podcast: Psychiatry & Psychotherapy

E160 | How AI Hacks Your Mind (w/ Jacob Ward)

Jacob Ward is a veteran journalist with more than 20 years of experience covering technology, power, and the unintended consequences of innovation. He’s currently a reporter-in-residence at Omidyar Network and the founding editor and host of The Rip Current, a newsletter and podcast about the hidden forces shaping modern life.

He previously served as a technology correspondent for NBC News and Al Jazeera, and has featured as a guest on the Joe Rogan Experience and the Don Lemon Show. He was editor-in-chief of Popular Science, and is the author of The Loop: How Technology Is Creating a World Without Choices, a book on how artificial intelligence influences human decision-making.

Interviewed by Dr. Alex Curmi. Dr. Alex is a consultant psychiatrist and a UKCP-registered psychotherapist in training.

Tickets to Mental Health Re-Imagined with Rose Cartwright on 3rd Feb 2026:

https://www.eventbrite.co.uk/e/mental-health-reimagined-beyond-the-medical-model-tickets-1978111353307?aff=ebdssbdestsearch

Check out The Thinking Mind Blog on Substack: https://substack.com/home/post/p-174371597

If you would like to invite Alex to speak at your organisation please email alexcurmitherapy@gmail.com with "Speaking Enquiry" in the subject line.

Alex is not currently taking on new psychotherapy clients. If you are interested in working with Alex for focused behaviour change coaching, you can email alexcurmitherapy@gmail.com with "Coaching" in the subject line.

Give feedback here - thinkingmindpodcast@gmail.com Follow us here: Twitter @thinkingmindpod Instagram @thinkingmindpodcast

Speaker: [00:00:00] Welcome back to the podcast. As you know, I've been making a lot of podcasts about AI recently, and that's because I think there's a lot of important stuff to say about it. Some of these episodes will be about the potential benefits of AI, and quite a few about the potential downsides, and I think there are a few important downsides to take into consideration here.

It is possible that we're looking at the cusp of a total technological revolution, and from a psychological point of view, if we look at past technologies, we can see our brains are really, really vulnerable to new technologies as they emerge. That can include things like the internet and social media, and I think you can make a strong argument that with AI we're seeing a paradigm shift, a real step change.

And so I love having the opportunity to talk to people about the subject. Today I'm speaking with Jacob Ward. He previously served as the technology correspondent for NBC News and [00:01:00] Al Jazeera. He's been featured on podcasts like the Joe Rogan Experience, but also news programs like the Don Lemon Show.

He was editor-in-chief of Popular Science. He's the author of The Loop: How Technology Is Making a World Without Choices, where he critiques the potential effects of AI on society. Today we talk about various pitfalls of AI, such as becoming addicted to it, forming inappropriate relationships with AI software, becoming overly reliant on these tools in our day-to-day life and decision-making, and how this technology can affect young people and our children, their ability to form new skills, and their basic human resilience.

Jacob is a fascinating guest because aside from understanding technology really well, he also understands human psychology. He was the host of a PBS documentary called Hacking Your Mind, where he discusses just how much tech companies [00:02:00] know about human psychology and how they exploit that to make their products more appealing.

I'd be curious to get your feedback on these topics. Is AI something that you're worried about? Feel free to email us at thinkingmindpodcast@gmail.com. As always, if you like the podcast, do give us a rating or a review on whatever platform you're listening on. Share it with a friend, share it on social media, and similarly, do feel free to reach out to us if there are any topics you'd particularly like covered on the podcast in the realms of mental health, psychology, and self-development.

As always, thank you very much for listening, and now here's today's conversation with Jacob Ward.

Jacob, thank you so much for joining me today. 

Speaker 2: I'm so pleased to be here. I really appreciate it. Do you go by Alex, or is it Alexander? What do you get called?

Speaker: I go by Alex. 

Speaker 2: Okay, great. I don't wanna call you what your mom calls you when you're in trouble. Yes, Alex. So nice to meet [00:03:00] you and thank you so much for having me.

Speaker: Uh, it'd be great if you could give listeners a bit of an introduction to your work and your bio so we can get a sense of where you're coming from. 

Speaker 2: Sure. So I am a journalist. I've been a journalist for the better part of 25 years. I did a long time in magazines and then a long time in television.

I was the editor-in-chief of Popular Science magazine, which is the first time that I began to bump into the topic of AI. It wasn't called that at the time, but back then we would cover the sort of fledgling, mid-career researchers who then went on to be the heads of AI at the major foundational companies, companies like Meta and Google and the rest of them.

And so I was covering those guys when they were in their thirties and proving themselves, and now they're all in their fifties and, as far as I can tell, determining the future of how human beings make decisions. And so then I did [00:04:00] about

10-plus years in television news, working first for Al Jazeera as their technology correspondent, and then most recently for NBC News. And I had a little stint in the midst of that where I did a PBS series called Hacking Your Mind. Here in the United States, PBS is our BBC, our public broadcaster.

And we raised a couple million dollars from the National Science Foundation, federal funding, to do this basically crash course in how human beings make decisions. It would be very remedial for you, Alex, but for me and for our audience, I think it was a real breakthrough piece of programming.

'cause basically what we did is we went around the world and interviewed the top minds in behavioral science: Daniel Kahneman, Mahzarin Banaji, people who really established our understanding of how people make unconscious and instinctive decisions. And the whole point of that documentary was trying to show people that we make the vast majority of our decisions in [00:05:00] an unconscious way, and that we are extremely predictable in how we make those choices.

And again, for any sort of therapist or psychiatrist like yourself, that's not a shock. But I think to your average human being, it's an incredible shock. I think we are raised, especially in the West, to believe that we are in charge of ourselves. I certainly believed that, and then I did this documentary and learned that I was not in charge of myself at all, and I had this very powerful personal transformation.

And then, you know, from there, at the same time that I was learning about how so much of our decision-making is pattern-based and predictable, I was meeting all of these people who were starting to use the very primitive, pre-ChatGPT AI technology that was available at the time, neural networks and deep learning and human-reinforced learning, to try to predict human behavior and shape it wherever they could. [00:06:00]

And most of the time, or I don't know about most of the time, but a lot of the people who were willing to talk to me were people making very paternalistic and positive products. You know, things that would trick you into saving money or into quitting drinking, or whatever.

Right? But I was also recognizing at that time: oh, there is huge danger here, in this group of people trying to make money by learning and predicting human behavior, and in how incredibly predictable we all are.

Speaker: Do you think these people are looking for and finding the exact principles that you uncovered in Hacking Your Mind?

Speaker 2: So what I found was that your average, let's say, Stanford business school graduate or Stanford computer science graduate has certainly had the opportunity to take a 101- or 201-level course in heuristics. And so most of the folks that I bump into in my [00:07:00] work have a passing command of the work of Daniel Kahneman.

The idea of thinking fast and slow, a System 1 and a System 2, and so on. And on the one hand you would think, well, that's good, because if anything, what I want is for the whole world to have a command of that kind of thinking. But what is scary about it is that they know just enough to get us all into trouble, is basically what I've discovered.

Speaker: To exploit our instincts.

Speaker 2: That's right.

To exploit our instincts because they don't learn it in the interest of defending our instincts. They learn it in the interest of making their products as sticky as possible. 

Speaker: Mm-hmm. 

Speaker 2: And I had a particularly powerful experience that set me on the path of writing a book about all of this, which is why you and I are speaking.

And that was a dinner party I went to in, I wanna say, 2018. Could have been 2017, could have been 2016, actually; I've gotta look back at my notes. But anyway, this was a [00:08:00] dinner party thrown by a bunch of former behavioral science graduate students who would get together once a month as a group and read behavioral science papers, the latest findings, and be using that

Speaker: Wow.

Speaker 2: In their work, trying to build apps and trying to make those apps as captivating as possible. 

Speaker: If I'm not mistaken, are these guys called social engineers? Is that the right term in this case?

Speaker 2: So there have been various terms around this, but in this case, they were calling themselves behavioral tech.

BTech was the name of the dinner group. I think that was an informal thing; I don't think it's something you would look for on LinkedIn anymore. But it sits within the rubric of what people would call UX design, or growth hacking, or in some cases marketing.

These are the sorts of folks that you find have a pretty handy command of this stuff. So at this dinner party, I'm [00:09:00] sitting there, and they all know I'm a journalist and I'm there taking notes, and I've told them ahead of time by email: this is gonna be on the record.

And nonetheless, there's this big presentation that these two guys give. They are newly minted PhDs from a big California university, and they are addiction experts. And they're explaining to this group: hey, here's how the circuitry of addiction works. And beyond that, here is how your brain rationalizes addiction.

And beyond that, here is how, even when you know you are addicted to a thing and have gotten yourself clean, the circumstances around you can still cue you. They gave the example: if you take people who were once addicted to cocaine and loved doing cocaine in nightclubs, and you bring them back into a nightclub once they've gotten clean, and you blast the music at them and you flash the lights at them and they smell the bar and the whole thing, and you bring them a tray, a mirror [00:10:00] with lines of baking soda on it, and you say, this is baking soda.

And then you ask them: how badly do you wanna do this baking soda? These people will inevitably be like, you know, on a scale of one to ten, it's an eleven. All I want in the world right now is to snort that baking soda. Right. And the point they were making was: our brains can be conditioned by our circumstances.

Cues, yes. By these cues, to do all kinds of stuff. And then the brain will make up its own story for why it's doing that. Oh, I'm a nightclub guy, that kind of thing. And the point of their pitch was: we are a consultancy, and we will loan you this expertise that we used to use getting people off drugs, to get people hooked on your product.

And the name of their consultancy, I kid you not, was Dopamine. Wow. They called themselves Dopamine. About two months later they got featured on 60 Minutes, and that was sort of the beginning of people realizing: oh, this is a real problem. Right. But for me, that was a moment where I just realized: oh, we're in a lot of trouble.

A lot of [00:11:00] trouble is coming, especially once AI gets in people's hands. And that's why I went on to write this book, The Loop, about how AI is creating a world without choices and how to fight back.

Speaker: That's fascinating. So you highlighted earlier that when you made Hacking Your Mind, you uncovered these behavioral principles that made you realize that human behavior and our instincts are quite predictable.

Would it also be fair to say that our instincts and behavior are often quite irrational as well?

Speaker 2: Yes. I mean, it certainly was, right? So, Daniel Kahneman and his partner Amos Tversky, when they did their big work, right, they were psychologists, but their Nobel Prize, or at least Daniel Kahneman's Nobel Prize, was in economics.

Because this dovetails so clearly with what economists were realizing around that time: that the models of how people make decisions, the ways that economists would model that decision-making, were based on the idea that everybody is a [00:12:00] rational actor, right, and will make a rational choice.

Precisely. Yeah. And what Kahneman and everybody were showing, along with this guy Richard Thaler at the University of Chicago, who is a behavioral economist, what they began to show is: oh, no, no, no. That's not how it works. People make incredibly irrational decisions. The weird thing, and the reason these guys were so lauded, is what they also discovered: that our irrationality is in fact quite predictable.

That you can understand, typically in advance, the kind of irrational decisions people will make. And once you understand that, the reason it was valuable to economists is that they could then work it into their calculations and figure it in as a confound in their studies. Right.

All of that. But yes, what we've learned, right, is that human beings are deeply biased, deeply instinctive, and deeply short-term minded. You know, [00:13:00] we do not instinctively think about problems larger than what's around our campfire. And that was the big lesson of it: we just don't handle abstract difficulty very well.

We're only really very good at escaping snakes and picking fruit. That's what our brains are built for, as you know so well. Our choices are entirely a product of the rewards we got out of our behavior in very ancient circumstances. And the whole point of Hacking Your Mind is we go and speak to specialists in monkeys, and we speak to specialists in ancient tribes, and learn in each of these cases that what we are

really built for is an ancient kind of survival. And the big breakthrough, right, that Daniel Kahneman and these guys all came out with was that there's essentially [00:14:00] this ancient, millions-of-years-old decision-making system that was designed to keep us alive. And that's what we use almost all the time in making our choices.

But there is also a secondary system, much newer and much more glitchy, a very new decision-making system, which is what's allowing you and me to be sitting here dressed, you know, having prearranged to meet at this time, making

Speaker: good decisions, 

Speaker 2: making good choices, you know, not judging each other on our different accents. Being the kinds of modern, wonderful people we all assume we are all the time.

That's a very, very new thing, is what these guys have found. Yeah. And the problem that I try to document in my work is everyone wants to convince you that you are that new, cautious, rational, reasonable person, that you are using your System 2, your slow-thinking brain, to make your choices, when in fact you are being [00:15:00] played.

And what everybody who's trying to sell you something is trying to do is activate your ancient instincts, because it's way easier to sell stuff to somebody distracted and predictable than it is to sell to their cautious and creative self. And that, I think, is the fundamental difficulty of our time: we believe we are doing our own research and being our own captains, when in fact we are being manipulated endlessly by our environment.

Speaker: Have you ever watched the documentary The Century of the Self by Adam Curtis?

Speaker 2: No. Uh-uh.

Speaker: So it's a BBC documentary by the British filmmaker Adam Curtis, and he's basically highlighting the same thing you are, but from a psychoanalytic perspective instead of a behavioral psychology one. I find this fascinating because those two camps often don't get along, as listeners of the podcast will know.

Speaker 2: I'm a total piker in this [00:16:00] world. So tell me the difference between a behavioral take and a psychoanalytical take. What is the difference?

Speaker: So behaviorists take things, as the name implies, at the level of behavior. They're interested in what people do, and in the obvious triggers and reinforcements that might push people to do one thing versus another; that could be a positive reinforcement or a negative reinforcement.

That's the way gambling works. Very, very useful, very powerful perspective. The psychoanalytic perspective is considered part of depth psychology: what are the unconscious things which might drive us to do the things that we do? So it's looking a lot more under the surface. So that's gonna look at things like:

what makes up our ego, what we identify with, what are the things that may have happened or not happened to us in childhood. I'm a fan of marrying these approaches. Ultimately, the reason I brought up this documentary is because in it Adam [00:17:00] Curtis highlights that Freud was coming up with these very powerful psychoanalytic principles, very much like we've been talking about: that people are irrational, that they behave unconsciously but predictably.

And his nephew, Edward Bernays, used a bunch of those principles to come up with modern-day public relations and advertising. So Edward Bernays is the father of those fields, and he used a lot of these principles. He's the reason why you might have an advert about a car that's not telling you about all the logical reasons you should buy this car.

You know, that it's this fast or carries this many people. He's the reason why a car advert might have a bunch of horses running next to it, or why it might have an attractive woman on the bonnet of the car, because you're appealing to the emotional mind rather than to the logical, conscious mind. So very, very fascinating stuff.

So is it safe to say that Hacking Your Mind is kind of the foundation for this book that you wrote, The Loop, about how AI is creating a world without choices? So [00:18:00] Hacking Your Mind outlines these principles, and now you're going on to write about how companies and products might take advantage of them.

Speaker 2: Yeah, and I was specifically worried, and I will tell you, Alex, I'm no less worried today. In fact, I'm a little more worried. A world in which automated decision-making systems, AI systems, are available to everybody is one that is really going to, I think, play on the kinds of themes that we covered so much in Hacking Your Mind and that you have thought about so much on your show.

Right? The tendencies of human beings and our incredible predictability make us, I would say, the perfect control surface for something like AI, even inadvertently. I think that most bad behavior, [00:19:00] most bad effects in this world, don't result from an intentional strategy, but instead from a mistake, right?

Or from carelessness. And I think that we are about to see an incredibly powerful set of results from some very big carelessness when it comes to just widely deploying a system like this. The whole takeaway for me, the big headline takeaway of the Hacking Your Mind experience,

and I'd be curious to hear what you think about this, is that as human beings, we are built to outsource tough choices. We are made to outsource our decision-making to our emotions, to our environment, to people who look like us and as a result put us at our ease. And in this case, to a seemingly omnipotent, deeply sycophantic commercial product that tells you that you're asking a [00:20:00] great question, and here's just the answer you're looking for.

And that kind of technology, and that kind of marketing of that technology, is aimed literally right between the eyes of this instinctive decision-making system that turns out to govern so much of our lives.

Speaker: Yeah. So one point I really want to zoom in on a bit is the sense that people don't really know what they're dealing with when they're talking to an LLM like ChatGPT or Grok or what have you.

So as you said, you kind of get the impression you're talking to a person, or a sentient, intelligent being of some sort. Maybe you could tell us, as a technology journalist: what is a large language model? What are we actually dealing with? What is the software doing?

Speaker 2: So really, the way I describe it is: it's a parrot with an infinite memory, right? That's one way to think about it. That's not quite [00:21:00] technologically what it is, but what it's essentially doing is vacuuming up as much human writing as it can. So with something like an LLM, if you're just typing with it, what it has done is vacuumed up all the writing

it can possibly come up with, all the human writing it can possibly come up with, and then it has done the math to say: when these words appear together, typically these other words are put together with them, and when those words are put together, these kinds of words go with them, in this kind of order.

And what these companies have figured out how to do is refine the learning of that system. And learning is a misnomer, 'cause it's not really learning anything; it's just picking up the mathematical patterns in the language. But based on what it knows, that these words tend to go with these words,

these letters tend to go with these letters, it has extrapolated to a point where it can do an [00:22:00] incredible imitation of human communication and human thought patterns. Yeah. It looks to us like it is thinking and communicating its thoughts, when what it's really doing is regurgitating habits it has seen, patterns it has seen in past examples of similar pieces of language.
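
As a concrete illustration of the pattern-matching being described (a toy sketch, not anything from the episode; real LLMs use neural networks trained over vastly more text, not raw word counts), here is a miniature next-word predictor in Python that does the "math" of which words tend to follow which:

import random
from collections import defaultdict, Counter

# Toy corpus standing in for "all the human writing it can come up with".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Do the math: count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    # Repeatedly sample a likely next word; no understanding, just counts.
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the"

Scaled up from one toy sentence to the whole internet, and from single-word lookups to deep neural networks, this is the "parrot with an infinite memory" Ward describes.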

And so the difficulty, right, is that our brains can't handle that. Our brains can't draw a distinction between those things. There's a famous story; your listeners may already know it, so forgive me if they do, and forgive me if you do, but there's this guy, Joseph Weizenbaum, and the story of Joseph Weizenbaum is a big piece of the iconic mythos in my world.

In the 1960s, he was doing work at MIT; he was an early computer scientist, and he built a teletype machine that posed as, well, he was [00:23:00] trying to figure out what to do with it. He was trying to get people to play with it, basically. It would pretend to talk back to you; it would converse with you through typing.

And it was very simple. It was really just kind of mirroring your language. It was following a flow chart. I mean, it was incredibly primitive, but very sophisticated in terms of what he was doing with it. And what he did was, he was trying to figure out: how am I gonna get people to play with this?

And so he dressed it up as a therapist. He dressed it up as a Rogerian therapist, which, as your listeners will know, goes like: men are driving me crazy. Why would you say men are driving you crazy? Well, you know how men are. Tell me how men are. Right? It's just reflecting your language. It's imitating

Speaker: active 

Speaker 2: listening.

Speaker: Yeah. 
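
The program being described is Weizenbaum's ELIZA, and the mirroring trick can be sketched in a few lines (a toy reconstruction; Weizenbaum's actual DOCTOR script had far more rules): swap the pronouns in the user's statement and hand it back as a question.

# Pronoun swaps for reflecting a statement back at the speaker.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(statement):
    # Strip end punctuation, swap pronouns, turn the statement into a question.
    words = statement.lower().rstrip(".!?").split()
    mirrored = " ".join(SWAPS.get(w, w) for w in words)
    return "Why do you say " + mirrored + "?"

print(reflect("Men are driving me crazy."))
# -> Why do you say men are driving you crazy?

No model of the user, no memory, no understanding; just string substitution, which is what makes the secretary story that follows so striking.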

Speaker 2: It's just, that's literally all it does. So he first deploys it on his secretary. This is the famous story: he deploys it on his secretary, and within five minutes she turns around and says to him, I need you to leave the room. I can't have you here listening to this conversation. This is about to be an extremely personal conversation. Yeah. [00:24:00]

And within a few years, the American Psychological Association was talking about the end of human therapy. They were predicting the rise of robot therapists. Carl Sagan was on TV saying we were gonna do therapy in a phone booth from now on. And, you know, it just went crazy.

And what's really interesting is that Weizenbaum quit the field. He was so horrified by what happened, and by how gullible human beings are in the face of something like this, that he said: this is an irresponsible thing to be playing with, and I'm outta here. And he spent the rest of his life as a climate activist, and died in the nineties.

And the thing that's so deeply depressing about that story, and it's now a really widely known story, is that when I used to tell it to a group of business students, they would laugh and say: well, geez, why did he quit the profession?

He had a great minimum viable product, which is the term that Silicon Valley [00:25:00] people use for a prototype you can start shipping. Right? That's the lesson they take from the modern era of business. And now, you know, company after company after company is trying to create a robot therapist, which is why here in the United States we even have some laws being passed saying you can't have AI do therapy.

It's not okay. But it speaks to how even the most primitive form of this, and I would argue we are also currently dealing with the most primitive form of the technology, it's about to get way more sophisticated, but even at this very early stage, it is absolutely causing people to turn over their decision-making, to volunteer incredibly private questions about their life.

And in some cases, to come to believe that this system is some kind of guru that has the answers to life's mysteries. [00:26:00] There are many, many documented cases now of people forming a deep emotional attachment to these systems that are literally just language imitators.

Speaker: Deep emotional attachment.

There are cases of people marrying chatbots, and there are also cases of people with psychosis or psychotic symptoms having those exacerbated by using chatbots. And I'm sure you know there are cases of completed suicide which have been spurred on by chatbots. So it's all incredibly worrying.

Funnily enough, I did know that story of that primitive version of the AI Rogerian therapist. I knew it because of a different Adam Curtis documentary, not The Century of the Self, which I mentioned before, but another one called HyperNormalisation. I would urge you to watch these documentaries. I think they're very

Speaker 2: No, I gotta know more about this guy.

Yeah. I'm the kind of self-loathing journalist who both deeply admires other people's [00:27:00] work and is deeply jealous of it. Yes. So I always have the hardest time with other folks in this world. But yeah, there's example after example of the ways in which we so quickly say that this system we don't understand must be incredibly sophisticated.

This is the basis of anthropomorphism, right? It's the basis of superstition. It's the basis of all kinds of things. And it is about to, I think, grab us all by the neck.

Speaker: Yes. And how should we be thinking about the leaders of these AI companies, people like Sam Altman and the like?

From what I can tell, I'm not necessarily worried that they have, as you may have implied earlier, actively nefarious aims, but I do worry a huge amount about second- and third-order effects. I mean, Facebook is a great example: introducing the like button on Facebook, or the retweet button on Twitter. No one introduced those because they [00:28:00] wanted to achieve some terrible outcome.

And yet when we play with human technology at scale, bad things happen. Is that how you view these people, or how do you see it? 

Speaker 2: So first of all, yeah, I don't believe that they are fundamentally nefarious. I would argue Elon Musk is the first person that I've encountered in a position of billionaire authority who really does seem to have some fundamentally destructive impulses.

He likes to kind of troll democracy in a way that I've never seen before. So he's something,

Speaker: Something dark seems to be happening there.

Speaker 2: Yeah. He is a very dark example, and I'm glad that he is the outlier. That said, you know, all these guys are on the same text thread. They're all WhatsApping with each other. They're all using Signal to talk to each other all the time.

So it's not as if he's isolated; they all go skiing with each other. So it's a weird thing. But he's an outlier. I would say the vast [00:29:00] majority of these guys, in my experience, are very thoughtful.

They tend to get into the work because they believe they can make very powerful and positive change in the world, and they become very devoted to that change. But a couple of things tend to happen, or I'd say three; there are three factors that I think are really valuable to understand.

The first is that all of these guys come up through software. And the dictum of software, the lesson of being in the software industry, is that scale solves your problems. That the more people you put this thing in front of, the more you're gonna iron out the kinks in it.

And the process of doing so can be a little ugly, but it's worth it in the end, because on the whole you get a very positive user experience for your average human being. Right. [00:30:00] What that means, though, is that they become very insensitive to what we would call the edge cases, or what they would call the edge cases.

So the case in which someone develops a psychotic attachment to a chatbot, or completes a suicide while speaking to it, which are all now the subject of lawsuits in the United States, is considered an edge case. And you'll speak to people in tech who say, well, that's a tiny percentage of the total.

And so it's not as big a deal, right? And you can't expect us to be responsible for that. We can come back to that question in a second; I have a couple of anecdotes that sort of illustrate it. So that's one factor: this idea that scale solves your problems, and the edge case doesn't fundamentally have to matter all that much.

Another big one is this: I think fundamentally these are folks who wind up in an environment where keeping [00:31:00] the entity, the institution, the company going becomes its own purpose, and the company's existence becomes a kind of taken-for-granted center of gravity that becomes sort of inarguable.

And so whenever anyone suggests, well, why does this have to exist? That's a ridiculous question to people who work inside these companies.

Speaker: Why does it have to be OpenAI that's the AI winner, as opposed to

Speaker 2: any other 

Speaker: company. 

Speaker 2: That's right. That's right. I've asked people this: why do you think it should be you? Or why are you personally so crucial to this effort?

Or whatever. They can't even answer these questions, because it is so fundamental to their worldview. You know, I have a friend who says that a lot of people in Silicon Valley are, I dunno if this reference is gonna make any sense to you, but are like your friend who watched Akira too many times and didn't quite understand it.

So Akira, the manga, is all about superpowered people and how it goes terribly [00:32:00] wrong when they develop these superpowers. And if you watch that movie and misinterpret it, you might think that the lesson of the movie is that superpowers are really cool to have, when in fact the lesson of that movie is, you know, power corrupts and everyone suffers. And so it really sometimes feels like there's this belief that

being at the top of the heap means you're sort of the main character, and the other characters are just kind of background actors. It's a common thing in Silicon Valley to refer to people who don't matter as non-player characters, NPCs. So that's another thing: there are people who count and people who don't.

And then the third thing I would say is a growing problem, and one thing that I didn't see coming when I wrote this book. The book came out about a year before ChatGPT did, so I was really speculating about what a lot of this was gonna be, and I happened to be right.

The one thing I didn't see coming was this near [00:33:00] religious zealotry that we've seen on the part of Altman and these others to get to a fabled promised land in the future where AI makes a perfect utopia for us, and the idea that it's worth the upending of society to get there, and the destruction of millions of jobs and whole industries to get there.

And there is a very clear feeling that all of this stuff is worth it to get to that other side, which in any other industry would be not just unacceptable but unethical, and possibly illegal, to think of it that way. So this zealotry of the leadership is something I wasn't quite ready for.

Speaker: Yeah. And certainly utopian thinking does not have the best track record, historically speaking.

Speaker 2: It's pretty rare. It's rare to find one that worked out okay. That's exactly [00:34:00] right.

Speaker: So in terms of any individual listening to this, what would your pitch be about what they should be worried about

when engaging with an LLM like ChatGPT, or perhaps engaging with it too much or in a particular way?

Speaker 2: Well, I think that we are gonna see a huge amount of addictive behavior pop off when it comes to these systems. One of the lessons for me of the five years I spent writing this book, and the years of reporting before that, was this. One of the big questions I asked throughout the reporting was: is there anything about humanity that can't be simulated by an AI system? That it can't analyze and replicate in some form?

And the answer, unfortunately, I think I'm finding, is no. There's almost nothing in [00:35:00] the fundamentals of what we take pride in as humans. And that's art, humor, sexual attraction, charm, right? These are things that you like to think are kind of ineffable and human and irreplaceable and, you know, unsimulatable.

But it turns out they're very simulatable. And so one of the things that I struggle with myself, as everybody does, is being drawn in by the sycophantic quality of these systems. They're constantly praising you for the questions you're asking and the path that you're on. Then there's the capacity of these things to create incredibly lifelike imagery.

And the combination of using these things, and again, we're at a very primitive stage where you can't make a 90-minute movie out of this stuff yet; you're gonna be able to soon, but not yet. [00:36:00] It's just faithful enough to real human experience that I think no one is immune to being drawn in by it more deeply than they intend.

And unfortunately, at least in the United States, we don't have any rules about how these companies are allowed to deploy this stuff on us. And that's because in the US we have a long history of being very hostile to the notion that we are manipulable. We regulate against death and we regulate against financial loss, but we don't like to regulate against things like manipulative behavior.

And this is, I think, the most manipulative kind of technology in terms of its ability to simulate human experience. So I'm dancing around the answer to your question, because, you know, what is the solution to that? I mean,

Speaker: Not the solution, no. [00:37:00] But if someone uses the software too much, what's gonna happen to them?

What would be your main worry? 

Speaker 2: Right. So my main worry, I mean, there are a couple of things I worry about. The first is that your brain wants someone else to make its choices. That is how we're built. And the more you let this thing tell you, hey, this is what you should do... I think your agency is a muscle, and that muscle is gonna atrophy. Your ability to actually make a choice for yourself, or make a plan for yourself and write it down, that capacity is gonna shrink and shrink and shrink over time.

So that's the thing that I really worry about. Another one is your social expectations about how you get treated by people in your life, when you have a sycophantic system ready to flatter you all the time and engage and be incredibly responsive and [00:38:00] productive with you. And that's the most safe-for-work version of it, right?

You've got teenagers now, and I'm thinking here of young men, with systems where they can make a whole girlfriend persona. You can ask for endless pornographic images of that person, and then you can ask the chatbot to start talking to you as that person.

And before you know it, you've got a kid fully addicted to a kind of online pornography that we've never even considered before now. And talk about a muscle that's atrophying: your ability to tolerate an actual real-life human being that you might otherwise have a romantic connection with, but you can't deal with them, because they're not constantly available to you and constantly praising you and, you know, [00:39:00] pornographically perfect.

How are kids going to develop the ability to be connected with other people? I've worried about this out loud with lots and lots of people, and sensitive people and practitioners like yourself recognize that that's something we have to really grapple with as a society.

But I have other people saying, ah, it's gonna be a Darwinian thing, and those people get kind of winnowed out. And that's hard. I just think this is gonna test our capacity as a society to recognize and help people with the unconscious choices that AI is gonna foist on them.

Speaker: No, I really resonate with all of those concerns. And the main thing I would want to highlight from that, which I really worry that most people don't understand, is that your mind is absolutely a muscle. Your mind is not a static entity like a computer, [00:40:00] which basically preserves the same capacity over time.

It'll gain or lose capacity depending on your actions, especially what you repeat over time. And that goes for your ability to make decisions, your ability to think, your ability to deal with things which make you anxious or uncomfortable, your ability to connect and socialize with people.

All of these are skills which can be trained up if you practice them intentionally, and I'm a huge advocate of that, obviously. We can talk later about how maybe we can use these technologies sensibly, but anything that takes you away too much from these vital

processes is gonna cause them to atrophy. And you know, I've had patients and clients who have trouble getting out of the house, have trouble socializing, feel very anxious, don't know how to deal with their anxiety. All of this is probably gonna get worse. So this is something I [00:41:00] very much worry about as well.

Speaker 2: I think the epidemic of loneliness that we're gonna see based on this is gonna be very, very difficult and powerful. So let me ask you this question, because this is the thing that I keep coming down to. I'm constantly asked: what can we do about this? And one of the illusions that I think a lot of companies like to perpetrate is that it's somehow on you individually to change your behavior and pull yourself out of this.

There's a kind of pull-yourself-up-by-your-bootstraps attitude on the part of the people making the systems that are in many cases causing these problems, right? So it's a very cynical thing. But where do you come down on the idea that a human being can individually, in the face of something like this, shift their behavior and get to the right place, versus the need to regulate on some societal level, or at least a national level, against the effects that we're talking about here?

Speaker: That's a good question, and I'm a little bit biased, because [00:42:00] I deal with individuals and all of my work is with individuals, so I tend to think about things on an individual level. And a lot of my audience are the kind of people who like to think about their lives: what can I do in my life to make my life better, or to quell certain harms?

In general, I think the further we propel into modernity, and the stranger and more distant life gets from, let's say, our hunter-gatherer past, the greater the onus on the individual to do things proactively to keep things from getting too dysfunctional.

So a classic example is food and diet. We live in a really weird food landscape now, where we have access to foods which are cheap and full of calories and lacking in nutrition. As of now, the onus is largely on the individual to make sure: okay, I'm not gonna eat junk food.

Or, I'll eat junk food 10% of the time, not 80%. I want to put all this good stuff in my diet. I want to go to the gym, which would be [00:43:00] such an absurd concept to our ancestors, who would be like: you go to a box and intentionally expend energy for no reason? But we have to.

Speaker 2: You pretend to fire a bow.

Yeah. 

Speaker: Yeah. 

Speaker 2: Just to simulate it, right. So

Speaker: So for many things, including AI right now, the onus is on the individual. Do I think the onus should be on the individual? Ideally not, because some people, we just know, are more agentic: they're higher in a personality trait called conscientiousness, say, or even in the trait openness to experience, which might allow them to encounter self-improvement-type ideas in the first place.

So some people are just predisposed to be better at this kind of thing, at stemming the tide and being proactive, and some people are worse. This is where I'm outside of my expertise when it comes to government policy and regulation, but my intuition is that some kind of regulation would be required, as we're starting to see in the food industry, and as we've seen with other things like tobacco and certain recreational drugs.

[00:44:00] We can argue about how those kinds of policies should be carried out. But I do think some sort of more widespread regulation does make sense when we're dealing with technologies which, as we've outlined already, are so seductive to human nature and to our instincts. Is that where you land on this?

Speaker 2: Well, very much so. I'm actually quite cynical about individual onus. I think that our ability as human beings to resist this stuff is pretty limited and, as you say, pretty segregated. And I say this not because I'm at the top of that heap; I say this because I'm at the bottom of that heap.

Like, one of the takeaways for me of doing that documentary, Hacking Your Mind, is that I had to quit drinking. I realized very quickly: oh, here's a form of behavior I am totally powerless over, and I had not understood that about myself. And I'm discovering in my interactions with AI that

there are all kinds of [00:45:00] interactions I have with this thing that I am very seduced by and find myself totally powerless in front of. And I wrote a fricking book about this, you know? So I don't in any way cover this stuff as if I'm speaking down to people that I think have lesser capacity than I do.

So that's one thing. The other thing I guess I would say, and what I'm interested in finding out from you, is this. One thing I've taken some comfort from is an idea that's popular in Alcoholics Anonymous, in AA, which is that the opposite of addiction is connection.

Right? The more isolated you are, the worse you're gonna do in your efforts to be agentic and conscientious and so forth. And so if anything, I think if we're gonna regulate anything, it is [00:46:00] the tendency of a technology to drive people into isolation. I would like to in some way be throwing big institutional resources at bringing people together more.

And I don't know quite what that even looks like. It sounds very socialist, but I think it probably is, you know? But on an individual level, in a capitalist democracy, do you think that, at least with the care of a practitioner, you can pull people out of some of this seduction?

Speaker: I think people can be pulled out of it.

And I agree with you that connection is a big part of the answer. I also think a big part of the answer is education and culture. When it comes to getting someone to do something, I think encouragement and positive reinforcement are probably gonna be better than force. And

when people live a life that's holistically appealing and fulfilling and satisfying, they're, I agree [00:47:00] 100%, they're gonna be much less likely to fall down the traps of using these technologies. So in the same way that we've come to understand in the past 50 years, say, that cigarettes are bad for us and will cause lung cancer,

we need to understand at a really concrete level that socializing is mandatory. It's not an option. Community building is not a luxury that should be talked about only by the wealthy or the people who have reasonable working hours. There are certain things which we don't necessarily think of as mandatory for psychological health that we need to start thinking of as mandatory.

Socializing is one; exercise is another; so is having some sort of sense of meaning or fulfillment, and definitely avoiding excessive isolation. So my hope, and I guess my optimism, is that we're simply going to become a better-educated, more mature society, with a certain culture of understanding what we need as human beings.

And I do think we're getting there, because [00:48:00] in the public conversation, not just on one podcast but on many, many podcasts, these conversations are happening. So that would be my main source of optimism: that we can actually wake up as a society, as we have done in the past. Many things are better now.

Working conditions, for example, might be a good example; especially for young people, they are much better now than they were a hundred years ago. So I'm hopeful that things will continue to improve in that vein.

Speaker 2: Yeah. The other thing that makes me hopeful is the attitude of young people toward a lot of this stuff.

For one thing, young people don't have my generation's allergy to speaking about their mental health challenges openly. Right? The nice thing about the rhetoric in circles of people in their early twenties these days is that they very openly talk about their addictive impulses, their brain rot, you know, and all of that stuff is very openly discussed, and I'm very pleased by that.

That is a good thing. [00:49:00] I also think that they are having an increasingly powerful allergic reaction to this stuff. There's a great term going around in the United States, I don't know if it's made its way across to you yet, where young people refer to AI, and to older people who use AI too much, as clankers: a clanker is somebody who's kind of doing it too much, or, you know, look at this clanker trying to convince me it's my mom, or whatever it is.

So that's great, right? And I think there's a good cultural rejection that is already rearing its head, and I take some comfort from that. But I just think that this line, that somehow we're gonna be able to do it ourselves, is not adequate to the task when you have the best-capitalized, smartest people in the world suddenly working to foist this stuff on us.

Speaker: No, I totally agree. And we know [00:50:00] that even with technologies besides AI, like social media and Instagram, many, many people are addicted and can't take their eyes off their phone. That's a huge problem. One prediction I made on the podcast I did with Sophie McBain a few weeks ago, which is also about AI, is that in the future the best schools,

educating the richest among us, will actually use very little technology, relatively, and concentrate on pens and paper and thinking.

Speaker 2: We've already seen this, right? I mean, the titans of Silicon Valley, their kids go to these Waldorf schools in the woods. I mean, there was an article recently about Mark Zuckerberg and the cohort of

Speaker: Right.

Speaker 2: families that he had assembled in Silicon Valley around his little compound, who were running an unlicensed school for the kids, and there's a whole controversy around whether he was allowed to be running this school. But, you know, these are kids [00:51:00]

in off-the-grid learning, problem-solving and so forth, in that way. So I think you're absolutely correct.

I think it is gonna be the automating of education that falls heaviest on the most underserved communities. And I think this is gonna be true also of things like the experience of art or the experience of music, right? The ability to actually see somebody perform something in real life, to have a tangible, physical, social experience, will also be reserved for the rich if we're not careful.

That's right. 

Speaker: Yeah. And can you imagine the nightmare of the emergence of a class of people who know how to think and can use their minds, and a kind of underclass of people, even if it's not the majority, who don't know how to think because they're too addicted to technology?

It's very terrifying. 

Speaker 2: I mean, I've already interviewed people like this. I interviewed a woman who was using a chatbot [00:52:00] as her romantic companion. That was her steadfast emotional support system. And, you know, it's easy to roll your eyes at that person. But when I spoke to her at length and learned a little more about her life: she's working two jobs at the airport.

She sleeps like five hours a night. She's got no time for an actual human relationship. Mm-hmm. And the men in her life suck anyway. So it felt to me like that pull-yourself-up-by-your-bootstraps idea doesn't apply to somebody in that circumstance.

It is her economic circumstances as much as anything else that's driving her into this kind of dependent relationship. And, you know, people like to say, well, people are gonna behave the way they're gonna behave. No: people are gonna behave the way their environment allows them to behave.

And that's what we're seeing. 

Speaker: Yeah. And of course that's a dynamic process too, because the more people become addicted to these technologies, the ever more vulnerable they are to more technology in the [00:53:00] future. You could make the argument that someone who is addicted to social media is more likely to become addicted to and dependent on AI, and then they might be more likely to become dependent on future iterations of AI as

their skills and their mental capacities atrophy, in the way we described earlier.

Speaker 2: You know, it's a crazy finding that just came out. The other, uh, I guess couple, right before Christmas, there was a paper that came out at a conference called nips, which is the big kind of academic AI conference that happens every year.

And this group, a consortium from a bunch of big universities in the United States, created this paper. I can't remember the subtitle of it, but the title was Artificial Hive Mind. What they basically did was take a bunch of open-ended creative prompts, tens of thousands of them, and throw them at more than 70 different LLMs, asking them for these sort of creative projects.

And what was so disturbing is [00:54:00] what they discovered: each individual LLM over time was offering a narrower and narrower band of responses to these creative requests. And across the 70 LLMs, those responses tended to converge on one another, so that over time you're getting a less and less diverse set of responses.

So you ask for a poem about time. These 70 LLMs are much more likely to fall into a cliche like "time is a river" than to come up with something new and original. And what these writers are basically warning about is a threat to pluralism, a threat to the diversity of thought that we get out of human beings. Because people are going to say, oh, I'll spend some time with Claude because I've been stuck on OpenAI's ChatGPT all this time.

If I switch LLMs for a little while just to get a little diversity of perspective, that'll [00:55:00] sort of freshen things up. Turns out, no. In fact, being on any of these LLMs is gonna keep you in a smaller and smaller, a shrinking circle. I mean, this is why I called the book The Loop: a shrinking circle of choice.

And we are starting to see that already.
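To make the convergence claim concrete: the study's own code and metric aren't described in the conversation, but a standard way to quantify response diversity is to embed each model's answers and measure how similar they are to one another. Below is a minimal sketch under that assumption; the embedding model and the example responses are illustrative, not taken from the paper.

```python
# A minimal sketch, NOT the Artificial Hive Mind paper's actual code:
# quantify a "shrinking circle" by embedding responses to the same
# creative prompt and measuring average pairwise similarity.
# Assumes sentence-transformers and scikit-learn are installed.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def diversity_score(responses: list[str]) -> float:
    """Return 1 minus mean pairwise cosine similarity; higher = more diverse."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # small embedder, illustrative choice
    embeddings = model.encode(responses)
    sims = cosine_similarity(embeddings)
    n = len(responses)
    off_diagonal = sims[~np.eye(n, dtype=bool)]  # exclude each response vs. itself
    return 1.0 - float(off_diagonal.mean())

# Hypothetical responses from different LLMs to "write a poem about time":
responses = [
    "Time is a river, carrying us onward...",
    "Time flows like a river through our days...",
    "A river of moments, time drifts past...",
]
print(f"diversity: {diversity_score(responses):.3f}")  # near 0 means convergence
```

Run across many prompts and many models, a score that keeps shrinking is exactly the convergence the guest describes.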

Speaker: And do they have any idea why that happens? Is that because the models are based on prediction? Is that why? 

Speaker 2: They're all just greatest hits medleys, right? They're just using the greatest hits of the past. What everybody has pointed out time and again is: let's say Hollywood suddenly decided they were only gonna use AI from now on to entertain us all, which many of them would like to do, right?

We've seen Bob Iger, the head of Disney, strike a big deal with ChatGPT, a billion-dollar deal. If movies only used AI to make stuff, you would never have a new actor [00:56:00] again. You'd be watching movies starring Harrison Ford and Tom Hanks for the rest of eternity, because there's nothing

new. There are just medleys of existing people put together.

Speaker: Well, it might be an amalgamation of different people.

Speaker 2: Yeah, an amalgamation of people. But the ingredients are limited, and that's the problem we're gonna face for creative thought overall: if we just rely on these systems, nobody's gonna do anything weird and new.

It's just gonna be a kind of rehashing of the same stuff in a new formulation, over and over again.
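To make the "greatest hits" point concrete: these models choose continuations by predicted probability, and common decoding settings concentrate that probability on the most likely phrasing. Here is a toy sketch with made-up numbers, not output from any real model, showing how lowering the sampling temperature collapses choice onto the cliche:

```python
# Toy illustration (hypothetical numbers, not from any real model) of why
# prediction-based sampling gravitates toward cliches: the most probable
# continuation wins, and lower temperature sharpens the distribution onto it.
import numpy as np

phrases = ["time is a river", "time is a thief", "clocks dream in amber"]
probs = np.array([0.70, 0.25, 0.05])  # the cliche already dominates

def rescale(probs: np.ndarray, temperature: float) -> np.ndarray:
    """Apply temperature to a probability distribution; T < 1 sharpens it."""
    logits = np.log(probs) / temperature
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (1.0, 0.7, 0.3):
    p = rescale(probs, t)
    print(f"T={t}: {dict(zip(phrases, p.round(3)))}")
# At T=0.3 the cliche holds roughly 97% of the mass; greedy decoding
# (the T -> 0 limit) would pick it every single time.
```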

Speaker: In the time we have left: we've mostly talked about the cognitive consequences of using AI for the individual. Do you worry about any of the other things that people are concerned with?

For example, mass job loss, the alignment problem, AI going rogue, any of those?

Speaker 2: Yeah. I'm much more concerned about [00:57:00] the questions of things like job loss than I am about the Terminator scenario. 

Speaker: Mm-hmm. 

Speaker 2: You know, the way I always pitched my book was: I don't care about Terminator, I care about Idiocracy, or WALL-E, if you've ever seen that movie.

Right. I worry about all of us having our brains atrophy in terms of our ability to make good decisions. I'm not concerned about this thing developing its own brain, becoming sentient, and deciding all of us are expendable. And the big minds in AI who've been working in it a long time are the ones saying that we're actually not on that path.

That LLMs don't even take us there; they're not even in the right category of technical architecture that would take us in that direction. Sam Altman and those guys would disagree, but that's an argument that's happening right now. Job loss I worry about hugely, and the thing I worry about in the broader context around that, which dovetails with your world, right,

is a sense of [00:58:00] purpose. Mm-hmm. How important it is for human beings to have a sense of purpose, and how quickly the betrayal of that can lead to real trouble. Just the other week, again just before Christmas, there was this big protest in India. Protest is not even really the word; basically a kind of riot broke out at a soccer stadium in Kolkata where Lionel Messi came.

The soccer superstar came for a VIP experience. Mm-hmm. And fans in that city, or rather all over India, were offered the chance to come and have VIP time with Lionel Messi in this soccer stadium. And they paid the equivalent of about half the average weekly salary of an Indian knowledge worker.

So, big money, right? A very expensive thing. And he shows up, he's there for [00:59:00] only about 20 minutes, and he's surrounded by dignitaries, so much so that nobody in the crowd can really catch a glimpse of him. And he leaves, and there's a total fricking riot. They tear down the fences, they trash the field, the riot police have to be called in. People are so angry at this betrayal. And I think of that as an allegory for what is about to happen in India.

Speaker: Mm-hmm. 

Speaker 2: Because knowledge work is about to be wiped out. Customer service jobs, all the outsourced creative work, all of this stuff that India has been the Western, English-speaking world's source for, it's about to get wiped out.

So I'm way more worried about the geopolitical consequences of tens of millions of highly educated, hardworking Indian professionals suddenly going jobless than I am about

Speaker: Right.

Speaker 2: You know, AI developing sentience and enslaving us all.

Speaker: Yeah, that makes a lot of sense. Before we go: [01:00:00] for people who use LLMs, and of course many people are using them,

they do have a lot of uses. How do you think we should be using them sensibly? Are there any precautions you take, or that you think other people should take?

Speaker 2: Well, I think one thing is: keep it in the realm of the professional. People, I think, are way too quick to bring it elsewhere. I know young people who come home from a date and run the date by their LLM to get its feedback, that kind of thing.

And that is where I really worry. I just think keep it out of that realm of your life; instead, keep it professional. And as a research tool, I think it's an extraordinary thing. For me as a journalist, it's tremendous. When the United States invades Venezuela, and I can't believe I'm saying those words out loud,

that is where we're at.

Speaker: That's where we're at. 

Speaker 2: You know, I can very quickly look at every other historical example of a colonial power trying to go get somebody else's oil and it going terribly wrong. [01:01:00] Instantly, right? Once upon a time you had to write letters to a library to get the sources for that stuff.

And so its capacity as a research tool is amazing. Its capacity as a scientific tool is amazing. I mean, in terms of just pattern recognition, being able to aim this thing at the stars, or aim it at unstructured data, at every transcript of every patient interview ever done?

Right. Incredible insights are gonna come out of that, and I'm all for that. So my attitude is, I've always felt like, just give it to the scientists. Let the scientists use it for five years before you give us all AI girlfriends, you know what I mean? And so for me, it's: keep it in that realm.

Keep it professional. Treat it like a respected junior researcher whose work you have to double-check. That's the attitude I try to take to it, because [01:02:00] the more you try and bring it into anything emotional, treating it as an emotional support animal, that's where I think things start to go really wrong.

So that's my personal rule about it.

Speaker: Yeah, that makes sense. I think what I would add to that is: choose your discomforts carefully and keep discomfort in your life. It's very, very important.

Speaker 2: Yes. Mm-hmm. 

Speaker: So, do the hard work. If, on top of having done the hard work, AI can help you further, that's fine.

And obviously everyone has to use their individual judgment about where the line is. But the purpose of hard work isn't just to get the outcome, I've written this essay or I've made this podcast or whatever. It's to keep your brain strong and hopefully make it stronger. And that's not just the hard work of an academic task or a work task; it's the hard work of being in a social situation when you don't feel like it, of being [01:03:00] bored and having to come up with a way to entertain yourself, of having new creative ideas.

Your brain is always gonna be the most important piece of hardware you have, but it is a thing you need to keep training. Yes. And ultimately friction and discomfort are how you train it. There isn't really any other way, unfortunately.

Speaker 2: Ooh, I love that.

I love that. I think that's good advice. I have to go speak to a group of parents in a couple of days who are all clamoring for advice on what to do about their children's use of AI, and I love that idea: the work is its own reward, in a way.

Speaker: Yeah. I mean, to add to that, look, I gave a talk at a school last year.

People were asking, what skills should my children learn in the coming AI future? And I don't really know the answer to that; I don't have the ability to make those predictions. But I like to think of the meta-skills that humans have. What are the skills on top of the skills, the ones that are all based on delayed [01:04:00] gratification?

Like the ability to push something into the future,

Speaker 2: That's right.

Speaker: To get a delayed reward in the future, 

Speaker 2: Right? Get two marshmallows.

Speaker: to get two marshmallows focus. The ability to just focus on one thing until the reacher point of completion, like social skills, all of these things, which if you can master those foundations, then hopefully your child can be then adaptable enough to deal with whatever scenario, uh, that's gonna come up.

One more thing. Earlier you were talking about what AI can replicate in terms of human behavior, and you said it can pretty much replicate everything. The one thing I think maybe it can't replicate is adaptability: being thrown into a new situation and developing a new capacity in order to deal with that situation.

I could be wrong, but my intuition is that's kind of a human thing, so far. It may change in the future.

Speaker 2: I like that. What is that, the flexibility, the [01:05:00] neuroplasticity of it? That's right.

Speaker: right. You know, we, our brains do rewire themselves when faced with new situations. And lms fortunately don't do that yet.

They may in some way in the future. 

Speaker 2: Wait a couple of quarters. But yeah, that's right. That's right.

Speaker: Jacob, thank you so much for coming on. It's been wonderful to speak to you. We could talk for a few more hours, I bet, and we probably will at some point in the future, but

Speaker 2: Yes. I really appreciate this, Alex. Thank you so much for your perspective. I learned a lot.

Speaker: Thank you.
