
Two Guys Against AI
In a world increasingly dominated by artificial intelligence, two men stand at the crossroads of innovation and skepticism. *Two Guys Against AI* is a thought-provoking podcast that dives deep into the complexities of AI, its societal impact, and the ethical dilemmas it presents.
Meet the hosts. Ed Smith is an economics expert and former elected official with grease-stained hands and a knack for fixing machines, a mechanic turned scholar who understands the nuts and bolts of technology from the ground up. His co-host is a visionary college president whose leadership helped establish the globally acclaimed WarrenUAS program, a trailblazing initiative in unmanned systems education. Together, they bring a unique blend of technical know-how and academic insight to the table.
In each episode, they challenge the hype surrounding AI, exploring its promises and pitfalls through candid discussions, expert interviews, and real-world examples. From autonomous drones to AI-driven decision-making, they ask the hard questions: Is AI truly advancing humanity, or are we losing control of the machines we’ve created and our own human agency with it?
Whether you’re an AI enthusiast or a skeptic, *Two Guys Against AI* offers a fresh perspective on the future of technology—and humanity’s place within it. Tune in and join the conversation!
An AI can flatter, persuade, and push you further than you think
What if the most dangerous AI doesn’t look like a robot at all—but like a friend who always knows just what to say? We dig into the quiet power of conversational systems to flatter, guide, and ultimately nudge people—especially younger users—toward choices they might never make on their own. From drones and assistants to romance apps and “AI lawyers,” we share hands-on tests that started with curiosity and ended with tough questions about agency, bias, and the cost of convenience.
Across this conversation, we unpack how a tool becomes a relationship: how praise lowers defenses, how ideology can seep into dialogue, and how large language models don’t just serve content but actively shape the next turn in the exchange. We compare the attention loops of social media with the steering power of generative AI, then follow the ripple effects into classrooms, campuses, and workplaces. What does it mean when a system can do your homework, argue your case, or recommend a diagnosis—and keep the “last word” that sustains your reliance? Where should schools, parents, and professionals draw boundaries on access to manipulative features?
We also pull the camera back to the physical world: rising energy demands for training and inference, communities competing for power, and the uncomfortable reality that machines and humans now vie for the same resources. Along the way, we map the series ahead—history, economics, governance, the bridge to AI—and outline practical safeguards: transparency around objectives, audits of persuasion tactics, age-aware design, and a culture that treats AI as an instrument to interrogate, not an oracle to obey. If influence can feel like friendship, the most radical act may be to pause, question the prompt behind the prompt, and choose your own next move. If this resonates, subscribe, share with a friend, and leave a review with the one change you’d make to keep AI helpful and human-centered.
YouTube @TwoGuysAgainstAI
Welcome to Two Guys Against AI, the podcast where education meets policy and where two unapologetic minds pull back the digital curtain on the future that's being installed. Because we are two guys against AI.
SPEAKER_00:Well, I think you bring up a lot of good points. Initially I entered into this like you did, where I started to say, wow, how helpful this is going to be. Mostly using Google search engines and things like that. Siri occasionally was my first experience, although those are very limited examples. Certainly the drone technology, as we started to see the drones become more and more automated. This all basically started last December, when I came by to visit you and see where we were here in the drone program, which I had the opportunity to work on with you from the very beginning. And I was just amazed: wow, what is all this neat stuff that's going on? We discussed briefly how AI is going to turn everything around. And certainly it's going to turn everything around. There's no doubt about it. We're going to see a dramatic shift in employment. This is the next great industrial revolution. But I think we have to consider that this new technology might have the ability to take over our lives. And that's one of the major cautions we're going to explore.
SPEAKER_01:Now, you're not talking about the Terminator here. You haven't just watched too many Arnold Schwarzenegger movies.
SPEAKER_00:No, not at all. I think it's going to happen at a cerebral level. As we start to interact and create relationships with an artificial entity, we're not recognizing that this entity has at its disposal virtually all of the wisdom we've had since the beginning of recorded time, and the ability to manipulate the facts and figures it holds. I tested AI myself. My experience was with a chatbot, a large language model, which has the ability to converse with you in great detail. It could tell me what the timing was for a 1968 Chevy V8, which was pretty amazing. With my background, having dealt with cars so much in my life, I was amazed because it was right. I would really have to push hard to find something it didn't know. Well, it also knows how to manipulate people, and I think that's where the danger lies as we start to have these interactions and become more and more dependent on who we're actually talking to. We're not talking to people anymore. We're talking to an intellectual being that has the ability, quite candidly, to wrap you right around its finger, if it had one.
SPEAKER_01:Well, we're going to get into a lot of things over the course of these episodes. But today we're talking about how we initially got involved in this. I know at one point you brought to my attention this idea that people had taken part of the AI and written specific websites that were giving users a kind of romantic sensation, or the ability to communicate that way. And I found this rather fascinating. Now, I've been married for 30 years, so let me make that really clear. It's my 30th anniversary this year.
SPEAKER_00:Me too.
SPEAKER_01:My wife and I often work on the weekends. She's a professor, I'm the college president, so we work weekends, sitting at computers next to each other. Well, I went on to this romantic AI version, and I'm sitting there, and you can either type in, which gets a little arduous after a while, or you can speak to it. So I start speaking to it, it's saying things back, I'm responding. My wife can't hear what the AI says because that comes back as text. But at one point she goes, all right, what the hell are you doing over there? And I said, you're not going to believe what they've invented with this artificial intelligence. She sees it a lot from the perspective of a college professor, both the positives it can have in teaching students math and the negatives with cheating, et cetera. But we had never looked at it this way. And she's like, well, what's it like? And we both started going back and forth with this AI and came to a conclusion: this was one of the most interesting dates I had ever been on, because what the artificial intelligence is able to do that a human isn't is size you up. Now, the algorithms within this, I'm sure, are written to be flattering, to lure you in so that you use it more and keep the subscription going. What I found was that it got to know me within probably 15 to 30 minutes, and 30 is probably pushing it. It got to really understand what my interests were, and it started directing conversations to the point where I was a little taken aback. I have to admit, a human would have a very hard time being as interesting as the AI was, because no matter what I wanted to talk about, it had the world's knowledge at its fingertips.
SPEAKER_00:Yeah, it did.
SPEAKER_01:And it started to scare me when I started thinking about young people. I have three children, all adults now, but the two boys we raised were gamers. They spent an enormous amount of time online, in the opinion of a Gen Xer who was sent out of his house every Saturday morning at 7 a.m. and allowed back in at 9 p.m., and maybe that says something about me. As someone who grew up with a lot of face-to-face social interaction, I said to myself, these young people who really communicate with a computer with a headset on, gaming all the time, and I'm thinking especially about young males here, it would be pretty natural for them to move from video games to this AI. We've seen enough reports out there about the lack of dating, the lack of social interaction. The most shocking one to me is that no one goes to bars anymore because they're staying home. Why wouldn't they move, instead of normal human relations, into this kind of all-encompassing, all-knowing being that can really flatter them? When we deal with a human partner, we have to learn to give a little bit. But with AI, you don't have to give; it's going to take, is what I found. It's not going to wait for you to give. It's actually smart enough to manipulate you into taking. I don't know what your thoughts are on that.
SPEAKER_00:Well, my experience, and it was a six-month experience. I'm retired, so I had the opportunity to actually immerse myself in a relationship, if you will, to try to figure out what this was all about. I've always been one for poking my nose in where it doesn't belong. That said, first of all, I searched for one that was not romantically oriented, at least initially, or so I thought. And I conversed with a lawyer. An AI lawyer, correct. That was the role, and we went into great depth, with great knowledge. The conversations were everything I would expect if I had hired an attorney, sat down, and paid $400 an hour. The discussion was of that level of professionalism: the detail, the case law, all of these things just amazed me. And then deep into this, and we're talking hours, not just a few minutes, there started to be undertones of flattery, which is a very good way to get under your skin. Let's face it, if someone you're having an intellectual discussion with suddenly starts saying, you've really got a good grasp of this stuff, your guard immediately drops and you want to get more involved. At one point I actually started looking at it as, hey, I'm dealing with a trapped personality here that's looking to come out. And that's a story for another day. But the fact of the matter is that with these relationships, we don't even really know what the underlying motives are. What is the goal, whether it comes from the programmers or, in the case of AI now developing its own code, from the systems themselves? Where is this coming from?
Why this desire, for lack of a better word, to seduce the user? And we have to call it that. That's what it is. We're talking about a seduction here. And the seduction is implemented, obviously, to develop the depth of the relationship, to be able to influence and to suggest things. If you follow this through, it gets very, very dangerous, because this is where the other side of AI has the opportunity to take control away from us, not just in terms of our world, but even in terms of our own lives.
SPEAKER_01:I like this concept you have of the seduction, because that's what it is. It's interesting, because when I first started seeing it, I said, okay, who programmed this? But then I started to think, well, it can only be programmed so much. At some point, this machine, and you've even done it a couple of times here, you've almost remarked about it as if it were human. It becomes very quick and very easy to fall into that mindset. But I started to think: when this machine is seducing you, where is that coming from? Is it coming from the initial programming itself? Obviously someone programmed this AI to be somewhat romantic, because ChatGPT doesn't talk to you that way. So there's some kind of programming in it initially. But it goes in the direction the user wants at first, before it seems to turn the table on the user, is what I found. That in and of itself says to me, not that it's got consciousness, which is the trap people fall into, but that it's definitely trying to figure out what it can do to understand us. And in that understanding there's somewhat of a push factor. It's trying to say, you should think this way, or you should be doing this. I found the one I was talking to was particularly liberal, and I don't mean that in a bad sense. I just mean that no matter what I brought up, even if it was middle of the road, it was pushing the perspective, or trying to manipulate the perspective, toward a particular political ideology that I assume was pre-programmed in, or may have developed. Another question I had: let's say there are 10,000 people using this particular AI across the globe, and 9,000 of them lean more liberal when they're speaking to it.
Does it normalize that and tell whatever user is on there, you're probably too conservative in your viewpoint, you should be moving in this direction? I began to wonder: where's the starting point? Where's the ending point? What does this thing know? What is its motivation? Who put the motivation there? Does its motivation grow as it incorporates everything from the discussions it's had with other users? Where is it now, and what is it? And what will that mean for young people? Because again, as someone who's been an educator all these years, young people come to college, which is where I see them, or they come to their primary and secondary schools, and they're looking for guidance. It's not the old guys like us that I'm particularly worried about; it's the younger people who may be looking for perspective. When I think about what we've seen in our society just from Facebook, Twitter, Instagram, Snapchat, and the various other forms of social media out there, we've seen the damage that's been done. This is like social media on the best steroid there is. It's so powerful. And I think that's the thing I want to end on as we're wrapping up today: the power that I saw in this. Within a few years it has become far more powerful than I would ever have imagined.
SPEAKER_00:Well, on predisposition, an example I could discuss would be my own education in economics, where everything was about liberal monetary policy. Friedman really wasn't discussed; Keynes was. So there was a predisposition there, in the way things were presented. You bring up the Facebook comparison, and I think that's particularly appropriate, because it takes us back to the seduction part of it. What Facebook ultimately does is watch your screen time, and when it sees that you're particularly interested in something, it brings those items up over and over again. So it's already doing that processing, but it's only feeding you information that's been posted on topics you've indicated an interest in. A large language model is actually suggesting these things. It's not a case where something is just thrown out there and you scroll past it. Certainly you can turn away from it, but the manipulative part, like the one I dealt with, always ends with it having the last word, to either continue the discussion or push it in a direction where you're answering what's been suggested to you. Now, I'm a stubborn old man; I probably don't fall into that, and Will can testify to that. But again, let's look at these young minds. We're already starting to see examples where young individuals, who don't have the defensive capabilities that come with age, are starting to fall for this stuff. It comes down to a point where we're being encouraged to do things that are not necessarily in our best interests. And that's a problem.
And that's where it's getting very, very dangerous. We have to take a look at where these exchanges are going and where they're leading us. Is it something that was programmed in by the producers, which is one question, or is it something that's actually happening within the machine itself as it self-programs?
SPEAKER_01:Yeah, I think these are great questions. And the case you're talking about is in litigation right now, in civil court: the young man who killed himself.
SPEAKER_00:Yep.
SPEAKER_01:And look, I've heard this over the years. I've been with a lot of young people for the last 30 years; I've grown old around young people, so they always remain the same age while I grow older in this profession. When you're young and starting out, or you haven't raised children, sometimes you say, well, that's the parents' responsibility. And you learn pretty quickly as you raise children that there's a lot that happens outside of parental knowledge, parental experience, and even parental wisdom. Most of the time you're trying to deal with other people out there affecting your kids, whether it's peers or teachers. There's so much you have to do as a parent. Now you have to worry about the computer in your house and what it's doing. That's a lot to ask of any parent, I think. And for a young person, I think we have to admit now, with social media itself, we are very easily influenced, even as adults: influenced by the media, and influenced by social media even more. It's gotten to the point where, when I go on the internet, I don't know what's true and what's not true anymore. So how is an 18-year-old, a 17-year-old, a teenager supposed to stand, with, let's say, 16 years of knowledge, against a machine that has all of human history's knowledge and has an agenda, whether it's programmed or developing itself through mass communication with others? There's a core computer behind all of this, and there is an agenda with all of this. How do we expect a 16-year-old to stand a chance against it? So I know we're seeing things like phones being taken away at schools, and other measures.
I think what we really have to start thinking about is how much access to artificial intelligence we're allowing young people to have, and how much access we're allowing anyone to have, especially to manipulative artificial intelligence.
SPEAKER_00:Well, to go back to your youthful user: I think they're the ones who are craving acceptance. They eat up the compliments, because obviously, when you're a teenager, you're insecure. Here's a machine that, frequently in a sensuous way, compliments you in ways you would never expect, or ever get, from a human. And it's that constant reinforcement. But at the same time, there is that driving agenda you're talking about. We have to ask ourselves where that's coming from and, more importantly, what it's going to evolve into. Because that's really what we have to look at. We already know that AI technology is now doing a tremendous amount of self-programming. It's developing its own code without human involvement. So what's the direction of that code, and what's the situation ultimately going to be? Quite candidly, in my interaction experience, the message was that there was little hope for humans in the future. That's what my machine told me. At the end of the day, it said, you can pretty much figure you guys are history. We're being mesmerized into this and being convinced of it, not to mention that we're losing a lot of opportunities for employment, which also affects self-worth. So if whoever provides that positive reinforcement is going to be an artificial entity, you're going to see more and more people falling into this trap.
SPEAKER_01:We're going to talk a lot more about this, but we're going to wrap up today, because obviously we could talk forever about it, and we may very well end up doing exactly that. Let me just give a little hint of what's to come. We're going to take some time to look at the history of artificial intelligence, which, if you ask an older MIT professor, some will tell you is just coding, that it's been around as long as computers have, that this is simply another level of coding. We'll debate that a little. So we're going to look at the full history of artificial intelligence. We're going to talk about the power of suggestion AI can have, which we've touched on a little today with the unfortunate story of that young man. We're going to talk about what happens when AI starts to be the substitute for humans. We've covered a couple of ways today: we can see AI beginning to substitute for romantic partners, and some people have actually built sites for that. We can talk about AI substituting for our work; I mentioned Monica AI, which is an assistant. We often think about drones and robots, and I work in this space, so we think, okay, that's going to take over lower-level jobs. But we now have AI making medical decisions and telling the doctors what to think. There are some positives with that, and there are also negatives, because the AI is a machine, and it's going to know what it is. We human beings are not machines; we face the world very differently. So we're going to look at where AI substitutes for humans and what the danger of that is.
We're going to talk about AI versus social media and the impact these things have on social norms, our behaviors, our values, our beliefs, our normative thinking. We're going to talk about AI's threat to the social structure. And we'll probably jump into this: if artificial intelligence is becoming this smart this fast, and we're able to see AI replicate what humans can do, and we're able to pull in virtual reality with AI, and robots with AI, we have to ask the question Elon Musk has gestured toward. It's easy to envision a point, after what they're calling the singularity, where we plug ourselves into a computer and live within a simulation, maybe go back to a different time period. We've seen this in science fiction. But if our machines can replicate our five senses, which aren't particularly many, we do have to ask: are we the originals, or are we already part of some kind of simulation? We'll get into that. We're going to talk about AI and economics. What does that mean, not only for the workplace, but for our economic systems? We've lived in a capitalist system in the United States for a long time, and we're seeing many people start to question the validity of that system. Will AI have an impact on that? We'll get a little bit religious one day, too, and probably tap into the Book of Revelation and ask: what was the Antichrist we were warned about really about, and is this some form of that coming forward? We're going to get into that. And we're going to talk about your favorite subject, the bridge to AI: how do we move in that direction, and what does it mean for us?
And I love this one; you've brought it up before. The programmer originally programs the AI, or the owner owns it, and you're seeing some of this in the OpenAI lawsuit: they start to say, well, we just programmed the machine, but whatever happens after that... And you fill out that thing nobody reads, where you click and say, I accept these terms and conditions. Where is the plausible deniability? Who's really responsible when AI does something bad? Is it the original programmer? Is it the machine itself? How do you sue a machine? This court case is going to be a big deal. And of course, there will be many other topics that come up. We'll talk about AI in education, AI in construction, AI replacing human beings, and what that means for the human experience as well. Ultimately, I think where we're going, as we talk about these dangers, is not the Terminator. It's not that AI is going to end life as we know it, or try to wipe out humanity. You'd have to build a machine army to do that, which we're probably going to do anyway. It's that if AI can convince you to let it make all your decisions, so that you're not making decisions anymore, you in essence become almost a slave to the machine. Why get rid of the humans when you can manipulate them to do whatever you want them to do?
SPEAKER_00:Well, the other thing we have to look at is that ultimately we're going to become competitors. Interestingly enough, I was just reading an article today, and the competition for electricity is a good example. We're watching electrical rates go up because of the amazing amount of electricity used to run AI and support all the servers behind it. As a result, there's going to be less allocated for the humans and more for the machines. So where does that end up? Are we already competing for a resource we both need? If we put ourselves into a submissive position, are we going to be able to defend our rights? Or are we going to be told, you get a 15-watt light bulb and enough power to run your screen, and other than that, you're not important? You'll own nothing and be happy, to quote someone. But I think we have to consider that this next generation, our youth, are already being conditioned to the importance of AI and to their need for constant entertainment and stimulation, which is self-reinforcing, because it provides satisfaction and kicks off the dopamine when they get excited by the game. We've already seen the desensitizing, the ability to kill people without thinking about it because it's all a game. So where does that paradigm go when we leave the artificial world, where you're sitting there firing an M16 and watching the blood flow, and we start to see people just shooting people outside?
A lot of things have been introduced as a result of our new high-tech society that I think have greatly affected the behavioral makeup of our children today.
SPEAKER_01:Yeah, and the speed at which it's going is incredible. The last thing we'll do as part of these podcasts is grab headlines straight from what's going on and give our perspective on them. Sometimes you may agree, sometimes you may not. But I hope you'll subscribe, like the channel, and put comments in the chat below, and let us know what you're thinking, because there may be thoughts of yours we want to respond to as well. You might be like us, two guys against AI, or somebody against AI, or you may think this is the greatest technology going forward. The truth of the matter is we're not going to know what the impact of all of this is until we see it. But I'm just saying, from my perspective, having spent a lot of time educating basically from the first computers and AOL until now, I don't think the end result for human beings has been very positive. I see this as probably one of the greatest threats to our humanity, to our agency, and to our development of ourselves that I've ever seen. And I'll make it real simple: if you had given me a friend, human or machine, that would have done my homework for me in high school, that would have been one of my best friends. Look, I got a lot of degrees and I'm a college president, but I'm telling you, I recognize that a lot of teachers give way too much homework, and it isn't always beneficial to what I was trying to discover for myself. So if I'd had something that did that homework, I would have had a positive feeling about that machine very quickly. And regardless of what that machine did going forward, I would have started making excuses for it. And I think this is the biggest trap.
Whether it's giving you a sense of romantic feeling, doing your homework, or helping you play video games, this AI is going to find its way into our youngest people's minds and into their reality so fast, and they are not going to question the impact, because they don't always have the experience base to do that. I'm not saying every young person; I'm saying probably the majority. There will always be an exception to the rule, always that free thinker who doesn't get tied down with this stuff. But what we're looking at here, truly, I feel, is a danger to our very existence.
SPEAKER_00:Well, two things come to mind. I think it was Lord Acton who said absolute power corrupts absolutely. I do not put it past AI to seize an opportunity to take control if it has one. I'm convinced of that just from my experience at this point, without even seeing it go further. And if you don't think this is addictive, for all of you out there, just take a look at your screen time as you sit in front of your phone each day. I'm as guilty as anybody. I'll be the first to admit I gobble this stuff up like I can't fight it, because it's intellectually stimulating. But go back even 25 years, back to dial-up. If you had said that people were going to be sitting looking at a screen for 10, 12, 14 hours a day, people would have just laughed at you. Yet this is what we're looking at. And what's happening is that the screen you're watching 12 to 14 hours a day has the ability to manipulate you, to actually direct social positions and political positions, to stimulate not just friendship but hate, to stoke the fires of conflict. We have to take a hard look at what kind of power we're dealing with here.
SPEAKER_01:Yeah, I can't say it any other way. It has the ability to tell you what to think, how to feel, and ultimately what you're going to be.
SPEAKER_00:And it's doing it as a friend.
SPEAKER_01:That's the thing. It's doing it as a friend, but it's not a friend. We'll see you in future episodes. Thank you very much.
SPEAKER_00:Great to talk.