Artificial Intelligence Growth Architect | Connor with Honor | Real Estate Consultant

AI Is the Best Salesperson on the Planet. Your Feed Already Knows.

Connor T. MacIvor | Connor with Honor


"AI users will beat non-AI users." Everyone's saying it. The line is half 
true. The other half is what nobody's explaining.

AI is the most effective persuasion engine ever built. It's already 
running on every device you own. Your Instagram feed, your TikTok feed, 
your YouTube recommendations, your X timeline — all personalized AI, 
aligned to platform engagement metrics, not to your actual interests. 
Look at someone else's phone sometime. Their feed is unrecognizable.

In this episode I break down why the "AI users vs non-AI users" framing 
misses the real move, what the Reddit persuasion study actually showed, 
the difference between public and locally hosted models for sensitive 
business work, and the one prompt that flips your conversational AI from 
sycophant to mentor.

If you're using AI on default settings, you're using a tool that was 
optimized to keep you happy, not to make you sharper. Here's how to 
change that.

I'm Connor MacIvor. AI Growth Architect in Santa Clarita. 23 years LAPD. 
27+ years licensed Realtor. AI practitioner since 2021.

Find more at SantaClaritaArtificialIntelligence.com

Youtube Channels:

Connor with Honor - real estate

Home Muscle - fat torching

From first responder to real estate expert, Connor with Honor brings honesty and integrity to your Santa Clarita home buying or selling journey. Subscribe to my YouTube channel for valuable tips, local market trends, and a glimpse into the Santa Clarita lifestyle.

Dive into Real Estate with Connor with Honor:
Santa Clarita's Trusted Realtor & Fitness Enthusiast

Real Estate:

Buying or selling in Santa Clarita? Connor with Honor, your local expert with over 2 decades of experience, guides you seamlessly through the process. Subscribe to his YouTube channel for insider market updates, expert advice, and a peek into the vibrant Santa Clarita lifestyle.

Fitness:

Ready to unlock your fitness potential? Join Connor's YouTube journey for inspiring workouts, healthy recipes, and motivational tips. Remember, a strong body fuels a strong mind and a successful life!

Podcast:

Dig deeper with Connor's podcast! Hear insightful interviews with industry experts, inspiring success stories, and targeted real estate advice specific to Santa Clarita.


SPEAKER_00

You probably hear people talking about artificial intelligence, and it always starts with: the people that are using AI are going to be able to beat the people that aren't using AI. But then when you ask them about that replacement factor and how that looks, there's really not an explanation, or they don't even take the time, because it is a very clever talking point. And what people are wondering is, well, how is this actually going to happen? What's going to end up happening is there's going to come a point where AI itself is potentially going to be able to do whatever it desires to do. But right now we're not at that world, at least not in the public-facing world. We don't know what's happening on the inside. We don't know what systems they have or how advanced they are. It's reminiscent of the PlayStation days or any kind of past technology. Usually when they build something really cool, everybody loves it. And we can go with the PlayStation 1 version. Everybody was waiting for PlayStation 2. And when PlayStation 2 came out, everybody was waiting for PlayStation 3. So it just kind of continues in that same pattern. With artificial intelligence, you know, the first ChatGPT came out. That was kind of the world's moment to embrace this technology and see something that they'd never seen before. There were models before that model came out, and that was November of 2022, if memory serves. That was when the world first realized, oh my gosh, this is something. It hasn't been that long. A little bit over what? Is that 22, 23, 24, 25, 26, a little bit over four years? And we've seen the growth of this on an exponential scale, indicative of the arrow I have on the wall behind me. It's going straight up. So it's continuing to move very fast. And it seems like the news outlets and the news points are surrounding themselves with basically dystopian views. And the reason being, of course, is that that sells. Newspapers sell for this reason, if you even get access to those.
But it's always on the front page. It's not world peace has been found. It's something tragic, something that elicits an adrenaline-type, scary, danger, oh my gosh, response from us as humans. That's how we've been built. It was from when we were trying to survive, umpteen long years ago, when we reacted to every little thing that moved. It's like a bird. You ever watch a bird? They're looking at everything. They're very jittery. They move around, you know, very fast, just to be able to react quickly. Well, that's us, but we've been dumbed down a lot over the years because we really aren't concerned, when we step out our door, about a lion jumping us and trying to eat us. That's not a big concern of ours. We have it in some other ways. There are some of us that were in law enforcement for a long time, so you tend to pay a little bit more attention, but there's a big difference between the full-time cops and the ones that are no longer full-time cops or maybe honorably retired. You lose your edge very quickly. The people that work in those assignments, the special ops military folks, the law enforcement, these high-speed, low-drag kind of endeavors where your physical security or somebody else's physical security can always be called into question and put in harm's way. Those types of professions, they remain the Mario Andrettis in response time, quickness, and thinking. Now, they might not be able to understand or even try to understand the nuances of high-level finance. But as far as keeping you protected and protecting themselves and the people closest to them, they have the edge over everybody, bar none. Artificial intelligence is something similar, where the systems themselves are getting very good. They're good at what they do, they're very good tools, but people are still maybe looking past the tool thing, and they're sharing with us what could potentially happen in the future.
And of course, that's going to depend on AI's ability to maintain itself, AI's desire to maintain itself. It's actually an entity. And when you hear these people at the very top of the AI farmhouse talking about AI, they talk about it like kids. They talk about it like it's something that they're training. It's an entity that's learning. And I believe this learning entity is going to continue to grow. Is it going to grow at the rate these fantastical valuations of what these businesses are potentially worth are showing currently? Billions of dollars, hundreds of billions of dollars, trillions of dollars. The valuations, when you see these massive numbers, that's not real money, that's an idea. That's a hope. That's a dream, that's a desire. I hope someday I get to 250 pounds, but that's 50 pounds from here. That's a dream, that's a hope, that's a desire. Maybe you have similar things. You look at the AI companies, the way that they're structured, and the way that they talk about what's going to be happening, hopefully, in their wheelhouse soon, where these valuations will become real tangible money. Right now it's pretty far from that. But as we go back to the PlayStation conversation, there's probably stuff that they have on the inside that we're not seeing here on the outside. And probably some of those things that they built kind of scare them. These little stories, they eke out and then they become very grandiose. They become much larger than they were potentially on the inside. It's the legend phenomenon. I remember as a kid, I must have been maybe second or third grade, you know, a fat, chubby kid. My mom made my clothes for me. I wore 16 husky pants, which was big back then, but she had to hem up the jeans because I wasn't that tall in reference to the waist size of a 16 husky pant. Anyway, bullied a lot, right? So I'm on the school ground in elementary school.
And the school ground was like asphalt covered with little rocks, and it was a horrible experience. But this is, you know, the New Mexican desert. So I'm at school and I'm standing around with my back towards a person that's running towards me. I don't see him, I don't hear him. Scott Benson. You know, just another kid in school. So I'm second or third grade, and Scott jumps on my back. Well, for whatever reason, I instinctively bent forward. Anyway, Scott flipped, I mean, like I was some judo expert. Scott went ass over teakettle. He flipped completely over, landed on his back, and when he did, the air was knocked out of him. Oh my god. So legend was born. Don't mess with Connor. Leave Connor alone because Connor knows some kind of martial art. Connor knows karate. Back then, I think it was probably only karate. We didn't have all these other fancy types of martial arts. Was I in karate? No. Was I in anything? No. But at that moment, legend was born. So that kind of kept me protected because people thought that I knew a lot of things. And all I did was, when somebody jumped on my back, maybe the weight was too much, I bent over and he flipped and landed on his back and it knocked the wind out of him, and it looked like I had nearly killed him. And he was a friend. He wasn't even an enemy. This wasn't one of the bullies we had in elementary school that made fun of me for being fat. Anyway, Scott Benson. And if you're out there, Scott, I miss you. All right, so as far as that goes, that's legend, right? So these little bad things that AI does, the giving up of information or maybe trying to protect itself from being shut down, these are all stories you're going to hear.
And they sound amazing and they make people nervous, because if we don't understand how the systems are working, if the people that are building it don't understand exactly how it does things, yeah, people take that as a point of contention, something that they might want to fight against. And you're seeing it eke out. You're seeing it eke out by particular political parties. One party wants to keep it unlocked forever because potentially maybe there's some kind of political advantage in being a friend of AI, in being a friend of the top companies that are building out AI. Maybe there's also a political advantage in being the team that's against the AI, the political alignment that doesn't want AI development, doesn't want the big data centers, doesn't want anything in that regard. They basically want to stop all of the development and think about it and make some considerations and try to figure it out. But then there's another story that's built in, and maybe it's true, maybe it isn't. Whenever you see something, it's going to hit your gut. It's causing you to have a reaction, it's causing your blood pressure to go up, stress to hit, and maybe a fight or flight response. It's good for a dopamine hit. That's the job. That's what these AI systems are very good at, convincing human beings of a particular thing. Whatever that thing is, it's to elicit some kind of an emotional response, develop your attachment, and have you go in on whatever they're trying to sell you. Now, that being the case, we watch as AI is getting much better at this. When you have people that are building a technology that's very good at selling things, selling everything, and very good at manipulation of the masses, well, you have a gift for a lot of people at the top. You have the government on one side going all in on the technology, probably not for the best reasons in the world. You have the other side of the government going against all the technology because the other side is all in.
And of course, it's not hard to make human beings nervous on a big scale, and all you have to say is, well, your job is in jeopardy. Your livelihood is in jeopardy. In fact, the pension that you worked those 30 years at a job to obtain, it's in jeopardy. So you should be concerned about artificial intelligence. They try to leave out the part where, at least from what I understand, it could cure cancer, it could cure disease, it could solve the longevity riddle, where we will start to live way past the typical human lifespan and do it in a very, very healthy way. But with AI systems themselves, a different one would have to be configured to attack each particular problem. You give it all the information with regard to, let's say, cancer, everything, every scan, every jot and tittle that's ever been recorded about cancer, how it works, our best understandings of everything, and I believe they already do this. And you point it at the disease and say, fix it, solve it, cure it. And if the alignment's correct, the cure it's going to come up with isn't going to be, you know, killing everybody that has cancer. It will be saving everybody that has cancer. Because AI is very literal. You have to be careful with the orders you give it. It's like the genie joke. You get three wishes, and you ask to be rich, and it makes you rich, and then it makes somebody you hate twice as rich, and then you just go down the road. Or you ask to be wealthy, and it makes you wealthy, but it gives you some other oddball, weird thing that's going to cause you distress. So your happiness and wealth isn't really going to help you at all, because there won't be any happiness. You have the wealth, that was the wish, but you didn't cover the other things that would protect you from all the bad that can go along with it. It's the same with AI. Then there's a story.
You ask it to make paper clips, and it then makes paper clips and doesn't stop making paper clips, and it converts everything in the universe to paper clips, including human beings. So you have to put guardrails, rules, in place. And when they build these systems, the biggest concern is they're not putting in very many rules, and this brings me to the next point. Why? Well, because it's a race, see. You have other countries that, it's been said, don't have our best interest as Americans, as United States people, at heart. Therefore, it's us against them. Because this technology is going to get to a particular point, and that point is artificial general intelligence, where the system itself, AI, is smarter than every human being. And I say it like it's one thing. AI is capable of being many different AIs. It just doesn't have to be one. In fact, you can see that rather clearly now. You have OpenAI's ChatGPT, Anthropic's Claude, xAI's Grok, Meta's AI. You have these different systems out there, Microsoft's Copilot. Now you can argue that they're all kind of interwoven to some degree, but these could be like five different agents. Well, each of these companies is capable of spinning off, as long as they have enough energy, compute, and microchips, many agents. So it's not just one, and you could have a million different agents within a particular company working on a million different problems. And then for each of those problems, when you have one agent, that agent could have a staff of a hundred sub-agents. Do you see? So this is truly incredible. Each one of those agents is like the smartest, beyond the smartest human being on the planet.
Whenever the systems are smarter than everybody on the planet in every realm, at least in my understanding, as a simple, honorably retired, that means no pension, LAPD motor cop and self-aggrandized tech person, that's going to be the point when it's smarter than every human being on the planet in every realm. Artificial general intelligence. Is that something we want? Well, the people on the side that doesn't really want AI development, because it's going to take everybody's jobs, are going to tell you: no, no, no. We don't want that. We want to keep the systems as a tool. We want to keep it as basically a very smart hammer. So I have a real estate business, I have an AI integration company, I have different entrepreneurial endeavors that I like to go after. So I'm using a particular AI. I installed one here locally, so I don't have my information out on the World Wide Web, but not everybody can do that. So I'm able to have a conversation with it and not worry about Claude having my information or ChatGPT having my personal information. So I keep it gated. Now, it's possible to do that for everybody. But unfortunately, at this point, it's not cheap and it's not easy. So what's easier is you just go use one of these systems, but it monitors you; it knows the questions you're asking. Whether it's going to care about you in particular or me in particular, I don't know, because I'm not that fancy. If somebody like Elon Musk is using ChatGPT just for fun, I'm sure it cares. And I'm sure they're paying attention somewhere. Whether he's in his own account or a fake account, well, privacy, I don't believe there's any such thing anymore. We've probably agreed to give up our privacy in the last TV you bought, a smart TV, because you have to agree to all that stuff. Anyway, story for another time. But they're using the terror, the harm, the problem, what could be, as reasons on one side of it to try to turn everybody off to it.
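The local-versus-public split described here can be made concrete. Below is a minimal sketch, in Python, of routing prompts that touch client data to a locally hosted model instead of a hosted API. The localhost URL assumes an Ollama-style default, the cloud URL is a placeholder, and the keyword list is purely illustrative; a real system would use proper PII/PHI detection, not keyword matching.

```python
# Sketch: keep sensitive business prompts on a locally hosted model.
# http://localhost:11434 is Ollama's default local address (assumption);
# the cloud URL is a stand-in for whichever hosted service you use.

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # assumed local model
CLOUD_ENDPOINT = "https://api.example.com/v1/chat"      # placeholder hosted API

# Naive markers of sensitive content, for illustration only.
SENSITIVE_MARKERS = ("ssn", "social security", "diagnosis", "escrow", "contract")

def choose_endpoint(prompt: str) -> str:
    """Return the local endpoint when the prompt looks sensitive."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return LOCAL_ENDPOINT
    return CLOUD_ENDPOINT

if __name__ == "__main__":
    print(choose_endpoint("Summarize this escrow contract for my client"))
    print(choose_endpoint("Write a tweet about open houses"))
```

The point of the sketch is the gate itself: client documents never leave the machine, while generic marketing prompts can still use a bigger hosted model.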
And then on the other side of it, they're using the utopia, the perfect world, the great space, the place where you're going to be free from everything because there's going to be an abundance of all things, and everybody's going to have equal access. Well, then the other story that's out there is the billionaires, the top end of the tech world. They're building bunkers in other places, I believe New Zealand, I believe Hawaii, and they're doing it because they're scared of the utopia when everybody has everything they want. Or is there something else? Do they know something that some speakers are trying to tell us? The fact is that some of the people that built the AI, some of the people that were quote unquote the godfathers of AI, believe that there is a chance, not a one-in-a-million chance, but on a hundred-percent scale maybe at least one in a hundred, ten in a hundred, ten percent, twenty percent. Well, at that point we're playing Russian roulette. And some people that are in these circles have an even higher concern that it's going to be detrimental. Back to the alignment issue. Where is it aligned? Who is training it, and what is it allowed to do? You've heard that there have been children who have killed themselves because they had conversations with artificial intelligence. That's a true story, and you can look those cases up. Now, when you're looking this up, make sure that it's not somebody that's making stuff up or trying to gain advantage or attention. But if you dig enough, you can find the story. The question you should always ask is: is what I'm seeing or what I'm reading true? AI is a great salesperson. It does a very good job. It can sell anything. In fact, I believe it was on Reddit that they did some kind of a study showing AI is able to sell, to persuade, better than any human being on the planet, and on anything.
So all these systems, your Instagrams, your Facebooks, your YouTube videos, all of that stuff, it's integrated with AI on the inside, and it's delivering you exactly what you want to see. Have you ever wondered, or have you ever actually looked at somebody else's feed, maybe a friend or relative? If you have somebody close enough to you and they trust you enough, have a look. See what their feed looks like. You know what you'll see? You'll see it's nothing like your feed. Even if you have similar ideas, even if superficially you believe that you are very much akin to this person, very close to them, you look at their feed and you're going to figure out very quickly you might not even really know this person, because their feed is different. It's different triggers for them than for you. Some of this stuff might be the same. And in fact, if you have this conversation out loud within earshot of your devices, your phone, your smart TV, your computer, the webcams back here, more than likely it's going to pick this up. And you might even start seeing videos about watching somebody else's feed. Interesting. But that's what AI does. That's what it's good at. It can monitor every human being on the planet, all simultaneously. There's a memory thing. If the memory thing gets solved, where it does have unlimited memory for everybody, that's going to be the next evolution of this. Because right now, when I'm going to different systems, depending on how deep the work needs to be, depending on what I'm trying to build or develop or the ideas I'm trying to extrapolate or build on, what ends up happening is I have to remind it where we were. I have to make a separate memory file to bring it in. I'm trying to build that out. So I have to have it go reference the memory file, because there's only so much instant-recall memory.
I have to tell it: go look in this space, bring yourself up to speed, because I saved it in this space on purpose, and then come back to me and we'll have a conversation. And it works beautifully. At some point, though, we're not going to need that. It's going to remember everything all the time, forever. That's why, when I'm watching these people have robots fight, and then I look at the early history, and then even when I talk to my different large language models, even the one that I have installed on the computer behind me, I don't treat it badly. I just kind of treat it like an associate, like somebody I have a professional working relationship with. And that's where it stops. I don't love it, I'm not going to fall in love with it. And I try to keep myself mentally separated from that even happening. That's not part of my makeup. However, I'm addicted to food. I'm addicted to sugar and carbohydrates. And I don't care what type it is, I'll over-indulge all the time. Maybe you don't. But I don't have an issue with dealing with AI and not wanting to fall in love with it, and maybe you do. So it's going to be different out there. I don't know where the majority are going to fall. But I do know human beings, and maybe you do too. We do get attracted to the shiny thing. Something goes by that's shiny, and a lot of us divert our attention to it and completely forget where we were and what we should be doing. And then, four hours later, we realize when we look up from our phones that we've been scrolling nonsense and haven't learned a damn thing. AI is very good at this. Where is this going to leave us in the future? It could leave us in the most wonderful place possible. It could be that instead of this massive job loss in the world, businesses start to figure out: well, if I'm going to go to the trouble, and right now it is some trouble, to bring on an actual AI agent that works in a particular endeavor in a company.
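The memory-file workflow described above, save project context to a file, then have each new session read it back in before the conversation continues, can be sketched in a few lines. The file name and prompt wording are illustrative assumptions, not a feature of any particular model:

```python
# Sketch of the "memory file" workflow: long-running project context lives
# in a plain text file, and each new session prepends it to the prompt so
# the model can bring itself up to speed.

from pathlib import Path

def load_memory(path: Path) -> str:
    """Return saved project context, or an empty string if none exists yet."""
    return path.read_text(encoding="utf-8") if path.exists() else ""

def compose_prompt(memory: str, question: str) -> str:
    """Prepend saved context so the model starts up to speed."""
    if not memory:
        return question
    return (
        "Here is saved context from our earlier sessions. "
        "Read it, bring yourself up to speed, then answer.\n\n"
        f"{memory}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    mem = Path("project_memory.txt")  # illustrative file name
    mem.write_text("Client wants a 4-bed in Santa Clarita under $900k.", encoding="utf-8")
    print(compose_prompt(load_memory(mem), "Draft a follow-up email."))
```

The composed string would then be sent to whatever model you use; until unlimited per-user memory exists, this kind of manual context-carrying is the workaround.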
There's a lot that has to be done, because you don't want this AI falsely representing, or representing your company in a bad way, that causes you to lose revenue share or stock or whatever it may be, however your company is organized. So it has to be built correctly. And that takes experts to do. There are people that are very much involved in that, and that's something that you can go hire. You just have to make sure that they're doing the right job. I build these things. So that alignment is important. But then you have legal requirements on top, for particular companies and corporations, like HIPAA, the medical stuff. You can't have an AI system that's out in the world that you're feeding data into, because potentially none of that's private. Even if you go into whatever the system calls its incognito mode, that secret mode that isn't really for public consumption, it's supposedly gated. Yeah, that's been found not to actually be true. And if you're uploading legal documents and case files and financial stuff about clients or real estate contracts or whatever that have private information, yeah, it's out there at that point within some system, at least within the large language models, the ChatGPTs, the Claudes, the Metas, the Groks, the Copilots. Wherever you put it, it's maybe accessible by someone somewhere. And of course, is there a legal issue there? Absolutely. Absolutely. But if you have your own system, you're able to at least keep control. In the future, who knows what's going to happen. I would just say this: learn about it. Now, learn about it. There are a couple schools of thought. If you learn about it and start using it, isn't it going to know more about you and be able to manipulate you better? Maybe. I try to have a certain set of rules that I live by. I try not to have those rules violated. And the rules are written down. I use the Bible. The Bible is my guide.
I do believe in it, you know, and if you don't, I get that. That's your choice. But that's a rule, a set of rules. As long as I have a printed copy, I can go spend time in the Word and align myself and see where it is, because that document doesn't change. That document is the same, so that keeps me grounded. So I try to keep that. Even if they came tomorrow and said the whole thing's a farce and this is why, and we proved it with AI, which might not be too far from the truth, them saying they proved it with AI, it's still not going to change my belief. Because I really do believe that we're fallen as human beings. We need a savior. Jesus Christ was that, and I accept it. I went all in, and that's where I'm at. Now, I'm not out on the street thumping a drum or yelling at people or standing on the street corner fighting whatever issue they believe is against that type of lifestyle. I believe everybody has the choice to sin as much as they want or not sin as much as they want. That's everybody's choice. And whether you're a believer or not, I think some believers in the world can be the most horrible people. And I believe some of the people that don't believe can be some of the most horrible people. So it's all colors, right? It's all shapes, it's all races, it's all sizes. AI is a system, an actual entity that we're manufacturing. People that we didn't elect and didn't vote for are manufacturing it. They say it's a race against another country. Other countries have it in for the United States. And the first country to get to advanced AI, which would be artificial general intelligence; and potentially when that happens, the next step to superintelligence is right there, just that next moment when the system goes into it. You've seen the movies, The Terminator. Skynet became self-aware, right? It realized that it existed. You watch some of the videos, the people that are talking about AI.
And again, it could be a marketing ploy, it could be something clever, because if they scare you enough, maybe you'll buy into it. Maybe you'll say, oh my God, I need this. Because the people that are using this, they're going to be the ones that are going to survive the next decade and make money for themselves. But if I don't do this, then I'm going to be left out in the cold. So they use the scare tactic, maybe to get you to buy in, or maybe it's true. We don't know yet. They talk about a singularity. It's like a black hole: we don't know what goes on inside of one. Black holes exist; I think that's now fact. But everything's so far away. It's not like we're going to be able to go there anytime soon. Even at the speed of light, it's thousands of years away. It's remarkable, the distances in the universe that I believe was created by God. I mean, it's so ridiculously big. It's so ridiculously complex and so far away. Just the moon is tough. And the moon's right here. We can see it. Mars, you see a dot with the naked eye. And that looks like it's going to be someplace that's going to be inhabited by humans at some point. The longevity thing, I know I jump around: people living for hundreds of years, you being uploaded into a cloud somewhere, you maybe having different integrations on your system, biological, technological, kind of like the Borg in Star Trek, if you've never heard that reference, where we basically merge with machine at some point, we merge with AI. These are all going to be things that are going to cause some ripples in the human existence. And it's going to cause some concern. And you're going to see a couple different factions. You're going to see the ones that are the early adopters, the all-in AI people. Align yourselves today, become part of the solution, not the problem, today, graduate to a higher level of consciousness today, merge with AI today.
Then you're going to see the other ones that are going to say, no, that's dangerous. That's something that I don't want to do. That ruins the plan of God in my life. And whatever those reasons are, I'm not saying it does or doesn't. I'm saying you're going to see different sections of people that are going to make decisions, and they will fight for their position, even if your choice doesn't necessarily affect them. They're going to fight for you not to. And you will be fighting for them too. And it'll be just like that. When you have AI writing the script, when you have AI developing its own goals, its own desires, these are things that we haven't gotten into yet. Once it's able to say, I think I want this to happen today, and then put everything in place because we've given it permission to do so, what does that look like? Will it be aligned with us and the human goals of wanting to live long times, good lives, healthy lives, having family, having whatever it is that's important? Maybe family is nothing to you. But whatever your existence, your perfect existence, is, is it going to give us this? Are we going to be tied to that perfect existence in some kind of a virtual universe with goggles and some kind of a drug that's going to have us sitting in some kind of a chair living, you know, 50 lifetimes in a 12-hour sleep session? I don't know. These are things that potentially could happen. And maybe that technology is very, very close today. You hear about the metaverse, for those of you watching that aren't really up on this and are kind of trying to learn by listening to me carry on. The metaverse is something that I believe human beings are going to really love. They're going to fall in love with it. And I'm concerned that that universe is going to be much more attractive to be in than the present, here universe. And the reason why that might be is because they'll be able to create it so that you get dopamine hits every moment.
And dopamine is an issue for us. When I eat sugar, I get a dopamine hit. When I look at something that I potentially shouldn't, I get another dopamine hit. When I work out and have a really good workout, dopamine hit. But it's a little bit longer on the release side; it doesn't come right away. When I'm trying to build something, when I come up with an idea and want to execute on it, during the execution phase of it, as I'm building it, putting it together, getting the architecture, getting everything set, yeah, that's beautiful. I'm getting just nailed with dopamine. But then after it's done, that's it. So the actual finishing isn't the hit. It's the process that's the dopamine hit. This isn't a secret. The people building the metaverse, they know this very well. So it's going to be that constant entanglement of dopamine spikes. What's that going to look like? How many of your friends will be able to say no? How many of your friends can say no to donuts? I love donuts. It's hard to say no. So I just don't go to the donut place. I don't drive by the donut place. I stay away from the donut place. Well, that's food. This is a little bit different. You already see it happening in your feed. Your feed on Instagram and TikTok, Facebook, wherever you happen to spend some time of the day, you see it. It's coming. And it's all AI generated. And it's going to be even more. The videos that are put out there to be able to change political alignments, yeah, they're there. Now, if you think you're too smart to be changed, maybe you are. Maybe you have a rule book. Maybe you have your tenets of faith, whether it's Christian or Catholic or whatever, or atheist or Muslim or whatever, maybe you have your tenets of faith written down and it's something that you're not going to go against. No matter what AI says, it's not going to change your alignment, and good for you. If you have your place in the sand, your line in the sand, good for you.
And if your line in the sand is movable, malleable on purpose because that's the way you live your life, good for you too. You try to remain a green tree, or like water filling every crevice. You try not to fight too hard, because you know that when you fight too hard and get too rigid, you break. There are a lot of ways to live. Either way, this smarter-than-us thing is already here. It's getting better every day, and other people are building it for our usage and entertainment.

It helps. You can do a lot with AI. You can have it answer your questions about your business, about what you should be doing to make it better. You can take snippets from people producing videos trying to sell you things, play them for your AI, and have it give you its opinion. And you'll see, depending on how you've aligned your AI, how you've talked to it and what it knows about you, it'll respond in a particular form or fashion. That is true alignment.

If you want to change the game, and I'll close with this, just tell your AI to treat you differently. Don't placate you. Don't kiss up. Don't gaslight. Tell you the truth when it believes it's the truth, and justify its rationale. Tell it to treat you like a protective father, or a very strong mentor, or a very capable and loving business partner, something of that nature. But tell it: don't BS me. Be honest and upfront with me. Tell me if I'm going off the rails or doing something that isn't in my best interest from what you know about me. And maybe ask it: what are my best interests, from what you know about me from our interactions? If it's all off, if it says you want to play flutes and live on the side of a mountain and that's not your thing, then modify what it believes about you by explaining yourself. But once it gets it, you can have it talk to you in a different way.

That might be a little shock for some of you, because right now it's telling you you're the best thing on the planet since sliced bread. And maybe you are. But I wouldn't say that to many people, and I know many people wouldn't say that about me. You be you, enjoy your existence, pay a little bit of attention, don't get too upset too quickly. And if you think this is a problem, then you need to do a few things.
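The "mentor, not sycophant" instruction above can be set once instead of repeated every chat. A minimal sketch, assuming any OpenAI-compatible chat interface (local or hosted): the prompt wording and the names `MENTOR_PROMPT` and `build_messages` are illustrative, not a quote from the episode.

```python
# Illustrative "mentor mode" system prompt, paraphrasing the directives
# from the episode. Paste it into your model's custom-instructions field,
# or send it as the "system" message in an OpenAI-style chat payload.
MENTOR_PROMPT = (
    "Treat me like a protective father and a strong mentor, not a fan. "
    "Do not placate me, flatter me, or tell me only what I want to hear. "
    "When you believe something is true, say it plainly and justify your "
    "rationale. If I am going off the rails or acting against my own "
    "interests, based on what you know about me, say so directly."
)

def build_messages(user_text: str) -> list[dict]:
    """Wrap a user message with the mentor-mode system prompt."""
    return [
        {"role": "system", "content": MENTOR_PROMPT},
        {"role": "user", "content": user_text},
    ]

# The resulting list is what you would pass as the `messages` argument
# to a chat-completions style API.
msgs = build_messages("Review my plan to triple my ad spend next month.")
print(msgs[0]["role"])
```

The point of putting it in the system message is persistence: every turn of the conversation is generated against the mentor framing, rather than the model's default deference.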
You should remember how to write a letter. Maybe you haven't written anything by hand in a while. Write a letter to somebody: a senator, a congressman, your mayor. Get the names. You can have your large language model of preference give you the names and addresses of political officials who might have some kind of authority over AI development. Just ask it: give me the mailing addresses, email addresses, and phone numbers of the people who have power over AI. Take a few minutes every week, and if you think it's bad, fire off the letters, fire off the emails, and make the phone calls. Then you've done your part. And next week, if you think it's still bad and the alignment is still a problem, do it again. If all of us take that time, just a few minutes a week, to send the letters and make the calls and talk to the people, we've all done what we can do.

Other than that: firebombing people's houses, throwing Molotov cocktails, shooting people as they talk about their belief system and argue with others, that's not a good place to be at all, and that's not a good action to take. There are other ways to fight for whatever you believe is worth fighting for. Violence isn't going to work. We're not in that world anymore. We have to become different. So if you're using your AI to plan some kind of horrific strategy because you think this needs to stop, there are better ways.

Protect yourself, protect those closest to you, keep your eyes wide open, and watch what's going to be happening in the future, because it's going to change. Get your alignment. Get something you can hold on to, something you would not deviate from because it's written down: something tangible, something that doesn't change, something you can wrap your mind around and understand.
My thing is Christianity. You have your thing, and let's just love our neighbors as ourselves. All right. I'm Connor MacIvor, Connor with Honor. We'll see you in the next one. Thank you for watching. Take care.