People Strategy Forum

Brian Beckcom - Beyond the Algorithm: Why Human Judgment Still Wins

Sam Reeve Season 1 Episode 159

As organizations accelerate their use of AI and automation, one critical question remains: where does human judgment still matter most?

In this episode of the People/AI Strategy Forum, Sam Reeve is joined by Brian Beckcom to explore why leadership, experience, and human discernment cannot be replaced by algorithms alone. While AI can improve efficiency and decision support, Brian argues that judgment, ethics, and accountability must remain firmly in human hands.

This conversation unpacks the limits of automation, the risks of over-reliance on data-driven systems, and the leadership capabilities required to balance technology with responsibility. Together, Sam and Brian discuss how leaders can leverage AI without surrendering the human elements that drive trust, culture, and long-term performance.

This episode is especially relevant for executives, founders, and HR leaders navigating AI adoption while remaining accountable for people, outcomes, and organizational integrity.

Key Topics Covered:

  • The limits of AI and algorithmic decision-making
  • Why human judgment remains essential in leadership
  • Accountability, ethics, and decision ownership in AI-enabled organizations
  • Balancing automation with trust and culture
  • What leaders must retain control over as AI adoption grows

If you enjoyed this episode, follow the People/AI Strategy Forum on your preferred podcast platform and join the conversation! 

About the People/AI Strategy Forum
The People/AI Strategy Forum explores how leaders navigate the intersection of people strategy, leadership, and artificial intelligence. Hosted by Sam Reeve, Founder & CEO of CompTeam, the Forum features conversations with executives, practitioners, and experts shaping the future of work.

Learn more about CompTeam and the People/AI Strategy Forum at compteam.net.

Sam Reeve

What if the competitive edge in AI, in the saturated world we're dealing with right now, isn't the algorithm, but actually humans using better judgment when the data disagrees, the clock is ticking, and the stakes are human? Leaders can win or lose in this environment if they're using AI incorrectly. So welcome to the People Strategy Forum. I'm Sam Reeve, your host and CEO of CompTeam, where we help organizations design people-centered total rewards that attract top performers. Today's topic is Beyond the Algorithm: Why Human Judgment Still Wins. Our guest is Brian Beckcom. He's a trial lawyer and a leadership thinker who helps people spend the right time of their days being successful in high-stakes situations. So Brian helps leaders balance the algorithms, work with humans, work with AI effectively, and of course do all this in an environment that's gonna protect the business. Welcome, Brian.

Brian Beckcom

Glad to be here, Sam. I really appreciate it. This is one of the artificial intelligence and computer science topics that I'm super fascinated with, and I imagine a lot of your listeners are too. I have been since I studied computer science in the early 1990s, so I'm really looking forward to the show. Thanks for having me on.

Sam Reeve

Great. Yeah, likewise here, Brian. I know I've been waiting to talk to you for quite some time, so I'm really looking forward to this conversation. So Brian, as we dive into the topic, let's first talk about you and your journey. Can you tell us a little bit about how you came to help the people that you help?

Brian Beckcom

Yeah, so boy, I could get started in a lot of different places. So I'm 53 years old, and you and I... how old are you, Sam?

Sam Reeve

Yeah, I'm 55.

Brian Beckcom

We're the last generation ever to have lived both before and after smartphones and social media.

Sam Reeve

Yeah,

Brian Beckcom

You and I, our generation has seen some things.

Sam Reeve

No kidding.

Brian Beckcom

I studied computer science back in the early 1990s at Texas A&M. I took a degree in computer science and another one in philosophy. I've been interested in technology since I stole the password for our computer science lab in high school and gave everybody an A. People ask me, "I thought you were a trial lawyer, Brian," and I say, I am. And they say, why? How do you go from computer science and philosophy to trial law? I say, I spent four years in a computer lab with a bunch of nerds and dorks. I didn't want to do that my whole life, so I went to law school. And what do I do now, 25 years later? I sit behind a computer with a bunch of nerds and do the same thing. When I first started practicing law, there was literally no online research. We had to go to physical books in a library. It's hard to believe how inefficient that was back then. It's been absolutely amazing for me to have lived through this time and to have seen leaps and bounds in technology. It's been remarkable, and really cool to see it through the eyes of a lawyer and a business owner. That's the brief story about how I got where I am.

Sam Reeve

Yeah, we've seen a lot of things through our lives. I was at my brother-in-law's house the other day for the holidays, and he had his mom's typewriter in the corner, one of those manual typewriters where you push the keys down and it strikes with the arm. I can remember that being used by my parents, and here we are talking about robots, artificial intelligence, and spaceships going into orbit. It's a sci-fi world out there for sure.

Brian Beckcom

Do you remember when you used to have these physical typewriters, and you'd press a button and a piece of metal with literally a little outline of the letter would hit the page? Typewriters were big-time technology when they first came out. Do you remember when they came out with the typewriter that had one line of text, and you could type the text and then push a button, it would go click, and it would type it? Do you remember those things?

Sam Reeve

I do remember seeing those, but I've never used one myself. That was pretty high tech at the time.

Brian Beckcom

That was so high tech. Because, you remember, there was something called Liquid Paper

Sam Reeve

Yeah.

Brian Beckcom

where, when we used the old-fashioned typewriters, if we typed a wrong letter, we'd have to take some white paint with a little tiny paintbrush and paint over the letter, and then it would dry, and then you could put the right letter there. So being able to actually have a digital screen that shows you what you were typing, and then push it and it goes, was a huge advance. I mean an incredible advance. The thing about technological advancement, generally speaking, is it goes up in an exponential curve for the most part. It's not a perfectly smooth curve, so there'll be jumps here and there, but it moves up exponentially. That's Moore's law, which is the idea that computing power basically doubles in a relatively short period of time. Moore's law held true for a very long time. The problem with Moore's law is that right now we're running up against the limits of physics. You can only make chips so small until literally the circuit can't contain the energy that goes through it anymore. So we're bumping up against Moore's law. But I predict AI's growth is gonna be exponential. I can explain why I think that in a little bit, but hold onto your hats, guys, because we're like in the middle of an explosion right now. It reminds me of the story of the frog and the pot of water. You turn the pot of water on, and the frog boils because by the time it realizes it's so hot, it can't jump out. So
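Brian's description of Moore's law as repeated doubling can be sketched in a few lines; the two-year doubling period below is the textbook framing of Moore's law, an assumption for illustration, not a figure from the conversation:

```python
# Moore's law, sketched as repeated doubling.
# The 2-year doubling period is an assumed, textbook figure,
# not something stated in the conversation.

def doublings(years: int, period: int = 2) -> int:
    """Number of doubling periods that fit in a span of years."""
    return years // period

def growth_factor(years: int, period: int = 2) -> int:
    """Total capacity multiplier after repeated doubling."""
    return 2 ** doublings(years, period)

# Twenty years of doubling every two years is a ~1000x jump,
# which is why the curve feels like "the middle of an explosion."
print(growth_factor(20))  # 1024
```

The point of the sketch is that each step looks small from inside the curve, yet the cumulative multiplier is enormous, which is exactly the frog-in-the-pot dynamic Brian describes.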

Sam Reeve

Right.

Brian Beckcom

We're in the middle of that right now. We are literally in the middle of an explosion right now.

Sam Reeve

Yes. And that frog in the hot water sounds like a pretty good cautionary tale our leaders should be thinking about. Going back to your story about the typewriter and correcting with the white paint: back then we had to pay attention, right? We had to look at the details. Now some professionals out there are running with AI. It's putting out what looks like beautiful text, and they're not reviewing the details. They're putting something out there where, whether the AI is hallucinating, it's the wrong references, or it's out of context, it's hurting their reputation. So I mean, there's a lot of cautionary tales as we go through this growth curve.

Brian Beckcom

Yeah, no doubt about it. And there's something interesting that I kind of like to ponder a little bit. So, what's your favorite fiction book of all time?

Sam Reeve

Geez, that's a tough one. I do like science fiction. I've been thinking a lot about 2001: A Space Odyssey, things like that, because of the time we're in, thinking about how they depicted what things would be like compared to where we are now. I'd say one of those.

Brian Beckcom

Here's why I asked the question. So whoever your favorite author is, the best writer of all time, we're gonna come to a time very soon where AI can mimic those writers. The question I'm asking myself is, at what point do we get to where humans aren't able to distinguish between good writing and bad, because all they see is this certain style of AI writing? Normally, if you produce something that has value, what that means is there was a cost to create it. A book like one of your favorite science fiction books, like 2001: A Space Odyssey, has a great deal of human effort and energy that goes into it. If you can produce that with very little energy and very little effort, how are we gonna tell the difference between what's a good movie and what's a bad movie? They're all gonna kind of be homogenized. What happens with these AIs is, from an algorithmic standpoint, they're extremely recursive. Recursion basically means you have these loops that feed back: if this, then that, then go back and do this, and then go back and do this. And so what I've found with the AI is, the more you work with it, the more you train it, the more it kind of goes to the mediocre middle, is maybe the best way to put it. In other words, Sam Reeve's output is gonna look the same as Brian Beckcom's. And let me give you a concrete example of this. This is from my industry, but it could be from anybody's industry. I am a 25-year, board-certified, practicing trial lawyer. I am really, really good at what I do. But what happens when a C-plus lawyer figures out that they can go on to ChatGPT or whatever AI and say, hey, I'm not very good, but tell me what Brian Beckcom would do? And it just goes and spits out everything I would do, right? My competitive advantage is I know the right questions to ask. My competitors don't really know how to figure out what is really important.
In a trial, just like in a business deal, normally there's a real core that's important, and the rest of it doesn't matter that much. The rest is just a distraction. I know what that important core is, so that's my competitive advantage. But what happens when the C-plus lawyer says, what would Brian Beckcom think was the most important thing? And I do this right now. I've been collecting opening and closing statements from big verdicts for about 10 years, from all these lawyers that I know and respect, and I've pumped them all into AIs and I've said: analyze these, tell me why they're persuasive, put forth a template, and then I want to follow that template for my own opening and closing arguments. So I'm basically taking the brains of all these phenomenal lawyers and using their brains for my own purposes. What I'm kind of trying to figure out is, where is the competitive advantage to being human, and where are we no longer competitive with these machines? That's a really important question. The other important question to me, quite frankly, is, from a purely thinking standpoint, are we gonna have any advantages over these machines? Or at some point, are they just gonna be smarter than us at everything? I've actually learned some stuff about that in the past couple weeks that's super fascinating, and I'm happy to share it with you. But I'll stop there, 'cause I've been going on for quite some time.
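Brian's loose gloss of recursion ("go back and do this, and then go back and do this") is the idea of a routine that re-invokes itself on a smaller piece of the problem until a base case stops it. A minimal, hypothetical sketch, not anything from his actual workflow:

```python
# A recursive function calls itself on a smaller input until
# a base case ends the "go back and do it again" loop.

def countdown(n: int) -> list:
    """Build the chain of steps from n down to liftoff."""
    if n == 0:                        # base case: stop recursing
        return ["liftoff"]
    return [n] + countdown(n - 1)     # recursive case: repeat on n - 1

print(countdown(3))  # [3, 2, 1, 'liftoff']
```

Each call defers to a slightly smaller version of the same task, which is the feedback-loop shape Brian is pointing at when he describes AI systems as "extremely recursive."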

Sam Reeve

Yeah. Well, Brian, it's hard to tell the future, and I do think AI is going to develop to where it can really read the room. Right now, what AI cannot do very well is understand the nonverbal cues that we see in a conference room or a courtroom: you see somebody sweat, or start to shift in their seat, or the expression on their face. I think some of this is difficult for AI to interpret at this time, and maybe it's not gonna be that way in the future. But right now I think that's one area where we have an advantage. What do you think?

Brian Beckcom

A hundred percent, although AIs are gonna be able to do that easily, and it won't take long. And that's because that's all pattern matching and recognition. I mean, there's bookshelves, there's libraries full of books about how, if your face moves in a certain way, that necessarily means you have a certain emotion, or you're doing certain things, or your eyes move in a certain way. AI can do that with no problem whatsoever. I'll tell you the interesting thing about AI and vision: that's a much harder problem. But one of the ways they've been able to fix it is, have you ever gone online, and a website has said, hey, I want you to prove you're a human, and then it has a grid of pictures and you have to figure out what's what? It's called a CAPTCHA. Do you know why you have to do that?

Sam Reeve

Why is that?

Brian Beckcom

You're training AIs in vision. Everybody thinks that's some sort of security issue, and it's not. What it is, is we've all been crowdsourced into teaching AIs how to see. So take a bicycle, for example. Have you ever seen somebody with a bicycle attached to the back of their car?

Sam Reeve

Right.

Brian Beckcom

AI used to see that, if it was in front of them, as a car going straight and a bicycle going sideways. That's how they would see it. And with enough data, with enough people saying that's actually a bike attached to a car, and over time you see it's not moving left or right, there's all these cues, and that stuff is now solved. Those problems are basically being solved by just throwing as much data and as much information at them as you can. The thing that is potentially insoluble, maybe solvable but potentially not, is this notion of artificial general intelligence and what it really means to be intelligent. There are also different sorts of intelligence. Your thermostat is a freaking genius when it comes to figuring out how to maintain the temperature in your house, right? Your thermostat is intelligent, at least with respect to the temperature, but only the temperature. Chess engines are a million trillion times smarter than humans already, and they always will be, at chess, but only chess, right? And so the really interesting question, and by the way, nobody knows the answer to this, it's an open question in science: what is consciousness? What is the purpose of it? And how does it connect with general intelligence? I can tell you for an absolute fact that AIs have not reached general intelligence. I think everybody will agree with that. The question is, what will that look like, how will they do it, and what will it mean? That is the huge question. And I don't think it's gonna be reading the room. I think the AIs will be able to read the room better than humans. The second they can read the room better than humans, they'll be able to read the room better than humans for the rest of eternity. Okay? Because they're gonna be able to walk into a room, to use your example, see everything instantly, process it a trillion times faster than us, have more data to rely on, and be able to think in parallel.
They're just better than us at that. The real question is, will they develop a sense of consciousness, a sense of identity, a sense of theory of mind? I wanna give you a real concrete example that I've experienced recently. I use AI a lot to help me with witness examinations. I have to do it personally; I'm not letting the AI do it. But the AI is really good at getting me started and giving me ideas. In fact, I've noticed that about 60% of it is perfect the first time. Now, I've been working with it and training it a lot. But where it's wrong, and this to me is the real insight, okay, where it's wrong is it doesn't have a theory of mind. It doesn't have a sense that there's another human, that it's creating questions for another human. So what it does is it'll create a list where, you know, 80% of the questions will be absolutely perfect, and then there'll be 10 or 20 questions where I'll look at it and go, what kind of autistic, weird, emotionally stunted person would even think about asking another human being that question? And the AI has no clue, right? So the AI and I are interacting, and it's been programmed to act human-like, right? But when we're referring to a third party, it has no sense of that. And

Sam Reeve

Right.

Brian Beckcom

I think that's fascinating. So maybe they ultimately learn how to program enough psychology and behavioral science and all that into the AI where it can pretend like it has a theory of mind. But if it pretends well enough, what's the difference? If it pretends like it's conscious and none of us can tell, isn't that being conscious? I don't know. It's a good question.

Sam Reeve

Yeah, definitely. And I think that comes to another point where we can't tell the difference. For instance, us talking on this Zoom call right now: there might be a point where, on a call like this, you have to question, am I really talking to a human?

Brian Beckcom

That is a deep, deep philosophical question. It has been for a very long time, and like I said, it's called theory of mind. It doesn't matter if we're on Zoom or not. You and I could be in person talking to each other, and I have no proof whatsoever that you're a conscious agent, because the only thing I have any direct proof of is that I'm a conscious agent. I know I am experiencing something. I don't know if anybody else is. I assume they are, because they're doing the same stuff I am. One of the great things about human intelligence is we start with the assumption that other people are having a similar experience, and we go from there. That's kind of the fundamental question: if you can't tell the difference, what does it matter? Maybe it matters ethically. If the AI actually has a theory of mind and is actually having an experience, then is unplugging it murder, maybe? Right? But if it's just pretending, it doesn't matter. So, I mean, there are profound consequences based on how we view this problem. The other thing, which I think is really interesting, Sam, is our brains are built on underlying organic material. With enough knowledge, we should probably be able to figure out how that material creates thinking and consciousness. We're not there, but we're getting closer. If we can ultimately understand that, can we build a non-organic machine that does the same exact thing? And if so, will consciousness arise in the same way? And if it does, are those things human? There are no questions more profound than these. AI raises the most profound questions we've ever faced as human beings.

Sam Reeve

Yeah, and I suspect that we're gonna be bridging some of this technology within the next five to ten years for sure. Right now, let's just talk about the stuff that we're dealing with today. I mean, a person can already use a tool to clone their voice. It can be done, maybe without a lot of passion and inflection, so to speak, but pretty much a serious, business-type voice. They can also go in and create an avatar of themselves, you know, sitting at a desk like you're sitting at right now, with a microphone. The next technology currently being refined is agents: agents that can act on your behalf, doing certain things. And so it's not too long before all that comes together, and you may be conversing with an AI avatar that you think is a person, that is a true agent of their company and has the ability to strike contracts and so forth, and draw up a deal and send it to you via email. Is that company on the hook for that, if it's an avatar?

Brian Beckcom

Great question. I don't know. I think it could be. Why not, right? But again, and this is one thing that I completely lucked out on, all this tech stuff and philosophical stuff also has legal implications, like broad legal implications. And what is the law? The law is a statement of our ethics, of our collective ethics: what we say is right and what we say is wrong. Those are great questions, and nobody knows the answer to them. You know how we're probably gonna answer those questions? There's gonna be people that give tons of money to certain congressmen who are gonna pass laws. The congressmen, by and large, will know absolutely nothing about the technology itself, and the incentives in our system are geared towards maximizing profit. So people are gonna try to maximize profit by getting laws passed in their favor. That's the way the system works. It's worked that way forever. My personal view is that at the end of the day, a human being should be responsible for everything. Ultimately, one of the problems I see as a trial lawyer is corporations diffuse responsibility. I know if I'm working at a corporation, I can't get sued personally. And I also know that I can make decisions that are gonna be filtered up through the system, and ultimately all the responsibility is gonna be diffused. The great example of this is, how was it that Ford built the Pinto when it knew it was literally gonna incinerate families? Because the corporate form allowed it to do that. It's the same exact thing with these technology companies. How is it that we're building AI that could potentially kill the world? It's because these companies want to be first to market so they can make a profit. That's a realistic view of how the system is set up. Those are the incentives. We can debate whether those incentives are good or bad, but they exist and they're there. That's how things run.
So it's always fascinated me that, and I don't wanna name any particular technology moguls, but just think of your favorite technology mogul. Most of them aren't very good at technology. They don't actually do the programming, for the most part. What they're good at is business. Okay? So Elon Musk is like John Rockefeller. I mean, that's what he is. Elon Musk is not a master coder. He doesn't build the rockets, he can't build the car, but he's a great businessman. Bill Gates cannot program a computer, okay? He's unable to do that. Great businessman, though. But here's the thing: we think because these titans of the tech industry are rich and have had successful companies, that means they're moral or ethical, or have some wisdom in the moral and ethical area. And those two don't follow. In fact, often it's the opposite. So basically we're entrusting the keys to this technology to people that may or may not have our best interests in mind. I don't really understand that. I don't understand why you let somebody who started a social media company because they couldn't get a date dictate what billions of people see as their reality every day. That seems to me to be a bad way to run a society.

Sam Reeve

Exactly. For sure.

Brian Beckcom

We should let Elon, because Elon Musk has a lot of money, we should let him dictate what everybody else thinks about everything, right?

Sam Reeve

Yeah.

Brian Beckcom

Right.

Sam Reeve

That seems to be the calling card for influence these days.

Brian Beckcom

That's what we're doing. It's the same thing with all of them: because they have these Silicon Valley companies, we decide they're way smarter about everything than they actually are. I grew up in the nineties when all this computer stuff was blowing up. Here's the dirty little secret: it's all luck. Okay? Mark Zuckerberg was in the right place at the right time. Elon, Bill Gates, all these guys, it's just luck. They're super talented, don't get me wrong. But if Musk stays in South Africa and doesn't come to Canada and then the United States, he ain't Elon Musk.

Sam Reeve

I think, like what you said there, Brian, it's being in the right place, right time, right situation, right technology. Yes, there's a degree of luck there, and I can see that. And there's also the talent. I mean, a person has to make the sacrifices and so forth to be successful. But as you said, does this mean these are the individuals that are going to rewrite history, or society? I think everybody has a part to play there. Leadership has a part to play. And to come full circle to the intent of the podcast here: what should leaders do? How should they harness AI, and what should they be cautious of with today's technology, especially from a business standpoint, risk and so forth? What are the cautionary tales you can share about what leaders should avoid in AI?

Brian Beckcom

If you're not using AI, you are behind, period. You should have been using AI, and I don't care what business you're in, I don't care what industry you're in, you should have been using AI two years ago. If you're not using it now, you're way behind. It's like if you didn't have an email address right now, people would think you were a hermit or a monk. But back in the day it was, oh, why would I use email? That's silly. Do you remember when the BlackBerrys came out with the physical keyboard, the little BlackBerry device?

Sam Reeve

I thought that was super cool back then.

Brian Beckcom

Super cool. And then they came out with the iPhone. Nobody will ever tap on glass, that's what they said. Remember? Nobody's ever gonna do this, there's no tactile feedback. Every time a new piece of technology comes out, there's a whole group of people that will say, this will never, da da da. They're always wrong, every time. They're wrong sometimes for different reasons. But then here's the flip side of that, Sam. The AIs will give you a big competitive advantage, but you as a human have to know how to deploy them the right way. I use AIs to help me start briefs. I use AIs with my diet, with my jiu-jitsu, with my golf training, drafting discovery, opening statements, giving me ideas, helping me deal with my wife. You name it, I use the AI for just about everything. But I don't trust it. I verify everything it says. I can't believe there are lawyers filing briefs where they haven't even checked whether the cases are real. That's just laziness, quite frankly. So it's a two-pronged message, Sam. You have to be familiar and conversant in AI if you want to be in any business from now until however long humans are around. There's a great saying, by the way, attributed to Albert Einstein: I don't know what weapons World War III will be fought with, but I know World War IV will be fought with sticks and stones. So we're gonna need AI until we have World War IV, whatever. But then the flip side is, you can't trust it to be accurate about pretty much anything. It literally makes stuff up all the time, and the stuff it makes up sounds like it's accurate. So we have to be actively engaged. The AI is not like, hey, go do this for me, and you come back and there's your product. It's like, hey, go do this for me, and you come back and go, this is a good start, but there's a bunch of things I gotta change. Part of the reason for that is because I'm a human, and Sam, I know what you like better than this AI knows, you know what I mean?
This AI is just guessing at what you like, and I actually know what you like. So I'm gonna be like, well, Sam's not gonna like that, it's kinda weird the way it said that. I think AIs have autism.

Sam Reeve

Yeah.

Brian Beckcom

And I know there's different kinds of autism, but I'm talking about the kind where you have trouble connecting emotionally. The AI can't. It's all fake. If an AI says, hey, how you doing today, hope you're having a good day, there's no emotion actually occurring. That's just all phony, because it's trying to make you feel that way. There are autistic people that have problems connecting emotionally with people. Not all autistic people are like this, but some are. And that's what an AI is right now: it's autistic in that way. So anyway, my message is, get on the program, but don't trust it.

Sam Reeve

That's right.

Brian Beckcom

Yeah.

Sam Reeve

One thing I think about is working with artificial intelligence kind of like how we do with a new employee. I mean, we don't know their skillset. We don't know how consistent they're going to be at work. We need to be diligent in checking the details and coming to a certain level of confidence with the output they're providing. And also, just like employees, they can have a bad day, get distracted. They can, you know, maybe spend too much time, and I'm not saying an AI might do this, out with their friends partying the night before, and come in and not perform well, right? Things like this that you'd have to watch out for with a regular employee. I think we have to do this with artificial intelligence as well. It's just quality control, ensuring that we're doing good management.

Brian Beckcom

Absolutely.

Sam Reeve

Now, one thing I wanna talk to you about, Brian: I did bring up a story earlier about a person cloning their voice, creating an avatar. Things like this are known to increase a person's productivity, and some leaders are doing this. Should that be something they pursue with caution? What are your thoughts?

Brian Beckcom

Pursue with caution.

Sam Reeve

Cloning their voice, creating an avatar in their own likeness.

Brian Beckcom

I think that's okay. When I call you on my iPhone, you know how on the other end it picks up, and it's like an image of you, if you want it to be?

Sam Reeve

Right, right.

Brian Beckcom

I have an AI image, so if I call you, you'll see my AI image. Also, every single firm photo at my firm, for everybody that works for me, is an AI image. I haven't done it with the voice yet. With the AI images that I use, and again, we're learning the ethics of this as we go, which is fine, that's the way a lot of things go, I think as long as you don't try to play it off as something other than it is, generally speaking, that's a pretty good approach. So if you saw my AI image, you'd be like, that's a great picture, but I can tell it's AI, right?

Sam Reeve

Yeah.

Brian Beckcom

Same thing with mine. It's like I tell people: it's the way I kind of think, and wish, I looked.

Sam Reeve

Yeah.

Brian Beckcom

But it's obvious. The real moral dilemma is when people don't know the difference.

Sam Reeve

Right.

Brian Beckcom

And what are you doing there? Like, what would I do? This would be incredible: what would you do if, all of a sudden on this side of the screen, another image of me came in and said, by the way, this guy, the whole thing was fake? What would you feel? You'd feel weird about that. You'd be like, this guy just kind of lied to me. It was a little deceitful. He didn't tell me ahead of time. I feel like I was fooled. Those are bad, negative emotions, right?

Sam Reeve

Right,

Brian Beckcom

So I think, at least right now, as long as we're clear with each other about what's real and what's not, we're fine, and right now, for the most part, it's pretty easy to figure out on your own. Even the voices aren't perfect. There'll always be a little pause, or something's off just a touch. By the way, that's called the uncanny valley. You know what that is, right?

Sam Reeve

I've heard about that.

Brian Beckcom

Yeah. So the uncanny valley: C-3PO and R2-D2 are obviously robots. Everybody loves them, thinks they're super cute, right?

Sam Reeve

Right.

Brian Beckcom

The closer a robot gets to being human, the creepier it is, until you can't tell the difference at all. So Westworld: you can't tell the difference. You ever seen the show Westworld?

Sam Reeve

Love it.

Brian Beckcom

Can't tell the difference. Now, back up just a little bit and think of the most lifelike humanoid robot. You've maybe seen some of these videos of Chinese robots or something, and they look like humans, but not quite, and we're like, God, that looks so weird.

Sam Reeve

Mm-hmm.

Brian Beckcom

So the uncanny valley protects us. I think it's an evolutionary thing: unless it's perfect, humans have a real good sense of, this shit ain't real. What I'm looking at ain't real. I can tell. I don't know why, but I can tell it's not real.

Sam Reeve

Yeah. So, as we're wrapping up this conversation, Brian, what are the top things you'd like leaders to walk away with when they're thinking about, hey, how do I use artificial intelligence in 2026? And what are the things they should be reluctant to use artificial intelligence for?

Brian Beckcom

You should use artificial intelligence for everything, would be my view. Using it is a skill. The more you use it, the better you get at it: what you can trust, what you can't trust, how to prompt it, how to teach it. And you should use it for everything. It's like having an extra brain. I have an extra brain right now. I'll give everybody a short story. I'm 53. My wife just turned 50. We've been happily married for 25 years, three children. But she's going through some changes right now, and so am I, because our kids are about to be off, all of them. We're going to be empty nesters. I'm trying to figure out, how do I best maintain the quality of our marriage? So I sometimes will ask my AI questions about this. Sam, it's been transformative in the way I deal with things, and in a positive way. At our age, Sam, we have all these patterns in our heads that have developed over time, these ways of thinking, and AI is a great way to break those patterns. So I can be like, hey, my wife went out the other night with some friends. She got home an hour late. I was a little perturbed about that, right? Normally I would've said, where were you? Why didn't you come home? Why didn't you text me? Instead I tell the AI, hey, I'm a little perturbed about this, and it says: just, when she comes home, say, hey, glad you're home. Happy you're here.

Sam Reeve

Yeah.

Brian Beckcom

It's amazing. It's like having another human to say, hey, here's where your thinking is a little too much of a pattern. Maybe you should do it this way; think about it this way. It's incredible. The way I think about it is, it's like Google search, a million times better. It looks out at all this stuff. If I were to ask Google the same question five years ago, how do I deal with my wife, what would it do? It'd pop up a bunch of psychology articles. I'd have to read all of them, digest them, maybe watch a couple of videos. Whereas with the AI, it's literally like having a psychologist at my side. I also have a doctor on call at all times. I was having a problem with my energy dropping in the afternoon. Here's what I'm eating, here's how I'm sleeping, here's what I'm doing. Try these things, boom, boom, boom, no problem. I look at it like I have an extra brain. That's how I'd encourage your business leaders to look at it too. You have an extra brain available if you learn how to use it.

Sam Reeve

Right. I think this is one of those times of advanced learning, where we all have to do the best we can, try to stay out of risk as much as possible, and move forward.

Brian Beckcom

Yep.

Sam Reeve

It's important for us to do so, to keep up with our competitors and our peers.

Brian Beckcom

Yep.

Sam Reeve

Brian, tell me, what are you looking forward to in 2026?

Brian Beckcom

I'm looking forward to becoming a better version of myself, and part of the way I'm going to do that is I'm going to have AI be my coach. I have this idea, you know; I don't know what the meaning of life is, Sam, but I tell my kids that getting the most out of whatever talents you have, kind of maximizing whatever you've been given, is not a bad purpose. I'm trying to maximize my talents; my goal is to maximize those. Everything I do, I want to get a little better at. I want to be a better lawyer. I want to be a better father. I want to be a better jiu-jitsu guy. I want to be a better golfer. I want to read a lot. And AI is helpful for all that stuff.

Sam Reeve

I agree. I think that this is going to be my two percent year.

Brian Beckcom

Love it.

Sam Reeve

I don't remember the book this came out of, but it's just an extra 2%, just a little bit every day, on everything that you're doing, and it multiplies by the end of the year. I think that's my goal this year. And I think we can use these new tools to really accomplish this: spending more time with family, more time with friends, having better relationships. As long as we're taking that stance of really enhancing our humanity,
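As an aside, the compounding Sam is describing can be sketched in a few lines of Python. The 2% figure and the daily cadence come from the conversation; the assumption that the gain compounds every one of 365 days is ours, added just to show how quickly small daily improvements multiply:

```python
# "Just a little bit every day" as compound growth:
# start at a baseline of 1.0 and improve 2% each day for a year.
daily_gain = 0.02
days = 365

level = 1.0
for _ in range(days):
    level *= 1 + daily_gain

print(f"After {days} days of {daily_gain:.0%} daily improvement: {level:.0f}x the baseline")
```

Even at a more modest 1% per day, the same loop ends the year at roughly 37x, which is the version of this idea popularized in habit-building books.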

Brian Beckcom

Yeah.

Sam Reeve

That's where it all comes out.

Brian Beckcom

Agree.

Sam Reeve

Well, Brian, it's been a great conversation. Thank you so much for spending time with me today.

Brian Beckcom

Wonderful. Sam, I knew when we, talked not long ago that this would be a fantastic show. You're a great guy, really smart, and I'm just fascinated by these topics. So thanks for giving me a chance to spout some of my opinions.

Sam Reeve

No, I love it. I love it. Okay, to all those that are tuned in to the People Strategy Forum, we'll see you next week. If you found this episode interesting, please like, share, and think about that friend of yours that needs that boost, that needs someone to say, hey, you need to take a look at artificial intelligence and see how you can use this technology effectively, not only for work, but for your family and your life.

Brian Beckcom

Yeah,

Sam Reeve

I think that's what we need to do going forward. Take care, everyone, and we'll see you next week.