Love u Miss u Bye

AI: Law, Ethics, Innovation, and Human Creativity

February 12, 2024 Christi Chanelle Season 1 Episode 10

Watch Now:

https://youtu.be/yAFjK529T3w

Ever pondered the crossroads where AI meets the law? Grab your coffee and join me, Christi Chanelle, along with my insightful guest, legal eagle Lori,  as we untangle the complex web of artificial intelligence's latest quandary. From AI-generated doppelgängers to the echoes of Taylor Swift's likeness, we dive into the heart of pressing legal battles and the ethical implications of technology that blurs the lines of reality.

This episode isn't just a conversation; it's a journey through the potential threats and marvels of AI in our daily lives. We discuss how generative AI is crafting content with the power to deceive, from realistic imagery to voice clips, and how this affects everything from personal reputation to the sanctity of creativity. We also probe the increasing role of AI in human resources, turning a critical eye on the future of machine involvement in our professional spaces. Lori brings her A-game, offering legal insights that are sure to both inform and provoke thought.

As we wrap up our expedition into the AI labyrinth, we reflect on the delicate balance between embracing technological innovation and preserving the irreplaceable spark of human ingenuity. I share my own hesitations about letting artificial intelligence encroach upon the personal magic of podcasting and client relations. Together, Lori and I underscore the enduring need for the nuanced judgment that no robot or algorithm can replicate. Tune in to this candid exploration and affirm your belief in the value of human expertise amidst the march of machines.

Watch the episodes on YOUTUBE: Love u Miss u Bye
https://youtube.com/@Loveumissubye?si=qp5BK-Pf89SexD0k
Website
https://christichanelle.com/
TikTok- ChristiChanelle
https://www.tiktok.com/@christichanelle?is_from_webapp=1&sender_device=pc
Facebook - Love u Miss u Bye / The Sassy Onions
https://www.facebook.com/TheSassyOnions
Instagram- ChristiChanelle
https://www.instagram.com/christichanelle/?utm_source=ig_web_button_share_sheet


Speaker 1:

One of the big things that you say to me is make sure you put a disclaimer out there, and I so hate it.

Speaker 2:

Well, I think from my standpoint because I'm going to speak to sort of the legal issues and you know potential ramifications and sort of the legal landscape it's important for people to know this is just information, right, we're just sharing information you from a regular standpoint, me from the legal landscape but it's not legal advice. So nothing that we talk about today is meant to give anybody of the listener population specific legal advice. If they have specific issues, you know they should look into those with an attorney that represents them.

Speaker 1:

Okay. So I'm going to slip that into the beginning of the episode so we can make sure everybody's on board that this is just a conversation and we are just having our morning coffee or water together. And I do that now. You know, before I would talk to you and I wouldn't think, well, let me talk about legal stuff. You know, as I get older, I realize how much more valuable your information is. The Love u Miss u Bye podcast. Let's inspire each other. Hello and welcome to Love u Miss u Bye.

Speaker 1:

With me, Christi Chanelle, and I'm here with my best friend, Lori. Lori, thanks for joining me. Yes, of course, Christi. We're filming early, so it's earlier than I'm used to filming, so I may seem a little slow, but I will catch up as soon as my coffee catches up. So I'm very close.

Speaker 1:

But I think it's really cool because when I was driving up here this morning, I'm thinking I get to have a conversation with my best friend early on a Saturday and it's a planned thing, and so I know that we're both crazy busy all the time. And to have that time frame, to know that. And, sorry, I have been sick, so I may have to drink a lot of water this episode. It seems like it's itching my throat for some reason, but I am more than up to have a conversation with you. It's actually less prep because you're so prepped. So I was like, yeah, this is perfect. So I've been sick, so I haven't done a lot of prep for the show, so I'm so glad that you came. But even more than that, this is Valentine's week and I know the normal thing would be to talk about love and relationships, but we are taking a sharp left turn and talking about something that struck me so hard.

Speaker 1:

About a week and a half ago I was doing my normal scroll on TikTok to see what the trending topics are and just be entertained, and a clip came up and they're talking about Taylor Swift, who I am a confessed Swiftie, so I really, really enjoy her music and how smart she is, really. So I'm watching this and they start talking about these pictures that are out there right now of AI. They're AI Taylor Swift, but they look exactly like her, and it's provocative pictures, and I am just literally speechless as I'm hearing this information. It terrifies me. So I send you a text and I'm like, Lori, this is something. Like, we had a different topic scheduled in our minds for the next time we sat down together to talk about things, and I was like, no, no, no, no, no. We've got to talk about AI.

Speaker 1:

It's one thing for me to just state you know what I see, what my opinions are. It's a whole different thing when you add the legal mind as to what could possibly potentially harm everyday people. I hear all these opinions out there, which is similar to me, but I've never really heard just a casual conversation with an attorney about AI things and what they're seeing in certain arenas. So thank you for coming on and sharing this insight, and I am just like the viewer in this conversation. I'm just going to you know, ping pong off of the things that you're telling me and ask questions that I think they might be asking also.

Speaker 1:

So with the Taylor Swift AI stuff, it terrified me, Lori. It terrified me because this is someone, I'm sure you know, this is a famous person, and when you're dealing with fame and things like that, this is a typical type thing. Not the AI pictures, but just people feeling like they know them, they can say whatever they want, they can do whatever they want. And now it's taking it to a whole new horrifying level. So I'd like to know what your take is on the whole AI Taylor Swift drama.

Speaker 2:

So, you know, I haven't really looked at the specific incidents or I've never seen the AI generated photos or anything. I think the one thing we want to do is just say, okay, well, what is AI? Right? So it's artificial intelligence for anybody that is not interested in it. That's what it is. It's not human intelligence, it's artificial intelligence. And then there's different aspects of that.

Speaker 2:

As far as the AI situation with Taylor Swift, that's generative AI, right? So it's like AI that can just take certain information and then generate new information. So somebody took information about her, photos, whatever they did, and they input it into whatever AI platform they're using, and they generated what they wanted, right? They created images of her that weren't even her or that have nothing to do with anything that she was doing. So I think your point is you were horrified, right? And so that's a natural reaction, because if that could happen to her, that could happen to anybody. And then, when you start to take that a step further, all you need is like a second or two seconds of somebody's voice, and the next thing you know, you're generating voice clips that are supposedly you or me or Taylor Swift or anybody, right? So now you can... Can I interject on that?

Speaker 1:

Yeah, I actually. They have this ad that plays on TikTok and the ad is taking a song and then putting your voice into it, but it's AI generated, so it's making you sound like a singer, so that completely touches on that, right there, where they can just take my voice and turn it into a hip hop song and it. You know what I mean. It takes away the talent behind things. I'll tell you what.

Speaker 2:

Well, that's another point, right, is you can generate anything. So now there's AI-generated books and AI-generated songs, and you're taking this human component right out of it, and anybody can create anything, and they have zero talent and they have nothing but a computer and AI capabilities. And so we're talking about embarrassing things and things that could be hurtful. Maybe the Taylor issue could be hurtful to her image, and then it could be hurtful to her career, maybe, because people can just take her songs and make them their own and things like that.

Speaker 2:

But then you have to think about the really scary stuff, which is someone taking a song and taking your voice or your child's voice or somebody's, and then posing as you and reaching out to your loved ones or whomever and saying, well, that they have you, right? They have you, they've kidnapped you. It's their voice saying, Mom, I'm in trouble. It's their voice saying, Mom, I need your help. Mom, I'm in trouble. You know, all these different things. There's scams that people are perpetrating against regular people because they have these voice clips that sound just like their kid or just like their mom or just like their dad, and so now you're talking about not only hurting people but using AI for evil, right? This is like pure evil, trying to shake people down, trying to get money from them, saying, you know, if you don't give me this amount of money then I'm going to hurt your child, you know, and things like that. I mean, this is happening. This is really happening.

Speaker 1:

Well, I think, and just like everything, when you see, you know, I would like to think that most people that invent things do it with good intentions. There's always going to be those people that say, well, hey, he can do that, we can do this. And it's terrifying because AI there's so many ways that it can harm us and there's ways that it can help us, but we don't even know all the ways that it can harm us yet. So, when we go back to the Taylor Swift situation, the first thing I thought of is you know, I'm so lucky that I was able to be part of a time that didn't have social media. I wasn't able to put all my stuff out there as a 19 or 15 year old girl not quite thinking through the ramifications of what I say.

Speaker 1:

So now, you know, obviously the world has to live with that, but I'm thankful because maybe I would have been, you know, sending a bathing suit picture to my boyfriend, you know, and I don't have that stuff out there, which actually, you know, I'd probably look better back then, so it wouldn't have been so horrible. But my point is I take, I try to safeguard my kids from, you know, don't put it out there because it's then. It's there forever. And you have people that are like that, really like cover up their bodies and they don't even wanna show a shoulder. People can take those images that you've worked your whole life to have a certain image and destroy it in seconds and that, right, there is just and you can't take away those images that people now have in their mind and it's not you, but it's like you.

Speaker 2:

I think that's where I jumped to first was yeah, I think the average person, when they hear something like that, it is horrifying, right. It's like, what are we doing here? And I guess, you know, maybe I wanna kind of dial back a notch and say I'm not against AI, I'm not somebody that, you know, doesn't have innovation in mind. You know, I think innovation is wonderful and I think that there are areas where AI can assist. Right, that's what it is, it's like assistive technology. But you know, there's sort of the extreme. It's like there's the extreme one way where people are worried about the world being taken over by robots, right. And then there's the extreme the other way, where it's like, oh, you know, this is just helpful and it can't hurt you. But you know, I think my overall position on this is you know, these are all emerging technologies and the law is still trying to figure out how to keep up with it, and so, from my perspective, you know, I've always worked with really large companies. You know my entire practice life, it was defending pretty large corporations, and so I've been following these AI issues for a long time, because they're all very highly technological companies that see the value in things like this.

Speaker 2:

What the purpose of artificial intelligence was and is is, you know, to take these huge data sets and then basically be able to learn a pattern of behaviors and learn how the user wants it to work, right. So, like, it knows what you're gonna say, it can help you, it goes faster, you're typing and it's predicting your next sentence or a comment. With the, you know, traditional AI, it doesn't create anything new. It just takes the data that it has and helps you do things faster, better. But with generative AI, you're producing new, so you're taking all the data that's in there and then you're asking it to do something new.
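The distinction Lori draws here, predictive AI that only recombines what it has already seen versus generative AI that produces something new, can be sketched with a toy example. The code below is our illustration of the predictive side only; the training sentence, function names, and model are invented for this sketch and are not from the conversation.

```python
from collections import Counter, defaultdict

# Toy predictive model: learn which word tends to follow which,
# then suggest the most common next word. It can only recombine
# words it has already seen; it never invents new content.
def train_bigrams(text):
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the law is still trying to keep up and the law moves slowly")
print(predict_next(model, "the"))   # "law" -- seen twice after "the"
print(predict_next(model, "zebra")) # None -- never seen, nothing to predict
```

The point of the sketch is the last line: unlike a generative system, this kind of model has no way to "make something up" for input it has never seen.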

Speaker 2:

And the information in these platforms, in AI, it only goes up to a certain point, right? So if all the data is up through 2021, then something that you might ask it for and ask it to produce for you now is not gonna be up to date. It could be false, it could be wrong, it could be discriminatory in a way. So if in 2021, you know, it was only men who do this or only women who do this, then if you ask it for something in 2023 or 2024, it may just think that, okay, well, we can't generate that for you, because you're a woman and this is really for a man.

Speaker 2:

You know, just silly sort of things like that. But it just makes things up when it doesn't know what to do. They call it, like, hallucinating. So if you ask it to do something and it doesn't know, it just makes it up. In the legal context there's a notorious incident of a New York lawyer who decided, when ChatGPT came out, basically, you know, said, generate me a brief that's based on this, that and the other thing, and it turns out ChatGPT didn't know what to do. So it made it up, and it quoted cases that don't exist and relied on law that doesn't actually exist. Oh no. So when the court got ahold of this, then, like, I suppose it was their law clerk or somebody that started trying to look for these supporting cases, and they don't exist, they were fabricated. So the lawyer got sanctioned. I wouldn't be surprised if his client sued him for malpractice, because the client could have done that on his own.

Speaker 2:

Exactly, and it was wrong. Right, it was wrong. At least if it was the client representing himself, he would have tried to look for proper law and so forth. But you know, this is how dangerous it can be right? There's this idea that if I generated, it's correct and that's not true, which is scary.

Speaker 1:

Right, that's crazy, though. I turned on a podcast this morning, I don't even remember the name, because all I did was type in AI. When I'm listening now, I have to tell you, it takes a lot to keep my attention. My mind will kind of go off into La La Land unless the person talking is able to capture it, and so words will capture it. So as I'm listening, I'm zoned out. I'm completely, I'm like, planning this episode.

Speaker 1:

I'm totally zoned out, and then all of a sudden I hear nuclear war. It jolts me out of my thought as I'm driving and I'm like, okay, so they're saying that the top three things to end civilization are nuclear war, COVID times a hundred. So what is that? Biowarfare. Biowarfare, and then AI. Yeah, and AI. Those are the three things that can end humanity, which is a dark turn. I know, I realize it's a dark turn, but it jolted me out. Yeah, it jolted me out, because I think I get all of those on a level now. After having COVID, I understand that one very well and how and what it can do.

Speaker 1:

Nuclear war is something, because we're in war, that's always been a concern, because it's a button press. The other one is AI, which we don't know all the ways that it can do it, but some of the things that I saw was that it's actually starting to create its own thought process in some things, where it's building upon and having its own opinions that differ from ours. That's where it gets scary, because if it can start to build its own thought process in mind, that's kind of terrifying, Lori. You know what I'm saying? It's like, yeah.

Speaker 2:

Yeah, I still don't know enough about the technology to say, like, it completely can act on its own. I think at the end of the day, it's still relying on sets of data that it's been fed, right? So I suppose if you fed in there lots and lots of very bad stuff, like, hey, let's go back to the Holocaust and say that the Holocaust was appropriate, or let's say that this is the direction, I suppose that it could generate like-minded thinking, you know, to your point. You know, it's not going to just evolve completely on its own, with its own... You know, it's a machine, so it doesn't have a thought process. It needs to be fed information. And I guess, you know, one of the contexts in which I've been dealing with it, or that comes up a lot in my world, is that hiring people is a big investment for companies.

Speaker 2:

Finding top talent is a very time-consuming process and not every company has a huge talent acquisition team or recruiting team. A lot of companies use platforms, you know, different HR platforms where, when you apply for the job, you go through their platform, you upload your resume, and there are a lot of features that have been used for years that are not scary. They're just the sort of traditional AI where it helps go through large sets of data. And let's say you have 700 applicants, so it helps you narrow down based on the job description. Do these people even meet the job description, right? So it can help you with those kinds of things.
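The kind of traditional, non-scary screening Lori describes, narrowing a big applicant pool against a job description, can be sketched in a few lines. This is a hypothetical illustration only; the required terms, applicant names, and scoring rule are all invented and not taken from any real HR platform.

```python
# Hypothetical required terms pulled from a job description.
REQUIRED_TERMS = {"python", "sql", "5 years"}

# Invented applicant pool (a real system might hold 700 of these).
applicants = [
    {"name": "A. Rivera", "resume": "10 years experience, Python and SQL"},
    {"name": "B. Chen",   "resume": "Marketing background, no coding"},
]

def matches(resume, required):
    # Count how many required terms appear verbatim in the resume text.
    text = resume.lower()
    return sum(term in text for term in required)

# Rank applicants by how many required terms their resume mentions.
ranked = sorted(applicants, key=lambda a: matches(a["resume"], REQUIRED_TERMS),
                reverse=True)
print([a["name"] for a in ranked])  # ['A. Rivera', 'B. Chen']
```

Even this crude version shows the appeal (it narrows 700 resumes in seconds) and the brittleness: "10 years experience" does not literally contain the string "5 years", so naive keyword matching already misjudges a stronger candidate.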

Speaker 2:

But there are a lot of platforms out there that are starting to take the next steps that they're going to choose the candidates or they're going to use technology to interview these candidates. They're going to be the ones to interface. You're going to deal with a chat bot. You know there's all these different things. Like the chat bot will pop up and be like you know, can I help you? Like, like you see on all these websites. But do you want, do you, as a company, really want to get so automated?

Speaker 2:

It's so automated that it's not until the last step that you're actually interviewing the person and seeing the person live and then making the decision, and so there's a lot of talk around and a lot of legislation around.

Speaker 2:

Like if you completely take employee decision making, like if you take it out of the hands of a human being completely, when you rate people's performance, like if you decide to use a performance system that uses AI. Are you making decisions about people's performance and fit for a job by listening to a computer and by listening to what AI is saying? So again it goes back to that like was there a historical path where, you know, maybe women have this job more than men? So then are all the men getting excluded? And you know, was a certain you know, when you just start thinking about the historical sense of like, you can see how discrimination can become an issue in hiring and in rating people's performance. So there's a lot of legislation pending, there's a lot of legislation out there, and all of it is sort of saying if you're using AI to make decisions like that, you could be in violation and you could be discriminating against people unknowingly.
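The discrimination risk described above, a system that learns from skewed historical hiring and quietly reproduces the skew, can be made concrete with a deliberately naive sketch. The data and scoring rule are invented for illustration; real platforms are far more sophisticated, but the failure mode is the same in spirit.

```python
# Invented historical data: past hires skew male.
past_hires = [
    {"gender": "M", "degree": True},
    {"gender": "M", "degree": True},
    {"gender": "M", "degree": False},
]

def similarity_score(candidate, history):
    # Score a candidate by how often past hires shared each attribute.
    score = 0.0
    for attr in ("gender", "degree"):
        same = sum(h[attr] == candidate[attr] for h in history)
        score += same / len(history)
    return score

a = similarity_score({"gender": "M", "degree": True}, past_hires)
b = similarity_score({"gender": "F", "degree": True}, past_hires)
print(a > b)  # True: identical qualifications, lower score for the woman
```

Nothing in the code mentions excluding women, which is exactly the point: the bias lives in the historical data, so a company using such a tool could, as Lori says, be discriminating unknowingly.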

Speaker 1:

You think, if they have built this, that there's going to be some kind of filter mechanism or something where you could say this will include women of all ages, men of all ages? You know what I'm saying? Maybe not yet, but because that is and could be a potential problem, I would hope that they would put in safeguards where you can say do not exclude anyone unless they don't have a degree. You know, that type of stuff. Just thoughts, thoughts. But yeah, right now, heck yeah.

Speaker 2:

Yeah, I am pretty sure that most of these systems are kind of fail-safe in that way. But there's still that mentality of, like, but you don't know, right? You really don't know. And I think the guidance out there from a lot of the, you know, like the American Bar Association and local bar associations, and just, you know, there's a lot of guidance on it. And then there's legislation. There's definitely legislation in New Jersey. Not New York State, New York City. California is always out in front of this. And I think that while we need to embrace it and understand it, we must be vigilant of the inherent risks. And that's how I'm guided and that's how I guide my clients. That's how I've always guided my clients, and that's how I sit internally in a company that, you know, uses technology and is highly technological. Everything I look at when it comes to AI and machine learning and generative AI is what are the risks, what are the benefits, and how do we find a middle ground? You know, that's sort of how I view this.

Speaker 1:

Yeah, and, you know, that's come into my mind just as a regular consumer and learning how to do editing and all these other things.

Speaker 1:

At first I was against it and, like I'm all about the human experience.

Speaker 1:

I mean, when you're storytelling and you're doing stuff like that, you want it to connect to the heart, you want it to connect to the brain, and the more that you put AI into that, the more you take away the heart, in my opinion. So I had that standpoint. That was my stance on it. And then I started to realize, if I don't at least embrace it, like you're saying, I'm going to be left behind, and I'm not going to know how to move forward in the community of editing and storytelling unless I at least learn it and get a little bit behind it. I think it's finding that happy medium, like you're saying, between the heart and the technology to really kind of find your way, and I think the people that can straddle that line properly are the ones that are going to succeed the most. Now, on this same note, are you worried about jobs getting eliminated with this whole thing, or do you think it's going to just spark new opportunities in the lane of AI?

Speaker 2:

Honestly, I really don't think that robots are going to take over the world. I really don't. I don't believe that. I think there's too many jobs and too many things that require that human analysis and that human check. Right, so, even if robots build it, there's still going to be a human that has to check it and has to make sure, and so, yeah, it might eliminate some of the jobs that are like you do a task, you do a task.

Speaker 1:

Like what? What do you see it eliminating?

Speaker 2:

I don't know, I mean, I really haven't thought about it. Proofreading, maybe. You know, maybe it'll impact the editing community. Maybe it'll impact, you know. I do think in a manufacturing context there's probably jobs that it will eliminate. Yeah, factory.

Speaker 2:

But I still think that it's going to create new opportunities it's going to create. You know, it's probably going to break even In my mind. That's what I think it's probably going to break even because as everything gets more sophisticated, then you need to find people who are experts in those sophistications and those areas and so you know, when you have self-driving cars right, which I'm scared of.

Speaker 2:

Yeah, and I am not a fan myself, personally, especially as a, you know, products liability lawyer for most of the time. Oh yeah, that's right. I'm just, you know.

Speaker 1:

You see, the red flags huh.

Speaker 2:

I'm just not a fan. But you know, some will say that automated autos are actually going to be safer, or actually are safer statistically, because they're more predictable than human beings. You know, you cannot predict what the person in front of you is going to do when they all of a sudden decide to look at their phone or pick up their phone, or they get distracted by their kid or whatever. Good point.

Speaker 2:

That is not predictable, right, but the technology in the self-driving car is predictable and it knows that if there's a person it's going to stop. It knows, you know. So there's real proponents for it. Yeah, but to me I feel like it's still too new. And you know, anything that is just rolling off of the assembly line and just rolling out there's always going to be glitches. There's going to be periods of time where you know you got to check what's really happening and you know, is the program running the way it was supposed to? And then, if it isn't, then a human being is likely going to be the one to fix that.

Speaker 1:

So why, in two minutes, were you able to make me go from I'm never getting into a car like that to I think it's brilliant, I want to do it? In two minutes, Lori. Because I never thought of the whole checking your phone, turning the radio down, talking to somebody next to you. That makes so much sense, because they're taking out the human elements of the mistakes that happen, drinking and driving, all of those things, right? Oh my gosh.

Speaker 2:

Okay, but then the other side of that argument is, does the technology understand enough? Let's say it's programmed so that if a person walks in front of the car, it stops, right? It's going to stop short of hitting that person. Well, what if there's one person and then another person behind that person and they're very close in proximity? The car isn't going to know or be able to predict which person it should avoid, right? So then that's where, without the human reaction and human processing, you have those scenarios where the car can't make the judgment in a situation like that: which is it supposed to hit? Is it supposed to hit the barrier or something else? So I think you have that. That's the other side of the argument, that the car doesn't know, and so the car can't make ethical judgments, the car can't make decisions about, you know... It's like the who-do-you-throw-overboard, right? Those kinds of ethical dilemmas. The car can't do that. So there's, there's...

Speaker 1:

I mean the car, Lori. I think I'm so stoked, though, and I'm not walking in front of it.

Speaker 2:

We've already become a society, I think, in my view, that's so distracted by so many things. What do you do? You don't pick up the phone and call people, you just text them, and then you can't even actually text them words, you text them emojis, you know. And, like, everybody's doomscrolling and TikToking and all their, you know, social media and all these things. Like, we're already so distracted by all of these things and bells and whistles, and everything is short and immediate gratification.

Speaker 2:

And so when you start to talk about AI and generative AI and all these tools and machine learning, like, they all have a good purpose, but in their extreme, are we just contributing more and more and more to, like, the dumbing down of the world? Really, like, are we just taking all of the decision-making capacity out of people's hands and handing it over to a computer and saying, you do it? And that's what doesn't sit super well with me, is people are creative, right? People's brains are amazing. People think of and create the most amazing things, like, whether it's actual art or a story, to your point, the storytelling, the heart of the matter. And do you really want to take that away and have AI write your book for you? Like, no, you know. Do you want to create, like, Christi's podcast with, like, auto-Christi, like, like?

Speaker 1:

Imagine that, right?

Speaker 2:

I mean, like, why? Why would you want, like, Avatar Christi to, like, run your, your podcast? You know, like, it's just, it's those moments where I'm saying, like, you know, I'm still a person that writes letters and sends cards. You are. And picks up the phone. But yet I've embraced technology because I have to, because I work with.

Speaker 2:

You know, or did work with, Fortune 500 companies and, you know, hugely successful, smart engineers and, you know, and now scientists and people that are just light years, you know, ahead of me in terms of what they know about these technologies, and so I have to embrace it. But personally, I am not going to be using ChatGPT to create, like, work product that I'm going to turn over to a client.

Speaker 1:

Well, I do need to tell you something. You just brought up, like, a fictitious idea of a podcast, Christi's avatar. Well, I was listening, you know, I'm always trying to learn, and there is either already out there, or it's coming, this thing where, if you ever get sick, you just put all of your stuff inside this AI and it will do the show for you based on whatever you want it to talk about, and it's you. That is freaking scary. Like, I was sick this week, so I could have just said, Lori, let's just do this, and then I'll just send my avatar to fill in there. That's coming, Lori, and it's right here. It's crazy.

Speaker 2:

But you, because of the reason you're doing this and the platform that you're using to tell your stories and to tell other people's stories and to, you know, basically connect with other people and inspire them, and, you know, you don't want to have our talk... Well, I don't think you want Avatar Christi doing that.

Speaker 1:

Christi Chanelle doing that, I know. But you know, I mean, it's nice to have a sick day, I'm just saying. But I wouldn't do it, I wouldn't do it. I can make that vow. I may just try it for an intro, just to show everybody what it's about. But I love doing this. I don't want to send a robot me in there. Why not just have somebody else do the podcast for me while I feed them the words? It's the same thing.

Speaker 2:

But I feel the same way when people ask, well, wouldn't it just be so great if you could generate all these things through generative AI? And I'm like, yeah, that's great from a time perspective, but I didn't work my ass off my entire life, go through law school, do everything I did, practice my whole career, so that I could turn around and ask a robot to do something for my client. No. My clients are expecting me. In this case I'm now in-house, but, you know, when I was external counsel, they were paying an hourly fee for me to give them my thoughts and my advice and my expertise, and not something I punch into a machine and then turn around and say, oh yeah, this is what we should do.

Speaker 1:

No, no, you're right. But if you flip that, also, on the other side of this, you and I, the humans, will be competing against the AI versions out there. That's the hard part. They need to put that little label, which I think they're going to start doing. I think YouTube is starting this, where they put an AI-generated sticker on the bottom so that people know the difference between what's created by AI and what's created by a human, which I think is good, because that will give people an opportunity to choose: do they want to watch AI, or do they want to watch an actual human do it? But we're going to be competing, Lori. It's coming.

Speaker 2:

Yeah, but you know what? I think about what I do every day now, and it's different being inside the company than outside of it. When you're inside the company, you're meeting with these stakeholders every day. They're coming over to your desk, they're calling you on Teams, and they're saying, oh my God, here's what just happened, what do I do, and what do you think, and what would you do? And they really don't want to hear the party line. They want to know what I recommend in this particular scenario, and no robot is going to be able to do that. When people come to me, they know they're going to get a fair and balanced representation, not just this is what the law says, right? A robot or automated system is going to spit out what the law says, the black and white. I give them the gray. That's the difference, and so I will never be replaced by a robot, in my view.

Speaker 1:

Period. So one of the things that I see a lot on TikTok is when they talk to Siri, or what's the other one, Alexa. When they talk to Alexa and they're like, Alexa, who's going to win the Super Bowl? And then Alexa will come up with something, and they're like, see, it's already predetermined, it's scripted. They're making it up, they're having hallucinations. Oh my God, you've just filled in a blank space for me, no pun intended. Okay, I never could figure that out. I was like, why does it already know this stuff? It's because it's making it up.

Speaker 2:

They make it up. They're not going to take over the world. That's really my upshot.

Speaker 1:

Well, I hope you made a lot of people feel better knowing that they're not going to take over the world. I mean, if we get anything across to everybody, it's that you don't have to worry about that. But we are getting closer to the Jetsons. I mean, let that be known. Definitely.

Speaker 1:

I did want to touch on one thing that you said about how an AI can generate your voice, or the voice of a family member or a child or a cheating spouse. You can do a lot of damage in that way. I would suggest one of the things that I did with my kids when they were younger: we had a family password, a family code that we would use, and I would say, if a stranger comes to you and says your mom told me to pick you up, make sure they know the word, or run, you know, one or the other. I'm running, I'm running. So I would suggest maybe doing the same thing with your family, with your husband, with your mom or your dad, just so that you have a word. And if this AI voice thing does start to catch fire and it starts to happen to more people, you say, got it, what's our word? And that way they can't get you, because if that AI-generated thing doesn't know your word, then you're good to go.

Speaker 2:

I think, when you asked me if we could talk about this, one of the messages I sort of wanted to convey to people is: just be smart, right?

Speaker 1:

Just be vigilant. You don't know what you're talking to. I mean, not everybody has that.

Speaker 2:

No, I know. But, you know, common sense, right? Just common sense. Like, we all like to put stuff on social media, we all like to share what's important to us, and that's fine. I'm not saying don't do it, I'm just saying be careful, right?

Speaker 1:

Give us some tips. What should we do? I mean...

Speaker 2:

I think, from an AI perspective, there's not a whole lot you can really do other than just be mindful: if you're going to use this technology for something important, double-check yourself. Just don't assume it's right. But also, just the way hackers and really technologically savvy people have figured out how to steal people's money and steal people's identities and all that, this is going to do that too. This is emerging technology that people are going to use for bad, to do bad things. So be vigilant.

Speaker 2:

If it doesn't sound right, don't trust it. If you get the call saying your kid is, you know, in danger, use that word. Or just stop for a second, say okay, and think it through. If somebody's demanding money from you, if the robot calls you and tries to shake you down and says you owe back taxes, okay, stop for a second, you know.

Speaker 2:

There are so many things you can do. I would just say, use the same techniques you would if you felt like you'd gotten a fraudulent phone call or a fraudulent letter or a fraudulent email, or these things that ask you to click on everything. Use those same moments where you take a step back and say, this doesn't sound right. And anything that's threatening, that says this must be done now...

Speaker 2:

Usually that's the first indicator that something is wrong. And don't use the same password for every single thing, because if the AI or the machine or whatever can crack your passcode, then they can get into every account you have: your bank account, your credit card accounts, whatever. Just use the same sort of vigilance that you would with any electronic activity. Don't put your whole life out there, because then it knows everything about you. I'm right in the middle, you know. We've got to embrace it, can't be afraid of it, but don't be so quick to jump on the train that this is going to change your life forever and everything should be done through AI.

Speaker 1:

Everybody has that thing that makes them special. Don't remove that by putting a robot in front of yourself and your creativity and your perfection, your imperfect perfection. That's what makes us human, and that's why it's so valuable to not let AI rule your world. So, really appreciate you being here. Was there anything you wanted to add?

Speaker 2:

No, I would just say, you know, be vigilant, enjoy it, play around, see how it can work in your life, the whole AI sphere, and then, just you know, stay human.

Speaker 1:

Stay human. Oh my god, that is like the best line. I'm gonna use that this week in my post, or next week: stay human. And it is Valentine's Day week, so we obviously want to wish everyone much love, to their families, to their friends, to their pets, and to you, Lori. I love you being here. And I have to do one call to action at the end of the show, which I hate to do.

Speaker 1:

Last week I talked about supporting the show, down in the show notes where it says support the show, and it'll take you to denominations of three, five, and ten. This week I'm gonna say, if you have the time, I would love you to put a review. If you're listening through a podcast streaming app or something on your phone, just put a review. And if you're watching on YouTube, then put a comment and a like, right this second, because it helps get the show out there and you're supporting it, even if you can't do it monetarily. So like and subscribe, and come back and listen to me next Monday. And, of course, we're gonna have Lori back on the show because she has such good insight and I love to learn from her. So again, thank you for being here. We're gonna say it together: one, two, three. Love you, miss you, bye. Love you, miss you, bye.

Speaker 2:

Oh.

Speaker 1:

Love u Miss u Bye has been brought to you by Christi Chanelle LLC. If you're looking for more information or want to follow us on social media, go check out christichanelle.com. All the podcasts stream there and the YouTube episodes are there, so why not? You can also listen wherever podcasts are streamed, including Apple Podcasts and Spotify. And lastly, thank you to you. Yeah, you, the one that's listening or watching. I appreciate you so much. Love you, miss you, bye.

Legal Ramifications of AI
Concerns About AI and Privacy Threats
The Risks and Benefits of AI
The Impact of Technology on Society
The Implications and Dangers of AI
