AmeriServ Presents: Bank Chats

The Truth Behind AI

AmeriServ Financial, Inc. Episode 21

Our Saint Francis friends are back once again, this time to chat about a technology that has quite the buzz surrounding it. Artificial Intelligence (AI) has many practical applications but can be detrimental if used improperly. What exactly is AI? Can we trust this technology? How could AI benefit the financial industry? Learn the answers to these questions and more on this episode of Bank Chats.

Credits:
An AmeriServ Financial, Inc. Production
Music by Rattlesnake, Millo, and Andrey Kalitkin
Hosted by Drew Thomas and Jeff Matevish

Thanks for listening! You can find out more about AmeriServ by visiting ameriserv.com. You can also find us on Facebook, Instagram, and Twitter.

DISCLAIMER
This podcast focuses on having valuable conversations on various topics related to banking and financial health. The podcast is grounded in having open conversations with professionals and experts, with the goal of helping to take some of the mystery out of financial and related topics, as learning about financial products and services can help you make more informed financial decisions. Please keep in mind that the information contained within this podcast, and any resources available for download from our website or other resources relating to Bank Chats, is not intended, and should not be understood or interpreted to be, financial advice. The host, guests, and production staff of Bank Chats expressly recommend that you seek advice from a trusted financial professional before making financial decisions. The host of Bank Chats is not an attorney, accountant, or financial advisor, and the program is simply intended as one source of information. The podcast is not a substitute for a financial professional who is aware of the facts and circumstances of your individual situation. AmeriServ Presents: Bank Chats is produced and distributed by AmeriServ Financial, Incorporated.

Drew Thomas:

Fast fact, a British mortgage provider reduced the average time for completing a mortgage application from one to two hours down to eight to ten minutes using AI. I'm Drew Thomas, and you're listening to Bank Chats.

Michael Zambotti:

We've never seen that before where students use ChatGPT for...

Kevin Slonka:

Oh, I've never seen that.

Michael Zambotti:

I had one actually leave the ChatGPT text in there, and it said, you know, do you need to know anything more about this topic?

Kevin Slonka:

Really? Yeah. I mean, generally, it's very easy to know when students use AI to produce code. Students don't realize how easy it is to know that they've plagiarized at all; whether it's plagiarized from a website or from another student in class, it is just so easy. The general telltale sign is that the code does something that I didn't teach them. So, I'll see that they made a variable this way when I taught them to do it that way. Why did they do that? That tips me off. I'll grab a line of code, pop it into Google, and then I'll find it on some website, and there's the whole assignment. So, I mean, it's just so easy to detect, because they're doing something that you didn't tell them how to do.

Michael Zambotti:

For essays as well. If you're reading 25 to 30 essays from students, you kind of get a feel for their style, and also for the style of an undergrad student. And then, whenever you see something generated by ChatGPT, it jumps out. I'm not saying that I'm as good as the online detectors, I'm not, and there are ones where you can paste in text and see whether it was probably generated by AI or not. But you get a feel for it; you start to think, something's not quite right about this. And, you know, being people that are into technology and hacking things, and looking for ways around them, we know those techniques.

Kevin Slonka:

Yeah. The student always turns in C-level work, and then all of a sudden, here's a, you know, master's level essay. Yeah, wait a second. You know, I'll look at the metadata for a Word doc and see how long the document was open, and it was open for a minute. Okay, yeah, how did you write 10 pages?

Michael Zambotti:

How did you do that? Teach me your ways.
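
For the curious, the editing-time check Kevin describes is easy to reproduce. A .docx file is a zip archive, and its standard docProps/app.xml part records a TotalTime value, the document's total editing time in minutes. A minimal sketch in Python (the file name is just a placeholder):

```python
# Minimal sketch: read a Word document's recorded editing time.
# A .docx is a zip archive; docProps/app.xml contains a <TotalTime>
# element (minutes of editing) per the OOXML extended properties.
import re
import zipfile

with zipfile.ZipFile("essay.docx") as doc:  # placeholder file name
    app_xml = doc.read("docProps/app.xml").decode("utf-8")

match = re.search(r"<TotalTime>(\d+)</TotalTime>", app_xml)
if match:
    print(f"Total editing time: {match.group(1)} minute(s)")
else:
    print("No TotalTime field found")
```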

Drew Thomas:

So, we kind of jumped right into the middle of a conversation here, but I'm going to set this up a little bit, just to sort of introduce it. We're talking about AI with Kevin Slonka and Mike Zambotti from Saint Francis University. We started talking AI off mic, and then we decided that this was too interesting to not put on mic. So, that's where we are. You've joined us in the middle of what has been an interesting conversation about artificial intelligence, and not only how it impacts things in the scholastic environment, but also some misconceptions that are out there about AI. And I'm curious, I was going to ask you this earlier, but I decided to wait until we were on the mic. From a cybersecurity standpoint, is AI making it more difficult for people to identify fraud? I mean, can you generate something that's more likely to get somebody to fall for it by using AI? Or is it less likely?

Kevin Slonka:

I wouldn't say more or less. I would just say quicker. So, you know, you can generate phishing emails or, you know, whatever social engineering prompts a lot faster using AI, and using different sources, faster than you could as a human just going and Googling things and piecing them together. So, yeah, it does let bad guys do things a lot faster than they were able to before. And there are actually two of our students who want to do some research with another professor who you've spoken with in the past, John Valkovci, to see if they can find artifacts of something like, is this image generated by AI? So, in the digital forensics world, are they able to detect when things they find on people's computers were AI generated?

Drew Thomas:

There was a news story that I read that some of the major corporations, I think Adobe was one, Apple might have been another, agreed that they were going to put a metadata watermark on anything generated by AI, one that couldn't be tampered with by users. But again, people that see stuff like that online are not going to know how to go look for a metadata watermark.
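
The initiative Drew is recalling sounds like the C2PA "Content Credentials" effort that Adobe and other companies have backed, which embeds a signed provenance manifest in a file's metadata. Proper verification requires a C2PA library and cryptographic checks; as a very rough illustration only, the manifest lives in a JUMBF metadata box whose marker bytes can sometimes be spotted in the raw file:

```python
# Crude heuristic only: look for JUMBF/C2PA marker bytes in a file.
# Real verification parses and cryptographically validates the
# manifest; this merely hints that provenance metadata may exist.
from pathlib import Path

def maybe_has_content_credentials(path: str) -> bool:
    data = Path(path).read_bytes()
    return b"c2pa" in data or b"jumb" in data

print(maybe_has_content_credentials("image.jpg"))  # placeholder path
```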

Kevin Slonka:

And even what you just said, that people can't tamper with it? That's wrong. From an offensive security perspective, you can tamper with anything, so people will figure it out.

Michael Zambotti:

And that's kind of like, and a lot of people might not know this, but every printer puts a unique fingerprint on the paper whenever it prints. You can't see it, but it's there on the paper. So, you could take a piece of paper and say, well, where was this printed, and match it up with a particular printer.

Drew Thomas:

Wow.

Jeff Matevish:

It's like a dot of ink or something like that.

Kevin Slonka:

It's multiple dots of ink, yeah. So, at my very first job, this was what, 20 to 25 years ago, we learned that these things were happening. So, we printed out a sheet of paper from the printer and looked up the code online. We actually had to take it to our publishing department, because they had the eye loupes; the dots were that small. So, we had to look at them and figure out, okay, here are the dots, and put the dots into this reader online, which then deciphered them. And it told us the name of the printer, the IP address, the make, the manufacturer. You could take that piece of paper and know exactly where it was printed.

Jeff Matevish:

That's pretty crazy.

Michael Zambotti:

Yeah, but if we see something like that with AI, I think that would be pretty awesome. You know, little story: back in May, we had our Community Development Week at school after the semester, and our Dean of the STEAM department actually did a presentation where he created an artificial intelligence version of himself. He said, I'm not an expert on this, but all I did was put in a picture, give a voice sample, and say, make me a script. And it read a three-minute video, and it was passable. You know, whenever we watched the video, I wasn't saying, for sure, this is not our dean in the video. And he said, I put 10 to 15 minutes into this. So, that's another thing. And, you know, one of the key, core concepts I teach in school is, be skeptical. Be skeptical of everything that you see and everything that you read, and verify it. And that goes double for these videos and pictures now. Don't just take things at face value.

Drew Thomas:

I'm trying to remember the website, and maybe I shouldn't remember it, because I don't want to necessarily endorse it. But there's a website that I came across about two months ago that will generate music, an entire song, lyrics and all. You can either give it lyrics, or you could just say, write me a song about my cats eating catnip at 2am, and it'll create an entire song around it. It's absolutely crazy.

Jeff Matevish:

There was a lawsuit where someone had asked an AI for lyrics in the style of a certain artist. And then they went to a different AI website and said, okay, here are the lyrics, I want you to sing them like said artist, and you couldn't tell the difference between the original artist and this imposter, this deep fake.

Michael Zambotti:

Yeah, I've done that with ChatGPT. I said, you're Taylor Swift, please write me a song about cybersecurity, and you read the lyrics and it's like, wow, this is really cool. You read it and you're like, well, this could be a Taylor Swift song. And you're right, there are ones where you can actually have it sing in the person's voice. So, maybe we can make a cybersecurity...

Kevin Slonka:

I want to hear that song because I love Taylor Swift.

Drew Thomas:

Who doesn't? After we're done with this conversation, we're gonna go create one, and that'll be the theme music to this episode. No, I just want to double back for a minute, because before we turned the microphones on, Kevin said some stuff that I think is important to note. Talk a little bit about what AI really is.

Kevin Slonka:

Oh, yeah, yeah. So, it's not magic. You know, from a computer science perspective, AI is nothing more than a bunch of if statements, which, if you've never done computer programming, is a way that you make a program do one thing or a different thing based on the input it receives. So, if you have a program that asks you to enter a number, and if you enter the number five, it prints out, yay, and if you enter the number four, it prints out, boo, you can make a program react based on the input. That's all AI is, just thousands upon thousands upon thousands of those. You know, people have to train AI with data. It's not able to think. I think the term artificial intelligence is a misnomer, because it's not intelligent; there is no intelligence to it. It's strictly artificial. It's nothing more than bunches of if statements that programmers have put into it to make it react to whatever people are typing into ChatGPT or whatever. I'm sure those of you listening, if you've used any type of AI, you've probably gotten an incorrect result at some point, and the only reason that happens is because somebody hasn't programmed in that one if statement that can answer your question correctly. Yeah, that's all it is.
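
Kevin's toy example translates directly into code. This minimal sketch hard-codes the rules he describes; anything the programmer never anticipated falls through to a default answer (modern systems learn statistical weights from training data rather than hand-written branches, but the point about unanticipated inputs stands):

```python
# Kevin's example as code: a program that reacts to input with
# explicitly programmed rules.
def toy_responder(number: int) -> str:
    if number == 5:
        return "yay"   # a rule someone explicitly programmed
    if number == 4:
        return "boo"   # another explicit rule
    return "no idea"   # every input nobody thought to handle

print(toy_responder(5))  # yay
print(toy_responder(7))  # no idea
```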

Jeff Matevish:

You hit that else.

Kevin Slonka:

Yeah, where it didn't know what to do.

Drew Thomas:

Yeah, we're all talking in BASIC here.

Michael Zambotti:

Well, the problem is sometimes you'll get an incorrect answer, but if you're asking about a topic you don't know about, and it gives you a statement, you're going to take it as, well, that must be true. Whereas if you know about whatever you're asking, you can discern, okay, that's a false statement. There was an example where somebody said, well, how do I get my cheese to stick to my pizza? It keeps falling off. And ChatGPT helpfully said, put a quarter cup of glue in your cheese. And somebody said, this doesn't sound like a good idea; I know about the Paleo diet, but this is a little too far. And then they actually found the root of it was a Reddit post where somebody sarcastically said, you know, you can just put glue in your cheese, and that's what it was learning on. So, like Kevin said, essentially we're dealing not with artificial intelligence but with machine learning. It's learning from something, and if it's learning from something that's wrong, it's gonna output wrong. It's not gonna all of a sudden know better.
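
Mike's glue-on-pizza story is garbage in, garbage out. As a toy sketch of the idea (not how ChatGPT actually works), imagine a system that simply parrots the most common answer in the text it scraped:

```python
# Toy sketch of learning from bad data: repeat the most frequent
# answer seen in scraped forum text, however wrong it is.
from collections import Counter

scraped_answers = [
    "use more cheese",
    "add glue to the sauce",  # a sarcastic forum post...
    "add glue to the sauce",  # ...reposted, so it now dominates
]

best_guess = Counter(scraped_answers).most_common(1)[0][0]
print("Q: How do I keep cheese on my pizza?")
print("A:", best_guess)  # confidently wrong
```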

Kevin Slonka:

Yeah. So, nobody has to worry about Skynet, or...

Drew Thomas:

Yeah, you said about learning, about artificial intelligence doing anything of itself. I think what you just said has value, because that's not what it is. Sarcasm, the concept of human sarcasm, has got to be the most difficult thing in the world for a machine that is looking at everything very literally to understand. It could be something as simple as, in the 80s and 90s, saying, oh, man, that's bad, right? Bad meant good. So, if some machine learning tool out there is researching stuff that was written in a newspaper in the 1980s, it's going to have an entire subset of data that says bad means good, but it doesn't always mean good.

Michael Zambotti:

Well, if it's trying to learn off of Gen Z slang, good luck.

Drew Thomas:

I don't understand it. Maybe every generation says that, but I don't understand Gen Z.

Kevin Slonka:

Every generation has their slang, right? But, yeah, there's some of it that... I mean, maybe we're all just old, right.

Michael Zambotti:

Yeah, some of it is really weird, but maybe it's not weird to other people. But you're right, a computer would not be able to discern it if it heard one person say bad at some point; you've got to look at the context. You know, what did you say it in? In baseball, you could say that pitch was filthy. You would think, whoa, that was a terrible pitch, right? No, it was a great pitch. It's really the context.

Drew Thomas:

Or the ball was dirty. You don't even...

Michael Zambotti:

Right. And in that case, the ball was filthy, or the uniform was filthy. Oh, well, it was actually dirty, because the guy slid. So, the context is important. And I don't think that computers, or artificial intelligence, are at the point where they're going to get that context to the degree a human can discern it now.

Drew Thomas:

I think it's one of those things where they're putting it out there for people to play with, and I really think that's what it is. It's the latest technological toy for people to play with, to create art, you know, music, to create deep fakes.

Kevin Slonka:

It's the cryptocurrency of this decade, right? That was the big thing 10 years ago. Now it's AI.

Drew Thomas:

Yeah, and I do believe that it is going to, I don't even want to say improve, it's going to change. And, you know, like much technology, if you grew up playing Atari, and then you graduated to Nintendo, and then you graduated to PlayStation, it improves, but it doesn't improve instantaneously. It took 30 or 40 years to go from Pong to Tomb Raider, with graphics that look realistic and all that. So, will AI improve at what it does? Possibly. And I think that's where people get scared. They're getting scared of the idea that they're not gonna be able to tell what's real and what's not.

Kevin Slonka:

Yeah, and that definitely could happen, like we were talking about before. You know, does this make the life of hackers easier? And, yes, the things that can be made to look real that aren't can happen much quicker now. And I would say that, even though I hate AI and I think it's stupid (don't hold back, Kevin), even if there was a use for it, like Mike mentioned before, it's writing small scripts, doing programming for you. That is something that AI is generally okay at. If I said, write me a Python script to do x, it could do a fairly good job. It would write the 40 lines of code. It might not be perfectly correct, but I know how to program; I could fix the one or two mistakes, and I just saved myself 10 minutes. So, there are some benefits to it. It can make your life a little bit easier. But if you don't know that it could be wrong, and you don't know to fix that mistake, did you just cause a system to crash because you took something from AI and deployed it in production without testing it?
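
The workflow Kevin describes, letting AI draft a small script and fixing the one or two mistakes yourself, might look like this hypothetical case, where a plausible draft gets the date format wrong and a reviewer who knows the data corrects a single line:

```python
# Hypothetical AI-drafted helper with the kind of small bug Kevin
# means. For input like "2024-07-19", a draft using "%m-%d-%Y"
# raises ValueError; a programmer spots and fixes the one line.
from datetime import datetime

def parse_posting_date(raw: str) -> datetime:
    # return datetime.strptime(raw, "%m-%d-%Y")  # AI draft: wrong format
    return datetime.strptime(raw, "%Y-%m-%d")    # the human fix

print(parse_posting_date("2024-07-19"))  # 2024-07-19 00:00:00
```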

Drew Thomas:

There was the CrowdStrike issue.

Kevin Slonka:

I was wondering if you were gonna bring that up. We could talk about that for an hour.

Drew Thomas:

It just kind of makes you wonder, though. Did they release that without testing? What were they using to create the code that caused that glitch? You've just gotta wonder.

Kevin Slonka:

That was definitely a virus definition file generated by ChatGPT.

Michael Zambotti:

Yeah, I don't know if they'll ever come out and say exactly how they let that happen, but a lot of people said, well, why didn't you test this patch? You know, you don't test it in production like that.

Kevin Slonka:

The AI told us to.

Jeff Matevish:

It said it was good.

Michael Zambotti:

Yeah, maybe it was an intern. Which, who was it, was it SolarWinds that blamed an intern for doing something? It was a huge breach, and it was like, oh, they just pinned it on the intern. It's like, well, why'd you give the intern that capability?

Kevin Slonka:

He's the easiest one to fire. Yes.

Michael Zambotti:

It wasn't our fault, it was the intern, right.

Kevin Slonka:

But yeah, I mean, we're talking about not being able to tell good from bad with AI, and this has really affected some companies so far. Like, some hospitals now are using AI to help diagnose, so, like, radiology studies; they'll use AI to look at the radiographs and be able to tell you, oh, this is a broken bone, this is a tumor, whatever. But while that's good, it's not always right, and some hospitals have actually put policies into place now. UPMC is one that has done this: no medical decision will ever be made based on AI without a human checking it first. That is a real policy that they have put into place, because they know people were doing this, and you're dealing with people's lives at that point, right? You've got to make sure somebody doesn't die because the AI said so.
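
A policy like the one Kevin describes amounts to a human-in-the-loop gate: the model's output is advisory until a person signs off. A hypothetical sketch of that shape (not UPMC's actual system):

```python
# Hypothetical human-in-the-loop gate: AI output is advisory only,
# and no decision is finalized without explicit human confirmation.
def finalize_finding(ai_finding: str, clinician_confirmed: bool) -> str:
    if not clinician_confirmed:
        return f"PENDING HUMAN REVIEW: AI suggests '{ai_finding}'"
    return ai_finding

print(finalize_finding("hairline fracture", clinician_confirmed=False))
```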

Drew Thomas:

So, is there a little bit of, I'm gonna show my age again. Whenever I was in school, elementary school, middle school, high school, I had teachers constantly telling me, you'll never live your entire life with a calculator in your pocket, you'd better learn how to do long division. Well, really, I do have a calculator in my pocket. So, is there some element of that to AI as well? Is there some resistance to AI, like, oh, that's never going to catch on? Could it improve things in a positive way, or is it always going to be something where you have to double- and triple-check its output?

Michael Zambotti:

Well, I think with any technology that comes out, there's always going to be some resistance to it. Even today, you know, you work in banking; there are people that will not use online banking, right? (True.) I'm never using ATMs, okay. There's always some resistance, but then there are other people who use it. And ultimately, the technology we look at is neutral. It's a tool. Is the tool always going to work and be the end-all, be-all? It's not, but it's going to help us, so how can we use it? Like Google searches. You do Google searches, and it's going to assist you, but you're not going to rely on it. You're still going to have your experience and your insights into the particular thing that you are working on.

Kevin Slonka:

Yeah, how many times do you get bad results from a Google search, right? Happens all the time. Sure, you don't get, like, a list of 100 perfect results, but I...

Drew Thomas:

And look at how much longer that algorithm has been working, compared to AI.

Kevin Slonka:

Yeah, the Google algorithm has been out since, what, '99 or whatever, when they started working on it. AI has been around for, what, a day? Like, give it time, people. But, like Mike said, it learns based on stuff that is already out there. So, how do you protect against that, right? If AI is learning from Reddit posts and other forums where people can lie and put out bad information, how do you protect against that? Is there a way, no matter how good AI gets, that we can ever get it to a certain point where we can stop worrying that it may tell us something that is wrong? Okay, this is taking a little turn.

Drew Thomas:

As I sit here right now, my gut feeling on that is just no.

Kevin Slonka:

That's mine too, you know?

Drew Thomas:

Because I think there are just too many people out there, and I think we talked about this off-mic, whose joy in life is to cause chaos, to take something and try to break it, to make it less usable, less accurate. And if you're going to allow the internet to feed AI information, it's always going to be fed information that is not 100% true or accurate. And really, it kind of goes back to the semantics and the conversation we had about sarcasm. What is true? If you want to get into the philosophical side of things, what is truth? Saying that that pitch was filthy is true, but it's true in the context that we understand the terminology, rather than a literal interpretation of what is true.

Kevin Slonka:

And that kind of goes back to the philosophical viewpoint that I mentioned about AI in general. You know, in the whole computer science field since its beginning, our sole purpose was to make things better, make the code better, make it more correct, make it always right. Now there's this tool out there that the whole world has access to, and it's always wrong. Like, we've taken the whole field of computer science back 50 years, with everybody using this tool that is nowhere near better than things we've had in the past. So, that just bothers me at a core level.

Michael Zambotti:

And it is the shiny new thing. And we've come across many shiny new things in technology. Years ago, Kevin will remember this, it was cloud, and everything was cloud. We're gonna do cloud-based everything. You buy a product and it says it's cloud-based, and you're like, I don't care, it's breakfast cereal, I don't know how the cloud comes into this. And then we see it now; you'll hear products, I heard a bank advertising saying, we use an AI platform for making decisions on loans.

Drew Thomas:

Really. It's like, that is totally not us.

Kevin Slonka:

Yeah, please don't do that. Whatever bank is doing this, please don't do that.

Michael Zambotti:

That's something where you need human intervention. And also, as a customer, it doesn't really matter; why would you care how the bank was making a decision? You know, hopefully it's a person you can talk to and discuss it with.

Drew Thomas:

But you know, I think that comes down to marketing sometimes, too. I think there are times when a company wants to get on the bandwagon and say they're using AI. And we've had conversations within the bank about what kind of AI we're willing to accept, mostly because in some cases we simply don't have a choice but to adopt certain AI things. If we want to use certain pieces of software, basic stuff like Microsoft Word, you have to accept the fact that there is an AI element, because these companies are building it in, and if you don't want it, then you just can't use that software. So, we've had conversations about how much of it to use, and even just trying to define what AI is. Because, to Kevin's point about a bunch of if statements, is spell check in Microsoft Word AI? We don't think of it that way, but technically speaking, by the letter of the law, you're telling the system, compare this to the correct information you already have stored in memory about how to spell the word strawberry, and tell me if it's wrong. I wouldn't call that AI, but some might.
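
Drew's strawberry example is easy to make concrete: the letter-of-the-law version of spell check is a lookup against stored correct spellings. A minimal sketch:

```python
# Minimal dictionary-lookup spell check, per Drew's description:
# flag any word that doesn't match the stored correct spellings.
KNOWN_WORDS = {"the", "strawberry", "is", "red"}

def misspelled(text: str) -> list[str]:
    return [w for w in text.lower().split() if w not in KNOWN_WORDS]

print(misspelled("the strawbery is red"))  # ['strawbery']
```

Note that a check like this would happily accept "weather" where "whether" was meant, which is exactly the context problem Mike raises next.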

Michael Zambotti:

Well, that's where it comes down to context. Because, yeah, you could write the word whether with an H, or weather with an E-A. Both of them are spelled correctly, but in context, one might not be right. If it's just checking whether a word matches the dictionary, it does; the word matches the dictionary, it's spelled correctly. But in the context, it's the wrong word.

Drew Thomas:

And that was a really big thing when spell check came out, and grammar check, yeah, because I remember English teachers saying, just because it passes spell check doesn't mean it's right; you have to read it yourself as a human being. And I think AI is following that. It's a bigger version of that, but it's the same idea.

Kevin Slonka:

Well, yeah, that's exactly it. I mean, all we're doing is giving a new name to stuff that's been around. It's just that this stuff is more complicated now and being done faster now. You know, spell check, yeah, that's AI. That's literally a computer doing a task that a human would normally have had to do. It's what AI does.

Michael Zambotti:

Well, one area I've seen as a possible positive for AI in banking and financial services is fraud detection. Banks and credit card companies can use it to detect fraud on accounts more quickly.

Drew Thomas:

So, looking at patterns. Well, you guys started out by saying that you can kind of look at a student's writing and learn what their style is, and if their style suddenly deviates, you think, well, maybe they're not writing it. Banks can look at that too. They look at, okay, this person always shops at this particular store, they always buy that particular coffee in the morning, and suddenly they're doing something completely out of character in a different state. We look at that as humans and say, hey, maybe we need to call this into question. So, I could see where a computer algorithm might be able to do that more efficiently. But you still maybe don't want to rely on that entirely, because maybe somebody's on vacation.
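
As a toy version of the pattern check Drew describes (real fraud models are statistical and weigh many more signals; the fields and thresholds here are illustrative assumptions):

```python
# Toy out-of-character check: flag a transaction from an unfamiliar
# state whose amount is far above the cardholder's typical spend.
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float
    state: str

history = [Txn(4.50, "PA"), Txn(5.25, "PA"), Txn(62.00, "PA")]

def looks_suspicious(txn: Txn, past: list[Txn]) -> bool:
    usual_states = {t.state for t in past}
    avg = sum(t.amount for t in past) / len(past)
    return txn.state not in usual_states and txn.amount > 5 * avg

print(looks_suspicious(Txn(400.00, "FL"), history))  # True: worth a check
print(looks_suspicious(Txn(5.00, "PA"), history))    # False
```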

Michael Zambotti:

Exactly, and there's context, and there are also privacy issues in the data it has to learn on, but, yeah, that's a great point. We always think of ourselves as very unique, but if we look at everybody's individual credit card bills or debit card purchases, they'd probably follow a pretty consistent pattern. So, there are some positive uses, but there are also privacy concerns, and there's also the context of, hey, Drew's on vacation, right? He's out there in the Caribbean, and he just used his card to buy a coconut drink. Well, he doesn't usually do that, but we don't want to flag it immediately. We want to maybe run an additional check on that.

Drew Thomas:

Yeah, I don't know. This is a conversation that I think a lot of people are having, and it's a good conversation to have, just to try to educate people on what exactly AI is and whether or not it's going to be the demise of humanity. My guess is more to look toward not. What about WarGames, is my question?

Kevin Slonka:

Yeah, nobody has to worry about Skynet or anything like that. It's just not capable of that, and it never will be. Okay, so WarGames is different, right? That was just a computer where, you know, something flipped a switch, and now you're launching thermonuclear war.

Drew Thomas:

I love the fact that I'm hanging out with you guys, because you got that reference. There are a lot of people that would not, like my daughter.

Kevin Slonka:

But this kind of brings up the CrowdStrike issue that you just talked about. What allowed that issue to happen? You know, I would hope that with thermonuclear war, our nuclear weapons are under more of a lock and key than a single computer that can launch them at will. I would like to think that.

Drew Thomas:

But yeah, then again, people use password as their password, so.

Kevin Slonka:

That's true, yeah.

Michael Zambotti:

Well, there was a story, and I don't know the exact details, because I don't think the Department of Defense has released them, but there were situations where it looked like there was a nuclear launch on one side, and the US and Russia actually talked to each other and said, no, we're not launching. And they got to the trust level where they could verify this; they weren't just saying they didn't launch them, they really didn't. But there have apparently been some close calls before, and I'm sure there are some that we will never find out about, too.

Drew Thomas:

I'm okay with that. You know, in the information age, I think there comes a point where there's just information I don't need to know.

Kevin Slonka:

And doesn't that cause a lot of the problems that society has today? Right. Too much information. If it was 50 years ago, you just wouldn't know that things like this were happening, and you'd live your life. Now everybody knows everything, so now everything is a big deal.

Michael Zambotti:

Everybody's an expert. Yes.

Drew Thomas:

I mean, to me, I look at social media, and I think to myself that it was one of the greatest failed experiments in communication that you could possibly have, in that it has allowed us to become more socially connected, but it has also given everybody a megaphone and told them that their opinion is just as valid as that of someone who might be more learned about a particular topic. And that's not to say that everybody's opinion doesn't have value, but if I am a nuclear physicist, I believe that my opinion about nuclear physics has more value than, say, Jeff's, who is a...

Jeff Matevish:

No, I've watched a movie once. I'm an expert.

Michael Zambotti:

You stayed at a Holiday Inn last night.

Jeff Matevish:

Yeah, yeah, yeah.

Kevin Slonka:

Well, I think Mike said it best on one of our previous shows. You know, the loudest person wins, right? And in places like this, whoever screams the loudest, that's the one everybody hears.

Michael Zambotti:

And it's kind of torn down the walls between, you know, quote, normal people like us and celebrities. Celebrities used to be, wow, this person that we would never hear from, maybe an interview every once in a while, or see them in a movie. Now we see what they just had for breakfast on Twitter or Instagram, and now they're in a Twitter fight with somebody about some other topic, and it's like, whoa, they're just a person like us. The mystique of celebrity has kind of crumbled.

Drew Thomas:

And yet, the very term celebrity has sort of lost meaning for me, because there are people that are called celebrities that I've never heard of. I was watching the Fourth of July special on TV, and you know how they do the rundown of celebrities at the beginning? Oh, you're gonna hear from this person and that person. I knew maybe one out of the 20 that they named. And I'm like, how can they be called a celebrity when I don't know who they are? And my wife didn't know who they were either, for the record.

Kevin Slonka:

They're the Gen Z celebrities. And what's their job? Influencer? I guess influencers, YouTubers.

Michael Zambotti:

Right, their job is influencer. Of what? What do you influence? They have a brand, yeah, yes.

Drew Thomas:

All right, so what else is there, anything else about AI that we didn't touch on, that maybe Kevin would like to rail on just for another minute?

Kevin Slonka:

No, I mean, I think we hit it all, really. The bottom line is just be careful. It's not the be-all, end-all. It's not the greatest thing in the world, despite what you're hearing in commercials and everything, and it can be wrong. So, if you do choose to use it, double- and triple-check the results.

Michael Zambotti:

Yes, be skeptical, my friends. Whenever you get a result from the internet, or from somebody you don't know on the internet or social media, be skeptical of it. Don't take it at face value. Always do that investigation. Is this true? Ask those questions. Ask why. You know, if you get an answer from someone, say, well, why? Why is this the case?

Drew Thomas:

Yeah, it's like that meme that was going around. I may be misquoting it, but it was something to the effect of, everything you read on the internet is not necessarily true, and then it was attributed to Abraham Lincoln, right? You've got to think about those things. But AI is, I think, a buzzword. I think it's something that people are using more cavalierly than they probably should be, to your point about the cloud. We talked about that in one of our previous episodes too; everything was in the cloud, you know, up in the sky. It's not in the cloud. It's on a server somewhere. It's just not on your server in your house.

Michael Zambotti:

Well, it's also a question businesses are asking. Inside every senior management and boardroom meeting, they're asking, should we be using AI? How should we be using AI? What should we be doing? So, it is a big topic in business as well. That's why, as security professionals, we also need to consider this. This is something that businesses are asking for, that they're incorporating, maybe with or without proper...

Kevin Slonka:

Due diligence, exactly. Yeah. But, I mean, that's a good point that you brought up. This is being brought up in every boardroom. But I think it's important, if we have any listeners who are in those boardroom meetings: it is okay to say, we don't need AI. Just because somebody in the boardroom asked, what are we doing with AI, doesn't mean you have to go use AI. You can say, there is no business value in this, we're not going to do it. That is a valid response.

Michael Zambotti:

Absolutely. And really look at, what is the value of this? What value does this bring to the business? What risks does it bring? What's the tradeoff between the value and the risks?

Drew Thomas:

All right, good conversation, guys. Thank you very much once again. You guys keep popping by, and we keep taking advantage of your expertise and your time.

Michael Zambotti:

Well, you keep writing the checks and we'll keep...

Drew Thomas:

I'll have AI write you one and send it right over.

Kevin Slonka:

Must be why my last one bounced.

Drew Thomas:

Thanks, guys.

This podcast focuses on having valuable conversations on various topics related to banking and financial health. The podcast is grounded in having open conversations with professionals and experts, with the goal of helping to take some of the mystery out of financial and related topics, as learning about financial products and services can help you make more informed financial decisions. Please keep in mind that the information contained within this podcast, and any resources available for download from our website or other resources relating to Bank Chats, is not intended, and should not be understood or interpreted to be, financial advice. The host, guests, and production staff of Bank Chats expressly recommend that you seek advice from a trusted financial professional before making financial decisions. The host of Bank Chats is not an attorney, accountant, or financial advisor, and the program is simply intended as one source of information. The podcast is not a substitute for a financial professional who is aware of the facts and circumstances of your individual situation.

Artificial Intelligence is still such an abstract concept for most people that it feels quite scary. Some think that by adopting this technology, we are heading towards a Terminator-like future, where humans are at war with machines. Others point out that generative AI still can't always correctly spell words or create an image of a human hand with a typical number of fingers. Objectively, AI is a new tool that we must learn how to use properly, and much like a hammer, this tool could be used to either create something amazing or to destroy what has been built. Our eternal thanks to Kevin and Mike from Saint Francis University for talking with us today. AmeriServ Presents: Bank Chats is produced and distributed by AmeriServ Financial, Incorporated. Music by Rattlesnake, Millo, and Andrey Kalitkin. Production and editing by Jeff Matevish. You can find all of our episodes at ameriserv.com/bankchats, or by searching for us on your favorite podcast app. We've also started to include video for the show, so check us out on YouTube. For now, I'm Drew Thomas, so long.
