AmeriServ Presents: Bank Chats
Financial education shouldn't be boring! Bank Chats combines a relaxed conversational style with experts from various fields to talk about banking and finance using terms that everyone can understand.
DISCLAIMER
This podcast focuses on having valuable conversations on various topics related to banking and financial health. The podcast is grounded in having open conversations with professionals and experts, with the goal of helping to take some of the mystery out of financial and related topics; as learning about financial products and services can help you make more informed financial decisions. Please keep in mind that the information contained within this podcast, and any resources available for download from our website or other resources relating to Bank Chats is not intended, and should not be understood or interpreted to be, financial advice. The hosts, guests, and production staff of Bank Chats expressly recommend that you seek advice from a trusted financial professional before making financial decisions. The hosts of Bank Chats are not attorneys, accountants, or financial advisors, and the program is simply intended as one source of information. The podcast is not a substitute for a financial professional who is aware of the facts and circumstances of your individual situation. AmeriServ Presents: Bank Chats is produced and distributed by AmeriServ Financial, Incorporated.
Why Treating AI As a Tool, Not A Threat, Can Make Banking Safer and Smarter
Fear sells, but it rarely informs. We wanted a grounded take on AI: what it actually does today, where it goes wrong, and how banks and consumers can use it without getting burned. With guest John Valkovci of Saint Francis, we unpack generative AI’s strengths, the reality of hallucinations, and why data quality and model choice matter more than hype. No sci-fi doomsday, just candid mechanics and practical guardrails.
Credits:
An AmeriServ Financial, Inc. Production
Music by SchneckMind
Hosted by Drew Thomas and Jeffrey Matevish
Thanks for listening! You can find out more about AmeriServ by visiting ameriserv.com. You can also find us on Facebook, Instagram, and Twitter.
Fast fact, the term artificial intelligence was coined at the Dartmouth Summer Research Project, considered the birth of the field, in 1956. I'm Drew Thomas, and you're listening to Bank Chats. Welcome back to yet another wonderful episode of Bank Chats. I never get tired of doing these.
Jeff Matevish:Me either.
Drew Thomas:And today we are going to discuss artificial intelligence, AI, and I know that we've talked about this before, and we will probably talk about it again, because it is just...
Jeff Matevish:Ever evolving.
Drew Thomas:It is ever evolving. It really is. It's one of those things that a lot of people are still trying to get their head around, that the people creating it are still trying to get their head around, in some cases. And with us today to talk about it is John Valkovci from Saint Francis and other places. John has been here to talk to us about crypto before, and we've, we've talked about some really cool, cool things with him, and it's really great to have you back. Thank you.
John Valkovci:Thanks for inviting me. I appreciate it.
Drew Thomas:Absolutely. Yeah, thanks for making the trip down again. So, let's, let's talk a little bit about AI. So, AI, artificial intelligence, right? Arguably a misnomer, because is it actually intelligent? Let's start there. A lot of people think that this is going to turn into Terminator, that artificial intelligence is going to take over our lives and ruin humanity. Is that a legitimate concern, or are we kind of overblowing that at this point?
John Valkovci:It's the thing that science fiction is made of, and there's always the possibility of any technology that man develops getting away from us. Sure, let's look at nuclear power. Okay, that was a technology that we developed. Does it have the potential of getting away from us and destroying us? Of course it does, but we've also managed to harness it. And AI is a similar technology. Does it have the potential of getting away from us? I mean, once we have machines that are able to get to the point of creating other machines and working with other machines, there's always that potential that they could do some harm to humanity. So, we have to keep the guardrails pretty tight. But right now, it's more a function of science fiction than it is science fact.
Drew Thomas:Okay, well, that's comforting to know, to a certain degree. So, you were starting to tell us a little bit before we turned the microphones on that there are actually different types of artificial intelligence. You mentioned generative AI, so let's talk about some of those differences, and what they actually are.
John Valkovci:Generative AI is pretty much what we're all used to. If you've used ChatGPT, you're using generative AI. And they call it generative AI because it basically takes data and uses that data to generate a response. It's not traditional programming, it's just data; these large language models use components of machine learning. If you provide it with a prompt, it looks at the data from the prompt, it looks at the data it has in its memory, for want of a better term, and it basically predicts which words should come next. That's how it does it. It's basically a predictive model based on data, okay. And that's why, when ChatGPT first came out, and they made it free for everybody, and everybody started using it, they started taking all the data that people were typing in to feed the model. So, the responses you're getting from ChatGPT are based pretty much on that data. But that's where we have to be careful when it comes to any type of AI, because it's based on data. Again, this is the generative AI I'm talking about, not the neural networks and other types of AI that are being developed for different purposes. Generative AI is what's going to affect us most on a day-to-day basis.
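As an editorial aside, the next-word prediction John describes can be illustrated with a toy bigram model in a few lines of Python. This is a drastic simplification, nothing like a real large language model, and the corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny made-up corpus: the "data" this toy model learns from.
corpus = "the bank opened the account and the bank closed the branch"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "bank" follows "the" most often here
```

Real models predict over tens of thousands of tokens using learned weights rather than raw counts, but the underlying principle, predicting what usually comes next in the training data, is the same.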
Drew Thomas:But is it like any other computer? It's bad data in, bad data out, though, right?
John Valkovci:It is, garbage in, garbage out.
Drew Thomas:So, when they opened ChatGPT up and let anybody put whatever they wanted in there, people can be kind of awful at times, if they really want to be, and say some terrible things and so forth. So, did ChatGPT then start reflecting that?
John Valkovci:There were instances where ChatGPT was reflecting that, let's call it bias, yeah, that's a good word. It reflected a bias against something or for something, because that's where most of the data was coming from at that time, and that's how it predicted and formulated its responses.
Jeff Matevish:So, how do you fact check something that seems all knowing? How do you know that what you're getting as an output is a good answer, a correct one? Or can you?
Drew Thomas:I mean, do you teach it math problems and then verify that it knows two plus two is four? Or, how do you?
John Valkovci:You can, but math is maybe not the best example, because it is finite, and we know that two plus two does equal four. But consider something more esoteric, like sociology or psychology. The short answer to your question, Jeff, is that you have to fact check it. It's your responsibility, because AI is known for something we refer to as hallucinating, or hallucinations, and when AI hallucinates, it's giving you an incorrect answer. You and I had a conversation earlier today; one of the projects I do with my students at Saint Francis is I have them open up ChatGPT, and I ask them to type in a simple question: how many R's are there in the word strawberry?
Jeff Matevish:Okay.
John Valkovci:ChatGPT typically comes back with the answer two. We all know there are three R's in the word strawberry, but ChatGPT says there are two. So, you challenge ChatGPT, and you say, are you sure, ChatGPT? I think there are three. And ChatGPT will not so much argue with you, but almost mock you, as if it's a superior intelligence; it insists there are two R's, even when you tell it there are three. So, that's called a hallucination.
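Editorial note: the correct count is trivial to verify programmatically, which is part of what makes the example striking; the model fails not because counting is hard, but because it processes text as tokens rather than individual letters. A one-liner:

```python
# Count the letter "r" in "strawberry" directly, character by character.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3 (st-r-awbe-rr-y)
```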
Drew Thomas:That's reassuring.
John Valkovci:Yeah, that's called a hallucination. And there are other types of nonsensical questions you can ask ChatGPT where you may get strange answers. It depends whether you're using GPT-3.5 or GPT-4, because these are different models.
Jeff Matevish:So, what's the point of using generative AI like ChatGPT if you have to verify everything it gives you? Why can't I just Google that and get the same answer? What's a use case where you wouldn't have to verify, where it would be worth using generative AI like that?
John Valkovci:Okay, one example is, if you receive an email and you want to draft a response, there are applications now that can do that automatically. Or you can copy somebody's email that you received, open up ChatGPT, paste it in, and say, ChatGPT, please formulate a response with an apologetic tone, or with an aggressive tone, and tell it what you want to say with the prompt, and it will generate one.
Jeff Matevish:Okay, so not, not fact based.
John Valkovci:Right. It's not fact based. So, it depends on what you're using it for. If you're using it to write a paper, a term paper, then, yes, you do need to go back and fact check it. We've all heard the stories, or perhaps some of our listeners haven't, of the attorneys who drafted legal briefs using AI and ChatGPT. ChatGPT actually cited cases in a legal brief that didn't exist, and the cases it cited that did exist were cited for an incorrect premise. Several lawyers have been sanctioned and disciplined by the courts; I think one lawyer was fined $5,000 by the court for doing this, because he never went back to check the cases. So, it's incumbent upon us: if you're going to use generative AI like that for a fact-based inquiry, you do need to fact check it.
Drew Thomas:I mean, to a certain extent, you could argue that it's little different than whenever, say, an attorney would ask one of their interns, you know, to go and research cases and bring them back to them. If the intern just made them up to get out of doing the work, it's still incumbent upon the attorney going to court to make sure that that work was done correctly.
John Valkovci:Exactly, exactly. And that's why AI particularly, what we're talking about here does have the potential to really revolutionize a lot of different industries, including the financial industry or the fiduciary industry when it comes to money. But we have to be careful.
Drew Thomas:Yeah, there's, there has definitely been a lot of conversation I think, in the financial industry. And I can even say even within community banks, because I can, I can say we've had conversations too, a little bit about how much AI to incorporate into our daily work lives. And right now, we don't use any, really, because I think there's a concern about the hallucination. There's a concern about the incorrect answer being provided or used.
Jeff Matevish:Or information that leaks out, that should not be leaked out.
Drew Thomas:I was just saying, there's even a privacy concern, right? Exactly. You know, worries that if Jeff's personal information is included in some AI model, and then someone else gets a hold of that model, is Jeff's information somehow accessible? Is that a legitimate concern?
John Valkovci:It is. And when you're using AI, whether it's for the email example we talked about before, or some other fact-based thing, you should not be putting any personal information into any AI model whatsoever. You should not be putting in any type of PII. You shouldn't be putting in your birthdate; I would say don't even use your name. You shouldn't be putting in anything like that. Many companies and businesses are also creating AI policies on what you're allowed to use it for and what you're not permitted to use it for, because there are just a number of issues. Think of AI as this new technology with the potential to revolutionize so many areas of society, but we're still feeling our way around it right now. When AI generates a picture, I can tell ChatGPT, I want to start a new D and D, a Dungeons and Dragons, campaign, and I want to create an emblem for it. So, give me the god Thor with this and this, and it will create a beautiful graphic for you. Okay, is that your intellectual property? Did you create it?
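As a sketch of the kind of guardrail John recommends, prompts can be scrubbed for obvious PII before they ever reach a model. The two patterns below are illustrative assumptions only (a real redaction pipeline would need to cover many more formats), and the email address and SSN in the example are made up:

```python
import re

# Hypothetical patterns for two common kinds of PII (US-centric, illustrative only).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace recognizable PII with placeholder tags before sending a prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Reply to jeff@example.com, SSN 123-45-6789, with an apologetic tone."
print(redact(prompt))
# Reply to [EMAIL], SSN [SSN], with an apologetic tone.
```

Company AI policies of the kind John mentions often pair rules like "no customer data in prompts" with automated scrubbing of this sort as a backstop.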
Jeff Matevish:I gave input, but no.
Drew Thomas:Yeah, but if I'm an artist, and you tell me what to, what to make, and I draw it for you, is it your intellectual property, or is it mine?
John Valkovci:That'll be subject.
Jeff Matevish:Did I pay you?
John Valkovci:Yeah, a contract, okay. If you're going to pay him to paint you a portrait, usually, if he's the creator of it, Drew, you're the creator, the intellectual property rights belong to you. Now you can, as part of the contract, transfer those to Jeff, right, but you created it. So, there are just other legal issues that we really haven't gotten to the bottom of yet. And most AI models, like ChatGPT, have guardrails on them, so you can't ask them just anything; it will give you a standard response that I can't answer that question, or I won't answer that question, or it goes beyond my safety rails. But there are AI models out there that are designed for criminals. And if you want to ask an AI model, can you please draft code on how to hack into this computer or that computer? Or could you analyze this application, find five vulnerabilities, and then write code that will exploit those vulnerabilities? They're out there.
Jeff Matevish:Wow.
Drew Thomas:Yeah, that's really scary. I mean, I think even if you go back 200 years, criminals are always trying to find ways around things, and people are always putting guardrails up after the fact. And you used to have time to do that, right? If people started robbing banks, they put more security around banks. So, then they started robbing the trains taking the money to the bank, so they made the trains more protected. And I mean, people still ride shotgun in cars, because the person sitting next to the stagecoach driver was the guy with the shotgun, you know. But the faster AI moves, the harder it is to put those guardrails up before they're exploited.
John Valkovci:I completely agree with you, yeah. And I can say, based on my experience as a federal prosecutor for 28 years, if we have new technology, the criminal element is usually one of the first segments of society to leverage and exploit that new technology, to some sort of evil benefit. And you're right, you're always in a reactive mode. You're always trying to play catch up. They always seem to be one step ahead, because they embrace this new technology. They're not afraid of it. They immediately see how it can be exploited and leveraged for crimes. Look at AI right now and deep fake photographs or deep fake videos, or AI that actually drafts phishing email campaigns that are extremely realistic; it's becoming more and more difficult. I'm sure you here at the bank have training on what to recognize in a phishing email and what to do, but they're getting more and more complex. They're getting so much better that they're actually avoiding some of the spam filters that most banks and businesses have in place. Yeah, they can duplicate a voice so that you think you're listening to a relative online, but you're really listening to an AI voice. And then the whole deep fake thing with photographs, putting your photograph on another image, or creating an image around your photograph, is also disturbing. Yeah.
Jeff Matevish:The last time we talked to Kevin and Mike, they had mentioned that you were working on possibly something that would be able to look at, kind of like breadcrumbs of, of an image, or to try to see if there was anything that you could use to tell if something was, was faked or not. Is that, I mean, that technology there, or is that still something we can't do?
John Valkovci:It's almost there, let me put it that way, Jeff. I am working on research at Saint Francis where, when I teach digital forensics, we look at the binary level and the hexadecimal level. We look at the very basis of applications, and we rip data out of computers to prove what people did, to recreate what they've done on their computers. We hear about it all the time. So, the research project is focused on, when you use a deep fake to create a photograph, are there certain markers or metadata, certain signals within that file, that would indicate it's a deep fake photograph, or that it was generated by AI? And it's interesting to do that. They have these AI detectors online, and people feel very good about them. You can take a photograph, upload it into the online platform, and it will tell you the probability that the photograph was generated by AI. I conducted a little test with my students. I asked them to go online, find an AI generated photograph, and drag and drop it into one of those detectors, and most came back with a 95% probability that it was generated by AI. I then asked them to take a photograph from their own phone, their own device, one that they knew was legitimate, was not AI, and put that into the same detector, and many of them got very, very similar answers. Really? 90% probability generated by AI, 75%, 93%. Yeah. And so, these AI detectors they're touting online, the results are somewhat dubious, so we can't rely on them. We need to really get down to the basics of what that file is. Now, there are other ways to spot deep fake photographs, because what AI has a very difficult time doing, if it's creating a graphic, is also creating text within the graphic.
Drew and Jeff:Yeah, we know that. Yeah.
John Valkovci:Wait, that's, that's not how that's spelled. And so, so that's one clue if you're looking at a graphic file, but if you're looking at a photograph of an individual, there are certain things you can look at, just by skin tone, skin texture. You can look at shadows, because sometimes the shadows are inconsistent. There's, there's markers you can look at.
Drew Thomas:There's also, as a human being, a very uncanny valley sort of thing. When you look at an AI generated human, a lot of times there's nothing specific you can point to, like...
Jeff Matevish:That's too symmetrical.
Drew Thomas:It's too symmetrical, it's too good. There's something about it that you can kind of see in most cases, I'm not going to say all cases. And it's not like a year or so ago, when people were generating pictures of humans and they were coming out with six arms, where you could say, well, I can tell that's not real. There's still something about that person where you think, I don't think that's a real person. It's weird.
John Valkovci:Maybe it's the way we've developed over millennia, the human being recognizing faces and facial features, I don't know, but we're trying at Saint Francis to find that digital marker. There's been some research around the world on this already. We're just trying to continue it and add to that body of research. What can I find through digital forensics that will establish this is an AI generated photograph?
Drew Thomas:You know, the other thing that concerns me is this idea that we live in a society where people love to question everything. They will question established science simply to be contrarian. So, say you're investigating a crime, and you can put something in front of that person and say, well, this photo shows person A was in that car at this time of day. Then you come along and say, well, we've done all of these algorithmic things to verify that this is not real, and it almost feels like you're the one that's not being upfront and truthful. Which is crazy, because a lot of people, I think, would look and say, but that's a photo, I can see the photo; I can't understand your abstract notion of bits and what's going on behind it. Is there a concern that even if you can find a way to prove that certain images are AI generated, people still won't believe you?
John Valkovci:Maybe on the individual level, yeah; the science will be the science. But it's so easy to manipulate a file. I think we all know that when you take a picture with your cell phone, something we've all done, it's kind of routine, embedded within that photograph is something we call metadata, which is data about data. And it gives me a lot of information about that photograph. If you have your location services turned on, I can actually get the latitude and longitude, the time of day it was taken, and whatnot, and it's on your phone. If you send me that same photograph, I can manipulate that metadata so it looks like Drew took that photograph at a completely different time and date. And think of the ramifications of that, sure. If it's not a very flattering photograph, and it's on your phone, and the metadata shows that you took it, now people are going to say, well, there's your proof, right? But what if we can come back scientifically and say, no, it's not, because the science says you need to have A, B, C, and D, and this photograph has A and B but is missing C and D, or it has other components to it. So, I think once we get to that science, and the science becomes accepted, I don't think you're going to have too much reluctance to accept it. Because what you're saying is, people believe what they see. Yeah. And I think we're going to be moving beyond that as a society, yeah.
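Editorial aside: how trivially metadata can be rewritten is easy to demonstrate even without touching EXIF data. This sketch uses only the Python standard library to backdate a file's "last modified" timestamp, one of the simplest pieces of metadata people treat as evidence; the filename and date here are invented:

```python
import os
import tempfile

# Create a throwaway file standing in for a photo.
path = os.path.join(tempfile.mkdtemp(), "photo.jpg")
with open(path, "wb") as f:
    f.write(b"not really a photo")

# Rewrite the file's access and modification times to an arbitrary past date.
fake_epoch = 946684800  # 2000-01-01 00:00:00 UTC, chosen arbitrarily
os.utime(path, (fake_epoch, fake_epoch))

print(int(os.path.getmtime(path)))  # the filesystem now reports the fabricated time
```

EXIF fields inside an image file (capture time, GPS coordinates) are just as mutable with freely available tools, which is exactly why John argues provenance needs to rest on deeper forensic markers rather than on metadata alone.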
Drew Thomas:So, let's talk about some of the benefits. I mean, I think we tend to focus a lot on the negative sides of AI, but what are some of the benefits? They're building it for a reason, right? What are some of the benefits of AI that you've seen so far, especially for individuals, but even for businesses?
John Valkovci:Businesses, since we're here at AmeriServ, why don't we talk about financial? It makes the bank more efficient. Think of a chatbot for customer service. Somebody calls up wanting to open an account, or do something like that. You can have a chatbot that would basically take the place of a human answering their questions, because it has that database; it knows everything about the bank, the banking hours, the banking locations. How do you open an account? How do you transfer within an account? You can have that chatbot take care of that, and that chatbot would be available 24/7, 365. So, if anybody has a question about banking here, it can be answered. And that helps you as a bank, because it really enhances your ability to provide robust customer service. Sure. Another way, and we spoke about this in our last conversation about crypto, are all the regulations that banks are subject to, and one of the strongest is anti-money laundering. You have an obligation as a bank to look at different transactions going through your bank, and there should be markers; if you come across suspicious transactions, you have a regulatory obligation to report that to FinCEN, right, the Financial Crimes Enforcement Network, which is run by the Department of the Treasury. AI could significantly enhance a bank's ability in the regulatory area by scanning transactions, looking for things that might go undetected by a human, because, again, it's designed to recognize patterns. It can see patterns where we can't as humans. So, AI could be used by the bank to assist in its regulatory function, by scanning all the transactions, trying to recognize patterns, looking for suspicious things. Because right now, most banks use some sort of electronic anti-money laundering software.
And I don't know if you would agree with me, but as a federal prosecutor, I'm pretty familiar with these SARs, these suspicious activity reports. There are a lot of false positives that these programs currently kick out. We could greatly reduce false positives by using AI. We could really enhance the bank's ability to detect suspicious transactions, and the more suspicious transactions we detect, the more we can limit criminal activity and money laundering activity before it actually takes place. So, that's another great benefit for financial institutions, in the regulatory field.
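As a toy illustration of the pattern-scanning idea (real AML systems use far more sophisticated models, and every number and threshold here is invented), even a simple statistical rule can flag a transaction that departs from an account's usual behavior:

```python
import statistics

def flag_suspicious(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean (a z-score test)."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical, nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Mostly routine transaction amounts, with one outlier.
history = [120, 95, 110, 130, 105, 98, 9500]
print(flag_suspicious(history))  # [9500]
```

A z-score over raw amounts is deliberately crude; the point John makes is that machine-learned models can pick up multi-dimensional patterns (timing, counterparties, structuring) that neither a human reviewer nor a rule this simple would catch.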
Drew Thomas:I think it's interesting what you said about patterns, because if you try to create a random pattern, you will inherently create a pattern of what you think is random, but it isn't. You see this with people who like to go to casinos or play the lottery; they think, well, I can look at the past 500 lottery drawings and discern a pattern. In reality, and we kind of talked about this in the lottery episode about statistics, you have just as much of a chance of the number 33 coming up today as you do tomorrow, as you do the next day. We think we're being random by saying, I'm only going to use 33 every third time or every seventh time, but that's creating a pattern; 33 could appear three times in a row and then not again for a month, if it's truly random. It's hard to intentionally create a random pattern, if that makes any sense. So, I think this software could pick up on the things humans do when they think they're being random, but they're not, you know.
John Valkovci:I see AI as really being helpful to a lot of businesses, from a customer service standpoint and from a regulatory standpoint. Those are a couple of the primary ways, and it helps you become more efficient, too.
Drew Thomas:Yeah, going back to what you said there, too, I think a lot of people look at AI and think that AI is taking people's jobs, removing people from the equation, and saying, well, if AI can answer questions about my hours and locations, why do I need a human being? And while I do understand and agree that some of those could be legitimate concerns, I'm also looking at our society, especially in the United States, with the baby boomers and generational changes, where we're already seeing that there just aren't enough people to do the job, whether it's intentional, they just don't want to do that particular job, or because they're doing something else. So, this may be an opportunity for us to supplant some of those things, not so much to take a job away from somebody, but to...
Jeff Matevish:Fill the gap.
Drew Thomas:To fill a gap where, where nobody is there, to fill.
John Valkovci:Absolutely, yeah. It also frees that employee up to do other tasks, so we can actually start doing more than we were doing before with the same employee base, because now we have the employees free to do other tasks. They're not tied down to the mundane task of, say, customer support. I don't mean that's mundane. That's a very important aspect of any business is to have strong customer support. But if you can relegate at least a portion of that to AI, that would help quite a bit.
Drew Thomas:I think that, you know, people thought similar things about ATMs I think back in the 70s and 80s. They said the ATMs were going to, you know, make the bank branch go out of, out of business and stuff. And it didn't. It changed the function of a bank branch, for sure, but they didn't go away. The people are still here. They're just, they're performing different functions. They're doing things that require more of a human thought process than, say, an ATM can do.
John Valkovci:And that's another important thing, since you just mentioned human thought. AI models don't think right now, at least generative AI. Generative AI is based on pattern recognition, giving you an answer based on patterns and based on data. They're building neural networks right now that really are, again, using machine learning, where AI models can actually learn from their environment, learn by doing things, learn by making mistakes, and they learn like humans, but we're still a ways away. I mean, the Chinese have been at the forefront of robot development and incorporating AI with robots, and they recently trained a robot to do a front flip. Most robots that are designed can do a backflip, because, I guess, physically it is easier to do a backflip than a front flip. So, this robot actually does a front flip, lands, and then takes a step forward to balance itself, if you watch the video of it. They have another robot that's been trained to do Jiu Jitsu and Kung Fu moves, with the balance that requires. So, if you ever saw the movie with Will Smith, I, Robot, yeah, those types of everyday robots, I don't think they're that far into the future. I really don't. I mean, when you think about where we are with AI now, a few years ago people were saying, well, this will come about in 10 years. Well, here it is, 3, 4, 5 years later, and we're already there, yeah. So, I think the timeline for the development and use of AI is going to be compressed. I think within five to eight years, you're going to see it having a major influence on society.
Drew Thomas:Hopefully the Isaac Asimov rules apply, the laws of robotics. We hope, yeah.
John Valkovci:I love it, the laws of robotics, which is really interesting, because you mentioned I, Robot. That was originally written by Isaac Asimov back in, what, 1950, I want to say. So, yeah, that's pretty crazy. It's interesting you brought that up, because I use ChatGPT for more than just, say, write this or draft me an email. I actually asked, are you familiar with Asimov's three laws of robotics? And it said, yes, I am, and actually listed them for me. I said, do those apply to you? And it said, well, I am not a robot, you know. So, it gave me kind of a vague answer, which left me a little bit concerned. Wait a minute, you kind of are. So, I asked, can you put AI technology into a robot so the robot functions and thinks? And it said, yes, you can. And I said, would that robot then be bound by the three laws of robotics? And it said, well, it depends how it's programmed. It's like, no, you don't understand. So, it's interesting to have some good conversations with ChatGPT. It's very eye opening, and you can start to see its potential. And you can start to see it when it starts referring to us as a society. I mean, that was the term it used, us as a society. I said, wait a minute, us? So, you're part of a society? It's interesting.
Jeff Matevish:It's like, back in the day, with the Amazon Alexas, you know, you used to ask a specific question, and they would shut down, yeah.
Drew Thomas:I don't know. Yeah, that's going to be interesting, because, as of the date of this recording, Amazon has either just recently released, or is about to release, an AI version of Alexa for people's homes, and they're touting a lot of really appealing things to make it more conversational and easier to use. I think I read in one article that Bezos's original intent behind Alexa was to have the Enterprise computer from Star Trek, to be able to interact that way, that casually, to be able to say, I want to play music, or, lights. Are we potentially opening ourselves up, by allowing something like that into our home lives, to letting it learn too much about us?
John Valkovci:My personal opinion is, yes. I think technology is amazing, I really do, and technology to solve world problems is necessary. That's what we do as humans; we develop the technology, and it should be used to solve problems. Personally, I don't believe it's a problem that somebody's too lazy to get up and change the temperature on their thermostat, okay, or to turn on a light switch. That, to me, is not a world problem that needs to be solved by technology. Other people really embrace the fact that they can sit there in their chair and sip their coffee and shop online, and the next day these things miraculously appear on their doorstep, or they can change the thermostat, or they can turn lights off and on while sitting there doing nothing. So, I think it's personal to every individual, how much technology you want in your life. And if we're talking about Alexa, Alexa listens. It has to, because it has to know when you say the word Alexa so that it turns on, so it's always in that listening mode. And there have been criminal cases where evidence from Alexa devices has been used to convict individuals. When I teach digital forensics at Saint Francis, I teach the students how to take a data scrape from an Alexa device and see what information is on there, what people talk about. It is listening. I'm not saying it's actually recording everything, but it's listening to hear that prompt. So, to answer your question, and I hope I'm answering it correctly, I think technology is great, and if you want to embrace it and have your entire house controlled by a computer, like the one from the Enterprise, and just ask it for things, that's fine. There's nothing wrong with that. Some people will embrace that.
Some people won't, but it does have the potential, and the word I'm going to use here might offend some of your listeners, it has the potential of making us somewhat lazy. Okay, if you're going to use ChatGPT and AI models to draft everything you write, then you're really never going to develop the skill set to write.
Jeff Matevish:That's true.
Drew Thomas:I agree. Yeah. You know, when I was in school, the teacher said, well, you're not going to have a calculator everywhere you go. But actually, I do; it's in my pocket right now. At the time, I did not; you had to have a graphing calculator to do certain things, and you weren't going to carry one everywhere you went. But it was less about not having a calculator everywhere you go. It was more about understanding what you're doing. I was at a conference many years ago, and there was a guy who talks about generational differences, and he said, finish this sentence: young people are technologically what? And everybody in the room said savvy, and he said, wrong. They are no longer technologically savvy; they are technologically dependent, because they don't learn. We were talking off mic about computers, about building our first computers and using computers. You had to know how they worked. Most young people today do not understand how their phone works; they just know that it works. And that makes it a little bit scary, because if the people like me who learned how it worked are no longer around, then does the technology become dominant, you know? And when you have AI telling you what to do, and you don't know the difference, I mean, it's crazy.
John Valkovci:Drew, using your calculator as an example. If you're in grade school and you're issued a calculator, or you have a phone, and you learn how to use that calculator to solve math problems, and that's all you're ever taught, and you're never taught to do math in your head or on paper, then when somebody takes your phone away from you, you're lost. You can't function. You can't do math, other than perhaps basic math, and writing is the same. There are a lot of different areas where, yeah, the potential is there. Dependent is a great word; I used the term lazy. I think they're similar in a way. You become so dependent on it that if it's removed from you, you almost start to panic, because you don't know how to do things on your own.
Drew Thomas:Yeah. I mean, it's like a construction worker who grew up learning how to use a claw hammer, and then they're handed a pneumatic hammer. But if you went to school and were taught to use a pneumatic hammer, to you, that's just how you do it, right? You don't have any reference for what came before that, and that's not necessarily your fault, because you weren't taught otherwise. I can't do math with an abacus, right? But there were generations of people who could; that's how they did math. I don't do math that way. So, it's interesting, yeah.
John Valkovci:Yeah, I can't use a slide rule, okay, but people just a few years older than me grew up with slide rules. We didn't have slide rules when I went to school, because we had gone beyond that. So, I can't use a slide rule right now, and I can't use an abacus, like you. But, you know, it's one of the things I tell my students. I say, you know technology. If I have an issue with my phone, they can solve the issue. Okay, here's what you need to do, just do this and this and this. I said, but I can do things that you can't do. I can read an analog clock.
Drew Thomas:Or cursive.
John Valkovci:I can read and write in cursive. I can do math in my head, yeah, and things like that, and they seem lost. I mean, when we work with percentages sometimes, I ask some of my students, okay, what's 10% of this, or 0.1% of this? And they all reach for their phone. I say, no, you don't need to use your phone for something like 10% or 1%. It should just be there. I said, how do you figure a tip when you go to a restaurant? If you want to leave a 10% tip or a 20% tip, don't you just do the math in your head?
Jeff Matevish:I hope it's written down on the bottom of the receipt, right?
Drew Thomas:Or now the computer will tell you what to leave.
John Valkovci:That's true. It is. You're right, absolutely right.
Drew Thomas:That's a whole other conversation about tipping. Oh yeah, the whole thing. Like, now everything asks you to tip. Even if you're doing all the work, it asks, do you want to leave a tip?
John Valkovci:It's interesting, you travel through Europe, and they don't do it at all. If you do, you can insult them. So, it's kind of refreshing to go to Europe and say, I don't have to tip.
Drew Thomas:Yeah, well, because people get paid a living wage, to a certain extent, yeah. That's a whole other conversation beyond AI. But you brought something up, and I guess maybe we can circle back to this as a final point. Using AI, you were saying, who owns AI? If I generate something, who really owns it? Is there an ethical aspect to using AI at this point? From an education standpoint, how do you tell if somebody is using AI to complete their classwork, for example? And is it ethical to use it if it's a legitimate tool?
John Valkovci:I ask my students how many of them have used AI in their academic careers to write papers, and a fair number raise their hands. And I say, do you think it's fair if I use AI to grade your papers? And almost universally, they say, no, you shouldn't do that; you have to grade it. Well, if you can write it with AI, why can't I grade it with AI? So, there seems to be a disconnect when it affects them personally. There is an ethical component to AI. If you talk about education, you shouldn't be drafting a paper with AI, submitting it for a class, and representing it as your work when it wasn't. That's cheating, plain and simple. It's no different than in the days before AI if, Jeff, you had an assignment but you hired Drew to write the paper, and you turned it in. You didn't write it, right? Drew wrote it, and you're turning it in, representing it to me as your work, getting a grade that you didn't earn, because you didn't do the work. AI is very much in the same vein. You're representing something as your work when it's not. It's akin to plagiarism: you're representing that these are your words when they're not; they belong to someone else. All of those fall within the code of conduct and the code of ethics at most universities, so you can't use it. How do I stop that? What I typically do at the beginning of a semester is give in-class writing assignments, prompts, just a page or two. I start to develop a very good sense of how well my students can write. Then when papers come in that sound like they were written by a PhD student, I know there's something amiss. I also use AI detectors. I don't rely on one; I'll put a paper through three or four different AI detectors, and if I come back with a consensus, then I have a conversation with the student, and it's happened in the past. So, I think there is that ethical component.
Are you going to represent that this is your work when it wasn't? But that's just in education, because that's what you asked me about, Drew. In the fiduciary world, there's also an ethical component to using AI, yeah.
Jeff Matevish:Going back to what you said about plagiarism. Back when I was in high school, they were just starting to bring out these turnitin.com type services, where you had to put in your assignment to ensure there was no plagiarism. How does that work now with AI? When the text is generated, it may not be exactly from a website. Do these services pick up plagiarism if you're taking an answer from a generative AI like ChatGPT?
John Valkovci:It depends on the detector you're using. When you think of what generative AI like ChatGPT or Gemini is doing, it drafts the paper you're turning in based on its data; it predicts and organizes the text in a particular way. And the detectors look for certain patterns, certain cues. If you look at things written by ChatGPT, it loves to use dashes instead of commas. Right in the middle of a sentence, it will put a dash, then some sort of independent clause, and then another dash instead of a comma; that's how it separates things. So, there are clues in how things are written, and these detectors use various algorithms to look for those patterns. It's still an AI model that you're using to look at AI; you have an AI bot looking at AI, trying to detect patterns. It's very meta, and they're certainly not infallible. You could write something right now, type it out just as it comes out of your head, put it in there, and it might say there's about a 90% probability it was generated by AI when you know it's not. So, I have to be very careful when I'm doing that. I certainly don't want to accuse somebody of cheating when they didn't cheat. That's why I like to have a conversation with the students, to find out more about them.
Drew Thomas:And ultimately, while you want to maintain the integrity of universities and colleges, and the integrity of the degrees people are earning, I think it's important for people to think about the fact that if you're using it for those kinds of purposes, you really are circumventing all of the things you're supposed to be learning by taking this shortcut. Eventually, you're going to be asked to do these things in a setting that is not academic; that's the whole point of learning it, right? So, if you're learning to be a writer, and then you get out into the real world and you can't write, even though you graduated with honors from your college, you're only cutting off your own legs, really, because you're not learning anything. You're just getting the grade.
John Valkovci:Ultimately, you're cheating yourself. You really are, and that's true of any cheating; you're cheating yourself for that one grade. Just put the time in, study, learn it, and do the work. But again, I still want my students in cyber security, at least, to embrace AI. I want them to see its limitations and how it can be used, because when they leave Saint Francis and go into the workforce, they will be tasked, particularly this generation that I'm teaching right now. They are at the vanguard of AI and a lot of new technologies as they're coming out. They are going to be called upon by employers: how can we incorporate AI? How can AI make us better? How can it make us more efficient? How can it make us more profitable? So, they have to understand the capabilities and the limitations of AI to be able to say, here's how we can use it in this situation. I want them to understand that.
Drew Thomas:Yeah, and going back to the benefits, there are situations where, if you're analyzing a ton of numbers and a ton of data, while it might take two humans two weeks to do that, an AI algorithm might be able to do it in 10 minutes. I don't necessarily think that's a bad thing, because you can sort of jump ahead and ask, why are you analyzing the data? You're not analyzing the data just to analyze it. You're analyzing it to determine a result, or to identify an action you want to take. If you can get to the end of the data analysis and then, as a human, make that determination, and do it in 10 minutes instead of two weeks, that's probably a good thing.
John Valkovci:I couldn't agree with you more, and that just solidifies the point that we need to see AI as a tool, a tool that can be used by us, just like any other tool we use to make ourselves more efficient. The calculator on your phone is a tool to help you in certain instances. Could you do it yourself? Probably. But the calculator allows you to do it more quickly and more efficiently, and gives you time to do other things. So, if we look at AI as a tool, and we see it in that vein and use it in that vein, then it's fine. There's nothing wrong with it at all. I agree with you. I mean, yes, if we can shorten the time it takes to analyze data from 10 days to two hours, why wouldn't we?
Drew Thomas:Yeah, interesting. I mean, there are always two sides to a coin, and I think that's really where we are with this. I think it's important, mostly, to just keep having these conversations, so that you can look at both sides and say, okay, yes, there are good things and there are bad things, recognizing the bad things to try to mitigate them, and recognizing the good to try to encourage it. That's really the only way. If you're only looking at one side or the other, it's very easy to get tunnel vision, and that's all you see. I think you have to look at both.
John Valkovci:You do. You have to have an open mind. It's a technology. It's a tool. How can we use this? It does help, as long as we don't become dependent on it, yeah? Or the word I used earlier, lazy.
Drew Thomas:Yeah, all right, I think we're good.
Jeff Matevish:Yeah, that was great. Thanks.
Drew Thomas:Thank you very much, John.
John Valkovci:Oh, you're welcome. Thank you.
Jeff Matevish:This podcast focuses on having valuable conversations on various topics related to banking and financial health. The podcast is grounded in having open conversations with professionals and experts with the goal of helping to take some of the mystery out of financial and related topics, as learning about financial products and services can help you make more informed financial decisions. Please keep in mind that the information contained within this podcast and any resources available for download from our website or other resources relating to Bank Chats is not intended and should not be understood or interpreted to be financial advice. The hosts, guests, and production staff of Bank Chats expressly recommend that you seek advice from a trusted financial professional before making financial decisions. The hosts of Bank Chats are not attorneys, accountants, or financial advisors, and the program is simply intended as one source of information. The podcast is not a substitute for a financial professional who is aware of the facts and circumstances of your individual situation.
Drew Thomas:Change is always scary. We often cite Henry Ford for revolutionizing the production of automobiles with the moving assembly line in 1913, but most people today don't remember that there was significant worker resistance to the new technology, citing its monstrous and dehumanizing nature. In banking, the introduction of ATMs in the 1970s caused a lot of confusion and concerns about losing the human touch. While no one can argue that AI is evolving rapidly, it isn't alive. It's a new tool, one that we will have to learn how to use responsibly. There's always a wild west phase with new technology. The Wright brothers certainly didn't have to file a flight plan with the FAA at Kitty Hawk. But we can't let the technology get too far ahead of the boundaries. AmeriServ Presents Bank Chats is produced and distributed by AmeriServ Financial, Incorporated. Music by SchneckMind. Our executive producer is still Jeffrey Matevish. You can find our full library of episodes on our website or by simply visiting the show page on your favorite podcast app. For now, I'm Drew Thomas. So long.