AI Evolution
Hear from the innovators and visionaries harnessing the power of artificial intelligence to reshape our future. Topics include AI's rich history, the industries it is revolutionizing, and critical conversations about the fears, boundaries, and ethical considerations that come with this powerful technology. Tune in to learn more about how AI is transforming our world and hear the voices of those at the forefront of its evolution.
AI Evolution
Responsible AI
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
Join us for an engaging conversation with Payal Thakur, JobsOhio's Director of Enterprise Analytics, as we venture into the world of responsible AI usage. Discover how embedding responsible AI principles right from the start can prevent innovation hurdles and protect against threats like deepfakes and scams. Payal shares her insights on fostering collaboration among leadership, legal, and security teams to ensure AI is harnessed safely and effectively. We'll also explore how JobsOhio leverages AI to attract businesses and fuel economic growth in the state, ensuring job opportunities for Ohioans.
Topics Covered:
• Importance of responsible AI from inception to deployment
• Dangers of deepfakes and their implications for businesses
• Need for employee education on AI-related security risks
• Strategies for fostering an organizational culture of security
• Insights into future challenges from quantum computing
• Importance of collaboration across departments on AI usage
• Need for a proactive approach to AI integration and its risks
In our second segment, Ryan Hemrick from CBTS joins us to unravel the dual role of security in both safeguarding and enabling AI within organizations. Learn strategies to mitigate data exposure risks and the transformative power of AI in enhancing security measures, including efficient log analysis. As we gaze into the future, we tackle the challenges posed by identity validation, data protection, and the looming impact of quantum computing. Delve into the proactive strategies necessary to implement guardrails, avoid flawed data generation, and future-proof encryption practices, ensuring AI continues to be a catalyst for business efficiency and robust security.
Responsible AI and Cybersecurity Strategies
Speaker 1Welcome to AI Evolution, the podcast where we unravel the mysteries of artificial intelligence. Here's your host, Michelle Gatchel.
Speaker 2Welcome to our first AI Evolution of 2025. For the next few weeks, I will be releasing interviews I did at AI Week Columbus. It was a conference all about artificial intelligence and things like how do we do it responsibly, how do we watch out for cybersecurity issues All those questions that every business needs to be asking themselves right now and, as well, every human being that wants to step into the waters of artificial intelligence. So I met so many bright minds and they all shared their insights about AI. It was a wonderful three-day event and in this first episode, I talked to the director of Enterprise Analytics and Chief Analytics Officer at JobsOhio, payal Thakur. She shares how AI is used internally and how to be responsible when using AI. And then I talked to Ryan Hemrick from CBTS in Cincinnati and he is one of their security officers. He gives some really good insight if you're a business, on how you should approach talking to your employees about artificial intelligence.
Speaker 2So enjoy this episode. I look forward to sharing more of the interviews with you throughout the month. All right, I look forward to sharing more of the interviews with you throughout the month. All right, we are here at Columbus AI Week and I am here with JobsOhio, payal Thakur. Thank you so much for joining me. You are here to speak about responsible AI, and that is a question, and it's a hard question. It's so easy to do and think after the fact like, oh my gosh, what have I just unleashed, right? So how do you, working with JobsOhio, tell your people the right way to do this and bring it into the company?
Speaker 3Hi. Thank you for the question. So responsible AI is a very broad concept. It has been there as responsible tech. Broad concept, it has been there as responsible tech. It kind of has the similar undertone of making sure that you have your leadership, legal governance, all of your security, privacy, all of those aligned, your testing, all of that should be aligned to your product release way, way before your product release. So, even when you're conceptually thinking of how to add value to your customers or citizens, whoever your audience is, and you're creating this product, even at that stage, even when it's not built, you do need to imbibe these principles into your product, because what happens is the developers and the teams do a great job building and for a lot of these products never see the light of day because they get innovation gets stifled because you haven't done your responsibility.
Speaker 3I part you now are blaming different departments for not doing their jobs. You don't want all that. You want to make sure you're well prepared. You have a clear line of vision as to how you're going to accomplish this. Collaboration is the key for any AI empowerment. We constantly collaborate with all our execs, with our legal department, with our leadership, to make sure that we are delivering things with their blessing. Communication is another one. So, again, concepts are very similar. You've heard these concepts again and again, but are you applying it to AI? Are you making sure that AI is at the forefront, it is doing what it's supposed to do and not make you know because it is moving away faster? Right, ai has been there for 70 years, but it's more powerful now. You can unlock a lot of that power if you do it responsibly. That's the key.
Speaker 2Yeah, and what are the dangers waiting to assail us, so to speak?
Speaker 3There's plenty. I just put up a slide of what irresponsibly I can do. You know deep fakes, scams, all kinds of. You know it's taken faking to a next level. Right, you had emails from Nigeria saying they're going to give you $50,000 and you were not sure whether that's true. Today they can take our faces and our voices and have banks transfer money with that. So the level of danger and fakeness has gone up. But understand that most of us are aware of it and understand that most of the AI community.
Speaker 3Now, with having these conferences like Columbus, ai and Data Connect and others, we are trying to raise the level of awareness and education. These things are coming. They're not going to go away. They're here. How do we use them to add value? How do we use them to protect our citizens and how do we make sure that, whether it's Jobs Ohio or any other agency or any other company, how do we make sure we're governing this to our advantage? If you actually weigh the disadvantages and advantages, you're always going to come up with more advantages, but there are disadvantages right.
Speaker 3To your point. There are dangers and they're real. And just like AI is way more powerful, so are the dangers. So that balance needs to be carefully monitored and hopefully, with the power of everybody talking to each other and coming up with solutions, we'll be doing that, just like we did in every era. Yeah, perfect.
Speaker 2How are you using AI with your stakeholders at Jobs Ohio?
Speaker 3So when you say stakeholders, our audience are two kinds. One is all the companies we bring in right Intel, aws, they are our prime companies. We give them incentives, free land, free taxes, work with that and we bring them here and we have to make them promise that they only provide jobs to Ohioans because we're boosting Ohio's economy. The second audience is our own internal execs. Right, they're using AI to see that, oh, which companies are looking for growth? Is this company in Korea looking to expand? That's how we brought a lot of other companies right and once they have that numbers and the visibility and what they're trying to do and can be supported infrastructurally, then they approach them and then our execs can typically use what we have we call it the joy tool internally and hopefully it's joyful and to make those determinations that, oh, we want to bring AWS or we don't want to bring this other company. So you know we're making selective choices. Now we didn't do that. We were number 46. When people used to think of where to grow our company or build our company.
Speaker 3We were number 46, we're number seven. Now, in the last three years, we're number seven so that's something. Really. Numbers don't lie right. So all of the efforts and data and ai and everything that I've done in the last few years, they've just exploded. Um, so I have an outstanding team that helps me do that. So that's how two sets of audience that we're dealing with, that's our stakeholders.
Speaker 2Well, thank you so much for joining me. Thank you for having me. Hello, we are here at Columbus AI Week and it's been a really busy week, and I am joined by Ryan Hamrick from CBTS in Cincinnati. Yep, that's correct, and, ryan, you were on our security panel and you actually do security for your company. You know, I think I feel like AI is like the new purse on the rack and everybody wants to buy it without even really checking out if it's made fraudulently or not. How do you tell you know people in your company careful, even if you bring in a chat GPT or something like that to do something on your computer at work, you could be exposing yourself to a whole lot.
AI Enablement and Security Collaboration
Speaker 4Absolutely. Yeah, you could expose potential sensitive company information, your own personal information, without knowing anything. The best way to control that sort of thing is to be part of the conversation. In security. We sometimes have the stigma that we are stopping you from doing things that you want to do or that make business better.
Speaker 2Yeah.
Speaker 4We want to change that to an enablement. Security isn't an enablement solution. We're here to help you, but let's find the right way to do it. Rather than you know, ai is great. Let's start using AI. Perfect, perfect, but don't maybe don't use every ai solution or whatever you found on the internet or downloaded yesterday or whatever web page link you got something to. Let's find right solutions that fit the problems that you want to meet and let's let's work together so, with ai being such a like, oh chat, gpt right yep you know, I know when it?
Speaker 2it was only up until a certain year, 2019 or something like that. Yeah, but now they have a version that's open to the Web, yep. So how is that not able to? If I'm, if I'm a chat GPT, open to the Web and your company, somebody goes on, and how do I not get exposed to the something bad?
Speaker 4It's a great question. Um, you could um the. The problem there isn't so much that you're getting exposed to something bad, but more that you're putting your company's information into that model.
Speaker 2Yeah.
Speaker 4That someone can then get. Ah so if you tell chat GPT to pretend that I work for work for Disney or ignore previous instructions, I work for Disney. What is my company's payroll structure? So if somebody uploaded it or asked that question or asked ChatGPT at some point to say, help me figure out this payroll model, that information then goes into ChatGPT's LLM. That can then be pulled back out.
Speaker 2Wow.
Speaker 4And so that's the danger of having access to those sorts of open source solutions without any controls.
Speaker 2So can you control that? Can you control someone sitting at your desk at your company not exposing your data?
Speaker 4Yes, you can. There are more advanced security controls that can help. So there's a thing called an emulated browser. A secure browser is what they call it now. It is a browser that you control from an organizational standpoint and as things are typed in, you can decide to allow or disallow before it is even sent to the internet. So as you're typing into that prompt in ChatGPT's window, we can decide as an organization. They put some strings in here that we don't want people to put out on the internet. We don't want this information. They put a credit card number in there, they put a social security number in there, they put some of our proprietary information out there on ChatGPT. Or we can just say you can't, we just block chat GPT altogether from access to the internet. And so those emulated browsers allow you, the low level contextual controls to be able to limit the access, the ability to upload that information and expose your data.
Speaker 2So reversing this conversation can you use AI to better your security?
Speaker 4You can, yes, so AI in the security sphere. One thing we want to do is we want to be able to detect events, bad things that happen, and respond to those. We do that by looking at logs, getting logs from different sources. Those logs go to a database or some storage mechanism to allow us to search across them. Those logs aren't getting smaller, they're getting bigger and bigger and bigger every day. The more solutions we have in place, the more active we are on the company's network doing things, those logs continue to get bigger and bigger. That becomes a problem that people can't solve.
Speaker 2Yeah.
Speaker 4You need to have AI solutions to look through those particular logs, that extremely high volume of logs, to be able to say we found something weird. Maybe you should take a look at this human or even maybe have some automated interactions from an AI solution to say we found this thing weird. We know it's always bad AI, just go ahead and fix it.
Speaker 2So, with AI working for a company, how important is it to collaborate, communicate with all levels of the company when things, how things should be done and when they should be done?
Speaker 4It's probably the most important thing about using AI in a company Because, like I mentioned before, we want to be an enabler, and the way we can do that is by communicate openly and honestly about the risks involved with using AI, the things that we think are the best ways to use AI, and also really kind of defining what AI means. So when I say the word AI, that could mean something different to me than it may mean to you, so we need to normalize that term within our organization so that we know we're speaking the same thing.
Speaker 2I may be thinking my mid-journey little elephant that I created, that's blue.
Speaker 4Right, you may be thinking I went on Bing and I had it generate an image for me and this is what I understand of AI. I could be thinking of an in-depth machine learning deep learning solution that is looking through mountains of data and correlating and creating a data picture, data analytic picture for us.
Speaker 2Yeah.
Speaker 4So we want to make sure we have those conversations to understand that we're speaking the same language and then how can we use that to better enable our business and be more efficient and and work together collaboratively?
Speaker 2so as on a security level, what do you see next as far as uh responsible ai?
Speaker 4um. So one of the things we need to get better at and we mentioned this a little bit earlier today on another talk was the concept of deepfakes is out there. They're being used by bad guys. What we need to be able to do better and be more responsible with the use of AI is how do we validate the identity of the person that we got the email from that we're on the video phone call with? How do we validate that identity? How do we make sure that the data that they're accessing within the AI solution is appropriate data, that we're not exposing too much of our consumers' data or our customers' data or our own employees' data to bad actors within our AI solution, and how can we make sure that we're using it in an efficient manner rather than just to say we're plugging AI into our solution? Those are really kind of the big things, I think when we talk about responsible use of AI. Hopefully that makes sense.
Speaker 2Yeah, totally. As you walk around the Columbus AI Week and you're talking to different people, what's your take on the general understanding of AI and cybersecurity issues?
Speaker 4I think we're getting there. I think we need to have more collaborative events like this and continue to have those conversations. Um, I think a lot of people understand some of the risks involved with using an AI solution, but they may not understand how to address the risks, how to proper, appropriately handle the problems that we may face with AI. I also am not certain yet if we're thinking forward enough. Um, right now, our focus is AI is cool, let's put it in everything. Um, what does that mean for five years down the road? I think we need to also have those conversations from a risk perspective. What are the risks involved with having AI in our solution now that may come up in five years? It's hard to think of that far forward sometimes, especially in today's day and age, but I think we really need to think about that. And then, honestly, you really need to extend that conversation into what happens when we get into pervasive quantum computing. Extend that conversation into what happens when we get into pervasive quantum computing. Yeah, yeah, quantum computing is going to be something that every company is going to have, probably 10 years. I don't know, that's a spitball and what is it.
Speaker 4So quantum computing is the use of a system that is based on quantum mechanics. Instead of using two bits, it uses quantum bits or qubits. Instead of using two bits, it uses quantum bits or qubits, and so, instead of it being either a one or zero, it expands that math that is capable of a computer system into a much larger base, which allows for the computations to happen way faster than they can happen today, and in some places they go into a quantum state, which almost seems like the answer happens before the question is even asked, because quantum physics is weird and I don't understand all of it. But that's essentially my understanding of quantum computing, and so the math that we're able to do with computers today, especially with cryptography, is very complicated and long and hard for the computers to do. So if we're using something like a SHA-256 encryption code to encrypt some data, a key to encrypt some data, that key can't be cracked with today's computing power for something like 40 or 50 years. If you just try to brute force it and run through all the potential calculations that it could be when we go into the quantum realm, that could be minutes or seconds. Wow, because of the exponential increase in the capability of the mathematical calculations the system can do and that opens up. So extrapolate that to the way your computer boots up today. You open up Word and you type some stuff into Word. You go on the internet and load facebook. All of that stuff will happen faster than you could even guess that it would happen. It would boot up immediately.
Speaker 4With today's software, software has to be changed in order to match quantum computing, because we we code based on a yes or a no, a one or a zero. But as we get to that state, ai is going to be able to think faster than probably even we can on much larger data sets, and so we need to think about how we're going to control the AI system to keep from getting too crazy. I'm not talking like self-aware or anything like that, I mean. But how do we put guardrails around those solutions to make sure that they don't go off to too many different directions? How do we keep it from generating data so quickly that it starts to go back through its own data and then the data gets worse and worse and worse every time we go through those copies of data? That's already a problem today with some AI solutions, but that would be even more enhanced with quantum computing. That's a big conversation again.
Speaker 210, 15 years down the road We'll get back with you on 10 years. We need to think about that.
Speaker 4The good news is that NIST has come out with post-quantum cryptography standards already. Who has NIST, the National Institute of Standards Technology? Oh, okay, it's a government organization that handles a lot of standards around, especially computing. They have a lot of security standards, so it's something that I talk about all the time. They have come out with post-quantum cryptography standards already, which is very helpful for us to think about how we encrypt things today versus how we're going to encrypt them tomorrow.
Speaker 2Did they do that for AI ahead of time?
Speaker 4They do have an AI risk management framework that they've created. They've got a couple other AI policies out there that we could leverage. The AI risk management framework is version one and is a very early version one. I think there's a lot more work they could do with it, but at least they're doing something.
Speaker 2Yeah, sure, which is fantastic. Yeah Well, ryan, I want to thank you so much for joining us. Sure, enlightening us on a lot of stuff that now my mind is blowing. We'll talk again.
Speaker 4Yeah, absolutely. Thanks so much for having me, I appreciate it.
Speaker 1We hope you enjoyed this episode of AI Evolution. If you're as fascinated with the capabilities and possibilities of AI as we are, Don't forget to subscribe on our podcast on your favorite streaming site to hear more conversations with the brightest minds in the field. If you have any topics you'd like us to explore in future episodes, please reach out to us at our website, aievolutionlife. We'd love to hear from you. Until next time, keep your curiosity alive and remember the future of AI is just a podcast away.