Inspire AI: Transforming RVA Through Technology and Automation

Ep 16 - Pioneering Responsible AI w/ Bryan Ilg from BABL AI

AI Ready RVA

What happens when artificial intelligence makes critical decisions that affect people's lives? Who's responsible when AI systems produce biased outcomes or legally questionable results? Bryan Ilg, Chief Sales Officer at BABL AI, tackles these pressing questions in our thought-provoking conversation about the future of AI governance.

With a unique background spanning from scooter rental entrepreneurship to pioneering work in AI oversight, Bryan offers rare insight into how organizations can navigate the complex landscapes of AI implementation. "AI governance is probably one of the last jobs that will ever exist," he suggests, highlighting how the oversight of artificial intelligence systems represents the future of work as task-based jobs increasingly shift to automation.

BABL AI stands at the forefront of responsible AI development, providing independent auditing and assurance that evaluates systems for bias, legal compliance, and effective governance. Bryan explains why traditional risk management approaches fall short when dealing with AI's unique challenges, and why diverse perspectives are essential for responsible implementation.

The conversation reveals alarming blind spots many organizations have when deploying AI solutions—from inadequate user training to failing to consider how automated, quantified decisions must be legally defensible in ways human decisions weren't previously scrutinized. "It's much easier to maintain than to fix a big problem," Bryan notes, making the case that proactive AI governance saves organizations from potentially catastrophic failures down the road.

Beyond the technical aspects, Bryan shares a compelling vision for AI's future where technology serves human-centric values rather than merely driving profits. His perspective offers both practical guidance for business leaders and an optimistic outlook on how properly governed AI might help solve our world's most pressing challenges. Want to learn how your organization can implement AI responsibly? This episode provides the roadmap you need.

Speaker 1:

Welcome RVA to Inspire AI, where we spotlight companies and individuals in the region who are pioneering the development and use of artificial intelligence. I'm Jason McGinty from AI Ready RVA. At AI Ready RVA, our mission is to cultivate AI literacy in the greater Richmond region through awareness, community engagement, education and advocacy. Today's episode is made possible by Modern Ancients, driving innovation with purpose. Modern Ancients uses AI and strategic insight to help businesses create lasting, positive change with their unique journey consulting practice. Find out more about how your business can grow at modernancients.com. And thanks to our listeners for tuning in today. If you or your company would like to be featured in an Inspire AI Richmond episode, please drop us a message. Don't forget to like, share or follow our content, and stay up to date on the latest events for AI Ready RVA.

Speaker 1:

And we're back. Today we're talking to Bryan Ilg, an entrepreneur and AI strategist with a diverse background spanning business operations, IT solutions and AI governance. He currently works as Chief Sales Officer with Deploy Dynamics and BABL AI. He got his start running a scooter rental business and made his way into navigating the complexities of cellular IT partnerships. Bryan's journey has been anything but conventional. His unique ability to reverse engineer problems, identify inefficiencies and align stakeholders has positioned him at the forefront of AI-driven business transformation, and today we are exploring BABL AI and their role in responsible AI. Thanks for joining us today, Bryan.

Speaker 2:

Thanks for having me, Jason. I'm excited to chat today about BABL and what we're doing.

Speaker 1:

Awesome.

Speaker 2:

Maybe you'll have to bring me back to talk about the scooter business down the road.

Speaker 1:

Absolutely, I would love to. I think entrepreneurs have a huge role in the future of AI, so I'd love to get to the bottom of your scooter business story. In the future, we will definitely reconnect. For this episode, let's start the audience off: tell us a little bit about your interest in AI and what brings you here today.

Speaker 2:

Yeah, absolutely. Clearly, I've dabbled in a bunch of different things in my career, and I was fortunate enough to be connected with BABL AI a few years ago while mentoring new businesses through the University of Iowa. I felt really compelled to join their mission and what they're trying to do with AI. So I'm really excited to talk about what they're working on. It's kind of a new space, and hopefully I can shed a little bit of light on what responsible AI is, how BABL AI's services help build trust in products and services using AI and, ultimately, how we innovate faster with more confidence.

Speaker 1:

All right, so then start us out by telling us: what does BABL AI do, and why is independent AI assurance so critical in today's landscape?

Speaker 2:

Yeah, absolutely. BABL AI, really cut and dry: we are an organization that audits and provides assurance for AI systems, checking for bias, legal risk and compliance, and effective governance. We are helping organizations make sure that they are not exposed to unforeseen risks. We're also helping build trust in the markets, with new AI solutions popping up left and right making all sorts of claims.

Speaker 2:

The function of auditing in business is generally attributed to financial auditing, which assures that what people are saying and reporting is trustworthy and accurate, for market stability. We are bringing that into the fold of AI. We really believe that AI auditing is probably one of the last jobs that will ever exist, because as task-based work is replaced by AI systems, people are going to need to manage AI, which is the AI governance component, and then we're going to need to verify and validate that what the AI is doing is trustworthy, hence the audit function. So that's where we're coming from.

Speaker 1:

Interesting observation you just shared with us. You said you believe this is the last job that will ever exist. Did you mean that in a future sense, where ultimately there's nothing left for humans to do but manage and monitor AI? Is that what you really meant by that?

Speaker 2:

Yeah, it's a futurist point of view, I think, and you have to have a little bit of that what's-around-the-corner perspective to think about AI, because it's growing exponentially in all these directions. So it's about figuring out how to manage AI. Unfortunately, I believe it's going to take a lot of jobs that exist today and create a lot of new jobs tomorrow that will be wholesale different.

Speaker 2:

I think those new jobs are going to be more about managing AI than doing tasks, because the AI is going to be able to do the tasks. So we want to make sure those tasks are being done correctly, with confidence and trust. That's what we bring to the table.

Speaker 1:

Yeah. When I think about the risk and exposure of companies just throwing AI solutions out there, it takes me back to one of our previous podcasts on responsible AI, kind of a 101 on the topic, and you definitely contributed to that, so thank you. Now, your team plays a key role in AI auditing research, with funding from the Notre Dame-IBM Tech Ethics Lab, I understand. What were some of the foundational principles you established, and how do they shape AI governance today?

Speaker 2:

Yeah, so our company was founded back in 2018 by our CEO, Shea Brown. He's a professor of astrophysics and machine learning, and we have a lot of legal experts and researchers on the team; that started as our core research team. We were funded by grants to research how to provide assurance for AI, and our whole idea stemmed from this thought: we need to know how to manage AI into the future, because it's going to do big things, and we want to make sure those things are done correctly and with trust. We've published three, maybe four, papers on this specific topic of AI auditing. We really break it down into three sections, and we can get into how this all comes together a little later.

Speaker 2:

But governance is super important. So, who's accountable, and what are the desired outcomes? What's in your policies? That's important for making sure that AI is trustworthy, right? We have to know where we're going so that we can walk it all back to a baseline. Second: what are the metrics you're using to define what "right" is? How are you tracking them, and what risks are associated with them? What might not be covered, or are those metrics producing unintended outcomes? And the third thing: have you tested it? Can you reproduce the testing, and is that testing consistent? That goes into what we're calling our AI audit card today, which is a simply digested report, though not simple to produce, for people to look at, see what's going on in these systems and, hopefully, assure trust within them.
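To make that three-part audit card concrete, here is a minimal, hypothetical sketch of how such a report could be represented in code. The section and field names are illustrative assumptions drawn from the discussion above, not BABL AI's actual audit card format.

```python
from dataclasses import dataclass

@dataclass
class GovernanceSection:
    accountable_owner: str        # who is accountable for the system
    desired_outcomes: list[str]   # what "right" looks like for this use case
    policies: list[str]           # internal policies the system must satisfy

@dataclass
class MetricsSection:
    metrics: dict[str, float]     # metric name -> measured value
    known_risks: list[str]        # risks the chosen metrics may not cover

@dataclass
class TestingSection:
    tests_run: list[str]          # e.g., bias tests, drift checks
    reproducible: bool            # can an independent party rerun them?
    consistent: bool              # do repeated runs agree?

@dataclass
class AuditCard:
    system_name: str
    governance: GovernanceSection
    metrics: MetricsSection
    testing: TestingSection

# Hypothetical example for a resume-screening system.
card = AuditCard(
    system_name="resume-screening-model",
    governance=GovernanceSection(
        accountable_owner="VP of People Operations",
        desired_outcomes=["consistent, legally defensible screening"],
        policies=["equal-opportunity hiring policy"],
    ),
    metrics=MetricsSection(
        metrics={"selection_rate_gap": 0.05},
        known_risks=["proxy features correlated with protected classes"],
    ),
    testing=TestingSection(
        tests_run=["statistical bias test", "drift check"],
        reproducible=True,
        consistent=True,
    ),
)
print(card.system_name, "tests reproducible:", card.testing.reproducible)
```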

Speaker 1:

Yeah, definitely not a simple thing to pull together, for sure. So tell us a little bit about your thoughts: why is AI governance such a challenge for organizations today, and what are the most common gaps you see in AI risk management?

Speaker 2:

Yeah, I'd say it's a challenge because it's just new. Governance, regulation, even a small internal task force, which is one way to think about it, none of that is new to business. It's just that what AI is capable of doing is so tremendously different from any technology before it. In some of these conversations, because we do talk to philosophers, we're really asking: what's the purpose of humans? These are very new questions for business, very new risk profiles, and proper governance for AI and its unique risks really takes a different approach.

Speaker 2:

It's not something that's best served by a single person. It's generally a committee: lots of diverse thought, analytical, critical thinking. Does the AI align with strategic missions and values? For organizations that are winging it and trying to throw AI in, you can see where that doesn't align well, and there are all sorts of risks when people aren't deliberate, thoughtful and intentional about the use of AI. The move-fast-and-break-things model is traditional SaaS; I don't think that works super well with AI until you get your governance right.

Speaker 1:

Just different, I think. I agree. Corporations move fast because they're on a time budget. If they have the innovation and the technology to make change, disrupt the industry and grab the competitive advantage, they're going to, and they're not going to wait until governance catches up to them before they deploy. They're going to put it out there, see what breaks and make a bunch of money, and then, when governance comes around and reins them in, they accept their fate and it puts another barrier to entry in the way. But that's the world we operate in. Innovation happens too quickly, right?


Speaker 2:

That's a good way to say it, and not to get too far off into that topic, but I do think there's quite a bit of opportunity for people who don't play the game that way.

Speaker 2:

There's a lot of enterprise talent floating around out there, and with a lot of strategy and thought, and the use of AI to do a lot of task-based things, there are companies that will be able to get it right first. When the others make mistakes because they're not paying attention, or they're playing by old rules, they won't be able to just say, oh well, that's the cost of doing business. There'll be someone else with a better solution that people will move to, because I do think that tension and that opportunity exist in the market right now. So how I like to sell governance to people, to help them buy in, is this: if you do this right, and you have the right trajectory and aim for your organization, and you can demonstrate that aim through assurance or trust or anything along those lines, that's who's going to win in the future of business. I do think there's a change of the guard coming in terms of business strategy.

Speaker 1:

Interesting. Yeah, you're saying that the old ways aren't going to last much longer, and people are going to catch on and recognize that those who are being responsible in these areas and creating trustworthy applications with this technology are going to be the ones that stand out, because the rest of them are in it for a buck.

Speaker 2:

I think so, and that goes back to people, and this is kind of ethics, if you will. I know that's a really sensitive word in this space, but ethics are those undeniable truths that people have. The things people always seek as value, as consumers, are things like: saves me money, saves me time, reduces my pain, brings me the ability to grow, gives me peace of mind. I think a lot of business models today tend not to do the maximum they could to solve those, and I think there's opportunity there. But that's more about the intent and desired outcome of your AI, which, again, is stuff that BABL does routinely. When we work with organizations, we can walk in, and this goes back to our audit card and our assurances, and we start with governance: who's accountable? What are your desired outcomes? What does "right" look like for you?

Speaker 2:

And I think this is where regulation has been this tipping-point conversation: is it just regulation that you're really concerned about, or not?

Speaker 2:

But now that a lot of the regulation is getting its teeth ripped out, it really just boils down to the business outcome you're trying to achieve, and that's where we focus with our customers: what does "right" look like for you? Then we make sure that is being delivered. Are you mapping it with the right metrics? Are you aware of the risks that might keep you from landing on your desired outcome because of those metrics? And then, are we testing it? Is it actually delivering? Is the system itself built in a way that doesn't have things like model drift or hallucinations? Not everything is going to be perfect, but is it within your risk tolerance? Those are all really interesting conversations we have with our customers routinely. And from a business, entrepreneurial, cowboy point of view, which I guess is where I land these days, I do think there's a better way to go about doing that.

Speaker 1:

Yeah, and naturally people want to do the right thing, so that makes perfect sense to me. So tell me: many organizations adopt AI solutions without clear governance or risk management. What are the biggest blind spots companies have when it comes to AI risks?

Speaker 2:

I'm no legal expert, I'm just regurgitating here. But there are a lot of decisions that historically have been subjective, humans making the choice of A or B, and that's been acceptable: all right, well, we trained our staff to make decisions a certain way. But now they're quantified decisions: 37% of the time customers go this way, the rest go that way. Can you defend that legally? Why was that decision made?

Speaker 2:

This is a whole new ballgame, and when a lot of organizations go quick, boom, boom, boom, I want to get AI in, they don't know how to build the systems. Or maybe it's built into the code in such a way that they're putting quantified decisions that could implicate them legally into engineers' hands, rather than bringing a collaborative team together, HR, legal, engineering, to ask: why are we making this decision at this rate for resume parsing in an HR hiring experience? There are laws that apply there. At the end of the day, there is bias in every decision, but we have to be able to back it legally, and there are existing laws that preceded anything ever written for AI. Organizations have to make sure they're paying attention to that stuff. There are many, many more examples, but that's a good one to think about.
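As an illustration of the kind of quantified, legally defensible check described above, here is a minimal sketch comparing selection rates in a hypothetical resume-parsing system. The counts are invented; the 0.8 cutoff is the EEOC's "four-fifths" rule of thumb from the Uniform Guidelines, which long predate AI.

```python
# A minimal sketch of a quantified bias check a collaborative team
# (HR, legal, engineering) might run on a resume-parsing decision.
# Counts are hypothetical illustrations only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group advanced by the system."""
    return selected / applicants

# Hypothetical counts for two applicant groups.
rate_a = selection_rate(selected=120, applicants=300)  # 40% advance
rate_b = selection_rate(selected=45, applicants=150)   # 30% advance

# Disparate impact ratio: least-favored group's rate over most-favored.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.75

# Under the four-fifths rule of thumb, a ratio below 0.8 flags potential
# adverse impact that the organization must be prepared to defend.
if ratio < 0.8:
    print("Flag: selection-rate gap warrants review and a documented defense.")
```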

Speaker 1:

Yeah, you've mentioned it a couple of times: diversity of thought. I think that, by and large, if you can put diversity of thought into the equation, you'll get most of the way there, right? Different people trying to solve the same problem together in the same room will do the right thing, I believe.

Speaker 1:

So, thinking about these risks a little bit more: what are some of the biggest AI-related legal, reputational or financial risks that businesses tend to overlook? Can you share any real-world examples of failures?

Speaker 2:

Yeah, there are some I like to call out that are pretty common. Existing enterprise risk management systems, or risk management frameworks in the financial industry, for instance, are really robust, and in their world that means a lot of red tape, a lot of regulation they've got to sort through, for good reasons, to protect consumers. But AI brings a whole new risk profile into the fold. So assuming that those really rigorous risk management frameworks from before AI, we can call it, are good enough is, I think, a real big gap in knowledge. There's a ton of new risk profile elements, like we just mentioned.

Speaker 2:

A different one, outside of the quantified choices, is your procurement policies, your procurement rigor. How are you inspecting a paid service you're bringing into your organization? How are you validating and testing the claims, the accuracy, all of those things? Because once you employ that tool in your workflow, pretty commonly we're seeing that you're materially changing the intended use under the terms and conditions of that tool, which passes the liability for any sort of error onto the organization employing it.

Speaker 2:

So even having weak procurement rigor and policies that don't inspect correctly opens you up legally to unintended use or unintended outcomes. Here's a different way to think about it: if you're a teller in a bank and a customer asks, how are you making decisions about my money using AI, can you explain it at all? That creates new friction between consumers and banks. If you're not buttoned up with policies, training and the rest to defend those decisions, those become real challenges for institutions and organizations to speak to, and that opens up risk.

Speaker 1:

Yeah, transparency and explainability with the latest technologies are some of the hardest things to get right when establishing new innovations around these tools and use cases. I get that. So: many companies adopt AI without clear oversight. Tell us a little bit about the dangers in that, and how does BABL AI help organizations establish proper governance?

Speaker 2:

Yeah, so we help organizations all over the place, wherever they are in their AI governance journey. We're aware that organizations are, by and large, in a messy state right now, though there are some that really have it together. What we do is help organizations with an AI governance gap analysis; that's always the best place to start. Every single AI use case or system is unique, so we come in with our team of experts and go through discovery, where we get a good scope of all the inputs, decisions and desired outcomes of the system. That starts with the things I mentioned: governance, interviews, just getting to know folks.

Speaker 2:

From there, we do a side-by-side gap analysis and risk review. As we identify gaps, we rank the risk, and we do this so we can ultimately map the biggest-impact items and the shortest time windows to resolve them, and map them toward a risk management framework that covers those unique AI risks; that's also part of the discovery. Ultimately that's delivered in a report, and then we leave organizations with the decision on how they want to move forward. We can continue to help, or they can take it back internally, but we have everything buttoned up at the end.

Speaker 2:

I was going to mention, just real quick, that the risk management framework could be something like the NIST AI Risk Management Framework if you're just doing business in the United States. If you're more global, you might want to map those risks toward something like the EU AI Act, which is a little more rigorous. It's just differences of geography: where the business is conducted, where the AI is being used. So we help organizations get to, again, what's right for them. It is a custom process, but it's formulaic in our approach.
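Here is a minimal, hypothetical sketch of what mapping gap-analysis findings onto the NIST AI Risk Management Framework's four core functions (Govern, Map, Measure, Manage) might look like. The gap descriptions, rankings and time windows are invented for illustration; this is not BABL AI's actual deliverable format.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    description: str
    nist_function: str    # one of "Govern", "Map", "Measure", "Manage"
    risk_rank: int        # 1 = highest impact
    weeks_to_resolve: int

# Hypothetical findings from a governance gap analysis.
gaps = [
    Gap("No accountable owner for the hiring model", "Govern", 1, 2),
    Gap("Vendor claims never independently tested", "Measure", 2, 6),
    Gap("No inventory of AI systems in production", "Map", 3, 4),
    Gap("No drift monitoring after deployment", "Manage", 4, 8),
]

# Prioritize the biggest-impact items with the shortest resolution windows.
for gap in sorted(gaps, key=lambda g: (g.risk_rank, g.weeks_to_resolve)):
    print(f"[{gap.nist_function:>7}] rank {gap.risk_rank}: "
          f"{gap.description} (~{gap.weeks_to_resolve} wk)")
```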

Speaker 1:

Cool, cool. So what are some of the most overlooked aspects of AI testing? Are there any industries that are getting it right, setting a good example for everyone else?

Speaker 2:

Organization by organization might be the better way to say it, instead of industry by industry. But one of the things we see overlooked is user testing, beta testing, ensuring that end users understand what's going on. It's a rushed market, get out there, and I've been on the receiving end of a poorly trained user at a doctor's office. I went in and had to fill out a form that said, we're using Microsoft Copilot to do whatever with your historical notes; do you consent? I like to test everything in the real world, so I went up to the receptionist and asked, can you tell me which notes are actually going to be processed, or what the function is, or the purpose, or any of that? And they looked at me like I was speaking Greek. The person who handed me the form to sign couldn't even roughly explain what was going on with the data, and that's some real disconnect between mission and end user.

Speaker 2:

So we look at the comprehensive rollout: training, process documentation, the stuff that just drips off into nothing in a lot of organizations, I think, and that really affects the quality of AI adoption. Someone in the investment space was talking to me and said it's really concerning for the investment community, VCs, private equity, because what they're seeing with their early AI investments is that founders have been able to secure MVP contracts, but the adoption rate for the year-two contract, the big re-up, is failing at a lot higher clip than traditional SaaS metrics, because the users are completely clueless about how to use the tool. They're being left alone. So weak user testing isn't just a bad outcome; it's affecting investment. There's a lot of pressure right there, and it's really overlooked.

Speaker 1:

Yeah, I need to go back to the receptionist who was asking you to fill out the liability form on the use of AI. That's not a great way to establish trust with your customer, that's for sure. So shame on them.

Speaker 1:

What you just said about the second-year use of these services strikes me as, yes, a user training issue, which is an interesting one, like you said. Is it because users are told to just adopt this new technology, start using it on their own and go learn from it? Are companies not investing the time in the human aspects of leveraging these technologies, or is it something else?

Speaker 2:

There are existing processes in place, so there are workarounds in some capacities, right? They can go back to older ways of doing things. Maybe those are left in place as a fail-safe or redundancy, and people fall back to them instead of adopting the new way. I didn't get into a specific use case with that; it was a broad statement from someone in that investor space, someone who has had nice wins in Silicon Valley and works at a large consulting firm, and this is a concern they see in the market for investment.

Speaker 1:

Well, I could definitely see how, if you give people options, they're going to choose the one with the least resistance. So if you're going to go for it, you need to burn the ships. You need to take out the old ways and say, this is the new way, you're going to learn this because we believe in it. It's like giving them two different Word document editors, right?

Speaker 2:

Right, that's exactly right.

Speaker 1:

Stop using that old one. We're going to keep it on your PC, but don't use it anymore. We want you to use this new one with the AI technology in it, because it's going to help you do your job better, be more creative and save you time. But it may not be as intuitive at first, right? So they say, oh, I'm not sure about this. Then they give it a shot and they're like, oh man, I just wasted an hour; I've got to get this out. And that's fine. But sometimes you do have to look at the opportunity ahead, invest a little more time and burn the ships, before deciding it's easier to go back.

Speaker 2:

Yep. So invest in that user testing and invest in great training. Those are commonly accepted best practices in change management anyway, so why organizations skip or deprioritize them is always strange to me. But it's really hitting in a lot of different places, so there's a lot of opportunity there.

Speaker 1:

Yeah, so your team pioneered AI auditing research. What were the key findings and how have they shaped today's best practices in AI assurance?

Speaker 2:

Yeah, probably the best way to say it is, like I said, we publish research, and we have our audit card, which is one of the ways we're helping businesses. A couple hundred agencies, maybe 200, contributed research to the NIST framework, which is probably the most commonly accepted risk management framework here in the United States in terms of best practice, and we're happy to be a contributing research group in that. We've developed a great network of researchers and thought leaders to produce that.

Speaker 2:

Overall, though, we're also part of the International Algorithmic Auditors Association, which is a global network for unified auditing standards; our CEO sits on its board. He's also a board member of an organization called For Humanity, which is working toward AI for good: AI that's people-first, human flourishing, those types of outcomes. And we participate in different think tanks with universities. We also apply our research daily in our assurance work, so we're out at the leading edge of some of the use cases, agents, LLMs, the more difficult, challenging things, rather than just a basic what's-the-statistical-bias-of-this-decision-over-that-decision. We're out here trying to solve some of the more complex challenges daily with organizations. We research all the time; it's a core function of our business, and we've applied it in many different avenues.

Speaker 1:

That's very cool. You all have your hooks in a lot of different areas.

Speaker 2:

I work with a lot of smart people. It's pretty awesome.

Speaker 1:

Indeed, it is. So how do you measure AI success beyond just performance metrics, and what should businesses be considering when evaluating AI systems?

Speaker 2:

Yeah, that's a really good question, and this isn't a punt: every organization has its own desired outcomes with AI. BABL isn't strategy and isn't change management; we're just aware of how important those things are, and we help assure that those outcomes are there. When you're talking about metrics beyond profit, those are things like human impact and value generally, and the term "ethical use of AI" always gets mystified. So I went and looked, because we're here in the state of Virginia and this is for AI Ready RVA: if you're still listening to this episode, this all comes from the VITA policies on acceptable, ethical use of AI. This is how the state of Virginia is actually measuring it, and these are its guiding principles for AI use. To grab a couple of points: they want AI to be trusted, safe, secure, and implemented in a responsible, ethical and transparent manner. It must be validated by humans for bias and unintended consequences, and all departments, agencies and offices must ensure things are explainable, accountable and resilient. These are metrics that are more idealistic states for a lot of organizations: yes, we want to be positive for everybody, but what is that? So we go back to this: your desired outcome defines the North Star of the metrics, and are your metrics really producing that? Performance metrics typically track back to close rates, ROI and those things, which are incredibly important for capitalism and business generally, which I support.

Speaker 2:

But there's this other layer now: how do you put metrics to that? Those are things we can help organizations figure out, because a lot of it is more philosophical, and taking data and turning it into philosophical outcomes is really challenging. But that goes into our testing, evaluation, verification and validation process. For example, one organization in the hiring space was stuck in a procurement challenge; their customer really wanted to know whether the product was producing bias against candidates with English as a second language. So we worked with them to come up with the right cocktail of metrics to produce that report. That's how we help organizations on that side. There are all sorts of metrics that go into it; that's a whole podcast on its own.
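As one illustrative slice of such a "cocktail of metrics," here is a minimal sketch of a two-proportion z-test comparing advance rates for candidates with English as a second language against native speakers. The group labels and counts are hypothetical, and a real report would combine this with several other metrics.

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """Z statistic for the difference between two independent proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: advanced-to-interview vs. total applicants per group.
z = two_proportion_z(success_a=54, n_a=200,    # ESL candidates: 27% advance
                     success_b=140, n_b=400)   # native speakers: 35% advance

print(f"z = {z:.2f}")

# |z| > 1.96 corresponds to p < 0.05 on a two-sided test: the gap in
# advance rates is unlikely to be chance alone and warrants investigation.
if abs(z) > 1.96:
    print("Flag: statistically significant rate gap between groups.")
```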

Speaker 1:

You've brought up philosophy a number of times in recent questions. Does any philosophy stand out to you? One thing you said earlier was, what is the role of the human being in a world where AI exists? I probably misquoted you there, but that stands out to me as really profound work. Have you come across any other quotes or interesting thoughts there? I love philosophy, by the way; that's why I do this.

Speaker 2:

I don't know if I have anything specific offhand there. I have my own personal philosophy for winning in business, and it's probably not specific to the BABL use case; I'd love to come back on and talk about it down the road. But I think working toward people-first outcomes, and really being able to define what a people-first outcome is, is this ethical AI, if you will, or ethical use case, however you want to call it. The problems that exist in this world aren't "how many emails produce a close rate?"

Speaker 2:

The problems in this world are: are we going to have food for everybody, water for everybody, an environment that doesn't attack us and ruin our houses on the coast? Right, we want a beach house that's nice and not going to get wrecked by climate change. I think those are the new business outcomes people should be striving for. That's my own personal philosophy, and I do think AI gives us the opportunity to pursue those solutions. If you can aim AI toward outcomes that reduce those impacts, and I'd say start at the root cause rather than just addressing the results, that's going to get people into better places across the globe. So that's my personal philosophy. I don't know if anyone else has it.

Speaker 1:

If you start there and keep that as your North Star, then I don't think you can go wrong, honestly. This philosophy talk makes me want to think about the future. So tell us, Bryan: what's next with AI oversight and governance?

Speaker 2:

Yeah, there are a lot of really great governance tools that exist. When you think about AI at scale, how do you manage it day to day? It's making a lot of decisions for people, so there are new tools, new industries emerging, new software to be sold there. But I think that's just a little slice. Overall, and we talk about AI literacy and workforce transition, more people need to understand how to govern and manage AI, and the people who really invest in those skills are the ones who will be leading the future of business. Just like SOC 2 or an ISO standard, these are well-respected certifications the enterprise has validated as necessary to do business, and AI assurance, or an AI audit, is going to fall right in line with some of those same principles. I would say AI governance should be a commonly accepted workforce role, and AI auditing should be as common as a financial audit in terms of how we think about business functions.

Speaker 1:

So, yeah, a lot of stuff coming that way. I have one more question about this before we move on, and it's this: it feels to me like the conversation is about either creating this governance role in your organization or moving people toward the governance role. But what about the people doing other jobs, who should also be gearing up to understand AI governance and the tools, and to be able to have the conversations in the diverse focus groups around solving the problem? What do you think they need to know in tangential roles in this equation?

Speaker 2:

Yeah, that's a good question. How I like to think of AI and workforce transition: there are a few people out in front, and it is a few, even though the echo chambers of people deep into AI can make it seem otherwise. Some organizations have zero clue what's going on; they're just aware that it exists. So everybody, in my opinion, is starting in relatively the same place, at an entry-level knowledge.

Speaker 2:

I would say every single person in the workforce today understands what it is they do really, really well right now; the next step is just understanding how AI functions.

Speaker 2:

That's part of the education we bring: the principles of AI, how we define the socio-technical algorithm.

Speaker 2:

So there's the human, there's the machine, and it's a workflow, and it is now all math, done by robots and agents, almost immediately, which is crazy. But effectively it's just workflow design: thinking about business processes and automating them quickly, and understanding how it works. Once you understand how it works, you can start to think more strategically about how to maneuver yourself, how to position yourself to either be a key human in the loop or be above it, managing that system, that workflow or the product. Everybody has a unique, diverse perspective, which makes them valuable if they understand how AI works. That's how I like to think about it. And then, whether you stay kind of ground floor or want to ascend into more leadership roles with AI, I think it's all there. Ultimately, like we said earlier in the pod, AI auditors are probably the last job on the planet, to a degree. That's a future state.

Speaker 2:

A lot of time to get there.
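As a small aside on the "key human in the loop" idea above, here is a minimal, hypothetical sketch of that workflow pattern: the machine handles the cases it is confident about and escalates the rest to a person. The threshold, labels and scores are illustrative assumptions, not a prescribed design.

```python
def route_decision(confidence: float, threshold: float = 0.9) -> str:
    """Automate confident decisions; escalate uncertain ones to a human."""
    return "auto_approve" if confidence >= threshold else "human_review"

# Hypothetical model confidence scores for incoming cases.
for case_id, confidence in [("case-1", 0.97), ("case-2", 0.62)]:
    print(case_id, "->", route_decision(confidence))
```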

Speaker 1:

Let's round it out for our listeners. We've got business leaders listening to this episode, thinking about what they should do. What is the first step you would recommend they take to ensure they're using AI responsibly?

Speaker 2:

Yeah. So if you're just hearing about AI governance right now, it's a new concept, I would just take time to read some basic information; there's so much out online, and some really good people out on LinkedIn. One person I really recommend has a great book called Ethical Machines, by Reid Blackman; our companies are familiar with each other. He's an ethics consultant. Just the first chapter of that book, man, it's punchy, it's good, it's sharp. It explains how governance structure and governance content come together and draws out some basic aha, I-understand-where-that-can-go-wrong moments. It helps frame your thinking, so if you're new to this, I highly suggest that book. I've made that first chapter part of my sales team's training curriculum. So shout out to Reid, and our CEO definitely helped out with some of the ideas around that book and fact-checked it. I learned that after I read the book.

Speaker 2:

Oh, nice um yeah, so that was cool to know. But yeah, reed's a good dude, um, and then you know if, if, if not, um, you want to go. That's how you kind of start. That's what I would suggest. If you want to go a little bit more in depth, babel AI. You know we're a leader in this space in terms of audit and assurance. We have our own coursework, so you could even, you know, look behind and learn how we're inspecting these systems and how we think about governance from an assurance and audit function. So, if that's more attractive thinking about, we have a whole AI auditing certificate program and governance is just one of the five courses of that. So we have all sorts of room to grow. But within, that is kind of the principles and a simple checklist of how to build your own governance system. But yeah, there's a lot of good information out there. Those are kind of my two biased opinions, but there's a lot of great information, a lot of good people working on air governance right now.

Speaker 1:

Cool, I love to listen to a good Audible book, so awesome. All right, and lastly, Bryan: if there's one thing that you hope our listeners take away from this conversation, what would it be?

Speaker 2:

One thing? Maybe this is a different angle to think about all of this, a new perspective.

Speaker 2:

AI governance is important, but what does it really do for a business?

Speaker 2:

If you have your governance and your policies, and you're constantly testing against them, you're better able to maintain your AI trajectory and your systems, and it's much easier to maintain than to fix a big problem.

Speaker 2:

Think about a car: changing your tires versus a blowout. Or going to the doctor for routine checkups versus a big diagnosis that just didn't get addressed. Those are the dichotomies I look at AI governance with. If you have AI governance in place, you're maintaining, you're tracking, you're managing, versus, I'm just out here freewheeling and hoping nothing bad happens; and when it does, it's going to be bad. It is an investment, and I say invest in the maintenance rather than the big fix, because the big fix is more expensive and has so many other risks associated with it. I think it's a necessary thing for organizations to reduce their overall liabilities and potential brand impacts from AI. And trust me, there are lots of use cases you can go look at where people got it wrong, and there are class-action lawsuits already moving. There's a lot.

Speaker 2:

So invest in it, take it seriously, and put yourself in a position to innovate and win long term. I think that's what AI governance does for organizations.

Speaker 1:

Well, I for one have been convinced this is the real deal, and I'm going to take it seriously. I would love to explore more of that content from BABL in the future, personally. All right, last question for you, Bryan: if you could wave a magic wand and have AI do anything, what would it be?

Speaker 2:

Man, that's a good question. I'm a futurist, an eternal optimist, but a realist. I would love for people to deploy AI in a way that didn't just prioritize profitable outcomes, and instead pursued people-first outcomes that produce profit. With that, I think we can solve the world's problems, right? When people ask, hey, what are you up to? I always tell them, I'm trying to change the world. So I hope everybody buys in and uses AI to create valuable outcomes that prioritize people. That's what I would love AI to be able to do, and I think that creates purpose for people. We have a lot of problems to solve and a lot of work to get done, but I think AI is the magic wand to achieve it, unlocking a little bit of creativity in there too. So I'd love AI to solve the world's problems. How's that?

Speaker 1:

Yes, I'm with you there. I think it does bring purpose to focus your life's goals around solving big problems, for humanity's sake, for your local community's sake, whatever it is, however far you reach. It's a beautiful thing. I think most people want to make a difference.

Speaker 2:

They really do. I think AI is an incredibly powerful tool and, aimed in the right direction, it can really do some cool stuff. That's what we try to do at BABL: help people get their aim and their trajectory, and make sure they're getting what they want out of their AI, applied for the right reasons. I think those business strategies win long term, and it's going to be good stuff. Rocky road, but we'll get there.

Speaker 1:

Yep, you've convinced me: the future is responsible AI. That's awesome. I can sleep better at night now. Bryan, thank you.

Speaker 2:

Man, I wish I could say the same for myself. But I'm happy to hear it: I'm helping change the world, getting Jason to sleep a little better at night. I'm still up on ChatGPT all night talking about it. All good stuff. All right, man.

Speaker 1:

Thank you for your time today and I look forward to talking to you again in the future.

Speaker 2:

Okay, yeah, likewise, Jason. This was great. Cheers.

Speaker 1:

And thanks to our listeners for tuning in today. If you or your company would like to be featured in an Inspire AI Richmond episode, please drop us a message. Don't forget to like, share or follow our content, and stay up to date on the latest events for AI Ready RVA. Thank you again, and see you next time.
