
What's Up with Tech?
Tech Transformation with Evan Kirstel: A podcast exploring the latest trends and innovations in the tech industry, and how businesses can leverage them for growth, diving into the world of B2B, discussing strategies, trends, and sharing insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
AI Governance: ModelOp's Approach to Enterprise Trust
Interested in being a guest? Email us at admin@evankirstel.com
Trust remains the central challenge for enterprise AI adoption, as revealed in ModelOp's comprehensive AI Governance benchmark report. CTO Jim Olsen explains why so many generative AI projects remain stuck in development limbo, with 56% taking 16-18 months to reach production.
The disconnect stems from AI's non-deterministic nature - unlike traditional software, models can't be fully predicted or verified in the same ways. "One bad recommendation is harder to overcome than a thousand correct ones," Olsen notes, highlighting how organizations struggle to build confidence in systems that sound convincingly authentic even when delivering incorrect information. This challenge becomes particularly acute in regulated industries where the stakes are highest.
Financial services companies have developed the most mature governance practices out of necessity, having faced multi-billion dollar fines for improper model management. However, healthcare faces even greater complexity with "life or death" decisions and patchwork regulations across different jurisdictions. In both cases, fragmentation within enterprises compounds governance challenges, with different teams pursuing siloed approaches that prevent organizations from learning collectively.
ModelOp addresses these challenges through centralized model lifecycle management that provides visibility, consistency, and automated governance. Their "minimal viable governance" approach enables organizations to start with essential controls and iterate, rather than waiting for perfect solutions. As AI evolves toward autonomous agents with decision-making authority, governance becomes even more critical.
Ready to accelerate your AI implementation without compromising on trust or compliance? Discover how leading organizations are cutting deployment times in half while building stronger governance foundations. The key isn't waiting for perfect solutions, but starting the governance journey now before complexity overwhelms your AI initiatives.
More at https://linktr.ee/EvanKirstel
Hey everybody, fascinating chat today on AI governance with a true innovator in the space at ModelOp, Jim. How are you?
Speaker 2:Doing good. Doing good today. How are you doing?
Speaker 1:I'm doing great. Thanks so much for joining. From your off-the-grid location, I see the solar in the background, we've got Starlink going. Really intriguing. But before all that fun stuff, maybe introduce yourself and what's the big idea behind ModelOp.
Speaker 2:Sure, yes, I'm Jim Olsen. I'm the Chief Technology Officer of ModelOp, and I actually did the architecture and design of the original system, so I know a lot about the space. Myself and some of my colleagues have been working in this area through previous efforts for a long time, 10-plus years, so we have a lot of knowledge about not only the newer generative AI but also traditional AI, traditional statistical techniques, and how they affect your business. We created the ModelOp solution to bring in full lifecycle management of all kinds of models, everything from an Excel spreadsheet to an LLM foundational model, and now agentic AI solutions as well. We put that together to make the process a lot easier, because we found a lot of companies are struggling to get solutions using these technologies out to production.
Speaker 1:Fantastic mission. And you released an AI governance benchmark report recently, ideal for this audience. So let's start with the big picture. What was the big idea, the motivation behind the benchmark study, and what did it tell us?
Speaker 2:Well, a lot of it was understanding where companies really are with their AI solutions. There's a lot of disparate information and articles out there. If you listen to the Silicon Valley digital-native companies, everybody's using it for absolutely everything and it's the next big thing. But then you talk to some of the enterprises and they're more hesitant: how does this impact my business, and what am I willing to put into place?
Speaker 2:So there wasn't a lot of clarity into what the plans of an enterprise business actually are across a variety of different spaces. What we found is that a lot of companies are struggling to build trust within their organizations about these solutions because they don't have the insight. We're seeing a lot of IT departments pushing back because they're finding shadow AI. We've heard of hospitals where people were posting customer data into ChatGPT-4 to get summarizations, going around IT, which obviously is a huge risk; it's breaking several laws. So how do they get these processes in place? That's where we saw that a lot of companies are struggling with these concepts as a whole, and in the report you can see a lot of our findings. A lot of people are playing with it but not getting solutions out there quickly, so they're losing business value, but understandably they also want to make sure they have trust in these solutions.
Speaker 1:Well done. And one of the headline stats from the report: 56% of Gen AI projects take 16 to 18 months to reach production. So how do we get out of this kind of quagmire?
Speaker 2:Well, that's the foundation of why we built the company. Think of software in the 90s. People would develop it on their desk and just throw it out into production. There weren't really processes, and because of that things broke. And at least programming is deterministic in nature, so you're able to put processes in place. What company wouldn't have some kind of CI/CD pipeline nowadays? Common-practice stuff, but back then those didn't exist.
Speaker 2:What we found is that those same kinds of processes, for enabling more efficient deployment of these solutions, for understanding and insight into them, and for reproducibility, didn't exist for the model world: AI models, foundational models, et cetera. So what we created was a process where you can automate a lot of this to make it easier. Software is very deterministic in nature; AI models are the exact opposite. So what can you put in place to provide those insights and build that trust within your organization so you don't hit all of these roadblocks? We found that when we deploy our solution into customers, they were easily cutting that time in half, if not more, depending on how sophisticated the company was to start with. And we created a formalized repository where people can find out who's using this stuff, for what use cases, what is already approved out there, could I leverage this, et cetera. So it's having that centralized inventory and an automated lifecycle process to drive the software out to production.
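The centralized inventory and automated lifecycle described here can be sketched in miniature. This is an illustrative toy, not ModelOp's actual API; the state names, transitions, and the `ModelInventory` class are all hypothetical stand-ins for the idea of a single registry with enforced lifecycle steps:

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle: a model must pass review before reaching production.
TRANSITIONS = {
    "registered": {"in_review"},
    "in_review": {"approved", "rejected"},
    "approved": {"production"},
    "production": {"retired"},
    "rejected": {"in_review"},
}

@dataclass
class ModelRecord:
    name: str
    use_case: str
    owner: str
    state: str = "registered"
    history: list = field(default_factory=list)

class ModelInventory:
    """One searchable registry of every model and its lifecycle state."""

    def __init__(self):
        self._models = {}

    def register(self, name, use_case, owner):
        record = ModelRecord(name, use_case, owner)
        self._models[name] = record
        return record

    def advance(self, name, new_state, note=""):
        # Refuse any step the governance process does not allow.
        record = self._models[name]
        if new_state not in TRANSITIONS.get(record.state, set()):
            raise ValueError(f"illegal transition {record.state} -> {new_state}")
        record.history.append((record.state, new_state, note))
        record.state = new_state

    def approved_for(self, use_case):
        # "Who's already using this, and is it approved?" -- the discoverability point above.
        return [m.name for m in self._models.values()
                if m.use_case == use_case and m.state in ("approved", "production")]

inventory = ModelInventory()
inventory.register("claims-summarizer", "claims triage", "data-science")
inventory.advance("claims-summarizer", "in_review")
inventory.advance("claims-summarizer", "approved", note="reviewed by risk team")
print(inventory.approved_for("claims triage"))  # ['claims-summarizer']
```

The point of the transition table is that skipping review (say, `registered` straight to `production`) raises an error, which is the automated version of the gate a manual process would enforce.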
Speaker 1:Well done. And you surveyed a number of sectors: financial services, pharma, manufacturing. Did any particular vertical stand out in terms of maturity or challenges with AI governance?
Speaker 2:Financial services is the most mature, because they're forced into it. Some of our very first customers were financial customers, heavily regulated. You can't have models making trades or predictions, or deciding whether somebody gets a loan or not, that aren't well scrutinized and understood. So that was obviously the first place that had the most challenges, because they could be audited constantly and had to have everything documented. I don't remember which bank it was, but there was a multi-billion-dollar fine for not doing this properly, so there's a lot at stake. Obviously, for them, that created the necessity, and necessity is the mother of invention. You'd see a lot of homegrown processes in there that weren't always so effective, because they weren't stepping back from their own business to develop them. Having our own solution, which is more neutral and takes all of those concerns into account, creates a more efficient approach.
Speaker 2:We have several firms in the financial sector. What we're starting to see now, though, is AI coming into the healthcare industry, and there we're literally talking about life-or-death decisions. So I see that as a space with even more challenges, because they weren't born and bred in the statistical nature of a financial institution, where these things are well laid out and well regulated. There's patchwork legislation across different states as to what AI can be used for within healthcare, and of course there are very real concerns. Nobody wants one of these models to blow up and be a stain on their reputation. There was an example in healthcare where long-term care was being recommended less often for minority groups than for non-minority groups, and it actually resulted in a lawsuit. So there are very real-world situations that come up when you bring these solutions into life-or-death settings. We've definitely seen a lot of interest there as well.
Speaker 1:Yeah, we all saw the challenges with IBM Watson many years ago, an early stab into the healthcare space with a lot of difficulties. I think we've matured a lot since then, but there's still a lot of work to do. And you mentioned the report shows 50-plus generative AI use cases in many organizations, but only a handful make it into production. What's the disconnect? Why the drop-off?
Speaker 2:Well, to be fair, there's a natural drop-off generically. Everyone's got great ideas, and in bringing them to production there's always some attrition.
Speaker 2:But what's driving it even more now is this lack of trust. People are more skeptical of these solutions because they are non-deterministic in nature. If you have a model that predicts whether a cell is cancerous or not, that's fairly readily verifiable and testable to a degree; you have known, labeled use-case data and things like that. But when we get into something like "summarize this patient record into a recommendation," or "summarize this prospectus from a company so I can make quick decisions about whether we should be investing," that's not deterministic in nature. So people are naturally skeptical, because they can't look at it and say for sure: it sounds good, but is it right? That's one of the challenges, especially with foundational models. They're known for sounding very professional and intelligent but not always quite so factual, so they're very convincing at giving you the wrong information and convincing you it's right. And one bad recommendation from one of these situations is a lot harder to overcome than a thousand correct ones; people tend to remember where it went wrong. So how do I build the trust that we are holistically looking at not only the foundational model itself but its applicability to my use case, the risks and mitigations I've put in place, and the different tools being used? A single-pane-of-glass inventory like we provide helps deliver that clarity: oh, it's also been used over here, it worked really well over there. Okay, now I've got a little more confidence. You can start to build that trust.
Speaker 2:And these six people reviewed it, these risks were identified, and they said, yes, this will be okay because of this. You need that kind of story behind the model to build trust as it gets out there. But likewise, you can't have building that story be a manual process on an Excel spreadsheet, a SharePoint file, or a bunch of Jira tickets scattered all over; that doesn't give you the story. That's where our software helps: it pulls that information together into documents, with risks, mitigations, and findings all in one place, and makes sure all those Jira tickets are tied back and actually happen. You've got to automate all of that and make it readily available, whether you build it yourself or buy a solution like ours, because otherwise the model lifecycle process becomes overwhelming.
Speaker 1:Amazing. Spreadsheets for AI governance, what is this, 1999? I mean, come on, we need to up our game a bit. And that's you, healthcare, with your fax machines and your email; it's like a zombie that just won't die. The other challenge in the enterprise, as you know, is fragmentation: lots of silos, lots of technical debt. What does that look like in the real world in terms of impacting AI at scale?
Speaker 2:Well, obviously, if you have different people taking entirely different approaches, using different technologies without any consistency, it creates more of a burden, not only in getting these things deployed, because that's just work, but also in doing reviews of the technologies. A lot of that happens naturally because these are largely grassroots efforts we're seeing; initially it doesn't tend to be centralized at a higher level, so water finds its own level. Individual groups pick their best-of-breed tools and solutions and run at it without knowledge of what the other teams are doing, because they can't find them. When you get to very large companies, that's just a reality across business units and teams.
Speaker 2:Collaboration is a challenge. You do want a centralized process, shared understanding, and the ability to automatically generate findings: hey, have you thought about this, this, and this, because we've already seen it be a problem elsewhere in the organization? Have you taken this into account? Who are the responsible people to talk to? That's the challenge: without some kind of centralized, understandable, and automated process, there's inconsistency even in the process itself, which becomes frustrating to all these individual teams. You're not standing on the shoulders of giants within your own company; you're instead all trying to forge it on your own, and we know that never works out as well.
Speaker 1:It's not. So let's talk risks. There are still lots of landmines to avoid out there on the regulatory side, lots of compliance risk and fines and other challenges. What do you advise customers to be aware of when it comes to real-world exposure?
Speaker 2:Regulations are definitely important, because if you're not compliant with a regulation, that's pretty cut and dried: you're going to get in trouble. And what degree of trouble you get into will also depend on how much process you can show, because nobody's going to be perfect. If you did nothing and just ignored it, they're going to be a lot harsher on you than if you tried your best to do everything right. Things are still going to go wrong; that happens. If it goes wrong just because of a black swan event or something like that, you're probably not going to get in that much trouble from a regulatory standpoint. But what we really talk about, even more importantly, is your brand, so it's not just about getting a regulatory fine.
Speaker 2:In many, many products, especially in the consumer product space, and we work with several companies there, your whole value is in your perception by the customer. Buy one toilet paper versus another; yes, there are some differences, but that's not really how you make money. You don't make money by making the better toilet paper; you make it by having the brand with the name recognition, where customers trust the quality.
Speaker 2:If you put AI solutions out there that have a blunder, like when McDonald's put out its automated ordering scheme and there were tons of videos posted online: yeah, I'll take one fry. Okay, adding 11 fries. Oh no, remove that, only one. Okay, we have 12 fries. It kept going on like that. That was a hit to their brand that made them look foolish. Is that going to destroy McDonald's? No, probably not. But those things do have impacts. They have real financial impact that's even harder to measure in the long term, and that can cause you maybe even more problems than a government fine.
Speaker 1:I bet. So you're in a very hot space at the moment. A third of companies, evidently, are budgeting $1 million or more annually on AI governance software, so congratulations on being in a hot market segment. Maybe talk to us about your space in general, where it's headed, obviously up, and how you see yourselves competing versus other players out there.
Speaker 2:Well, one of our biggest strengths is that we're always staying ahead. To us now, a straight RAG-architecture foundational model is kind of yesterday's news; everybody's doing it. Yesterday's news doesn't mean it's not relevant to an organization, and we still have a great focus there. But obviously all the buzz right now is around agentic AI and what that means, because it has even larger implications. You're literally giving autonomy to these foundational models to make decisions: actually changing data within your database, sending emails, any of these kinds of things. So what we've been working on specifically is how to bring agentic AI solutions into the model lifecycle process, and we've done a bunch of work there. We actually have webinars on it on our website. You can start to manage these, and things like MCP tools; everybody's talking about those now.
Speaker 2:Anthropic's MCP, the Model Context Protocol, is one of the standards, for lack of a better term, for how LLMs communicate with actual things that can effect change or read specific data.
Speaker 2:So we've incorporated agentic tools right into our solution, so you can actually use agentic AI to do model governance itself. But more importantly, we also have ways of handling questions like: how would you approve an MCP tool for use, know which use cases are allowed to use it, and what filters can you put in place, like PII protection? Maybe I know that this particular model has access to PII data, so I want to block any PII data coming out of it. We've been building things in that space, knowing that agentic AI solutions are going to literally change the landscape, and that as companies put these in place, they need to know what they're doing, where they're used, and how to protect against one deciding all of a sudden to sell all of its stock or something. Truly, with autonomy comes greater danger.
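The kind of PII output filter mentioned here can be sketched simply. This is an illustrative toy, not ModelOp's implementation; the regex patterns and the `guarded_tool` wrapper are hypothetical, and a production system would use a vetted PII detector rather than two regexes:

```python
import re

# Hypothetical patterns for illustration only.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN shape
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def redact_pii(text: str) -> str:
    # Replace anything matching a known PII pattern before it leaves the boundary.
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_tool(tool_fn):
    """Wrap an MCP-style tool so its output is scrubbed before the agent sees it."""
    def wrapper(*args, **kwargs):
        return redact_pii(tool_fn(*args, **kwargs))
    return wrapper

@guarded_tool
def lookup_patient(record_id: str) -> str:
    # Stand-in for a tool with access to sensitive data.
    return f"Record {record_id}: contact jane.doe@example.com, SSN 123-45-6789"

print(lookup_patient("42"))
# Record 42: contact [REDACTED], SSN [REDACTED]
```

The design point is that the filter sits on the tool boundary, so every use case that is approved to call the tool inherits the protection, rather than each agent having to remember to apply it.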
Speaker 1:Got it, for sure, including personal danger. Getting into these robo-taxis all the time now, I'm always scratching my head: how's this going to go? But I digress. So when you talk to a customer who's maybe a little skeptical or uncertain about where to start, how to prioritize this journey, what's your advice to them?
Speaker 2:Well, what we suggest is a thing we call minimal viable governance, which is kind of like: here's the minimum you need to do. If you try to start out doing it all, you're never going to get there. It's just like coding: we use more of an iterative approach now, as opposed to the waterfall design approach of the past. Same thing with governance: get started, start small, and get the things in place that you absolutely need. That will vary by your business; if you're a financial institution, your minimum levels need to be a little higher than if you're just protecting your brand. Get the processes in place, understand what's there, and treat it iteratively, continuing to grow and add. That's where our solution, providing a configurable approach to the model lifecycle that doesn't require writing code or changing the product itself, really enables that iterative process, and even versions the process to carry forward, so you can evolve. If a new regulation comes up tomorrow, you can plug it in, or adapt as your business changes. But the important thing is: don't wait. The problem is only going to get worse.
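The "start minimal, then version the process" idea can be illustrated with a toy configuration. The check names and the `LIFECYCLE_V1`/`LIFECYCLE_V2` structures are hypothetical, not ModelOp's configuration format; the point is that adding a new regulation is a config change, not a code change:

```python
# Version 1: the minimum controls to get started.
LIFECYCLE_V1 = {
    "version": 1,
    "required_checks": ["owner_assigned", "use_case_documented", "risk_reviewed"],
}

# Version 2: a new (hypothetical) regulatory check is added by extending config.
LIFECYCLE_V2 = {
    "version": 2,
    "required_checks": LIFECYCLE_V1["required_checks"] + ["eu_ai_act_classification"],
}

def ready_for_production(model_checks: set, lifecycle: dict) -> list:
    """Return the checks still missing under the given lifecycle version."""
    return [c for c in lifecycle["required_checks"] if c not in model_checks]

# A model that passed v1's review still shows a gap once v2 takes effect.
missing = ready_for_production({"owner_assigned", "risk_reviewed"}, LIFECYCLE_V2)
print(missing)  # ['use_case_documented', 'eu_ai_act_classification']
```

Because each lifecycle version is data rather than code, older models can be re-evaluated against the newer version to find exactly which controls they still need.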
Speaker 2:Get started now, because getting any process in place means there's a process, there are things identified, and you know what's going on, versus burying your head in the sand and waiting until it bites you, because it eventually will, and it's going to be harder to unravel later, when there's a whole bunch of them out there, than if you get started now, when, as we see, there are only so many in production. Per this report, you have that big backfill sitting behind. You want the process in place to help that backfill, not only to make sure it's governed and doing the right things, but also to help identify those efforts and push them out into production, so you don't lose the maybe-good efforts that are buried within your company.
Speaker 1:Great advice. So we're halfway through the year, I can hardly believe it, but what are you up to in the second half? Any travel or events beyond the summer? What's on your radar?
Speaker 2:Well, we're attending a whole bunch of different things. I'll be honest, I don't know all of them, because I don't go to all of them. We just recently went to the CHI conference out at Stanford and participated in that, talking specifically about AI usage within the healthcare industry. We've got CDAO conferences we've been going to, we're constantly doing webcasts, and we do our own webinars; I just presented one last week on agentic AI and what we're doing there. A lot is still virtual nowadays, but we're doing some in-person events as well, with conferences that are going on and starting to pick up. Really, we're participating in a lot of different things. Again, this is an iterative space, so things come up and you never know where you're going to go next week.
Speaker 1:Exactly. Well, speaking of virtual, I'm admiring your real background, not virtual background. What's up for the summer in Colorado? Any hiking or fishing or hunting or birdwatching? What do you get up to there in the woods?
Speaker 2:Well, yeah, we get to watch the wildlife right from the deck: moose and elk and marmots and everything come right up to it. We have 14 acres here with a lot of beetle kill, so I'm always working on cleaning that up, unfortunately. I don't need a gym membership; I get my exercise moving trees around and things like that. But yeah, we get out into the woods and hike, take our UTVs around, and just enjoy nature where we can.
Speaker 1:Fantastic. Well, thanks for joining and taking some time away from all that, and congratulations on all the success. Onwards and upwards.
Speaker 2:Yeah, absolutely, and thank you for taking the time to talk with me today. I really appreciate it.
Speaker 1:Thanks everyone for listening and watching, and check out our new TV show, TechImpact TV, now on Bloomberg and Fox Business. All right, take care. Thanks, Jim.
Speaker 2:Thank you.