The Amplitude of Tech
Welcome to The Amplitude of Tech podcast, produced by Amplix, a leading technology advisory firm, where we bring the voices of technology thought leaders, subject matter experts, and enterprise IT decision makers to you to talk about today’s transformative technology and how it can create opportunities for increased success.
Enterprise AI Without the Hype: A Field Guide from Pega Systems CIO David Vidoni
Everyone is talking about AI, but far fewer organizations are actually scaling it. David Vidoni, CIO at Pega Systems, joins Shawn Cordner of Amplix to cut through the noise and get practical.
The conversation covers where enterprise leaders should be focusing their automation efforts across finance, HR, and legal functions; how to approach the pilot-to-scale gap and what the widely cited MIT finding that 95% of AI pilots fail to produce ROI actually tells us; and why the phased, iterative approach to modernization consistently beats the high-risk big bang. David also shares how enterprises can build the governance and quality controls needed to trust AI at scale, what explainability and audit trails look like in regulated industries, how to vet AI vendors for supply chain risk, and why change management remains the variable that determines whether any of it sticks.
If your organization is past the curiosity stage and ready to build something that lasts, this episode is the starting point.
What You'll Learn:
- Where AI automation is generating real ROI beyond the obvious entry points, including finance, HR, and legal functions
- How to structure a POC that is designed to scale from day one and avoid the pilot-to-scale trap
- Why phased modernization consistently outperforms big bang migrations and how to build the case internally
- What governance, audit trails, and quality controls need to look like before you trust AI to run autonomously
- How to manage AI model drift in regulated industries subject to SOX compliance and HIPAA requirements
- What ISO 42001 certification means and why it should factor into your vendor selection process
- How to evaluate supply chain risk when AI is embedded in the platforms your business depends on
- Why change management determines whether AI adoption succeeds or stalls regardless of the technology
Hey everyone, thanks for joining the Amplitude of Tech podcast. I'm Shawn Cordner, Chief Marketing Officer of Amplix. Today we had David Vidoni on the podcast. He is CIO of Pega Systems. What's interesting about David is that he's not only a CIO practitioner who is implementing AI in his business, but he's also overseeing a company that is implementing AI in its products. So you get to hear both sides of this conversation: how to implement AI in your business, but also how to be a responsible vendor of AI. This was a good one. I hope you enjoy it. All right, David Vidoni, thank you for joining the podcast. How are you today? Good. How are you doing today, Shawn? I'm doing great. Could you do a quick little introduction and tell everyone who you are and where you work? Sure.
SPEAKER_01: So, David Vidoni. I am CIO at Pega Systems. Pega is an enterprise software company based out of Waltham, Massachusetts. We focus on workflow, decisioning, and AI.
SPEAKER_00: Okay, so we don't usually like to do commercials on this podcast. I try to steer people away from talking too much about what their company does, but I feel like it's going to be relevant to the rest of the conversation. So could you maybe give us the quick elevator pitch on what that actually means, what you guys actually do? It's okay if it's a little bit of a taller building and takes you longer than 30 seconds, but let's just get into what Pega Systems actually does.
SPEAKER_01: Pega Systems is a 43-year-old enterprise software organization rooted in workflow and automation. We focus on the enterprise. We have done extensive work in financial services, healthcare, and insurance, across about eight verticals. What we do is address challenges for organizations that need to get work done, and get it done efficiently, through a combination of people, automation, and integrating systems, bringing about the agility that organizations need to survive and thrive in today's world.
SPEAKER_00: Okay, I feel like your marketing department is going to give you an A-plus for delivering the scripted pitch there. But let's dig into what workflow automation means. Can you give me some real-life examples? And since you're a 43-year-old software company, let's start with before AI was a thing, and then now that AI is a thing. Sure.
SPEAKER_01: So workflow, very simply, I'll put it in terms of case management. You and I have called companies, we've had issues with our accounts, we need to have things looked at. Maybe we need refunds, or we're disputing charges on a credit card. When you call in, they'll create a case for you that will kick off an investigation. It might be something they can resolve right away. It might involve routing it to other people within the organization. At some point, they come to a conclusion: either you have a good outcome and they refund your charges, or it's something that they deny. But either way, it's work that needs to be tracked throughout the organization: who has it, how long it's sitting with them. There are many situations like this within companies, where they need to make sure they're efficiently getting their work done, knowing where bottlenecks are, and having opportunities to optimize them. So Pega addresses that space by providing a very robust set of tools to build systems and workflows that are tailored to the way businesses function, specializing for what they do well and what differentiates them from their competition, and not boxing them into an off-the-shelf solution they have to adapt to. That's the foundation of what we do. Where AI comes in, and how things are evolving, is that we're moving towards a more automated world. We've been working for many years on traditional automation: robotic process automation for systems that cannot be integrated with, other automation bringing systems together, or just straight-through processing and decisioning. AI takes that to another level, because for steps that did require some analysis, some decisioning and review that is not structured and not necessarily repeatable, AI fits in nicely to bridge that gap and automate those steps.
And so we're weaving those capabilities into our products. It allows organizations, the ones that need to continue running their systems and their businesses tomorrow and next year and the year after, to keep the processes they depend on. They don't want to throw all of those processes out; they need them to run their organizations, and they need consistency and auditability in how they do it. They don't want it run in a way where you get one outcome one time and another outcome the next, and you have no idea why. So introducing AI at points in the process that deliver repeatability and additional automation gives them a leg up and additional efficiency, so they can continue to innovate and focus their people on the areas that are truly adding value, getting them away from the tasks that are repetitive and tedious.
SPEAKER_00: Got it. And I feel like this is where we're going to start getting into the buzzwords, right? So you're using agentic AI in your software platform, is that right?
SPEAKER_01: We are using agentic AI. We are also using AI-assisted activities, guiding individuals through discrete work tasks. It might be suggesting the next appropriate step to take, it might be suggesting knowledge articles they could give to their customers, or helping them do research so they're informed to make the right decision. So it really spans both agentic as well as AI-assisted.
SPEAKER_00: One of the reasons I was excited to have you on the podcast is because you're a CIO who I think is experiencing AI from both directions, right? You're a consumer of AI, someone who's implementing AI in your business and in your platform, but you're also running a business that goes to market with an AI product, or at least AI integrated into the product. So I'm interested in exploring both angles of this. Before we get too far into that, though, I wanted to understand: when you're sitting down with a client or prospective client and looking at the landscape of their workflows and their business today, how are you helping them make decisions about where to start and what to approach next in terms of automation? How do you find the low-hanging fruit, or the most accretive initiatives that are going to have the most benefit to the business?
SPEAKER_01: It really starts with identifying where the bottlenecks are: the manually intensive steps, the time-consuming aspects of doing work. Maybe there are areas where they can't respond fast enough, or people just prefer to interact with an automated bot or to self-service. They want that instant response. Something like that would lend itself very well to an agent to assist and guide them. And it's not a consistent process every time; everyone's going to come in with a different challenge and different context. So having the flexibility to adapt to that and personalize those experiences is very helpful, and it also increases the level of service they can provide to their customers.
SPEAKER_00: That makes sense. We spend a lot of time on this podcast, and really in all the content we produce here at Amplix, talking about AI in the contact center and how AI agents can deflect calls and help people get to answers faster. That is fertile ground for us to continue discussing, but I'd like to get into some areas we don't talk about as commonly. Aside from the contact center, what are some other areas of the business that business and technology leaders should be looking at for these automation opportunities? For me, accounting probably comes to mind, like AP and AR processing. I know marketing has a ton of use cases; I could talk all day about what we would love to do here at Amplix. So where would you look in the business, aside from the contact center?
SPEAKER_01: I'll span the different functions of the organization. We'll start with finance, since you brought that up: accounting, accounts payable, accounts receivable, even the process of someone requesting something to be purchased. Navigating some of these legacy systems, they start with information they need to collect before they even ask you what you want, like a cost center. People have no idea what these things are; they just want a piece of software, or they want to buy a service. It's an excellent place to put AI in front of the process to guide them through it, so they can say what they want, the system can fill that information in in the background, and then it can be submitted to the other steps in the process to get the right outcome. Or take financial teams at public companies doing quarterly close activities. There are very repetitive tasks that do require assessing a transaction in the context of what's going on; maybe there are other factors that need to be looked at. It's something you couldn't solve with traditional automation alone. You need to make some judgment calls, maybe look at some thresholds. Those are excellent opportunities for a discrete task to automate with an agent, bringing about better productivity and reallocating those folks to do something else. In the HR space, there are painful processes that happen throughout the year: annual reviews, goal setting. And every year, I'm asking, what did I do this year? You have to spend some time pulling that together.
If you had an agent or AI-assisted coach that could go and look at everything you've created, everything you sent out in release notes, announcements, and emails, or even go against the systems you were using to track delivery, it could cut that process down significantly and get you a good working draft to submit. Moving over into the legal area, there's contract redlining, and there's understanding complex contracts: being able to query them, ask specific questions, and understand them. Also, back on the finance side, there's analyzing data and understanding trends beyond what you would normally do in Power BI or traditional reports. You can ask questions of the data without having to build a report for every way you want to look at it. So those are a few of the options and opportunities that AI and agents lend themselves to. In terms of running things in the background, there are many things you would like to know but wouldn't, unless you went and looked at a report or checked in on them. You could schedule those check-ins based on thresholds and have it alert you when something is relevant. There are so many things across the organization, from a business standpoint, that could be improved significantly. And they allow you to do things consistently. It's not relying on someone remembering to do it, or having time to do it. These are things that could be scheduled and run on a set interval if certain conditions apply, and then brought to your attention so you could take action on them.
SPEAKER_00: Yeah. You mentioned performance reviews. I'm about a month behind on completing my performance reviews right now. So, Melanie from HR, if you're listening to this podcast, I apologize. I'll get to it eventually, as soon as David sets me up with some automated workflows to get it done for me. But I want to focus on what you just said about scheduling the AI to look into specific areas of the business, then create a report or a dashboard and notify you at regular intervals. I love that idea: it's a window into the business, right? Being able to not just see the data, but engage with the data in ways you normally couldn't with a spreadsheet, or at least it would be a lot harder to do. And I feel like that's a use case I don't hear much about, maybe because it's more complex, in the sense that you need to have your data ready, you probably have to do a lot of integrations to make sure the data flows from place to place, and then you have to be able to interpret that data. So talk me through: where does that use case fall in the order of priorities for most businesses? Should it be higher, and how can they get there?
SPEAKER_01: It should be higher if there are costs associated with losing a customer. There's a lot of energy in getting a customer and keeping them. There are things you can look at, and those things could be dots across the organization. Maybe, to use a software example, they're not using as much software as they did in the past. They're not engaging as much in some of the events we're putting on. Projects and other activity are going down. Looking at any of those data points in isolation may not give you the complete picture. Building a complex report that joins all of that together is very tedious. It's time consuming and prone to breakage, because if systems change and data changes, that report will constantly struggle to stay current. Using AI, and being able to ask questions across that information, you're not relying on rigid connections and joins to get your insights. You can quickly navigate across those different data sets and applications, and have the agent help make an assessment of the likelihood that the customer will renew, or whether they're a flight risk. Those are the types of things that are extremely valuable to put in play. It's all the benefits of integrating with systems without all the rigidity of APIs and joins. It can interpret things just like you and I can, provided it's given the right instructions.
SPEAKER_00: Yeah, data is a funny thing, right? On one hand, you think data is data and truth is truth, but data can also be manipulated, or it can be subjective in certain ways. You and I could look at the same report or the same set of data and draw different conclusions or interpretations of what it means. So I imagine that if you're doing something like this, you need to run some sort of regression analysis to see what those things meant historically and what the outcomes would have been, and that figures into it. But I also wonder: is there some danger of bias showing up in using AI this way? In terms of the person coding it or the person training it, and the way they would look at data and interpret signals from it, can that come through in the AI model itself?
SPEAKER_01: It could. We can also run into issues with hallucinations. What I find works very well is when you're doing discrete things. If you're asking it to do 20 things, I think the quality of the responses you get will break down. Just like if I were to ask you to go and do 20 things right now, all together, you might forget a few. I know I would. AI is, in some respects, no different. So if you can do one at a time, pull them back, and then assemble those, you have a better chance of getting a result that's high quality and consistent. The other thing that is really important, as agents are rolled out and AI is used more prevalently, is the ability to test the quality of those results. And testing has to be done with different approaches and different tools. With old-school testing, you had standard inputs and standard outputs; you would build an automated test, it would check them, and everything was great. With AI, you put in a request with some inputs, and the first time you get one type of result, while the second time it might be slightly different. You have to be able to flex with those variations, but still understand whether it's an accurate result, and do it with a level of confidence, a level of confidence scoring. So any critical agent or AI process that you're inserting into an overall process should have a level of quality checking, because your inputs might change, and the models you're using will definitely change over time. You want to make sure things don't slowly drift while you're unaware of issues with the results coming back.
SPEAKER_00: That's interesting. So how do you control, or at least watch for, the drift? I mean, you must have some sort of benchmark that you can go back and reference, right?
SPEAKER_01: You'd have to apply different types of testing tools. Maybe you have a known test data set that covers a variety of different types of cases. You'd feed that through the process and, using a scoring-based approach, assess how close the results are to what you expected. That will give you a level of confidence that whatever you're running is maintaining the same level of quality in its responses.
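The scoring-based drift check described here could be sketched roughly as follows. This is an illustrative assumption, not a Pega feature: the `model` callable, the string-similarity scorer, and the 0.85 threshold are all stand-ins; a real system might score with embeddings or an LLM grader instead.

```python
from difflib import SequenceMatcher

def score_response(actual: str, expected: str) -> float:
    # Crude similarity in [0, 1] between the model's answer and the reference.
    return SequenceMatcher(None, actual.lower(), expected.lower()).ratio()

def check_for_drift(model, test_cases, threshold=0.85):
    # Run a known test set through the model and collect per-case scores.
    # `model` is any callable prompt -> response; in practice it would wrap
    # a deployed agent or LLM endpoint.
    scored = [(prompt, score_response(model(prompt), expected))
              for prompt, expected in test_cases]
    average = sum(score for _, score in scored) / len(scored)
    failures = [prompt for prompt, score in scored if score < threshold]
    return average, failures

# Stand-in "model" that returns canned answers, for demonstration only.
canned = {"What is the refund window?": "Refunds are issued within 30 days."}
model = lambda prompt: canned.get(prompt, "")

average, failures = check_for_drift(
    model,
    [("What is the refund window?", "Refunds are issued within 30 days.")])
print(average, failures)  # a perfect match scores 1.0 with no failures
```

Scheduling this against a fixed test set after every model or prompt change is what turns "we hope it still works" into the confidence score David mentions.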
SPEAKER_00: You've used the word confidence twice now in this conversation. And earlier, you said something to the effect of AI using judgment; judgment was the word you used in the context of AI. So I'm wondering, can AI truly have judgment? And maybe the easier question to answer is: when is it appropriate to have agentic AI running in a fully autonomous way versus having a human in the loop?
SPEAKER_01: I think when there are situations where it's running reliably over a period of time, you've assessed the results, you don't see any surprises, and it's consistent. And again, that means it's handling small, discrete things, and you can evaluate each of those decision points. If you're not seeing any anomalies or drift over time, and you have validation processes in place to check that, it is probably okay to let it go and just be alerted to exceptions that get highlighted. However, there are many things where you still want people in the loop to evaluate. I would imagine no one would be pleased if they were applying for a position at a company and all decisions were made by AI. Bad things could happen on so many levels; we would never want that. You'd want someone evaluating those recommendations before any decision was made, so they have an opportunity to look at the applications and other factors. There are so many situations like that in a business process where you'd want humans in the loop. The decisions AI makes are only based on the information available to it. So if you have questionable data quality, or it doesn't have all of the information that exists, you absolutely want someone in the loop to make sure nothing was missed. Maybe there's context outside of that process that needed to be factored in before a final decision was made.
SPEAKER_00: Yeah, it's interesting. You mentioned hiring as one of those processes, right? That it might not be a good fit. So Amazon, I'm pretty sure, ran into a problem where they put AI in front of the hiring process. It would scan the resumes coming in and make decisions based on the resume and the position being applied for, and you had to get through that filter to reach a human being and actually get into the process, which I think happens all the time. What they ended up finding was that the AI was trained on historical data about who was in those positions. And of course, historically, those management positions were held largely by white men. So it started blocking women from getting through into the hiring process. They caught on to it pretty quickly and obviously made that adjustment. But that's an example of why you wouldn't want AI in that process. Conversely, there's a famous study in social science; the Freakonomics guys talk about this. I don't know if you're familiar with their work, but there was a study on, I think, people applying or trying out to be in an orchestra of some kind. They ran auditions where the evaluators could see the candidates, and blind auditions where they couldn't. They realized that people were making bad decisions because their cognitive biases got in the way when they could see and talk to the candidates. When they purely heard the music, they made better decisions about who to bring into the orchestra. So that's an argument for why AI could maybe be better in the hiring process. And there are other studies as well that say people just don't do that well at hiring. It's almost like throwing a dart would be better, because you get a false sense of confidence.
Like, oh, I'm good at reading people, and you're drawing on experience and things like that, right? Whereas AI can be a little more objective with the data, I think.
SPEAKER_01: Yeah. But I guess my philosophy is, anytime you take your eye off anything, problems can happen. So to the extent that you can use capabilities like this to enhance the data and the insights you have, I think you will get to better decisions. It doesn't have to be an either-or thing.
SPEAKER_00: Sure. Multiple inputs are always better, right? So, getting back to trust for a second: you talked about having the model running for a period of time, analyzing it, and making sure the results are predictable. What else goes into trust? How do you even get to that point? There must be implications around governance, observability, and explainability, right? So what are the other components of trust in AI?
SPEAKER_01: It's just that. An audit trail of what decisions were made and why. Being able to monitor to make sure all those critical components are operating at all times, so you don't have, say, three or four things running and then the second or third one fails to run for some reason, maybe because the model was not available or there was a glitch; that could easily continue to fall through and get processed in a way you don't expect. So having that transparency, not having to guess what happened when something was processed, is really important, particularly when you're dealing with regulated industries, industries subject to SOX compliance and regular auditing. You can't just have something process arbitrarily, where this time it comes out one way and next time the other way, and, as I said at the beginning, companies want to provide consistent service, so they need to ensure they have a consistent set of business policies being adhered to and executed. By having that transparency, seeing it in the audit, and understanding exactly what ran and why, I think organizations get the confidence they need to know it's working right.
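As a rough illustration of the audit trail being described, each automated decision could emit a structured, append-only record. The field names here are hypothetical, not a Pega schema or a compliance-mandated format; a real system would align them with its own audit and retention requirements.

```python
import json
from datetime import datetime, timezone

def audit_record(case_id, step, model_version, inputs, decision, confidence):
    # One append-only entry per automated decision, capturing what ran,
    # on which inputs, with which model version, and with what outcome,
    # so an auditor can later reconstruct why a result came out as it did.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "step": step,
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "confidence": confidence,
    }

entry = audit_record("CASE-1042", "refund_review", "model-2024-06",
                     {"amount": 42.50, "reason": "duplicate charge"},
                     "approve", 0.93)
print(json.dumps(entry))  # one log line per decision, written to durable storage
```

Recording the model version alongside each decision is what makes the drift question answerable later: when results change, you can see exactly which model produced which outcomes.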
SPEAKER_00: Yeah, I want to dive into the implications for regulated industries, but before I do, I just want to put a bow on this part of the conversation. Is there anything else you would say makes agentic AI enterprise-ready, or, maybe the other way around, that makes sure an enterprise is ready for agentic AI?
SPEAKER_01: On the second part, I think there's a lot of work to be done: change management to make sure an enterprise is ready for AI. Introducing agents into a process, having them make some of the decisions and do some of the analysis, requires a level of comfort and trust, being comfortable stepping away from that particular task. That might be unsettling for some, who might see it as a first step toward having their roles fully automated away. But on the flip side, there are so many things people do today, day in, day out, in their roles that are repetitive and tedious, that they don't enjoy doing, and because they have to do them, they don't have time for other things they should be doing. So I do think it opens up an opportunity to let them take on more interesting, value-added activities. That needs to be communicated and understood within the organization. And then there's also understanding how to work with an agent as opposed to another person. If there's an issue, what do you do? Do you go to the agent's manager? How do you interact with an agent if you have a question, or if there's an anomaly? Is there support for that? You need to think about things like that, because it's not always happy-path processing. Strange things happen all the time, and you need to be able to handle anything that might come up, and have a way to do that consistently.
SPEAKER_00: Yeah, you said change management, and we had Gary Sorrentino, the global CIO of Zoom, on the podcast a few weeks ago. He was talking about human change management as opposed to technology change management, right? And I think you're saying both; you need to really think through both. The human part of it is so important. In my career, I've been involved with a number of large software projects and rollouts as a business stakeholder. And the ones that fail: I worked for a company that invested mid-seven figures, in early-2000s money, in an Oracle system. They ended up having to rip it out about a year later because people just didn't use it. The salespeople wouldn't use the CRM, and the finance people never got comfortable with the accounting module. And that was because it was poorly communicated; no one really understood the why. There's a guy in the marketing space, Simon Sinek, who always asks, what's your why? The importance of understanding where a business is going, and the why behind the decisions it's making, is, I think, underrated. A lot of business leaders really don't think about it; it's like, hey, we're making this change and you're going to do it because you work for us, right? But people need to buy in, because you're never going to get adoption if they don't buy in and get aligned on what the goals are.
SPEAKER_01: Yeah. And that goes beyond just AI. Any technology change, any process change: if you don't have buy-in, you run the risk of it being a failure, or of not getting close to the ROI you hoped for.
SPEAKER_00: Yeah, absolutely. So let's dig into regulated industries. What are some of the unique challenges of implementing AI, specifically agentic AI, into workflows in heavily regulated industries?
SPEAKER_01: You will have some situations where it's just flat-out prohibited, which makes it very difficult; that's a full stop. For other organizations, you need full explainability on what was done: what data was used in the decisioning, what the decisions were, and under what circumstances. Being able to go back in time for any and all decisions made on work being processed is very important. Making sure, as I mentioned before, that as models evolve and change, you don't have drift, you don't have differing results happening without any awareness that they're causing a negative impact on the quality of the work. What controls do you have to ensure results stay consistent and within the acceptable level of quality? Those are some of the other things that need to be factored in. How are changes introduced? How is everything that goes into this process managed? That needs to be accounted for, so you don't have other factors impacting the output and its quality. All of those changes need to be tracked, with appropriate evidence that they've been tested, leaving no opportunity for unexpected or unpleasant changes to be introduced, whether security threats or unintentional errors that made their way in.
SPEAKER_00: Yeah, I'm wondering whether having AI in a healthcare environment, for example, increases the potential for data leakage, which puts you at higher risk of violating HIPAA regulations. And if that's the case, I also wonder whether there are implications on the cyber insurance side.
SPEAKER_01: There are use cases that are incredible for healthcare: assisting someone in radiology in looking at images, or helping doctors summarize the notes from a patient visit so they can send them out. Those are things that can be reviewed by a human to make sure the output is accurate and expected. In terms of security, I sincerely hope, and would not expect, that they would be putting this into public ChatGPT. You have to make sure the models and the chats are fully secured and not available for sharing, with traditional safeguards so that no one outside can access them without permission; in other words, operating in a zero-trust model. There are also some emerging standards coming out to certify AI models, which come along with additional insurance safeguards that could be layered on top, similar to what's already available today for cybersecurity insurance, but more specific to AI.
SPEAKER_00: The federal government has largely been absent in terms of setting any kind of regulation around AI, but there are frameworks, right? NIST being one of them, or CMMC. I'm wondering, are the regulatory bodies themselves up to speed on AI, and are they updating their requirements and regulations as well?
SPEAKER_01: I think everyone is just trying to stay on top of all of the changes. Are there times when things lag behind? I'm sure. I don't know how they wouldn't; things are developing by the day, and during the day, so I'm sure there's a lag. But I think what will probably ensure we have these standards is companies themselves. Companies want to make sure customers trust them and that they're doing things with integrity and explainability. So having these types of certifications, being able to point to ISO 42001 or other standards, shows that they put energy into this; it's a very rigorous certification. Adhering to NIST or other frameworks will be a differentiator for them, and not having that will slow down their ability to sell and be competitive in the marketplace.
SPEAKER_00: So let's dive into you as an AI supplier, or having AI in your product, for just a second. Are there any unique challenges to introducing AI into a product, or going to market with an AI-based product, that you've had to consider above and beyond the kind of strategy, governance, and security measures you would have put in place as a user of AI?
SPEAKER_01: For us, beyond the basics of wanting it certified like our other products, it's making sure it's integrated into the things we already have. We're not building parallel security models; we're not having to take copies of data and put them outside the system because the architecture can't handle it. We need it to be native and integrated into the solutions, because if it sits on the outside, every time you want to change something there are additional hoops to jump through. And I'm sure there are several other vendors out there, I'm not going to name names, whose architectures don't lend themselves well to adopting AI easily without going through magic tricks, maybe additional layers, to provide a level of agentic capability. Over the long haul, I think that will hamper innovation and speed to market if you don't have something natively designed to embrace and work with AI.
SPEAKER_00: Go ahead and name some names, because I'm waiting for my viral moment here to make this podcast big. Let's start up a controversy. No, I'm just kidding. But we've had a few AI experts on who have made it a point to talk about the risk of AI in the supply chain. So now that you're part of the supply chain, how do you look at supply chain risk? And do you do anything differently in terms of vetting your vendors and understanding how they're implementing AI in their systems?
SPEAKER_01: We do. We make sure they have the requisite certifications, understand what frameworks they're working against, and what evidence they have. Honestly, I don't know if we've fully internalized the full impact of what might be coming. If you build agentic capabilities into a system, how do we know there's redundancy in that? If a model is not available for some reason, maybe there's an outage, how does the system behave? Has that been properly accounted for in the processing? What if it's down completely? Can the system continue to work, or are you dead in the water? It's like one of the public cloud providers: if your application is all on the cloud and the cloud is down and you don't have failover, you're dead. The same might be said for agents. What happens if those go down and you've downsized, or maybe you never upsized in the first place and were fully reliant on them? How do you continue to operate? What happens if some vulnerability is injected? Do you have the right checks in place to make sure code is being tested for vulnerabilities afterwards, or are you assuming there's a step in the agent that does that for you and everything's good? There are a lot of assumptions people make when they build systems, and we need to go through a deep analysis of what-if scenarios to make sure it's actually behaving as expected. No different than a tabletop exercise or deep testing under adverse conditions, the same should be done with AI and agents that are inserted into systems, so people understand how things are going to behave when bad things happen.
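[Editor's note: the outage what-if David raises can be illustrated with a small fallback sketch. Everything here is hypothetical, including the function names, the deterministic rule, and the threshold; it shows one way an agentic step could degrade to a simple rule or a human queue instead of leaving the business dead in the water.]

```python
# Illustrative graceful degradation for an agentic processing step:
# if the model dependency is unavailable, fall back to a deterministic
# rule for low-risk work and route everything else to human review.
class ModelUnavailableError(Exception):
    """Raised when the model endpoint cannot be reached."""


def classify_with_model(item: dict) -> str:
    # Placeholder for a real model call; here it simulates an outage.
    raise ModelUnavailableError("model endpoint unreachable")


def classify_with_fallback(item: dict, review_queue: list) -> str:
    try:
        return classify_with_model(item)
    except ModelUnavailableError:
        # Deterministic rule keeps low-risk work flowing during the outage...
        if item.get("amount", 0) < 100:
            return "auto-approve"
        # ...and everything else degrades to human review, not failure.
        review_queue.append(item)
        return "queued-for-human"
```

The point of the sketch is the tabletop question, not the threshold: every automated step should have a defined answer to "what happens when the model is down?"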
SPEAKER_00: You had mentioned that you get your products certified. What is that certification, and should the people listening to this look for that certification in their supply chain?
SPEAKER_01: I mentioned ISO 42001 earlier, which certifies AI, and there's a whole series of controls associated with it. It involves bringing in an outside auditor, doing a thorough review, and understanding how all of those controls are met, and making sure it's not just met the one time you're tested, but that there are ongoing processes to ensure continued adherence.
SPEAKER_00: So we talked about workflows. If I could zoom out for a second, let's talk about modernization: modernization of the tech stack, but also leveraging technology to modernize the way you're actually doing business, the way the business functions. Clearly that's an ongoing thing people are grappling with. How do you escape tech debt? How do you adopt newer technologies without disrupting the business? But what I'm most interested in is, how do you approach this modernization without creating more complexity in the business?
SPEAKER_01: There are many, many tools out there. We offer some, and there are others too for different technologies, where you can analyze a code base, maybe a mainframe COBOL system in an insurance company or somewhere else that's been around for 30, 40 or more years. It's kind of like asbestos: nobody wants to mess with it because it'll create all sorts of problems. However, it's a limiting technology. The availability of coders to maintain it is scarce, so it limits your ability to innovate. There are tools that can analyze that code and bring it into a different model that describes what the application is doing, what fields are being processed, the business rules, all of that. Those can be imported into next-generation technologies. We offer some, but there are other options too. That gives a company the opportunity to not spend months and months doing deep analysis on what was built and why, and there's all sorts of tech debt in there too. You can do that with AI tools: understand it, describe it, and even reimagine it. Maybe some of the steps that were there existed because that was the best you could do at the time; now it might be something an agent could handle, or something more conducive to straight-through processing and integration. So there are opportunities to reimagine these things in ways that simply were not possible before. Today there are so many opportunities for organizations to move beyond those systems, step into the future, continue to innovate and evolve, and make AI and agents part of that.
SPEAKER_00: How would you suggest the technology leaders listening to this go about getting budget for these kinds of initiatives? What's the process? What are the magic words they need to say to the finance department or the board to get behind an initiative of that scale, in terms of cost, but also the time and resources that will be dedicated to large-scale change?
SPEAKER_01: What I'm hearing from folks more and more is that the appetite to do these massive projects, hope for the best, and trust that good things will come out on the other side without doing any initial pilots is next to nothing. It's very low cost to do a POC: take a module that's maybe complicated, maybe has dead code, things you just weren't able to do anything with before, extract it, transform it, and bring it into the modern era. Being able to show that gives more confidence and better insight into the level of effort involved in taking it to the next level. The other thing is that you don't have to take these as big bangs. You can peel off portions of a system and modernize them in phases. The nice thing about that is it brings down the risk and is easier to focus on, and you learn things in each phase, so you get better at it in subsequent ones. So there are ways to do this without totally taking the business offline for the conversion, and those phases can be done much more rapidly. There are different ways to attack it, but the one I would not advocate for is the big bang, where everything's taken offline for one big migration, if there are alternatives that let you eat the elephant one bite at a time.
SPEAKER_00: Sure. Your approach sounds completely logical and intuitive, but on the other side of it is the famous MIT study everyone's arguing about: 95% of AI pilots fail to produce an ROI. There are a few ways to push back on that. Number one, the methodology of the study is often questioned. Number two, was it really a failure, or was it a learning process? And number three, was it ever intended to generate an ROI to begin with? Those three things might be related, but what do you make of the pilot-to-scale gap? And regardless of the MIT study, how would you tell someone to go into a pilot so that, if they intend it to scale, it has a good chance of scaling?
SPEAKER_01: I think any of these early pilots are nothing but experiments. You have to understand how it works, what the boundaries are, and, as mentioned before, how much you're asking it to do in a single pass. There's definitely higher success doing it in smaller pieces and then chaining them together. Understanding what works well and what doesn't is what you're doing in your early pilots. You might have some successes here and there, but more often than not, some things are going to work and some are not, and you're going to walk away with low confidence that this can work consistently at scale. So you need to refine the right scope of AI for whatever you're attacking, and then execute the rest through other means. It doesn't have to be all AI; it could be workflow or some standard process executed in succession. Not everything has to be handled agentically. There are so many valid ways of tackling problems, so you need to understand the boundaries.
SPEAKER_00: You called it an experiment, and I want to dig into that word for a second. Do you think they should treat it like a scientific experiment? In the sense that they go in with a hypothesis, design the experiment to eliminate variables, predict what the outcome is going to be, and then analyze the variance between the expected and actual outcomes. Partly to quantify the value of the pilot so they can leverage it for future predictions and pilots, but also to go back to the board and show good stewardship of the money they're investing in experimentation.
SPEAKER_01: I'd never thought about it that way, but as you were describing it, that process felt very over-engineered, complicated. Sometimes there's just value in jumping in and seeing where it goes. I was talking to someone yesterday who said, you should start with the end goal you want, then work out the first thing that would need to come together to get that result, and keep stepping back from the end goal to understand all the fundamental pieces needed to bring about the solution. I found that intriguing. Maybe I'm doing that unconsciously, but I thought it was an interesting way to tackle some problems like this. But I think it needs to be less structured. Giving it guidelines, what you understand the constraints to be, what it should be looking at, is probably a faster way to innovate. And the way things are going, change is just going to move faster and faster; I don't know if people can take the time to go through such a scientific method for each and every thing they're trying to do.
SPEAKER_00: Sure. It was da Vinci who said simplicity is the ultimate sophistication, right? Sometimes keeping it simple is the mark of doing things the right way. I want to take a quick left turn and talk about AI in the dev cycle and how vibe coding may be changing it. Are you using vibe coding, or AI generally, in your development cycles?
SPEAKER_01: We are using AI in our development cycles, and we've introduced some vibe-coding capabilities to our experiences for building applications, imagining them, and ideating on them, and we'll have increasing sets of functionality that do that. I think it's absolutely necessary. It's nice because it allows you to quickly accelerate on discrete pieces of a system at a given time, whether it's a screen, a process, what have you. Doing it in pieces allows you to assess what was done for each thing. Otherwise, you send a bunch of instructions and a whole bunch of things happen in the background; you don't know what happened or how, whether it was right or wrong. It may be apparent, it may not. It leaves some mystery and some doubt, and over time, if everything was done like that, you might have no idea what to do when there's a problem. Things can get away from you. So I think there's absolute value in doing this in a controlled fashion, iterating on things; that's where we'll get efficiency at scale. As it becomes more of a norm, people get comfortable, they understand it, and it's repeatable, and you'll be able to chain more and more things together. But I would start with the atomic things, or a few things, and build out from there, as opposed to a single instruction that does a whole bunch of things where you have no idea what happened.
SPEAKER_00: Any other tactical advice for people who might be considering introducing that into their development cycles?
SPEAKER_01: I would say, obviously, play around with it; experiment, without the hypothesis. I would also suggest getting some outside guidance on the right and wrong ways to do it, because it's more than just jumping in and vibe coding. There's a whole change in the development process and the release process, and there are probably roles that will emerge out of this type of development that don't exist today, or don't exist at your organization today. Having someone come in with an outside lens to give insight into how all of those things come together and how you produce software is super important, because otherwise you will learn the hard way, it will take much longer, and you probably won't be happy with the ROI.
SPEAKER_00: I've seen some stats, and I can't recall them off the top of my head, that when developers themselves were asked, they said AI was speeding up their development cycles by, say, 20 or 30 percent. But when they were actually tested, it was taking them 20 to 30 percent more time to do the same amount of work. Some of that might just be learning curve: it's still clunky, and they don't have their 10,000 hours of vibe coding yet. But I wonder, are you seeing an appreciable impact on productivity, or on the time it takes to get a release out the door?
SPEAKER_01: We are seeing situations where people are absolutely getting value. They're doing amazing things so much faster; things that would have taken months are taking a few days. That is absolutely happening; I've seen it, and I've seen the results. To your general comment, when people say, oh yeah, I'm getting 20 or 30 percent, it's anecdotal. I ask, how are you measuring that? Well, it's my gut feel. I think very soon we'll have to have more objective ways to measure it, and it can't be lines of code generated, because lines of code is just that: more stuff, more stuff that can go wrong. It has no direct correlation to functionality and will actually make things worse over time. So it's going to be really important that any stats are backed up by credible data, not gut feel, and factor in the actual features being developed, the defects being created, and the overall time it takes to bring everything together and get it out the door.
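[Editor's note: the measurement point here, outcomes rather than lines of code, can be made concrete with a toy metric. The structure and the defect weighting below are entirely illustrative, not a standard formula; the idea is simply to combine features shipped, defects introduced, and cycle time into something comparable across releases.]

```python
from dataclasses import dataclass


# Illustrative delivery metrics for one release cycle: the three
# factors David names, rather than raw lines of code.
@dataclass
class ReleaseStats:
    features_shipped: int
    defects_created: int
    cycle_time_days: float


def throughput_score(s: ReleaseStats) -> float:
    """Features delivered per day, discounted by defects (toy weighting)."""
    defect_penalty = 1.0 + 0.25 * s.defects_created
    return s.features_shipped / (s.cycle_time_days * defect_penalty)


# Hypothetical before/after comparison of AI-assisted development.
before = ReleaseStats(features_shipped=8, defects_created=4, cycle_time_days=20)
after = ReleaseStats(features_shipped=12, defects_created=3, cycle_time_days=15)
```

A real measurement program would track these per team over time and pair them with quality gates; the point is that each factor is observable data, not gut feel.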
SPEAKER_00: I don't want to ask this about Pega Systems specifically, but just your viewpoint on how this is going to impact the industry broadly. You mentioned it's going to create new positions, but obviously one of the things people are talking about in the media is how this is going to be an apocalypse for coders: that no one's going to have a job at the end of this, and AI is going to do all the coding. So what do you think is the reality of what this looks like in three years? In AI terms, that's probably a lifetime, and it's impossible to predict, but let's try. Are we going to see the same number of people doing incrementally or exponentially more work, maybe with some new or shifted positions? Or is there going to be a net loss?
SPEAKER_01: I don't know. If I could predict the future, I'd be betting on the stock market and winning right now. But the way I see things going is, yes, roles will definitely change. What you're doing today is going to be very different from what you're doing in six to twelve months. You'll be interacting with vibe-coding experiences, with agents that are doing discrete activities. Do I think productivity is going to go up? Yes. Do I think overall output will go up? Yes. Like I said before, other roles will be needed; because you're creating more things, there's more pressure on downstream activities for validation and quality, and you still need someone to look at the actual usability of the products being created. Those things are going to be happening a lot more rapidly. I sincerely hope the smart companies will be the ones focused on improving productivity and output, not doing this simply as a cost-cutting measure, because if you're doing it for that purpose, you're probably focused on the wrong things. Those are my thoughts in general. I do think there are absolute needs for many roles, and new roles, like I said, will come out of this. Just like when the internet came around, people's roles changed, a lot of new ones were created, and things continued to evolve and innovate. I don't really see that changing this time.
SPEAKER_00: Yeah. Let's wrap up with a big question for the technology leaders listening. Do you have any advice, tips, or guidance on how they can go from their current state to more modern, more efficient back-end processes and workflows? How can they streamline their business?
SPEAKER_01: I would say start by immersing yourself in these new technologies. They are going to be the way things get done, now and in the future, and you need to constantly be looking at all of the developments and trends. Whether it's joining a tech group and listening to speakers, connecting with podcasts, or working with a consultant, it's not something you do one weekend, check the box, and you're set. Information becomes stale very quickly, and capabilities change constantly. Something that does not work now could be completely different when you come back in a month or two, or something brand new emerges. Your ability to learn and adapt is going to be under extreme pressure, much faster than with anything else we've seen, and you just need to embrace it and lean into it. The reason is that these are incredible capabilities that were unthinkable just a few years ago, and it's the worst right now it will ever be; it's only going to get better. There's so much opportunity for things you want to do, or couldn't do before, and these technologies are going to enable you to do them in a much better way, much faster. It's really upon all leaders to understand this, embrace it, and make sure the systems, the data, and the staff they have today are prepared for this change, because otherwise it's going to be quite painful to make that pivot.
SPEAKER_00: And given the advice you just gave: I believe you're president of the Boston chapter of SIM, is that right? (That's correct, yes.) Do you want to give a little plug for the local SIM chapters for the people listening?
SPEAKER_01: Yes, I am the president of SIM Boston, and I've been with the organization for 10 years now. It is an amazing group of tech professionals. We're about 400 people strong and growing, with amazing events throughout the year; I think we hold about 60. So there are great opportunities to network, to learn, and also to give back, whether that's participating in committees or getting involved with outreach. One of our pillars is doing outreach and funding outreach partners who can help provide a robust tech talent pipeline through alternative means. We do many, many great things, so it's a great place to build your external network and grow. And if you want to learn a thing or two about AI, we have many sessions for that as well. We'd love to have you; just go to simboston.org to learn more, and if you want to join, there's a link for membership.
SPEAKER_00: And you and a representative from your chapter are going to be at our tech leader conference, April 29th and 30th on Cape Cod, and you're also going to be speaking on one of our panels. Aside from that, where else can people find you?
SPEAKER_01: You can find me on LinkedIn. I attend many of the SIM events around the Boston area, as well as some other CIO-focused groups. I'd love to connect, and hopefully we'll see you on the Cape in about a month.
SPEAKER_00: Awesome. David Vidoni, thank you so much for your time and expertise. Appreciate it.