
The Disruptor Podcast
"The Disruptor Series," your blueprint for groundbreaking innovation, started as a periodic segment of the Apex Podcast. This is not your standard conversation around Design Thinking or Product Market Fit—this is the series that dares to go beyond conventional wisdom, confronting the status quo and exposing the raw power of disruptive thinking.
Our journey begins with intensely provocative dialogues that set the stage for the unexpected. With a focus on Experience Disruptors, Product Market Fit, and a host of other captivating topics, we bring you face-to-face with the ideas that are flipping the script on traditional buying and selling experiences.
But we don't stop at ideas; we dive into their real-world applications. "The Disruptor" brings you an unfiltered look into the lives and minds of those who are either being disrupted, creating disruption, or strategically navigating through disruption.
Our guests range from industry veterans to daring newcomers, all willing to share their experiences in shifting the paradigms that define their stakeholders' experiences.
If you're tired of business as usual and eager to question the preconceived notions that hold back innovation, "The Disruptor Series" is your ticket to a transformative journey. Tune in, disrupt yourself, and become an agent of change in an ever-evolving landscape.
...
The Disruptor Podcast
Mainframe Modernization: From Black Box to Clarity
Welcome to another edition of The Disruptor Podcast!
This is part two of our two-part series.
In this episode, Zengines CEO Caitlyn Truong returns to reveal how her team simplifies the high-stakes challenge of mainframe modernization.
She explains the common pitfalls that derail modernization efforts and how Zengines helps enterprises de-risk legacy systems with smart data lineage.
Key Topics Discussed:
- Why legacy mainframes still dominate large enterprises
- Common missteps that derail modernization efforts
- How Zengines turns “black box” systems into readable data
- Real case studies from banking and industrial sectors
- Key strategies to reduce project risk and cost overruns
Key Insights / Quotes
- “Enterprise data lineage is a beast—start small with a clear use case.”
- “We help teams reverse-engineer the mainframe into business requirements.”
- “Every test case break used to spawn three more questions. Now they get answers in minutes.”
- “It’s not about replacing mainframes—it’s about de-risking what’s already working.”
Zengines' Disruptive Way:
- Providing business analysts with AI tools that automate data mapping and changes
- Minimizing the need for large teams, speeding up data conversion
- Prioritizing key tasks earlier in the process
- Lowering costs and boosting productivity
To learn more about Caitlyn and Zengines, visit their website (Zengines.AI) and connect with Caitlyn on LinkedIn.
In Part 1 of our conversation, Caitlyn and I discuss how Zengines disrupts organizations by automating data conversion. Listen here: Disruption in Data: Why Digital Migrations Fail and How to Succeed.
Comments or Questions? Send us a text
***
Engage, Share, and Connect!
Spread the Word: Valuable insights are best when shared. If you find this episode insightful, share it with peers who may benefit from it.
Your Feedback Matters: How did this episode resonate with you? Share your thoughts, insights, or questions. Your engagement enriches our community.
Stay Updated: Don’t miss out on further insights.
Subscribe: You can listen to our podcast, read our blog posts on Medium, Substack, and LinkedIn, and watch our YouTube channel.
Collaborate with The Disruptor and connect with John Kundtz.
Got a disruptive story to share? We’re scouting for remarkable podcast guests. Nominate a Disruptor.
Thank you for being an integral part of our journey.
Together, let’s redefine the status quo!
Tips are welcomed and appreciated, too!
Disruption and Data: Transforming, Migrating, and Modernizing Mainframes. Hi everyone, I'm your host, John Kundtz, and welcome to another edition of the Disruptor Podcast. For those new to our show, we started this series back in December 2022 as a periodic segment of the Apex Podcast. Our vision was to go beyond conventional wisdom by confronting the status quo and exposing the raw power of disruptive thinking. Today, in part two of our show, we welcome back Caitlyn Truong, CEO of Zengines, as we explore how her company disrupts organizations by automating end-to-end data conversions. We will discuss valuable advice on the pitfalls and mistakes that many executives make when attempting digital transformation. Welcome back.
Speaker 2:Thank you so much, John.
Speaker 1:Before the break, we were discussing the broader challenges of data migration. In part two of the show, we will focus on a particularly complex area: modernizing mainframe systems. Caitlyn, let's dive into the unique challenges and solutions you bring to the table when doing mainframe modernization. Many people believe the mainframe is a thing of the past. Having spent 38 years at IBM, I heard that for probably 25 of those years. But, as you know, mainframes persist in many organizations, especially large enterprises. Some of the largest banks and institutions around the world still heavily rely on the mainframe, and there are specific challenges and opportunities there. I'd like you to dive into what those might be in the context of mainframe modernization in a digital transformation.
Speaker 2:Thanks, john. I fully agree. Of all my clients and customers from Vengeance and all my consulting years, I know that customers have always indicated that mainframes are highly performant, highly reliable. So I don't think that's the main driver of some of the opportunities associated with mainframe modernization. Really, I think it's about de-risking that legacy technology right the fact that it was written decades ago and some of those SMEs aren't available. Whether it is because there are newer code bases so you don't have as many people who know the languages with which were written in some of those mainframes long ago. Whether it's because what was written back then and again remember some of these 60 years ago just weren't as many standards, so it's maybe more difficult to maintain some of those code and programs from way back when. It's really about de-risking the business, more so than it's a problematic area.
Speaker 2:I think the product and technology are fantastic. It's just a need for the business to ensure they are resilient and staying relevant. Organizations, all of the ones you're talking about, across all industries, are looking to explore mainframe modernization, and it takes different flavors. In some cases they are keeping the same functionality of the programs and just rewriting them in different languages, and in other cases they are decommissioning or sunsetting those old systems and moving to vended cloud-based software solutions. Part of that, if they're moving to SaaS, is that they feel they can benefit from constantly receiving software updates and be absolved of having to maintain the code and programs themselves. Those are key drivers, and I'm seeing a lot of that in banks, insurance companies, industrial companies, et cetera.
Speaker 1:It's amazing how much COBOL code is still out there. The people who built those systems aren't around anymore. I was working with a bank in South Africa. They had built a Tier 4 data center and were trying to move their mainframe-based core banking system out of a Tier 3 data center. They spent a lot of money on this really fancy new data center, but they couldn't find the time to do the data migration. They couldn't take the systems down; the amount of time it would take to analyze the data was not within their downtime window. They made multiple attempts to move but couldn't do it, because they were running the core banking system. I know this is a major problem and concern with any sort of mainframe modernization initiative, whether you're trying to move to a different location or to refactor some of these old COBOL programs and applications that were written 40 years ago. So how do you specifically address these complexities in mainframe data migration and modernization?
Speaker 2:I was looking at a stat: there are more lines of COBOL code in existence than of any other programming language. It is still true; there is a voluminous amount of COBOL code out there. There's an interesting case study. A bank was looking to modernize and move to a new core and payments system, and they were converting off the mainframe. Unfortunately, this story is a painful one, because the mainframe was a black box to them. They tried to migrate some of the accounts over to this new vended software, and when they tried to go live, they actually locked customers out of their bank accounts for over a week. This was very bad. It was so terrible they had to shut down the program, and now they are managing two different systems: the old mainframe core is still live, and a number of accounts are now on the new system. As you can imagine, the business case was not realized at all, and some of those executives unfortunately lost their jobs. That's similar to something we ran into when we were working with a customer. We were brought in to help them with a conversion, and while we were there we realized: oh my goodness, this mainframe is a black box to the business. How will they convert if they don't know how to answer whether or not a rule is right or wrong?
Speaker 2:At Zengines we said, because of our core product and our ability to ingest certain things and activate metadata to be smart, let us see if we can give this a shot. So we did, and we evolved. We created a second product, a second capability: our mainframe data lineage product. It is specific to helping teams get through that modernization effort. It's not a refactoring tool, it's not a porting tool. We ingest the schema, we ingest the metadata, we ingest the code, the full code base, including the JCLs. We use all of that to build a data lineage information base, but we are really a research tool. We allow business analysts to understand business calculation logic and the various conditions that cause the system to use this calc statement versus that calc statement. It empowers the business to reverse-engineer what's inside that black box into business requirements so they can execute a conversion.
Speaker 2:The business analysts have their sets of test cases that they want to run through the systems, because they want to turn the new system live. It's a new vended product, and in a test case they will run a scenario where they run a transaction through the old system and a transaction through the new system, the old system being the mainframe. If the output is the same, all good; they will use the out-of-the-box business rule from the new software product. Unfortunately, they run into a lot of breaks, where the values differ: the old system, the legacy mainframe, says the output is negative five, and the new system says the output is five.
Speaker 2:At this point in time, the business asks: who is right? Is it that we have a rule, some variation or some specific business rule, that we need to make sure we account for, or is the software system wrong in how it computes this calculation? They need to understand what was in that black box to make a decision. Today that process looks like the business writing a question, sending it to the mainframe SMEs, and then waiting for a response. To ascertain the answer, that mainframe SME is navigating and reading through COBOL code, traversing module after module, looking at go-tos, lookups, and reference calls. It takes some time to come back. When the business gets that answer, they say, okay, that's helpful, but now you've spawned three more questions. That's a painful process for the business, who feel a bit held hostage to the fact that they can't get answers themselves. So this is what Zengines did: democratizing that knowledge base and making it available for the business to get answers so they can successfully complete that conversion.
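[Editor's sketch] The module-by-module traversal Caitlyn describes is, at its core, a walk over a call graph: from one module, follow every call, go-to, and lookup reference to find everything a change might touch. A minimal Python illustration of that idea, with entirely hypothetical module names and no claim about how Zengines actually implements its lineage product:

```python
from collections import deque

# Toy call graph: which modules each COBOL module references
# (CALL statements, go-tos, lookups). All names are hypothetical.
CALL_GRAPH = {
    "ACCT-POST": ["INT-CALC", "FEE-CALC"],
    "INT-CALC": ["RATE-LOOKUP"],
    "FEE-CALC": ["RATE-LOOKUP", "GL-WRITE"],
    "RATE-LOOKUP": [],
    "GL-WRITE": [],
}

def impact_set(start: str) -> set:
    """Breadth-first walk from one module to every module it can reach,
    i.e. everything a change to `start` might touch."""
    seen, queue = {start}, deque([start])
    while queue:
        module = queue.popleft()
        for callee in CALL_GRAPH.get(module, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

print(sorted(impact_set("ACCT-POST")))
```

Doing this by hand, module after module, is exactly the slow SME research loop described above; precomputing the graph is what lets a tool answer the same question in minutes.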
Speaker 1:That's incredible. I think we might have worked with, or know, the same customer; I've heard that story a bunch of times myself. The more I talk with you, the more I wish you guys had been around back in 2015 when we were trying to do all this work. But obviously times have changed, and you've leveraged your knowledge and technologies to help with these projects, which I think is super cool. What are some of the critical success factors for organizations undertaking mainframe modernization projects? What would you recommend to an executive who says: I want to move my core banking systems off the mainframe and onto a cloud-based system?
Speaker 2:Yeah. So I would say it is always important, upfront, to make sure there is an understanding of what you're converting. Make sure there's an understanding of the right use case, the right scope, and so on. If it's going to touch something, you have to understand the full lineage of the mainframe itself. How do you understand all of the interdependencies, and what are the ramifications? If you're only looking to decommission one area, there are probably threads to other modules and programs. So first, make sure there's clarity and understand what's important to realize from an ROI perspective. Second, have a guiding strategy and design. These mainframes are workhorses; you have a lot of functionality in them.
Speaker 2:I've never seen it be a big bang; in my personal experience, I've seen phased approaches. When it's phased, it's a bit of: well, I'm going to modernize this piece, but that means I need to feed data out of the new system back to the old system to keep it current, because some aspects are still live on the mainframe. Have the right plan and design, and be agile about it. And then I say: employ a tool. You do need data lineage, and something we learned going through this experience with our customers is that it's not lineage at face value that the business needs. If you're going to go through the conversion, you need the tool to help you answer questions, and I think that's really what the sticking point was, because it was not just, oh, I need the traversal pattern, the interdependencies.
Speaker 2:That wasn't the only thing the business needed. The business and engineering teams, when they had to do the research themselves, found that the first question you ask isn't always the only question, or the answer you needed. You start with a question that might cause you to unpack other questions, and then you might need to step back and look at a visual to understand the full map of the potential impact area. So, have a tool that allows you to answer the questions associated with going through changes. How can you be sure that you can now answer what was built, why it was built, how it was built, and what those rules were, so that you can decommission portions of the program safely? Start small, start with something you know you can convert and move, and celebrate those successes, because it is possible and we're seeing great results.
Speaker 2:Something I would emphasize, John, is that the team attempted to rewrite everything that was in the mainframe. There was just no way the organization could finish that job in our lifetime. It's all of these modules, all these references to thousands and thousands, tens of thousands, of other modules. So then it was: okay, we'll only research the things that break, those reconciliation issues. And even that took two months for one inquiry to come back from the team, probably because they also have to manage and run the mainframe itself. So what we saw was that they wanted a way to get answers quickly and reduce that dependency on SMEs. It became something that allows them to understand, to shed light into, the black box.
Speaker 1:It makes a ton of sense to me, the use case: using AI for the data lineage investigation or analysis. As you said, it takes forever for a human to do it, because they have their day jobs and it's a lot of work. If you can leverage an AI large language model to help you figure out the lineage, you shrink the time needed to figure that out, which gives you the time to actually do the data migration, data conversion, or mainframe modernization initiative. Am I saying that correctly?
Speaker 2:Yeah, the data lineage tool becomes an asset that allows the team to answer questions, because I found that they had a variety of different questions. You just don't know what might cause the break in your test cases, and obviously you need to get through all the test cases to feel comfortable that you're ready to go live. That was the point I was trying to emphasize around the value. Also, John, one of the things I'm very excited about is the results. The business team presents their dashboard at the steering committee and program review every couple of weeks. They are not yet in the testing-near-go-live phase, but they showed that their breaks, those times when a test case did not get to the same result between the old system and the new system, are down to 0.4%. Every time they run into a break, they have a tool and the ability to answer why that break is there. Then the business can make a determination: do I want to do something about that break?
Speaker 2:Do I want to make sure that the new system accounts for that rule? Maybe it's a specific treatment: whenever this transaction shows up, I need you to do this very specifically, and the software company might not have that, quote unquote, customized rule. The business can now find it in Zengines. They can say: yes, it must be true; therefore, software company, please add this to the list of things you need to include before we go live. So there's the ability to catch that upfront, as opposed to not knowing it, waiting until you're in testing, go-live, or parallel run, and then discovering these things. That's why you encounter project overruns, cost overruns, delays, et cetera: because you just couldn't answer it upfront. What I like to emphasize here is that the value is in de-risking the program overall. You're able to say: I can answer it, I know why it happened, I can account for it, and I should not encounter that break during parallel run.
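[Editor's sketch] The old-system-versus-new-system test runs described above amount to a parallel-run reconciliation: feed both systems the same cases and flag every output mismatch as a break to be researched. A toy Python illustration of that pattern, with made-up rules and stand-in functions rather than any real banking logic or Zengines product behavior:

```python
# Toy parallel-run reconciliation: run the same test cases through the
# legacy (mainframe) system and the new system, and flag "breaks"
# where the outputs differ. Both systems are hypothetical stand-ins.

def legacy_system(txn: dict) -> float:
    # Hypothetical legacy rule: reversal transactions post as negative.
    return -txn["amount"] if txn.get("reversal") else txn["amount"]

def new_system(txn: dict) -> float:
    # Hypothetical out-of-the-box rule: always posts the raw amount.
    return txn["amount"]

def reconcile(test_cases: list) -> tuple:
    """Return the breaking cases and the overall break rate."""
    breaks = [t for t in test_cases if legacy_system(t) != new_system(t)]
    return breaks, len(breaks) / len(test_cases)

cases = [
    {"amount": 5.0, "reversal": True},   # legacy: -5.0, new: 5.0 -> break
    {"amount": 5.0},
    {"amount": 12.5},
    {"amount": 3.0, "reversal": False},
]
breaks, rate = reconcile(cases)
print(f"{len(breaks)} break(s), rate {rate:.0%}")
```

Each flagged case is a rule the business must either accept from the new product or escalate to the vendor as a required customization, which is exactly the shift-left decision discussed next.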
Speaker 1:If they have the data upfront, they can make a business evaluation, a business decision, on the risk, right? Is this risk worth paying money and time to fix, or can we live with it? That's as opposed to finding out on the back end when something actually breaks. That's the beauty of giving somebody a platform, the ability to shift this decision-making to the left. Let the business unit say: okay, I can live with this, or I can't live with it.
Speaker 2:Or: I need you, software vendor, to account for it. This is where engineers are most valuable, which is: let's build. Their time should be spent on the build portion, as opposed to helping with, hey, write the right query so that someone can tell you whether or not you wrote it right and whether or not it looks the way someone wants it to look. So, tying back to our first conversation: when you do find that break, you can say, I need the product, the new system, to be able to handle this. Here it is, and now I know the business rule. Please make sure you add this to the product.
Speaker 1:Excellent. One last question: what's the future of the mainframe in the enterprise, and how do you guys fit into that future?
Speaker 2:When I say de-risk, I think that we not only de-risk a modernization program, John, but we also de-risk business operations. Because, as I said from the beginning, it's not that the technology doesn't work; it's that the resources and the ability to understand what was there were becoming more difficult and scarce. So if you have a product and a capability to back into that, to get the knowledge to actually understand it, then if your technology is working, great. It's really about educating a new workforce. If organizations are looking to keep the mainframe and they just want to make sure they have a workforce that can manage it, you have the ability to understand, to shine a light into that black box with Zengines, and then use that as a way to teach new engineers. The requirements are there.
Speaker 2:That is de-risking business operations overall while still saying the mainframe has its place; it's more about what the business wants from an operational management perspective for that type of technology. So that's one. And then the second thing is, there are also businesses that have made the decision to move forward, and again, I don't think it's about the technology. I think it's a: hey, SaaS products mean I don't need to worry about keeping the code base current, and as new capabilities come out of that software, I can adopt them. So that's a benefit to a lot of organizations looking to adopt SaaS-based solutions.
Speaker 2:If you are modernizing, well, then the future of how Zengines can help is that we're looking to make sure we're continuously adding more and more capabilities to support all the variations inside mainframes: the fact that there are mainframes and mid-ranges, and the different code bases from COBOL to RPG and PL/I and Assembler, et cetera. We just want to make sure we can support organizations and be really good at saying: we provide you the ability to shine a light into that black box. We want to be the best at mainframe data lineage.
Speaker 1:Wow, this is great stuff. Thanks for sharing all that. I appreciate all your insights and your experiences, and it's fun talking about the good old days of the mainframe as well. So my last question, really: is there anything I haven't asked you that you'd like to share before we wrap up the show?
Speaker 2:On the mainframe data lineage side, the most important thing is to remember that enterprise data lineage is a beast of an effort: not just connecting the mainframe, but trying to connect the mainframe to all the other applications you have in-house. It's hard, and the question is: for what value and for what use case? Make sure you start by understanding why you're asking for data lineage, and have that use case be small enough that you can be successful. The hardest area of enterprise data lineage is the black boxes, i.e., the legacy tech. And the most important thing I would say is to get started. Organizations and teams can sit, imagine, design, and talk about it for a long time. You'll learn and get more value just by doing. You might make mistakes because you didn't plan for all of it, but at least you've already started, instead of waiting to discover all of it through planning.
Speaker 1:Great advice, I agree. How to eat the elephant: one bite at a time. How can people learn more about you and your services? What are your socials? What's the best way?
Speaker 2:Thank you, John. Zengines' website is www.zengines.ai, with a Z in front of the word engines, plural. And, by the way, we had .ai from the very beginning, before a lot of these other companies. I'm on LinkedIn, so folks can connect with me or send me a direct message, and our site has information on how to reach out to our team, get a demo, and work with us on a POC or a trial, because I know that in this world everyone likes to see how it works, and we would be delighted to have people give it a shot. When it comes to data conversions, it doesn't need to take an army; everyone can do it themselves. It's for the business user. So if we start to change the mindset and say, hey, this can be self-service, that's what I'm hoping to move to, so that everyone can change faster.
Speaker 1:Excellent, this has been a great interview, a great show. You're so energetic, you're so knowledgeable. It's really a pleasure to have you as a guest on the Disruptor. So, everybody listening, please don't forget to connect with Caitlyn on LinkedIn and check out their website. And Caitlyn, before I wrap it up, I always give the guest the last word, and then we'll say goodbye.
Speaker 2:Thanks, John. First, I wanted to say thank you. We had so much fun, and your listeners should know just what an interesting and knowledgeable person you are. I had so much fun learning from you in our conversations. And also, it's all about gratitude, right? I appreciate the opportunity to spend time with you, that you've made time and space for me and for Zengines, and for this experience of building a company that hopefully delivers real value to people. I just believe data is solvable. So let's help people focus on other things when they want to change, and let's make data not something they have to worry about.
Speaker 1:All righty, everybody. Thanks, Caitlyn, for being on our show. I'm John Kundtz, and thanks for joining this edition of the Disruptor Podcast. Have a great day, guys. Take care. Thanks.