The Signal Room | AI in Healthcare & Ethical AI

Enterprise AI Journey: Agentic AI, Generative AI and Data Foundations in Healthcare | Gary Cao

Chris Hutchins | Healthcare AI Strategy, Readiness & Governance Season 1 Episode 19


What does it actually mean when an organization says it is on an AI journey? In most cases, according to Gary Cao, it means a vague intention and a handful of disconnected projects, without a holistic framework or a roadmap for the next three to five years. As a chief data, analytics, and AI officer with 30 years of experience across eight companies spanning healthcare, financial services, and multiple industries, Gary has built and led enterprise AI capabilities from the ground up. He brings a perspective shaped not by theory but by accountability.


In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with Gary to unpack the four pillars that determine whether an AI initiative delivers value or stalls: business strategy, analytics and innovation, data management, and technology infrastructure. Gary explains why technology gets the most visibility and budget, data management stays below the surface, analytics culture remains underappreciated, and business strategy is the hardest conversation to have. He also draws a critical distinction between generative AI and traditional data analytics, arguing that without the right data foundation, generative AI produces helpful but ultimately superficial outputs that will not fundamentally change decision processes or workflows.


The conversation moves into probabilistic versus deterministic thinking and why executives must become comfortable making decisions in ranges rather than exact answers. Gary introduces a three-part ROI scorecard that balances direct revenue impact, cost avoidance, and qualitative benefits that are hard to quantify but strategically essential. He also addresses the philosophical tension around workforce upskilling: if AI reduces headcount, why would employees welcome its adoption?


If you lead enterprise data and analytics functions, advise boards on AI investment, or manage the translation between business strategy and technical execution, this episode maps the journey from crawl to walk to run.


Key topics covered in this episode:


(00:00) Gary on the CFO regret: we should have invested in data analytics years ago
(00:24) Introduction: Gary Cao's career across healthcare, financial services, and enterprise AI
(01:46) What organizations really mean when they say they're on an AI journey
(02:39) Four pillars of AI maturity: business strategy, analytics, data management, technology
(05:01) Enterprise AI framework from 30 years across eight companies
(07:13) The hidden cost beneath technology contracts: getting data fit for use
(09:33) Three layers of AI: traditional analytics, NLP and image processing, generative AI
(12:41) The tension between enterprise systems and probabilistic AI models
(13:13) Healthcare versus financial services: different tolerance for accuracy
(18:07) Does generative AI need different governance than traditional analytics?
(20:39) How executives should think about risk tolerance in probabilistic decision making
(24:57) Historical bias in data and why governance must create space for judgment
(26:25) Workforce upskilling and the philosophical tension around AI adoption


About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.


Website: https://www.hutchinsdatastrategy.com 

LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/ 

YouTube: https://www.youtube.com/@ChrisHutchinsAi

Book Chris to speak:  https://www.chrisjhutchins.com

Gary Cao:

Some CFOs are saying to data analytics leaders, "Oh, sorry, I think in the past few years you pushed me to invest in data analytics. I always pushed back and said, okay, what's the ROI?" Now we have an even higher level of pressure from all directions. So we should have, could have invested in data analytics two or three years earlier. It's still not too late, but we should have been, could have been much more proactive.

Christopher Hutchins:

Today on The Signal Room, I'm joined by Gary Cao, a longtime friend of mine and executive leader who's been navigating AI adoption from the inside of an enterprise. Gary brings a perspective shaped not by theory but by accountability, aligning AI ambition with operational realities, managing risk while driving innovation, and helping organizations move from experimentation to enterprise scale. As a chief data, analytics, and AI officer, and a serial founder of multiple internal startups focused on data and analytics across industries, Gary will share his experience and lessons with a focus on AI strategic roadmaps and how we can potentially integrate traditional analytical AI with the emerging and rapidly evolving generative AI capabilities in the enterprise context. Gary, welcome to The Signal Room.

Gary Cao:

Thank you, Chris. Happy to be here.

Christopher Hutchins:

Well, I am happy to be having this kind of a conversation. Let a few folks in on it. We've had some nice conversations previously, and I always learn something when we talk. So thank you again for joining me. I want to start with some things that are really near and dear to your heart. I know you've been talking about these things a bit lately, and with all the hype and the buzz going around, I thought this is a really great topic. So when you're talking to an organization and they're saying they're on an AI journey, what does that usually mean in practice?

Gary Cao:

Generally, most companies, when they say they are on an AI journey, they mean they have a general, very vague intention and also some type of tool evaluation, projects, use cases in mind. But they don't have a holistic systematic way of designing a framework or a roadmap for the next three to five years. It's not just next month, next quarter, even next year. It's really, I would say, minimum of one to three years, if not three to five years.

Christopher Hutchins:

Wow. So how do they go about really assessing where they are then? It seems like it's not an orchestrated strategy by what you're describing. How would a leader be assessing where they fall on the AI maturity scale, do you think?

Gary Cao:

Many companies are still, I would say, in a traditional journey of data analytics, maybe in a new phase of using generative AI. But in many people's minds, those two things are disjointed; they are not properly, organically connected. When I talk to mid-sized companies, CEOs, and board members, I ask them questions:

What do you want to see in three to five years in terms of the future state of your company, in terms of your strategic priorities? From there, I want to see your strategic plan, and then from there translate that into things that we can plan, combining four things together. One is technology, which is the foundation. Without that, you cannot do anything. Second is data, this is the raw material. The third thing is algorithm, patterns, analytics. The fourth thing is the business adoption and execution of the recommendation from analytics, pattern detection, and the algorithm side. So that way we can see the real outcome or results of delivering the real value. Those four things come together, creating a virtuous cycle to truly make this capability development journey sustainable and truly delivering value that's expected by the board as well as the CEO and the owners and investors.

Christopher Hutchins:

Right. You mentioned something very briefly, but it's always a challenge. Data is always one of the biggest barriers that I've run into, and I'm sure you've seen this before too. People have a different understanding of the quality and completeness of their data than what reality actually is. That kind of leads to another area I want to touch on. Maybe you can pick up on the data component of it because it's a pretty significant lift. There's a gap usually between the ambition and the execution. The elements that you described probably fit into that gap in various ways. Talk a little bit about some of the gaps that you see and how do you approach that.

Gary Cao:

That's great. I recently talked with a group of practitioners from large companies in technology, data, analytics, and digital transformation in the Cleveland area. We met at Case Western Reserve University's Weatherhead School of Management and talked about the enterprise AI journey. Consistent with my professional framework, built over the past 30 years at eight companies across industries, I identified four things as food for thought. Number one is business strategy; without that, nothing is really meaningful. Number two is innovation and diversity of ideas: analytics, pattern detection, and algorithm-driven recommendations and decision making. Number three is data management. And number four is computing: technology, tools, infrastructure. Of those four, technology is the most visible and the most discussed, generally top of people's minds. Data management, data quality, data lineage, and data knowledge sit below the surface, sometimes out of sight, out of mind, and remain undervalued and underdiscussed. Analytics is less discussed and still underappreciated. And business strategy is the most difficult conversation, because people ask, how do I connect the business strategy with technology, problems, use cases, and data management? Data management in particular requires much longer patience and deliberate effort. All four are critical to make this journey less frustrating, more successful, and more productive.

Christopher Hutchins:

Right. So do you think there's a disparity between the investment in the data foundation versus what the ambition is? I've seen it more often than not. There's a hidden cost tucked underneath the technology contracts, and it's underappreciated in terms of the level of effort when it comes to getting the data fit for use. What's your experience on that?

Gary Cao:

Yeah, data is not as tangible as technology or tools. It's sometimes not even visible, but that's the raw material. The disparity here is that people try to invest for the long term. They are generally very knowledgeable about data, about technology. But the gap is not about investment in data management, but rather connecting data and technology with business problems, and then focusing on use cases and short-term value creation. And then having that connection, that traction. That's the very interesting integration of all four things together. Many companies are very strong in technology and sometimes also strong in data. Business is foundational. But the connection point, the weakest spot, is the analytics part, which is using technology and data to focus on a problem, solve it, and deliver value by supporting and servicing the business decision makers. That connection generally is not easily established, maintained, or supported.

Christopher Hutchins:

Yeah, that's an interesting thing. I've been playing around with my own models, I'm sure you have too. But the quality of the inputs, the prompts, they have everything to do with what kind of output you're going to generate. There's a lot of variability as well. Maybe if you don't mind, just talk a little bit about how you see generative AI fitting in at an enterprise level, as part of an analytics foundation. I don't know that we're really clear on where the guardrails are, but I know you've spent some time on that. I'd love to hear your thoughts.

Gary Cao:

It's confusing to many people. Everybody talks about AI, but people have different definitions, and in common discussion or casual dialogue people mention AI loosely; there are even terms like AI slop and AI washing. In my mind, because I've been in the data science and machine learning space for 30 years, my entire career, I believe AI's definition is pretty broad. There are three layers. The first layer is traditional data analytics and machine learning: predictive modeling, statistics, mathematics, economics. That includes dashboards and business intelligence. The second layer is already here, but people don't necessarily recognize it:

natural language processing, visual and audio, and image processing. The third layer, which is more visible to users and consumers, is generative AI: large language models, agentic systems, and all the other dominant topics right now, chips, video generation, those kinds of things. Those things coming together, there is a clear line of evolution over the past 20, 30, 40 years. If people don't understand that evolution, they believe AI is something that magically emerged in the past few years. That misses the entire history. If you look at the history, you will demystify AI. Is it magic? No, it's not. These algorithms are statistics, economics, mathematics. So when board members, CEOs, and CFOs say, "I want you to use AI or generative AI," what they really mean is, "I want to see results." That's not healthy in my mind. In reality, we should look at the entire history and make sure we crawl before we stand up and walk, and walk before we start running. That's the capabilities journey. I'm not saying you have to do it sequentially; sometimes you can go back and forth iteratively. But without the right data foundation, without understanding the data management side, generative AI-generated content, like customer service, meeting summaries, or new idea generation, is helpful and useful, but it is not going to fundamentally change the decision process, culture, and workflow.

Christopher Hutchins:

Yeah, I think that gets to a point that maybe we don't talk about enough for people to really wrap their head around what this is actually about. But I know when we've talked, you've talked about a tension that sits between enterprise systems and probabilistic AI models. It's not magic. I figured that much out. I don't know that I understand everything. But talk a little bit about what that tension is between enterprise systems and probabilistic AI models.

Gary Cao:

I worked in the healthcare industry, and I also worked in financial services, so let me use different functional perspectives. For example, in financial services and banking, there is always tension between marketing and risk management. Marketing means you acquire new customers and cross-sell other products to existing customers. Marketing is generally very flexible; its requirements don't demand a high level of precision, so a probabilistic decision-making process is good enough. But in risk management, the tolerance for inaccuracy is much lower. Of course, it's still probabilistic. For example, you have a model that says this group of accounts has a higher probability of going delinquent; that's still a probability. But the requirements are higher, with more compliance and more economic, scientific, statistical rigor rather than tolerance of a wide range or ballpark number. Similarly, in healthcare, clinical decision making, diagnosis, and treatment have higher requirements for accuracy, a higher level of compliance, and higher emotional stakes. Administration, scheduling, logistics, and supply chain are slightly more tolerant of imprecision. So that's the healthy tension between the different areas where you might apply analytics. Generative AI can be very helpful for certain use cases; I have a list of things that are really valuable, for example technical assistance, troubleshooting, content creation and editing, and personal and professional support. Those are top use cases for generative AI. But if you want to look at predictive equipment and machinery maintenance, those things require more statistical analysis, and at that point it goes beyond language models. It goes to mathematics and statistics and requires a higher level of specialized expertise.
A simple language model may not handle that. Of course, we have agentic systems that can potentially blend in processes and languages and quantitative numbers, but we are not there yet. We cannot blindly trust that a language model can help us make better decisions in areas that traditional data analytics can easily solve. One great advancement is that in the past, building a data science model may take a few hours or a few days or sometimes weeks, sometimes even months, depending on the complexity. But right now, if you have the right data and have high confidence in data, and you have process workflow defined, that model algorithm development can be as fast as a few seconds or a few minutes. So that enables a lot of data scientists to be much more productive. But does that reduce the workload and attention to data quality, data relevance, and data usefulness? All those kinds of things still have to be there. Otherwise, it's garbage in, garbage out. And it goes beyond language; it's truly quantitative. The other thing is that for current data management, most enterprises are still using relational databases, data marts, data warehouses. Sometimes they also have data lakehouses and data lakes. But those are tabular, relational data. That's different from the more commonly used data fed into generative models. Generative models work with unstructured data:

language, visual, audio, log files. So the difference is there, and generative AI greatly expanded the capabilities for data scientists to speed up their workflow and expand their scope of potential impact.

Christopher Hutchins:

Right. It occurs to me that what we're talking about is an acceleration of learning that typically would be stretched out over a much longer period of time. Just the pace at which it's generating new recommendations or outputs begs the question:

do we need a different kind of governance for this generative AI approach that's different from what we typically do for analytics?

Gary Cao:

I think it's more complex. There will be some similarities and consistencies, but there will also be something different, maybe going beyond traditional data governance. Data governance has been struggling for the past few decades because it's high cost and it's very hard for people to prioritize master data versus production or operational data. It's impossible to boil the ocean. How do you apply the 80-20 rule, where 80% of the impact is actually deliverable using 20% of the volume of the data? That could be a great space where generative AI can help: looking at the distribution, focusing only on the things that matter, and flagging the outliers and things out of the norm so we can correct them. Governance there is probably useful. But on the language side, the data leakage part, many people don't realize that once we do this, the data is already out the door; it's no longer confined to people's computers. That will require discipline, culture, education, and training, because people have to realize we want to minimize risks, minimize the chance of making major mistakes that cannot be reversed. That has to come first. Smaller problems and mistakes that we can recover from, we'll just recover from, gradually learn, and each time reduce the chance of making the same mistake again. But the big mistakes you want to avoid have to be on a high-priority list for top executives, including the boards.

Christopher Hutchins:

It's just a different dynamic to be having conversations at an executive level, getting into the differences between deterministic versus probabilistic thinking. How should executives be thinking about risk tolerance? Because as you were just talking about, there are certain areas where there's just a higher threshold that has to be met for something to be acceptable. How should they be thinking about that? And where's that margin where you know you've got to hit a certain level of accuracy before anyone's going to be comfortable trusting it?

Gary Cao:

Yeah, I think business judgment is still important. I always go back to the fundamental elements of running a business. First of all, you have to have a customer. Second, you have to have some type of service or product that delivers value to the customer. And then you have financial performance, operations, and everything else. From there, it's still a human judgment today. Generally, most people are very comfortable with mechanical, deterministic models. If one plus one is two, done. But if you say one plus one is maybe 1.9 or maybe 2.1, they don't understand that. In probabilistic data science or machine learning models, it's all about patterns and judgment. I remember a statistician said, "All models are wrong; some models are useful." I've always believed that. If all of them are wrong, then nothing is perfect or deterministic. So it's a mindset shift for leaders: how do you balance the probabilistic versus the deterministic? In reality, most business leaders make decisions based on judgment anyway. For example, how do you know how much to invest in a new product, or whether to enter a new territory or market? It's all judgment. So they should be comfortable; they are already comfortable making decisions based on limited information. In business, in the military, in different arenas, it's the same decision-making process. As long as leaders remember the foundational principles of their business, they can just use those. Generative AI could be another tool, another enabler. Some people say it could be like a new digital species; that might be going too far philosophically. But from the business perspective, most enterprises are still in the early stage of the journey for data analytics, let alone generative AI. And what I've heard is that some CFOs are saying to data analytics leaders, "Sorry, I think in the past few years you pushed me to invest in data analytics. I always pushed back and said, what's the ROI?"
Now we have even higher pressure from all directions. So we should have, could have invested in data analytics two or three years ago. It's still not too late, but we should have been, could have been much more proactive. Some CFOs are realizing that until we push to the next level, we're a little behind. Even though they think ROI is critical, ROI is not the only decision criterion. You also have to use judgment, intangible values, cultural factors, and the strategic value of different projects.
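[Editor's note] Gary's point about deciding in ranges rather than exact answers can be made concrete. The sketch below is illustrative only: the delinquency counts and the normal-approximation interval are assumptions, not figures from the episode. It turns a raw event count into a point estimate plus a 95% range, which is the form a probabilistic answer actually takes.

```python
import math

def rate_with_interval(events, total, z=1.96):
    """Point estimate of a rate plus a 95% normal-approximation interval.

    Illustrative sketch, e.g. accounts going delinquent out of a portfolio.
    """
    p = events / total
    half = z * math.sqrt(p * (1 - p) / total)  # half-width of the interval
    return p, (p - half, p + half)

# Hypothetical numbers: 120 delinquent accounts out of 2,000.
p, (lo, hi) = rate_with_interval(120, 2000)
# The honest answer is not "6%" but "roughly 5% to 7%": a range, not a point.
```

A risk-management audience would demand a tighter interval (more data, stronger methods) than a marketing audience would, which is exactly the difference in accuracy tolerance Gary describes.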

Christopher Hutchins:

Right. Now you mentioned something that I think is really important to dial in on. We're talking about a thought process that has to be instantiated better. We have to make sure we're not bypassing some things. When you're talking about patterns that may exist in the data and facts that exist in the data, it leads you to predict things within some range of acceptable percentage points. But in reality, sometimes what's available to us in history, while true, is no longer the direction we're going because we've learned something. So the governance process has got to be a little bit different. You can pick almost any scenario. One of the best examples I've heard was the descriptive disparity in salaries between men and women historically in certain fields. That historical data would predict certain things and tell you where it's headed based on what it sees in the history. That's where the judgment stuff is so much more important. We have to make sure we're designing our governance processes to appropriately pause and create space for those judgment calls to happen so we don't take on more risk than we ought to.

Gary Cao:

Yes. One other factor that many companies may not have spent enough time or investment on is workforce upskilling. It's about mindset shift, about ownership, about driving change that's ultimately beneficial for the company as well as the employees. That requires some type of lifelong learning attitude. There's the old saying:

if you don't train them, they will stay. If you train them, they will leave. Which one is better for you? Training and education and upskilling the workforce is great. But there are some other potential concerns. If AI is really reducing the workforce, why would the workforce be welcoming the adoption of AI? That's another philosophical discussion. It's about company culture, it's about why companies exist.

Christopher Hutchins:

Yeah, I think that's a great point. And I love that you're bringing up human judgment and wrapping everything around the support of human beings. It's sure exciting to talk about the technologies. People that think like we do probably enjoy some of that technical jargon and investigating and learning new things, maybe more than others. But you're making some really important distinctions where this is not a technology thing. This is really a whole other level of support that we're trying to wrap around the delivery of good health care. But even beyond the healthcare sector, same thing is true. This has all really got to support the lives of human beings and make their lives better in ways that we are mitigating as much risk as we possibly can. We're never going to bat a thousand. But if we can get to 80 or 90 percent when we've never been able to hit 50, that's not perfection, but it's certainly an improvement. The leadership has a really important task to try to figure some of these things out because the pressures for cost reduction and containment in health care, I don't anticipate them changing. In fact, there's an expectation that we're going to be more efficient. And what does that actually mean?

Gary Cao:

It really depends. Maybe reducing the waste, maybe optimizing resource allocation, or maybe better alignment of timing or geographic allocation. I also see multiple dimensions of disparities in terms of adopting data analytics and generative AI. One is the industry spectrum. Some industries are much more advanced, for example, digital native companies versus manufacturing versus other industries. They may be much earlier in their evolutionary stage. That's the industry difference. The second is a geographical difference. East Coast, West Coast versus South versus Midwest. There may be geographical differences because of industry exposure and talent and other factors. The third thing is within each company, the disparity between technical people versus business people. Business people think, don't tell me the details, I want to get this thing done. But then the technical people say, I know the technical aspect, but I don't know how to effectively communicate with business people. Many companies could benefit now and in the future by identifying and developing people who can fluently speak business language as well as technology language, data language, and analytics language, all together. It's a long-term challenge, but eventually it will get better. When we're in the journey making changes, the progress can feel slow. But if you look back three years, five years from today, you will see we have traveled a long distance. It's really about believing in the vision, believing in the direction, and truly putting effort into it.

Christopher Hutchins:

Yeah, I think there's an interesting concept that I've been thinking about a lot lately in terms of the kind of role that you're describing that you've been in. I've spent a number of years in it too, but the ability to do that translation between the business and the technical team can't be overstated. It's such a big part of what you've done in your own engagements as an entrepreneur, but also as a major player in enterprise systems. That's such an important role. Whether it's any specific title doesn't really matter. You just need to have people that are really good at understanding both and bridging that communication gap that often exists, because at some point you're going to have to be held accountable for what you're delivering. Which brings me to one thing on the business side I would like to have you weigh in on before we pivot to looking into the future. How should boards and enterprises be evaluating the potential ROI or lack thereof with all the things coming at them? You and I spent way too many hours sifting through literally hundreds of emails sometimes in a single day with solution providers covering a wide range of things that may or may not be viable. But how would you say that boards need to be thinking about that?

Gary Cao:

Yeah, at the end of the day, it's really about judgment, experience, and weighing pros and cons, benefits and costs. The ROI discussion can have two elements: one is value, the other is strategy or growth. I actually wrote an article in the past few months saying that when the board and CEOs look at AI investment, they should balance two approaches. Value is the more specific one: measurable ROI metrics, project-based, specific use-case-based. Growth is the strategic value, including benefits that are hard to quantify. We should not discount or deprioritize the strategic value of AI investment. In my design of a potential scorecard, there are three components on the benefit or ROI side. One is direct impact on incremental revenue or profitability. Second is cost avoidance or cost reduction; those are measurable numbers. Sometimes people ask, what about the number of hours saved? That's efficiency, which is another dimension. The third is benefits that are hard to measure in dollar values or even numbers, for example improvements in customer net promoter score. The decision makers can assign different weights to those three components and come up with a score. So, for example, something with low or no impact on revenue, some impact on time saved or cost reduction, but a huge benefit in qualitative, soft benefits: the decision maker should say, yes, this is something we should do, even though it's not driven by a short-term ROI number. CFOs have a lot of great input into the discussion, but generally the CEO or the board will make the decision. We should not let one metric dominate the entire discussion. It's about balancing, about making sound decisions with different perspectives and different weights on different factors.
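[Editor's note] A minimal sketch of the three-part scorecard Gary describes. The 0-to-10 rating scale and the example weights are assumptions for illustration; the episode specifies only the three components and that the decision makers assign the weights.

```python
def ai_investment_score(revenue_impact, cost_avoidance, qualitative_benefit,
                        weights=(0.4, 0.3, 0.3)):
    """Weighted scorecard over the three benefit components.

    Each component is rated on an assumed 0-10 scale; the weights are
    illustrative and would be set by the CEO or board, not by one metric.
    """
    w_rev, w_cost, w_qual = weights
    return (w_rev * revenue_impact
            + w_cost * cost_avoidance
            + w_qual * qualitative_benefit)

# A project with almost no revenue impact but large soft benefits
# (e.g. a net-promoter-score lift) can still score high enough to fund:
score = ai_investment_score(revenue_impact=1, cost_avoidance=4, qualitative_benefit=9)
```

The point of the design is that no single component, in particular short-term revenue, can dominate the decision on its own.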

Christopher Hutchins:

Yeah, that's always a difficult task. If you're looking at things just in terms of how they contribute to margin, whether it's cost reduction or increased revenue, that's obviously top of mind in most conversations. But there's this other factor that can be a whole lot more disruptive if we don't get it right:

when we're dealing with some pretty difficult shortages for staffing, for nurses and clinicians, particularly in general medicine. I love that you highlight that there's more than one thing to look at. The numbers are important, of course, but those numbers are guaranteed to get worse if you can't staff the clinicians that you need.

Gary Cao: That's why I always believe that everything has a life cycle. An industry, a location, a product, a company, a person: each has a life cycle. There is a chance that the board of directors or the CEO could reinvent a company, a culture, a business model and start a new life cycle curve, but that's not easy. If you don't reinvent yourself, it's easier to just take the ride, and eventually the life cycle will run from growth to stagnation to decline. How do you bend the curve from decline back into new growth? There will always be some kind of dynamics in play: conflict, friction, tension. It's always there. It's just a matter of where the company is in its life cycle, and of its vision and core values.

Christopher Hutchins: This has gone super fast. I always enjoy our conversations. If I can get you to look out over the next three to five years, two things. Which leadership skills and capabilities do you think are going to matter most? And how do you see things shaking out for the organizations that come through this in a positive manner? What are the characteristics of those companies, and what leadership is it going to take?

Gary Cao: I think the most important traits for a leader or influencer are to solve problems, deliver value, and provide service. If you are delivering valuable service to the right customers, the right stakeholders, the right audience, you will be in a good position. To do so, coming back to the traits of the organization: it's how you deal with ambiguity versus specific details, and how you trade off specific ROI against intangible benefits. How do you measure that? Another important trait of successful organizations is workflow optimization and architecture design. It's not about having a rigid process. Based on market demand and customer needs, how do you keep the workflow steady but also continuously evolving and improving? That way you prevent rigidity and take a more agile approach. When things change, you can change and adapt too. Being adaptive is one very important trait for the companies of the future.

Christopher Hutchins: I really appreciate that. The concept of disruptive innovation has been around forever, and it's very hard to do. But the challenges are only going to get bigger now, because we're advancing at a pace none of us have ever experienced in our lifetimes. It's just staggering how fast the advancements are coming.

Gary Cao: Yeah, I would say there are two or three things I can leave with the audience today. Number one: go back to the fundamental basics. Don't forget about core values, principles, and guardrails. Number two: balance probabilistic and deterministic approaches. You need both; it's not one or the other. The third thing is for data analytics professionals: how do we take advantage of the advancement of technology and the environment? There's a lot of visibility and awareness around data analytics. How do we pivot from generative AI alone to the entire spectrum of data analytics, including generative AI, agentic systems, and robotic process automation, and focus on two aspects? One is data management. The other is explainable, algorithm-based recommendations. Huge opportunities exist in data management: the process today is time-consuming, expensive, and not very accurate. How do you streamline it using generative AI or agentic processes? The second is explainable, algorithm-based recommendations for business decision making. That is where the last mile of value delivery happens.

Christopher Hutchins: Awesome. Well, Gary, as always, a really great conversation. I always learn things when we talk. Thank you so much for coming on the show, sharing your experiences and your knowledge, and giving us a glimpse of what we should be expecting over the next three to five years. Sounds like we've got plenty of work to do.

Gary Cao: Thank you very much, Chris. Happy to be here, and I'll always support you in any way I can. It also benefits the data analytics professional community.

Christopher Hutchins: Thank you so much. I really appreciate you coming on the show, and I look forward to having you back. To the audience: watch for the show notes. We'll provide some information so you can reach out to Gary. He's got some really great experience, and if you have questions or need some help, I know he can help you. That'll do it for this episode of The Signal Room. If today's conversation sparked something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.