The Entropy Podcast
The Entropy Podcast is a cybersecurity, technology, and business podcast hosted by Francis Gorman.
Each episode features in-depth conversations with cybersecurity professionals, technology leaders, and business executives who share real world insights on cyber risk, digital transformation, emerging technologies, leadership, and the evolving threat landscape.
Designed for CISOs, IT leaders, founders, and professionals navigating today’s digital economy, The Entropy Podcast explores how organizations can adapt, innovate, and build resilience in an era defined by constant change, disruption, and geopolitical uncertainty.
The name Entropy reflects the growing complexity and unpredictability of cybersecurity and technology ecosystems and the strategic thinking required to thrive within them.
Topics include:
- Cybersecurity strategy, risk, and resilience
- Post-quantum readiness
- Emerging technologies and innovation (AI, etc.)
- Business leadership and digital transformation
- Cyber threats, regulation, and geopolitics
- Lessons learned from real-world experience
New episodes deliver practical insight, expert perspectives, and actionable knowledge so you stay informed, strategic, and ahead of the curve.
Buy Our Swag
We now have some slick new swag you can purchase through our Etsy store.
https://theentropypodcast.etsy.com
Watch and Subscribe
You can also watch full episodes and exclusive content on our YouTube channel:
www.youtube.com/@TheEntropyPodcast
Achievements
The Entropy Podcast delivered strong chart performance throughout 2025, demonstrating consistent international reach and listener engagement.
- Regularly ranked within the Top 20 Technology podcasts in Ireland.
- Achieved a Top 25 placement in the United States Technology charts, holding the position for one week.
- Charted internationally across multiple markets, including Israel, Belgium, and the United Kingdom.
This performance reflects sustained global interest and growing recognition across key podcast markets.
Audio Quality Notice
Some episodes may feature minor variations in audio quality due to remote recording environments and external factors. We continuously strive to deliver the highest possible audio standards and appreciate your understanding.
Disclaimer
The views and opinions expressed in The Entropy Podcast are solely those of the host and guests and are based on personal experience and professional perspectives. They do not constitute factual claims, legal advice, or endorsements, and are not intended to harm or defame any individual or organization. Listeners are encouraged to form their own informed opinions.
Navigating AI in Healthcare with Dr. Alex Tyrrell
In this conversation, Dr. Alex Tyrrell discusses the integration of AI in healthcare, emphasizing the importance of trust, expert involvement, and responsible practices. He outlines the challenges faced in clinical workflows, the future potential of AI technologies like virtual twins, and the regulatory landscape. The discussion also highlights the need for organizations to manage shadow AI and innovate within regulated environments, providing valuable insights for leaders looking to implement AI solutions effectively.
Takeaways
- We don't compromise on trust.
- Expert AI by experts for experts.
- AI solutions that don't fatigue the user.
- You can't just release AI into the wild.
- The path to AI value is not linear.
- AI is tying together stakeholders more closely.
- You have to engage your workflow.
- You really have to define the problem statement.
- Stay curious.
- Two in a box is an effective strategy.
Sound Bites
"We don't compromise on trust."
"Expert AI by experts for experts."
"Stay curious."
Francis Gorman (00:02.022)
Hi everyone, welcome to the Entropy Podcast. I'm your host, Francis Gorman. If you're enjoying our content, please take a moment to like and follow the show wherever you get your podcasts. Today I'm joined by Dr. Alex Tyrrell, who is the head of advanced technology at Wolters Kluwer and chief technology officer for Wolters Kluwer Health. Alex also oversees Wolters Kluwer's AI Center of Excellence, focused on accelerating innovation across all of Wolters Kluwer's divisions in areas of AI, agentic, machine learning and data analytics.
Alex has extensive experience designing and delivering commercial-scale machine learning and analytics platforms and setting technology strategy for enterprise content management, digital transformation and new product development. Alex, it's a pleasure to have you here with me today.
Alex Tyrrell (00:43.416)
Awesome. Thanks, Francis. Great to be here.
Francis Gorman (00:46.064)
It's great to have you, Alex. And I suppose as the Chief Technology Officer and, you know, overseeing the AI Center of Excellence, you're wearing many hats. So I suppose I'd like to ask you, what are the guiding principles that kind of shape your long-term technology strategy, specifically on the healthcare side of things?
Alex Tyrrell (00:54.104)
Mm.
Alex Tyrrell (01:03.822)
Yeah, great. Yeah. So foremost, right, we don't compromise on trust. When you look at Wolters Kluwer as a company, we have 189 years going back to JB Lippincott within our nursing business. And really, you know, our customers trust us to be the gold standard in evidence-based tools and support. Yeah. So, you know, absolutely, you know, bring technology innovation to the market with a focus on safety and efficacy.
Um, and then, you know, as a CTO, things have obviously been changing very fast, particularly with the introduction of GenAI. And I've been doing this for a long time, and AI presents a lot of unique challenges, right? You sort of have to develop this intuition, right? You have to be able to adapt and pivot quickly. We're talking about non-determinism, right? It's not obvious that, you know, you have a clear path to success, that you can get a good problem-solution fit. Uh, I mean, the clinical setting, you know,
You're building AI to support professionals and it's really, really demanding, right? You're looking at entirely new methods of sort of evaluating clinical relevance, whether or not the solution is gonna add the right value. It's demanding in new ways, yeah? So that's a little bit of sort of where I focus. Ultimately, when you really think about some of the changes that have been occurring with GenAI,
you know, I think about like the overall day in the life, right? So whereas, you know, we used to have a really, really heavy focus on maybe like architecture design, UX and wireframes, right? That's sort of evolving, right? We, you know, particularly with agentic AI, we want to own the outcome, right? We want to deliver the outcome, not the little breadcrumb trails, the transactional moments, all the clicking, those interfaces that tend to be sort of challenging to use, right?
And sort of, you know, that traditional focus on containers, microservices, APIs. I'll be honest with you. We're talking a lot more about p-value, confidence interval, and sample size. That's a huge shift, right? We're moving more towards sort of experimental, but with rigorous validation. And of course, traditional IT still plays a really, really important role. We'll probably get into that a little bit. But this is a big change for technologists. I mean, they just haven't had this type of experience in the past, most likely.
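To make that shift concrete, here is a minimal sketch of what validating an AI feature with p-values, confidence intervals, and sample sizes might look like in practice. The counts are invented for illustration, and this is not Wolters Kluwer's actual evaluation code.

```python
# Hypothetical sketch: judging an AI-assisted workflow against a baseline
# with statistics rather than pass/fail integration tests. Counts invented.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

baseline_ok, baseline_n = 412, 500   # acceptable answers, existing workflow
ai_ok, ai_n = 451, 500               # acceptable answers, AI-assisted workflow

# Two-proportion z-test: is the improvement statistically significant?
stat, p_value = proportions_ztest([ai_ok, baseline_ok], [ai_n, baseline_n],
                                  alternative="larger")

# 95% confidence interval for the AI-assisted success rate.
low, high = proportion_confint(ai_ok, ai_n, alpha=0.05, method="wilson")

print(f"p-value: {p_value:.4f}")
print(f"AI success rate: {ai_ok / ai_n:.1%} (95% CI {low:.1%} to {high:.1%})")
```

A small pilot like this also makes the sample-size question explicit: with only a few hundred reviewed cases the confidence interval stays wide, which is exactly the uncertainty Alex says technologists now have to reason about.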
Alex Tyrrell (03:23.886)
Some data scientists may be familiar with some of these things, but particularly in a healthcare setting, that's where we're really fortunate to have access to on-staff licensed physicians. These are folks that are involved in care delivery and they understand evidence-based practice. They understand this is based on research and experimentation. So we developed a strategy that we call expert AI, and it's really AI solutions by experts for experts.
Obviously, this is a focus on expert in the loop at every stage of design, validation and testing. It's rigorous. It's ensuring safety, efficacy and clinical relevance. And that has really helped us kind of align, bringing together, you know, technologists and what they do best and clinicians and what they do best. And that has really been a key part of sort of the long-term guiding principles that are shaping our current sort of early work and early focus
around GenAI and agentic and these new technologies. And certainly it's going to be part of our long-term vision again. So expert AI, expert solutions, expert AI by experts for experts. Yeah.
Francis Gorman (04:34.494)
That term expert in the loop, I haven't heard that before, but it makes absolute sense when you say it out loud. You know, human in the loop has always been the thing, but now that I'm thinking about it, do you want the human to understand what the loop is telling them?
Alex Tyrrell (04:46.36)
Yeah, we gotta go upstream, right? Before there's any loop at all, you have to understand what does that loop look like? What is it meant to do? And the experts really help us. They help us break down the complex problems. I mean, again, owning the outcome means you need to really understand, you know, what is the intent? What are you trying to achieve? What is the purpose? What are the risks? We feel like it doesn't make a lot of sense to have AI engineers, you know, develop
you know, complex solutions in the clinical setting that they really know nothing about. And then to rely on simple, like, you know, more or less, you know, benchmarks and statistical proxies as sort of their guiding principles. They really don't reflect the complexities of the real world. That expert in the loop establishes the guardrails, the firm foundation, and makes sure, as we innovate and really speed up those innovation cycles,
that we're doing it safely in a trustworthy manner and we can provide that explainability and transparency along the way.
Francis Gorman (05:45.469)
Makes total sense, Alex, it really does. Yeah, no, that's a key one to take away for me already now. I'm gonna start changing my perspective slightly on that one. I suppose, on the expert aspect, you're dealing with clinical environments, obviously, so if things go wrong, they go wrong at a real human level. What is the most...
Alex Tyrrell (05:46.53)
Yep. Yep. Yep.
Alex Tyrrell (05:51.119)
Really won't question it. Yeah. Yeah.
Alex Tyrrell (06:03.502)
Mm-hmm, yeah.
Francis Gorman (06:09.628)
misunderstood aspect of integrating AI into those clinical workflows, from an executive perspective even? Because I'm assuming the complexities are far greater than making somebody's PowerPoint have nice slides and that sort of thing. You're dealing with human beings at the far side of this, so the risk must be higher.
Alex Tyrrell (06:11.938)
Yeah. Yeah. Yeah.
Alex Tyrrell (06:22.934)
Yeah!
Alex Tyrrell (06:29.484)
Yeah, so this actually builds kind of on the point that I just made. So we have technologists going through this sort of transformation, this journey, shifting from all these details around building enterprise technology, cloud APIs, microservices, and getting more of an experimental understanding of how we add value. Again, going back to basic statistical principles, these are not familiar themes. Experimental design, measurement error, uncertainty, risk of bias, these are not
typically the types of themes that you would expect a technologist to command. And that extends all the way up to the executive stakeholders that are learning and beginning to understand that, like, the path to AI value is just not linear. It's not structured in the way that maybe your traditional enterprise SaaS offerings have progressed historically, you know, with specific planning, you know, basically breaking down features and using your Agile SAFe.
Those are good, good style techniques. We tend to decouple from sort of what we call the agile release train now, right? We're doing really, really experimental work. And in order to refine that right value, to make sure that we do it safely and effectively, it requires new forms of governance, as well as sort of an increased focus on go-to-market, right? There are new challenges out there, right? So first of all, when we look at AI, our experience over the past couple of years with GenAI is that it's really not
as effective as sort of a bolt-on, a point solution, right? There is the entire ecosystem now to consider, right? And by the way, that tends to be true across industries, you know, and it takes time to develop that right value, the impact, and the environments you're deploying into need to be considered, right? And there may be new training requirements for people using the solutions. You may be involved in piloting and testing and increased governance. This may be very different from the way that you were building sort of your traditional SaaS products in the past, right?
Most importantly, AI is sort of tying together stakeholders more closely, right? So your technologists, your product folks, your go-to-market, our expert clinicians, our customers, really all working a lot more closely. So that takes time. And you can't just release AI into the wild. And I think we're learning that as we go. We're sort of maturing, right? And so you're not only experimenting to see if you get a good problem-solution fit using the core technology. We've got great success at doing that so far, right?
Alex Tyrrell (08:57.871)
you're also experimenting to get that right product-market fit, the commercials, the adoption, the willingness to pay, the overall governance. You know, with AI, in particular GenAI and agentic, it's all still very new, right? So we're figuring that out sort of as we go. And this is the part that I really like to emphasize. You know, as a CTO, you may find you feel more like the chief explanation officer, like what happened with these experiments, you know, and our assumptions.
How do we pivot? What are we gonna do here? Are these investments gonna bear fruit? Do we have the right strategy? And I've been doing this for 25 years and this is the biggest change I've seen. And I like to sort of relate it. Having a PhD helps a lot when you're sort of the chief explanation officer. And if you find yourself in an executive meeting that feels like a thesis defense, that's probably a good sign things are working. You're moving past the early challenges of
AI adoption and really getting into that real value. Yeah.
Francis Gorman (09:59.423)
Sounds good. And Alex, I suppose one of the things that always interests me is, what are those key kind of adoption? How do I put this now? When I think of financial institutions, you look at fraud detection, AI can help there. You look at better customer experiences, maybe help desk and all that sort of thing. When it comes to healthcare and those innovations,
Alex Tyrrell (10:10.415)
Hmm.
Alex Tyrrell (10:17.295)
Yeah
Yep. Yep.
Francis Gorman (10:28.572)
What does that tech look like? What is AI being used for? What are the use cases?
Alex Tyrrell (10:33.805)
Yeah. So we're focused on a number of key areas, right? So a few months ago, we really released our UpToDate Expert AI. And that is basically, Expert AI is built on a decades-long commitment to curating the gold standard in evidence-based clinical decision support. And that's used by, you know, approximately 3 million clinicians around the world. And we really continue that tradition by grounding GenAI in our trusted and verified content.
Again, having those experts in the loop. This is preventing hallucinations, but it's also designing the solution, again, as we talked about, making sure that it's explainable and transparent, right? So that's providing now on-demand guidance and recommendations at the point of care, safely and effectively. It's saving clinicians valuable time, and it's allowing them to focus on patient care. Another area that we're finding really, really good value, and I...
I would sort of weave this in, right? Technology changes all the time. Three years ago, we were talking about AR, VR, the metaverse. This is sort of pre-GenAI. And maybe we were a little early on that technology cycle. We actually found really, really good value in the use of technologies like this in our nursing products, where we could basically use virtual simulation and learning solutions.
It's a great way to remove risk and increase access to authentic learning environments and help, you know, create clinical competence, right? Where it's difficult to actually have, you know, clinical practice. So these simulations, these new technology solutions, are really valuable there. And now we're using GenAI. We're looking at basically focused and adaptive learning, remediation, intervention, so that the students on their learning journey are getting improvements all the time. So another great area, great use case
for AI, GenAI and ultimately agentic.
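The grounding Alex describes maps onto what practitioners commonly call retrieval-grounded generation: the model may only answer from curated, verified passages and must cite them. Here is a minimal, hypothetical sketch; `Passage`, `search_verified_content`, and `llm_complete` are stand-in stubs, not real Wolters Kluwer or UpToDate APIs.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str
    text: str

def search_verified_content(question: str, top_k: int = 5) -> list[Passage]:
    """Stand-in retriever over a curated, expert-verified corpus."""
    corpus = [Passage("verified guideline (placeholder)",
                      "Example evidence-based guidance text.")]
    return corpus[:top_k]

def llm_complete(prompt: str) -> str:
    """Stand-in for the model call a real system would make here."""
    return f"[grounded answer derived from a {len(prompt)}-char prompt]"

def answer_clinical_question(question: str) -> str:
    passages = search_verified_content(question)
    if not passages:  # no verified evidence: refuse rather than guess
        return "No verified guidance found; defer to clinical judgment."
    context = "\n\n".join(f"[{i + 1}] ({p.source}) {p.text}"
                          for i, p in enumerate(passages))
    prompt = ("Answer strictly from the numbered passages below and cite "
              "passage numbers for every claim. If they do not answer the "
              f"question, say so.\n\nPassages:\n{context}\n\n"
              f"Question: {question}")
    return llm_complete(prompt)

print(answer_clinical_question("What monitoring does drug X require?"))
```

Constraining answers to a verified corpus is one common way to get the explainability and hallucination resistance described here, since every claim traces back to a citable passage.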
Francis Gorman (12:27.986)
And Alex, where do you think we're going? So let's say by 2030, if you were to look forward with your CTO hat on and that innovative vision, where do you think we'll be in this space? Is the world going to look very different than it does today?
Alex Tyrrell (12:32.035)
Mm-hmm.
Alex Tyrrell (12:35.855)
I'm
Alex Tyrrell (12:43.471)
Well, that's hard to predict, but I do have one particular area that I have a personal connection to, and listen, I'll just come right out with it: the virtual twin. I'm really, really excited about this concept. You know, where you're programming basically computer simulations, programming them with your genetic profile, your physiology, combined sort of with continuous monitoring,
as well as your specific healthcare history. And then you're introducing advanced modeling and simulation to improve diagnosis and treatment planning, as well as focusing on things like preventative medicine, predicting disease, and offering more choice early in disease progression. Now, the personal connection. In my PhD and postdoc many, many years ago, I won't say how long ago, but I focused a lot on what we call in silico simulation of tumor biology. So you have in vivo, in vitro.
So this was in silico, all done in the computer. And it was using sort of the earliest versions of GPUs. And at that time, the GPUs were small. So we could simulate a small tissue compartment with a single tumor. We could simulate a lot of detail, everything from blood flow to drug delivery. It was really, really interesting. And obviously, today's GPUs are orders of magnitude more powerful. Much more complex biological systems could be modeled.
And this is really exciting in my estimation, in fact, very doable. And I think we'll see this begin to emerge at least over the next five years. Yeah.
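As a rough illustration of the in silico work Alex recalls, here is a toy simulation of a drug diffusing through a small 2D tissue compartment. Every parameter is invented, and real tumor-biology models are vastly richer; this only sketches the numerical idea that made early GPU simulation attractive.

```python
# Toy sketch, not actual research code: drug diffusing through a 2D tissue
# grid via a discrete Laplacian. Parameters chosen only for illustration.
import numpy as np

n, steps = 64, 200        # grid size and number of time steps
D, dt = 0.1, 1.0          # diffusivity and time step (D*dt < 0.25 is stable)
tissue = np.zeros((n, n))
tissue[n // 2, n // 2] = 100.0   # drug delivered at a central point source

for _ in range(steps):
    # Each cell exchanges concentration with its four neighbors.
    lap = (np.roll(tissue, 1, 0) + np.roll(tissue, -1, 0)
           + np.roll(tissue, 1, 1) + np.roll(tissue, -1, 1) - 4 * tissue)
    tissue += D * dt * lap

print(f"peak concentration after {steps} steps: {tissue.max():.3f}")
```

The same update rule scales to far larger grids on a modern GPU, which is the point Alex makes about orders-of-magnitude more powerful hardware enabling richer biological models.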
Francis Gorman (14:14.422)
I'm gonna have a digital twin in the future. I'm not sure how she'll feel. You just go do those tasks and come back to me. It is a fascinating concept. But yeah, I think I struggle myself to visualize what it would be like in six months' time, never mind by 2030, with the rate of change we're seeing at the moment.
Alex Tyrrell (14:16.367)
Yeah, that can be helpful in the workplace too, right? Yeah. Yeah.
Alex Tyrrell (14:34.915)
Yeah, yeah, absolutely.
Francis Gorman (14:38.238)
Alex, something that comes up often with our guests is we talk about responsible AI and ethical AI and all of those things. What does that mean in practice for an organisation like yourselves, operating in kind of high-risk environments like healthcare?
Alex Tyrrell (14:52.013)
Yeah, yeah, right. So it's sort of like that sort of non-negotiable criteria, right, when you're evaluating whether an AI system is ready for deployment in healthcare, right? So at Wolters Kluwer, we focus on safety and trust and, in equal measure, clinical relevance, transparency and explainability, right? We also focus on mitigating bias and ensuring ethical and responsible use of AI. And really, a key to this is an overall formal governance process that designs a set of evaluations and rubrics that can
pressure test a solution against a determined risk profile, right? Not every use case has the same level of risk. And so we have to sort of look at that very specifically, in a rigorous way. Most importantly, we don't aim to replace clinical reasoning and judgment, right? But we have to assess algorithms on their ability to follow, for example, expert instructions or expert in the loop, and to identify all the relevant clinical context and deliver what we call clinical intelligence, right?
It's not replacing clinical judgment or reasoning, but we're making sure that our algorithms can think and act like a clinician and not replace them. And that really helps actually make sure that we can provide explainability and transparency, which is an absolute must around ethical and responsible use of AI. We have to make sure it's safe, most importantly. And we also perform a series of intense efforts with our licensed physicians on staff.
And this is really to assess whether a solution has the potential for bias or the potential to cause patient harm. And we test and validate everything thoroughly, right? And here's another really, really important criteria that really goes to the heart of governance and ethical and responsible AI. We don't just turn on AI features in the wild. We don't just launch a product feature in an existing product and say, here's a button, try this, right?
We absolutely have to work with our customers and our partners and make sure that we roll out solutions in a controlled way, right? So we'll take additional clinical feedback from the frontline and make any additional adjustments until the solution is trusted and demonstrated to be safe.
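One way to picture the governance gate Alex outlines is a rubric that demands stricter evaluation thresholds the closer a use case sits to patient care. The tiers, metrics, and numbers below are hypothetical, not Wolters Kluwer's actual rubric.

```python
# Hypothetical risk-tiered rollout gate. All tiers and thresholds invented.
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float        # agreement with expert-adjudicated answers
    citation_rate: float   # share of claims backed by a verifiable source
    harm_flags: int        # outputs experts flagged as potentially harmful

THRESHOLDS = {  # stricter as the use case nears the point of care
    "low":      {"accuracy": 0.90, "citation_rate": 0.80, "harm_flags": 5},
    "moderate": {"accuracy": 0.95, "citation_rate": 0.95, "harm_flags": 1},
    "high":     {"accuracy": 0.99, "citation_rate": 1.00, "harm_flags": 0},
}

def ready_for_rollout(result: EvalResult, risk_tier: str) -> bool:
    t = THRESHOLDS[risk_tier]
    return (result.accuracy >= t["accuracy"]
            and result.citation_rate >= t["citation_rate"]
            and result.harm_flags <= t["harm_flags"])

pilot = EvalResult(accuracy=0.97, citation_rate=1.00, harm_flags=0)
print(ready_for_rollout(pilot, "high"))      # False: accuracy below 0.99
print(ready_for_rollout(pilot, "moderate"))  # True
```

The design choice worth noting is that the same solution can pass at one tier and fail at another, which matches the point that not every use case carries the same level of risk.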
Francis Gorman (17:02.428)
Makes sense and is comforting, I have to say. You're not just flicking it on. So many people just turn it on. It's good to know that you're not just flicking it on. Alex, one thing I suppose that always intrigues me is looking at the regulatory state in Europe versus the US. When I look at frameworks like FDA oversight of artificial intelligence and machine learning over the next number of years, how do you see that
Alex Tyrrell (17:04.942)
No, yeah, just turn it on. Yeah, Yeah,
Francis Gorman (17:32.574)
framework kind of coming to fruition, and do you see any changes coming through there that will impact the technology side?
Alex Tyrrell (17:34.991)
Mm-hmm.
Alex Tyrrell (17:39.053)
Yeah, I mean, so if you go back to, like, even the early introduction of LLMs and sort of, you know, understanding that they're ubiquitous models, they can do a lot of things, right? And so these generic large language models began to become sort of, you know, embedded in our everyday life, including workplace settings and including, likely, healthcare settings, right? You know, the problem is, on their own, they aren't very explainable and transparent, right? They add value, they can be powerful.
But they train on the web and, you know, all of its darkest corners, right? Now, that obviously poses some sort of fundamental risk. Now, we've developed many, many ways to rehabilitate your sort of generic LLM, things like grounding, preventing hallucinations, making them more explainable. But I would expect, like, at least some form of guidance around how generic LLMs should be used in clinical settings, right? And in fact, you're already seeing generic LLMs implementing more restrictions, guardrails
and warnings, which I think is positive right now. I think when you really look at where the technology is evolving, I think there's going to be a distinction between sort of your agentic AI workflows, with the capacity to make changes automatically, right, versus AI that is designed to be more sort of that oversight, that human in the loop at each step verifying. I think the regulatory authorities and the FDA may focus more on agentic AI designed to act independently.
But in all cases, increased explainability and transparency will be operative. Things like citing any sources used, identifying potential risks and recommending mitigation should be the focus basically for all the AI solutions, right? You know, we feel trust is key and establishing trust is a shared responsibility across all stakeholders in the ecosystem, clinicians, hospitals, and care settings, vendors and service providers, as well as regulatory authorities. We all need to work together here, yeah?
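The transparency expectations Alex lists, citing sources and identifying risks, can also be checked mechanically before an answer is ever shown. Here is a deliberately simple sketch; the citation and risk-note formats are invented conventions, not any regulator's.

```python
# Minimal output guardrail: refuse to surface an answer that lacks
# citations or an explicit risk statement. Formats are invented.
import re

def passes_transparency_checks(answer: str) -> tuple[bool, list[str]]:
    problems = []
    if not re.search(r"\[\d+\]", answer):    # e.g. "[1]" style citations
        problems.append("no source citations found")
    if "risk:" not in answer.lower():        # explicit risk callout
        problems.append("no identified-risk statement")
    return (not problems, problems)

draft = "Consider therapy X [1][3]. Risk: contraindicated with warfarin [2]."
ok, problems = passes_transparency_checks(draft)
print(ok, problems)  # True []
```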
Francis Gorman (19:35.583)
Yeah, no, that does make sense. I think you've said trust a few times there, and it's something that I almost, you know, I find it hard to fathom sometimes how human trust has shifted away from institutions and experts to algorithms and machines. You know, it's that sideways trust now. You know, like a couple of years ago, no one would have trusted the car to drive them around without, you know, the steering wheel or accelerator or brakes, but we're heading there. It's like the...
Alex Tyrrell (19:40.036)
Yeah.
Alex Tyrrell (19:51.055)
Okay.
Alex Tyrrell (19:54.893)
Yeah.
Alex Tyrrell (20:03.501)
I mean, that trust thing, I think, I'll just jump in real quick if you don't mind. I think this is another really important point that I think we want to cover here too. So like, you know, this trust, this trust in the technology, this confidence in technology, it could be a potential failure mode, right? So like, you know, LLMs are designed to answer questions and to respond, you know, with what seems like increasing levels of confidence. I don't know if you notice this, but I do, right? You know,
We all understand there is a risk of hallucination, that's been around for a long time. That's certainly a risk. And, you know, we're doing a lot of things to mitigate that risk, but the purported confidence, the blazing speed at which these LLMs can answer questions, including clinical questions, it creates a powerful psychological effect. And another risk, potentially even bigger risk, may be this sort of notion of de-skilling or over-reliance on this new technology, right?
especially, like, in a clinical setting, especially without the right clinical judgment at those crucial clinical moments when it matters most, right? The machine seems to know what's going on here, and so you move forward, you know, and this really could become sort of a widespread failure mode as more of these tools become available, right? So at Wolters Kluwer, we talk a lot about how to create AI solutions that don't fatigue the user, don't encourage them to let their guard down. Instead, focus on determining the right clinical context,
when nuances evolve, make sure you don't overburden the clinician with too much information, but encourage engagement and support the clinical reasoning and judgment of the clinician at each step.
Francis Gorman (21:41.564)
Very reassuring to hear you say that, and I say that for a number of reasons. There's a very funny meme going around that kind of has a sinister background to it, and it basically says: your future doctor is using ChatGPT to pass their exams. Eat healthy.
Alex Tyrrell (21:43.545)
Yeah.
Alex Tyrrell (21:55.031)
Right. But my other favorite meme, I live in Colorado, you know, obviously we focus on safety first, but there's a meme in Colorado in the mountain country, safety third, which is, you know, we don't believe in that. Right. So yeah, it's interesting to hear about this. Yeah.
Francis Gorman (22:12.23)
Yeah, it is. It has a point that, you know, if we do this cognitive offload, you know, we lose some of our skills. And if the machine goes down and you're in the middle of something, will you have that recall, or is that reliance there? And I think that's kind of what you've touched on, to make sure that that doesn't happen, that you don't create a brittle human at the end of the process.
Alex Tyrrell (22:17.389)
Yep. Yep.
Yeah.
Alex Tyrrell (22:25.603)
Yep. Yeah. Yeah.
Alex Tyrrell (22:31.423)
Absolutely. Yep, absolutely.
Francis Gorman (22:33.974)
Alex, I'm always interested to talk about shadow AI in organizations, because we see it all over the place in different companies that we interface with, et cetera, where employees are just throwing data into ChatGPT or into other generative AI tools or whatever. I see it all over the place. When I go to visit my accountant, ChatGPT is open on one screen and my accounts on the other, and I'm going, are they cross-pollinating, or what's going on there? How do you control shadow AI as a...
Alex Tyrrell (22:36.569)
Mm-hmm.
Alex Tyrrell (22:58.671)
Yeah,
Francis Gorman (23:03.79)
a chief technology officer, when you've got that technology estate in front of you, but, you know, the users are potentially putting data elsewhere?
Alex Tyrrell (23:05.775)
Mm-hmm.
Alex Tyrrell (23:12.007)
Mm, yeah. So that's an interesting one. And, you know, the former, you know, incarnations of this, ghost IT. This isn't really a new concept, but there's maybe a little bit more nuance and subtlety with the introduction of LLMs, right? Because I don't think everybody's completely aware of some of the potential risks, right? So first and foremost, you know,
You have to revisit your standard IT governance. Again, ghost IT and shadow IT, this stuff's been around for a while. It's not completely new. But what you have to do is you sort of have to, in my estimation, add a dedicated AI governance framework. That's really, really gonna help you understand where the risk is, what sort of compliance controls you're gonna need, how do you evaluate safety, clinical relevance, things like that, right? So...
You're also obviously going to be putting emphasis on the trusted zones. The closer you get to protected health information, the bedside, patient care, point of care, sort of the more stringent the standards. Listen, these technologies can be very useful across the board in our everyday lives and some of the work we do in a general workplace setting. And so finding that right fit and being able to measure that is key. Here's another key: the culture of no, or delaying adoption.
That's going to almost guarantee that you will get shadow AI, almost assuredly, right? You have to be able to engage your workflow. So you have to be able to identify those key areas where technologies like this can add value. Maybe it's not point of care. Maybe it's not clinical decision support. But somewhere in the ecosystem, there are opportunities to add value. And so you perform surveys, you engage with, you know, your employees, and you find those opportunities to
deploy and find ways to make GenAI safe and effective in the workplace, right? And that really gets to education and raising awareness. You can't emphasize this enough. You have to train and engage your staff. You must teach them about the risks many people don't understand. Don't put any personalized information into an LLM. We have a joke in technology that LLMs are the Las Vegas of technology.
Alex Tyrrell (25:29.581)
What happens in an LLM stays in an LLM, potentially forever. And that's a risk a lot of folks don't really understand or appreciate. But you also want to support the growth and maturity of your staff, of your workforce, as these new solutions develop and increasingly add value. Yeah. So I think those are pretty much a number of key points to make sure that shadow AI does not become a major risk in your organization.
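The Las Vegas rule suggests one concrete control: scrub obvious identifiers before any prompt leaves the organization. The sketch below is intentionally naive; a production system would use proper PHI/PII detection rather than a few regexes, and the MRN format is invented.

```python
# Naive redaction filter applied to prompts before they reach an LLM.
import re

PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "MRN":   r"\bMRN[:\s]*\d+\b",   # invented medical-record-number format
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label} REDACTED]", text)
    return text

print(redact("Patient john.doe@example.com, MRN: 445821, call 555-123-4567."))
# Patient [EMAIL REDACTED], [MRN REDACTED], call [PHONE REDACTED].
```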
Francis Gorman (25:57.823)
That kind of leads me on to ask you a bit about innovation in highly regulated environments, and how do you structure and build teams to ensure that you can deliver on that value for the business? Because when I look at this in multiple organizations, everybody wants to AI everything rather than going, is AI the right place to get that outcome? But on the other hand, we seem to have a lot of people in the oversight and governance
Francis Gorman (26:25.158)
and very few in the execution and doing aspects of it. And I see this across many organizations. And it always leads me to think, you know, AI hit us at such ferocity and speed. Like, when we look at it, we're trying to understand, how do we make it safe? How do we understand where it's going and what it's doing, and how to keep the data privacy piece all intact, and who's consuming it for what reasons? And the last thing we think about is
Alex Tyrrell (26:27.471)
Mmm.
Francis Gorman (26:55.294)
Who's building the technology in and of itself? Are we a build, buy, or consume organization? I'm interested from a chief technology officer perspective. When you look at this problem that's come in, you go, OK, now we need to develop AI, but we need to do it in a way that fits within the rules of engagement because we're a highly regulated entity. But we also need the right mechanisms, and individuals to execute in those mechanisms, to be successful.
Alex Tyrrell (27:00.867)
Mm. Yeah. Yeah.
Alex Tyrrell (27:13.679)
Mm-hmm.
Alex Tyrrell (27:20.239)
Yeah, you absolutely have to innovate, right? So I think there are two organizational patterns that have developed that I think sort of can help address some of the tension. You want to innovate fast, but it's heavily regulated. You have to be compliant, right? So the two concepts that I think are very interesting is the first one, the team of teams, which I'll sort of explain, and then two in a box, right?
You know, you need dedicated technologists that are expert in cloud, AI, scaling enterprise solutions and managing costs and all that really important sort of IT focus, right? That's your engine for innovation, but they're not going to be experts in compliance or experts in delivery of care or experts on how to measure patient risk or many of the other factors that are necessary for working in heavily regulated industries like healthcare, right? So you create multi-
disciplinary teams right from the get-go, what we call a team of teams, stakeholders with different roles that have different decision rights, but all working together. And this is typically in an actual physical room. Remember those days, right? Really working together in person has also been key. Folks in a room voicing challenges, but also bringing their unique craft and experience to the table. So those stakeholders are there on day one.
And you don't want to bolt on compliance when you're in the terminal stages of delivery. That is not the time to try to figure out how to meet compliance requirements. And this is really about developing a collective understanding of the mission and purpose, the stated goals, and how to ensure compliance at every step. This is really, really key. And then once you sort of have alignment, you begin the actual work. So now you have to go into this operational mode. And this is where we focus on
what we call two in a box. And this is really kind of the core of our expert AI strategy that I talked a little bit about before. We pair a clinician, nurse or pharmacist with an AI engineer. What could be simpler, right? Small, intimate and focused. That is the kernel of the process. And they begin ideating, testing how to break down a complex problem. They're experimenting hand in hand to sort of develop that intuition around the art of the possible. And after a few cycles, the relationship begins to formalize.
Alex Tyrrell (29:39.381)
you know, the expert defines the box, the guardrails, the safety measures, and the instructions and guidance for attaining, for example, a good clinical outcome, right? The AI engineer must stay within this box while they turn the technical dials and the AI innovation really begins to accelerate, right? So the two in a box develops intimacy, it creates empathy and a more aligned vision. People are aligned on their purpose.
And that box can scale to multiple experts and AI engineers as needed, but always the two together, right? I emphasized it before: what sense does it make to have an AI engineer build a complex solution that they really know nothing about, right? You've got to have that sort of two-in-a-box approach. And this has been an incredibly effective strategy for us.
Francis Gorman (30:30.098)
That is actually really interesting, actually, the two in a box. So you've taken your clinical expert and you've taken your AI engineer expert and you've put them together. And you, instead of just coming up with one page of requirements and then somebody going, this is what they want. And, you know, the car turns into a pony. Yeah.
Alex Tyrrell (30:47.183)
Yeah, yeah, that traditional product management, waterfall. No, this is very, very intimate, very focused and very experimental. And that kind of builds the sort of narrative of a lot of this. So that's like, you know, you really have to look at many, many new variables, you know, the risks, the value, the clinical relevance, all these things that are coming together. Again, it's really feeling much more like science than it is technology. And it's really interesting, exciting.
But again, we developed the strategies to make sure we stay focused and can remain focused on that trusted, verified, explainable and transparent solution.
Francis Gorman (31:25.788)
No, I really, actually, I love that, Alex, I have to say, that is a fascinating idea. And actually you may have something there. I think you need to create a little framework that you can sell to Gartner or Forrester or someone on the back of that one. Two in a box sounds like it could be the next big thing, because it makes sense instead of just having a one-page requirements document and walking away and waiting for something to fall out of the machine at the far side. If you pair up the disciplines and then, I presume, you test and trial. So, I don't know,
Alex Tyrrell (31:26.691)
Yep. Yep.
Alex Tyrrell (31:33.271)
Yeah, yeah, Yeah.
Alex Tyrrell (31:44.387)
Yeah. Yeah.
Alex Tyrrell (31:51.575)
Absolutely.
Francis Gorman (31:52.935)
I have a friction point of X amount of patients a day with this problem, and you have the technology to maybe be able to ease that, and you work it through, you work the problem.
Alex Tyrrell (31:54.447)
All right. Yep. Yep. Yep.
Yep. Yeah. Yeah. And when you think about LLMs, what they're really good at, I mean, there's the notion of LLM reasoning and they're improving that, but they're better at following expert instructions and expert guidance. And that comes from the expert, and the AI engineers can take that expert sort of guidance and those instructions, what we call chain of thought, and they can improve the ability of the LLM to analyze and break down complex problems, giving you that better outcome, more accurate, more clinically relevant,
and more safe, explainable and transparent. And that is absolutely key.
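As a rough sketch of how that might be wired up in the two-in-a-box spirit, the clinician authors the chain-of-thought steps and the engineer owns the plumbing. The steps and template here are hypothetical.

```python
# Hypothetical expert-authored chain-of-thought scaffold. The clinical
# expert owns EXPERT_STEPS; the engineer only assembles the prompt.
EXPERT_STEPS = [
    "1. Restate the clinical question and the relevant patient context.",
    "2. List the verified passages that bear on it, with citations.",
    "3. Note contraindications, interactions, and red flags.",
    "4. Recommend only if steps 2-3 support it; otherwise defer.",
]

def build_prompt(question: str, verified_context: str) -> str:
    steps = "\n".join(EXPERT_STEPS)
    return (f"Follow these steps exactly, in order:\n{steps}\n\n"
            f"Verified context:\n{verified_context}\n\n"
            f"Question: {question}")

print(build_prompt("Dose adjustment in renal impairment?",
                   "[1] Example verified guideline text."))
```

The expert-owned step list is the "box": the engineer can tune retrieval, models, and formatting, but the reasoning scaffold itself stays under clinical control.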
Francis Gorman (32:33.33)
Before we finish up, Alex, I might just ask you: if you were to take a stand-back look now and give advice to leaders out there who are looking at artificial intelligence in all its guises, the key steps that you wish you had known when you started this journey, what would they be? What are the key bits of advice you would give to leaders out there?
Alex Tyrrell (32:52.813)
Yeah, right. So, I mean, I think, you know, first of all, you really have to look at, you know, what are you trying to achieve, right? So we talk a lot more about, like, owning the outcome. So what does that mean? You really have to define the problem statement. We talk about a process, you know, which we didn't invent, but the idea of, like, focusing on the jobs to be done, right? We're talking about agents. We're talking about workflow. You absolutely have to develop that customer intimacy. And it goes way beyond sort of your traditional, maybe,
UX and CX, where you're looking at sort of, you know, customer journey mapping. This is really starting from first principles, and it's difficult. It's not quite product management. It's not quite go-to-market, but it involves all these things. And most important, it involves experts that sort of authentically understand the jobs to be done, the workflow, and what are the outcomes, and what are the desired outcomes, and also be able to measure the potential for risk and, you know, whether or not
a solution even makes sense to build, or whether or not it can be effective, right? That's number one. Number two, where do you find these experts, right? We're very fortunate at Wolters Kluwer, where we have over 7,500 clinicians that work on our UpToDate solution for clinical decision support. We also have in-house licensed clinicians. Folks like that, those are your domain experts that are really driving forward expert AI, right?
And you can work with customers and you can find other ways to really understand those jobs to be done. But that is really, really key. And we found that that's the focus point. Really, from first principles on day one, design for the professional as the professional, right? You're not looking necessarily, I think we've really moved beyond the idea of just adding sort of GenAI features to add that, like, modest productivity gain. It is about closing the loop now, and in order to do that authentically,
You really have to get much closer to the customer and understand how to add that value and make sure you're doing it safely and effectively, right? I would also make one other comment and note, know, AI, it's changing so fast and it's really, really intense. That probably is true, but it's changed in ways that make that talent profile subtly different, right? The well-prepared pedigreed, you know, experts in AI.
Alex Tyrrell (35:16.879)
building teams of these PhDs. You may need folks like that with that skill set, but that isn't the universal skill set you need now. You really need to focus on talent that demonstrates curiosity, a willingness to learn, a willingness to experiment, reason and plan under uncertainty. Not everybody's comfortable with this. Some people like that sort of, like, here's the requirements document, let's set up all of our story points, we'll go into our Agile SAFe mode and we'll deliver an outcome at the end, we'll certify it and launch it.
You really want those people that can work in really ambiguous environments, take instructions and guidance and feedback from multiple stakeholders, and be very focused and disciplined, because it's all very new and we're learning at the same time. But I think that is really a key thing that may be overlooked in terms of how to develop the right talent, how to build the right organization and where you should focus in terms of your own workflow. So those are a couple of things that I would say I would give advice on, and wish, you know,
that I had been able to put a little more thought into prior to this journey on GenAI.
Francis Gorman (36:19.934)
Really appreciate it, Alex, and I think there's a lot of takeaways there. We got expert in the loop, two in a box and stay curious. So it's been a very, I suppose, insightful conversation for myself, and I've come away with a couple of takeaways, so I'm sure the listeners will get something out of it as well. So thanks very much for taking time out of your day.
Alex Tyrrell (36:26.489)
Stay curious.
Alex Tyrrell (36:31.641)
wonderful.
Alex Tyrrell (36:36.129)
Awesome. And super enjoyable. Yeah, super enjoyable. I really appreciate the time. Wonderful. Yep.
Francis Gorman (36:42.302)
Thank you very much.