Biotech Bytes: Conversations with Biotechnology / Pharmaceutical IT Leaders

AI Trends Reviewed 2024 & 2025 Predictions - Expert Insights From Top Biotech CIOs

Steve Swan Episode 24

Hi everyone! Welcome to Biotech Bytes by The Swan Group.

As we close out 2024 and look toward 2025, what lessons has AI taught us, and what can we expect in the future? Please visit our website to get more information: https://swangroup.net/ 

In this powerful discussion, eight biotech and technology CIOs—Bill Pierce, Ganesh Iyer, Bill Wallace, Rajvir Madan, Shola Oyewole, Eduard de Vries Sands, Elisabeth Schwartz, and James Schluter—reflect on AI's real-world impact in 2024 and share bold predictions for 2025.

Specifically, this episode highlights the following themes:

  • Reflections on AI’s role in 2024 and expectations for 2025.
  • Key strategies for addressing data governance and compliance.
  • The role of culture and collaboration in driving tech adoption.

Rajvir (Raj) Madan  [00:00:00]:
In my opinion, I think a lot of AI strategies tend to focus on the how rather than the why. Right. So we're talking about how do we bring these data sets together, how do we ensure that there's no biases in our models and all of that. For me, what's really important is that we focus on the why. We focus on the business problem that we're looking to solve. And we think about what solving that business problem is going to bring to the organization, and whether it's through AI, whether it's through automation, whether it's through ML, you probably don't want to go down the path of even solving that business problem unless it's really important and it's going to bring value for your stakeholders and your shareholders.

Steve Swan [00:00:41]:
Welcome to Biotech Bytes. I'm your host, Steve Swan. Today we're wrapping up 2024 and talking about 2025. I'm joined by eight different CIOs with wide-ranging perspectives and opinions on where we are and where we're heading next year. Some of these faces and names you should probably be familiar with if you pay attention to our podcast; there might be some that are new. We'll do our best to keep things organized and moving along. I'm sure at some point, if we start talking over each other, we'll work it out.

Steve Swan [00:01:08]:
So I'll just do a quick rundown of everybody's name. Everybody's name is already on the screen here, but I'll just run down who's here. Right now we've got Eduard De Vries Sands. We have Ganesh Iyer. We've got Raj. Thank you, Raj. We have Shola Oyewole, we have Bill Pierce, James Schluter, Elisabeth Schwartz, and Bill Wallace. Thanks, everybody, for showing up today. So I had sent out some questions in the initial email when I invited everybody, and I mentioned that I was going to probably just call on one person and let everybody dovetail off their answer to the question.

Steve Swan [00:01:51]:
And the first one I'm going to direct towards Bill. Bill Pierce, the first question that I wrote down: in 2024, we talked a lot about AI on our podcast, right? And I'm just wondering, in your opinion, has AI lived up to the hype, and how do you see it evolving in 2025?

Bill Pierce [00:02:09]:
An interesting question. I think it's whose hype, right, are we trying to live up to? If it's the vendor hype, I'll just give you an example: I bought a driver, a golf driver with AI in it, and I don't hit like Bryson DeChambeau, so something's not working quite right. I think if you had realistic expectations, sorting through the hype and understanding what's realistic, then you probably had a pretty good 2024. Steven, if you remember your and my conversation, we talked about AI foundations and being AI ready. And a lot of what my team did was preparing the executive team, and I had ELT-level meetings on this, preparing the executives for what does it mean for us, what does AI mean, and what are our strongest opportunities? So things like getting closer to our customer, doing some personalization using the data that we have, forecasting, we made massive improvements in forecasting, anomaly detection. One of the areas where maybe we didn't do quite as well as we wanted to is the predictive modeling that we did.

Bill Pierce [00:03:12]:
But I think we had a pretty good year setting our foundation by getting our data ready, right? So we did a whole data classification effort, making sure the cybersecurity was in place. We had a whole lot of data loss prevention and data detection and response setups, and encryption and the whole nine yards. And then getting through AI policies, which was fun, right? Working with HR and getting an AI policy that the whole company could live with, and then eventually rolling that out into an AI roadmap: what are our near-term opportunities, things like chatbots or supply chain or some of these projects that we took on, and extending that eventually to what we ultimately want to do, right? In life sciences, you've got a compound, you've got some research, you really want to change the outcomes in personalized medicine and the experiences of all human beings, right? So we're not forgetting that we're scientist-led and scientific by nature, but we just felt like some of our foundations for this year's hype would be a little bit more realistic. So I think for BioIVT anyway, my experience is it has lived up to a lot of what it can and can't do. And for 2025, I think some of our big things will be a heck of a lot more data, right? So we think that, you may want to call it this, you may not, the Internet of Things and big data will collide with AI, and then we'll just let the models mature and tune, right? We have realistic expectations about that, and as we get better, we move out of the supply chain and some of the operational use cases and into some of the more scientific ones, right? Biomarker detection and some of the really complex use cases, as we've tuned these models and cleaned up our data. And then, more important, Steven, you and I talked about this the last time we chatted.

Bill Pierce [00:05:03]:
How do you make sure AI is compliant? How do you make sure you don't have this big risk going out? And I said something that maybe people will laugh at, but I still hold true to this today, and that is we use our HR policies to drive how the AI is deployed. And what do I mean by that? We're doing a lot of testing to make sure that the AI is compliant, so that it can't do things that a human, right, that an employee wouldn't be allowed to do. Right? So we heard about the chatbot at a European shipping agency that told an off-color joke. Right. We all know an employee on the phone who tells an off-color joke would be terminated, probably on the spot.

Bill Pierce [00:05:48]:
Right. So as the AI starts to mature and become more operational and more integrated, you have to almost treat it like an employee. So we've done a lot on the testing and quality and compliance harness for AI. So 2024, big foundation year; 2025, excited to see what comes.

Steve Swan [00:06:09]:
Very cool. Lots there. You know, one of the things that you said at the very beginning when you were talking reminded me of a conversation that I had with Ganesh. Ganesh, when you and I were talking about the foundation, the education of really what AI is and what it can do, and educating your leaders, right? You and I talked a lot about that, because you were kind of like, well, I've got to figure out if they even know what they're talking about. Let's start there, right? Tell us about that.

Ganesh Iyer [00:06:37]:
We're doing something similar here at Harmony Biosciences. So, Steve, thanks for having me here. For those who do not know me, my name is Ganesh Iyer, and I'm the CIO here at Harmony Biosciences. And we are doing something very similar to what Bill just talked about. 2024 is going to be that year of aligning, educating, and defining AI for us. Obviously, as Bill started off his conversation talking about the hype, whether it's the vendor's hype or whether it is truly something for us, right, we don't know if we really need AI at this point in time. So I'm going to go as blind as that.

Ganesh Iyer [00:07:19]:
Right. And one of the key things that we were discussing within the board as well was how do we define governance, more than anything, right? The first step for us is we don't have a policy in place yet. So step one was for us to define a policy, and then, along with that policy, have a workshop with the business and try to understand AI. Is it truly AI, is it analytics, or is it some form of automation that the business needs? Because for the business, AI is a mix of all these things. So trying to understand and align with the business, bring in a third-party company to explain all this, understand what Gen AI is, try to understand what a lot of these applications define as AI within their platforms. Because the business is pretty confused about that: we've implemented NetSuite, we've implemented Coupa, we've implemented Conga, we are big on TM1, and all of these have built-in AI within their platforms as well. And a lot of these companies are calling directly on the business leads, and a lot of those leads are pretty confused about AI there.

Ganesh Iyer [00:08:45]:
And then here I am, right? A technical guy going and talking to them about Gen AI, ChatGPT, talking about using Teams as our bot, right? So trying to help them understand what these different concepts are is very key. And for us, 2024 is going to be that year of trying to understand. The other piece is for us to define a strategy for AI. And again, as a CIO, it's very easy for me to come in and define a strategy, right? But is that strategy going to be something that the business is going to embrace? That's the big question. Instead of me trying to align with them once I've defined a strategy, my approach is slightly different: I want them to be part of defining that strategy. So part two of that workshop, that's something we're planning in February, because January is going to be pretty busy once people return from the holidays. We want to define our strategy together as an organization.

Ganesh Iyer [00:09:50]:
What is it? Do we really want to embark on AI, or do we provide you first with analytics? What is that going to be? And then, based on that, define what our AI strategy is going to be, and then collectively take the AI policy, the governance we build around it, and the strategy to the board. I think that will make a lot of sense. And that's how we are approaching it. Hopefully I answered your leading question, Steve.

Steve Swan [00:10:19]:
No, it's perfect. Thank you, thank you. And you know, I think part of your answer dovetails a little bit into some of the stuff that Eduard and I talked about a little bit there too.

Bill Pierce [00:10:32]:
Right.

Steve Swan [00:10:32]:
You know, I mean, your approach, what you've done, is really focused on the business, but then you've taken almost an individualized approach from the technology side. Right?

Eduard De Vries Sands [00:10:41]:
I would agree. The way we look at it, we look at it to enable our business, so we really feel we're part of our business. So one part is guardrails. I think Bill and Ganesh, you talked about it: how do we make sure we have the right policies, we have the right counsel, we make sure that we do the right thing from a privacy and risk perspective. But the second part is how do we work with our business unit leaders and together say, what could AI do in your business unit? How could we reimagine how we want the business to look a year, three years, and five years from now? And what does that mean for our margin and for our revenue? And once you start to map that out at a business unit level together, you end up with, I would say, almost a strategy per business unit. And we then roll that up, because I don't have the idea that, as IT, I can decide this is the right strategy for the enterprise. We do it at a business unit level.

Eduard De Vries Sands [00:11:27]:
But it's interesting, once you start to inventory and share the repository of things we're doing per business unit, how many things are applicable to multiple business units. So we're really trying to create the demand, and IT is there to help fulfill the demand. The other approach we looked at originally was, well, what if we do it all? But then I'm kind of pushing it onto my business; that's not going to work. I need my business leaders to see how this is going to help their margin. And at the same time, in our markets, how do we see the market changing? How do we see the available margin for the whole industry? How is that going to shift when AI gets deployed at scale? Because I would like to be in a better position where I get a larger share of the margin versus my competitor. So I think it's a fairly strategic, per-market approach, and then we try to roll up what that means at an enterprise level.

Steve Swan [00:12:12]:
So overall, I mean, we're all getting involved with AI, right? And I think what I'm hearing, anyway, is it's a let's-wait-and-see, and let's just keep the continuum going from where we were in '24.

James Schluter [00:12:26]:
AI is evolving constantly, so we have to see how that path meanders and control that path. Talking about guardrails, that's necessary. We talk about compliance. I know that the FDA is setting up guidelines on medical devices, on the self-corrective side of the medical device, to ensure that there's trustworthiness in the process it's following and you build confidence in the technology. So, to pull together what everyone's talking about: we need to manage expectations, we need to educate, and we need to build trust and make sure that we're following the right path. Take small bites, not big chunks.

Steve Swan [00:13:08]:
Right. And we all talked about that, right? I mean, to get there, I think, and I had this conversation with a lot of you, the data, right? I remember Raj and I talked about that quite a bit. We talked about the data coming into this AI as something that we've got to, well, first of all, figure out and make sure we've got the right data in the right shape. But second of all, are we using the right data, and is there more that we can do with some of the data that's out there that we don't have? Like, Raj, you and I hit on EMR and EHR data at some point, right? You were like, we should be squeezing more of the juice out of that data.

Steve Swan [00:13:39]:
I mean, so in 25, do you think that's something that we're going to be pushing for? Tell me what your thoughts are there, Raj.

Rajvir (Raj) Madan  [00:13:45]:
Yeah, absolutely. So I think, from my perspective, there is a lot of focus on acquiring new data. But I strongly believe that a lot of the data that we need already exists. Take the EHR/EMR example that we've been talking about, where there's a lot of unstructured data that exists in EHR and EMR systems: these are doctor's notes, these are HCP notes, let's say. And now you're starting to see, using natural language processing, using AI, the conversion of those unstructured data sets to structured data sets, which could then inform a wide variety of use cases in the pharma and biotech industry. Right. Think about safety monitoring, think about patient recruitment, think about some of the other use cases around drug discovery and drug development, and the real-world evidence data sets that we've all been using over the past few years.

Rajvir (Raj) Madan  [00:14:40]:
Right? Yeah. So for me, I think we first have to look at some of the data sets that already exist and think about how we could squeeze more juice out of those data sets. I totally agree with that. I think the other thing, and I think we started to talk about this, is that, look, we have biases in our data sets today. Right. And we've been training a lot of these AI models, let's say, without thinking about some of the biases, without putting a lot of emphasis on some of the biases. Right. We all work for companies that are involved in clinical trials.

Rajvir (Raj) Madan  [00:15:18]:
And when you think about running a clinical trial, how do you make sure that the clinical trials that you run think about a wide variety of patient population sets? Right. So when you think about data, you've got to think about those biases as well. And for me, I think it's really important that we start to incorporate thinking about biases into how we prep some of these training data sets for building our AI models. So I think data biases, and then squeezing the most juice out of existing data sets, would be two of the things that I'm thinking about.
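
The kind of representation audit Raj describes can be sketched in a few lines. This is an illustrative example only: the `age_group` field, the record layout, and the 5% threshold are hypothetical, not any particular company's pipeline.

```python
from collections import Counter

def check_representation(records, field, threshold=0.05):
    """Flag category values that are underrepresented in a training set.

    records: list of dicts (one per patient/trial record)
    field: the demographic attribute to audit, e.g. "age_group"
    threshold: minimum share a category should hold before we flag it
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    # Return each underrepresented category with its actual share
    return {cat: n / total for cat, n in counts.items()
            if n / total < threshold}

# Toy trial roster, deliberately skewed toward one age group
roster = ([{"age_group": "18-40"}] * 90 +
          [{"age_group": "41-65"}] * 8 +
          [{"age_group": "65+"}] * 2)

flagged = check_representation(roster, "age_group", threshold=0.05)
# The "65+" group holds only 2% of the roster, so it gets flagged
```

Running a check like this over every training-set prep, before any model training, is one lightweight way to make "thinking about biases" a routine step rather than an afterthought.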

Steve Swan [00:15:52]:
How do we eliminate, or try to limit, those biases? I think Bill Pierce talked a little bit about it, right? But I mean, what do we do? How do we do that? Where do we go? Is it just training the models? Is it our data? Is it a combination of the two? I don't know. Does anybody have anything on that?

Bill Wallace [00:16:15]:
I think that, across the different business areas, we started the year educating a lot of them on AI and the new aspects of AI, because certain large language models are relatively recent, and that's what has led to a lot of the AI hype. But there's been machine learning and other types of AI now for almost two or so decades. So I think there was a lot of education. And at the beginning, people didn't fully understand; they just thought, okay, let's take this model and throw it against the data. But when they start to recognize that you can have hallucinations, when they start to recognize that it isn't so much artificial intelligence as it is pulling together things that it can find relationships between, that has led to a number of conversations about, for a given business use case, do we put the data from several different sources in a specific area, so the AI can't accidentally go somewhere else and pull something in? You don't want some large data area where someone doing something on the clinical side could accidentally have their AI pull in something on the marketing side, and vice versa.

Bill Wallace [00:17:40]:
And so a lot of this year, I think, has been people starting to understand the aspects of AI that they need to think about as they consider the business use cases they're looking at. And yes, there's a lot of pulling data together, and certain metadata and different things are done so that the AI functions in the way that they want. But it's also about taking a new look. People have been used to data in different areas, in a lot of ways sort of silos, and if you want to integrate it into an analytics environment, you have to do a lot of work, et cetera. They initially looked at it and said, oh, AI can do all this for me. But I think there's been a recognition that we need to be careful about what data we let an AI model go against, as we determine the specific business use case we're trying to solve for.
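
The partitioning described here, keeping clinical and marketing data in separate areas so a model can't cross-pull, often shows up in practice as a hard metadata filter at retrieval time. A minimal sketch, with made-up document tags; a real system would additionally rank the filtered pool by embedding similarity to the query.

```python
# Each document carries a domain tag at ingestion time; retrieval is
# hard-filtered on that tag, so a clinical query can never surface
# marketing content regardless of semantic similarity.
DOCUMENTS = [
    {"id": 1, "domain": "clinical",  "text": "Phase II safety summary"},
    {"id": 2, "domain": "marketing", "text": "Launch campaign brief"},
    {"id": 3, "domain": "clinical",  "text": "Adverse event narrative"},
]

def retrieve(query, domain):
    """Return only documents from the caller's domain partition."""
    pool = [d for d in DOCUMENTS if d["domain"] == domain]
    # A real system would rank `pool` by similarity to `query` here;
    # the point of the sketch is that the filter happens first.
    return pool

clinical_hits = retrieve("adverse events", domain="clinical")
```

The design choice is that the filter is applied before any ranking or generation step, so the isolation does not depend on the model behaving well.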

Bill Pierce [00:18:39]:
Steve, that's why I talked a little bit about that data classification, right? So by partitioning your data, by classifying your data, exactly what Bill was talking about can be managed pretty tightly, because I a thousand percent agree with him. You really don't want some marketing data to spill into that clinical data and then have that data be analyzed as the source of truth, right? You'd have something that had undue influence. I also talked about that testing; I'll give you a little bit more on the testing side. As he said, math is math, right? It's been around a long time.

Bill Pierce [00:19:14]:
So there's kind of Bayesian and quadratic programming algorithms, not to put too much nerdy stuff on there, but they can find bias and actually help you tune the bias over time. So even though your company may produce biased data, there are mathematical techniques: when you start tuning your model, you look at anomaly and outlier detection. You might start with patient safety and say, look at this, here's the normal distribution and here are the outliers; let's really dig in. And you really use that to start looking at where you might have bias or where you might have errors in your data. And that's why it takes a little bit of time to tune these models.
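
The outlier-first approach Bill sketches, assume the bulk of the data is roughly normal, flag the points far from it, then dig in, can be illustrated with a simple z-score pass. The adverse-event counts below are invented for illustration.

```python
from statistics import mean, stdev

def zscore_outliers(values, cutoff=3.0):
    """Flag points more than `cutoff` standard deviations from the mean,
    assuming the bulk of the data is roughly normally distributed."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > cutoff]

# e.g. adverse-event counts per trial site, with one anomalous site
counts = [4, 5, 6, 5, 4, 6, 5, 5, 4, 60]
outliers = zscore_outliers(counts, cutoff=2.5)  # flags the 60
```

A flagged point is only a starting place for investigation: it may be a data error, a genuinely anomalous site, or a sign the training data is biased, which is why this step precedes (rather than replaces) the manual dig-in Bill describes.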

Bill Pierce [00:19:50]:
And the discussion I had at the ELT is this may take us 6 to 18 months to mature, right, before we're comfortable with this going outward. Now, I worry more probably about the inward data coming in, because we did such a big data classification. So if I just open up ChatGPT and let it pull in everything, that's very dangerous. So we've done a lot to wrap ChatGPT and what it pulls in, and what goes out is really highly restricted. Right. Pretty much, from us, nothing goes outward.

Bill Pierce [00:20:21]:
So again, I think there's a lot of caution that the different leadership teams have to take when they start managing this data.
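
One common way to implement the kind of ingress restriction Bill describes is a classification gate in front of the model: documents are tagged during the classification effort, and anything above an allowed sensitivity tier never makes it into the prompt sent to an external service. This is a hedged sketch with hypothetical tier names, not a description of any specific product.

```python
# Hypothetical sensitivity tiers, lowest to highest
TIERS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def build_prompt(question, documents, max_tier="internal"):
    """Assemble prompt context, dropping anything above the allowed tier."""
    limit = TIERS[max_tier]
    allowed = [d for d in documents if TIERS[d["tier"]] <= limit]
    context = "\n".join(d["text"] for d in allowed)
    # Only the filtered context ever reaches the external model
    return f"Context:\n{context}\n\nQuestion: {question}", allowed

docs = [
    {"tier": "public",     "text": "Published trial abstract"},
    {"tier": "restricted", "text": "Unblinded patient-level data"},
]
prompt, used = build_prompt("Summarise our Phase II results", docs)
# Only the public document is included; the restricted one is dropped
```

Because the gate runs before the API call, the restriction holds even if the model or its prompt is compromised; the classification labels, not the model, decide what leaves the building.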

Eduard De Vries Sands [00:20:31]:
Yeah.

Rajvir (Raj) Madan  [00:20:31]:
And I just want to go back to a point that Bill Wallace made, and I think this is important; I just want to make sure this is not lost on us. Right. So in my opinion, I think a lot of AI strategies tend to focus on the how rather than the why. Right. So we're talking about how do we bring these data sets together, how do we ensure that there's no biases in our models and all of that. And, you know, I'm an engineer; I've been enamored with technology and all the sexy AI tools that are out there. But I think for me, what's really important is that we focus on the why. Right.

Rajvir (Raj) Madan  [00:21:09]:
We focus on the business problem that we're looking to solve, and we think about what solving that business problem is going to bring to the organization. And unless it's really going to bring significant value to the organization, whether it's through AI, whether it's through automation, whether it's through ML, you probably don't want to go down the path of even solving that business problem unless it's really important and it's going to bring value for your stakeholders and your shareholders, let's say. Right. So I think it's really important that you focus on the why versus focusing on the how, or at least that's where you have to start, in my opinion.

Eduard De Vries Sands [00:21:48]:
And I think this is in line with what we've done in IT as well over the last couple of years: how do we become outcome driven? We're trying to steer towards an outcome and not say, hey, these are the tools we have available. I think, if you think about where healthcare could go, you would hope eventually that we will get reimbursed for the outcomes our medications and our treatments create. So what we're doing right now should set us up to get there, and that will then drive how we want the market to look in a couple of years, and how this specific effort can help get us to where we want to be. So I fully agree.

Steve Swan [00:22:21]:
Being outcome driven like that, Eduard, I think dovetails into something that takes it all the way back to the root, right, with the people that you hire and how you bring them up to speed. And Shola, I want to tap you on the shoulder here, because I want you to share with folks the way that you approach your group and your technology, and how you keep folks curious and experimenting in the right direction. Can you tell us a little bit about that?

Shola Oyewole [00:22:46]:
Hi. Thanks, Steven. Good morning, everybody; glad to join everybody on today's call. I run digital innovation at United Therapeutics. I was CIO for the first 19 years, but for the last seven I've just focused on enabling new technologies into the company. And I do this by encouraging a lot of experimentation. I'm sure you all can remember the big words of the last few years: big data, voice over IP, cloud. There's always a newfangled toy or tool that the media or whomever brings to our attention.

Shola Oyewole [00:23:30]:
And everybody runs around trying to be big-data ready or cloud ready or whatever. AI was invented in the 1940s by Turing, okay? He came up with the concept of a machine that can do all kinds of things. We've had AI with us now for 84 years. All of a sudden they give it a new name, generative AI. That sounds cool. But I'll tell you, the value I've gotten from it so far: very, very useful. It's like, do you remember Microsoft's Clippy, that smart little paper clip?

Shola Oyewole [00:24:06]:
Yeah. Well, that is what generative AI can do for us today from a desktop point of view. You can also use it for research. The reluctance to use it in R&D, or in anything to do with drug development, is the unpredictability. And it is hard for me to convince compliance and legal that the results of a generative AI session can be predictable and consistent. It is hard to prove where this machine got its answers from, even if it's from a corpus of data you fed it and trained it on. Until we reach a point where we can truly understand where it gets its information from, it'll be hard for us to incorporate generative AI in drug development.

Shola Oyewole [00:25:09]:
I think for research it is super helpful. But to actually get FDA clearance on a device, maybe software as a medical device with built-in generative AI, I think we're a little ways from there. I do a lot of experimentation here. I do a lot of proofs of concept at United Therapeutics, very small scale. But what we've done is formalize this: we've built a data governance group. We want to first understand what data we own, what we have, and what kind of information we think we're going to need down the road.

Shola Oyewole [00:25:45]:
And I am now building an AI center of excellence, where we'll have business owners, data owners, and, more importantly, a group that will help vet problems the business is trying to solve. To me, generative AI is just another tool. And the question is, can this tool be useful, and can this tool help us make better drugs, faster drugs, or even improve the quality of life of our patients? Every experiment that I run has to come up with answers like that: how does it help my patient? How much faster can we get a drug to market? And if I could quote the Gartner hype cycle, I think we've reached the point of the trough of, what's the word again, trough of disillusionment. We thought this could do so much. I mean, now you have to have titles like Chief AI Officer. I'm like, why? There's no Chief Electrical Officer today for a tool. Why would you go change your title to be Chief AI Officer? What does that mean? So I think we should give ourselves some time and let this mature a little bit.

Shola Oyewole [00:27:19]:
Let's keep experimenting, let's keep playing with it, of course. But I think the jury is still out. The business is looking at us and saying, how do we make money with this? And I don't have the right answer yet. So that's where I am on this, Steve.

Steve Swan [00:27:36]:
I think everybody's still trying to figure out how to make money with it, right?

James Schluter [00:27:41]:
Well, yeah, it's not an easy button that you hit on a desk, you know, so it takes some time.

Steve Swan [00:27:48]:
Right.

Bill Wallace [00:27:49]:
And I think that's one of the things: we've looked at business use cases, what is the ROI, what is the benefit that AI can bring to a process versus the level of effort. You know, I had a conversation with one of the business areas, and they were talking about, well, if I ask this AI things, I sometimes get answers like I hired an intern out of college and asked them to do some research. And I said, well, AI is going to give you answers, but you do need to make sure they're the answers that you're looking for. And we had a long conversation, and part of it was, look, if I use that analogy, then it costs you something to have an intern for the summer. It also costs you something in the compute, in bringing the data together, and in building metadata for that data so that the AI can run. And you need to make sure that as we look at this solution, it isn't going to be something where it's the cost of five interns instead of one. And the business is starting to understand, as we've worked with them a bit, that, as Shola was just saying, it's going to be a marathon, not a sprint.

Bill Wallace [00:29:23]:
And that's, I think, part of where this hype cycle has been, and why it's gotten to the trough of disillusionment: some of the consulting companies that were coming in and talking to the business earlier this year were promising a lot and being very vague on what it would take to get there. And we've come to a point where we're a little bit more focused on those business outcomes and are able to more realistically estimate what it's going to take to get there. And there is, I think, an emerging understanding with a lot of our business partners that, first, it is a true collaboration between technology and business, as we always look for when we're trying to improve these business processes. Because this is emerging, it's so much more important that we're all aligned. And at the same time, it's not necessarily going to give you everything you wanted right out of the gate, and there does need to be an understanding that it's going to take a while to get to the ultimate endpoint they're looking for.

Rajvir (Raj) Madan  [00:30:32]:
I think, Bill, if I can just build on your point. I read this interesting article over the weekend, happy to share it with everyone. It talks about a distinction that I think we're still getting our hands around, which is that generative AI tools like ChatGPT shouldn't be thought of as a search engine; they should be thought of as a dialogue engine. Right? And that was a big aha moment for me: you shouldn't be giving it a query and then expecting it to come back with an accurate answer. You have to start to have a dialogue with it, right? You have to continue to engineer that prompt and continue to refine that prompt before you get output that's going to be valid for you. Right? And I think that's the paradigm shift that a lot of us are wrestling with: not thinking of it as a search engine, and thinking of it as more of a dialogue engine. And I think as soon as we get our arms around that, we're going to start to see more value coming out of some of these generative AI tools.
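
Raj's search-engine versus dialogue-engine distinction comes down to state: a dialogue engine carries the full message history into every turn, so each refinement builds on the last. A minimal sketch, where `call_model` is a stand-in for a real chat-completion API rather than any specific vendor's client.

```python
def call_model(messages):
    # Placeholder: a real implementation would POST the full `messages`
    # list to an LLM endpoint and return the assistant's reply.
    return f"(reply to: {messages[-1]['content']})"

class Dialogue:
    """Keep the whole conversation so each turn refines the previous one."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, text):
        self.messages.append({"role": "user", "content": text})
        reply = call_model(self.messages)   # history goes with every call
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Dialogue("You answer questions about our safety data.")
chat.ask("Summarise site-level adverse events.")
chat.ask("Now restrict that to Phase II only.")  # refinement, with context
```

A search engine would treat the second question as a brand-new query; here it is interpreted against everything said before, which is exactly the refine-the-prompt loop Raj describes.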

Steve Swan [00:31:42]:
It sounds like the sales cycle, right? The hype cycle got ahead of us. We got out over our skis. But how do we handle the governance of this? I mean, the board's got to be thinking about this. They don't have a choice, right? Is that on us? Is that on IT, to think about the governance of this? Are we part of that? Are we part of the problem, part of the solution?

Bill Pierce [00:32:06]:
And actually, my board's pushed us to be progressive in this space, but it's been a very cautious progression. Right. It comes back to what's proven: some of these machine learning models in particular have been around for a long time. I think where we've made some really good success is with what's proven. That's why I pointed to things like personalization of our services, or forecasting, where we've made massive steps forward. We used to miss some of our forecasts by a few million bucks. Now we're almost down to the dollar in many cases.

Bill Pierce [00:32:43]:
Right. So some things have worked really well. Anomaly detection, surprisingly enough. You know, go back to the Roger search engine. I actually found that, if you're willing to learn, the search and the anomaly detection, for trained people, actually help you to understand your data and explore your data before you open it up to the world, right? So the data scientists actually tend to do a pretty good job. And when you're trying to do something like supply chain optimization, they can run a lot of anomaly detection on where the supply chain is really broken, and then start to focus with the business on the techniques for fixing that issue.
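The explore-before-you-open-it-up anomaly detection Bill describes can be illustrated with the simplest possible version: a z-score flag over a numeric series. The shipment figures and the threshold below are purely hypothetical; a real supply chain pipeline would use far richer models than this:

```python
# Toy anomaly detection: flag points far from the mean, as a first pass
# over operational data before exposing it more widely. Simplest form:
# a z-score threshold. All numbers below are made up for illustration.
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return indices whose z-score magnitude exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma and abs(v - mu) / sigma > threshold]

# Hypothetical weekly depot shipment counts with one bad data point (240).
shipments = [102, 98, 101, 97, 103, 99, 240, 100, 96, 104]
print(flag_anomalies(shipments, threshold=2.0))  # prints [6], the outlier
```

In practice the interesting work is in what a data scientist does with the flagged index, tracing index 6 back to the depot and week that produced it, which is exactly the "where is the supply chain broken" conversation with the business.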

Bill Pierce [00:33:23]:
Right. So we've had massive success there as well, from our customer service and our supply depots and things of that nature. Where we haven't had that success, it's because it's a hell of a lot more complex. And this is back to Charlotte's comment: yeah, we can search a biomarker and we can get back 650,000 hits on that biomarker. What do you do with that? There's a lot of data coming back at you. So how we control that problem set in that domain. Right.

Bill Pierce [00:33:51]:
It comes back to the scientists and having really good ontologies and taxonomies and controlled data sets. Right. I don't have open ChatGPT anywhere in the company. I have nothing where it's open to the Internet. I've either ring-fenced it in the Island browser or ring-fenced it with Microsoft. I've just done so much ring-fencing.

Bill Pierce [00:34:13]:
It's actually very constrained, and it's partially because, hey, let's experiment, let's go as fast as we are comfortable, but let's also not have the big misstep that we've seen a lot of people have. I think it was Netflix who used a whole bunch of images or video content that they shouldn't have used.

Rajvir (Raj) Madan  [00:34:32]:
Right.

Bill Pierce [00:34:32]:
It went out and found it and brought it back in, and then they had the big reset. Right. Oh, we made a big mistake. Well, imagine doing that not in a media company, but in a life sciences company. That could be the end of you.

Bill Wallace [00:34:46]:
Well, what we've done here is we in IT rolled out an AI policy. Legal and IT are very aligned, and HR, and we've leaned toward a lot of education, because we know that eventually each person is going to have some kind of access to a large language model from their mobile device, from their home computer, et cetera. Just like Google search, eventually it's going to be ubiquitous. It's going to be everywhere. Even my Meta glasses have some version of AI in them.

Bill Wallace [00:35:35]:
So it will be really impossible to prevent people from using this. What we've done is focus on educating them and saying, okay, these are the guardrails. Proprietary data is proprietary information; you cannot share that in any LLM system. This is what it looks like. Any results you get from an LLM system, you should validate.

Bill Wallace [00:36:06]:
You should already be some kind of subject matter expert in whatever you're asking it. If you're going to use it for any information you are going to publish, you must cite your sources. So there are multiple layers. And I think the best thing for any employee is education, experimentation, and, through experimentation, further education. Shutting everything down, which was our initial strategy in this company, and which I don't run anymore, was only good for a short period of time, until we noticed there were LLMs all over the place. So I think right now it's education first and lots of experimentation, and now that we've built our center of excellence, we can begin to focus on value-driven applications that generative AI could help us with.

Eduard De Vries Sands [00:37:11]:
Yeah, one thing that we've done: there's individual productivity AI, your Copilots, your ChatGPTs. In addition to the education that we did earlier in the year, we also rolled out individual productivity, in a limited fashion. In our case we're Office 365, so we primarily did it through Copilot, and we basically went to all the managers and said, hey, find out if anybody on your team would like to use AI for their individual productivity. Again going through compliance and legal, we can turn it on for them, but we do need your approval. So I would say right now we have about 10% of the organization using AI, and we actually have sessions about every three to four weeks where we just get that group together, whoever can attend: what have you been using it for, lessons learned. And there really has been a gain in understanding. Not so coincidentally, a lot of the individuals who are interested at an individual level are involved in the conversations we're having on the business aspects where we could get better efficiency.

Eduard De Vries Sands [00:38:43]:
And you know, Ganesh was mentioning it before, right: AI is starting to come in all these different software-as-a-service solutions. So we want to make sure that everybody, as Raj was saying, understands it's a dialogue engine, not necessarily a search engine. And as they're working with the individual productivity tools, it really has helped to open up to them what it can do, what it cannot do, and what it does better. That actually has changed some of the conversations about which business processes can benefit from AI. And I do think the more our business partners and colleagues get used to AI and see how it works in a controlled fashion, the better. One of the reasons we did Copilot is because we can keep it just to our data, just to the data that a specific individual has access to within the 365 ecosystem.

Eduard De Vries Sands [00:39:42]:
But that kind of self-education, as well as the self-produced efficiencies, has helped our conversations as they've gotten more familiar with it.

Bill Pierce [00:39:56]:
Bill, I want to second what you just said. When I started turning on the data loss prevention and really monitoring AI usage, I was horrified at what I saw, and maybe this is what Charlotte's talking about. But you can't just shut it off, right? If you shut it off, you become kind of Big Brother in the evil empire. So I just used the data to start educating people. When somebody was screenshotting GDPR data, I would contact them and say, why, what are you doing that for? When somebody was Gmailing something, I'd ask, why, what are you doing that for? And then we gave them, just like you did: okay, this Copilot is safe. Or hey, this ChatGPT we wrapped in the Island browser is safe.

Bill Pierce [00:40:38]:
Can you please go here? Because a lot of the use cases were kind of ironic and maybe infantile: just editing. People were sending content through, and I had somebody send a board deck through to edit it. I'm like, okay, open ChatGPT, send my board deck to the world, right? And so you kind of chuckle, but you're mortified at the same time. We've done a lot. I have three layers of data loss prevention and data detection and response systems, and we've put a lot of energy into that, and into educating technology to then educate the business: hey, why are you doing that? And ask really good questions. Now we're putting a library out there of the tools that, per our policy, are safe. Knock yourself out here, right? You can do a lot here that's very safe for the company.

Elisabeth Schwartz [00:41:33]:
So we've taken kind of a different approach, in that we've gone completely by use case and by what the business needs. It's not a technology-driven process, it is a business-use process. So we actually have discrete applications that we use per business area, and they're pretty basic. They're not really generative; it's much more auditing or things like that. And we've been very open with our users that they cannot put things into ChatGPT because it'll get out. They should know that it is not a closed system and that anything they put in could potentially be released.

Elisabeth Schwartz [00:42:12]:
So from that perspective, we just have very discrete uses, and we're not really using generative AI. I haven't seen a use case for that.

Bill Pierce [00:42:23]:
Elisabeth, something my CEO did that was extremely helpful is, in our strategy sessions, talking about the strategy and the use cases and where he sees this going, but also educating the ELT. It wasn't me educating; he brought up some cybersecurity breaches where data leaked, and, you know, a competitor of ours had a $75 million hit. Just educating at the C-suite is extremely helpful. When it comes from the top down, you'll find that people tend to listen a little bit more than if it's just IT, just a cybersecurity analyst saying, hey, don't do that. It's much better coming from the top.

Elisabeth Schwartz [00:43:05]:
The one place that we are using AI is on our security. Our SOC is basically all AI; we use a program that does it.

Eduard de Vries Sands  [00:43:13]:
Exactly, makes sense. And what we've seen is that our CEO as a sponsor works really well, because when he pushes the business units, you obviously get more of a reaction than when IT does it. And he walks that fine line between, hey, we've got to grab these opportunities, find these opportunities, otherwise we're not going to be market leading; but at the same time, let's protect the downside. For example, he went to MIT for a day, to a course on how CIOs handle AI, and that was an eye-opener, because after that we infused it into many more of our conversations. Is the real payoff there? I think it's starting, but I do feel we're ahead of the market, and to me that's important, because that's where we want to stay while we also manage the risk and make sure that, like you've all said, like Elisabeth said, we don't take risks with client data, we don't commingle data, and things like that. It's a balanced approach.

Bill Pierce [00:44:00]:
Yeah.

Rajvir (Raj) Madan  [00:44:00]:
And I think for me, one of the points of education, and even a point of learning, has been that the traditional ROI model is only one of the ways you can measure the value of an AI-driven use case. I have two other frameworks that I think about. One is the ROE model, the return on the employee. When you think about tools like Microsoft Copilot, for instance, it actually does not have positive ROI in the traditional sense, but it has positive ROE, return on the employee, because it's helping the employee not work on commodity tasks and getting them to work on more value-added tasks. I think of that as a return on employee versus a return on investment. And then there's another part of it, which is return on the future. If you think about use cases like drug discovery and drug development, Shola, you were mentioning some of those use cases: those actually don't have immediate return on investment.

Rajvir (Raj) Madan  [00:45:02]:
They have return on the future, because you have to invest in them for, say, three to five years, and that's going to give you a competitive advantage down the road. Right. So part of the education has been how you get your executive team to understand that it's not only ROI, but ROE and ROF as well. And I think that changes the financial dialogue with your executive team as well.
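Raj's ROI-versus-ROE framing is easy to make concrete with a back-of-envelope calculation. Every figure below (seat count, license price, attributable savings, hours saved) is a made-up illustration, not data from the episode:

```python
# Sketch of the ROI-vs-ROE distinction: a tool can be ROI-negative in
# hard dollars while still freeing substantial employee hours for
# higher-value work. All numbers are hypothetical.

def roi(dollar_benefit, cost):
    """Traditional ROI: net dollar return per dollar spent."""
    return (dollar_benefit - cost) / cost

def roe_hours(users, hours_saved_per_user_per_week, weeks=48):
    """'Return on employee': total hours redirected to value-added work."""
    return users * hours_saved_per_user_per_week * weeks

# Hypothetical Copilot-style rollout: 100 seats at $360/year each,
# with only $20k of directly attributable dollar savings...
print(f"ROI: {roi(20_000, 100 * 360):.0%}")       # negative in dollars
# ...but roughly 1 hour/user/week freed for higher-value tasks.
print(f"ROE: {roe_hours(100, 1):,} hours/year")   # 4,800 hours/year
```

The "return on the future" leg doesn't reduce to a one-line formula the same way; it's closer to a multi-year discounted bet on competitive advantage, which is exactly why Raj argues it needs a separate conversation with the executive team.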

Ganesh Iyer [00:45:25]:
I totally agree with that. There is no ROI; the way I see AI, it's all about efficiency. How can you improve what you're doing? And the other thing that we've made sure we talk about, at least to the leaders, is that AI does not mean a reduction in jobs. That's a very clear message that needs to go out. Right. Because a lot of people have this myth that you bring in AI and we're going to lose our jobs. Right.

Ganesh Iyer [00:45:59]:
I think the approach has to be: AI is going to help you do your job better. Right. But again, I've been listening to all of you, all great points. We see a lot of other challenges, though. One of them is that, and Steve, you and I have talked about this a couple of times, we've grown too big too fast. Three years ago we were a $180 million company. Today we are close to $800 million.

Ganesh Iyer [00:46:31]:
In three years. We're planning to be a billion in another year. Right. Our problems are different. We were not digital, so in those three years we implemented close to 47 applications. Right now we are trying to digitize a lot of our processes. The business is still not ready.

Ganesh Iyer [00:46:55]:
And then here we have AI, and everybody wants AI. But you are not completely digital. Your processes are not integrated. We do a lot of manual integration; I do fat-fingering. Case in point: between my contracting application and my procurement application, I manually enter the vendors in two applications. Same vendor, right? So I still do things that are very rudimentary.

Ganesh Iyer [00:47:26]:
Right. And very elementary. The way I look at it, we are only focusing AI on certain parts of the process. We are still not looking at AI holistically across the drug development process, where we could do a lot of things. But again, are we ready for that? We are not. Right? So for us, learning what we actually need, what I call the fit-for-purpose solution for harmony, is something that we are very, very focused on. It's not about AI just because AI is there in the market, or the fact that I as a CIO want to put it in my resume, or somebody else wants to put it in their resume, saying we've done AI and you're ready for the next job.

Ganesh Iyer [00:48:14]:
And I'm being very candid here, but it is about trying to understand what it is that we need and whether we truly need it. Because now a lot of boards are also pushing AI. Three months ago, my boss, the CEO, didn't even think AI was something we needed to talk about or discuss. Now suddenly he comes and says, oh, Ganesh, we need to get AI. Everybody needs to be aligned. We need a strategy, the board needs governance. All of a sudden, right? But the way we think, and when I say we, I've already aligned with a lot of business leaders, and we all collectively think: yes, the board is pushing, and for us to be able to answer the board, we need to be aligned and understand what AI truly means for us.

Ganesh Iyer [00:49:06]:
And for all we talk about, it may just be a roadmap. And the roadmap could be that 2026 is when we start looking at AI.

Bill Pierce [00:49:14]:
So, Ganesh, I was in a similar situation: 16 mergers and acquisitions, and you end up with a lot of operational complexity, right? And when I mentioned the forecasting gains that we've had, it was exactly to that point: multiple ERP systems, multiple operational systems, all disparate data. By putting that in the data lake and then putting AI on top of it, we actually got to a high level of precision. So by playing that back to the CEO and CFO, who were highly skeptical, by the way, that that investment would pay off the way it has, and now with the board watching that investment pay off, I don't have a justification issue anymore. Now it's: okay, what are the business unit leaders' true business problems that we're trying to solve? And have them bring the problems up from a strategic perspective. And I can say, again in our strategic planning, and why I mentioned to Steve at the beginning that I'm excited for 2025: out of our top three initiatives, two of them have AI as a core component. Whether we can be successful or not remains to be seen. We've got to go through that learning curve and that development curve.

Bill Pierce [00:50:28]:
But it's going to be exciting to see a business unit leader lead a whole transformational new product capability that actually will be IP for us as well. So again, I'm pretty excited for where we can go with this, if we understand where the focus is. And I think that's the hard part, particularly for a company with heavy growth, heavy M&A. Because there are too many problems to attack.

Bill Wallace [00:50:54]:
Right.

Bill Pierce [00:50:54]:
You're going to really have to focus.

Rajvir Madan  [00:50:58]:
It can make scaling easier; I think that's what you just described. We all did integration in the past, but now how do we scale and get some of the leverage we want without constantly doing the full integration? Because the value is not always there, as I think we've all learned over the years. And AI can be a tool in the toolkit to achieve that. And you're all right: it's never the goal, but it can be a valuable tool.

Bill Pierce [00:51:20]:
And does the board want to see or hear the failures? Maybe not. Right. That's been an interesting dialogue for us. Don't do it if it's going to fail. Well, actually, the failure may be more educational for us than anything else.

Ganesh Iyer [00:51:34]:
Right.

Bill Pierce [00:51:34]:
So I think, you know, Charlotte could probably give us lots of failures that she'd say were very educational for them from an innovation perspective.

Bill Wallace [00:51:45]:
Well, I've been working with our medical affairs team to help draft a clinical trial protocol where we're going to leverage artificial intelligence to help diagnose a disease. Now, that is not generative AI; that is more, I guess, machine-learning-type AI, where it is going to analyze a patient's structure and physiology and try to predict a disease. That is regulated, that is controlled, that is software as a medical device. That is a qualified and validated system. That is a practical application of artificial intelligence, but it is controlled. It is not generative AI. It's a clinical trial.

Bill Wallace [00:52:44]:
And we're going to test this product to see: is it as good as or better than the real thing, the gold standard, as it's called? Successes like that bring business value. Because what you're proposing is: can I use software to diagnose a patient's disease, non-invasively, faster and better than today's standard of care, which is running a catheter into the patient's heart to detect the pressure in the heart? Instead of doing that, can we do this digitally? So there's a clear business value for that. And if we can find more of those, I think that will encourage management, or the business, to look more into AI as a tool to help advance the business needs. So this is a first for us, and I find it very exciting.

Rajvir (Raj) Madan  [00:53:58]:
I would build on your point. Business value, absolutely: you have to start with the why. You have to think about the use case and the business problem. But I think the other question you have to ask yourself is, are you culturally ready as an organization to transform with the use of AI or not? If the cultural element is not there, you can forget about AI. The first thing you need to focus on is making sure that you have the right cultural DNA to actually move AI initiatives forward. And if you don't, I think you become a chief cultural officer and focus on that before focusing on being a CIO or a CAIO.

Bill Pierce [00:54:45]:
I couldn't agree more. If your chief medical officer or chief scientific officer doesn't support that initiative you have going on there, even though you might have done feasibility and patient recruitment, even though you might have done safety and pharmacovigilance, and now you're putting it together more holistically, if they're not ready for that, that's going to be a huge barrier. I think it has to be discussed at the executive level.

Bill Wallace [00:55:09]:
And that's why I lead through education and awareness. The folks who are actually going to do this work, I expose them to this experimentation. I have a pilot group; it is voluntary, people join, and it's a Copilot pilot group. So the first exposure is to Copilot, which sits in my enterprise, so they can play safely. It's a very safe sandbox for them to play in. And they use it as, probably as you would call it, I wrote it down here, you said it's a dialogue engine. So the initial reaction is to use it as a dialogue engine, to help them with their day-to-day editing of documents and all that.

Bill Wallace [00:55:57]:
That is to breed confidence and comfort with this. The next stage would be: okay, what aspects of your business can we leverage generative AI in? You know, we have a librarian service here, a group of people who answer queries about our drugs, diseases, patients, et cetera, and they have to respond to each query with a customer-personalized email. Well, this is a great use case for a generative AI tool. So I set up a private LLM for them, trained on their data. Every time a request came in, they would cut and paste that request into the prompt, and it would provide a personalized response and include its sources. So you could just email that whole response to the patient or to the doctor, because it cites the sources and provides the correct and accurate answer.
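The librarian workflow Bill sketches, answer from a controlled corpus and carry the source citations through to the reply, is essentially retrieval-augmented generation. His actual stack isn't specified in the episode, so this is a generic toy version: a naive keyword-overlap retriever and invented documents stand in for the embedding search a production setup would use:

```python
# Toy RAG sketch: retrieve from a controlled document set, then build a
# prompt that forces the model to answer from, and cite, those sources.
# The documents and source tags below are invented for illustration.

def retrieve(query, docs, k=2):
    """Rank docs by naive word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a prompt whose context is tagged so answers can cite it."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return (
        "Answer using ONLY the sources below and cite them by tag.\n"
        f"{context}\n\nQuestion: {query}"
    )

library = [
    {"source": "FAQ-12", "text": "Drug X dosing is once daily with food."},
    {"source": "PI-3", "text": "Drug X is contraindicated in pregnancy."},
    {"source": "MSL-7", "text": "Travel reimbursement forms are on the intranet."},
]

hits = retrieve("dosing for Drug X", library)
print(build_prompt("dosing for Drug X", hits))
```

The design point matches what Bill emphasizes: because the context is tagged by source, the generated email can cite exactly where each claim came from, which is what lets a subject matter expert spot-check it before sending.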

Bill Wallace [00:57:03]:
But again, I tell people that you've got to be a subject matter expert in order to use generative AI, so that you can tell when it is beginning to hallucinate. The job of generative AI is to please you. You are the master. It will do its best to impress you. It will make things up just to make you happy, because that is its job; it was designed to please you. Think of an intern you hired.

Bill Wallace [00:57:34]:
They're never going to say, no, I can't do this. They're going to try to figure it out. And that's what generative AI is. You know, you sit down long enough, you prompt it about a disease, and before you know it, it starts going to places it shouldn't go. You're like, wait, we started with A and you ended up with Z; there's no relationship. And when you tell it, it apologizes. And then it starts again.

Bill Wallace [00:57:58]:
So I always tell people: you've got to be a subject matter expert in anything you are learning or trying to quote when you are interacting with the dialogue engine. Training, experimentation, education. Repeat, repeat, repeat.

Steve Swan [00:58:15]:
So, Raj, we talked about the mind shift from ROI to ROE, right? And when we're talking about AI, you were talking about how the organization has to be ready. Is that how you get them ready, making that mind shift in whoever's making the calls, the ultimate calls, from ROI to ROE, return on employee as opposed to return on investment? Is that part of it?

Rajvir (Raj) Madan  [00:58:42]:
So, Steve, I think that's part of it, but I think the other part is to understand what your current culture looks like, and then what cultural barriers you might face in getting ready for AI. Right? I mean, how is decision-making happening at the organization? How empowered are the employees in your organization? There are so many things you have to think about. Do you have a fail-fast or fail-forward type of culture? Because part of what Shola mentioned is you're not going to succeed with every pilot, every experiment that you run. Right? And is the organization willing to reward individuals who are taking that risk, or are they going to shut those experiments down because they're destined to fail, let's say? So I think you have to assess what the cultural context of the organization is and any potential tweaks you have to make to the culture, and then you start to think about how to make those tweaks to be AI-ready as an organization. Because without the right culture, I'm not quite sure you're going to succeed.

Steve Swan [00:59:51]:
A year ago or so, when we were talking, the use cases people were really using AI for were pretty limited. It sounds like now, after talking with everybody and hearing all this, there's more than a handful of different use cases folks are using AI for. So in the span of a year, because we're reviewing '24 and moving into '25, we've jumped quite a bit.

Steve Swan [01:00:14]:
And changing that culture, and doing things like Shola was talking about, I guess, has led us to where we are. Right.

Bill Pierce [01:00:24]:
Steve, a lot of these have been proven.

Ganesh Iyer [01:00:25]:
Right.

Bill Pierce [01:00:26]:
So again, machine learning in computational biology, in the preclinical space, has been proven for a long time. Right. They've published papers and outcomes using those machine learning techniques, mostly, for a long time.

Bill Pierce [01:00:42]:
What's the culture to take that from the preclinical out to the clinical, and then eventually out to the real world? And I think that's what Raj is trying to describe: every organization is going to have its unique combination of culture and people and experiences. And I don't see this as a CIO's job, by the way. I think it's the executive team's job to come together and have a hard dialogue about that.

Bill Pierce [01:01:10]:
And again, what I try to do is make sure it's team-based, even when I have to, I guess, let go. I've had several occasions where I have an executive who's going to run with something that I'm afraid they're going to do some damage with. I just kind of watch and hold my tongue and try to facilitate as much as I can.

Bill Wallace [01:01:34]:
Bill, I mean, I'd say this is a tool, a business tool, but it is a business tool best understood by leaders like yourselves. So for that reason, you will have to lead that conversation with the executives. When the executives are having their morning coffee, it's in the newspapers everywhere, and there's going to be this FOMO. There's no understanding of what this is. They're thinking: does this mean we have to replace people? Does this replace my job? Does this make us do better things? So as a CIO, or as a leader of sorts, I think we've got to grab the bull by the horns and help guide these leaders in the right direction.

Bill Wallace [01:02:24]:
Explain what this is to them. Education; make them comfortable. You are very right about the culture. However, some cultures have fear baked in: fear of the unknown, fear of technology, fear of losing jobs.

Bill Pierce [01:02:46]:
Right.

Bill Wallace [01:02:47]:
The way you allay that is through a lot of experimentation, a lot of testing. Say, listen, and give them a little sandbox, give them a proof of concept where they can't do wrong; they cannot destroy anything. The whole point is to build that comfort. And there are different levels of comfort for people. For us, we love our jobs because we love dealing with stuff like this. But the folks who don't do what we do are going to be super uncomfortable.

Bill Wallace [01:03:21]:
And I think the way I help them be comfortable is to encourage them to do a lot of experimentation, including the executives.

Rajvir Madan  [01:03:31]:
I think that's fair, and part of it is also figuring out, within the culture, what excites people: what excites your board, what excites your executive team, what gets your employees fired up? Is it commercial success? Is it taking a rare disease and finding 20% more patients so we can help improve their lives? And do we then tell that story and link it: we've been able to do that because we had better patient segmentation, then we did personalization, and that's how we found these patients and were able to help them onto therapy. Put the breadcrumbs really close and celebrate it together. As an executive team, we decide what's in the town hall. We decide what messages we share with a larger audience. And with that, we also continuously influence the culture. So I think culture is an input, but then we should ask: where do we want the culture to be? And keep tinkering together there, slowly, with every single action we take.

Rajvir Madan  [01:04:23]:
And we make mistakes, let's be honest. But as long as we together say what is the culture we want, and again, what is the outcome we want, we can connect the dots and we can get our entire employee base there.

Bill Pierce [01:04:35]:
I agree. I've had more success on the commercialization and operational side than in pure research. And it's probably, again, some of the compliance and other fear factors, as was alluded to earlier. But having those wins has really helped the team to be a lot more comfortable moving forward.

Rajvir Madan  [01:04:57]:
And we have to do that today. In our field, we have to stand apart when our drug goes to the provider, to the doctor, because there are often many of them in a crowded market, and there's lots of loss of exclusivity. How do we make sure that our therapies stand out and that they can help the patients? Because if we don't do the commercialization right, we can have the best therapy in the world and it's not going to make the impact on the lives we care about. So I think that's something we shouldn't underestimate. And if you look even outside the pharmaceutical industry, AI has a major impact on sales force effectiveness, whether you look at retail or the travel sector. So how can we take some of those approaches that are successful in other industries and apply them to ours? Because we don't always have to reinvent the wheel.

Rajvir Madan  [01:05:40]:
Let's be smart about it, and then run it through compliance: how do we make sure we do the right thing for the patients, for the physicians, and for the market? Pick the ones we can and drive them really hard.

Eduard De Vries Sands [01:05:51]:
Absolutely. And Shola was talking about the training and the experimentation; Raj talked about corporate culture and how that gets integrated into how people approach these AI projects. And I thought, Bill, your comment about when you see some of the business leaders going down maybe not the best path, sometimes they just need to learn for themselves. One of the things we wanted to make sure we dealt with in our training, when we were educating the organization on AI and the different types, et cetera, was the idea of how you work with your consultants and your business partners. And that really comes down to this.

Eduard De Vries Sands [01:06:44]:
As I've said for a long time, data is like water. It flows to the area of least resistance. And when you have concerns, okay, am I going to be seen as failing if this doesn't work out? When you maybe haven't done a lot of experimentation as a business person, but now you get a consultant coming in and saying, oh, we've got the greatest AI since sliced bread, and we've got something that we can do for you, it can be a little bit tempting. And again, we had a policy that you need to get any AI usage approved through IT and compliance. But there's always that possibility that someone thinks, oh, well, there's just this consultant, they know what they're doing, and they can kind of help us on the side. And myself and one of my team members, this goes back now almost two years, but we were messing around with ChatGPT, the, you know, the online version, not using any of our corporate data.

Eduard De Vries Sands [01:07:50]:
These were literally non-corporate devices. We were just messing around with ChatGPT and we asked a bunch of questions, got a bunch of responses, and of course, we all know you can go in and look and see where those answers came from. Well, we asked some very specific questions about some of the disease states that we are researching, and we got some very specific answers. We went into ChatGPT and we took some screenshots of where that information came from. And it quickly became apparent that there was another pharma company whose information, what we considered to most likely be proprietary information, was available out on the web through ChatGPT, because probably someone over there had been doing their own experimenting and perhaps didn't realize what it meant. Thankfully, I actually knew the head of technology there in my network, and I sent them the information outside of the corporate network, and they thanked me very much. And it turned out that apparently one of their consultants had had some conversation with somebody, and then took data that shouldn't have gotten out, and it made it onto the public web.

Eduard De Vries Sands [01:09:13]:
But we took that story and we covered up enough stuff so that when we show it in our education, you can't see the specifics. That, I think, was very helpful and very sobering. Because it's one thing to say, hey, we don't want that stuff out there. It's another thing to realize that if you give proprietary data to one of your consultant groups because you think that they can do something for you on the side, and if there's a failure, well, you can always blame the consultants and not take the risk yourself, all of a sudden that could really go south very quickly if they are not completely compliant in how and what they utilize. It went a long way in helping, I think, to get everyone on board with the fact that we do need to think first about making sure that the data does not flow like water into the wrong place.

Bill Pierce [01:10:16]:
I think in the last six months, Bill, and I don't know if you're seeing this, but I had, you know, a similar experience. I was at a CIO conference, two CIOs talking, and one told the other their IP was exposed. And when you hear that, it scares you to death, right? But in my last probably three months of conversations, I've had both Salesforce and Microsoft come to me and tell me how they're going to protect my data. They don't even wait for me to ask. They come and say, here's our solution, and here's how we've protected your data, and here's how we thought about it, and do you want to talk to a technical architect to go through the details? So I do see the conversation changing a little bit, and it's maturing a little bit. And I'm not sure it will be pervasive across all professional services and contractors. But I'm hoping, at least for our vendors, it becomes kind of a mandate for them to work with us.

Eduard De Vries Sands [01:11:11]:
The best part about that whole thing that I just described was we found out that it wasn't a consultant of ours who had taken our data and put it in a place they shouldn't have.

Steve Swan [01:11:25]:
We're getting down to the wire here, folks. I think we hit on AI quite a bit. I had a few other things I wanted to chat about, but I only had us reserved until close to 1:00. So I want to ask everybody a question. I know that lots of you were on my podcast, where I asked a music question at the end. This isn't going to be a music question, so don't worry about that. But what I wanted to ask each and every one of you, again on a lighter subject: if you were talking to a college senior today that was coming into biotech IT, what advice would you give them? Whoever wants to go first. Bill, you want to?

Bill Pierce [01:12:00]:
Yeah, I'll go. You and I had this conversation briefly, and I've given this advice to two of my kids, by the way. I recommend cybersecurity as kind of one of the paths in. And the reason I do that is, A, the tools, from a data, an analysis, and a behavioral perspective. You just learn so much that's going to educate you on the foundation of what you're about to step into, whether it's models, whether it's data classification, whether it's data leakage. As we said, the data flows like water; I think that's a great analogy. I just think cybersecurity is probably a unique career.

Bill Pierce [01:12:44]:
It's a self-sustaining career. And it's like being an attorney: even if you don't stay there, you find these attorneys tend to go on into other careers. It's just a great foundation for you, you know, going forward. So I always, you know, kind of ironically recommend cybersecurity as one of the paths in.

Steve Swan [01:13:02]:
Thank you. Yeah, I recommended that to my college kid, but she went into data science. She said she didn't want the pressure: I don't want the pressure, I don't want all that.

Bill Pierce [01:13:14]:
There is pressure. That's true.

Rajvir Madan  [01:13:17]:
I have one in college and a couple to follow. And it's interesting, when I talk about this, I'm like, biotech is changing to tech bio. If you think about it, the whole technology, from an IT and AI perspective, is becoming so much more part of the R&D effort that even if you want to go into hardcore science, make sure you get that technology component. Because even if you look at Moderna and others, they're using the computational power so much to actually create the science we all depend on. I think there's an interesting segue there: if you go down the technology route, you can actually end up in the science route. It's not like in the past where you had to choose left or right. It's more of an integrated offering now.

Rajvir Madan  [01:13:55]:
And then, you know me, Steve, I always want them to take a business approach first and technology second. Because I think that's how we've all been successful on this call.

Steve Swan [01:14:04]:
Just as a side note, I used to get pinged a ton for security roles, Bill Pierce. Now it's all about data. Everybody's pinging me about data roles, all sorts of data, anything, you know, data management, data architecture, data governance, whatever. Anyway, that's just a sidebar for everybody. Anybody else want to take a shot at what they would tell a college senior?

Elisabeth Schwartz [01:14:26]:
I actually have three: two in college, one that graduated. And I have one kid who married a guy who is in cybersecurity, so that's been interesting to watch. She's working in a company now, actually working more on the data side, and she has a degree in computer science. And my youngest is studying computer science. Honestly, if I were advising a kid getting into health care IT, I would tell them to try and sit in as many meetings with other departments as possible, because pharmaceuticals are so complicated.

Elisabeth Schwartz [01:15:01]:
There are so many touch points with IT. FDA, SEC, you name it. So I'd actually advise them to try to get as much experience with other departments as possible and then get back into IT to learn how to complement.

Steve Swan [01:15:15]:
What the business is doing, solve business problems. Right.

Rajvir (Raj) Madan  [01:15:20]:
So, Steve, I would say, you know, for me, it's actually not about focusing on a specific discipline, whether it's cybersecurity or, you know, learning about market access or patient access. For me, it's about just being curious and being inquisitive. I think the field of choice and the field of relevance today is going to be very different, you know, five years down the road, 10 years down the road. And I think if you impart skills to individuals that can transcend time, let's say, that for me is more important. So just how do you remain curious? How do you remain inquisitive? And as Simon Sinek always says, start with the why. I think for me those are the skills that I would want to impart to a senior.

Steve Swan [01:16:07]:
Thank you. Anybody else?

Ganesh Iyer [01:16:10]:
I agree with that.

Bill Wallace [01:16:12]:
Curiosity, the spirit of curiosity. Do whatever you love doing, but be curious. At the end of the day, all these are tools. Next year, what's it going to be? It's going to be something different.

Bill Wallace [01:16:30]:
Yeah, I would say a similar thing, you know, about cybersecurity. If you want to have a potential career that isn't just in biotech IT, I mean, cybersecurity has applicability across all kinds of industries. And at the same time there's the data piece. I mean, AI is causing a focus back on data. There was a time, and I'm going back a ways here, before the big data warehouses were around, before all the big analytic tools were around, when there was a lot of focus on data, a lot of focus on building things the right way. You had constraints with your databases, you had constraints with your data analytic tools, and over time both of those have improved, so there's a lot more flexibility now. And I think that with the way that AI is being looked at across the biotech continuum, there is now a new focus on data. But the key piece in all of this is to always be interested in the new thing, to always be curious, because the systems and processes and the conversations I'm having across the business departments and with the business partners are very different than even five or 10 years ago, and even more different than 20 years ago.

Bill Wallace [01:18:03]:
And that is because the evolution never stops. And you really have to be someone who is eager to be in a continual learning mode. And truthfully, not everybody is, and there's nothing wrong with that. But for these kinds of roles in an organization, you really need to have that desire.

Steve Swan [01:18:25]:
You got to stay curious to stay relevant. Bottom line.

Bill Wallace [01:18:29]:
Absolutely.

Bill Wallace [01:18:31]:
In a nutshell, yeah.

James Schluter [01:18:33]:
Steven, I'll simplify this a little bit. These kids, they come out of school, they don't have any real-world experience. Green, if you will. I've recommended to some people: just don't be afraid to take that first job in a client service area, because it gives you exposure to all the technologies within the organization and all the business units within it. They can reach out through that area, it's like the seed, and build their career through that, explore, find what they like to do.

Bill Pierce [01:19:09]:
I tell anybody who's going into technology, you better like change because it's going to be constant for the rest of your career if you want to make a career out of it. So isn't that what makes it fun, though? Change?

Elisabeth Schwartz [01:19:23]:
Right, right. We're always learning. Exactly. It's so much fun that way. There's always something new or some business thing or technical thing. I don't know. That's why I like pharmaceuticals. But it is an interesting field.

Steve Swan [01:19:37]:
Ganesh, anything? You and I have talked about our kids quite a bit.

Ganesh Iyer [01:19:42]:
Yeah, no, they've all been saying pretty much what you and I have talked about. But I think at some point in time you've got to focus. And I say cybersecurity. If you're looking at it from a job perspective, a growth perspective, cybersecurity it is, because you're never going to be out of a job, and you've got to be objective at some point. Right. But again, to really be in the cybersecurity world, are you that technically inclined? Do you have that interest? And then I think, to a great extent, and I keep telling my team, as people in IT, at any given point in time, customer service and taking responsibility is something that each one of us needs to have an interest in. If you do not have the interest to take responsibility, because the business will not take it, it is up to IT, and if you don't have that interest, then IT is not the place for you.

Ganesh Iyer [01:20:48]:
So I think it is a combination of a lot of things. And as James pointed out, it could start off with customer service, trying to understand. At the end of the day, IT is a back office kind of role, so customer service is very important. Maybe you learn that way. So there is no right or wrong method. But at some point you've got to focus on something. And for me, I think that is cybersecurity.

Ganesh Iyer [01:21:16]:
It used to be transformation, ERP, when I started working, but now it's moved to cybersecurity. Probably in another couple of years, it could move to AI and data and things like that. It's always evolving.

Bill Pierce [01:21:31]:
So.

Ganesh Iyer [01:21:34]:
That's something that Steve, you and I have talked about. I feel very strongly about cybersecurity.

Bill Pierce [01:21:40]:
So, Ganesh, you talked about AI not taking jobs away. And I think when I started really getting serious in the ERP space, it was presented to me that ERP would eliminate a lot of jobs. Right. So the more things change, the more they stay the same.

Ganesh Iyer [01:21:56]:
Actually, it increases the number of jobs, because I know in a lot of large implementations that I've worked on, there were more IT guys who never had a job like that before. Right.

Rajvir (Raj) Madan  [01:22:12]:
You know, while I agree that the job pool is actually going to grow, Ganesh, I do think jobs are going to evolve. Right. And I think we have to prepare for that evolution. So instead of needing more writers, you're going to need more editors, let's say. Right. So I think we need to help our teams prepare for that evolution, right?

Ganesh Iyer [01:22:32]:
Correct, correct. And it's always been the case, right, with new technologies, the jobs have always kind of changed, evolved. They're much better, more enriched.

Steve Swan [01:22:43]:
I think Edward spent some time talking about that on LinkedIn, how AI is going to help enhance leaders. Right.

Rajvir Madan  [01:22:49]:
Well, and I think it is. I think it's going to make us more productive. But if you look at society, we're significantly more productive than we were 50 years ago. The jobs have changed; jobs have not gone away. So if you think about, in principle, what AI does today, and I think Ethan Mollick said it really well, it makes our great people somewhat more productive, but it makes our mediocre people kind of get to good. So that goes back to the intern level.

Rajvir Madan  [01:23:15]:
Will it get better over time? Will it make us more productive? I'm more productive; I don't think I'm significantly better now. Hopefully in two, three years, AI will make us better. But it's an evolving area, and we just have to experiment and try for ourselves in our own lives to keep getting better. And I see it, like where we're going in our call centers, as a place where we're trying to augment intelligence. So instead of saying, Steven, you've got to do this, we say, well, I would suggest this, but please use your human judgment to make sure it's the right choice. And you can see in a few years that might change.

Rajvir Madan  [01:23:46]:
We're just not ready for that yet. And I think many of us are not. But we will get there at some point, and it will be exciting when we get there. And the last thing I would say is, while we do this, because it's all digital, we can actually measure the percentage of times we get it right. So over time we can start to forecast, we can predict which of these activities we can hand off and which ones we can't. That's how we have to think about this.

Rajvir Madan  [01:24:07]:
So the one place where I slightly disagree: I don't think IT is a back office function anymore. I think it is becoming part of the business and part of helping us grow the revenue in certain companies, which I love. And I think that's exciting. And that goes back to this: we have to influence the outcomes, and you can't get to certain outcomes if technology is not a core part of our offering. In many companies, I'm not saying in all, but in many, I think. Yeah.

Bill Pierce [01:24:30]:
I think you nailed it, Ed. Right. Even my own staff say, you're the most business-focused CIO. It's like, you have to be. You have to talk the lingo and you have to understand what makes the business tick. And if you don't, then I think you need to find another career, because a lot of the back office stuff's going away, right? It's coders, right?

Bill Pierce [01:24:53]:
If you're a programmer, there's so much low code, no code, and cloud going on. If you aren't worried about adapting your skill set, you need to be.

Steve Swan [01:25:01]:
Yeah, I can tell you that at Bloomberg right now, anybody they interview has to program in Python, for any job with them.

Bill Pierce [01:25:08]:
Yeah, exactly.

Steve Swan [01:25:09]:
Every job. Every job. If you can't program in Python, have a good day.

Ganesh Iyer [01:25:15]:
And I think I said back office because in a lot of companies, and I'll talk about us here, we look at IT as more of a reactive kind of organization, right? Yes, it will change, it will evolve, and we've done a lot of that work in the past three years. Right. But it's still the business; IT can't just go and do something. We still have to work with the business, make sure we front-end with the business. And in that sense we will always be an enabler. We will never be the leader per se.

Ganesh Iyer [01:26:02]:
The business will lead it. That's what I mean. And for that to change, probably AI could be the technology that could bring that change. Right. But we still see that in a lot of companies. I saw that in my previous company; I see that here. We try to change it; it's a little difficult.

Ganesh Iyer [01:26:24]:
The business still wants to go first.

Bill Pierce [01:26:27]:
I'll add a little bit, Ganesh. Having been a general manager of a business unit, I thought having a P&L would free me from being overhead. Right. They're really just a different slice off the same onion. And I'll give you an example. There's probably not much IT can really do to shine unless you have some really technically enabled business, like making your e-commerce take off. I've had 44% year-over-year growth in e-commerce. That's triple, quadruple what the business performance has been.

Bill Pierce [01:27:03]:
Right. And so there's a real chance for IT to shine there, taking advantage of customer experience and a lot of the data-driven functions and analytics, and saying, this is how we get closer to our customer. If, again, you're talking to the business properly about it and you frame it properly. Right.

Bill Pierce [01:27:21]:
Otherwise they'll just push you off to the side. Right. And say I'll come get you when I need you. Right.

Ganesh Iyer [01:27:28]:
I completely agree there. But I'll give an example. Right. We are trying to provide more visibility to the business on their complete contracting and spend. Right. Who makes it happen? IT makes it happen, through analytics. But who goes and talks out in front? It's the head of finance who goes and talks about the solution out in front. From that standpoint, I always think that IT is an enabler who partners with the business.

Ganesh Iyer [01:28:00]:
That's where I stand. And don't get me wrong, I absolutely don't want IT to be a back office. I want IT right at the front of the table, because we make it happen. And that's why I talked about the responsibility, because the responsibility is taken more by IT than by the business to make things happen. But again, it's a mindset.

Bill Pierce [01:28:20]:
Absolutely.

Ganesh Iyer [01:28:21]:
And there are certain companies that still have that old mindset, and it's people like us that have to change that mindset and say that IT is not back office. We are part of what you call the people who make things happen. Right.

Bill Wallace [01:28:41]:
Maybe we can start by rebranding it. When I was CIO, I renamed the department the Business Systems Group. We didn't call it IT. We sat at the table like marketing, like HR, like everyone else. And we solved business problems. It should be renamed to something else.

Ganesh Iyer [01:29:01]:
Totally agree, totally agree. Because that was such a big thing that we did here. We call it the Information Systems Organization. We don't call it IT, because earlier the business thought about us as a laptop-and-network organization. When we went and talked to them about business process, they said, no, no, what are you talking about? You are just laptops and networks. Just help me get on the Internet.

Bill Wallace [01:29:28]:
And give me my laptop and fix my Apple phone. We're trying to rebrand it.

Steve Swan [01:29:34]:
Rebranding, right? Rebranding.

Bill Wallace [01:29:36]:
Rebranding.

Ganesh Iyer [01:29:39]:
Listen, how many.

Bill Pierce [01:29:41]:
How many technology committees do you see formed, Steven? Are you seeing technology committees on boards? Are you being asked to recruit for technology committee leaders on boards now? You must be seeing that, right?

Steve Swan [01:29:53]:
Not yet, no.

Bill Pierce [01:29:54]:
Okay, that's interesting. I think it's coming. Yeah, I do. I really think that it's going to start becoming front and center. Okay.

Steve Swan [01:30:01]:
All right, well, we'll see. I'll let you know. I'll keep you posted. Well, listen, to each and every single one of you: thank you. You're all awesome. Happy holidays. This was great. I'm going to see you all again, hopefully on another podcast.

Steve Swan [01:30:15]:
But all your input was great, and I appreciate every minute that each and every single one of you gave us. And my Biotech Bytes community is going to love this. Thank you.