Biotech Bytes: Conversations with Biotechnology / Pharmaceutical IT Leaders

The Rise of AI in Pharma & Biotech with Bill Wallace

January 04, 2024 | Episode 1 | Steve Swan

Welcome to the first episode of the Biotech Bytes podcast!

Today, we have an insightful conversation with Bill Wallace from Intercept Pharmaceuticals, exploring the transformative role of AI in the pharmaceutical and biotech industries.

Bill explains why collaboration and data security are central in these industries. He dives into the specific challenges the industry faces, like giving smaller companies affordable access to data and adapting to AI's rapidly expanding capabilities.

We also touch on the critical role of AI in patient safety and drug development, while addressing the challenges of data protection.

As we navigate through this conversation, it becomes evident that the integration of AI in pharma and biotech is a delicate balancing act. It requires an approach that harmonizes innovation with prudence.

Join us to discover how AI is making a significant impact in the pharma and biotech industries, and get a clear look at how it's reshaping the way we work. Tune in to learn all about it!

Specifically, this episode highlights the following themes:

  • The integral role of AI in advancing pharmaceutical research and patient care
  • Confronting cybersecurity challenges in an AI-enhanced pharmaceutical industry
  • Navigating the financial and strategic landscape for AI adoption in varying company sizes

Links from this episode:

Bill Wallace [00:00:00]:

There is nothing wrong with us all comparing notes on, hey, how are we going to keep ourselves cybersecure in an efficient and effective way? And I think AI is going to be a similar thing. The specifics around the protocols, the specifics around the compound, that's all individual IP and always will be. But how you might be able to utilize a particular AI or an AI platform for helping with clinical trials, helping with research and development, similar to the cybersecurity space, there's nothing specific and IP-related for that piece. And that's where I think pharma and biotech IT leaders and our business partners can really help each other.

Steven Swan [00:00:46]:

Welcome to the Biotech Bytes podcast. We're here to sit down with Bill Wallace from Intercept, where we're doing conversations with IT leaders in biotech and pharma, for leaders in biotech and pharma. Bill, thanks for joining us today.

Bill Wallace [00:01:02]:

Great, and thank you for having me. I'm excited to be here.

Steven Swan [00:01:08]:

You know, a lot of the time when I'm talking to leaders in biotech and pharma, especially nowadays, Bill, I hear a lot of conversation, and you and I have had some conversations too, right. I hear a lot of conversation now around AI and such. Right. A lot of folks are really hitting on that. So I guess maybe I'll just start real general. What are your feelings and thoughts around AI in our industry?

Bill Wallace [00:01:32]:

I tell you, it feels like it's the Internet circa about 1992, when the World Wide Web and WebCrawler were out and about. It definitely feels like it's early days, but it also feels like this is going to be a huge change for a lot of things. And you go back to when the Web first came out and people started using their Netscape browser and saw things graphically for the first time, and that started a huge amount of change. And no one can imagine not being on the Web today. And I think that AI is in its very early phases, but it's not going to be too long before people can't imagine a time when they weren't using it, because they're using it for things on an everyday basis.

Steven Swan [00:02:25]:

Well, yeah, so that kind of brings up the question that I hear from a lot of folks, and probably some of the things you'd want to hear from others too. Not that you're doing this or anything, but what would you see or think or imagine are some of the low-hanging fruit for business cases, right, for AI? And then what would you see as some of the far-off ones, right, further on down the road, that maybe it's not capable of yet?

Bill Wallace [00:02:49]:

Yeah, so low-hanging fruit, especially in the pharma space. So we actually started to use AI recently, and we're doing it in the old crawl, walk, run. We're definitely crawling. We just rolled it out for our intranet help. Right. So now you go to the chatbot. With the chatbot we used to have to put in specific information; we used to have to specify how people were going to ask.

Bill Wallace [00:03:24]:

We've now integrated that chat help into an AI capability. And it's very early days, but the results are definitely better than we used to get when it was just us trying to figure out what people were going to need when they were asking, how do I print X, Y, or Z?

Steven Swan [00:03:44]:

Right, sure. Now, what would you think are some of the far-off things that we're not thinking about or that you think it may help us with?

Bill Wallace [00:03:55]:

Right, yeah. So we have already started conversations with some of our business colleagues around some potential use cases. Again, right, very early; a lot of people are thinking about what is the art of the possible. And one of the things that we've been talking about is, for example, you've got something like pharmacovigilance, adverse events, safety monitoring and profiling, and we've got a variety of tools to do that today. But those teams are always looking to put the patient first, and they're always looking for a new way to potentially look for something that may not be immediately apparent. And so we have had a number of conversations about, is there a way, in an appropriate manner and with the appropriate safeguards for, of course, our patient data, which is always first and foremost, that we could set up some kind of an AI model to be able to pull in not only our information but potentially external information, general background information for types of people? So that the model, the AI, could figure out when something was anomalous beyond those things that we already know to look for. And so that's, to me, a really exciting example where AI could in the future potentially really help.

Bill Wallace [00:05:26]:

But there's a number of things: first and foremost, the security of the data, and then secondly, thinking about some of the other things with AI, which is, when it brings in other data, are there potential biases? Is there a potential that if you don't know the exact data set it's bringing in, then you could potentially get inaccurate or incorrect data? Because as we all know, not everything on the Internet is 100% truthful. So there's a lot of conversations going.

Steven Swan [00:05:58]:

You're being polite, Bill. You're being polite.

Bill Wallace [00:06:00]:

Yeah, I'm being polite. Right. There's a lot of conversations going on about what are some things that the business partners are already thinking about where it could be helpful. But again, right, crawl, walk, run. There's a number of items that we have to get a handle on before we can jump all the way up there.

Steven Swan [00:06:21]:

Sure, yeah. And you just talked about the data and the security of the data. I mean, those are the two biggest pieces, right, about all this. Because first of all, we all know AI is very specific, right. So it's got to have perfect instructions: I'm here, I want to go here. But the data has got to be as good as perfect, right? And we've got to really make sure of that before we build out all our models and build out all our technology, right?

Bill Wallace [00:06:46]:

Absolutely. And a quick anecdote. So of course with ChatGPT, there's the public part, then there's the more private part. So this past summer, myself and one of my team members, we were actually messing around with ChatGPT because we're interested, right? We're trying to learn things. So we put in some queries and questions related around our disease space, right? We were not using any of our data. We were simply going on ChatGPT, the free one, and asking for the information. And of course, when you get a result, you can then go in and see some of the data areas where that result came from. And to make a long story short, we got results, and I was a little concerned about the results.

Bill Wallace [00:07:40]:

We went in and looked at the data, and we saw data that looked like it was from one of our potential competitors that probably they don't want to be public, and I'm keeping it anonymous, but it happened to be a company where I'm friends with the CIO. We took some screenshots, I called them up and said, hey, just want you to be aware of this. And they said, oh my God, this is definitely not something that should be out and available. And the long and the short of it is they might have had a very excited consultant who was happy to use ChatGPT and wasn't as careful about using some of their proprietary data in trying to get a result. And so on our side, we have a very specific process with compliance and legal, et cetera. If somebody wants to use AI for a particular business process, they fill out a form, we work with them. We work with compliance to make sure that we all understand exactly what data is going to be used and obviously make sure that things are going to be private, because it is unfortunately a little bit of the Wild West out there right now.

Steven Swan [00:09:00]:

Well, yeah. So that brings up a good point. And I hear a lot of people saying that, obviously, there's technologists, there's AI folks, there's data folks, there's security folks, and there's data security folks. And combining all that together is really where you just were. I had somebody come to me and they were talking to me exactly about the example that you just had. He said to me, we're a few keystrokes away from someone just making a mistake, because this is an open forum, and if our formulas or something get out there, that's a big problem for us. So what I think about when I hear those kinds of things is, do we disable this kind of function inside our corporations? Or, I don't know, how do you protect it, because it is open like that? So I don't know the right answer to all those kinds of things.

Bill Wallace [00:09:49]:

Yeah, it feels like what I'll say in general for cybersecurity, which is that we can have a layered security model, we can have a variety of things looking at data transmissions and all those types of things that we do. But at the end of the day, it comes down to the colleagues and employees and our partner contractors and consultants being smart as well and understanding when they shouldn't take our data and do something. And that's why we have this process where we have them fill out a form. We don't want to say you can't use AI, because then all it takes is somebody who's doing something on the side and you potentially don't know your exposure. But to your point, we need to make sure that whatever they are doing has the appropriate safeguards, and that we can as a team make sure that the benefit is going to be gained in a way that is appropriate and low risk.

Steven Swan [00:10:55]:

Yeah, because again, it's an open forum. It's just scary, right? I don't even know how to put it. It's crazy. Yeah, I mean, it's great stuff. It's awesome, it's cool technology, it's forward thinking, it helps us. But something like that gets out there, and then that's a problem for everybody. Now, we just talked about how the integrity of the data, the data, has got to be great, right, coming into this thing. In your thought process, do we all have to build up our data organizations? Tell me what your thoughts are on that.

Bill Wallace [00:11:29]:

Well, good question. You look at the large pharmas, right? Your J&Js, your Pfizers of the world, they obviously have their own processes where they're looking at AI and what it can do, you know, and they're putting a significant amount of money behind that. My company and others in the smaller and mid-market, we don't necessarily have millions and millions of dollars that we can just throw at, hey, let's see what happens. So I do think that as we go along, there should hopefully be an opportunity for some partners to come into this space who are using AI and who potentially get access to, for example, patient data, the lab data, the medical data. Because when you're doing a variety of analysis on the research side, the clinical side, right, what do we always talk about? Placebo. Well, if we have somebody who has similar disease states or similar lab results, et cetera, and then we compare them to someone in our trial who's getting our drug, it can help with analysis of a variety of different things. But at the same time, those smaller and mid-market companies can't afford to buy the huge amount and volume of patient data for the small amount that actually relates to, say, our particular disease state. So I do think there's going to hopefully be some partners and providers who think about, okay, we can grab all this data and we can leverage an AI tool where we can keep the data of each individual client of ours separate and distinct, but they can all leverage this broader pool so that they can take advantage of that.

Bill Wallace [00:13:28]:

And when we say leverage the broader pool, right, we're in rare liver disease. So, yes, you need a data set that has tens and tens of millions of lives for us to wind up with the data set we care about, which is probably somewhere in the tens of thousands. So we can't afford to buy tens and tens of millions of lives' worth of data to find that tens of thousands. But if we've got a client partner who's building out this kind of capability, and now we can subscribe to just that reduced set based on the lab results, et cetera, the particular protocols that we need, that would be beneficial for us and hopefully reasonable for us. And at the same time, that partner now can do this with 10, 20, 80, 100 different pharmas and biotechs. And we can all together help to fast-forward things in development, in safety monitoring, and really help the patients that much quicker.

Steven Swan [00:14:29]:

And so I'm surprised something like that doesn't exist yet. So you're saying that these big data folks haven't done that yet, they haven't sliced it out for smaller to medium sized companies?

Bill Wallace [00:14:40]:

We are continually looking to see who's out there and who might be doing this and how we might be able to leverage it. We haven't found one yet. I'd love to hear tomorrow that there is somebody we've missed, right? Yeah.

Steven Swan [00:15:03]:

This is a call to them. I get it. Okay, I hear you. So now, with the data and what we're talking about, the data typically resides in the business, right? Typically.

Bill Wallace [00:15:15]:

Right.

Steven Swan [00:15:16]:

And so that's kind of a push and a pull too, right, for the IT folks that have to do the security piece of it, right? Because now you've got to protect your data. You've got ChatGPT, and ChatGPT, I guess, probably crosses lines too, between business and IT. But then with them owning the business, being the data owners, usually, not all the time, but usually, does that present challenges for us in IT when it comes to all this, or don't you think so?

Bill Wallace [00:15:46]:

I would say it absolutely can, and it does. I would also say that it depends on the situation. When I first joined where I am now, we outsourced our clinical trials to the CROs, which is very typical. And that included, they gathered the data in, they would build out repositories for us, do all the standard stuff that CROs do. And we got a new person in the R&D clinical space, a leader there who joined literally two weeks after I did with the organization. And we've partnered very well together. And one of the things that we did was say, look, we're still going to use CROs and we're going to use them for what they're good at, but we want to have better management and governance over the data itself. So we actually implemented that for our new trials. We started in 2022, and we've got more coming now between 2023 and 2024.

Bill Wallace [00:16:54]:

For all new trials, we actually have the repository. It's cloud-based, it's utilizing one of the more standard platforms in the industry. But our CROs are actually putting the data into an Intercept-managed cloud environment. And yes, the business owns the data, but it helps them, and we manage that environment in partnership with them. That partnership has come a long way and goes a long way to make sure that we can keep things safe and secure, but at the same time help the clinical and research organizations have the access to the data that they need, because when it's off in a CRO somewhere, they have to request a data pull. If somebody forgets something, it takes days or weeks, whatever it may be. And now, literally, okay, the authorized people have access. They can do those pulls right now, and it allows them to.

Bill Wallace [00:17:55]:

We're also in the process of building out a repository where we're going to be able to do analysis and reporting. And you can do those kinds of things more easily if you have control of the data.

Steven Swan [00:18:07]:

The clinical side is where you see the biggest usage right now of what we're talking about, for the AI and ChatGPT and things like that, right?

Bill Wallace [00:18:17]:

I would say that is where I see a lot of the interest, there in the pharmacovigilance. And it's about, right, you've got patients who've got certain issues, which is why they're using your drug or potentially using your drug. And a lot of times you really want to compare: what are those patients experiencing, whether it's a side effect or a potential side effect, versus other members of the population who have similar medical issues but maybe don't fully meet the criteria, or for whatever other reason they're not yet on your drug. And you want to do those comparisons. And it's not terribly easy without, again, investing in a lot of data that really isn't relevant to you, and without a lot of time and analysis to go very specifically and write out your queries and your procedures. And that's where I see a lot of people looking at AI and saying, hey, if somebody can go in, if an AI can go in and take the data and see what pops, maybe they can find something that with just our specific query language wouldn't otherwise be found. So I see a lot of people looking at AI as a great complementary tool to all of the good work and analysis that they're trying to do today.

Bill Wallace [00:19:51]:

And they're really trying to figure out, and we're trying to figure out with them, how do we do this in a good, compliant way, not have a lot of false positives, all those kinds of things, right?

Steven Swan [00:20:04]:

Yeah. Well, I bet commercial could probably use it a lot too, I would think. But that's, I guess, yet to be determined. But with everything we've been talking about here, right, we hit security, we hit data, we hit AI. Do you feel, and maybe we're not there yet, but do you think we're going to have to have a separate governance and security sort of function around the data as it's specific to AI? Because traditional security as we know it right now, right, for our systems and for our organization, would really get taxed, I think, if they've got to work on and handle and be responsible for all this. Or am I overthinking it?

Bill Wallace [00:20:48]:

I don't think you're overthinking it. I think this gets back to the call-out to, hey, we'd love some partners who can pull this together for us, because we're trying to rub the sticks together and create the fire right now when it comes to AI. There's a lot of interest both on the technology side as well as from our business partners. And so we're working right now to look at some of those use cases and see what it is. So we're definitely enthused. I do agree with you: if we get to the point where we go beyond the crawling and we're getting to the walking and potentially the running, at that point it could definitely have an impact. Depending on what partners can do for us, it could absolutely have an impact in terms of compression of job responsibilities and time, and potentially needing to have more people to focus on that. I think that's definitely a potential growth area.

Steven Swan [00:21:52]:

If the data folks don't hear this, they should. Right. So now just general security then, because I know one of the things that a lot of folks want to hear about, a lot of IT leaders, they talked to me. I went out to a lot of different folks like yourself and asked, what are some things you want to talk about? Some want to talk about AI, some want to talk about security, some want to talk about the combination of the two, which we hit as well. From a general security perspective, again, not getting specific, right, can't do that, but thoughts and feelings around whether that's morphed in any way, shape or form with the things that we're talking about here. I mean, you talked about one of your partners that got in trouble, or one of your colleagues, rather, that got in trouble because somebody put something in there.

Steven Swan [00:22:40]:

But what are some of the things that people are thinking about, as far as, except for your data, your data partners? What are some of the things that they're thinking about that they have to do, or maybe not do, right, going forward? Like I said earlier, is this a matter of, if it gets too big and too, well, like you said, you know, do we shut off ChatGPT? You can't do that because that stifles innovation, right? So do we have, I don't know, somebody that's responsible for, is there AI security coming? I don't know. I don't really know the thought there.

Bill Wallace [00:23:16]:

So there is AI security coming. There is already AI security happening. There was an article recently about Palo Alto, and Palo Alto, right, they've got firewalls and they've got a lot of cyber tools. Their own internal team has been using a form of AI now for the better part of a year and a half or thereabouts. And one of the reasons why they started was because they've started to see the bad actors using AI, right? I mean, if you're a bad actor and you've taken over someone else's server, first, you don't necessarily always care about the data so much. I mean, you want to grab somebody's data, but if what you're doing becomes known, it's somebody else's data, not yours. Plus, and this is the other thing with AI we can talk about, if it's somebody else's server, it's their compute cycles and they're paying for it.

Bill Wallace [00:24:21]:

You aren't. So unfortunately, the bad actors have been at the forefront of really trying to embed AI in some of their standard business practices, which of course is ransomware and attacking and all those bad things that they do. And so one of the things that I see from a cyber perspective is we've actually had to adjust our profile, and everything always changes. But with the attack bots that have come along, and now with them using AI in addition to that, we've had to really take a look at automating more of our layered security model. Because the old days of, oh, an alert came in, now let me send it to the security operations center, let an analyst take a look at it, look at the flag, figure out if something is going on: these days, with some of the things that the hackers have going, by the time that happens, if it takes 30 minutes, they may have thousands of your files. If they've compromised something, the days of being able to put eyeballs on an alert and think that that was going to be reasonably sufficient are really going by the wayside. And so what we've had to do is we've had to look at how we can implement new, more automated security models and an automated security posture where, when something is detected, we shut something down right away.

Bill Wallace [00:26:04]:

And then the notification goes out so that we can see, okay, was this a legitimate business process that was happening, and now we need to unroll it and unwind it, or is there something that we need to address here? It has definitely changed the cybersecurity landscape from where it was just a few years ago.

Steven Swan [00:26:24]:

It's funny, a lot of not just the use cases but the actual usage of AI right now, when I'm placing folks in AI-related roles, it all has to do, really, for the most part, with automation right now, right? So it's funny how we use AI to automate and the bad guys are using AI to penetrate, or automate their penetration. So we've got to use AI against AI. You know what I mean? It's interesting. Anyway, it's very true. Is there anything else that you think we should hit on, maybe something that you or some of your other colleagues out there may want to hear about, thoughts or feelings about different areas of IT?

Bill Wallace [00:27:06]:

I would say I'm a member of several groups of CIOs and heads of technology across the pharma and biotech space, and we're continually getting together on a fairly regular basis to talk about these types of topics. And I think a podcast like this, something along those lines, helps, because we need to all learn from each other. The explosion of all of this change is so monumental across the landscape that no one person can know all or see all. And I think that it's all of us working collaboratively together that's going to help us make sure that we can stay on the leading edge, while hopefully not being too far on the bleeding edge, with our business partners, and do things, again, in a very appropriate, regulated and privacy-conscious way, so that we can gain the benefits for both ourselves and our companies and our patients without having some of the downsides that could come. And the other thing that is still one of those things out there is the cost models. No one seems to know the compute resources for AI; they can take up a lot, and no one seems to have any good estimation models yet. So that's another thing that is causing us to be cautious in our adoption of it, because we also need to be mindful of budgets, and no one likes to start something up and not know where it's going to go budget-wise.

Bill Wallace [00:28:50]:

There have been some nasty surprises that I've heard about, and one or two, a little bit, myself as well on that front.

Steven Swan [00:28:58]:

Well, that kind of goes back to what you said a little earlier, because I think the big companies, the large companies, have a different mindset when it comes to this kind of conversation you and I are having. And that's one of the examples, right? The costs around AI: maybe they're willing to throw a lot more at it than, say, an Intercept or The Swan Group, whatever it is, right? So it's kind of two different thoughts and two different processes as you go through that, because they can spend it, or they may have it to spend. But I don't think, from what I understand, and maybe I'm wrong, but the funds that go into it are the funds that go into it. And I don't think we're allocating more money to this whole thing. I think it's coming out of the same bucket, right? Or is it a whole new innovation budget for AI that's away from the typical IT budget in most of these companies?

Bill Wallace [00:29:52]:

I'm sure that different companies will work it different ways. Right now, kind of with our business partners, we are combining budgetary forces for some of these new innovations. But to your point, no one wants to have a major portion of their budget disappear into some realm without understanding what the result is going to be. So that's where, again, if we can get AI, it's like clay. You can mold it into what you want, but to a certain degree, it would be great if we could get those partners who've molded it into the specific vase that we could use, and then use it for some of those specific use cases. Because right now there's a lot of doing things from scratch. And at some point I do think there's going to be an opportunity for some AI partners to figure out some of the secret sauce and help the companies get from the walking to the running, and doing it in a compliant way, of course. And that will provide value for the organization.

Bill Wallace [00:31:10]:

So we're all interested, we're all working toward it, but I think that that's definitely going to help to fast-forward things when that comes along. Yeah.

Steven Swan [00:31:19]:

You mean like the AI vendor, right? Yeah. Well, so just to circle back on what you just started saying there and what you said a minute ago, my thought process behind my whole podcast idea was exactly what you said. I just one day had an idea: I know lots and lots of folks that are leaders in IT and biotech and pharma. And if some of them are getting this much smarter every day, why not help everybody else get that much smarter? And maybe others are smarter in different areas. And collective intelligence gets better every day. So let's share it so that we don't have to reinvent the wheel at each turn.

Bill Wallace [00:31:54]:

Absolutely. Look, I look at AI and that space evolving the way cybersecurity was 20 years ago, right? Cybersecurity has evolved. The security layers have evolved. But we as IT members in the pharma and biotech spaces, and I've got some friends outside of pharma and biotech as well in IT, so we do talk as well, but especially within pharma and biotech, right, there is nothing wrong with us all comparing notes on, hey, how are we going to keep ourselves cybersecure in an efficient and effective way? And I think a whole number of us have benefited over the years by having those kinds of conversations and those kinds of ongoing collaborations on a regular basis. And I think AI is going to be a similar thing, right? The specifics around the protocols, the specifics around the compound, that's all individual IP and always will be.

Bill Wallace [00:33:03]:

But how you might be able to utilize a particular AI or an AI platform for helping with clinical trials, helping with research and development, similar to the cybersecurity space, there's nothing specific and IP-related for that specific piece. And that's where I think pharma and biotech IT leaders and our business partners can really help each other in a way that's only going to benefit our companies and especially our patients in the long term.

Steven Swan [00:33:36]:

I couldn't sum it up better. I think that's great. I'm glad you joined me today. I'm glad we had this conversation. Before we go, though, before we go, I always like to ask folks, and this is up to you how you answer this: do you like music? Do you like live music? This is a totally off-the-wall question.

Bill Wallace [00:33:58]:

I love live music. I love it.

Steven Swan [00:34:01]:

That's a great answer. Who was your favorite band? This is what I like asking folks. Who was the favorite live band you ever saw? And I didn't get you ready for that. I didn't prep you for this one.

Bill Wallace [00:34:13]:

No, you did not. My favorite live band I've ever seen is Bruce Springsteen.

Steven Swan [00:34:21]:

Really cool.

Bill Wallace [00:34:22]:

Yes. Very cool. Yes. You've seen him a bunch, like, seven times. Okay. Compared to some other, I'm proud to say that the first rock concert my son attended, at seven years old, was Bruce at the Been, and he's been a Bruce fan ever since. And don't get me wrong, there are some other fantastic singers out there. I've seen U2, like, eight times.

Bill Wallace [00:34:55]:

And they're also great live, but just pure live performance. Bruce.

Steven Swan [00:35:04]:

I saw him for the first time this past fall. It was great.

Bill Wallace [00:35:08]:

I haven't seen her live. I haven't even seen the show or the movie, but people are talking about Taylor Swift and her current new Eras Tour as, it's like Bruce but with a woman instead. Sounds like it's family.

Steven Swan [00:35:28]:

My daughters and my wife are very big on the Taylor Swift thing. I didn't go, but they did anyway. Well, all right. Well, I appreciate it. You know, if anybody that checks this podcast out wants to get in touch with Bill, we'll have the URL for his LinkedIn on here, so you can reach out to Bill. If you want to reach out to me, need anything, my URL is going to be on there. And Bill, I thank you for joining us.

Bill Wallace [00:35:53]:

Thank you for the invitation. Thank you for the chat. This was great.

Introduction
About Bill Wallace
Early applications of AI in pharma
Considerations for data security and AI bias
Anecdote about data privacy and ChatGPT
Challenges of integrating AI in corporate environments
Building data organizations in smaller companies
Collaborative opportunities in data sharing and AI
Managing clinical trial data in cloud environments
AI use in clinical settings and pharmacovigilance
The emergence of AI security
The future of AI in pharma and biotech
Connect with Steven Swan and Bill Wallace