Biotech Bytes: Conversations with Biotechnology / Pharmaceutical IT Leaders

Data Protection in AI Adoption with Bob McCowan

February 02, 2024 Steve Swan Episode 3

A revolution in biotechnology is underway, with artificial intelligence at its core, but the fervor over AI adoption brings with it pressing concerns about data protection that cannot be overlooked.

In this episode, I'm joined by Bob McCowan, Chief Information Officer at Regeneron, to navigate the crossroads of AI development and data security. We dissect the lifeblood of AI efficacy - the quality of input data - and explore how organizations can create a sanctuary for innovation while preventing data breaches. 

Bob brings a wealth of experience to the table, revealing how AI serves as a copilot for human ingenuity. We also delve into fostering a culture that accepts failure as a part of innovation and the need for strategic training within enterprises.

This conversation brings to light pivotal strategies for integrating AI securely and effectively. Join us to uncover practical insights and prepare your organization for the future of biotech.

Specifically, this episode highlights the following themes:

  • The critical role of data quality and leadership in AI implementation
  • Building the right environment for AI adoption and the importance of learning from failure
  • Data protection and intellectual property safeguarding in the age of advanced analytics

Links from this episode:


Bob McCowan [00:00:00]:
Everyone's thinking generative AI is going to change the world, and it probably will in many ways. Five years ago maybe it was applicable to this group, but five years further on, maybe there's opportunity to take that and take those models and use it on a broader scale across the rest of the organization. I think it's raised a lot of healthy discussions. In the coming years, I think we're going to see a lot of changes to how we do our work based on AI. But again, to me it's not going to remove the human in most cases.

Steve Swan [00:00:28]:
Hi, my name is Steve Swan. I'm the host of Biotech Bytes, where we chat with IT leaders within biotech about their thoughts and feelings around some of the current trends within technology. And today I have the honor of having Bob McCowan, the CIO of Regeneron, on as my guest. And thank you, Bob, for being with us.

Bob McCowan [00:00:47]:
Thanks, Steve, for inviting me on. I'm looking forward to having a good conversation.

Steve Swan [00:00:53]:
So, you know, I'm just going to get right into it. You know, lots of folks now, what they want to talk about is AI. It's in the headlines, it's everywhere. Business leaders, technology leaders, everybody's talking about it. So I think maybe I'll just give you a wide question. What are your thoughts and feelings around AI and how it applies to our industry right now?

Bob McCowan [00:01:17]:
I think it almost feels like AI was born twelve months ago, to listen to some of the press and the way it's being promoted, and I think it's at sort of maximum hype now. I do believe there's absolutely value there. And in fact, within our own organization, we use AI a lot and have been doing so for many years. I've actually won a number of awards related to it. But I think what you have to do is strip back the question about AI to what is it you're actually talking about? And I think you have to go back to, what's the question you're asking? What data do you have to answer that question? And then what's the right technology? And in some cases AI is a great assist. In other cases, there are many great tools out there that will do the job just as well, if not better, and at a lot less cost. So I think it's worth taking a look at. But you should come at it from the perspective of what's the problem you're trying to solve, rather than how can I use AI. I mean, if you're just out there looking to use AI, it's probably the wrong approach.

Steve Swan [00:02:22]:
Okay, well, so you just talked about the data, right? So, I mean, you're only as good as your inputs. And a lot of folks are talking to me about their data and the inputs, the data that goes into AI. But then they're also talking to me about AI maybe sitting in IT and not in the business, and a lot of the data sits in the business, right? So I don't know, is there a push and pull there? Is there something we need to think about or something that we need to address? I don't know.

Bob McCowan [00:02:52]:
So it's all about the data. I mean, the quality of your output from AI, and again, going back to defining what you mean, is all about the quality of the input. And I think a discussion about where it sits doesn't really add value these days. In fact, from an IT perspective, the way I think about it is that IT is there to enable the technologies, to make them available to those subject matter experts that really understand the business process or the scientific process or the clinical process, and it's a true partnership. But having a discussion about whether it should sit in the business or in an IT team, or whether it should be federated or centralized, doesn't really matter. The key is getting the right data of the right quality, in a way that can be consumed, with the right scientific or business leadership, to solve the problems they're trying to address. So I think it's more about coming at it as another tool in the toolkit.

Bob McCowan [00:04:00]:
AI is very broad. So bring it back to are you trying to solve an imaging issue? Are you trying to solve an analytical issue? Are you trying to solve maybe just a simple productivity issue with some of the copilots, and then based on that, that will lead you towards what's the best way to actually deliver the capability, right?

Steve Swan [00:04:20]:
And can it really do it? You just mentioned copilots, right? I mean, when I think of copilots, whether you meant this or not, AI assists humans, right? It's our copilot, it assists the human, you know. And I don't think it's any coincidence that Microsoft named theirs Copilot, correct?

Bob McCowan [00:04:36]:
I suspect so. I mean, to me, the human is always going to be involved in AI. Certainly in our industry, it's highly regulated. Everything has to be explainable. Now, for most things, you just can't take it at face value and say, yes, we're done. You need to know where the data has come from that's allowed it to present that data for you to make your decisions. But the human absolutely has to be part of it. I think this is where almost segmenting AI into what is it you're trying to achieve lets you look at it through a slightly different lens. So if I think about our organization, we have enabled generative AI with the approach being we're an innovative organization.

Bob McCowan [00:05:21]:
So let's expose it to everyone, find out what they're doing with it. Take those use cases, play it back, share it with others, stimulate other ideas and get productivity gains. And some of those may be driven by things like copilots, where you get an email from someone and you can say, okay, hey, can you turn this into a slide for PowerPoint for me? Or vice versa, you get a PowerPoint slide and you say, hey, there's really good information in here. Can you convert it into two or three bullet points for me? And those types of productivity gains for, let's say, knowledge workers, old term, but let's call them knowledge workers, you could apply that through thousands of people across the organization. So a little bit of productivity times thousands of people adds up to huge productivity. I think often where we go, when we think about AI, is the broader use case where you're going to use it for some sort of breakthrough discovery or you're going to use it for some deep analytical purpose. And I think there you're usually dealing with a lot fewer people with a lot more refined data.

Bob McCowan [00:06:29]:
Getting the value out of it is going to take a lot more effort, but the value could be extremely large. But how you approach it is going to be very different. And I think that's what you have to do: segment it. Again, look at what it is you're trying to achieve, but recognize there's value in that whole spectrum.

Steve Swan [00:06:49]:
So is there going to be, and maybe there isn't yet, but do you envision an AI CoE in some of these organizations? Like there's a separate AI function, or is it embedded in the business, or is it embedded in IT, or is it just kind of everywhere?

Bob McCowan [00:07:03]:
I think it's like a lot of the technologies: I think there can be benefit in the CoE in terms of building the architecture of how you're going to enable those solutions. But I think it would be wrong to try and create a CoE just for AI. Certainly in our industry, it's a tool. And if you created a CoE for this, why wouldn't you do that for a bunch of other things? There are some benefits to it, but how you think about it and how it gets embedded, I think, is going to determine where it sits. It's the same, I mean, I'm a CIO today, and the definition of a CIO, depending which organization you go to, is all over the place now. You get CIOs and CDOs and CDIOs and so on. It doesn't really matter.

Bob McCowan [00:07:53]:
I think it comes down to, if you are delivering service, if you are trying to drive innovation, you got to get your input from your business partners, from external sources, you got to bring the right people together, and ultimately you got to architect the solution and where it sits and how it's controlled. I mean, I'd like to say it doesn't matter. It does matter, but for the most part, it's just look at your organization, do what's right by that organization, put it in the right location, and it can evolve. CoEs are a good way at times to create the momentum, but sometimes they're not the right way to sustain it. So I think AI at this stage is a tool. It's a very powerful tool. But to me, if you create that CoE beyond just the enablement side of it, you're potentially limiting the possibilities.

Steve Swan [00:08:47]:
All right, makes sense. Now, I'm going to pivot a little bit here, right? If I'm sitting in a small biotech right now, right, and I'm listening to Bob, right, I know that Bob started at Regeneron when it was a lot smaller company, and we have ambitions of getting there, right? My Steve Swan Biotech, right?

Bob McCowan [00:09:09]:
Yeah.

Steve Swan [00:09:10]:
What are some of the things that you would say, thoughts and feelings about things that I don't know that I should be thinking about, to make my life easier six, seven, eight, ten years down the road? Is there something where you would say, well, you should be thinking about this, or maybe you shouldn't think about that, or this gets a little too much attention and that doesn't get enough attention? Any advice, anything you'd give me?

Bob McCowan [00:09:33]:
I think, first of all, it's listening to the experts within your organization. What we do is very complicated. You got to trust and rely on each other. But I think from an IT perspective, as I mentioned earlier, it all comes down to the data. But the data is typically generated and goes through a workflow. So if you go back and look at that and you understand what data is being brought into the organization, what data is being generated, how it's being viewed, how it's connected to other data in that work stream, I think if you look at it from that perspective, then what you can start seeing is, okay, where's the opportunity to use some of these tools to maybe accelerate that process, and maybe that's good enough. An example might be imaging. I mean, if you're creating lots of imaging and you're trying to figure out what's going on, AI or ML is fantastic for that.

Bob McCowan [00:10:28]:
And so you could totally accelerate that or even go deeper into the interpretation of the imaging. And then I think if you look at other parts of the work stream, maybe it's that deep analytics where you're trying to hypothesize different outcomes based on the data. But I think you have to go back to now, what is your process? What is it you're trying to get at the far end? And now what are all those steps in between, and then what's the data generated to take you through those steps and maybe look back a number of times and that will tell you where to apply AI. That's how I think of it. Now, I know other organizations and they're maybe in different fields, they're looking at AI to build, let's say, a technology platform to help accelerate, and that's fine. I mean, that's a good business model, that's their business model, but that's not what we do. We're into scientific discovery. So the way we're thinking about it is how do these tools help scientific discovery? Now, you can extend that to how can they help with quality of manufacturing or how they can help with onboarding for trials and execution of trials.

Bob McCowan [00:11:39]:
But it all comes back to what's our reason for being. It's for medical discoveries to get products to patients to help them. We look at it through that lens.

Steve Swan [00:11:50]:
Is there one thing that you wish you knew back then that you know today, something that would have been great to know, that would have made your life easier today, eight years later? And I'm not talking, we don't need to get into a psychotherapy session, I'm just talking business and IT.

Bob McCowan [00:12:11]:
Yeah, no, it's a great question. I think it goes back to the data. I mean, there's a lot of focus today on FAIR data: findable, accessible, interoperable, reusable, all those terms. I mean, if people had been thinking about that data and that data flow all those years ago, it would be much more ready for a lot of the deep business analytics. It'd be much more ready for potentially some of the learning that needs to take place. But I think even before that, it would also help, with the growth of computational capabilities and storage capabilities and integration capabilities, because you can connect it much more. So you can start asking and potentially answering much more complex questions.

Bob McCowan [00:12:53]:
So it goes back to, it's all about the data. So now, if we had started thinking about our data 30 years ago, it would be much easier to actually learn from it now. Whereas a lot of the tools today are going back and saying, okay, how can you revisit that data, look at it in a different light, integrate it with existing data, and bring it forward into where we are today. I mean, so much of IT effort goes into managing and connecting that data. We've been really fortunate in some ways at Regeneron, in that our cloud journey was a little slow to get going, but we leaned into it, I want to say, probably five years ago, and completely modernized how we manage our data flows. We've designed it for scalability. We've designed it for integration. And having done that, it's paying dividends.

Bob McCowan [00:13:49]:
It really is paying dividends. Now, we've got a lot of historical data that we still have to solve for, but I think we've been fortunate in that we did a really heavy lift of going to the cloud, modernizing, getting our data in there, and it really has been paying dividends for us.
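
As a minimal sketch of the FAIR idea Bob mentions above (findable, accessible, interoperable, reusable), the record below shows the kind of metadata a dataset might carry so that it can be found, retrieved, combined, and trusted later. The field names, registry, and example values are illustrative assumptions, not a description of any real data model.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Hypothetical FAIR-style metadata for one dataset."""
    # Findable: a persistent identifier plus searchable descriptors
    identifier: str
    title: str
    keywords: list = field(default_factory=list)
    # Accessible: where the data lives and under what policy it can be retrieved
    access_url: str = ""
    access_policy: str = "internal-only"
    # Interoperable: a standard format and shared schema so tools can combine data
    media_type: str = "text/csv"
    schema_ref: str = ""
    # Reusable: provenance and licensing so the data can be trusted years later
    source_system: str = ""
    license: str = "internal use only"

# A simple registry keyed by identifier keeps datasets findable for later projects.
registry = {}

def register(record: DatasetRecord) -> None:
    registry[record.identifier] = record

register(DatasetRecord(
    identifier="ds-0001",                                  # placeholder ID
    title="Example assay results",                         # hypothetical dataset
    keywords=["assay", "example"],
    access_url="s3://example-bucket/assays/example.csv",   # placeholder location
    schema_ref="https://example.org/schemas/assay-v1",     # placeholder schema
    source_system="LIMS",
))
print(registry["ds-0001"].title)
```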

Steve Swan [00:14:09]:
So get that data ready.

Bob McCowan [00:14:11]:
Get the data ready.

Steve Swan [00:14:12]:
Piece of advice: can't ever be too ready with the data. I run into so many companies and I talk to so many leaders where they get their shiny object, right, they get their AI, or rather their tools, and six, eight, ten, twelve months later, they're like, well, I don't think our data was ready for this. So now they're going back. Like you just said, they're going all the way back. And what do they do with all that technology they built? What good is it without the gas that goes in the engine, effectively, right?

Bob McCowan [00:14:42]:
Yeah, the data, it is challenging, but sometimes you just have to go slow to go fast. You have to sort of pause. And sometimes these technologies are hyped so much that you absolutely know there's value. But if you race into it too quickly, what you find is you spend all your time reworking it and redoing it. I think it's better to actually take a pause, make sure you're asking the right questions, figuring out if the technology is ready or not. And in some cases, from a technology perspective, I'll get my team involved in it, but it's at times an investment for the future, where we're either ruling it in or ruling it out. And so what we're saying is, okay, great technology here.

Bob McCowan [00:15:34]:
Is it going to bring value to us as an organization? We won't know that unless we invest a little bit of time in it. We will do that in many cases, absolutely, and we lean in and move forward. In other cases, we have built a capability, we know enough to know what good looks like, we know enough to know when it's ready, but we might just put that on the shelf and say, good technology, but not for us, not now.

Bob McCowan [00:15:57]:
A good example of that is quantum computing. We've taken a look at that. I do believe that's going to be a game changer in years to come, but it's going to affect different industries at different paces. But we've taken a look at that, and it's something that ultimately, I do think, is going to be highly impactful. But the reality is, when you're developing a drug and it's going to take you, let's say, five, six, seven years, if all you're going to do is analyze something and do it in 15 minutes, versus it's going to take you four hours in more traditional computing, is that really worth all the effort to put into it right here, right now? Now, I do think as it evolves and as it becomes more stable and more mainstream, there's a place for it. But that's an example where we try and look at those technologies, figure out where it makes sense, and then either move forward with it or put it on the shelf. And there's a lot out there. But if you try and tackle everything, you're going to lose focus on your goal, which is to support the business and actually help the whole business move forward.

Bob McCowan [00:17:04]:
It's not to play around with technology itself.

Steve Swan [00:17:08]:
And one person made the analogy that we're really in the real beginning stages. He's like, we're rubbing sticks together here to create fire, but we all got to learn how to crawl before we walk, right? And so we'll get there. That's where we are. And another person mentioned to me that with ChatGPT, to kind of pivot to that a little bit, that's something that we've all got to watch. And do we leave it on or do we leave it off inside the corporation, right? If you leave it off, people are going to be doing it on the side. If you leave it on, you got to be careful. You don't want any of your formulas or anything getting out there. Somebody that I was talking to actually spotted something from another company that they knew, screenshotted it, and sent it off to that CIO, and you know what was going to happen.

Bob McCowan [00:17:54]:
Not a good situation. No, I agree. But it goes back to what I was saying. If you pause enough to think through the repercussions, then what you can do, and this is what we did, was figure out, how do we expose our users, our staff, essentially, to get access to it, but protect the data so that it stays within our walls. So we architected the solution, and said, go look at your existing policies, go look at your existing regulations and behaviors. It's all about controlling data. Data going out on AI is no different from data going out on other solutions. So same discipline.

Bob McCowan [00:18:32]:
But what we did was architect it with guardrails, and we have some monitoring going on to check and balance, and then we look at how people are using it and figure out, okay, is what we've done good enough, or are there maybe risks evolving that we hadn't thought about? But right out of the gate, we started thinking about what could go wrong. Let's make sure we tackle that before we lean into it too far and too fast. I don't think shutting it down is a solution for anyone. Think about it. There's nothing you do on a daily basis that probably doesn't involve an AI solution somewhere, whether it be you're getting a letter in the post and it's checking your postcode for you, or your fridge these days is telling you when you need to order your milk. It's just embedded everywhere. And I know they're sort of trivial examples, but the reality is, if you try and block it, there are another ten areas where it's going to pop up. And it could be in one of your existing software solutions where it's embedded. It could be a new software as a service solution.

Bob McCowan [00:19:43]:
It's coming into the environment. So rather than try and stop it, you need to understand it. And then what you have to do is train and educate your users and put enough guardrails and enough checks and balances in place that you protect your data and protect your IP and protect what it is that makes your company great, so that you don't inadvertently expose it, or expose PII or other information that shouldn't go out there.
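
As a rough sketch of the guardrail idea Bob describes here, keeping the tool available while checking and logging what goes out, the snippet below screens a prompt for patterns that should stay inside the walls before it ever reaches a model. The patterns, the call_llm stand-in, and the policy choices are assumptions for illustration, not a description of any organization's actual controls.

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guardrail")

# Hypothetical patterns for data that should stay inside the walls.
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "project_code": re.compile(r"\bPRJ-\d{4,}\b"),  # stand-in for internal identifiers
}

def check_prompt(prompt: str) -> list:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def guarded_call(prompt: str, call_llm) -> str:
    """Send the prompt to a model only if it passes the guardrail; log either way."""
    hits = check_prompt(prompt)
    if hits:
        log.warning("Prompt blocked; matched: %s", ", ".join(hits))
        return "Blocked: prompt appears to contain protected data (" + ", ".join(hits) + ")."
    log.info("Prompt allowed (%d characters).", len(prompt))
    return call_llm(prompt)

if __name__ == "__main__":
    # Stand-in for a real model call; in practice this would hit an approved internal endpoint.
    fake_llm = lambda p: "summary of: " + p[:40]
    print(guarded_call("Summarize this meeting agenda for me.", fake_llm))
    print(guarded_call("Draft an email to jane.doe@example.com about PRJ-12345.", fake_llm))
```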

Steve Swan [00:20:06]:
Now, I've asked other folks this, and you've been doing this for a long time with AI. Does it fall in an innovation budget, or is that something, and again, I'm kind of going back to the CoE concept, AI being separate, would AI, do you think, end up being its own funding budget, or is it in an innovation budget, or is it just in operations? I guess it depends on the organization. My mind just went there as you were talking.

Bob McCowan [00:20:33]:
Yeah.

Steve Swan [00:20:34]:
I don't know why.

Bob McCowan [00:20:34]:
For us, it's organizational.

Steve Swan [00:20:38]:
Okay.

Bob McCowan [00:20:41]:
Some of the scientists have come to us in the past with white papers about a theoretical way of looking at sub-visible particles, for example. To us, that's just a project. And the ML engine, the AI aspect of it, is just another tool. So it's funded as part of research. And no, we don't look at it as though we have a budget for AI. Now, within the IT budget, what I will do is take some of my funding and say, look, we've got to build those controls, we've got to build the architecture to enable it. So we'll invest there, but that's more investing in the infrastructure and the governance side of it and the policy side of it. But when it comes to how it's going to be used, that's driven by what the business needs, what they want, what the opportunities are.

Bob McCowan [00:21:32]:
And no, absolutely, the IT teams can bring those ideas forward, and we often do. We'll say, hey, are you looking at what's happening out there in the big world? And maybe there's something we should look at. So we partner and explore some of those ideas. But it is driven based on what our scientific and business users want to do. And it's their budget. At the end of the day, they have to be just held accountable for their output. And therefore they're the ones we're working with to prioritize, decide how do they want to spend their dollars and where do they want to spend it. Right.

Steve Swan [00:22:08]:
And you're an enabling function for them to get them to where they want to go. Right.

Bob McCowan [00:22:11]:
We will help enable it.

Steve Swan [00:22:13]:
Right, yeah. What I find, or what I hear folks talking about, right, is the R&D side and the commercial marketing side. Those are kind of the two big areas right now, if you have to paint it with a broad brush, right? That's kind of what I'm hearing, at least for now.

Steve Swan [00:22:28]:
Right. And why not?

Bob McCowan [00:22:29]:
Yeah. Some of the tools we have on the commercial side, again, they have this built-in next best action, in terms of following up with a health provider. Wherever you are in commercial, whatever industry, those types of capabilities are being built in. Again, if you bring it back, it's a very powerful capability, but it's only as powerful as the organization that is trying to utilize it. And it goes back to the questions they're trying to answer. So I think that's how I think about it.

Steve Swan [00:23:08]:
I think, unfortunately, part of the job that we have, that I'm hearing more about within IT, is managing the expectations, right? Because, like we just said earlier, there's a lot of hype around it, there's a lot of talk about it, and sometimes certain folks in the business will be like, hey, I just read about this, let's do this. And we're not ready there, or we're not there, we don't have that capability, or the data can't get us there, whatever it might be.

Bob McCowan [00:23:33]:
But again, go back four or five years ago, robotic process automation, it was the same. Everyone wanted to do robotic process automation, and you'd sit down with people and say, well, we can do this, and it's going to save you an hour a week. Is it really worth it? And I think it's a little bit like this. So you have to qualify and question and partner to get to the point of getting people to really understand that nothing's for free, everything's a trade off. And so if you're going to put effort into this, you want to really make sure that it's worth it. And now, again, in our space, it's, yes, you could do wonderful mathematical modeling, but sometimes going into the lab and doing the experiment is absolutely the only way to do it and the right way to do it and maybe the most cost effective way of doing it. So I think it comes back to just pressure testing. Is there truly a need here for it? And I think if you stay on that path now, you'll stay pretty honest with yourself.

Steve Swan [00:24:39]:
Yeah, I was just chatting with a guy the other day, kind of dovetailing off what you just said. One of the use cases he said he's trying to build out for AI is using a natural language model, an NLM, right? And his scientists can then actually say, show me any work we've done in the past on researching Steve Swan's boo-boo on his finger, kind of thing, and then it can pull up all their data or experiments that they have on that, which, that's got to be a huge time saver, right? If you've already done that kind of work.

Bob McCowan [00:25:09]:
I would think it is. And it's a very common use case within our industry, because there's so much reading and research, and often it's revisiting research you may have looked at ten years ago where things were not possible, whereas now with the technologies, maybe there is opportunity. But the data's got to be right. So I think if you go too broad, you could end up just chasing your tail. Whereas if you bring it back to, let's say, a particular therapeutic area and say, okay, everything we have in house, we are going to train a model on that. And you could think of it almost as a smart search engine; it's a little bit more than that. And if you did that and you start getting value, then you could look and say, okay, where else do we go? Or you could look and say, okay, well, how do we augment this even further? So now we've got our internal data, which we trust. Do we have trusted sources of external data that we want to add to it? And then through the training, education, and modeling, you can set these up to say, okay, I'm asking a question of real-world data, or I'm asking a question of our company-only data, or I'm asking a question of our data plus trusted sources that are verifiable, and I think you can build out those types of capabilities.

Bob McCowan [00:26:30]:
It's a really good use case, and I think it's one of the simpler and potentially highly impactful use cases that we all should be looking at.
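
A minimal sketch of the scoping Bob describes, asking a question of company-only data versus company data plus trusted external sources, might look like the routine below. The document stores and the keyword matching are placeholders standing in for real indexes and retrieval; none of the names refer to an actual system.

```python
from enum import Enum

class Scope(Enum):
    INTERNAL_ONLY = "internal"
    INTERNAL_PLUS_TRUSTED = "internal+trusted"

# Hypothetical document stores; in practice these might be search services or vector indexes.
INTERNAL_DOCS = [
    {"source": "internal", "text": "In-house assay notes for therapeutic area A."},
]
TRUSTED_EXTERNAL_DOCS = [
    {"source": "trusted-journal", "text": "Peer-reviewed result relevant to area A."},
]

def select_corpus(scope: Scope) -> list:
    """Choose which documents a question is allowed to touch, based on scope."""
    if scope is Scope.INTERNAL_ONLY:
        return INTERNAL_DOCS
    return INTERNAL_DOCS + TRUSTED_EXTERNAL_DOCS

def search(question: str, scope: Scope) -> list:
    """Naive keyword match as a stand-in for real retrieval over the scoped corpus."""
    corpus = select_corpus(scope)
    terms = question.lower().split()
    return [doc for doc in corpus if any(t in doc["text"].lower() for t in terms)]

# The same question against two scopes; every hit carries a traceable source either way.
for scope in Scope:
    hits = search("assay area A", scope)
    print(scope.value, "->", [d["source"] for d in hits])
```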

Steve Swan [00:26:42]:
Well, then, going off what you were saying earlier, you really got to trust that real world data, if it's not just your data and it's everybody's data or whatever real world data, you got to have a great source for that.

Bob McCowan [00:26:51]:
Well, if it's on the Internet, it must be true, isn't that right? No, you're absolutely right. And I think you have to be a skeptic of everything you get unless you know exactly where it's come from and how it was generated and whether or not it's been pressure tested. But I think having that skepticism keeps it healthy and keeps you out of trouble. But the level of skepticism, the level of checking, depends on the use case. Again, if you're going to say, hey, I want to take this email and make a few bullet points, I'm trivializing it, but who cares? Whereas if you're saying, okay, I want to analyze this data related to some medical condition, you want to know that what you're looking at is totally verifiable and you can go back and check it. So again, it comes back to those use cases and thinking it through. I think it comes down to, you got to engage your brain when you're using these tools, as you do with anything, perhaps a little bit more, and never get complacent that just because you get an answer, you can run with it. Depending what you're doing, you might need to question that and determine whether or not it's really true.

Steve Swan [00:28:07]:
Well, like any good IT department, right, you've really got to get to the root of what folks are looking for. Sometimes they don't know exactly what they're looking for. So you really got to help them get to that, right?

Bob McCowan [00:28:21]:
Generative AI really has brought a lot of attention to AI in general, and I think that's a good thing, because it's like a lot of technologies: we deliver on projects within our industry, but often we don't take it and say, okay, how do we leverage this on a much broader scale? We can sometimes get caught up on, okay, go solve the next problem. And I think what this has allowed us to do is go back and say, okay, look, everyone's thinking generative AI is going to change the world, and it probably will in many ways, but it's allowed us to go back and say, well, what are we already doing, and is there value there? Five years ago, maybe it was applicable to this group, but five years further on, maybe there's opportunity to take those models and use them on a broader scale across the rest of the organization. So I think it's raised a lot of healthy discussions. And in the coming years, I think we're going to see a lot of changes to how we do our work based on AI. But again, to me, it's not going to remove the human in most cases.

Steve Swan [00:29:31]:
No, I agree with you, and everything I'm hearing and seeing is saying the exact same thing. We've covered AI and all the things surrounding that. Is there anything else you think we should cover that we haven't hit on, that you'd like to share, that you think folks might want to hear about or learn about?

Bob McCowan [00:29:50]:
Well, I think AI is a good example. You look at the job reqs out there: everyone's looking for AI experts, everyone. When you read the definitions of them, I'm not really sure they know what they're asking for in many cases. In other areas, they've got fantastic definitions and there's real clarity, and you can say, okay, these organizations are going after a certain area, and you can almost read it into the job descriptions. But it also goes back to just analytics. I mean, you think about years ago, in fact, not even years ago, twelve months ago, it was, you know, you needed resources to do your deep analytical work. And cybersecurity is another area.

Bob McCowan [00:30:32]:
There are not enough cybersecurity experts. I think with these technologies, it's reinforcing that, as an organization, you have to create an ability to grow skills from within. Yes, you can supplement it from outside, but if you are always relying on experts coming from the outside, I think you've got a losing game. This is where you've got to think through how you use the opportunities and the projects to expose people to these technologies, how you build the capabilities internally and create a learning type of organization, where a resource that you have today, two years from now, they're going to be deep into deep learning support for the scientific space. So how do you get your staff to go there? And I think that's a big opportunity. The shortcut is to go and buy it.

Bob McCowan [00:31:32]:
I think the longer-term game is to create that environment and hire the type of people that are good at dealing with ambiguity, that are agile enough to relearn, and almost unlearn and relearn again, and to create that type of team that can do that. And I think that's an opportunity. And AI is sort of reinforcing that. You're not going to buy your way out of this. You need to have a team, and you need to invest in your team, and you need to trust your team and continually have them learning.

Steve Swan [00:32:09]:
You got to have curious people that want to learn, it sounds like, and then you got to have the training to get them to fill that void, right. If they're curious, there's a void somewhere. So fill that void with the training, right.

Bob McCowan [00:32:18]:
And sometimes the training isn't there. I mean, a lot of the vendors are promoting support and capabilities and it's a great way to tap into it, but you find that in many cases they're on the same journey. When you recognize that, you can start thinking about, okay, well, how do you create your own journey and how do you create your own learning opportunities? Our industry is very innovative. I think often failures are celebrated because you have discovered something maybe, and I think you have to almost embrace that type of attitude where you have to accept that there are going to be some failures, but if you don't try, you're not going to be able to develop and grow where you need to go. Some of it is trusting your people to take on these big challenges and learn on their own and then share their learning and bring other people along for that journey and also stay connected to external organizations and opportunities and recognize you can learn from them as well. But it is a mindset. It's changing that mindset. You're not coming in here to fill the job based on that job description.

Bob McCowan [00:33:36]:
It's not so much about that, though; that's the immediate need. But you're hiring people that maybe are going to fill a totally different job description two years from now. But you hope you can grow them into that, or they'll grow into that, or even they'll drag the rest of the organization into that.

Steve Swan [00:33:51]:
I like that you didn't say this. I'm saying this. So don't reprimand the failures. Celebrate the failures. Right. With a certain amount of.

Bob McCowan [00:33:58]:
Yeah, I mean, if someone's failing every other day, there's a problem there.

Steve Swan [00:34:01]:
But I think that's a problem.

Bob McCowan [00:34:02]:
But yeah, I think it's that managed and controlled approach, where if the failure gives you a result where you know, okay, you're not going to go down that route, you've learned something. But it's got to be manageable, it's got to be controlled. But yeah, I think sometimes we can be too harsh and focus on the failure rather than focus on, okay, what did we discover from it? And again, it's a little bit of a mindset change for organizations.

Steve Swan [00:34:37]:
Okay, one last thing before I let you go, because I know you probably got to run, and I'm asking everybody this at the end of the podcast; I didn't mention this to you at all. What has been, and this is totally off the wall, Bob, your favorite live concert you've ever been to?

Steve Swan [00:34:55]:
On the spot. Do you even like live music?

Bob McCowan [00:34:59]:
This is actually top of mind, and it actually goes back to when I was a kid, and it was one of the first concerts I went to. And it's a group many people will not have heard of, Stiff Little Fingers. And it was during the punk era in Northern Ireland. And I grew up at a time when we had what was called the Troubles. But a lot of the punk bands back then were teenagers, sort of kicking back against the society and also the politics at the time, and just saying, look, let's just get on with it. Let's just live together. But the music was punk, so it was sort of high energy. But that was one of those concerts that I remember for lots of different reasons.

Bob McCowan [00:35:46]:
I'm not quite sure why it popped into my head, but, yeah, look them up. Stiff Little Fingers.

Steve Swan [00:35:51]:
Yeah, I'm going to. I'm definitely going to. Yeah, no, that's all good stuff. Well, thank you very much for being with us. I appreciate your time.

Bob McCowan [00:35:59]:
No, I enjoyed it.

Introduction
Leverage AI for productivity gains across the organization
Apply AI/ML for imaging analysis
Exploring deep analytics for scientific and business applications
Quantum computing is a potential game-changer
Architected with guardrails, monitoring, prioritize risk prevention
Scientists propose theoretical research; AI is a tool
Research and utilize data effectively for progress
Internet content requires healthy skepticism and verification
Embrace failures, create your own learning journey