Pivoting to Web3

AI Governance in Healthcare Explained with Brian Green and Donna Mitchell

Donna P. Mitchell Episode 67

Can AI Actually Help Healthcare—Without Crossing the Line?

In this episode of Pivoting to Web3, I sat down with Brian Green, the Chief AI Ethics Officer and Founder of Health Vision AI, to talk about something big: how AI is changing healthcare—and why ethical guardrails are more important than ever.
He started as a social scientist in public health, and now he’s at the forefront of AI governance. We talked about the real risks and rewards of AI in healthcare, how we can build better systems without losing the human touch, and what responsible innovation should really look like.

Key Takeaways:

AI in Healthcare: Where Ethics Meets Innovation

Brian Green’s journey from public health to AI governance

How AI is used to improve healthcare outcomes

Importance of defining business problems before adopting AI

Key elements of ethical AI governance frameworks

Addressing bias and health disparities in machine learning

Ensuring patient consent and transparency in AI use

Readiness assessments for organizations exploring AI

The role of explainability in building trust

Using ethically sourced, culturally aware datasets

Looking to global standards like the EU AI Act and Coalition for Health AI

Visit [mitchelluniversalnetwork.com](https://mitchelluniversalnetwork.com) for more updates. 

#Blockchain #Web3 #AIinHealthcare #EthicalAI #AIGovernance #HealthTech #FutureOfHealthcare #ResponsibleAI #HealthcareInnovation #AIForGood #DigitalHealth #PivotingToWeb3 #AILeadership #BusinessAndAI #AITransformation #AIForBusiness #HealthVisionAI #DonnaMitchellPodcast

About Brian Green:

- Deep experience and comfort in cross-functional team leadership serving customers in health communications, health analytics, and digital media & marketing operations. A thought leader in Responsible AI, AI governance, use-case development & organizational readiness for AI solutions.
- Adept at public speaking, presentations, and written communications, with the ability to synthesize and communicate complex medical and technical information to various audiences.
- Demonstrated strategic and analytical skills and the ability to research and understand the needs and requirements of competitive business landscapes to facilitate innovation and change.
- Subject matter expertise in Oncology, Immunology, Cardiovascular, and Rare Diseases.

Connect with BRIAN GREEN:

Website: https://health-vision.ai/
Email: Brian@Health-Vision.AI

Connect with Donna Mitchell:

Podcast - https://www.PivotingToWeb3Podcast.com
Book an Event - https://www.DonnaPMitchell.com
Company - https://www.MitchellUniversalNetwork.com
LinkedIn: https://www.linkedin.com/in/donna-mitchell-a1700619
Instagram Professional: https://www.instagram.com/dpmitch11
Twitter/ X: https://www.twitter.com/dpmitch11
YouTube Channel - http://Web3GamePlan.com

Want to learn more? Pivoting To Web3 | Top 100 Jargon Terms

Donna Mitchell [00:00:04]:
Well, good morning, good afternoon, good evening, welcome, welcome, welcome, and welcome to the Pivoting to Web3 podcast. Now today we have Brian Green, and Brian is going to be a little different from what I've had before, because I've finally found somebody in governance. He is the Chief AI Ethics Officer and founder of Health Vision AI, an expert in AI governance strategy and responsible AI practices. I really enjoyed meeting him, and I thought this would be an invaluable conversation for the audience because we've touched on AI, but we really haven't gotten involved with it and understood it and seen where it intersects. So with that said, Brian, say hello to your audience and ours, and tell us a little bit about your background and how you came into the governance space.

Brian Green [00:00:57]:
Well, thank you, Donna. I'm very pleased to be here. Glad that you asked me to come and talk to your audience. As you said, I am in the AI space, and I am responsible for thinking about AI governance, ethics, and the responsible use of AI within the healthcare and life sciences sectors primarily, although I do work in other sectors as well. My journey here is a little bit different perhaps than the norm, although in the AI world I find that there are not a lot of norms. People's paths into this world are a little bit different than in some other sectors, perhaps because it is new and emerging. Although my background matches some of what people that are in this world have done in the past.

Brian Green [00:01:51]:
I started in the healthcare sector about 30 years ago as a researcher. So I'm a social scientist, and I was doing research in public health, in the nonprofit sector, doing what we call traditional statistics within the social sciences. So you analyze data, you come up with patterns to help explain the data. And then in public health you use that to formulate answers for public health programs or policy, or for interventions that will help people reduce incidence of disease, whatever. The data is used for lots of different purposes. And while I enjoyed working as a researcher, one of the challenges I always found was that we often wrote papers and published them in an academic journal, but that data, the findings, the research, was not used for change for maybe a decade; it took so long. So I began early on seeing some of the challenges of how traditional research and even traditional statistics work within larger systems.

Brian Green [00:03:18]:
After that five-year study I was on, I then jumped to the care side in healthcare and did more quality improvement work. So again, it uses data, but it uses data to do iterative cycles of improvement, which is a little bit quicker than traditional research. And interestingly, I find that approach is used a lot within AI and machine learning training; I'll come back to that topic probably a little bit later. So after a couple of decades of working in the public health sector, I jumped over to the private sector, because I had been working in healthcare communications and the digital space, some of the earlier websites providing information for people, and app development. The company I worked for developed and managed online communities for people with chronic health conditions. So they were all online, 100%.

Brian Green [00:04:23]:
We did not meet with what I call real people out in communities, which is what I was used to. I was used to working out in the community and talking to people, doing focus groups, engaging with people. So this was 100% online, and that was a big shift. So one of the things I did at that company was help them formalize and validate that model. We published the what, the why, and the how of what we were doing to engage people 100% online in that space. Again, you're using data in very new, quick, creative ways, and you're able to iterate very quickly. You're able to see what patterns help people engage better with content, what topics they engage with, even how you moderate an online community and how many people you need to moderate it, but also what you focus on: you provide support, you first validate where they're at. Right. So there are various principles of how you reach and engage people online meaningfully to improve their health.

Brian Green [00:05:38]:
The revenue for that business was primarily pharma advertising and market research. And working within that space is how I got more interested in AI, and certainly generative AI. We had used predictive AI and what's called RPA, robotic process automation: ways of automating some of the data workflows behind the scenes so that we could access the data more quickly and provide usable insights to optimize the audiences, the platforms, who we were reaching, that sort of thing. In that job the last few years, I was organizing that data analytics team. And having the experience with predictive AI is what led me to think more about how we integrate other AI, generative AI as we now call it, into our workflows and into product development. And that's kind of where I'm at now.

Brian Green [00:06:47]:
I started my own consulting company in January of 2024. So it's been a little over a year now. And you know, doing some exciting things in, in that space.

Donna Mitchell [00:07:00]:
So, I mean, thank you for that brief background and your history, because I was very impressed by it when you started looking at the data analytics. When you talk to companies that are interested in bringing AI into the organization, what are the main things they should be looking at when they want to bring it in? You've got a workforce; there are behaviors, there are attitudes, there are mindsets. And I see the benefits from a diagnostic standpoint on the medical side, but what happens from the organizational side? Maybe this is what's rolling in my mind. When I first came into corporate, I learned that you had to balance the needs of the organization and the public, the customers, and now all the stakeholders, as we call them today. How do you make that happen? When you talk to a client, what are the first two or three things they really need to be looking at? Especially if they've got like 20,000 or 30,000 employees and they're more at the enterprise level.

Brian Green [00:08:11]:
Exactly. Well, you know, you hit on a great point, because one of the key challenges is the size of the business and the scale at which they want to use AI within their operations or within whatever outcomes they're producing. And that's why there's a clear starting process. It should not be one AI size fits all. And I think one of the challenges we're seeing right now is that when OpenAI's ChatGPT hit the market and was embraced by people to use every day for various purposes in their lives, it quickly accelerated what people think about AI. And that's kind of all they know about it: this kind of conversational AI agent, which is just one very small type of AI and one very specific use case. So when a company's looking at this, they've got a couple of things to think about. One, they already know that a lot of their staff are using AI, maybe on the phone, in a way that's not approved, what people may call stealth AI.

Brian Green [00:09:24]:
So they know that people are using it, they know that there's an exposure to risk, but they're not really sure what they want to use it for. So the very first thing they need to do, and I start with a discovery session talking about what they perceive their needs to be for potentially using AI, is step back and say: what business problem are you trying to solve? Are you just trying to mitigate harms of your staff using AI? That's a different question. Or are you really intending to incorporate AI for specific purposes in your company? If the answer is yes, and in healthcare, then we want to think about what they're trying to solve with that. Is it reducing staff time and workflow issues? Is it improving diagnostics? Is it improving their ability to refer to specialists in various health areas and to do those referrals more quickly with more data points? So what business problem are they actually trying to solve? That needs a clear understanding, because they may not need AI for that problem, and I'll be truthful and tell them, like, you've got the wrong one. However, I find that most people do need AI in some context, right? And they may need predictive AI; they may not need generative AI yet. They're not there yet.

Brian Green [00:10:52]:
So I have to start with a readiness assessment, a formal readiness assessment. And what this looks like is a little bit different depending on the company, their needs, where they're at.

Donna Mitchell [00:11:06]:
Can you give us an example of what a readiness assessment is like?

Brian Green [00:11:09]:
Exactly. So a readiness assessment, for me, starts with a conversation, understanding their needs. The way I do it is a collaborative process. I don't come in like an auditor and just start checking off boxes; that may be part of the process, the audit part of it. But I'll map out what this process looks like for them. I tailor the readiness assessment to their needs, and it'll have several milestones. So one of the first ones will be calling together a group of people at that hospital, organization, company, however you want to characterize that business.

Brian Green [00:11:49]:
I'll have them identify multiple stakeholders. So they may need someone from the business units, finance, their legal team, their internal IT team, their data people. I need doctors, nurses, people that are involved in patient advocacy. If they have an internal ombudsman office, people that deal with patient issues, that'd be a great person to have in this initial conversation. And patients themselves in some part need to be involved in the development of AI in that process longer term, and so this is a great way to involve them early on. In that readiness assessment stage, we look at very specific questions. So I have a set of 50 to 100 questions that I want to address within that process. And how many questions really depends again on a number of factors: the size of the organization, their needs.

Brian Green [00:12:52]:
And we'll have a process where they identify a certain number of people within the organization that will complete a self-assessment. They'll rate themselves on these questions and these factors. I'll do that with my team; we'll have an assessment, and I'll do stakeholder interviews. So I'll ask again for a slate of people to talk to, and I'll do a workshop. I'll do a workshop where I bring together the people and talk, to get people on the same page and aligned on what they're thinking about. Because you often see there may be one leader at the organization, a CEO, that is really gung ho on AI, and other people aren't. Right.

Brian Green [00:13:36]:
And you need to hear that up front. You need to see where that organization as a whole is at. So I'll start with that, and usually that workshop starts with a very initial exercise that's open. I want them to share honestly. So I'll say: what are your biggest fears about using AI in your jobs? And then I'll also ask people to contrast that with: well, what do you think might be helpful about AI? So get all those things out on the table. Have that discussion in a safe environment where they're going to be honest with each other, because at that initial discussion there's no outcome yet. Right.

Brian Green [00:14:18]:
So they feel less threatened to kind of share these things.

Donna Mitchell [00:14:21]:
That's interesting. That's really interesting.

Brian Green [00:14:23]:
That's my approach, right. This is how I do it, and this is how I've done similar conversations, not even about AI. Back when I did quality improvement work with multi-stakeholder teams in healthcare, those were with doctors, with nurses, with social workers. These were large teams that we brought together. And you're focusing on a specific problem, you're looking at data, but you're asking that group to problem-solve together and step through an iterative process of improvement. It's a very similar approach here; you're just using a different technology, in this case AI. So you want them engaged in whatever you're developing from an AI perspective.

Brian Green [00:15:05]:
They're not all developers, not all engineers, not all data scientists. But they have very valuable things to contribute early on in that process, because they're the implementers. And frankly, for successful AI in healthcare, one of the biggest problems right now, one reason people are not moving beyond a pilot stage and using AI in more useful ways in healthcare, and in life sciences too, is figuring out where in the workflow is the best place to insert the AI. Right? Is it in the ER? And even if it is the ER, where in the ER: the admissions process, or that process of different people seeing you in the emergency room? Where does it happen? These questions need to be answered, and they need to be answered by the people on the ground that are doing the work every day. So it's a different approach to readiness assessment than others may use, but I need it to be open and collaborative up front and not just the kind of very formal audit checklist.

Brian Green [00:16:13]:
Now, what are the components of that? I have buckets of domains that we're looking at. So obviously, yes, we're looking at their tech stack, we're looking at their existing data sets and what state they're in. And there may be 50 questions just about the tech and the data, but we're also looking at the organizational factors, the readiness. How many staff do they have that know about AI? What is their level of literacy? And then there's obviously a training piece that comes later. And so for the readiness part, you need to know something about the systems too. Right? How is organizational learning already done within that organization? Is it top down? Do they have multiple touch points throughout the year where they train people on things, for instance, cybersecurity? Every organization has cybersecurity training that they do, usually once a year, as one very formal thing. They take that test, they pass, whatever, and that's it.

Brian Green [00:17:23]:
If it's not integrated within their ongoing performance reviews and engaged in their one-to-one conversations with their manager, then it's not well integrated. And that's the kind of thing you need to know up front with AI, because you can't do a one-and-done with AI; it's continuous in terms of the organizational learning. And so you also need to look at leadership. What is the commitment to AI within the organization? Do they have supports in place for this? Where are they even thinking about placing it? You need to know the answers to all these questions, because it informs my recommendations of what I think they need to do and next steps. Right. So it's really a whole-of-organization assessment in some ways, but you're focusing on the critical pieces that will be the most important for them to hit the ground with AI and implement it successfully. So there are a number of factors that go into that.

Brian Green [00:18:30]:
Then the outcome of this assessment process is a report, but it's very tangible and usable, because I'll have all those factors with a rating, a stage of readiness, next to them, color coded actually, because I love to color code little status bars. And it then links to their next step, which would be to roadmap this and say, okay, how will we implement this, and what does it look like in terms of time frame: one to three months, three to six months, six to twelve months, past twelve months. So you organize it into that traditional roadmap that they're used to from a development perspective; software development engineers do their road mapping that way. And then there's a way they can prioritize each of the pieces, the components that would be necessary for AI implementation. That priority setting happens at the end point of the readiness assessment, so that they can take this data from the assessment and pull it into whatever project management system they're using, say Asana or Jira if they're using formal project management software; this is ready-made for that. They can just import it, and then they'll have those pieces that help them develop their roadmap moving forward. So I like to package it in a way that's not just a report, but also useful for the team as they move forward.
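To make that hand-off concrete, here is a minimal sketch of how assessment factors, ratings, color-coded stages, and roadmap buckets could be packaged as a CSV ready for import into a tool like Jira or Asana. The field names, scoring scale, and stage labels are hypothetical illustrations, not the actual deliverable format described in the episode.

```python
# Hypothetical packaging of readiness-assessment results for import into a
# project management tool (Jira and Asana both accept CSV imports). Fields,
# scoring scale, and stage labels are illustrative assumptions.
import csv
from dataclasses import dataclass, asdict

@dataclass
class ReadinessItem:
    domain: str     # assessment bucket, e.g. "Tech & Data", "Organizational"
    factor: str     # the assessed question or factor
    score: int      # combined self-assessment/interview rating, 1 (low) to 5 (high)
    timeframe: str  # roadmap bucket: "1-3 mo", "3-6 mo", "6-12 mo", "12+ mo"

def stage(score: int) -> str:
    """Map a numeric rating to a color-coded readiness stage."""
    return {1: "Red", 2: "Red", 3: "Amber", 4: "Green", 5: "Green"}[score]

items = [
    ReadinessItem("Tech & Data", "Data sets documented and access-controlled", 2, "1-3 mo"),
    ReadinessItem("Organizational", "Ongoing AI literacy training in place", 3, "3-6 mo"),
    ReadinessItem("Leadership", "Executive commitment and budget defined", 4, "6-12 mo"),
]

# Lowest-scoring factors become the highest-priority roadmap entries.
items.sort(key=lambda i: i.score)

with open("readiness_roadmap.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[*asdict(items[0]), "stage", "priority"])
    writer.writeheader()
    for priority, item in enumerate(items, start=1):
        writer.writerow({**asdict(item), "stage": stage(item.score), "priority": priority})
```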

Brian Green [00:20:09]:
Now, they may then choose to consult with me to help with some of those next steps as well, or they may not; they may go with another company. But at least they don't need to bring in another consultant to do that piece, since they've already got some of their priority setting done, which helps them get off the ground and organize that first initial stage, the development stage.

Donna Mitchell [00:20:33]:
So you've given them the package and the implementation. And it sounds like an awesome way to go about it, because you kind of give them their strategic plan in a way too, as I was listening to you: the roadmap, the blueprint, and what to do next. So when you're looking at the organization, and for those that are listening that have brands, or are solopreneurs, or smaller or medium-sized businesses, and they're really looking at what they want to do, when you have governance, that really comes into play. I think governance is kind of behind the development and the technology right now. It really hasn't caught up, so it's like the wild, wild west.

Donna Mitchell [00:21:17]:
I've heard that's the best way for anybody to explain it: it's the wild, wild west, and here we are. But meantime, you're trying to do the right thing. There's a little bias in there. They've got local governance, geographic governance, global governance. So, yes, I guess my question is, if somebody's looking at trying to bring in AI.

Brian Green [00:21:38]:
Yes.

Donna Mitchell [00:21:39]:
What's more important: looking at it from a functional standpoint, whether it's their operations or admissions or marketing or the sales group or customer service group, or what's the best they can grow with the talent? Where do they decide what they want to do first? Does that come out in your assessment, where their needs are? Is that what they figure out when you come in? Or when you come in, do you help them figure that out?

Brian Green [00:22:13]:
I help them figure that out, yes, I help them.

Donna Mitchell [00:22:15]:
So they could be confused in that area.

Brian Green [00:22:17]:
They most certainly are confused in that area. Well, first, there's not one clear path forward. There's not one way to do it. And you're right, governance right now is all over the place. It's confusing.

Donna Mitchell [00:22:33]:
Can you help us with that? Can you help us understand what everybody needs to know right now about governance? What can you share on that note? Because I haven't done much in that area. So now that we know the assessment, what you do, how you do it, and how you leave it with them, there's so much I want to ask, but we only have a little bit of time right now. What is governance and how does it impact things? And could you also share with us what concerns you about what you don't see in governance?

Brian Green [00:23:05]:
Great questions. It's not the easiest thing to answer, only because governance can be, and is, lots of things. But for me it is the foundation and the framework that you need to develop and use AI successfully in your business. And that's any business, frankly. But within healthcare and life sciences, where there is already a set of regulations, it's even more critical. So, yes, you hit on a few of these items already. Governance is around the vision and strategy; it's around the leadership commitment; it's the oversight team; it's the ethical guidelines; it's about equity, making sure that you can measure outcomes and ensure that you have equitable decision making in the AI tools that you use; it's about feedback mechanisms; it's about tracking compliance; it's about quality improvement.

Brian Green [00:24:12]:
CQI, continuous quality improvement, and AI integration, like I mentioned. It's about cultural appropriateness, and it's fundamentally about transparency and explainability. So it's all of those things. I want to make governance the foundation, the way that people just successfully integrate AI into their organizations. Right? And so it gives you a framework and a process as you're moving forward. One of the key things that all organizations in healthcare and life sciences that are utilizing AI need to do is create a committee. It's a multi-stakeholder committee, a group that's focused on the governance aspect, and it involves some of those same people I mentioned earlier in the assessment process that you may need to make sure you've engaged. So in your governance committee you need doctors; you need clinicians, which may be nurses or other people that help in the clinical care flows; you need someone with business analyst responsibility.

Brian Green [00:25:36]:
You need someone from your finance team, someone from your legal team, patient advocates, the developers, the data science people. So you need a bunch of different people. You need a sociotechnical person like myself, someone with a social science background that brings in different research perspectives. You need those people as part of your ongoing way of looking at what you're doing, providing guidance to the C-suite, providing guidance on the types of data that you need to be analyzing on an ongoing basis. When you utilize AI within a healthcare setting, for instance, because it is a powerful tool that has potential for risk, you need to make sure that you're minimizing that risk for the organization and that you're producing the outcomes that you want, that you're not producing outcomes by using AI that would bring about greater health disparities or inequities between different groups. There are some examples here from very early AI research in healthcare showing this for specific health conditions. Let's pull some cardiovascular examples, where we see that there are existing health disparities, right? Women and men have different risk factors for chronic heart disease.

Brian Green [00:27:10]:
There are racial disparities, where you can see huge differences in outcomes from clinical care, where certain racial groups don't end up doing as well after a cardiac incident and after receiving care. There are a number of reasons for this; some of it is from that very initial entry point in the ER, right? So Stanford Medicine is one of the groups that has done some of this research. And what they did was look at the data, the machine learning data, the LLM data that was used in the AI, and analyze it on its own with different iterations to try to adjust for the bias that was in the model. The bias was in the original data sets, right? How do you make sure that you can adjust it so that you're not reproducing those existing health disparities? That's the question they had to answer. In the first few iterations, they made adjustments, and they were adjusting for racial differences.

Brian Green [00:28:33]:
They saw that African American women, for instance, adjusting for both gender and race, had much poorer outcomes compared to other groups. And they tried to adjust for that in the data sets themselves. And the original adjustments went the wrong direction: they made the disparities worse. So they had to redo this; they did the study like four different times. And the other adjustment they made was within the workflow of the ER, where they were looking at how to best insert this AI tool.

Brian Green [00:29:17]:
And they moved it around to test where it would make the most difference. In the final model, they got it: they adjusted the data. I could describe how they did it for your audience, but it's a study that hasn't been published yet. They presented this data at a Stanford AI conference a few months ago, October maybe, I forget when.

Brian Green [00:29:48]:
And I saw it. I was very impressed by the data, because it's one of these issues that is so challenging: you know that there's bias within all healthcare data, and you always try to adjust for it statistically. Right? But this is a little bit different when you're talking about AI, because a lot of it is what they call black box; you don't see what's happening in the background. And so when you're looking at it at the machine learning level, there are things you can do to push the models in different ways: reinforcement learning, and different techniques to ingest new data, RAG, for instance, where you pull in different data sets to adjust.

Brian Green [00:30:28]:
The interesting thing here is that it's easy to think and assume that you know the way to improve an outcome. But they got it wrong. They got it right in the end, but it took them many iterations on that original data model and ingesting new data sets to get it so that it did not worsen health equity outcomes. So it proves that it's possible to get it right; it just proves that it's challenging. And I come back to governance for this, because this was a research study. Right? But if it was the real world, you would have a governance committee in place that would be looking at each iteration and saying: wait a second, what is going on? Stop.

Brian Green [00:31:14]:
Let's look at this closer. Here you just had the researchers themselves; they're their own committee, and it's not implemented in a real-world setting in a way that's going to harm people, thank goodness. But you see where it could go. You could see that people could have been inadvertently harmed by starting a project where you're making assumptions that are very logical to make, but the AI itself was not trained in a way that would produce the outcome you wanted to see. And therefore, based upon the original training data, it was producing outcomes that would amplify existing health disparities, the exact opposite of what you want to do. So it brought important lessons out for me, just hearing that presentation, as I thought about it and processed it. One of those takeaways is that it amplifies everything I was thinking about: why AI governance, the kind of foundational model where you have a stakeholder group that is always monitoring and evaluating how this looks and what adjustments need to be made, is so important and so critical in healthcare.
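As a concrete illustration of the kind of per-subgroup check a governance committee could run at each model iteration, here is a minimal sketch in Python. The false-negative rate is one plausible metric for missed cardiac cases; the column names, example data, and alert threshold are all hypothetical assumptions, not the Stanford team's actual method.

```python
# Minimal sketch of a subgroup performance audit: compare each demographic
# group's false-negative rate against the overall rate and flag large gaps.
# Columns, groups, and the 5-point threshold are illustrative assumptions.
import pandas as pd

def fnr(d: pd.DataFrame) -> float:
    """False-negative rate: share of true cases the model missed."""
    positives = d[d["y_true"] == 1]
    return float((positives["y_pred"] == 0).mean()) if len(positives) else float("nan")

def subgroup_audit(df: pd.DataFrame, group_cols: list[str], max_gap: float = 0.05) -> pd.DataFrame:
    overall = fnr(df)
    rows = []
    for key, d in df.groupby(group_cols):
        rate = fnr(d)
        rows.append({"group": key, "n": len(d), "fnr": rate,
                     "gap_vs_overall": rate - overall,
                     "flag_for_review": abs(rate - overall) > max_gap})
    return pd.DataFrame(rows).sort_values("gap_vs_overall", ascending=False)

# Toy predictions from a hypothetical cardiac-risk model with demographics.
preds = pd.DataFrame({
    "y_true": [1, 1, 0, 1, 1, 0, 1, 0],
    "y_pred": [1, 0, 0, 0, 1, 0, 1, 0],
    "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "race":   ["Black", "Black", "White", "White", "Black", "White", "Black", "White"],
})
print(subgroup_audit(preds, ["sex", "race"]))
```

Rerunning a report like this after every adjustment is the "stop and look closer" step described above, so a worsening gap is caught before a model reaches patients.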

Donna Mitchell [00:32:36]:
So while you were talking, what came back to my mind is some articles I had read and it seemed like you have to be careful or you have to know that you have ethically sourced data.

Brian Green [00:32:46]:
Yes.

Donna Mitchell [00:32:47]:
And ethically sourced data, how does somebody know? Is there a journal, or a list of ethical journals or magazines or white papers? How does somebody know who's in the lead on ethics and governance at this time, if they wanted to do some research on who and what they wanted to choose? And then my second part to that is: are there certain monitoring tools or mitigation tools that you're aware of that you would like to talk about?

Brian Green [00:33:23]:
These are all great.

Donna Mitchell [00:33:24]:
Before we close.

Brian Green [00:33:25]:
Yeah. I mean.

Donna Mitchell [00:33:25]:
Or do I need to have you back?

Brian Green [00:33:27]:
These are great questions. It's a challenge. So we have to think about contrasting what we have to look at and leverage here in the U.S. versus, say, the EU. The EU has a regulatory framework in place right now, the EU AI Act, which gives us a lot of guidance about the kinds of things we can look at and the way we need to think about mitigating risk within certain sectors, like healthcare. Healthcare is always going to be considered high risk. Same with life sciences, pharma research.

Brian Green [00:34:00]:
That's always high risk. Why? Because people's lives are on the line. Right. So the EU AI framework is very helpful to us and gives us a lot of things to look at. Setting that aside and looking at the U.S.: we don't have that same regulatory framework yet, and we don't have specific laws that we can point to. But there are other frameworks. We have privacy law, we have other laws in place that provide a lot of guidance around what we think about when we talk about governance and compliance and risk. In answer to your question of what we look to, to help us understand ethical frameworks, and what I call responsible AI: I use these words interchangeably a lot, but they're actually very different.

Brian Green [00:34:47]:
And we can argue about it; what is ethical to one person may not be ethical to another person. Right. And so getting it out of the conceptual world into specific standards that are measurable, that people can see are tangible, is the million-dollar question. We do have that. We have, for instance, the NIST framework: the National Institute of Standards and Technology has a framework, developed originally for cybersecurity, that has been adapted for AI and is available. And there are organizations that have taken that and some other frameworks, like the ISO framework and some others, and brought it down to measurable standards within healthcare. So for instance, there's a group called CHAI, the Coalition for Health AI, which is in the U.S.

Brian Green [00:35:39]:
It's a nonprofit organization, formed a little over two years ago, that has brought together leaders from industry, the private sector, government, and the nonprofit sector to help develop guidelines, standards, and that sort of thing, exactly what you're asking about. They, slash we, I'm a member, though not a formal leader or committee member, just a member, have developed very useful standards. They also put out a kind of model card. In AI in general, model cards are a thing; some people say it's like a label on a product, very similar to a label that you find on a food product. That kind of model card shows you the things that are part of that model's framework and testing framework, etc.

Brian Green [00:36:48]:
It lists all of those. That's one helpful guide. Does it answer all of our questions, like whether bias still remains in a model? No, but it does identify some of the questions you need in order to answer that and do more testing. They've also proposed what we would call third-party assurance labs. Right. This is just a proposal; some exist, but there's no regulatory requirement. And this is one of the things that I know has been part of what the federal government has thought about or looked at over time, and is still part of the existing administration's potential framework: to have third-party entities.
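For readers who haven't seen one, here is a sketch of the kind of information a health-AI model card surfaces, the "label on a food product" idea. The field names and values below are hypothetical illustrations, not CHAI's actual model card schema.

```python
# Illustrative model card as a plain data structure: a "nutrition label" for a
# model. Every field name and value here is a hypothetical example.
model_card = {
    "model_name": "cardiac-risk-triage",  # hypothetical model
    "intended_use": "Decision support for ER triage; not autonomous diagnosis",
    "training_data": {
        "sources": ["de-identified EHR records, 2015-2022"],
        "known_gaps": ["under-representation of rural patients"],
    },
    "evaluation": {
        "metrics": {"auroc": 0.87},
        "subgroup_performance_reported": True,  # key transparency item for bias
    },
    "risks_and_mitigations": [
        "May under-predict risk for under-represented groups; monitor "
        "subgroup false-negative rates after deployment.",
    ],
    "update_policy": "Re-evaluated quarterly on local data",
}
```

As noted above, a card like this doesn't settle whether bias remains; it tells you which questions to ask and where more testing is needed.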

Brian Green [00:37:36]:
So a public-private partnership around an organization that would be this third party analyzing AI models and being able to say yes or no: do they do these things that help mitigate risk? Do they do these things that help identify, and transparently say, what data sets are behind the model? Now, there are things that anyone can do, any organization, irrespective of how those proposals move forward and how they develop. They're great things to look at, but they don't answer all your questions. So looking at data and what you need to do: any organization can take a model that's out there, look at model cards and other indexes of performance, and pick one or two that they might want to use to develop their use case. And then, regardless, when they're using and developing this model within their context in healthcare, they need what we call localized data to fine-tune the model. You have to ingest data from your healthcare system in a privacy-protected way. So it's in a data lake.

Brian Green [00:38:52]:
So say you're using a Llama model, whatever model you're pulling off the shelf; your developers are putting it in a protected environment that's not exposed to other users, just your organization. It could be an OpenAI model, but you've put it, in this privacy-protected way, in a data lake, a data warehouse kind of setup, and then you ingest your organization's data. There are different techniques for that; RAG is one that is popular right now that people talk about. The techniques don't necessarily matter as much as that you do this in ways that are iterative over time. You're fine-tuning the model itself on your localized data. That's important because every healthcare institution serves different populations.

Brian Green [00:39:44]:
And that data is going to be very important in training or fine-tuning that foundation model in a way that it's going to perform better for your outcomes, for the needs of your population. So you're not getting bias from that larger foundation model, which may be representative of populations that are not who you serve; you're not getting as much of that bias interfering with the performance of the model, because you're fine-tuning it on data closer to your own population. So it's a complicated process for people to understand, perhaps, but it's easily achievable, and it's something that is needed within the healthcare context. Life sciences has similar things, and that's something that's kind of intuitive to most pharma companies, say. They're never going to want their proprietary data to be exposed to potential competitors. So they're always going to have their data contained, even if they're utilizing cloud, you know.
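To make the RAG idea concrete, here is a minimal sketch of the retrieval step over an institution's own documents, kept inside its private environment. The tiny corpus, the TF-IDF retriever, and the prompt template are illustrative assumptions; a production system would use a vector database and a privately hosted model rather than this toy setup.

```python
# Minimal RAG retrieval sketch: ground a model's answer in local, privacy-
# protected documents. Corpus, retriever, and prompt are toy assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for de-identified local clinical guidance living in the data lake.
local_docs = [
    "Our ER triage protocol escalates chest pain with abnormal troponin.",
    "Referral to cardiology requires a recent ECG and a lipid panel.",
    "Patients consent to AI-assisted triage at intake via the portal.",
]

vectorizer = TfidfVectorizer().fit(local_docs)
doc_vecs = vectorizer.transform(local_docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k local documents most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    return [local_docs[i] for i in sims.argsort()[::-1][:k]]

question = "When do we refer a patient to cardiology?"
context = "\n".join(retrieve(question))
# This grounded prompt is what would go to the privately hosted model
# (e.g. a Llama model running inside the organization's protected environment).
prompt = f"Answer using only this local context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Because retrieval pulls from the institution's own data, the model's answers track the population it actually serves rather than whatever the foundation model saw in pre-training.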

Donna Mitchell [00:40:58]:
Cloud computing and everything else.

Brian Green [00:41:01]:
Yeah, they still have to have it in this prepared data lake way. And so people kind of understand: yes, we need to take models and adapt them to our use, use our data, ingest it. Now, that can be very costly. Right. And so again, you need experts, people in this space that have developed fine-tuned models. I've done some of this work, so I can make suggestions about how they architect it, how frequently they do these data ingestions and fine-tuning, and how they manage it in a way that contains their costs. Because you could easily blow up your bill if you're not careful in how you've set it up. And it's a consideration that no one selling you a solution raises; they're usually saying, oh, use our solution because you'll save money.

Brian Green [00:42:08]:
Well, you may save money in the beginning, but your bill could be triple what you're expecting, because you haven't factored in what needs to happen in this kind of ongoing cycle. Once you've launched your AI tooling, it's continuous. You've always got to be looking at new data, fine-tuning it, and ensuring that it's performing the way that you want. It never stops. And that's not just because a company like mine wants to keep money flowing from this kind of work. It's really just the way AI works: it always needs to have data.

Donna Mitchell [00:42:53]:
It's constantly advancing the model.

Brian Green [00:42:55]:
The model would degrade over time, right, and potentially collapse. One of the risks of AI is what we call model collapse, and it's just because of the way that it's built, right? And so that's why, when people think, oh, this is just a shiny object, let's just get it and use it and we'll figure it out later: it has to be done planfully. It does have to be tied to strategy. And the reason that I think governance is so important is that it is the foundation that allows you to do this well, to do it successfully, and to handle all the other business factors you need to think about, like doing it within a budget.
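One hedged sketch of what "ensuring it's performing the way you want" can look like in practice: score each new batch of labeled local data and alert when a metric drifts below a floor. The AUROC floor and batch data here are hypothetical assumptions; real monitoring would track more metrics, including the subgroup gaps discussed earlier.

```python
# Toy performance-drift check for a deployed model: evaluate each new batch
# of outcomes and alert when the metric falls below a governance-set floor.
from sklearn.metrics import roc_auc_score

AUROC_FLOOR = 0.80  # hypothetical threshold set by the governance committee

def check_batch(y_true: list[int], y_scores: list[float]) -> bool:
    """Return True if this batch passes; False should trigger review/retraining."""
    auroc = roc_auc_score(y_true, y_scores)
    print(f"batch AUROC = {auroc:.3f}")
    return auroc >= AUROC_FLOOR

# Example: a weekly batch of observed outcomes vs. the model's predicted risks.
if not check_batch([1, 0, 1, 1, 0, 0], [0.9, 0.2, 0.4, 0.8, 0.35, 0.5]):
    print("Alert: performance degraded; route to the governance committee.")
```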

Brian Green [00:43:34]:
Do it so that your users or customers, patients in this case, actually benefit from it and understand how its use is actually helping them. That's one of the factors I haven't really talked about yet, but to me it's one of the most important in what we're talking about with responsible or ethical AI: the explainability. So if we're using AI for a patient's care, they need, one, to have consented to the use of the AI, but they also need to know what led to the decision, where a doctor then said, okay, I'm going to recommend this for you based upon this AI that we used. It needs to be explained to them in a way that they understand. And the explainability features in AI itself can help with this. The AI can say, or print, it doesn't have to talk: here are the five data points that were the most critical, that were prioritized in the recommendation that the doctor then took from this body of knowledge, saying, here's what we recommend for a treatment. Right? So they need to be able to explain that. Healthcare providers already explain some of what they do and what they're recommending for a patient, and why. But again, since this might be given to the healthcare provider in a way that is a black box to them, it shouldn't be, but that might be what's happening now, that explainability feature set, the layers of explainability, need to be part of the AI output and then communicated to the patient in a way that they understand how AI was used within their care and within the decision framework being presented to them.

Brian Green [00:45:38]:
And that's part of governance, a critical part of governance, within healthcare or other sectors where you've got very sensitive and critical information that affects someone's life or well-being.
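A minimal sketch of the "five most critical data points" idea: for a simple linear risk model, per-patient contributions (coefficient times feature value) can be ranked and handed to the clinician to relay in plain language. The features and data are hypothetical stand-ins, and real deployments often use richer attribution methods such as SHAP; this is just the shape of the output being described.

```python
# Toy explainability layer: rank the features that drove one patient's risk
# score from a linear model. Features and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "systolic_bp", "ldl", "troponin", "smoker", "bmi"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(features)))  # toy standardized patient data
y = (X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def top_drivers(patient: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
    """Rank features by contribution (coefficient x value) to this patient's
    risk score, so the 'why' can be explained to the patient."""
    contributions = model.coef_[0] * patient
    order = np.argsort(-np.abs(contributions))[:k]
    return [(features[i], round(float(contributions[i]), 3)) for i in order]

print(top_drivers(X[0]))  # highest-impact features first, e.g. troponin
```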

Donna Mitchell [00:45:55]:
So I've got a quick question, and then I'm curious to know if there's something you want to share that I didn't ask, and how people can contact you. But, so you're saying a patient, are you saying that a patient should know when a doctor is using the output of AI for diagnosis and a treatment plan?

Brian Green [00:46:15]:
Absolutely. So first of all...

Donna Mitchell [00:46:18]:
Do you know if that's happening now? Or do they just kind of get the information, they get the diagnosis, they talk to the patient, and they're out of there so they can get their numbers in and get to the back end? You know what I'm saying?

Brian Green [00:46:27]:
So I hope it's happening. I don't know...

Donna Mitchell [00:46:32]:
Important part of the conversation to me. Are y'all telling what are they doing it? Because I can pop in there myself and see, oh, this is what's going on.

Brian Green [00:46:40]:
You know, in the organizations I work with, one of the first things I do is look at their policies, right? Privacy policies, etc. They have to consent the patient to its use; it's just end of story. Now, in the EU, where the regulation is very clear, it is required by law; you have to consent. Here in the US, you have to...

Donna Mitchell [00:47:03]:
You have to consent that you're using it.

Brian Green [00:47:07]:
Yes, they have to tell the patient. Now, it may be on the little form they sign when they walk into the office, and people already get HIPAA forms and go, yeah, I know, like that, unfortunately. But really, when an organization is starting with this technology, and some of the big clinics that are using AI now, whether it's Mayo Clinic or Cleveland Clinic, do this consistently, you say: we're using a new tool in our practice, it uses AI, and here's how the AI works. They should be consenting them to this in an informed consent process, just like anything else, because it's experimental in a sense. It's not a clinical trial with an experimental drug, but it is new tooling and a new process that does affect outcomes, and they should be consenting patients to it. And think about where, if you go to a provider that has a patient portal, it's usually tied to a system, maybe Epic, where you're consenting to the use of that portal and the sharing of information electronically. It may have a little chat bot that pops up; I know mine does. Right. There's AI behind that tooling.

Brian Green [00:48:36]:
And so you can turn that off. You can say, no, I don't ever want to deal with the chat bot; I just want to send my little message, talk to a person on the phone, whatever. There is a consent process there. Again, it may not be as transparent as it needs to be. Sometimes it may be fine print, and people just click the box.

Donna Mitchell [00:48:53]:
Yes, yeah.

Brian Green [00:48:54]:
Move forward. But in my view, when an organization, a healthcare institution, is rolling this out for the first time and it's the first encounter with that patient, they should treat it almost like a clinical trial type of consent, an informed consent, where they're saying: we're using this, it shares information a little bit differently, we want to make sure that you're okay with it, and you understand we'll be using it in your care today and in the future in very specific ways. And if the way we use it in the future changes, we will tell you about that too. Right.

Brian Green [00:49:32]:
So I think it should be open. It's a conversation; it doesn't take much time. Some of the research that has been done in this area even suggests that how it's introduced matters a lot. If it's the doctor or nurse practitioner, the healthcare provider that the person already has a relationship with and trusts, who introduces it and talks about it, the acceptance of it, patients saying, yes, I agree, is much higher than if it is just kind of snuck in at the bottom of a form. Right. Because the research that has been done on this looked at how patients understood it was being used, how they felt about it, that sort of thing.

Donna Mitchell [00:50:21]:
But do you think the conversation should be taking place between the doctor and the patient when they're looking at something and they used AI to get to the end result? Let's say it's a complicated case and you're in Europe, at one of the big teaching hospitals, and at the end of the day they're trying to do a differential diagnosis. They do the differential diagnosis, and they used AI, AI through the MRI, the diagnostics, and everything came up with this. And then they've got the labs and everything else. In the office setting, yes, I guess, is where my mind is right now for the patient.

Brian Green [00:50:56]:
Yes.

Donna Mitchell [00:50:57]:
Should there be a conversation between the healthcare provider and the patient saying: look, this isn't just my noodle, this isn't just my knowledge, this is not just my clinical background; this is my clinical background with the help of AI as an adjunct, and this is the best course, and this is why? But you need to know, in that conversation, that AI is in the middle of this, making a recommendation or helping with the synopsis, with a medical opinion.

Brian Green [00:51:25]:
Yes. Absolutely.

Donna Mitchell [00:51:27]:
Okay. I think so too.

Brian Green [00:51:28]:
That's the ethical approach. And I think you can think about the AI as a member of the care team. Right? So right now, in these kinds of challenging conversations, maybe it's a primary care doc, and it's a diagnosis that may be a complicated case. It may be that they're recommending surgery or a course of treatment that requires some conversation with the patient. They have choices to make. There could be multiple treatments, and it may be that more engagement is needed for additional tests or whatever with specialists. In those kinds of difficult conversations already, the doctor, whoever's talking to that patient, should be saying: I've talked to my care team, I've talked to a couple of other doctors, a couple of other specialists, and we think that the best option for you may be X. Now, there are two other things that you might want to think about.

Brian Green [00:52:28]:
Here's what they are, right? This is the kind of conversation that should already be happening with care teams. I know my doctor, when we had to talk about some options for something, one of my doctors, I should say multiple, but that was the conversation. It's like: here's what I would recommend, but you could also think about these other things, and I can refer you to people, and so on. In that conversation, they could say: as part of the recommendations, I've talked to my colleagues, but we've also used this AI tooling.

Brian Green [00:52:58]:
Here's the synopsis of what it has informed us about. And I've looked at it, I've considered it, I like these options, so I want to talk to you about them. That's how I would recommend they do the conversation. I think that is how conversations go with some patients in some contexts right now. Unfortunately, that's not how it goes for a lot of people, and that's without AI in the equation. Right.

Brian Green [00:53:29]:
Unfortunately, we're not there in the way we need to be to have good conversations between healthcare providers and patients already. That's a challenge even before AI is added into the picture. Now, I believe that AI can actually help with that. I believe that one of the best use cases for generative, conversational AI right now within healthcare is to actually improve patient and healthcare provider communication. And I've written about that. I can talk about it forever, obviously.

Donna Mitchell [00:54:02]:
We'll have you back. We'll have you back.

Brian Green [00:54:03]:
Yeah, I won't talk about that today. And outside of my consulting, I am working on an AI application that's patient-centric. The goal is to do some of that. That company's in stealth mode right now. I have two collaborators for that company. We're in stealth mode; we're not looking for investors yet. We'll be looking in the latter half of this year.

Brian Green [00:54:31]:
So it's exciting. But I think that's part of what excites me about the use of AI within healthcare: there are so many possibilities and so much that can help improve upon our healthcare systems, and within research settings and life sciences. If we do it well, if we do it responsibly, if we do it ethically, if we do it with the frameworks that we need, then we can actually improve upon healthcare experiences for patients. Right. And your question gets at such an important, critical factor. We're already not doing this the way we need to in healthcare. Before you bring AI into the picture, you should have great conversations and relationships between healthcare providers and patients. That should be happening.

Brian Green [00:55:21]:
It's not, I guess.

Donna Mitchell [00:55:23]:
And then, I know, I want to be respectful of your time. I guess what made me really think about it is I have been on all sides of the coin: being on the clinical side as the rep with the physicians in the office, then as a patient with some histories that were complicated or misdiagnosed, and then being an advocate for a senior that I take care of long term with a chronic illness. I've kind of been inside this medical world, and there are things that should be happening that aren't happening, that I see at all levels, 360 degrees around. So that's why I asked. How can people reach out to you, or talk to you, or collaborate with you, or maybe invest with you?

Brian Green [00:56:17]:
Great question. Okay, so my website is Health Vision AI, health-vision.ai. That's my website, and there's of course contact information there. People can get more context and information about some of the services I provide, but they can just email me. My email is Brian, B-R-I-A-N, at Health-Vision.AI. That's my email, and it's probably the quickest way to reach me, quicker even than a text message or a call at this point. But yeah, they can just reach out to me directly.

Brian Green [00:56:53]:
I will get back to people. I'm happy to just talk and chat about things that they're thinking about with AI, but I'm also happy to help them get started: help them with a readiness assessment, help them with AI literacy training for their company, with really thinking about AI strategy and how they move forward.

Donna Mitchell [00:57:16]:
This was a fabulous conversation. I'm really glad that we met and that we've taken the time to delve into Health Vision AI, because the governance piece, the ethics, how the data sets and everything work together for patient-centric solutions and for healthcare is most important. So with that said, thank you so much for listening to the Pivoting to Web3 podcast. And Brian, I'd like to say thank you for being the guest. We have to have you back. I'm very interested in the communications and everything that you're doing, so we might need to turn this into a special series. Thank you so much for being here, and to those that are listening: good morning, good afternoon, good evening, and thank you so much. We're shaping tomorrow together.

Brian Green [00:57:57]:
Thank you Donna and thank you to your audience.