The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
Why AI Verification Is the Real Bottleneck in Pharmaceutical Drug Discovery | David Finkelshteyn
AI can now search through a vastly wider grid of possible compounds and molecules than any human team could evaluate in a lifetime. The bottleneck is no longer discovery speed. It is verification. David Finkelshteyn, CEO of Pivotal AI, builds AI systems for pharmaceutical and life sciences use cases that can be verified, defended, and trusted. His work sits at the intersection where machine learning outputs must survive regulators, audits, and real-world consequences involving human health.
In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with David to examine why generating AI-designed drug candidates without rigorous verification is fundamentally meaningless. David explains that discovery and verification are inseparable. A model can propose thousands of novel molecules, but each must pass through pharmacokinetics screening, in vitro testing, in vivo validation, clinical target selection, and human trials. Every stage is a potential breakpoint, and AI introduces a new category of risk: when models design molecules that look unlike anything seen in nature, there is almost no historical data to predict how the human body will respond.
The conversation covers the complexity-transparency tradeoff in machine learning, where more complex tasks demand more complex models that become less explainable. David walks through what real verification looks like in drug development, including the critical importance of separating training data from validation data to avoid overfitting and data leakage. He also addresses the emerging consumer health AI landscape and offers a practical rule: give the model more context to reduce hallucination, treat it as an analytics tool rather than an inventor, request real source references, and then go see your doctor.
The episode closes with David's outlook on how verification will shift from constraint to competitive advantage as automated robotic labs begin closing the design-verification loop, reducing the time between AI-proposed candidates and physical synthesis and testing.
(00:02) Teaser: Human readiness vs. technical readiness in healthcare AI
(00:38) AI in drug discovery: expanding compound search space
(01:00) David Finkelshteyn (Pivotal AI): building defensible AI systems
(02:00) Discovery vs. verification: why validation is critical
(03:51) AI due diligence in an emerging field
(04:26) Drug development stages: synthesis to human trials
(07:08) Novel AI molecules and the verification gap
(07:48) Validation standards remain unchanged for AI
(08:20) Faster R&D: compressing timelines with AI
(09:22) COVID vaccines: early signal of AI acceleration
(09:56) Black box problem: limits of model explainability
(11:58) Complexity vs. transparency tradeoff
(12:08) Explainability gaps in clinical and regulatory settings
(13:31) Verifying AI outputs: use case, data quality, leakage risks
(16:22) Missing context in consumer health AI
(17:33) Responsible use: verify sources, consult clinicians
(19:55) Incomplete context as a primary source of hallucination
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
The tagline for my company is humanizing AI for care. We've talked about how healthcare needs to be emotionally ready before it can be technologically ready. How people feel safe, seen, and empowered is how change happens.
Christopher Hutchins:Hundreds of thousands of dollars on the data that powers the technology.
David Finkelshteyn:What AI gives us is the ability to search through a much wider grid of possible compounds, molecules, and biomolecules. So the whole idea of AI now is to be able to fail faster, so we can get faster to an actually correct result.
Christopher Hutchins:Today's guest is David Finkelshteyn, CEO of Pivotal AI, where he focuses on building AI systems for pharmaceutical and life sciences use cases that can actually be verified, defended, and trusted. We're going to go beyond the hype today to talk about the real bottleneck in pharma AI: verification. Not making models faster, but making their outputs defensible enough to survive regulators, audits, and real-world consequences. David, welcome to The Signal Room. We're in some pretty interesting times, and I know you're doing a lot of great work in the pharmaceutical industry right now, among other things. But if you don't mind, I'd love to just jump right in with some questions. I know I've got a lot to learn, and I'm sure our listeners are going to be very interested in what you have to say today. So let's talk about pharma AI. When people talk about it, they focus on discovery speed more than almost anything else. Why is verification, not model accuracy, the hardest problem that you actually have to solve?
David Finkelshteyn:Thank you first of all for having me here. I'm excited to be here and to share some of my experience. It's always a pleasure to have a conversation with smart people. So answering your question, I don't think one is more important than the other. I think they tie together, and the discovery itself just doesn't make sense without verification, because we should be able to get something, a molecule, the drug, that fits some requirement parameters. It should be plausible in terms of pharmacokinetics and pharmacodynamics, and it should fulfill many other parameters. So in this sense, yes, we discovered something, but what is it we discovered? This discovery should be tied together with verification, first in the computer. Then we want to get down to five, ten, fifty, a hundred molecules, actually synthesize them in the lab, and test them. We should start with purely in vitro tests because they're the cheapest, and this way we cut off a lot of uncertainty. Because again, regardless of how good our model is, how accurate it is, how verified the model itself is, we need a ground truth. And the ground truth is actual physical molecules being synthesized, regardless of whether they're small molecules or big molecules. So generating molecules without verification just doesn't make sense.
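To make that screening step concrete, here is a minimal sketch of filtering proposed candidates against requirement parameters in silico before anything goes to synthesis. The property names and thresholds are invented for illustration, loosely in the spirit of rule-of-thumb drug-likeness filters, not a real pipeline.

```python
# Hypothetical candidate records: in practice these properties would come
# from predictive models, not be attached to the molecule up front.
candidates = [
    {"id": "cand-001", "mol_weight": 342.0, "logp": 2.1, "predicted_solubility": 0.8},
    {"id": "cand-002", "mol_weight": 712.0, "logp": 6.3, "predicted_solubility": 0.1},
    {"id": "cand-003", "mol_weight": 405.5, "logp": 3.7, "predicted_solubility": 0.6},
]

# Illustrative requirement parameters (made-up thresholds).
def passes_screen(c):
    return (c["mol_weight"] <= 500          # size limit, Lipinski-style
            and c["logp"] <= 5              # lipophilicity limit
            and c["predicted_solubility"] >= 0.5)

shortlist = [c["id"] for c in candidates if passes_screen(c)]
print(shortlist)  # only these few go on to synthesis and in vitro testing
```

The point of the sketch is the funnel shape: the model can propose thousands of molecules, but only the handful that survive these in silico requirements earn a place in the expensive physical verification steps.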
Christopher Hutchins:Right. I understand that. You raise an excellent point as well about the realities of what we're using now with AI: it just hasn't been available to us for very long, and it does require a lot of due diligence, especially in the space that you're working in. I really appreciate your perspective on that. I think for me it's comforting, and I hope it is for others as well. Let's think about the outputs. When you're using AI, between discovery and regulatory acceptance, where do things most commonly break down?
David Finkelshteyn:So there are many possible points of breakdown, and most of the breakpoints are the same regardless of the way we discover the lead. There are possible breakpoints when we're trying to synthesize it, then when we're trying to test parameters of the drug in vitro, then when we go from in vitro to in vivo, in living creatures. Unfortunately, we still need to do this. Then when we select a clinical target group, then when we go to human testing. All these points are possible breakpoints. And there are breakpoints specific to artificial intelligence, to this way of generating drugs, and they are the other side of the coin. Because on one hand, what AI gives us is the ability to search through a much wider grid of possible compounds, molecules, and biomolecules. Before, we knew some number of molecules; as humanity we were inventing them step by step, and those steps now seem very small compared to what we can do with all this computational power and machine learning. We have already run a lot of tests on those known compounds and molecules, all sorts of testing, in vitro, in vivo, all sorts of research. But AI gives us the ability to simulate experiments with something far, far away from what we already have, something that looks completely different, regardless of whether it's a small molecule or a protein. Using AlphaFold, say, we can design a protein that we've never seen in a human body or an animal body, something entirely artificial, and AlphaFold will predict how it will fold in space, and this is great. However, when verification comes into play, this unseen protein may cause our body not to accept it. It may cause an allergic reaction, and because it doesn't look like anything we've seen before, we have almost no information to base a prediction on. We just have to run more tests, and again, this is a possible point of breakdown.
Christopher Hutchins:Right.
David Finkelshteyn:Yeah. And this is, I would say, yeah.
Christopher Hutchins:The speed that you mentioned, I think, is the piece that gets me excited, though I can see that when you're talking about validation, it also complicates matters. You're able to look at a much broader set of information and simulate and model things that would have taken a very, very long time before you had that capability. But then you have to play catch-up on the validation side. It must be exciting to know how fast you can go, yet there's some work that you just can't shortcut. I imagine that's a bit of a frustrating point as you're learning and developing new compounds.
David Finkelshteyn:I mean, it depends on the perspective. It can be frustrating on one hand. On the other hand, as a person who has been working with machine learning models for more than ten years, I can tell you that sometimes they are so much off, completely off. So it actually makes me feel safer knowing that our validation criteria haven't been changed for AI specifically.
Christopher Hutchins:That being said, at least you know that you can fail faster if that's one of the benefits, right?
David Finkelshteyn:Exactly. And this is very, very important, because I think around 90% or even more of all drugs fail after they were selected as drug candidates. This whole industry, everyone knows that creating a new molecule, a novel molecule, even a generic but especially a novel one, is huge money, tons of money. You can't just be a startup or an enthusiast and create new molecules. You have to raise a tremendous amount of money, and a big part of it covers this huge failure rate. And the whole idea, and you caught it, the whole idea of AI now is to be able to fail faster, so we can get faster to an actually correct result. Instead of putting a lot of hours and effort and human work into something that will fail in months, now we will know within a week whether it even passes our machine learning threshold.
Christopher Hutchins:That's a massive impact. That's exciting, because think about the broader uses of AI for clinical research as well. This is a really exciting time we're entering. If we thought the COVID vaccines came out quickly, that's probably just scratching the surface of what's going to be possible as we learn to use AI capabilities more efficiently. I've heard the term black box way too often when people talk about data and AI, in any industry. Why do those models fail specifically in pharmaceutical environments?
David Finkelshteyn:What do you mean by failing in pharmaceutical environments?
Christopher Hutchins:The idea of trusting AI, not necessarily having enough visibility into exactly how it's deriving the outputs that you see. Where is that really problematic in the work that you do?
David Finkelshteyn:Yeah, so this is the essence of machine learning modeling per se. There is a tradeoff. There are workarounds that we can talk about more later, but the rule of thumb is: the more complex the task, the more complex the model you usually need, and the more complex the model, the less transparent it is. Because for us, transparency is something we can understand, relate to, grasp. The simplest machine learning model, what is it? It's just a line. In every first data science lesson, you have a graph with some dots going approximately in one direction, and everyone is asked: okay, now predict where the next dot is going to be. Everyone draws a line and says it's going this direction. Congratulations, you just made your first prediction, as a machine learning model would do. We understand where that prediction comes from; it's very simple. But then there is a ladder of complexity, and at the end we have large language models, neural networks with very complex structures and hundreds of billions of weights. When you look at the input and the output, there is no way a human can say, I think I know what this will give me. The model can solve very complex tasks because the structure is so complex and flexible, and that same complexity makes it far less explainable. We can't fully understand it.
Christopher Hutchins:Right.
David Finkelshteyn:People can't always understand the reasoning behind decisions that machine learning models make.
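The "first lesson" model David describes is easy to make concrete. Here is a minimal sketch, with made-up data points, of fitting a line and predicting the next dot; the contrast with a billion-weight network is the whole point of the tradeoff.

```python
import numpy as np

# Toy data: dots drifting "approximately this direction" (made-up values).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Fit the simplest model there is: a straight line y = a*x + b.
a, b = np.polyfit(x, y, deg=1)

# "Predict where the next dot is going to be."
next_x = 6.0
print(f"predicted y at x={next_x}: {a * next_x + b:.2f}")

# Every step here is inspectable: the whole model IS the two numbers (a, b).
# A neural network with hundreds of billions of weights offers no such
# direct reading, which is the transparency side of the tradeoff.
```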
Christopher Hutchins:That's such an important thing too. We really need to make sure that we're thinking about how we're working with AI. We have to have explainability and transparency, particularly when it comes to anything that's going to touch clinical care or any treatment that a human being is going to be subjected to. Regulation is a little slow in coming on this side of things, but I don't think we have a whole lot of time left before it starts to ramp up. I'm starting to hear a lot more about it now. Hopefully we can be proactive enough in the industry that we don't have things done to us, which is usually how regulation affects us. We've got people who really don't understand the technologies or the medicine trying to make policy, and that's just a recipe for disaster, in my opinion, particularly in the space we're talking about. Because if you're dealing with it from a regulatory standpoint, you're thinking about it through the legal and risk lens, and the people having those conversations are, for the most part, not trained scientists or clinicians. There are a few, but very few of them are actually in regulatory roles. I really appreciate the context you're giving it. What does it mean to actually verify an AI insight in the context of drug development?
David Finkelshteyn:So verifying AI comes back to what we discussed at the beginning: when we're defining the model, the model output is just some numbers, and the context is what gives it meaning. We should define several things clearly. First, we should define the specific use case, the frame within which we can apply this model. There are models that predict pretty well. Everyone knows AlphaFold; AlphaFold predicts pretty well how molecules fold in space, and it really does a good job there. And I'll give you an extreme example. Regardless of how good AlphaFold is at predicting how molecules look in space, we won't rely on it to predict a new drug against some kind of cancer, simply because it's not intended to do this. It wasn't trained to do this, it wasn't validated to do this; it's not the job of that model. So the first thing is a specific use case. The second thing is curated data. It's very important what kind of data we use to train and test the model. This isn't meant to be a guide for data scientists on exactly how to do it, but at a high level, the data should fit a reasonable statistical distribution and shouldn't have too many outliers; at the least, they should be well distributed. And when we're validating the model, we should avoid so-called data leakage.
Christopher Hutchins:Right.
David Finkelshteyn:It means we should strictly separate the data we use to train the model from the data we use to validate it. As data scientists, we often have an urge, when we get new data, sometimes very exciting new data sets, to retrain our model on it. That is a very dangerous path, because if we retrain the model using the new data, we need another, fresh test set to validate it again. Otherwise we may overfit the model, meaning it will show very good results on our training data and, naturally, on our old validation set, which is not purely a validation set anymore, and therefore we cannot say how well the model captures the pattern rather than the specific noise in our data set. So this is very important.
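Here is a minimal sketch of the holdout discipline David is describing, using scikit-learn with a synthetic dataset as a stand-in for real assay data:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Synthetic stand-in for real experimental data.
X, y = make_regression(n_samples=500, n_features=20, noise=5.0, random_state=0)

# Split once, up front: the held-out set is never touched during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))

# If exciting new data arrives and we retrain on it, the old validation set
# no longer counts as unseen: we need a fresh holdout, or we end up measuring
# memorized noise instead of the underlying pattern (overfitting).
```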
Christopher Hutchins:You mentioned context, and I want to stay on that topic for a minute, because I'm starting to find out that people really are not as diligent as they should be in how they use AI. And a new risk is emerging, because we've got two companies now entering the healthcare space from a consumer access standpoint. I wonder if you can talk a little bit about how important context is in a larger sense, because we're talking about people simply interacting with ChatGPT Health or with Claude Health. These are AI companies, and because the data they're working with sits in platforms that are considered administrative rather than clinical, it changes the dynamics of who's responsible for decision making. If no clinician is involved and the data sits on a platform like that, in most situations we're not even talking about liability as a possibility. So missing context, I think, is going to be something we really have to pay attention to. Talk a little bit about that from your perspective and why it's such an important part of what you're doing.
David Finkelshteyn:So, yes. The first thing I want to say here: in my opinion, it's really important that people are trying to learn more and figure out more about their own lives, especially their own health, doing their own research before an appointment rather than going in and blindly trusting someone. Even if that someone is a trained specialist, unfortunately we know that not everyone in this area is equally good; there are better specialists and not-so-good ones. So it's always good to have your own opinion and to engage your critical thinking, supported by some knowledge. That being said, of course, everyone, and I'm sure especially everyone listening to your podcast, knows that ChatGPT, Gemini, Claude, all of them hallucinate, and they do this on a daily basis, pretty often. So again, my rule of thumb here is to always try to find the ground truth, the origin, the reference. The LLMs are not hallucinating because they are stupid; they hallucinate because they don't know. But if you give them enough context, an LLM is a very powerful analytics tool. It can extract knowledge from different sources, different types of references. So what I suggest, the golden path I think, is: whenever you want to do research, do your search, but when you use an LLM, ask it to work with real sources, and of course the gold standard is the scientific article. If you want to research any type of symptom or disease, ask it to give you a set of references and a suggestion based on them. And second, right after this, go to the doctor. Gather your knowledge, then go to the doctor. Neither you nor I is trained enough; no one is, unless you're a specialist. So do your research and go to the doctor.
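David's golden path can be sketched as a prompt-construction pattern: supply the model with context and real references, and constrain it to analyze rather than invent. Everything here, the helper name, the fields, the wording, is illustrative; no specific product's API is assumed, and no model is actually called.

```python
# Hypothetical helper illustrating "give the model more context, ask for sources".
# No real API is called here; the point is what goes INTO the prompt.

def build_grounded_prompt(symptoms: str, history: list[str], documents: list[str]) -> str:
    context = "\n".join(f"- {h}" for h in history)
    sources = "\n\n".join(documents)
    return (
        "You are an analytics tool, not a diagnostician.\n"
        f"Patient-reported symptoms: {symptoms}\n"
        f"Relevant history:\n{context}\n\n"
        "Reference material (use ONLY this, cite by title):\n"
        f"{sources}\n\n"
        "Task: summarize what the references say about these symptoms, "
        "with a citation for every claim. If the references do not cover "
        "something, say so instead of guessing. End by recommending a "
        "clinician visit."
    )

prompt = build_grounded_prompt(
    symptoms="persistent headache, two weeks",
    history=["on blood-pressure medication", "slow wound healing two years ago"],
    documents=["(abstract of a peer-reviewed article would go here)"],
)
print(prompt)
```

The design choice is the one David names: the model is asked to extract and cite from supplied ground truth, not to produce unsupported answers, and the final step always points back to a clinician.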
Christopher Hutchins:That is perfectly said, and I think people really need to hear that message. We should probably put that on repeat and turn the volume up. The context issue, from my standpoint, just because of my own experience: I can't remember things that happened 25 years ago that might actually be entirely relevant to the condition I'm dealing with right now. And if I can't remember it, the model certainly is not going to have that information available. So context has got to be something people think about. And like you said, it's not hallucinating because of anything except, really, a lack of access to information, and there's a lot that goes into that. The things that concern me are symptoms that by themselves might be low risk and would set off no red flags, but with proper context might mean something completely different. If it happens to be relevant that two years ago I had a really difficult time healing a wound on my broken leg, that kind of information, in current context, might change the picture entirely versus just the symptoms you're trying to sort out now. That's just my own thinking. I know they're working on deploying a doctor's version and a patient's version, but still, the context is important. I'm on medication for my blood pressure; the model may not know that. The model might just see high blood pressure and raise alerts, and I go to the physician and take up time I don't need to take, which means the physician may not be treating someone who really needs treatment because I'm in there and I don't need to be. So there are a lot of different sides to this, but the context is really critical.
David Finkelshteyn:Again, everyone who has internet and twenty dollars in their pocket has access to a great analytics and knowledge tool. And there's a big trend now for patients to own their own data. Meaning, when you go to the doctor, they make a diagnosis, they may run some tests, and then they tell you something and hand you one paper that says what was done. You're a very organized person if one year later you still have that paper somewhere; it's a rare case, actually. But the clinic, the hospital, they always have your data. They gather it and store it because they have to. For you to own this data, to be able to hand the whole context to an LLM tool or to another specialist, this is a big trend I'm seeing now in software development in this area and in healthcare in general.
Christopher Hutchins:It's going to be an interesting year, because we started right out of the gate and we're only about halfway through January today. It's going to be interesting to see where things go from here. I want to get into some things around data integrity and data quality, because that's one of the other issues we're dealing with. You talk about the absence of data; from my standpoint, that's the scariest kind of bias we could be dealing with. When you're trying to determine where to focus, how difficult is it to validate that you actually have the right inputs and the right data sources, and that you've got sufficient data integrity and quality that you can trust it and start to move forward?
David Finkelshteyn:Yeah, it's a great question. I'd say this is actually the biggest bottleneck in AI in drug design and development, in healthcare, and in data science tasks in general. Right now data is like gold. Everyone is running around trying to get more data or generate more data, and that's another problem, synthetic data, but it's a big topic of its own. Yes, you need to select the sources of your information very carefully, curate the data, and make sure the origins of the data are trustworthy. And then you need to maintain integrity. It sounds easy, but it actually isn't, because a lot of errors happen along the way from where you find the data to when it goes into the model for training, and most of them relate to human error. The good thing is that, in theory, the path to integrity is pretty straightforward: you need your system to be fully traceable and auditable, and you have to log everything. Again, it sounds easy; in reality it's not. But this is how you ensure the integrity of your data: make sure that at every step along your data pipeline you can always look backwards, and at every step you know what happened to your data, how it was transformed, how it was cleaned and selected and curated. This is the way to preserve integrity.
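A minimal sketch of the traceability David describes: every pipeline step logs what happened to the data and fingerprints the result, so each stage can be audited backwards. The transformations and records here are toy placeholders, not a real curation pipeline.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Provenance log: at every pipeline step, record what happened to the data
# and a fingerprint of the result, so each step can be audited backwards.
audit_trail = []

def traced_step(name, func, records):
    out = func(records)
    digest = hashlib.sha256(json.dumps(out, sort_keys=True).encode()).hexdigest()[:12]
    entry = {
        "step": name,
        "when": datetime.now(timezone.utc).isoformat(),
        "rows_in": len(records),
        "rows_out": len(out),
        "fingerprint": digest,
    }
    audit_trail.append(entry)
    logging.info(json.dumps(entry))
    return out

# Toy records and transformations (placeholders for real curation logic).
raw = [{"assay": 1.2, "source": "lab_A"},
       {"assay": None, "source": "lab_B"},
       {"assay": 0.9, "source": "lab_A"}]

data = traced_step("drop_missing",
                   lambda rs: [r for r in rs if r["assay"] is not None], raw)
data = traced_step("keep_trusted_sources",
                   lambda rs: [r for r in rs if r["source"] == "lab_A"], data)
# audit_trail now answers: what happened to the data, when, and in what order.
```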
Christopher Hutchins:Right. It's such an important factor, and thinking about context again, when it comes to clinical care, facts change. When you've trained your models to look at things a certain way, you've got all kinds of historical data in there. It may actually be just fine, but the reality is things do change, and if the model isn't aware of what's changed, it's going to give you outputs that take you in a direction you don't want to go. The really important thing to remember is that healthcare is called the practice of medicine for a very specific reason: it's an evolving science, not a static one. Unfortunately, a lot of the regulatory agencies treat it like it's static, and that's how things are measured. Whatever the composite is that you're measured against to determine whether your quality is good, you're either incentivized or penalized, and it's based on a composite that no actual human being will ever look like. That's really important to remember, especially when we're talking about using LLMs as consumers. Remember, you are unique. The model doesn't know who you are, it doesn't care, and it's certainly not going to have any compassion for you. So you've got to be aware of what it really is. And the incomplete-data risk, not having enough data or not having the right data, is the same issue a clinician has now when they look at us in the exam room: they're wondering what they don't know and whether it's important. AI is not going to solve that problem. It may help accelerate the processing of information and eliminate a few more risks, maybe even speed up diagnostics, but it's still not going to give you the precision everyone wants. I would also add, to what you mentioned earlier about how people should think about this: remember that when you put your PHI into a tool like that, you're also giving up your own protections by doing so. People need to be very careful about how they use these things, and just remember, to your point: do your research, but go see your doctor. So let's talk about what I touched on there, maybe the uncomfortable part of what we're dealing with. What are the weaknesses that most often surface during audits, regulatory reviews, or legal scrutiny?
David Finkelshteyn:Yeah. If we're talking about AI in drug development and drug design, we can divide our AI tools and approaches into two types, and regulators see two kinds. The first type, and this is where we mostly work at Pivotal, and what we've been talking about during this podcast, is generating new leads, new candidates for drugs. For the regulators, the authorities, the FDA and EMA, they usually don't care much how we came up with this new molecule, because it's going to undergo all the same tests that any other novel molecule goes through, regardless of how we arrived at it. Maybe someday down the road we'll be able to be more convincing with the reasoning behind how we came up with it, but at this point it doesn't really matter; a molecule you want to test just goes through the standard path. The second type of machine learning or AI tools coming into the pharmaceutical area are more like decision makers. They're not really designing your drugs; they're analyzing information. The biggest example is an EMA-approved tool, essentially an AI assistant for histology scoring. It's an approved tool that helps make a decision, and this is a completely different thing: it essentially became part of the validation chain. And because people don't trust AI, for good reason, I think this is the only one of its kind that is essentially allowed in as a decision maker without much human oversight. We're just not there yet.
Christopher Hutchins:Right. As we're getting towards the end here, I want to take a look out into the future a little bit. We have talked about hallucinations, but what would you tell people they should be thinking about in terms of preventing them, in the pharmaceutical industry in particular? I'm sure there are a lot of folks out there wrestling with how to use it and how not to use it, but on the hallucination piece it's very interesting to understand where you think things need to go.
David Finkelshteyn:So again, it really depends on what you want to do with this. In general, the rule of thumb, if you want an LLM to hallucinate less, give it more context. The more context, the more data you will fit into the LLM, the less it will hallucinate. Essentially think about an LLM as a very powerful search and analytics tool. Just don't ask it to invent something. Ask it to analyze and work with the data and context you gave it.
Christopher Hutchins:Beautifully said. I don't know that that can be overstated. That's amazing. As you look ahead, do you think verification is going to really become a competitive advantage rather than a constraint?
David Finkelshteyn:Yes. I see it already, at different stages of drug design. The chain from idea to actual drug in the pharmacy is very long, tremendous, and many steps in that chain involve a lot of documentation, a lot of forms to fill in. This is where AI is taking over now, because as we all know, AI works well with documentation: based on your input, it fills in standardized templates and forms for the authorities, and it does this well. All sorts of tools are emerging now to take over this task, and this is great because there isn't any harm in it; they cannot spoof any evidence or push through drugs that would be toxic or harmful in any way. There is just a bureaucracy threshold, and now we have a tool to fight this bureaucracy, at least some portion of it. So this is where we see acceleration. First, we'll deal with the low-hanging fruit, which is documentation work, and maybe AI will let us reduce the effort we put into bureaucracy and red tape. That will let data scientists and biochemists and other scientists and smart people in pharma put their minds more into the essence of it, into the creation of new drugs. And getting back to our very first topic, we may be able to shorten the loop between design and verification. There are already loops where we design drugs with AI and send candidates to an automated robotic lab; the lab tests the molecules and sends the results back. This closes the loop, with less and less human involvement in it, and this is very good. A high-quality verification loop will lead us to faster drug invention and probably higher-quality medicine.
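The closing loop David describes can be sketched as a simple propose-test-filter cycle. Everything here is a stub: the candidate generator and the robotic lab assay are stand-ins for real systems, and the threshold is made up.

```python
import random

# A hedged sketch of the closed design-verification loop: an AI proposes
# candidates, an automated lab (stubbed here) returns assay results, and
# only verified candidates survive into the next stage.

def propose_candidates(n):            # stand-in for a generative design model
    return [f"mol_{random.randrange(10**6)}" for _ in range(n)]

def robotic_lab_assay(candidate):     # stand-in for physical synthesis + testing
    return random.random()            # pretend potency score in [0, 1]

verified = []
for round_num in range(1, 4):         # each round: design -> test -> feedback
    batch = propose_candidates(50)
    results = {mol: robotic_lab_assay(mol) for mol in batch}
    hits = [mol for mol, score in results.items() if score > 0.95]
    verified.extend(hits)
    print(f"round {round_num}: {len(hits)} of {len(batch)} candidates verified")

# In a real system the assay results would feed back into the design model,
# shrinking the time between an AI-proposed candidate and its physical
# verification, which is the competitive advantage David points to.
```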
Christopher Hutchins:Exciting. There are just so many things that weren't possible a decade ago that are now going to become reality. I think more groundbreaking discoveries are coming in the cancer space. It wasn't that long ago that a cancer diagnosis really came with an unfortunate timeframe because it wasn't curable, but things are evolving so quickly, and the work you're doing in the pharmaceutical space is incredibly important. David, as we're wrapping up, you've shared some things that folks will be curious about and want more insight on. If people want to reach out to you for a conversation, how do they do that? What's your preference?
David Finkelshteyn:Yeah, you can reach out mainly through my LinkedIn profile. I hope the link will be in the description. And there is a form on our website where you can ask a question or submit a request for a project or consultation.
Christopher Hutchins:Perfect. For listeners, I'll make sure that his contact information and the website are in the show notes. David, thank you so much for a really practical and grounded conversation. I think you hit some themes at a high level that folks really need to be cognizant of. Context, context, context. Big focus on that. I really appreciate it, and I'm sure the clinical folks listening out there appreciate that message too. Context is important, but when you've done your research, go see your doctor. That's it for this episode of The Signal Room. If today's conversation sparked something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.