The Brain Injury Forensics Podcast

Unmasking Research: Navigating Bias in Brain Injury Forensics

Joshua Goldenberg & Richard Batson Season 1 Episode 5


Join Dr. Richard Batson and Dr. Joshua Goldenberg as we delve into the issue of bias in brain injury forensic research, questioning the impact of industry funding on scientific studies. In this episode, we challenge preconceptions about the scientific process, exploring the integrity of research and its potential compromise due to financial influences. Dr. Batson provides expert insights, examining the debate on research funding and its effects on results. We explore recent meta-epidemiologic studies and Cochrane debates, highlighting that industry sponsorship doesn't necessarily undermine credible science. Through thorough scrutiny and rigorous controls, we emphasize the possibility of high-caliber research irrespective of funding sources. The conversation underscores the importance of critical examination and avoiding outright dismissal of scientific findings.

In the context of brain injuries and forensic research, our discussion addresses the critical role systematic reviews play in ensuring justice and accuracy. We unravel the significance of transparent, replicable research methods as safeguards against bias, emphasizing the importance of robust, standardized tools. Tune in for an eye-opening conversation that reaffirms the value of unbiased conclusions and the creation of an unassailable body of work applicable in both medical and legal contexts.

Learn more at https://braininjuryresearchsolutions.com/  or email us directly at info@braininjuryresearchsolutions.com

The information provided on the Brain Injury Research Solutions podcast is for general informational and educational purposes only and is not medical, legal, or other professional advice. You should not rely on the information provided in the Brain Injury Research Solutions podcast as a substitute for professional medical advice, diagnosis, or treatment from a licensed healthcare provider who is familiar with your individual situation, or as a substitute for legal advice from an attorney.

Introducer
00:02
Welcome to the Brain Injury Forensics Podcast presented by Brain Injury Research Solutions, a forensic services and contract research organization. Join Drs Richard Batson and Joshua Goldenberg as they interview nationally and internationally renowned experts and dive into the latest developments in brain injury forensics, applied medical research, state-of-the-art forensic methodologies, gold standard advanced neuroimaging and numerous brain injury related medical topics. 


Dr. Goldenberg, Co-host
00:37
This is just a reminder before we start that while we are doctors and have advanced training in forensic medical science and forensic epidemiology, and we will be discussing topics that involve medicine and the law, the information in this podcast is not medical, legal or other professional advice, and this podcast is provided for informational and educational purposes only. You should not rely on anything you hear as a substitute for medical care by a physician or other qualified medical professional or legal advice from a licensed attorney. Always consult with your physician or other qualified medical professional for medical advice and an attorney for legal advice. Hello and welcome to the Brain Injury Forensics Podcast with Drs Goldenberg and Batson. 

Good morning, sir. How are you doing? 


Dr. Batson, Co-host
01:25
Good morning, Dr. Goldenberg. I'm doing well. 


Dr. Goldenberg, Co-host
01:29
Yeah, where are you at right now? Can I out you?  


Dr. Batson, Co-host
01:34
No, I'm in secret hiding for research. A catch-up research sabbatical. 


Dr. Goldenberg, Co-host
01:44
Yeah, you're way too tan to be doing a research sabbatical. 


Dr. Batson, Co-host
01:51
Well, we won't talk about that. That's two hours of swimming...


Dr. Goldenberg, Co-host
01:55
It's a working vacation. You can trust me, ladies and gentlemen, he is a workaholic like myself. 


Dr. Batson, Co-host
02:03
Two hours of swimming at the end of a 10 to 12 hour workday probably. 


Dr. Goldenberg, Co-host
02:06
Yeah, I think that's allowed. I will allow it, counselor. 


Dr. Batson, Co-host
02:09
But anyway, let's just say, we get up very early in this location. 


Dr. Goldenberg, Co-host
02:17
Yeah. Well, it's snowing over here, and I'm actually looking up electric mats so that my neighbors will not slip and break their necks on my walkway out there. Anyway, it's cold and snowy, and I'm a little upset with you, sir, for having so much sunshine, but we'll let it pass. So what are we going to chat about today? 


Dr. Batson, Co-host
02:38
I think we're going to talk about those electric door mats as personal injury mitigation strategies, for those in climates where there could be premises liability, slip and fall cases. That should be the topic today. Now, I think today we're going to talk about bias in research. Something came up the other day in a conversation with an attorney I was consulting with, and he said, well, isn't it the case that pretty much, you know, the way that you ask a question in research, you can just get whatever answer you want? And I said, no, that's not the case. And he sort of huffed and puffed as if he knew better, and I thought, well, this is a good opportunity to talk about this, because there's this concept of scientific nihilism, and I think we've moved into sort of a post-truth era where people no longer trust science or scientists, for either legitimate reasons or misplaced skepticism. Some of it's legitimate for sure, in light of some of the events that have transpired, and some of it is misplaced. 


03:56
I think what got me interested years ago was a book that I read called Bad Science, and there's another one called Bad Pharma. These were written by Ben Goldacre, who's an epidemiologist out of the UK, showing how it's absolutely true that you can manipulate science in a way where you're more likely to get the findings that you want. So that is a reality that we have to deal with in the research community, and it's something we have to talk about. 


04:31
But part of what I want to do today is interview you, and what I want to talk about is: okay, how do we mitigate bias, or the positioning of the science in a way where we get the answers that we want, so that really we're moving onto a platform of greater scientific clarity, truth and fairness, so to speak. 


04:55
So that's going to be the topic. And then I also want to ask you a little bit about this concept of, I think, evidence nihilism, which is sort of the extreme proposition that, well, because science can be manipulated, it must be manipulated under all circumstances, and therefore it's no longer trustworthy, and why even bother? And I think we need to address that. The way I look at it is we're somewhere in between: science is imperfect, but it can most certainly help us get closer to the truth than we could without it as a tool or an instrument. So before we move into bias in research and how we mitigate it, tell me a little bit about this concept of evidence nihilism as you understand it. 


Dr. Goldenberg, Co-host
05:43
Yeah, there is a book with that name. I first heard about it from a professor of mine, Dr. Brignal, and I think it's actually a book by that title, which I read through. The basic idea... and this is a topic that is very close to my heart. I spend a very large amount of my time either assessing articles for risk of bias, and we could talk about the difference between bias and risk of bias in the future, or teaching doctors how to read articles and identify bias. So it is a major part of what I do, and it's a topic that I'm absolutely fascinated with. 


06:25
What I would say is, what is known is that the vast majority of the medical literature out there right now is low-level evidence. What I mean by that is that there are many factors that go into the certainty, or the confidence, we can have in a research finding, one of which is the risk of bias of the study. There's other things, but one of them is the risk of bias. But overall, the medical research literature is pretty appallingly sparse. Right? We're missing stuff, and the stuff that's out there isn't perfect. And so, as we learned to identify and uncover and dig down and find all these risks of bias, and look with a fine-tooth critical comb at these research articles, it was very easy to say: well, you know what, what we're really seeing is that most of the literature is not ideal and it needs a lot of work. 


07:22
Now here's the next step. You can take that and say: well then, I just can't trust anything, it's all garbage, I'm gonna throw it all out. That would be an evidence nihilist perspective. If it's as bad as you're saying, Josh, then let's just throw the baby out with the bathwater and move on. That's evidence nihilism. 


07:42
Now, I would argue, as a clinician, you can't do that. You still have patients in front of you every day, and you need to make decisions based on the best available evidence. And if you look at the definitions of evidence-based medicine, that's what they're talking about. They're not talking about the best evidence; they talk about the best available evidence that's relevant to the patient in front of you. 


08:02
And I would argue that is applicable in the legal environment, in the forensics environment, as well. We have real people, real clients, real attorneys, real judges, real juries who have to answer questions and make determinations, and they can't sit there and pontificate about, well, it isn't perfect, therefore... right? We can't make the perfect the enemy of the good. So what we do is we look at the evidence in front of us, we critically evaluate it, and we make a call based on the best available evidence. I think that is a pragmatic approach, with eyes open about research. I would not throw the baby out with the bathwater, but at the same time I very much obsess over the risk of bias in the medical literature. 


Dr. Batson, Co-host
08:44
So you probably remember a time when cars didn't have headrests or seatbelts. I think all of us have seen those cars if you've ever been to a car show. 


Dr. Goldenberg, Co-host
08:53
I've seen pictures of those cars. I'm not quite at that age. 


Dr. Batson, Co-host
08:57
Yeah, you've seen pictures, or if you've been to some of the car shows, there's no seatbelts and there's no headrests, and I look at them from a personal injury standpoint. 


09:06
I'm like I would never get in that car. 


09:08
But yeah, at the time it was obviously a better means of transportation, a faster means to get from point A to point B than, say, a bicycle or a horse and buggy, and probably more comfortable than riding on a horse for longer distances, in terms of level of comfort and not being exposed to the elements. 


09:29
So I think what you're saying is sort of akin to this: if we look back at how cars were made compared to now, we wouldn't say that, if we'd lived in that time, we wouldn't have ridden in or driven a car to get where we needed to go more efficiently. You see where I'm going with that. So it seems to me like what you're saying is science is an iterative process, and we use what we have, what's best available to us, rather than looking back and saying, well, that car is not viable as a means of transportation. That's actually a wrong proposition. It was viable as a means of transportation; it just wasn't as safe or efficient or fast as current cars. Is that an accurate understanding of that analogy? 


Dr. Goldenberg, Co-host
10:15
Yes. I will also posit that, indeed, cars are usually better than horses when you need to get around, right, especially since I got thrown from a horse the first time I ever tried it, trying to impress an old girlfriend. But that's another story. So, yes, I think I generally agree with that analogy. Science is not perfect. I like what you said: science is iterative. I think that's true. I think we are getting better over time, and I think we have to always keep in front of our minds: what is the best available evidence to answer this question? That's what it comes down to. 


Dr. Batson, Co-host
10:49
Okay, well, let's jump in right away and talk about, you know, the other idea that came up, which is funding sources. I want to kind of vet that. There's different ways to get funding for research: it can be privately funded, it could be an NIH grant, et cetera. Is there a linear relationship, if you will, between funding and the expected outcome of results? Meaning, just because something was funded by source A, and source A had an agenda, does that mean automatically that the results of that research or that study are unreliable and shouldn't be trusted? And vice versa: if it was, let's say, nonprofit funded or NIH funded, does that automatically mean that we have good science and it's trustworthy? Can funding be a reliable proxy for the trustworthiness of science? 


Dr. Goldenberg, Co-host
11:49
This is such an important question. I actually just gave a talk on this topic a little while ago to an industry group, so I'm pretty much up to date on the literature on this, and I find it absolutely fascinating. So you'll have to interrupt me if I just keep on yammering at you on this one. But, so: yes and no. It's actually been a very complicated debate that's been going on in the research methods community. For anyone who's interested, who's as nerdy as we are: in 2008 there was a debate between two researchers at Cochrane, and they published the debate as two competing editorials on this idea of, when we do the Cochrane risk of bias assessment, which is this sort of standardized, high-level way of looking at the risk of bias of a study, should we have a standalone domain that's just industry sponsorship, where, if it's checked, it's a high risk of bias study right away, or is it more complicated than that? And it was a very interesting debate that goes on to this day. But what Cochrane decided, and I think rightfully so, is that no, not necessarily; you have to look at it with more of a critical eye. Now, what do we know from the literature? If you look at what are called meta-epidemiologic studies, which is a mouthful but basically means you're taking this high-level view of all of the medical literature and looking for how bias plays out across all sorts of different conditions, what you see is that industry sponsorship in general is associated with rosier pictures in results. Right, the results look rosier. In other words, there is empiric evidence, in general, across all of medicine, of bias or a risk of bias, and so we should probably think about industry sponsorship and funding sources as a potential risk of bias. It doesn't mean it's biased; it means there's a risk there, sort of like a red flag: take a closer look, be even more critical, type of thing. Now, and this is what's so fascinating, this has been known for a while. The argument was: okay, yes, this is true, we know this from the literature, but we actually have to look closer, because how that bias plays out doesn't just magically occur. Right, there has to be a mechanism; it's not just a risk of bias. So maybe the design is off, or the question is asked in a particular way, yada, yada, yada. So you have to do more critical analysis. Now, this is the last thing I'm going to say about this. 


14:14
Just this year, or I guess last year, 2023, there was another fascinating paper out of the Cochrane camp that came out looking at this question more recently, and the expectation was that they were going to find more of what they saw before, which is that industry sponsorship in and of itself is associated with a rosier picture. They actually found a more nuanced answer, which I think is fascinating. So industry sponsorship per se is not necessarily associated with a rosier picture. However, industry sponsorship where the sponsor has control over the design is. So what does that mean? If you have these really nicely done pharma studies where big pharma pays, you know, $10 million to do a study, they write a grant to Harvard and they walk away. They're not at the table for methods or design, they're not deciding when it's going to get published, they're not writing it, they're not ghost-authoring it, all the stuff that happened years ago; they're not doing that. They're writing a check and walking away. That is not necessarily associated with bias. We're not seeing evidence of that. 


15:15
That separation seems to be good, and that makes sense intellectually, right, because it's not just this magic thing that happens as soon as you sign a check. There has to be a mechanism, and if you have the right safeguards in place, you can protect against that mechanism. So I thought that was absolutely fascinating, and it made intellectual sense to me. So I think that's a long-winded way to get around to saying this: 


15:35
The way I look at industry sponsorship, or sponsorship in general, is: it is important, and you need to look at it for everything. You need to see where the incentives align, and if you see that there are incentives in one direction or another, you should take a close look. It's like a waving flag saying take a closer look at me, take a closer look at me. So you should take a closer look and make sure they cross all their T's and dot all their I's, and if they do, awesome. If not, you dig a little bit deeper. And that's the way I think you should look at it. It's a risk, but it can be done well. At least that's what the latest empiric evidence suggests, and again, it makes sense intellectually. 


Introducer
16:19
If you'd like to learn more about our unique approach to brain injury forensics, email us directly at info at braininjuryresearchsolutions.com, or learn more on our website, www.braininjuryresearchsolutions.com. There you can sign up for webinars, explore featured papers and learn about the team. Enjoy the podcast. Don't forget to rate us and review us on Apple Podcast to help spread the word. 


Dr. Batson, Co-host
17:00
So I just want to understand, then. So Cochrane, which, you know, we can consider the gold standard in evidence synthesis at this point in time: industry funding is not considered to be a standalone factor for influencing the results of research. You have to look deeper; it's more nuanced. Is that correct? That's correct. 


Dr. Goldenberg, Co-host
17:23
Perfect, that is correct. It's not part of the risk of bias assessment. Okay. 


Dr. Batson, Co-host
17:25
Yes, it's not part of the risk of bias assessment. Okay, that's very, very important, I think, for us to understand, and I appreciate you clarifying that. So let's jump right into how we mitigate this. Let's say we've got two different scenarios: a grant versus industry funding. Let's take the industry-funded scenario. How are we going to design a study, an evidence synthesis project, which is what we do? How are we going to mitigate bias and make sure that the results are reliable? What's the process? Can you walk us through that? 


Dr. Goldenberg, Co-host
18:02
Sure. So for evidence synthesis, most of the biases that can creep in are going to be with how you set up the question, how you decide you're going to analyze the results, and how careful, or systematic, you're going to be in your search. That's where most of the problems will usually arise. There's other sorts of bias, but those are probably the main ones. Most of those are addressed with a priori registration. In other words, before you look at the results of the papers... right, so in evidence synthesis, you go out and you systematically find all the research articles you can on your question, in this case a forensics question as opposed to a clinical question, and you find everything you can and you analyze it, et cetera, et cetera, and then you see what the results are. Well, before you know what those papers say, you write a protocol, and you publish that protocol a priori so that anyone in the world can see it, and it's time-stamped, right? So with a lot of these registries where you post these protocols, and this is not just true for evidence synthesis, it's true for all of research, you can go back and look at historical changes that have been made to the protocol, and kind of time that and see what's been changed. Does it look legit? Does it not look legit? So the basic idea is that you put out there for the world to see what you said you were interested in, so you can't change your mind after you see the results, right? So that's kind of the idea. So I think a priori registration helps a lot. 


19:35
Now, as far as how you started this conversation, with saying, well, it depends on how you ask the question: I don't know that that's super relevant in a forensics situation, because you have a specific forensics question that's presenting itself because of the case in front of you, right? Or if you have a patient in front of you in the clinical world, you have a very specific clinical question. So in theory, the research question should be lined up specifically with the specific question that you have. So I'm not sure that that is a major concern in this case. It'd be more about: are you being systematic? Are you not cherry picking? Are you analyzing everything according to set standards, like the Cochrane risk of bias tool, for example, that everyone follows and everyone agrees with, in doing your analysis? So I would say that protects against a lot, but you still need to look close. 
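To make the a priori registration idea concrete, here is a minimal sketch of what a registered protocol record might capture, written in Python. The field names, example values, and frozen-record structure are hypothetical illustrations, not the schema of any real registry such as PROSPERO.

```python
# Hypothetical sketch of an a priori protocol record: the question, search
# plan, and analysis plan are committed and time-stamped BEFORE any results
# are seen, so later deviations are visible to anyone auditing the registry.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the registered record itself can't be edited
class ReviewProtocol:
    question: str
    databases: tuple[str, ...]
    inclusion_criteria: tuple[str, ...]
    analysis_plan: str
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

protocol = ReviewProtocol(
    question="Prevalence of convergence insufficiency after mild TBI",
    databases=("PubMed", "Embase", "Cochrane CENTRAL"),
    inclusion_criteria=("adults", "diagnosed mTBI", "validated CI measure"),
    analysis_plan="Random-effects meta-analysis of pooled prevalence",
)
print(protocol.registered_at)  # the public timestamp reviewers can audit
```

The design choice mirrors the point made above: because the record is public and time-stamped, you can't quietly change your question or analysis plan after seeing the results.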


Dr. Batson, Co-host
20:25
Now tell me a little bit more, or tell the audience a little bit more, about the Cochrane risk of bias tool, without doing a complete lecture on this, which I know you do, and you teach the full version. Can you give us a breakdown of that? Can you go over that for us? 


Dr. Goldenberg, Co-host
20:41
Yeah. This is just one tool; they have many tools for risk of bias. The classic one is for randomized controlled trials, and there's other tools out there, from other institutions and from Cochrane, for different types of study designs. But basically the idea is that there is a set number of things that you need to go through and evaluate in a research study to say: is this okay, or does this set us up for a higher risk of bias? Again, we never know if it's biased; we just know if there's a risk of bias, and you look overall at the result. So this is a standardized way, with formalized ways of approaching it, where you go through and say: okay, do we think this study overall is trustworthy or not? Can we trust the results or not? And that goes into your overall assessment of the evidence at the end. 
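As a rough illustration of how such a tool is structured, here is a simplified sketch modeled on the five domains of Cochrane's RoB 2 tool for randomized trials. The roll-up rule shown is a simplification of the published algorithm, and the code is illustrative, not the official tool.

```python
# Simplified sketch of a RoB 2-style assessment: five signalling domains,
# each judged "Low" / "Some concerns" / "High", rolled up into an overall
# judgment. The roll-up rule here is a simplification of Cochrane's algorithm.
DOMAINS = [
    "Randomization process",
    "Deviations from intended interventions",
    "Missing outcome data",
    "Measurement of the outcome",
    "Selection of the reported result",
]

def overall_risk_of_bias(judgments: dict[str, str]) -> str:
    levels = [judgments[d] for d in DOMAINS]
    if any(level == "High" for level in levels):
        return "High"          # any high-risk domain taints the whole study
    if all(level == "Low" for level in levels):
        return "Low"           # low risk only if every domain is low risk
    return "Some concerns"     # anything in between

example = {d: "Low" for d in DOMAINS}
example["Selection of the reported result"] = "Some concerns"
print(overall_risk_of_bias(example))  # -> "Some concerns"
```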


Dr. Batson, Co-host
21:29
Excellent, gotcha. Okay, perfect. Anything else along those lines that you feel would be helpful to cover within the Cochrane risk of bias tool? 


Dr. Goldenberg, Co-host
21:42
No, I don't think so. I think the main take-home is this: I'm an optimist about the medical literature, and a lot of that comes from the fact that you have organizations and scientists that are putting up standardized ways of doing it, so that it gets harder and harder to deviate from that without raising alarm bells. I do believe that there's still a lot of garbage science out there, but I also think it's getting better, and I think that we now have tools to be able to identify issues when they arise, but also mitigate against some of the potential risks as well. 


Dr. Batson, Co-host
22:20
Okay, let's bring this full circle back into the type of work that we're often asked to do inside of the forensic setting, which we've covered in prior podcasts, but it's probably worth a review. 


22:33
We're often asked to look at the medical literature in a broad way, using gold standard methods such as systematic review and meta-analysis, to try to understand the prevalence of a particular sequela, a medical result of a brain injury. One of the ways that we do that is to look at prevalence, and we've talked about prevalence in prior podcasts. So when we're looking at prevalence data, for example in our recent paper that we're working on on convergence insufficiency, to try to say how common this disorder of binocular vision is after a concussion, a mild traumatic brain injury, we then compare that to baseline risk in a population that hasn't been exposed, to try to get an understanding of the relative risk, or comparative risk. So when our goal is to explore the literature to establish prevalence, which is a general causation exercise, how much risk of bias is there, in the sense of some of the issues that we brought up, within that specific type of systematic review? 
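To make the prevalence comparison just described concrete: the relative risk is the prevalence in the exposed group divided by the baseline prevalence in the unexposed group. The numbers below are made up for illustration and are not findings from the paper discussed.

```python
# Toy relative-risk calculation with hypothetical numbers (not study results).
prevalence_exposed = 0.30    # pooled prevalence after mTBI (illustrative)
prevalence_baseline = 0.05   # baseline prevalence, unexposed (illustrative)

relative_risk = prevalence_exposed / prevalence_baseline
print(f"Relative risk: {relative_risk:.1f}")
# -> 6.0: the condition would be about six times as common after exposure
```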


Dr. Goldenberg, Co-host
23:50
Yeah. I think what's nice, whether it's a prevalence systematic review or an intervention systematic review or whatever it is: the hypothesis that you and I have is that if we take these standardized tools that have been built for researchers and bring them to a forensic setting, we're going to have a better approximation of the truth, right, with a less biased view. And one of the biggest things that we see anecdotally, and you and I talk about this a lot, is this cherry picking from experts about the research literature. Right? If you Google search, Google Scholar, even PubMed, you can find an article that meets your preconceived notions. The whole point of a systematic review is to avoid that; it is literally designed to avoid cherry picking. So if you systematically review the literature... our default is three databases plus grey literature, so you're doing multiple database searches, and you have standardized search terms. We have reviews, like the one we're working on now with seizures, where we had 21,000 citation hits that we're going through in duplicate, having two people independently go through all of this. The idea is that you are casting an extremely wide net and you're catching all the fish that meet the criteria, because you don't want to just pick. 


25:02
Now this analogy is falling apart. You don't want to just pick the fish that agree with your position; you don't want to cherry pick. So the idea is that, basically, you get everything that's out there, then critically look at it, can we trust it, can we not, and then you synthesize that data. The fact that you're systematically going through it, and then transparently saying how you did that search so anyone else could do the same thing and come up with the same result, that is basically how we define a systematic review, and it is built around avoiding this cherry picking and, in the medical world, avoiding what we call narrative reviews, which we're really trying to move away from, where you have some expert with a preconceived notion basically writing whatever they want. 


25:44
The idea here is that we're systematic about it. So I would say, by definition, using systematic review processes, which we do to answer these questions, we're going to minimize the chances of cherry picking. That would be number one. As far as how we select articles, how we evaluate them, how we synthesize them: we use standard methodology from researchers, from scientists, and then we say what we're going to do ahead of time and put it out in a protocol ahead of time as well. So that's the registration we were talking about earlier. These are some of the things that we like to do to protect against some of these issues. But at core, I think we're the only people doing systematic reviews for situations like this; mostly it's content experts writing their opinion position with a couple of citations. So probably the biggest issue is the cherry picking, which I think systematic review really is the best bulwark against. 
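A minimal sketch of two of the mechanical safeguards just described: merging hits from multiple databases so duplicates collapse, and screening in duplicate, where only records that two independent reviewers both include go forward and disagreements are flagged for resolution. The record IDs and reviewer decisions are hypothetical.

```python
# Illustrative sketch: multi-database search plus dual independent screening.
# Record IDs and decisions are made up for the example.
hits_pubmed   = {"smith2020", "lee2021", "chen2019"}
hits_embase   = {"lee2021", "garcia2022"}
hits_cochrane = {"chen2019", "patel2023"}

all_hits = hits_pubmed | hits_embase | hits_cochrane  # set union de-duplicates

reviewer_a = {"lee2021", "chen2019", "patel2023"}      # independent screen 1
reviewer_b = {"lee2021", "patel2023", "garcia2022"}    # independent screen 2

included  = reviewer_a & reviewer_b                    # both agree: include
conflicts = (reviewer_a ^ reviewer_b) & all_hits       # disagreements to resolve

print("Included:", sorted(included))    # ['lee2021', 'patel2023']
print("Conflicts:", sorted(conflicts))  # ['chen2019', 'garcia2022']
```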


Dr. Batson, Co-host
26:40
Okay, so I'm gonna do two things here. I'm gonna use a real-world analogy to hopefully salvage your reputation with regards to the fish analogy, because I think there actually is a good fish analogy. You've hinted that I'm in a tropical area, and if I wasn't snorkeling on a daily basis, I probably wouldn't be able to help you with that. 


27:01
But I think there's an appropriate analogy, which is, you know, there's an area where I swim and snorkel almost every day, and it's a contained area. Right, it's inside of a reef. I don't know how many square feet it is, but let's call it about half an acre of ocean. And when you're snorkeling in there, you see fish of different sizes, right? Some of them are really small, and some of them are, you know, up to... we saw one the other day that was probably three feet. I think it was an eel. 


27:33
Technically that wouldn't be a fish, but my point is, if you were a biologist and your goal was to sample all of the fish in that finite area (the finite area would be a topic related to brain injury, right? We're asking a question about epilepsy or binocular vision or psychosis or whatever; it's that pool of fish in a finite area that you're looking at), now you've got to measure, you've got to sample enough fish within that area so that you can say: this is the average size of the fish in this area, this is the range, this is the upper limit, this is the lower limit, here's the standard deviation. And so what you're saying is, what we're actually trying to do with the work that we do, which we believe is important and not something we should just throw out based on scientific nihilism, is come up with an answer about what the average size of a fish is in that finite area. Whereas what's been going on in forensics for the last 30 years, as we see it, is we've got people that are retained that do have conflicts of interest, and their party wants to say that the average size of a fish is five inches. Right, it's a very small fish. 


28:45
And so they swim into that pool and they grab two small fish, one that's four inches and one that's six, and then they come back and they say, hey, the average size is five inches. And what we found is, when we measure, you know, six thousand fish, the average size is actually closer to a foot. I think that's a fair analogy to talk about sampling. 
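The sampling point can be shown with a toy computation: the mean of the two cherry-picked fish versus the mean of a wide random sample from a hypothetical population. All numbers are made up to mirror the story above.

```python
# Toy illustration of the sampling analogy: cherry-picked pair vs. wide net.
import random
from statistics import mean

random.seed(0)
# Hypothetical population of 6,000 fish lengths, in inches.
population = [random.gauss(12.0, 3.0) for _ in range(6000)]

cherry_picked = [4.0, 6.0]                    # the two small fish from the story
wide_net = random.sample(population, 500)     # systematic random sample

print(f"Cherry-picked mean: {mean(cherry_picked):.1f} in")  # 5.0
print(f"Wide-net mean:      {mean(wide_net):.1f} in")       # ~12, closer to a foot
```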


Dr. Goldenberg, Co-host
29:07
I appreciate you saving that. 


Dr. Batson, Co-host
29:10
Yeah, yeah. And while fish can't talk, they can be measured, and so I think that's... 


Dr. Goldenberg, Co-host
29:17
I think they generally don't opine, yeah. 


Dr. Batson, Co-host
29:20
And they generally don't opine. But I think what you're talking about is a really, really easy example, and again, that's a story of... I'm going to embarrass myself now... cherry picking of fish, right? 


Dr. Goldenberg, Co-host
29:34
We've got to stop. This is terrible. 


Dr. Batson, Co-host
29:36
Um, sure, sure, we've got to stop. It's terrible, but we're just still falling down. 


29:41
Yeah, but the idea of cherry picking came from cherry trees, right? When you go out and you pick cherries... and it's funny, because I grew up with a grandmother who had a cherry tree in her yard, and we would pick cherries in the summer. Some of them are rotten, right, so you pull them off, or they're almost on their way to being rotten, and some of them are too hard to eat because they haven't ripened yet. 


30:02
And so the idea of cherry picking is, if you're trying to understand the overall quality of cherries on a particular tree, and you pick just one or two rotten cherries, you would assume all these cherries are rotten, and vice versa. So the idea of cherry picking comes from people who were, you know, out there interacting with cherry trees, and it's a very, very appropriate analogy, having actually lived that myself. So I appreciate that. So, coming back to the basic idea: not cherry picking, and taking a broad view of the scientific literature, is going to bring us closer to the truth overall than if we did not go through that arduous and laborious process. 


Dr. Goldenberg, Co-host
30:45
Yeah, I mean, all of science is essentially trying to approximate the truth, right? That's the purpose of science. We don't have a god's-eye view of what things are. We can run experiments, we can gather things, and we can approximate the truth, and essentially the idea is that we have standardized tools, we think, to better approximate that truth, and that's what we're trying to apply in a forensic setting. 


Dr. Batson, Co-host
31:09
So this is great, and I appreciate that. I know it's a fairly sophisticated topic, and we're having to use analogies to try to kind of bring it down to earth. 


Dr. Goldenberg, Co-host
31:18
We're having to butcher analogies, to be clear. Great. 


Dr. Batson, Co-host
31:23
Well, I don't have a whole lot of other questions. I know you're going to be doing some more lectures in the near future on risk of bias, and if our audience is interested in that, there's a place where they can contact us, and the information will be there. And if you're interested in getting more training, whether you're a layperson, a doctor, or an attorney who's interested in really understanding evidence synthesis and how it can be applied in the forensic setting to come closer to the truth, that's something Dr. Goldenberg and I are both eager to share. Other than that, I don't have any more questions. 


Dr. Goldenberg, Co-host
32:06
Great. Well, yeah, I can talk about bias forever, but the basic take-home is just that we're trying to take high-level scientific approaches to better approximate the truth, and you and I have both been kind of shocked at what we've seen out there. Our hypothesis is that this is a better way to do that, and I think it's playing out. Evidence synthesis is essential, in my opinion, for getting a broad look at things, and hopefully that becomes the norm moving forward. 


Dr. Batson, Co-host
32:38
Sounds good. Well, thanks again, Dr. Goldenberg, for sharing your background; your experience in this area was invaluable. We'll see everybody next time on the Brain Injury Forensics Podcast. Thanks for listening. 


Dr. Goldenberg, Co-host
32:51
All right, take care, thank you. 


Introducer
32:58
Thank you for listening to the Brain Injury Forensics Podcast with Doctors Batson and Goldenberg, brought to you by Brain Injury Research Solutions. If you'd like to learn more about our unique approach to brain injury forensics, email us directly at info at braininjuryresearchsolutions.com, or learn more on our website, www.braininjuryresearchsolutions.com. There you can sign up for webinars, explore featured papers and learn about the team. Enjoy the podcast. Don't forget to rate us and review us on Apple Podcasts to help spread the word.