The Trip Lab

#22 – Is Modern Medicine Still Evidence-Based? Reclaiming Evidence, Restoring Clinical Wisdom

Mary Ella Wood, DO Season 2 Episode 22


Is modern medicine still evidence-based, or have we quietly mistaken rigor for certainty?

Evidence-based medicine is essential. It’s why we save lives, advance care, and trust modern healthcare. But as medicine has become more specialized and disease more complex, something subtle has happened. Rigor has increasingly turned into reductionism, and evidence is often applied in ways that don’t fully match the realities of clinical practice or patients’ lived experiences.

In this episode of The Trip Lab, I take a careful look at what we mean when we say “evidence-based medicine.” We explore the difference between statistical significance and clinical significance, how guidelines are created and why they are evidence-informed rather than infallible, and why many patients feel unwell despite having “normal” labs.

This conversation also examines how modern research methods struggle to capture complexity, particularly in chronic, system-level disease. We look at where reductionism has helped medicine advance, where it now falls short, and why ancient healing systems and emerging fields like systems biology, functional medicine, and precision medicine are pointing us toward a more integrated future.

This episode is not a rejection of evidence. It’s an invitation to reclaim it. To restore clinical wisdom alongside data, and to practice medicine with both rigor and curiosity.

In this episode, we cover:

  • What “evidence-based medicine” actually means and how it’s evolved
  • Statistical significance vs. clinical significance
  • The strengths and limitations of medical guidelines
  • Why reductionist models don’t fully explain chronic disease
  • Why patients can feel unwell even when labs are “normal”
  • How medicine might evolve to better study complexity
  • Why medicine is both a science and an art

The podcast name, The Trip Lab, nods to psychedelics, but a “trip,” psychedelic or otherwise, is ultimately an exploration. A willingness to step outside familiar frameworks, question what we think we know, and notice connections that weren’t obvious before.

If you’ve ever felt tension between what the data says, what the guidelines allow, and what the patient in front of you actually needs... or if you are a patient who has been failed by modern medicine, this episode is for you.

Evidence-based medicine is essential.

It is the reason medicine has advanced as far as it has. It is why we live longer, why we survive diseases that would have been fatal just a century ago, and why modern healthcare has been able to reduce suffering at a scale that was once unimaginable.

Some of the most powerful medical advances weren’t even high-tech—things like handwashing, clean drinking water, and sanitation dramatically altered human survival. And alongside those foundational public-health innovations, evidence-based research gave us life-saving surgeries, antibiotics, cardiovascular interventions, and medications that have unquestionably changed outcomes.

So let me be very clear at the start of this episode: this is not a rejection of evidence-based medicine. It is a scrutiny of what evidence-based medicine actually is right now, and how we can reclaim evidence in a way that matches the pace of our technological advancements. 

What I’ve witnessed over the past decade—something that began long before I graduated medical school—is a quiet but important shift. Evidence-based medicine has increasingly become reductionist medicine.

I first began to seriously question this during my surgical training (I actually started out in general surgery residency before pivoting to my current career in integrative medicine). On ICU (critical care) rotations, we would scrutinize the literature in painstaking detail. Decisions like normal saline versus lactated Ringer’s were debated intensely, with statistical analyses parsed down to fractional differences. One option would be chosen because it was slightly better in the data.

And to be clear—that process is not wrong. That rigor is part of what keeps patients safe. But watching this play out made me step back and ask a deeper question: What do we actually mean when we say something is “evidence-based”? And how much clinical meaning are we assigning to very small statistical differences?

That question stayed with me. And then COVID happened.

Suddenly, we were facing a novel disease with no established evidence base. There were no guidelines, no long-term outcome data, no randomized trials to lean on. Clinicians and patients alike were reaching for anything that felt potentially helpful—often high-dose vitamins, supplements, and lifestyle-based interventions. We even experimented with high-dose IV vitamins in the COVID ICUs, something I never thought I would be doing in general surgery residency.

This got me curious. I started reading. And what I found surprised me. Many integrative therapies had comparable levels of evidence to interventions we routinely use in conventional medicine—sometimes even more.

Take blood pressure as an example. On average, many antihypertensive medications lower systolic blood pressure by about seven to fifteen points. That’s meaningful. But so can a Mediterranean lifestyle. So can interventions like regular physical activity, stress reduction, and even certain supplements like CoQ10. AND these lifestyle or integrative options actually get to the root cause and change the SYSTEM that has been damaged, whereas our blood pressure drugs just lower the blood pressure. Now, lowering the blood pressure alone does have its benefits in preventing kidney disease and earlier heart attacks. But if we’re not TRULY treating the root cause of why high blood pressure manifested, something else, or something worse, will get you, because the SYSTEM was never fixed. Just a blood pressure number was.

So why don’t we talk about these lifestyle or integrative therapies in modern medicine spaces? Why don’t we learn about them in medical school? Why did I get just ONE hour of nutrition training across the 20,000+ hours it takes to get through medical school and residency? It’s NOT because these therapies aren’t effective. It’s money. There was no financial incentive to rigorously study lifestyle interventions at scale. Drugs generate profit. Lifestyle does not. And so the evidence base itself becomes skewed, not because something doesn’t work, but because it isn’t lucrative to study. I dive deeper into this history in episode #15, the intro to my deep dive series, called “How Big Pharma Led Us to Call Eastern Medicine Alternative,” if you want more of a historical lens.

But in this episode, we are going to focus on NOW. 

We’re going to talk about what evidence-based medicine was meant to be, how it slowly became overly reductionist, and why that matters—especially in chronic disease. We’ll examine how guidelines are created, how clinical significance gets lost in statistical language, and why patients so often feel unwell even when their labs are “normal.”

And then we’re going to rebuild. We’ll talk about what a more integrated, system-aware, and clinically wise version of evidence-based medicine actually looks like—one that patients are asking for, that many physicians feel but don’t always have language for, and that I believe represents the future of medicine.

Before we go any further, I want to briefly acknowledge the name of this podcast—The Trip Lab.

While the name obviously nods to psychedelic medicine, which is a major focus of this show, a “trip” also means something broader. It’s an exploration. A willingness to step outside of familiar frameworks and examine what happens when we look more closely, more curiously, and more honestly.

That’s what I intend to do with this podcast… and my career. To explore the frontiers (of medicine and elsewhere), to question what we assume we already know, and to better understand how systems connect rather than treating them in isolation. Life is a trip, and meaning is made through exploration, reflection and intentional change. 

But back to this episode. One core idea that should lead us to examine evidence with more scrutiny is the difference between statistical significance and clinical significance.

We use these terms all the time in medicine, but I don’t think we pause often enough to ask what they actually mean in practice.

Statistical significance tells us whether an observed effect is unlikely to be due to chance. It answers a very narrow question: Is this result mathematically real?

Clinical significance asks something very different: Does this meaningfully change a patient’s life?

Those two things are not the same—and yet, in modern medicine, we often treat them as if they are interchangeable.

A result can be statistically significant and still be clinically trivial. With a large enough sample size, even very small differences can reach statistical significance. And in isolation, those numbers can look impressive. But when you zoom out and ask whether that difference actually changes how a patient feels, functions, or survives, the answer is often much less clear.
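To make this concrete, here’s a small Python sketch. The numbers are made up purely for illustration (a 0.5 mmHg blood-pressure difference, an SD of 15 mmHg, and 100,000 patients per arm are assumptions, not figures from any trial), but the arithmetic shows how a huge sample turns a trivial effect into a tiny p-value:

```python
import math

# Illustrative, made-up numbers: a 0.5 mmHg difference in systolic blood
# pressure between two arms, SD 15 mmHg, 100,000 patients per arm.
diff = 0.5      # observed mean difference between arms, in mmHg
sd = 15.0       # standard deviation within each arm, in mmHg
n = 100_000     # patients per arm

se = sd * math.sqrt(2 / n)        # standard error of the difference in means
z = diff / se                     # two-sample z statistic
p = math.erfc(z / math.sqrt(2))   # two-sided p-value under a normal model
d = diff / sd                     # standardized effect size (Cohen's d)

print(f"z = {z:.1f}, two-sided p = {p:.1e}, Cohen's d = {d:.3f}")
# The p-value is vanishingly small ("statistically significant"),
# yet a 0.5 mmHg shift (d of about 0.03) changes nothing for the patient.
```

The p-value here answers only “is this difference mathematically real?”; the effect size is what hints at whether anyone would ever feel it.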

This is where I first started to feel uneasy during my training. In critical care and inpatient medicine, we would agonize over studies comparing interventions that differed by fractions of a percent. One fluid showed a slightly lower mortality signal than another. One intervention nudged an outcome just enough to cross a p-value threshold.

Again, this matters. Precision matters. But it also made me ask: Are we mistaking statistical cleanliness for clinical impact?

And beyond that—are we even asking the right question?

Many of these studies are designed to isolate a single variable and measure a single outcome. But patients don’t live in single variables. They live in systems. A modest change in one lab value or one endpoint may not translate into a meaningful shift in the overall trajectory of health.

This becomes especially important when we talk about relative risk versus absolute risk—a distinction that is rarely explained to patients and, frankly, not always sufficiently interrogated by clinicians.

Relative risk can make effects look dramatic. A therapy that reduces risk by 20 or 30 percent sounds powerful. But when you look at the absolute numbers, that reduction may represent a change from, say, a 5 percent risk to a 4 percent risk.

That may be statistically impressive, but clinically, it’s a very different conversation—especially when you factor in side effects, cost, and long-term burden.
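That arithmetic is easy to check for yourself. A minimal Python sketch using the hypothetical 5-percent-to-4-percent numbers above (these are the episode’s illustrative figures, not data from a specific trial):

```python
# Hypothetical illustration: a therapy advertised as a "20% risk reduction".
baseline_risk = 0.05   # 5% of untreated patients have the event
treated_risk = 0.04    # 4% of treated patients have the event

rrr = (baseline_risk - treated_risk) / baseline_risk   # relative risk reduction
arr = baseline_risk - treated_risk                     # absolute risk reduction
nnt = 1 / arr                                          # number needed to treat

print(f"Relative risk reduction: {rrr:.0%}")   # 20%
print(f"Absolute risk reduction: {arr:.0%}")   # 1%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```

The same therapy is a “20% risk reduction” in a headline and a one-percentage-point change at the bedside: roughly 100 people would need to be treated for one of them to avoid the event.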

We see this across medicine. Medications are often framed as outcome-changing based on relative risk reductions, while the absolute benefit to an individual patient may be modest. Meanwhile, interventions that affect multiple systems—sleep, nutrition, stress physiology, inflammation—are dismissed because they don’t fit neatly into reductionist study designs, even when their real-world impact may be equal or greater.

This distinction becomes especially important when we look at how efficacy itself is defined—particularly in oncology.

Many cancer drugs are approved by the FDA based on demonstrated efficacy with acceptable toxicity. But what often gets lost is that efficacy does not necessarily mean improved survival.

For years, a significant number of oncology drugs were approved not because they extended life expectancy, but because they met a surrogate endpoint known as partial tumor response rate. In many cases, this meant the drug was shown to shrink the primary tumor by more than fifty percent in volume.

On paper, that looks like success. Tumors are smaller. Imaging improves. Biomarkers move in the right direction.

But when researchers later examined long-term outcomes, many of these drugs did not meaningfully improve overall survival. Tumor shrinkage—while visually compelling—turned out to be a poor proxy for whether patients actually lived longer or better.

This isn’t a critique of oncology or drug development. It’s an illustration of a much larger issue: we often mistake improvement in a measurable outcome for improvement in a meaningful one.

And this happens because surrogate endpoints are easier to study, faster to measure, and more compatible with reductionist trial designs. But patients don’t ultimately care whether a tumor shrinks by fifty percent. They care whether they live longer, suffer less, and maintain quality of life.

This brings us back to relative risk versus absolute risk—and to the broader question of how evidence is framed. A therapy can look impressive when outcomes are presented in relative terms or when surrogate markers improve. But when you zoom out and examine absolute benefit, system-level impact, and lived experience, the story often changes.

So the question I keep coming back to is this:

Does this intervention meaningfully change a patient’s life?
Does it address the system, or does it simply shift one measurable outcome?

Because evidence-based medicine was never meant to be about chasing statistically significant numbers in isolation. It was meant to guide decisions that improve real human lives.

Another foundational issue in how we generate evidence is who that evidence is actually based on.

Randomized controlled trials rely on strict inclusion and exclusion criteria. And there’s a reason for that. These criteria are designed to isolate variables. They allow researchers to answer a very specific question: Does this intervention work under controlled conditions?

That rigor is essential. Without it, we wouldn’t know whether a drug or intervention has a true effect at all.

But the phrase “isolate variables” is the KEY, because what it also means is that evidence is often generated in highly selective populations. Patients with multiple chronic conditions are excluded. People on multiple medications are excluded. Pregnant women are excluded. Older adults are often excluded. Anyone who introduces “noise” into the data is removed.

What we are left with is a version of the human body that is cleaner, simpler, and far more uniform than real life.

So doctors do look at a study and ask whether the results are translatable to broader populations. But in my opinion, that step is not interrogated deeply enough. Guidelines are written, recommendations are made, and those recommendations are applied widely—often to patients who look nothing like the population studied.

This is the paradox: we must study interventions this way to establish causality—but doing so severely limits real-world applicability.

Human beings are not controlled experiments, as much as we want them to be, and as much as we MUST study them this way (right now) to advance medicine. But every patient comes with a unique combination of genetics, comorbid conditions, medications, environmental exposures, stress physiology, sleep patterns, nutrition, and lived experience. Yet we often act as though evidence generated in a narrow slice of the population can be cleanly extrapolated to everyone else.

This limitation becomes even more pronounced when we talk about women’s health.

For decades, women were NOT included in clinical trials. Concerns about hormonal variability, pregnancy risk, and reproductive potential led to women being left out of drug studies and biomedical research well into the late twentieth century.

As a result, much of what we consider foundational medical evidence was generated primarily in male bodies and then generalized to women after the fact.

Even today, women’s health remains significantly underfunded and under-researched relative to disease burden. Conditions that disproportionately affect women—autoimmune disease, chronic pain syndromes, functional GI disorders, hormonal disorders—are often less well studied, less well understood, and more likely to be dismissed when objective findings are limited.

This is not because these conditions are less real. It’s because our research frameworks have historically struggled to study complex, system-level, and hormonally dynamic states.

When patients—especially women—are told that their labs are “normal” but they don’t feel well, this is not a failure of imagination on their part. It is often a failure of the evidence base to reflect the full complexity of their physiology.

And this is where cracks begin to show in a purely reductionist interpretation of evidence-based medicine—not because evidence is wrong, but because it is incomplete.

Even when evidence is strong, the model we use to interpret it increasingly does not match the reality of modern disease.

Reductionism is not inherently bad. It is the foundation of how we learned anatomy, physiology, pharmacology. Break a system into parts, study one variable at a time, establish causality. That approach is why we have antibiotics, why we have surgical breakthroughs, why we can treat acute disease with precision.

But the more time I spend in medicine, the clearer it becomes that many of the conditions driving suffering today are not linear problems. They are systems-level problems.

And this isn’t just me, a lone doctor, postulating. Research is emerging, consistently, that validates what many functional and integrative clinicians have been circling around for years: chronic disease involves networks, not isolated organs.

Take mental health. We are watching the science around the gut microbiome and psychiatric symptoms evolve in real time. Human studies continue to show associations between gut microbial patterns and depressive symptoms, and meta-analyses of microbiome-targeted interventions like probiotics and synbiotics suggest modest but meaningful improvements in depression and anxiety in certain populations.

Is this definitive? No. The field is heterogeneous and messy. But the direction is clear: the gut, immune signaling, metabolic pathways, and brain function are not separate categories. They are interacting systems.

Or consider cardiometabolic disease. The older model was almost entirely lipid-centered. But the modern evidence base increasingly supports chronic, low-grade inflammation as a major driver of atherosclerosis and cardiovascular risk, independent of LDL. Major scientific statements and reviews now explicitly emphasize inflammatory signaling pathways—IL-1, IL-6, CRP—as part of the cardiovascular story.

We have outcome-level trials that made this harder to ignore. When anti-inflammatory pathways are targeted in carefully selected populations, event rates can shift—even when cholesterol itself is not the variable being manipulated.

So modern medicine is evolving. It is beginning to name what many system-oriented clinicians have been saying for a long time: chronic disease is frequently multifactorial, network-based, and interdependent.

But here’s the tension: our healthcare structure and research methods have not evolved at the same pace. We are still deeply siloed.

Specialization has been one of the greatest engines of clinical advancement. Cardiology, gastroenterology, psychiatry, neurology—these fields have made extraordinary progress because experts dedicated their lives to a narrower slice of human physiology. And even within those fields, subspecialization has driven precision—electrophysiology, interventional cardiology, heart failure, neuroimmunology.

This is good. We need it.

But as we get more specialized, we also need the parallel evolution of a different kind of clinician: not just primary care generalists—but what I would call explorative generalists.

Clinicians who are trained to think in systems, to track patterns across organs, to hold complexity without collapsing it into premature certainty. People who can integrate emerging science across fields and ask, “What connects this?” rather than, “Which silo does this belong to?”

This is one reason functional medicine has resonated with me and a lot of people. At its best, it is a framework for exploring systems-level physiology—gut-immune-brain connections, inflammatory drivers, metabolic resilience, endocrine signaling, nutrient sufficiency—before disease becomes diagnosable on a narrow lab threshold.

But we also have to be honest: functional medicine has been tainted in places by wellness culture and by its own version of reductionism. Sometimes the underlying pathophysiology is extrapolated beyond current evidence. Sometimes claims outrun data. And sometimes a complex system is reduced to a single villain or a single “magic” intervention.

A perfect example of this is the longevity conversation around NAD. 

We have legitimate mechanistic reasons to care about NAD biology. NAD is involved in energy metabolism, DNA repair, cellular stress responses. NAD levels appear to decline with age in multiple models. So reductionism—and marketing—can easily turn that into: “NAD declines with age, therefore supplementing NAD is anti-aging.”

That is an interesting hypothesis. But it is far too simple for the complexity of aging physiology. And when you look at the human clinical literature to date, the more honest conclusion is: we can often raise NAD-related biomarkers, but meaningful downstream clinical benefits are not consistently demonstrated yet. I have a whole podcast about NAD and this topic, if you want to check it out. 

But my bigger point is this: reductionism can creep in everywhere. It shows up in superspecialized conventional medicine when we mistake a biomarker for the whole patient.

And it shows up in integrative and functional medicine spaces when we mistake a mechanistic pathway for a guarantee of outcomes.

What we need is a medicine that can explore without concluding too early.

A medicine that can hold the rigor of evidence-based practice while also respecting that chronic disease is systemic, multi-causal, and personalized—especially in an era where research is rapidly proving just how interconnected the body actually is.

This brings me to medical guidelines.

In modern medicine, organizations like the AAP (American Academy of Pediatrics), the American College of Cardiology and the American Heart Association publish clinical guidelines that are widely considered the gold standard for care.

These guidelines shape how we practice. They inform board exams, malpractice standards, insurance coverage, quality metrics, and clinical decision-making across the country.

And to be clear—guidelines matter. They help standardize care. They reduce extreme variation. They protect patients from idiosyncratic or unsafe practice. In many ways, they raise the floor of medicine.

But somewhere along the way, the word guideline began to be treated less like a guide—and more like scripture.

There’s an unspoken culture in medicine that says: If it’s in the guideline, it’s correct. If I deviate from the guideline, I’m practicing bad medicine.

That mindset can quietly transform clinicians from thoughtful, curious scientists into algorithm-followers—executing protocols rather than actively engaging with evidence.

So let’s talk about how guidelines are actually created.

Guidelines are written by panels of physicians and subject-matter experts who review the available evidence and then come together to make recommendations. This process is rigorous, thoughtful, and often painstaking.

But here’s the part that isn’t always fully talked about: guidelines are evidence-informed—not purely evidence-dictated.

Within a single guideline, recommendations can be supported by vastly different levels of evidence. Some are grounded in large, high-quality randomized trials. Others are supported by smaller studies, observational data, mechanistic reasoning, or—importantly—expert consensus.

And expert consensus is not a flaw. We want experienced clinicians weighing in when evidence is incomplete. Medicine cannot wait decades for perfect data before acting.

But we have to take a step back and remember that this is still a group of people making a decision based on their experience and expertise (and let’s not forget, they certainly are subject-matter experts). BUT when a recommendation supported by consensus is treated the same as one supported by robust outcome-level evidence, a breakdown occurs. At the end of the day, these guidelines are still the product of humans making decisions for large, heterogeneous populations, based on incomplete, evolving data.

That doesn’t make guidelines wrong. But it does mean they are not infallible, and they are not substitutes for clinical judgment.

Guidelines were never meant to replace thinking. They were meant to support it.

And this matters enormously in complex, chronic, or system-level disease—where rigid adherence to algorithms can obscure nuance, suppress curiosity, and prevent clinicians from asking whether a recommendation truly fits the patient sitting in front of them.

When guidelines become the ceiling rather than the floor of care, medicine loses something essential. Not safety—but wisdom.

On the other end of the spectrum, there’s another breakdown in how we understand what “evidence-based” actually means—and that’s in public perception.

This is what we see: a scientific paper is published with a carefully worded conclusion in the title. What is not in the title are the clear limitations and the very specific conditions under which those conclusions apply. That part gets buried, a headline gets written in the news, and the finding keeps getting reduced and reduced as it pops up as “truth” in more news outlets, wellness blogs, and social media accounts, often stripped entirely of its nuance.

What started as a narrow scientific finding turns into a broad definitive claim.

We see this all the time. Nutrition is probably the most obvious example.

Eggs are the classic case. One year, eggs are bad for cholesterol. The next year, eggs are back. Then they’re neutral. Then they’re protective. And the public understandably wants a final answer. Are eggs good or bad? Should I eat them or avoid them?

But that framing misunderstands how science works. The question isn’t whether eggs are universally good or bad. The question is: For whom? In what context? At what dose? Alongside what other dietary and lifestyle factors?

Medicine—and really, most complex systems that show up in life, science, philosophy, and, I’ll just say it, everything—doesn’t operate in binaries. But culturally, we crave certainty. We want checklists. We want to conquer one problem, move on to the next, and never have to revisit it. But if we truly operated this way, we would still think the world is flat.

Science shouldn't (and doesn’t) work that way. Evidence evolves. Understanding deepens. New variables emerge. What we know today is not the final word—it’s a snapshot in an ongoing process.

And this is where the public conversation around evidence often goes wrong—not because people are uninterested in science, but because they’ve been taught to expect certainty where none actually exists.

This brings me to my favorite part of this conversation—the space where curiosity lives. The space between what we know and what we are still exploring.

In integrative medicine, we spend a lot of time in this space. Historically, many of the therapies we explore were labeled “alternative.” But what that really meant was that they existed outside the dominant research and funding structures of conventional medicine—not that they were inherently unscientific.

Practicing in this field means living with a lot of emerging evidence. And that naturally raises an important question: How much evidence is enough to recommend something?

That’s a fair question. But I would argue that there is an even more important one—one that keeps medicine both ethical and open-minded: Could this intervention plausibly cause harm?

If the answer is yes—if there is meaningful risk, unknown toxicity, or the potential to interfere with essential treatment—then we absolutely need more data before recommending it. That’s non-negotiable.

But if an intervention is low-risk, biologically plausible, and does not meaningfully threaten patient safety, the calculus changes. The question becomes not “Is this proven beyond doubt?” but “Might this help, and is it reasonable to explore?” Is my patient interested in this? (More commonly, patients BRING me these ideas, so the interest is already there.) And what is the financial cost? Is it reasonable to spend money on a given intervention, taking into account its potential to help, even if it’s uncertain whether it will?

This is where integrative medicine often diverges from rigid interpretations of evidence-based care. Not by abandoning rigor—but by incorporating risk, context, and patient values into decision-making. 

I think this is a little more philosophical, so let me give you a common example.

Take acupuncture for chronic pain or insomnia.

The data here is mixed, but consistent in one important way: acupuncture appears to help some people. Effect sizes are modest. Results vary by condition and individual. And mechanisms are still being explored—likely involving neuroimmune modulation, endogenous opioid release, and autonomic regulation.

But the risk profile is extremely low when performed by trained practitioners. Serious adverse events are rare. There is no systemic toxicity. And for many patients, the alternative is long-term pharmacologic management with well-documented risks.

So in that context, the question isn’t whether acupuncture is a universal solution. It isn’t.

The question is: Is it reasonable to explore a low-risk intervention that may improve symptoms, especially when conventional options are limited, poorly tolerated, or unwanted by the patient?

This is not reckless medicine. It is ethical curiosity.

It is acknowledging uncertainty without being paralyzed by it. It is recognizing that absence of definitive evidence is not the same as evidence of absence—especially when harm is unlikely and patient autonomy is respected.

And this framework applies far beyond acupuncture. It applies to mind–body practices, light-based therapies, nutritional interventions, and lifestyle changes that affect multiple systems at once.

The key is not to overpromise. Not to collapse complexity into a single fix. And not to let curiosity turn into certainty too quickly.

Good medicine lives here—in the balance between humility and exploration. Between knowing what we know, and staying open to what we have not yet fully mapped.

So this is where we are now. But to understand where medicine needs to go, we also have to look backward—and forward at the same time (a little trippy).

We need to revisit clinical wisdom that predates modern medicine. Wisdom that came from physicians like Hippocrates, and from ancient healing systems such as Ayurveda and Traditional Chinese Medicine. And at the same time, we need to look toward the future—toward new research methods, new frameworks, and emerging ideas like precision medicine.

Before our current model of modern medicine, healing largely came from two places: what came from the earth, and what came from ourselves.

Food. Herbs. Plants. And the human capacity to think, reflect, regulate attention, and alter consciousness.

These practices were often organized through rituals, ceremonies, and what we might now label as “magical” thinking. But when you strip away the language and symbolism, much of this was early mind–body medicine. It was an intuitive understanding that the body, the mind, and the environment were inseparable.

Over time, medicine became increasingly siloed—and eventually, placebo-blinded randomized controlled trials became the gold standard for evidence.

That model gave us incredible advances. But it also introduced a quiet problem: many of the most important drivers of health cannot be studied this way.

How do you placebo-blind meditation?

How do you isolate nutrition when it affects every system simultaneously?

How do you study herbs—which contain hundreds of biologically active compounds—as if they were single-molecule pharmaceuticals?

A single medication is one molecule, designed to target one pathway. A single herb may contain hundreds of interacting compounds working synergistically across multiple systems. Studying those two things using the same reductionist framework simply doesn’t work.

The same challenge applies to psychedelics and mind–body medicine. We cannot truly placebo-blind a full psychedelic experience. Let me just state the obvious here: you know if you’ve blasted off to another dimension or not.

Microdosing psychedelics may be different, and may fit better into our current models of study—I have an entire episode coming out soon about microdosing and the role the placebo response plays. But my point here is that some interventions fundamentally resist the structures we’ve built to study drugs.

And even beyond that, we’re now realizing that we can’t study drugs the way we used to either.

As we’ve explored throughout this episode, the body cannot be reduced to isolated organs. Chronic disease is system-wide. The heart is not just the heart. The brain is not just the brain. Inflammation, metabolism, immunity, and neuroendocrine signaling overlap constantly.

Our study designs haven’t fully caught up to this reality.

So, how can we change this? First, I am thinking of pattern-based evidence instead of outcome-based evidence. What if, instead of asking, “Does this intervention improve one predefined outcome?” we asked, “What patterns shift when this intervention is introduced?”

Chronic disease doesn’t move in straight lines—it moves in constellations. Sleep changes. Energy shifts. Inflammation markers fluctuate. Mood, cognition, digestion, and resilience evolve together.

We could begin studying interventions—especially lifestyle, mind–body, and multi-compound therapies—by tracking systems-level pattern shifts over time, rather than forcing them into single endpoints. I think this would involve longitudinal, dense data from smaller cohorts. Within-person change over time, not just population averages. And pattern recognition across systems rather than binary outcomes.

Next, I am thinking of precision curiosity trials—personalized N-of-1 trials at scale. The future may not be larger randomized trials—it may be thousands of deeply characterized individual experiments.

Imagine structured, ethical N-of-1 trials where patients explore low-risk interventions—nutrition, supplements, mind–body practices—while tracking individualized biomarkers, symptoms, and functional outcomes over time.

This framework would treat each patient as their own control, allow interventions to be explored responsibly, generate real-world data about who benefits and under what conditions, and embrace variability instead of trying to eliminate it. Instead of asking, “Does this work for everyone?” we ask, “For whom does this work—and why?”

That is precision medicine without pretending certainty too early.

If medicine is going to evolve, it cannot abandon rigor—but it must expand its imagination.

The future of evidence-based medicine is not less science. It’s science that can hold complexity. Science that respects systems. Science that allows exploration without prematurely declaring victory.

This is not a return to the past. It’s a synthesis. And it’s already beginning.

So when I ask the question, “Is modern medicine still evidence-based?” the answer isn’t a simple yes or no.

Evidence-based medicine is not broken. But it has been narrowed. And in that narrowing, we lost something essential.

Evidence was never meant to replace clinical judgment. It was meant to inform it.

It was never meant to silence curiosity. It was meant to guide exploration.

And it was never meant to reduce human health to isolated variables—it was meant to improve real human lives.

As disease has become more complex, more chronic, and more systemic, our approach to evidence must evolve alongside it. That means holding rigor and humility at the same time. It means honoring what we know, questioning what we assume, and staying open to what we haven’t yet fully mapped.

Reclaiming evidence doesn’t mean rejecting science. It means practicing science more honestly.

And restoring clinical wisdom doesn’t mean going backward. It means remembering that medicine has always been both an art and a science—and that the future depends on our ability to hold both.

So thank you for taking the trip with me.

An exploration meant to poke holes in familiar frameworks, question what we think we know, and open our eyes to connections that weren’t obvious before.

Stay curious… Keep tripping.