What's Up with Tech?

How Digital Diagnostics Is Transforming Diabetes Care Through AI

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

Imagine a world where AI can detect serious eye disease in just 60 seconds—not in a specialist's office after months of waiting, but right at your primary care visit. That's the reality Digital Diagnostics has created with their groundbreaking autonomous AI system for diabetic retinopathy detection.

In this eye-opening conversation, Mark Daly reveals how their FDA-cleared technology is transforming diabetes care by addressing a critical healthcare bottleneck. "There's a huge lack of access to care because there just aren't enough ophthalmologists," he explains. Their solution? Bringing specialist-level diagnostic capability directly to primary care settings where patients already receive regular care.

What makes this approach remarkable isn't just the technology—it's the immediate impact on patient outcomes. When patients receive their diagnostic results within a minute of their scan, something remarkable happens: they follow through on treatment recommendations at significantly higher rates. Some even begin addressing their underlying diabetes management. This immediate feedback creates a powerful moment for engagement that traditional weeks-long referral processes simply cannot match.

The conversation explores Digital Diagnostics' ethical framework for AI development—what they call "AI the right way"—which considers fairness and equity from data collection through post-market surveillance. Mark shares fascinating insights about their global deployment challenges, future applications beyond eye disease, and how they successfully navigated the complex landscapes of FDA clearance and insurance reimbursement.

Perhaps most compelling is their vision for AI's role in healthcare: not replacing physicians, but freeing them to focus on what matters most. "Our goal is to help doctors spend more time with patients, not less," Mark emphasizes. "The real value for the ophthalmologist is to sit there with the patient, talk to them, look at their eye, and explain what's going to happen—not sitting in the back office looking at a computer screen."

Want to learn more about how AI is transforming healthcare access? Visit digitaldiagnostics.com or connect with them on LinkedIn to follow their journey.

Support the show

More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everyone, fascinating discussion today as we dig into AI-powered healthcare with a real innovator in the field of digital diagnostics. Mark, how are you?

Speaker 2:

Hey, doing great, Evan.

Speaker 1:

Thanks for being here. Really intrigued by the amazing work you're doing, and we're going to dive right in with Irma from Avira Health, maybe with the first question and topic.

Speaker 3:

Yeah, so obviously today we're happy to have Mark on the program. We're going to be diving into the AI-powered revolution transforming healthcare, and the topic specifically is digital diagnostics, which also happens to be the name of your company. You have an FDA-cleared system to autonomously detect diabetic retinopathy. So can you step back and walk us through the founding vision of Digital Diagnostics and how Luminetics Core brought that vision to life?

Speaker 2:

Yeah, absolutely. Our founder, Dr. Abramoff, has been working in the field for a really long time. He actually had the idea for what we now call Luminetics Core way back in the 80s when he was in grad school, and he's been working on it ever since, waiting for the technology to advance as the field has come forward. If you look at his early research papers, he's done so much in the field, from fundamental computer vision and computer vision libraries all the way through to what we have now with Luminetics Core. His whole vision has always been that the computer has the ability to see and sort information much faster, more reliably, and more consistently than a person does. It doesn't get tired.

Speaker 2:

So how can we use that to help deliver this eye care to people who desperately need it? Because the problem that we see and that we're going after is a huge lack of access to care, because there just aren't enough ophthalmologists. Or even if you're in a place that has enough ophthalmologists, are you able to get to them? Are you able to see them? So using technology to help accelerate access to care, that was the vision for the company: really helping bring that care equitably to as many people as possible at an affordable price. That's what we've been setting out to do and working on ever since.

Speaker 1:

Wow, wonderful mission. So let's dive in. Tell us about your core AI algorithm. How does it work, how is it maybe different from other similar systems and what makes it unique?

Speaker 2:

Yeah, so Luminetics Core is a software-only medical device that's designed to diagnose more than mild diabetic retinopathy.

Speaker 2:

So we work in combination with a device called a fundus camera, which is a special kind of camera that can take a picture of the back of your eye, the retina. The reason you need a special camera is that the way the eye is designed to bring light in doesn't make it easy to actually see what's going on in there. That design is what makes the eye work so well, but it also makes it hard to look at, so we have to use this special camera to take a picture of the back of your eye. And there's so much cool stuff we can do with that picture. What we do with Luminetics Core is look for biomarkers, which are markers in the image that indicate something happening biologically in the eye. There are all these different features you can see on an image, just like on an x-ray you can see where the break is on a bone. On a picture of the back of your eye, you can see all sorts of different things happening, and so we've trained our model to look at these biomarkers the same way a physician gets trained when they're going through school. We look for specific biomarkers, and then, after our machine learning model looks at all of them in aggregate, we come up with a determination based on those biomarkers and what we know from the ETDRS scoring table: either you have a diagnosis of more than mild diabetic retinopathy and we refer you out to a physician to get treated, or it looks like you're good, you don't have DR, and we'll see you back here in a year.
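To make the flow Mark describes a bit more concrete, here is a minimal, hypothetical sketch of the decision logic: per-lesion biomarker detectors score a fundus image, and the aggregate maps to the binary refer-or-rescreen output. Everything here, the detector names, the threshold, and the grading rule, is an illustrative assumption, not Digital Diagnostics' actual Luminetics Core pipeline.

```python
# Illustrative sketch only; not Digital Diagnostics' actual implementation.
from dataclasses import dataclass


@dataclass
class BiomarkerFindings:
    """Hypothetical confidence scores in [0, 1] from per-lesion detectors."""
    microaneurysms: float
    hemorrhages: float
    exudates: float


def detect_biomarkers(fundus_image) -> BiomarkerFindings:
    """Placeholder for trained lesion detectors run against a retinal image."""
    raise NotImplementedError("Each detector would be its own trained model.")


def grade_exam(findings: BiomarkerFindings, threshold: float = 0.5) -> str:
    """Aggregate biomarker evidence into the device's binary output."""
    # A real system maps lesion patterns onto a clinical severity scale
    # (the ETDRS table Mark mentions); this toy rule just checks whether
    # any single detector exceeds an assumed threshold.
    scores = (findings.microaneurysms, findings.hemorrhages, findings.exudates)
    if any(score > threshold for score in scores):
        return "more than mild DR detected: refer to an eye-care provider"
    return "negative for more than mild DR: rescreen in 12 months"
```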

Speaker 2:

So fundamentally, this machine learning algorithm takes the pictures and applies the algorithm to determine what we think is happening in those pictures, and once we get that result, it goes into the patient's chart, but it's also delivered to the patient right at the point of care. That's maybe the most interesting thing to me about this device: when a patient comes in, they get this picture taken of their eyes, and within about a minute they can get a result right there at the point of care. We've found that makes a huge difference in terms of how people react to this information and what they do about it in terms of their outcomes. We actually have some cool papers showing that patients who get these results at the point of care, when they get the image, actually will make interventions. They'll go get that follow-up done to protect their vision. Some patients will even go so far as to start addressing their underlying issues with A1C and diabetes. So it's a hugely transformative thing to get access to that information.
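As a rough illustration of the point-of-care loop described above (capture, analyze, chart, and hand the result back within about a minute), here is a small hedged sketch. The camera, analyzer, and EHR objects and their method names are assumptions made for the example, not a real device or EHR integration.

```python
# Hypothetical point-of-care workflow; object interfaces are invented for
# illustration and do not describe a real camera, AI service, or EHR API.
import time


def point_of_care_exam(camera, analyzer, ehr, patient_id: str) -> str:
    start = time.monotonic()
    image = camera.capture_fundus_image(patient_id)   # taken by clinic staff
    result = analyzer.grade(image)                     # autonomous AI output
    ehr.write_result(patient_id, result)               # lands in the chart
    elapsed = time.monotonic() - start
    # The result is shared with the patient in the same visit, typically
    # about a minute after the image is taken.
    print(f"Result ready in {elapsed:.0f} seconds: {result}")
    return result
```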

Speaker 2:

What normally would happen is you'd go to your primary care provider, they would refer you out to a specialty care doctor, and then you'd have to go to that specialty care doctor, get the images done, and wait another couple of days or even weeks to get the results back. People get lost in that process, so getting it at the point of care has a huge impact. And that's maybe the last interesting piece: we focus on doing our screenings in the primary care setting, at the regular provider you already see for your care, rather than sending people out to all these specialty clinics. Because that's really the problem: if you could get to the specialty clinic, you'd probably already be going there, but you can't. There just aren't enough of them, or people aren't able to access them. So by changing where we do the exam, we're able to reach many more people, and that really helps close that care gap.

Speaker 3:

That is so interesting. I've certainly had that picture of the back of my eye taken, but I had to go to a specialist to do it. I didn't even know that kind of access exists, and I didn't know what was happening behind the scenes with all that data and how it's being processed. So let's talk about what's happening behind the scenes. We hear a lot about AI being used the right way. So, from your perspective, how do you bake ethics and equity into the development and deployment of your technology? I mean, you started touching on this, but tell us more.

Speaker 2:

Yeah, no, I mean, you can see it right in the company tagline, right? AI, the right way. It's fundamental to everything we do, and it starts at the very beginning, before the product even exists. The whole process has to be built around this ethical framework. There's this paper that our founder, Dr. Abramoff, helped contribute to with a group called the Center for Collaborative Ophthalmic Innovation, the CCOI. The CCOI is really a group of physicians and industry experts and patients and advocates who all get together to try to come up with ways to make things better for everyone, and one of the things they came up with was this total process lifecycle for how to do things equitably, starting from bioethical principles. For us, that starts with the data we use.

Speaker 2:

We know where the data comes from. It comes from patients who have consented to be involved in the process and to help us do our model training and model validation. That carries through doing analysis and understanding of the disease state and historical access, through the data gathering and the design, then into how we design trials and pick trial sites, and all the way through to post-market surveillance, where we make sure we're monitoring the system so that it can still be used safely and still performs in the field the same way it did back in the lab when we designed it and in our trials when we tested it with the FDA. So there's a whole process-based piece about the ethics that we really think is important all the way through the lifecycle. It's not just one thing you do once, it's not just a checkbox. It's really a big part of our culture, from gathering the data to building the product the whole way through.
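To give a flavor of what that post-market monitoring can look like in practice, here is a hedged sketch that compares one simple field metric, the per-site positive-screen rate, against a pre-market baseline and flags drift. The metric choice, baseline value, and tolerance are invented for illustration; a regulated device follows its own documented surveillance plan, not this toy check.

```python
# Illustrative drift check only; numbers and metric are assumptions, not
# Digital Diagnostics' actual surveillance criteria.
from statistics import mean

EXPECTED_POSITIVE_RATE = 0.20   # assumed rate from pre-market validation
TOLERANCE = 0.05                # assumed acceptable drift band


def site_drift_report(site_results: dict[str, list[bool]]) -> dict[str, str]:
    """Map each site to an 'ok' or 'investigate' note based on its referral rate."""
    report = {}
    for site, outcomes in site_results.items():
        rate = mean(outcomes)  # True/False referral outcomes per exam
        if abs(rate - EXPECTED_POSITIVE_RATE) > TOLERANCE:
            report[site] = f"investigate: positive rate {rate:.0%} outside expected band"
        else:
            report[site] = f"ok: positive rate {rate:.0%}"
    return report


# Example: two hypothetical clinics, one inside and one outside the band.
print(site_drift_report({
    "clinic_a": [True, False, False, False, False],   # 20% positive
    "clinic_b": [True, True, True, False, False],     # 60% positive
}))
```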

Speaker 1:

Incredible. So, as you know, we're in the midst of a diabetes crisis in the US, and probably globally. In some places a third of children might be pre-diabetic. It's shocking. So what are the biggest hurdles to implementing your solution, autonomous DR screening, in primary care? I think you have some great examples. OhioHealth, I see on your website, has been an amazing rollout, but tell us more.

Speaker 2:

Yeah, I mean, that's really who we've been looking to partner with.

Speaker 2:

There's a couple of different types of providers we were going for.

Speaker 2:

Academic medical centers are another big provider group that we work with because, again, a lot of times the problem there is they have enough physicians and they have enough cameras, but they don't have enough time to do it all.

Speaker 2:

In the bigger academic settings they have a lot of other cases going and you get this huge backlog of screening exams.

Speaker 2:

We talked to a large academic medical center in the mid-Atlantic region, and one of their problems is that the ophthalmologists have a couple hundred DR screening exams they have to read every night while they go through their Epic inbox. It creates delays in patient care, it creates huge dissatisfaction for the physician, and just going in there clicking no, no, no, no, no over and over again is tedious and time-consuming.

Speaker 2:

It's really not the best use of all their skills and training. We really think the best use of physicians' time is with the patients, not staring at pixels on a screen. So that's where we've found a lot of success: in those larger centers where they have a backlog they can't clear, and then on the access side, in primary care settings. SSM is a great group we work with in a bunch of different states in the Midwest. They have care settings all over the place, and that's a great fit in terms of access for more rural locations or places where you're not in a big city with an AMC. We have success stories with both.

Speaker 3:

Yeah, speaking of success stories, you have a global presence, and obviously healthcare systems are different around the world. So tell us what you've learned from international deployments, for example your partnerships in Saudi Arabia.

Speaker 2:

Yeah, it's definitely a very interesting process. Just from the commercial side of things, it's a totally different sales process. Regulatory is more similar than you might think, in that if you have your FDA clearance, you can usually use that to get into those markets with a lot less effort than starting from zero. But that's because we've already gotten the FDA clearance. One of the things that's really interesting too is that the focus on integration and scale is very different.

Speaker 2:

So for a lot of customers, and Saudi is a great example, they have huge access issues in terms of huge patient populations with not enough physicians. There they might be able to staff up 20 or 30 rooms to do Luminetics Core exams, but there won't necessarily be physicians on the receiving end of a referral to actually treat the patient, so the problems are a little bit different. In the US, if we can catch the disease, we can typically route you to a provider to get you treated. In Saudi, one of the issues we've seen is they don't necessarily have that: just because we've been able to diagnose you here, now we have to take you somewhere else to figure out the treatment.

Speaker 2:

So there's different pieces there as well, and I think we're still a little early days in some of those other markets. We're largely focused on the US right now, but it's really exciting to see the appetite and the engagement from folks in different markets like Saudi or the UAE or other places in the Middle East.

Speaker 1:

Amazing. And can you share any clinical outcomes or metrics from real-world use cases?

Speaker 2:

Yeah, I mean, we have a bunch of papers on our website, so I don't want to misquote them off the top of my head and get any of the numbers wrong. But the thing I always get excited about, my favorite story, is a case study about a patient who changed their behavior after they got the real-world exam; that one I always personally love. The other one that's really important is looking at real-world outcomes of patients who are screened with the AI versus patients who are not. Again, when you have a point-of-care result, you get better follow-through in terms of the long-term patient outcome. That, to me, is probably the most exciting piece: recognizing that if you have your exam interpreted by the machine, we can prove with science that you actually will have a better outcome than waiting for a physician to do it. So I think there's a lot of opportunity there, and we're really early days in terms of gathering evidence there, too. We have some early data.

Speaker 2:

But if we want to look at things like visual acuity or other longer-term diabetes outcomes, we've got research projects we're working on in collaboration with some academic partners to look at more of this real-world evidence and keep generating it. That's one of the biggest things we've seen: as we build this evidence, we get more trust from physicians, and then it creates this virtuous cycle of more adoption and then more data, and we can keep the snowball rolling.

Speaker 3:

Speaking of virtuous cycles, I know you're exploring other disease areas like glaucoma, age-related macular degeneration, even neurodegenerative diseases. What application is next in your pipeline, and why?

Speaker 2:

Yeah, so there's a couple of different things we've got going on. The easy, obvious one is that right now our device is cleared on a particular camera, and you can't stick a giant device in a little closet. So one of the things we're working on is getting onto more cameras, including handheld cameras, because, again, that access piece is such a key piece. We think being able to diffuse what we have out to more people is hugely valuable.

Speaker 2:

And then, in terms of some of the oculomics pieces, we're doing a lot of data gathering and research right now with academic partners to figure out which other diseases we can detect from the eye, and which ones we'll actually be able to make claims for and then take through a new device submission. There's a lot of cool research on neurodegenerative diseases, cardiac diseases, kidney disease, all sorts of cool stuff we could do. It comes down to asking, all right, what benefit can we give to the patient? Where is there a workflow problem that we can solve? That was part of why we picked diabetic retinopathy as our first disease: we knew there was a problem this automation could eliminate. We don't want to just pick a new disease like AMD, where, depending on what kind of AMD you have, it's not clear how useful an AI that can detect it would be, compared to something like cardiology where, if

Speaker 2:

I say, hey, it looks like maybe we need to get you on some statins, that would be a lot more compelling and a lot more valuable for patients. So we're trying to evaluate based on both the reliability of the system and the data, and also the potential impact for patients and providers. Because a lot of AI systems I've seen in other specialties, radiology is a great example, are helping the providers do things. They're not necessarily thinking about the patient or that part of the workflow. What we've seen as a major differentiator in stickiness is helping the patient, not just helping the providers. It's assumed that the providers need help, but if you keep the patient at the center, everything else typically flows much better than if you're just using technology for technology's sake. Some of these AI models are really nifty, but are you actually changing and helping, or are you just putting a hat on a hat? You can see it from the outside.

Speaker 1:

Yeah, makes sense. You've also cracked the code on FDA clearance and reimbursement models, which is a huge deal. Any advice for your peers or other innovators who are going through that at the moment?

Speaker 2:

You have to think about it early on, right? So let's say you wanted to do a screening thing for glaucoma. Glaucoma screening is defined in the Social Security Act, and it's defined really explicitly that it has to be a human physician who does the screening if you want to get reimbursement from the US government. So sometimes you need to get laws changed, and you need to understand what insurance companies and providers want to do. We have a whole team that networks with providers, that talks to CMS, that talks to all these different groups. The other thing to think about is when you're designing your clinical trial and your evidence: the evidence you need for the FDA is to show that your device is safe and effective and secure, which is different from what you need to show an insurance provider, which is that your device is better than an alternative and cheaper than an alternative treatment. So you have these sort of not exactly conflicting but not purely aligned goals, and all of that needs to be contemplated up front when you're coming up with your device design and idea, so you have buy-in from providers and payers and they're actually going to be interested in reimbursing your service. Because the worst thing in the world is to see a cool breakthrough device and then no one can get paid for it, and it just sort of dies on the vine.

Speaker 2:

We did a lot of work upfront on that reimbursement piece. And maybe the last piece, on the FDA and the regulatory front, is that it's a very dynamic environment right now.

Speaker 2:

I think the way we got cleared, the feedback we've seen in subsequent submissions, and what we hear from peers in the industry all suggest that things are changing a lot right now. Really, the best advice I have is to get a really strong regulatory partner who understands the landscape, who understands the relationships and the things that matter, and then follow their advice.

Speaker 2:

Because a lot of times what I've seen from other folks, especially in the startup space or the health tech innovator space, is that they look at regulatory as a checkbox thing you do at the end of the process, as opposed to an integral part of the whole thing. You really need to think about your destination before you even start your journey, and if you don't do that, it can become a really hard problem, trying to fit a square peg in a round hole in the last mile. So doing that upfront planning for both the reimbursement and the regulatory is hugely valuable, and they're intertwined. You've got to think about them together, even if you're going to have to meet them with slightly different goals and objectives in your trial.

Speaker 3:

Yeah, I like your thoughtful approach of thinking through all these concerns ahead of time, and I love how you said earlier that you put patients, not just providers, at the center of adding value with your technology; that's how we move this whole ecosystem forward. So maybe a little bit of a controversial question. You touched on this, but let's address it directly. Some people are concerned that autonomous AI could threaten radiologists and ophthalmologists, and critics worry about physicians being displaced by computers. Clearly, you've already laid out how this is an adjunctive process, a helpful tool, so maybe just speak to that directly: augmenting doctors, not replacing them.

Speaker 2:

A hundred percent. Our goal is to help doctors spend more time with patients, not less time, because the real value for the ophthalmologist is to sit there with the patient, talk to them, look at their eye, and be able to explain what's going to happen to them or how they're able to treat whatever disease they have. Sitting in the back office, looking at a computer screen, and typing up a report doesn't create value for the patient, really, or for the physician; they're doing it because someone needs to make the record, not because it creates value. So I think the big opportunity is to take things that don't create value for the physicians off of their plate. Not that the task isn't valuable: interpreting images is a hugely valuable task, and even with some of the multimodal models I'm seeing in the radiology space, there's a lot of complexity. I mean, if you look at the AI clearances radiology devices have, there aren't autonomous devices there. I don't know if there will be autonomous devices, at least not in the current regulatory framework. And even if you look at expert groups like the ACR, the ACR is not in favor of autonomy. They don't think that's their end goal either, and so I tend to agree that an autonomous, sci-fi kind of device isn't the real North Star.

Speaker 2:

I don't think that's where we're going. I think we're going to have devices that take painful, tedious things out of the mix, just like spreadsheets took away all the manual paper processes in accounting and let people do more speculative, what-if kinds of work. AI is going to be very similar. It will take some of the cognitive burden and workflow burden off of physicians, but ultimately that will allow them to spend more time with their patients, whether that's doing treatment or explaining things. I think that's really where the value is.

Speaker 2:

I see some of the AI tools out there that are maybe more patient-facing, and I always worry about that. If I'm a patient, I want to talk to a person. Talking to a machine is not going to reassure me when I'm going through a medical event; in fact, I want to talk to someone I trust. As much as I trust the machine to hit the brakes on my car, that's not the same as telling me how to feel about a diagnosis or what my outcomes might be. That's where the machine, when it really works well, takes those distractions away from the provider, and it can just be two people having an interaction. But we have a long way to go with a lot of the technology to get there, and I don't know that everyone agrees with that vision. So we see a lot of interesting things out in the field.

Speaker 1:

Well, interesting things indeed, and amazing work. We're all rooting for you. Many of us have friends and family members with diabetes; it's such an important mission. What's up for the rest of the summer and fall? What are you guys up to? Where can folks meet you or see you out and about?

Speaker 2:

We've got a couple of different conventions and things coming up. I don't know off the top of my head what all our marketing plans are; I'll have to follow up on that. But ARVO is always a big show for us, or, sorry, not ARVO, AAO in the fall, so we'll definitely be there. I think there are a couple of other trade shows coming up between now and then too, but AAO in November, I think, will be the big one.

Speaker 1:

Good luck with that.

Speaker 3:

Can people find you online?

Speaker 2:

Oh yeah, definitely. We're on all the different social channels, LinkedIn, and our website, digitaldiagnostics.com.

Speaker 1:

Well, thanks for joining, really appreciate the insight and the work you're doing, and thanks everyone for listening, watching, and sharing this episode. And be sure to check out our new TV show, TechImpact TV, now on Bloomberg and Fox Business. Thanks everyone. Thanks, Mark. Thanks, Irma. Take care.