Starkey Sound Bites: Hearing Aids, Tinnitus, and Hearing Healthcare

A Deep Dive into Starkey’s Industry-leading AI Hearing Technology

Starkey Episode 73

In this episode, we turn the tables on our host, Dave Fabry. For the first time, Dr. Fabry finds himself on the other side of the Sound Bites microphone to take our listeners on a deep dive into Starkey's industry-leading hearing technology. From artificial intelligence (AI) to deep neural networks (DNN), learn how this technology is making life better for patients and how hearing professionals can take small steps toward implementing it in their practices. Hearing Matters host Doug Beck guest hosts this special episode of Sound Bites.

To learn more about the latest in Starkey’s hearing technology, visit starkeypro.com

SPEAKER_01

Welcome to Starkey Sound Bites. I'm typically the host of this podcast. My name is Dave Fabry, and I'm Starkey's chief hearing health officer. But in this episode we're doing turnabout day, as we would have called it when I was growing up in high school. The tables are turning, and I'm being moved to the metaphorical other side of the microphone to do a deep dive into hearing health technology, including artificial intelligence. My friend and hearing industry veteran, Dr. Doug Beck, who is as obsessed with technology as I am, will serve as guest host today. So, Doug, welcome to Sound Bites, and go ahead, take it away.

SPEAKER_00

Thank you, David. It is an honor to work with you. I've watched a lot of the Sound Bites video clips, and I think it's a great venue, so I'm honored to be sitting on the other side of the chair and interviewing you for a little bit. Let me start with today's primary topic, which is artificial intelligence. When I think of artificial intelligence, I think of it as a big umbrella term. Under that we have machine learning, and under that we have deep neural networks. The ability of computers to solve problems, to process information, and to determine and engage a maximal response seems to be the working definition of artificial intelligence. It's not just measuring, it's not just metrics; it has to do something with that information. So does that work for you? What would you say?

SPEAKER_01

Yeah, I think you set the stage well. AI, artificial intelligence, has become a term that is both ubiquitous and, in many ways, a meaningless buzzword, so I think it's important and appropriate to frame it in the context of our discussion. As you said, an overarching definition for AI is really a system, or in our case hearing aids, that is capable, in the purest sense, of adapting or learning, even reasoning. When we think back to Alan Turing and some of the visionaries who began working with AI, the definition of a win with artificial intelligence, if you consider that a win, is when a person cannot distinguish whether they're conversing with another human being or a machine. The ability to do that, I think, is what will define success with AI. Then, as you said, machine learning, a subset of that, is really a rule-based system. Let's pull it into our domain: if we're thinking about speech and categorizing acoustic environments, that's essential to what machine learning can do to provide automatic switching and application of directional microphones, noise management, wind noise management, and special processing that's different for speech and music, all of the things we've come to expect hearing aid users to get by simply wearing their devices and going throughout their day. The first challenge is how well and how accurately a machine learning system can categorize different acoustic environments. And then, to your point, the "something to be done" is how well directionality, compression algorithms, wind noise management, and noise management can be applied, and what the benefit to patients is in the long run.

SPEAKER_00

Yeah, and we're at that point now, it's 2024, where I want to go back to Hans Moravec, who in 1997, at Carnegie Mellon University, made an incredible prediction. In his abstract, he says it is predicted that the required hardware and software will be available in cheap machines in the 2020s, so that the processing power and memory capacity necessary to match general intellectual performance of the human brain can be approximated. And here we are, and we've pretty much got relatively inexpensive DNN available. And DNN, to me, it's so important to separate that from digital. Of course it is part of digital, but it's much more advanced. When people think about deep neural networks, what they should be thinking about is how they can talk to Siri and ask almost any question, and Siri will have an answer that's probably correct 98% of the time. You can go to your Google Photos or your Amazon Photos, and I can say, hey, here's a picture of Dave Fabry, show me all my pictures of Dave Fabry, and it'll go through my entire backlog of memories, and within a few milliseconds I have all the pictures of Dave Fabry. So DNN is much more sophisticated than simply digital. So tell me how that's applied in Genesis and Genesis AI. What are you doing with DNN that separates it from the pack?

SPEAKER_01

Sure. And one thing I do want to back up on just a little bit, because you said the prediction was for the 2020s. We've got five and a half years left, and I don't think it's necessarily outside the realm of possibility that we will achieve that definition still within the 2020s. But it is subject to Moore's law and the hardware part of this. Moore made that prediction about transistors, and it lasted far longer than he ever thought it would. He unfortunately died a year or so ago, so it survived through his life, and I don't think he ever expected that it would.

SPEAKER_00

And to be clear, Moore's law was that memory, chips, and processing power would double about every 18 to 24 months. And he made that prediction, I believe, in the mid-60s.

SPEAKER_01

Yeah. He made the prediction that the number of transistors on an integrated circuit would double every two years, and that has held up. People have talked about the death of Moore's law on the hardware side, but perhaps even more interesting and germane to this topic is that the computational power behind artificial intelligence is doubling every three and a half months. That's why we've seen astronomical, exponential improvements in AI processing. OpenAI and ChatGPT took the industry by storm. We deal with natural language processing in hearing aids, if you will, when you think about some of the features we're incorporating, like translation and transcription, and that ability to have a natural conversation is what enables that new interface, through the human voice, into the computer or the phone or whatever you're interfacing with. Thinking about those applications for hearing aids is what's so exciting, and it is improving, and accelerating in that improvement, at an extremely rapid pace. One of the additional nuances of AI that goes into the Turing definition and others related to it is natural language processing and a computer's inability to show empathy. There have been a lot of examples where computers using artificial intelligence are not yet capable of bridging that gap into empathy, and that's when I think this will achieve success. That is one of the reasons Starkey uses human operators: anytime a patient or provider calls into the company, they get a human voice. We haven't yet found that the ability to replicate human communication for the person on the other end is at a level where we feel confident people won't get frustrated by the pauses and momentary glitches that affect many voice commands using AI and chat.
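
To put those two doubling rates side by side, here is a purely illustrative back-of-the-envelope calculation in Python. The 2-year and 3.5-month doubling periods come from the conversation above; the 24-month window is just an example chosen for the comparison.

```python
# Illustrative arithmetic only: compare growth under a 2-year doubling period
# (classic Moore's law) with a 3.5-month doubling period (the AI-compute trend
# mentioned above), over the same 24-month window.
window_months = 24
hardware_growth = 2 ** (window_months / 24)     # one doubling in 2 years -> 2x
ai_compute_growth = 2 ** (window_months / 3.5)  # ~6.9 doublings -> roughly 115x

print(f"Hardware capacity after {window_months} months: about {hardware_growth:.0f}x")
print(f"AI compute after {window_months} months: about {ai_compute_growth:.0f}x")
```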

SPEAKER_00

And that's a wonderful asset for Starkey, quite frankly. I don't think any of the other major manufacturers have real people answering the phone, and it does make a difference. You know, one of the things about technology that we spoke about years ago was this idea that your hearing aids could maybe have a better feel for your intention, and we talked about that years ago using geotags. What I meant by that is, if you go into, let's say, a Starbucks, your hearing aid is already online, and if you're a regular Starbucks customer it knows you're in Starbucks. So maybe it'll turn on tighter directionality, maybe it'll turn on more noise reduction, maybe it'll do X, Y, or Z using geotags. But I think we're pretty far past that at this point, aren't we?

SPEAKER_01

Yeah, it's a really good point. When we first introduced Halo, which was our first made-for-iPhone hearing aid, from 2014, we were using acoustic environmental classification. Just to unpack that a little bit: when you and I first started in the profession, if we wanted to fit a patient with directional microphones, we needed to have the patient manually switch, via a switch or button on the device, from a quiet environment to a noisy environment to engage directionality. Now it's table stakes. People expect that, across all of the tiers of technology, patients can put the devices in and, if they desire, they don't have to interact with their hearing aids or an app any more than that. They just go throughout the day, and the device categorizes the type of environment: quiet, noisy, speech present, music, et cetera.

SPEAKER_00

Yeah, right.

SPEAKER_01

Using a machine learning classification system. People can argue, sure, we've had machine learning for a long time, but that's just scratching the surface, because when we launched Halo we used that machine learning classification plus geotagging to try to improve slightly on, as you said, when you go to Starbucks, or in Minnesota we go to Caribou every morning, and it's the same environment at roughly the same time of day, so it's busy and the baristas are working. It gets closer to say, if I've saved a special coffee shop memory, it'll apply those settings once it recognizes automatically, automagically, that I'm in the coffee shop. But we wanted to go one step further when we began incorporating inertial measurement units in our devices and began to use a feature we call edge mode, dating back to 2020 when we launched it, so that it would now incorporate not only sophisticated machine learning classification but situational intent, as you said, the user's intention to say, I'm in this environment right now. Let's say I'm using geotagging and it recognizes I'm in my favorite coffee shop, but today there's a different barista working, and she's soft-spoken, and I can't quite understand her the way I could my regular barista. By simply tapping on the device or pressing a button in the app, we do an additional acoustic scan that takes a look at the unique set of conditions at that moment in time, not just in that location but at that moment, to say, right here, right now, what's in front of me is what I want to hear. And then it applies special offsets on the basis of the acoustic parameters in that environment, for that person in front of them, with additional offsets beyond what the machine learning classification system can do.
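
As a rough sketch of the flow described here, an on-demand acoustic snapshot, a classification step, and intent-driven offsets, the following hypothetical Python illustrates the idea. The class names, feature values, and offset numbers are invented for illustration and are not Starkey's actual algorithms.

```python
# Hypothetical sketch of an "edge mode"-style request: take an acoustic snapshot,
# classify the current environment, then apply offsets shaped by listener intent.
# All names and numbers are illustrative, not Starkey's implementation.
from dataclasses import dataclass

@dataclass
class AcousticSnapshot:
    overall_level_db: float   # broadband level at this moment
    speech_present: bool      # voice activity detected
    noise_floor_db: float     # estimated steady background noise

def classify(snap: AcousticSnapshot) -> str:
    if snap.speech_present and snap.noise_floor_db > 65:
        return "speech_in_loud_noise"
    if snap.speech_present:
        return "speech_in_quiet"
    return "noise_only" if snap.noise_floor_db > 55 else "quiet"

def edge_mode_offsets(env: str, intent: str) -> dict:
    """Return feature offsets relative to the automatic (personal) program."""
    base = {"directionality": 0, "noise_mgmt_db": 0, "soft_gain_db": 0}
    if env == "speech_in_loud_noise":
        base.update(directionality=+1, noise_mgmt_db=+3)
    if intent == "enhance_speech":
        base["soft_gain_db"] += 2      # favor audibility of a soft talker
    elif intent == "reduce_noise":
        base["noise_mgmt_db"] += 3     # favor comfort over audibility
    return base

# Usage sketch: a tap in a noisy cafe with a soft-spoken barista in front.
snap = AcousticSnapshot(overall_level_db=72, speech_present=True, noise_floor_db=68)
print(edge_mode_offsets(classify(snap), intent="enhance_speech"))
```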

SPEAKER_00

This is so important, because early on in the commercial introduction of deep neural networks, some of the DNNs out there were static. In other words, the system had been trained on voices or phonemes or syllables, human speech sounds, and that's great, because then it could recognize those and do a better job processing them while leaving some of the background noise behind, unprocessed. But now we're getting away from the static DNN systems and into more of a dynamic deep neural network, and I think this is where the benefits that you're seeing in the newer systems come about, because we're able to process in real time.

SPEAKER_01

Correct. Our Neuro Sound processing includes an onboard DNN accelerator, so that in edge mode it's taking an additional bite of the apple to classify that environment, that unique situation the individual is in right now. It's not dependent on training of the model for that specific environment or, as you say, a less-than-dynamic environment, because it's taking into consideration, right then and right there, what the individual is hearing. When we first started, we simply optimized for clarity of what's in front. We've continued to enhance that feature to now allow individuals who want to use edge mode to say, I want best sound, which provides additional offsets beyond the personal program, our automated program. They can also select to enhance clarity even further in challenging environments, for someone who's soft-spoken or someone who's in a noisy environment, any way to improve clarity of the voice beyond what is possible from the typical personal acoustic environment classification program. Or, finally, they can choose to suppress the noise even more aggressively than what would be done automatically. By doing so, it personalizes to that environment for that individual, combining very sophisticated machine learning classification plus the DNN that allows for that optimization. We completed a study with Stanford last year on edge mode in comparison to the personal program, the one using very sophisticated classification plus optimization through directionality, noise management, whichever features are required in order to achieve that success. And when Stanford did this, they found that the additional benefits on speech intelligibility were about a dB and a half beyond what was achieved through the use of the personal automatic program. That is not comparing omni to directional, where we know the benefit is about 4 dB if you're lucky, without venting, and it can be cut from that down to 2 dB of benefit. This is saying that machine learning plus listener intent provided an additional one and a half dB or so of benefit on some of the speech measures they evaluated. That's a significant finding. If I can get 15 to 20 percent better word recognition, that's going to not only let me be better engaged in the conversation; ease of listening is likely improved, and the overall user experience is improved. And I think it's a harbinger of where we can go with DNN as we see computational power continue to improve and AI models continue to improve at that doubling every three and a half months or so.

SPEAKER_00

Yeah, and if I can frame that a little bit, because when you say one and a half dB improvement, most people don't get that. Here's the way it really works: when you're using premium directional hearing aids and you put them in the directional mode versus an omni mode, you're going to improve the signal-to-noise ratio typically by about two or three dB. If the stars align and everything's great, you might get four dB. So that signal-to-noise ratio is critically important for the patient, who needs a benefit in listening. They don't just need things louder, they need things clearer. And this becomes so important when you think about Mead Killion's work from 30 years ago, when he was first developing the QuickSIN. He said people with normal thresholds, normal hearing, normal listening ability need about a two to three dB signal-to-noise ratio to get 50% of the words correct. But then he said people with a traditional mild-to-moderate loss need about eight. So there's an awful lot of difference between somebody with normal thresholds and normal listening ability versus somebody with a mild-to-moderate loss. And it's not just about making it louder, it's about improving the signal-to-noise ratio. That, to me, is much more of a foundational issue. So when we can improve it by one and a half dB, that's roughly 15% of word recognition improvement. It's kind of a huge improvement, and I didn't want to let that go without commenting. But when we're talking now about a dynamic, deep neural network system, something like what we have now in edge mode, tell me about the impact of that on fall detection and general health and wellness, because now we're doing multiple processing at one time, in parallel.
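
To make the "one and a half dB is roughly 15 percent" framing concrete, here is a small illustrative calculation. It assumes a performance-intensity slope of about 10 percentage points per dB of SNR near the 50% point; that slope value is an assumption added for illustration, not a figure quoted in the episode, and real gains flatten out near the top and bottom of the curve.

```python
# Illustrative only: convert an SNR improvement into an approximate change in
# word-recognition score, assuming a ~10 %/dB slope near the 50% point.
SLOPE_PCT_PER_DB = 10.0  # assumed psychometric-function slope (illustrative)

def approx_word_score_gain(snr_improvement_db: float) -> float:
    return snr_improvement_db * SLOPE_PCT_PER_DB

for snr_gain in (1.5, 3.0, 4.0):
    pts = approx_word_score_gain(snr_gain)
    print(f"{snr_gain:.1f} dB SNR improvement -> about {pts:.0f} percentage points")
# 1.5 dB -> ~15 points, valid only on the steep part of the curve.
```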

SPEAKER_01

Yeah. Well, before we leave edge mode completely, I do want to talk about the continued evolution of that feature. When we started, we began only with edge mode situational, and we provided best sound, which had offsets intended to improve clarity of speech. Now we incorporate additional granularity to allow even further enhancements for speech clarity, whether it's a soft talker wearing a face mask, and hopefully we're out of that era, or behind some sort of screen, or whether it's someone in a noisy environment; it can optimize even more for that, or suppress more noise. But people said, well, edge mode is so great, why don't you just incorporate it into the personal program, our automated classification system? And we said, well, it may be. You know, I used to compare it to Ludicrous mode, or now Plaid mode, on a Tesla. You don't always want that extreme acceleration, and similarly you may not always want those aggressive offsets to provide clarity, even within the same person. Sometimes, situationally, at the beginning of the day I'm a little sharper; at the end of the day I may be a little more fatigued. If I'm at a conference and we're going to a reception and it's noisy, I may not want those extreme clarity offsets; I may want to suppress noise more in that situation. But patients and providers continued to say, if it's so good, make it automatic. And so in the latest product enhancements within Genesis AI, we now have the ability to use edge mode automatically. It will continue to adapt, rather than the end user having to tap in and tap out of edge mode throughout the day. So we're moving toward a more automated approach for the user experience. I have many patients who just say they want to put the devices in and set it and forget it, but they want the benefit of that enhanced clarity or that enhanced comfort. And now they have that with edge mode automatic on the top-tier devices. The lower-tier devices have the other variations of edge mode. This is one area, I think, that enables professionals to talk about automatic edge mode as a differentiating feature on higher-tier technology. I think even with sophisticated DNN algorithms continuing to adapt and evolve, we're going to see the need to combine machine and human to deliver better results, by taking into consideration sophisticated acoustic processing and categorization of sound environments plus the listener's intent to say, right here, right now, this is what I want to hear.

SPEAKER_00

Yeah, and you have to start somewhere. We know historically that patients do like having control over their hearing aids. There was a period of time, about 10 or 15 years ago, when there was a big discussion about whether we should have a volume wheel or not, because we had so many automatic compression-based circuits that we could vary the volume and keep it at MCL most of the time. But I think as we progress toward as good a unit as we can possibly build, we have transparency times two. What I mean by that is, I think cosmetics is a huge issue. We talk about access and affordability, and that got us to OTC, but the numbers haven't changed a lot, and it's been almost two years of OTC. I think the third leg of that stool, which frankly wasn't brought up very much at the FDA, is the issue of cosmetic appeal. This is the nice thing about the custom products that you have available: custom products can often be nearly invisible, invisible-in-the-canal or in-the-canal (ITC) styles. These types of products make it, I think, much more pleasant for the patient, because for a lot of people, and I was doing a paper on this recently, if you search in 2024 for cosmetics and hearing aids, you see that's still a big concern for so many patients. Now, my point in mentioning all this is that these products, when you're talking about Genesis AI, are available in custom, they're available in the smaller products, and, frankly, the BTE is what I've always referred to as a micro BTE. And I think that's so important because the goal, I think, of many fittings is to be transparent times two. What I mean by that is it should physically not detract; it should be pleasant, it should be attractive, it should be invisible if possible. And acoustically, we want a sound that is so comfortable and natural that we quite frankly forget it's even there. You and I actually wrote an article about this, I think 14 or 15 years ago, that hearing aids should be transparent physically and acoustically. And I think that's what you're approaching.

SPEAKER_01

Yeah, the cosmetic concern still exists; stigma still exists. But I'm finding that among the younger generation, baby boomers, first-time users, there is less stigma, yet they have higher expectations for the acoustic performance and benefit that can be provided in every environment they want to listen in. Now, getting into that, one of the areas I think is often not given its due is the user experience. Many of the patients I work with, to your point about transparency, acoustic transparency, physical transparency, don't want to have to always engage with the device. Hence edge mode automatic. One other thing I've seen: typically, in the past, when I'm setting up these modern devices, I'll use the automated program, what we call the personal program, that automatically adapts, but then I'll give a situational program like a restaurant, crowd, or outdoor program. One thing I'm seeing since we introduced edge mode several years ago, in my hands at least, when I look at the data logging for patients and I instruct them how to use the manual programs plus edge mode, and especially edge mode automatic now, is that they're using the personal program and edge mode in lieu of manual programs. They don't want to interact and have to remember which program, which beep, they were told to use in restaurants. Instead, all they have to do is remember to use personal plus edge mode. And it's combining the machine and the human intention to really get at the bottom line of what you're saying. Choose the form factor that's important and appropriate based on your hearing loss. Look at how you want to engage, through an app or by tapping on the device. Edge mode is a feature that does not require the patient to have their phone with them; they can activate or deactivate edge mode on the device. It truly was not named by accident. It's edge computing, because everything you need to run edge mode is on board the hearing aids, without requiring a connection to the phone, for those who want it.

SPEAKER_00

Which is wonderful, really, because nobody wants to carry a lot of stuff. Anyway, listen, before I let you go, I do want to cover a little bit about DNN and multiple-path processing and the fact that you still have these fall alerts coming through. Tell me about fall alerts and health and wellness in the newer products.

SPEAKER_01

Yeah, so you've hit on one. We've been using inertial measurement units in our devices since 2018 that are capable of monitoring physical activity, and we've continued to evolve that feature so that within the app, the My Starkey app, on the Genesis products, it will automatically log how many steps a patient takes throughout the day, whether they're walking, running, or riding a bike, and it will log those activities. Why is that important? I can do that on my wrist. Yes, you can do that part on your wrist, but you can't do the monitoring of social engagement, which is so critical given all of the research suggesting that loneliness, isolation, depression, and possibly even cognitive decline are associated with untreated hearing loss. The physical activity part matters because of the direct link between hearing loss and cardiovascular disease, another comorbidity we know is significant, and because even a mild hearing loss places you at roughly three times the risk of a fall compared with your normal-hearing counterparts. So in 2019 we introduced a fall detection feature that uses that inertial measurement unit, when the patient is wearing their hearing aids and connected to their phone, to detect a characteristic movement signature of a fall. The end user can select up to three individuals to serve as recipients of a fall alert via text message. If I'm wearing my devices and I fall, I hear an alert, in natural language, telling me a fall was detected and an alert message has been sent. So I know I'm connected to my phone and it went out. Then, when one of my three contacts opens their phone and looks at the text, it will tell me the alert was received. That lets me know I'm not that old woman from the 80s commercial who fell and couldn't get up and wondered whether anyone knew about it. I know when someone received it, and then they're looking at their phone, hopefully calling me or texting me. And if I can't respond, maybe because I was knocked unconscious, they can use location services. You talked about geotagging earlier; another benefit is they can see on a map where I was when I fell, call emergency services, or come and see if I'm okay. Falls are a tremendously expensive health problem, not only economically but emotionally, causing stress to family members. It involves the whole family, the whole village, really, when an aging person wants to live in their own home and the family members are worried about their loved one who may be at risk of a fall. And as you know, a fall often starts a downward spiral in health. The fall may not kill you, but it can contribute to a lack of mobility, isolation, loneliness, depression, that downward health spiral. My own mother died several years after suffering a fall where she broke her hip.
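
As a very simplified, hypothetical sketch of how an IMU-based fall detector might work in principle, a dip in acceleration magnitude followed by an impact spike, then a text alert to up to three contacts, consider the Python below. The thresholds, the signature check, and the alert path are invented for illustration and are far simpler than any production detector.

```python
# Hypothetical, simplified fall-detection sketch: look for a free-fall-like dip
# in acceleration magnitude followed by a hard impact spike, then notify contacts.
# Thresholds and message text are illustrative, not Starkey's algorithm.
import math

FREE_FALL_G = 0.35   # magnitude well below 1 g suggests free fall
IMPACT_G = 2.5       # large spike shortly afterward suggests impact

def magnitude(sample):
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(accel_samples, window=25):
    mags = [magnitude(s) for s in accel_samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G and any(x > IMPACT_G for x in mags[i:i + window]):
            return True
    return False

def send_fall_alert(contacts, location):
    for person in contacts[:3]:   # up to three recipients, as described above
        print(f"Text to {person}: fall detected near {location}")

# Usage sketch: accel_samples would stream from the hearing aid's IMU.
samples = [(0.0, 0.1, 1.0)] * 10 + [(0.0, 0.0, 0.2), (1.5, 2.0, 2.1)] + [(0.0, 0.1, 1.0)] * 10
if detect_fall(samples):
    send_fall_alert(["Ann", "Ben", "Cara"], "home")
```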

SPEAKER_00

That's right. Breaking a hip is just about the most common result of a fall in an older person, and it quite often starts that downward spiral. It's very, very common. If you look that up today, in 2024, you'll see it is still a very important concern. You have to look out for these things, and the sooner the patient is seen and treated, the better, of course.

SPEAKER_01

And having that conversation matters for clinicians; hearing and balance is within our scope of practice. We should be looking at the patient beyond just the two ears we're doing real-ear measurements on. We're thinking about the patient's welfare as a whole, and having that conversation may initially require a little bit of adaptation. The CDC in the U.S. has come up with three questions that can effectively determine whether an individual is at elevated risk for falls: do they worry about falling, have they fallen in the last year, or have family members talked about it? Those three questions will sort it out, and it's not even requiring you to expand your scope of practice all that much. A fall detection feature is great, and we're the first and only one in our space to have that feature; we think it's something that enables professionals to operate at the top of their game in terms of best practice, and it uses AI in the sense of those movement signatures, the switching to the phone, the alert sent, the natural language processing, a lot of things you can take for granted. But we want to go further, and we want to begin to use those sensors proactively. The CDC has developed a set of measures called the STEADI protocol, Stopping Elderly Accidents, Deaths, and Injuries, with objective measures, typically done in a clinical environment, that assess an individual's balance, strength, or gait. They can identify deficiencies, but that typically requires a professional to watch a person doing these exercises: simply standing up and sitting down, how many times can they do that in 30 seconds without using their arms; walking, turning around, and sitting back down; or balancing with their feet side by side or one in front of the other. We hypothesized we could do this using that same inertial measurement unit. So we again partnered with Stanford, and there are some very promising results showing that individuals can do these measures in the comfort of their own home. Then, if they identify a weakness in balance, strength, or gait, they can do exercises to improve that deficiency, ultimately with the long-term goal of helping individuals help themselves to reduce fall risk before a fall occurs. All of that involves artificial intelligence.
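
As a rough illustration of how one STEADI-style measure, the 30-second chair stand, might be scored from motion data, here is a hedged sketch. The peak-detection threshold and the pass/fail norm are placeholder values chosen for the example, not the CDC's or Starkey's actual parameters.

```python
# Hypothetical sketch: count sit-to-stand repetitions in a 30-second chair-stand
# test from a vertical-acceleration trace, then compare against an age-based norm.
# Threshold and norm values are illustrative placeholders.
import numpy as np
from scipy.signal import find_peaks

def count_chair_stands(vertical_accel: np.ndarray, fs_hz: float) -> int:
    # Each stand-up shows as a distinct bump above the resting ~1 g baseline.
    peaks, _ = find_peaks(vertical_accel, height=1.3, distance=int(fs_hz * 1.0))
    return len(peaks)

def below_norm(reps: int, age_norm: int = 12) -> bool:
    return reps < age_norm   # fewer reps than the norm flags a strength deficit

fs = 50.0  # samples per second
t = np.arange(0, 30, 1 / fs)
trace = 1.0 + 0.5 * (np.sin(2 * np.pi * t / 3.0) > 0.95)   # synthetic trace, ~10 bumps
reps = count_chair_stands(trace, fs)
print(f"{reps} chair stands in 30 s, deficit flagged: {below_norm(reps)}")
```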

SPEAKER_00

It does, and I think it's a great application. Two things come to mind. Number one, some people will still say, well, what does this have to do with the hearing aid? When you go back to the study you quoted, I should say that was Frank Lin's study, I think it was 2011, maybe 2012, in the Archives of Otolaryngology. He found that as hearing loss increases to moderate, I believe at the level of a moderate sensorineural loss, there was three times the risk of a fall. So that's where that comes from. And these are our patients. When we're dealing with older patients who have more significant hearing loss, this is the patient for whom this is so important. So, Dave, before I let you go, I want to talk a little bit about the clinician and end-user benefits of this advanced AI. What can you tell me about things, even seemingly trivial things, like changing wax guards? What can you now do with this advanced AI that we couldn't do two years ago?

SPEAKER_01

Sure. Well, the first one, how do I know when to replace wax guards? I would say, when it's necessary. We use an acoustic-based assessment, and storage of those measurements, in a feature we call self check. I believe we're still the only one in our space to have this feature: a daily or weekly diagnostic, or anytime a patient starts to notice they're not hearing as well with the devices and has to sort out the reason, they can run a test, in just a few seconds, that will tell them whether the microphones are working, whether the receivers are blocked for any reason, including wax, and whether the circuit is functioning. All of that can be done by the patient in the comfort of their own home, or wherever they have an environment quiet enough to run the self check feature. It's a great example of something that combines acoustic measurements stored on the device with a quick automated test to ensure the devices are functioning properly wherever the patient is. Another clinical benefit relates to real-ear measurement. We know that real-ear measurement is part of best practice, and yet a lot of clinicians find they don't have the time to do it, or they don't see the benefit or the value. You and I have preached on this for decades.
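
Before moving on to real-ear measurement, here is a hedged sketch of the idea behind an in-situ self check: compare a quick acoustic measurement for each component against a stored baseline. The component names, baseline values, and tolerance are invented for illustration and are not Starkey's implementation.

```python
# Hypothetical sketch of a hearing-aid self check: play a brief internal stimulus,
# measure the response for each component, and compare with a stored baseline.
# Tolerances and component names are illustrative only.
BASELINE_DB = {"microphone": 0.0, "receiver": 0.0, "circuit": 0.0}
TOLERANCE_DB = 6.0   # deviation beyond this suggests a blockage or fault

def run_self_check(measured_db: dict) -> dict:
    results = {}
    for part, baseline in BASELINE_DB.items():
        deviation = measured_db.get(part, float("-inf")) - baseline
        results[part] = "ok" if abs(deviation) <= TOLERANCE_DB else "needs attention"
    return results

# Example: a wax-blocked receiver shows a large drop relative to its baseline.
print(run_self_check({"microphone": -1.0, "receiver": -14.0, "circuit": 0.5}))
# {'microphone': 'ok', 'receiver': 'needs attention', 'circuit': 'ok'}
```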

SPEAKER_00

Oh, yeah.

SPEAKER_01

We now have a feature called Auto REM, incorporated into our devices, that enables a clinician who has a real-ear measurement system they're comfortable with and use for best practice to speed up that initial target match, by allowing the system, using AI, to match to whatever targets they choose, including proprietary targets, because the proprietary targets can now be loaded into the real-ear system. The clinician doesn't have to enter them; they're already stored there. It will match those initial targets and acoustically fit the device to the patient's ear, taking into consideration all of the individual acoustic parameters measured via real ear, but faster, roughly twice as fast as if the clinician were matching targets manually. That's something machine learning is really good at: simply matching to a given target, or finding specific adjustments, and doing it very rapidly.
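
Conceptually, automated target matching is an iterative measure-compare-adjust loop per frequency band. The sketch below is a hedged illustration with made-up targets, band gains, and a simulated probe-microphone measurement; it is not Starkey's Auto REM algorithm.

```python
# Hypothetical sketch of automated real-ear target matching: measure the real-ear
# output in each band, adjust the band gain by part of the remaining error, and
# repeat until every band is within tolerance. All numbers are illustrative.
TARGET_DB = {250: 55, 500: 60, 1000: 65, 2000: 68, 4000: 62}   # prescription targets
TOLERANCE_DB = 3.0

def measure_real_ear(gains, ear_offset):
    # Stand-in for a probe-microphone measurement of output in each band.
    return {f: gains[f] + ear_offset[f] for f in gains}

def auto_match(gains, ear_offset, max_iters=10):
    for _ in range(max_iters):
        measured = measure_real_ear(gains, ear_offset)
        errors = {f: TARGET_DB[f] - measured[f] for f in gains}
        if all(abs(e) <= TOLERANCE_DB for e in errors.values()):
            break
        for f, e in errors.items():
            gains[f] += 0.8 * e   # step partway toward the target each pass
    return gains

start_gains = {f: 40.0 for f in TARGET_DB}
ear = {250: 8, 500: 10, 1000: 14, 2000: 18, 4000: 12}   # individual ear acoustics
print(auto_match(start_gains, ear))
```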

SPEAKER_00

And Auto REM, I think you guys introduced that about four or five years ago. This is a nice application of that same type of technology. It's taking something you have, that you're improving, and asking, how else can we use this? And I like the idea quite a lot of measuring the SPL to make sure it is where it's supposed to be. That's brilliant.

SPEAKER_01

Yeah, there are a lot of acoustic factors, as we talked about with edge mode, relating to speech understanding in noise. We talked about the health and wellness features. And there are a lot of other features that we take for granted, like self check or Auto REM. The last one is using that natural language processing that I mentioned earlier. Sure, tell me about that. Consider that in the US there is a significant number of aging individuals who are on medications, chronic medications, multiple times every day, whether they're taking them coming off a surgery or after some illness, or whether it's a chronic regimen. We know that compliance with a chronic medication regimen is only about 50 percent.

SPEAKER_00

Right.

SPEAKER_01

One of the things we thought was, could we allow manual reminders to be put in through the app that say, take your medication three times a day, and you'll hear it in your own voice. So, take your medication, drink water; dehydration is a real issue in the aging population.

SPEAKER_00

So much better than a beep alert, right?

SPEAKER_01

So much better. You hear it in your own voice. But then we've also taken it a step further with smart reminders to tell the person: put your hearing aids in in the morning; run self check once a week; it'll remind them to do it. All of these things really fill the three buckets of sound quality and speech intelligibility, health and wellness features, and using that natural language processing for everything from intelligent reminders to real-time translation and transcription. The mind just boggles in terms of where we're going. I know we're out of time, but I'm really excited. And I don't want to leave this conversation before I say that the role of the clinician is still paramount in this process: to engage with the patient, to find out how they want their user experience to be, to pick out the best form factor, to use best practice to ensure they're fitting the devices to the patient, and to follow that patient on their journey.

SPEAKER_00

Exactly. And David, I'm so glad you said that, because to me it's all about whether we can understand the patient's needs, goals, and expectations, and whether we can meet them. It's so important, and often it doesn't get done, which is a shame, so I'm glad you underscored it. You're right, we are out of time. I want to thank you for being on the other side of the mic of Sound Bites. I enjoyed learning about this new product and this new process, and I'll look forward to maybe doing this again sometime soon.

SPEAKER_01

Thanks very much, Doug. Appreciate it.

SPEAKER_00

To our listeners, thank you for listening to this episode of Starkey Sound Bites. If you enjoyed the conversation, please rate and review us on your preferred podcast platform and share it with your friends, colleagues, and network. You can also follow us or hit subscribe to be sure you don't miss an episode. We'd also love to know what's on your mind. What questions do you have for our hearing experts? Please send an email to soundbites@starkey.com. We'll be featuring your questions and getting answers from our Starkey experts on future episodes. Thank you so much. Have a wonderful afternoon.