
Hearing Matters Podcast
Welcome to the Hearing Matters Podcast with Blaise Delfino, M.S. - HIS! We combine education, entertainment, and all things hearing aid-related in one ear-pleasing package!
In each episode, we'll unravel the mysteries of the auditory system, decode the latest advancements in hearing technology, and explore the unique challenges faced by individuals with hearing loss. But don't worry, we promise our discussions won't go in one ear and out the other!
From heartwarming personal stories to mind-blowing research breakthroughs, the Hearing Matters Podcast is your go-to destination for all things related to hearing health. Get ready to laugh, learn, and join a vibrant community that believes that hearing matters - because it truly does!
Intelligent Hearing: Dr. Dave Fabry on AI Advancements in Hearing Aids
Join us for a fascinating conversation with Dr. David Fabry, Chief Hearing Health Officer at Starkey, as we explore the groundbreaking innovations in hearing aid technology. Discover how advancements in machine learning and deep neural networks are transforming these devices from manual tools into intelligent systems that automatically adjust to any acoustic environment, enhancing the user's auditory experience.
Dr. Fabry takes us through an illuminating journey into the realm of artificial intelligence in audiology. By drawing parallels between a child's language acquisition and the learning process of deep neural networks, he vividly explains how these technologies have evolved to dramatically improve signal-to-noise ratios. We dive into the transition from traditional machine learning to modern DNN, highlighting how real-time processing and open real-time analysis are revolutionizing noise management by offering a fresh perspective on sound environments.
In our final segment, we explore the significant role of AI in enriching hearing aid features, from binaural fusion to sound localization and the integration of voice assistants. With insights on the exponential growth of computational power and the pivotal role of apps like HearShare, we discuss how AI technologies foster user independence, safety, and improved quality of life. Tune in to uncover how these cutting-edge advancements are not only enhancing auditory perception but also redefining the future of hearing aids.
While we know all hearing aids amplify sounds to help you hear them, Starkey Genesis AI uses cutting-edge technology designed to help you understand them, too.
Click here to find a provider near you and test drive Starkey Genesis AI!
Connect with the Hearing Matters Podcast Team
Email: hearingmatterspodcast@gmail.com
Instagram: @hearing_matters_podcast
Twitter: @hearing_mattas
Facebook: Hearing Matters Podcast
Thank you to our partners: Sycle - built for the entire hearing care practice. Redux - the best dryer, hands down. CaptionCall by Sorenson - Life is calling. CareCredit - here today to help more people hear tomorrow. Fader Plugs - the world's first custom adjustable earplug. Welcome back to another episode of the Hearing Matters Podcast. I'm founder and host, Blaise Delfino, and, as a friendly reminder, this podcast is separate from my work at Starkey.
Dr Douglas L. Beck:Good afternoon. This is Dr Douglas Beck with the Hearing Matters Podcast, and today we are interviewing my dear friend, Dr. David Fabry. Dr Fabry is the Chief Hearing Health Officer at Starkey, where he has worked, except for a brief sabbatical, since 2009. He is a past board member and president of AAA, past editor of Audiology Today and the American Journal of Audiology, and a current board member of the American Auditory Society. Dr Fabry formerly worked at the Mayo Clinic, Walter Reed Army Medical Center and the University of Miami Medical Center. He is licensed in Minnesota, Florida, California and Rwanda, because who isn't? And he is husband to Liz, dad to Lauren and, most importantly, probably, grandpa to Charlotte. Hi, Dave!
Dr. Dave Fabry:Hi Doug, how are you? Yeah, much to my daughter's chagrin, because she says that now, since Charlie entered the world, she realizes she's second fiddle. And I always tell her she's my favorite child, she's my only child, but grandchildren are an experience unto themselves. It's fantastic, and they win every battle.
Dr Douglas L. Beck:All right. Well, sorry, Lauren, but life is life.
Dr. Dave Fabry:All right.
Dr Douglas L. Beck:So, David, I want to talk to you a little bit today about some of the more sophisticated technology that Starkey has introduced and, in general, the advantages and the new areas that we have open to us. I mean, through new technology we have a much better representation of signal-to-noise ratio, we have much better directionality, and we have much better overall sound quality. So I want to ask you: what's the difference between machine learning, which I think many of us are comfortable with, versus DNN, which is a relatively new technology? I mean, you guys introduced deep neural networks six, seven years ago, but the newer products have a DNN that operates a hundred times faster. So let's just go back to machine learning versus DNN. What's the difference?
Dr. Dave Fabry:Many hearing care providers are probably comfortable with this. When you and I first became audiologists, remember, back in the previous millennium, in the way-back machine, if we wanted to fit a patient with hearing aids that had directional microphones, we couldn't leave those directional microphones on all the time, because patients would often report bacon frying in their hearing aids, or they could hear a sizzle, because the two microphones often raised the noise floor. That was the early 80s, let's say, when I first became a professional. So at that time, if we wanted to give the patient control over engaging or disengaging the directional microphones for situational use, we actually had little switches on the hearing aids that would change programs. With the advent of made-for-iPhone hearing aids that were compatible with an app, and now made-for-Android, the user interface is often not only onboard controls but the user app, and so that has eliminated the need for those switches. But the other area that has been impacted is machine learning, and for 10 or 15 years, really, we've had devices that, rather than requiring intervention on the patient's part to switch to a quiet or noisy program, continuously monitor the listening environment as the patient goes throughout their day and detect when there is speech present, noise present, some combination of those, maybe music, maybe wind noise; quiet might have some different acoustic parameters that we want to apply. So there I've given you five acoustic environments where people want to be able to wear their hearing aids. Rather than requiring five different settings on a hearing aid, or putting manual programs in an app, now even the lower-tiered devices automatically monitor the environment, detect when noise is present, detect when speech is present, or some combination of those, and automatically adapt and engage features like directional microphones, noise management and wind noise suppression only when appropriate, and that is through machine learning classification systems.
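A minimal sketch of that classify-then-configure idea, in generic Python. The feature names, thresholds and presets are invented for illustration; real systems learn or tune these from data, and nothing here represents any manufacturer's actual rules.

```python
# Illustrative only: a toy acoustic-environment classifier in the spirit of the
# machine-learning systems described above. Each detected class maps to the
# features the hearing aid would engage.

def extract_features(frame):
    """'frame' is a dict of precomputed acoustic measurements (assumed upstream)."""
    return frame["level_db"], frame["modulation_depth"], frame["spectral_tilt"]

def classify_environment(frame):
    level, modulation, tilt = extract_features(frame)
    if level < 40:
        return "quiet"
    if modulation > 0.6 and tilt < 0:
        return "speech"
    if modulation > 0.4:
        return "speech_in_noise"
    return "noise"

FEATURE_PRESETS = {
    "quiet":           {"directional_mic": False, "noise_mgmt": 0, "wind_suppression": False},
    "speech":          {"directional_mic": False, "noise_mgmt": 1, "wind_suppression": False},
    "speech_in_noise": {"directional_mic": True,  "noise_mgmt": 3, "wind_suppression": False},
    "noise":           {"directional_mic": True,  "noise_mgmt": 4, "wind_suppression": True},
}

frame = {"level_db": 68, "modulation_depth": 0.5, "spectral_tilt": -3.0}
env = classify_environment(frame)
print(env, FEATURE_PRESETS[env])  # speech_in_noise -> directional mic + noise management
```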
Dr. Dave Fabry:So everything that we learned when we were in grad school, and we've both been in environments where we were working in auditory research, tells us we can characterize differences in spectral content, timing or periodicity to differentiate between speech, noise, music, et cetera. Those features really are how machine learning systems monitor the environment: we program them to say, here are the spectral characteristics of speech and of noise and of music. And where we first started getting used to this umbrella definition of artificial intelligence is that we have to train these models, whether it's a machine learning model or a DNN model. With a machine learning model, first we give it the rules. We say, here are the spectral and temporal and timing differences, and then we feed it a bunch of different acoustic environments.
Dr Douglas L. Beck:The crux of what I want to get to, Dave, if you don't mind, because the explanation of the multiple parameters and noise environments is critically important to understand: machine learning is going on in parallel with that, correct? But now what we've got is DNN. Distinguish for me, let's say, sophisticated machine learning, much like what you just described, versus what you can now do with deep neural networks.
Dr. Dave Fabry:I like to use the analogy of an acoustic deck of cards, and you're using a machine learning system to sort those cards into piles. Let's say first we want to sort by suit. So the machine learning task in that case is to look for hearts, diamonds, clubs or spades, and it looks at those and sorts them into four neat piles. But let's say instead you want to sort by face cards versus numbered cards: different rules, different features that differentiate face cards. Or the suits could be sorted by color. So it all depends on the sophistication of the features that you're feeding a machine learning algorithm to answer the question you want answered. Whether you're playing go fish or euchre, you're separating certain cards out and you've got to sort them. With a machine learning system, you tell it the rules: search for red, search for a heart versus a diamond, search for a spade versus a club, and so on. Then see how accurately it sorts; it's as if you gave those rules to someone who knew nothing about playing cards, said, now sort them into the piles I'm expecting, and checked how accurately they sorted the deck against an accomplished card player. Now extend that analogy to acoustics and ask: in an acoustic environment, what would you say is the most critical thing a hearing aid user wants out of the performance of their hearing aids? We know what the drivers of hearing aid performance are, but what would you say is the most important? Speech in noise, right. So now, with a DNN algorithm, instead of giving it the rules and saying, here, spectrally and temporally, are the differences between speech, noise, music, et cetera, we feed it lots and lots of stimuli that are speech alone, noise alone, maybe windy environments, quiet environments, a whole bunch of acoustically isolated environments, and then, importantly, speech and noise mixed at a variety of signal-to-noise ratios, with different reverberation, different real-world kinds of situations. But in a DNN model, instead of saying, here are the spectral, temporal and timing differences, we say, here's the target, and let's say the target in this case is to always pull speech out as much as possible and optimize that signal for comfort, clarity or audibility, and then we put it on that task.
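For readers who want the target-driven idea in concrete terms, here is a minimal, hypothetical sketch in PyTorch: the network is never handed spectral or temporal rules, only noisy mixtures and the clean-speech target, and it learns its own per-frequency gains. Network size, shapes and names are illustrative assumptions, not any product's actual model.

```python
# Toy "learn from the target, not the rules" sketch: a small network predicts a
# time-frequency mask so that mask * noisy spectrum approaches clean speech.
import torch
import torch.nn as nn

FFT_BINS = 257  # e.g., magnitude bins of a 512-point STFT frame

class MaskNet(nn.Module):
    """Tiny network that predicts a per-frequency gain (0..1) for each frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FFT_BINS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, FFT_BINS), nn.Sigmoid(),
        )

    def forward(self, noisy_mag):
        return self.net(noisy_mag)

model = MaskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(noisy_mag, clean_mag):
    """One step: the 'target' is clean speech; across many SNRs and rooms the
    net learns which time-frequency regions to keep."""
    mask = model(noisy_mag)
    enhanced = mask * noisy_mag
    loss = loss_fn(enhanced, clean_mag)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch: 8 frames of random "speech" mixed with random "noise".
clean = torch.rand(8, FFT_BINS)
noise = torch.rand(8, FFT_BINS) * 0.5
print(training_step(clean + noise, clean))
```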
Dr. Dave Fabry:And it's similar to the analogy I always use: when children are first learning language, or when you're trying to get your grandchild to say the name you've chosen. Mine is Papa, so I'm always trying to get her to say Papa. She doesn't know anything about spectral or timing structure or phonetics or anything; she's a year and a half old. But what she knows is that when this gray-haired guy comes into the picture and he's going Papa, Papa, Papa, she associates that sound with me, and she begins developing her own rules for Papa. That's a great analogy: she has no rules, no one told her the rules. All she knows is these people come into her visual field and they go Papa or Gigi or Mama or Dada, and she begins to associate that, and then she starts to develop her own rules. That's DNN, the way that the brain is learning language.
Dr Douglas L. Beck:I love that. I'm going to steal that in my next lecture.
Dr. Dave Fabry:Steal away, yeah.
Dr Douglas L. Beck:Now let me ask you a question. About 15 years ago, you and I and some other folks in the profession were talking about artificial intelligence, but back in the early days we would speak about AI mostly with regard to machine learning. We didn't differentiate, because back in those days there was only machine learning, and so I think the term artificial intelligence was pretty much dismissed, I want to say eight to 10 years ago, and we stopped using it in the profession. And now it's back.
Dr. Dave Fabry:Everyone's using it now. Yeah, I mean, everywhere you turn, my toothbrush has artificial intelligence. It's become ubiquitous and it's become a buzzword, and I think it is important. I hope we go into the weeds a little bit here, because I know sometimes people say you're getting into the weeds, but that's where all the good fish are. So let's go into the weeds a little bit to try to really understand and differentiate between AI, machine learning and DNN.
Dr Douglas L. Beck:Yeah, very cool. Well, going into the weeds here, tell me, in your experience and your knowledge as a researcher, and with current, very sophisticated DNN: is DNN better able to improve the signal-to-noise ratio given steady-state noise, which is what we used to say about machine learning, right? If it's steady-state background noise, it's much easier for the circuit to suppress it. Does that matter anymore?
Dr. Dave Fabry:Well, I think DNN is capable of improving signal-to-noise ratio in dynamic environments better than machine learning. Because, again, remember, with machine learning we've given specific rules, and as the signal-to-noise ratio starts to get poorer it becomes more difficult, even in static environments, but especially in dynamic environments where someone's at a cocktail party and they're moving into busier and quieter areas, but there's always noise present. And again, even in DNN models, you can give different directives. You can say, always optimize speech, or you can say, always suppress noise. Those may seem like the same thing, but they're very different objectives, because when you're saying always suppress noise, you could reduce noise but not appreciably enhance speech as much as you might if you're in the other bucket, where you're saying always optimize speech by trying to identify it and then amplify it when speech and noise are mixed together. If you're only using noise management, and now I'm going way back to my master's thesis, which was working on some of the first digital noise suppression systems that used a single microphone, and you and I both know that during that era a lot of people were making claims about noise elimination and magically reducing noise, but through a single microphone, when speech and noise are mixed, you can reduce noise but you'll often reduce speech as well. And really, many of the DNN algorithms to this point have been targeted at reducing noise, without really taking into consideration directional microphones and how they adapt, binaural input and different things.
Dr. Dave Fabry:And that's where I think we start to see DNN be more successful, because, making the analogy back to the grandchild, it's not encumbered by the biases of knowing what rules, what ways in the lab or in past clinical behavior, we've used to try to enhance speech as much as possible in the presence of noise. Now we take more of an open-book approach. A friend of mine, Mike Maddox, says you can't read the label when you're inside the jar. You and I have been inside the audiology jar for 40 years plus, and looking at it with a beginner's eyes, that child's eyes, the way that DNN can, not encumbered by our past biases of saying, oh well, whenever a patient has this complaint I always do that, it can look at dynamic environments differently, incorporating different features. Because it's one thing to classify the environment; the other is, then what are you going to do about it? And then what?
Dr Douglas L. Beck:Well, that's always the question, right? Because we can always detect noise. Noise is easy to detect, but it's very difficult to eliminate with traditional circuitry such as machine learning. Now, when you're talking about deep neural networks, one thing that I think can be confusing: we used to have deep neural networks that were trained on specific speech sounds, right, and it was kind of a closed loop. They weren't actively, openly processing; they were processing against a memory bank of sounds that had been acquired. The newest DNNs are doing this all in real time. Is that right?
Dr. Dave Fabry:Yeah, they're in the wild. And again, you still have to train the models to identify how this specific DNN model is going to behave. For example, like I said, you could say, I specifically want a target to always enhance speech as much as possible, or always suppress noise as much as possible. Our Edge Mode feature has the capability of either providing best overall sound, combining intelligibility and sound quality, or specifically going after speech audibility. The razor's edge is that, in chasing audibility in a DNN model, you can sometimes affect sound quality. Extreme speech enhancement algorithms can actually reduce sound quality, and then it becomes a fulcrum as to how much you can try to improve speech without impacting sound quality adversely, because hearing aid users want speech in noise to be audible, but they also want sound quality to be good, and we've often thought of those as at odds with each other.
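One way to picture that fulcrum is as a single tuning weight between the two goals. This is purely a conceptual sketch; the metric names and numbers are placeholders, not Edge Mode's actual objective.

```python
# Conceptual trade-off: how aggressively to chase speech audibility versus
# preserving overall sound quality. Values are invented for illustration.
def combined_objective(speech_benefit, quality_penalty, aggressiveness):
    """aggressiveness in [0, 1]: 0 = favor sound quality, 1 = favor audibility."""
    return aggressiveness * speech_benefit - (1.0 - aggressiveness) * quality_penalty

# A very aggressive setting can score worse overall if it degrades quality enough.
print(combined_objective(speech_benefit=6.0, quality_penalty=4.0, aggressiveness=0.9))
print(combined_objective(speech_benefit=4.0, quality_penalty=1.0, aggressiveness=0.5))
```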
Dr Douglas L. Beck:And it makes good sense. I mean, as you're using more and more technology, as you're attacking the sound more aggressively, there are trade-offs, and this is a natural consequence of that. So let me ask you a question: when people say my hearing aid has artificial intelligence, does that always mean it has deep neural networks?
Dr. Dave Fabry:I think of AI as an umbrella term that encompasses both machine learning and, as a subset of that, deep neural networks. So speaking about machine learning as artificial intelligence is not unreasonable, because it's a rules-based approach that can operate on its own to categorize or classify, think of a deck of cards, into different sets. The success of that machine learning algorithm is then judged, in our case, by how a human would judge the environment: would this be a noisy environment, a quiet environment, a musical environment, whatever, versus how well the machine does that. The issue is that most modern machine learning AI systems are only about 80 to 85% accurate at best, and that's because speech can be either a stimulus of interest or a jammer. Music, similarly, can be something you want to enjoy and listen to. Well, you're a musician, and I like to hang with musicians because I'm a drummer, but music sometimes is what they're blasting overhead in an elevator while we're carrying on a conversation, and other times it might be somebody we want to stop and listen to. We want to identify and process those differently. One more thing I'd like to say: one of the reasons we haven't really been talking about DNN in hearing aids until more recently is that it requires tremendous computational power. As you mentioned, in many cases you still require training, and DNN models can go down rabbit holes if, in training, they go down a wrong alley, because now they're not encumbered by rules, you're not giving them the rules, and they can learn the wrong way. But with enough samples, you feed them millions of different acoustic environments and then they develop the model. The classic DNN experiment is cat versus dog, but when we're thinking acoustic environments, we make that analogy. So it requires massive amounts of data to be fed into this model, and then you need computational power.
Dr. Dave Fabry:And one of the things that blows my mind: you're familiar with Gordon Moore and Moore's law, right? So you knew him as a kid? Yes. In the sixties, Gordon Moore predicted that the number of transistors on an integrated circuit would double every two years, 18 months to two years, there have been both, but that model held up until his death. He just died last year. And Moore's law, even though they predicted its demise many times throughout his life, held up much longer than he thought it would. But AI has been the model where they've said, well, that and a number of other factors, quantum computing, et cetera, are going to make Moore's law obsolete. So, using that analogy of 18 months or two years, what do you think the doubling rate for computational power in AI is right now? What period of time?
Dr Douglas L. Beck:If I had to guess, based on what I know from the industry and electronics, I would guess three to four months. Yeah, it's three and a half months. Seriously, we didn't prep on that, that was a guess.
Dr. Dave Fabry:Three and a half months, the computational power of AI doubles, and so we're seeing explosive benefits and opportunities and expansions of this.
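Taking the two doubling periods quoted here at face value, a quick back-of-the-envelope comparison shows why the difference matters over even a short product cycle:

```latex
\text{growth over } T \text{ months} = 2^{T/\tau}, \qquad
\underbrace{2^{24/24} = 2\times}_{\tau = 24\ \text{mo (Moore's law)}}
\quad \text{vs.} \quad
\underbrace{2^{24/3.5} \approx 2^{6.9} \approx 116\times}_{\tau = 3.5\ \text{mo (AI compute)}}
\quad \text{over two years.}
```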
Dr. Dave Fabry:And although we don't yet have an NVIDIA chip in hearing aids, because they would cost about $40,000, we're seeing the benefits of some of those low-power, high-computational-power chips, and we've incorporated that in our latest product, which, as you mentioned, provides a hundred times more computational power than our previous generation.
Dr. Dave Fabry:It's startling, the advances, and that's the reason we're seeing this transition from machine learning to DNN. There's a room and a place for both. There are things that machine learning does very well, but there are things that DNN can do that now start to emulate, or will eventually surpass, the human brain. I still believe that it is the combination of the machine and human intervention, in the form of a clinician, that helps personalize and optimize the benefits of hearing aids, and I think it's going to be that way for some time. I mean, I'm not a fatalist who says the Terminator and the machines are going to rise and take over the world, but I think it's going to augment our superpowers to give us more than what either side alone can do.
Dr Douglas L. Beck:Dave, tell me what you anticipate in the next three, four, five years. How will artificial intelligence change hearing aids? Will we have hearing aids without AI in five years?
Dr. Dave Fabry:I doubt it. I mean, like I said, most hearing aids employ some form of machine learning now, and I think it's only going to become more and more prevalent as we move forward across all of the technology tiers. Job one for every hearing aid is always going to be speech understanding, in noise particularly, but also sound quality, and increasingly we're focused on spatial awareness as well. Really, what we're seeing is hearing aids that now incorporate a very broad input dynamic range, 118 dB in our case, a broad frequency response, and we're starting to think about the matching of the two devices, the integration of those devices and the coordination of those devices. And you know this as a musician: taking the sound from being here and putting it out in space. That, interestingly, is a top driver of expectation for many hearing aid users, spatial awareness, the location of sound, and musicians in particular want to be able to close their eyes and think about where all of the instruments are.
Dr Douglas L. Beck:Oh, absolutely, particularly when you're talking about larger concerts, symphonic instruments, symphonic bands, orchestras, things like that, where people are set on stage in a certain way so that the musical instruments mix well for the audience. It's not random where the violinists sit or where the clarinets or the trumpets are. Those are all pre-assigned.
Dr. Dave Fabry:And that layering of sound is possible with increased input dynamic range, broad frequency and intensity response, and minimal processing delays. I mean, with the early DNN models, the earliest way we used DNN was with, if you will, a co-processor in the form of the phone, with Voice AI. That was in 2017, where we offloaded some of the processing to the phone, because it had higher computational power, and then sent it back to the hearing aid. That introduces temporal delays that impact the quality and also that ability, that spatial awareness. Anytime you're using a co-processor rather than a DNN accelerator directly in the audio pathway, you're going to get timing delays and you're going to get more power consumption.
Dr Douglas L. Beck:And once you get up to, I think it's about 10 milliseconds, the delay becomes significant enough that the visual and the auditory don't sync, and it's like watching an old, you know, black-and-white movie. Yeah.
Dr. Dave Fabry:Many people won't notice 10, but people who are audiophiles and musicians, and you certainly fill that bill, can notice even down to pretty low levels.
Dr Douglas L. Beck:So, Dave, when we talk about binaural fusion with deep neural networks, it seems the opportunity is there to better maintain interaural loudness differences and interaural timing differences, which of course would then be respectful of head shadow effects and reverberation as well. And to talk about how important interaural loudness differences are for localization: it's really stunningly important. In other words, if I had a sound one meter from my right ear and a sound one meter from my left ear, and these are two people speaking, let's suppose it's Blaise Delfino and he's speaking in my right ear, and I put a microphone on my right TM and my left TM, like a real-ear sort of thing, at six, seven, eight thousand hertz, Blaise would be 20 decibels louder in my right ear. And that is so important for being able to localize, because I can hear five, six, seven, eight thousand hertz. The difference between my right and left ear, with him off on my right side, is over 20 decibels: 20, 21, 22.
Dr Douglas L. Beck:So that's my first cue about localization: where is it louder? And then we have interaural timing differences; essentially, above and below 1500 hertz, we get a lot of information from phase and timing, the time of arrival at the eardrum. Can you give me some general statements on how deep neural networks, particularly in the Edge AI, because that's your newest product, maintain binaural fusion?
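The level and timing cues described here can be put in rough numbers with the classic Woodworth spherical-head approximation. This is a textbook idealization offered as a sketch only; the head radius and the formula are generic assumptions, not anything measured from the devices discussed in this episode.

```python
# Back-of-the-envelope interaural time difference (ITD) from the Woodworth
# spherical-head model: ITD = (r / c) * (theta + sin(theta)).
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average adult head (textbook value)

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a source at the given azimuth (0 = straight ahead, 90 = to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>3} deg -> {itd_seconds(az) * 1e6:.0f} microseconds")
# ~0, ~260, ~490, ~660 us: the sub-millisecond timing differences a binaural
# system must preserve, alongside level differences that can exceed 20 dB
# at 6-8 kHz because of head shadow.
```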
Dr. Dave Fabry:Well, I think you've hit on a very important topic that really excites me about the future, with the improvements in computational speed and computational power. Think about it in the sense that your ears are really sensors; the cochlea is not where the hearing takes place. It's a sensor that is feeding the brain. And as we've transitioned from the 1980s, when 80% of hearing aids were fitted monaurally, to today, where virtually every patient who begins a clinical assessment and has measurable hearing is starting with two hearing aids, we want that patient to have audibility and speech understanding in noise, but we also want them to be spatially aware of where those sounds are. So coordinating that input between the two ears at the brain level, not just treating the right ear and left ear separately, is really important.
Dr. Dave Fabry:First of all, I want to get on my soapbox for a minute. We haven't really advanced the evaluation, the verification, of successful binaural summation, as to whether the devices are really doing the most sophisticated thing, and I'll tell on myself here. The thing that I mostly do is I'll have the patient close their eyes, and I'll walk around and start talking in a low voice and have them point to where I am. The biggest thing I'm looking for, on over-the-ear hearing aids, is that the first time patients are fitted they'll often make front-to-back errors. They'll do well laterally, but front-to-back, because of the issues that you just discussed, they'll make errors.
Dr. Dave Fabry:So I think, beyond clinical verification and validation of a successful fitting, there's now better coordination. We've focused on directional microphones in the left and right ear, and then we focused on noise management and other features like that; once we classify the sound, then what do we do? So what? What's next? I think now we start to think about how the brain is processing sounds, and making people more spatially aware becomes a goal beyond just saying we're fitting them binaurally. We need more computational power and more computational speed to ensure minimal timing and level differences between the two ears. From the acoustic side of things, the way that AI will impact us in the next five years is to better consider that acoustic picture in three dimensions, rather than thinking in two dimensions, if you will.
Dr Douglas L. Beck:Right, absolutely, and this is a very important factor. I'll tell you, when I started wearing the Edge AI, I had a much better perception of localization. I was able to tell more acutely where sound was coming from. Let me ask you a question, because I don't know the answer to this: are these hearing aids talking to each other when you're wearing Edge AI, or are they each independently processing?
Dr. Dave Fabry:Well, they're independently processing, but there is communication that occurs between the two ears, using either 2.4 gigahertz Bluetooth or NFMI. Obviously, in some but not all of the products we use NFMI. But the timing, the computational power, that speed of processing and the communication of the state of the environment between the two devices, that gets to the next point I want to make in answering the question about DNN potential.
Dr. Dave Fabry:Where DNN, I think, really excels is fusion of multiple sensors.
Dr. Dave Fabry:So think about this as a sensor and this as a sensor, but then think about the fact that since 2018 we've been incorporating inertial measurement units in our hearing aids that can track physical activity and can track and report falls. That's an AI feature as well, because it recognizes the motion signature of a fall. There's also the motion optimization, as we call it: when I'm walking and I'm in an environment, the devices may elect, on the basis of input coming from the two ears, if there's a voice over on this side and I'm walking and it's a target, to leave that directional microphone in omni and reduce the other, to address that specific use case. And that's really fusing different sensors, if you will: microphone sensors left and right and the motion sensors. I think that's where we're seeing a lot of opportunity to think about things differently, the way you're constantly using visual input, auditory input, tactile, sometimes motion, vestibular, proprioception, to consider how you keep the shiny side up and how we keep walking.
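As a rough illustration of what a "motion signature of a fall" can mean in practice, here is a deliberately simplified sketch: a brief free-fall dip in acceleration, a hard impact spike, then relative stillness. The thresholds and window lengths are invented for the example and are not Starkey's algorithm.

```python
# Hypothetical, simplified fall-signature check on IMU data.
# Input: recent acceleration magnitudes in m/s^2, oldest sample first.
G = 9.81  # gravity, m/s^2

def looks_like_fall(accel_mag, sample_rate_hz=50):
    samples = list(accel_mag)
    if len(samples) < 3 * sample_rate_hz:              # need ~3 s of context
        return False
    early = samples[: 2 * sample_rate_hz]              # first ~2 s
    last_second = samples[-sample_rate_hz:]            # final ~1 s
    free_fall = any(a < 0.4 * G for a in early)        # brief weightlessness
    impact = any(a > 3.0 * G for a in samples)         # hard spike anywhere
    settled = all(abs(a - G) < 0.2 * G for a in last_second)  # little movement after
    return free_fall and impact and settled

# Toy trace: normal gait -> free fall -> impact -> lying still.
trace = [G] * 50 + [0.2 * G] * 10 + [4.5 * G] * 2 + [G] * 88
print(looks_like_fall(trace))  # True
```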
Dr Douglas L. Beck:You know, the more we do of this, the more natural the sound quality becomes. I'll tell you, I was just last weekend giving a lecture in Las Vegas and I went to the Sphere, and it's quite amazing, because it maintains your sensory input better than any other venue I've ever attended. Now, I was fortunate, and it was the U2 recording. They opened the venue and I think U2 were there for four months or something, recording. They sold out every performance, and I guess they took the best audio and visual of everything, and that became the Sphere's U2 production. It's very cool that they did that, because I'll tell you, I was there with about 100 people, and you're staring at the stage and there's nothing there. The visual is so real you could swear it's Bono, it's The Edge. And it wasn't like being at Disney 30, 40, 50 years ago, where you'd be in the audience and they'd throw something out in 3D and it would appear all over you. This was truly like being there: immersive, totally sensory immersive. I've never seen anything as well done, and the sounds from the left were truly from the left, sounds from the right truly from the right.
Dr Douglas L. Beck:Obviously, U2 is one of the most popular bands in the history of the world, so the sound was extraordinary, but I think they're maintaining and recreating those interaural loudness and interaural timing differences, which is ridiculously difficult when you're thinking about a presentation that loud. I mean, it's probably somewhere between 110 and 115 dB the whole night, which is loud, but it's absolutely a stunning sensory experience, and I think that's where we're going with hearing aids, where we get more and more realistic. Because early on it was just about hearing, making sounds loud enough that we surpassed threshold. Now we're entering the age of listening: we know how to make things louder; that's not the goal. The goal is to make things clearer.
Dr Douglas L. Beck:Now, some people need both. Some need it louder and clearer, but many people would be so delighted to have conversational speech that was simply clearer, and I think DNN is providing that. I can tell you, in my world, DNN has made a huge difference. I mean, I now will wear my hearing aids when I'm going out to restaurants. I'll wear them when I'm going to a cocktail party, rather than leaving them home because I knew it was going to be noisy and uncomfortable, and I think this is where we're going. The way the Sphere in Las Vegas can take a real situation and make it appear real later, that is stunning, and if we can do that in real time with hearing aids, the sky is the limit.
Dr. Dave Fabry:That's really what I think. As I said, the binaural fitting rate in the U.S. is around 78%, and I think we're pretty much at the top worldwide; maybe Australia and some of the Scandinavian countries are comparable. But thinking about two ears integrating, thinking of them as sensors, and sensor integration between microphone sensors, motion sensors, other sensors, in terms of the way the user controls, or the app, or the acoustic snapshot that Edge Mode provides and now updates continuously, all act as input. If the overall goal is to emulate, or ultimately surpass, the way humans process this, we're way beyond just thinking about directional microphones, noise management and wind noise suppression as our main tools once we sort into the appropriate environments.
Dr. Dave Fabry:Now I want to talk about another element where I see AI impacting our world and everyone else's. I mean, do you use ChatGPT? I do, on occasion. Okay, what do you like about it? What don't you like about it?
Dr Douglas L. Beck:I like the easy access. I like the instant answer. What I don't like is that the answers are often wrong.
Dr. Dave Fabry:Yeah, it's really a function of garbage in, garbage out, and it's getting better as they look for more sources. Again, it's feeding on data from more sources, not just something that one guy or one gal posted online one time as an answer to a question, but rather, with more data, the answers will become more accurate. But, you know, you hit on it.
Dr Douglas L. Beck:I mean, I, I, I put in I don't remember what it was, but I put in something like tell me about David Fabry, right, and it'll pull you up in your profile. And then I'll say something about tell me about David Fabry at 2015, playing drums at the american academy of audiology, and says there's no data on this. But then, if I give them more and more and more data, it says oh and, and it finds it. You know. So, yeah, garbage in, garbage out and chat gbt does I think it's learning in real time?
Dr. Dave Fabry:Oh yeah, it definitely is. Again, it's all a function of its appetite and getting more and more information. And if you're using a service where you're not paying for that service and you're wondering how they get paid: well, you're the data. It's one of the reasons I really enjoy working for a company where we make medical devices, Class I or Class II devices, that require a certain amount of confidentiality, and we're not monitoring people without their knowledge in terms of their content.
Dr Douglas L. Beck:What did you have in mind? Because I think I might have derailed you.
Dr. Dave Fabry:Yeah, no. With ChatGPT, we've had a function, and Brandon Sawalich has been driving this because he's an Iron Man fan, and Jarvis, do you know what Jarvis is an acronym for? Just A Rather Very Intelligent System. The idea is a voice assistant that is capable of learning and accessing information, and with all of these, whether it's Siri or Alexa or ChatGPT or whatever system you might use, the ability to ask questions and hear answers back in natural language, I think, is really critical. It gets away from that: older people have arthritis and neuropathy, and they really like natural language processing.
Dr. Dave Fabry:But the thing that's different about the intelligent assistant we've incorporated is that if a patient wants to ask, hey, how do I clean my hearing aids, and they forgot how the clinician demonstrated it, well, we have a Learn section in our app and they can go and sort through the Learn section, or they can just say, hey, how do I clean my hearing aids, and it'll direct them to the video that shows them how to clean their hearing aids. Then they can say, how do I turn the volume up, and it'll show them what the user controls look like, or what the controls in the app look like. Or they can say, hey, turn my volume up, or, change to the outdoor program, so they can command and control their devices. And then, finally, they can say, what's the weather today, and it'll use location services so that it knows where you are. They can ask, who won the NFL championship in 1967, and the answer would be the Green Bay Packers. Are they a football team, Dave?
Dr. Dave Fabry:They are a football team, and they've won more recently than your team. But the idea is that many of our patients are not really tech savvy. We have many patients who are very tech savvy, but many want to have it all in one app, where they can just hit the button and we'll intelligently triage between questions about how to operate their hearing aids, commanding and controlling their hearing aids, asking about the weather, or asking for some trivia on the web, and it'll do that intelligently and automatically.
Dr Douglas L. Beck:One other thing it does that I noticed is it has a transcribe function, which is really crazy.
Dr. Dave Fabry:I mean, I can sit there real time.
Dr Douglas L. Beck:Yeah, I can dictate notes into my hearing aids and then I can send it to myself as a text message or an email or whatever.
Dr. Dave Fabry:Yeah, it's really cool, and the translation features and all of that: being able to go in and display and then hear the language back in your voice, or use it in a Rosetta Stone fashion. If you know a little, like I know a little French and I want to be reminded of it, I can ask the question in English and then hear it perfectly pronounced in French in my ears, and then I can attempt to emulate that. It's great.
Dr Douglas L. Beck:I have to play with that more. I saw it in there and I used it on the little tiny bit of Spanish that I know, and it was great. I have to get back involved with that. Let me ask you, before you go, because I know we're running out of time here: the HearShare app is for the caregiver, and this is a unique opportunity for couples, whether they be husband and wife or friends or whomever, because the person wearing the hearing aids is the patient, but there are other carers or significant others around that individual who might benefit from having some input. So tell me, what is the HearShare app?
Dr. Dave Fabry:Well, the HearShare app is a peace-of-mind function, I think, for caregivers and family members of loved ones who have their hearing aids and maybe want to live independently, where the family member worries about them suffering a fall. So, as you mentioned earlier, we have a fall detection feature in Edge AI. For the first time, we have balance assessment using the CDC's STEADI algorithm, which can assess strength, gait or balance deficiencies that a person has relative to their peer group, and they can actually improve their balance over time, with the long-term hope of potentially preventing falls before they occur. With respect to physical activity, we've monitored steps, standing for musculoskeletal strength, and the diversity of acoustic environments a person uses their hearing aids in on a daily basis, as well as wear time. So think about if you have a family member who wants to live independently, but you're worried about them getting up in the morning, putting their hearing aids in, and wearing their hearing aids 10 to 12 hours a day.
Dr. Dave Fabry:Have they suffered a fall? Are they physically active? All of those things, with the permission of the hearing aid user, can be viewed in nearly real time by the family member or the caregiver, so it really provides a dashboard and a confidence builder that their loved ones are safe, even if they're a world apart. They can continue to monitor and make sure that they're doing all right, and it gives confidence to the family member. In many cases, I've heard from hearing aid users that it empowers them, because they want to live independently and they're concerned that their daughter or their son is too worried about them. But the family also wants to know if they do suffer a fall. Even if they're not one of the trusted providers that receive the fall alerts, a caregiver can see that they fell three times in the last month, and sometimes having those difficult conversations about making the house safer can save, let's say at the very least, hearing lives.
Dr Douglas L. Beck:Dave, before I let you go, one final thing. I know that you guys quantified and verified the balance test that's in the Edge AI in collaboration with Stanford University. Can you just give us the highlights of that? Because it was the same three functions, but I think you had outcomes data.
Dr. Dave Fabry:Basically, there are objective measurements for balance, gait and strength that the CDC developed in their STEADI protocol, which stands for Stopping Elderly Accidents, Deaths and Injuries, and there are age-based norms for strength, like: how many times in a 30-second interval can you get up out of a chair without using the arm supports on the side? Almost one? Yeah, one. For a guy your age, it should be about 17. So what we did, in collaboration with Stanford in a multi-year partnership, was develop a protocol where they tested individuals from 55 to 100. I think there is a paper coming out in Otology & Neurotology that should be out before too long. The thing is, we used human observers, as it's a clinical test where a PT or a PM&R doc or an audiologist would be counting how many times a person can stand up and sit down; that's the strength test. Similarly, for balance, how long they can balance on one foot, or feet astride, or in step, for 30 seconds, and they pass or fail that. Or can they get up out of a chair, walk 10 feet, turn around and sit back down?
Dr. Dave Fabry:That's a timed measurement and, as I said, there are age-based norms, so you can develop a test panel that tests balance, gait and strength and says you have a deficiency in this area relative to your peers. We found that the human and the machine scoring of these tests were very close to each other; statistically, there was no significant difference between the human and the machine scoring. Then we did it in a remote session, like you and I are doing here, where the observer was online, and then finally in an unsupervised fashion. The ideal is that a patient could identify an area of weakness, then go to work on the area of balance, gait or strength where they're deficient, with the hope that they can strengthen that deficiency and ultimately reduce the risk of falls in the long term. That's the real goal. We're not there yet, but that's the goal.
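To make the machine-scoring idea concrete, here is a hypothetical sketch of how a 30-second chair-stand count might be tallied and compared against an age-based norm once the IMU has already detected each sit-to-stand event. The upstream event detection, the cutoff of 17 (the figure mentioned in the conversation) and the function names are illustrative assumptions, not the published STEADI norms or Starkey's implementation.

```python
# Toy scoring of a 30-second chair-stand test from already-detected IMU events.

def chair_stand_score(stand_timestamps_s, test_length_s=30.0):
    """Count completed sit-to-stand repetitions inside the test window."""
    return sum(1 for t in stand_timestamps_s if 0.0 <= t <= test_length_s)

def below_peer_norm(reps: int, illustrative_norm: int = 17) -> bool:
    """Flag a possible strength deficiency relative to an age-based norm
    (17 is used here purely as an example figure)."""
    return reps < illustrative_norm

events = [2.1, 4.0, 5.8, 7.9, 10.2, 12.5, 15.1, 17.8, 20.4, 23.0, 26.1, 29.3]
reps = chair_stand_score(events)
print(reps, below_peer_norm(reps))  # 12 repetitions -> flagged versus the example norm of 17
```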
Dr Douglas L. Beck:Yeah, and that's consistent with diagnosis first, treatment second. So if we can get an assessment that's quick and easy, using a set of hearing aids that's been validated against the Stanford study, that can help us determine which exercises, which protocols, we need to work on, too.
Dr. Dave Fabry:And that motion of stand, sit or move around is all captured as a movement signature, if you will, by the IMU, and then the scoring is all done on the timing or the counts of that acoustic or movement signature for those different events. That's all using AI as well.
Dr Douglas L. Beck:That's brilliant, David. I know you've got to run. Chief Hearing Health Officer at Starkey, Dr. Dave Fabry, always enjoy talking to you, Dave. I wish you a joyous afternoon and I will talk to you soon. Thanks, Doug. See you soon. You bet. Thank you.