Starkey Sound Bites: Hearing Aids, Tinnitus, and Hearing Healthcare

Introducing Edge AI with Dr. Achin Bhowmik

Starkey Episode 77

Starkey’s latest hearing technology has arrived! Chief Technology Officer and EVP of Engineering Achin Bhowmik, Ph.D., sits down with Dave to share what excites him most about Edge AI. The highlights include: 30% more accuracy at identifying speech, 6 dB additional reduction in low-level noise, and 100 times the processing power, all while maintaining Starkey’s industry-leading 51 hours of battery life. Tune in for a deep dive that explains why this technology gives patients and professionals the edge they've been looking for.

To learn more about Edge AI, visit StarkeyPro.com

Dave Fabry

Welcome to Starkey Sound Bites. I'm your host, Dave Fabry, Starkey's Chief Hearing Health Officer. If you're a hearing care provider, you know that whether you've been in business for just a few years or for decades, there's been a change recently in the patient. We call it the digital patient, and what I mean by that is we're seeing a change as it relates to stigma. When I became a young audiologist many, many years ago, stigma was the number one barrier to adoption of hearing aids. What we're seeing now that I'm an aging baby boomer is that boomers are less stigmatized by hearing loss and the use of hearing aids, but they have higher expectations for what those devices can do. We're all connected to our digital devices all the time. And this digital patient is, I think, one of the most exciting opportunities collectively for all of us, both on the technology side and, importantly, from the perspective of the patient. So that's the topic of today's episode, and I can't think of anyone better than our guest today. But before I begin, I want to thank you for your viewership or listenership of Sound Bites. If you have ideas for other topics that we should be covering, send us an email at soundbites@starkey.com. Please like this episode if you enjoy it, and share it with your colleagues, your friends, your networks. But now, the main act: Dr. Achin Bhowmik, who leads our R&D team as EVP of Engineering and Chief Technology Officer. There's no one better to get people excited about the technology that they have to fit on their patients today, or, from the patient perspective, to think about the possibilities with this latest device. So, Achin, welcome back to Sound Bites.

Achin Bhowmik

Thank you, Dave. It's always a pleasure to talk to you. I learn so much from you. Technology can only serve a purpose here: we are here to help people hear better and lead better lives. And there's nobody who plays the role of fitting that technology to the patient with hearing loss better than you. So it's a privilege to work with you closely, learn from you, and also just to talk with you today to share the exciting new technology that we just introduced.

Dave Fabry

Well, thank you for that, and right back at you. It's a privilege. Really, I guess we represent the art and science: you'll be the scientist, and I'll play the artist. But it really does take both, and that's why I think we're so excited about Edge AI, which is the main stage today. But before we get into the technology, let's talk a little bit about the digital patient. Yes. One thing that maybe a lot of people don't know is that you're a relative newcomer to the hearing aid industry. Seven years. Seven years. It's crazy. I mean, you were the new guy for a long time, and now seven years have gone by. But one of the things that I admire most is your curious mind, and your patient focus really led to you saying last year, hey, I really want to be able to dispense the product too. Not full-time, we need you in the seat that you're in, but you are now a licensed provider in the state of California. So you understand a little bit about the digital patient, and now you are licensed and working with patients yourself.

Achin Bhowmik

It was a fantastic experience going through it to learn and then, you know, pass the written and practical tests in California. It was quite an experience, but it does give you a feel for the technology that, again, is for the people. But I want to go back to the topic of stigma, what is at the heart of it, and what is in the idea of the digital patient. To me, people are evolving and life is changing thanks to all of the technology in our lives, right? So the digital patients are essentially the generation that is more and more comfortable with technology. When I look at it as a relative newcomer in hearing aids, seven years now, the common thread with my past work, even before these seven years, is the area of computational perception. The underlying technology's objective is to seamlessly integrate into our sensory perception system so that I can hear better, understand what I'm hearing better, and go about my life in easier, more fun ways. But it's also about the design. So it's the design and the functionality. The design of the devices has to be such that you don't want to hide them, right? At the same time, hearing aids are not your typical consumer electronic devices, so they have to sort of disappear and not encumber you in interacting with the people you love and the events you are in. We strike this balance really well: a stunningly beautiful design that you'll be proud to wear, and yet it is not between you and your interactions with other people; it sort of disappears. I'll show your audience what I mean in a moment. But the next is functionality: its core function of helping me hear better by clarifying the world of cacophony and making intelligent decisions on what part of the sound should be amplified and what should be suppressed because it doesn't matter to me in the specific environment I'm in. In other words, if I am sitting down in a cafeteria with you and having a conversation, the AI system in the device needs to be able to understand that environment, enhance speech, and suppress noise. At the same time, if I was there by myself, not having a conversation, just sitting back enjoying a cup of coffee, all of what was cacophony and nuisance is now important to me, so it should not cut back all of that. So the functionality of the device is playing a strong role, besides the improvement in design, in helping remove the stigma and helping with adoption. You love to wear the device; you don't use it just because you must and you have to.

Dave Fabry

Right. I think you raise a very important point when you talk about disappear. I would agree that the digital patient, the modern patient starting to adapt and adopt hearing technology, is no longer focused so much on making it disappear visually. Yes, they're in love with the design, but they want it to disappear in terms of the user experience, the user interface. They want it to be technology that doesn't require a lot of intervention to get to better hearing. And I think that's a different sense of disappear: disappearing visually versus disappearing as a barrier in communicating with other people.

Achin Bhowmik

Yeah. So I'll give you an example. I was making a presentation at a Silicon Valley IEEE conference, telling them about all of these new features: how we enabled amazing speech perception and noise reduction with deep neural network technology, all the features in the device to help keep me safe and healthy, connect me with the world of information, all of that. At the end, I said, do you want to see the devices? Of course, everybody wants to see them. And I said, I've been wearing them all along. And then, just like you did, I pulled my devices out of my ears, and they went, wow! Yes, they're stunning designs, but nobody knew I was wearing them. Because we designed them to disappear. Like Mark Weiser said, when technology is sufficiently advanced, it disappears. It's always there, but it's not there. It's not encumbering you, it's not preventing you from natural interactions. So it's a stunning, individual design, along with the amazing AI features that I hope we'll talk about.

Dave Fabry

Yeah, and the one other part I want to talk about, and I love that example, I've seen you do that reveal many times, and that's why I wear brightly colored ones. I know people who can make those for me, and I do so with the intent of saying, even when I try to make them stand out, people are still shocked when I show that I'm wearing devices. But the disappearing element, visual disappearance versus disappearing into their life, is an important distinction of this always-on technology that is continuing to adapt and improve performance. And I'll go to another generational thing that we see, and I've talked about this before on this podcast: while my parents were worried about cardiovascular disease and cancer, if you want to get a baby boomer's attention, talk to them about cognition and cognitive decline, because we're all facing that. And I think there's been very important research showing that hearing care is connected to overall health care, and that feeds into providing always-on technology that helps people hear more. As somebody who has fitted more than a few people with Edge AI, the first comment is, I'm hearing more, but I'm hearing better. Right. For years we've lived in an environment where we've talked about directional microphones and noise management as making everything quieter. Yes. Patients really want to hear as we do with normal hearing. And I think that's what AI and DNN offer, and you've led this charge of talking about mimicking the auditory function of the brain, because our ears are just sensors sending signals to the brain, and integrating all of that input is one of the things that's most exciting. Yeah. It can also be intimidating to a lot of providers. So what I was wondering is if you could talk a little bit about, you know, we're both really enthusiastic and excited about this, and we both do fit, but why should a clinician stay up to date?

Achin Bhowmik

So let me first go back to connecting the technology to the experience you just described that people are having with Edge AI devices, saying, I hear more, but I can hear better and understand speech better. Let me connect that to the technology. Yes. And that hopefully segues to why clinicians need to be able to explain the amazing role the new technology is playing in these devices in patients' lives. First, let me talk a bit about the DNN, right? Much has been said about deep neural networks. A simple way to understand it for our audience: it's mimicking and copying the way that a healthy human brain processes sensory perceptual information, in this case sound. The sound processing in traditional hearing aids used to be done by digital signal processor chips, or DSPs: a small number of cores that run at relatively high speed. But we know the brain processes information very differently. It's a dense network of interconnected neurons processing information in a massively parallel way. As a result, the brain is able to deal with subtle patterns within the signal, so it doesn't have to squash the signal, amplifying what used to be considered the frequency signatures of patterns in speech and then suppressing any noise, an approach where you basically lose the granular details in sound. A well-developed deep neural network that's trained with a lot of data is able to preserve those subtle signatures, so you don't have to discard information at a very low level, and yet you can still deliver enhanced speech and reduced noise. Couple that with the wide dynamic range that we have in our device: a window of loudness levels that you can deal with, from extremely faint sound to very loud sound, and then use the deep neural network to clarify speech. As a result, people are saying, I'm hearing more, but I'm hearing better, and I can understand speech better. This is a breakthrough development in hearing aids. For a long time, hearing aids played the core role of amplifying sound, and then, with the advent of DSP, you could program the device to provide that amplification as a function of frequency to customize and personalize for somebody's hearing loss. Today, with the deep neural network, we're going one step beyond that. We are now able to process the sound like a healthy brain does in hearing: suppressing what you don't want to pay attention to at this moment and enhancing what you want to pay attention to. So it's not just amplifying by windows of frequency; rather, it's amplifying by the nature of the sound, which is only possible with an AI system.
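
To make that contrast concrete, here is a minimal Python sketch, illustrative only and not Starkey's implementation, of the difference between classic per-band amplification and a learned time-frequency mask of the kind a deep neural network produces. The gain values and the mask below are hypothetical.

    import numpy as np

    def band_gain(spectrum_db, gains_db):
        # Classic DSP-style processing: one prescribed gain per frequency band,
        # applied to speech and noise alike within that band.
        return spectrum_db + gains_db

    def dnn_masked(spectrum_db, mask):
        # DNN-style processing: a trained network outputs a 0..1 mask per
        # time-frequency cell, keeping speech-dominated cells and attenuating
        # noise-dominated cells, so fine detail is preserved.
        return spectrum_db + 20 * np.log10(np.clip(mask, 1e-3, 1.0))

    spectrum = np.array([55.0, 60.0, 50.0, 45.0])   # dB level in four bands (one toy frame)
    gains = np.array([10.0, 15.0, 20.0, 25.0])      # prescriptive gain per band
    mask = np.array([0.9, 0.2, 0.95, 0.1])          # hypothetical network output

    print(band_gain(spectrum, gains))    # amplifies everything in each band equally
    print(dnn_masked(spectrum, mask))    # attenuates the cells the network judged as noise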

Dave Fabry

Absolutely. And it is a consistent thread that I'm hearing from patients when they're first fitted with this device. I think we're excited, so let's dive into the technology. You've talked about the framework, the architecture with AI and DNN, and what the possibilities are. Now let's go under the hood a little bit with Edge AI. Many people are excited about Auracast, and I want to talk about this connectivity. This is again an important component for the digital patient; we're all connected to our digital devices. What sort of possibilities exist for that? But also remind people that we made a lot of improvements even between Genesis AI and Edge AI with regard to, as you've just detailed, speech understanding in every listening environment, with that 118 dB input dynamic range and a broad frequency range, to really use every bit of the patient's residual auditory area, giving them exposure to all of the sound that their brain can handle, while remembering that the brain is the most powerful processor on the planet.

Achin Bhowmik

Yeah, that's a good point. Let me just spend a minute talking about the improvements we made from Genesis AI to Edge AI. This was a Herculean task for us because, as you know, Genesis AI is an amazingly good hearing aid; in fact, it's the best hearing aid you can get in our industry today. So when we said we wanted to make the Edge AI devices better, in fact substantially better, it was a difficult challenge. Let me summarize the key aspects of why Edge AI is much better than even Genesis AI. Number one: for the first time in our industry, there is an embedded deep neural network hardware accelerator engine on the chip itself, the main processor of the hearing aid, always on in the main audio path.

Dave Fabry

Why is that important?

Achin Bhowmik

It is important because now you can have a hundred times more DNN processing in Edge AI compared to Genesis AI. So you get all of the benefits of utilizing a deep neural network for enhancing speech, reducing noise, and adapting to any complex acoustic environment, while at the same time keeping the ultra-long battery life that we had. As you pointed out, the brain is not only the most complex computational system we know of, it's also extremely energy efficient. So this deep neural network approach to processing sound allows us not only to enhance the sound like never before in hearing aids, but also to provide ultra-long battery life. So Edge AI has 100 times more DNN processing than Genesis AI but preserves the 51 hours of battery life with our RIC RT devices.

Dave Fabry

I mean, that seems to defy the laws of physics, right? Because you've got dramatically improved computational performance, but you're not sacrificing battery life. We know that with Genesis AI we took range anxiety off the table for patients who wanted to use rechargeable devices and not run out at the end of the day, like they often do with competitor devices.

Achin Bhowmik

You know, to an engineer, I like to say it's a tug-of-war. Typically, no matter what device you are designing, whether it's a phone, a consumer electronic mobile device, or hearing aids, it used to be a tug-of-war: if you want more computational performance to do more signal processing, the power consumption is going to be higher. So you had this balance to strike. Well, often to get out of a conundrum and change the playing field, you have to change the architecture. Going from the traditional computing architecture with DSPs to this neural network architecture allows us to provide a lot more computational capability for signal processing and enhancing the sound while, at the same time, not giving up on the battery life. It's a whole different ballgame in the way the computational system of Edge AI is designed.

Dave Fabry

And compared with devices that use a DNN but on a separate chip, it's much more efficient, with lower temporal delays, and without sacrificing battery life.

Achin Bhowmik

So thanks for bringing that up. We did do detailed experiments in the lab and put those two systems through the wringer. First, let me explain to your audience that there are really three ways we could have done DNN in hearing aids. Number one, you could have just programmed your couple of DSP cores to run some lightweight neural network. Then the performance wouldn't be very good, and you would still sacrifice your battery life; maybe you'd get 20 hours of battery life or so. Second, we could have taken the approach of adding a co-processor chip: keep your DSP, add a co-processor chip for handing off the AI processing, and bring the signal back. One of our competitors is trying to do that. I say it's a compromised system architecture, because for one, it is going to drain the battery. If you have an external chip that relies on an external bus, as we call it, the interconnect between the two chips, to bring data back and forth, offload the computational tasks to that chip, do the processing, and bring it back, it will have two bad effects. One, it will mean higher power consumption, because you're consuming power moving data back and forth and then integrating it into the main audio path again. And a direct consequence of that is the latency. So, in technical terms, if you take the approach of embedding the deep neural network hardware in the chip itself, like the approach we have taken, versus a co-processor architecture, we are nine times faster in DNN processing, because you're not having to bring the signal back to the main chip and combine it again, cutting down all of the latency involved in the bus. And the second is energy efficiency: up to three times, 2.7 times, more energy efficient when you're fully integrated. You know, I always say it's hard, and it consumes a lot of talent and money, to do the right architecture. An example in the consumer electronics industry: Apple was the first one to embed a neural processing unit in their main chip while many other companies were taking the co-processor approach, and suddenly iPhones had more than twice the battery life and amazingly better performance. In our industry, we are the first and only hearing aid manufacturer to have taken the approach of a deep neural network, a fully embedded neural processing unit, within the main processor of the hearing aid.
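
As a back-of-the-envelope illustration of that architectural point, here is a toy Python model; the numbers are invented for illustration and are not Starkey measurements. It shows why shuttling each audio frame over an external bus to a co-processor and back adds both latency and energy compared with an on-chip engine.

    def coprocessor_cost(frame_bytes, bus_bytes_per_s, bus_joules_per_byte,
                         compute_s, compute_j):
        # Each frame crosses the external bus twice: out to the co-processor and back.
        transfer_s = 2 * frame_bytes / bus_bytes_per_s
        transfer_j = 2 * frame_bytes * bus_joules_per_byte
        return compute_s + transfer_s, compute_j + transfer_j

    def embedded_cost(compute_s, compute_j):
        # On-chip engine: no external bus hop, just the computation itself.
        return compute_s, compute_j

    # Hypothetical per-frame numbers
    t_cop, e_cop = coprocessor_cost(frame_bytes=2048, bus_bytes_per_s=1e6,
                                    bus_joules_per_byte=0.8e-9,
                                    compute_s=0.5e-3, compute_j=2e-6)
    t_emb, e_emb = embedded_cost(compute_s=0.5e-3, compute_j=2e-6)
    print(f"latency ratio ~{t_cop / t_emb:.1f}x, energy ratio ~{e_cop / e_emb:.1f}x")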

Dave Fabry

Right. So faster speed, more computational power, more efficient. To patients and to providers, though, the proof is in the performance. Yes. And that performance is best measured in terms of the signal-to-noise ratio benefit in noisy environments. That still remains the biggest challenge. So talk a little bit about that.

Achin Bhowmik

That's a great point. The best way to evaluate a system's performance in a noisy environment is, as you said, to measure the signal-to-noise ratio enhancement. For your audience: when you're in a noisy environment, the sound that you are exposed to has an inherent signal-to-noise ratio. If the signal is at the same level as the noise, you've got a signal-to-noise ratio of zero decibels. You could be in an environment where the signal, the speech that you're trying to listen to, the conversation, is actually lower in intensity than the noise itself, so you could have a negative SNR in dB. The function of a good hearing aid is not only to amplify the sound, because if you just amplify all of the cacophony, amplifying speech by this much and noise by the same amount, it doesn't help anybody; it's not going to help you understand conversation in a loud, noisy environment. Specifically, people with hearing loss have lost some of the ability in the Wernicke's region of the brain, the neural circuitry that's responsible for creating SNR, understanding speech, and reducing noise. We need the new technology in hearing aids to do that. So we tested the ability of Edge AI's deep neural network system to improve SNR, and guess what? We got an unbelievable result. In a complicated and difficult situation where you have diffuse noise all around you, with the listener sitting in the middle and noise coming from 360 degrees, the deep neural network system that's always on in the main audio path of Edge AI devices produces up to 13 dB of SNR improvement.
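
For listeners who want the arithmetic behind those decibel figures, here is a small Python example of how SNR in dB is computed from signal and noise power, and what a 13 dB improvement means as a power ratio.

    import math

    def snr_db(signal_power, noise_power):
        return 10 * math.log10(signal_power / noise_power)

    print(snr_db(1.0, 1.0))    # 0 dB: speech at the same level as the noise
    print(snr_db(0.5, 1.0))    # about -3 dB: speech quieter than the noise
    # A 13 dB SNR improvement corresponds to a 10**(13/10), or roughly 20-fold,
    # increase in the speech-to-noise power ratio at the output versus the input.
    print(10 ** (13 / 10))     # about 19.95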

Dave Fabry

That's remarkable.

Achin Bhowmik

That's a remarkable capability of the sound processing engine. And of course, in a hearing aid it then becomes a matter of fitting, and thanks to you, I learned how to fit patients with hearing aids: how am I configuring that hearing aid? If I provide a fully occluded fitting to a patient, where all of the sound the patient is hearing is coming from the deep neural network engine, they're going to see an amazing benefit of up to 13 dB of SNR improvement. But fully occluded may not be the solution for everybody, because you may want to keep an open vent and let some environmental sound come in for other reasons. In that case, the signal is going to be a combination of what the sound processing unit generates, the output of the deep neural network, and some sound from the environment that's leaking into the ear through the open vent. So it's up to 13 dB of benefit that you can get with a fully occluded fitting.
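
A simple way to picture that trade-off is the mixing model sketched below in Python. It is an assumed model for illustration, not a fitting formula: the effective SNR at the eardrum blends the DNN-processed path with the unprocessed sound leaking in through the vent.

    import math

    def effective_snr_db(input_snr_db, dnn_benefit_db, leak_fraction):
        # leak_fraction: share of sound power reaching the eardrum through the vent
        # (0.0 = fully occluded, 1.0 = fully open); values are illustrative.
        speech = 1.0
        noise_in = 10 ** (-input_snr_db / 10)
        noise_processed = noise_in * 10 ** (-dnn_benefit_db / 10)
        noise_total = (1 - leak_fraction) * noise_processed + leak_fraction * noise_in
        return 10 * math.log10(speech / noise_total)

    for leak in (0.0, 0.25, 0.5):
        print(leak, round(effective_snr_db(0.0, 13.0, leak), 1))
    # 0.0 -> 13.0 dB benefit; more vent leakage -> smaller effective benefit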

Dave Fabry

And I think it's really important the way you've described that, from the standpoint of "up to" and the concessions and trade-offs a clinician makes to satisfy the expectations of their patient. If someone has really good hearing in the low frequencies, you may be able to deliver that "up to" performance, but the patient may say, my voice doesn't sound natural. So it's a trade-off. But I think it's really important in the partnership between manufacturer and professional that we provide the "up to" benefits, the optimal benefits, but also not just cherry-pick environments, which would only create unrealistic expectations. Yeah, it is about a personalized fitting experience. Absolutely.

Achin Bhowmik

It's not about a technology just powering through on what the spec can do.

Dave Fabry

Yeah. To me, that's what's so exciting: the performance in quiet, the sound quality in quiet, and the performance in noise. Now let's transition, because as excited as we are about this, I don't want to run out of time to talk about one of the things people point to as a main function, and that's the connectivity piece. We've got a new radio in this device.

Achin Bhowmik

So I want to talk about three things, and then three more things. And the first one is this radio; thank you for bringing it up. This is a very exciting time for the hearing aid industry. We have been developing this technology quietly, and we've had a front-row seat and a driver's seat in the co-development of this new Bluetooth standard, which will enable Auracast. This new radio in Edge AI is a significant jump forward from Genesis AI in sound quality. The new standard includes switching out the old G.722 audio codec for the LC3 codec. It does a few things. One is amazingly improved sound quality: fuller sound, a better experience. Instantly, your audio streaming experience, whether you're streaming music or a podcast from the phone to the hearing aids, is going to be a much better experience.

Dave Fabry

And I've had patients where, immediately after I fit them, we talk about the ambient performance, and then I say, now stream either voice or music, and they're blown away by it.

Achin Bhowmik

Yes, and part of that experience your patients are describing is coming from the LC3 codec itself in the new Bluetooth standard. Part of it is our increase in throughput on the wireless system, providing more robust connectivity and twice the range we had. And the second part: we also re-architected the streaming sound processing engine in Edge AI compared to Genesis AI, segregating it from the environmental processing engine. Streaming has its own dedicated processing now.

Dave Fabry

And I love that.

Achin Bhowmik

It will be just an awesome, next-level experience for streaming sound quality. And I should mention, we also collaborated with more ecosystem players. We already work very closely with Apple and Google for the phones, but we've partnered with Intel, AMD, and Microsoft for the PC ecosystem. So for the first time, with PCs that have this new radio, and many of them do now, you can stream directly from your PC into the hearing aids.

Dave Fabry

It's been amazing, you know, to be able to stream from my PC, from my iPad, from my phone, and to move back and forth between them, because as you know, my goal for a number of years now has been not to replicate commercial wireless audio headsets, but to come into that same neighborhood. And I think we're there now.

Achin Bhowmik

Yeah, and also, we're at the onset of the Auracast wave. Over the next months and years, you will see many, many Auracast devices come into play: in the airport, at a restaurant, in a bar, you'll be able to stream from whatever TV or sound source you want directly into your hearing aids. So we're on the cusp of it. It's going to be a big upgrade to people's connectivity experience.

Dave Fabry

And now, carrying through with where we said you're the science and I'm the art on this: for providers, one way to take advantage of that improved sound quality, regardless of the patient's hearing loss, is to consider customization and personalization of the delivery system. Right now in the US, the majority of devices that are fitted are receiver-in-the-canal devices, and the majority of those are fitted with dome tips. When a patient is inserting a dome tip into their ear canal, they are not always getting a consistent location, and sometimes the sound is careening down the ear canal. When we go with a custom case receiver, with a big vent or with a small vent, again to meet the needs of the patient, we get a more consistent fit, better comfort, and improved sound quality versus using a dome tip. And custom is what we've been known for. But customization doesn't end with a custom style of in-the-ear device. So I would encourage people to use every tool in their toolkit for those patients.

Achin Bhowmik

For a better personalized fit. And I would be remiss if I didn't mention that these amazing technology capabilities of Edge AI are not limited to the RIC devices alone, which can come with personalized custom molds, but are also in our custom devices with 2.4 GHz connectivity.

Dave Fabry

Exactly. Okay, so that was the radio. You said you had three things to talk about.

Achin Bhowmik

So, going back to the three things we want to talk about. You know, we've been consistent since 2017. Yep. The hearing aid's core function, job number one, is to help us hear better: clarify the world of sound, help me understand conversations better. That's the core function of a hearing aid, and we want to keep stepping up on the technology front to fulfill the promise of what people buy the device for. Always. That's number one. Number two, we're always focused on keeping people healthy. We were the first company to embed motion sensors in hearing aids, and we started by tracking people's physical activities, steps, running, biking, more accurately, if I may say, than wrist-worn devices. Then we were also the first ones to introduce fall detection technology, because we want to keep our patients safe: if a patient fell, we were able to automatically detect it and send alert messages. But with Edge AI, we've gone one step further. We have developed a technology that enables patients themselves to evaluate their risk of falling using the CDC STEADI protocol. We partnered with Stanford University to validate this on 250 patients, really showing that a balance neurologist's evaluation of a patient's risk of falling is matched by the AI capability that's built into the hearing aids, enabling patients to do the same from the comfort of their home. So with this, hopefully, we're taking a giant step forward towards preventing people from falling, not only detecting when they have fallen.
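
For readers curious how a motion sensor in a hearing aid can flag a fall at all, here is a deliberately simplified Python sketch of a generic threshold-based detector; it is a textbook-style illustration, not Starkey's algorithm or its thresholds. The idea is a large impact spike followed by a period of stillness.

    import math

    def detect_fall(samples_g, impact_threshold_g=2.5, still_tolerance_g=0.3,
                    still_window=10):
        # samples_g: sequence of (x, y, z) accelerometer readings in units of g.
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples_g]
        for i, m in enumerate(mags):
            if m > impact_threshold_g:                      # hard impact detected
                after = mags[i + 1:i + 1 + still_window]
                # Magnitude near 1 g and steady afterwards suggests lying still.
                if after and all(abs(a - 1.0) < still_tolerance_g for a in after):
                    return True                             # would trigger an alert
        return False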

Dave Fabry

I'm really proud of this. It's such an exciting opportunity, as somebody whose mother fell, and although the fall didn't kill her, the consequences of the fall, a broken hip, led to her eventual demise just a few years later. So while a fall detection feature is great, it's already too late if they've broken their hip. What I really love about this is that typically that STEADI protocol, those exercises that we've used to assess balance, gait, or strength, is done in a clinician's office, whether it's a physical therapist, a primary care physician, or an audiologist. Hearing and balance is within our scope of practice. Yes. So the point is, if the patient, in the comfort of their own home, can assess whether they're at elevated risk on balance, strength, or gait, then, as you're saying, they can do exercises to potentially strengthen that weakness.

Achin Bhowmik

That's true. Yeah, they know their risk of falling, and they can do something about it by doing vestibular exercises, balance exercises to improve their system.

Dave Fabry

Yeah, and I know we've got a ways to go to get to that point, but it's really an exciting development because, as I said, hearing and balance is something that professionals should be concerned with. Even a mild degree of hearing loss places you at three times the risk of falling compared with your normal-hearing counterparts. So, okay, that's two.

Achin Bhowmik

Number three, our vision has always been about this invisible, powerful device I have in my ear: make it a conduit to the world of information, so I can simply talk to it and get access to amazing information that will help me in my day-to-day environment. So we are also releasing our first generative AI-powered assistant capability, which will allow it to do many things. Number one, you can control your device by just talking to it: hey, increase my volume, change the memory. So instead of reaching for the app on the phone or pressing buttons, you can just talk to it. Number two, you can ask for real-time information: what's the weather outside today, right now? Or what's my next meeting, and things like that. Number three, you can ask it to remind you of things: remind me that at 10 p.m. today I have to take my medicine, whatever medicine you might be taking. You might be taking Lipitor, and you want to be reminded because you don't want to forget it; any number of medicines. The device will then speak in your ear because you have requested it to remind you. But that's not where it ends. The whole world is using generative AI to get a lot more information from the internet; they are having conversations with assistants out there. So with the new assistant in Edge AI devices, you can have conversations on any topic, like you do with ChatGPT, because on the back end we are using the same technology. You could be having a conversation about where you want to go on vacation this summer: do you want to go to Italy or France? If Italy, which cities do you want to visit and why? Or you might ask it, hey, I'm feeling feverish, but my throat is fine, so what do you think I might have? These are the consultations you can have with powerful AI sitting in the cloud, and your device becomes your personal, invisible conduit to it, as long as you have connectivity.

Dave Fabry

And hearing loss should not be a barrier to using all of those sophisticated features. I think the big differentiation here is that with Edge AI, using My Starkey, they stay in the app. It will intelligently triage between a question they have about operating their hearing aids, actually controlling their hearing aids, as you said, engaging in natural language processing to deliver a reminder to take their medication, or asking a little bit about a place they're visiting right in real time: what should I go see? Right. And I've seen you demonstrate this, and I've used it myself. It's really exciting. And it's just that seamless. As you talked about, the best technologies are the ones that just integrate into your life. Not having to leave the app and use a separate program, because it intelligently triages between all of those different use cases and goes from one to the other, is what is particularly exciting about this.
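
To illustrate what that triage might look like conceptually, here is a minimal Python sketch with hypothetical keywords and categories; it is not the actual logic of the My Starkey app. It routes a spoken request to device control, a reminder, device help, or a cloud-based generative AI assistant.

    def triage(utterance):
        text = utterance.lower()
        if any(k in text for k in ("volume", "memory", "mute", "program")):
            return "device_control"      # change a setting on the hearing aids
        if "remind me" in text:
            return "reminder"            # schedule a spoken reminder
        if any(k in text for k in ("how do i", "pair", "charge", "clean")):
            return "device_help"         # answer from product guidance
        return "cloud_assistant"         # open-ended question for the generative AI

    for request in ("Increase my volume",
                    "Remind me to take my medicine at 10 p.m.",
                    "How do I pair these with my laptop?",
                    "What should I go see in Rome?"):
        print(request, "->", triage(request))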

Achin Bhowmik

Right. And to sum this up, I always put myself up as a model subject for it, because I don't have hearing loss yet. The challenge for me was: can I be a user who benefits so much from these devices that I don't want to live without them? Number one, can it clarify the world of sound, enhance speech, and reduce noise in a busy meeting I'm having, so that even with normal hearing I would still benefit from the devices? Number two, can it help me track my health and know my health risks before even I know them? Number three, can it be a personal assistant that would help me with information? Edge AI is that vision come to life. And of course, we're not ending here. But what we built is a one-of-a-kind technology, and your patients and our patients will completely see how it enhances their lives, helps them connect with their loved ones and to the world of information, and helps them stay healthy. We're extremely proud of what we've been able to do with Edge AI devices.

Dave Fabry

Well, Achin, thank you for being here today to talk about this. Your enthusiasm comes through the microphone, and I share that enthusiasm for where we're going. As we said, hearing is believing, and I challenge the providers out there to fit your most challenging patients, even if, and especially if, they're fitted with competitive products. Try it for yourself and see, because hearing is believing, and fundamentally that's job one. All of the other things that you've articulated, I think, demonstrate that. Additional values. Yeah, the additional values. All AI is not created equal. And I think the way that we're navigating this future is really exciting, because ultimately, starting with Mr. Austin and cascading through to Brandon Sawalich and the leadership of the company today, that patient focus is really at the heart of everything that we do. Yes. So thank you for being with us here today. It's always exciting to have you on the podcast, and I look forward to our next visit to talk about the next generation of technology, which I know we're already working on.

Achin Bhowmik

It was a pleasure talking to you. I can't wait to come back next time to share more about what's cooking.

Dave Fabry

We'll look forward to it. So, for our listeners, thanks for listening, and thanks for your loyalty. Please like and subscribe so you don't miss a single episode, and send us an email at soundbites@starkey.com if you have ideas regarding future content we should cover on the podcast. Thanks so much, and we look forward to seeing and hearing you soon.