Starkey Sound Bites: Hearing Aids, Tinnitus, and Hearing Healthcare
Being a successful hearing care professional requires balancing a passion for helping people hear with the day-to-day needs of running a small business. In every episode of Starkey Sound Bites, Dr. Dave Fabry — Starkey’s Chief Health Officer and an audiologist with 40 years of experience in the hearing industry — talks to industry insiders, business experts and hearing aid wearers to dig into the latest trends, technology and insights hearing care professionals need to keep their clinics thriving and patients hearing their best. If better hearing is your passion and profession, you won’t want to miss Starkey Sound Bites.
Starkey’s CTO Achin Bhowmik Dives into the High-Tech Details Behind Genesis AI
Starkey’s Genesis AI is our greatest engineering achievement to date. The journey to reach the recent launch of Genesis AI began in 2017, when Achin Bhowmik, Ph.D., stepped into the role of Chief Technology Officer and Executive Vice President of Engineering at Starkey. The former Intel Corporation executive and Starkey President and CEO Brandon Sawalich kept the project under wraps for five years. In this episode of Starkey Sound Bites, Dr. Bhowmik shares the backstory with podcast host and Chief Innovation Officer Dave Fabry. He also dives into the high-tech details of what separates Genesis AI hearing aids from the rest: they are built to mimic the brain. Find out what this means for the patient experience and where hearing healthcare providers will see the benefits.
Welcome to Starkey Sound Bites. I'm your host, Dave Fabry, Starkey's Chief Innovation Officer. If you've listened to this podcast before, you know that I love to talk technology. And Starkey recently introduced some groundbreaking technology in the hearing aid space, technology that has never been seen in our industry before. There's no one better to speak on that topic than our CTO, Dr. Achin Bhowmik, who is my friend, my colleague, and, importantly, my boss. So I'll be on my best behavior, Achin, for the next little bit here. But welcome back to Starkey Sound Bites.
SPEAKER_01Always great to talk to you, Dave, and to share our exchanges with our audience.
SPEAKER_00Indeed. And you know, this has been a long time coming. You joined Starkey in 2017.
SPEAKER_01August of 2017.
SPEAKER_00And really, almost in secret, you began working at that time on what is now known as Genesis AI.
SPEAKER_01And we kept it quiet up until now.
SPEAKER_00Yeah. So let's go back to the beginning, then. When you and Brandon Sawalich, Starkey's president and CEO, first had discussions about bringing in your industry knowledge from your past experience at Intel, where you headed up the perceptual computing division, you took this turn in your career to join our space, and I can tell you personally, I'm very glad that you did. Talk a little bit, from ground zero, about how you began on this.
SPEAKER_01Thank you very much for the amazing setup here. If you go back to the vision, it was very clear by the time I had learned enough about what hearing aids are. Sitting at the feet of Mr. Austin, talking with Brandon, you, and other colleagues, listening to customers and patients, it was very clear there was an opportunity to significantly improve the experience with these devices. Collectively, as an industry, we were not where we could be. And it required us to build the product from the ground up, reimagining the devices inside out: the architecture, the design, the processor chips, the signal-processing algorithms, the mobile app that patients use for control, the way professionals fit these devices with the fitting software, the whole thing.
SPEAKER_00All new everything. Right.
SPEAKER_01So it does feel like a long time, five years in the making. But consider the magnitude, I would say even the audacity, of the technical targets we set for this ground-up redesign with everything new. When we say everything new, it's not a marketing buzzword. We built the processor from the ground up, and it takes time to develop a processor that's custom-built for the kind of experience we're delivering with Genesis, which we're going to talk about, along with new paradigms of sound processing.
SPEAKER_00Additive compression system.
SPEAKER_01We'll talk about that.
SPEAKER_00Yeah.
SPEAKER_01Packing it all into an industrial design that we should all be proud of, extremely comfortable to wear, pushing the patient experience higher, a mobile application built from the ground up that's intuitive to use, and a fitting experience you're going to talk about. Looking back, five years is a long time, but that's just how long it takes to build a product of this magnitude.
SPEAKER_00Absolutely. And I don't think you mentioned that the receiver cable on the RICs is new. It's still a snap fit, and I thought our six-pin receiver snap fit was the best connector in the industry. This one's better.
SPEAKER_01This gets better.
SPEAKER_00And the usability and durability of that, the pliability of the receiver cable. We left no stone unturned in this. So let's talk a little bit. We just had the launch, and in that we hit the high points. But now we get to go into the weeds, and I can't think of a better person to do that with, or a better place to do it, than on this Sound Bites podcast. So let's talk about deep neural networks. Your expertise in this area is legendary, coming from Intel and your use of deep learning in the visual domain, and now in the audio space. Unpack it a little bit for the professional. Keep in mind this podcast is designed for hearing care professionals. Why should they care about a deep neural network in a hearing aid?
SPEAKER_01It's a great place to start. So, with all of the hoopla and excitement around AI, with deep learning or deep neural networks being a branch of AI, the reason people should care is that it changes the experience with the technology and the product. The domain I often like to bring an example from is autonomous cars. There is still work to be done to bring them to everybody, but just imagine being able to sit back and relax while the car takes you where you want to go. It changes the experience when it works. In our domain, given the last five years of work, we are of course at the forefront of developing and deploying AI and deep neural networks for hearing applications. In the old days, engineers had to handcraft specific settings and algorithmic tweaks to make the device behave well as we go about our daily lives, from having a conversation with you, to walking through a noisy cafeteria, to walking out into the street where cars are honking and the wind is blowing. The world throws all these very complex acoustic environments at you, and the device has to learn to deal with them. The old techniques, based on handcrafted algorithms, were simply not good enough. What AI, and specifically a deep neural network, provides is a data-based approach for the device to become smarter and deal with the acoustic challenges thrown at it in a way that was not possible with previous technology. So essentially you take this neural network, and to a general audience I like to explain it as mimicking the circuitry you have in the cerebral cortex of the brain. The brain is an amazing computational system, in fact the most amazing computational system we know of. At its core, not to offend the neuroscientists, the computational description of it is relatively simple.
You have a network of neurons that are heavily interconnected and that learn from the information thrown at them. When a kid is learning to recognize language, they're basically training that neural network.
SPEAKER_02Right.
SPEAKER_01Copying a page from biology, the technological equivalent of that is a deep neural network built into the device. There are two ways of doing it. One, you could build it in software, which is what most people have done so far. But with Genesis, it was an opportunity for us to go one step further and build a hardware architecture where neurons are interconnected in a block on the chip itself. It's an on-chip accelerator that delivers high-performance execution of neural network algorithms, which provides an amazing listening experience for your patients. That is why professionals should care: it will improve the experience their patients have with these devices in day-to-day listening.
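To make the "network of interconnected neurons" idea concrete, here is a minimal sketch of the kind of computation one layer of a neural network performs: each output neuron is a weighted sum of its inputs passed through a nonlinearity. Everything here, the layer size, weights, and activation, is an illustrative assumption for explanation only, not Starkey's actual model or chip design.

```python
# Minimal sketch of one feed-forward neural network layer.
# A hardware accelerator like the one described above executes many of
# these multiply-accumulate operations in parallel on the chip.
# All numbers are illustrative, not Starkey's actual network.

def relu(values):
    # Common nonlinearity: pass positives through, clamp negatives to zero.
    return [max(0.0, v) for v in values]

def dense_layer(inputs, weights, biases):
    """Weighted sum per neuron, plus bias, then ReLU activation."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs))
        outputs.append(total + bias)
    return relu(outputs)

# Three hypothetical input features (e.g. per-band sound levels), two neurons.
features = [0.5, -1.0, 2.0]
weights = [[0.2, 0.4, 0.1], [-0.3, 0.1, 0.5]]
biases = [0.0, 0.1]
print(dense_layer(features, weights, biases))  # → [0.0, 0.85] (approximately)
```

Training adjusts the weights and biases from data, which is the "learning from information thrown at it" that Dr. Bhowmik describes; inference on the device is then just these multiply-accumulate sums, which is exactly what a dedicated hardware block can do efficiently.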
SPEAKER_00So, correct me if I'm wrong; let me try to unpack that, because there's a lot there. In terms of emulating the way the brain works with a deep neural network: in nature, our brain integrates information that comes from the right ear and the left ear, even from vision, and from multiple inputs: timing, frequency, intensity, where noise sources are located, what speech is occurring. The brain takes all of that in, using the ears as sensors, and sorts out speech, which is usually the most salient target we want to hear. And what we're doing in the Neuro Sound processor is emulating that by analyzing the incoming inputs. Now, this device can be fit monaurally, but it's designed from the ground up with binaural systems in mind, taking inputs from both sides and monitoring them to determine: is this a quiet environment, is speech present, is noise present, some combination of the two, where are the noise sources, and so on. And what that takes more than anything, I would imagine, is an insanely fast processor to take all that input in. Can you talk a little bit about the engine on this new chipset?
SPEAKER_01I will, and as I do, just to build on what you said: that's exactly what it is. It's like taking a step back and looking at the neurobiology of human hearing, because at the end of the day, this product is designed to help you hear better.
SPEAKER_03Right.
SPEAKER_01And so far, for decades, the focus has been on what is lost in the ear.
SPEAKER_00Exactly.
SPEAKER_01And you know, the neurobiology of the ear is very well known: it performs the function of a directional microphone, it performs tonotopic frequency analysis in the cochlea, you have the outer hair cells that provide nonlinear amplification, and finally you have the transduction that sends neural pulses resulting from the sound waves to the brain.
SPEAKER_03Right.
SPEAKER_01However, the magic, as you said, happens in the brain: understanding conversations, reducing noise, and helping us pay attention to what we care about or should care about. That happens up in the cortex. And this amazing computation done by a network of neurons suffers if we live with hearing loss for a long time. You have brain atrophy, neuron losses, and all of that. The opportunity for the processing in your hearing aid now is not just to amplify sound in a way that mimics the sensory cells in your ear, but also to pre-process that signal in a way that helps your brain understand the sounds that matter, processing that is typically done in a healthy human brain. There is an opportunity for us to perform that processing in the neural network, with fast computation and an additive compression system, such that it becomes easier for our patients to understand speech, to reduce noise, and to go about their day-to-day lives much more easily with these devices on than otherwise, or even with legacy devices.
SPEAKER_00Indeed. And so the chip needs much faster processing than has ever been in a hearing aid before, certainly faster than anything we, or anyone else, have offered.
SPEAKER_01So let me geek out on the numbers a little bit on the processor. You pointed out my past life at Intel, the small chip company.
SPEAKER_00Yeah, you know a little bit about chips.
SPEAKER_01So I can geek out about chips for a long time, but let me tell you just a few things. The processor takes the longest time to develop of anything in a product, and it took us five years to bring this to market, from concept to launch. This processor chip is a once-in-a-decade upgrade. It's not a minor or even incremental improvement over the processor of the past generation of devices, or even compared with what our industry is familiar with. To give your audience some numbers: we have packed six times the number of compute elements, or transistors, into this tiny chip, which is actually smaller than the previous one.
SPEAKER_00Right. So it's smaller, with six times the transistors.
SPEAKER_01And it has four times faster speed.
SPEAKER_00Okay.
SPEAKER_01It has that deep neural network accelerator engine built right into the hardware of the chip. It has five times the processor memory and ten times the system memory, because, guess what, AI needs as much computation as you can throw at it, and it needs the memory footprint to host bigger models. This engine under the hood is what enables the amazing listening experience of the product. And this upgrade of the processor is a once-in-a-decade, significant upgrade.
SPEAKER_00So with that engine, that processing speed, the number of transistors, the onboard memory, and everything else, all of this computational power must be tremendously draining on the battery.
unknownRight?
SPEAKER_01That would be the traditional, conventional wisdom. In fact, I call it the tug of war for the engineers: the more computational performance, the more processing you do, the more energy it draws, and hence the lower the battery life.
SPEAKER_03Yes.
SPEAKER_01So I call it the challenge of defying gravity. You have this four-times-faster chip, six times more transistors, a deep neural network, and an amazing sound-processing engine built on it, an additive system that deals with different sounds in parallel and brings it all together at the end. We couldn't do that before because we didn't have the compute power. So, what does it do to the current draw and battery life? Should we just be trying to preserve the battery life? Even that would be a great feat.
SPEAKER_03Yeah.
SPEAKER_01But we defied gravity in engineering terms, because not only do we have this orders-of-magnitude higher compute performance, we have doubled the battery life on a single charge. It's insane. Whereas the goal for prior products was to make sure you could get a full day of use from a single charge, now you get more than two days of full use out of a single charge from the Genesis RIC RT devices.
SPEAKER_00Yeah, I think that's incredible: the RIC RT with a telecoil, with all of this processing and computational power, has 51 hours of battery life if you're using it as a straight hearing aid with no streaming. And even with streaming, you're at 40-plus, 45 hours of battery life, almost two days.
SPEAKER_01All the while you now have a noise reduction system that is ten times faster.
SPEAKER_00Yep.
SPEAKER_01You have a 20 dB wider dynamic range and a lot more processing going on. At the same time, you can use the device twice as long as the previous devices.
SPEAKER_00And you talked about how, in this case, it's an onboard DNN accelerator. So talk a little bit about where professionals, and ultimately patients, should expect to see the benefits. You talked about faster noise processing. Is that where this shows up, in noise management? Does it show up in overall sound quality, in speech intelligibility? Where is the tangible benefit that a professional, and ultimately an end user, sees from Genesis AI?
SPEAKER_01I'll call out a few areas. Of course, it's a ground-up redesign of the overall listening experience, but I'm going to call out a few areas where patients are going to notice the difference immediately. First, the listening experience with soft sounds. They will appreciate how quiet the devices are, and how noise is reduced, in relatively quiet environments, which is where we spend most of our time, like your quiet study.
SPEAKER_00Yeah, and I'll tell you, I've worked with patients on this product, fitting them, even those who had been fitted with Evolv AI. One patient, I asked him if he was willing to try Genesis as we were developing it. And he said, sure, I'll try it, but I can't imagine how you're going to give me anything that's better. In terms of spontaneous user acceptance: I put them on him, programmed them, and we were in a quiet room when we did, and he said, right off the bat, this sounds more clear. It's quiet. He said, I'm hearing every word clearly. And I know everyone says that's the case when they introduce new products, but I'm telling you, this patient is a tough customer.
SPEAKER_01They'll forget that they're wearing the device.
SPEAKER_00Yeah.
SPEAKER_01The second place they will see the benefit, and this all stems from the wide dynamic range and the seamless processing of sound, with an additive compression system that handles different sounds with different response times, is reduced listening effort in noisy environments, in understanding conversation and speech in difficult environments. They will feel the difference there. So from the softest sounds of life to noisy environments, where you can use every bit of help in understanding speech, they should feel the difference from the prior generation of products and from other products in the market.
SPEAKER_00Yeah, and we're going to talk about this over the next several podcasts. If we don't get through everything we intend to today, we'll have you back, and Dr. Sara Burdak is going to come talk about the clinical outcomes. One of the things that really impresses me, beyond how exemplary the sound quality is in low ambient environments, is that we have a 118 dB input dynamic range, the largest in the industry. And as you said, it's processing those very soft sounds, keeping loud sounds from reaching an uncomfortable level when programmed by a professional, and delivering that sound quality throughout all of that range using this additive compression system, which uses different time constants and different attributes for transients versus steady-state noise, really enabling the hearing aid user to make maximum use of their residual dynamic range, the residual auditory area. And in so doing, and I'll tease a little of the clinical outcomes here, the difference we've seen between previous hearing aids and Genesis AI is that the magnitude of the difference increases with increasing hearing loss.
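The "different time constants for transients versus steady-state noise" idea can be illustrated with a classic building block of audio dynamics processing: two level-tracking envelope followers, one fast and one slow, watching the same signal. The follower structure and all coefficients below are hypothetical, chosen only to show the mechanism, not Starkey's actual compression algorithm.

```python
# Illustrative sketch of tracking sound level with two time constants:
# a fast envelope follower responds to transients, a slow one tracks
# steady-state level. A compressor that consults both can treat brief
# and sustained sounds differently. Coefficients are hypothetical.

def envelope(signal, attack, release):
    """One-pole envelope follower; lower coefficient = faster response."""
    env, out = 0.0, []
    for x in signal:
        level = abs(x)
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

# A short loud burst (a transient) followed by a quiet steady passage.
sig = [0.0] * 5 + [1.0] * 5 + [0.1] * 20

fast = envelope(sig, attack=0.1, release=0.5)   # reacts within a few samples
slow = envelope(sig, attack=0.9, release=0.99)  # reacts over many samples

# At the end of the burst the fast envelope has jumped to nearly full
# level while the slow one has barely moved.
print(round(fast[9], 3), round(slow[9], 3))
```

A compressor driven by the fast envelope can tame the transient quickly, while gain decisions tied to the slow envelope stay stable for steady-state sound, which is the flavor of behavior the different response times described above are meant to provide.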
SPEAKER_02Right.
SPEAKER_00And I think it's really because this processing is emulating the brain, enabling patients to make every use of their remaining hair cells and their brain.
SPEAKER_01Yes. So we are taking on some of the processing tasks that their healthy brain would have done, but that they unfortunately no longer have between the ear and the brain.
SPEAKER_00And offloading that translates into ease of listening.
SPEAKER_01And also just the quickness, the speed of processing: noise reduction that is ten times faster, so before you feel the noise, it's gone. And then the wider dynamic range you're talking about, and a 40% lower noise floor.
SPEAKER_00Right.
SPEAKER_01And people are going to feel that.
SPEAKER_00Look, in our industry, others have used a deep neural network for training of the devices. But as you said, this is an onboard accelerator that we're using. And then there's Edge Mode+. Is that an example? People have said Edge Mode is pretty good for those patients who get into challenging listening situations and can either double-tap or press a button in an app to activate an Edge Mode acoustic analysis in a challenging environment. Edge Mode+ is incorporating elements of the DNN already, correct?
SPEAKER_01Yes. And like you said, I used to say Edge Mode is like putting the power of artificial intelligence at the fingertips of the patient. Now, with a lot more data in our bag, a lot more use conditions and use cases, we can go to the next level with Edge Mode+, where it's optimized for the listening experience or for speech understanding, or you might want to go into a noise reduction mode and get the extra benefit you need, with the power of the deep neural network providing the algorithmic performance.
SPEAKER_00Yeah. In particular, those moderately and really noisy situations are where this is now just scratching the surface, and we're not done there yet. As we've said before, we're just getting started. With this product, we're really planting a flag with this onboard DNN accelerator as to where we can go in the future. What it does is reset the baseline.
SPEAKER_01It resets the baseline. Every time you have a breakthrough technology, it resets the previous baseline. Vacuum tubes used to power your electronic devices; when the transistor came, everything reset. From single digits, tens, and hundreds of vacuum tubes, you now have billions of transistors on a chip. So the platform, the engine under the hood that we have now, allows us to keep unlocking newer and further benefits with algorithms.
SPEAKER_00Right. And for the clinicians who are listening: a lot of times the unknown, the anticipation of a breakthrough like this, carries some fear with it. So reassure me, as an audiologist, that deep neural networks aren't going to drive me to extinction.
SPEAKER_01Right. In fact, if anything, it is going to help, because the patients are going to be more satisfied. First of all, you need the devices to be fitted really well, and the only way to get that done is through the professionals. We didn't get into the details of it, but the advances in the technology for Genesis are not just in the processor or the sound-processing algorithms or even the mobile app; the fitting software, Pro Fit, takes things to a very different level that our professional customers will immediately notice. We call it the minute fit: four clicks from box to first fit. And consider the quality of that first fit, where we have algorithmic advances to fit very well, because no two ears are the same. The improvements we've made in the fitting algorithms are going to enable a great patient experience. And once you have fitted the patients and they're out in the wild, you want them to be happy. What these devices do with AI and the deep neural network is instantly analyze whatever challenging environments patients walk through in their day-to-day lives, making, get this, 80 million automatic adjustments every hour. That's about 22,000 a second, and almost 2 billion per day. So a device that, without me doing anything, is constantly analyzing and adjusting for me is going to make a happy patient. That's great for us and great for our professional customers. As Bill Austin likes to say, his vision was: can you make hearing aids that are my very own brain assistant? That's what this is. What you used to have in your healthy brain, the ability to process sound, understand speech, and reduce noise, you don't have anymore with hearing loss. These devices are that very personal brain assistant.
The AI engine in there is constantly working, constantly analyzing, making two billion analyses and adjustments every day without you even knowing about it as you go about your own business, helping you hear better, reducing your listening effort, helping you enjoy the softest sounds in life, connect with people, and connect to the world around you. It's great for the entire ecosystem: good for the patients, great for the professional customers who are fitting those patients, and great for us, because we have the satisfaction of bringing advanced technology to the people we serve.
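The adjustment rates quoted here are internally consistent, as a quick back-of-the-envelope check shows:

```python
# Cross-checking the quoted rates: 80 million automatic adjustments
# per hour works out to roughly 22,000 per second and just under
# 2 billion per day.

per_hour = 80_000_000
per_second = per_hour / 3600   # seconds in an hour
per_day = per_hour * 24        # hours in a day

print(round(per_second))  # 22222, roughly the "22,000 a second" quoted
print(per_day)            # 1920000000, i.e. almost 2 billion per day
```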
SPEAKER_00For sure. And you mentioned Pro Fit, the new fitting software. For people who have been very content with Inspire, and I've heard over the years that Inspire is many professionals' favorite software, we've streamlined things even further, with simple things like having the devices automatically detect the left ear and right ear once the receivers are connected, through that smart cable.
SPEAKER_01We needed to redo the cable to put the smarts into it.
SPEAKER_00That's right. And so now, as soon as you go to read a new set of devices, you're not reading two left devices; you're reading and identifying the left one and the right one.
SPEAKER_01Calling that out, these are the innovations that, in hindsight, seem like obvious ideas.
SPEAKER_00They're the ones we tend to forget with all of this newness. And I'm telling you, that was a pain point in the past: oh, okay, now I have to look at the tiny printing to try to read which one is which, and this does it automatically. Four clicks from the box into the ear, with that first fit. And yet, for those people who loved Inspire, there's a sense of familiarity in Pro Fit, with innovations like that smart connection. I think another feature that will go unnoticed by some people, but is embedded in the accuracy of that first fit, is that after you run the feedback initialization, it takes the venting into consideration and uses it to improve the feedback optimization and the first fit.
SPEAKER_02Yep.
SPEAKER_00e-STAT 2.0 takes full advantage of that improved dynamic range, both in amplitude and in frequency, to model better. Because of this additive compression system, we're able to deliver more high-frequency gain, but with great clarity. Some people are going to say, oh no, more high frequencies, that's going to sound harsh. It doesn't. We didn't hear that.
SPEAKER_01And talking about the professional benefits, should we talk about the firmware upgrades? How long does it take to upgrade the firmware? Firmware upgrades are now four times faster on Genesis compared to Evolv.
SPEAKER_00A pair of devices in about three minutes, maybe four. That's remarkable.
SPEAKER_01Yeah, and how about being able to do that through the mobile app?
SPEAKER_00In the app. So one of the things I do, and as you know, it's like the hair club: I'm not just an employee, I'm also a customer. I wear the devices when we're testing new ones. And one of the cool things is, in the radial dial under My Hearing, you come in and look at the settings for my hearing aids, and it tells me whether the firmware is up to date or not. If it's not, I can do a firmware update within the app myself, as the end user. That's convenient for me and for the professional. For the first time I can do that, and it updates the firmware in just minutes. Many clinicians will prefer to use that as an opportunity to have the patient come back in, explain the new features that come along with the firmware, and control that process. But for people like me, who are maybe a little more tech-savvy, I want the convenience of getting access to those features as soon as they're pushed out. So again, we're finding the balance: the minute fit, from the box to the ear in four clicks, but I'll still, as a professional, have access to the 24 channels on the premium product. For soft, moderate, and loud inputs, plus MPO, I can go under the hood and tweak all I want, but I don't have to.
SPEAKER_02Yes.
SPEAKER_00Same on the end-user side. Edge Mode now offers that same easy button, if you will: I can go into Edge Mode and click one button, and it will analyze the environment. But notice that I now have two additional options that allow me to optimize for speech audibility and clarity, or to take advantage of that additional, more aggressive noise management. So I think, and I'm biased, but we've really found the sweet spot between ease of use and comprehensive adjustment, for the end user and for the professional. And it continues the objective we've had for a number of years: our technology, in the professional's hands, continues to deliver the best patient outcomes, and professionals don't need to be afraid of the DNN. Eighty million adjustments every hour sounds like a lot. But it happens by itself.
SPEAKER_02But it happens by itself.
SPEAKER_00And then there's really being able to see the proof in the data, in the outcomes we've shown: ease of listening has improved, more audibility for soft sounds, as you mentioned, more high-frequency amplification to provide clarity without harshness, full advantage of the dynamic range, and overall speech understanding in quiet and in noise. I think clinicians are going to be blown away, and their patients will be too. And I want to hear from you if you're getting a different experience when you try the devices. But we're delighted to bring this to market, and I'm appreciative of you, your team, and your partnership in helping to deliver this.
SPEAKER_01Thank you for bringing that point up. Earlier I was having a conversation with Brandon about just what it took. Five years went by in a flash.
SPEAKER_00Yeah.
SPEAKER_01But every part of the company played a role.
SPEAKER_00Everyone.
SPEAKER_01On the technology side, it goes from vision to ideas to advanced research and development to engineering to operations for manufacturing, and finally to the commercial teams getting trained and getting it out there. It took the entire company the last five years to bring out this, I would say, breakthrough line of products.
SPEAKER_00Yeah. And my home is the R&D group, and I don't think there's a person in R&D who wasn't involved in some way or another. It was a massive effort. I think we've had over 200 patent submissions since 2017, many of which pertain directly to this product. We still have some of the features we introduced back with Livio and continued in Evolv, like physical activity tracking, but we keep making these devices smarter. Now we've doubled the activity classification. In the old days, I'd get on the elliptical trainer, as I do most mornings, and it would say I got steps and some exercise. Now it can differentiate between running, standing, even lying down, riding a bicycle, all aerobic activity. All of that is automatically detected and chronicled in the app for those who want to record it.
SPEAKER_01It continues to have the life-saving fall detection feature. And we happen to have the only device in the industry that can help save a patient's life if they fall, by sending an alert to their loved ones.
SPEAKER_00Yeah, and all within the app. We highlighted the My Starkey app a little bit, and we'll have more opportunity to go into it in greater detail in the future, because there's so much to talk about. But within the My Starkey app, the user interface puts all of the most common adjustments front and center, so you have the radials for Edge Mode, programs, and My Hearing, which, as I said, is where you go to see if you need a firmware update. Volume control adjustments, too. Most people are going to turn the volume up and down in both ears together, because the professional has set it up to be balanced, but I can still override that, go just underneath the hood, and see that everything is right there.
SPEAKER_01All the controls are. Very simple, in an intuitive interface.
SPEAKER_00And for somebody who has dealt with lifelong visual challenges, the nice thing is I can just swipe and it will change from memory to memory, and I get the audio prompt in my ear. It changes the color, too. That's such a subtle detail, that slightly different palette, so I don't have to concentrate on trying to read in low light or challenging environments; I know that I've changed programs, and I get the audible feedback in the ear. So I couldn't be more excited. I know how passionate you are in leading the team through this. I thank the entire team at Starkey, from R&D to marketing to sales to operations, everyone who made this happen. I can't wait to see what the market thinks. It's a big moment, and congratulations.
SPEAKER_01Thank you very much.
SPEAKER_00And for those of you who listen to the Sound Bites podcast, if you enjoyed this, please like it, and if you hit subscribe, you'll be sure not to miss a single episode. I know we'll have you back to talk more about this, because we're just getting started, and I can't wait to see where we're going next. I can't share everything, because the R&D part of me wants to shout from the mountaintops, but we have a lot of other things coming in the future that we know patients are going to like. So thank you, Achin, for what you do. Thank you for being here to share and nerd out with me a little bit today, and I look forward to seeing you again.
SPEAKER_01Thank you, Dave.