Starkey Sound Bites: Hearing Aids, Tinnitus, and Hearing Healthcare
Being a successful hearing care professional requires balancing a passion for helping people hear with the day-to-day needs of running a small business. In every episode of Starkey Sound Bites, Dr. Dave Fabry — Starkey’s Chief Hearing Health Officer and an audiologist with 40 years of experience in the hearing industry — talks to industry insiders, business experts and hearing aid wearers to dig into the latest trends, technology and insights hearing care professionals need to keep their clinics thriving and patients hearing their best. If better hearing is your passion and profession, you won’t want to miss Starkey Sound Bites.
How Starkey Became the AI Leader of the Hearing Industry
As Starkey’s R&D team celebrates nearly 300 years of collective work experience in the field of artificial intelligence, Dave sits down with Starkey’s Vice President of Advanced Development, Amit Shahar. They talk about Starkey’s journey to becoming the industry leader in AI and DNN, including the strategic decision to establish a global R&D presence, with key operations in Tel Aviv, Israel. They also chat about their visions and predictions for the future of hearing aids. If you love a conversation that goes deep into the weeds on technology and how it’s developed, this is an episode you won’t want to miss.
To learn more about the latest in Starkey’s hearing technology, visit StarkeyPro.com.
Dave: Welcome to Starkey Sound Bites. I'm your host, Dave Fabry, Starkey's Chief Hearing Health Officer. In this episode, we'll be talking to Amit Shahar, the Vice President of Advanced Development here at Starkey. He's a friend, colleague, and someone I admire greatly for his ability to create and develop new technology that actually shows up in our products. He's one of the leading experts here at Starkey who've been helping to write the roadmap for the entire hearing industry when it comes to advancements in technology. I can't wait to do a deep dive with him today. But before we do, a quick note to our listeners: if you have topics you'd like to hear more about, please send us an email at soundbites@starkey.com. We'd also invite you to rate, review, and subscribe to this podcast to be sure you don't miss a single episode, and thank you in advance for doing so. Now, back to our guest and the highlight; I'm just the appetizer here for Amit today. Amit joined us at Starkey six and a half years ago. I remember well a conversation we had when you were a newcomer to the industry, sitting at an American Academy of Audiology conference where you came to immerse yourself in all things audiology. He's been one of the driving forces behind our groundbreaking work in hearing health technology, helping us lead the industry not only in terms of hearing, but in terms of health and wellness, which is an area of personal interest to me, and also in the area of intelligent assistants, really having the hearing aid act as the hub, if you will, for overall health and wellness. But job one is always better hearing. So, Amit, thanks for finally joining us on the podcast today. This has been long overdue.
Amit: Thanks for having me. It's great to be here.
Dave: Well, before we go any further, and before we go down many rabbit holes, as I expect we will in the next half hour, we need some ground rules to make sure we don't spill the beans, because the boss will not be happy with us if we have competitors listening to this podcast looking for insights into the future of our roadmap. So remember, we're going to share the work that's gotten us to this point and tease a little bit into the future, but let's not open up the roadmap completely. Okay, deal?
Amit: Okay, that's going to be very challenging, but I'll give it a shot. What we see today in the product are things we started developing five, six, seven years ago. So what we're working on now, I think you're going to see in the product six years from now.
Amit: So I'll do my best.
Dave: All right. Well, here we go.
Dave: But first, really, tell us a little bit about your background. What got you this far, and what attracted you to the hearing aid and hearing care industry?
Amit: Okay, I'll be honest: the hearing aid industry was not on my bucket list, that's for sure. I had worked in different industries. I started in the defense industry, then consumer electronics, then the health industry. My last job before coming to Starkey was at Intel, where I was doing a lot of cool stuff, from augmented reality to virtual reality to robots, drones, mobile phones. So I wasn't looking to move. But then Achin called, Achin Bhowmik, our Chief Technology Officer, and he wanted to talk to me. Initially I thought it had nothing to do with the hearing aid industry, but he had this vision about changing the industry, and he said he needed me to execute it. That's how I found my way into the hearing aid industry, and, you know, I saw the spark in his eyes. And it turns out that from a technology standpoint, the things I had worked on throughout my entire career, which included sensors and AI and human interfaces, are actually very similar to what I was about to do at Starkey, and that's what we are doing today.
Dave: Yeah, and you know, I've come at this, as you know, from the complete opposite end. This is my 42nd year as an audiologist, so I've been immersed in hearing healthcare for a very long time, and I'm approaching this from that perspective, coming to artificial intelligence much later than you did. Do you remember what you said to me at that first meeting, when we were walking around AAA, the American Academy of Audiology conference, about the industry?
Amit: Yeah, I talked about the sea of sameness.
Dave: And you know, that hurt a little bit, because, as I said, I've been in this industry for so long, and we see incremental improvements. But in terms of what we've taken on since you and Achin joined, we really have differentiated ourselves from the rest of the industry, and I think that begins with keeping the focus on job one, hearing, but also bringing in artificial intelligence. And you know, I like to say, and I'm not even kidding, that we put the AI in hearing aid, by starting to focus on hearing aids that could, of course, help you hear better, but also live better, through those health and wellness features and the virtual assistant. Your comments to me when we first met still ring in my head about that sea of sameness. And I think now, certainly because of the emergence of artificial intelligence, and with the focus and outside perspective that you and your team brought, we can arguably say there's not a sea of sameness anymore. At least I think we're differentiating ourselves in this space.
Amit: I would definitely agree.
Dave: Yeah. So I spoke of your team briefly. Tell us about the team. You're based in Tel Aviv, Israel, and that's where the advanced development group you lead is located. Tell us a little bit about why there, a little bit about your team and what it's focused on doing, and how that integrates with the rest of Starkey.
Amit: Yeah, okay, I'll give a little bit of background. Every product company always needs to balance execution, which is releasing products, fixing issues, making incremental improvements, doing support, and so on, against developing the future: future products, future technologies. In many cases, whenever there is a debate or an issue, execution always wins, right? It's always more important. That's why the most successful technology companies in the world created a separate group, advanced development or whatever name they give it, that is focused and almost shielded from execution, so that regardless of what's going on, it can continue to develop the future. And that's why, when Starkey decided strategically to become a technology leader, we decided to create this advanced development group. The advanced development group is chartered with developing new technology, but also with bringing it to market. It's not a research group; our job is not just to write papers and go to conferences. Our job is to bring this technology into product, in collaboration with all the engineering groups and everyone. And by the way, it came with a substantial investment. It's easy to make decisions if you don't need to put money behind them, but Starkey did decide to put a lot of money into bringing in resources and recruiting the best talent we could possibly find. One of the first things we realized when we established this group is that there is only so much talent we can find in the Twin Cities. We decided to go look for the talent where the talent is, and that's why we opened our first R&D facility outside of the US in Tel Aviv, Israel, also known as the startup nation.
And it's no coincidence that all the big tech companies have R&D centers in Israel. There's a lot of good talent.
Dave: Let me stop you for a second on that. Why is it that so many technology companies have research and development divisions in Tel Aviv? As you said, Israel is known as the startup nation, but why is that?
Amit: It's a lot about the culture, the academia, the drive for excellence, and how that's pursued. A lot of research has looked into why, you know, after the US, more Nasdaq-listed startups come from Israel than from almost anywhere else. It's a long conversation. But what we did then was open a facility in Israel, and after that, we started hiring in other locations as well. In fact, today, in addition to Eden Prairie and Tel Aviv, we have more than 10 other locations just for advanced development. We have people from Europe all the way to Silicon Valley. We go where the talent is, and this is really important if we want to achieve, and I believe we did achieve it, becoming the leader in technology in this industry.
Dave: Yeah, as you said, the talent pool is very deep. And I think I read somewhere that when you totaled up all of the experience your team has with artificial intelligence, it's something approaching 300 years of collective experience.
Amit: Yeah, on AI. We have a lot of other experience too, but yeah, we have over 280 years of collective work experience in AI.
Dave: That's remarkable. Because again, your charter goes beyond research. I like to say that an invention dies a lonely death if it doesn't innovate, that is, unless it makes an impact in the market. And your team, in its relatively short existence, has already contributed many features and technologies that have shown up in Starkey products, with more showcased for the future. As Brandon says, we're just getting started. So let's talk a little bit about what the team is working on now, again without spilling the beans, but talk about the overall direction. How much effort is focused on better hearing versus some of these other projects that spill over into overall health and wellness, the virtual assistant, and things like that?
Amit: Well, by far our biggest investment is in better hearing. That goes into improved sound quality, speech in noise. And it's not just about the hearing aid itself; it's also about providing more tools to clinicians, a lot more technologies in various areas. We do have a significant effort on health and wellness and on intelligent assistants. We do believe in the future of those, and as you said, we are leading the pack in these areas as well. But I would say that's the icing on the cake. Most of our effort is in better hearing. And if you want to get into more detail, then, as you mentioned, AI is eventually going to reach into every aspect of our algorithms, and that's where we are putting a lot of our effort.
Dave: Yeah, so with that, let's dive into that area a bit. I'd like to begin with the way you think of an overarching definition for AI. I like to say that artificial intelligence has become ubiquitous, but it's also bordering on a buzzword. You can hardly turn on a television commercial or look at a social media ad, or, it feels like, even have a conversation these days, without someone throwing AI into the mix. I read something just last weekend saying consumers are increasingly suspicious of products that use AI in their advertising, and I think some of that is fear-based or a lack of understanding. So let's begin by providing a non-intimidating definition of artificial intelligence. Talk a little bit about how you define it and how it works on a very basic level.
Amit: Okay, great. My favorite subject. So yeah, as you mentioned, AI is a very broad term for any system that can reason. It doesn't really say anything, and to be honest, a lot of companies abuse it and say, oh, we are doing AI. What we are doing is machine learning, which is a type of AI, and deep learning, and let me explain what that is. In traditional algorithm development, you give the computer or the hearing aid very specific rules and instructions. They are very explicit, and you try to anticipate all the various situations that could happen. You tell it: if you see this, do that; if you have that, do this. You're writing very, very detailed, specific instructions, and that's a traditional algorithm. The problem is what happens when the hearing aid or the computer sees something it hasn't seen before; then it doesn't know what to do. It's always limited. With machine learning, you still have rules. You give the machine rules, but then you give it a lot of data, and it learns from the data what to do. The good thing is that when the computer or the hearing aid encounters something it hasn't seen before, it can still reason and figure out what to do based on what it has learned so far. It can do things it was never explicitly told to do. A subset of that, which is what we are doing, is called deep learning, and there are no rules. You give the machine, the computer, a lot of information and let it figure out what the rules are and what to do. That gives it enormous flexibility. And in fact, if you think about it, this is how the human brain works. Like you, as a human being, I presume.
Unless you're mostly cyborg by now. If you see a dog or a cat, nobody gave you instructions to count the number of legs or look for the ears or anything like that. How do you know it's a dog? Because when you were a kid, you were shown examples. That's exactly how a DNN, a deep neural network, works. So it's no surprise that the architecture behind deep learning is based on a neural network; that's how it's built in the computer, and it mimics the biological brain. I hope that explains it. It gives enormous flexibility, and that's its greatest power.
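The distinction Amit draws here, explicit hand-written rules versus behavior learned from labeled examples, can be sketched in a few lines of code. Everything below is a toy illustration, not Starkey's actual signal processing: the two acoustic features (modulation depth, spectral tilt), the thresholds, and the training data are all invented for the example.

```python
# Toy contrast between a rule-based classifier and one that learns from
# examples. Features and thresholds are invented for illustration only.

def rule_based_is_speech(modulation_depth, spectral_tilt):
    """Traditional approach: explicit, hand-written rules.
    Works only for the situations the designer anticipated."""
    return modulation_depth > 0.5 and spectral_tilt < 0.0

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learning approach: a single artificial neuron adjusts its own
    weights from labeled examples instead of following fixed rules."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1   # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples: (modulation depth, spectral tilt) -> 1 = speech, 0 = noise
samples = [(0.8, -0.5), (0.9, -0.3), (0.2, 0.4), (0.1, 0.6)]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)

def learned_is_speech(x1, x2):
    return w[0] * x1 + w[1] * x2 + b > 0

# The learned model classifies points it never saw during training.
print(learned_is_speech(0.7, -0.4))   # True  (speech-like)
print(learned_is_speech(0.15, 0.5))   # False (noise-like)
```

A deep neural network stacks many such neurons in layers and, as Amit notes, drops the hand-picked features entirely, learning its own internal representation from raw data.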
Dave: Yeah, and I think that helps provide definitions for AI in general, machine learning, and then deep learning or deep neural networks, so clinicians can help their patients understand it. Many will say, well, we've used machine learning in hearing aids for the last 10 to 15 years, and that has largely been through environmental classification. And that's a useful feature. But as you say, it's been based on the rules we know about speech: spectral, temporal, and periodicity characteristics, attributes of speech that differentiate it from noise and music and wind. It can't capture every element of what we're looking for. With a DNN, we're not giving it those feature rules, but I think the granularity depends on how you define the goal for your deep neural network.
Amit: I would say, first, if you ask me what the benefits of a DNN are, you have to understand the benefit of the DNN for the patient. The one big advantage of the DNN is that it's just better in real life, in real-life environments. Theoretically, if you have two hearing aids that look the same on paper, or test the same in the lab, I would always prefer the one with the DNN, because, as I explained earlier, it does a better job with uncertainty, when things change. In the lab you can always set things up for one very specific, perfect situation. But a DNN is better in dynamic environments, better in non-stationary environments, like speech in noise. It is better in any kind of acoustic scenario; you're not limited to predefined acoustics, it can handle any acoustics. It is much, much better at performing classification, knowing how to classify what the environment is. It allows better end-to-end optimization. It is much better at personalized solutions, doing things that are tailored to a patient. And it is also better at sensor fusion, taking acoustic and IMU data and fusing it all together. So a lot of advantages come with the DNN; it's not just a buzzword, that's reality. Another way to explain it, beyond saying it just performs better in real life: I like the brain analogy, because as we grow up, we lose brain cells, and a hearing aid with a DNN actually brings back a neural network. It's like a piece of brain: as you get older and lose neurons in your brain, the hearing aid brings back more neurons. I think that could be a good way to explain it. It actually helps you offload some of that work by having a small brain of its own.
Dave: Yeah, and I need every one of those brain cells as I get older every day. So, staying within our domain, on acoustic classification: let's start there and talk about a machine learning classification system. I like to say, based on studies I've seen comparing the way a hearing aid automatically classifies an acoustic environment as quiet or noisy or musical or windy, et cetera, with the way a human would classify it, that you can get to about 80 to 85 percent agreement between a machine learning classification system and a human. For the remaining 15 to 20 percent of environments, you and your team developed Edge Mode as a solution, combining machine learning classification with the listener's intention: being able to say, I'm in a challenging environment now, whether that's quiet or noisy or any kind of environment, to capture the remaining 15 to 20 percent of situations that confuse machine learning systems, because it's usually the case that you're listening for one voice amidst other voices or music, which can be either a stimulus of interest or a noise. You and your team developed the original Edge Mode as a means of situational application of the listener's intention combined with machine learning. Is that correct?
Amit: Yeah, and that's actually one of the first good examples of using machine learning on a hearing aid. It's called Edge Mode because it's running on the edge, on the device itself. That's super powerful, and it's no coincidence: if you look at other industries, the first uses of machine learning were in classification. Machine learning is really, really good at classification, and that's why it was one of the first things we did. I think it's a very successful feature, and it's only getting better over time.
Dave: Yeah, over time it's become one of the most widely used features by patients. As someone who works with patients, once they understand how to use Edge Mode, they increasingly use it in lieu of the manual programs I apply to their hearing aids for situational environments: restaurant, crowd, and music. Increasingly, patients are using what we call the personal program plus Edge Mode. And that feature really isn't static. It's using a DNN, and you talked about it being better for dynamic situations. Talk a little bit about the way Edge Mode and edge computing have evolved from the original implementation to where we are today.
Amit: Yeah. The first implementation was on the device. Later on, because people were looking for other ways to activate it, we added the ability to use the mobile phone, and also to choose a more aggressive or less aggressive solution. And then a lot of people said, oh, if I have Edge Mode, can I use it all the time? Why can't I use it all the time? We take these requests very seriously, although it's almost like being in turbo mode all day, so it's not always recommended. But in our previous product we released dynamic Edge Mode, which allows you to turn it on and have it constantly adapt, making adjustments all the time, looking for the best scenario and the best settings for the patient. So that's what we did.
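The progression described here, from a one-shot, user-triggered optimization to a continuously adapting mode, can be sketched roughly as follows. This is a hypothetical sketch of the concept only, not Starkey's implementation: the environment labels, settings, and thresholds are all invented.

```python
# Hypothetical sketch of the Edge Mode idea: an automatic classifier
# handles most environments, and a user request ("this moment is hard")
# triggers a fresh on-device analysis and adjustment. All names and
# settings are invented for illustration.

from dataclasses import dataclass

@dataclass
class Adjustment:
    noise_reduction: float   # 0.0 (off) .. 1.0 (maximum)
    directionality: str      # "omni" or "directional"

def classify_environment(snapshot: dict) -> str:
    """Stand-in for the on-device acoustic classifier."""
    if snapshot["noise_level"] > 0.6:
        return "speech_in_noise" if snapshot["speech_present"] else "noise"
    return "speech" if snapshot["speech_present"] else "quiet"

def edge_optimize(snapshot: dict, aggressiveness: float = 0.5) -> Adjustment:
    """One-shot optimization: analyze the scene *now* and pick settings."""
    env = classify_environment(snapshot)
    if env == "speech_in_noise":
        return Adjustment(noise_reduction=aggressiveness,
                          directionality="directional")
    if env == "noise":
        return Adjustment(noise_reduction=1.0, directionality="omni")
    return Adjustment(noise_reduction=0.0, directionality="omni")

def dynamic_mode(snapshots, aggressiveness=0.5):
    """Continuous variant: re-run the same optimization on every snapshot,
    so settings track the environment instead of waiting for a tap."""
    return [edge_optimize(s, aggressiveness) for s in snapshots]

# A listener walks from a quiet room into a noisy restaurant conversation.
day = [
    {"noise_level": 0.1, "speech_present": False},  # quiet room
    {"noise_level": 0.8, "speech_present": True},   # noisy restaurant
]
for adj in dynamic_mode(day):
    print(adj)
```

The "aggressiveness" parameter mirrors the more-aggressive/less-aggressive choice Amit mentions; the continuous loop mirrors the dynamic variant.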
Dave: Yeah, and even for some of my... oh, excuse me, go ahead.
Amit: Yeah, I just wanted to address something you said earlier about companies that claim to have a DNN, and how you know if it's real. If you look at the advancement of DNNs, the reason it took so much time to get into hearing aids is that it requires a lot of computing power. It's almost impossible to put that kind of computing power on a hearing aid, and the only way to do it is to bake it into the hardware. That's what we did in Genesis: we took this neural network and put it inside the silicon, on the chip, as part of our hearing aid. That's what allows us to open the door to all sorts of new capabilities that just weren't possible before. So if you hear someone claiming machine learning, you can simply ask: do you have a neural accelerator on your hearing aid? If not, you can always question whether it's real.
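The core operation a neural accelerator bakes into silicon is the multiply-accumulate at the heart of every neural network layer, typically over low-precision integers. The sketch below shows that operation in software as a generic illustration of why dedicated hardware matters; it is not a description of the Genesis chip, and the weights and scale factor are invented.

```python
# The workhorse of any neural accelerator: a matrix-vector multiply-
# accumulate over low-precision (int8) values. Dedicated silicon performs
# many of these per clock cycle; doing the same on a general-purpose core,
# or shipping activations to a second chip, costs time and battery.
# Generic illustration only; not the Genesis AI design.

def int8_dense_layer(weights, inputs, scale):
    """One fully connected layer in int8 arithmetic.
    weights: rows of int8 values (-128..127), one row per output neuron
    inputs:  int8 activations
    scale:   factor converting integer sums back to real-valued outputs"""
    outputs = []
    for row in weights:
        acc = 0                       # wide accumulator, as in real NPUs
        for w, x in zip(row, inputs):
            acc += w * x              # the multiply-accumulate (MAC) op
        outputs.append(acc * scale)   # dequantize back to a real value
    return outputs

# Two output neurons over four int8 inputs.
weights = [
    [10, -20, 30, -40],
    [5, 5, 5, 5],
]
inputs = [1, 2, 3, 4]
print(int8_dense_layer(weights, inputs, scale=0.01))
```

A full network is just many such layers; the accelerator's job is to stream weights and activations through arrays of these MAC units without ever leaving the chip.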
Dave: Well, and even beyond that, the thing that's amazing to me as a practitioner who works with patients: we've seen a very rapid transition from replaceable zinc-air batteries to rechargeable batteries over the last five years or so. For me, it's quite remarkable that we've been able to deliver Genesis with that DNN accelerator directly on the silicon, as you said, and still deliver up to 51 hours of battery life with the RIC RT. And that's while being able to use Edge Mode, which applies that DNN situationally or automatically throughout the day if a person just turns it on in the morning. When you have a separate chip doing the DNN, transferring information between the two chips takes time, increases power consumption, and shortens battery life. I think what we've done with Genesis AI is nothing short of remarkable: taking range anxiety off the table for the hearing aid user, so they have confidence, at the time they purchase their devices and years down the road, that when they put their devices in in the morning, regardless of how much they engage with Edge Mode and Edge Mode automatic, they're still going to get all-day battery life, today and three, four, five years in the future.
Amit: Yeah, that's a very interesting point, because we did consider a separate chip in the past, and we decided it's not efficient. If you add it as a separate chip, a lot of the advantage is diminished by passing information between the two chips, and the power is definitely a big drag. That's why we took the time to do a ground-up design with the neural network as part of the chip. Even if you look at biology, you don't see a lot of successful animals with more than one brain, right? Most successful animals have one brain that is optimized. There are some with a few brains; they're not very successful.
Dave: I like that analogy. Now, AI isn't only about Edge Mode. When we're talking about better hearing, you mentioned that one of the advantages of AI is that it's better at personalized solutions. Talk a little bit about other hearing-related examples that incorporate AI to get to solutions, say first fit, faster and more personalized for the individual.
Amit: Oh, yeah, we have a lot of examples. Now I need to think about what I'm allowed to say and what I'm not.
Dave: Talk about features that are in the product, then.
Amit: Well, we use DNNs to improve speech enhancement and suppress noise. As I said, it's really good at that; I think that was the second thing we incorporated into our hearing aid. We are also using machine learning in various forms to start helping clinicians with fittings. There isn't much more I can disclose, but it's not just about running on the hearing aid. Most of the things we're doing are on the hearing aid, but we're also using machine learning to improve fitting and in various other areas.
Dave: Two of my favorites are the ways we're acoustically matching with the feedback initialization stimulus. Of course, when we're developing our fitting strategies and fitting formulas, we don't have the benefit of having the patient in front of us, so we're optimizing to KEMAR. But your team developed ways to use the feedback initialization stimulus to acoustically adapt to the individual's ear characteristics, to the depth of the mold or the receiver in the ear canal, and use that to get to a personalized first fit faster. That's one of my favorites. The other is Auto REM, which I use on every single patient with the compatible real-ear system I have. In the past, we could argue all day long to a clinician that we believe in best practice, but we also believe in e-STAT, and they didn't have that formula in a lot of the independent REM machines. Now we've partnered with them to incorporate e-STAT, or NAL-NL2, or whatever target they use, and use machine learning to optimize to whatever prescriptive target you have faster than a human could.
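The target-matching idea behind automated real-ear measurement can be sketched as a simple feedback loop: measure the response in the ear, compare it to the prescriptive target in each frequency band, and nudge the gains until the error is within tolerance. This is a hypothetical sketch of the general idea, not Starkey's Auto REM algorithm; the bands, step size, and simulated ear are invented.

```python
# Hypothetical sketch of automated real-ear target matching: iterate
# measure -> compare -> adjust until each band is within tolerance.
# Bands, step size, and the simulated ear are invented for illustration.

def auto_match(target_db, measure, max_iters=20, tol_db=1.0):
    """target_db: desired real-ear output per frequency band (dB)
    measure: function mapping gain settings -> measured output per band"""
    gains = [0.0] * len(target_db)          # start from default gains
    for _ in range(max_iters):
        measured = measure(gains)
        errors = [t - m for t, m in zip(target_db, measured)]
        if all(abs(e) <= tol_db for e in errors):
            break                           # every band within tolerance
        # move each band's gain a fraction of the way toward the target
        gains = [g + 0.5 * e for g, e in zip(gains, errors)]
    return gains

# Simulated ear: output = gain + a fixed per-band ear-canal offset,
# standing in for the individual acoustics the fitting must absorb.
ear_offsets = [3.0, -2.0, 5.0]
def simulated_measure(gains):
    return [g + o for g, o in zip(gains, ear_offsets)]

target = [20.0, 25.0, 30.0]
gains = auto_match(target, simulated_measure)
print([round(g, 1) for g in gains])
```

The loop converges in a handful of iterations here; the point is that a machine can run this measure-and-adjust cycle across many bands far faster than a clinician adjusting sliders by hand.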
Amit: Yeah, these are excellent examples. And we are working to improve them more and more. A lot of things are in development at the moment, including further improvements and a lot of new technologies that run on the device itself using our neural network.
Dave: Love it. Well then, moving to health and wellness. For me, the best example is that we're the industry's first and still only manufacturer to incorporate a fall detection feature. And that was also developed and enhanced by your team.
Amit: Yeah, we strongly believe in the importance of that for our patients. As you said, we started with making sure that people who have already fallen get the help they need, but we continue to develop capabilities, including the ability to measure whether someone is at risk of a fall. So we're going from detecting it after the fact to being able to detect it before the fact, and we have more on the roadmap to help patients with this huge problem, which is very relevant for our patients because of the close relationship between balance problems and hearing problems. We're continuing to work on and invest in this area, as well as other areas around it.
Dave: Yeah, you really hit on one of the areas most exciting to me: an advantage of AI is the ability to integrate different sensors, whether that's the inertial measurement unit, or IMU, sensor we first incorporated in Livio AI to monitor physical activity, tying into the comorbidity with cardiovascular disease, or using that same sensor to detect falls. The fall detection feature is fantastic, but if a person falls and breaks their hip, that's already too late; it often starts a downward health spiral. So the idea of being able to evaluate gait, balance, and strength, and engage the patient in trying to improve a deficiency in one of those three areas, is really exciting, and I look forward to where we're going with that. We won't dive into that much deeper here, but we like to say hearing care is health care, and tying into those comorbidities (cardiovascular disease, cognition, fall risk, where even a mild hearing loss places a person at three times the risk of falling) is fertile ground for continued study. All patients have to do is wear their hearing aids, and then the integration of inertial measurement units, microphones, and the listener's intent brings together the human, the neural network, the machine, and those sensors. It's really an exciting time.
SPEAKER_02Yeah, and because our hearing aids are always on our patients, and they're at the best location on the human body to measure all of these capabilities, that's what makes the potential and the relevance of doing this on a hearing aid so exciting. That's why we're very passionate about continuing to develop technologies in this domain.
SPEAKER_01Yep. And as you said, the ear is a remarkably good spot to measure a number of biometric functions, including heart rate and a host of other things. So I look forward to having you back in the future, when we get to the point where we can talk more about that. The other thing, briefly, on the intelligent assistant side and the ways patients can be more engaged in the ongoing use of their hearing aids and in interfacing with the practitioner: one of my favorite features is one we've had for a long time, but it really wasn't widely used until we made improvements, and your team was instrumental and influential in the improvements to the self-check feature. It's a dashboard that enables the patient to quickly answer one of the most pressing day-to-day questions. Patients say, "How often do I need to replace the wax guard in my hearing aid?" And I say, "When it says that you need to." I believe we're still the only manufacturer to incorporate a self-check feature that the patient can run in just a matter of seconds and that provides a diagnostic assessment of microphone, circuit, and receiver status. Talk a little bit about that, and maybe hint at where we're going in the future.
SPEAKER_02Yeah, first, you remind me that about a year or so ago, when we did a tour of clinics, we were surprised that clinicians were not recommending that patients run the self-check. I'm happy to see that its use is now growing rapidly, really rapidly. Basically, if all patients learn how to use it, they'll have a better experience, and their clinicians can head off issues ahead of time. But we don't rest on that. We want it to become more sophisticated, able to run tests in the background, checking all the time to see how things are going, and also to give clinicians more information about what's going on in the field. We'd like to give more power to the clinicians, so that when a patient comes to them, they already know what happened and what the situation is, and they have recommendations on what to do. It's about giving a lot of power to the clinicians. And all of these things run in the background; there's nothing you need to do in order to get them, and eventually, as a clinician, you'll get all of these abilities. So we're spending a lot of effort in this domain. I hope we can soon reveal some of these things and be more specific about the capabilities we'll provide, but this is an area that's very important for us.
SPEAKER_01Yeah, and I see we're already reaching the limits of the time we have scheduled for this podcast. But I really appreciate you bringing up the point that we have deep collaborations with academic environments like Stanford and other universities. The hearing aid industry in the past has often been very provincial, wanting to invent everything itself. Your team, pulling from that target-rich environment in Israel and around the world, from people with experience not only in the hearing industry but with other manufacturers and other industries, has built partnerships that have already paid dividends in terms of the innovation we've introduced into the market. And I know we're continuing to lead in other areas because of the ability to make connections with other tech leaders and identify synergies and opportunities for everyone to raise awareness of the importance of hearing as a vital human sense.
SPEAKER_02Yeah. I'm smiling because it reminds me that when I joined, that really was how I thought the industry worked: everything in-house. And I thought, why do we need to do everything in-house? We set the goal and we try to get there, and if someone has a better solution that we could use, or if someone is smarter than us in an area where we could collaborate, we will do it. We're trying to reach the goal of delivering the best product, the best solution, and the best service to our patients. How we get there, we need to figure out. If someone else has a better solution, or if someone can help us get there, we will use that. That's our approach today.
SPEAKER_01Yeah, couldn't agree more. Well, one more question, and then we'll wrap. If you could make one bold prediction, as someone who's involved as a visionary for the hearing aid industry, what's one big prediction for hearing aids in the coming years that you might make?
SPEAKER_02Just one? Well, it's easier to predict the future when you're the one creating it. But let me try to say a few things just on the technology side. I would definitely say neural networks, DNNs, will be everywhere. I think hearing aids will become more personalized; they will understand the environment better, and they will be much, much smarter. I strongly believe that hearing aids will also become the gateway to other devices, like the personal assistant we've been talking about, seamlessly connecting to a lot of other devices. I think that is coming. And also health: I think health will also be part of it. So that's my prediction.
SPEAKER_01Thank you so much, Amit, for being with us today to share where you've been, a little bit of where we are, and a tease of where we're going. And to our listeners, we thank you for listening to this episode. Please rate, review, and share it with your friends, colleagues, and networks if you enjoyed it. And if you have ideas for content in future episodes, send us a note at soundbites at Starkey.com. Amit, I can't thank you enough for this engaging and intriguing discussion, and I appreciate, as always, your friendship and having you as a colleague here at Starkey.
SPEAKER_02Thank you.
SPEAKER_01Thank you. Take care, and we look forward to seeing and hearing you again real soon on this podcast and others in the future.