About the Hearing Matters Podcast
The Hearing Matters Podcast discusses hearing technology (more commonly known as hearing aids), best practices, and a growing national epidemic: hearing loss. The show is hosted by father and son Blaise Delfino, M.S., HIS, and Dr. Gregory Delfino, CCC-A. They treat patients with hearing loss at Audiology Services, located in Bethlehem, Nazareth, and East Stroudsburg, PA.
The Benefits of Deep Neural Networks in Hearing Aids
In this episode, Blaise Delfino discusses the deep neural network in the new Oticon More hearing aid with Dr. Douglas L. Beck, vice president of academic sciences at Oticon.
Dr. Beck explains that Oticon’s newest hearing aid, the Oticon More, contains a deep neural network, or DNN, which enables a wearer to have an even better hearing experience than before. He explains that artificial intelligence (AI) can be as simple as the thermostat in a refrigerator: it senses when it needs to adjust the temperature and then does so.
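The thermostat analogy can be sketched in a few lines of code (a minimal illustration of sense-compare-act control; the function and its parameters are hypothetical, not any actual product logic):

```python
def thermostat_step(current_temp_c, target_temp_c, tolerance_c=0.5):
    """Sense the temperature, compare it to the target, and pick an action."""
    if current_temp_c > target_temp_c + tolerance_c:
        return "cool"   # too warm: turn the cooling on
    if current_temp_c < target_temp_c - tolerance_c:
        return "heat"   # too cold: turn the heating on
    return "idle"       # within tolerance: do nothing
```

This fixed rule is the "simple" end of the AI spectrum Dr. Beck describes; it never changes its behavior based on experience, which is what separates it from a deep neural network.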
A DNN is a much more sophisticated form of AI. It learns in the same way the human brain does, and it powers a variety of everyday tasks, for example, shopping on Amazon. Once you buy a certain item, Amazon will let you know when similar items become available. The general idea of a DNN is that it learns through repeated exposure to a large collection of samples.
In a hearing aid, the DNN is trained with millions of real-life sound scenes, such as a restaurant, a train station, or a busy street. The DNN learns to identify and balance each sound within a scene, so the wearer can access the sounds most important to them.
The Oticon More was trained with 12 million complex real-life sounds, which it learned to analyze, organize, and balance. The device uses the DNN’s intelligence to balance and prioritize the sounds that are important to the wearer.
The benefit of the DNN is that the wearer’s brain has access to the full sound scene, so they can hear the person next to them, as well as other environmental sounds, all balanced and amplified in a true-to-life way.
This is because a DNN provides the brain with more meaningful sound information, which makes sound much clearer and speech easier to follow. In fact, research shows that Oticon More delivers 30 percent more sound to the brain and increases speech understanding by 15 percent.
Connect with the Hearing Matters Podcast Team
Facebook: Hearing Matters Podcast
Blaise Delfino:
You're tuned in to the Hearing Matters Podcast with Dr. Gregory Delfino and Blaise Delfino of Audiology Services and Fader Plugs, the show that discusses hearing technology, best practices, and a growing national epidemic: hearing loss. We are so excited to welcome the Vice President of Academic Sciences at Oticon, Dr. Douglas Beck. Dr. Beck, welcome to the show.

Dr. Douglas L. Beck:
Thank you. It's an honor to be here.

Blaise Delfino:
This is the second episode that you've been on the Hearing Matters Podcast, and on the very first episode, you actually had a power outage before we recorded remotely, back in Texas.

Dr. Douglas L. Beck:
I was trying to forget that.

Blaise Delfino:
So it is a pleasure to have you back on the show. And during this episode, because we're recording a miniseries, we're going to be discussing the benefits of deep neural networks in hearing aids. Doug, what are deep neural networks?

Dr. Douglas L. Beck:
So deep neural networks are the most sophisticated form of this technology. When you think about artificial intelligence, that's a catch-all term, and it could describe something as simple as the thermostat in your refrigerator. That would be artificial intelligence: you set it for one temperature, the same as the thermostat in your house, and when the sensors detect that it's gotten too cold or too warm, it turns on the AC or the heat. Then we have more sophisticated forms, such as what people use in their everyday lives, like Amazon and Google. If you go to Amazon and you buy a brand new cotton shirt, and it's blue, and it costs $39.95, Amazon now knows that you buy blue cotton shirts that cost $39.95, and when they have others, they're going to say, "Hey, Blaise, we thought you'd like to know about this." That form of artificial intelligence is actually called a deep neural network: they take in hundreds of thousands, if not millions, of data sets, they analyze them, they look for patterns, and they're able to make sense of it to their own advantage. So when you talk about a deep neural network in a hearing aid, it's a very, very different way of processing. It's not just a simple amplifier. What we're doing is training that deep neural network on 12 million speech sounds, as in the Oticon MORE, and then the hearing aid actually knows what it is you're looking for; the hearing aid can understand what is a speech sound and what is not a speech sound, and it intelligently prioritizes the sounds that it amplifies. It's trying to amplify not just speech; sometimes there are environmental sounds that are just as important or more important, so you want those available to you.
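The core idea here, learning patterns from many labeled samples, can be sketched with a toy network. Everything below is purely illustrative: the synthetic "frames," the tiny architecture, and the training loop are hypothetical stand-ins and have nothing to do with Oticon's actual DNN or training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sound frames": class 0 has its energy in the first half of the
# frame, class 1 in the second half (a crude stand-in for two sound types).
def make_frame(label):
    frame = rng.normal(0.0, 0.1, 8)
    frame[:4] += 1.0 if label == 0 else 0.0
    frame[4:] += 1.0 if label == 1 else 0.0
    return frame

labels = np.array([i % 2 for i in range(200)], dtype=float)
X = np.array([make_frame(int(y)) for y in labels])

# One hidden layer, sigmoid output, plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (8, 6)); b1 = np.zeros(6)
W2 = rng.normal(0.0, 0.5, 6);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted P(class 1)
    d_out = (p - labels) / len(labels)  # gradient of log loss w.r.t. logit
    d_h = np.outer(d_out, W2) * (1.0 - h ** 2)
    W2 -= h.T @ d_out;  b2 -= d_out.sum()
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

# Fraction of the last epoch's predictions that match the labels.
accuracy = float(((p > 0.5) == (labels == 1.0)).mean())
```

After repeated passes over the samples, the network separates the two classes it was shown, which is the same "learn from millions of examples" mechanism Dr. Beck describes, just at a vastly smaller scale.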
And when you're doing this kind of advanced speech and sound processing, we have shown in our scientific white papers that there are many advantages, but I'll give you four of them off the top of my head. Number one is selective attention: the patient wearing the hearing aids can better attend to the individual they want to attend to in a background of noise. Number two is a better speech-in-noise score, because what happens with something like an Oticon MORE is you're giving an incredibly significant advantage to the signal-to-noise ratio. What that means is that the signal you're trying to attend to pops out in relation to the background noise you're trying to ignore; the signal is much louder than the noise. Number three would be better recall, or better memory. Using a deep neural network, even against our very best product up until we introduced the deep neural network, people have statistically significantly better recall. To a large degree this is because the sound they're hearing takes less auditory processing: the sound signal is more vivacious, it's cleaner, the neural code within that signal is more representative of the acoustic sounds, and it has more information to pull from. Now, for the fourth benefit, we've looked at EEG studies. You take a 64-channel EEG of a patient in a very, very difficult listening situation, and you can then correlate the acoustic sounds in the real world that the patient is listening to with what's coming out of that patient's EEG, which is a reflection of how their brain has processed that sound.
The best product we made through December 2020 was the Oticon OPN S 1, and that correlates about 30% with the EEG: you put that hearing aid on a patient, you put them in a terrible, really challenging signal-to-noise ratio, and the correlation between the acoustic sound the hearing aid is perceiving and the EEG is 30%. With an Oticon MORE, it's a 60% correlation. So in other words, the brain is actually benefiting from that processing. And the deep neural network, you could say, is trying to get a hearing aid to respond in the same way that the brain would, because a healthy brain with a healthy auditory system can do most of these things automatically. But as we age, and as our neural processing slows down, we need more help to achieve the same goals: the ability to understand speech in noise, the ability to pay attention to who you want to pay attention to, the ability to remember the information you've received acoustically. So these are the advantages of a deep neural network. And there's more. We've compared sound qualities. There's a brand new study that I believe is being published in the Hearing Review in August by myself and a few colleagues, and this is based on MUSHRA. I'm not going to go into MUSHRA, but it's a technique that we use to judge high-fidelity audio signals. We took our two largest competitors, and we compared speech in noise, in cafes, in restaurants, and wearing face masks. So we had three manufacturers: the Oticon MORE versus our two top competitors. And 80% of the people in the study, which off the top of my head I think was 24 people, maybe it was 18, chose the best sound quality as being associated with the Oticon MORE product.

Blaise Delfino:
Wow.

Dr. Gregory Delfino:
That's incredibly impressive. Fitting patients with the OPN S, and now the Oticon MORE, we've had the opportunity to fit a close friend of ours and a mentor of mine. He wore OPN S 1s, and then we fit him with MORE 1s, and he said, "Blaise, incredibly enhanced sound and speech understanding. I'm loving what I'm hearing."

Dr. Douglas L. Beck:
And you know, these are my favorite people to fit with Oticon MORE, the experienced hearing aid wearers,

Blaise Delfino:
Yes.

Dr. Douglas L. Beck:
particularly because, you know, their number one complaint is going to be, "Well, Blaise, I did great with you in the office, but it was quiet there, and obviously I could hear what you were saying and I responded appropriately. But then I went out to dinner with my significant other, and I had no idea what anybody was saying." And that's fairly common among people who have traditional hearing aids; then they wear a set of Oticon MORE in challenging situations, and they can tell the difference. And you know, it really boils down to quality of sound. Now, many of the manufacturers are doing brilliant things in other arenas, and we all applaud that; that's great, that's pushing forward the boundaries. But you have to understand that the primary reason people seek hearing aids is to understand speech in noise. It's not really to hear things louder, it's to hear things more clearly. And a deep neural network allows us to do that like we've never been able to do before. So the sound processor of the Oticon MORE is actually a deep neural network, and many people immediately ask, is it cloud based? It's not cloud based. And the reason for that is, if you have a cloud-based deep neural network, the hearing aids send a signal to your phone, your phone communicates with the cloud, the cloud gives directions on how to process back down to the phone, and then back to the hearing aids. So you could have an 8, 10, 12, 15 millisecond delay, which will totally screw up the synchrony between the auditory and the visual picture. And what happens then, also, is if you don't have an internet connection, you can't do it. There are some people who have explored using a secondary device, which we lovingly call a dongle, and so they will use a dongle to process the sound. Well, that's okay. But we learned a really important lesson at Oticon when we came out with the ConnectClip.
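The round-trip delay described above can be sketched as a simple latency budget. The individual hop values below are hypothetical placeholders, chosen only so the total lands inside the 8 to 15 ms range mentioned in the episode; they are not measured figures.

```python
# Hypothetical per-hop latencies for a cloud-based processing path, in ms.
hops_ms = {
    "hearing_aid_to_phone": 2.0,
    "phone_to_cloud": 3.0,
    "cloud_inference": 2.0,
    "cloud_to_phone": 3.0,
    "phone_to_hearing_aid": 2.0,
}

total_ms = sum(hops_ms.values())  # total round-trip delay

# Lip movements and sound normally arrive together; a delay on this
# order starts to break audio-visual synchrony, which is why on-chip
# processing avoids the cloud round trip entirely.
```

The point of the sketch is structural: every hop adds delay, and an on-chip DNN removes all of the network hops at once rather than shaving a little off each one.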
Now, for people not familiar with the ConnectClip: it's a remote mic, but it improves the signal-to-noise ratio by about 12 to 15 dB, so it's absolutely stellar. It's brilliant, a couple hundred bucks, and it pairs with most of our sophisticated hearing aids. You can get that sort of device, and it allows you to understand speech in noise like nobody's business. Those are just remarkable. So why don't people use them? Well, two reasons. Number one, you have to keep it charged. And number two, you have to carry it. It is small, you know, one inch by two inches, a tiny little thing, but nobody wants to carry anything extra. I mean, when I travel, I only bring carry-on. It doesn't matter how long I'm staying; they have washing machines, and they have hotel cleaning service. I do not want to carry anything extra, and that's so much better than carrying stuff you don't need to carry. So I get it, it's a matter of convenience. But that's why, with Oticon MORE, still to this day, six months after launch, we are the only manufacturer that has a deep neural network on the chip. Every Oticon MORE has its own deep neural network, which has been trained on 12 million speech sounds.

Blaise Delfino:
Dr. Beck, what can patients expect with regard to their soundscape when they're fit with Oticon MORE?

Dr. Douglas L. Beck:
Well, what we're trying to do is give you the full soundscape. Most manufacturers approach sound processing by using directional microphones and beamforming. The point of directional and beamforming is, instead of giving you the 360 degrees all around you, to give you perhaps 90 or 110 degrees, mostly from in front of you. That makes good sense, because when you're in a conversation, you're generally facing the person. So that's great. But what happens if the noise, like at a cocktail party, is all behind Greg, and everybody over there is talking, and I'm trying to pay attention to him? The directional is capturing all of that too. Further, directional and beamforming, traditionally, with receiver-in-canal and receiver-in-the-ear products, RICs and RITEs, will give you about a two or three dB improvement in real-world signal-to-noise ratio. And that's proven; we know that directionals work, they do improve the signal-to-noise ratio, but it's a tiny little bit. When you go back to even the OPN 1 and OPN S 1, we were tracking them at about a 6.3 dB improvement in signal-to-noise ratio, so it's quite a dramatic difference. And then when you go up to Oticon MORE with the deep neural network, you're adding about another one and a half, maybe 1.8 dB of signal-to-noise ratio. So what we're trying to do is not make everything louder, because patients don't really want the world louder, they want it clearer. We try to do that by improving the signal-to-noise ratio; that's one of the primary benefits. And when you do that, it makes it easier for people to selectively attend, easier for them to understand speech in noise, and easier for them to recall the information, because when it originally entered their brain, it was a cleaner signal. So they're able to remember and work with more of it, with less listening effort.

Blaise Delfino:
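As a back-of-the-envelope check on what those dB figures mean, a level difference in decibels maps to a linear power ratio via 10^(dB/10). This is standard acoustics arithmetic, not an Oticon formula:

```python
def db_to_power_ratio(db):
    """Convert a level difference in dB to a linear power ratio."""
    return 10.0 ** (db / 10.0)

# A 3 dB SNR improvement roughly doubles signal power relative to the
# noise; the ~6.3 dB figure quoted for OPN is better than a fourfold
# power advantage, which is why it is called a dramatic difference.
directional_gain = db_to_power_ratio(3.0)   # ~2x
opn_gain = db_to_power_ratio(6.3)           # ~4.3x
```

Seen this way, the jump from a 2-3 dB directional benefit to 6.3 dB, plus another 1.5-1.8 dB on top, compounds multiplicatively rather than additively in terms of signal power.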
Doug, we have a lot of audiologists and hearing healthcare professionals who tune in to the Hearing Matters Podcast, and one question may be: what is the difference between the OPN Sound Navigator and now the Oticon MORE? Because the OPN and the OPN S, of course, were on the same chip family: the Velox was the OPN, then the Velox S was the OPN S, and now this is the Polaris chip. So it's a whole new chip. Is the digital noise reduction different than that of the OPN Sound Navigator? Can you kind of bring that to light a bit for us?

Dr. Douglas L. Beck:
What we've done in the Oticon MORE with the deep neural network is we don't use traditional noise reduction. We're not using amplitude modulation, and we don't use directional and beamforming in a traditional way. We have six times greater resolution in our amplifier. There are basically two pieces to the system. There's MSI, MoreSound Intelligence, which is where the deep neural network resides. But then you also have MSA, MoreSound Amplifier, with that six times greater resolution, and it allows us to compress sound intelligently. The audiologists will understand this; for the people who aren't audiologists, I apologize. When we talk about compression, there's a point in loudness at which compression turns on. Let's suppose conversational speech is about 50 decibels. We might say, at 60 dB, start compressing sound, because it'll get too loud if we leave it linear. We tend to compress sound, and we do that for very good and solid reasons: if you don't compress sound, things get too loud too quickly, you could cause further hearing loss, you could cause acoustic trauma. So there are a lot of reasons to compress sound. But suppose I'm speaking at 50, and we say, okay, everything over 60 we're going to compress two to one. Then 60 is where that knee point, or threshold, is reached. An input at 70 would be 10 dB louder than the knee point; if we're compressing at two to one, that becomes 5 dB. So now that sound's true dynamic range is 5 dB when it should have been 10. Well, your brain really, really needs those acoustic cues to make sense out of sound.
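The knee-point arithmetic in that answer can be written out directly. This is a minimal static-compressor sketch with the episode's example numbers, not Oticon's actual MSA algorithm:

```python
def compressor_output_db(input_db, knee_db=60.0, ratio=2.0):
    """Linear below the knee point; above it, level grows at 1/ratio."""
    if input_db <= knee_db:
        return input_db
    return knee_db + (input_db - knee_db) / ratio

# A 70 dB input is 10 dB above the 60 dB knee; at 2:1 that 10 dB range
# is squeezed to 5 dB, so the output sits at 65 dB.
out_db = compressor_output_db(70.0)      # 65.0
range_above_knee = out_db - 60.0         # 5.0 dB instead of 10.0
```

The point of the example is the cost of compression: the 10 dB of dynamic range above the knee survives as only 5 dB, which is exactly the loss of acoustic cues Dr. Beck says the brain needs.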
So instead of using directional and beamforming, we open up the entire 360 degrees. We're not trying to limit it to who's in front of you; we're trying to use intelligent protocols and intelligent algorithms that allow all important speech sounds to come through the hearing aid. Then we address background noise in a very different way, because we're not just amplitude modulating; we're working with speech in noise, taking the 12 million speech sounds that we know are important and making sure that those come through. We have supplanted feedback with our hearing aids: you can put your hand right up to it and you don't get feedback. It's not feedback reduction, it's feedback control; there isn't any feedback. And so we've done all of these things to improve what's called the neural code. When you're using sound that is heavily compressed, like two to one or three to one, and you're using directional, you've taken a full circle of sound and made it smaller and smaller, until the sound you're delivering is just a tiny little beam, and then you're expecting the brain to make sense of that. But the brain is looking for all of the information, so that it can attend to all of it and then focus on what it wants to focus on. Well, if you're only giving it this tiny little beam of sound, the brain can't orient to the entire sound scene, because it wasn't given the sound scene. So we think a huge part of why this works out so well is that we allow the brain to orient, then focus, and then recognize. This is very different from traditional hearing aids, which focus the sound before it gets to the brain.
And this is why these EEG studies are so important: when we look at how well the brain does with a sound scene that has been enriched with all of the information around it, the brain correlates 60% with the EEG when the patient is wearing the Oticon MORE hearing aids, and much, much less with other hearing aids. So the brain is able to make sense of that sound and use it intelligently to understand speech in noise.

Blaise Delfino:
And the goal of wearing hearing technology is to increase speech understanding, decrease listening effort, and introduce our patients to this new hearing world. Dr. Beck, you've been in the hearing healthcare industry for well over 30 years, as has Dr. Delfino. I consider both of you pioneers, and I am incredibly grateful to learn from two of the greatest minds in audiology. So here is our question, because you've seen so much: is this the most excited you've been about a product launch?

Dr. Douglas L. Beck:
Yeah, this is pretty good. You know, you're able to see the culmination of all of the R&D and all of the thought leaders coming together. We have the world's largest research lab in Denmark, and they've been working on this stuff for years and years before it ever comes to market. Probably the majority of things they explore never come to commercialization; they never come to fruition, because they didn't really pan out. And then you see a technology like this, where all of a sudden these guys are getting so excited, not just about what they can do in the lab, but about what it could mean commercially. And we're just at the very beginning. When you talk about deep neural networks in five and ten years, you're going to see this whole thing explode, because the potential is unlimited: we can do whatever we want with sound and speech recognition and voice recognition, giving a more intelligent sound to the brain so the brain can make the most sense of it. The human brain is rather phenomenal when you think about this in terms of hearing and listening. Hearing is just perceiving sound. Listening is making sense of sound; listening is decoding the sound that your brain perceives. We're just now getting away from hearing as the primary thing, where we just make stuff louder, to listening, where we make it loud enough so it's audible, but we can shape it and create a sound that one can listen to, or untangle, more easily, so it makes sense: so they can selectively attend, so they can do a better job with speech in noise, so they can remember more of it, so they spend less effort processing sound and more effort understanding sound. So yeah, we're on the verge of some brilliant stuff. It's a great time to be an audiologist.

Blaise Delfino:
You're tuned in to the Hearing Matters Podcast with Dr. Gregory Delfino and Blaise Delfino of Audiology Services and Fader Plugs. On this episode, we discussed the benefits of deep neural networks in hearing aids with Dr. Douglas Beck, VP of Academic Sciences at Oticon. Tune in next week, when we welcome Dr. Beck back on the show to discuss cognition, audition, and amplification. Until next time, hear life's story.