Across Acoustics

Performing Hearing Research Remotely

June 27, 2022 | ASA Publications' Office

The COVID-19 pandemic forced many researchers in the fields of psychological and physiological acoustics to scramble to find new ways to remotely perform hearing research that is traditionally done in the lab. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. In this episode, we discuss their findings with two of the co-authors of the resulting papers, Ellen Peng, of Boys Town National Research Hospital, and Erick Gallun, of Oregon Health and Science University.

Associated papers:

“Remote testing for psychological and physiological acoustics: Initial report of the P&P Task Force on Remote Testing,” Proceedings of Meetings on Acoustics (POMA) https://doi.org/10.1121/2.0001409
Authors: Z. Ellen Peng, Emily Buss, Yi Shen, Hari Bharadwaj, G. Christopher Stecker, Jordan A. Beim, Adam K. Bosen, Meredith Braza, Anna C. Diedesch, Claire M. Dorey, Andrew R. Dykstra, Richard Freyman, Frederick J. Gallun, Raymond L. Goldsworthy, Lincoln Gray, Eric C. Hoover, Antje Ihlefeld, Thomas Koelewijn, Judy G. Kopun, Juraj Mesik, Virginia Richards, Daniel E. Shub, Jonathan H. Venezia, and Sebastian Waz

and

“FORUM: Remote testing for psychological and physiological acoustics,” The Journal of the Acoustical Society of America 151, 3116 (2022); https://doi.org/10.1121/10.0010422
Authors: Z. Ellen Peng, Sebastian Waz, Emily Buss, Yi Shen, Virginia Richards, Hari Bharadwaj, G. Christopher Stecker, Jordan A. Beim, Adam K. Bosen, Meredith D. Braza, Anna C. Diedesch, Claire M. Dorey, Andrew R. Dykstra, Frederick J. Gallun, Raymond L. Goldsworthy, Lincoln Gray, Eric C. Hoover, Antje Ihlefeld, Thomas Koelewijn, Judy G. Kopun, Juraj Mesik, Daniel E. Shub, and Jonathan H. Venezia

Visit the Remote Testing Wiki.

Read more from Proceedings of Meetings on Acoustics (POMA).

Read more from The Journal of the Acoustical Society of America (JASA).

Learn more about Acoustical Society of America Publications.

 
Music Credit: Min 2019 by minwbu from Pixabay. https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=music&utm_content=1022 

 

Kat Setzer (KS)

00:06

Welcome to Across Acoustics, the official podcast of the Acoustical Society of America’s Publications office. On this podcast, we will highlight research from our four publications, The Journal of the Acoustical Society of America, also known as JASA, JASA Express Letters, Proceedings of Meetings on Acoustics, also known as POMA, and Acoustics Today. I'm your host, Kat Setzer, Editorial Associate for the ASA.

Joining me today are Ellen Peng, of Boys Town National Research Hospital, and Erick Gallun, of Oregon Health and Science University. We’ll be discussing their article, “Remote testing for psychological and physiological acoustics: Initial report of the P&P Task Force on Remote Testing,” which appeared in Proceedings of Meetings on Acoustics and is based on a talk they gave at the ASA’s 179th meeting, Acoustics Virtually Everywhere. Thanks for taking the time to speak with me today, Ellen and Erick. How are you?

 

Ellen Peng (EP)

01:04

I'm good. How are you? Happy to be here.

 

KS

01:07

Thanks. Good. 

 

Erick Gallun (EG)

Doing well, excited for this opportunity to talk about this exciting work.

 

KS

01:13

Awesome. Yeah, I think our listeners will be very interested. So first, just tell us a bit about yourselves and your research backgrounds.

 

EG

01:23

All right. So I am a professor at Oregon Health and Science University. And I have also done work at the Portland VA as part of the National Center for Rehabilitative Auditory Research. I was trained as a cognitive psychologist working in hearing and then became interested in communication disorders when I had an acoustic neuroma. And when it was removed, I lost all hearing in my right ear. And so all of a sudden, hearing healthcare became a lot more interesting to me. And so now I study, you know, how it is that we use our two ears because I realized there's a lot of interesting things that happen with two ears that you don't notice until you only have one.

KS

02:16

Right.

 

EP

02:18

And I'm a research scientist at Boys Town National Research Hospital. Up until about a year ago, I was a postdoc at the University of Wisconsin-Madison, where I actually studied spatial hearing in a very special population: children who use two cochlear implants. I was originally trained as an architectural acoustician, and through my postdoctoral work I slowly moved toward this really interesting population of children with hearing loss, trying to figure out how to maximize the benefit they get from those devices through individualized fitting strategies.

 

KS

03:00

That is super cool. So to give a little backstory to this interview, in May 2020, a couple months into the COVID-19 global pandemic, the Acoustical Society’s Psychological and Physiological Acoustics technical committee created a Remote Testing Task Force. Can you tell us a bit about how and why this task force was created?

 

EG

03:19

Yeah, so I was the chair of the P&P committee at that time; I was just handing it over to Gin Best. And we realized that with COVID, a lot of people were interested in figuring out how to keep their labs going, because we were shutting down labs and nobody was coming in. And necessity being the mother of invention, lots of people were getting interested in what we could do without bringing people into the lab. And so we called that “remote testing.” And we decided that something that P&P could do that would be helpful is bring together a bunch of people who were experts, poll the community, compile our results, and sort of tell people what was out there, what some best practices are, you know, what are people doing? What have we figured out? And we brought in Chris Stecker to chair it because he was a big skeptic. He was like, “Well, I use an anechoic chamber to do my work, and I don't think any of this is really going to work.” But I'll note that he has actually done some remote testing work since then. So I think we won him over.

 

KS
04:39

Ooo! Well, that's a great story and a great idea.

 

EP

04:42

This is really great. I was really glad to see this—our community, with its leadership and senior scientists, jumping onto these newer platforms and newer methods—because as junior scientists, our perspective is that we can't afford a whole lot of downtime during COVID and the whole lockdown. Vacating our resource base just seemed very scary at the time. Some of the junior scientists on that paper were already starting to talk about, you know, can we move some of the experiments online? Is that a new platform? Is that a new way of collecting data to continue research? Literally the weekend after lockdown started. So I was really glad that we had a task force led by senior scientists who have done a lot of work in this area and have the expertise to help us through.

 

KS

05:37

That's great. It sounds like it helped a lot of people. Like in particular, like you said, junior scientists. 

So in your article, you go over key issues involved in designing research studies for remote testing. Let's go through those. First, before starting research with human subjects, there are some areas related to compliance to consider. What are these, and how are they affected by remote research?

 

EP

06:04

So, general-procedure-wise, there's not a whole lot of difference between remote testing and research within the lab. The key factors that we consider are getting consent in a way that ensures there are full opportunities for asking questions from the research participant's perspective, because when we move things to the remote environment, we don't always sit with the research participant to make sure that they have access to everything they need for an informed decision—the "informed" part of informed consent. That's really important, as is giving them the opportunity to ask questions during this process. There's also data security, which has to keep compliance with IRB and HIPAA requirements in the same way and with the same quality as we would have in the lab environment; that's also important. And one of the trickier parts of this process is actually payment: how do we deliver payment in a way that is satisfactory, that meets the expectations the research participant would have had coming into the lab, and what is the mechanism to actually deliver that payment? We need to be creative, depending on where we deploy these research studies.

 

KS

07:29

Very interesting. Okay. So what platforms are used for conducting research remotely?

 

EG

07:34

So in the task force, we asked people what they were doing, and we were basically able to categorize everything that was happening into sort of two buckets: one that we call take-home, and the other that we call web-based. For take-home, you create the testing device in your lab, you get it all calibrated, and you make sure that everything is working exactly the way you want. And then you give it to the participant—you either go to their home, or you mail it to them, or they come in and pick it up—and then they go test at their remote location. For web-based, you just have people go to your website, or the website of one of the many companies that have started doing this. And the participant either downloads the software or just uses an online interface and does all the testing with their own equipment. And pretty much there's a whole bunch of flavors of each of these, probably a lot more flavors of web-based than take-home. But that's sort of the framework that we've used.

 

KS
08:52

Okay, and so what would be the different applications for take-home versus web-based?

 

EG
08:58

So we didn't really see a fundamental difference between what you could do with one or the other. There are some things that, you know—if you want to, for example, have people use a speaker array, which actually one person has done. They actually sent people a box that they unpacked and set up an array of speakers in their living room, and then they did the testing, and then they packed it all up and sent it back. That's pretty unusual. Most of the work is done over headphones. And so the big difference is, you know, do you really know exactly what the signals were that you were presenting? Or are you going to put in place some things that will sort of help you figure out whether or not it was kind of in the right ballpark? And there aren't very many things where it turns out to be really important that you have calibration. Although, if you want to do a hearing test, or get as close as you can to a hearing test, with very low-level signals and very precise calibration, you're probably going to want to use one of the take-home options, and there are a number of companies that have started creating those.

 

EP
10:25

Yeah, so for web-based applications, the benefit over the take-home approach is perhaps the cost. For some of us in my position as junior scientists, we probably don't have all the funding to curate all this equipment and have it sent out. So having a web-based application that delivers signals that are good enough is really appealing to us in that respect. And we were working at the time with young adults, so access to a computer and access to the internet to be able to get on the research experiment was a little easier than with other populations—for instance, older adults, or individuals living in rural areas—where that would be a little more challenging. But a lot of the web-based studies focus on replication at this point, replicating effects that we see in the lab, and for that we still largely draw our samples from university campuses, for instance. So it was a little easier in that aspect to sort of hit the ground running.

 

KS
11:53

Okay, so it sounds like it depends a bit on what your resources are and what populations you're working with, essentially.

So with listening tests, you need to present stimuli with high fidelity and consistency across participants. With in-person testing, investigators can select and calibrate their audio hardware to be consistent across participants. How do you manage this consistency remotely?

 

EG
12:19

Yeah, so I touched on this a little bit when I was talking about the advantage of take-home. But even when you are sending equipment home with people, it's pretty different from, you know, having it be in your lab. And so it's very important that you look carefully at whether or not your system will perform as you expect when you put it in the hands of somebody who is not a scientist. And so that means simplifying your instructions, but it also means getting to know your hardware and finding platforms that are as robust as possible. And so a lot of people have been spending time looking into these things—you know, what actually happens in the field—and making measurements.

And I also mentioned that you can do some things to sort of make sure that people are actually getting the signals that you expect them to. One of the things that had been developed, before we started doing this, is a binaural test to check whether people are wearing headphones. So it's pretty cool. There's a thing called "Huggins pitch": you can play noise into the two ears, and if you introduce an interaural phase shift, it'll sound like there's a tone at the frequency where there's a phase shift. And so you can ask people—people don't know that you've done this—and you just say, "Okay, I'm going to play multiple intervals, and you press the button when you hear the interval that has the tone in the noise." And if they're wearing headphones, they'll pick it out, no problem. If they're listening over loudspeakers, it'll just sound like noise; they won't be able to do the task at all.
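
For readers who want a concrete picture of the stimulus Erick describes, here is a minimal sketch in Python of how a Huggins-pitch interval could be generated. This is an illustration only, not the task force's or any published study's implementation; the 600 Hz target frequency and the narrow bandwidth are arbitrary choices for the example. One ear gets broadband noise, and the other ear gets the same noise with the phase inverted in a narrow band around the target frequency, so the cue only exists when the two channels reach the two ears separately over headphones.

```python
# Minimal Huggins-pitch stimulus sketch (illustrative values, not from the paper).
# Left ear: broadband noise. Right ear: the same noise with the phase inverted
# in a narrow band around f0. Over headphones, listeners hear a faint tone at f0;
# over loudspeakers the two channels mix in the air and no tone is heard.
import numpy as np

fs = 44100          # sample rate (Hz)
dur = 1.0           # duration (s)
f0 = 600.0          # target "pitch" frequency (Hz) -- illustrative choice
bw = 0.06 * f0      # width of the phase-shifted band (Hz) -- illustrative choice

rng = np.random.default_rng(0)
noise = rng.standard_normal(int(fs * dur))

# Work in the frequency domain so the phase flip affects only a narrow band.
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), d=1.0 / fs)
band = (freqs > f0 - bw / 2) & (freqs < f0 + bw / 2)

left = noise
right_spectrum = spectrum.copy()
right_spectrum[band] *= -1.0                 # 180-degree interaural phase shift
right = np.fft.irfft(right_spectrum, n=len(noise))

stereo = np.stack([left, right], axis=1)
stereo /= np.max(np.abs(stereo))             # normalize to avoid clipping
# `stereo` could then be written to a WAV file and used as the "tone present"
# interval in a headphone-screening task.
```

In a screen built this way, intervals like the one above would be interleaved with plain diotic-noise intervals, and only listeners actually wearing headphones can reliably report which interval contains the faint tone.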

 

KS

Oh, that's really cool. 

 

EG

Yeah, so super clever. And a lot of people have started using that.

 

EP
14:21

Yeah, so something like that has been really, really helpful for web-based applications. Particularly because we kind of opened the option to bring your own device, we have huge variability in terms of the sound cards and the headphones that the participants recruited for these web-based studies will use during the experiment. So having these sorts of perceptual screenings to weed out, ahead of time, those individuals who don't have good enough audio quality—to make sure that those who go through our experiment use good enough audio devices—actually was important. So something like the Huggins pitch and similar tasks have been used in some of the web-based studies as well. It's really good that we have this very quick verification phase, and it went into application very quickly.

 

KS
15:15

Yeah, that sounds very helpful. So what other things can be done to reduce variabilities that are critical for research questions?

 

EG
15:26

Yeah, so this is a really big thing. It's a big question, just in general. In psychology and psychological research, there's something called the reproducibility crisis, which is that a lot of very influential papers have turned out not to be reproducible. And it's more of a problem in social psychology than in perceptual psychology, but it's still something that we're all very concerned about.

And so one of the things that we think really helps with reproducibility is making your stimuli and your methods very clear, so that everybody knows exactly what you did, and making them available to other people so that you can actually have replication studies. And remote testing just has baked into it the idea that you're going to be creating things that can be done by multiple people in multiple labs. And so all of a sudden, this big source of variability—which is that everybody has programmed their own software and built their own hardware systems—you can remove that. You know, if you do a take-home study and everybody uses the exact same hardware, and you share your code, either for the web-based or the take-home version, then everybody's doing the exact same experiment. Now, all of a sudden, you can easily do replications, and you can get your n up beyond the number of people that would be easy to get into your lab. 50 or 100 is a large n for a lab, but, you know, with a web-based system, you could easily get your n up to 1,000 or 10,000. Just, you know, all you need is money and patience.

 

EP

17:24

Yeah, so with the hardware that we are largely using for remote testing, whether it's take-home or web-based, we sort of put the emphasis on commercial-grade, consumer-grade audio hardware that is relatively inexpensive compared to the more high-end in-lab equipment we've been using, which is on the order of thousands of dollars apiece—something that's a little bit more accessible. That actually increases our replicability across studies, because you can replicate the hardware, and the software as well, relatively inexpensively.

There's also this big push for open science, with sharing code and experiments: you can host the code, or if you build the experiment online on some of these commercial platforms, you can even share the experiment, and other users will be just one click away from being able to reproduce your experiment the exact same way you did. So it's been really, really convenient—a huge emphasis on convenience through remote testing.

 

KS
18:36

That is great. What other factors in remote testing can affect how a participant performs the tests and the study?

 

EP
18:46

Sure. So, in psychoacoustic studies, we are typically quite concerned about how the participant's attention during a task may contribute to some of the individual variability in the data that we see. That has been one of the main reasons for having really high-end equipment, sound booths, and the in-lab environment to run these studies. What we are seeing in the more recently published studies that we review in the newer version of the paper—there are actually about 35 different studies published using remote testing methods—is evidence that attention during tasks in the home environment may not influence individual performance as much as we thought it would, as long as the task itself is engaging and there's consistency in how we deliver instructions. So that lifts a huge concern, I guess, over whether remote testing will be able to generate data quality similar to what we have gotten in the lab environment.

 

KS
20:01

Right, right. So how does remote testing affect data management?

 

EG
20:12

So there are a couple of things that happen differently. One is that, unlike in the lab, where you might have all of your data immediately go to the server, you need to either get your take-home device back and upload the data, or set up some process by which the in-home device is going to send data to a secure server. If you're using a web-based system, the data are being saved on their server, but you have to get the data onto your server. And it all has to stay secure throughout this whole process. And every institution has its own idea of what "secure" means and what is an okay thing to do with your data. The good news is that there are a number of HIPAA-compliant solutions out there, so it is quite possible to make sure that everything is encrypted and safe and, you know, complying with all of the requirements that we have to keep patient data safe. But some institutions think that those servers are okay, and other institutions need a little more convincing; they're slowly coming around, especially as more and more people are doing this. And as Ellen mentioned, in the JASA paper that was just published—the Forum paper, which is the JASA version of the POMA paper—we looked at what work had been done since the task force started, and we found 35 papers that have been published.

 

KS

22:08

Oh, wow!

 

EG

22:10

So this is a big deal. A lot of people are getting involved. And we think that, you know, the data management solutions are taking account of this, and that the Institutional Review Boards are also becoming more savvy.

 

KS

22:28

That's great. And actually, that kind of leads into our next question: since there are so many studies that have been published, what are some examples of studies that have been able to show that remote testing can be used to answer the questions you want to answer?

 

EG
22:43

Yeah, so we actually did a study to see whether our take-home solution was going to work in a bunch of different environments. And so we have a tablet that runs custom software that we built that's freely available, and we have a pair of headphones from a class of headphones that we think is inexpensive and high quality. So that's our standard setup. And we had one experiment where we just ran our standard setup twice to see what the test-retest reliability is. And then, armed with that data, we went and looked at what happened when we had a different pair of headphones. So we unplugged the regular headphones and plugged in a different pair of headphones, without doing any additional calibration, to see what happened. Well, the first thing that happened was all of the levels went down by 14 dB. But all of the data looked exactly the same as when we did our test-retest with the exact same system. So it turned out that for the tests we were doing, it didn't really matter exactly what the level was. And then we did—

 

KS

24:09

That’s cool.

 

EG

24:10

Yeah, so we were very happy with that. We said, "Okay, good. So maybe people could use whatever headphones they had lying around." And then we were worried about noise in the environment. So we went to a cafeteria at UC Riverside and recorded the cafeteria noise, you know, everything from the espresso maker going to people rattling their trays, and then we played that out of a loudspeaker in the test room. And we had people either do it with the loudspeaker on or with the loudspeaker off. And again, we found that there were pretty much no differences between how they did in those two environments. And so that also gave us a lot of confidence that we could do some of this stuff remotely. And so, since then, we've been actually having people download the software onto their own phones or tablets and use whatever headphones they have, and that basic finding has been replicated.

 

KS

25:20

That's really exciting. 

 

EP

25:23

Yeah, some of this previous work that Erick and other senior scientists have done, showing that these methods can still generate really good data quality, really gave us the initial confidence to be able to move in-lab testing to remote settings and to use, you know, these consumer-grade hardware and software solutions. Without that, we wouldn't even think about going down that route. So that really calls for a series of validation studies, right? Basically, this is validation work that shows the solutions work, that we can still uphold data quality the same way that we would have in the in-lab environment.

So I work in the space with children, studying pediatric hearing loss. I'm also very happy to see some of the validation studies that we review in the new JASA paper that focus on the pediatric population, because children in their home environment are a different matter—even though I said that the attention of our research participants didn't seem to matter much, that's most likely restricted to adults. We still need a whole lot more work to build confidence for the pediatric population at this point.

 

KS
26:59

Right. Right. That makes sense. 

So it sounds like remote testing has advantages even outside of a pandemic. What are those? And do you think remote testing will stick around even after folks can return to their labs?

 

EG
27:11

Yeah, I think that this is definitely something that is here to stay. It's something that we were very interested in before the pandemic, because we want to get to a broader population. As somebody who works at an academic medical center, the people who will see my advertisements and come into my lab are a very limited subset of all the people who could benefit from improved hearing healthcare. And if we want to be able to collect evidence that is applicable to the general population, we need to sample from that population, and the way to do that is to go where they are. And so once we have established that our methods are valid and that people can use our tests, then we can start partnering with other researchers, clinical researchers, and get our take-home devices out, or have people download our software, in clinics not just all over the country but all over the world. I see a potential future where somebody, you know, bicycles into a remote village in Thailand with a phone and a pair of headphones and runs around and tests everybody's hearing, and, you know, all of a sudden, hearing healthcare has come to a place that would never see an audiologist. And I think that that is the kind of future that really inspires me to keep working on this.

 

EP
29:02

I totally agree. During the pandemic, we were able to come together as a scientific community, which gave us an opportunity to grow and to build confidence in using remote testing as a solution, as a community and as individual researchers. That really pushed our work forward, with the potential for even greater good down the road, beyond the pandemic itself.

 

KS
29:27

Yeah, it sounds like the impact on accessibility is very impressive, from what you found. And it sounds like you've made a big difference in improving research going forward even after the pandemic, and perhaps can change how we conduct hearing research altogether. 

What are your goals for the task force at this point?

 

EP
29:49

So I think the task force at this point has reached our goal of building a sort of collective confidence in using remote testing to answer research questions in hearing science, with the additional potential to expand into hearing health in a global setting and reach out to even broader clinical populations. In the new JASA paper that we just published, we identify 35 studies published between 2020 and 2022. Within these two years, we went from, you know, really panicking about how we would continue research studies, to having this great collection of research that was conducted and published using remote testing. That really shows our productivity during this period and the collective confidence that we built as a community.

 

EG

30:52

So I think that one of the things that's very exciting about remote testing, and about having put together this information, is that, as Ellen mentioned, emerging researchers really can benefit from it. There are a lot of people who are starting their own labs, or hoping to start their own labs, who don't have the resources to do a huge study with lots of equipment. And so, by making this wiki and making the information available, people can go and see what the range of possibilities is and choose the thing that works best for them. And so I see ourselves continuing to update this as the field changes and evolves; it's something the Acoustical Society can do to help out people who want to work in this area. Because it's often very confusing trying to figure out how to get started. And making a destination that's a starting place for getting your bearings, that gives you the resources and the references and who to contact—I think that is really a lot of the point of having a professional society. And so I think this is squarely within what ASA should be doing, and I'm excited that we're able to provide this for the community.

 

KS

32:27

Yeah, that's great. It sounds like it will really help with improving research and making the act of research more accessible to a wider number of scientists, I guess. 

Thank you so much for taking the time to speak with us. For those listening who are interested in learning more about the Remote Testing Task Force recommendations, we'll be including a link to their wiki in the show notes, as well as a link to their Forum article on the same topic that was recently published in JASA. Have a great day.

Thank you for tuning into Across Acoustics. If you would like to hear more interviews from our authors about their research, please subscribe and find us on your preferred podcast platform.