The Alerting Authority
The Alerting Authority is a podcast dedicated to improving how we warn the public when seconds matter. Hosted by Jeannette Sutton, a leading researcher in public alerts and warnings, and Eddie Bertola, an expert in emergency communications technology, the show brings together practitioners, policymakers, technologists, and thought leaders shaping the future of public alerting.
Each episode dives deep into the real-world challenges behind creating, issuing, and delivering life-saving alerts. From Wireless Emergency Alerts (WEA) and the Emergency Alert System (EAS) to IPAWS implementation, crisis messaging, public behavior, and alerting policy, the hosts explore what works, what fails, and why.
Rather than focusing solely on tools or software, The Alerting Authority examines the “human side” of emergency communication—decision-making under pressure, message design, training gaps, coordination across agencies, and the psychology of how people interpret warnings.
The podcast aims to empower emergency managers, communicators, and public safety professionals with actionable insights, practical guidance, and candid conversations with the people who have shaped, studied, and experienced alerting at every level.
Whether you’re responsible for issuing alerts, designing systems, researching risk communication, or simply interested in how warnings save lives, The Alerting Authority is your go-to source for understanding and improving public alerting in a complex and rapidly evolving world.
Do Alerts Really Work? RAND Study Part II | Who Gets Missed, Opt-Outs, & Alert Fatigue Explained
In Part II of our deep dive into the groundbreaking RAND national alerting study, we go beyond the headline stat that 91% of Americans received the Wireless Emergency Alert (WEA) and uncover the real story: who didn’t—and why it matters.
Host Jeannette Sutton is joined again by RAND researchers Rachel Steratore and Andy Parker to explore critical gaps in emergency alert systems, including:
- Why rural communities are less likely to receive alerts
- How age, device type, and mobile carriers impact delivery
- The surprising truth about opt-out behavior (especially among younger and lower-income users)
- The role of awareness, trust, and alert fatigue in public response
- How disability, language, and accessibility factor into alert effectiveness
- Why “sending the alert” doesn’t guarantee people actually receive—or act on—it
This episode also tackles one of the biggest unanswered questions in emergency communication: Do alerts actually lead to action?
You’ll hear insights on:
- The difference between receiving, understanding, and acting on alerts
- How risk perception (fear vs. familiarity) shapes behavior
- Why education and public awareness are major missing pieces
- The future of alerting across devices (phones, watches, smart tech, and more)
- What the next generation of research must focus on
If you’re an emergency manager, public safety professional, researcher, or just someone curious about how alerts work during real crises—this episode is essential listening.
👉 Watch Part I first for the full context of the RAND study
👉 Don’t forget to like, subscribe, and share to help improve public safety awareness
In our last podcast with Rachel Steratore and Andy Parker, we learned how RAND conducted one of the largest public alerting surveys ever fielded in the United States: over 80,000 responses nationwide, collected within hours of a live national test in 2023. That study revealed that approximately 91% of adults with working cell phones received the alert, demonstrating extraordinary reach. But it also surfaced critical disparities related to geography, device type, age, carrier differences, and opt-out behavior. We explored many topics related to WEA performance and receipt rates, and touched briefly on the subjects of opt-out rates and alert fatigue. Today we are digging in deeper. So, Rachel and Andy, welcome back to our show.
SPEAKER_02: Thanks for having me.
SPEAKER_01: Happy to be here.
SPEAKER_00: I know that whenever we were talking about the RAND study before, we had more to talk about and the show ended. So again, thanks for being back. I think we're going to jump right into it. For those that are listening, if you're not familiar with it, please listen to the first podcast where we talk about this; this is really building off of those discussions and things that we haven't talked about, just like Jeannette said. So just to move forward, can we talk more about the who, as far as who received the test alert? What do we know about people with physical, cognitive, or language-based limitations, and those non-English speakers that we generally don't serve as a whole?
SPEAKER_03: Let me jump in on that, and let me actually break this up into the characteristics of the carriers and devices that are associated with the who, and then the characteristics of the individuals themselves. We did see differences by which carrier somebody had their device through. Those individuals who were with Verizon reported a lower rate of receipt than those who were with other major carriers, though that could be for a variety of different reasons. There could be technical reasons, but there could also just be familiarity with the way that these different companies communicate with their customers. One of the things that I think is a very important question is: do people know what they're seeing when they get a WEA alert? Do they recognize it as part of a system, or what it stands for? We also saw differences by the type of phone. In particular, people who had less common brands were less likely to report having received the alert; the Apples, the Samsungs, the Motorolas of the world were reporting a greater rate. Again, there may be technical and non-technical reasons behind that. One of the things that was very important to us was to understand the type of device people had. Now, because it was a survey, we were reliant on their self-reports of what they had, but we also relied on catalogs of devices. We used a fairly extensive set of questions asking them what type of phone they have, the one they use most commonly. Then we matched that up, using the carriers' websites, with the types of phones they have and what capability those phones typically ship with with respect to WEA.
As your audience may or may not know, there have been several standards for WEA over time, referred to as 1.0, 2.0, and 3.0. Most new phones ship with WEA 3.0 as the standard, and that includes things like being able to receive longer alerts as opposed to just shorter alerts, and the ability to do very specific geotargeting. Those were advances that came with the different versions. What we found was that a small number still had phones we could identify as WEA 1.0 phones, about two to three percent. Another 10% had WEA 2.0 phones. The rest were largely WEA 3.0 phones, though there were a number we just couldn't determine, because there wasn't clear evidence one way or another. So there is a question of who owns which type of phone, and that's important because, while this wasn't a part of our analysis, that might not be evenly distributed across the population. We do suspect that some of the less familiar brands are lower-cost phones that may have lower capability. Getting into the who question about individual characteristics: older adults were less likely to report receiving the alert, and younger adults more likely. This was especially striking with those over 65, who had the lowest rates. Those folks also had the lowest awareness of WEA; we asked if they'd heard of the system before. That may have, for instance, contributed to confusion about what it was they were seeing. We don't have direct evidence of this, but I could imagine that could have contributed to the lower rates of receipt, or reported rates of receipt. We spent a fair amount of effort on the topic of disability.
We were working with some colleagues at Georgia Tech; they have a center there that is focused on technology and disability. It's a nuanced area. There are certain types of disability where folks are early adopters of technology because those technologies are supportive, particularly individuals who have vision or hearing disabilities. There are others where that's not the case. When we looked at just disability, we found a slightly lower rate of receipt for those reporting any sort of disability versus those not. But when we controlled for other factors, that largely went away; it seemed to be accounted for by other factors. We did see some awareness differences by disability, but none of that was terribly striking, and that was, I think, in some sense, a comforting finding. What was less comforting was the effect with respect to rurality. This was a fairly systematic trend: the more rural the area where you lived, the lower the rate of receipt of the test alert. That, I think, is more concerning, and probably technology related. At least that would be my interpretation: a feature of the network in those areas. Can I ask you a quick question about that? Yeah, absolutely.
SPEAKER_00: Just to clarify: for those in a rural area, they were less likely to receive it, and that's different than less likely to recognize what it is, right? It didn't come in on my phone, like it didn't arrive. Is that what the survey showed?
SPEAKER_03: To be completely accurate, respondents to our survey who lived in more rural areas were less likely to report having received the alert. So this is still self-reported receipt, and there may be other things that go into that. I don't recall if there are rural-urban differences in awareness of, or familiarity with, the system, but there might be. But this was a fairly striking trend, where the system appeared to have lower reach, at least by these self-reports, in more rural areas. For non-English speakers, we didn't see particularly strong trends, which again was probably a comforting finding. WEA alerts right now are primarily English and Spanish. They've been exploring the possibility of extending out to other languages, but we didn't find big effects.
SPEAKER_00: I just think that's critical information, because as someone who sends out these alerts, one of the things that really bugs me is when people say, well, I sent it out. Hopefully, in their minds, they're not thinking, I have just assured my community that everybody is getting the information. With the results of this study, we're able to see that there are some inconsistencies in how the message can be received. Simply hitting the button to say go and pushing it out may not get the message out as you think it would. But again, having the knowledge that you just shared with us will help alerting authorities with that.
SPEAKER_03: So let's say we send an alert to western Pennsylvania about a snowstorm. We might be concerned that we are reaching older, more rural adults at a lower rate than folks in cities, or younger folks. I think that could help inform a layered strategy for getting out the word: if you know there are going to be gaps there, there may be other ways to account for them.
SPEAKER_04: I think it's just such an important thing to consider. We've talked a lot about people who choose to turn messages off, and yet we also have these populations that simply aren't even receiving them. That's a really important finding from the work that you've done. And as you've been talking about people who are rural: we think about people who don't receive the message because they're rural, but there are other things also affecting the reasons that people are not receiving messages, and one of those is opting out. In your research, you found that this correlates with demographic factors, such as people with lower income and people who are younger adults. I wondered, can you talk a little bit about why or who is opting out of alerts, those who have these subsidy programs or people who are younger adults?
SPEAKER_05: Yeah, I'd be happy to jump in on that one. On subsidized phones, this was one of the strongest signals we saw in the data: reported opt-out rates were almost three times as high for people who receive their phones through subsidy or discount programs. Think programs like Lifeline, which reduce the monthly cost of phone or internet service for qualifying low-income households. What's interesting is that the story isn't simply "subsidized phones don't receive alerts," because in our results, receipt was actually a little higher for people with subsidized phones. So that runs counter to the idea that subsidy-provided phones, if they had lower-end device capabilities, would necessarily lead to lower receipt. My interpretation is that this looks less like a pure technology-capability problem and more like a user-experience, trust, or burden problem, potentially. Something about the experience of alerts, or the context in which they're received, may be pushing people toward opting out, even though the system is reaching them. That's really why digging deeper on the factors associated with ownership of subsidized phones within these data is so important; it may help explain observed differences not only in opting out, but also in receipt and awareness. We mentioned this in the first podcast too, but I want to say it again here and emphasize that these are correlations and not causal claims. But they're strong enough to justify follow-on research, especially qualitative work to understand the why behind the opt-outs in this group. And this trend should be a priority because, as Andy started to talk about, lower-income groups may be at greater risk in emergencies, and opting out limits their ability to receive certain categories of warnings.
These are the sorts of questions we would want to dive into a little bit more. For younger adults, we do see higher opt-out behavior compared with the older age groups that Andy mentioned. I want to be careful about pinning that on any single driver. One interesting marker in the data is awareness: compared to adults 36 to 65, younger adults aged 18 to 35 were significantly less aware of WEA prior to the national test. Again, that doesn't prove causality, but it may help explain part of what's going on. If someone isn't familiar with what WEA is, who's behind it, and what kinds of alerts it's intended for, they may be more likely to perceive certain alerts as low utility or irrelevant over time, and that can feed into people opting out. A second plausible explanation might be comfort with phone settings. Younger adults are often more fluent at managing notifications and toggling off features they don't want, whether that's apps, services, or system settings. So if certain alert categories feel more burdensome than useful, they may be quicker to go into settings and opt out. Operationally, the implication is quite straightforward: if opt-outs are concentrated among younger adults, it can reduce the reach of certain alert categories within that demographic. And you could think of younger adults as being highly mobile, too, moving between jurisdictions for work, school, and travel, so losing that reach of the system can really matter. We really need to figure out the why. Is it relevance, trust, alert volume, usability? That's where follow-on work, especially qualitative research, can clarify what's driving that behavior.
SPEAKER_04: Yeah, I love that. As you were talking about people with subsidized phones, I wondered if there is an awareness among people who receive a WEA about the cost to them as the phone owner, that they're not paying for a text message to receive this, and not recognizing that. I mean, WEA is not well understood, and it hasn't, I don't believe, been very well publicized, except for when something goes wrong. Then people talk about those messages that look like an Amber Alert, and it's like, oh, of course I know what you're talking about now. But when they arrive on your phone, because of the way so many of these messages are structured, you may in many cases have no idea who it's from and why your device is being taken over by some incomplete message. There's so much room for education.
SPEAKER_05: I have to laugh, because after our first podcast, a dear friend of mine and somebody who worked on the project with us, Katie Wilson, reached out and said that her sister, who's based in the UK, had listened in. She's a lay audience, you know, has no background in this, and she found it very educational and enlightening to figure out that, oh, these aren't just text messages. The government doesn't just have a list of phone numbers that they're sending these messages out to; it's broadcast. So she was sharing the uptick in her knowledge base of how the system even works, and she found that fascinating.
SPEAKER_00: Conspiracy theories fly after these large tests. And, I don't know what you're talking about, right? Everybody walking around with their foil caps still.
SPEAKER_03: We actually did make some effort to collect some of those at the time, just to see, because we were worried whether any of it would take off; we wanted to have some basis for the timeline of it. It ended up dying back down pretty quickly. But a lot of what you're talking about is something I did want to highlight. I don't want to dig too much into the methodology of the study, but listeners may wonder: if there are demographic differences in opting out, could that affect the demographic differences we're seeing in receipt? In this case, no, because the specific type of alert that this test went out on, a national alert, is not one that you have the option of opting out of. It's designed for if the federal government needs to alert the entire U.S. public about something. It has never been used in practice; it has only been used for tests like this, and only a couple of times. But that wouldn't have affected these results.
SPEAKER_02: Well, yeah, really important.
SPEAKER_00: I know we're going to go on, but I have to touch on what Rachel said about her friend in the UK. First, I think it's great that she was listening to it. And talk about education: we talk about the public and how they're educated, but I just want to point out, too, that we have a lot of emergency managers and alerting authorities, as Jeannette said, who really don't understand this themselves. And yet they're the ones tasked with trying to provide public education. I'm not saying it's the blind leading the blind, but let's just say it's those with an incomplete understanding, is that fair?, who are trying to answer questions, which can, I think, cause more confusion.
SPEAKER_04: It's a very complicated system. I get questions that I still can't answer, and I feel like I have a pretty good grasp on it.
SPEAKER_03: Especially when you layer it in with other systems that local jurisdictions may subscribe to, like Reverse 911, or where it's being rolled into wrappers like what happens in LA County. WEA is actually part of it there; they've put a wrapper around it with, I believe, an app, so it comes out in a somewhat different form, or at least it's described there under a different name. I do think there is an opportunity here, and the practical implication of all of this is setting expectations with the ultimate recipient: this is what we will do, and this is what it will look like at the time. But your point, Eddie, is an excellent one. That's yet another thing that local officials need to have some expertise on. There are professional societies and such that can work toward education in that respect, but I think there needs to be some attention to what this is. One of the interesting conversations I had with some of the folks at the federal level who are leading these systems was: wouldn't it be neat if we could figure out a way to incorporate citizen science into our educational system? Imagine some sort of program where, when there's a national alert like this, or a local test, or something else, you're encouraging students to do essentially what we did in our survey. Did you get the alert on your phone? When did you get it? What did it look like? Take a screen capture of it. You could imagine what those projects would look like as an opportunity to learn about this, and as an opportunity for kids to take it back to their parents and teach them.
SPEAKER_04: I love the idea of getting screenshots of what these messages look like on all kinds of devices with all kinds of capabilities.
SPEAKER_03: All of our team members asked their family members and such, and we have a folder with a whole bunch of examples of what they all look like.
SPEAKER_04: Yeah, that's great. I would love to share those sometime. And of course, in the last two years since the test, all kinds of new capabilities have rolled out, so those screens look different, and they're going to continue to change. We frankly don't have a good understanding of what things look like until it shows up on someone's phone, and it's like, oh, surprise, there's now a map added. Where did that come from? If you look hard, you can find information on some devices about what's being rolled out, but it's not something that is advertised to our alerting authorities that these changes are coming. So just having access to those images, I think, would be a tremendous service to all of those who are issuing messages.
SPEAKER_03: That's a fantastic idea. A little-known part of the system is that there are some broad guidelines for what WEA alerts should look like or contain, but the device manufacturers have a lot of control over the actual implementation. I think we mentioned this in the last podcast. One of the most striking examples is that with Apple devices, and I think this is still the case, there's a button right on the screen that'll take you straight to the settings where you can opt out. And lo and behold, opt-out rates are greater on those phones. So it isn't all the same, and I agree with you: having a quick visual of that diversity, and how it's evolved over time, would be really useful. Part of the problem is we don't know when these alerts are coming, so we can't tell everybody to take a screen capture at that time. But yeah, we did that for the national test a couple of years ago.
SPEAKER_04: Well, anyone who's listening: if you get an alert, take a picture of it and send it to us, and we will start creating a repository of all these interesting changes that you can't find unless you actually pay attention at that moment.
SPEAKER_00: No, absolutely. Especially if you don't think it's a good message, or you're not sure what happened, send it to us. One thing I'm looking forward to, because we're going to talk about potential future research and everything else: when we talk about WEA, we almost always think of a phone. But really, as technology continues, I know that the IPAWS team has done a lot of research and a lot of pushing to get more devices to receive it. So it's transitioning to more of a WEA-capable device, because that alert can go straight to your watch if it's connected to the cell network or otherwise receives that information. I'm seeing them on fridges, I'm seeing them on different things, and any WEA-capable device should display it. So they're going to be everywhere. With all of the research on the ability to receive the message, and we've talked about that, did you learn anything about the ability to act on a message, or is this a gap that needs to be looked at in future studies? It's interesting you asked this.
SPEAKER_03: When we first started designing that project, one of our colleagues, an engineer, mocked up a failure mode analysis. What that means is: what all has to go right for this thing to work? Where are the places where it could go wrong? A long chain of these things. As I recall, being an engineer, most of those boxes and failure points were engineering failure points, and the last box was, of course, people, and that's the biggest failure point. My immediate reaction was that you could do the same whole thing from the point at which the alert reaches the device onward, just for the person part of this. You could ask yourself: was the device turned on? Did the device receive the alert? Those are the two device parts of it. Did the person notice? Did they understand what they were receiving? Did they trust what they were receiving? Did the person have any response options available to them? And what do they think about it after the fact; how does that experience feed into the next one? Those are each points at which you could do research, or investigate what the sources of uncertainty are, and what levers we have to improve the relationship between receiving an alert and actually acting upon it in a way that is helpful to yourself and to everybody else. The study we conducted wasn't able to look at that, because first and foremost, it was a test alert; there wasn't an action to be taken at that point. I think this speaks to an important gap, which is to understand these same dynamics in real events with real alerts. That's tricky, but it's doable.
We could also do this in laboratory studies, or in more tightly controlled experimental environments. Those are research methods that we understand and could use to start to get at this part of the problem: understanding where things can go wrong from the point at which a person perceives and notices the alert to them doing what they need to do to protect themselves, or not, if that's the right answer.
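The failure chain Andy describes can be sketched as a simple multiplicative model: the end-to-end effectiveness of an alert is the product of the success rates at each stage, from device to person. The stage probabilities below are invented placeholders for illustration only, not estimates from the RAND study.

```python
# Toy sketch of the alert "failure chain": an alert only works if every
# stage in the chain succeeds. Numbers are illustrative, NOT from the study.

STAGES = [
    ("device powered on",       0.95),
    ("device received alert",   0.91),
    ("person noticed it",       0.85),
    ("person understood it",    0.80),
    ("person trusted it",       0.75),
    ("person could act on it",  0.70),
]

def end_to_end_rate(stages):
    """Probability that every stage in the chain succeeds (product of stages)."""
    rate = 1.0
    for _, p in stages:
        rate *= p
    return rate

if __name__ == "__main__":
    # Even modest per-stage losses compound: the cumulative rate drops fast.
    running = 1.0
    for name, p in STAGES:
        running *= p
        print(f"{name:24s} stage={p:.2f}  cumulative={running:.3f}")
```

The point of the sketch is the compounding: six stages that each succeed 70-95% of the time leave an end-to-end rate around 31%, which is why "we sent the alert" is not the same as "people acted on it."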
SPEAKER_01: I'm excited to think more about how we can study some of those things.
SPEAKER_04: There have been various experiments that have attempted to tap into some of those different angles you were talking about. I don't know if you're aware of the study that was done at UCLA: an experiment where people received a wireless emergency alert for a shooting while they were in a lab setting, undergoing skin conductance and heart rate testing. It was really interesting that the IRB allowed that project to move forward.
SPEAKER_03: I'm curious: were they actually WEA alerts, or were these devices that were mocked up to look like WEAs?
SPEAKER_04: They did not describe the inclusion of the sound in any of the write-up, which makes me think that it wasn't actually a WEA.
SPEAKER_03: It was more like a text message that arrived, because I was just wondering how they would be allowed to use the WEA system to do this. Yeah. I could imagine how they could run a program that mocked it up on devices that were completely controlled within the lab.
SPEAKER_04: Yeah, yeah. But I loved the premise of it. It's just a little scary to do an active shooter situation in a college lab. But, you know, it's one way that initial insights were collected as though it was a real event, and there are certainly many, many other ways that we can collect that kind of data. One of the things you were talking about, Andy, that I want to actually turn to Rachel about, was the different perceptions that you might measure. Rachel, in our last conversation, we talked a little bit about human cognition and how that affects and contributes to risk perception, which might factor into the way that people respond to warning messages. Well, it does affect how people respond. Could you talk about that a little bit from your academic perspective, so our audience better understands risk perception and how to measure those things?
SPEAKER_05: Yeah, I'd be happy to. And you're right, this is a bit outside of what we directly measured in the RAND study. But when I say risk perception, I'm talking about how people identify a hazard, how they mentally weight it, and how they decide what it means for them: how likely it is, how severe it could be, and what they should do about it. A useful way to think about what drives risk perception comes from classic work in the risk perception literature: people's judgments often cluster around two big dimensions, unknown and dread. When a risk feels unfamiliar, hard to understand, or uncertain, it has a high unknown characteristic. When it feels scary, catastrophic, or out of one's control, it has a high dread factor, and people tend to judge it as a higher risk inherently. There's a classic illustration of this: some of the early studies in the 70s found that microwave ovens were perceived as relatively high risk, because they were new and people were uneasy about microwaves in their households. Repeat a study like that today, and microwave ovens rank very low on risk perception scales, because they're familiar; people have a stable mental model for them. So that's a good reminder that familiarity changes cognition and affect, or how people feel about something, over time. Cognition is the part covering how people think about the risk: their mental model of the hazard, what they believe will happen, what they believe the warning means. Affect is how they feel about that risk, whether it's fear, urgency, anger, skepticism, annoyance, whatever it may be. Those two pieces don't solely determine behavior, but they do help shape how someone receives a warning message and internalizes it.
And if a warning lands in a context where the hazard feels both real and personally relevant, and the message is clear and actionable, people are probably more likely to act. If it lands in a context where it feels ambiguous, distant, or repetitive, people may discount it, delay action, or look for confirmation elsewhere. So for warning messages, a big part of effectiveness is aligning them with those human factors: reducing the unknown component by making messages clearer and more interpretable, and channeling that dread factor into specific actions rather than leaving people feeling paralyzed or disengaged. Those are some of the things I was alluding to in our last chat about those two dimensions and how they influence not only how people perceive a risk, but how that translates into action in real time. So, two big important factors.
SPEAKER_04Yeah. Well, they're so important to consider as you think about the next stages of research, as you actually measure the response of taking action or of turning messages off, because those behaviors are related to the perceptions you were just describing: people's tolerance for risk, their judgments about it, their prior experiences, how dreaded something is. All of that is background that's really important for those next questions that I think you want to answer, the ones that move away from correlations and into the why and the how. So thank you for sharing that with us, because those are really important dimensions to understand.
SPEAKER_00Well, can I ask you a fun question now? I mean, we've had discussions about what you think the next steps for research can be, the hypotheses and other things developed in relation to the data. My fun question, I guess for each of you, is: if you could pick something to research next based on this, what would you go for? What would your next research project be in relation to this, and what's the best way to go about doing it?
SPEAKER_01Yeah, maybe I'll jump in first.
SPEAKER_05Andy and I, when we talked about this, broke it up into what some of the next steps or hypotheses we're interested in are, and then, second, how you would actually do that. So I'll start with parts of the first one, or at least the things that are interesting to me, and Andy, you can jump in and add where you see fit. A top priority in my mental model is disentangling what's really driving opt-out behavior. Right now we can see strong correlations with geography, alerting patterns, subsidized phones, and age, but we don't yet know which levers matter most. And that part matters because it's much easier to do something about opt-outs when you know which buttons to push. Is it alert frequency? Is it geographic overreach, message relevance, timing, trust? The list goes on. Until we start to separate those drivers, we're guessing, educated guessing, I will say, about which interventions will actually reduce disengagement or improve engagement. So that's one. The second big thing that comes to mind is learning more about the awareness piece. Some other research shows, as we were talking about education, that in most cases, if a topic is highly scientific or technical, the lay public or non-specialists don't need to know all of the technical details in order to make informed decisions related to that risk or technical issue. But there are key features they would need to know, and those are discovered through research: figuring out which key elements they'd need in order to make those risk-informed decisions. In this study, awareness floated a little into the background because receipt and opt-out were such urgent system performance questions. But in many ways, awareness is the untold part of the story.
We can confirm that most people are receiving alerts, and we can work on minimizing opt-outs, but if a large share of the public doesn't know what WEA is, who's behind it, or why it exists, we're missing a big piece of system effectiveness. Low awareness can likely feed into low trust or low perceived relevance, and perhaps disengagement. So you can see there's a string of related concepts, or constructs, that we need to find out more about. One more that comes to mind is shifting from "did the alert arrive" to "did the alert feel useful." We now have good evidence that the system can technically reach people at scale; that was one of the big points of the study. But the next layer is perceived utility when the alert arrives. Do you feel that it helps you make better decisions? Does it clarify risk? Does it support action? Or does it just feel like more noise buzzing on your phone? And that's where, as I mentioned, more qualitative work comes in, and it doesn't have to be just qualitative work, but that's a big piece of this that we couldn't address in this primarily quantitative survey.
SPEAKER_00Hey, Rachel.
SPEAKER_05Things like interviews, deep, you know, focus groups.
SPEAKER_00No, and I don't want to stop you, but I want to help everybody understand. You're using the words qualitative and quantitative. For everybody out there who doesn't have a doctorate, can you talk about the differences between even just those two? Because a question I have gotten is, well, why couldn't you check for all those things in this study? And it is very narrow. Sure. But just to help break it down for the rest of us in the room.
SPEAKER_05Yes. And I'll try to break it down by how you do these different types of research. These aren't perfect definitions, but for a general audience, the quantitative part is more number-crunching. We want to use a survey to say something generalizable about a bigger population, but we only have a sample. So we're being quantitative when we say that 91% of people received the alert. The qualitative part would be talking to the 10%, or a portion of the 10%, that didn't receive the alert: doing a deeper in-depth interview, say, or a focus group among a group of people who didn't receive it, and starting to disentangle the reasons why. What factors drove their decisions to opt out? What awareness levels do they have? What's their background? All of these deeper questions, digging into the why and the contributing factors, can be addressed in a qualitative interview, versus something targeted at more descriptive aggregation of information. So quantitative, think 91% received; qualitative, let's actually talk to people and get a deeper understanding of the factors involved. That's the high-level picture. With that, I think it's worth restating what we said last time about the public needing to be a partner, empowered to make these decisions for their own safety and emergency response options. So I would summarize the next research phase as: disentangle what's driving opt-out, elevate awareness from a background variable to a core outcome, and start measuring not just reach but utility and trust. And I think Andy's going to dive a little more into some thoughts we have about the how.
SPEAKER_03One of the things when you do research around emergencies, disasters, and emerging events, and Jeanette, you've probably had this experience as well, is that most of it isn't announced ahead of time. Most alerting doesn't come with a lot of advance warning, because the events being alerted about don't come with a lot of advance warning. If we're talking about a hurricane, sure, we may have several days of buildup. But if it's a tornado outbreak, probably not. And if it's a flash flood, perhaps not. Even with a hurricane, your advance warning is about a week, and on typical research timelines that's not much time at all. So the work we do in this space is often quite different from the pre-planned study we did around this national test, which we knew, in our case, was happening years in advance. We were able to design the study to get this big sample, to do it quickly, and to tell the people we were going to survey ahead of time: you're going to get this survey, it's real, we need you, and it's important that you respond quickly, because we don't want to have to rely too much on memory. But what we really care about, honestly, is how this works in the real world, and that requires a different approach. Most disaster research is reactive: something bad happens and we quickly mobilize to get out there and collect information as fast as possible. You can't very well do disasters in a laboratory; that's hard, and sometimes questionable. So one of the things that has gotten us thinking a lot is the need to invest in infrastructure. If we're talking about physical science, there are sensing networks for earthquakes or tsunamis.
The National Weather Service is, in many ways, a sensing network. When it comes to the human problem, to social and behavioral science, we don't have that infrastructure for the most part. But we could really use it. And what would that be? Having that sample readily available to us in advance would be useful. There are standing survey panels that can play this role; we actually tapped into some of those for our study. But they could be used in a more real-time way if we prepared for this in advance. One thing that's really intriguing about that: a challenge of doing research immediately after something bad happens is that it can be incredibly tone-deaf. It can come across as creepy and ghoulish. Why are you asking me these questions when I'm just trying to survive? But if this person has been recruited well in advance, and we've said, when something happens, we're going to call you because your experience is important, and we're going to pay you for your time, put some money in your pocket while you're trying to get through this, and we're going to ask you questions that will feed into real policy and real responses, giving them a reason to participate, then suddenly that survey or phone call or whatever it is doesn't feel out of place. So beyond recruiting that group of people, we need assessment tools that are ready to go in advance. We need analytic plans and communication plans that are ready to go. We know how to do all of that. It's not a small endeavor by any means, but we've been moving toward thinking about this as: what would be the social science equivalent of the sensing network? What would be the social science equivalent of the testing lab? What would be the social science equivalent of historical data?
These are questions that I think would help move this space forward in a more data-driven way. Think about the benefits. We capture public experience and response as it happens, rather than relying on people's memories, which are faulty. We can quickly identify gaps, for instance in information, options, or resources, that emergency management can respond to. We could systematically document what happens in a way we can learn from after the fact. I could go on about that, but I do think it's important, and we've said this several times, that the public is an incredibly important partner in any emergency response. If we ignore the public's experience, their understanding, their behavior, all of that, then all of our understanding of the engineered system here is just going to run into a brick wall of ignorance about what the public actually does. We don't want to treat them as a black box.
SPEAKER_01I think those are great ideas.
SPEAKER_04You know, we have done quick response research for so many decades as part of disaster scholarship, going into the field to collect that perishable data. But a broader survey, where you have human social sensors prepared to respond immediately, would reduce so much of the burden on the researcher of getting into the field. Plus, you often just can't get there quickly enough to collect this kind of data. So setting those up ahead of time would be ideal.
SPEAKER_03But I love the term scientific readiness, operational readiness. And we have colleagues who have thought a lot about this. Lori Peek and her crew in Colorado, for example, have been really pushing some of this. But I do think having standing capability matters. And you pointed to another thing about infrastructure: if you do this right, it becomes a resource that lots of different scientists can tap into. My economist friends would say you're absorbing the fixed costs and not making every group reinvent that part of it. When COVID-19 hit, NSF released a call for rapid proposals, and many groups submitted proposals to do work in this space, even within the same program. I've worked with, and Rachel's worked with, several of those groups; we were part of that. And a lot of us had to go through the same exercise of very quickly trying to stand up data collection and do all that work. After the fact, I'm sort of like, that was so redundant.
SPEAKER_04Yeah, that was an incredible effort. And like you were saying, not only is it challenging for the researchers to reach the people they need, but if everyone is trying to go to the same community, you've got people who are completely overwhelmed by scientists coming in and asking them similar questions over and over again.
SPEAKER_00Absolutely. And for anybody out there who's thinking, again, well, you didn't ask this, or you didn't ask something else, or we don't have a certain piece: there was still a ton of information collected that has been analyzed, and there are really far too many variables to study them all at once anyway. As Rachel said, it's going to take subsequent studies. And one thing, Andy, I really liked what you said, and it's a reminder to everybody out there who's asking, well, why don't we have more information? As a caveat, thanks to this study, we have a lot more information than we ever have in the past. But people are often too busy surviving to do that rapid response. This came up in a statistics class I took as an undergraduate. I remember a professor who commented, well, I'd like to know more about the process of somebody dying, but they're too busy to respond. And if you think of your tornado example and others, when we're in the moment, it is really difficult to gather that data. To that end, I'm really impressed with the amount of data you were able to get, and I look forward to new and interesting ways, as you were describing, of collecting more data as we pass through other types of hazards, which will happen. I hope we're able to answer all of the other questions. I swear, Rachel, I want a little graphic with all these levers, because you pull one and then you're like, well, I didn't get the response I needed, so you reach for another one.
SPEAKER_01Yeah.
SPEAKER_00It'll be huge once we get all that information.
SPEAKER_04This has been, again, another great conversation. I love talking with my colleagues in research, and you bring such a special perspective, having done this really massive, incredible national study, which was truly a service to the nation. So thank you for your work in this area and for bringing all of this knowledge to our listeners, readers, and watchers. And for everyone who's listening and watching, please like, subscribe, and share this podcast with others so that we can get this information to the people who really need it. So thanks, Andy and Rachel, for joining us again today.
SPEAKER_02Just thank you again.
SPEAKER_00Download their study, you'll see links to it, and apply what it's showing us; apply the results to make your programs better.