Getting2Alpha

Amy Bucher: Designing Behavioral Nudges

Season 11 Episode 1

Amy Bucher is a leading voice in behavioral science & the author of Engaged, with a passion for designing systems that help people thrive—especially in health & wellness.

Join us as we dive into Amy’s insights on what really drives behavior change, why personalization matters, & how her work at Lirio is bringing customized healthcare nudges to life.

LinkedIn: https://www.linkedin.com/in/amybucher/
Engaged: https://amzn.to/4moF6Za
Lirio: https://www.lirio.com/

Intro: [00:00:00] From Silicon Valley, the heart of startup land, it's Getting2Alpha. The show about creating innovative, compelling experiences that people love. And now, here's your host, game designer, entrepreneur, and startup coach, Amy Jo Kim.

Amy Jo: Amy Bucher is a behavioral scientist and the author of Engaged, a practical guide for designing products that motivate action.

With a background in psychology and deep experience applying behavioral science to healthcare, Amy now serves as Chief Behavioral Officer at Lirio, where she leads the development of personalized, AI-powered nudges that support better health choices.

Amy Bucher: Internal motivation, it's about aligning a behavior with things that you value or that are part of your identity or that are one of your personal goals.

People do not tend to like to be told what to do.

I think of our nudges as a Trojan horse that brings much more powerful [00:01:00] behavioral science inside of it.

Amy Jo: Join me as we explore Amy's approach to ethical behavior design and find out why she thinks personalization is the key to turning good intentions into lasting change.

Welcome everyone. I'm here with the amazing Amy Bucher. What nationality is that name from? 

Amy Bucher: It's German, and that's actually the German name for books.

I changed my name when I got married, which I was on the fence about, but the fact that it means books, I was like, okay, I gotta go for that. 

Amy Jo: That's awesome. So welcome to our podcast. I'm thrilled to have you here. 

Amy Bucher: I'm thrilled to be here. I've been really excited about this. 

Amy Jo: Yeah. We spoke years ago. I looked it up. It was actually pre pandemic. 

Amy Bucher: What? 

Amy Jo: And oh, how times have changed. 

Amy Bucher: I had no idea it was that long ago. 

Amy Jo: Well, we've been connected for a while and I'm so excited to get an update. You have been requested by our community as a guest, really eager [00:02:00] to hear what you have to say, and I know the folks here have questions as well.

So let's dive into it. Let's start with your definition of behavioral design. 'cause that term gets thrown around all over the place. So how do you define it in practical terms? 

Amy Bucher: Yeah, I say it's basically the application of behavioral science frameworks and approaches to design. But then of course it's, well, what is behavioral science?

And I call that the set of disciplines that really look at the influences on people's behavior. So, I'm a psychologist, that's what my training is in. But the team that I've hired at Lirio represents public health; I actually have a few people with kinesiology degrees, because in the exercise sciences, you're looking at fitness and how you get people to do those activities.

There's health geography, sociology, anthropology, behavioral economics, of course. So, all of those disciplines kind of share, at their heart, understanding why people behave the way that they do.

Amy Jo: One of [00:03:00] the issues that confuses a lot of people is the relationship between behavioral science and nudges and intrinsic motivation.

A lot of what we focus on is really grounding what you do in intrinsic motivation, and then using nudges and mechanics and other progression techniques to move people toward what they want to do anyway. Not every app can do that, and not everybody approaches it that way. How do you think about internal motivation and integrate that with your work?

Amy Bucher: I think this is why personalization has been a red thread through my career because internal motivation, it's about aligning a behavior with things that you value or that are part of your identity or that are one of your personal goals. So I'm not talking about a goal, like I'm going to get 10,000 steps today, but maybe a deeper seated goal to be healthy or live a long life for your kids.

If we can help people in the moment [00:04:00] draw a connection between the specific things we're asking them to do and those bigger, more meaningful personal things, then that's where intrinsic motivation comes from. And of course, the gold standard of intrinsic motivation is, it's just enjoyable.

It feels good. You being in the video game space, I'm sure you know all about flow and those experiences that people can have while gaming. I don't find that those are as common in the health behavior world where I live. Sometimes people who get really into exercise may experience that like the runner's high.

But a lot of times, if it's something like taking your medication, we really have to think more about those values, that identity, and those life goals. And where personalization comes in is, I can't impose those things on you. I can't tell you what you value. I can maybe guess, and the more I know you, the better that guess might be.

But it's really about getting you to share that, or getting you to give me enough clues that I can help connect with that. So I sometimes think of my job really as a translator: how do I take this usually clinically decided health behavior? If it's something like a cancer [00:05:00] screening, there is a recommending body that has said it's a good idea for people with these characteristics to get this screening on this timetable.

How do I take that dry, clinical behavioral recommendation, understand something about you, and then find that middle ground, where it's like: you're not getting the screening just to check a box on some kind of healthy-person checklist. You're getting it because it actually supports your vision of yourself as someone who cares for yourself and your family, or because this is going to enable you to live longer than your relatives did who had a particular disease, and you wanna do better in the next generation.

So, I've always been very attracted to digital technologies that allow us to understand something about the individual that maybe isn't self-evident from a database, and then use that to talk to them a little bit differently. 

Amy Jo: You made a post recently that touched on this issue, and I think it's such a great example of the power of personalization when you combine it with nudges.

You made a post about vaccines and autonomy. Yes. Can you [00:06:00] recap that and share the aha insight?

Amy Bucher: Yeah, so in my job at Lirio, our product is called Precision Nudging. And so we send nudges, messages to people around their behaviors, including vaccines. And one of the things that happens when we send a message via text message is that if people reply, we actually can see that it comes into a database.

So we monitor those. We wanna understand what people are experiencing and get their thoughts and their reactions in their own words. And it gives us a lot of insight for improving the product, and improving the messaging as well. But a theme that consistently comes out, both among the people who choose to vaccinate and those who don't, is that they value having the ability to make that choice for themselves.

People do not tend to like to be told what to do. There are circumstances. We use a technique called credible source a lot, which is: if somebody like your family physician recommends something, you're more likely to do it, because you believe that this person is a good authority and you've sort of decided that it's okay to listen to them.

But even in that scenario, you've made a choice that you [00:07:00] are going to let this credible source influence your behavior. So, we've published now two papers on what we've learned from these text message replies: one from COVID-19 outreach in 2021, and one from RSV vaccine outreach in 2023, which, by the way, was a brand new vaccine.

It got approved in June by the FDA for adults in the United States, and we began nudging in August that year. So we were especially interested in those text messages because, this group had never been offered this vaccine before. We did not know what their reactions would be. That autonomy theme was so loud and clear.

I mean, just whether people were upset because they felt like they weren't being given enough autonomy, or pleased because they liked the way that the nudge was presented and they felt like it really was their choice. One of the things that we just did, in the last few months, is some patient research where we were looking specifically at autonomy-supportive messaging. And what is the best way to really reinforce this idea that [00:08:00] you're empowered to do what feels right for you?

And when we nudge around vaccines this fall, that's gonna be a big part of the way that we anchor that messaging. And I think that's gonna be especially important. I mean, the ACIP, the body that makes decisions about which vaccines are recommended, is meeting this week. And there's a lot of turbulence and uncertainty.

We don't know what they will recommend or what will be available to people. I do expect it will look different from prior seasons, but I think, given this atmosphere, it's more important than ever to help people feel like their choice is in their hands, and that they have the support and resources if they do feel like they need to learn more.

And a lot of the replies, they are questions. People really are trying to learn more. So it's about supporting that process for them and not telling them what they have to do. 

Amy Jo: So it sounds like a really good example of personalized nudges. 

Amy Bucher: Yeah, definitely. 

Amy Jo: So, we're gonna talk about the field of behavioral science, which you are an expert in translating into product land, and some of the pitfalls, and [00:09:00] how you can navigate that.

Let's start with nudges. So nudges are small reminders or pushes in some way that are intended to get somebody to take a certain behavior. Right? In a nutshell. Correct. Yes. So for a while, nudges were the solution to everything. Oh my gosh. And then there was a rash of, we tried nudges and it didn't work in this situation, and it didn't work in that situation.

So just like every technique, it's not a magical solution to every problem. Now that you've had so much experience with this, what have you learned about situations where nudges are pretty likely to work, maybe with certain design approaches, and then situations where nudges in general can't really get in there and solve the problem?

How can you help us sort of sort through all of this noise about whether or not nudges actually work? 

Amy Bucher: I love this [00:10:00] question, and I'll tell you all, when I first joined Lirio, I reacted to the name Precision Nudging for exactly that reason, because I was trained in social and organizational psychology, which is really about systems thinking and how context affects behavior.

And nudges are a little antithetical to that, because they don't necessarily, well, I shouldn't say antithetical, but they're such a small part of the picture. I've come around, because I think the term nudging is meaningful in the marketplace. People are familiar with it, and they have some sense when they hear that term that it's gonna be something behavioral, and it's probably gonna come in a subtle little package.

And I think of our nudges as a Trojan horse that bring much more powerful behavioral science inside of it. So in my mind, if we send you a text message about the vaccine, the fact of the text message is the nudge. That's the sort of alert on your phone that lets you know that something's changed in your environment for you to react to.

But what that text message says is conveying deeper behavioral science that's hopefully addressing some of those bigger reasons why you maybe haven't gotten that vaccine yet. Whether it's [00:11:00] giving you education or it's reassuring you about something, or it's letting you know that credible source is recommending this for you.

One of the things that I think is really important for any kind of nudge to work is to consider the system in which it's being put. And when we launch our products at Lirio, we go through this process we call behavioral discovery and design, where we actually sit down with a client and audit whatever action path we're nudging people towards.

So if it's scheduling a cancer screening or walking into a pharmacy for the vaccine, we really wanna understand what we are asking people to do. Some of that's because you have to communicate that request clearly in a nudge if you expect people to follow through. We all know, I mean, this is just a good UX-type design principle.

You have to be very precise and clear about what you ask people to do, because we're a very creative species; we'll figure out some way to do something different otherwise. But what we find oftentimes is that there's something in that action path that is full of friction: it's hard for people to complete.

We have one example, working with a health system client, around scheduling mammograms. They [00:12:00] had a chatbot, and they were actually asking people to click a link in the nudge, go to this chatbot, and schedule their mammogram appointment. And we could see from the digital data that a lot of people were clicking that link, but not very many of them were coming out the other side of the chatbot with an appointment scheduled.

So we did an analysis, and we found that the chatbot flow was kind of confusing. The biggest pitfall was midway through the flow: it would ask people for their insurance information. Most people who started that chatbot flow didn't have their insurance card in hand, and there was nothing up front that prepared them that this might be necessary.

And in fact, it wasn't actually necessary to book the appointment if you were an established patient. But there was nothing indicating that you could skip the question either. So we made some changes to that appointment flow, and we were able to really quickly see a lot more people coming out the other side with that appointment scheduled, and then ultimately getting their mammogram.

So that's an example where the nudge by itself might bring someone to a front door, but without considering the bigger system you're asking them to act in, you're not ultimately gonna get the outcomes that you're looking for. So we try to [00:13:00] always take that wide lens and make sure that we're setting the nudge up for as much success as possible.
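(To make the chatbot example concrete, here is a minimal sketch of the kind of funnel analysis described above: counting how many people survive each step between the nudge and the booked appointment, and looking for the biggest relative drop. The step names and counts are invented for illustration; Lirio's actual analysis and data are not public.)

```python
# Hypothetical funnel: step names and counts are invented for illustration.
funnel = [
    ("nudge_sent",             10_000),
    ("link_clicked",            2_400),
    ("chatbot_started",         2_100),
    ("insurance_step_reached",  1_900),
    ("appointment_booked",        310),
]

prev = funnel[0][1]
for step, count in funnel:
    print(f"{step:<24} {count:>6}  "
          f"{count / funnel[0][1]:6.1%} of sends  "
          f"{count / prev:6.1%} of previous step")
    prev = count

# The largest relative drop (insurance_step_reached -> appointment_booked)
# points at the insurance question as the friction point to redesign.
```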

The behaviors where it works really well are the low-friction behaviors. So, things like opening an email: it doesn't cost people a lot to click open an email, and so a nudge works there. We do this in terms of our subject lines in an email, where we're really good at getting people to have their curiosity piqued and to see relevance right away.

But then once they're inside that email, if we're asking them to schedule a mammogram, that's a bigger thing. They have to think about taking time away from work, potentially getting childcare, transportation, so forth and so on. And so that's where a nudge really isn't enough and we need to accompany it with other approaches.

Amy Jo: That's really amazing. How has AI changed how you're doing this work? I imagine in profound ways. A lot of what we talk about in our community is swapping notes on what we're doing with AI, what we tried: this didn't live up to its promises, this was amazing, but then we [00:14:00] had to edit it.

What are you learning and trying that's accelerating your work?

Amy Bucher: Well, two separate things. So one is, Lirio itself uses AI, right? Which is part of why I wanted to join the company. I was really interested in AI and hadn't had the chance to work hands-on. So we use a type of AI called reinforcement learning, and what it basically does is select what goes into our nudges, and other aspects of those nudges as well.

Is it an email or a text message or an app notification? When does it come? Do we send it or not? We can actually look at what is the likelihood that this is an effective nudge, and if that likelihood falls below a threshold, the AI can basically say, this won't achieve the objective, don't send.

So that's changed my work a lot, just in the sense that I have to think differently about building the product. I have to think differently about the meta-information around our content. We do a lot of tagging to make our content machine-learning readable, so that our reinforcement learning systems can kind of know what's what. So there's that whole piece of [00:15:00] it.
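(For the technically curious, here is a minimal sketch of the send/no-send gating Amy describes, written as a toy epsilon-greedy bandit over channel-and-message "arms." Everything here, the arm names, the threshold, and the strategy, is an illustrative assumption; Lirio's actual Precision Nudging models are proprietary and far more sophisticated.)

```python
# Toy illustration only -- not Lirio's implementation.
import random
from collections import defaultdict

SEND_THRESHOLD = 0.05   # hypothetical cutoff: below this, don't send at all
EPSILON = 0.1           # exploration rate

class NudgeSelector:
    def __init__(self, arms):
        self.arms = arms                    # (channel, message_theme) pairs
        self.successes = defaultdict(int)   # observed conversions per arm
        self.trials = defaultdict(int)      # sends per arm

    def estimate(self, arm):
        # Laplace-smoothed success rate, so untried arms still get explored.
        return (self.successes[arm] + 1) / (self.trials[arm] + 2)

    def choose(self):
        if random.random() < EPSILON:
            arm = random.choice(self.arms)
        else:
            arm = max(self.arms, key=self.estimate)
        if self.estimate(arm) < SEND_THRESHOLD:
            return None                     # predicted ineffective: suppress send
        return arm

    def update(self, arm, converted):
        self.trials[arm] += 1
        self.successes[arm] += int(converted)

selector = NudgeSelector([
    ("email", "credible_source"),
    ("sms", "autonomy_support"),
    ("app_push", "education"),
])
arm = selector.choose()
if arm is not None:
    # ...send the nudge, then observe whether the person followed through...
    selector.update(arm, converted=True)
```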

So there's that whole piece of [00:15:00] it. Then on the personal side, I've been using AI tools myself more and more to try to accelerate my work. We're going through an organizational initiative to improve our efficiency with ai. So we're piloting different tools for different teams, and we do have to be careful about what we use because we work with PHI, so we have to be very careful that we're not exposing any.

Any data or that type of thing to a tool that might be training on it. But like for an example, my team has been using Canva to build some of our visual assets. And this is very recent, but it's really accelerated our ability to prep for client workshops. So we just did a really cool storyboarding workshop with a new client in the interest of figuring out their patient action paths like I just described.

So this is sort of a concierge health package that people can purchase a subscription to. And we were able to use Canva to create these very cool storyboards that we then brought to the client site and had them react to and mark up and put Post-its over. And it was really visually interesting, and I think it just showed a greater degree of preparation [00:16:00] and knowledge than we would've been able to show without using AI. Especially 'cause on my team, most of us are behavioral scientists; we're not people who are good at creating visual assets. So that was fantastic. Just the other day, one of the things I used it for is I took some of those text message replies that I was mentioning, with the vaccines. If somebody responds with an emoji, it gets transformed into these nonsense characters, which as a human being I can't do anything with.

And I just had this thought, I'm like, I wonder if ChatGPT can help me with this. So I extracted just those items from the Excel spreadsheet where they were, threw 'em into ChatGPT and I was like, can you make sense of these nonsense characters at the start of each of these messages? And it was like, yes, this one's a thumbs up.

This one's a heart. So I actually had it put together for me a table of which messages elicited which emoji reactions, and now we are getting this sentiment that we couldn't get before. So that took me, I don't know, 20 minutes with AI. I had to go back and forth with ChatGPT to have it reformat and fix some mistakes as I was figuring out what it was doing.

But it would've taken me a long time to do it [00:17:00] manually. So that was a really cool use and one that I look forward to trying again.
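(Side note for anyone who hits the same emoji problem: those "nonsense characters" are classic mojibake, UTF-8 emoji bytes misread under a single-byte encoding. Here is a minimal non-LLM sketch of the repair, assuming Latin-1-style damage; libraries like ftfy handle the messier real-world cases, and the sample reply text below is invented.)

```python
import unicodedata

def fix_mojibake(text: str) -> str:
    # Reverse UTF-8 bytes that were misread as Latin-1; if the text
    # isn't damaged that way, return it unchanged.
    try:
        return text.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return text

# Simulate the damage: a UTF-8 reply read back with the wrong codec,
# as can happen when replies land in a spreadsheet export.
raw = "👍 Scheduled it, thanks!".encode("utf-8").decode("latin-1")
print(raw)                           # "ð" plus unprintable bytes: the nonsense characters
fixed = fix_mojibake(raw)
print(fixed)                         # 👍 Scheduled it, thanks!
print(unicodedata.name(fixed[0]))    # THUMBS UP SIGN
```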

Amy Jo: That is awesome. Love it, and put a pin in the storyboarding; I wanna come back to that. Sure. Let's talk about the field of behavioral science. Your beloved field. You're trained up, you know it very well. And you don't just know it, you have to translate between the science and product, right? Yeah. And be able to tell what's real and what's bullshit.

In the last few years, there's been quite a hubbub with Francesca Gino and Dan Ariely's work. Yeah, you're familiar with what I'm talking about? Oh, yes. So in a nutshell, for those not familiar: these are two of the most prominent behavioral scientists, and they were very credibly accused of falsifying data.

And furthermore, many of the studies they published are not reproducible in independent [00:18:00] labs. So there's two separate issues there. There's falsifying data, which of course, in an age of AI in research, has just exploded as an issue, right? So that's one issue to address, and the other is reproducibility in behavioral science.

What are your honest thoughts about this as an insider, and how are you navigating this as you do this translation and make sure you're not applying junk science? 

Amy Bucher: Yeah, there is so much to unpack there. So first of all, I should just start by saying I've been really disappointed to see these things unfold.

I have followed the Data Colada blog for years. They were kind of the whistleblower on the Francesca Gino case. They do a lot of going back over published studies, and if the original data sets are available, or if they can obtain them, they'll actually go and try to reproduce analyses.

And they were able to identify a lot of the issues that sort of led to the Gino case blowing up and getting national [00:19:00] attention. And she actually had a lawsuit against them. I contributed to their defense fund, so just to give you a sense of how I felt about all that: I think they're doing the Lord's work, and I'm glad that the suit against them was dismissed.

So they're free to continue doing that. I'm really grateful that there's people in the field who have the skills and the attention to do some of this policing because it is really hard. We have a peer review process in behavioral science that is supposed to safeguard against this, and I've seen some changes to it in the last few years that I think are helping.

So it's now encouraged, and sometimes required, that you provide your data to journals alongside your manuscript, so that can be part of the evaluation process. There is an emphasis on pre-registering studies. Actually, if folks are interested in behavioral nudging, some of the more famous nudging studies are lead-authored by Katie Milkman out of Wharton, at the University of Pennsylvania.

And she's awesome about pre-registering all of her stuff, and her study materials are publicly [00:20:00] available. That's best practice now: you make it as easy as possible for people to see what you're doing, and you're registering your plans ahead of time, so people know that you weren't doing something on the back end to take advantage of what's in the data or something like that.

It's still not, I would say, entirely common practice. And even for folks like us, we do some academic publishing, but we are limited in how much we can disclose, because first of all, we're working with PHI. And so with that vaccine study I mentioned, we were not able to provide that raw data set of vaccine replies to the journal, because it's too easy to read the content and re-identify people.

There are people who will include personal health information, like they'll actually talk about their health conditions, or they might reference their home address or the medications they're taking. And so we felt that the right ethical balance there was to not share the data, even though that now means

the transparency isn't maybe what we would like from the research side. So those steps are helping. They're not curing everything. I [00:21:00] think some of the other issues are that journals still favor publication of positive results. So we have what we call a file drawer problem: if you do a study and the results are not significant, or if they're negative, like your intervention or your test didn't work, it's really hard to get those published.

And there's a couple of journals that have cropped up that specialize in that, but they don't tend to be very highly regarded. So if you are a faculty member at a university trying to get tenure, trying to get promotion, you're not incentivized to publish your null findings in those types of journals; you really have an incentive to try to get very strong study findings.

Which I think is part of the problem. I'm actually finding that there's less of that pressure outside of the academic world. And it hasn't been really typical for folks like me to publish in peer review until more recently, but I think the pressures are a little bit different there, where maybe we have a little bit more freedom. But one of the things that really struck me: there have been a few papers that came out in the last few years where people are basically [00:22:00] asking, do nudges even work?

Because the effect sizes, when you look across studies, end up being really small. And I am not totally dissuaded by that, for two reasons. So one is, I think there's a big case to be made for matching the nudge to the person and situation. So, going back to personalization, a lot of times these studies are just done en masse.

And of course the average result isn't big, but if you look at the people that the nudge really works for, it really works for them. So I think that there's an intermediary there to explore. But the second piece is, I actually think there's something good, given that we have this reproducibility crisis, about seeing some of these small effect sizes, because it tells me that people are maybe publishing on things more honestly.

They're not necessarily only putting out the papers that have these big blockbuster findings. And you asked me how I deal with this as someone who uses this research. I will tell you, I do not cite certain people. You mentioned Ariely and Gino, and of course those are two people that I make an effort not to cite.

And if I see that a technique we wanna try is [00:23:00] coming from their research, we'll put extra scrutiny on it before we use it. And we will actually, sometimes very knowingly, use things that don't have an evidence base behind them, if we have a good hypothesis that it might work. We're in the business of generating knowledge, not just using existing knowledge.

So, I might try to reproduce something that one of them had done if I felt like it was reasonable and not harmful, and maybe we could learn something. But in general, I would shy away from using their work directly. And there's other folks on that list as well. And then, I think, as much as possible, trying to practice ethical research ourselves.

And the last thing I'll mention is peer review. So I try to participate in peer review. It's hard; I get so many requests to review papers, I couldn't possibly do all of them if I wanted to. But I think it's important to give back as someone who's publishing, and so I try to say yes, I don't know, six times a year, say. And I'm also a section editor for JMIR Formative Research.

So I'm in charge of finding people to review some of these articles, and then I have to synthesize their feedback and make a decision: do people revise? Is it [00:24:00] accepted? And I think a drawback to this is, like, I myself am not a strong quantitative person. So when I review an article, I think I give very good feedback on the framing and the interpretation of the results and how we apply this.

But I am not capable of reproducing the statistical methods in most cases and saying, oh, these numbers don't seem right. And as a section editor, I can see that sometimes it's very hard to get the right diversity of viewpoints on a specific paper. So I think that's an issue with peer review, and I don't know what the solution is there. But my own experience doing this has kind of opened my eyes that any one paper might not have been really pressure-tested from every angle.

Amy Jo: Wow. It's a tricky needle to thread. 

Amy Bucher: Totally. Also, when I was an undergraduate at Harvard, there was a faculty member there who was caught up in academic misconduct, and one of her grad students actually blew the whistle on it. The grad student had wanted to do some additional analysis and, I think, either wasn't able to get the raw data, or did get it and something was off. But regardless, this grad student figured it out and went to the university, and there was an investigation, and it [00:25:00] turned out this person had in fact falsified quite a bit of data.

She was let go. I really admire what that grad student did because they set their own career back by several years. Like there was a huge personal cost to that. And so I just think it's worth calling out that in these situations we know about, there were people who raised their hands and took a big risk, and I think they did a huge service to the field.

Amy Jo: That's amazing. I love that you surfaced how much is driven by incentives. Because if you look at the overall system, and the way that academic journals work, and the way that tenure works, it makes complete sense. I gotta tell you, one of the reasons I didn't become an academic after I got my PhD was my first job in a research lab in neuroscience grad school.

I showed my boss a scatterplot of the data that we collected, and he pointed at a [00:26:00] bunch of outliers and he said, throw those out. That's not what the funder wants to see. Yeah. And I had this moment, maybe you've had it too, this moment of, like, me? Throw those out?

But it was so eye-opening. 

Amy Bucher: My moment was not about the data.

I had an advisor, and before we submitted our manuscript, she pulled out, this was long enough ago that it was a hard copy of the journal, and she looked at the editorial board, and she made an effort to cite as many of those people's papers as possible, regardless of whether they were really relevant.

So our lit review was kind of all over the place, and I was just like, this is how we get published, by kissing the right ass? I don't wanna do this. And so, yeah, that's actually not a thing I have paid attention to in my own publications. It's funny, in saying this, I realize I hadn't even thought to do that in recent years. But in grad school, to me, that was just such a negative eye-opener, and I think it was part of the reason why I didn't even pursue academia.

Amy Jo: I feel you. So I hear that [00:27:00] there's an updated Lirio, and you might be able to give us a peek and perhaps a demo. I think we're really eager to hear more about Lirio and what it is, especially after this buildup.

Amy Bucher: I don't have a demo to show. I was a little optimistic there, but we are really evolving right now, which is exciting.

So, we are going global. The storyboarding exercise I mentioned was actually for our first international client, in Costa Rica. So part of the reason that we wanted to invest the time in having something that was so in-depth was because, in addition to learning the client, we're learning some of the cultural nuances of how healthcare is received as well.

We are also really evolving our AI, which has been exciting. So we had a white paper come out last year about what we call the large behavioral model. The idea here is that as we interact with an individual over more types of behaviors, we can start to triangulate on who they are as a person. So, I always use the example: if you're afraid of needles, that will [00:28:00] affect your vaccine behavior, but it won't necessarily affect your mammogram behavior.

So we can't put people in, I don't know, a terrified-healthcare-consumer segment; that's not accurate for them across different situations. And so what we're working on with our technology is essentially to map out the individual, so that as they go from the pharmacy to the doctor's office to the grocery store, we can interact with them in the right way, to really personalize for their needs in that situation.

Amy Jo: Thank you so much. You were just dropping gems. Amy, thank you so much. I learned so much. 

Amy Bucher: Well, thank you for having me. This is such a great community. And I really appreciate everything that you put out there too.

Outro: Thanks for listening to Getting2Alpha with Amy Jo Kim, the show that helps you innovate faster and smarter. Be sure to check out our website, getting2alpha.com. That's getting, the number 2, alpha.com, for more great resources and podcast [00:29:00] episodes.