The ThinkND Podcast
Evidence Matters, Part 3: Transforming Child Care
Episode Topic: Transforming Child Care
Listen in to a conversation on the current state of child care in the United States and the innovative work being done in local communities to create a thriving sector that equitably values children, families, and workers.
Featured Speakers:
-Chloe Gibbs '00, University of Notre Dame
-Carrie Cihak, King County, Washington
-Jessica Tollenaar Cafferty, Best Starts for Kids
-Justin Doromal, Urban Institute
Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/33ac8c.
This podcast is a part of the ThinkND Series titled Evidence Matters.
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
Welcome and Introduction
Carrie Cihak: Welcome, everybody. On behalf of King County Metro here in the Seattle region and the Wilson Sheehan Lab for Economic Opportunities at Notre Dame, I want to welcome you to the launch of our Evidence Matters series. We're really excited to get this off the ground. My name's Carrie Cihak, and I lead learning and impact strategies and partnerships at King County Metro Transit. I'm really passionate about advancing racial equity by bringing together the expertise of our staff, our research partners, and our community, so I'm really delighted to be in community with all of you here today. I was looking back recently at the original concept note for this series, which was dated February 20th, 2020, and thinking about what a different time that was. We had originally conceived of this as a two-day summit, and, given COVID and everything that's happened in our lives, we've reconceptualized it as a series of virtual events. It's really so great to see so many friends and colleagues from around the country here. Despite the changes brought by COVID, our purpose for these events remains the same. We're working to build skills and knowledge that support a focus on impact and outcomes in the social sector. We want to celebrate some of our successes in evidence-based practice, share our challenges, and engage you in helping us overcome them. And we're really interested in building a community of practice so that we can learn from each other. We're particularly interested in promoting evidence-based practice among transit agencies, and I know many of you joining us here today are from transit agencies. There's a lot more work to be done in this area to focus on mobility as an outcome and as a human right, and we all know that transit and transportation are a key determinant of access to opportunity and to achieving racial equity. So we've got our work to do, and we're excited about it. At the same time, we expect that many events in this series are going to be of interest to a broad range of social sector organizations and research partners. At King County, we're really fortunate to have our friends at the Lab for Economic Opportunities, LEO, as our partner on a broad range of projects. Our relationship dates back about five years, to 2017. Together, we've been exploring the impact of King County innovations in addressing a wide variety of issues like homelessness prevention, criminal justice, emergency rental assistance, and access to transportation, and we're always bringing a strong focus on racial equity into that work. So in this first Evidence Matters event today, we're going to have about a 15-minute presentation on what impact evaluation is, hence the title of our event today. Then we're going to turn it over to a panel of three folks who recently attended a course on impact evaluation from J-PAL and are working in the transit field to advance a focus on impact and outcomes. So you'll hear from them, and we're definitely planning to spend some time letting you ask questions of us. We'll be recording the presentations up until the Q&A, and then we'll turn the recording off and allow you to ask your questions. Throughout, as you're listening to the event, feel free to post questions in the chat just as they come to you. I'll be monitoring the chat and helping our panel lead pull your questions up, and when we get to the Q&A, you can also use the raise hand function and turn on your mic. We want this to be interactive. So with that, I want to introduce Dr. Rachel Fulcher Dawson. Rachel is the Associate Director of Research Operations at LEO, and she comes into that role with a lot of experience in education and early childhood. The thing I love about Rachel is that she's constantly working to bring the policy perspective, the practical side of research, and practice together. And so with that, I will turn it to Rachel.
Rachel Fulcher Dawson: Thanks so much, Carrie, and thanks, everybody, for joining us today. It's wonderful to see so many people interested in this topic that we all feel so passionately about. What I'm going to do is give an overview of what impact evaluation is, who LEO is, what we're about, and how we engage in this work. At LEO, we recognize that around the country, across America, there are service providers like our friends in King County doing great work to take on poverty in all its complexity. Unfortunately, we still know too little about what's working and why. So at LEO, we help service providers apply scientific evaluation methods, that's impact evaluation, to better understand and share effective poverty interventions. So why do we exist? Our why, why LEO was formed about nine years ago, is that there are around 34 million people living in poverty in the United States, and about a trillion dollars is spent every year fighting poverty. Unfortunately, only about 1% of that funding goes to known evidence-based programs, and only about 2% of our amazing service agencies around the country have conducted an impact evaluation. So this is new ground for the partners we work with. How does LEO do this work? One thing our lab takes very seriously is that we put providers, the people on the ground running programs, at the forefront of all that we do. We don't sit on high as researchers and say we know exactly what the answers are or exactly where the research should guide us. Instead, we go and identify the innovators, people around the country who are designing and running programs to really try to improve the lives of our most vulnerable neighbors and friends. We also do that by teaching about impact evaluation, through things like this webinar, where we engage with audiences to help everyone understand what an impact evaluation is and what sets it apart from other kinds of evaluations. And we overlay research design on anti-poverty programs. The way to think about that is that we're not interested in disrupting every single thing about the way a provider runs a program. Instead, we listen and learn how social service providers are running programs, and then we work with them to design ways to integrate data collection and research into the way they're running programs, so that we can really ask and answer questions about who's benefiting from the program and how much the program is impacting them. We're also, of course, based at a university, so we are a learning organization. We learn and iterate alongside our providers, sharing what we're learning, with analysis of data as it comes in, so that our partners know what we're seeing and can use that to think about how they're running programs or how they're reaching out to their clients to engage them in their programming. And then finally, we share our findings and work to scale them. That really just means that as we get results from our studies, we don't want them to be published only in academic journals that maybe five economists read. We also want people who can use that information to improve the way they do programming, or change the way they do programming, or scale their programming,
so that they can utilize it to really help the people who can benefit from that knowledge. And of course, the other part of that is getting the information into the hands of policymakers, the ones making funding decisions about which programs to fund or what kinds of funding to attach to different kinds of programs. And so that's the culminating piece of how our process works.

So why impact evaluation? Understanding impact starts with knowing, and evidence-based solutions provide better insight and actionable steps for fighting poverty. That's why we do the most rigorous kind of evaluation: the more we know, the better our impact can be. That's really what drives us, trying to understand and know the most we can about the impact of programs and give that information back to our partners. Also, more evidence equals more support. One thing we hear consistently from a lot of the providers we work with is that today's policymakers care about impact on people's lives. Being able to prove impact with evidence helps our partners secure additional funding, whether through government or philanthropic sources, to support their poverty-fighting missions. Additionally, third-party validation of impact really helps and gives a lot of credibility where there may be questions of how we really know what's working, or for whom, and many social service leaders and their staffs and supporters welcome this kind of independent confirmation of the impact of their agency's work. It's really critical that an agency that wants to understand its impact and demonstrate it to its stakeholders works with someone like LEO, and others in this space, to create that continuous improvement loop. And this ultimately also attracts future investment. So, randomized controlled trials: that's the primary way we do our impact evaluations. This terminology may sound familiar to you; in the medical field, this is how drug trials and other kinds of medical interventions are tested, to really understand the impact and validate whether something is working or not for the population it's trying to help. Randomized controlled trials, which we abbreviate RCT, help us attribute differences in outcomes to the program and not to other factors. To do that, we have to select a group that's exactly like the group of participants in all ways but one: their exposure to the program being evaluated. On this slide we have apples, so you can think about this as comparing apples to apples, literally, if that gives you a good visual. That really is what we try to find, because comparing an apple to an orange would not give you the same kind of information; we know there's something fundamentally different about those two fruits. If we think about this in terms of people being served in programs, to really understand how a program works to benefit somebody receiving services, we want to understand how it works for them compared to someone very similar to them.

Key Components of Successful Impact Evaluations

There are really four key components of successful impact evaluations, and we start, of course, with a well-defined intervention. What that really means is that the program designed to help a vulnerable population is clearly defined.
It's not just 'we try a little of this, we try a little of that.' It's clearly defined what the program is and what it's designed to do, who is providing the intervention or service, and how much service they're providing, what we in the research world call dosage. Is someone receiving service or contact with a provider once a week, or is it a one-time thing? So, really, all the details that clarify what exactly the program is doing. The counterpart to that, in many ways, is the second key to a successful impact evaluation: meaningful and measurable outcomes. This is different from outputs, which you may hear about in other types of evaluations. With outcomes, we want to think not just about how many people were served by a program, but about how they were better off because of the program. So you can think about things like employment, or income, or educational completion, or, in the instance of our partnership with King County, things like housing stability, homeless shelter entry, transit usage and mobility, and health outcomes: dialing in to the key outcomes the program is meant to improve for the people being served. The third element is ample sample size, which is kind of a tongue twister and a rhyme. To do this kind of evaluation well, we, and the programs, need a decent volume of people who are eligible for the service, so that we can do the kind of analysis we want to do and demonstrate results with statistical significance, which really just means: can we say something for sure about what the program means for the people being served by it? That relates to the fourth component of a successful evaluation: the comparison group. Like in the apples-to-apples analogy from the last slide, it's thinking about whether we have a group of people who are eligible for the same program we're evaluating and who otherwise look the same as the people getting the program. We really need comparable groups, so that we can compare the outcomes of each group and know that the only real difference is that one group got the program and the other did not. So, some pain points. We currently have about 70 different research studies around the country, in I think over 30 different communities, with over a hundred different partners. But we've also talked with probably close to a thousand different providers around the country over the years, and we know that what we're asking providers to do, and working with them to do, is different from how they've run programs, collected information, or enrolled people into programs before. Some of the things we hear are: it seems unethical to put people in a control group when they desperately need a service that you offer; that kind of research doesn't seem right; or, and this is a big question, how can you conduct research that excludes people from services? And then, mostly from the frontline people actually enrolling people into programs: I don't want to be the one to tell them no. That's something we take very seriously, what that feels like and looks like when you're face to face with someone. So, go to the next slide. Thanks.
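To make the comparison-group logic described above concrete, here is a minimal, purely illustrative sketch of a randomized evaluation. None of this is LEO's actual code; the outcome (monthly transit trips) and the effect size are invented for illustration. It shows why, under random assignment, a simple difference in group means can be attributed to the program:

```python
import random
import statistics

random.seed(42)

# Hypothetical: 500 eligible applicants, each with an unobserved baseline
# outcome (e.g., monthly transit trips). Randomization balances these
# characteristics across the two groups on average.
applicants = [random.gauss(10, 3) for _ in range(500)]

# Flip a fair coin for each applicant: True means they get the program.
assignments = [random.random() < 0.5 for _ in applicants]

# Assume (for illustration only) the program adds ~2 trips per month.
TRUE_EFFECT = 2.0
outcomes = [
    base + TRUE_EFFECT if treated else base
    for base, treated in zip(applicants, assignments)
]

treated = [y for y, t in zip(outcomes, assignments) if t]
control = [y for y, t in zip(outcomes, assignments) if not t]

# The impact estimate is simply the difference in group means: because
# assignment was random, the control group is a valid counterfactual.
impact = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated impact: {impact:.2f} trips/month (true effect: {TRUE_EFFECT})")
```

Because the coin flip is unrelated to applicants' characteristics, the two groups look the same on average, and the gap in means recovers the program's effect; a larger sample size shrinks the chance gap between the groups, which is what "ample sample size" buys.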
Rachel Fulcher Dawson: So I can address how we know a randomized trial and impact evaluation to be an ethical way of doing research. For our providers, we know they're serving as many people as they can, but for the most part there are many more people eligible for the programs they're offering who don't get the services. We get a lot of feedback that the way people enroll into programs is kind of first-come, first-served, or there's a waitlist, or some other rule being used to decide who gets the services they need. To that point, randomizing people into getting the program or not is just a different way of prioritizing who gets the program. The same number of people are going to be served by the program as if we were never there, right? So we're not turning people away unnecessarily. In fact, we definitely talk to providers who are interested in this kind of work but who are serving a very small number of people, and, to our knowledge and theirs, they're serving everybody eligible. In those instances, this wouldn't be an appropriate or ethical way to do things, and we would say, okay, we need to think about this a different way. But so many programs we talk to, and that we ultimately work with, actually have a lot of excess demand for their service. They want to help as many people as possible, they want to target the right people, they have scarce staff time, and they want to protect client privacy. Those are the motivations we hear from providers, and we're able to acknowledge all of those things in an impact evaluation. By doing it this way, we also can maintain a comparison group, which is essential to our kind of evaluation. We select people randomly instead of through a waitlist or a first-come, first-served process, and we're then able to fill the treatment and control groups, the apples-to-apples groups to compare to each other and understand the impact of the program. And then, ultimately, in the way we collect data and understand impact, we utilize surveys and measurements that the agencies often are already collecting on the clients they serve. We also work with our provider partners to access the data they're collecting, or work with state or local governments to access the data they collect on everyone in the sample we would be analyzing. So those are the priorities we balance when we work with providers and do impact evaluations. We take very seriously the ethical obligations of providers and of doing research the right way, and this allows us to do the rigorous kind of research that demonstrates impact for vulnerable populations and really helps providers around the country, and the people they're serving, choose and run the most impactful programs.
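As a sketch of the "randomization as a different prioritization rule" point above: when a program is oversubscribed, a lottery serves exactly as many people as a waitlist or first-come, first-served rule would, while the non-selected applicants form the comparison group. The function name and the numbers below are hypothetical:

```python
import random

def lottery_enrollment(eligible_ids, capacity, seed=0):
    """Randomly select who receives the program this cycle.

    The same number of people are served as under a waitlist or
    first-come, first-served rule; the non-selected applicants form
    the comparison (control) group for the evaluation.
    """
    rng = random.Random(seed)  # fixed seed makes the lottery auditable
    shuffled = eligible_ids[:]
    rng.shuffle(shuffled)
    return shuffled[:capacity], shuffled[capacity:]

# Hypothetical: 300 eligible applicants, 100 program slots.
applicants = [f"client-{i:03d}" for i in range(300)]
treatment, control = lottery_enrollment(applicants, capacity=100)
print(len(treatment), "served;", len(control), "in comparison group")
```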
Panel Discussion: Learning and Applying Impact Evaluation

Carrie Cihak: Great. Thank you so much, Rachel. If anyone has questions for Rachel, go ahead and put them in the chat, and we will turn to those at the end of our time together. But right now I want to turn to our panel. This panel of folks is really special to me, because in many ways our panel facilitator, Rohit Naimpally, is kind of a founding partner in this series as well; it was really through J-PAL North America that King County and LEO got connected with each other. So I'm really excited to have Rohit Naimpally as our moderator. He's the Senior Research and Policy Manager at J-PAL North America, and Rohit is always balancing that fine line between realism and hope; he has an amazing way of blending his data and people skills. We've got three panelists who, as I stated, recently participated in a course at J-PAL North America on impact evaluation. Maria is a program manager here at King County Metro on our reduced fares team, and Maria brings a lot of policy, program management, and direct service experience into making our transit services more accessible to everyone in our community. She's always bringing an equity lens to the work, and I particularly appreciate how she's always helping us bring in the context of our customers' lived experience. Lori Mims is also at King County Metro, where she's the lead for our research and innovation work. Her primary focus is customer and market research, which informs the design of many of our Metro services. Lori's a strong champion for our equity and social justice work at Metro and for helping everyone in our organization live our values in our work. So thank you, Lori. And then Juda Santos is a program manager in equity, access, and mobility at the Bay Area Metropolitan Transportation Commission. She oversees the delivery of the Lifeline Transportation Program across a nine-county region, a really huge region. Juda is also a fellow at the Robert Wood Johnson Foundation, which I think really recognizes how Juda is constantly seeing the interconnections among things, such as the interconnection between transportation and health, and how important that is to advancing equity. So with that, Rohit, I'll turn it over to you to kick off our panel.
Rohit Naimpally: Thanks, Carrie, and thank you all for having me and for putting together this terrific panel. My role is mostly to get out of the way of the panel, so I'll try to do that as quickly as possible. Before we begin, I want to introduce J-PAL. We're the Abdul Latif Jameel Poverty Action Lab, a global research center at MIT that works to improve the lives of disadvantaged communities by figuring out which social programs and policies work. So you can see our mission is very closely aligned with LEO's. I sit at the J-PAL North America office, where we focus on partnering with nonprofits and governments in North America to create new evidence on social programs, disseminate evidence to improve policymaking, and train government leaders to be more effective producers and consumers of evidence. All the participants on today's panel recently attended J-PAL's Evaluating Social Programs course, which is a five-day, typically in-person (these days, Zoom) training that provides an in-depth look at how and when randomized evaluations can be used to rigorously measure social impact, methods and considerations for their design and implementation, and how findings can inform evidence-based policies and programs. It's taught by J-PAL-affiliated professors and senior staff, and it's tailored to the needs of researchers, policymakers like yourselves, and practitioners from NGOs, governments, international organizations, and private sector companies from around the world. So if you're interested in learning more about the course and program, please reach out; I'm happy to provide more information. And I think the proof is in what you'll hear from Juda, Lori, and Maria shortly. But without further ado, I'll move on over to the panel. Carrie, were you planning to introduce them first, or shall I go ahead? Okay, great, I'll just launch right into the questions then. I'll start with the same question for all of you, and maybe the order we can go in is alphabetical. So, if I remember my alphabet correctly, it's Juda, Lori, and Maria. What motivated you to learn more about impact evaluation? We'll start with Juda.
Juda Santos: Thanks, Rohit. At the Metropolitan Transportation Commission, we monitor and report on our policies and programs as part of our practice. With our renewed commitment to equity and the implementation of our equity platform, I wanted not only to grow my understanding of the fundamentals of impact evaluation, but also to learn how we could institutionalize the practice of impact evaluation, particularly applying an equity lens not only to our internal work but also to the external work of policy development and project delivery. Secondly, I wanted to learn how other organizations have resourced and funded practices like this, and what metrics others have used that incorporate health outcomes. And then lastly, the relationship between project management performance and impact evaluation, and how they build from each other.
Rohit Naimpally: Great. And, uh, Lori?
Lori Mims: My mute button. Thank you for your patience. My reasons are very similar to what Juda just mentioned, but also personal and professional growth. King County is focusing on equity and social justice and improving the quality of life of residents in the county, so I thought this class would help me understand, or think about, how to go about looking at outcomes. How do we really measure and understand the impact we're having on improving outcomes for residents in the county? It's important that we know how well we're doing so we can shift, make adjustments, or tailor our services to reach our goals and objectives. So those are the main reasons I was interested in participating in this course.
Rohit Naimpally: Thanks, Lori. And Maria, same question for you: what motivated you to learn more about impact evaluation?
Maria: Yeah, thank you for that question. As Carrie mentioned earlier, I get to work on a couple of different pilots and projects here at King County Metro, and through that work I've gotten to work with some really great researchers. But I'll admit that one of my reasons for wanting a better understanding of impact evaluation was that, in my head, I was making a distinction: the research I was seeing was done by researchers with an uppercase R, while I only saw myself as a researcher with a lowercase r. So I personally wanted to feel more confident that I was really understanding the evaluation efforts I was a part of, and more confident in being a liaison between those capital-R researchers and other folks on our team who maybe didn't identify as researchers at all. I've since realized that, while that was my main motivation, the distinction I made in my head between capital-R researchers and lowercase-r researchers wasn't true, and that in reality you don't have to be an academic, uppercase-R researcher in order to get started with doing impact evaluations.
Rohit Naimpally: Thanks, Maria. So I heard professional and personal growth, and melding the two conceptions of researchers. And so now that you've learned all of this and completed your professional and personal journeys in this course, how do you bring that forward and move forward with what you've learned? Maybe we'll start with you again, Juda.
Challenges and Reflections on Impact Evaluation
Juda Santos: In my particular situation, I learned that there are a number of organizational elements that are important for setting the stage prior to rolling out an impact evaluation effort. The first is that there needs to be a strategic priority, linking whatever project or policy you're working on with funding, at the highest level and at the very beginning. Secondly, that there's a structure and process in place to support the equity evaluation initiative. Third, identifying specific strategies and equity metrics early, upfront, and collaboratively. And lastly, having cooperative, collaborative partnerships and championing the evaluation process, not necessarily tied to a particular outcome. So it was important in our situation to have this groundwork in place, setting the stage prior to beginning the initial work of impact evaluation.
Rohit Naimpally: Thanks, Juda. All right, Lori, you know the drill.
Lori Mims: Yes. For me, I feel like I can start where I have direct influence: the research projects I'm currently working on, actually taking what I learned in the class to that work. And then also working with others in the agency to shift our culture in the way we think about research and utilizing research to evaluate our programs, really focusing on bringing research to the table early on in the development of programs and projects so we can identify our goals and objectives and our measures of success. We don't consistently do that across the agency. So I think being deliberate and intentional in sharing information with folks throughout the agency can shift that culture, so we really understand the value this type of research can bring to how we deliver our services and programs and how it can effect change for folks in our community.
Rohit Naimpally: Thanks, Lori. And, last but not least, Maria.
Maria: Yeah. So I realized that if I want to play a role in helping my organization use impact evaluation to influence decision-making, I need to play a role in helping demystify what impact evaluation means and what it actually looks like. I really loved that Rachel got into and talked about the pain points that come up. When I'm a part of those conversations, part of it is to really listen deeply to community, what they're saying, what their feedback is, how we can incorporate it, and at the same time to get the message across that, yes, this might be hard, but part of the learning is how we can then do things that are bigger and better for everyone on a larger scale. Being able to do that, and to have those sometimes difficult conversations across multiple levels of the organization, I feel like I now have a responsibility and an opportunity. But I also want to share back with other folks on my team who may not have as much exposure to what impact evaluation means or what it looks like, so they're at least familiar with the concepts, the purpose, and the goal. And, like Lori mentioned, I want to play a role in building that culture of interest and collaboration, where we're really trying to incorporate best practices for impact evaluation throughout the organization, and to collaborate when all of these research projects are each trying to learn something a little bit different but ultimately are all trying to get at the same understanding: how do we make lives better for the people we're serving? So I'm really excited to continue moving forward in that work.
Rohit Naimpally: Something you said is interesting, and I'd love it if any of you, you or Lori or Juda, want to jump in on this. You mentioned how impact evaluation gets you thinking about the goals of what you do. To what extent have you found that the process of doing an impact evaluation has gotten you to revisit or rethink the goals of the program, or vice versa, where thinking about the program has fed into the goals of the impact evaluation, if you will? This is a question for any of you; I don't mean to put just Maria on the spot because she brought it up.
Maria: I think one of the things some of my research projects forced me to take a step back on is this: we know, for Metro at least, that the intermediate outcome is that we want people to feel like they can access transit more easily, that it's more accessible, that they can ride more. But it's really been more about reflecting on what we expect that to look like once they do have better access to transit. Once their mobility is increased, what do we imagine those downstream outcomes, that real impact, to be? In some conversations it was interesting: the answers were things like, well, they'd have better jobs, they'd be more likely to be employed, they might be able to better access healthcare. And I think in recent conversations we've taken it one step further and asked, what happens after that? And I think it's really that people feel connected, they have a community, they feel ownership of their lives, and they're able to move forward in life in the way that they choose.
Rohit Naimpally: Thanks, Maria. Lori or Juda? I see you just unmuted; go ahead.
Juda Santos: Yes. So here in the Bay Area, we launched the Clipper START means-based transit discount pilot, and during the course of the evaluation we had initially focused on implementation outcomes. However, because of this renewed commitment to equity, we've really focused on the impact outcomes related to access to opportunity, as well as addressing affordability. So we're carrying it a step further, learning from King County, by really thinking through health outcomes and health impact, particularly around wellness: how the transit discount is impacting, for example, employment and access to healthy foods. Going through the course helped us not only enhance our understanding of the fundamentals but also taught us what other agencies are doing related to health outcomes.
Lori Mims: Oh, Rohit, will you repeat the question?
Rohit Naimpally: Oh, sure. I was just asking whether you all could reflect on the ways in which the process of doing the impact evaluation has, if at all, had you revisit the goals of your program, and vice versa, where as you think about the program more, you really rethink what the goals of doing an impact evaluation even are.
Lori Mims: I would just say, for me, it's focusing more on outcomes and actually evaluating whether what we're doing is really helping the community, and making sure that we tailor our services and policy to make those improvements. Focusing more on outcomes, rather than on how satisfied our customers are and assuming that means we're moving forward with our goals. I don't think measuring satisfaction is enough.
Rohit Naimpally: Thanks, Lori. That has me thinking of something else. A question I got not that long ago was: well, how are you sure the activities of J-PAL are actually having an impact? You hear so much about having an impact; how do you know you're actually making an impact? And my knee-jerk reaction was, gosh, are you questioning my value? And I realized that can be a common response, right? If you're running an impact evaluation, or you're thinking, gee, let's figure out whether this program is actually achieving its intended impact, you're almost questioning the work itself. So I've found that can be a challenge. Is that something you all have experienced, or are there other challenges you've found in advancing this focus on impact evaluation that you've had to overcome as part of this process? Again, this is a question for any or all of you.
Lori Mims: For me, I think the challenge lies with time. We're very focused on producing and implementing, and some folks are concerned with the time it takes to actually develop a thoughtful evaluation process, as well as the budget and resources. So I'd identify it as time and resources.
Juda Santos: In my experience, in our evaluation it was important for us not only to report on the results but also to react to them: how are we going to change, or not change? And the timing matters. We recently expanded our Clipper START pilot from one year to two, so we just began our second year, and we've used the results of the evaluation to make some minor tweaks so that we can evaluate certain variables in year two. It was actually refreshing and validating for me that the evaluation was helping us iterate on the program and also evaluate other factors that we wouldn't otherwise have evaluated.
Maria: I think the only thing I'll add is that sometimes there's the question of, well, we already did something like this, why do you need to do this evaluation again? And I think that's where it's important, again, to understand that there could have been an evaluation, but there's a distinction in what impact evaluation means: being able to test what the difference would have been, not just a pre and post, but having that control group and being able to more accurately say that the outcome we're seeing is because of this intervention we did, whatever it is. So that's sometimes a challenge: the distinction isn't very clear, so it seems like you're doing the same thing again, and people ask, why are we doing this, we already know this outcome or this response. And it's like, well, you know the outputs; you don't know the outcomes. People might be riding more, yes, but do we know what happens after that? Do we know the impact it's actually having on people's lives? We don't. So having that distinction, of what takes impact evaluation to that next level and why it's worth doing even if similar projects may have been done, and being able to explain the value of that, can sometimes be a challenge.
Rohit Naimpally: And so, just on that, on the outputs-versus-outcomes distinction: can you speak a little bit to how you're thinking about that challenge? How are you thinking about recording or measuring those outcomes and getting at what happened next, the ultimate outcomes you truly care about?
Maria: I can go, I guess. That's one of the pain points around impact evaluation, usually: having to have a control group and a treatment group, and being able to bring that in. I'm sorry, can you repeat your question again so I can make sure I'm fully answering it?
Rohit Naimpally: Yeah, just how you're getting at those ultimate outcomes that are being measured. Do you have any results, for instance, that have you thinking about those early outcomes? Or, given that it was a challenge, how have you been thinking about overcoming that challenge to get at the ultimate outcomes you care about?
Maria: Yeah. I think, to Lori's point, one of the challenges is that it requires being thoughtful and intentional before you even start. That's a big challenge: having enough lead time to think through, yes, we can measure ORCA boardings, but what happens beyond that? How are we making sure that we're getting that information from both groups, the treatment and the control, before as well as after, and incorporating that from the very beginning? Because once you start a project, if you don't have that impact evaluation design in place from the start, it can be hard to pivot. So before starting any project, I think you have to stop and pause and ask: do I have the time? Do I have the capacity? Is this something that would be worth those efforts? And it's important to know that sometimes a project might be more about understanding feasibility, what would this look like, and it might not always mean an impact evaluation. But if you're really trying to understand the impact, how people's lives are really changing, then you have to have that opportunity to stop and reflect at the beginning.
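A hedged illustration of Maria's point about collecting outcomes from both groups before and after: with baseline and follow-up measures (here, hypothetical ORCA boarding counts; the records are invented), a simple difference-in-differences compares each group's change over time. In a randomized evaluation the follow-up difference in means alone would also do; the baseline data mainly adds precision and a check that the groups started out comparable:

```python
import statistics

# Hypothetical records: (group, baseline_boardings, followup_boardings).
# In practice these would come from agency data collected for BOTH
# groups before and after the intervention.
records = [
    ("treatment", 8, 14), ("treatment", 12, 19), ("treatment", 5, 9),
    ("control", 9, 10), ("control", 11, 12), ("control", 6, 6),
]

def group_mean(group, index):
    """Mean of the baseline (index=1) or follow-up (index=2) measure."""
    return statistics.mean(r[index] for r in records if r[0] == group)

# Difference-in-differences: change in the treatment group minus
# change in the control group over the same period.
did = (
    (group_mean("treatment", 2) - group_mean("treatment", 1))
    - (group_mean("control", 2) - group_mean("control", 1))
)
print(f"Diff-in-diff estimate: {did:.2f} boardings")
```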
Carrie Cihak: Rohit, I can jump in here a little bit, too, on the question of whether we have any early results and what we're seeing in terms of outcomes. In our work with LEO, as well as with a researcher who's here with us on the call, Matt Freedman from the University of California, Irvine, we actually tested the effects of giving people a fully subsidized transit pass, a pass that would allow them to ride transit at no cost, compared to our regular discounted fare program. We were able to generate results pretty quickly, and what we found was that people more than double their use of transit if they're offered a fare-free pass as compared to a discounted pass. We're also now starting to find some of the other outcomes, how that has changed their lives in other ways. For example, we're finding that people have improved their financial stability: as we've seen data on credit scores, people's credit scores have actually improved, and it seems to be due to having that free transit pass, because we did it with this randomized controlled trial research design. So it's really exciting, and that work we did early on has now influenced the rest of our research program and how we're actually rolling out a fare-free pass.
Impact Evaluation Madlibs and Closing Remarks

Rohit Naimpally: Thanks, Carrie. It's always nice to hear early results, where you sort of see the end of the long impact evaluation tunnel. And I see Matt is on the call as well; thanks for joining, Matt. So, we're going to play a little game of impact evaluation Mad Libs, if you will. I'm going to give each of our panelists the start of a sentence, and I'd like you to complete it. It doesn't have to be completed in just a sentence, so feel free to elaborate as much as you'd like. We've been starting with Juda, so I'm going to mix it up a bit and start with Lori instead. Lori, if you could complete the sentence, please: impact evaluation is important in our work because... Oh, you might still be muted, Lori, sorry. Thank you.
Lori Mims: It helps us understand how well we're doing at effecting change. Are we investing our resources wisely so that we can accomplish our goals and objectives? I think that's why it's important: to make sure that we provide the proper services, that we're responsible with our money and investments, and that we actually create the changes that we're seeking.
Rohit Naimpally: Thanks, Lori. We have a different sentence for each of you, so that makes my Mad Libs a little more exciting. We'll transition to you, Juda: your biggest takeaway is...
Juda Santos: Impact equals factual minus counterfactual. It's like it's tattooed in your brain: the impact of the program is the difference between the factual and the counterfactual.
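In the notation typically used for this idea (a sketch, not taken from the course materials): for an individual $i$ with potential outcomes $Y_i(1)$ with the program and $Y_i(0)$ without it,

```latex
\[
\text{impact}_i = \underbrace{Y_i(1)}_{\text{factual}} \;-\; \underbrace{Y_i(0)}_{\text{counterfactual}}
\]
% We never observe both potential outcomes for the same person, so a
% randomized evaluation estimates the *average* impact by comparing group means:
\[
\widehat{\tau} = \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}
\]
```

Randomization is what makes $\bar{Y}_{\text{control}}$ a valid stand-in for the missing counterfactual.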
Rohit Naimpally: That warms the cockles of my J-PAL heart, Juda. I don't know if any of you have ever seen the old Saturday Night Live skits with, I think it was Dan Aykroyd as, uh, Father Guido Sarducci, where he had to sum up the knowledge of any one field in one sentence. So, economics: supply and demand. And impact evaluation: knowing the counterfactual. Perfect, I think you would ace the course. And so, Maria, we'll end with you. The best advice you have for others considering impact evaluation is...
Maria: Remember that it's more than just a randomized controlled trial. Rachel mentioned this earlier, but I think it's worth repeating: impact evaluation is still guided by strong ethical principles around obtaining genuine and clear consent, doing no harm, minimizing risk, and making sure that those participating in research are also reaping the benefits and not just shouldering the burden. Impact evaluation can, and probably should, be done alongside strong partnerships with the people you're engaging in these research projects with, and alongside any other practices that can help strengthen those evaluation projects.
Rohit Naimpally: Wonderful. Thanks, Maria.