GradLIFE Podcast

AI @ Illinois: Making AI Legible with Clara Belitz and Ali Zaidi

March 05, 2024 Graduate College (UIUC)

This episode is part of our special GradLIFE series AI at Illinois, where we delve into the impacts of artificial intelligence technologies on our graduate students' research, teaching, and thinking.

On this episode, Bri Lafond (Writing Studies doctoral candidate and Graduate College Career Exploration Fellow) sits down with Clara Belitz (School of Information Sciences) and Ali Zaidi (Computer Science) for a dynamic conversation about how AI has the potential to affect a lot of different aspects of our day-to-day lives. 
______
Show Notes:

GradLIFE Blog | Graduate College, Illinois

AI @ Illinois | GradLIFE Blog

Generative AI Center of Expertise @ Illinois

School of Information Sciences @ Illinois

Clara Belitz @ School of Information Sciences, Illinois

Computer Science @ Illinois

Karrie Karahalios @ Illinois Computer Science

Ranjitha Kumar @ Illinois Computer Science


GradLIFE is a production of the Graduate College at the University of Illinois Urbana-Champaign. For more information, and for anything else related to the Graduate College, visit us at grad.illinois.edu

John Moist:

I'm John Moist and you're listening to the GradLife podcast, where we take a deep dive into topics related to graduate education at the University of Illinois, Urbana-Champaign. A couple of months ago, we surveyed graduate students at Illinois to understand the impacts AI technologies are having on their research, teaching, and thinking. We got a ton of interesting responses, which you can read in our blog. To learn more, we reached out to grad students who are working with and thinking about these emerging technologies. I'm here with Bri Lafond, a doctoral candidate in writing studies and a career exploration fellow here in the Graduate College. Bri, you've been working on this series, AI at Illinois, that's bringing together Illinois scholars for conversations about these big questions. Tell me about the two grad students you interviewed for our first installment.

Bri:

Thanks, John. In this first episode, you'll be hearing my conversation with Clara Belitz and Ali Zaidi. Clara is a doctoral student in information science at the I-School. Her research focuses on algorithmic justice, working to integrate social and technical approaches to issues of equity and computing. She's broadly interested in AI ethics, how we incorporate social identity into predictive systems, and the knowledge assumptions of computing. She researches with both the Human+ Machine Learning Lab and the Community Data Clinic. Ali is a fourth-year doctoral candidate in the Department of Computer Science working with professors Karrie Karahalios and Ranjitha Kumar. His current research focuses on uncovering tensions between clinical personnel and artificial intelligence, as well as how to better integrate AI-based tools into clinical decision-making workflows. Previously, he's worked on user-centric evaluations of smart home routine software and data-driven design of user interfaces. Given Clara and Ali's different disciplinary positions, you can imagine we had a dynamic conversation about how AI has the potential to affect a lot of different aspects of our day-to-day lives.

John Moist:

Let's take a listen.

Bri:

There's this idea of, like, how do we make what AI is legible and understandable to a number of populations? So, like, Clara working in education and also in public advocacy, and then Ali working with patients and doctors in medical settings. Thinking about this kind of larger question of AI and literacy, like, what does that mean to you all? And how might that kind of work be extended? I think we'll go ahead and start with Clara this time.

Clara:

So I think starting off, I love this idea of literacy, because I think we already have some sensibilities of different types of literacy, right? We talk about media literacy, we talk about, you know, early childhood literacy, right? There's all these different types of literacy. And so I really liked that lens of AI literacy, because it's just another type of critical thinking we need to be able to engage in, and I recognize that's not a simple statement. But it does, I think, help us to sort of put AI in the context of human systems we already have. And I keep saying it's a tool. But something I've really been drawing on is these long histories, actually, of technology. The cotton gin was a tool, the printing press was a tool. Mechanized factories are a tool, maybe I should say a technology, it's a little broader than tool. But it is to say, we've actually dealt with all kinds of technological revolutions as people, whether we've dealt with them well or not, I will maybe leave to the historians and the sociologists. It is to say that, while generative AI and AI in general are new technologies to us right now, we have dealt with radical technological change in the past. To bring it back to the factory and labor, I think you brought up jobs, right, and I think people feel very worried that they're going to be replaced by robots or replaced by computers. Artificial intelligence has the potential to create a world with more equity, where we all get access to better life conditions; it also has the potential to concentrate power and wealth in the hands of the few. And so neither of those outcomes is predetermined, neither of those outcomes has to happen. I would rather the equitable outcome happen. But I think it's about knowing that we have the ability to engage with these systems, learn about them, critique them, figure out where they fit, and where they sometimes don't fit. Sometimes they won't fit, and that's really important too. And it's about coming in with a little bit less of an eye for black-and-white thinking and a little bit more of an eye for: this is a really cool opportunity. How do we use it in a way that ideally democratizes and shares the power and wealth that will come from these systems and makes our lives better and easier across the board, rather than letting these systems force people out of their jobs, where then, you know, some people don't have jobs and some people end up very wealthy through the use of AI?

Bri:

Yeah, I think that idea of technological determinism is so important, because a lot of these conversations are about, like, the foregone conclusion that "AI is going to change everything." But we don't really know at this stage. But yeah, Ali, anything to add to that, or to kind of come back to this idea of literacy and AI literacy more broadly?

Ali:

I think there's a couple of different ways you can think about it. In addition to doing stuff with AI, I've done a lot of work with, you know, smart homes and smart devices. And I remember the first paper I ever read, you know, that my advisor gave to me, I think it's from 2006, and there's a quote from it that she always says to me, and it's like, people want to control their lives, not devices. And so I feel you can kind of port that idea a little bit to AI, or, you know, all the different sorts of systems that are out there. In general, like Clara alluded to, these are all tools, but at the same time, people want to use these as vehicles to achieve larger goals. Usually, it's not like I want to just use generative AI for the sake of using generative AI; I want to use it to, you know, learn about something, to synthesize knowledge, to maybe cheat on homework. You know, regardless of what their goals are, I think, specific to some of the stuff that I've been thinking quite a bit about, it's sort of this idea that, with AI literacy, I think sometimes it's like a pendulum. Before, we sort of had black box systems, which is the idea that, as computer scientists or systems builders, we don't want to show people how the sausage is made, so to speak; we don't want to show the insides of how algorithms are functioning, how outputs are even coming to be, you know, all these sorts of different considerations. In my experience, at least, we're sort of fighting against that idea, where, no, it's important to make people understand sort of how a system comes to the conclusions that it makes, and, like, AI explainability is a huge, huge thing. You know, in some of my work, I've also realized that a lot of the people, these populations that these tools are built for, they're not computer scientists, and so to inundate them with a bunch of information that's kind of difficult for them to understand almost seems just as bad as not telling them anything at all. It reminds me sort of of, like, Apple giving you terms and conditions that are, like, 100 pages long. And you're not going to read any of that. And so if, you know, a system gives people 100 different formulas and super detailed explanations of "here are the algorithmic decisions that were made," it's just as ineffective at informing people and making them literate about AI as not telling them anything at all. So this is a pendulum, and we don't want to swing too far to the other side. I think there is a balance, but I also think it's highly contextual.

Bri:

And definitely, there's this idea of opacity and, like, the opaque nature of these systems, where there's the black box, of course, like that they're proprietary systems and trade secrets and all of that, not wanting to reveal these things, but also the sheer volume of information. Humans just aren't capable of processing all of these different inputs and outputs and what we do with that. So I know, Ali, some of your research works on this in terms of thinking about how we kind of explain these systems in a particular context. Could you talk a little bit about that?

Ali:

So I work primarily right now in the AI and health space. But the thing is that health and medicine are super duper contextual. Every specialty within medicine is extremely different. So currently, I'm working a lot in the space of concussion diagnosis. And the reasons for that are, firstly, that the concussion diagnosis state of the art is extremely subjective, where you can go into five different doctors', nurses', or athletic trainers' offices, and they'll use one of maybe 20 different questionnaires and physical examinations. And, you know, there's not a whole lot of consistency. What we're hypothesizing is, hey, can we leverage AI and machine learning techniques to introduce a level of, A, objectivity and, B, consistency to this sort of workflow, because we think that would make the lives of these people a lot easier.

And what we're first doing is what we're calling a workflow analysis: we want to see what the existing landscape of diagnosis even looks like, and to what extent, not necessarily just AI, but computers at all even play a role in the diagnostic workflows of a lot of people. And what we found is that this is actually kind of mixed. For some people, it's still just pen and paper, maybe even just looking at someone. We've interviewed a variety of different people, from physicians in emergency departments and sports medicine to, you know, school nurses. And the school nurses, when we asked them, [they] don't use computers at all. The kid comes in and they look at them and they say, "Okay, go to the doctor, don't go to the doctor, you're good to go." For them, for example, we're skeptical about how AI could actually even fit into their workflows. Because I think you want to apply it as, like, a magic solution to everything. But that's not really the case. Like Clara said, sometimes AI just doesn't fit with what people are trying to do in certain kinds of workflows. And so, you know, for them, that's something we're acknowledging: hey, maybe we need to drastically change how we would design a tool like this for a school nurse versus a neurologist. For them, it's just two different sets of experiences, the types of patients they see, what their workflows look like. And so the idea is that we're trying to flip that and make it more involved. We want to involve the stakeholders, you know, from the beginning, and get their perspectives and understand not only how the systems should look, but, A, also whether we should even go down this route at all, and then, B, what are the sorts of perspectives and obstacles and tensions that are already there that we would have to address within whatever design we would make of these systems? That's sort of a broad overview of some of the stuff that we're kind of thinking about.

Bri:

I kind of wanted to pull Clara in here and to think about this idea of like public advocacy as well. So like, obviously, Ali's working in a specific medical context and with different kinds of practitioners. But like, in doing a little bit more of this public advocacy work, how does that kind of work?

Clara:

One of the things that I've been thinking about a lot is the lens of information access, information justice, and information activism, which are all fields that exist inside of information science. And there's a lot of just big words there that don't necessarily mean something, right? A lot of times people even say, like, what is information science? Isn't everything information? And in some ways, yes. And so part of the question becomes, like Ali got at, what is the right level of organization for people to be able to use it? So there's a couple of examples that I like to use. The first is an example of complete disorganization, right? Let's say you walked into a library, and there were just books in piles everywhere. That's not useful, right? The point of a library is for people to be able to find and look up verified information, right? You sort of also trust the library; you trust that the librarians, that the system of the library itself, is not going to give you something that is complete misinformation, right? Something might be fiction, or it might be a memoir, maybe it's not necessarily verified, but we have a sense of genres too, right? If you read something that was written by a professor, and it's from an academic press, you sort of have an expectation of the level of fact-checking associated with that. Versus if you pick up a memoir, you sort of say, okay, this is one person's story, it's not necessarily indicative of a larger research project, it can still inform my worldview, I might still learn something from this person's experience. It's still super valuable knowledge. But it's a different type of knowledge. And both are useful, but they have different roles. And so it's really important that we have trust in the library, because at a certain point, if we stop trusting that institution, then we won't believe that we can go there and get good information. For example, Google, in the last few years, has started doing a thing where if you ask... you used to not be able to type a question into Google, right? It would just muddy your search results to add too many extraneous words. And now anyone can go in, you don't need to be an expert at Google, and you can just ask even a verbatim question. And you'll often actually even get an answer pulled up right for you. But it'll be highlighted in a paragraph, and then it'll give you a link to the website it came from. If you click on that link, it'll bump you down exactly to where that quotation came from. And then you have the ability as a human to judge: do I think this website is trustworthy? What other information is here? Did this actually answer my question? Because sometimes it doesn't. And you can either keep searching or say, okay, I feel good. Or maybe you ask a really simple question like, "what does the weather look like tomorrow?" And Google responds with the nice little pictographs about the weather, and it tells you that its source is the National Weather Service, and you say, great, I feel pretty confident that tomorrow will be 40 degrees. So you have that ability. That's that literacy, right? You have that literacy to say, what is the verifiable source? How did we get here? Do I feel like I got the answer I needed? And right now with something like ChatGPT, we don't really have any of that verification available to us. But if you said, I want to use ChatGPT to help me write an intro paragraph for my paper, because I've already written the whole thing. 
And I know that writing introductions is so hard, synthesis is one of the hardest things we do as humans, I would love to get ChatGPT to help me synthesize my own work. You're still the expert there too, right? If you read that intro paragraph and you say, "Yeah, that sounds like my voice," or "Yeah, that's the information I was trying to communicate," fantastic use of ChatGPT, go for it. But you're not necessarily expecting that ChatGPT gave you information verification, because you went in knowing what you were doing. There are times, though, that I think ChatGPT sits in that middle ground. You might say to ChatGPT, "what started World War One," and I don't know exactly what ChatGPT will tell you, but it'll probably bring up, you know, the assassination of Archduke Franz Ferdinand. But how many factors were you looking for? What level of information did you want? Are you a 12-year-old writing a report for your first world history class, or are you an undergraduate who was actually supposed to write a five-to-ten-page research paper? There's no sense of that literacy. There's no sort of ability to make the judgment in coordination with the system. And so I think about that a lot: as Ali said, what is that level of information that will allow us to have critical thinking? Because it's not inundating people with every last formula. It's not inundating people with exactly, you know, how each node is weighted at every link, you know, in the backpropagation. And people don't need to know that in order to have a sensibility of whether the information is trustworthy. And so I think that this work of translation is really important. And it does require some amount of expertise to even judge what kinds of information would be usable, without assuming that all users need the same information. Like Ali said, maybe nurses at a school need a different amount of information than clinicians running an ER; those are different scenarios. They both probably would like support in their diagnoses, they would like to be more reliable in helping students with concussions get the medical care they need, but they're in different settings. And so I think a lot of the question then becomes, what is each of those settings? And the advocacy part for me is saying, what do people tell us they need? And I think Ali is also getting at that: it's not making these assumptions about what people need, but rather going to people and saying, what questions do you have? What would allow you to be able to interact with this? What would make you feel like this is trustworthy or not trustworthy? And it's not necessarily one or the other all the time. It's not that ChatGPT is always trustworthy or never trustworthy. It's knowing when it's trustworthy, essentially.

Bri:

Yeah, and the kind of current lack of transparency, ultimately. Because, like, if you did ask ChatGPT to answer the question about World War One and then maybe ask for sources, like, where are they getting that information? You may very well get some false leads there. Like, it may be incorrect information. I did ask ChatGPT the other day if it hallucinates, and it assured me that it does not. I think that has kind of come up as, like, this larger conversation. Like, what actually does it mean for an AI to hallucinate? How do we kind of verify what we know? And I think that speaks to this larger question of literacy. It's not even necessarily knowing exactly how AI works, but knowing how to work with it, to verify it, to question it, to kind of think through these things. In terms of the future of AI, and this could be generative AI or, again, some of the other kinds of work that you all are doing with larger questions about algorithms, what are some of, like, the exciting possibilities that you see? Because we've kind of been talking quite a bit about that healthy skepticism to have, but what are the kinds of things that you are personally excited about, the possibilities that AI can bring?

Ali:

I think first off, though, like, I'll even just say, right now, as part of my research, what I'm doing is, I have 11 or 12, like, transcripts of, like, 30-to-40-minute interviews. That's a lot of data. And I have been going through it, and I'm, you know, analyzing it, but also, it has, like, an AI summary component. And I have one-paragraph summaries of each of them now, giving me a general sense of, hey, what are some of the contents of the interview, if I was to maybe tell someone in a couple sentences what it was like. That's really cool. And it's really helpful. Again, like we said, we have this skepticism, so I'm not just going to take those summaries, copy, paste them all together, and submit it as a research paper, because I don't think that would work. But it is super helpful. And, you know, so that's even in the now. But in the future, I'll say, just me working in some of the medical context, you know, just reading about some of the really cool stuff that is out there and is continuing to come out there: we're able to identify problem spots on, like, you know, scans for cancer, other sorts of things like EKGs, and things like that, being able to identify really potentially troublesome medical issues. And at least, you know, the key here is we're not trying to replace doctors, but just assist them, where if we can make the process even a little bit easier, which I think is kind of the goal, to make things easier for them, then maybe they can see more patients in a day or in a week, or over the long run see more people, and be able to get people the care that they deserve. I think another thing is that we're trying to build these tools such that, you know, if someone is at home, you know, they're 20 minutes away from a hospital, 30 minutes away from a hospital, you know, we're not all so lucky to be right next to Carle, and they have to make a choice, like, "is my condition that bad where I need to go to the doctor, or can I maybe wait until the next day?" or something like that. You know, the idea is, can we give them tools that can help at least make that decision a little bit easier, or give them more information so they can make a more informed decision? For me, what sort of drives me to even get into the space of healthcare at all is sort of twofold, where I want to assist physicians and clinicians in their day-to-day diagnostic or clinical workflows. And the idea is that there's a lot of potential inequalities in care that result from things like proximity to hospitals, like, you know, socioeconomic status, like maybe you only have one car and someone needs to take that car to go to work. And then, you know, how are you supposed to get to the hospital? You know, all these sorts of things. And so the idea is, how can we help leverage technology in such a way that we can make these, you know, health care outcomes slightly more equitable? While I do have, like, this skepticism of AI, I think it will also be at the forefront, you know, in the future, hopefully, of making that dream a little bit more of a reality. Now, there's something to be said about, this will also become very lucrative for people, and when money is involved, you know, blah, blah, blah, but ignoring all that, you know, we can kind of say that there's the potential for that. 
So that's something that sort of drives me, and it's definitely something that I at least think can be made slightly more possible in the future than it is now.

Clara:

I think anytime we can replace an extremely dangerous human task with technology, I am in theory for it. I think we've definitely seen with AI that right now, this is not being done equitably, right? Especially when you think about, going back to, like, one thing I wish people knew: these systems are built on so many layers, right? And so we can talk about the AI itself, but then we also have to talk about, like, the data cleaners and the moderators. We've seen a lot of examples of people in the global south being traumatized by terrible content that they've had to moderate on Facebook, for example. We've similarly seen that mining for rare earth materials can be really dangerous for the people who are actually working in those mines, right? So I think, without flattening that, if and when technologies can replace dangerous conditions, I think that's really fantastic, right? To go back to even the mills, children were being maimed every day in mills, in factories, right? As well as adults. And replacing some of these dangerous tools with less dangerous machines was theoretically a wonderful opportunity to have children have the opportunity to go to school instead of losing fingers. But it also required the labor movement to say, "Great, we are going to make sure that children get to go to school, but we also need to make sure that people are sharing this wealth equitably," essentially. And so I think when I see opportunities like that, that's when I get really excited, when I see, okay, yeah, it'd be really fantastic to see a lot of the things that are dangerous for people being replaced with technology, while being aware of who might actually be under the surface, still participating in those dangerous things. But, you know, for example, in a car, auto braking, lane assist, right, little things like that, that can help with distracted or sleepy drivers, could potentially prevent car accidents, which I think would be really wonderful. It'd be fantastic if we could reduce the number of car accidents in America, right? That is a place where people actually die quite frequently. So I see a lot of opportunities, when AI is basically regulated and governed, to potentially bring a lot of safety and wellbeing to people. But yeah, I know, you said what were we excited about, and I'm still bringing in other words that I'm not as excited about. But I think that there are some really cool, exciting opportunities. I just, I'm always sort of tempering my excitement with what are we assuming about how AI is going to be built, or about how AI is going to be implemented, when we talk about our excitement. So I definitely have excitement, but I always keep the grace still in there, even though I know, I'm sure, that you're going to ask that next.

Bri:

Yeah, these tools can be potentially cool. But, like your point about how AI isn't just AI... AI is the people that are building the servers and governing them. These systems exist on, like, on land, it's not just in a cloud somewhere; it's built from materials that have to be mined in potentially dangerous conditions. AI is not just AI, AI is a system. And it's kind of interconnected, it's collecting labor, right? It's collecting all this information that people have created over time and kind of putting it all in a mishmash, like in a stew, and bringing it out for us. So I think that's an important thing to remember, that there are, like, people underneath these things. So yeah, I kind of wanted to just move into a final question. We've been talking a little bit about the future and what you're currently doing, but in terms of the research that you're performing now and the work that you're trying to do as graduate students, what is it like working with topics that are so on the bleeding edge of what is out there? You know, there are folks who do, like, archival work, and it's not like... a new medieval manuscript might get discovered, but, you know, there's not a new one dropping next week that you need to, like, you know, consider in your research. How do you kind of confront that and deal with researching something that changes moment to moment, week to week? Ali?

Ali:

Thankfully, you know, the research that I do is more under, like, human-computer interaction sort of type things. And so I don't classify myself as a pure ML [machine learning] or even AI researcher, because for them, it's like every month, it seems like, everything is changing. And so, in some ways, I'm thankful that the work that I do is more user-driven, and so by design that takes a little bit longer of a time, while the work that I'm doing, you know, I at least hope it's novel and compelling. You know, that's what we all want. But I think, for me, what it's like is that you're required to always be updated on, like, the related work that's out there. And so, I'm not the most organized person, so I probably have it across, like, five or six spreadsheets, but I have lots of different related works and things in there, and whenever anything comes up, I'll, you know, skim it a little and see, okay, is this something that's super relevant, not that relevant, or somewhere in between, and sort of have it in the back of my mind. Through being, you know, a doctoral student for a few years now, you sort of gain a couple of skills that... I honestly don't know how to teach someone these skills, I really just think it's time spent doing it, where, like, skimming papers and understanding how relevant something is to my own research is something that now I feel relatively confident in being able to do, being able to discern. What I'll say is that it's honestly fun. It's really cool to be able to do user-facing research like this, where you can kind of directly interface with the stakeholders, and you get to see... you're not sort of in this, like, closed room, writing code, and maybe one day someone will use this. At the same time, you have to just work with this technology and see what its actual human impact is. And it's a little scary, because the most real feedback you'll probably get is from a person that... they're not even, like, a computer scientist. My best sets of feedback are when I'll go ask some person that, like, works at a bike shop in town or something about a system that I've built, because, you know, it's tough, because you're in a bubble, but most of the time you're not going to have other, like, doctoral students, or faculty, or even people working in higher education, be the ones working with your system. You're gonna have people that work in other industries or work at other jobs or have varying levels of schooling or education or socioeconomic status. And I think it's, like, irresponsible sometimes to assume that there's a baseline level of knowledge or something that they have to have to use your systems; or, you know, when you're evaluating, you sort of have to understand that not everyone has the same background or the opportunities, like, you know, that you have or anyone else has. So, it's really cool, because you get to see the people and the stakeholders firsthand some of the time. But also, it's kind of nerve-wracking, because the last thing you want to do is be perceived as, like, this elitist "egghead" person coming in with, like, "oh, we're gonna bring in AI to this, you uneducated masses," or something like that, which is the last thing that any of us want. 
And so yeah, it's like a combination of really cool, a little nerve-wracking, but overall fun.

Clara:

Yeah, that definitely all resonates with me. Overall, I would say I love it, because I think it keeps me both curious and cautious, which I think is the perfect place to be as a researcher. I think it really keeps us from assuming that we have all the answers. That is when you start to risk having the academic, ivory-tower inflated ego, where you think, "Oh, I know everything." But we can't possibly know everything, because it's all still happening. So that's a really good reminder for me, but it is also kind of exhausting. I think it's a lot harder to stay in that place of constantly learning than to sort of retreat into, okay, I have my answers. And I also definitely see, for me personally, a relationship to time I've spent doing, like, community activism and organizing, because we use this metaphor of flying the plane as we build it, which I know is similar to a metaphor you have used about building the bridge, or researching the bridge as it's under construction. And I think it has a similar energy to say, we're not going to get it right every time, and that's okay. It's not a reason to stop. I mean, if you thought you were building something really evil, that would be a reason to stop. But as long as you think you're building something that will help people, that has a good intent, and you've also considered the impacts, you know, you've considered users who might not have been included, you are considering, like we've talked about before, who's included and who's excluded, what biases are baked into the system, what we maybe need to remedy as we move forward, while also saying "being afraid of making a mistake is not a reason to stop." I think we have to be open to critique and hearing if and when we make mistakes, and I think it's a very vulnerable position to be in. But I also think it's a really exciting position to be in. And that's, for me, why it's so important to have the relationship back to community, the relationship back to explaining, the relationship back to being an educator and a translator, because I don't think I have all the answers. I just think I am one person working on this. And there are lots of people working on this, both inside and outside of the academy. And they all have a stake in this, and they all have good information to share. And so I see my role as: what can I do to help translate this information? What can I do to get information into the hands of people who maybe will know what to do with it? What can I do to make sure that the things we build are at least intending to be as equitable as possible as we go? So yeah, I think it's both challenging and exciting.

Bri:

Awesome, I just want to thank you both so much for having this conversation today. And I'm excited to see what you all do as you kind of continue. So yeah, thanks. Thank you very much.

Ali:

Thank you so much for having me. It was great to talk and great to meet you, Clara. Great to see you again, Bri, as well.

Clara:

Yeah, thank you, Bri. It's definitely exciting. I don't think I said about 80% of the things I wrote down, so I appreciate having the space to chat. And it's really nice to meet you, Ali; it's definitely exciting to hear about your work.

John Moist:

GradLife is a production of the Graduate College at the University of Illinois. If you want to learn more about the GradLife podcast, blog, newsletter, or anything else Graduate College related, visit us at grad.illinois.edu for more information. Until next time, I'm John Moist, and this has been the GradLife podcast.