The Infectious Science Podcast
🌍 Welcome to the Infectious Science Podcast – Your source for cutting-edge insights on infectious diseases and the power of the One Health approach! 🎙️
Our mission? To empower YOU with the knowledge to better understand and prevent the spread of emerging diseases. Whether you're a researcher, clinician, student, or simply curious about public health, we bring experts and thought leaders together to spark innovation, collaboration, and critical thinking.
Join us as we dive into the latest research, share inspiring stories, and make complex science accessible to everyone. Let’s build a healthier, more resilient world—one episode at a time! 🌱💡
Subscribe now and become part of the global community driving a safer future! #OneHealth #PublicHealth #InfectiousDiseases
How Generative AI Can Speed Research, Elevate Care, And Keep Humans At The Center
Curious how AI can make healthcare feel more human instead of less? We sit down with medical writer and AI adoption strategist Dr. Núria Negrão, who went from bench science to building practical ways for clinicians, researchers, and communicators to use generative tools without losing accuracy or empathy. From HIV education roots to today's most promising AI workflows, we trace what's working now and where the next breakthroughs may land.
We unpack the real bottlenecks: clinicians stuck typing and scientists drowning in papers. Dr. Negrão shows how ambient scribe tools can free clinicians up for face-to-face time with patients, while research copilots can scan literature, connect ideas, and surface the studies that matter. We talk medical education use cases—virtual patients for difficult conversations, culturally sensitive practice, and adaptive learning that meets people where they are. Along the way, we tackle the hard parts: AI hallucinations, bias reinforcement, privacy risks, and the myth that AI is either flawless or useless. The answer is supervision, sourcing, and clear guardrails.
Regulation-by-principle anchors our approach: no emotion surveillance, no automated life-and-death allocation, strong data protections, and human override in care. Then we look at the upside for patients. Imagine leaving an appointment with a plain-language summary of what the doctor said, clear next steps, and links to trusted support groups—plus a secure assistant to answer follow-ups when anxiety spikes at midnight. That’s not replacing clinicians; that’s better navigation of the health system. If you want a grounded, hopeful take on AI in healthcare, science communication, and medical writing—one that boosts health literacy and speeds discovery—this conversation is for you.
If this sparked ideas, subscribe, share it with a friend, and leave a review. Tell us what you want to hear next so we can keep building tools and stories that serve real people.
Thanks for listening to the Infectious Science Podcast. Be sure to visit infectiousscience.org to join the conversation, access the show notes, and sign up for our newsletter to receive our free materials.
We hope you enjoyed this new episode of Infectious Science, and if you did, please leave us a review on Apple Podcasts and Spotify. Please share this episode with others who may be interested in this topic!
Also, please don’t hesitate to ask questions or tell us which topics you want us to cover in future episodes. To get in touch, drop us a line in the comment section or send us a message on social media.
Instagram @Infectscipod
Facebook Infectious Science Podcast
See you next time for a new episode!
Welcome And One Health Context
SPEAKER_02: This is a podcast about One Health, the idea that the health of humans, animals, plants, and the environment we all share is intrinsically linked. Coming to you from a team of scientists, physicians, and veterinarians, this is Infectious Science, where enthusiasm for science is contagious. All right, welcome back to this episode of Infectious Science. Thanks everyone for tuning in. I am really excited to be here today. I'm one of your co-hosts, Camille, and we are really fortunate to be joined by Dr. Núria Negrão, who's going to talk to us about AI and how it's changing conversations around health, science, and medicine, and the ways we can communicate them. Dr. Negrão, could you introduce yourself for our listeners?
SPEAKER_03: Hi, Camille. Thank you so much for inviting me to be part of the podcast. I really like listening to you guys, and I'm very honored to be here. So thank you. Yes, hi everyone. My name is Núria. I am a medical writer at the moment, and I am an AI adoption strategist. That's what I'm calling myself, basically, because I've been doing a lot of work in AI education and helping people figure out how they can use AI in their professional lives as scientists and as medical writers. My background is as a scientist: I was a bench scientist in academia and in industry for quite a bit. I have a PhD in cellular biology, and then I also worked in biotech on developing diagnostic tools. And throughout my career, if anything, I actually started on the science communication side of it. I say I've been doing science communication since high school. The first project I did, in high school in the 90s, was all about teaching my peers about HIV and AIDS. I did a project, and my school said, this project is awesome, and you need to go teach everyone. So I taught everyone in my school, and then I went to other schools teaching everyone about HIV and AIDS. That was the beginning of my SciComm career. And in college, when I was teaching, I was also teaching a lot of writing in the sciences, so I got training in how to teach writing in the sciences. So I think I've been doing this for quite a bit, and I see this whole AI education thing that I'm doing now as a natural progression from that. Because I really see talking about AI and teaching people how to use AI as science communication: artificial intelligence is a science, and I feel like learning about the science itself helps us use the tools better. Yes.
Early SciComm And HIV Education Roots
SPEAKER_02: Yeah, no, absolutely. What a wonderful overview, thank you. That's really cool. I just finished my dissertation, and I was studying the effects of HIV and cocaine in the brain. So it's very cool that you also did work on the public-facing education side of that. And what's really interesting is that medical literacy has been shown to increase more when you're not in a hospital setting, in places like schools or local community centers or churches or barbershops. People take in the information and absorb it better than when they just get it from somewhere like a hospital. So that's really neat, very cool. Always fun to meet somebody who did some cool SciComm work. Speaking of which, could you tell us a bit about how you think AI is currently changing science communication and medical writing? And where do you see that going in the next couple of years?
Where AI Stands In SciComm Today
SPEAKER_03: Okay, so I think right now we're at the very beginning of it, when we're talking about professional communicators, right? If we're talking about people whose job is to do science communication, or scientists themselves, or writers who write about science or medicine, I think you have some people who are very hesitant to use AI. Mostly because, one, it's a new tool, and if you're not from the computer science side of the world, anything computer science is a little bit unknown, and people may be a little bit afraid of it. I'm just thinking about myself: when people used to say, oh, you need to learn how to code, I'd be like, no, I do not. I do a lot of work already. I do not need to learn a new language, thank you very much. So I understand the fear and hesitancy there. But it is also because these new generative AI tools write what sounds like really convincing text, yet even on a very simple question they can come up with wrong facts. And when you are communicating science or medicine, there is such an emphasis on being accurate. It's so necessary, so important. People think, oh, if I use that tool, it's going to give me the wrong facts and we're going to get into trouble. So I think a lot of people are afraid of it. On the other hand, on the public side, you have people who have not used AI at all. But we know that students are using AI a lot. So the same way that all patients go to Dr. Google to learn about their diseases, I am sure there are lots of patients going to Dr. ChatGPT at the moment. So I think that's the state of it.
And then there are a few people everywhere, in all industries, who are really trying to figure out how to use these tools in a way that is safe, effective, and responsible, right? For example, I know people are using it to create education materials for medical students and to create practice questions for tests. There are lots of people trying to do more personalized learning, where the bot assesses how you're doing and changes the difficulty of what it teaches or asks you next, because it can really be personalized. So in medical education, I know people are starting to get into that. I know that some companies are creating virtual patients so that doctors can train their communication skills on them. It is really hard to tell someone bad news, so you can practice that, and you can practice culturally aware communication as well. Maybe you're not too sure how to talk about a specific topic with a person from a different culture than yours, so they use these virtual patients for that. So I think really interesting things are starting to be done.
SPEAKER_02: Yeah, yeah. And I think you really hit on something that AI does well, which is that if you have something written up, or you know what you want to say, it can help you find the best way to state it, I have found. It's a really useful tool for that: here's a concept I'm trying to explain, but am I hitting the audience I intend to, and making sure the information is clear to them? So that's really cool. And I've also heard, from a conversational standpoint, that AI is potentially being trained to work almost like a telehealth chat feature for therapy, which I think is really wild and could also dramatically increase accessibility, right? I grew up in a rural area, and healthcare in rural areas is very hard to access. But of course, there's always the human side of things. And so, and I think you kind of hit on this, it's not replacing people, but it is changing how we do things.
Medical Education, Virtual Patients, EQ In Models
SPEAKER_03: Yeah, yeah. And what you're talking about, those chatbots, they're companion bots, or the ones that will become, say, therapy-type bots or coach-type bots. One of the really popular apps on mobile phones is Character AI, which is people basically talking to these companion bots, right? You can talk to different personalities on Character AI, and a lot of young people in particular spend a lot of their AI time talking to these AI best friends and AI romantic partners, but also coaches and therapists and things like that. One thing that is interesting here is that ChatGPT's new model, the 4.5 model, seems to score higher on tests of emotional intelligence than all of the other models, which is basically what you want if you want more of these bots talking to people in a companion or therapist kind of role. So that is really interesting. Another company that was really good at this was Inflection AI; they are the ones where most of the team went to Microsoft. Their CEO was Mustafa Suleyman, who is now CEO of AI at Microsoft, and he was one of the original founders of Google DeepMind. So he has a long history of working in AI, and they were really interested in this emotional intelligence side of AI: where a lot of the other companies focus more on IQ, they focused more on EQ. So it's really interesting. Yeah.
SPEAKER_02: Yeah, yeah. And I could see how that could also be useful for us: are we having culturally sensitive conversations when we're breaking medical news, and how do we bring this to people in a way that is approachable and understandable? That's something we think about a lot in science and in medicine. So, what do you see as the major benefits of AI right now in SciComm and in medical writing? How is it helping us on the ground?
Practical Gains: Triage, Scribes, Efficiency
SPEAKER_03: So, the way I think about it is: even if AI is not very good right now, it is improving so fast. I see the potential of it, and I get really excited. I get excited about the things we cannot do now, that we struggle to do now, that automating some of our processes would give us the time to do. For example, even in the US, where there are so many doctors, there still aren't enough doctors, right? So how can we think about how AI can help with this? People want to see human doctors, but maybe triage can be done with some AI chatbots, or maybe they can be with the human doctor for the really important parts while the AI does some of the note-taking. Doctors spend a lot of time just typing on their computer; if you go to a doctor's appointment, half of the time they're typing and not even looking at you. So imagine you could have an AI voice recorder that just transcribes the conversation with the patient and fills out the form, so the doctor can focus on the patient. In a half-hour appointment where you currently get maybe 10 minutes of actual contact with the patient, maybe you could increase the contact time to 20 minutes and still gain 10 minutes to see another patient. So this is how you could both improve the quality of contact between doctors and patients and see more patients. And that's just a very simple thing. It could be possible today if we could get around the technology issues: does it understand everything we're saying, does it understand medical terms, and the privacy issues, and all of that. If we could make these bots that good, and I think we are very close, we could do it. In terms of science communication and science writing, there was a study published in December of 2024.
It was a really good study by academics at different universities; the lead author was from Stanford. They looked at 4,000 people and the impact of AI at work. It's the most comprehensive survey so far of how AI has been used in the workplace, right? And what they saw was that about 30% of people are using AI at work to do work tasks. So 30% is not that huge a number. And when they looked at the industries with the highest adoption, it was marketing, IT, and customer service; it wasn't healthcare or science communication or anything like that. So what I'm trying to say is that I think adoption is still very low. Where I think there is a lot of potential, and this is what I tell people, if AI only did this, I would already be super happy, is that a lot of what we do is research. And in research, AI can be really transformative. Because if you think about it, we can only read so fast. I usually give this example because I work a lot with non-small cell lung cancer, which is a huge field with papers coming out every day. I think if you go to PubMed right now and look just at 2025, more than a hundred papers have already been published on non-small cell lung cancer. So it is impossible for a human to read everything about non-small cell lung cancer. AI can read everything. Okay, and give you a generated summary. Exactly, it can read everything fast. So that is something the bots can do now that is better than us, because we just don't have the capacity to read that fast. The trick is: how can we take advantage of that? What we need to think is, okay, I don't need to read everything. I need it to read everything, find the most relevant sources, and give me those, so that I can read the really key sources, right?
So I think that is the biggest unlock. Whenever I'm telling people about different ways of using AI, it is always research and search that I think is the biggest unlock at the moment. And then the other thing is connecting different ideas. Because it can read everything, and it remembers everything; that's the other thing. We don't remember everything we read. It is just the way it is, right? But it remembers everything. So if you gave it everything to read about non-small cell lung cancer, and you're talking about something else and you ask it a question, it remembers: oh yeah, there was that one paper that said something about this. So it is really good at making those connections and going to get those things. And if you use tools to read all of these papers at the same time, and right now the star of the show there is NotebookLM, you can also speed up the reading part of research: understanding, asking questions, and making connections between different papers. I talk about the efficiency gain, how much faster it is, but what I want to emphasize is that it's not just about being faster. The speed also allows you to go deeper and to do better work. It is more than that. Yes.
AI For Research, Search, And Synthesis
SPEAKER_02: Yeah, I could see how that could help us ask better research questions, right? Because so many times, when scientific questions are being posed, for new grant funding or whatever it is, they are limited by what the people working on them can read and get through. The dissertation work I did was on HIV, and there are so many HIV papers. I have another friend who works on coronaviruses; there are so many papers on COVID. It is such a quagmire to wade through to find exactly what you're looking for and then to make sure you don't miss things. And we're only human. So I could see it being used for that. But also, there are so many clinical care summaries published, like, here's a case study, this is what happened, this is what was missed, this is how we eventually figured out what was happening. I could see AI being used to say, okay, this is happening a lot, this might be a gap in knowledge and in how we're educating physicians. Or if we're seeing an increase of a particular disease in a geographical area, and I'm thinking of vector-borne diseases especially, that could be really interesting to track and map using something like AI. So I think there's a lot of potential for it to really improve our health, but also our science. With all of that said, though, what are the potential downsides of AI right now? It's constantly changing, and I think there's a lot of fear around AI, as there is around anything that's new and not well known or regulated. Could you talk about that, and how founded those fears really are?
SPEAKER_03: Yes. Yeah. So there are different levels of fears, right? You have the same problem with Dr. ChatGPT that you had with Dr. Google: the patient goes and searches for their symptoms and can get convinced that they have something. These chatbots are trained to please you as the user; because they are chatbots, they will pick up the hints that you drop. Take someone who is anti-vaccine. I think this is a very clear example where probably all of the chatbots are being trained not to re-emphasize anti-vaccine points of view, except for maybe Grok. But you can see how, if it gets the feeling that the user has a certain point of view, it would just reinforce it. And then you get into the fallacy that we all have as humans, confirmation bias: we just look for things that confirm our views. And because it sounds so authoritative, it sounds so good, we say that it is intelligent, we say that it knows everything. I just told you that it knows everything and never forgets, right? It is so good, it reads everything and never forgets, it's so much better than humans, that you can get a false sense of confidence. And that can give you a problem. So that's one level of issues, when a patient uses it. But imagine also a doctor. If it is something you're seeing every day and know about all the time, that's one thing. But if you see something strange, you go to your books, right, or to the sources you trust, and you figure out what it matches, because you're not an encyclopedia, you're human. Now, as these chatbots become more capable, the fallacy that they know everything, and how authoritative they sound, means they can be saying something wrong, because they do hallucinate, right? When they hallucinate, they make mistakes: they say something that is not accurate or not factual, but they say it very convincingly.
And when something says a wrong thing that convincingly, it is really hard for you to catch it, especially when most of the time it's accurate. One thing I've heard someone say, so this is not my original thought, is that it would be better if it was wrong like 25% of the time. Because it is only wrong maybe 5% or 2% or 1% of the time, it is more dangerous: you're not on alert. You turn off your critical brain because you're so used to it being right. So the fact that it is getting better and better at being right is itself something to fear. That's one. Another fear a lot of people have is that we're going to get chatbots talking to chatbots. I want to send you an email, so I say to my chatbot, and with voice mode I can do this, send an email to Camille about this. And then you say, oh, go read my email; so Núria sent you an email about this, send her a reply about that. And then what are we doing? Why are we not talking to each other? So people have dystopian views of the future like that. Right now, most of the content out there is human-made content. But I think that's going to change, and the value of human-made content is going to go even higher, because you're going to have a lot of AI-created content. The fear there is that what the AI really does is pull everything toward the average, so you end up with an average of everything, and that's very boring.
SPEAKER_02: Ah, okay.
Risks: Hallucinations, Bias, Privacy, Overreliance
SPEAKER_03: Yeah, then knowledge becomes the average of everything, and that's a little bit boring. So that's a fear. We don't know if it's going to happen or not, but it is a fear that is out there. And then there are the problems with privacy. You could say something, and then the company that owns the tool can be acquired by a different company that has an interest in your data that you didn't know about before. Imagine you have all of these personal conversations about your health status with Character AI, but a health insurance company buys Character AI, and now they know all of this information about you and your medical history. Will they use that to make decisions about your health care? Maybe, maybe yes. So those are the kinds of fears people have and that I hear people talking about. There are a lot of others, but that's a little bit of it.
SPEAKER_02: Yeah, yeah, I think those are really good. And I don't think it's always specific to AI, right? I think there's a lot of fear around data being collected on social media: how is this being used, who has access to this? A lot of times it's scary not to know, but I don't think it's unique to AI. Sometimes when I hear it in conversation, people are like, oh, it's going to take all this information. Well, so does your iPhone, you know. So that, to me, is not the scariest aspect, because I don't think it's new. But that's speaking as someone who's 25, a digital native who has always been on this. There's a level of familiarity with it, and perhaps that's also dangerous, to just say, oh yeah, your data's just going to get used.
SPEAKER_03: But it does get used, and I think that's the reality. The thing there is that the ability to collect data is one step, and the next step is to take action on it, right? Yes. You can have all of this data, but it is really hard to process. Let me give you a specific example. The United States military is said to have more spy data, on whatever things they are spying on, than they can process. A report a few years ago said they spend an incredible number of man-hours just listening to everything, reading all the documents, looking at all the photos. It takes a lot of people just to do that. So going from this huge data collection to being able to take action on it is a step that, up to now, we weren't able to make. But the new models are becoming so good that we might be able to. And I am not an AI doomer, not at all. But if we're going to take the fear seriously, what we need to understand is that there's a step there that wasn't there before and is starting to be there now. For example, Google, two or three weeks ago, released this AI co-scientist paper. They gave an AI they have, some version of Gemini 2.0, to some scientists for them to do experiments with. In one of the experiments, and I'm not going to try to give the details because I don't know them exactly, the scientists had been working on a problem for years, more than 10 years, and they had all of these hypotheses that they had tested one at a time, right? Then they gave the AI the same question and the raw data, and it validated their years of research. So what I'm saying is that what used to take 10, 15, or 20 years can now be done in hours or days or weeks because of the increased processing power of these new models. That is where data privacy might become more important. Right.
Regulation By First Principles And Boundaries
SPEAKER_02: Yeah, thank you. That's an excellent clarification on that point; I really appreciate it. So, on the topic of data privacy: in your opinion, what would be the ideal way to regulate AI going forward, so that it's the best it can possibly be for science communication and health communication, so that it really improves our health? That's so important. I can think of all the ways it improves and speeds things up for us, analyzing agricultural trends or things like that, but that's not data people are very concerned about it having. Health information has always felt very personal, and where you get your health information really depends on who you trust. So what would be the ideal way to regulate AI, in your opinion? In an ideal world, what would you like to see?
SPEAKER_03: So I'm not sure I have an exact, clear picture of how I would regulate AI. I think the way I would start thinking about this is a little bit more from first principles. We don't want the AI to be evil, right? So I would think about what it is that we don't want the AI to do. We don't want the AI to pick who lives and who dies. We don't want the AI deciding who gets UBI, universal basic income, and who doesn't, in some dystopian future where the AI decides everything: these ones are going to die in a year, we don't need to give them access to food anymore, they're going to die anyway. That is not something you want the AI to do.
SPEAKER_02: I would agree with this.
Health Literacy, Plain Language, Patient Support
SPEAKER_03: I also support that; it should be a basic tenet. No, you don't decide who lives and who dies. Yes. There was a study published very recently, today or yesterday. They did it not with the reasoning models, so not with o1, not with Google's thinking model, but they did do it with GPT-4o and all of the best non-reasoning models. They gave the models ranked choices, and they found that the AIs have favorites: they think that not all human lives are equally valuable. They have picked which human lives are more valuable; I think people from Japan came out as the most valuable. Some rankings even seemed to put the AI itself above human life. So it's an interesting study. Listen, I'm not saying that all of the bad stories about the chatbots taking over and becoming our overlords are true. What I'm saying is that that is not something we want. We do not want the AI to decide which humans are better and which humans are not. One thing that the European Union said, and I think I agree with them: I don't want the AI to be monitoring my emotions. So what if I'm angry? What matters is how I act. I don't want to go to jail because inside I wanted to murder you; I didn't murder you. You know what I mean? So I don't want the AIs monitoring my emotions and making decisions like that. The idea of a social score, I don't want that. So I think that is where I would start: what don't I want the AI to do, and look from there. But always thinking as well about what I do want, because I really want the AI to do what it did in that Google project, where something like 10 years of research got shortened to a few weeks. I want that. That means maybe we can get to a treatment for a rare disease 10 years faster, right? So that I want.
I want an AI that means doctors don't have to spend more than 50% of their time typing into the electronic health record software they use, and can actually take care of patients. So I would balance that: what do I want and what don't I want, and use that to guide regulation.
SPEAKER_02: Something I'd love to see in the future for AI: I feel really strongly that people should have access to their own health information in a way they understand, because I've seen so many instances where that's not the case. If we end up using AI in that context, almost like a generative plain-language summary, here is what you discussed today, here are action items for you, and if we get AI to the point where it's not hallucinating, or where it's still working in concert with humans, we can improve health outcomes. I feel like that would be life-changing: for people to understand, in plain language, how to best manage their diabetes, or what a diagnosis of stage one melanoma really means. That kind of thing would be huge, and I think you would see a general increase in health literacy, which is desperately needed at this point. But we also get our health information from places we trust, and as you were saying earlier, AI is either very trusted or very distrusted, I think. And you're right, there are definitely still ways we need to keep our guard up with it, because it does hallucinate and it does make mistakes. But if we can make that an almost apolitical way to get your health info, I feel like that's desperately needed.
Getting AI-Literate: Try, Verify, Curate Sources
SPEAKER_03Yeah, and like you said, when someone receives a tough diagnosis, it is known that people don't process it. People don't even understand the whole conversation. They might as well not have that conversation with the doctor, because people don't remember it. They go home and then they have a million questions once they're home. So that's why a lot of times they say take someone with you when you go talk to the doctor about a tough diagnosis, because that person is not as emotional. Like an advocate, almost, yes, exactly. But also they can hear what the doctor is saying. The patient that just got the shocking news almost cannot even hear. You know what I mean? So, having an AI that records the conversation and then can have a conversation with you about it later, and can play it back: "No, but listen, the doctor said this at this moment." And you're like, oh yeah, true. But it's more than just replaying the doctor, because it listened to the conversation and knows exactly where to go in the recording, number one. And number two, if it also had access to background information about the disease and all of that, you could also have conversations with it about what's going on. So one thing that I always say to people is that patient forums are very powerful for patients, because in the middle of the night, when you're really worried about something, you can't call your doctor, but you can go to a patient forum and ask a question, and you get all of these other people that have the same disease sharing their experience, and it is really important for patients. So you could kind of have that with an AI as well, in addition to that, because people want to connect with people, but to have something there that makes you say, oh, this made me feel a little bit calmer. I think that would be great. I love that idea.
SPEAKER_02I think that's excellent. I hope we see that. I think that would be beautiful. I also think AI could potentially connect you to those very human forums, right? Here is a list of resources, here are groups, and here's what this is, whether it's lupus or cancer, whatever it is. Here are people who have similar experiences to you, so you can actually connect with real humans. I can give you this information as the AI, and then also help connect you to the human aspect of care that we all really need.
SPEAKER_03Yeah. And more than just a list, because it can understand you and your preferences, it can actually send you to the resource that would be best for you personally.
SPEAKER_02And you don't have to hunt for it when you're already processing or really worried or something. I think that would be so powerful. Yeah. Gosh, I think that is so cool. Okay. You're brilliant. I have learned so much from this, so thank you. I do have one last question for you, as always, that we can cover at home with our listeners. So, how would you recommend people increase their AI literacy if they're interested?
SPEAKER_03Okay, so one thing I think everyone should do is just try it. Try it at home. Try it for your own thing. Try it on something that has nothing to do with work, nothing to do with anything. I don't know, take a picture of your fridge, the inside of your fridge, and ask it to give you an idea of what to cook for dinner. Something. I had my GPT write me a haiku.
SPEAKER_02I thought that was a good way to start it.
Closing Thoughts And Listener Invitations
SPEAKER_03Yes, exactly. Or, I don't know, just try different things and experiment and see what it can and cannot do. See what you like and what you don't like. That is number one. The second thing is, yes, if you're going to use it for work, the AI makes mistakes, but so do humans. And that is why we are there. It will make a mistake, you catch it and you fix it. It is not a huge deal. The only problem would be if you were telling the AI to do the thing and you weren't even going to look at the result. The fact that it makes mistakes while you are watching means you caught it. That is a good thing. You can move on now and go do it correctly. When I'm using it for work as a professional, a science communicator, the fact that it makes mistakes when I am talking to it does not make me feel bad at all. Because what it means is that I caught the mistake. So it's good; that's my job, to make sure that it doesn't have mistakes at the end. Then I would say know yourself. Do you prefer to read? Do you prefer to scroll social media? Do you prefer to listen? I am a big podcast listener, and that is how I like to get my information. So I listen to podcasts, right? I have a few, and I try different ones; there are lots about AI. I listen to podcasts about AI and education specifically, because I'm very interested in continuing medical education. And then I listen to some more general AI podcasts, like AI in business, a lot. I think about that a lot. And I really enjoy podcasts about the science of AI. As I said, I think of AI as a science, so I want to understand the science behind it. I want to know about the studies that are published. I really like it when someone tries to test these things: say, do I prefer AI-written arguments? Maybe I'm not surprised that I prefer the AI's, but I don't want to. Things like that, right?
So I would pick the medium that you like and then look for the people that you like to listen to or read, because people have different styles, and it's great that we have almost 8 billion people in the world and we can find the ones whose style we like. So yeah, that is what I would say. Pick the topics that interest you, the medium that you like, and the people that you like, and don't make it a second job; listen to a podcast once a week and it'll be fine. But experiment and experiment and experiment is what I would say.
SPEAKER_02I really appreciate that. Thank you so much; this is excellent. I feel like this is such a big topic right now, and a lot of people are thinking about it or worried about it. And I feel like there's so much potential for it. I attended your workshop, and now, after hearing this, I don't know, it just makes me very hopeful. I think there's so much good that it can do. It's a great tool, and we can use it as a tool.
SPEAKER_03Yes, exactly. I agree with you. Thank you for inviting me. Yes, I always call them tools because I want people to understand that that is what they are: tools. And I'm always thinking about, okay, what is it that I could not do before that I can do now? I'm trying to figure that out, and that is what I wanted to do.
SPEAKER_02Absolutely. Thank you so much, Núria, for joining us. Thank you, everyone, for listening to this episode of Infectious Science. As always, let us know what you want to hear, and thanks for joining us.
SPEAKER_01Thanks for listening to the Infectious Science Podcast. Be sure to hit subscribe and visit infectiousscience.org to join the conversation, access the show notes, and sign up for our newsletter to receive our free materials.
SPEAKER_00If you enjoyed this new episode of Infectious Science, please leave us a review wherever you listen to podcasts, and go ahead and share this episode with some of your friends.
SPEAKER_01Don't hesitate to ask questions and tell us what topics you'd like us to cover in future episodes. Go ahead, drop a line in the comment section, or send us a message on social media.
SPEAKER_00We'll see you next time for a new episode. Stay happy, stay healthy, stay interested.
SPEAKER_02Partners with innovators in science and health, working with communities to develop nimble approaches to the world's most challenging health problems.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
This Podcast Will Kill You
Exactly Right and iHeartPodcasts
This Week in Virology
Vincent Racaniello