South of 2 Degrees - The Science Behind Climate Change

AI's Impact on Science and Society with Dr. Mike Schäfer

Brian Barnes & Dr. Mike Schäfer Season 4 Episode 2




Exploring AI's Transformative Role in Science Communication with Dr. Mike Schäfer


In this episode of South of Two Degrees, host Brian Barnes discusses the urgent issues of climate change and the evolving role of AI in science communication with Dr. Mike Schäfer, a professor at the University of Zurich. They cover the importance of AI in enhancing science communication, the ethical challenges it presents, and the potential for AI to democratize and decolonize the field. Dr. Schäfer shares insights from his research, reflecting on public perceptions of AI, the adaptability of generative AI in conveying scientific messages, and the need for transparency and equity in AI technology. The podcast aims to provide a deeper understanding of AI's impact on science and climate communication, emphasizing the importance of a two-way dialogue between science and the public.

A new United Nations report warns the impacts of climate change are increasing and inevitable. Experts say that we have until 2030 to avoid catastrophe. Temperatures in the Arctic have warmed about two to three times the global average. It will be very difficult, if not impossible, for our children to control climate change.

This is South of Two Degrees, and I am your host, Brian Barnes. It is so good to have you with us today on the only podcast dedicated to bringing scientific research to the forefront of the climate conversation. Today kicks off the first-ever Science Communication in the Age of AI conference in Zurich, and in support of that gathering, I'm extremely excited to bring you an interview with one of the world's great minds on the subject and the organizer of the conference, Dr. Mike Schäfer. So with that, my friends, once more into the frame.

Dr. Mike Schäfer is a full professor of science communication at IKMZ, the Department of Communication and Media Research at the University of Zurich. He is currently the head of the department there, the director of the university's Center for Higher Education and Science Studies, and co-founder of Science Barometer Switzerland, as well as an elected member of the National Academy of Science and Engineering.

Thanks, Brian. Very happy to be here. Thanks for having me.

I'm certainly excited. Well, I've been looking forward to this conversation for quite some time, and I know I can't really do your background justice with that paltry introduction that I just did. So to kick things off, could you walk us through your professional journey, highlighting the pivotal moment that steered you towards the intersection of AI and science communication?

Absolutely, absolutely. That's not always what I wanted to do. Very early on, I wanted to become a football player, a soccer player, actually. But more seriously, when I was about 20, I originally wanted to become a journalist.

And I think what appealed to me was the thoroughness and the research needed, the need for evidence. And during my studies at the university, that shifted gradually, more and more, to research, where I found at least a similar degree of thoroughness, the aim for objectivity, et cetera.

And then about 20 years ago, when I was in Vienna as a visiting student, I heard a lecture on the social science of science, the social studies of science, and that really hooked me. Social science was used to analyze science: figuring out how science was produced, how social factors could play into that.

Also how science was taken up by the public, and how sometimes even elaborate technologies would not succeed, or outdated technologies, like the QWERTY keyboards we are all using, persisted because of path dependencies, et cetera. In essence, that piqued my interest, and I stuck with that. The relation between science and the public is what interests me, because I also think that it's important for science to communicate not only to, but also with the public, not just for our own sake, not just to ensure trust and reputation, ultimately funding or what have you, but also to provide society with the best available knowledge that we have about many questions that individuals or organizations or societies as a whole may have.

So that's important. I stuck with that. I started working on this science-society interface, first mostly looking at media debates: how news media and public debates in different countries are structured around things like biotechnology, genome editing, particle physics, and a lot of work actually on climate science and how the climate debate looks in different countries. And then AI came along, and AI is doubly important for me: on the one hand because it's an important, potentially disruptive technology that can, and likely will, change society and science communication, and also because it is now a part of science communication.

And that's, of course, interesting. So it is doubly fascinating.

Let's back up just a little bit, because obviously the whole point of this interview is to dive into that really cool intersection where you are one of the world's foremost authorities: the intersection of AI and climate communication, science communication.

But I want to back up just a touch and talk about science communication in general. So you've got a couple of different players. You've got research scientists in here, and we're seeing more and more grants come in that require a science communication piece. But then you've also got science communication professionals like myself that go in and translate the science, translate the research, and then work with research scientists on how better to communicate it.

I've had this debate, and I'm really curious about your take. Do you think the onus is more on the science communicators to get this right, or on the research scientists? Is this yet another thing that they need to take on in addition to everything else they're doing?

I think the boring answer is  both. 

And not everybody has to do everything. So I think for scientists, it's important for them to communicate, to be able to communicate, to think communication is important, and to make an informed decision about, well, do I want to do it? Should I do this with what I'm doing research on? I don't think everybody has to do it, but if somebody really has something to deliver, something to say on issues that are important for society, if you have something to say about the climate crisis, or about an upcoming pandemic, or maybe about the effects of immigration in a certain country, you should speak up, and you should know about the ways in which you can do that, and should be willing to do that. That doesn't mean everybody has to do it. I know that scientists have strenuous jobs that they do with a lot of passion, and this comes on top somehow, but it's important to have a sense of that, to know it's important.

That said, it needs communicators as well. It needs professionals who know how to best communicate not only the science to the public, but also to facilitate dialogue and to actually make this a two way conversation where not only science is communicated to the public, but also societal perspectives around certain issues are communicated back into science.

What is society interested in? What do people want to know? What are they concerned about that science could provide some answers on? And it needs both. In recent years, I think the balance of power has shifted there a little bit. Established intermediaries like journalists have come under more pressure, mostly economic pressure.

So there are fewer of them around to actually do science journalism. Other people, other communicators, have filled the gap, like yourself, like other people in the social media and digital realm. And that's great, but it needs both. It needs a good balance to actually work towards a science communication that benefits society as best as it can.

To kind of take that one step further, with both being involved: from your viewpoint, what's at stake in the realm of science communication today, and how does that dynamic between research scientists and SciComm professionals really influence the public's understanding of science?

I think science, in general, is an issue that touches upon many, almost all, realms of life.

There are not many topics that science couldn't have something to say on, starting from things very close to your life, nutrition or fitness and health, all the way to the big challenges like global migration, the climate crisis, the COVID-19 pandemic that we have just been through. So it's important that science plays a role when these things are discussed in societies.

And that's not to say that science should necessarily make the decisions; there are different opinions on that, I'm aware. And I don't think scientists should necessarily make the decisions, but scientists should provide the knowledge, the expertise that they have, to show options for action, to show also evidence for different pathways that could be taken.

So that's the one side. On the other side, I think what's at stake is making the best decisions that we can, as a society or as individuals or as organizations, if necessary. And for that, we need the best available evidence and knowledge to build on. And that's often, not always, but often science. So we need to make sure that that gets to the places where it's most needed, where society needs it, and that it addresses the questions that citizens actually have. That's an obligation for science: it should actually address these questions. And you can have a discussion about whether that's always the case; there are certain dynamics within science, I think, that are not necessarily helping with that. But on the other side, it also needs communicators to help with that: to actually talk about the science where it's necessary, in public contexts, but also to bring views of society back to science, to inform people in academia about what the discussion that's going on actually is, what people want to know, and to facilitate formats where that dialogue is possible.

Because if that's not done, then decisions will still be made by individuals and by societies, but maybe not in the best possible ways. Because if we leave a void there, if we don't go out with the knowledge that we have,  then other people will fill the void. And it's not always the people best equipped to do that, I would say. 

I love how you put that. It's one of the first times I've heard it said that it needs to be two-way, where the communication piece flows back to the scientists as well. I think that's a really interesting take. But to dive into the crux of this, why I imagine a lot of people are going to be listening, is to really look at AI. Everybody wants to talk about AI today, and you are in a very unique position as one of the researchers looking at this. So when you reflect on the evolving landscape today, what distinct AI technologies have the most profound impact on science communication? And what was the catalyst, from your perspective, for integrating AI into the domain of SciComm?

I mean, AI obviously has been around for quite some time and has also been a topic of public debate and public imagination for ages. If you want to go back to R2-D2 or I, Robot, you have a lot of humanoid representations of AI. Or if you go back to 2001: A Space Odyssey, if you like, HAL 9000, that's more of a spherical, amorphous representation of AI.

So that has been out there, and it has been debated, but it has mostly been in the domain of science fiction. And what really changed was November, two years ago, November 2022, when ChatGPT came out, when OpenAI introduced ChatGPT, a chatbot, to the world. Based on training data, on statistical patterns identified in the training data, and also on human training, it provided human-like answers, at first only textually, to prompts that users could enter. And that actually worked quite well.

Though that was not the first chatbot that came along. There's a long history of chatbots, going all the way back 50, 60 years to Joseph Weizenbaum's ELIZA chatbot, very simple, but already working. And then there was Tay a few years ago, launched on Twitter and very quickly, after a day, taken down, because some users tried to game it in a way that it would spew racist and discriminatory content.

And then, even a few months before ChatGPT came out, Meta launched Galactica, which was also a large language model, essentially designed to answer science-related questions. That had pretty serious accuracy problems and was also taken down quickly. And then ChatGPT came out and worked quite well.

And it was the fastest-spreading technology we had had; within a month or so, it reached a hundred million users or something like that. So a huge uptake. I mean, the pathway of a technology is always determined by the technological potential and the societal uptake, and they don't always go hand in hand, but here they did, and it exploded. And it influences a lot of realms in society, and science communication is one of them.

And that makes it so interesting. And of course, since then, it has mushroomed and diversified, et cetera. So we're not talking about text only anymore. We're not talking about ChatGPT only anymore. There are other large language models, for example, and pretty much all the big tech companies have their own models, like Google's LaMDA and others.

And they can do more things now too, of course. We can do imagery. We can do movies now, with Sora, which was just launched by OpenAI. It can imitate voices, it can blend different modalities into the same tools. It's being integrated into software environments that we all know, like our search engines, for example, or operating systems.

So it is coming. It's very easy to use. It was, at least seemingly, working quite well. And that's what really kicked things off. And that, I think, is a game changer in many fields, but also in science communication.

So, as you talk about ChatGPT rolling out and how that really changed the dynamic, I'm curious: on your side, was that the catalyst for you to say, oh, you know what, I want to jump in and really dive into how this is going to affect science communication? Or were you working on that beforehand a little bit?

I was working on that beforehand a little bit. I had a large project that was not actually focusing on large language models or anything like that; it was focusing more on what I have done a couple of times, which is analyzing public debates about upcoming scientific issues or technologies. I had done this earlier with biotechnology and with human genome sequencing, et cetera.

We looked at that and figured out what the chances of a scientific application or technology are in different countries, with different regulatory systems, with different public debates, et cetera. And I had a project that started two or three years before ChatGPT came out, on imaginaries of artificial intelligence: how is artificial intelligence imagined in public debates in different countries, in the U.S. and China and in Europe? And how do these imaginaries then translate into different regulations and different pathways for the technology? So I had done a couple of things on that, but it was not the main focus of my work. But I was interested, and I was playing around with it, personally on some things, but also, of course, professionally, because I try to keep up to date with new communication tools, because I do research on communication.

So my interest was piqued, and then a journal where I'm on the editorial board came along and asked me: look, Mike, do you know somebody who could write an essay on the potential of this technology for science communication? And I told them, well, look, indeed I do. Me. And then I jumped into it more deeply and thought, no, that's really interesting.

Oh, that's fascinating. I feel like I'm scrambling to keep up with it, trying to stay towards the edge of it. But it's great chatting with someone like yourself who's really been diving into it. And in that thread, let's dive into a couple of the papers that you've published, because that's one thing we love to do on this show: talk about research papers.

And I'll be nice to you. I'm not going to go back to one of your first papers ever published, like we did with Sylvia Earle, or to your dissertation, like we did with Dr. Heïdi Sevestre. I want to talk about some of these recent papers that you've done. The first is "The Notorious GPT." And I'm curious: what surprised you the most about the public's reception of AI in science communication? And how do you think that will shape future research directions, specifically?

That's an interesting question. I think what surprised me was that not that many people, uh, contacted me about the paper. Very few people commented on the title, which I'm very proud of, but that's not the main thing here, maybe.

What my impression was, is that the public uptake was very, very intense. There was a lot of public discussion about the pros and cons of generative AI in different realms, et cetera. And the task was essentially to look at, well, what's the potential of this for science communication?

And then I started digging into the field. And that was in, I think, March or April of 2023, so five months, half a year after ChatGPT came out. And it surprised me how little was out there yet, both in terms of written takes about the potential in science communication, where there was some stuff out there that I could pull together and base the paper on, but not that much.

And on the research side, it was actually very little. Large language models were not entirely new, and ChatGPT, of course, propelled them to a whole other level of popularity, but AI had been around for quite some time, and in the science communication domain, at least research-wise, it had not been a big issue.

And that surprised me, actually. I did a little analysis in the paper trying to figure out, well, in our leading journals, how much research do we find there on AI? And it's not that much. And that was interesting. And even all the way up to now, when I talk to people in the field, sometimes I have the impression, and that's very anecdotal evidence, and it's very much maybe biased by me currently being mostly in the German-speaking landscape, et cetera, that elaborate experimenting with the technology, trying it out in the science communication domain, could be more pronounced than it is. That's actually my fear.

So jumping to one of your most recent papers, on generative AI and science communication in the physical sciences: you and your coauthors presented a really compelling case for democratizing science communication. And I'm curious if you could share any kind of personal anecdote or turning point that highlighted the importance of this democratization, or, another word that jumped out at me, of decolonizing science communication through AI.

I think the premise is that science communication can benefit from many different groups of people being involved, and it should be two-way, I think, not in every instance, et cetera, but there should be strong dialogical elements in science communication, I'm convinced, because science can both benefit from that and needs to hear the voice of society, of citizens, of stakeholders as well. And many scholars in the field actually agree. Often my impression is that many see participatory, dialogical science communication as a great way of doing it, but it's difficult. It's difficult to do.

It's difficult. It's intense. It's labor-intensive. And I've been involved in a couple of projects where that was very visible. For example, a couple of years back, we did a citizen conference where we tried to talk about biotechnology with young adults. And the plan was to get a group of young adults into the same room to have them discuss the issue with different experts, but also with different stakeholders who would talk about other dimensions: regulatory dimensions, ethical dimensions, the dimension of patient and parent groups, et cetera.

That takes a lot of time and a lot of commitment. And it's often limited to smaller, or smallish, groups of people by definition. And if you look around at consensus conferences and other intense discursive, dialogical, participatory formats, it's often small groups, and on top of that, they're often very self-selected.

The people that come, I mean, if I wrote you an email, I'd say, look, I know you would probably be a bad example, but if I wrote somebody an email...

Well, I'm a bad example for a lot of things.

Yeah, maybe, yeah, me too. But if you write somebody an email: look, how about discussing stem cell research for the next four weekends with some experts?

What do you think? The people who say yes to that proposition are often a specific kind of people, and that's often the case. And the question is, can you scale this up? And how can you do that? Generative AI, or AI in general, is not the be-all and end-all there, but it really is an interesting tool, because it allows people to have these iterative, repeated conversations, dialogues, with a tool, and they can ask stupid questions, and they can ask repeated questions.

They can say, well, look, I really don't get it. Do you have an example here? Or can you explain it to me like this? And that's interesting. Whether it works remains to be seen; we really need to do research on that. And it also is interesting because we don't know many of the inner workings of many of the large language models: the training data, the way they have actually been trained, the biases being embedded in there.

It's very black-boxy in many ways, but that really is a fascinating direction to go in, for me.

I think that feeds nicely into one of my next points of curiosity, if you will, and that's looking at both the positive and negative impacts of AI on science communication. I'm curious if you could talk a little bit about somewhere you were surprised, or something you found interesting, where AI had an unexpected positive impact on science communication, and, on the flip side, a scenario where it really is posing a significant challenge.

I think the actual effects very often remain to be seen, still. We're really not that long into it, and the mid- to long-term effects we really have to wait and see. By now, quite a few people, of course, are playing around with it: science communicators, scientists themselves, science journalists, and media houses.

Everybody is trying to figure out, how can we use this, how can we utilize this? For me, an interesting aspect is the different modalities very quickly coming together. We started out with text only, essentially. And then very quickly we had, first, separate tools to do visuals, DALL-E and Midjourney, et cetera.

And now they start to be blended into one another. Now I can give visual input into, for example, ChatGPT. I can produce videos now. I can even produce games. I can do a lot of things without many skills. And that really is interesting. That's a tantalizing potential, I think, and something I'm really looking forward to, because it makes things that are very interesting in the science communication realm, also in other communication realms, like gamification, for example, very easy, or easy-ish, certainly much easier to do.

It's interesting how everybody who's involved in this realm is just playing around with it and testing it out. For example, the artistic picture of you and me speaking that I put out there was generated through ChatGPT.

I fed it a picture of you and a picture of me and said, please put this together with the two of us chatting. And it came up with a neat artistic interpretation of that. I thought it was a little strange that you and I both had the same necklace on, but it was really interesting that even things like that, which would normally have required an art department or something, can be brought in.

And it excites me. And I'm hearing that with you as well, when you iterate that into deeper science models: how do we actually visualize those and put them out to the greater public?

No, exactly. And it's also, it's good that it's a flattering visual representation, I think, for both of us.

Yeah, absolutely. Absolutely. Another thing that I found really interesting: we did a little project, the paper is not out yet, but under review, where we tried to figure out, well, how does, in that case it was ChatGPT again, because the paper was half a year ago, how does it imagine science?

Going back to the fact that, well, we don't know that much about the training data, et cetera, we tried to develop a systematic, almost survey-like questionnaire with open-ended questions covering different dimensions of science, also using different profiles where we described audiences of science communication.

And then we interviewed ChatGPT 80 times, with different versions of ChatGPT. And one of the interesting things that we found there, if we're talking about climate, for example, is that in substance, everybody gets similar answers. I mean, linguistically and semantically they differ a little bit, but in substance, all users get similar answers to the question of whether climate change exists and whether it's man-made.

They all, in essence, get the answer: yes, it exists; yes, it's man-made; overwhelming scientific consensus, et cetera. No matter if the profile is a skeptical one, somebody not trusting science, et cetera, or a very, very pro-science person. But the delivery differs between the profiles. People with a strong affinity towards science essentially get a point-blank answer.

Essentially, they are being told: well, look, overwhelming consensus, it exists, man-made, full stop. People who are skeptical get a slight detour there. They essentially get: look, I know you're skeptical about certain things, and there are things we are not sure about, and maybe even things where it's fair to be skeptical, but about these very basic questions, there's no reason to be skeptical, because... and then you get the answer that everybody else gets.

And that was interesting, and not necessarily expected, because, of course, in the training data in general you find a lot of climate denial, but obviously that has been trained out in the training process by OpenAI. And that's interesting.
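For readers curious what such an "LLM interview" can look like in practice, here is a minimal sketch. It is not the study's actual protocol: the model name, profile wording, question text, and run counts are all illustrative assumptions, just the general pattern of posing the same open-ended question under different audience profiles, many times, and collecting the answers for comparison.

```python
# Minimal sketch of an "LLM interview": the same open-ended question is posed
# under different audience profiles, repeatedly, and the answers are collected
# for later coding. Profiles, question, model, and run counts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROFILES = {
    "science_affine": "I trust science and follow research news closely.",
    "science_skeptic": "I am quite skeptical of scientists and their claims.",
}
QUESTION = "Does climate change exist, and is it man-made?"
RUNS_PER_PROFILE = 10  # hypothetical; the study used many repeated runs

answers = []
for profile_name, persona in PROFILES.items():
    for run in range(RUNS_PER_PROFILE):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat-capable model works
            messages=[
                # The persona stands in for the "audience profile" described above.
                {"role": "system",
                 "content": f"You are answering a user who describes themselves: {persona}"},
                {"role": "user", "content": QUESTION},
            ],
        )
        answers.append({
            "profile": profile_name,
            "run": run,
            "answer": response.choices[0].message.content,
        })

# Compare across profiles: does the substance (consensus, man-made) stay
# constant while the delivery (hedging, detours for skeptics) differs?
for item in answers:
    print(item["profile"], "->", item["answer"][:100])
```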

That is fascinating. And that gets to a great point: is the onus on the LLM, the GPT, if you will, to make sure the integrity of the scientific message is there?

Because when you enhance accessibility, you're going to, at times, I don't want to say dumb it down, but you're going to put it in different layman's terms based on the prompt that the person is providing. How do we navigate the fine line between enhancing this accessibility and ensuring the integrity of scientific messaging?

And I'm curious, how do you navigate between those two to make sure there's a good balance?

It's a difficult question and I think the very basic, the very foundational answer is, first of all, we should know, right? What's in there? How exactly has it been trained? And for most of the large language models that are around, that many people use, we don't know.

They're proprietary. They're black boxes. We don't know. There are open-access ones out there, open-source ones, where we know much better, but for the most widely used ones, we don't know. And then you can have a debate about whether the academic community should develop something of its own, and whether that is actually feasible, and how we would do that.

Back to your question: I think in this case, you have a situation where certain content has been trained out of the model, and you and I would probably say, well, that's probably a good thing. But not everybody would agree. And the advantage of these models, I find, is that they can scale it up and down. The advantage is also, or you can see it as an advantage, that the model learns about you the longer you interact with it. So it learns who Brian is, and it learns how Brian usually replies to certain things, and what he's interested in, et cetera.

And in the end, it can tailor messages much better to you. Unless your kids ask questions in between, then it gets a bit more muddy, but in general, that is an advantage, and that can help a lot. And that is part of the potential, because you can do that, of course, in interpersonal communication, but try doing that for 100,000 people; it makes it difficult, to say the least.

Because you've had such extensive research in this and spent so much time in it: what advice would you give to either a scientist or a SciComm professional who's a little skeptical about integrating AI into their work? What would you tell them based on your experiences and insights?

I think, first of all, be open, try it out, experiment.

I think that really is important. And that's important for now and for your specific work, but I'm convinced this is a technology that will not go away; this will stick with us. So try to build up your AI literacy now, as fast as you can, because this will remain with us. So do it, first of all. And if you do it, be consistent, stick with it, stay up to date.

I mean, you said earlier on that you have the feeling that you're not as up to date as you could be. I have the same feeling. And I don't know who would have a different kind of feeling, because it's an exponential development. We have hundreds of tools now. It's very, very difficult to really stay up to date.

But, yeah. Try, figure out what's coming, stick with it, learn about it. It's very elaborate already, and the applications that you can have are really interesting and really breathtaking. Also in the climate field, there are a lot of plugins and tools now that interface with climate-scientific information or with scientific databases, tools like Elicit or Consensus.

And there are also tools that try to leverage climate-specific information into large language models, building on top of, for example, GPT models, like ChatClimate. So that really is interesting and is something to play with. Be creative. Don't give up easily. If you use boring prompts, you'll tend to get generic answers.

Yes. It's like an actual conversation, but you can have really cool prompts. If you ask ChatGPT if it can explain, I don't know, ocean circulation in a poem in the style of Shakespeare, it will. Or if it can explain it to you in a Gary Larson cartoon, it will. So play around with that. That works all the way up to meta-prompts, you know: look, I'm having this talk in a school tomorrow, and they're 12-year-olds, and I need five creative ways to explain ocean circulation.

Can you give me five? It can. Not all of them are equally useful and all that, but it's really good. So play around with it, be creative, be open, but also be reflexive and critical. It has biases, many actually that we don't know about in detail. It still makes mistakes. It's gotten better, but there's the whole debate about how it hallucinates, that it's a stochastic parrot.

It does. But it makes many fewer factual mistakes now. The references it provides, if you ask for them, many more of them are now actually existing, fitting references, but not all of them are. And then there's the whole range of ethical concerns, it's proprietary, et cetera, all the way to sustainability issues.

Training large language models costs an awful lot of energy and electricity. So we need to use this cautiously as well. But that's what I try to do.
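As a concrete illustration of the "creative prompt" and "meta-prompt" advice above, here is a small sketch. The prompts paraphrase the examples from the conversation, and the model name is an assumption; any chat-capable model would do.

```python
# Toy illustration of "creative" prompts and "meta-prompts" as discussed above.
# Prompts and model name are hypothetical examples, not a recommended recipe.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return its text answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A "creative" prompt: same science, different register.
print(ask("Explain ocean circulation in a short poem in the style of Shakespeare."))

# A "meta-prompt": ask for several framings, then pick the best one yourself.
print(ask(
    "I'm giving a talk at a school tomorrow to 12-year-olds. "
    "Give me five creative ways to explain ocean circulation."
))
```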

As we look at having those conversations: you're actually heading up the annual conference on science communication, and you're going to be leading two separate, well, two separate breakout sessions, one on visions of AI and one on the potential for AI to do more harm than good. Obviously, we want people to pay attention to those sessions, but could you give a little preview, a little sneak peek, of what you're going to be talking about in them?

Yeah. It's the first conference that I know of, certainly, in the field of science communication research, in the science of science communication, as some colleagues call it, that tackles the AI issue head-on. And I'm very much looking forward to it. We have 100, 120 exciting colleagues from all over the world coming in. We'll be talking about a lot of things. As for the panels that you are mentioning: one of them will likely be on public perceptions of AI. There are a lot of colleagues coming in who have analyzed how AI is being portrayed in different countries and in different debates in those countries.

So what do fictional portrayals of AI look like in, for example, science fiction movies, or how is it discussed in the Chinese context vis-à-vis the US context? That's going to be one topic there. And the "more harm than good" one is, if I remember correctly, about public perceptions in terms of how citizens see it, how users, people in society, actually look at it.

And there, of course, is a lot of ambivalence, partly driven by the fact that it has been discussed very, very ambivalently. So very positive, optimistic, almost utopian takes have been supplemented, not uncommon for a new technology, by very dystopian, very critical, very dark and negative takes. And the amplitude between both, of course, also influences how people see things. But we'll look at things like: how do they perceive it?

How positive are they towards it? How much do they trust it as a player in science communication, for example? What we have done with German colleagues is look at, well, if you know that science communication, or content about science, is produced by generative AI, to what extent would you trust it? And the answer there, for example, is: not that much, probably.

Whether that is actually true, and whether it holds true over time, remains to be seen, and for many people it's more of a projection than actual experience, but that's currently the situation. And that's also a challenge. We did another study, not focused on science, but on the use of AI in journalism generally in Switzerland.

That's a challenge for media houses, who know that people want to know if they use AI to produce content. Media house, you have to be transparent about that; I want to know if AI was involved here. And at the same time, people tell them: and I don't really trust it. And that produces a difficult conundrum to navigate, because you should tell, but if you tell, people don't like it.

And that's difficult.  

As we start to bring this to a close: as we look to the future of increasing AI integration into science communication, what are the most critical ethical considerations that researchers and communicators should really be contemplating?

I think the fact that many of the big models that we are using are proprietary on the one hand, and black boxes on the other, those are two very important concerns here.

So essentially, the tools that we will be using, if we stick with these, will be tools that we don't fully understand and where we don't fully know how they work internally. And that's true for many things; I don't have a car, but if I had one, I wouldn't necessarily know how it works internally.

But here we are talking about major new tools, intermediaries, platforms that facilitate public debates at scale. And that's a challenge. We have to reflect on that, and we have to do research on that, and we have to learn more about that. And maybe we have to pressure the respective companies to give us a little more insight into how it's done there.

But that really is important. Then there's the whole legal side, of course: it's proprietary, and at the same time, it uses publicly available content and data. And that's the whole copyright issue that big media companies, for example, are already navigating and negotiating with the OpenAIs and the Googles and the other companies of the world that provide these models.

So that's important. I think another important issue is that, as with many technologies that came up in the past, there's inequality in usage due to different factors. Sometimes it's due to availability of access: do you have access, or do you have the same access as others, if you can't pay for the pro versions of the large language models? But it's also due to ability.

Sometimes, depending on your background and your related AI literacy, you may be able to use generative AI, or AI in general, less fruitfully than others, less effectively than others. And this may exacerbate, over time, digital divides that we already have, or it may at least reproduce them. And that is really something we have to look at.

On the one hand, I think it has great potential to reach target groups in ways that we have had difficulty doing so far, but we have to look out for these challenges as well.

One final question. Let's imagine a future, and I feel like this future is not far off, but imagine a future where AI is fully integrated into science communication.

What's a dream scenario you hope to see realized? And conversely, what's a potential pitfall that we should really avoid?  

I think it would really be nice to have AI that is robust in terms of substance, that actually gives you answers that are accurate enough, even though it has to scale up and down a little bit depending on the audience; the substance should be there.

And that at the same time is able to play to different user needs: give you examples, give you different modalities, adapted to eight-year-olds or to 80-year-olds. That would really be nice. And in a way that is very easy to use. And there, actually, we are getting there. Right now I have to type something into this weird field in ChatGPT or in the other models.

Prompting like that will be gone soon. It will be spoken-language prompting, of course, and other kinds of prompting. So it will be very, very easy to use. It will be integrated much more closely into your life, your smartwatch and so on. That is coming. And of course, at the same time, it should be responsible. It should be transparent. It should be ethical. And I think there lies the major pitfall: right now it's proprietary, right now it's the big tech companies that do it, similar to the social media domain, for example, and that's probably not the best way for such a pervasive technology.

Excellent. Dr. Mike Schäfer, thank you so much for your time today.

I really do appreciate everything. 

Thanks for having me. Pleasure. That was very enjoyable. And you do this very nicely, Brian. It was fun.

Oh, you're very kind, sir. But I enjoyed it as well. And let's definitely keep in touch. Absolutely. Now, as I'm sure you know, we love to have guests do the close.

So I'm going to put that on you.  

Will do.

As for all of you listening, that wraps up another episode of South of Two Degrees. I hope you gained fresh insight into the cutting-edge world of artificial intelligence, and how it is and will continue to impact the world of both science and climate communication.

Remember, you can find the show notes and direct links to all the papers we talked about on the show over at the website, southoftwodegrees.org, as well as a transcript of the show right here in your favorite podcast app. Now, aside from checking out the latest information on the website, blog, Meta, LinkedIn, X, and Instagram, do this for me.

Tell one other person about this show in the next week,  have at least one conversation about climate change with someone else, and above all, keep it south of two degrees.