
The Conversations
Wiley’s interview series features bright minds and leading experts from the world of academic publishing. The Conversations is all about sparking lively discussions on thought-provoking subjects, challenging the status quo, and embracing bold perspectives. Together with our guests, we dive into subjects shaping the future of scholarly communications. Don't miss out on expert insights.
Into the AI Age | The Role of AI in Research & Learning
In Episode 3, we speak with Dorothea Baur, founder and owner of Baur Consulting and an expert in the field of responsibility, sustainability, and ethics; Olivia Gambelin, founder of Ethical Intelligence, a group of Responsible AI and ethics consultants; and Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro. From AI in research creation, to AI in learning for career improvement, to the complex nature of ethics in AI, tune in to hear new perspectives and learn more about this emerging technology.
🔔 Subscribe to stay updated as we release new episodes on the latest trends in academic publishing and the future of scholarly communication.
Hi everyone, and welcome to The Conversations, a show brought to you by Wiley, a global leader in research and education publishing.
Jay Flynn:This series is about exploring the biggest opportunities in the world of academic publishing. It's about asking tough questions and getting into meaningful debate about where our industry needs to go. And where better to have that discussion than at the Frankfurt Book Fair, the annual event that brings the global publishing community together for the largest book fair in the world.
Jay Flynn:At today's conversation from Frankfurt, we have with us Olivia Gambelin, founder of Ethical Intelligence, a responsible AI and ethics consulting group; Dr. Dorothea Baur, an independent ethics expert; and Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro. Now, let's start the conversation.
Jay Flynn:Hi, everybody. I'm Jay Flynn. I lead Wiley's research and education businesses, and I want to welcome everybody to a very special edition of the conversations here from the Frankfurt Book Fair. It's really a pleasure to be on stage with an all-star panel. I have had a chance in the green room today to overhear some of the most fascinating conversations. And it's just a real honor to get a chance to moderate a discussion about something so important. So AI is on the minds of publishers. It's on the minds of the general public. It's probably the biggest issue that we've talked about at the Book Fair all week long.
Jay Flynn:And so without further ado, I just want to welcome the panelists. So to my right, Dr. Dorothea Baur. She's an independent ethics expert in AI. I've got to my very left Ivana Bartoletti. She's the Global Chief Privacy and AI Governance Officer at Wipro. And Olivia Gambelin, founder of Ethical Intelligence, a responsible AI and ethics consulting group. I thought I'd start, Dorothea, by asking you my first question, which is: what's AI ethics?
Dr Dorothea Baur:Thank you. Very easy start, right? Well, for me, AI ethics means answering the question of how we as humans, or how organizations, can handle AI responsibly. So that's the broad definition for me.
Jay Flynn:Okay. It's a big issue for publishers. We worry about being disrupted. We worry about training models correctly. But with ethics, we also think a lot about safety. Does that resonate with either of you two?
Olivia Gambelin:So I break it down between responsible AI and AI ethics as two different things. For me, responsible AI is all about business practices. It's like being a responsible person, versus when you're talking about AI ethics, you're talking about specific ethical principles. So in this case, safety can be one of those. I always use the example of emptying the dishwasher to explain the difference, which I know sounds kind of funny here, but bear with me.
Jay Flynn:You had me at dishwasher. Go ahead.
Olivia Gambelin:Okay, we're good. You empty the dishwasher as a grown adult. That's a responsible action. There's nothing ethical about that, right? But you would say you're being a responsible adult if you're emptying it.
Olivia Gambelin:Now, let's say instead you come home from work one day, and you trade off with your partner whether or not you are emptying the dishwasher. And it's your partner's turn to empty the dishwasher. But you see that they're very stressed. And so out of compassion, the value of compassion, you empty the dishwasher for your partner. You're actually using an ethical value there. That is an ethical action versus just being responsible. So when we're talking about responsible AI, we're looking at, are you just doing good business practices? Versus with AI ethics, we're looking at, are you in alignment with these clear, important ethical values?
Jay Flynn:So... How do you apply those values to innovation? How do you apply those values to product development? How's Wipro thinking about this? How do you think about that?
Ivana Bartoletti:Yeah, so I think that over the last few years, it's been interesting, because we have seen some really amazing things coming out of AI. If you think about healthcare, for example, some really good things, and it's really important to recognize that. So AI has brought and is bringing amazing stuff that I really want to cherish, and I think it is important for us to do so.
Ivana Bartoletti:But at the same time, I think people have seen, on the dark side, some not so positive things that have happened. So I think it's really important for organizations to say, I want to stay true to what I believe in. And the way that I develop and deploy these technologies, and the way I use them, speaks to my values as an organization. And this is really important, because sometimes, I feel, we overcomplicate things.
Ivana Bartoletti:AI does not exist in isolation. There is law. We have human rights law. We have non-discrimination law. We have cybersecurity requirements. We have privacy legislation. They all apply to AI. So to me, what is important as a company is to say, I am going to stay true to my values and I'm going to demonstrate it and be accountable for it, so that my customers, my clients, they know, and they can choose me as an organization because I am doing the right thing. Yes.
Jay Flynn:Values. Is that something that your clients are now coming to you to talk about? Is that where you begin your discussion?
Dr Dorothea Baur:Exactly. I mean, I could repeat word for word what Ivana said, but it's really, it's like you have a mission. You know what drives your business. You know who you want to be seen as and who your stakeholders are. And you have your values defined, and either AI contributes to those values or it doesn't. And there's no need to just jump on the bandwagon because something is being promised to you, and to integrate it and potentially just disrupt your corporate culture and confuse your customers and other business partners. So it is really about how AI aligns with our existing core values. They must not be overthrown just because there is something new on the market. So AI needs to be subordinated to your corporate culture and not imposed on it.
Jay Flynn:Isn't it really exciting, though, to issue a press release that has the word AI in it?
Olivia Gambelin:Well, when AI is everywhere, it's nowhere. Everyone has AI. Even the grocery stores are claiming, hey, we've got AI, so people are becoming numb to the word AI. I can sit here, I've said AI, I feel, like every other word already. At some point, you're going to start tuning me out and go, okay, done with the AI, or at least, everything's AI, fine, we get it, yada, yada. So those press releases, if everyone's doing AI, great, you sound like everyone else.
Jay Flynn:So what's the distinguishing characteristic? Is it the application of a value system? Is it the application of the value system in innovative ways? Is it the reinforcement of the mission? There's a room full of people who want to stand out from the crowd here. How do they do that? And I just wanted to say that you're right about AI. We say AI a lot, and I think a lot of that is because artificial intelligence is a very cool, very cool term.
Ivana Bartoletti:Yeah, it's very cool, isn't it? But it's really false, because artificial intelligence is neither artificial nor intelligent. That's the problem. We've got to realize that. So we are falling for a terminology that is really cool, very clever from a marketing standpoint. But in reality, what we are talking about is really the ability to automate some processes, right? We know we could go far and there will be a lot of development. But what is really important, in my view, is to say, okay, what is a problem that I have, right, that I can solve? Because what is often happening right now, and it is really not making companies that cool, is that they find a product and then they say, is there a problem that I can think of? Now with AI. Now with AI I can do it. So the thing is, for example...
Jay Flynn:My dishwasher, now with AI.
Ivana Bartoletti:Yeah, exactly. So is there something that I could do better? So for example, supply chain. Distribution when it comes to publishing, for example. Can I use AI for that? Because this could be a waste of resources; I could make that more efficient. So what really works, in my view, is when there is the clarity of understanding: this is something that I can do better, and this is where I am trying to understand where AI could help me. But not the other way around, not trying, because of the hype at the moment, because AI is everywhere, to identify a product and then rush to find a place where I want to deploy it.
Jay Flynn:Right. So one of the things I know, speaking on behalf of a lot of publishers, is what I'm really curious about these days is this notion of the role AI plays in generating new words, right? Because it generates a lot of new words, and we're publishers, and our business is words; our business is the creation, the curation, the distribution, the selling. But most importantly, it's the quality behind that new content, and in science publishing, it's the amplification and the impact of those new words on the human project, right? In clean energy or in healthcare or in infectious diseases or whatever. We really focus on that stuff. These tools are really powerful. And so for us, it's not about ancillary things like supply chain, right? That's not the thing that captivates our imagination. It's what do we do in the creation of new content. So let's talk about the pluses and the minuses. How do you think about misinformation from AI and what a publisher ought to be doing, right? If it's product development, if it's tools for authors to help them write, and you've just written a book: in this new-words bit, which is kind of the nexus of it, what would you say about that?
Dr Dorothea Baur:Yeah, it's funny, and I don't want to spoil the party, but on the one hand, we've had a crisis of trust in science. And now we're having a new technology that is characterized by the ability to create more misinformation. And so now, how can this technology be applied to a field that has suffered from a crisis in trust? You can either create an arms race and fight fire with fire, like you just weaponize it, basically. And okay, we're having an AI to counter fraud, and then we're having counter-fraud to counter AI, et cetera, and eventually you probably lose control. So you need to take at least three steps back and really dig deeper into the specific challenges posed by AI. And that's what I want to add to what we discussed before. Even though the ethics of AI should link to existing values, it does carry some specific new challenges that have never been there before. You need to have an idea about, you know, what counts as misinformation? What role does attribution play in the age of AI? That's something that never had to be discussed before. What about accountability? What are our standards there? So you need to make an effort and also define some new values, or add some values, and then define those, and then define for yourself: these are the parts of the publishing process where we think human judgment is indispensable and must not be automated. And these, in a really fine-grained breakdown of steps, are the tiny steps where we think it is safe and in line with our values to allow AI, and if so, with this and this and this tool, with this and this and this declaration. But you need to do a lot, lot, lot of homework and dig deep into the specific challenges posed by AI, including notions of misinformation, misattribution, and violation of intellectual property.
Olivia Gambelin:I want to add in there, too, because we're talking about where to bring AI into the equation. And we're talking about authors. We're talking about people for whom this is the joy of their life, to be able to write. As an author, I can say that. I absolutely love writing. And so as we're looking at the safe use cases, and really digging down to what will make a good use case, the other important factor here, and I love values, so I will talk about values, and I'll talk about specific ones, this one is joy. Where's your joy in your work? So when you're talking to me about misinformation and wanting to be able to generate content at these massive scales, I'm on the other side thinking about the authors: well, where do I find joy in my work? I don't want to automate that. I want to automate the things around it that enable me to spend more time doing what I find joy and purpose in. So I love crafting my own writing style. I love crafting my own voice. I don't want AI there. But I would love AI, as someone that works with a journal, for peer review. I would love AI to be able to match and help with the peer review process that's pull-your-hair-out frustrating half the time and so burdensome. I would love AI to help there, because I don't find joy in that. I do find joy in being able to write my own work, and that I don't want AI touching, not because of misinformation, but because, you know, if we're bringing in these tools, why are we doing it in places where we're taking away something that we love and that we find purpose in?
Ivana Bartoletti:I love AI to do housework. I feel like this is the dishwasher one. I mean, somebody said it on Twitter, I can't remember what it was. But recently... No, but there is a really important point in what you're saying. And that is, who is going to decide what we're going to use AI for, right? That is, to me, crucial. Because that, to me, is the core of the problem. Because what are we going to use it for? What are the parts that we want to automate? And what are the parts where we want to retain human control? How do we define the human oversight? How do we define the interaction between us and the machine? Knowing that human and artificial intelligence are very different things. Humans are consequential; artificial intelligence is something else. So how we can really get that cooperation between these two to thrive, I think, is really important. There is one thing I wanted to say about language. There is one thing that really concerns me, which is the fact that a lot of the stuff that these systems like LLMs are going to be trained on is going to be synthetic moving forward, okay? So I'm thinking, as somebody who loves words, and we all do, my concern is what happens to language. It has been demonstrated in academic papers that the output of LLMs trained on synthetic data is not as good as it is now, and it's not that it's excellent right now. So that sort of degradation of language is something that concerns me. And I've always been in two minds around these tools, in the sense that, for example, I'm not an English mother tongue speaker, right? I like the fact that I could have something checked and edited, and to me that gives me more control and power. And I'm thinking about how many people can do a cover letter, and how much that can help. So there is something about empowerment that we really need to recognize. On the other hand, that concern about language, and how the differences and nuances are going to be lost when a lot of the content out there is going to be synthetic, is really concerning, no?
Jay Flynn:You're bringing up two different points, I think, both of which are really important to us at Wiley, but also, I think, in the industry. Access to the journals and books, and the publishing success that in many ways informs career success, informs whether or not you're going to get tenure or promoted or get a grant, is privileged for people who have English as their first language. And for academics who grow up in Europe and study in their field in English but speak their mother tongue at home, that's one thing. But if you're a researcher in another part of the world where maybe you've never had to write in English at that length, these tools are incredibly powerful and incredibly helpful. And I think that's a really interesting... I don't know that that author has a lot of joy in trying to figure out how to get their paper published in a high-impact journal. That's a really, really hard process. On the other hand, this sort of homogenization of language and the synthetic content piece is also disturbing to us.
Ivana Bartoletti:There is also something else. I asked one of these tools once, I said, can you tell me a story of a boy and a girl that want to go to university, right? Tell me a story, come up with a story. And it replied and said, yes, a boy and a girl, they finished high school, they decided where to go to university. The boy says, oh, I'm going to study engineering. And the girl says, I'm going to study art because I do not understand numbers. That's the other side of the coin, which is how these language models also have all of this ingrained bias.
Dr Dorothea Baur:Yeah. I really find the idea fascinating of using AI as a translating tool that builds bridges horizontally, you know, helping you go from one language to another and into the jargon of all those journals where you would never get accepted because, you know, you're not a native speaker. And then building bridges between disciplines, linking disciplines, because all the terms are so siloed. So all the horizontal transitions that AI allows for are fascinating, and for me much more valuable and less dangerous or critical than using AI as a tool to jump ahead as fast as possible. If you use it in a horizontal way and it allows you to easily transition from one journal to the next, that's a miracle, and that would otherwise take a lot of work. But don't use it as a rocket that just catapults you into whatever journal, into a career that you don't really understand yourself.
Jay Flynn:But I want to park that for a second, and I want to stay with this idea of the tool set. I'm going to focus on our industry. You're an editorial board member at a peer-reviewed journal. We talked a bit about formatting and access to journal literature. We talked about translating, and we talked about ways that these tools can help. One of the things that I've been fascinated by is this notion of using models, and tools built on models, to build an understanding of a domain. But today, they're kind of bad at replacing traditional scientific search or literature search, because they just make stuff up. I had this experience just the other day. I asked one of the models, hey, tell me where I could find an article on this. And then I asked it again, and it said, oh, I just did that because it sounded authoritative. I guessed that this journal would have published a paper. And I said, why did you guess? I didn't ask you to guess. I'm sorry, it said. And we went on, and this went on for a half an hour. I was pretty interested in how this was going to play out. It gave the illusion of intelligence, but it wasn't necessarily acting intelligently. On the other hand, it is a way to rocket yourself forward. I have a university-aged daughter. All her friends are using these tools to help them every single day: get their homework done, do these things. If you're a product developer in Silicon Valley, how do you balance the demand that these kids are creating and the ethics that we talked about at the beginning of this conversation?
Olivia Gambelin:Well, I actually want to challenge the point about rocketing yourself forward in your career. Yeah. Because you actually need to have a very core knowledge of the subject if you're going to get anything of value out of these systems. So, for example, very controversial, I don't code. If I'm in a room of computer scientists, it's like, oh, no. Thanks.
Ivana Bartoletti:But...
Olivia Gambelin:I can't sit down with ChatGPT and say, okay, please code me this in this language, and then get something out of it and go build a website. I have to start from the ground foundations, where I can fundamentally understand a coding language and fundamentally know what questions to prompt, to then move forward in a conversation that I'm having, say, with a generative system. Just as I can't sit down and point-blank start to build something with code using these systems, a computer engineer can't sit down and start a question or a conversation or prompting around ethical values and how ethics works in terms of artificial intelligence. Trust me, I have had plenty of experience testing ChatGPT for what it brings back in terms of AI ethics, and honestly, a lot of the information that comes back is very useless, because the information out there that it's built on top of is pretty surface level. When it comes to skyrocketing yourself forward in your career, there's no cheat sheet. There's no easy silver bullet that's going to skyrocket you forward. You still have to learn the information. Now, the thing is, we have a different way that we need to learn. We can't rely on that recall information anymore. The students now that are coming up and challenging, say, the product engineers, those students and the younger generations, they're still learning. They're just learning differently than we did before. And that's what's interesting to me. It's not information recall. It's critical thinking. It's engagement. It's a conversation with the actual knowledge bases themselves. That's what we're looking at.
Ivana Bartoletti:It's interesting how countries are going full circle on this. Have you noticed, over the last few months, we've seen schools embracing these tools, and then some schools saying, no, not even a mobile phone in the school, right? And I think we are trying to figure it out. I mean, I don't know what's right and wrong, because I have to be honest. On the one hand, I would want to see schools helping children familiarise with this stuff, because of exactly what you both said, right? Because obviously you need to know. Like, for example, the coding experience. Using coding assistants is good. It's not just good for coding, it's also good for debugging, it's also good for code explanation, it's good for a lot of things. But it's also extremely dangerous, because it can bring automation bias, but also because it can bring security and safety threats, massive ones. And this stuff is serious. It means that you're vulnerable as a company. So in order to be able, at least right now, to decide what you're going to code with the AI and what you're going to code yourself, you need to know what the risks are. You need to know what coding is, and you need to know a little bit how to do it. Now, the issue is that what I've noticed, for example, in coding, and I'm using this as an example, is that the entry level in coding will be needed. Yes. The senior level will be needed, because they're going to be the ones reviewing. The middle level, to me, is the one that is more problematic. What does it mean if that level becomes less important in companies, because these are not the junior ones that are coming in to familiarise themselves, and these are not the senior ones that review? Think long term. What does it mean if, across companies, this medium level is the one that may shrink? What does it mean for the future? What does that entail? And this can be applied not just to coding, but to many other fields. This is where I think the conversation needs to be: how are these tools reshaping not just hiring, but the progression of our expertise within our companies? That can apply to marketing. That can apply to many things. The middle level, the one that is not starting and is not reviewing.
Olivia Gambelin:And what we're seeing right now with these companies, it's not that AI is taking jobs now. It's that companies aren't backfilling on the promise of AI, where someone leaves their job, layoffs happen, and they don't backfill because they're sat there going, well, AI will take this, right? AI will take this job. We don't really need to bring in another person. And so you're strangling these teams on limited resources off the promise of something that isn't ready to handle that middle position yet.
Jay Flynn:I'm really curious about the things that we've just heard. How do you think about that? Because you studied business as well as ethics, right? So how do you think about that sort of profit now question that a lot of companies are asking? McKinsey puts out a paper. BCG puts out a paper. You can save 20%. You can save 30% of your cost today with AI. But the reality is, we talked a little bit in the green room before, some of these tools, I won't name the names of the tools, but some of them aren't that good yet. They're not that helpful. They're not that productive. And so in a business context, do you think, are we all just buying into hype at the moment, or do you feel like the right thing to do is to continue to push for adoption and continue to test. How do you advise your clients?
Dr Dorothea Baur:No, I mean, adoption is not a value in itself. And I think the pendulum has already swung back. The last reports I remember, I think from Goldman Sachs and others, are like, oh, productivity gains through AI have been vastly overestimated. So I don't think that pure hype feeling is dominating anymore. Already now, there has been a lot of disillusionment. Also because those generic tools that we just talked about, which might help at the entry level, never bring you much beyond that unless you are a top expert and really know how to handle them. They haven't really fulfilled all those promises. I think now it's more a time of being a bit more careful again, and waiting until really valuable tools that enhance the core business and that match the existing structure are ready to use, rather than just grabbing anything that calls itself general-purpose AI, which for me is already hubris in itself. General purpose for what? You need to have a specific purpose to use it.
Jay Flynn:And I guess that purpose exists within a specific context, right? I think one of the things I remember from ethics in college was that ethics always exists in a context, right? A societal, a cultural, a legal, some kind of context. So in our publishing context, a lot of publishing companies are working with big technology companies to train foundational models. There's money on the table. There's interest in what we do. We provide high-quality, structured information that is incredibly useful. It's probably more useful than the Common Crawl data set or Reddit or whatever these foundational models are trained on. Reddit's cool, but it's hard.
Olivia Gambelin:You're going to have a lot of Reddit people coming after you on this.
Jay Flynn:But it's tough. It's not a great place for, say, organic chemistry. So, yes or no, should publishers be participating in this, first? And then second, I'm sort of interested in how you bring an ethical framework to that discussion. Because now we're coming back to the point you raised a little while ago, Ivana, about the biases inside and inherent in the models. But let's get to that in a second. So, yes or no, should publishers be doing that from your perspective, and how do you apply an ethical framework to it?
Dr Dorothea Baur:May I start?
Jay Flynn:Please.
Dr Dorothea Baur:At the moment, and depending on which company and what kind of AI model they're using, I don't think there is a good reason for participating, as long as you know that more data is not the solution to the inherent problem of hallucinations. Because the way these models work now, you cannot exclude hallucinations, and more data will not reduce the amount of hallucinations. So for that reason alone, as a first step, the question would be: why should we donate or sell the data? And the next question: whose content is it? It's like, why? Why? Just so that we are represented in a model that will probabilistically create a hallucination out of it?
Olivia Gambelin:I would say not to take such a black and white approach to it.
Jay Flynn:There's a reason I posed it that way, right?
Olivia Gambelin:Oh, I know, I know, exactly. You want us to fight, we're ready to go. I'm kidding, I'm kidding. But with it, it's not that it's an outright no. I love what Dorothea is saying about more data not being the solution. We have the potential of creating narrow use models, which could be very powerful. And there is the possibility to increase the quality of information, minus the hallucinations there. What I'm concerned about here, and why I'm not saying, yes, by all means, engage in those partnerships, engage in those relationships, gung ho, is more on the researcher side. So yes, the publishers own those... I don't know, it depends on the publisher. Sometimes they own copyright, sometimes they don't. But yes, they house all of that information, but it's not necessarily theirs to share point blank. You have to respect the researcher in this. And if a researcher is bringing in their papers that they need to publish in order to further their career, or even just keep their job, they have no choice but to come and publish in these journals. And then for them to turn around and say, well, my data is getting sold. What do you mean? All of my hard work, where in some journals I'm having to pay to even have my publication, and then the publishers are also benefiting on top of that. It feels almost like robbery from a researcher's perspective, where you're benefiting twice off of my long hours. Now, there is a better way to do that. This isn't saying that that's terrible. It's more recognizing where the value that you're collecting is generated. The publishers generate value by bringing it all together, but the individual researchers are where that information is coming from. And so enter into those relationships with the large tech companies knowing that your first and foremost stakeholder needs to be the researchers, and how you are bringing that value back to them. Because if you don't bring that value back to the researchers, you're not going to have any more publications or articles coming in.
Ivana Bartoletti:Yes, I mean, it's good. I agree. I feel that, and you can kill me, but I take the point that there's the value of the researcher, the effort, the work, and the question of what the future is if that trust, that relationship, is broken. On the other hand, I do think that we also have a great opportunity here, and a need to redefine what, for example, copyright means in this age. And I know there are big challenges out there, and we're waiting to know what's going to happen with, for example, OpenAI and the big legal challenges happening. And I think a lot of us are kind of waiting to see what these will be, considering, for example, that in Europe alone there are so many different approaches. But I think we do have to rethink, because I don't think that copyright can have the same meaning that it had before all of this. And in that sense, I'm a little bit more... I do see some value in doing this, but yes, there is that relationship between publisher and author, and it's one that we need to look into.
Dr Dorothea Baur:For me, definitely one thing, if you think about selling the data to big tech. First of all, what model are you feeding into? And the other one is, if you ask your authors for permission, it must be an explicit opt-in and not just an opt-out. So the default, it's how you present it. It's like: I actively agree to you selling the data, or to whatever you propose to do with it, and not just an opt-out. It needs to be fair. Allow them informed consent.
Jay Flynn:Right. So the big change in science publishing in Europe over the last 10, 15 years has been the open access movement, which explicitly takes away the author's copyright and explicitly licenses everything for free under CC BY. And that feels in tension with this assertion, this sort of newfound defense of authors' rights from the same quarters, in fact, that insisted that the authors give away their rights and mandated it. How am I as a publisher supposed to reckon with that? What do you think? Who do I talk to? I mean, you do a lot of work in Brussels, all of you. So how do we reckon with that, do you think? What's our best strategy for even understanding which of these things takes... The priority, you mean. That's directly contravening the author's right to opt out.
Ivana Bartoletti:But this is what I was saying. This is what I was saying. Feeling a little bit uncomfortable with defending... That is my opinion. But this is why I was feeling a bit uncomfortable, because I was saying it doesn't seem to stack up with... To me, it doesn't stack up with the success and the movement of the last few years.
Jay Flynn:And like you said, there's a lot of grey area right now, because we're waiting for certain things to work their way through the courts. There are also facts on the ground which would indicate that most of this stuff had been pirated anyway and was used by big tech to train the models to begin with. So there's a lot of that out there as well. So let me ask a question a different way, or maybe get to the second half of my question. How would you best advise the folks in this room, and Wiley, to try and engage with these companies in a way that is most ethical? Because we're dealing, on the one hand, with potentially large-scale IP theft to train the foundational models, and we're dealing, on the other hand, with authors who feel like they haven't had a voice in whether or not their content's been licensed. I'll say that, at Wiley at least, we view our responsibility as to defend the authors from that misappropriation and misuse of the content, the unlicensed use of it, the unlicensed training of models. We want to make sure book authors get paid. We want to make sure learned societies get paid for doing this stuff. But on the other hand, we're not exactly dealing with small, under-resourced firms here.
Ivana Bartoletti:No, but there is one thing, which is, if I think, for example, of what is happening in Europe, right? In Europe, you have the European AI Act, which is heavily criticized everywhere, because people say, oh, in Europe, I mean, if you've seen the Draghi report, it looks like we are lagging behind the world in Europe because of regulation. But the European AI Act has brought something positive, in my view, which is the fact that these models, they need to be transparent, right? To me, this is crucial, because that, to me, is the direction for publishers. It's saying, you have a big voice in this. I'm going to give you all this data, yes, that's a decision that you may take. But actually, these tools, they don't say anywhere where the data is coming from. So far, we haven't got that transparency, that clarity about where the data is coming from. We don't. Even open source models, there is nothing really open source about a lot of these models. They are open access, maybe, but not open source, because one of the biggest, the most important characteristics of open source is transparency about where the data is coming from. So to me, there is a big lever around transparency, around the provenance of this data: accountability for where this data is coming from, publishing where this data is coming from, making all of this auditable from an external standpoint. And this is important, I believe. The California legislation that was stopped, and I understand why it was stopped, had some positive elements. Probably the cost thing was maybe not particularly adequate, but who knows? But the issue is, again, there is a movement, I feel, globally to put transparency at the heart of these models. And it's something that these companies will have to reconcile with. I mean, I'm happy that OpenAI is opening an office in Brussels, because that means that there is an understanding and a desire to also comply with the European AI Act, which also entails the transparency of their models.
Dr Dorothea Baur:What would your transparency be worth if you just throw it into a sea of unaccountability and opacity? First, they need to clean up their stuff, the big tech corporations, and show that they're capable of and willing to give that transparency and accountability. And then you talk to your stakeholders, as Olivia said, researchers and authors among the key ones, and you have some really in-depth stakeholder engagement, and then you take an enlightened and informed position on how to handle that partnership. But first, big tech needs to deliver, not just on the promises they made, but on the promises that were forced upon them, I would say.
Jay Flynn:When I think about the journey that publishers have gone through, and as a member of an editorial board you will know that diversity in editorial boards is hard to come by sometimes, that there's some gatekeeping that has often gone on historically in academia. Maybe, I think, that's a non-controversial statement. We've been on a journey over the last several years to try to make our peer reviewers and our editorial boards and our editors-in-chief reflect the communities that they represent, which means that we need experts from all over the world. It means we need diversity of race, gender. We need diversity of background in order to make sure that the chance to get published in a journal is fairer than it has been historically. When we train models on content that is 10 years old, 15 years old, 20 years old, 100 years old, there will be things in there that would shock modern sensibility: in an anthropology journal from 100 years ago, or let's say in a chemistry journal from 10 years ago, there would be things in there that would offend, and there would be things in there that we wouldn't necessarily want models spitting back out. How do I, as a publisher, think about supplying that content for training, and how do I think about the opportunities that tools could give us to root that stuff out? I don't know. Those are the two things I've been thinking a lot about.
Olivia Gambelin:It's kind of like a chicken and the egg situation. Not really sure which way to go first. But what I mean by that is when we have these models that are exhibiting biases, they're doing their job. It's there because it's in the data, and it's in the data because the data is reflecting our societal norms at any point in time. So it is impossible to have a bias-free model. You cannot eliminate bias. There's always going to be some type of bias. The important focus here is not on let's eliminate it all and let's make sure it's all good and we won't send or use these historical papers because we know there's biases in it. No. Send that. But know that you need to look for those biases. use that as a way to put a magnifying glass on where the historical biases and stereotypes and disinclusivity, that's not a word. I'm making up words, too. I swear, I'm not an AI. I do that naturally, and I'm a native English speaker, so I don't know what that says about me. But that lack of diversity, that will always be inherent. We just need to look for it. And I think there's actually a great opportunity when we're contributing to these kind of models to use that as a magnifying glass, find where the inequalities are that we don't want anymore, and then don't touch the model, don't try and correct the model, correct the actual society, correct the processes, correct the company, correct the people behind. It's the people at the root. Not saying the people are the problem, but the root of the problem is coming in how we interact as people. If it's showing up in the data, it's because we are doing it. not because the AI is doing it.
Jay Flynn:So I read the other day a quote from a doctor who said, this AI has way better bedside manner than I do. I am not that great at communicating to patients. This chatbot does a way better job. It's more empathetic. It speaks in plainer language. Is there ever a world in your imagination, either of you two, where an AI is more ethical than the human in the middle?
Olivia Gambelin:Sorry, I scoffed at that one. Please go ahead.
Ivana Bartoletti:That's not a correct question. What does that mean? What does it mean, an AI being more... No. Okay, sorry. We're tired of talking about bias for one reason. You know why we're tired of talking about bias? Because bias is not a technological issue, right? It's a societal one. So we are basically asking people who are in AI to fix society. Hey, we can't. So this is where we get a little bit tired. And the reason is because to have an unbiased or an equal or an equitable AI output, you, Wiley, have to make a political, social choice to want it. Because you have to say, actually, I want an equitable output. To do so, I'm going to go and train a model in a certain way. I'm actually going to modify the model if I want to. I'm going to create synthetic data where the data is not there, because I, Wiley, want an equitable model. That speaks to you. That speaks to your values. That speaks to what you want, to what the role of publishing is in the age that we live in. And it's not an easy one, because, as you say, we are in a situation where there is a lot of discussion about, ah, you can't cancel that from the past, you can't cancel that from the past. This is where your role and your responsibility comes in. What do you want? And to me, this is where the benefit of AI can come from. I mean, honestly, I've never heard so much discussion about bias and quality since AI came out, honestly. Which is actually not a bad thing. So what I'm thinking is, use this, as was said, to make a statement. To make a statement. It can be controversial, but at least we can talk about this. Use this to say, we want an output which is equitable. To do so, I will have to modify the data. Sorry.
Dr Dorothea Baur:Yeah, but it requires a huge effort to go against the language that promises you that AI is automated decision-making, which relieves you from making the decisions, or deprives you of making the decisions. And it goes against the grain of machine learning, which says, oh, you just give me the big data and I'll find the solution for you. You need to reduce it to an analytics tool, to highlight intersectional biases and such things that we would never have seen with our own eyes or in our Excel sheets or whatever we would use. And you really need to cut it back and put yourself on top of it. And it's a huge effort, because it goes against the very logic or the jargon that is used with machine learning: it's automated decision-making, it's analyzing big data, please don't disturb me, I'm doing reinforcement learning, et cetera. You need to impose yourself as a human, as a company with values, and say, this is what we want to use you for. You are the tool. We use you for this and that,
Olivia Gambelin:but not that. Which is why I scoffed when you said, will we ever have AI that's more ethical than us? Absolutely not. Because of that factor, we can't automate our ethical decision making. We as humans have something called moral maturity, meaning as time goes, as we as humans as society bring in more data, let's say we can call ourselves our own little AI bots if we want, we're learning through context, we're learning through experience, we're bringing in more information. We as a society and people actually mature morally over time. You know, 100 years ago, I couldn't vote. 100 years forward, I can. We as a society matured morally in our understanding of what we accept as good and bad, right and wrong. If we were to offload that decision-making onto a system, we lock ourselves into our current moral understanding and our current frameworks. We basically sign over our ability as a human race to mature and grow over time. And that is the danger, where we offload that, where it is that decision automation without that critical thought, without that engagement to be able to say as a person, I don't like that, I wanna change that. And so that's why I scoffed when it was, can we have something more ethical? No, it's never going to be more ethical than us. Not that we're striving for efficacy, we're striving to learn and understand and grow.
Jay Flynn:I am reflective, because the use of tools to spot things that we don't see is really interesting. None of us is as moral as all of us, or all of us are not as moral as any one of us, I think, is where I'm coming at this from. And so the reason I was being reflective was because I wouldn't have known until I looked at the data set and asked: do I have enough reviewers from China in my peer review cohort, and might that have something to do with the acceptance rate of Chinese authors in my journals in biology? So I used a tool to help me ask the question. That was, in your words, a moral act, perhaps, because I wanted to apply some values to that system, and I needed a tool to help with it. By myself, though, I don't have that ability. And so maybe what I was reflecting on is: maybe the right question is, in what way do you think AI can help us be more ethical in the context of the decisions we make? And what do you have to watch out for? So it's that application of that framework, the application of those tools.
Ivana Bartoletti:I think there is a lot of reinventing the wheel, sorry. In the sense that it seems that since AI, and particularly since ChatGPT made the splash, we are kind of asking ourselves questions that AI is bringing to the fore, but which, to an extent, have always been there. So I'll give you an example. The reason I'm saying this is because AI seems, for example, to have become a great excuse to break the law. Excuse me. People say... Privacy, for example. I mean, I come from a privacy law background. I head up privacy for a large company. Privacy law has always existed. And now you're coming in, and because of AI, it seems that it doesn't exist anymore. Non-discrimination legislation. Well, actually, you cannot discriminate. It's in the law. And then you have these tools that are actually discriminating. And you have human rights legislation. Oh, it's been there. I mean, at risk all the time, but it's there. And then you have AI, and it seems as if, and correct me if I'm wrong, we're challenging the fundamentals because of AI. Of course, we change with technology. The world changes with tech, obviously. But it seems as if we are willing to adapt to these technologies to the point that we are losing track of the essentials and the fundamentals that were binding us together before AI. So, you know, I don't know if I can express this correctly, but it feels as if we are challenging things that... Why are we asking ourselves the question, can AI help us be more ethical? When it comes, for example, to using AI in recruitment, I have non-discrimination legislation, and I have to abide by it. It's been a fight to get that non-discrimination law in place. And now are we running the risk of challenging these very fundamentals that we've been fighting for for so long? I mean, to me, this is the question.
Dr Dorothea Baur:But that's why I find it particularly fascinating in a business context. We've come a long way from thinking the business of business is business to seeing business as accountable for human rights violations, for environmental impact, et cetera. Now businesses are using AI. There was always a moral distance between business and social and environmental aspects, and so we've built a lot of frameworks to hold businesses accountable for what they're doing. Now AI comes and they say, oh, we don't have control over that, it's the AI doing it. And that's why we need to bring AI into the existing discussion about responsibility, non-discrimination, all these things that we have worked hard to get there.
Jay Flynn:I want to get the audience involved, but I'm really interested in the last point you made, because we've got a set of UN SDGs that we're trying to adhere to and meet. We've got a set of green goals that we're trying to meet. And yet the thing I hear from a lot of our folks, when we license a tool or when we bring something in, is: wait, isn't the training of this model the most environmentally destructive thing that man has done in the last 15 years? Probably not, I know. But the rhetoric is very hyped up. When you see Microsoft wants to restart the Three Mile Island nuclear plant, which as a Gen Xer is akin to sort of, let's bring back thalidomide, it feels to me like these are really interesting and current questions that have to exist inside the framework that you were just talking about. It has to exist inside a climate goals framework. It has to exist inside a discrimination framework.
Ivana Bartoletti:And the law. And here I speak as somebody who works in a business and comes from a legal background. The law. I mean, privacy legislation, when you train the system, is important. In the US, there is a proliferation of legislation coming around the use of AI in recruiting and the use of algorithmic decision-making. There is the NIST Risk Management Framework. There is legislation, more or less binding, but there is legislation. To me, this is really important, because otherwise we risk... Companies must abide by the rules. So the legal element, to me, is very important.
Jay Flynn:I could do this for another hour. I feel like we're just getting started. This is amazing, but I really want to... Can someone get us a drink? I mean, sure. Let's get going. Drinks all around. I'm going to choose to wrap here. That was phenomenal. I really want to thank each of you for a great discussion. I learned a lot. I really could go on for another hour. We'll have to bring drinks up to the front. But I want to just thank you, on behalf of Wiley and personally, for taking the time to come and do this. Whatever your role is in the research ecosystem, if you're watching or joining us today, there's still a lot to be gained by staying engaged with this conversation. I really want to thank you all for joining today. Let's shape the future of AI and not be shaped by it. That's really what I took away from this. So thank you very, very much.