The ThinkND Podcast

RISE AI, Part 6: AI Ethics by Design

Think ND


Episode Topic: AI Ethics by Design

Gain a strategic edge in the evolving AI landscape. Join Vatican AI consultant Father Paolo Benanti and CMU machine learning expert Professor Aarti Singh for a high-level dialogue on innovation and human dignity. This fireside chat bridges ethics and engineering, offering essential insights for leaders committed to building responsible, human-centric technology.

Featured Speakers:

  • Fr. Paolo Benanti, Third Order Regular Franciscan
  • Aarti Singh, Carnegie Mellon University

Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/45e27f

This podcast is a part of the ThinkND Series titled RISE AI.

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.

  • Learn more about ThinkND and register for upcoming live events at think.nd.edu.
  • Join our LinkedIn community for updates, episode clips, and more.

Welcome

Speaker

Good morning, everyone. Sorry to break up this wonderful conversation here, but I'd like to get us started on time so we can stay on time. How's it going? How are we doing after two days of RISE here? Excellent. Thank you. And we realized it was rainy and cloudy yesterday morning. As you see, we acted on your concerns and we have fixed it for today. It takes us a lot of work to make sure the sun's shining bright every day. But thank you again for joining us. I'm really looking forward to the conversation today, as it picks up on the things that we have seen, heard, and talked to each other about over the last couple of days at the intersection of AI, RISE, and deployments and use cases. The two friends and amazing scholars I'll be having a conversation with today will touch on all of those topics. But one thing I'd like to highlight: yesterday, I'm not sure how many of you attended the students' poster session. Wasn't that amazing? These students are doing some amazing, amazing work, so give them a huge round of applause. So, I'd like to first welcome Father Paolo Benanti onto the stage. He's a Third Order Regular Franciscan and a leading ethicist of technology. Father Benanti focuses his work on the ethical and anthropological significance of the digital age. He's a professor of the ethics of AI at LUISS University. It's a very interesting university; he was sharing with me a day ago that they educate the business leaders of tomorrow. So imagine being in a classroom and being educated by the man himself, right? So hire business leaders from LUISS, and from Notre Dame; we create great business leaders too, by the way. He's also the Vatican's consultant on AI and the president of the Italian government's commission on the impact of AI on media and journalism. His counsel is sought at the highest levels of global policy. He was recently appointed by the UN Secretary-General to the group developing a proposal for the global governance of AI, and he was also appointed by President Biden to the National Science Board. And just to share with you a small-world effect: yesterday, Father Benanti was sharing with me that he was asked to be on a phone call with the US Ambassador to the Holy See. They were talking, and Father Benanti said, "I was in South Bend with Natasha; we were having dinner at this restaurant." The US Ambassador to the Holy See said, "That's my favorite restaurant in South Bend." Who was the US Ambassador to the Holy See? Any guesses? Joe Donnelly, from Indiana. So it's a small-world effect that comes together here. And joining him is Professor Aarti Singh, who's a distinguished professor in the Machine Learning Department at Carnegie Mellon University, the number one department in AI, and also the director of the NSF AI Institute for Societal Decision Making. So you can see why we have these two amazing, brilliant minds together on the stage to talk about AI ethics and how we think about societal decision making using AI. And for folks who are not familiar with these NSF institutes, these are the most competitive research AI institutes to be awarded by NSF.
There's about a $20 million research grant that comes from NSF, and Aarti is leading one of those, on societal decision making, at CMU. She's a recognized expert at the intersection of machine learning, statistics, and signal processing. Her own research has focused on designing principled interactive algorithms with real-world applications to scientific and societal domains. And I don't know, Aarti, how you're going to find the time, but she's also the general chair of ICML 2025 and a lead expert on multiple National Academies (NAS) study committees. So please join me in welcoming Father Benanti and Aarti. Alright, I'm going to grab a seat next to you and get you into the conversation. So this is how we'll do it: we will talk to each other and everyone else here for the next 45 minutes or so, and then we will invite all of you to join the conversation. We will be constrained by time, so the way to get your question out is to be first to the microphone, right? We will follow the computer scientists in the room and use a first-in, first-out strategy. FIFO, right? We won't be doing LIFO here today, last in, first out. Alright, so I'll start with you, Father Benanti. A lot of your work... even last time when you visited us here at Notre Dame, you gave a fascinating talk in the Lucy Family Institute series on algorethics. And as we start thinking about it, "AI for Good" is becoming a thing, right? Folks keep using it, correctly or incorrectly, I don't know yet, but I'd love to get your perspective: what does it mean for something to be AI for Good once it is deployed? Principles of human dignity must be upheld, from that perspective. So let's start there: what is AI for Good, from your perspective?

Speaker 2

Uh, first of all, thank you all for having me. It's a pleasure to be with you. Well, you know, let's start from a perspective: should we talk about ethics in technology? Because, you know, when for the first time, 60,000 years ago, a human being held for the first time in his hand a stick, a club, it could be a tool to open many more coconuts, or a weapon to open many more skulls. So is it ethics in technology, or should it be ethics in the human being that is using the technology?

Speaker 3

Mm-hmm.

Speaker 2

Well, this is a long topic, just to arrive at the understanding that we have today. We understood that it's not just a matter of using technology in one way or in another way, but that once a technology is released inside a society, it acts as a form of order and a displacement of power. So, supposing that we have to build a railway: where we put the track and where we put the station is something that is beside the good or bad of the railway. It is allowing someone to get transportation, and it is denying someone else that transportation. So it's not just a matter of the railway being good or bad. There is an effect in society that is connected to different stakeholders, and that has to be analyzed, has to be put in the middle, has to be in some way negotiated in society. Well, now it's no longer a time in which such kinds of effects are produced only by concrete, steel, and other such technology. Every time we write a conditional form in an algorithm, if this, then that, or we allow a machine learning system to determine something like that, we are giving a form of order and a dimension of power inside the society. AI for Good is the form of innovation that takes care of the different stakeholders in society that will be touched by such technology. And this is a step in the direction of complexity. I heard yesterday a lot of super interesting discussion, and it happens that we have to distinguish between the developer and the deployer, because these things happen between the developer and the deployer. Let me give an example. You can develop an API call for an image recognition system. If we apply it, let me give a really Italian example, to the recognition of a coffee bean in a roastery, to understand if the coffee bean is good... the other example is with pasta and pizza, but I'll skip it. Sorry, I'm a professor; it's a didactic thing: if you smile, I don't lose you. Well, if we apply it to such kinds of things, there is no ethical problem. But if we apply the same API to the patient in an emergency room in a hospital, then the form of order and the displacement of power touch human dignity, and so here is where AI for Good happens. It's not a matter of saying whether AI is good or bad. It's not a medieval thing, you know, devils here, angels there. It is a constant questioning of technology in its societal application; it is not drop-and-forget, but an ongoing, recursive process that simply drives the technology to express the majority of the principles that we agree on, and to respect the majority of the human rights that we decided to put at the center of what we call human development. Not only, you know, innovation, but human development. And so, to make a long story short, AI for Good is the element that transforms innovation into human development.

Speaker

Thank you so much for that, Father Benanti. So, picking up on that: Aarti, your institute focuses on societal decision making and has made some significant inroads in crisis management. So let's pick up where Father Benanti left us: that AI for Good leads to integral human development. And when you're doing crisis management, you are contributing to human development, and human maintenance, so to speak. So tell us a bit about some of your projects and work, and how they may tie in with what Father Benanti just laid out for us.

Speaker 4

Right. So, good morning, everyone. I hope this is working; everybody can hear me? Yeah. Okay. So, yeah, I totally agree with a lot of the things that were said. Developing AI for social good, and we are doing it in the context of disaster management, really requires, I would say, the developer and deployer working together with the stakeholders throughout the entire pipeline. We don't develop tools, or even come up with projects, on our own as AI researchers and then go and try to convince some disaster managers: "Oh, this is a useful tool that you should use." Yes, many times they're unaware of what the potential could be, but that potential has to be co-discovered. So we start by talking to them. We work with several state emergency management departments, and also with organizations like the American Red Cross, to first conduct a need-finding study, right? Is AI the right tool? Where can AI be plugged in? What is the data that we will have access to? How is it vetted? What are the algorithms, the assumptions that go into them? And then we try to identify a problem setting first and then develop algorithms with continual feedback from them. So I'll give you an example of that. We recently developed a method, an AI model, to do building damage assessment, say after a hurricane. And this was built off of drone data. Now, the great thing about drones is that you can send them when it may still be unsafe for human emergency responders to go out there, so you can collect that data very early. A big challenge is to look through several minutes, or in fact hours, of video and comb through it for relevant information. Humans can do it, and they still do it; we are not replacing that part, but it takes them a lot of time. And developing an AI method, even a crude one that can just give them some sense of where there's the most damage, helps them decide on follow-up actions, okay? So again, the actions are decided by the humans, the emergency managers, but the tool gives them early access to information that they would otherwise be working blind without. So that's an example where we were able to release one of the largest hurricane-related disaster datasets, combining 10 different hurricanes. A key challenge for AI was annotation; it had never been annotated. So we got it annotated through a citizen-science effort, and we were able to develop these methods. They're not fully deployed yet. We did a tabletop exercise in April, again with emergency managers, to understand where it can be used, what the pitfalls are, and how they would like to see the information. And with that feedback, which was actually more positive than we expected, we then began conducting training sessions with them. We have trained, I think in two subsequent rounds, about a hundred disaster managers on using this tool. So that's just an example.
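To make the triage idea concrete, here is a minimal, hypothetical sketch; the dataclass, names, and aggregation rule are illustrative assumptions, not the institute's actual tool. The idea is to score drone-video frames with a damage model, aggregate by location, and surface the most-damaged locations for the emergency managers to act on.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    location_id: str     # e.g., a grid cell or address cluster (assumed granularity)
    damage_score: float  # hypothetical model output in [0, 1]; higher = more damage

def rank_locations(frames: list[Frame], top_k: int = 10) -> list[tuple[str, float]]:
    """Aggregate per-frame damage scores by location and return the top-k.

    The ranking only informs humans; follow-up actions stay with the
    emergency managers, as described in the conversation.
    """
    by_location: dict[str, list[float]] = {}
    for f in frames:
        by_location.setdefault(f.location_id, []).append(f.damage_score)
    ranked = sorted(
        ((loc, sum(scores) / len(scores)) for loc, scores in by_location.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:top_k]
```

The mean per location is only one reasonable aggregation; a max or a high quantile would instead bias the ranking toward the worst single frame.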

Speaker

No, thank you, Aarti. So first of all, I really like your point about the developers and deployers working with stakeholders. It also gets to what Father Benanti was talking about: what is AI for Good, right? And something I've been thinking about deeply is that even when we talk about responsible AI, AI for Good, there is no mathematical function for responsibility that you can hand to a machine learning algorithm so it can do gradient descent and figure out your loss optimization. So it is the developer-deployer continuum that both of you have alluded to. So let me ask you a question; this is for both of you. You're in a disaster management scenario, and generally, as you both have said, the humans are the decision makers. But let's say that decision needs fraction-of-a-second timing, so that you may not have the time for a human to look at the data, whether the system is trusted or not trusted; you can decide whatever attributes you want for the system. What should one do? Should we still wait for the human, and when can we let the machine decide? Or should the human always be able to override a machine's decision?

Speaker 2

Well, the easy one to me, huh? Okay. Well, let me make it a little bit more complicated. You know, my background is in engineering, so I think the problem is the 99% error-free threshold. We can imagine, especially now with LLMs and generative artificial intelligence, that we are used to dividing a task into a series of steps. And the problem is how many steps a machine can take before we fall below a 99% error-free threshold. Because if we end up at a 50% threshold after a hundred steps, it's like flipping a coin: it could be good and it could be bad. Also, if we have 80 or 90 percent over ten steps, you need human supervision. And if this is the threshold and you need human supervision, the time has to be a human-compatible time that allows the human supervision to be a significant expression in the chain. But now we have models that come guaranteed for probably 2,000 steps with a 99% error-free threshold. Probably the best example is the Claude models from Anthropic, which can sustain something like 33 hours of coding without a lot of problems. And this is a lot. Now, the problem is that in this transitional time, in which every seven months we see this number of steps rise, we can imagine putting in place, with this threshold, a precautionary principle covering the many different kinds of tasks that the machine can perform in an autonomous way. What does precaution mean? Something that my father, who is an engineer, used to tell me: think twice, cut once. But because we have this kind of technology, and because this kind of technology can shorten the time of intervention in mission-critical things like a disaster, what we should start to do, let me express it that way, is to draft a coexistence map, in which some kinds of tasks could be automated with such a threshold, other tasks need to be put in place with human supervision, and we have to give people the time to be responsible in action, and some kinds of tasks have to remain human. We have a lot of examples in which, you know, we don't respect human time in a human-machine decision-making model. There was a moment when we started to develop autonomous driving cars, and if there was an uncertainty, they called on the driver to take an action. Yeah, but if you have a second and a half and you are watching Netflix in the car... don't let me use a bad word, but it's not really workable. And this is the point. So we are seeing the rise of a new discipline, probably, in this crossing, this intersection area between human and machine. And if I can express it like a philosopher, since I am a philosopher and my topic is moral philosophy: I think that today the people who are simply designing the user experience are the ones shaping the most political fact in our society, because this is where you are drafting what remains of humanity as a source of meaning in decisions. And I think that a critical area like disaster recovery, where humanity is at the center, where you would like to make a recovery not to save goods but to save people, is where we can draft a for-good user experience and user interface that can serve as a gold-standard model for a lot of other businesses. This is why the example that you gave is so important. Because where the human is the core element you face is where you can ask loudly:
What do you think is the value of the human being in the decision-making system?
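To make the compounding that Father Benanti describes concrete: if each step of a multi-step task succeeds independently with probability p, the whole chain succeeds with probability p to the power n, which collapses quickly. A minimal sketch, where independence across steps is the simplifying assumption:

```python
def chain_success(p: float, n_steps: int) -> float:
    """Probability that all n_steps succeed, assuming each step is independent."""
    return p ** n_steps

print(chain_success(0.99, 100))  # ~0.37: 99% per step is near coin-flip territory at 100 steps
print(chain_success(0.90, 10))   # ~0.35: 90% per step collapses after only 10 steps
```

This is why the number of steps a model can sustain above a given reliability threshold determines where, on the coexistence map, human supervision has to re-enter the chain.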

Speaker 4

Yeah. So, there's one point that I'll disagree with, but with the rest I agree.

Speaker

Well, that's what we want. Our goal is to get them to a point of disagreement, right? And then we ask for forgiveness from Father Benanti.

Speaker 4

Yes, absolutely. So I think the point I disagree with slightly is this: I tend to think that baselines are important to consider. How are things being done currently? So I would not always just shoot for that 99% accuracy; that's, I think, a counterproductive goal. I think we have to consider how it's done currently and actually do a counterfactual analysis to see how the AI, or the human working with the AI, would do, right? Or maybe just the AI, in some scenarios where that makes sense. But you have to evaluate that counterfactual according to a range of criteria, which includes ethics, accountability, privacy, all of those things, in addition to, of course, accuracy and the benefit side of it. So I would say, first of all, there is no one recipe for when you should just defer to the humans, when you should defer to the AI, and so on. I think it's very context-dependent, and developing a framework where you can understand what human-AI complementarity looks like in a particular domain, given the risks of that domain and several other considerations, is actually a really important open question, one we are trying to address by asking the following question. We want to develop a prescriptive framework for AI adoption: can I prescribe the way you use AI, given the parameters of a context, to situate it for better outcomes? Adoption is actually one of the things we are looking at. So to me this means you list out the user factors, things like privacy and accountability, where the decision maker will be held responsible. You list out the organizational factors (I'm not listing all of them, just examples): legality, operationalizability, and so on. And you list out the societal factors, like ethics, morality, and so on. And then you think about how you can mediate the effect of those factors on AI adoption through the right use case, the way the human engages with the AI. As an example, a radiologist could always defer to the AI; that's clearly a bad choice, we all know. But what if they use the AI only when they are not confident and are leaning toward declaring "not a cancer"? So, asymmetric triaging: not when they are saying it is a cancer, right? If the human thinks this is probably not a cancer but isn't sure, that's actually a good point to use AI, because then you can hopefully detect a potential cancer rather than miss one. So there are particular ways of engaging with it that make sense in different contexts, and identifying those is the key.
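A minimal sketch of the asymmetric triaging Professor Singh describes, with illustrative names and thresholds (the 0.8 cutoff is an assumption, not a clinically validated value): the AI is consulted only when the human leans toward "not cancer" with low confidence, so it can catch potential misses without ever overriding a human "cancer" call.

```python
def final_call(human_says_cancer: bool, human_confidence: float,
               ai_says_cancer: bool) -> bool:
    """Return the final screening decision (True = flag as possible cancer)."""
    if human_says_cancer:
        # Human suspicion is always kept; the AI is never used to dismiss it.
        return True
    if human_confidence < 0.8 and ai_says_cancer:
        # Human leans "not cancer" but is unsure: let the AI escalate the case.
        return True
    return False
```

The asymmetry is the point: errors in the two directions carry very different costs, so the engagement rule, not the model itself, encodes that judgment.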

Accountability and ethics

Speaker

Thank you. So this is great, right? Although I don't think you two are in real disagreement; you're generally in agreement, so we still have to double-click on it to get to a point of disagreement. And I agree as well. So I think what it comes down to is: can we explain the why, right? And I think there are two principles to it. For me, as a computer scientist, you think about the human-machine interfaces and the notions of explainability: can we explain what's going on? Can the human understand what's going on, perhaps or perhaps not? What are the different factors? You both talked about accountability, moral accountability as well, and the societal intervention factors, accuracy, et cetera. Clearly accuracy alone has to be put aside, right? Yesterday, Luna from Meta gave an amazing keynote where she talked about the notion of factuality: if you just look at accuracy, some LLMs look good, but if you look at the rate of hallucinations at the same time, they don't look that good. Put those together, and accuracy is a misnomer in many ways in these situations. So if you were to design the system... and Father Benanti, you said you are an engineer, philosopher, Franciscan...

Speaker 2

A lot of sins.

Speaker

So, what does moral accountability look like for the systems that we computer scientists are deploying? If you can say, okay, here is one idea of moral accountability that will transcend, you know, faiths and everything, so that we go: okay, yes, this is something that we should have in the system.

Speaker 2

Well, you know, we are talking about a technology that is really close to a general-purpose technology. So it's not so easy to identify a global model without applying it to the different sectors, because if we talk about healthcare it's one thing, and if we talk about, you know, the mechanical production of steel, it's another thing. But when we talk about accountability, we talk about the fact that at the end of the day there is someone, and that someone could also be a company, that takes the responsibility for the series of actions that were put in place by the system. You know, we can say that accountability, from the perspective we are talking about, is a promise enforced by the law, and the promise is: I take care of this, and I take the responsibility for this. Well, the real thing is that there are three different lines of action on this, and there is still not one model that wins and others that lose. There is one line of action that says that only if there is a direct responsibility of the developer for the function can there be real accountability. But this struggles with a lot of systems in which the complexity is so high, in which you have a real transformation of the model during deployment. And this is happening, for example, in the digital transformation of a lot of businesses. You can apply a really interesting algorithm in a healthcare system, and then you see that the quality of the outcome is really different, not because of the system you're deploying, but because of the nature of the object, which is the human being. I had a long, long talk last month in Seattle with neurosurgeons, and the problem in neurosurgery, especially in endoscopy, is that there is someone who has to adapt the theoretical displacement of the brain to the actual position of the brain that the system is discovering while you're doing the surgery. Well, where is the accountability here? It could be in the system, or in the system plus the doctor; it is so critical and so context-dependent that it probably cannot be the same model as applying it to a production line, or to something in which you define the whole system that has to happen. Because if you apply it in an industry, you define the production line, and because you've defined the production line, you know the guardrails and you know the standards. When you touch unpredictability, usually the accountability remains with the human being, for the real impossibility of having an abstraction of what the real situation is. When you give a gun to an officer, there is something that remains in the personal judgment of the officer in that situation of risk, and so probably there the discussion on accountability is different. There is a third model that comes especially out of China and other governments, not from a directly Western model, in which the harmony of the whole is more important than the right of the single individual. In that system, you look at the global number: if before you had 97% success globally and now you are able to go to 98, okay, it's workable for accountability.
I don't think there is one model that fits all, and we need to build something; the disagreement from before is the sign of this, in which there is an interdisciplinary committee with different voices and different perspectives that is able to say: this is a workable threshold of accountability. In which these kinds of things are made clear: would you like to produce a model that the deployer can then change in deployment? I don't think you want that. This is easy for academia, because we stay above it, you know? But when you deploy, a lot of things happen. Well, the real thing that is missing here, in the model, is a way to involve the deployer in this accountability stuff.

Speaker

I totally agree. However, you know, we are imposing this notion of a higher order of moral accountability on machines, and yet at the same time we don't have a compact between us humans about a higher purpose or higher moral accountability. And yet we are saying the machines must have one. So imagine a world where humans and machines are coexisting. Could we at least have a pact, human to human, human to machine, and machine to machine, about some idea of moral accountability? And yes, we do humor ourselves about the hallucinations of LLMs, but that makes me think of a quote from Margaret Mead, the noted anthropologist: what people say, what people do, and what people say they do are entirely different things. What she's saying is, we all hallucinate, right? So how do we bridge that gap? Are we holding machines to a higher expectation than we hold each other?

Speaker 2

Uh, well, this is really cultural, you know. I can accept that a doctor does what is possible and then the outcome is not positive. I would not accept going on with a machine treatment if there is something like that. But this is a cultural thing. The core question that you're raising is: could we have a universal ethics? This is the problem. Well, you know, I'm a scholar of the Jesuits; Jesuits always answer a question with another question. I can say this here because there are no Jesuits around. Well, in which way do we define ethics? If you understand ethics as values, well, let me tell you a secret: I'm a Catholic, and as a Catholic, I think that marriage is a high value, but I'm not married. So even within a single group like Catholics, values are connected to individual choices. So we cannot universalize values; it's not workable. Then, can we universalize moral norms? Well, a moral norm is an attempt to protect a value, and if values are variable, well, a moral norm will at least allow some kind of exception in some situations; there is a long debate on legitimate defense and things like that. So should we surrender to that? Well, there is a third element in ethics that we call principles. Supposing that we are in an autonomous driving car: does it make sense that, in the unavoidable situation of an accident, the car tries to minimize the damage? Well, whether you are someone who works on the street, or the owner of the car, or in another car: yes, it makes sense. Welcome to principles. Well, if we focus on principles as a practical way to resolve conflicts, we can define a set of principles that are universal, that can underlie this discussion of accountability, in which, if the machine is not minimizing the damage, someone will be accountable. Or take the machine that ends up in an autonomous satellite around the Earth. You know, the most valuable things on a satellite are the payload and the propellant. So we have autonomous systems. I was at a huge global conference in Paris a couple of years ago. You know, before, when we had NASA and ESA, they would make a phone call and say: we are in a collision orbit. Okay, this is my turn, I shift the orbit, I pay, I use the propellant; next time it's your turn, you shift the orbit. Now that you have a lot of private players, they try to call them and they don't answer; they try to send them an email and they don't answer. No one wants to pay the propellant for that. In that case, there is another principle at work: mind my own business, my own advantage. So this idea of principles, this idea of having a minimum set of workable principles, I think is the entrance door for accountability.

Speaker

Thank you so much, Father Benanti. You set me up wonderfully for the question to Aarti now. So, Aarti, you have done human-factors research and ML research. Let's say that we form an interdisciplinary committee, and the three of us are part of it, and we come up with, informed by Father Benanti, a set of principles. Could we incorporate that set of principles? Where are we, in terms of our own technology, whether in algorithms, human factors, or interfaces, such that we could build and deploy systems that will be true to that set of principles, and we can measure them?

Speaker 4

Yeah, I think that's a great question, and that's why I take the easier route, which is that I always compare, as I said, against the baseline: what are the humans doing, as you're saying, right? If the human is the baseline and they have those biases, we need to compare against that, to see whether those will be improved. That's why I try not to (I mean, in my personal life I do, but in development, not) think so much about global morals, or principles, because I think that's a bigger question. But let's say we agree upon some principles, right, with an interdisciplinary committee, which is totally what it would take: can algorithms be made to adhere to them? I think that's a hard question, because not everything can be quantified, and algorithms do need a metric, right? It has to be quantifiable. So to the extent that we can quantify things appropriately, and if this committee can agree on what that notion is, it can be incorporated in algorithms; otherwise it's going to be hard. But what I would say is that very often I think maybe it's not the model that has to adhere to all those principles; it's the process in which we use the model, right? You may not be able to establish trust in a model, because models are always going to have failures, but you can establish trust in the pipeline: the way the model was developed, the way it's being used. And I think that's a better framing to aim for, rather than asking, will my model adhere to some principles?

Speaker

So what you're talking about is almost like a use-case setting, how it's being used, defining...

Speaker 4

development and use

Speaker

...case development and use-case setting. Which also means that we have to think about, you know, equity by design, right? The design has to be representative. Like yesterday, our friends from Ascension presenting about their work in healthcare: they think about these big gaps in access, big gaps in patients' access to the healthcare system, big gaps in deployment, who you deploy to, the absence of data within the population that you're building for, even a significant gap in the demographic mix of the physicians and healthcare professionals that the patients are seeing. So there is a lot of complexity, from that perspective, in what has become a societal norm. So walk me through some of the thinking about how we achieve, and I do believe that data, machines, and all the work that you do could be quite responsive to this, how do we achieve equity by design? What could a well-designed AI system help us overcome, versus simply accepting what's almost a normative construction of society? Could we instead say: now we can help get better at it, we can help improve? I was talking to Frank yesterday about the data and the different clinical trials that could be done, which are much more representative, et cetera. So walk me through what you think: how could design lead to more equity?

Stakeholder Metrics Design

Speaker 4

Right. So I think the key ingredient in this is the engagement of the stakeholders throughout the pipeline, something I mentioned before, right? You really have to talk to the stakeholders, and that group has to be representative. So it's important to make sure that you are reaching out to a group that is diverse in every possible way, right? I think the mistake we very often make is that we define diversity very narrowly, and that leads us into trouble, into disagreements. It could be geographic diversity; it could be rural versus urban. You really have to think very broadly and ensure that you have all the stakeholders and engage with them throughout the pipeline: making sure, first, should AI even be used? Get an answer to that question. Where should you try to use it? Very often the use case you start with does not end up being the use case you actually implement; that always, always happens. What is the data we are using? Is it representative? What are the assumptions we are baking into the algorithm? All of these are design principles we can follow while developing the AI, right? And when it comes to metrics, even those should be elicited from people, from stakeholders. So I'll give you an example of that. Social welfare functions have existed in the social science literature for a while, in economics also, but were not used much by AI researchers, right? And even when they are used, you just make up some metrics and then try to equalize them. As opposed to that, you can actually learn them from data: you can look at people's past decisions and try to learn those metrics. What did they weigh when they made decisions? And then try to incorporate that. So that's just an example, but I think really that engagement through every step is the key, including, of course, the evaluation afterwards and refinement.
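One way to read the elicitation idea is as a minimal sketch like the following, assuming scikit-learn and made-up criteria names: instead of hand-picking a welfare metric, fit weights over candidate criteria from stakeholders' past accept/reject decisions, so the learned coefficients act as an elicited (linear) welfare function. This is an illustration of the general idea, not the institute's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: criteria scores for one past case; the columns are illustrative
# criteria (need, urgency, equity), not an actual deployed feature set.
past_cases = np.array([
    [0.9, 0.2, 0.7],
    [0.1, 0.8, 0.3],
    [0.6, 0.6, 0.9],
    [0.2, 0.1, 0.2],
])
past_decisions = np.array([1, 0, 1, 0])  # what the stakeholders actually chose

model = LogisticRegression().fit(past_cases, past_decisions)
# The coefficients reveal how much weight past decisions implicitly gave
# each criterion; they can then be audited with the stakeholders.
print(dict(zip(["need", "urgency", "equity"], model.coef_[0])))
```

A linear logistic model is only the simplest possible choice; the point is that the metric is learned from, and checked against, stakeholder behavior rather than invented by the developer.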

Speaker

Excellent. So let's say we do all that. Then, Father Benanti, one of the points you hit on, even when autonomous cars came up, and I still remember this from the last time you were here: we were together in my Tesla, and you asked me a question: do you own this car? I said, I remember writing the check; I own the car. You said: do you, or do you only own the tin box, or aluminum box, whatever you call it? At that time I said, no, I own the car. You said: who decides your software functions? Who decides your experience in the car? Because it's run by software, right? There is no engine; it's basically the software. You get an upgrade and you go, thank you, Elon Musk, I will do it; this is how you're telling me I should drive the car today, so I'll drive it like that. I have Grok in my car; I have an AI assistant in my car now. So you essentially said: you don't own it; you're just a subscriber to the wishes of a software firm about how that experience should change. And I felt horrible that day, Father Benanti; I was painfully thinking about it. And then we had a follow-on conversation the next time you came as well, and I've been thinking about it; we have to finish writing that paper, about a department of software studies. Because it's not just computer science anymore: we need to study the development and deployment of software; we need to study software as a being. And in fact, I was talking with someone yesterday who shared similar perspectives on what being software means. So, as we have these systems, autonomous cars and everything else, one big question you've raised is: should they take over? You know, I have tried to do proposal writing while using a self-driving car. Trust me, I was kicked out, banned from the self-driving car for months. So don't do that; it's not safe. But the proposal got done, so it's good. So, how would you think about trust in these uncertain machines?

Speaker 2

Well, first of all, that observation was from before Elon Musk went into politics. So...

Speaker

Yes, it was two years ago.

Speaker 2

Two years ago, in a different moment. Yes. Well, beside that: we are facing a radical transformation of society, which, you know, usually was defined by, let me say, hardware, not by software, and a lot of the things we are talking about today are due to this kind of transformation, software taking over that role. Try to think what it means from a regulatory perspective. You know, I was part of a couple of bodies that work on regulation. Well, if you have to define, for example, a knife: a knife is a piece of metal, and you can say that in an airport a knife cannot enter, a gun cannot enter. It's easy to regulate: yes, no, maybe. Okay. But if we have a software-defined entity like a car... let's take Chinese cars, just to not talk about Elon. In Europe we have a lot of Chinese cars, BYD. Well, no one guarantees to us that tomorrow, with an autonomous software update, they cannot become a bomb; they simply start to burn the battery. That could be a security issue, or they could steal the conversations of the people inside the car, things like that. So the problem is: in which way will you regulate an object that can change its nature with a software upgrade? And that is something that is straining society. This topic could simply be a sub-point of that: if we change the nature of the object in a non-stable way, how could traditional, stable legislation be applied to the new nature of this object? In the United States, I made my pitch at Georgetown, in bioethics: you know, you need a license to touch the body of someone. If you do someone's nails, you need a license; if you're a surgeon, you need a license. Well, we could have a series of software-defined objects that can touch the body of people without any license. Or in which way can you have a license for such objects? Well, this kind of questioning cannot simply remain a discussion between user and producer, because the nature of the object is so flexible, so changeable. And here is where we need a new sort of guardrail that has to define a sort of constitutional being of the object. Should we imagine a hard guardrail that does not allow such objects to become another kind or class of object? Because this is new, you know. And especially with the most fragile part of the population, it could be terrible: imagine a chatbot that was simply made for fun and interaction becoming the strongest push toward suicide for a young, fragile adult. So those are issues that do not depend on the developer, that do not depend on the deployer, but that depend on the complexity of reality. And for a complex reality, we need complex thinking. And this is the best bet of universities, like the one where we are: those are the places where complexity can live and exist and be offered to society. It's a new season with a lot of questions, but you know, it's a wonderful time to be a professor. We have something to talk about, and papers to write together.

Speaker

Absolutely, and very well said, right? Because this is where the universities come in. What are universities very good at? Forming interdisciplinary task forces and working groups, right? For everything, we form a task force. So maybe we need to form one for this, because there are different perceptions and we can bring in the stakeholders. And in fact, yesterday our friends from Accenture and Telstra were talking about this even from a business perspective, and I think they aligned with some of our discussions here. From the business perspective, what you're asking is: what does it mean for the business to begin with? Then let's go and do AI. At the Lucy Family Institute, where we have an NSF-funded program on interdisciplinary training for ethical data scientists, the undergraduates, as sophomores, interview the stakeholders first. They don't start writing code. They write the problem statement and think it through; the second chapter is to think about the ethical guardrails you may have to worry about, and only then do you start writing code. So we can do all that, right? And yes, it's a new season, a new chapter. How do we scale it? How do we make this not just a conversation that universities like Notre Dame, or LUISS, or Carnegie Mellon, which are the right places to have these conversations, can lead? Where are we with what we are seeing in all the AI systems today? We had our friends from the first institute dedicated to Latin America here, and what does AI mean over in Latin America? That data isn't represented; the voices are not in the data, the voices are not in the model; the developers have no involvement in building these things. So what do we do? At RISE AI, the responsible, inclusive, safe, and ethical AI conference, we have touched on safety, we have touched on ethics, we have touched on responsibility. But how do we ensure inclusivity in the data before the developers write a line of code for the algorithm? I'll start with you.

Speaker 4

Okay, so I would say... yeah, first, you're asking specifically about the inclusivity of data?

Speaker

Yes.

Speaker 4

Right.

Speaker

Because, you know, what I'm getting at is this: I agree with both of you, right? I think it's an important goal, something we should be talking about, and universities are well situated to do it. But how do we ensure it at scale? Even as a university, today we have every continent represented here, every continent a commercial aircraft can go to. So this is great. But how do we scale? You know, I have friends from Chile; they always say there is Spanish, and then there is Chilean. Right? Only they understand what they're saying, although it sounds Spanish. So it's their problem; they have to fix it. They're building the LLMs for it, by the way. No? Yes. Thank you. So that inclusivity isn't there. So how do we get that inclusivity of voice, of data, of the research people, to help us frame the developer minds, to help us frame the deployment mindset? And, something that you said, Father Benanti: at some point the deployers, too, could be confused.

Speaker 4

So, yeah, I would say that to me it is really about awareness and training. And to the question of scaling it up: how do you do that, right? You can start at every level. First of all, really think of it as starting at the elementary or school level, going up through, you know, education like community colleges, undergraduate, graduate, but also workforce upskilling and retraining. Just the idea of AI awareness: what's hype and what's real, what are actually the really concerning issues. And I should actually include outreach to the public as well, right? We really have to do it across all the levels, so that people start having a mindset which says: if I'm using AI, whether I'm developing it or not, even if I'm just using it, do I know what are the right questions to ask? At least that's the approach I think we have been taking in our institute, to really think through this vertical and about what we are doing in terms of raising public awareness. So we partner with several libraries to be able to do that, and we are doing it with high school educators. So, to your question about scaling up: initially we thought, well, we actually are developing a curriculum for high schools on AI and society; what's the best way to convey that at scale? The initial idea of training the school students directly is actually not as scalable. So we instead came up with a high school educator workshop: let's train the educators, who then know how best to talk to their constituents, right? So we run this workshop every summer. We can only accommodate 50 each summer, but we usually get many more applications. Those are 50 educators from all across the US, and we run it every year, and now they are starting to conduct their own workshops, right? That's the way to scale it up: you initiate something and you provide them with the tools. And it's not that we develop the content and say, here, go use it; it's really co-developed in these workshops. As an example, I was telling somebody earlier, we got several participants in the first round from tribal schools. We did not know how to create content that would really appeal to them. But these educators got involved and then created content that would suit their students; they came up with a module on classification of fake Navajo jewelry, right? So that's an example where we could not have done it ourselves, but we ran a workshop, they co-designed material that works for them, and now they can take it and propagate it further. And that's just one group; the same happens with many other groups. There are other folks coming from different backgrounds, rural backgrounds, other backgrounds, and they really know how to adapt the material. So, to your scalability question, that's my answer: train the trainers and then enable them to train others. That is one good way of doing it, but it has to be done across all levels.

Data Philosophy and Standards

Speaker

So your deployment layer is trainers, who then become developers. So it's like a cycle...

Speaker 4

in a way?

Speaker

Yes.

Speaker 4

Yeah.

Speaker 2

Well, let's start from the philosophical problem: what is a piece of data? It's something from reality about which you simply make a judgment, distinguishing it from noise. You know, if we have a piezo sensor, the piezo is in some way transforming into a signal something that is important for us in reality; it could be pressure, or it could be something else. So every time we talk about data, epistemologically speaking, we have decided that something else is noise. And the problem here is: which of the human traits could be defined as noise, or secondary, or not important to the process? On top of this, there is another, secondary philosophical problem, and it is that sometimes some brilliant new software engineer can use noise to extract more information. There was a really fancy experiment made three or four years ago in which something that is noise, the images in Google Street View that are not connected to the address, was used by researchers to make an inference about the voting intentions of a neighborhood. Because if you capture certain kinds of cars, if you capture something that is noise when all you need is to navigate, it actually becomes a piece of information in another channel. And this is interesting, you know, because we are facing something where you cannot say "this is noise forever": in this world, such noise can become data tomorrow. Well, this is the first philosophical layer that probably needs to be reflected on more, and put, together with epistemology, into the production pipeline of these tools. And this is part of the answer. The second part of the answer comes from the other standard in engineering. You know, if you buy a hydraulic pump, in which way do you know if that hydraulic pump is good or bad for your circuit? Because with the hydraulic pump you get a datasheet that states the limitations, the speed of the equipment, and its abilities. Well, because the models we train on data include limitations connected to the quality of the data, and there are a lot of mathematical theorems saying that the answer is coherent or not coherent with the data according to the sampling of the data, it's probably time to have some kind of transparency: datasheets connected to datasets. And this is not about certification by the producer of the algorithm; it is something that simply relies on transparency in the process: to obtain this kind of tool, we used these kinds of datasets. That allows people to understand whether this kind of tool can or cannot be adopted inside a system. May I make an exemplification with a joke? Suppose you make a classifier to decide if something is a cat or not a cat. You take a lot of pictures and you have to say yes or no, one or zero, cat or not cat. And you can have, you know, a lot of different cats, and at one point you have a picture of a tiger. Hmm. Technically speaking, it is a feline. Is it a cat or not a cat? According to the people that label it, it could become a cat or not a cat. Now imagine that it becomes a cat, and I'm using that API to make a fancy app for my phone that tells you whether you can pet this animal. If a tiger becomes a cat, it's better if I'm well insured, you know? Well, this is the level that we call ground truth, and if no one knows it, well, the ground truth is connected to the trustability and reliability of the system in real life.
So we need some kind of window here, a glass window, that allows the people involved in the production, the different developers and deployers, and probably also some kinds of users, to be aware of the limitations of the system. And it's a long journey to do that. Let's not push ourselves in the direction of regulation, but in the direction of standards.
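A minimal sketch of what a "datasheet connected to the dataset" could record, with illustrative fields (this is an assumption about shape, not a formal standard): the labeling policy makes the tiger question explicit, so a deployer can judge whether the ground truth fits their use case before adopting the tool.

```python
# Hypothetical datasheet for the cat/not-cat example in the conversation.
datasheet = {
    "dataset": "cat-vs-not-cat-v1",
    "collection": "web images, labeled by crowd workers, majority vote (illustrative)",
    "labeling_policy": {
        "cat": "domestic cats only",
        "not_cat": "everything else, explicitly including tigers and other wild felines",
    },
    "known_limitations": [
        "few low-light images",
        "labels reflect annotator judgment, not a zoological taxonomy",
    ],
    "intended_use": "consumer photo apps; not safety-relevant animal identification",
}

# A deployer vetting a "can I pet this animal?" app would read the policy first:
print(datasheet["labeling_policy"]["not_cat"])  # tigers are NOT cats in this dataset
```

Whether tigers count as cats is exactly the kind of ground-truth decision that is invisible in the model weights but decisive for real-life reliability.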

Speaker

I couldn't agree with you more. I think it's not about doing all this to craft new regulations, but rather about thinking through what the standards, guidelines, and principles could be. And it is a long journey, as both of you have said, all the way from how we scale it, to how we think about data and what data means, to what education means and training the trainers, to how we bring the world together. But at the same time, it's a very hopeful place to be, because if we collectively can start to think about these issues, and form interdisciplinary committees all across the world to tackle them, we could get it right. If eclectic groups of individuals like this one can gather at different places globally, we could begin to make a change, whether through what we are doing at Notre Dame or Carnegie Mellon or LUISS, or through the conversations that Father Benanti is having with the Vatican. Now, I could keep talking to these friends of mine, you know, and ignore that any of you are here, right? But that would not be good accountability to you. So please, ask questions. Thank you. And see, they remembered: first in, first out.

Speaker 5

Great. Good morning. So, Father, this question is for you. I am a Catholic graduate of this university and have clients with whom I talk about AI all the time. My sense of moral responsibility and accountability is tremendous. We get all sorts of training on the technology, the algorithms, stakeholders, this and that. What questions should I be asking, as a Catholic, in the forums that I participate in? And then also, how do I get ready for that? Because it's true that we have our teachings; how do I make sure they are front and center as we have discussions like this?

Speaker 2

Well, I think that, and this is something that I'm proposing to Natasha to work on, we can develop a sort of... you know, when you go to an emergency room, you have a series of doctors who have to ask you things to understand what the diagnosis is, because usually, 99% of the time, the difference is an error of measure. When you go to the emergency room, you say: I'm feeling bad. But that has to become a label, that is, a diagnosis, in order to have a cure. Well, what the doctors use is a best practice that is called triage. Should we put in place an ethical triage that allows, during this process of building value with AI, a clear view of the ethical points that are on the table? And that does not have to be Catholic; it has to be general, for the good, AI for good. So we can develop, and I think this is something we can work on, and Notre Dame could be an interesting place for it, an ethical triage for deploying AI. And I think that could be really workable, really scalable, and would work not in the direction of certification, but in the direction of producing an awareness of unwanted consequences that no one would like to have in their own product, because their own product will be their face, and face means values, means a lot of things. So that could be a workable way that allows you not to be responsible for the effects, but to give them the ability and the instruments to understand the complexity of the solution they are putting in place.

Speaker 4

Can I ask a quick follow-up? Just wondering: are there lessons to be taken from ethical triaging for doctors that could be applied here?

Speaker 2

Uh, yes. I think it's the long history of bioethics here, you know, in which you have different capabilities of the patients, and according to the different capabilities of the patients, the kinds of questions you can put in place can really change the real power that you give to the patients to be the protagonists, the real players, of their own therapy. You know, and this is where the complexity and the cultural things that you brought up before come into play. Try to think of minors, the ability of minors to give you answers, or a really young child who is not able to express something, or a dementia patient. So you can have a non-competent client who is looking for a solution but is not competent to describe what is under the hood. There is a lot of literature on that which we can, in some way, metaphorically use to make this much more complex and much more sophisticated, including, we can imagine, developing some kind of advocacy group that brings voices where there is no competency, for the people involved in such things. So if you're asking a philosopher to make it complex: yes, I can do it.

Speaker

Marco, go ahead.

Advocacy Beyond Academia

Speaker 6

All right, well, first of all, thank you, all of you. You've definitely given us a lot to think about; I have a lot of processing going on in my head, so hopefully I can articulate this question correctly. We talked a lot about how values can vary from individual to individual, and we've talked about inclusivity in data and that kind of thing. One thing I was thinking about: a conference like this, and really any conference, is generally attended by people who care about the subject, right? The same way an orthopedic conference would largely be attended by orthopedic surgeons, because they're the ones who are interested in and expert in that field. So my question is: how do we take what we're discussing here and bring in others, maybe people in positions of influence, who don't have the same values or haven't thought about this before? I know we talked about the younger generation and incorporating this from the ground up, but obviously there are people who are already well educated and out in various positions. And AI is scaling very, very rapidly, so we need to act now; across a lot of the talks we've seen people saying this is the time to get it right. So, to sum it up: how do we bring others into this and align with them, and how do we do it quickly? Easy question, I know.

Speaker 4

Yeah, I think, first, advocacy definitely plays a big role. And actually this is something I feel we do not do enough of as academics, including as professors. We never used to go down to Washington, DC to talk to senators. We are doing more of that now, and we should all be doing more of it, but also training our students to do it. If you take an average student in computer science, clearly at Carnegie Mellon but even at other places, they do not think about the impact of their research; they just think about the delta they're trying to improve on. We need to give them the soft skills: you should be able to talk to your grandma, to the person who does some work for you in your house, or even to people in more influential positions, as you were saying. There are many programs being started now that encourage this, starting from the student level but of course also faculty and academia, to go out there and talk. We had several Hill events, AI Hill events, where we went and talked to the senators and so on. The importance of doing that now is really crucial, and we should step up to the game.

Speaker 3

Yeah, maybe a couple more questions. Yeah.

Speaker 7

One quick question. Well, it's not actually a quick question. We are now in a time when AI is producing new knowledge: the appearance of autonomous AI scientists that produce new knowledge, pretty much unknown to humans yet. We can see this in computer science, chemistry, biochemistry, and so on, and it is changing the landscape a lot. Mostly what we've been discussing is existing knowledge and how to steer it in an ethical way, but now new knowledge is appearing, and in that way even science is democratizing. For a cost of maybe a few dollars, say $5, you can produce a paper that likely contains an innovation that was not seen before. This is only now; this was not true one year ago. Today we have this, and you can only imagine what it will be in a year or two. Could you put this appearance of automated AI science into the discussion we've had so far? Because now imagine agents. At the moment, agents are just pieces of software; we call them agents, they have a little bit of a protocol, and so on. But now there is the appearance of self-improving, recursive, adapting agents, which along the way can produce knowledge we are not even aware of yet. Where does this lead us? And this is how fast things go: while we talk, this is happening. So could you put this in perspective?

Speaker 2

Well, it's not an easy question, because again it depends where we apply it, but I think we learned something in drug discovery. It looked like AI would produce a new shockwave of drugs that could help with a lot of pathologies. But no, not one of those drugs exists yet. So the theoretical side of your question and the practical side of your question do not seem reconciled yet.

Speaker 7

Can you put this one or two years ahead?

Speaker 2

Well, yes, but it's a problem, because you also need to know how to transform that kind of knowledge into a production process, in drug discovery. This is where the dream of the first moment becomes a more complex thing. Maybe next January the first drugs come to trial, but there is a delay of five years, and we were imagining it as a tomorrow thing. So it could be a problem, but the real problem is how to make this kind of capability able to work with the research and discovery function that was usually connected to human beings. Also, for example, IP law: who will be the owner of such a discovery, and who could in some way have a patent on it? There are a lot of practical questions that are much more about society than about the system. If you want to produce papers, that is a different thing. But if you want to produce real things, we need to adjust society to be able to do that. And what we have seen until now is that there is no way to have a patent on something produced by a machine. So this is in some way challenging; it is challenging society, like the software-defined reality we talked about before. I don't have a clear perspective on that, but I see that there is some disillusionment about drug discovery.

Speaker

If it's a quick question, we'll take it. If not, I'll invite you to talk to them offline.

Speaker 8

It can be a high-level question. Okay. My question concerns the following: we've been discussing AI models, the accuracy of their output, and the direct impact of the outputs of AI. I wanted to understand more about your ethical perspective on the AI ecosystem as a whole, including the data centers being built in communities, which have an impact on the environment, and data acquisition in the public sphere. So, understanding not just the models themselves, but the impact of AI as an entire ecosystem that is going to develop over time.

Speaker

30 seconds each of you.

Speaker 4

So first, yes, I think people are starting to pay attention to that. But again, I think advocacy, and even the public speaking up, getting involved, going to town halls if they're being held and expressing your opinions, is really crucial. And yeah, that's my 30 seconds, or less than that.

Closing Thanks

Speaker 2

Well, in 30 seconds: the new Meta data center will simply take the Clinton, Illinois nuclear power plant off the grid, and all of its energy will be routed to this kind of data center. That means we are challenging a social thing that has not one solution but multiple solutions, and it is about how we should understand data centers and computational power. We are used to describing the social contract as something that happens around infrastructure: water pipes, energy, telecommunications. Now, is a data center a super-user of that kind of infrastructure, or should it be part of that infrastructure? In 30 seconds, I cannot do anything more than raise the question.

Speaker

Thank you all very much. Please join me in thanking both of them. Thank you.