Floating Questions

Brian Hsu: Definitions & Assumptions Behind AI Ethics, Shifts in AI Talent & Career Choice

Rui Season 1 Episode 5


What even is fairness in AI? Is there such a thing as too much diversity in AI output? Should AI reflect the world as it is, or the world we wish it to be?


In this episode of Floating Questions, Brian Hsu, a Senior Responsible AI Engineer at LinkedIn, takes us into the heart of responsible AI, sharing interesting tales from his journey from grad school stardom to the trenches of building fair and impactful machine learning systems. We also delve into why Brian chose to forgo a PhD at top-tier institutions like Stanford, and his reflections on the kind of AI talent needed in practice. This conversation is part technical, part philosophical, and wholly human. Tune in and decide for yourself: how can we really program ethics into the machines of tomorrow?

Rui: [00:00:00] Welcome to Floating Questions, the podcast where curiosity leads, we follow, and stories unfold. My name's Rui, simply asking questions. Shall we begin?

Hey, what's up? Good to see you again. Yeah, great to see you again. It's been like three years. Yeah, yeah. It's been a long time. Three years, a lot has changed. I'm in a pretty good place right now. I hope you are too. I'm doing great right now. Thanks for asking. And what's the big stuff in your life in the past three years?

Basically, since graduation, I've been at LinkedIn doing responsible AI, and that's been a great experience. I feel very fortunate to have done one thing for three years and still continue to enjoy it. And I think a large part of that is just being in a good team, good company, [00:01:00] aside from the work itself being good.

The major fork in the road that I had to walk was the decision to do a PhD or not. And I would say that was probably the biggest event in the last three years for me personally. After all, you were the number one in the entire class when we were graduating, and I'm pretty sure I was at the bottom.

Yeah, thanks. I'm surprised you remember that. Somehow that happened. I mean, as you can imagine, part of me being a big nerd in grad school was because I wanted to continue it. I had always wanted to do a PhD, and so my plan was like, oh, the master's is just a stepping stone to get there.

But essentially the story is that I was at LinkedIn for two years. And part of that was just to get a taste of industry life as an AI engineer, because before that I was in consulting, so I was just wanting to break into tech. So I did that for like a year.

And this whole time, you know, I was also trying to make my resume better as far as PhD applications go. And I was able to do [00:02:00] that. I was able to get a paper into NeurIPS along with some other co-authors at LinkedIn, and that made me really appreciate the whole process of research and sort of solidified me wanting to go into a PhD. And everyone was very supportive.

My tech lead at LinkedIn wrote me a letter. The professor I was doing RA work for at MIT also wrote me a letter. I thought it was a sure thing that if I got in somewhere, I would go. And I was pretty fortunate in that I got into some pretty good choices. Going back to MIT was one option, then Harvard, and then Stanford's business school.

And Columbia, in the engineering program. So doing the trips and talking to professors was really fun, because for once in my life, I was on the other side of the table. They were trying to convince me to go, offering me all the selling points about their school.

And like, you would succeed here and not other places, because, you know, we have better graduate outcomes or better funding and all this stuff. [00:03:00] I think they recognized that someone coming from industry who had done fairness was a little different from their usual profile for a grad student. And so they had a lot of interesting propositions for me as well.

I loved talking to them, but the reason why I'm still in industry and not doing a PhD is kind of a funny story. Essentially, the week before the deadline for signing the offers, I had basically decided in my head that I wanted to go to Stanford, just because it was near where I currently live. And a fellow at my company asked me to go get dinner.

So I was like, okay, I'm not going to change my mind, but let's just see what they have to say. And lo and behold, they changed my mind. So that's the short of it. But how did that come to be? They made a very good point, which is that we're in a world that's evolving in how it perceives the value of degrees, right?

So it used to be that you needed a PhD in order to work in this technical field. That's just not really true anymore. You know, now there are master's students [00:04:00] who are hired at LinkedIn. We even hire apprentice engineers to do ML engineering who don't have a master's degree, let alone a computer science degree.

So someone on my team right now is a biologist by training, and we're basically teaching them to be an engineer, and they can be really great at it. So the first point they made was: okay, you don't need a PhD to be a researcher. You effectively are already doing that at LinkedIn, and we have very smart people for you to work with.

And if you want to continue to write papers, yes, it'll be hard to find the time to do it, but you'd also make a lot more money while in industry. And I thought it was a good point, because these days, technology like ChatGPT has made it a lot easier for me to learn things. I'm also just lucky to be in a company with a lot of support and researchers to help guide me.

I still want to be a researcher at the end of the day, but I'm sort of taking the long path there. And their whole pitch was: why do a PhD if you're not going to be a professor? If you go do a PhD [00:05:00] and you come back to industry, that doesn't make a lot of sense. And when I was talking to the professors, I realized this is a very tough industry, honestly.

Even when we were in grad school, Rui, we worked with a lot of TAs, and they're incredibly smart people at top-tier institutions. But even then, they have a hard time finding jobs as assistant professors in the places they would want to. If you're okay being at the mercy of going to some random school, that's definitely an option, but if you want to have your choice and the autonomy to go wherever you want, that's reserved for the top graduates of the top PhD programs.

And I know that's not me. So yeah, that basically made me realize I'd rather just stay in industry and carve my own path and try to do research on the side. And that's pretty much where I am now. Interesting. I have two immediate comments slash questions. The first one is, I think this is the era to shape [00:06:00] fairness, or whatever thing you care about in AI, right, as you are trying to roll out products and build a business model on top of all the machine learning and artificial intelligence.

You really need that set of guidelines, and you probably actively work on things that are really needed in the world, instead of being at least one layer removed from it. So honestly, I think that's a great choice. And the second thing is, I'm actually curious, from your perspective, why is there a shift from "you have to have a PhD in this field in order to even start being an AI engineer" to "you don't really have to have the background, and honestly, you can just come and learn the ropes"?

Yeah, that's a really great question. I think it's really a matter of just what you can do with a basic understanding these days. There are honestly pros and cons to it. For example, take what we learned [00:07:00] in grad school, which is a lot of optimization theory. That stuff is great if you're trying to solve really specific problems. But a lot of the practical problems being solved, at least at LinkedIn by AI engineers, are things you can understand and deal with without a degree in optimization theory.

I think a lot of people are recognizing that as long as you understand things like how to create new features or adjust your model somehow, you don't need some crazy theory to add value to your company. Because at the end of the day, we care about boosting metrics like engagement or sessions or something.

And doing that doesn't actually require any theoretical understanding. It helps to have theory to guide your exploration, but a lot of the work doesn't necessarily require it. And the other thing is, with generative AI, a lot of the innovation is coming from people who know how to use it well.

And also the engineering behind it. From a theory perspective, I've been reading about how language models work and such, [00:08:00] but a lot of the theory right now is more like guessing: oh, we think that this is a theoretical framework for how language models do in-context learning. You don't really know for sure.

Rather than trying to figure out what the formal theory is, I think a lot more of the gains have been coming from just the raw engineering behind it. It's like, oh, we make the inference costs really low, so you can just call the LLM a lot of times and have it figure out what reasoning path it should take, and so on.

I think a lot of the engineering benefit, which is detached from the theory, has been driving a lot of the innovation and value. And that's why, these days, I think people are more accepting that if you're a really good hacker, that by itself can be a huge contribution to a business, rather than someone who's just purely good at theory. That makes a lot of sense, because first of all, I would say there's a very small percentage of people who are probably really suited to developing comprehensive or really cutting-edge frameworks and [00:09:00] algorithms, right?

Assuming that's true, then the actual modeling research will be concentrated in that group of people. And maybe this group of people will be naturally attracted to open-source model research, or to big tech that sponsors theoretical research. Yeah. So then the rest of the application is about how you take whatever is out there and apply it to your domain in a way that just makes sense.

Yeah, exactly. That's a great way to put it. It's really just that the marginal gains these days are made in theory. A lot of the big leaps in actual business value, I think, are coming from engineering, and from people who are really clever in the way that they hack together these systems and models.

As you were describing, the infrastructure is actually the thing that really enabled the development of the LLM era, [00:10:00] alongside probably the sheer amount of data available online, right? This is sort of analogous to the early internet days: when you finally had more bandwidth, faster connections, and better personal computers, you started to have more e-commerce, that new wave of business.

Probably this is similar to that, right? Yeah, yeah. I think those parallels are all very true. Well, so far you haven't done a self-introduction yet. I know you sort of wrapped it up in your story so far, but is there anything else that maybe you would like to add? Let's see, not too much. I guess a lot of what I do now is basically responsible AI at LinkedIn.

And then outside of that, I try to keep up with a bit of fundamental research in the fairness space, just thinking about some of the more traditional problems in fairness as well. Outside of that, I'm a [00:11:00] typical Silicon Valley computer guy, which is to say, you know, weekends go towards reading or tennis or playing video games and hiking.

That's pretty much me these days. It's a short list of things I do, but very stable, and I'm very happy with it. One quick question: what really propelled you into responsible AI? Because you can pick a lot of different areas of AI, right? But why the responsible part of it? Yeah. When we were in grad school, we were studying optimization theory.

And we had one class about fairness. And I just thought of fairness as: you optimize some objective subject to some fairness constraints. So I was like, okay, it's just an application of an optimization problem; that sounded pretty interesting. And that was the way that I saw its role in machine learning.
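In symbols, that constrained-optimization framing of fairness might look something like the following; this is one common textbook form, not necessarily the exact formulation from that class:

```latex
\min_{\theta} \; \mathcal{L}(\theta)
\quad \text{subject to} \quad
\left| \, \mathbb{E}[\hat{y}_\theta \mid A = a] - \mathbb{E}[\hat{y}_\theta \mid A = b] \, \right| \le \epsilon
```

Here \mathcal{L} is the ordinary training loss, \hat{y}_\theta the model's predictions, A a protected attribute with groups a and b, and \epsilon a tolerated disparity.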

Yeah. I just thought it was an interesting match for what we had learned in school. The second reason is that, to me, it's a more interesting field in terms of really touching different [00:12:00] aspects of tech. There are very few teams at LinkedIn that get to work with product people, data privacy people, and legal as well.

Well, we talk to a lot of academics who are interested in how they can gear their theoretical work more towards a practical setting. We think about things from a theory standpoint, and we think about them from a governance or managerial standpoint. And I think in that sense, it's very unique compared to some of the other, just purely product-focused teams, right?

Where you only care about the engineering aspects, efficiency, and whatever metrics your PM tells you to care about. So I ended up really liking that whole multifaceted angle, where you have to think about different things, talk to different kinds of people, and get their perspectives. And it's also informed how I think as well.

I've also developed my own thoughts about what responsible AI should look like, what other people think, and how mine is similar or different. Okay. Wow. That's a natural segue into what responsible AI [00:13:00] and ethics really are. Yeah, because I think there are so many layers, right? There's the training data layer, there's the model architecture layer, and there are the decisions made on top of the model.

Feel free to just pick one area and then start from there. I guess maybe I can talk a bit about the origin. I think one of the really big kickstarters for responsible AI was the use of AI for recidivism prediction. When a convicted criminal wants to go up for bail, the court has to make sure that they won't commit another crime and pose a risk to society.

And so a company had built essentially a model to make this prediction. And that really got people to start asking: is this being fair with respect to demographics? Is this AI model fair? And so fairness largely has a very specific algorithmic definition, related to the expected outcomes and how they vary across [00:14:00] groups, conditioned on some stuff.

That spurred people to think about it in other cases. It's like, oh, what about loaning, credit lending? People have very intuitive definitions of it, right? For example, the first thing that people usually think about is equality. Let's just simplify and say we're in a binary gender world. If it's giving loans to 50 percent of males, it should also give loans to 50 percent of females.

And that's people's most intuitive understanding of what fairness looks like. But of course, if you're an algorithms person, you start thinking, well, that doesn't really make sense, because the base rates of qualification may be different. Maybe, for some reason, in a specific locale, males actually repay their loans more frequently than females, for whatever reason.

Then you want to account for that base rate, essentially. And there you start getting into the problem of: okay, so now fairness isn't just equality, it's equality conditioned on some other stuff. And I think that's where there are different perspectives on how you define that specifically.
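To make those two notions concrete, here is a minimal sketch in Python on synthetic loan data; the numbers, the binary-gender simplification, and the variable names are all illustrative, not from LinkedIn or any real lender:

```python
# Contrast "raw equality" (demographic parity) with equality
# conditioned on the base rate of qualification (equal opportunity).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gender = rng.choice(["male", "female"], size=n)

# Hypothetical differing base rates of qualification per group.
qualified = rng.random(n) < np.where(gender == "male", 0.6, 0.5)

# A lender whose approvals track qualification, not gender.
approved = rng.random(n) < np.where(qualified, 0.8, 0.2)

def approval_rate(mask):
    return approved[mask].mean()

# Demographic parity: compare raw approval rates across groups.
dp_gap = approval_rate(gender == "male") - approval_rate(gender == "female")

# Equality conditioned on qualification: compare approval rates
# among the qualified only.
eo_gap = (approval_rate((gender == "male") & qualified)
          - approval_rate((gender == "female") & qualified))

print(f"demographic parity gap: {dp_gap:+.3f}")  # nonzero: base rates differ
print(f"equal opportunity gap:  {eo_gap:+.3f}")  # ~zero: same treatment given qualification
```

On data like this, the raw gap is nonzero simply because the base rates differ, while the qualification-conditioned gap is near zero, which is exactly the tension being described.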

I think it's [00:15:00] very interesting, the different definitions that people have come up with. But in general, I think fairness in the algorithmic sense can be quite different from fairness in the intuitive sense, which is largely about equality for everyone. Fascinating. And I have to say, I'm a little bit nervous about having this conversation, especially in this climate.

I'm going to take a deep breath and then let me ask the question. For whatever reason, let's say the male population does pay back loans more than the female population, so this dimension does help you get a better risk signal. Yeah. Can you share why, or why not, we should include this dimension in the model training? Because if the world is what it is today, right?

Then for a business, you probably should take that dimension into account, [00:16:00] because there are more systematic reasons why that is the case, right? Maybe somehow males get better jobs or better education. So the fix isn't about the algorithm; the fix is more about societal problems. Right, right. So the question is essentially: should we include these demographic attributes, or things related to them, in the model?

Right. So going back to the whole loan prediction thing, a common criticism is: maybe you don't directly use people's gender as part of the loan application, but you use their zip code, and by using their zip code, it's partially an identifier of their gender, because one gender is more prominent in one zip code than another. The same thing is said a lot about race as well.

So it's kind of a proxy variable. And when are these proxy variables actually good or bad? Well, the [00:17:00] good news is the whole ML community has thought very hard about it, and the current lingo used to describe this falls into the distribution shift literature. Essentially, the idea here is that different groups may fundamentally behave differently, conditioned on the data that you have.

And we live in a world where we have a lot of data, but we don't have everything. And when we don't have everything, we have to just work with what we have, and there are some things that we just can't observe. Because of that, it appears that different groups behave differently. So to give an example: in LinkedIn's job recommendation algorithms, you find that males tend to prefer software engineering jobs, and they're more qualified for these jobs, for whatever reason, at least to the recruiter, than females are, so they get clicked on more.

Well, is it that males are more responsive to recruiters in general, or is it because we haven't observed enough things? Maybe it just happens that males are on the [00:18:00] platform more frequently, so they respond to recruiters at a higher rate, which makes them more appealing to recruiters. Or males tend to just provide richer data on their profiles, which makes them more appealing to recruiters.

If we're in a world where you have one person, again under the binary gender assumption, in a male form and a female form, and they're exactly the same, then you would expect them to behave the same, because the data is exactly the same. But what if there is something you don't observe, and it appears that the male form and the female form actually behave differently?
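A tiny simulation of that "unobserved variable" story, with made-up numbers; responding to recruiters here depends only on a latent activity level, yet the groups look different when you can't condition on it:

```python
# An unobserved variable (platform activity) that differs across groups
# makes the groups *appear* to behave differently when you can only
# condition on what you observe. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gender = rng.choice(["male", "female"], size=n)

# Latent, unobserved: how often someone is on the platform.
active = rng.random(n) < np.where(gender == "male", 0.7, 0.5)

# Responding to a recruiter depends only on activity, not on gender.
responds = rng.random(n) < np.where(active, 0.6, 0.2)

for g in ["male", "female"]:
    print(g, round(responds[gender == g].mean(), 3))
# male ~0.48, female ~0.40: a gap appears even though behavior,
# conditioned on the unobserved activity level, is identical.
```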

So when they behave differently, you sort of want the model to be aware of that. And by being aware of that, it can actually make the model more fair and better performing. And that's actually the type of solution that people generally talk about when it comes to fairness. So just to make sure that I understand what you were saying, I'm going to continue using the example that you just gave about LinkedIn and clicks on job postings.

[00:19:00] You are saying the fundamental assumption is that males and females wouldn't behave differently if you knew everything, like if you had all of the data. Right. Fundamentally, you're saying these two groups would behave exactly the same, assuming that you know everything about them. Yeah, that's the typical assumption for fairness.

And why is that an assumption? The tenets of fairness essentially want to assume that people don't have vastly different behaviors. And if they do, then you don't want to be the arbiter of that behavioral difference. If you observe that different groups behave differently, who are you to say that this is the ground truth, the population difference between the groups?

So rather than assuming that there is a population-level, ground truth difference, you would rather assume that people are the same if you had all the data. And people may have different [00:20:00] philosophies towards that, and the fairness community does have a specific term for it. But I think that's the assumption that I personally take when I'm thinking about algorithms.

What are the other alternatives? What other assumptions could be reasonable as well? Yeah, yeah. Well, you could also just fundamentally believe that people are different, right? So even if you had all the data in the world, and you got a male and a female into a room where they share exactly the same upbringing and current status and all that stuff,

they still just have different preferences, and it's because of their gender. And if you believe that's the kind of world we live in, then fairness, I would say, becomes a very tricky problem, because again, you now have to assume that there's some population difference. And I think that's where it gets complicated, because now you're becoming an arbiter of what that difference is.

And, you know, maybe you can, maybe you can't. [00:21:00] That's a philosophical decision. Thank you so much for explaining the reasoning behind that assumption. The next question is: well, that is a great theoretical framework, but obviously you don't have all the data and you don't have all the conditions. How do you deal with that?

To recap the story: the fundamental assumption is that if we did have all of the data, and we were able to use all of it in a model, then people would behave the same conditioned on their data. But we don't have all the data. And when that happens, what can end up happening, as far as the model is concerned, is that different groups behave differently, because we aren't able to condition on some things.

So what happens in that case is, a lot of times, when you go and estimate, say, the loan repayment probability, you will find that there's a difference between groups. And in that case, you essentially want to tell the model that there's a downward [00:22:00] skew for, say, females. And if you want fairness, then you would want to adjust that upwards, essentially.

Or you want to make it match the empirical truth, based on what you observe. And so if you tell the model this, then it can actually make the distinction, rather than implicitly learning that these groups are different. And that's actually a very common approach to mitigating biases in these machine learning models.

Which is to say, you just directly use the demographic data, or proxies of the demographic data, which sounds totally against conventional wisdom, right? Because at the beginning, I was saying a lot of people raise the concern that if you use zip code, then the model can infer this demographic data. But actually, that's what you want it to do in a lot of cases.

Otherwise, you get the worst alternative: the model learns to make the distinction itself, and you may not have good control over how it learns that. This is fascinating for me to hear. I'm a little curious about the "how" part, the mechanism of [00:23:00] adjusting the weights for a specific characteristic of a group of people.

Do you adjust the weights at the training data layer, the modeling layer, or the decision layer? Yeah, good question. So one immediate way to do it is to adjust at the decision layer, which is to say, and we have a paper about this actually, say the machine learning model estimates the loan repayment rate for females to be, like, 60 percent.

But then, conditioning on that score, you go and estimate: of all the females who get 60 percent, how many actually repay their loan? And it turns out to be 80 percent. Then you basically tell the model: when you predict 60 percent and it's a female, I'm going to make the decision pretending that it's 80 percent, because that's what actually matches the empirical outcomes.

So there I'm directly making a decision that uses a person's gender, but it uses it to their advantage, because the machine learning model was previously underestimating their likelihood, and now I'm [00:24:00] telling it the correct likelihood. So that's an example of how one would adjust the decisions.
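A minimal sketch of what that decision-layer adjustment could look like in code; the binning approach, the data, and the group labels here are hypothetical stand-ins, not LinkedIn's actual system or the method from the paper mentioned above:

```python
# Per-group score recalibration at the decision layer: map each raw
# score to the empirically observed outcome rate for that group and
# score bucket (e.g., a 0.60 score whose bucket repays at 0.80 becomes 0.80).
import numpy as np

def fit_group_calibration(scores, outcomes, groups, bins):
    """Learn, for each group, the empirical outcome rate per score bin."""
    tables = {}
    for g in np.unique(groups):
        m = groups == g
        idx = np.digitize(scores[m], bins)
        tables[g] = {
            b: outcomes[m][idx == b].mean() if (idx == b).any() else None
            for b in range(len(bins) + 1)
        }
    return tables

def calibrated_score(score, group, tables, bins):
    """Replace the raw score with the group's empirical rate for its bin."""
    b = np.digitize([score], bins)[0]
    empirical = tables[group].get(b)
    return empirical if empirical is not None else score  # fall back to raw

# Hypothetical usage: the model says ~0.6 for these applicants,
# but 4 out of 5 of them actually repaid (~0.8).
bins = np.linspace(0.0, 1.0, 11)
scores = np.array([0.60, 0.61, 0.62, 0.63, 0.64])
outcomes = np.array([1, 1, 1, 1, 0])
groups = np.array(["female"] * 5)

tables = fit_group_calibration(scores, outcomes, groups, bins)
print(calibrated_score(0.60, "female", tables, bins))  # 0.8, not 0.6
```

The key property is the one highlighted in the conversation: the group attribute is used explicitly, but only to correct an underestimate, so the adjustment works in the group's favor.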

Interesting. I can see this as a reinforcing loop, right? Maybe this is addressing the fact that fundamentally the online training data isn't representative enough of that population. So then you make a slightly different decision in the end, despite the probability the model gives you. Then over time, because you let more female loan applications through, you end up collecting more data that actually reflects the true probability of loan repayment.

And that becomes your training data again, until potentially what you observe and what you predict match. Yeah, yeah, exactly. There's a lot to say about feedback loops, and that's a whole other beast, but essentially what you said is right.

Maybe there's some discrepancy in your training data that doesn't match actual outcomes for whatever reason, [00:25:00] and what you want to do is essentially teach the model this behavior. Fascinating. Before the chat, you sent me a link to Anthropic's constitution for AI, for their own model training. I read that blog.

Are you also developing some form of constitution for LinkedIn's AI practice? Currently, I'm not super involved in that process; some other folks on my team are. I personally think this will be a pretty common method that people use, just because different companies have different perspectives on policy.

You know, whether it's Anthropic, OpenAI, or Google, their policies don't represent every company's needs. Constitutional AI essentially adds a layer of "what are your company's priorities and policies" to the language model. I didn't really understand how they [00:26:00] really baked the constitution into the model training.

I saw a very simplified diagram of that process, but if you understand it a little bit more, please help me. At a high level, it comes down to making the right preference in terms of generating the outputs. So ChatGPT was traditionally trained with the reinforcement learning from human feedback notion.

When you ask it a question, it creates maybe four answers or so. And then a human annotator or a machine learning model would rank these answers in terms of how good they are. And "how good they are" can span multiple dimensions, right? You can imagine one dimension is helpfulness: you don't want it to just say "I can't answer that" all the time.

Another dimension might be the tendency to spill bad information or harmful content; you may down-rank things because they contain bad language, and so on and so forth. Essentially, you have a way of reinforcing that this answer, out of the four answers, is correct. You can imagine how that process translates to constitutional AI, where this [00:27:00] answer aligns better with the constitution as I'm describing it, and therefore the model should generate things that are more similar to this answer.
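Here is a toy sketch of that ranking step; the principles, scores, and weighting are all made up for illustration, and a real pipeline would use a judge model per dimension and feed the resulting preference pairs into reward-model training:

```python
# Rank candidate answers against a small "constitution" and keep the
# best/worst pair as (chosen, rejected) preference data.
from dataclasses import dataclass

CONSTITUTION_WEIGHTS = {"helpfulness": 0.5, "harmlessness": 0.5}  # policy choice

@dataclass
class Candidate:
    text: str
    helpfulness: float   # in practice, scored by a judge model per principle
    harmlessness: float

def constitution_score(c: Candidate) -> float:
    return (CONSTITUTION_WEIGHTS["helpfulness"] * c.helpfulness
            + CONSTITUTION_WEIGHTS["harmlessness"] * c.harmlessness)

def to_preference_pair(candidates):
    ranked = sorted(candidates, key=constitution_score, reverse=True)
    return ranked[0], ranked[-1]  # (chosen, rejected) for the reward model

answers = [
    Candidate("I can't answer that.", helpfulness=0.1, harmlessness=1.0),
    Candidate("Here's a careful, useful answer...", helpfulness=0.9, harmlessness=0.9),
    Candidate("Here's a reckless answer...", helpfulness=0.8, harmlessness=0.2),
    Candidate("A vague non-answer.", helpfulness=0.3, harmlessness=0.9),
]
chosen, rejected = to_preference_pair(answers)
print("chosen:  ", chosen.text)    # the careful, useful answer
print("rejected:", rejected.text)  # the reckless one
```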

And therefore the model should generate things that are more similar to this answer. I have to say when I was reading that article. Just like you said, they try to break down different dimensions, like helpfulness, harmlessness, all that stuff that the company and in general people care about, and I was just thinking.

Jesus Christ, AI really just drop all the philosophical and existential questions right in front of professionals in the past, you don't really have to think about probably a lot of this. But now you're gonna have to define what is ethical. I mean, you might as well just do a philosophy PhD and become a professor.

Do you feel intimidated when you are wrestling with all of these questions? Do [00:28:00] companies that you have seen in the field hire philosophy professors to really get involved in this type of conversation? Yeah, it's a really great question. I think there's so much debate about it. And it's healthy debate, honestly, because now we're in a day where people can see and interact with AI, and they can make calls about what's fair or biased or irresponsible.

And a lot of interesting perspectives have come out of it. The example I like to point to is the Gemini model. There was a big issue where it was generating things that were too diverse. So for example, if you asked it to generate a picture of a Nazi, it could generate an Asian Nazi. You can understand the intention was, okay,

you know, I don't want doctors to just be Asian or white. But I think they kind of took it too far: oh, Asians can be Nazis, or the Pope can be Indian American, a weird mismatch of what's actually true [00:29:00] versus what's theoretically possible in a very diverse world. This is the kind of stuff that made me reflect a bit on, philosophically, what fairness is, because what's offensive to one person, like seeing an Asian as a Nazi or Asians as serial killers, may not be offensive to other people.

Maybe there's someone out there, and this is an absurd example, who's offended that Asians can't be serial killers. You're like, oh, you're excluding me from a potential role. No, a hundred percent. And I have to say, I mean, Asians were Nazis during World War Two; Japan was in the same camp as Germany.

Yeah, yeah, yeah. I mean, in that particular case, it was generating Asians as Nazis in the German sense, not in the colonial sense. But yeah, I do see your point. There's some reality where this diversity is interpreted as helpful or interpreted as harmful. And so it becomes a very sociological problem.

This is beyond algorithms. What does human society believe [00:30:00] is an adequate amount of diversity versus too much diversity? Yeah. And you will never get an aligned answer on that front. Exactly. And I think this is the part that, I feel, makes working with AI both exciting and also terrifying, because at the end of the day, you're not paid to be a philosophy professor; you're paid to get things done and build a great product for people.

And also, by the way, the company has to make money. Where do you really draw the line between "we have done enough to address the most obvious potential problems" versus the public always being able to perceive that you are evil and not doing an adequate job of addressing all of these issues, right?

Yeah, yeah, exactly. Part of what makes fairness slash responsible AI exciting right now is that there really is no right answer, and societal preferences will evolve as well. [00:31:00] I'm still personally trying to think about how I should think about the situation. My perspective towards that Google fiasco was: well, as a user of AI, as long as it's useful to me, then that's what I consider fair.

And I think fairness for generative AI should mean that it's equally useful for everyone, whether you're using it for school or anything else, and equally accessible for everyone as well; everyone can use it to the same capacity. And what would be unfair is if some group maybe doesn't have access to AI or doesn't know how to use it as well as another group.

Then I feel like that's something we should work to correct. But then I was telling that to someone, and they said to me: that's a good perspective, but what's useful may also reinforce bad things. You as a person may find stereotyped statements useful because they're funny, but it's not good to reinforce these things for people, because they may just get the wrong ideas.

And I was like, you know, that's also true. I guess as a company, or as a researcher, one just has to think very carefully about [00:32:00] how you assess people's opinions. And I think the future, especially getting back to constitutional AI, will probably involve a lot more crowdsourcing. When I say crowdsourcing, I don't just mean some people in a certain country, post-colonialism, getting very low pay to make decisions about what's good and bad, following some policy created by big tech.

It should really be society, like the public at large. And maybe that'll be a future world someday. Oh, a thought just flashed through my mind, which is: when you talk about society, and let's say OpenAI, do you mean collecting feedback from U.S. society or global society? Yeah, yeah, it's a good question.

I don't really have the right answer, honestly. It could be tailored to different countries, or it could be maybe a global representation. Yeah. Again, this is exactly one of those middle-ground things where society as a whole will probably have to make a [00:33:00] decision. This is another point where, I think, tech is breaking the traditional national boundaries.

And it's fascinating to think about: if you build a large language model that's being used by billions of people around the world, and they all give you feedback, in some sense this model is much more global. But this model is owned by a specific country, or at least an entity in that specific country.

Maybe it should be hosted by, I don't know, the United Nations or some type of international body. Yeah, I don't know. It's a good question. Right. And then it also leads to the question of who society should optimize for. And there, I have an interesting little philosophical tidbit: Rawlsian philosophy, formally.

It basically asks who is most deserving of help in society, and the Rawlsian perspective is: whoever is [00:34:00] worst off. And why is that? Well, the thought experiment is essentially: if you gather society into a room, and everyone forgot their identity, as in they don't know what race they are.

They don't know their gender, economic status, and so on and so forth. Who would this group of very forgetful people choose to protect? And Rawls says they would choose to protect the people who are most disadvantaged. Why? Because you could be part of that group, but you don't know, because you've forgotten your identity.
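As a rough formalization of that max-min idea when it shows up in model training, writing \mathcal{L}_g for the model's loss on group g (the notation is illustrative, not from the episode):

```latex
\min_{\theta} \; \max_{g \in \mathcal{G}} \; \mathcal{L}_g(\theta)
```

In words: make the model as good as possible for whichever group it currently serves worst.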

This is a very common approach in AI as well, and I think it's a very clear example of the importance of philosophy these days. It at least gives rules of thumb for making these decisions: who should we advantage, who should we care about during model training, and so on. Maybe the PhD that you were supposed to get isn't in optimization or AI engineering.

You should actually get a PhD in philosophy. I mean, seriously. Yeah. I [00:35:00] mean, if there were unlimited time in the world, I'd probably just spend my entire existence in universities. I think this kind of stuff is so fun to think about.

Yeah, for sure. Thank you for the discussion. I've learned so much from you, and just some very eye-opening points around how to think about responsible AI. That'd be awesome.

Rui: This could be the last episode of Floating Questions, or it may not be. Either way, I hope you enjoyed flowing along with us today. If you liked our journey, please consider subscribing. Thank you for listening, and may the questions always be with you.