Cool Careers & How You Got Them

2.1 - Tech Policy Fellow Lama Mohammed

Zain Raza Season 2 Episode 1


In this episode, Ms. Mohammed describes the ins and outs of tech policy and all that it entails, introducing plenty of new ideas and concepts along the way.

Speaker 1:

Hello everyone, welcome back to another episode of Cool Careers & How You Got Them. I'm your host, Zain Raza. I'm sure lots of adults ask you what you want to be when you grow up, and if you're anything like me, you don't have an answer for them. Hopefully, my podcast gives you some inspiration. I'm very excited and lucky to be joined today by Ms. Lama Mohammed, who has a very cool and interesting career. She's dedicated her career to the intersection of tech policy, social media and cybersecurity. Without further delay, let's get into it. Ms. Mohammed, how are you?

Speaker 2:

I'm great, Zain. Thank you so much for having me.

Speaker 1:

Absolutely, of course. Thank you for joining us today. So, before we even get into everything, I just wanted to say that I learned so much preparing for this episode, just by learning about what you do. I feel like if everybody who listens to the episode today can get even a tenth of what I learned from hearing us talk, it'll be a great success.

Speaker 2:

Awesome, awesome, thank you.

Speaker 1:

Okay. So yeah, let's start off how we always do. What is your official title?

Speaker 2:

So I am currently a tech policy fellow at New York University's Center for Social Media and Politics. I started in January, so I'm fairly new to the role.

Speaker 1:

Now can you talk a little bit about what your organization does and then your role within that organization?

Speaker 2:

Yeah, absolutely. So the Center for Social Media and Politics, also known as CSMaP, is a research center housed within the politics department at NYU, and we essentially study the intersections of social media, technology and democracy. So think about misinformation and disinformation and how those things trend online, the effects of misinformation on how people vote, propaganda online and how it affects how people understand particular political issues, foreign influence campaigns and how they might affect how people vote, and basically understanding how the digital and information ecosystem impacts society. So that's what the research lab does.

Speaker 2:

We see now in the digital age that a lot of elites, such as our political and elected officials, use social media like Twitter (currently known as X), Instagram and Threads to communicate with their constituents. But also within social media, we see a lot of opportunity for threat actors to post misinformation in attempts to affect how people understand and communicate. So in order to mitigate some of those things, we now see policymakers trying to pass social media laws and privacy laws. We now have the AI Bill of Rights, we have the NIST AI framework, and the White House recently released a cybersecurity guideline. That's a way to commit businesses to responsible technology practices. But a lot of the time, some of the bills that we are currently seeing in Congress aren't really well informed by research. A lot of the time, the reasons for passing an online safety act, for example, are based on misinformed folklore. There's a lot of fearmongering about how AI is going to take all our jobs, or the big tech bros are coming for us, or Big Brother, and that pushes policymakers to propose policy getting at the wrong ideas. So the aim of our center is to take what we've studied and help policymakers make well-informed decisions based on empirical evidence. A lot of the time, for example, people will talk about post-2016 and how Cambridge Analytica changed how Americans voted. Our research actually shows that this wasn't the case: while there was foreign influence spread across a lot of Americans' Facebook profiles, it didn't really change the outcome of how they voted. And that's really important, both for media, in terms of actually reporting factual information, and to help policymakers change how they prioritize the regulation of tech and focus on the right ideas.

Speaker 2:

So that's the purpose of CSMaP. As for the purpose of my role: I'm the only policy person in my research lab. Most of the people employed are postdocs, PhD students and very technical research people specializing in things like political theory, statistical analysis and sociology. So the goal of a policy fellow is basically to take the research that CSMaP produces and translate it for a policy audience, because a lot of the things that we produce are very important and very influential; some of the things we publish are on the front page of the New York Times. But in order to actually understand the important aspects of the research, you have to have a bit of an understanding of statistics and political theory, and that's really hard for some policymakers. So, as someone with both a computational science background and experience in policy translation, my goal is basically to take the research and put it in a digestible format for policy audiences.

Speaker 2:

And, in addition to taking the research, I sit with a lot of people in government, both in the US and the European Union, talking to them about AI and election misinformation and commenting on the direction of particular policy.

Speaker 2:

So I was recently asked to give my thoughts about Article 40 of the Digital Services Act, which is a European technology law. So that's one component. The other component is making sure that the lab is well aware of things that are happening on the federal, local and state level regarding tech policy, because a lot of the time researchers tend to be very in the zone with what they're trying to produce and don't have much time to think about what's happening outside of their area of expertise. So my goal is also to keep the lab informed about what's happening in policy. And then, third, I also help produce particular products that are at the intersection of what we do and policy. So I'm currently working on an op-ed about TikTok and the importance of making sure that the direction we're going in regarding the TikTok ban, bill, divestment, whatever you want to call it, is based on actual research and not folklore. So that's what I do on a day-to-day basis.

Speaker 1:

Okay yeah, super simple stuff, not complicated at all.

Speaker 2:

Nothing complicated, nothing about saving the world or anything like that.

Speaker 1:

Yeah, no, super, super low key. That's good. So now we talked a little bit about what you do right now, but when you were a teenager, what were you like?

Speaker 2:

Yeah, so I didn't go to high school in the United States; I actually graduated from high school in Bangkok, Thailand. My dad is a diplomat with UNICEF, so I spent most, if not all, of my adolescence overseas. I was an IB student, I went to international school, and I juggled between whether I wanted to be a journalist or an international human rights lawyer. I really didn't think I would end up working in tech policy, so I applied to a lot of schools that had great international relations and policy programs, because I knew I was going to be in one of those two camps. And then when I got to college, I realized that I didn't want to do international human rights. I didn't really like the concept of going to another country and telling them how to do their democracy based on the American way. I thought it was quite imperialist, in my opinion.

Speaker 2:

I know that's a hot take. So I decided to focus on domestic policy instead, because there were a lot of things happening at the time. This was after Trump got elected, the Muslim ban was in effect, a lot of things regarding women's rights were being scaled back, and I was like, there's a lot of work that needs to be done in the US. I just didn't know what area in domestic policy I wanted to focus on. So I went to American University. It's a small, liberal-arts-esque school in Washington DC, very big on international relations and political science, and it's known as America's most politically active college. So I spent a lot of time going to protests and being a little advocate. And a thing that a lot of AU kids like to do is watch C-SPAN for fun. So I remember, my second semester of freshman year, we were watching Mark Zuckerberg testify for the first time in Congress regarding the Cambridge Analytica scandal, and I remember thinking, technology and policy. I'm watching these policymakers have a really hard time understanding what the heck Facebook is and how they track data, and even I was struggling a little bit as well. And I was like, if technology is going to be the future of policy regulation, I should probably understand what this means, and it's clearly playing a very intricate role in democracy. So I was like, I'm going to go and get a computer science minor. And everyone was like, oh my God, what are you going to do, technology policy? And I was like, exactly. So I got a minor in computer science, and I also got a minor in information systems and technology, because that's where I started taking some cybersecurity classes.

Speaker 2:

And then my major ultimately was in interdisciplinary studies. So I took classes in comms, econ, law and government, and my focus was basically tech. I would take things I was learning in my STEM minors and apply them to my policy classes, and then take what I was learning in policy and legal theory and apply it to my STEM classes. So I would talk a lot about ethics and privacy and all these things in my STEM classes. And it was interesting, because so many of my peers were like, technology is completely neutral, how could AI be racist?

Speaker 2:

And I was like, actually, that's not the case. So I would try and change a lot of these things, and a lot of the time I spent in academia was basically making up my own major, making up my own classes and doing a lot of independent studies with professors instead of doing my required coursework. And now I go to campus every other month or so to talk to professors about building coursework in this area, and now they have so many classes in this field, which is super exciting. And I low-key forgot what the question was, but I hope this answered it a little bit.

Speaker 1:

Yeah, it was: when you were a teenager, what were you like? So yeah, that's a good answer.

Speaker 2:

I was always very curious, and it's still a trait that people describe a lot about me. And I loved science fiction, so I would read a ton of sci-fi. My favorite book in high school was 1984, and my favorite film is 2001: A Space Odyssey, which is basically about robot takeover. So it's unsurprising that I have found a way to bridge my love for science fiction with political advocacy. I think a lot of the things that I witnessed as a teenager very much influenced my ultimate career decision.

Speaker 1:

Got it. And you touched on this a little bit, but when you were talking about living internationally, I was just wondering: did that experience influence you in your career or your career choice? Because I'd assume it would be different from someone who grew up in the US their whole life and then wanted to find this kind of niche thing. So I just wanted to hear from you.

Speaker 2:

Yeah, that's a really good question.

Speaker 2:

I would say yes. One of the areas that I hope to work in later on in my life is mass surveillance.

Speaker 2:

So I lived in countries where the First Amendment didn't exist and we were not allowed to critique the monarchy, and people who posted political dissent online would either be jailed or sometimes killed.

Speaker 2:

A lot of these things just didn't exist. And I remember when I moved back to the US for college, people would ask me things that I wasn't allowed to talk about, and I was like, wait, oh my God, I have a First Amendment, I'm fine. So that made me think a lot about mass surveillance and how surveillance also exists in the digital space. That's an area I think a lot about, because even to this day, a lot of people in the global majority don't have an open and safe internet. And I think that's another important concept to think about when we live in the US: this is something that we're basically afforded and other people aren't. So even though the idea of the internet is to democratize, a lot of the time people don't actually have democratized online spaces, which is frightening to think about.

Speaker 1:

Yeah, absolutely. Now, also kind of teenage years related, is this your first career?

Speaker 2:

This is actually my second job since graduating college. I worked a little bit when I was a teenager, but those were mostly minimum wage jobs, basically just to get a little income while being a student.

Speaker 1:

Okay, but I mean, even in those kinds of jobs, how did those things impact you?

Speaker 2:

You'll see this when you go to college, but basically every floor in a dormitory hall has something called an RA, a resident assistant, and they basically take care of all the residents.

Speaker 2:

They put on programming and that taught me so much.

Speaker 2:

To this day, I refuse to take my desk receptionist job and my RA job off my resume, because they provided me with so many skills: learning how to talk to people, conflict resolution, being a leader. A lot of the time you had to make really difficult decisions, like when you would get really strange phone calls asking about student information, and you actually couldn't give that out.

Speaker 2:

You had to be very aware of laws like HIPAA, which is a health protection law, and another law regarding the confidentiality of student information. So in those jobs you're actually given so much responsibility on behalf of somebody else, and to this day, when I'm dealing with clients, I think about client and worker relations and confidentiality in that regard, and about having to make really difficult decisions: do I take this person's side in a conflict, do I not? And also time management. Being a student and doing extracurriculars and working a job and sometimes interning is very challenging, and I learned a lot about time management, what kind of leader I am, how I work with other people, and actually how to respect other people too. So those jobs provided really great skills in that regard.

Speaker 1:

Yeah, no, the RA stuff, that's serious. I can't even babysit my three cousins; I'd never be able to take care of like 48 freshmen. No chance. And the time management thing too. High school time management, junior year especially, is tough, and junior year in college is tough too. But time management is important, and you can make things easier on yourself. So, obviously you're on the show, so I think you have a cool career.

Speaker 1:

In your opinion, what makes your career cool or unique?

Speaker 2:

Yeah, no one's ever asked me that before. It's a good question. I think what makes my career unique is that, as I like to say, it's the job of the future. Technology isn't going anywhere, and technology is being developed at such a rapid pace that, for the very first time, we have to force ourselves to reanalyze policy and legal concepts. We need to make sure that they're updated to reflect the ever-changing dynamics of technology, or make them just broad or specific enough to include technology. So we have to rethink how we think about civil rights. We have to rethink the right to privacy so that it also extends to online spaces. And I think it's very interesting to have to take really old concepts that have been studied forever and apply them to brand new ones.

Speaker 2:

So I know, when I was in college and even when I was in high school, I never wanted a career where I would just do the same thing every day, and that is absolutely not the case in my job. A social media company could change tomorrow, and my whole job could change; the whole course of my priorities could change. And also I think it's really interesting because so few people have things to say about very important concepts that are affecting our daily lives, like privacy, cybersecurity and AI governance, that even as a 24-year-old I have very high-level people coming to me and asking for my thoughts, which is crazy. So the opportunity to be an expert is very possible in this field, and it's not boring, and you're doing really important work. I think a lot of the time, when people think about tech, they think, oh, I'm just going to be a software engineer for this big tech company and write code, and that's it. But it's actually a lot broader than that.

Speaker 1:

Yeah, you can actually do a lot of public good in this space. Okay, I really related to the whole thing where you were talking about how your day is different every day, or your priorities could change, because the whole thing with high school is, every single day is the same thing, you know. I hate it.

Speaker 1:

So I can't wait until I have some excitement or some action. But if you had to identify some traits or certain qualities that you should have to be successful in this field, what would they be?

Speaker 2:

Yeah. So I tell a lot of people: you don't need to know how to code to work in tech policy. I think that's okay. I think what's most important is actually learning how to be a good communicator. So many people in this space are so intelligent, they have years and years of experience and so many degrees, but it's so difficult to understand what they're trying to say, and these really important concepts aren't being translated properly.

Speaker 2:

So really being able, and this is a thing that you'll learn on the job, to take really difficult concepts and make them digestible for policy, media and the public is extremely important. Can you take a 20-page paper and condense it down to a two-page policy memo? Being able to talk about these things to anybody is very important. So communication skills are an absolute must: reading, writing and speaking. I also think that for this job you should be a very good reader. I spend maybe half the day just reading: reading bills, reading articles, reading brand new research, and then being able to actually come up with my own analysis. That concept of pairing your reading with analysis to create your own argumentation is very important, and these are skills that you will get in the humanities and social sciences, to be quite honest.

Speaker 2:

And I think another important skill is being collaborative and being, just generally, a kind person. A lot of the time, maybe an individual doesn't know so much about a particular thing, but their willingness to learn and willingness to work with other people is great. I remember when I was working my first job out of college, by the second year I would have people ask me, who should we hire for this job? And sometimes I would say, pick this person, because they can work so well with our team and the client loves them, and yeah, maybe they may not know things, but that doesn't mean they can't learn. So I think being somebody who's adaptable and can work with a variety of personalities is very important for this job.

Speaker 1:

Yeah, absolutely. I get that. You touched on this a little bit already, but if you had to give like a quick short thing, why is what you do important?

Speaker 2:

Yeah, absolutely. So why tech policy as a whole is very important is because technology isn't going anywhere. In fact, it's becoming more advanced, more ubiquitous and a key component of our lives. And I think people get so excited about the future of technology that they don't really think about its adverse consequences until it is too late. It impacts marginalized communities and democracy, and we need people who are able to say, hey, obviously this is an exciting development, but let's think about the legal, social and ethical consequences of this technology, and let's inform policymakers about those consequences, to make sure that if things do go wrong, there is a way of protecting those individuals and providing them a pathway to justice.

Speaker 2:

And this freaked out a lot of my computer science peers, because they were building really amazing things, but they weren't thinking about the harms those things could potentially bring. So in our machine learning class, I did my final project basically testing gender and racial bias in open-source facial recognition code, because a lot of the time, when technology is being built, people use open-source code as the foundation for what is eventually going to be, say, a facial recognition software, or whatever they're building. And unfortunately, a lot of this open-source code exhibited horrible gender and racial bias. And then, when you intersected the identities, it was even worse: my facial recognition code identified Black women as women at a much lower success rate than it identified white men as men.

Speaker 1:

I heard about this.

Speaker 2:

I think so, yes. There's a great documentary called Coded Bias on Netflix that explores this issue to an even greater degree. And so people were super freaked out. They were like, oh my God, AI is racist. And I was like, exactly. But that also means that when you're training your system, make sure you're training it on a diverse data set. Make sure that the things you are producing are reflective of every identity. Women have been denied jobs because an AI screener, for example, exhibits gender bias, and people are rejected from home loans and really, really important things.

Speaker 2:

And, of course, on a social media level, it's a whole other thing with privacy and mis- and disinformation; I could spend a whole other hour talking about that. But that's basically why this job is so important, why this field is so important: technology is never going away, and people need to be able to study its harms and stop those harms from actually impacting a lot of people in a bad way.
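For readers curious what an audit like the one Ms. Mohammed describes looks like in code, here is a minimal, hypothetical sketch: compute a classifier's success rate per demographic group and compare the gaps. The group names and classifier outputs below are invented for illustration; this is not her actual project or data.

```python
# Hypothetical sketch of an intersectional bias audit: measure a
# classifier's success rate per demographic group and compare.
from collections import defaultdict

def success_rates(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    # Fraction of correct predictions within each group.
    return {g: hits[g] / totals[g] for g in totals}

# Made-up gender-classifier outputs, keyed by intersectional group.
records = [
    ("white_man", "man", "man"), ("white_man", "man", "man"),
    ("white_man", "man", "man"), ("white_man", "woman", "man"),
    ("black_woman", "man", "woman"), ("black_woman", "woman", "woman"),
    ("black_woman", "man", "woman"), ("black_woman", "man", "woman"),
]
rates = success_rates(records)
print(rates)  # the gap between groups is what an audit like this flags
```

A real audit runs the same comparison over a large, labeled benchmark dataset rather than a handful of hand-written rows, but the core measurement is this simple.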

Speaker 1:

Sorry, quick aside. What is the difference between misinformation and disinformation?

Speaker 2:

That is a great question. So there's a really great TikToker you should watch. She's on TikTok, she's on Twitter, she's on all social media. Her name is Dr. Casey Fiesler, and she talks about these things in plain English, with memes and everything. So if you want to stay up to date on some of these difficult concepts, she's a great person to look at. I follow her and learn from her all the time, even though I work in this space. So, the difference between mis- and disinformation: misinformation is just information that is inaccurate and false, whereas disinformation is the deliberate act of spreading malicious and misinformed content, such as propaganda, spear phishing and different hoaxes. So I always say misinformation is the actual false information, and disinformation is the act of spreading that false information. Intent is what's involved with disinformation: the intent to spread false information is disinformation.

Speaker 1:

So, you know, we talked about how every day might be different, so I don't know if there's such a thing as a regular day, but if there was one, what would it look like? Can you take us through it?

Speaker 2:

Yeah. So a regular day would be me coming to the office, signing in, checking my email, responding to emails, and reading my newsletters for the day so I understand what policy priorities are happening in the tech space, and also developments in tech, which will encourage me to think about new ideas that the center should be thinking about. Or potentially emailing a reporter back and being like, hey, you wrote about this today, check out the pieces we recently produced on this topic, let me know if you want to speak to me or X researcher about this particular issue. And then maybe I'll see a concerning thing happening in policy and write an op-ed or send some comments on a bill. So reading, responding to email, reading newsletters. And then I will go on congress.gov and look at any hearings happening that day that I might need to sit in on and watch, and look at particular bills that might have been introduced, both on the federal level and on the state level in New York, because New York is proposing 65 AI bills, so I'm tracking those right now and potentially flagging them to the center, being like, hey, this is a bill that's currently being developed.

Speaker 2:

Should we reach out to this office and give them our thoughts about this, because there's no provision on data research or data access? And they'll be like, yeah, sure, we should talk to them. So, reading bills. And then, if there's a congressional hearing, I'll watch the congressional hearing.

Speaker 2:

That takes like four hours of my day. And then usually I'll have meetings in between, sometimes with coworkers, sometimes with external stakeholders, talking about developments in the lab. With external stakeholders, a lot of people want to write op-eds with us, or do projects with us, or just get to know more about a particular piece of research and its policy implications, so I'll spend my time doing that. And then, if I have time at the end of the day, I'll work on a side project, usually an op-ed, submitting comments, or the policy tracker that I'm building for the lab. And then every Monday night, the postdocs and PhD students will present research that they're currently working on, and everyone in the lab will give them thoughts about the direction the research is going in, and other resources.

Speaker 1:

Okay, great, and you mentioned going to the office. Are you in person full-time?

Speaker 2:

So I am in person three times a week, Monday through Wednesday, and then Thursday and Friday I'm remote, so it's very hybrid. But I can pretty much come in whenever I want. If I have something happening in New York on a Thursday or Friday, I'll come in and commute, because it's much easier, but otherwise it's quite flexible. We don't have to wear business suits or anything. It's very nerdy, so it's a nice environment.

Speaker 1:

Okay, nice. So we talked a lot about work. But you know, all work and no play makes you a dull boy. So how do you use your free time?

Speaker 2:

So I spend a lot of time reading. I'm a bit of a bookworm. I'm currently reading Invisible Women, which is a book about data bias in research and how it impacts women, because historically, when research is being produced, women aren't usually involved in the testing process, and sometimes, when products are released, they actually harm women a lot. So obviously I read a lot about work-related things even when I'm not working, just because I'm super fascinated by my field. So that's what I'm currently reading.

Speaker 2:

I do a lot of writing, so this is also, I guess, related to work-ish, but I try to do some op-eds, because my overall goal is to be a public intellectual, somebody like Dr. Casey Fiesler, who I hope to be like when I'm a little older. Also being somebody where people are like, oh, you want to know about this? You should look at Lama, she does XYZ in this field. And somebody that people can email and ask for help, because a lot of the time, when people are talking about algorithmic accountability and bias, they aren't talking to people of color, and a lot of these things impact people of color and religious minorities, and there aren't a lot of people in the space to actually do that. So I hope to break that glass ceiling. So reading, writing, and, I guess, exercise. Please exercise; that is a non-debatable thing. I also just finished a fellowship with the Internet Law and Policy Foundry, which took up a lot of my free time. I actually ran a podcast during my time at that fellowship called the Tech Policy Grind, which is basically a podcast run by early-career professionals, similar to this podcast actually, interviewing people in this space, where we learn about their career and then also talk about a particular high-level tech policy issue. So recording podcasts took up a lot of my time.

Speaker 2:

And then I also volunteer with two nonprofits. One nonprofit is called Girl Security. They're basically working to expand the number of women and gender minorities who work in national security, and also to debunk what we mean by national security: it's not just the FBI and CIA, it's more about actually protecting people and the interests of people. So I mentor young women who are interested in careers in national security, and I also do a lot of panels and workshop facilitations for young women. So in the summer, I'm going to be doing a facilitation with Girl Security on AI and elections, teaching high school girls about some of these issues that we're talking about and some careers they could pursue in order to mitigate some of these things that we're talking about in the workshop.

Speaker 2:

And then the second nonprofit I volunteer at is called All Tech Is Human. They're basically a grassroots organization based in New York that is working to diversify the people who work in socially responsible tech, expand the socially responsible tech pipeline, and make sure that everyone working under this umbrella is talking about humanism and ethics. I'm a volunteer and an affiliate, and basically I help All Tech Is Human produce reports on these issues. A lot of the time I'm quoted in some of the reports they produce, on topics like AI and human rights, and technology and democracy.

Speaker 2:

They also have a socially responsible tech guide, basically how to get involved in a career in socially responsible tech. There's a mentorship program, there's a university program, and it's basically just informing the public about a lot of these things that we talked about today. So I don't have a lot of free time, but I enjoy what I do, and I'm always learning in everything I participate in. So I think that's pretty much it. And I do have a social life. I do hang out with friends and things like that. So, not to worry.

Speaker 1:

I was going to ask, when do you find time to sleep? But okay. And you know, we touched on this a little bit, the collaborative nature of your work, whether it's within your own department or, as you said, office to office. But in your actual day to day, are you interacting with people, or is it just that your general work is interlinked?

Speaker 2:

Yeah, that's a good question. I would say both. So, in terms of working with other people, for example, this TikTok op-ed I'm working on, I'm working on with a coworker. Sometimes I'll have people come into the office and be like, hey, can you explain to me what this bill means, or what are some of the policy implications of this particular development? Sometimes people will Slack me that and I'll answer via Slack, or people will come into my office and we'll talk about it. So there's a lot of education happening on that front. And then we are only about 10 full-time staff members in the lab, but we are also trying to collaborate with other research centers within NYU, because there are so many really interesting ones, and also with grassroots organizations that are working in this space or could use our research expertise but don't have the means to actually produce the research.

Speaker 1:

Okay, got it. And is that generally how it is for everybody in this field? You're all working together, kind of thing?

Speaker 2:

I would say no. Some of the advocacy orgs in this space have so much money and brain power that they don't actually need to collaborate with anybody. Sometimes they might, but most of the time they usually don't. The one place where we see a lot of collaboration in this space is when people do webinars and panels, because they want to get a variety of expertise from across the tech policy ecosystem. And this is something that I'm trying to do more of, because tech policy is huge. There are so many different people working in so many different professions. You have all the privacy people, all the cyber people, all the academics, trust and safety, but they don't actually talk to each other. So I'm hoping to break that silo a little bit by getting people to talk to each other more, because this space is very interdisciplinary, but the different professionals need to work together. So that's a little annoying sometimes, but not a big con or anything like that.

Speaker 1:

Okay, got it. So now, when you look to the future, what's the next step in this field for you?

Speaker 2:

Much to the anxiety of my mother...

Speaker 2:

I have built myself a 10-year education plan. I realized that, being a young person in this space, sometimes I'm the youngest person in the room when I am on very important calls or at a conference. And, as a woman of color, I have to make sure that no one is going to try to undermine my expertise. To sort of protect myself from that, I will be getting a master's. I'm still deciding whether I will do it in public policy or design my own master's, which you can do at NYU, fun fact. I would do that part-time while finishing my fellowship at NYU. But ultimately, I think I mentioned this earlier, I want to be a public intellectual in this space, producing my own research. So, a PhD.

Speaker 2:

But I also want to litigate and actually work to defend individuals' civil liberties against the potential harm that's coming from technology, so actually protecting people from privacy implications, mass surveillance, et cetera. But in order to litigate you need a law degree. So after I finish my fellowship at NYU and my master's, I hope to apply to combined JD-PhD programs, which is very crazy, but I want to have subject matter expertise while also defending the rights of marginalized people. And then I think, once I'm done being an attorney for a little bit, and obviously having a PhD will be helpful for this, I would love to be a professor. But that's at the very end of my timeline.

Speaker 1:

We've got time, we've got time. Well, thank you so much for joining us, Ms Mohammed. And yeah, as we finish up, as always, let's do our mailbag. To submit for the mailbag, please email zain@coolcareersandhowyougotthem.com, fill out our get-in-touch form on our website, coolcareersandhowyougotthem.com, or DM us on Instagram at coolcareersandhowyougotthem. Today's question comes from Vikram, who is a junior in Oklahoma, and his question is: I thought I wanted to be a software engineer and study coding, but now, with the emergence of AI, my parents are concerned that humans writing code will become obsolete. What are some areas in this space that I can work in without feeling that my job will be made redundant by AI?

Speaker 2:

Yeah, great question. So I will say one thing. There is a great expert who works at UC Berkeley, Dr Jessica Newman, who talks a lot about AI and labor, and her hot take is that AI actually isn't going to take our jobs away. It's not going to be a threat to our jobs. And, as someone who has played with ChatGPT a little bit and knows how to code, there have been many times where I've given ChatGPT my code and it's fixed my errors, but it hasn't actually fixed everything. Or I've asked ChatGPT to write code for something and I've had to go in and fix the code myself because it messed some things up. So I want to say that while generative AI is exciting, it's actually not super brilliant, and that's why it poses less of a threat to an actual software engineering job. So if you're still interested in being a software engineer, I say go for it.

Speaker 2:

The only threat that I would see is the continuous tech layoffs that are happening in the big tech space, and the good thing is you don't have to just resort to big tech. I think it's also important to look at how you can apply these skills to other jobs. For example, the government is always looking for people who know how to code, because public interest technology is a very, very important space. A lot of the time we don't have enough people actually working to build good infrastructure for government services, especially for veterans, and especially for people who are looking to apply for, say, unemployment. I don't know if you've ever been on a New Jersey government site, but it's so old, it could definitely use some updating, and the process could be made so much easier. But they just don't have people who are willing to work in government and actually make those things smoother. So public interest technology is a very, very important thing to think about. And then I will also say: a lot of software engineers actually have a lot of skills that are good for cybersecurity.

Speaker 2:

Cybersecurity is, I think the Department of Homeland Security said that there are 900,000 vacant jobs in cybersecurity. There is a demand for cybersecurity professionals. So anytime a software engineer is like, oh, I don't know, my job is at risk, I'm like, literally, move to cybersecurity. The skills are transferable. Cybersecurity is a very exciting career. Your job will not be the same day to day. The money is great, whether you work in the private or public sector.

Speaker 2:

You're defending, you're protecting consumers' information. And I don't know if AI is ever going to be advanced enough to actually fix that gap. I know a lot of cybersecurity companies are trying to train AI to patch vulnerabilities, but even then they've struggled a little bit, and nothing beats actual professionals who are trained to predict and actually patch vulnerabilities. And, I think the Information, which is a media outlet, covered this in a story, but with the emergence of multiple generative AI chatbots, sometimes they're getting overloaded with very poor information. There's a saying: garbage in, garbage out. So the effectiveness of these chatbots, and AI as a whole, is still very unpredictable, and so the actual threat to a software engineering job, I'd say, is pretty limited at this point.

Speaker 2:

But I would say, look at public interest technology as a field, and then cybersecurity is a great place to work in. And if you want to switch out of coding, you could do policy in cybersecurity, you could do threat detection. There's so much available. And now people are paying people to hack into their systems and work together with the cybersecurity professionals to actually improve their information infrastructure. We have come to a place where hacking actually isn't even a bad word now. It actually has a neutral connotation to it.

Speaker 1:

OK, just out of curiosity, though, won't the AIs become better? You know, it's like a learned process: whatever you practice at, you'll improve. So, given how much practice they have at creating code, you don't think they'll ever be able to replace a human coder?

Speaker 2:

It's hard to say for sure, but from my experience of playing around with it, we're at a point where people are trying to figure out how to make very impactful training data. And, again, there's this idea of information overload, where I don't think a chatbot will ever be able to analyze all these different methods of creating effective code to the point where it can produce code, in a short amount of time, that's actually going to be useful for building an overall large system.

Speaker 2:

I may be wrong, that could change. Maybe people will find a way to make effective training data. But with training data being so difficult, I think it's a long way off. And even then, coding aside, a lot of news outlets are actually suing OpenAI for scraping data from their websites. So if all these news outlets pull out from providing OpenAI data, are chatbots even going to be providing accurate information, let alone code? So that's TBD. We'll have to wait and see. But if you're interested in this, the case is called New York Times v. OpenAI, a fascinating case in legal theory.

Speaker 1:

Yeah, and you know what's concerning about the chatbots and the inaccurate data is that sometimes they won't even tell you, like, I just don't have access to this, I don't know. They'll make something up, which is way worse, because they understand the kind of language that you need, so they'll give you something that sounds really, really good. You'll be like, oh yeah, totally, and then you use it in your argument or whatever, and it turns out this thing doesn't actually exist. They just made it up.

Speaker 2:

Yeah, it's not a good look. There's also another story you can look at. I think the New York City Mayor's Office built their own chatbot for business owners and landlords, and when people were using it, the chatbot was giving the landlords and business owners illegal ways to harm their consumers. It's just really bad.

Speaker 1:

That's hilarious. If people want to learn more about you or what you do, where can they go?

Speaker 2:

Yeah, so my LinkedIn is great. It low-key needs to be updated, but I post a lot of commentary there. You can also follow CSMAP on all social media at this point, and that includes Mastodon and Threads.

Speaker 1:

If you want to know more.

Speaker 2:

Mastodon and Threads are, like, the new alternatives to Twitter. So basically they're part of the Fediverse, which could be a whole other conversation.

Speaker 1:

I don't know what that is either.

Speaker 2:

The Fediverse is basically this idea of democratizing the online information ecosystem. It's supposed to be more open, and it allows for cross-communication across multiple platforms and things like that. It's so crazy. I'm not even an expert in this, but there's all kinds of research and lore you can read about the Fediverse. These platforms basically became popular in the last two years, during the Elon Musk takeover of Twitter, when people were trying to flock to a new site. So a lot of people are now on Mastodon and Threads, which CSMAP exists on. For people who are on those social sites, we are there. But we're also still on LinkedIn and X, if people prefer to find our research there.

Speaker 1:

No, I've never heard of this. Am I out of touch now? I feel like my grandma. I tried to explain to her what Instagram is, and she was like, I've never heard of this. I feel like her now. I'm getting old. Okay, well, as we wrap up, you've got the ear of many highly ambitious students. Do you have any final advice for them, or possibly an ask?

Speaker 2:

Yeah, in terms of advice, I think, now that I'm in my mid-20s, I have learned a lot of things. One: when you're applying to college and you don't get into the school that you want to get into, it's okay. Life has a really weird way of working itself out, and to this day I tell myself, what didn't work out for you worked out for you. I think about, what if I didn't go to school in DC? What if I didn't watch that one hearing at that one time? I would have a completely different career, and maybe I wouldn't have been as happy as I am right now. So accept rejections as they are. You can always try again, without a doubt, but sometimes one door closing opens the door to a thousand other opportunities. I think that's a good way to think about rejections as a whole. Two: I'm someone who's very type A and very by the book, and I was like that throughout all of college. I was like, we are taking this course at this time, I'm building my five-to-ten-year plan. But then, when I was switching majors and switching minors to figure out what I wanted to do, I allowed my playbook to loosen a little bit. Even when I was a junior in college, I was like, no, we are going straight into law school, I'm taking the LSAT at this time, I'm doing this. And then I went to a law school career fair and it totally freaked me out. I was like, maybe I'm not meant to do this right now. So I took time off and explored what issue areas I really care about, which has been so helpful. And even then, there were times where I was ready to apply to grad school, but then an opportunity came my way and I was like, you know what? Why don't I just take this opportunity?
So I found myself saying yes to a lot of things that were thrown my way and putting some things on hold, because you can go back to school anytime; there is no right or wrong time to do that. But some of the opportunities I'm in right now, maybe if I had said no, I wouldn't be able to do. So let yourself be flexible and be open to so many different opportunities, because my professor always tells me, if you don't let yourself be open, maybe the career that you want doesn't even exist yet. You don't want to lock yourself into one path and not consider that what you could be doing isn't even here yet. So I think that's another good thing for all the type A kids out there. And then I was going to say one more thing and I don't remember. But, oh my god: never stop learning. Literally. You may graduate and do your job or whatever, but don't close yourself off. Continue to learn and be a continuous learner.

Speaker 2:

Again, a lot of it goes back to my second point, where you allow yourself to be open to so many different opportunities. Even to this day, I know nothing about metaphysics, but every now and then I'll listen to a metaphysics podcast, just to tickle my brain a little bit and be like, oh, this is really fascinating. And then I also sometimes read articles and op-eds by people who come from a very different political ideology than me, just so I can think about, why does this person think like this? Why does this group of people who believe this think like this? I think that's very important when you're communicating with people, at least in the political space. You have to talk to people who come from all different sides of the political spectrum, and bipartisanship is very important to getting bills passed. So I think that's also a good thing to do: never stop learning, but also think about reading things that you may not necessarily agree with. Obviously, don't read things from junk news sites or anything like that, but read an op-ed by somebody who comes from a different political ideology in, say, the New York Times or the Washington Post. So I would say those are my three career things.

Speaker 2:

I would say my ask for young people is: don't let anybody undermine you because of your age. That literally does not matter. That is something that I continue to learn every day, because I have a lot of imposter syndrome. I'll walk into a room where people are much older than me and can speak so much more eloquently than me, and I'm like, you know what? I'm just going to shut up. Literally, don't. If you have something important to say, say it, and don't let people use your age to stop you from talking about anything. There are so many young people out there who are literally changing the world regardless of their age.

Speaker 2:

I think about Greta Thunberg and even Olivia Rodrigo.

Speaker 2:

They are young women, but they have the ability to make change because they believe in a cause, and they don't really care about people trying to undermine their ability to do that work. And there are so many young people in my field, too. There's a student I know who's a senior at my university. He has a whole nonprofit dedicated to keeping big tech in check and trying to force big tech to design platforms that are safe for kids to use, and he's like 20 years old. You know what I mean? And there's another girl I met who started an AI ethics nonprofit, and she was mentioned in the TIME100 this year. So that's what I would also ask: don't ever let anyone undermine what you want to do, because the world's a big place, and sometimes people need to be kept in check, and the people who will keep them in check are the young people. So that is my spiel.

Speaker 1:

Yeah, great. Well, thank you so much for coming on the podcast today, really appreciate all the information you gave us and I think we all learned a lot.

Speaker 2:

Yeah, thank you so much for having me. And I always say this to people: if you ever want career advice or you want to learn more, send me a message on LinkedIn and I will almost always respond. I'm happy to do virtual coffees. I don't gatekeep, I don't gatekeep.

Speaker 1:

All right, thank you so much.

Speaker 2:

Thank you, have a good day.
