The Black Hat Files

Thinking in the Age of Intelligent Systems with Dr. Rumman Chowdhury

Artificial intelligence can write code, analyse data and simulate human reasoning. But what happens to security when machines start thinking alongside us?

In the first episode of The Black Hat Files, host Phillip Wylie sits down with Dr. Rumman Chowdhury, Founder & CEO of Humane Intelligence and one of the leading voices in responsible AI.

The conversation goes straight to the questions security leaders are now facing.

  • How do you test systems that learn on their own?
  • What does red teaming look like when the target is an AI model?
  • And how do organisations stay accountable as AI moves from research labs into real-world deployment?

Drawing on her work leading global AI safety initiatives and adversarial testing programmes, Dr. Chowdhury breaks down how intelligent systems fail, how they can be manipulated and what defenders need to understand now.

From algorithmic bias to AI red teaming, this episode explores what it takes to secure systems in an era where machines are starting to make decisions.

Because in the age of intelligent machines, critical thinking is still the most powerful defence.

Intro

SPEAKER_01

One of the big things people are concerned about is, you know, will AI take our jobs? Never in the history of any technological innovation, as you have pointed out, have we ever actually done less work. The opposite happened, where all we seemingly do is work more and more and more. And AI is no different. It will actually create more work and different types of work for people to do.

SPEAKER_00

Let's crack open today's file, Thinking in the Age of Intelligent Systems. I'm joined by Dr. Rumman Chowdhury, founder and CEO of Humane Intelligence. Welcome to the show, Dr. Chowdhury.

SPEAKER_01

Thank you so much for having me.

SPEAKER_00

So it's very cool, the subject that you're speaking on: artificial intelligence. And before we get started, why don't you share a little bit about your background with the audience?

The Industrial Revolution Myth: Working Less

SPEAKER_01

Sure. Well, I've worked in the field of AI governance, responsible AI, and today you might even call it AI evaluations, for the past decade. I built Accenture's responsible AI practice. I had a startup that I sold to Twitter. I was Twitter's director of machine learning ethics. And then, you know, for the past two years, I've been doing my own consulting work. I started Humane Intelligence as a nonprofit, focusing on more inclusive methods of test and evaluation of generative AI models, because I, along with many other people, realized that this very transformative technology was doing a lot of information synthesis. And my concern was what data and what people are being represented. That's now a public benefit corporation. So now I'm building the infrastructure to help companies scale up how they govern AI systems, but it's driven more from their industry expertise, and it's motivated by them solving real-world problems for their clients.

SPEAKER_00

Oh, so what did you speak on today?

SPEAKER_01

Today I talked a bit about the future of work. I think one of the big things people are concerned about is, you know, will AI take our jobs? Will we have no economic opportunities because everything will be automated away? Now, a lot of these frontier model companies, the definition of artificial general intelligence that they have landed on is the automation of all tasks of economic value. So then by that definition, that means none of us would have jobs. And I was unpacking the research, the literature, what we're seeing about what really is going to happen in the future of work.

SPEAKER_00

Yeah. So if you wouldn't mind going into the details of the future of work, because everyone's worried about whether AI is going to take their jobs. You can see some jobs going away because it's able to automate some things. But one of the hopes is that, like during the Industrial Revolution, when it took away the physical jobs, it wasn't bad. It made better jobs for individuals. Technology-wise, it gave better jobs to individuals. So, how do you see that whole evolution of jobs with AI?

Automating Knowledge, Not Labour

Future-Proofing the Next Generation

Output vs. Knowledge

The AI Classroom Experiment

SPEAKER_01

Yeah, absolutely. You're touching on a really, really important point here, probably the key point. A lot of this doom and gloom narrative that AI will, quote, take away all jobs has this baseline assumption that there is a finite amount of work that we will do and that pie is going to get eaten by AI. Never in the history of any technological innovation, as you have pointed out, have we ever actually done less work. And this idea of the four-hour workday or the two-day work week actually started in the Industrial Revolution, when people were speculating that we would no longer have to, you know, find someone to make shoes or use horses and buggies or do manual work. Therefore, most of our time would be spent in idle pleasure and reading books and writing poetry. And literally none of that happened. Actually, the opposite happened, where all we seemingly do is work more and more and more. And AI is no different, in that it will actually create more work and different types of work for people to do. My favorite anecdote, where I try to remind people, is, you know, because I am old enough to remember this: there was a time in which you had one person at a company who had the title of webmaster, and their entire job was the online presence for an organization. That sounds so ridiculous today, because entire arms of companies have hundreds of people creating online presence like websites, e-commerce, marketing, et cetera. But remember, if you were to say in 1998, oh, by the year 2025, you will have 500 people whose job it will be to only keep this online presence going, it would have sounded absolutely ridiculous. And I think AI is going to be similar. Now, one thing that is different, though, and I did mention this in my talk, one of the reasons why we have this existential concern is that, you know, for the first time, we are not automating manual labor.
And that has been true of every other industrial revolution; even the internet was actually the automation of information sharing, which was still a manual process. Now we are automating knowledge generation and knowledge synthesis. I think that's what's scaring us. We've never automated white-collar jobs before.

SPEAKER_00

So for anyone wanting to kind of future-proof, what are some of the educational steps? Maybe we'll start with K through 12, you know, children. What should people be doing to teach and educate their children to help future-proof them in this digital age of artificial intelligence?

Teaching Agency, Not Dependence

Why Experts + AI Win

Using AI Without Replacing Yourself

SPEAKER_01

There is a lot of concern amongst parents, and actually mostly among young people. I give a lot of talks at universities, and overwhelmingly, the question I get asked is, what should I major in? Should I even major in anything? And actually, to take a step back, one of the big reasons why young people are increasingly opting out of college is that some of them don't even see a point or a purpose to it, because they too believe that AI will do more and more jobs. So what you're talking about is a really important pedagogical shift that we need to see happen in how we teach kids about AI. And I'm a scientist. I need to dig into the empirical research. Like, I want experts to do the work and do the studies and analyze and then see the trends, rather than, you know, get scared by the fact that AI can write a 500-word essay that sounds reasonably plausible. Maybe the fundamental question is that we should not teach to produce output, but we should teach to produce knowledge. And that's actually the fundamental split we're seeing here. So there's a couple of papers I really love in this space. One of them was quite controversial. I think it was called the Corvinus experiment. It was at a Hungarian university, where they split mathematics master's students into two groups. One group was allowed to use AI tools as much as they wanted, and the other one was not; they were taught from a more traditional mathematics background. It was controversial because, after two years, the students that had used AI tools not only retained less information, they performed more poorly on exams, they were unable to transfer knowledge, and, worst of all, they questioned the purpose of education. They did not see the point of education, and they would say things like, well, the AI can just do it for me. Now, to dig into that nuance a little bit, there was another paper. I cannot remember the name of it at this moment, but it really dug into how you teach AI.
Now, why did those students get disillusioned? Well, they got disillusioned because they saw education as just producing an output, independent of the quality of that output. Because again, remember, they performed worse on tests, they retained less information. So the AI wasn't doing anything for them other than answering the homework. Now, instead, these other scientists looked at different models of teaching AI. There's another way to teach AI, where you are teaching agency. So AI as a tool. Today in my talk, I talked a lot about functional literacy. And if you are the end user, the purpose of being literate in AI is to not operate as a result of the tool, but to operate with the tool. And specifically, that means it's not just telling you what to do, where you're just the instrument that moves something from point A to point B, but that you are a part of the information synthesis, you are a part of quality assurance, you're a part of understanding how this tool should be used. So good pedagogy looks something like this: you're teaching your students how to use AI at every step of the way, and you are teaching them how to critically analyze the output of the AI. Now, when you use that kind of teaching method, what they found is that it's a good bridge tool to help students get to their next learning goals faster and actually retain more information.

SPEAKER_00

Yeah, that's very interesting, that balanced approach. Because, you know, with the quality of the output you get out of AI, if you don't put effort into it, you don't get that much out of it. And people sometimes neglect to realize that you need to be some kind of a subject matter expert to judge the output. Otherwise, it can give you whatever, and you're not sure whether it's valid data or not.

SPEAKER_01

Yeah, that's absolutely right. And actually, one of the studies that was conducted in the workplace, at Procter and Gamble, demonstrated exactly that. So they created an experiment. People used AI. In some cases, they were experts; in some cases, they were not experts. Because one of the assumptions about AI and the future of work, and again, a lot of young people worry about this, is that you don't need to actually major in anything. You don't have to be an expert on anything. AI will be the expert. You just need to come in and type a couple of things. And what they found was that that's actually not true. If you give a non-expert AI, they perform better than they would have otherwise. But the biggest gains were seen when an expert was given AI tools. As you said, the discernment, being able to understand what is good or bad, the context, that only comes from people. It doesn't come from AI.

SPEAKER_00

So we kind of touched on what children, and you mentioned college students, should do to leverage AI. For someone that's already an experienced security professional, someone already out in the field, how should they leverage AI to help them scale what they're doing and have a little bit of job security?

AI as a Cybersecurity Threat

SPEAKER_01

Well, right now, what we're really seeing goes back to, like, having expertise, right? So AI is automating a lot in the security sector. And some of it's actually really great, right? Because there's a lot of grunt work that can be automated. There are certain things, like keeping up with certain trends, research, observation tools, that AI can be really, really useful for. One of the things that I look at is how success is being measured. I said I'm a scientist; I'm actually a social scientist, and a quantitative social scientist. So what I think a lot about is, how are you measuring what good means, right? So when we think about how AI is being measured in terms of impact on the workforce, it's often measured in terms of productivity. Well, guess what? Yes, of course, machines will always produce faster than people. Any machine will. I don't need to test that to know that. That doesn't mean it's any good. So for cybersecurity professionals, I think the big thing to think about is, how can your expertise be leveraged by using AI, right? So back to this functional literacy: how can AI be a tool that you utilize to just move faster? I think there's a lot of that happening at Black Hat today. I'm seeing a lot of conversation about that. There's a lot of companies pitching products that help you scale, help you move fast. I think that's kind of the point of AI.

Generative AI vs Narrow AI

SPEAKER_00

Yeah, it's really interesting with AI and security how, for so many years, people like yourself were using AI before those of us who didn't have the opportunity to access it until generative AI came along, ChatGPT, and we got access to it. So it's really interesting now to see people that worked in other areas be able to take their creativity and make it do certain things, when maybe some of the folks originally creating AI didn't work in those areas. It's interesting to see some of the advancements in some of the products coming out of that.

SPEAKER_01

Yeah, and it's a double-edged sword, right? I mean, we also see that AI is upping the bar for cybersecurity professionals because of the ability to generate malicious content, the ability to generate malware, the ability to create AI agents and bots that are literally relentless in attacking companies, governments, et cetera. In the previous US administration, I was on the Department of Homeland Security's critical infrastructure working group. And I was there with airline CEOs, state governors, city mayors, who were very, very concerned about how generative AI might introduce new risks and new threats. So again, back to how cybersecurity professionals can up their skills: well, first, like I said, it can help you do your own job better. But second is to be aware of the host of brand new concerns, which are actually in the shape of old concerns but manifesting in new ways, that generative AI is introducing.

SPEAKER_00

So for a security professional that wants to learn AI, do they just need to learn generative AI? Or are there some other recommendations you have?

Trust Is Earned, Not Owed

SPEAKER_01

Yeah, that's a great point. AI is such a fraught term because the pool of things that are classified as AI just gets bigger and bigger all the time. You know, an Excel regression model, I guess, also counts as AI in some use cases, right? So AI here, specifically, I'm talking about generative AI, which is a fundamentally different product from the world of what was called narrow AI. But yeah, I think it is useful to be well-versed in all the kinds of AI tools. The reality is most companies have not rolled out and scaled up generative AI tools because of the significant amount of risk that's involved. So most of the work you will do embedded at a company, for things that are already at scale and facing customers, is actually not going to be generative AI. It's going to be vanilla AI, narrow AI, you know, AI models that are fine-tuned to a very specific use case. It's not going to be transformer-based technologies. That being said, what companies are trying to tackle, and what there is the most interest in, is addressing the risks of generative AI, because, again, people see the immense potential. But to be perfectly frank, I think companies are very hesitant to scale. I mean, there was an MIT study a few weeks ago that everybody was talking about, which said that 95% of pilots built at enterprise companies fail to produce any ROI and basically fail to scale. And from my perspective, a big part of that is the unknown risks that these models are introducing.

SPEAKER_00

Yeah, that's interesting. I guess it's kind of good that people are being hesitant and approaching it with care, because there are some cases where companies are wanting performance. They're wanting to make money, they're doing whatever they can do to scale. And sometimes they don't really take the precautionary steps to do things securely. So it is kind of good, in my opinion, that they're taking a more cautious approach.

The AI Hype Cycle

SPEAKER_01

Yeah, I think I would completely agree. And I'm pretty sure the audience of this podcast would agree as well. But I think one of the most important things to think about is, you know, trust is not owed, it is earned. And, you know, tech has done a lot to deserve a lot of trust, right? There are so many things in our lives that are better, more efficient, more entertaining, just, you know, improved because of technology. That being said, there are many things in our lives that are significantly worse because of, as you've mentioned, cutting corners with things like security, responsible use, and even ethics, right? How we use data, things like people's privacy, their information. So overwhelmingly, the public has gotten smarter, right? They're increasingly smarter about asking the right questions. And any company just needs to be prepared, right? They need to know that they're going to be asked by their average customer, how is my data going to be used? Some more sophisticated customers will ask about things like data deletion or how you're handling personal information. I mean, people are smart, right? We have been in a world of machine learning and AI for long enough, and people have felt like their trust has been abused and misused for long enough, that they're not willing to trade a fun new toy for their information, their data, or even their digital health.

SPEAKER_00

Yeah, it just seems like we're getting more educated consumers. So, as far as AI, you know, the hype versus what's in reality: are we getting a good picture of what AI is capable of, or are we seeing a lot of hype? What are your thoughts?

What AGI Really Means

SPEAKER_01

I mean, we're seeing a massive amount of hype. I mean, astronomical and ridiculous. I started my talk with an anecdote about how I was talking to a VC a few weeks ago, and he flat out told me that he doesn't invest in companies helping other companies adopt AI, because in the future AI will make all of our decisions for us. You know, I think the only thought that occurred in my head is, like, sir, if you think that is true, then why are you on a Zoom call with me? Right? Because if you're imagining a world in which AI is making all decisions, you know, hug your children, right? I mean, do something else. Like, don't sit on a Zoom call with a stranger saying that that's what you think, right? So yes, there is an immense amount of hype. But, you know, we've seen, we're on, like, our third or fourth tech-based hype cycle. We had it with cryptocurrency and NFTs, we had it with the metaverse, right? And every single time, the peak seems to peak higher, but the crash seems to happen faster. And now, you know, we are seeing that conversation happening about AI companies, because we are three years and many hundreds of billions of dollars in. And there doesn't really seem to be product-market fit for very many use cases. In fact, I work with a lot of companies. The number one use case for generative AI models has been back-office document automation, like processing, you know, fine-tuning small models so that people can ask questions. Now, this is not the hundreds of billions of dollars or trillion-dollar explosion that all these investors were promised. And now we're seeing investors get antsy. I think the big way that we can observe from the market that product-market fit has been really rough for generative AI is that OpenAI, which should be the market leader, is pivoting to ads and porn.
Now, if you are at all familiar with the market model of tech, if you are unable to find a customer to buy your product, the lowest common denominator is ads and porn because they are infinitely generative money-making machines that are also infinitely exploitative. So it is concerning to me that one of the biggest model development companies is already going there. They're already at the lowest common denominator. And that to me means that they're having a hard time finding financially productive use cases for what they've built.

SPEAKER_00

So, how far do you think we're off from having AGI?

Why We’re Not Close to AGI

SPEAKER_01

I think that depends on your definition of AGI. For the last few years, I've been a responsible AI fellow at Harvard University, and earlier this year, I did a seminar series on the concept of intelligence. One of the things we talked about was AGI. Now, speaking of hype, that's probably the term that has the most hype. So AGI: I think if you ask the average person on the street, what is artificial general intelligence, they'll give you an answer that's more like the plot of a movie. So it's, you know, an AI that's able to think and act like a human, right? So you might think, like, Terminator. Unfortunately, all the examples we have are negative. I think the only example we have that's not negative is the movie Her. So, you know, that kind of thing is what people imagine. But actually, you can follow the slippery slope of how these companies have defined AGI contractually, right? The one to follow, I would say, I mean, everybody has their own definition, but OpenAI started off, and I actually did this: you go to their website and you go on the Wayback Machine and you go back to earlier versions of how their landing page looked, and they had defined AGI. So initially they did define it the same way, you know, machines that can think and operate at the same level as human beings. That slowly became the automation of all tasks of economic value, as I mentioned earlier. And that's actually a very specific definition that demonstrates that, A, they are a for-profit company. While they were structured as a nonprofit, they're not operating as a nonprofit in terms of intent, right? The wanting to automate all tasks of economic value is actually not encompassing of all the things that human beings can do. Number one. Number two, this even got boiled down further, because now there's a money-based price tag.
So actually, when OpenAI produces enough revenue from their model, they will be said to have, quote, achieved AGI. So AGI as a term, as a concept, has gotten diluted to literally a price tag at this point. So why is that important? It's important contractually to OpenAI, because once they have achieved this goal of AGI, it actually means something structurally and financially for them. But again, this is so far removed from what you or I might think of when we think of AGI as this noble notion of creating a thinking machine.

SPEAKER_00

Yeah, very interesting. Do you think the product companies and the hype are really what's changed what we think of as AGI?

Education as Career Insurance

AI Stress-Testing Institutions

SPEAKER_01

I think the need to drive revenue is what ends up changing it. I also think that it's impossible to do. Human beings process a complex amount of information in absolutely wild ways that machines cannot even begin to approach. So, you know, Yann LeCun has been saying this for a while. He recently left Meta because he completely disagrees with this focus on large language models specifically, and structures like transformer-based technologies, as the way of achieving this kind of work. And he's always said that, you know, AI operates pretty much as good as a common house cat, right? And barely that, right? I think my cat's actually smarter than most generative AI systems. And it's true. Like, if you think about the basic things that these models fail at: GPT-5 was supposed to be the big game changer, right? Sam Altman was saying that it's going to be the same as having a PhD in your pocket. And the product launched, and it was, from some perspectives, a big failure, because people were, you know, going through the same tired memes, right? It can't spell the word strawberry. It can't make a map of the United States. I think another way to think about it, just stepping back from large language models, has been self-driving cars, right? The average 16-year-old can do in two weeks what it took 20 years and hundreds of billions of dollars, if not more, to have a car do poorly. So while Waymo is currently considered a success, you have to think of all the sunk cost and the capital put into it, and the many, many decades and PhDs and brilliant minds that were put behind making a thing that can barely mimic what your kid can do in one week of lessons. When you think about it that way, how hard it is to teach a machine to do the things that you and I do without even thinking about it, it just shows you how much work there is to be done.

SPEAKER_00

Yeah, you know, one of my takeaways from this conversation is that we really need to lean into education more, when you speak of the things that humans do better and more efficiently. So I think one of the things to future-proof people and help their careers is education. The better you get knowledge-wise and skills-wise, the better off you're going to be. And it's really one of the things I'd like to see more of globally. I know that in the US, education can be a challenge, and not everyone's getting the education that they deserve and need. We really need to focus more on education, because then, with tools like AI, you'd be able to do even more.

The Power of Career Pivots

SPEAKER_01

Absolutely. And I think the other part of it is, like, lifelong learning will be a thing. And it's fine. It's nothing to be worried about or scared about. Like, you should be aware of what's out there, what the new tools are. And if you work for a company, frankly, I think it should be your company's responsibility to train you on those tools, because guess what? That leads to better employee retention. It leads to happier employees, it leads to more productive employees. There's been decades of research on this. So, to your point, that idea isn't new. We should actually be embracing and understanding new methods of education for an AI age. Look, one of the most interesting things about AI is that it's not that AI does anything particularly insidious. I think as a technology, it pushes every institution to its upper limit. No matter what we think about, our electoral systems, our education systems, everything: AI really supercharges everything, but it also then starts to strain at the seams. And then you start to see what the flaws are. I think we all knew that the college education system that we have was actually not really preparing people for the workforce. Like, very few people actually do for a living what they majored in in college. And I think we all just kind of knew that and we just accepted that, right? So if college were really about training people for the workforce, we'd all be learning, like, Excel and Outlook, but we don't, right? Students feel like their education is disjointed from what they then go do in the real world. And maybe that is or isn't the point of education. But I think the problem is we never had to ask that question.
And AI forces us to ask that question, because if ChatGPT or Claude or Gemini can write a 500-word essay about, you know, whatever, some novel, just as well as or better than I could, then you have to ask, what was the purpose of teaching students how to write 500-word essays? Well, there was a purpose. The purpose was not to produce an essay; the purpose was to teach you how to think critically, how to think abstractly, how to pull ideas together, how to synthesize, and how to argue convincingly. So let's focus on what we should be teaching. And as you said, what are the things humans do better than machines?

The Advantage of Diverse Skills

SPEAKER_00

Yes. And one thing I discovered, for anyone listening, because my career has been all over the place: when I graduated high school, my career choice was professional wrestling. So I did that for a while.

SPEAKER_01

I love that you actually went and did that though.

SPEAKER_00

I graduated, and college wasn't really a thing for me, because I took my college entrance exams, and high school I hadn't taken seriously, so I needed higher entrance exam scores or, like, eight letters of recommendation from my high school teachers. I really wasn't that interested in going. So my friend said, you know, you're a big guy, you should go into pro wrestling. So I did it for a while, but then it came to a point where I got married and I needed something more stable. So I ended up going to trade school and learning AutoCAD. Before being introduced to AutoCAD, I had no exposure to computers, and I was probably the worst in my class computer-skills-wise. This is, like, in '94. So once I got out into the workforce, I picked up these skills, and my skills were better than my peers' computer-wise. I pivoted to IT, then cybersecurity. So for anyone listening, what you're doing now, or what you're going to college for, is not set in stone; it's not where you have to stay. Take those skills and pivot, and, you know, go beyond into other areas. Don't feel like you're stuck if you went to school for a psychology degree or maybe an art degree or something. You're not stuck in those areas. You can move on to other areas.

Closing Thoughts

SPEAKER_01

Yeah, and I'm a political scientist. My PhD is in American politics. I went to MIT and I studied political science, right? But your point, and it's really important for people navigating the workforce, is that every job teaches you something. Like, I am sure there are things you learned in pro wrestling, things like maybe interpersonal skills, that you probably use today. So, for example, today, in the conversation about AI governance and how governments can adapt and how institutions can adapt, when I'm talking about all of this, I'm leaning on my political science education: what I learned about institutional development, about incentive models, about power structures. None of the conversations I'm having on those topics relate to any of the knowledge I have about programming, coding, or AI development. But I need all of it. I need to have understood why the Constitution was written the way it was written, and also how LLMs are built, to have a reasonable conversation telling a company how they should structure their AI test and evaluation models. So all of it is relevant. And frankly, some of the most interesting people you meet are the people who have gone from profession to profession, because they have so many skills that they can pull together that are very, very diverse. So yeah, I think a really good point that you're making is that there have always been those of us who are interested in a wide range of things, or who are comfortable with pivoting and change. It's not unique to this generation, it's not unique to AI. And again, maybe AI is just pushing the upper limits of it a bit, such that we now have to think a little bit more fluidly about these kinds of things. But again, people like us have always existed.

SPEAKER_00

Very cool. So that's a wrap. A big thanks to my guest, Dr. Chowdhury. Let's close the file until the next chapter. This is Phillip Wylie signing off.