Surviving AI – Navigating AI Job Displacement and Automation

The $3.7 Billion Oversight Economy: The New Jobs AI Is Creating Right Now | Job redesign and task automation

Carlo Thompson

The narrative around AI and jobs has become monotonous: automation bad, humans doomed, learn to code or perish. But what if the very technology threatening jobs is simultaneously creating an entirely new category of work—one that values judgment over algorithms, ethics over efficiency, and management over machine learning?

In Episode 12 of Surviving AI, host Carlo Thompson unpacks "The Oversight Economy"—the rapid emergence of roles designed to govern, audit, and ethically guide AI systems. This isn't just legal compliance; it's a $3.7 billion market projected by 2028, and it's hiring now.

In this episode, you'll learn:

  • Why 14,000+ AI governance positions remain open while entry-level tech hiring has dropped 25%
  • The four categories of oversight roles: Ethics & Strategy, Compliance & Risk, Technical Oversight, and Implementation
  • Specific job profiles including AI Ethics Officer ($75K-$250K+), AI Compliance Manager ($125K-$200K), AI Auditor ($130K-$188K), and Prompt Engineer ($90K-$300K+)
  • The critical regulatory deadlines driving this hiring surge—including August 2, 2026, when EU AI Act enforcement begins for high-risk systems
  • Why 72% of AI-exposed job vacancies require management skills, not technical AI expertise
  • How to pivot into governance from legal, project management, or humanities backgrounds

Key Statistics:

  • Only 1.5% of organizations report being satisfied with their AI governance staffing
  • Workers with AI skills earn a 56% wage premium over peers in similar roles
  • 77% of new AI jobs require Master's degrees or specialized certifications
  • US AI executives average $1.1M total compensation vs. $565K in Europe

Sources referenced: WEF Future of Jobs Report 2025, IAPP Survey 2025, Second Talent Industry Report, OECD Future of Work, PwC AI Jobs Barometer, Heidrick & Struggles

https://docs.google.com/document/d/1r8bceTpWHX4DZuGoLfLLlaR8ioJHgONVwjgfUnW7AHc/edit?usp=sharing


YouTube Episodes

SURVIVING AI With Carlo Thompson - YouTube

SPEAKER_01

Welcome to Surviving AI. Welcome back to the Deep Dive. Today is Monday, January 26, 2026.

SPEAKER_00

Good to be here.

SPEAKER_01

And I have to be honest, looking at the news this morning, it is well, it's hard not to feel a certain heaviness.

SPEAKER_00

Oh, I know. It's a constant drumbeat.

SPEAKER_01

It is. The headlines are just dominated by these massive scary numbers about artificial intelligence in the job market. You've probably seen the figure floating around everywhere.

SPEAKER_00

The 92 million number.

SPEAKER_01

92 million jobs potentially displaced. And that's not just a statistic, you know? That is 92 million households suddenly wondering what the future looks like.

SPEAKER_00

Absolutely. It's deeply personal for a lot of people.

SPEAKER_01

It feels like the whole story right now, the prevailing narrative, is entirely focused on what's being taken away. You hear "white-collar recession," "automation of entry-level work." It's a lot to process.

SPEAKER_00

It is incredibly heavy, and you're right. That fear is the loudest story in the room right now. And it should be. We're witnessing a fundamental restructuring of the labor market. Right. And whenever that happens, you know, whether it was the Industrial Revolution or the Internet age, that immediate reaction, that anxiety, it's completely valid. But, and this is why I'm actually, believe it or not, excited to be here today. Okay. There is a parallel story happening, a huge one. While everyone is staring at the demolition site, you know, watching the old jobs get knocked down, there is this massive construction project happening right next door that almost nobody is talking about.

SPEAKER_01

And that construction project is our focus today. We are doing a deep dive into the surviving AI with Carlo Thompson material. Specifically, we're looking at the episode titled AI Dispatch: The Oversight Economy.

SPEAKER_00

A great title. Very fitting.

SPEAKER_01

It really is. And before we get into the weeds, just a quick reminder to our listeners: if you like what we do, please subscribe and like the show. It really helps us keep finding these signals in the noise.

SPEAKER_00

It really does.

SPEAKER_01

So the oversight economy, it sounds official, maybe a little bureaucratic.

SPEAKER_00

It does have that ring to it, yeah.

SPEAKER_01

But the premise here is that there's an explosion of careers designed specifically to govern, audit, and guide these AI systems. Is this a real thing, or is it just some corporate fluff to make us feel better about the robots taking over?

SPEAKER_00

It is very real. And it's growing faster than almost any other sector in tech right now, believe me. We're not just talking about a few ethics consultants on the side. Okay. We are mapping a whole emerging landscape of careers that just didn't really exist five years ago. We're gonna look at AI ethics officers, compliance managers, prompt engineers, AI auditors.

SPEAKER_01

All these new titles.

SPEAKER_00

All new. And the goal today is to really figure out who these jobs are actually for, what they pay, and spoiler alert, it's a lot, and whether they can actually offset that huge ninety-two million displacement number you mentioned at the top.

SPEAKER_01

Okay. Let's start there. Let's start with the money and the scale. Because you know, usually when I hear governance or compliance, my brain immediately goes to cost center. Sure.

SPEAKER_00

The Department of No.

SPEAKER_01

Exactly. The people you have to hire to avoid getting sued, but not the people who actually drive growth. How big is this market really?

SPEAKER_00

Well, that's the fundamental shift. It's moving from a cost center to what I call a survival center.

SPEAKER_01

A survival center. I like that.

SPEAKER_00

The projections show the AI governance market hitting $3.7 billion by 2028.

SPEAKER_01

$3.7 billion.

SPEAKER_00

Now I know $3.7 billion might sound small compared to, say, the GPU market or something massive like that. But you have to look at the trajectory.

SPEAKER_01

The growth rate.

SPEAKER_00

The growth rate. We're seeing a compound annual growth rate, a CAGR, of roughly 40%. Wow. In the business world, 40% growth is rocket fuel. That's not keeping-the-lights-on spending. That is we-need-this-yesterday spending.
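
For context on what a 40% compound annual growth rate implies, CAGR is just (end/start)^(1/years) − 1. A quick sketch, using a hypothetical $1.0B base in 2025 against the projected $3.7B in 2028 (the base-year figure is an illustration, not from the sources):

```python
# CAGR sketch: the 2025 base value is a hypothetical illustration.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Implied growth from a hypothetical $1.0B (2025) to $3.7B (2028):
print(f"{cagr(1.0, 3.7, 3):.0%}")  # ~55% per year

# And at a flat 40% CAGR, a market nearly triples in three years:
print(f"{1.0 * 1.40 ** 3:.2f}")  # 2.74
```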

SPEAKER_01

A 40% CAGR is wild. I mean, that rivals the early days of cloud computing.

SPEAKER_00

It's that level of urgency.

SPEAKER_01

But I want to push back a little bit on this. Is this growth driven by companies actually, you know, wanting to be responsible? Or is it just panic hiring? Because there's a stat in the IAPP and Credo AI 2025 survey that stood out to me.

SPEAKER_00

I think I know the one you mean.

SPEAKER_01

And it wasn't exactly a glowing review of how ready companies are right now.

SPEAKER_00

You caught that. That is probably the most alarming statistic in the entire data stack we looked at.

SPEAKER_01

So what did it say?

SPEAKER_00

The survey found that only 1.5% of organizations feel satisfied with their current AI governance staffing levels.

SPEAKER_01

Wait, wait, hold on. Say that again?

SPEAKER_00

1.5 percent.

SPEAKER_01

1.5, not 15, 1.5.

SPEAKER_00

1.5. Which means if you flip it, 98.5% of companies are looking around their conference rooms and realizing we do not have the people to handle this.

SPEAKER_01

They're completely exposed.

SPEAKER_00

Utterly. They're scrambling. It's a massive, massive talent gap. And when you have a gap that wide where the supply is practically zero and the demand is basically universal, you have incredible leverage if you're a job seeker.

SPEAKER_01

That explains the numbers I saw on the job boards.

SPEAKER_00

Exactly. It explains why we're seeing over 14,000 listings on LinkedIn right now for AI governance roles. And then, even more specifically, nearly 400 listings just for AI ethics officers on Indeed this month alone.

SPEAKER_01

Okay. So the demand is just off the charts. It's a feeding frenzy. But why now? I mean, why is January 2026 the tipping point? ChatGPT had its big public moment years ago. Why didn't they hire all these people back in 2024?

SPEAKER_00

That's the key question. Because in 2024, it was still very much the move fast and break things phase, the Wild West.

SPEAKER_01

Right, the fun part.

SPEAKER_00

The fun part, yeah. Now, we have firmly entered the if you break things, you will be destroyed phase.

SPEAKER_01

Okay.

SPEAKER_00

And this jobs boom, let's be very, very clear: it is not driven by corporate altruism. Companies are not waking up on a Tuesday morning and saying, you know what, we should be more ethical because it's the right thing to do.

SPEAKER_01

So what's driving it?

SPEAKER_00

They are hiring because they are terrified. Terrified of regulation and of liability. We are in the middle of a regulatory tsunami and the wave is starting to hit the shore.

SPEAKER_01

And looking at the timeline in the sources, the biggest wave in that whole tsunami, it seems to be the EU AI Act.

SPEAKER_00

The EU AI Act is the big stick. It's the global standard setter. It actually entered into force way back on August 1st, 2024. But these things always have a grace period.

SPEAKER_01

Or ramp up time.

SPEAKER_00

Exactly. And the reason we are talking about it with such urgency today is that that grace period is rapidly evaporating.

SPEAKER_01

Okay. Right. I see the dates here in the notes. But before we get to the specific dates, can we just talk about the penalties for a second? Because I think that explains the panic you're talking about.

SPEAKER_00

Let's do it, because the numbers are staggering.

SPEAKER_01

The sources mention fines up to 35 million euros.

SPEAKER_00

Or, and this is the part that gets everyone's attention.

SPEAKER_01

7% of global annual turnover.

SPEAKER_00

We need to just pause on that 7% figure for a moment. In compliance history, we're used to fines being, you know, a cost of doing business. A big bank budgets for a few million in fines, like a regular driver budgets for a speeding ticket. It's an annoyance.

SPEAKER_01

Right. It's just part of the P&L.

SPEAKER_00

But 7% of global turnover isn't a speeding ticket, it's a seizure of the engine. For a company like Amazon or Microsoft, or even a large retailer, that could wipe out their entire profit margin for the year or more.
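
That ceiling works as "whichever is greater" of the two figures, which is why the 7% number is what dominates for large firms. A minimal sketch, with illustrative turnover figures (not legal advice):

```python
# EU AI Act maximum-penalty sketch for the most serious violations:
# the greater of EUR 35M or 7% of worldwide annual turnover.
def max_eu_ai_act_fine(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

# For a firm with EUR 10B in global turnover, exposure is ~EUR 700M;
# for a EUR 100M firm, the EUR 35M floor dominates instead.
print(round(max_eu_ai_act_fine(10_000_000_000)))  # 700000000
print(round(max_eu_ai_act_fine(100_000_000)))     # 35000000
```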

SPEAKER_01

So that shifts the whole conversation in the boardroom.

SPEAKER_00

Completely. It goes from legal annoyance to existential threat. And that right there is why the hiring budget for these roles is suddenly almost unlimited.

SPEAKER_01

And I'm assuming this has that Brussels effect we always hear about, where European laws end up becoming the de facto global policy.

SPEAKER_00

100%. It has what's called extraterritorial reach. It doesn't matter if your headquarters are in Austin, Texas, or in Tokyo. If you sell your product or service in the European Union, you have to comply, full stop.

SPEAKER_01

So you have American companies scrambling to hire people who understand these European laws.

SPEAKER_00

Exactly. Because they simply cannot afford to lose access to that massive market.

SPEAKER_01

Okay, let's look at the calendar then, because there's a specific date looming that seems to be the forcing function for all this hiring we're seeing right now.

SPEAKER_00

It is. It's the big one.

SPEAKER_01

We already passed one deadline: the ban on unacceptable-risk AI back in February 2025. Things like emotion recognition in workplaces.

SPEAKER_00

Right. The most egregious stuff got banned first.

SPEAKER_01

So what happens on August 2, 2026? Why is that the date circled in red on every lawyer's calendar?

SPEAKER_00

August 2, 2026 is the red letter day. That is the date when the obligations for high-risk systems become fully enforceable.

SPEAKER_01

And high-risk sounds intense. It makes you think of nuclear codes or autonomous drones.

SPEAKER_00

It does.

SPEAKER_01

But looking at the list and the source material, it seems much more mundane, much more common.

SPEAKER_00

That's the trap. That's what so many companies are realizing. High risk, under the EU definition, covers a huge swath of the regular economy. It includes AI used in employment.

SPEAKER_01

So if you use an algorithm to scan resumes, that's high risk.

SPEAKER_00

It covers education, so student admission software, it covers credit scoring, critical infrastructure. If you are a bank using an AI model to decide who gets a loan, you are operating a high-risk system.

SPEAKER_01

So by August 2026, what do you need to have in place?

SPEAKER_00

You need to have all your documentation in order. Your human oversight protocols have to be established and working. Your bias audits need to be completed and documented. And you can't start doing that in July of 2026. You need to hire the team now to build the entire infrastructure to be compliant by then. That's why the hiring is happening today.

SPEAKER_01

Okay, so that explains the rush. And it's not just Europe, right? The U.S. is doing its own thing, which, from the looks of it, sounds messy.

SPEAKER_00

Messy is a very polite way to put it. We don't have one single federal law like the EU does. We have a patchwork, a confusing patchwork.

SPEAKER_01

What are we looking at? State laws?

SPEAKER_00

You have the Colorado AI Act, which becomes effective very, very soon, June 30th, 2026. You have California's Transparency and Frontier AI Act that just kicked in on January 1st. You have New York City's Local Law 144, which is all about bias audits for hiring tools.

SPEAKER_01

So if you're a national company, that's a compliance nightmare. And a job creation engine.

SPEAKER_00

The best kind of nightmare for job seekers.

SPEAKER_01

Okay, so the why is fear. Pure and simple. Fear of fines and legal fragmentation. Let's talk about the what. The sources call these the new collar jobs. Right. Let's unpack these specific roles because I think a lot of listeners might assume these are all just for computer scientists with PhDs.

SPEAKER_00

Aaron Powell And that's the biggest misconception.

SPEAKER_01

The first one on the list is the AI ethics officer. Now, I have to play devil's advocate here for a second. Is this a real job? Or is this just a person companies hire to write a nice, fluffy mission statement and then completely ignore?

SPEAKER_00

That is the cynicism I expected. And look, three years ago, you might have been right, it might have been a token role, some ethics washing. Right. But today, it is operational guardianship. Think of it this way: you have the engineering team who wants to build the fastest, smartest, most capable model possible. They're hitting the gas.

SPEAKER_01

Full speed ahead.

SPEAKER_00

Then you have the legal team who just wants to say no to everything to avoid any possible risk. They're slamming on the brakes. The AI ethics officer. They're the one driving the car.

SPEAKER_01

So they're the ones actually steering, making the moment-to-moment decisions.

SPEAKER_00

They have to be. Their responsibilities are incredibly concrete now. They aren't sitting around in an ivory tower debating the philosophy of consciousness. They are in the trenches, conducting bias audits on real systems. They are developing ethical frameworks that engineers can actually code against. They are answering the tough questions, like: we know we can build this tool to predict employee attrition, but if we do it, does it inadvertently discriminate against older workers or women returning from maternity leave? And if it does, how do we fix it before we ship it?

SPEAKER_01

And the market values this role highly.

SPEAKER_00

Incredibly highly. The salary data is pretty eye-opening. Entry level is sitting around $75,000 to $95,000.

SPEAKER_01

Which is a great starting salary.

SPEAKER_00

A fantastic start. But when you get to the senior or director level, it just jumps significantly. We're seeing salary bands of $160,000 to $240,000 plus bonuses.

SPEAKER_01

Wow.

SPEAKER_00

And some of the sources even cite up to $369,000 for top, top governance roles.

SPEAKER_01

And what's the background for these people? Is it tech?

SPEAKER_00

This is the most interesting part. They often want backgrounds in philosophy, law, or sociology, but combined with tech literacy. You don't have to be a coder, but you have to speak the language.

SPEAKER_01

That is a vindication for every liberal arts major who was ever told their degree was useless.

SPEAKER_00

It really, really is. If you can parse complex logic and understand second-order societal impact, you are suddenly in 2026 more valuable than a mid-level coder.

SPEAKER_01

Amazing. Okay, let's move to the next role: the AI auditor. This one sounds a bit drier, a bit more traditional. Is this just about checking boxes on a form?

SPEAKER_00

You know, it sounds dry, but I'd think of it less like checking boxes and more like red teaming for algorithms. An AI auditor is the inspector. A financial auditor looks for math errors or fraud in the books. An AI auditor is actively trying to break the algorithm.

SPEAKER_01

To find its weak spots.

SPEAKER_00

Exactly. They are testing for bias. Is this lending model denying loans to people in a certain zip code more often than others? They're validating performance. Does the model work as well in the real world as it did in the lab? And unlike traditional IT audits, which are all about security and uptime, this involves human-centered design principles.

SPEAKER_01

That phrase human-centered design, it pops up a lot in the sources. What does that actually mean in an audit context?

SPEAKER_00

It means looking beyond the code to the outcome for the human being at the end of the chain.

SPEAKER_01

Can you give me an example?

SPEAKER_00

Sure. Let's say an algorithm works perfectly mathematically. The code is clean, the logic is sound, but it accidentally denies healthcare coverage to a specific group of people because of a subtle flaw in the training data it was fed. A traditional IT audit might completely miss that.

SPEAKER_01

Because the code works.

SPEAKER_00

The code works. But an AI auditor is trained to look for that societal harm, for that negative human outcome. They ask, who does this hurt?
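
That zip-code lending check can be made concrete. One standard test an auditor might run is the "four-fifths" impact-ratio rule used in US disparate-impact analysis, similar in spirit to the impact ratios NYC Local Law 144 requires; the decision data below is invented for illustration:

```python
# Four-fifths (80%) impact-ratio check; all decision data is made up.
def selection_rate(decisions):
    """Fraction of positive outcomes (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def impact_ratio(group_a, group_b):
    """Lower selection rate divided by the higher one.
    A ratio below 0.8 is a conventional red flag for adverse impact."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

zip_code_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 80% approved
zip_code_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approved

print(f"{impact_ratio(zip_code_a, zip_code_b):.2f}")  # 0.50 -> flag it
```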

SPEAKER_01

And the pay for playing detective with algorithms?

SPEAKER_00

It's strong. Senior levels are typically between $130,000 and $188,000. And certifications are huge here.

SPEAKER_01

Oh, interesting.

SPEAKER_00

Yeah. Things like the CISA, which is a traditional IT audit cert, or the newer certifications from the IAPP, are correlating with higher pay. It's a role for people who love details and finding hidden flaws.

SPEAKER_01

Okay. The third role, this one has been buzzy for a while and honestly a bit controversial: the prompt engineer. I feel like I see people arguing about this online constantly. Some say it's the most important job of the future, others say it's a temporary bug until the AI gets smarter. What does the data actually say?

SPEAKER_00

The data says the market is still booming, but the role itself is morphing, it's evolving.

SPEAKER_01

How so?

SPEAKER_00

Well, first, the market valuation is projected to hit $3.43 billion by 2029. That is massive. So the money is there.

SPEAKER_01

It's not a fad.

SPEAKER_00

It's not a fad. But here is the nuance. The translator role, the idea of someone who just optimizes inputs to get the best outputs, is quickly becoming a skill, not just a standalone job.

SPEAKER_01

Right, like typing. You don't get hired as a professional typist anymore, but you absolutely have to know how to type for almost any office job.

SPEAKER_00

That is the perfect analogy. Most marketing jobs, coding jobs, research jobs will soon just expect you to be good at prompting an AI. However, and this is a big however, there is still a very high-end slice of this market. Specialists. The true specialists. Prompt engineers who are working on fine-tuning massive foundational models or building complex system architectures with multiple AIs. They are pulling in huge money, sometimes $300,000 plus at top tech firms. But for the average person listening, prompt engineering is something you add to your resume, not the only thing on it.

SPEAKER_01

That makes a lot of sense. Okay, finally, let's talk about the AI governance and compliance manager. This feels like the glue holding this whole operation together.

SPEAKER_00

This is the architect. This is the person who connects the legal requirements, all those scary EU fines we talked about, to the actual day-to-day engineering workflows.

SPEAKER_01

So they build the system.

SPEAKER_00

They build the system. They are the ones creating the internal bureaucracy that allows the company to innovate without blowing itself up. And I know bureaucracy is kind of a dirty word. It is. But in this case, it's the safety rail on the side of a cliff. You need it.

SPEAKER_01

And I noticed a really specific salary bump mentioned for this role in the source material regarding dual expertise. Can we break that down?

SPEAKER_00

This is a golden nugget for anyone listening who works in privacy or compliance today.

SPEAKER_01

Okay, I'm listening.

SPEAKER_00

If you have expertise in both privacy, so think GDPR, CCPA, and AI governance, the median salary is nearly $170,000.

SPEAKER_01

Okay, that's a strong number.

SPEAKER_00

But if you only have the privacy expertise, it's around $123,000. That is a nearly $50,000 premium just for learning the AI layer on top of what you already know.

SPEAKER_01

That is the most actionable piece of career advice I have heard in a very long time. If you are in privacy, pivot now. Learn AI.

SPEAKER_00

Absolutely. It's the biggest salary arbitrage opportunity on the market.

SPEAKER_01

Okay, so we've covered the roles. It's a fascinating new landscape. But I want to address the learner in our audience, the person who is smart, curious, maybe a little anxious, but they don't know how to code in Python. Right. I see all these technical sounding terms in the notes like model drift and hallucinations. If I don't know how to fix those things in the code, can I really get one of these high-paying jobs?

SPEAKER_00

This is what I call the no-code revelation. And honestly, it is the single most important takeaway for your listeners today. Okay. The OECD Future of Work data is explicit on this: 72% of AI-exposed job vacancies require management skills. 67% require business process skills. They are not asking you to build the neural network from scratch.

SPEAKER_01

You don't need to be the mechanic who can strip down the engine.

SPEAKER_00

No. You need to be the traffic planner. A city's traffic planner doesn't need to know how to rebuild a car's transmission. They need to understand flow, safety, rules, and what happens when a car crashes. You need AI literacy, not AI coding.

SPEAKER_01

Let's define that literacy though, because it's a bit of a buzzword. You mentioned drift earlier.

SPEAKER_00

Yeah.

SPEAKER_01

If I'm in a job interview and they ask me about model drift and I just stare at them blankly, I'm not getting the job. So explain it like I'm five. What is drift?

SPEAKER_00

Okay. Simple analogy. Imagine you train an AI to approve mortgages based on economic data from 2020. Okay. In 2020, interest rates were at rock bottom and the housing market looked a certain way. Now it's 2026. The economy is totally different. Inflation, interest rates, everything has changed. If that AI is still making decisions based on the patterns it learned from 2020, it is going to make a lot of bad loans.

SPEAKER_01

Because the world has changed.

SPEAKER_00

The world moved away from the model's training. That's drift. The model itself isn't broken, it's just out of date. An AI auditor or governance manager needs to be able to spot that and say, hey, we need to retrain this model with new data, but they don't need to be the one to write the code to do the retraining.
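
That spot-the-drift step can itself be sketched. A toy check, with invented rate figures, flagging when live inputs have wandered away from the training distribution (real monitoring teams typically use metrics like PSI or a KS test):

```python
from statistics import mean, stdev

# Invented figures: rates the model trained on vs. what it sees today.
train_rates = [2.8, 3.0, 2.9, 3.1, 2.7, 3.0]  # 2020 mortgage rates (%)
live_rates = [6.5, 6.8, 7.1, 6.9, 7.0, 6.6]   # rates the model sees in 2026

def drifted(train, live, threshold=2.0):
    """Flag drift when the live mean sits more than `threshold`
    training standard deviations from the training mean."""
    return abs(mean(live) - mean(train)) / stdev(train) > threshold

print(drifted(train_rates, live_rates))  # True -> retrain before trusting it
```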

SPEAKER_01

That is a great explanation. It's a business concept, not a coding concept. Exactly. And hallucinations. We hear that one all the time.

SPEAKER_00

That's even simpler. That's when the AI confidently makes things up.

SPEAKER_01

Like a lawyer using ChatGPT and it cites a court case that doesn't exist.

SPEAKER_00

The classic example. You as the manager need to know that this is a fundamental risk of the technology. And so you need to put guardrails in place, like mandatory human-in-the-loop checks for any legal work, to catch it. Again, you manage the risk, you don't necessarily fix the underlying math that causes it.
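
A guardrail like that can be very simple in practice. A hedged sketch of one human-in-the-loop check, routing any draft that looks like it contains a legal citation to a reviewer (the regex is a simplistic placeholder, not a production citation parser):

```python
import re

# Crude pattern for things that look like case citations, e.g.
# "Smith v. Jones" or "531 U.S. 98". A placeholder, not production-grade.
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][\w.]*\s+\d+\b|\bv\.\s+[A-Z]")

def needs_human_review(ai_output: str) -> bool:
    """Route drafts containing citation-like text to a human reviewer."""
    return bool(CITATION_PATTERN.search(ai_output))

print(needs_human_review("See Smith v. Jones, 531 U.S. 98 (2000)."))  # True
print(needs_human_review("Please summarize the attached memo."))      # False
```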

SPEAKER_01

So we're looking for what the sources call T-shaped professionals.

SPEAKER_00

Precisely. That's the ideal candidate profile. The top of the T is your broad knowledge: business, governance, risk management. The vertical bar of the T is your deep, specific expertise in one area: law, human resources, finance, healthcare.

SPEAKER_01

So if you're an HR expert who knows employment law inside and out.

SPEAKER_00

And you spend a few months learning the basics of AI bias and fairness frameworks, you are the perfect candidate for an AI governance role in an HR department. You're invaluable.

SPEAKER_01

I want to touch on education barriers, though. The sources say a lot of these jobs prefer or even require master's degrees. Is that a hard gate that's gonna lock a lot of people out?

SPEAKER_00

It looks like a hard gate, but I think the gate is unlocked if you have the right key.

SPEAKER_01

And the key is experience.

SPEAKER_00

Look, yes, 77% of new AI jobs might list a master's as a preference. But here's the reality: this field is so new that nobody has 10 years of experience in it. You cannot hire someone with five years of experience in EU AI Act compliance because the Act didn't even exist five years ago.

SPEAKER_01

Right. So experience in a related field is the equalizer.

SPEAKER_00

Experience is king. If you have five years of experience in healthcare compliance, knowing HIPAA inside and out, that is worth more to a hospital hiring for an AI governance role than a fresh master's graduate who doesn't know how a hospital actually runs.

SPEAKER_01

So the easiest path for many people might be internal.

SPEAKER_00

That's the fast track. Internal mobility. Look at your current company. I guarantee you they are setting up an AI task force or a governance committee. Raise your hand. Volunteer. Move from your current legal or compliance or HR role into that AI-focused slot. That's how you make the pivot.

SPEAKER_01

Okay, let's break down the industries quickly. Who is hiring the most aggressively for these roles? Financial services seems to be at the top of the list.

SPEAKER_00

Financial services is the adult in the room when it comes to this stuff.

SPEAKER_01

Why is that?

SPEAKER_00

They've been doing this for decades, just with a different name. They have a regulation called SR 11-7. It's all about model risk management.

SPEAKER_01

SR 11-7. Sounds riveting.

SPEAKER_00

It's incredibly dry, but it's incredibly lucrative. It basically says if you use a computer model to make big money decisions, you have to document it, test it, and prove it works first.

SPEAKER_01

They've had these processes for years.

SPEAKER_00

Banks have been doing this for complex spreadsheets and algorithmic trading models since 2011. For them, swapping in AI is just the next logical step. They have the muscles for it, and they pay well, often 15 to 25% higher than the baseline, because they value stability over everything else.

SPEAKER_01

And what about healthcare?

SPEAKER_00

In healthcare, the stakes are just so high. It's literally life or death. The Department of Health and Human Services has its own mandate for risk management coming up in April 2026. If an AI misdiagnoses cancer or a system leaks sensitive patient data, the liability is practically infinite. So hospitals, insurers, and health tech companies are hiring very, very rapidly.

SPEAKER_01

And government.

SPEAKER_00

Government is all about public trust. And also increasingly civil rights. We're seeing roles being created for civil rights and equity specialists within AI teams. The U.S. Department of Defense has a chief responsible AI officer. These are incredibly influential roles shaping how the state uses this powerful technology.

SPEAKER_01

Okay, I want to pivot back to where we started.

SPEAKER_00

The big number.

SPEAKER_01

The big scary 92 million displaced jobs number. We've talked about this booming oversight economy, all these fascinating, high-paying new jobs, but I have to ask the hard question now. Is this a fair trade? Can these governance jobs realistically replace the millions of administrative and customer service jobs that are being lost? My cousin, who works in a call center, can he really become an AI ethics officer?

SPEAKER_00

We have to be brutally honest here. And the answer is no.

SPEAKER_01

No.

SPEAKER_00

It is not a one-to-one swap. The displacement is hitting what economists call routine cognitive work: data entry, scheduling, customer service. The automation probability for some of those roles is like 80-90%. The creation, as we've discussed, is happening in high-skill oversight and strategy. There is a massive skills mismatch.

SPEAKER_01

So we're seeing a kind of white-collar recession at the bottom end of the market and a simultaneous boom at the top.

SPEAKER_00

That is exactly what's happening. And that gap between the two is where all the societal pain is going to come from. It involves a massive amount of churn. A displaced call center worker cannot just walk into a $180,000 AI governance role without significant, and I mean significant, reskilling and support. That's the difficult reality.

SPEAKER_01

Is there any good news in the macro numbers at all?

SPEAKER_00

There is. The World Economic Forum, in their data, predicts a net positive outcome eventually. They see 170 million new jobs being created globally versus 92 million displaced by the year 2030.

SPEAKER_01

So a net gain of 78 million jobs.

SPEAKER_00

A net gain. But getting from here to there is going to be incredibly bumpy. It's not a smooth transition. That churn is going to feel very disruptive for a lot of people.

SPEAKER_01

The sources mention the Centaur model as a potential bridge during this transition. What is that? I'm picturing a half horse, half human.

SPEAKER_00

That is the metaphor. And it's a powerful one. It's the idea of augmentation, not replacement. The data actually backs this up, showing that 40% of firms right now are focusing on using AI to make their human workers better, faster, smarter, to create augmented humans, or centaurs.

SPEAKER_01

And how many are focused on just replacement?

SPEAKER_00

Only 12%. So the focus is much more on collaboration right now.

SPEAKER_01

So instead of firing the writer, you give the writer an AI tool so they can write five times faster and spend their time on higher level strategy and editing.

SPEAKER_00

Exactly. But that writer now needs to learn a new set of skills. They need to know how to verify the AI's work, how to check it for bias, how to prompt it correctly to get what they need. Their job shifts from purely writing to editing and oversight. They become a manager of the machine.

SPEAKER_01

So let's land this plane with some actionable advice. If I'm listening to this right now and I'm feeling a mix of anxiety and maybe a little bit of excitement, and I want to position myself in this oversight economy, what do I do today?

SPEAKER_00

Okay, let's break it down into three paths. Very concrete. Path one: if you are a legal or compliance professional listening right now, you are in the pole position. Your homework is to go read the EU AI Act summary and the NIST AI Risk Management Framework from the U.S. government. These are your new Bibles. Master them. Path two: if you are a tech professional, a coder, an engineer, stop obsessing over learning the newest, hottest coding library for a second and start learning ethics and policy. Learn how to explain complex technical concepts to non-technical people. That is your new superpower.

SPEAKER_01

And the third path, the generalist.

SPEAKER_00

If you are a generalist, a project manager, a business analyst, an HR partner, you need to focus on that AI literacy we talked about. Become the translator between the tech team and the business side. And for you, I would seriously look at certifications.

SPEAKER_01

Are certifications actually worth it in this space? Or are they just money grabs from organizations trying to cash in?

SPEAKER_00

It's a great question. In a mature field, they can sometimes be money grabs. But in a brand new field like this, they serve as a credible proxy for competence.

SPEAKER_01

A signal to employers.

SPEAKER_00

A very strong signal. The sources specifically mention the IAPP's AIGP certification. That stands for Artificial Intelligence Governance Professional. Because university degrees are lagging so far behind the reality of the market, having that cert on your LinkedIn profile proves you've done the work and you know the frameworks. It will get you interviews.

SPEAKER_01

That is really concrete advice. Yeah. Okay, let's wrap this up. If our listeners are going to take away five things from this deep dive, five key takeaways, what should they be?

SPEAKER_00

Okay. Number one, the market is real. This is not a drill. We're talking $3.7 billion, growing at 40% a year. Number two, the pay is high. Senior roles are hitting $200,000, even $300,000 plus, because the talent is just so scarce.

SPEAKER_01

Okay, number three.

SPEAKER_00

Number three, the gap is your opportunity. Only 1.5% of companies are satisfied with their staffing. That means 98.5% of them are desperate to hire people like you.

SPEAKER_01

Number four.

SPEAKER_00

Number four, skills over code. Remember the OECD data. 72% of these jobs prioritize management and business skills over pure coding ability. You need to be a critical thinker, not just a programmer.

SPEAKER_01

And finally, number five.

SPEAKER_00

And number five, the deadline. August 2, 2026. The EU AI Act is the forcing function. It's the meteor heading for Earth that is making all of this happen right now, not in some distant future.

SPEAKER_01

It really feels like this oversight economy is the new frontier.

SPEAKER_00

Yeah.

SPEAKER_01

It's not about stopping AI, is it? It's about building the guardrails to make it safe enough to actually live with and do business with.

SPEAKER_00

That's it, exactly. The best way to predict the future is to prepare for it. And right now, in the age of AI, preparation means governance.

SPEAKER_01

Well said. That is it for this deep dive into the oversight economy. We really hope this gives you a clear map to navigate the massive changes happening in the job market.

SPEAKER_00

I hope so too.

SPEAKER_01

And join us next time for episode 10. The title is Business Ownership: The Only Job AI Can't Take. It's going to be a fascinating discussion on what might be the ultimate hedge against automation.

SPEAKER_00

I'm really looking forward to that one. It's a big topic.

SPEAKER_01

It is. Until then, keep learning. Thanks for listening, and join us next time on Surviving AI.