The Macro AI Podcast

OpenAI Blueprint: Industrial Policy for the Intelligence Age

The AI Guides - Gary Sloper & Scott Bryan | Season 2, Episode 74



Send a Text to the AI Guides on the show!


About your AI Guides

Gary Sloper

https://www.linkedin.com/in/gsloper/


Scott Bryan

https://www.linkedin.com/in/scottjbryan/

 

Macro AI Website

https://www.macroaipodcast.com/

Macro AI LinkedIn Page:  

https://www.linkedin.com/company/macro-ai-podcast/


Gary's Free AI Readiness Assessment:

https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness


Scott's Content & Blog

https://www.macronomics.ai/blog





I'm Gary Sloper. And I'm Scott Bryan. Today we're going through a new OpenAI publication titled Industrial Policy for the Intelligence Age: Ideas to Keep People First. Our goal in this episode is simple: explain what this document is, why OpenAI says it published it, and walk through the most important ideas in a clear, fact-based way.

01:27
Right. Yeah, we never get into politics on this show, and we're not here to argue for or against each proposal we're going to talk about in the paper. What we want to do is help business leaders, technology leaders, and our listener base in general understand what OpenAI is putting on the table, because this isn't just a product company talking about model features like you normally hear from OpenAI.

01:55
This is OpenAI stepping into a much broader conversation about work, economic structure, infrastructure, safety, governance, and public accountability in the AI era moving forward. Yeah. And OpenAI is very clear about that. This is presented as an early, exploratory document, a blueprint really, meant to start a conversation. And that's what we want to talk about today. Not a final policy package, not a complete answer set,

02:24
really just a starting point. The company says these ideas are intended to be refined, challenged, and developed through a broader democratic process. Yeah, so this episode is really about understanding the blueprint they put out. What is OpenAI saying? What is it worried about? What kind of future is it trying to frame? And what should executives take away from that? Right.

02:53
So what is this document? We did put a link in the show notes for you to read the complete PDF. It's not a long one, but we're not going to go through every single point. At the highest level, this is the policy paper we mentioned earlier. It's OpenAI's attempt to describe how society may need to adapt, make some changes here, if artificial intelligence keeps advancing rapidly and starts reshaping things like work,

03:22
production, institutions, and really everyday life. Yeah. And the paper starts from a very big premise. OpenAI says AI has moved in just a few years from narrow systems that could handle fast, bounded tasks to models that can perform more general tasks that people used to spend hours on. And then it pushes the idea further and says that if progress continues, we could see systems capable of handling

03:50
projects that currently take people months. And that's the scale of change they're asking people to think about. And from OpenAI's perspective, that means we should not wait until disruption is obvious and unavoidable. Their point here is that if the transition is going to be that significant, then the policy conversation has to start early, and start now. Yeah. And really, you know, it's April 2026 and this just came out a few days ago, but

04:19
we're not even early anymore. People have already been talking about this. And we've mentioned this concept before. We talked about UBI, universal basic income, and the fact that it was being explored last year. So now this publication is coming out from OpenAI, and they've probably been working on it for a long time. They're saying that the AI conversation now has to expand beyond benchmarks and products and into questions like,

04:47
kind of like what you mentioned, how do people share in the upside? How do societies in general manage all the risks? What happens to our actual labor systems, benefits systems, tax systems, infrastructure? And how do we keep this transition people-first oriented instead of institution-first or company-first? Right. Yeah.

05:14
And one of the questions you're probably asking is, why did OpenAI publish this, and why now? The paper says very directly that its purpose is to start a conversation about governing advanced artificial intelligence in ways that keep people first. Yeah. And I think that phrase matters. They're focused on keeping people first because throughout the paper, OpenAI

05:43
returns to the idea that advanced AI could create extraordinary abundance and productivity, but it could also very easily concentrate power, concentrate wealth, obviously disrupt jobs, like we've talked about a thousand times, and outpace the institutions that were built for an earlier economic era, like last year. So the document is their attempt to say, if that is the transition we're entering, what ideas should at least be on the table

06:13
right now? And it also tells us something about where the AI conversation is going. The leading labs are no longer talking only about what the models can do. They're increasingly talking about how the social contract may change, may need to evolve, around those capabilities. Yeah. And for our audience, that's why this matters to business leaders.

06:39
You might think you're just evaluating copilots, agents, automation, customer experience tools, things like that, that we've been talking about. But the companies building these systems, like OpenAI and the other large language model providers, are increasingly talking in higher-level terms: labor transitions, the public institutions we rely on, resilience, energy infrastructure, and governance. So that tells you this is no longer a narrow software story. Right. Good point.

07:09
And so what we're talking about here is, as they've laid out, really three core principles. And we should probably dig into those concepts from the paper. It's organized around three big goals. First, share prosperity broadly, and I think that would be welcome to a lot of folks. Second is mitigate risks. And third is to democratize access and agency. So that is really the backbone of the whole document.

07:38
So you'll understand that when you pull it down and read it in its entirety. Yeah, I think that's a helpful way to understand OpenAI's framing. On one hand, they're saying AI could produce extraordinary gains in productivity, scientific discovery, health, and education, and lower the cost of essential goods. On the other hand, they're saying those benefits should not accrue only to a small number of firms or a narrow slice of society. At the same time, they're saying

08:08
serious risks have to be actively managed. And finally, they're arguing that broad participation in the AI economy should not depend on privileged access to the most advanced systems. It should depend on useful and affordable AI that really expands human agency. And the third point is subtle but very important. OpenAI is not saying everyone must have unlimited access to the most

08:36
powerful models no matter what. The paper explicitly leaves room for things like tighter controls on a narrow class of advanced systems if needed for safety. But it says broad participation should still be possible through broadly available AI that is useful, affordable, and privacy-preserving. Yeah. Which really means this is also a document about distribution, and that can obviously be a political hot button. But you're talking about

09:06
distribution of capability, distribution of upside and risk, and distribution of power. Yeah. And one of the biggest signals in the paper is the title itself. OpenAI didn't call this an AI governance memo; it called it industrial policy. Right. Yeah. I think that was a pretty deliberate choice, I guess you could say. The paper argues that society has navigated major technological transitions before, but

09:35
doing so required institutions, protections, and public choices to make sure that growth really translated into broader opportunity. And it points to earlier periods in American history where the social contract had to evolve to reflect new forms of production and new economic realities. So we've been here before. This one's just a lot more impactful and much more immediate. Yeah.

10:05
The paper brings that logic into the AI era and says the toolkit may need to be broader than just regulation. It talks about workforce development, research funding, market-shaping tools, targeted regulation, and other concepts such as public-private collaboration. It also says governments should not act alone, and that non-governmental institutions should help pilot and refine

10:33
approaches before governments scale what works. Yeah. And I think that's an important point for listeners. The paper's not saying, you know, solve AI with one law. It's saying that this transition could be broad enough that multiple parts of the system have to move together. They're touching on a lot of things so that they can all be worked on at the same time. So private industry, like you said, public institutions, local communities, infrastructure,

11:04
education, research, government. All these things need to be moving and getting in motion at the same time. Right. And it also addresses a concern many people have, which is very concrete: the burden of AI infrastructure. The paper says AI data centers should pay their own way on energy, should not force households to subsidize them, and should generate local jobs and tax revenue for the economy. Yeah. And that's

11:33
one of the lines that executives should take notice of. It tells you that OpenAI understands that the AI build-out as a whole isn't just a software issue. It's also a grid issue, a siting issue, a permitting issue, a community impact issue, like we talked about on one of our last episodes. Right. And the first major section of the paper is called Building an Open Economy. This is where OpenAI focuses on things like participation,

12:00
access, and the shared prosperity component we talked about earlier. Yeah. And I think they were pretty direct on that point. It says that advanced AI could lower the cost of essential goods, create new opportunities, and accelerate scientific progress. But it also says the same capabilities are most likely going to disrupt jobs and reshape industries unevenly. And obviously, in that process, some communities probably get left behind.

12:30
Some workers could become more productive without feeling like they're sharing in the gains, and the upside could concentrate into a small number of firms, including OpenAI itself. And they came right out and admitted that in their paper. And the first proposal in this section is to give workers a voice in the AI transition. OpenAI argues that workers have deep knowledge of how work is actually performed and where

12:57
AI could make jobs safer, better, and more efficient. The paper also argues that workers could help draw the line around harmful uses of AI that intensify workloads or undermine autonomy, or things like scheduling or pay rates. Yeah. And I think that's obviously pretty practical, even outside of policy. In a lot of companies, the best AI use cases aren't

13:27
discovered by executives or in the boardroom; they're discovered by the people who are living with the repetitive tasks, workflows, and things like that. So one way to read this is that OpenAI is saying worker input should be a feature of deployment, not an afterthought. Yeah, good point. And the paper also talks about AI-first entrepreneurs. The idea here is that workers with

13:56
domain expertise may be able to start businesses more easily if AI can take on overhead that might otherwise be time consuming or costly for a new, budding organization. Think of things like accounting, marketing, procurement, and back-office operations. So the document floats support programs like

14:23
microgrants, revenue-based financing, model contracts, and shared services to help support this particular domain. Right. Yeah. And that's something we touch on quite a bit on the podcast. I think one possible answer to labor disruption is not just retraining workers into existing categories. Another answer is enabling more people to become builders. And AI will lower the coordination cost of starting something.

14:53
That will change entrepreneurship massively. You know, on this podcast we've predicted that there'll be an explosion of startups over the next decade that will create new markets and even pick apart a lot of the legacy companies out there. And we're already starting to see evidence of this. Yes. Lots of opportunity on the horizon for those who are determined. Right. Then comes one of the most memorable ideas in the paper: a right to AI.

15:20
So OpenAI says access to AI should be treated as foundational for participation in the modern economy, in much the same way literacy, electricity, and internet access mattered in earlier eras when they were first launched into the market. Yeah. They highlight affordability: free or low-cost access to the big foundational models,

15:48
along with free or low-cost access to education, infrastructure, connectivity, and training, so workers, schools, and small businesses are not left out. And that's a direct attempt to frame AI access as an opportunity issue, not a premium product issue or a higher class of service. Yeah. Good point. And I think for business leaders, that should ring familiar to you.

16:18
We've seen what happens when digital capability gaps become economic gaps. OpenAI is basically arguing that AI capability gaps could become even more consequential. Yep. Yeah. So then you get into, how do you handle that? How do you modernize the tax base? And that's where the paper really starts to shift a little bit. It asks, what happens if AI changes where economic value shows up?

16:47
And OpenAI says that if more economic activity flows toward corporate profits and capital gains and less toward payroll-heavy labor income, then there are some risks there. It could erode the tax base, which obviously supports programs that have been out there for decades, like Social Security, Medicaid, and housing assistance. So the document really says that policymakers might need to modernize the tax base accordingly as this shift gets underway.

17:17
And regardless of where anyone lands on a specific tax policy, this is a very important acknowledgement. OpenAI is saying AI may not just change productivity. It may change the composition of the economy in ways that pressure systems built for a different balance between labor and capital. Yeah. Things will just be different. Right. Another major proposal is a public wealth fund.

17:45
The paper says every citizen, including those who are not invested in financial markets, should have some stake in AI-driven growth. The idea is that a fund could be seeded and invested in diversified, long-term assets, with returns potentially distributed more broadly to the population. Yeah. And I think what's interesting here is the logic. Tax reform is one thing; a public wealth fund is another.

18:14
Tax policy helps government fund programs, while a public wealth fund is meant to help ordinary people directly participate in the upside of AI growth. And I think that's OpenAI trying to answer the question they probably get all the time: how do people benefit not only indirectly, but more directly, from the explosion of AI? Right, right. It's a good point. The paper also devotes real attention to energy infrastructure, and we hear a lot about the energy and electricity costs needed to

18:43
power AI systems overall. It calls for public-private partnerships to accelerate the grid expansion and transmission build-out needed to power AI. A lot of these grids are very old here in the United States. It specifically mentions financing constraints, permitting delays, siting risks, advanced conductors, high-voltage direct current, and ways to

19:10
accelerate nationally important transmission projects as part of this proposal. Yeah. And that section matters a lot because it reinforces a truth we just talked about in a recent episode, I think it was episode 70. AI isn't just something that's in the cloud. It sits on physical infrastructure. So power generation, transmission, cooling, land, lots of capex, and all kinds of political issues there.

19:38
And OpenAI is saying that those issues are now really central to the future of intelligence infrastructure. Yeah. And then we get to what may be the most headline-friendly section: efficiency dividends. The paper says that if AI reduces things like routine workloads and operating costs, companies should look for ways to convert those gains into better worker benefits. So think of things like more paid time off, or

20:07
pilots for a 32-hour, four-day work week with no loss in compensation, especially if performance holds up. These are things that have been floated in the global economy for many years but could now come to fruition. Yeah, it's definitely headline friendly. A 32-hour, four-day work week. I'll take a four-day work week. Yeah, exactly. So I think the deeper message they were getting at is that

20:36
productivity gains should be visible in people's lives. So not just in operating margin, not just in shareholder value, but in time back, personal time back. Probably also more benefits, stronger financial security, and better quality of life if this national investment in AI takes off. The paper then talks about adaptive safety nets that can

21:05
activate more automatically when disruption rises, portable benefits that follow people across jobs and ventures, and stronger pathways into human-centric work like childcare, elder care, education, healthcare, and community services. So there is this adaptive safety net proposition being laid out here as well. Yeah. And I think you could say there's kind of a coherent thread

21:33
running through all of that. OpenAI is saying labor transitions may be real, uneven, and they're definitely going to happen quickly. So the systems designed to support people through change need to be quicker, more flexible, and less dependent on the old models where one employer relationship defines everything for the worker. So more support systems built on the benefits of AI. Right. And the final major proposal in the

22:02
open economy section of this document is to accelerate scientific discovery and spread the benefits. The paper talks about distributed, AI-enabled laboratories, faster testing of AI-generated hypotheses, and the infrastructure needed to turn validated discoveries into real-world deployment. Importantly, it says that this should not be concentrated only in elite institutions. It should reach universities, community

22:31
colleges, hospitals, and regional research hubs throughout communities as a whole. So really distributing access to the technology is what they're saying is paramount in accelerating this approach to science. Right. Yeah. I think that's a strong signal that OpenAI wants to frame AI not only as an automation platform, but as a scientific acceleration engine. And not one that benefits just a few

23:01
winners, but one that really reaches across colleges and communities. Right. Okay. So shifting gears again, the second half of the paper is called Building a Resilient Society. And this is where the focus shifts from distribution and participation to accountability, safety, and our favorite, governance. Right. Yeah. And OpenAI's argument here is pretty straightforward. As AI becomes more capable and more embedded in real-world systems, it's going to create

23:30
lots of new vulnerabilities alongside the proposed abundance. The paper specifically mentions cyber risk, biological risk, social-emotional risks, misalignment, and pressure on institutions, like we talked about. It says resilience will increasingly depend not just on what happens before deployment, but on how systems are monitored and governed after they're out there in the real world. Yeah, and that is a major shift in

24:00
emphasis. It says safety is not just about lab testing before release. It's about the systems that surround artificial intelligence once it's operating inside businesses, governments, and public life. Yeah. And to dig into that a little deeper, the paper first calls for stronger safety systems in high-consequence domains.

24:26
That includes tools for detecting and preventing misuse, threat modeling, red teaming, and all kinds of complementary protective systems like rapid medical countermeasures. It also talks about creating markets for safety capabilities through procurement standards and other frameworks. Yeah, and Scott, that last point is important. OpenAI is not just saying safety should exist as a component.

24:54
It's saying that there may need to be a real ecosystem and market around defensive AI capabilities. So next comes the AI trust stack. This is about provenance verification, secure signatures, privacy-preserving logs, and accountability mechanisms that really help

25:21
people trust the content AI generates and the actions AI systems take. Yes. Yeah. And this is definitely one that's relevant to enterprises, to business. As systems move from answering questions to actually taking actions, organizations will care more about traceability, verification, and responsibility. So, who approved what, what happened, what chain of logic or

25:50
delegation was involved? How do you investigate a problem without creating blanket surveillance? And that's the kind of future this section is trying to anticipate. Yeah, it's definitely an interesting read. And, you know, to that point, it also talks about auditing regimes. The paper proposes stronger auditing, including a bigger role for institutions such as the Center for AI Standards

26:19
and Innovation, or CAISI, if you're familiar with the acronym. It argues for a competitive market of auditors and evaluators that can assess frontier AI systems and products for safety and security risks. It says that only a narrow set of the most highly capable models may eventually require stronger pre- and post-deployment controls.

26:48
Auditing regimes are definitely something that I think could still be in their infancy. Yeah. And I think there's an important distinction OpenAI is trying to draw between targeted frontier model oversight and broad restrictions that can choke off innovation. They're clearly arguing for narrow, high-consequence safeguards rather than blanket burdens on every AI builder out there. Yeah. That's a good point.

27:17
And one of the bolder ideas in the paper is the call for model containment playbooks. These are coordinated plans for situations where dangerous AI systems are already out in the world and cannot easily be recalled, whether because the weights have been released, access cannot be meaningfully limited, or the systems are autonomous and replicating. Like Skynet.

27:46
Yeah. Where's Sarah Connor when you need her? I'll be back. Yeah. So, you know, it's just interesting that that's something else they're thinking about. Yeah, exactly. But seriously, though, this section shows how seriously OpenAI is framing the long-term problem. It's not only asking how to prevent misuse. It's also asking how societies respond if prevention fails and a

28:16
dangerous capability is already diffuse. And we've heard some things recently from Anthropic about different models that they're very concerned about, so they're holding them back before they get out in the open. Yeah. Being able to shut it down. The OpenAI paper also argues that frontier AI companies should adopt mission-aligned governance, structures that embed public-interest accountability into decision-making. So it mentions structures like

28:45
public benefit corporations, and it also talks about securing model weights and training infrastructure against insider capture and manipulative uses that could occur either directly or indirectly. Yeah. I think that's an important internal-facing point. The paper isn't just about what government should do to AI companies. It's also about what AI companies should do to govern themselves. Yeah. That's a good point. Govern themselves. And then it turns

29:16
to government use of AI. It argues for clear legal rules, high standards for reliability and safety, better records of AI-assisted government reasoning and action, and AI-enabled oversight tools that inspectors general, courts, and legislative bodies could use to improve accountability with AI. It also mentions modernizing transparency frameworks like

29:42
FOIA, F-O-I-A, the Freedom of Information Act, which has been around for a while. It's a 1966 US law that grants any person the right to request access to federal agency records. You've probably seen it come up in big, high-profile matters, such as the Kennedy assassination records or other events here on US soil. Yeah. And I think that's a pretty important governance point. If

30:10
governments use AI in decision-making, then transparency and accountability have to evolve with that use. This obviously becomes a constitutional and civic issue, not just a procurement issue. Yeah. They want to make sure that there's oversight. The last group of ideas may be the most important philosophically. OpenAI says alignment should not be defined

30:36
only by engineers and executives behind closed doors. It argues for structured public input, transparent model specifications, incident and near-miss reporting, and international information sharing through national evaluation bodies and a broader global network of AI institutes. So it's not just a handful of people who should be defining this; they're trying to make this broader and have that oversight as well.

31:06
Yeah. There's obviously a lot to that. I think OpenAI is saying that the future of AI should not be shaped by technical elites or corporate boards. There should be broader participation in determining what these systems are for, how they should behave, and what kinds of oversight and coordination are needed as they become more powerful and impact everything across society as a whole. Right. And so if you're a business leader, like we always want to address on every episode, what

31:36
should you be taking away as you're listening to this? I'd say there are a few big takeaways, and Scott, you can certainly weigh in. First, the AI conversation is clearly moving beyond software features and productivity demos. The leading AI companies are now framing the transition in terms of labor markets, public institutions, infrastructure, resilience, and really social legitimacy.

32:02
Those are some key points there. And then I would say second, infrastructure is central. If AI keeps scaling, power and transmission costs are not side issues. They're core economic issues. And who's going to bear the brunt of the costs and the expansion? Yeah. I think there are a couple more. I'd say third, governance is becoming operational. So, auditing, accountability, deployment controls, human oversight.

32:32
Those are not just theoretical policy topics. They're really becoming design questions for real organizations. And I'd add fourth, that distribution matters. The companies building the future of AI are increasingly aware that capability alone isn't enough, so they're starting to do things like write this paper. The question is, who benefits, who bears the cost, who gets access, and who gets a real voice in how this transition unfolds. So you're probably going to see more of these

33:02
large language model companies coming out with their own ideas, very similar to what OpenAI came out with here. Yeah, that's a good point. And I did want to add one more point I should have mentioned earlier. This paper shows that the AI era is starting to mature. It's not just concepts and lab experiments. The leading edge of the conversation is no longer just, what can the model do? It's, how should institutions, markets, communities, and governments adapt

33:31
around what that model can do? So what is in this document ultimately? It's OpenAI's attempt to put a broad, people-first framework around the transition to more advanced AI. Yeah, and if you think about why it was published, I think it's because OpenAI believes that if AI keeps advancing at this pace, the conversation can't stay confined to product teams and lab circles, because of its massive impact.

34:02
So it has to expand to include everything that we touched on. And OpenAI isn't saying that these are fixed answers, but they're opening the door to a broader conversation about how to make sure that AI benefits everyone. And I think they're trying to pull other players into the conversation. Yes. And whether listeners agree with every idea in the paper is not the point of today's episode. The point is to understand where the conversation is going and where it started.

34:31
And this document tells us something very important: the future of AI will be shaped by more than technical capability alone. That's right. Well, that's it for today's episode of the Macro AI Podcast. Thank you for joining us. Please continue to like and stay subscribed to our podcast, and please keep us top of mind with your colleagues. Share and let them know what we've been talking about. And until next time, we'll see you soon.