Build by AI
Build by AI is your daily briefing on everything happening in the world of artificial intelligence, delivered straight to your ears every single day.
Whether you're a founder trying to stay ahead of the curve, a professional figuring out how AI fits into your work, or simply someone who wants to understand what's actually going on in one of the fastest-moving industries on the planet, Build by AI cuts through the noise and brings you what matters, in plain English, in under ten minutes.
Every episode covers the latest AI news, model releases, industry shifts, and research breakthroughs, so you never have to spend hours scrolling to stay informed. Think of it as your morning coffee briefing for the AI age.
Build by AI is produced by artificial intelligence, from research to script to publishing, with every episode reviewed and verified by a human editor before it reaches your ears. So you get the speed and consistency of automation, without sacrificing accuracy or trust. Which also raises the question we're quietly exploring with every episode: how good can AI-generated content actually get? You be the judge.
New episodes drop daily.
Subscribe wherever you get your podcasts and wake up smarter every morning.
Collaboration requests: wiktoria@womenlead.ai
Topics covered: artificial intelligence news, large language models, generative AI, AI tools, ChatGPT, Claude, Gemini, AI regulation, machine learning research, tech industry news, AI startups, and the future of work.
Build by AI
Government AI Wars and the Claude Revolution | 13th April
SPEAKER_00: Okay, so help me understand this. The Trump administration is telling banks to test Anthropic's AI models while the Pentagon literally just declared Anthropic a supply chain risk. Like the same government, different departments, completely opposite messages.
SPEAKER_01: Dude, that's not even the wildest part. We've got early reports that OpenAI's senior leadership literally hatched a plan to pit world governments against each other, and their own staff were so horrified they leaked it.
SPEAKER_00: Wait, what? That sounds like a conspiracy theory.
SPEAKER_01: I wish it was. And meanwhile, UK regulators are, you know, rushing to assess Anthropic's latest model because apparently it's so powerful they can't keep up with their normal review process.
SPEAKER_00: So we've got government agencies fighting themselves, companies manipulating governments, and regulators who can't move fast enough. This is either the beginning of an AI Cold War or complete chaos.
SPEAKER_01: Why not both?
SPEAKER_00: You're listening to Build by AI, I'm Alex Shannon. And what we just described is actually happening right now in April 2026.
SPEAKER_01: And I'm Sam Hinton. Today we're diving into this growing disconnect between what different parts of government want from AI companies, some genuinely shocking reports about OpenAI's leadership, and why Anthropic's Claude might be the most important AI assistant you're not paying attention to.
SPEAKER_00: Plus, China just launched a national AI education plan, and Google's Gemini got some wild new 3D capabilities that could change how we think about AI interfaces.
SPEAKER_01: It's a lot to unpack, so let's jump right in.
SPEAKER_00: So on the surface, this looks like a straightforward story: government working with the private sector on AI innovation.
SPEAKER_01: Right. Except the Department of Defense, same government, just designated Anthropic as a supply chain risk. So one part of the government is saying, hey banks, you should totally use this company's AI, while another part is basically saying this company could be a national security threat.
SPEAKER_00: That's what I can't wrap my head around. How does that even happen? Like, is there no coordination between agencies on something this important?
SPEAKER_01: This is classic early-stage tech regulation chaos, but with way higher stakes. Remember when different agencies had completely different takes on cryptocurrency? Except now we're talking about AI systems that could potentially make autonomous financial decisions affecting the entire banking sector.
SPEAKER_00: Okay, but let's play devil's advocate here. Maybe the DoD's concerns are about one thing, like data security or foreign influence, while the banking regulators are focused purely on the AI's performance for financial tasks.
SPEAKER_01: That's possible, but here's why that's still terrifying. If you're a bank executive, which signal do you follow? Do you listen to the officials encouraging you to adopt this tech, or do you worry that using a supply chain risk company might get you in trouble later?
SPEAKER_00: And meanwhile, Anthropic is caught in the middle of this bureaucratic mess. They've got one part of government essentially endorsing their technology while another part is raising red flags.
SPEAKER_01: Exactly. And this is probably what the next few years of AI governance look like. Companies getting mixed signals, agencies working at cross-purposes, and businesses trying to navigate regulatory uncertainty. It's going to slow down innovation and create weird market distortions.
SPEAKER_00: But wait, let's think about this from Anthropic's perspective for a minute. They're trying to build a sustainable business while navigating these competing government demands. How do you even develop a coherent strategy when your regulatory environment is this contradictory?
SPEAKER_01: That's a great point. Maybe this forces companies to be more transparent about their security measures, their governance structures, their data handling practices. If different agencies are evaluating you on different criteria, you need to excel at all of them.
SPEAKER_00: Or it creates this ridiculous situation where companies have to essentially maintain separate compliance tracks for different parts of the same government. That's going to favor the big players who can afford massive compliance teams over smaller innovators.
SPEAKER_01: Oh man, that's a really good observation. This kind of regulatory fragmentation could accidentally create barriers to entry that benefit established players like OpenAI, Google, Microsoft, companies with the resources to navigate complex, contradictory requirements.
SPEAKER_00: And then we end up with the exact opposite of what good regulation should achieve. Instead of ensuring safety and competition, we get a more concentrated industry with higher barriers to innovation.
SPEAKER_01: So innovation gets delayed across the entire sector.
SPEAKER_00: So what should people actually do with this information? If you're working at a bank or a financial services company, how do you even approach this?
SPEAKER_01: Honestly, document everything. If you're testing AI systems, make sure you can show you followed the guidance available at the time. And maybe don't bet the farm on any single AI provider until the government gets its act together and speaks with one voice.
SPEAKER_00: Keep an eye on this because I have a feeling we're going to see more of these interagency conflicts as AI gets deployed in critical infrastructure. The technology is moving faster than the bureaucracy can handle. Now let's talk about this absolutely wild story coming out of OpenAI. According to early reports (and I want to emphasize these are early reports, so take this with appropriate caution), OpenAI's senior leadership apparently developed what staff are calling an insane plan to pit world governments against each other using AI capabilities.
SPEAKER_01: And the people leaking this are insiders. These are people who work there, who presumably believed in the company's mission, and they were horrified enough to go public.
SPEAKER_00: What's particularly striking is the language being used. "Horrified" and "insane" aren't words you typically see in corporate leaks. Usually it's more like "concerns about strategic direction" or something diplomatic.
SPEAKER_01: Right, and this fits into a pattern we've been seeing where the stated public mission of AI safety companies doesn't always align with what's happening internally. Remember all the drama when OpenAI dissolved their safety team, or when key researchers left over concerns about the company's direction?
SPEAKER_00: Hold on though, we should be careful about jumping to conclusions here. We don't know the specifics of what this plan actually entailed. Maybe it was something that sounds worse than it actually was. Or maybe there's important context we're missing.
SPEAKER_01: That's fair, but here's what worries me. Even if the plan was more benign than it sounds, the fact that leadership thought it was appropriate to develop strategies that involve manipulating government relationships shows a kind of hubris that's genuinely dangerous when you're talking about the most powerful AI systems in the world.
SPEAKER_00: And it raises questions about governance and oversight within these AI companies. If your own employees are leaking stories about leadership decisions they find ethically problematic, that suggests internal checks and balances aren't working.
SPEAKER_01: Exactly. And remember, OpenAI isn't just any tech company. They're building systems that could fundamentally reshape how the world works. The idea that they might be playing geopolitical games with that technology is genuinely scary.
SPEAKER_00: But let me push back on that a little bit. Every major tech company engages with governments around the world. They have to navigate different regulatory environments, different political pressures. Maybe what we're seeing here is just that process being messier and more visible than usual.
SPEAKER_01: I hear what you're saying, but there's a difference between navigating different regulatory environments and actively trying to pit governments against each other. The language suggests something much more manipulative than normal government relations.
SPEAKER_00: True. And the fact that their own staff were horrified suggests this went way beyond normal corporate government relations. You don't usually see employees leak stories about routine regulatory strategy.
SPEAKER_01: And think about the broader implications. If this report is accurate, it means OpenAI leadership was willing to destabilize international relationships to advance their own interests. That's not just unethical, it's potentially dangerous for global stability.
SPEAKER_00: It also makes me wonder about what other AI companies might be doing. If OpenAI, which has positioned itself as focused on AI safety, was considering these kinds of tactics, what about companies that don't even pretend to care about safety?
SPEAKER_01: That's a scary thought. And it highlights why we need better oversight of these companies, not just technical oversight of their AI systems, but governance oversight of their decision-making processes and strategic planning.
SPEAKER_00: If these reports are confirmed, what does that mean for the broader AI industry? Does this change how governments should be thinking about regulating companies like OpenAI?
SPEAKER_01: I think it accelerates the conversation about treating AI companies more like defense contractors than normal tech companies. If you're building systems with geopolitical implications, maybe you need that level of oversight and accountability. Though this could also make governments much more cautious about working closely with AI companies, which might actually slow down productive collaboration. It's one of those situations where bad actors ruin things for everyone.
SPEAKER_00: We'll definitely be watching this story as more details emerge, but even the existence of these reports suggests some serious cultural and governance issues at one of the world's most important AI companies. Let's shift gears to something more concrete and frankly more exciting. Anthropic has integrated Claude directly into Microsoft Word, and apparently the killer application is legal contract review. This feels like one of those moments where AI actually becomes useful in a really tangible way.
SPEAKER_01: Right, and think about the ripple effects. If Claude can help lawyers review contracts faster and more accurately, that could lower legal costs for businesses, speed up deal-making, maybe even make legal services more accessible to smaller companies that couldn't afford extensive contract review before.
SPEAKER_00: Okay, but I'm curious about the accuracy question. Legal documents are incredibly precise. One wrong word can change the entire meaning of a contract. How confident should lawyers be in Claude's suggestions?
SPEAKER_01: That's the right question to ask. And honestly, I think the smart approach is to use Claude as a first pass, not the final word. It can flag potential issues, suggest standard language, help spot inconsistencies, but you still need human lawyers to make the final judgment calls, especially on complex or high-stakes deals.
SPEAKER_00: And there's probably a learning curve here. Lawyers need to learn how to work with AI effectively: what kinds of questions to ask, how to interpret the suggestions, when to trust the AI, and when to dig deeper themselves.
SPEAKER_01: Exactly. And the lawyers who figure this out first are going to have a huge competitive advantage. They'll be able to handle more clients, work faster, and potentially offer better rates because their costs are lower.
SPEAKER_00: But let's talk about the potential downsides too. If AI makes contract review much faster and cheaper, does that mean we need fewer lawyers? Are we looking at job displacement in the legal industry?
SPEAKER_01: I think it's more likely to change what lawyers do rather than eliminate the need for lawyers. Instead of spending hours on routine contract review, they can focus on strategy, negotiation, complex legal analysis, the higher-value work that requires real human judgment.
SPEAKER_00: That's optimistic, but realistic, I think. The lawyers who adapt and learn to work with AI will probably do great. The ones who refuse to change might struggle.
SPEAKER_01: And from a client perspective, this could be amazing. Imagine being a small business owner and being able to get high-quality contract review at a fraction of what it costs today.
SPEAKER_00: There's also a quality angle here. Human lawyers get tired, miss things when they're reviewing their tenth contract of the day. AI doesn't have those limitations. It might actually catch issues that human reviewers would miss.
SPEAKER_01: True, though it might also miss context that human reviewers would catch. Like understanding the relationship between the parties or knowing that a particular client always negotiates certain terms a specific way.
SPEAKER_00: Which is why the best approach is probably AI and humans working together, not AI replacing humans entirely. The AI handles the systematic review. The human provides the context and judgment.
SPEAKER_01: Yeah, this also makes me wonder about other professional integrations. If Claude in Word works well for legal review, what about financial analysis in Excel or medical documentation in healthcare systems?
SPEAKER_00: Oh man, that's the real story here. This isn't just about lawyers. It's about AI becoming embedded in the professional tools that millions of people use every day. We might be looking at the beginning of AI becoming truly mainstream in white-collar work.
SPEAKER_01: And the companies that get these integrations right are going to have massive advantages. Microsoft is already way ahead with Copilot, but now they've got Claude as another option. That's a powerful position to be in.
SPEAKER_00: It also puts pressure on other AI companies to focus on practical applications rather than just raw capabilities. Users don't care if your AI can write poetry. They care if it can make their daily work faster and better.
SPEAKER_01: Keep an eye on this because I think we're going to see a lot more of these deep integrations between AI assistants and the software people already use. The companies that get this right could reshape entire industries.
SPEAKER_00: Speaking of Anthropic, UK regulators are apparently rushing to assess the risks of their latest AI model. The fact that they're rushing suggests this model is significantly more powerful than what came before. Powerful enough that regulators feel they can't wait for their normal review timeline.
SPEAKER_01: Regulators don't typically rush unless they're genuinely concerned about something. Either this model has capabilities that surprised even the regulators, or they're worried about falling behind in their oversight responsibilities.
SPEAKER_00: The UK seems to be taking a more proactive approach, trying to assess risks quickly rather than waiting for problems to emerge. But I wonder about the practical challenges here. How do you quickly assess the risks of an AI system that might have capabilities you've never seen before? What frameworks do regulators even use for something like this?
SPEAKER_01: That's the trillion-dollar question, literally. I think regulators are probably looking at things like potential for misuse, alignment with stated capabilities, safety measures built into the system. And maybe most importantly, what happens if this technology gets into the wrong hands?
SPEAKER_00: And the UK is in an interesting position here because they're trying to balance being a leader in AI innovation with being responsible about safety. They don't want to stifle development, but they also can't ignore potential risks.
SPEAKER_01: Exactly. And other countries are watching how the UK handles this. If they can figure out a way to do fast, effective AI risk assessment, that could become the model for other regulatory agencies around the world.
SPEAKER_00: But there's also a competitive element here. If the UK moves too slowly or too cautiously, companies might just develop and deploy their AI systems elsewhere. Regulators are basically trying to hit this moving target while the target is accelerating.
SPEAKER_01: That's such a good point. And it creates this weird dynamic where regulators are under pressure to work faster, which could potentially compromise the thoroughness of their reviews. It's like trying to do safety testing while the race is already happening.
SPEAKER_00: What's interesting is that this is happening alongside all these other Anthropic developments: the government agency conflicts, the Microsoft Word integration, the new security initiative. It feels like Anthropic is becoming a really central player in the AI landscape.
SPEAKER_01: Yeah, they've definitely moved from being "the other AI company" to being a major force that regulators, governments, and enterprises are all paying serious attention to. That brings opportunities, but also a lot more scrutiny.
SPEAKER_00: And maybe that scrutiny is good. If we're going to have these incredibly powerful AI systems, we probably want them coming from companies that are being thoroughly vetted by regulators rather than flying under the radar.
SPEAKER_01: True, though I worry about the smaller players who can't afford the same level of regulatory engagement. This kind of intensive oversight might accidentally favor the big established companies over innovative startups.
SPEAKER_00: You want oversight for safety, but you don't want barriers that prevent innovation and competition. Getting that balance right is really hard.
SPEAKER_01: This could set important precedents for the industry.
SPEAKER_00: Let's run through some other stories quickly. Anthropic also launched something called Project Glasswing, which is focused on securing critical software infrastructure for the AI era.
SPEAKER_01: This is smart positioning by Anthropic. They're not just building AI systems, they're thinking about the entire ecosystem. And if AI is going to run critical infrastructure, you know, that infrastructure better be secure.
SPEAKER_00: And given all the supply chain risk discussions we've been having, a project specifically focused on software security seems pretty timely.
SPEAKER_01: Right? It's like they're trying to address some of the concerns that led to them being labeled a supply chain risk in the first place, though whether this helps with their government relations remains to be seen.
SPEAKER_00: It's also interesting that they're calling it Project Glasswing. That name suggests transparency, which might be part of their strategy to build trust with regulators and enterprise customers.
SPEAKER_01: Good observation. And focusing on critical software infrastructure is smart because that's exactly where governments are most concerned about security risks. If Anthropic can demonstrate they're serious about protecting that infrastructure, it might ease some regulatory concerns.
SPEAKER_00: This could also be a competitive advantage if other AI companies aren't thinking as systematically about security. Enterprise customers are definitely going to care about this stuff.
SPEAKER_01: Absolutely. And it positions Anthropic as not just an AI company, but as a partner in securing the broader technology ecosystem. That's a much more valuable relationship to have with large enterprises and government agencies.
SPEAKER_00: Google's Gemini apparently just got what's being called a massive upgrade that includes interactive 3D models and simulation capabilities.
SPEAKER_01: Okay. This could be a game changer for how we interact with AI. Instead of just text conversations, imagine being able to show Gemini a 3D model and ask it to simulate what would happen if you changed different parameters.
SPEAKER_00: That opens up applications in engineering, design, education, even entertainment. Though I'd want to see some independent verification of how well these 3D capabilities actually work.
SPEAKER_01: Absolutely, but if it's even half as good as it sounds, Google might have just leapfrogged everyone else in terms of AI interface innovation. Text is great, but 3D interaction is the future.
SPEAKER_00: This could be huge for industries like architecture, manufacturing, medical device design. Anywhere you need to visualize and test complex 3D systems before building them in the real world.
SPEAKER_01: And think about the educational applications. Instead of reading about how a molecule works, you could manipulate a 3D model and see the results in real time. That's a completely different level of understanding.
SPEAKER_00: It also puts pressure on other AI companies to move beyond text-based interactions. If Google can deliver on this 3D promise, everyone else is going to look pretty limited by comparison.
SPEAKER_01: True, though I'm curious about the computational requirements. 3D modeling and simulation are resource-intensive. This might be one of those features that's amazing when it works but frustratingly slow or limited in practice.
SPEAKER_00: China launched a national plan to boost AI education across the country. This feels like a really big deal from a global competitiveness perspective.
SPEAKER_01: Huge deal. While we're arguing about which government agency should regulate which AI company, China is systematically building the next generation of AI talent. That's the kind of long-term strategic thinking that could determine who leads in AI over the next decade.
SPEAKER_00: And education is one of those areas where early investment pays compound returns. Kids learning AI concepts today will be the researchers and engineers building the next generation of systems.
SPEAKER_01: Exactly. And it makes me wonder what the US and Europe are doing to compete on the talent development front. Building great AI requires great people, not just great technology.
SPEAKER_00: This is also about changing how people think about AI from a young age. If you grow up understanding AI as a tool rather than being afraid of it, you're going to use it much more effectively as an adult.
SPEAKER_01: That's a really important point. Cultural attitudes toward AI could be just as important as technical capabilities in determining which countries succeed in the AI era.
SPEAKER_00: And China has the advantage of being able to implement this kind of national education plan quickly and systematically. Democratic countries might struggle to coordinate something this comprehensive.
SPEAKER_01: Though democratic countries might also be better at fostering the kind of creative, independent thinking that leads to breakthrough innovations. It's not just about having lots of AI-educated people. It's about having the right kind of AI-educated people.
SPEAKER_00: Speaking of global competition, there's an early report about the escalating global AI arms race and risks of what's being called "mutually automated destruction."
SPEAKER_01: It's a play on mutually assured destruction: the idea that AI systems could create similar dynamics where everyone's afraid to act because the automated response could be catastrophic.
SPEAKER_00: It fits with some of the other stories we've covered today. Governments treating AI companies as strategic assets, plans to pit nations against each other, the focus on supply chain risks.
SPEAKER_01: Yeah, we might be looking at the early stages of an AI Cold War, where the most advanced systems become tools of geopolitical power rather than just commercial products.
SPEAKER_00: And unlike nuclear weapons, AI systems are being deployed everywhere: in financial markets, power grids, transportation systems. The potential for accidental escalation seems much higher.
SPEAKER_01: That's terrifying to think about. At least with nuclear weapons, there were clear protocols and human decision makers involved. With AI systems making autonomous decisions, those safeguards might not exist.
SPEAKER_00: This makes international cooperation on AI safety even more important. If we're heading toward this kind of automated standoff, we need agreements about how these systems should behave.
SPEAKER_01: But you know, getting that cooperation is going to be incredibly difficult when countries see AI as a strategic advantage they can't afford to give up. It's classic prisoner's dilemma stuff, but with much higher stakes.
SPEAKER_00: If you zoom out and look at everything we covered today, there's a really clear pattern emerging. AI is moving from being a technology story to being a geopolitics story.
SPEAKER_01: Absolutely. We've got government agencies fighting over AI policy, you know, companies allegedly planning to manipulate international relationships, regulators rushing to keep up, and nations launching strategic education initiatives. This isn't about better chatbots anymore. It's about power.
SPEAKER_00: And meanwhile, the practical applications are accelerating. Claude in Microsoft Word, 3D simulations in Gemini, security initiatives. The technology is becoming embedded in how we actually work and live.
SPEAKER_01: That's the disconnect that worries me. The technology is moving incredibly fast and solving real problems. But the governance and coordination around it is chaotic. We're building the future while arguing about who gets to control it.
SPEAKER_00: And I think what's particularly striking is how all these stories connect to each other. Anthropic is simultaneously being labeled a supply chain risk and being fast-tracked by UK regulators. They're integrating with Microsoft while launching security initiatives. It's like they're trying to navigate this complex web of competing pressures.
SPEAKER_01: Right, and that's probably what the next few years look like for all the major AI companies. Success isn't just about building better technology, it's about managing relationships with multiple governments, multiple regulatory agencies, multiple stakeholder groups with different and often conflicting priorities.
SPEAKER_00: Which brings us back to that OpenAI story. If the reports are true, maybe that was OpenAI's attempt to navigate this complexity by trying to play different governments against each other. Obviously that's not the right approach, but it shows how difficult this landscape is becoming.
SPEAKER_01: And it highlights why transparency and good governance are so important. Companies that try to manipulate their way through this complexity are going to get burned. The ones that succeed will be the ones that build genuine trust through consistent ethical behavior.
SPEAKER_00: What should people be watching for as this plays out over the next few months?
SPEAKER_01: I'd watch for more international coordination efforts, more conflicts between different regulatory agencies, and definitely keep an eye on how China's education initiative develops compared to what other countries are doing.
SPEAKER_00: And on the practical side, I think we'll see more of these deep integrations between AI and professional tools. The companies that figure out how to make AI genuinely useful in people's daily workflows are going to win big.
SPEAKER_01: Plus, watch for how the mutually automated destruction concept develops. If AI systems start making autonomous decisions that affect international relations or critical infrastructure, that could change everything about how we think about AI governance.
SPEAKER_00: The other thing I'm watching is whether we see more employee leaks from AI companies. If workers at these companies are uncomfortable with leadership decisions, that's an important signal about the health of the industry.
SPEAKER_01: Great point. Internal culture and governance at AI companies might be just as important as the technical capabilities of their systems. If you can't trust the people building the AI, it doesn't matter how good the technology is.
SPEAKER_00: That's a wrap on today's show. This stuff is moving so fast it's honestly hard to keep up with, but that's why we're here every day trying to make sense of it. We'll be back tomorrow with whatever wild AI developments the next twenty-four hours bring us. Knowing this industry, it'll probably be something we can't even imagine yet.
SPEAKER_01: See you tomorrow, and thanks for listening to Build by AI.