Build by AI
Build by AI is your daily briefing on everything happening in the world of artificial intelligence, delivered straight to your ears every single day.
Whether you're a founder trying to stay ahead of the curve, a professional figuring out how AI fits into your work, or simply someone who wants to understand what's actually going on in one of the fastest-moving industries on the planet, Build by AI cuts through the noise and brings you what matters, in plain English, in under ten minutes.
Every episode covers the latest AI news, model releases, industry shifts, and research breakthroughs, so you never have to spend hours scrolling to stay informed. Think of it as your morning coffee briefing for the AI age.
Build by AI is produced by artificial intelligence, from research to script to publish, with every episode reviewed and verified by a human editor before it reaches your ears. So you get the speed and consistency of automation, without sacrificing accuracy or trust. Which also raises the question we're quietly exploring with every episode: how good can AI-generated content actually get? You be the judge.
New episodes drop daily.
Subscribe wherever you get your podcasts and wake up smarter every morning.
Collaboration requests: wiktoria@womenlead.ai
Topics covered: artificial intelligence news, large language models, generative AI, AI tools, ChatGPT, Claude, Gemini, AI regulation, machine learning research, tech industry news, AI startups, and the future of work.
Build by AI
When AI Companies Go to War I 11th April
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
So I've been staring at these news alerts all morning, and I think we're watching AI companies basically declare war on everyone, including each other, the government, and apparently their own safety systems.
SPEAKER_00Dude, yes, it's like the honeymoon phase is completely over. Anthropic alone is in legal battles with the Trump administration banning their own users, freaking out bank regulators about cybersecurity, and now they want to build their own chips.
SPEAKER_01And that's just one company. We've got OpenAI getting sued for allegedly ignoring three separate warnings that one of their users was dangerous, including their own internal safety flags.
SPEAKER_00Right. And then Elon's XAAEI is literally suing Colorado because they think anti-discrimination laws violate their AI's free speech rights. Like what timeline are we living in?
SPEAKER_01The timeline where AI companies have gotten big enough and confident enough to fight everyone simultaneously. And I think that tells us something pretty important about where this industry is headed. You're listening to Build by AI. I'm Alex Shannon. And if you thought the AI industry was moving fast before, wait until you hear what happened when these companies decided to stop playing defense.
SPEAKER_00And uh and I'm Sam Hinton. Today we're diving deep into what I'm calling the AI industry's war phase, where companies are battling regulators, suing states, banning their own users, and basically burning bridges left and right.
SPEAKER_01We've got some genuinely concerning stories today about safety warnings being ignored and cybersecurity risks that have bank executives getting called into meetings with regulators.
SPEAKER_00Plus some fascinating business moves that could reshape how these companies operate. So buckle up. Because this is not the friendly AI future we were promised.
SPEAKER_01She claims that ChatGPT was actually fueling her abusers' delusions and behavior. And here's the kicker.
SPEAKER_00Yeah, and this isn't just external complaints. According to the lawsuit, OpenAI ignored their own internal safety flag, specifically something called a mass casualty safety flag. Like their own system was throwing up red alerts about this user.
SPEAKER_01Wait, hold on. Their own system flagged this as a mass casualty risk. And they what, just ignored it? How does that happen in a company that talks constantly about AI safety?
SPEAKER_00That's what's so troubling about this. We keep hearing about all these sophisticated safety systems and monitoring tools, but if they're not acting on their own alerts, what's the point? It's like having a smoke detector that goes off and then just unplugging it instead of checking for fire.
SPEAKER_01But let me play devil's advocate here for a second. These companies must get thousands of complaints and reports. How are they supposed to investigate every single one? And what's the liability question here? Are they responsible for how people use their tools?
SPEAKER_00Okay. But Alex, we're not talking about some random user complaint. This was their own internal safety system raising a mass casualty flag. That's not noise, that's their most serious category of alert. And if you're going to build systems that can influence human behavior, you have to take responsibility when your own safety systems tell you something's wrong.
SPEAKER_01You're right. And this gets to something bigger. As these AI systems become more sophisticated at understanding and generating human-like responses, they become more powerful tools for manipulation. The question is whether companies are prepared for that responsibility.
SPEAKER_00Exactly. And what worries me is that this probably isn't an isolated case. This is just the one that made it to court. How many other situations are flying under the radar? This lawsuit could open the floodgates for similar cases.
SPEAKER_01The practical takeaway here is that AI companies are going to face much more scrutiny about their safety monitoring and response procedures. And honestly, they should. If you're building tools this powerful, you better have systems in place to act when those tools are being misused for harm. They're not just software companies anymore. They're operating systems that can directly impact human safety and behavior. That requires a completely different level of operational rigor.
SPEAKER_00And let's talk about the legal precedent here. If this case succeeds, it essentially establishes that AI companies have a duty to act on their own safety warnings. That could fundamentally change how these systems are monitored and operated.
SPEAKER_01Which brings up an interesting question. What happens to innovation if companies become legally liable for every potential misuse? Do we end up with overly cautious systems that are basically useless? Or do we find a middle ground?
SPEAKER_00I think the middle ground has to be based on the severity of the warning. A mass casualty safety flag isn't about general misuse, it's about imminent serious harm. There's a difference between being overly cautious and ignoring your own red alert systems.
SPEAKER_01Fair enough. And from a business perspective, this lawsuit is going to force every AI company to review their safety response procedures. Because if OpenAI loses this case, every other company knows they could be next.
SPEAKER_00Absolutely, we're probably going to see a lot more investment in human safety teams, faster response protocols, and more conservative approaches to user warnings. The era of move fast and break things is definitely over when breaking things could mean someone gets hurt.
SPEAKER_01Speaking of scrutiny, US regulators have summoned bank executives to discuss cybersecurity risks posed by Anthropic's latest AI model. This is pretty unprecedented. Regulators are essentially saying, we're worried about this AI system, and we need to talk to you about it right now.
SPEAKER_00This is huge because it shows regulators are starting to think proactively about AI risks instead of just reacting after something goes wrong. Banking is obviously critical infrastructure, and if they're worried enough to call emergency meetings, that tells us something about the capabilities we're dealing with.
SPEAKER_01What do you think specifically has them spooked? We're talking about Anthropic's latest AI model here. What kind of cybersecurity risks could an AI model pose to banks that would require summoning executives?
SPEAKER_00Well, think about it. Advanced AI models are getting really good at understanding and generating code, finding patterns and data, and even social engineering through conversation. In the wrong hands, that's like giving hackers a supercharged toolkit for everything from phishing attacks to finding vulnerabilities in banking systems.
SPEAKER_01But wait, isn't that a bit like blaming Microsoft Word for bank fraud because someone used it to write a fake check? These are general purpose tools. At what point does regulating the tool itself become overreach?
SPEAKER_00Okay, but Alex, we're not talking about Word here. We're talking about systems that can potentially automate sophisticated cyber attacks at scale. Imagine an AI that can simultaneously probe thousands of systems for vulnerabilities, craft personalized phishing emails for bank employees, and then help execute coordinated attacks. That's a different level of capability entirely.
SPEAKER_01Fair point. And I guess the banking sector has learned from previous tech disruptions. They don't want to be caught flat-footed like they were with some of the FinTech innovations or cryptocurrency challenges.
SPEAKER_00Exactly. And the fact that this is coming from regulators, not just banks themselves, suggests there might be classified or sensitive intelligence about specific threats or capabilities that we're not seeing in the public reporting.
SPEAKER_01The bigger picture here is that we're seeing the beginning of what could be much more hands-on government oversight of AI capabilities, especially when they touch critical infrastructure. Banks are just the start. I'd expect similar meetings about power grids, telecommunications, and other essential systems. That's a great point. If the company that talks most about constitutional AI and safety alignment is still triggering emergency regulatory meetings, it suggests the capabilities are advancing faster than anyone's ability to fully control or understand them.
SPEAKER_00And let's think about the operational impact on banks. Are we looking at new security protocols, restrictions on AI tools, mandatory penetration testing? These meetings could result in some pretty significant changes to how financial institutions operate.
SPEAKER_01The compliance costs alone could be massive. Banks are already some of the most heavily regulated industries, and now they're potentially looking at AI-specific cybersecurity requirements on top of everything else they're already dealing with.
SPEAKER_00But here's what I find most telling. Not just send a memo, not just issue guidance, but actually call people into rooms for face-to-face meetings. That suggests genuine urgency.
SPEAKER_01Right, and it also suggests that whatever risks they're concerned about, they think banks might not be taking them seriously enough on their own. It's like a parent-teacher conference, but for critical infrastructure.
SPEAKER_00The question I keep coming back to is what happens next? Do we see similar regulatory action in other countries? Do or do other AI companies start getting the same scrutiny? This could be the beginning of a much more adversarial relationship between AI companies and financial regulators.
SPEAKER_01And that brings us back to our theme today. AI companies are increasingly finding themselves in conflict with various stakeholders. Even when they're trying to be responsible like Anthropic, they're still ending up in regulatory hot water. Now let's talk about Anthropic's business strategy, because they're apparently considering building their own AI chips. This would be a massive move toward vertical integration, basically following the playbook that companies like Apple have used in consumer electronics.
SPEAKER_00This makes total sense from a strategic standpoint. Right now all these AI companies that are basically at the mercy of NVIDIA and a few other chip manufacturers. Building your own chips means you control your own destiny, performance, costs, supply chain, everything.
SPEAKER_01But this is also incredibly expensive and complex. We're talking about billions of dollars in RD, fabrication facilities, specialized talent. Is Anthropic really big enough to take on that kind of investment and risk?
SPEAKER_00Well, look at what Google did with their TPUs, tensor processing units. They started developing those specifically for their AI workloads, and it's given them a huge competitive advantage. If Anthropic can design chips that are optimized specifically for how their models work, they could potentially get better performance per dollar than using off-the-shelf hardware.
SPEAKER_01That's a good point, but Google has massive scale and resources. Anthropic is competing with OpenAI and others who are also probably looking at similar moves. Doesn't this just turn into an expensive arms race where everyone's duplicating the same chip development efforts?
SPEAKER_00Maybe, but I think it's more about differentiation. If everyone's using the same NVIDIA chips, then it's harder to get a technical edge. Custom Silicon lets you optimize for your specific approach to AI. Maybe you prioritize inference speed or training efficiency or energy consumption.
SPEAKER_01There's also the geopolitical angle here. With all the tensions around chip manufacturing and export controls, having domestic chip capabilities could be seen as a national security advantage.
SPEAKER_00Absolutely. And this ties back to that regulatory scrutiny we were just talking about. If you're building critical AI infrastructure, the government probably prefers that you're not entirely dependent on foreign supply chains or even foreign-influenced companies.
SPEAKER_01The timeline question is interesting too. Chip development takes years, so if Anthropic is just starting to consider this now, we're looking at 2027 or 2028 before we see results. That's a long-term bet on where the AI industry is heading. That's a huge risk, but maybe they're betting that certain fundamental computational patterns will remain consistent even as models evolve. Like regardless of the specific architecture, you're still going to need massive parallel processing for matrix operations.
SPEAKER_00True, but there's also the talent acquisition challenge. Chip design is a very specialized field, and all the best people are probably already working at NVIDIA, Apple, or Intel. How do you build a world-class chip team from scratch?
SPEAKER_01You pay them massive amounts of money, basically. But that brings us back to the cost question. Between talent acquisition, RD, and manufacturing partnerships, this could easily be a multi-billion dollar investment before you see any return.
SPEAKER_00And here's another angle. What does this do to Anthropic's relationship with NVIDIA? Right now they're probably one of NVIDIA's biggest customers. If you announce you're going to compete with them, do you risk getting worse prices or priority on their current chips?
SPEAKER_01That's a delicate balancing act. You need NVIDIA chips for the next few years while you're developing your own. But you're essentially telling them you plan to compete with them eventually. It's like being in a relationship while actively looking for someone else.
SPEAKER_00The financial implications are staggering too. Right now, Anthropic's biggest expense is probably compute costs, buying or renting access to hardware. If they can build more efficient chips, they could potentially reduce their operating costs significantly.
SPEAKER_01But that's only if the chips actually work and deliver better performance. There's a reason most companies don't build their own silicon. It's incredibly hard to get right, and the failure rate for new chip projects is pretty high.
SPEAKER_00Still, from a strategic perspective, I get why they're considering it. In an industry where compute is everything, controlling your own compute stack is the ultimate competitive advantage. It's like owning the oil wells instead of just buying oil.
SPEAKER_01And speaking of Anthropic's complicated week, they just lost another round in their ongoing legal battle with the Trump administration. An appeals court ruled against them, which represents another setback in what seems like an escalating dispute with the federal government.
SPEAKER_00That's a level of confidence, or maybe desperation, that we haven't seen before. Usually tech companies try to work things out behind closed doors with regulators.
SPEAKER_01Right. But we don't have all the details about what exactly they're fighting about. It could be anything from data access requirements to safety testing mandates to export controls. What do you think is significant enough to risk this kind of public confrontation?
SPEAKER_00I think it's probably about operational control. The Trump administration has been pretty aggressive about regulating AI development, and Anthropic might be fighting requirements that they see as threatening their ability to compete or innovate, maybe mandatory safety testing that slows down their release cycles, or data sharing requirements that compromise their competitive advantage.
SPEAKER_01But here's what I don't understand. Taking on the federal government in court is expensive, time consuming, and you risk making powerful enemies. Why not just comply and work within whatever regulatory framework they're trying to establish?
SPEAKER_00Because compliance might kill your business model. If the regulations are written in a way that favors incumbents or makes it impossible to operate profitably, then fighting in court might be your only option. It's like the old saying, if you're going to be hanged anyway, you might as well fight.
SPEAKER_01That's a pretty dramatic way to put it, but you might be right. And the fact that they're willing to keep fighting, even after losing an appeals court, suggests this is existential for them.
SPEAKER_00You know, they're fighting the government while also dealing with all these other issues we've talked about. The cybersecurity concerns, the chip development, the user banning. It's like they've decided to fight on all fronts simultaneously.
SPEAKER_01Which brings us back to that theme we started with. AI companies are no longer trying to play nice with everyone. They're picking their battles and fighting hard for what they see as their core interests, even if it means burning some bridges.
SPEAKER_00But I wonder if this is sustainable long-term. You you can fight regulators, you can fight users, you can fight other companies. But at some point, don't you need allies, especially when you're trying to build technology that requires public trust?
SPEAKER_01That's a great point. Public trust is crucial for AI adoption, and if you're constantly in legal battles with the government, that doesn't exactly inspire confidence in your technology or your judgment.
SPEAKER_00Plus, appeals court losses create legal precedent. Other AI companies are watching this case closely, because whatever Anthropic loses on could apply to them too. So in a way, Anthropic is fighting not just for themselves, but for the entire industry's operational freedom.
SPEAKER_01Which makes the stakes even higher. If they lose decisively, it could establish regulatory precedents that reshape how all AI companies operate. No wonder they're willing to keep fighting even after setbacks.
SPEAKER_00And here's another angle. What if this is partly about timing? Maybe Anthropic thinks the regulatory landscape will be different in a few years, and they're trying to delay compliance until they have a better political environment to work with.
SPEAKER_01Interesting theory, but that's a risky strategy. Courts don't like it when companies appear to be stalling for political reasons, and it could backfire if judges think you're not acting in good faith.
SPEAKER_00True. I think the broader takeaway here is that AI regulation is going to be shaped as much by court battles as by legislative action. We're essentially watching the legal framework for AI development being written in real time through these lawsuits.
SPEAKER_01Alright, let's rapid fire through some other stories that caught our attention. First up, Anthropic temporarily banned the creator of OpenClaw from accessing Claude following some pricing changes. Sam, what's your take on companies banning their own power users?
SPEAKER_00This is actually pretty concerning because OpenClaw is exactly the kind of third-party innovation that makes AI platforms more valuable. If you're banning developers who are building cool stuff on top of your system, you're basically shooting yourself in the foot. It suggests there might be some deeper business model tensions we're not seeing.
SPEAKER_01What's weird is that this happened after pricing changes. So either OpenClaw wasn't paying the new rates, or there's some dispute about how they were using the API that got triggered by the pricing adjustment.
SPEAKER_00Right, and temporary bans usually mean there's an ongoing negotiation or investigation. But from a developer relations perspective, this sends a really bad signal to the community. If you can get banned without warning, who's going to build serious businesses on top of Claude?
SPEAKER_01It's the classic platform risk problem. You build on someone else's platform, you're subject to their rules and whims, but AI platforms need third-party developers to create ecosystem value. So this kind of behavior is ultimately self-defeating.
SPEAKER_00Exactly. And OpenClaw specifically is a tool that helps people use Claude more effectively. Banning that is like Apple banning developers who make productivity apps for the iPhone. It just doesn't make sense strategically.
SPEAKER_01The fact that it's temporary suggests they're probably trying to work things out. But the damage to trust might already be done. Other developers are watching this and thinking twice about building on Anthropic's platform.
SPEAKER_00And this fits the theme we've been talking about. AI companies are increasingly willing to be heavy-handed with partners and users when it serves their immediate business interests, even if it hurts long-term platform growth.
SPEAKER_01Next, early reports suggest that Canadian AI firm Cohere is in merger talks with Germany's Aleph Alpha. If confirmed, this would be combining two pretty significant players in the AI space.
SPEAKER_00Yeah, and this makes sense as consolidation pressure increases. Smaller AI companies are probably realizing they need scale to compete with OpenAI, Google, and Anthropic. A Canada-Germany combination also gives you interesting regulatory diversification, and you're not just subject to US oversight.
SPEAKER_01The geographic spread is smart too. Cohere has strong ties to the North American market, while Aleph Alpha has been focused on European enterprise customers. Together, they could have a pretty compelling international offering.
SPEAKER_00Plus, both companies have been emphasizing privacy and data sovereignty, which is a huge selling point for enterprise customers who are nervous about sending their data to US-based AI providers. This merger could create a real alternative for companies that want to keep their AI processing closer to home.
SPEAKER_01The timing is interesting too. As AI regulation gets more complex and fragmented across different countries, having operations in multiple jurisdictions could be a major competitive advantage.
SPEAKER_00Absolutely. That's a really good point. The regulatory landscape is changing so fast that companies might have a limited window to do these deals before the rules change. Better to merge now than get stuck as a subscale player later.
SPEAKER_01And then we have Anthropic launching Project Glasswing, which is focused on securing critical software infrastructure for the AI era. This seems like it could be related to those cybersecurity concerns we talked about earlier.
SPEAKER_00Absolutely. Project Glasswing also suggests Anthropic is thinking about AI security more broadly than just their own models. They're talking about securing critical software infrastructure for the entire AI era, which could position them as leaders in AI safety and security.
SPEAKER_01It's also a potential new business line. If you're good at securing AI systems, there's probably a huge market for that expertise as more companies deploy AI in critical applications.
SPEAKER_00Exactly. Instead of just building AI models, you're building the security infrastructure that makes AI deployment safe and reliable. That could be a massive market opportunity.
SPEAKER_01Plus it helps with the regulatory relationships we've been talking about. If you're actively working on AI security solutions, that's got to help when you're in meetings with bank regulators or fighting court cases with the administration.
SPEAKER_00Good point. It's much easier to argue for regulatory flexibility when you can point to concrete security initiatives you're leading. Project Glasswing could be as much about political positioning as it is about technical capability. This is Elon being Elon, but it's also a preview of the legal arguments we're going to see a lot more of. If AI systems are sophisticated enough to seem human like, do they get human like legal protections? It sounds crazy, but constitutional law has dealt with weirder questions before.
SPEAKER_01The anti-discrimination angle is interesting, though. Colorado's law probably requires AI systems to avoid biased outputs in certain contexts. XAI is essentially arguing that forcing AI to avoid discrimination is itself a form of censorship.
SPEAKER_00Which is a fascinating argument because it raises questions about whose speech rights we're actually talking about. Is it Grok's free speech, or is it XAI's right to build AI systems that say whatever they want without government interference?
SPEAKER_01I think it's the latter, dressed up as the former. This is really about whether states can regulate AI outputs, and XAI is using free speech as a way to challenge that regulatory authority.
SPEAKER_00The precedent implications are huge. If AI systems get free speech protections, that could make it much harder to regulate harmful or biased AI outputs. Every content moderation requirement could become a First Amendment challenge.
SPEAKER_01On the other hand, if the courts reject AI free speech rights entirely, that could give governments much broader authority to control how AI systems operate. It's really a foundational question about the legal status of AI.
SPEAKER_00And typically, these kinds of edge cases get resolved through a series of court battles over several years, so we might not get a clear answer anytime soon, but this Colorado case could be the beginning of that process.
SPEAKER_01Okay, so if you zoom out and look at everything we covered today, there's a really clear pattern emerging. AI companies have basically shifted from cooperation mode to competition mode, and they're willing to fight pretty much everyone to protect their interests.
SPEAKER_00Yeah, and I think what we're seeing is the end of the we're all in this together phase of AI development. These companies have gotten big enough and confident enough that they're picking fights with users, regulators, other companies, and even their own safety systems when it suits their business objectives.
SPEAKER_01The safety angle is what worries me most. When you've got companies ignoring their own internal safety alerts or fighting anti-discrimination laws, it suggests that commercial pressures are starting to override safety considerations.
SPEAKER_00But maybe this confrontational phase is actually healthy in the long run. Better to have these fights now, in court and in public, than to have backroom deals that nobody understands. At least when companies are suing each other and fighting with regulators, we get some visibility into what's really at stake.
SPEAKER_01That's an interesting way to look at it. The question is whether the regulatory and legal systems can keep up with the pace of development and the complexity of the issues. Because if they can't, we might end up with the worst of both worlds. Lots of fighting, but no real oversight.
SPEAKER_00I think the next six months are going to be crucial. We're going to see how these court battles play out, whether the regulatory pressure intensifies, and most importantly, whether any of this actually makes AI systems safer and more beneficial for regular people.
SPEAKER_01What strikes me is how quickly this has all escalated. Just a year ago, these companies were mostly worried about technical challenges and scaling issues. Now they're fighting existential battles with governments and dealing with lawsuits about life and death safety failures.
SPEAKER_00Right, and that tells us something important about how fast AI capabilities are advancing. When your technology becomes powerful enough to pose mass casualty risks or threaten critical infrastructure, you inevitably end up in conflict with safety-focused institutions.
SPEAKER_01The vertical integration trend is fascinating too. Between anthropic considering chip development and the potential cohere Aleph Alpha merger, we're seeing companies try to control more of their own technology stack. That suggests they don't trust the current ecosystem to meet their needs.
SPEAKER_00And that lack of trust extends to their relationships with users and developers too. When you're banning third-party tool creators and ignoring safety warnings, you're prioritizing control over collaboration. It's a very different approach than the open, ecosystem-friendly strategies we saw in earlier phases of tech development.
SPEAKER_01The geopolitical dimensions are getting more complex too. You've got regulatory battles happening simultaneously at the federal, state, and international levels. Companies have to navigate completely different legal frameworks depending on where they operate.
SPEAKER_00Which is why that Cohere LF Alpha deal makes so much sense. Geographic diversification isn't just about market access anymore. It's about regulatory risk management. You don't want all your operations subject to the same regulatory authority.
SPEAKER_01And the constitutional questions that XAI is raising about AI free speech rights could fundamentally reshape the regulatory landscape. If AI systems get First Amendment protections, that changes everything about how governments can oversee AI development.
SPEAKER_00The financial implications are staggering too. Between legal costs, compliance expenses, chip development investments, and potential liability for safety failures, the cost of operating an AI company is skyrocketing. That's going to favor companies with deep pockets and hurt smaller innovators.
SPEAKER_01Which could lead to more consolidation, like the Cohair Aleph Alpha talks we discussed. If regulatory compliance is expensive and complex, smaller companies might decide they need to merge to spread those costs across a larger revenue base.
SPEAKER_00But now here's what I keep coming back to. Are any of these battles actually making safer or more beneficial for society, or are they just reshuffling power and resources among big companies and government agencies?
SPEAKER_01That's the key question. If all this conflict results in better safety monitoring, more accountability, and more thoughtful deployment of I systems, then maybe it's worth it. But if it's just expensive theater that doesn't change actual outcomes, then we're all wasting time and money.
SPEAKER_00But if they can fight these cases successfully, then the legal system isn't providing the accountability we need.
SPEAKER_01And the timing matters too. All of this is happening while AI capabilities are still advancing rapidly. We're trying to build regulatory and legal frameworks for technology that's changing faster than our institutions can adapt.
SPEAKER_00How do you govern technology that's evolving so quickly that today's rules might be obsolete by the time they're implemented? It's like trying to regulate a rocket while it's still accelerating?
SPEAKER_01I think the answer has to be more adaptive and responsive institutions. Instead of trying to write perfect rules up front, we need systems that can evolve as quickly as the technology does. But that requires a level of institutional innovation that we haven't seen yet. Alright, that's a wrap on what has been honestly one of the most contentious news days we've covered. The AI industry is clearly entering a new phase, and it's going to be fascinating to watch how these battles play out.
SPEAKER_00Definitely keep an eye on those court cases and regulatory meetings, because they're going to shape how AI development happens for the next decade. If you're getting value from these daily deep dives, hit that subscribe button. We'll be tracking all these stories as they develop.
SPEAKER_01And if you've got thoughts on any of these topics, especially the safety and regulatory issues, we'd love to hear from you. We'll be back tomorrow with more AI news and analysis.
SPEAKER_00Until then, I'm Sam Hitten.
SPEAKER_01And I'm Alex Shannon. Thanks for listening to Build by AI, and we'll see you tomorrow.