Build by AI
Build by AI is your daily briefing on everything happening in the world of artificial intelligence, delivered straight to your ears every single day.
Whether you're a founder trying to stay ahead of the curve, a professional figuring out how AI fits into your work, or simply someone who wants to understand what's actually going on in one of the fastest-moving industries on the planet, Build by AI cuts through the noise and brings you what matters, in plain English, in under ten minutes.
Every episode covers the latest AI news, model releases, industry shifts, and research breakthroughs, so you never have to spend hours scrolling to stay informed. Think of it as your morning coffee briefing for the AI age.
Build by AI is produced by artificial intelligence, from research to scripting to publishing, with every episode reviewed and verified by a human editor before it reaches your ears. So you get the speed and consistency of automation, without sacrificing accuracy or trust. Which also raises the question we're quietly exploring with every episode: how good can AI-generated content actually get? You be the judge.
New episodes drop daily.
Subscribe wherever you get your podcasts and wake up smarter every morning.
Collaboration requests: wiktoria@womenlead.ai
Topics covered: artificial intelligence news, large language models, generative AI, AI tools, ChatGPT, Claude, Gemini, AI regulation, machine learning research, tech industry news, AI startups, and the future of work.
Build by AI
The $30 Billion AI Security Surge | 8th April
Okay, so Anthropic just announced they're hitting $30 billion in run-rate revenue, and in the same breath they're launching a cybersecurity AI model called Mythos. I've been staring at these numbers all morning, and I genuinely can't tell if this is the most strategic move I've ever seen, or if they're basically admitting that AI is getting too dangerous to ignore.
SPEAKER_01Dude, that's exactly what I thought when I saw this. Like, congratulations on your massive revenue surge. And oh, by the way, here's our new model, specifically designed to defend against AI attacks. The timing is not coincidental.
SPEAKER_00Right? It's like they're saying we're making bank off this technology, and also we're terrified of what it might do. The optics are wild.
SPEAKER_01And here's what's really getting to me. They're only giving Mythos to a select group of high-profile companies. Not everyone gets the AI security blanket. That should make people nervous.
SPEAKER_00It's the ultimate good-news-bad-news announcement. Good news: we're growing faster than anyone expected. Bad news: we need an entire new AI system just to keep the other AI systems from going rogue. You're listening to Build by AI, I'm Alex Shannon. And that tension between AI growth and AI safety is kind of the theme of today's entire episode.
SPEAKER_01And I'm Sam Hinton. We've also got private wealth managers bypassing VCs to throw money directly at AI startups, Google quietly dropping an offline dictation app, and Intel apparently joining Elon's semiconductor dreams in Texas.
SPEAKER_00Plus a music industry showdown that could set the tone for AI creativity going forward. It's April 8th, 2026. Let's dive in. Alright, so let's start with this Anthropic story, because there are actually two big announcements here that I think are more connected than they appear on the surface. First, they're debuting this new AI model called Mythos as part of a cybersecurity initiative, and they're being very selective about it. Only a small number of high-profile companies will get access, specifically for defensive cybersecurity work.
SPEAKER_01Yeah, and that selectivity is telling. You know, this isn't a public release or even a typical enterprise rollout. They're essentially creating an elite club of companies that get the good security tools, which makes me wonder, what do they know about AI threats that they're not saying publicly?
SPEAKER_00That's exactly what I was thinking. And here's the kicker. This comes at the same time as their other announcement that their run-rate revenue has hit $30 billion. These aren't separate stories, Sam. This feels like Anthropic saying we're growing so fast it's actually becoming a security problem. But hold on, let me play devil's advocate here. Is this genuinely about security, or is this Anthropic creating a new revenue stream by selling the cure for a disease they helped create? I mean, if AI systems are becoming security risks, aren't the companies building them partly responsible for that?
SPEAKER_01No, that's a fair point, but I think you're missing the bigger picture. Whether we like it or not, AI is already out there. OpenAI, Google, Meta, they're all building increasingly powerful systems. You know, if Anthropic doesn't build defensive tools, that doesn't mean the threats go away. It just means we're less prepared for them.
SPEAKER_00I hear that argument. But there's something that bothers me about the way this was announced. Why the secrecy? Why limit it to just high-profile companies? If this is genuinely about protecting everyone from AI threats, shouldn't they be making these tools as widely available as possible?
SPEAKER_01Well, think about it from their perspective. You don't want to hand powerful defensive AI tools to potential bad actors. And honestly, high-profile companies are probably the most attractive targets for AI-powered attacks anyway. It makes sense to focus your defensive resources where the biggest risks are.
SPEAKER_00Okay, I can buy that argument. So what does this mean practically for businesses? If you're running a company and you're not one of these select high-profile organizations, are you just out of luck when it comes to AI security?
SPEAKER_01Well, this is clearly a preview, so I expect we'll see broader availability eventually. But in the short term, yeah, there's going to be a security gap. Companies need to start thinking about AI security now, not just protecting their AI systems, but protecting their traditional systems from AI-powered attacks.
SPEAKER_00And what kind of attacks are we even talking about here? I mean, when most people think about cybersecurity, they're thinking about malware, phishing, data breaches. How does AI change that landscape?
SPEAKER_01It changes everything. AI can generate incredibly convincing phishing emails, create deep fake audio for social engineering, automatically find vulnerabilities in code, and even adapt its attack strategies in real time. It's like giving hackers superpowers, and that's just with today's AI capabilities. Imagine what happens as these systems get more sophisticated.
SPEAKER_00That's terrifying. But here's what I keep coming back to. If Anthropic can build Mythos to defend against these attacks, what's stopping bad actors from building their own offensive AI tools? Are we just going to have an endless escalation cycle?
SPEAKER_01Probably, yeah. Oh, that's how cybersecurity has always worked. It's a constant cat and mouse game, but that doesn't mean we should give up. Having sophisticated defensive tools is better than being defenseless. The alternative is basically rolling over and letting the bad guys win.
SPEAKER_00I suppose. But it does make me wonder about the economics of all this. If every company needs specialized AI security tools, and those tools are expensive and limited in availability, does that create a two-tier system where only wealthy organizations can afford to be secure?
SPEAKER_01That's a real concern, and honestly, it's not just theoretical. We're already seeing that kind of divide in traditional cybersecurity. Small businesses get hit way more often than large enterprises because they can't afford the same level of protection. AI security could make that gap even wider.
SPEAKER_00Keep an eye on this because I suspect we're about to see every major AI company announce their own cybersecurity initiatives. Nobody wants to be the one without an answer when the first major AI-powered cyberattack hits the headlines. So speaking of that $30 billion run-rate revenue number, let's dig into what's driving it. Anthropic has expanded their compute partnership with Google and Broadcom because demand for their AI services is apparently skyrocketing. And when I say skyrocketing, I mean like rocket ship to Mars levels of growth.
SPEAKER_01Dude, thirty billion dollars in run-rate revenue for an AI company is just bonkers. For context, that's more than companies like Adobe or Salesforce. And the fact that they had to expand their compute deals suggests they're actually struggling to keep up with demand, which is a good problem to have, but still a problem.
SPEAKER_00What's interesting to me is the partnership structure here. They're not just buying more servers, they're deepening relationships with both Google and Broadcom. Google obviously provides cloud infrastructure, but Broadcom is more on the semiconductor side. That suggests they're thinking about this from both ends: current capacity and future chip development.
SPEAKER_01Yeah, and that's smart because the compute shortage is real. Everyone's fighting for the same GPU resources, the same data center space. If you're Anthropic and you're growing this fast, you can't just rely on spot market availability. You need guaranteed capacity, which means long-term strategic partnerships.
SPEAKER_00But here's what I'm curious about. Is this sustainable? Like, $30 billion in run-rate revenue sounds incredible. But what are their margins? How much of that is just getting fed right back into compute costs? The AI industry has this weird dynamic where success can be almost as expensive as failure.
SPEAKER_01That's the million-dollar question, or in this case the billion dollar question. These AI companies are basically running a race to see who can scale fastest while maintaining profitability. And the compute costs are brutal. We're talking about systems that can cost thousands of dollars per hour to run at scale.
SPEAKER_00I've been trying to wrap my head around the math here. If they're at a $30 billion run rate, that's roughly $2.5 billion per month. But if a significant portion of that goes to compute costs, and they're expanding those partnerships, how much is actually falling to the bottom line?
SPEAKER_01That's the thing. We don't have visibility into their cost structure. But historically, cloud companies have gross margins around 70-80%. For AI companies, I'd expect that to be lower because of the intensive compute requirements, maybe 50-60% if they're lucky.
SPEAKER_00Which would still be incredible numbers, but it shows how capital-intensive this business model is. And it raises questions about what happens if demand suddenly drops, or if competitors start undercutting on price.
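For readers following along, the back-of-envelope arithmetic the hosts sketch above can be written out explicitly. Note that the 50-60% gross margin range is their speculation, not a disclosed Anthropic figure:

```python
# Back-of-envelope math from the discussion above.
# The margin range is the hosts' guess, not a reported number.

annual_run_rate = 30e9  # $30B annualized run-rate revenue

monthly = annual_run_rate / 12
print(f"Monthly revenue: ${monthly / 1e9:.1f}B")  # Monthly revenue: $2.5B

for margin in (0.50, 0.60):
    gross_profit = annual_run_rate * margin
    print(f"At {margin:.0%} gross margin: ${gross_profit / 1e9:.0f}B/yr gross profit")
```

At a speculative 50% margin that leaves roughly $15 billion a year before R&D, staffing, and further compute expansion, which is why the hosts call the business capital-intensive even at these revenue levels.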
SPEAKER_01Right. And there's also the question of compute supply. What happens when everyone else starts scaling up too? Google is providing infrastructure to Anthropic, but they're also competing with them through Gemini. At some point, those interests might conflict.
SPEAKER_00That's a great point. Google is basically providing the picks and shovels to their own competition. It's smart business in the short term, but strategically it seems weird. Why help Anthropic scale when you could be capturing that market share yourself?
SPEAKER_01I think Google is hedging their bets. They know the AI market is big enough for multiple players, and they'd rather make money from infrastructure while also competing on applications. Plus, if Anthropic becomes too dependent on Google's infrastructure, that gives Google leverage.
SPEAKER_00Right. So what does this mean for competition in the AI space? If you need these massive compute partnerships just to handle demand, does that create barriers for smaller players trying to compete with Anthropic, OpenAI, and Google?
SPEAKER_01Absolutely, it does. This is becoming a capital-intensive business, where your relationships with compute providers are almost as important as your AI research. Smaller companies are going to have to find niche markets or specialized applications where they don't need to compete on pure scale.
SPEAKER_00And I think that's going to accelerate the trend we're seeing toward specialized AI models, rather than trying to build general-purpose competitors to GPT or Claude. You simply can't afford to play that game unless you have Google or Microsoft backing you up.
SPEAKER_01Which might not be a bad thing, honestly. The market probably doesn't need 15 different general-purpose AI assistants. But it could definitely use specialized AI for healthcare, finance, manufacturing, legal work, areas where domain expertise matters more than raw scale.
SPEAKER_00True. But it also means we're heading toward a more consolidated market structure. A few giants providing general AI capabilities and everyone else fighting over specialized niches. That has implications for innovation, pricing, and consumer choice. Let's talk about something that's happening behind the scenes but could reshape how AI companies get funded. According to reports, private wealth managers and family offices are bypassing traditional venture capital firms to make direct investments in AI startups. Instead of being passive investors, wealthy individuals and families are becoming active participants in earlier-stage, riskier AI bets.
SPEAKER_01This is huge. And it makes total sense when you think about it. These family offices are sitting on massive amounts of capital, they're watching VCs make incredible returns on AI investments, and they're thinking, why are we giving these middlemen 20% when we could do this ourselves? The FOMO is real.
SPEAKER_00But here's what concerns me about this trend. VCs don't just provide money, they provide expertise, due diligence, portfolio support. If wealthy families are jumping directly into AI startups, are they equipped to evaluate the technical risks, the competitive landscape, the regulatory challenges?
SPEAKER_01That's a fair concern, but I think you're underestimating these family offices. A lot of them have been building out their own investment teams, hiring people who came from top-tier VCs. And frankly, some of the AI investments we've seen from traditional VCs haven't exactly been home runs either. Sometimes fresh eyes and different perspectives can be valuable.
SPEAKER_00Okay. But let's think about what this does to the market dynamics. If you're an AI startup and you can get funding directly from a family office, without giving up board seats or dealing with VC governance, that's attractive. But it also means less institutional oversight, potentially less strategic guidance.
SPEAKER_01Right. And it could lead to more AI startups getting funded that maybe shouldn't be. VCs, for all their flaws, do provide a filtering function. They've seen hundreds of pitches. They know what works and what doesn't. Family offices might be more susceptible to flashy demos that don't translate to real business value.
SPEAKER_00And there's another angle here. If private wealth is pouring into AI at the early stages, that's going to inflate valuations across the board. It's basic supply and demand. More money chasing the same opportunities means higher prices, which could create bubble conditions.
SPEAKER_01Yeah, but here's the counter-argument. Maybe the traditional VC model is just too slow for AI. This technology is moving so fast that by the time you go through a typical six-month VC process, the window might be closed. Family offices can move faster, make decisions quicker.
SPEAKER_00That's true, but speed without wisdom can be dangerous. I think what we're going to see is a bifurcated market. Family offices funding the experimental, high-risk AI plays, while VCs focus on more mature opportunities with clearer business models. The question is which approach produces better outcomes?
SPEAKER_01And let's be honest about the scale here. We're talking about family offices and private wealth that collectively manages trillions of dollars, even if they allocate a small percentage to direct AI investments. That's still an enormous amount of capital entering the market.
SPEAKER_00Which brings up another interesting point. What happens to the traditional VC model if this trend accelerates? Do VCs start focusing on later stage investments? Do they become more like consultants or advisors rather than capital providers?
SPEAKER_01I think VCs will adapt. They always do. Maybe they start offering services beyond just funding, technical due diligence for family offices, portfolio management, strategic advisory. There's still value in expertise and network effects, even if the capital equation changes.
SPEAKER_00But there's also a risk here for entrepreneurs. VCs might be demanding and bureaucratic, but they also provide discipline, governance, and strategic thinking. If you take money from a family office that's basically writing checks based on FOMO, what happens when you hit your first major roadblock? And what about the portfolio effects? VCs typically invest in multiple companies and can cross-pollinate ideas, make strategic introductions, create synergies. If you're a family office making one-off AI investments, you miss out on those network effects.
SPEAKER_01Though on the flip side, family offices might be more patient capital. VCs need to return money to their LPs within a certain timeframe. Family offices are investing their own money and can potentially hold positions for decades. That can be valuable for AI companies that need time to mature.
SPEAKER_00True. And there's something to be said for having investors who aren't under pressure to chase the next hot trend or exit within five to seven years. Long-term thinking could actually benefit AI development. Alright, let's shift gears to something Google did that you might have missed. They quietly launched a new AI dictation app that works offline using their Gemma AI models. And when I say quietly, I mean this got almost no fanfare, which is unusual for Google. It's designed to compete with apps like Whisperflow.
SPEAKER_01The fact that it's offline-first is actually a big deal. Most AI apps require constant internet connectivity, which creates privacy concerns and limits where you can use them. Now if Google has figured out how to run decent speech recognition entirely on device using Gemma, that's a legitimate competitive advantage.
SPEAKER_00Right. And it makes me wonder why they launched it so quietly. Usually Google is pretty vocal about their AI advances. Is this them testing the waters, or is there something about the competitive landscape that made them want to fly under the radar?
SPEAKER_01I think it's strategic. The dictation and transcription market is getting crowded, with everyone from OpenAI to smaller startups launching solutions. By going quiet, Google can gather user feedback and iterate without drawing too much competitive attention. Plus, if it flops, less embarrassment.
SPEAKER_00That's smart, but let's talk about the technical implications. If they can run Gemma models offline for speech recognition, what else could they do? This feels like a proof of concept for broader offline AI capabilities.
SPEAKER_01Exactly. And that's where this gets really interesting for privacy-conscious users. Imagine having an AI assistant that doesn't send your data to the cloud, doesn't require internet, doesn't create a record of your queries. That could be a huge selling point, especially for enterprise customers.
SPEAKER_00But there's got to be a trade-off, right? Offline models are typically less powerful than their cloud-based counterparts. How good can speech recognition be when you're running entirely on a smartphone or laptop processor?
SPEAKER_01That's the key question. And honestly, we won't know until people start using it extensively, but Google has a lot of experience optimizing models for mobile devices. If anyone can make offline AI work well, it's probably them. The real test will be accuracy in noisy environments or with accents.
SPEAKER_00And let's think about the competitive implications. If Google can deliver competitive speech recognition without sending data to their servers, that puts pressure on other providers to match that privacy level. Nobody wants to be the company that requires cloud connectivity when Google doesn't.
SPEAKER_01Right. And this could be the start of a broader shift toward edge AI. We've been talking about this for years. The idea that AI processing moves closer to where the data is generated, rather than everything going to centralized data centers. This might be the practical breakthrough that makes it real.
SPEAKER_00But I'm curious about the business model implications. If everything runs offline, Google can't collect usage data, can't improve their models through user feedback, can't monetize through targeted advertising. How do they make money on this?
SPEAKER_01That's a great question. Maybe it's not about direct monetization, but about ecosystem lock-in. If Google provides the best offline AI tools, that keeps you in their ecosystem, which has value for their other products and services. Or maybe they're planning to charge premium pricing for privacy.
SPEAKER_00The privacy angle is interesting. We've been hearing more about data sovereignty, GDPR compliance, corporate policies around cloud data. An offline AI solution sidesteps a lot of those concerns because the data never leaves the device.
SPEAKER_01And think about the use cases where that matters: healthcare, legal, financial services, government, industries where data sensitivity is paramount. If Google can deliver enterprise-grade offline AI, that opens up markets that have been hesitant to adopt cloud-based solutions.
SPEAKER_00Though I wonder about the update and improvement cycle. With cloud-based AI, you can continuously improve models and push updates instantly. With offline models, how do you keep them current? Do users have to download new model versions periodically?
SPEAKER_01Probably, yeah. But that might not be a bad thing. It gives users more control over when and how their AI tools change. Some enterprise customers actually prefer that predictability rather than having their tools change unexpectedly.
SPEAKER_00Keep an eye on this because if Google can prove that offline AI apps can compete with cloud-based ones, it could trigger a major shift in how AI companies think about deployment. Privacy and offline capability might become the new battleground. Time for some rapid fire updates. First up, early reports suggest that Fermos, an Nvidia-backed AI data center provider focused on Asia, has hit a $5.5 billion valuation after raising $1.35 billion in just six months.
SPEAKER_01If confirmed, that's insane growth for an infrastructure company, but it makes sense. Everyone needs AI compute, especially in Asia, where the demand is exploding. Nvidia backing them is basically a seal of approval that they know where the market is heading.
SPEAKER_00The fact that they raised over a billion dollars in six months tells you everything about how desperate companies are for reliable AI infrastructure. This is the picks and shovels play for the AI gold rush.
SPEAKER_01And focusing on Asia is smart. The regulatory environment is different. Land and power are potentially cheaper, and there's huge demand from local companies that don't want to depend on Western cloud providers.
SPEAKER_00Plus, if you're building AI data centers, having Nvidia as a backer probably helps with chip allocation. In a world where GPUs are scarce, that relationship could be the difference between success and failure.
SPEAKER_01Exactly. This isn't just about money, it's about supply chain access. When compute is the bottleneck, the companies with the best hardware relationships win.
SPEAKER_00Next up, reports suggest that AI music generation company Suno is struggling to reach licensing deals with major music labels, including Universal Music Group and Sony Music Entertainment. The dispute centers on how revenue from AI-generated music should be shared and compensated.
SPEAKER_01This was inevitable. The music industry learned from what happened with streaming. They're not going to let AI companies build billion-dollar businesses on their content without getting paid. Suno is caught in the middle of a much bigger fight about AI training data.
SPEAKER_00And this could set precedent for all creative AI applications. If the music labels win big concessions here, expect similar demands from book publishers, movie studios, and news organizations.
SPEAKER_01The interesting question is whether Suno actually needs these licensing deals. If their AI can generate original music that doesn't directly copy existing works, do they legally need permission? The answer could reshape the entire creative AI industry.
SPEAKER_00But there's also the practical side. Even if Suno doesn't legally need licenses, having the music industry as an enemy is not great for business. Distribution, partnerships, artist collaboration, all of that becomes much harder if you're in a legal fight with Universal and Sony.
SPEAKER_01True. And musicians are already nervous about AI replacing human creativity. If Suno wants adoption from actual artists and producers, they probably need the industry on their side, not against them.
SPEAKER_00This feels like one of those cases where the technology is advancing faster than the legal and business frameworks can keep up. Someone's going to have to blink first. According to early reports, Intel has signed on to Elon Musk's Terrafab chips project alongside SpaceX and Tesla to develop a new US semiconductor manufacturing facility in Texas, though Intel's specific role and investment level remain unclear.
SPEAKER_01Wait, Elon is building a chip fab now? I mean, given his track record with manufacturing at Tesla and SpaceX, maybe he can actually pull this off. And Intel partnering suggests they think there's real potential here, not just Elon hype.
SPEAKER_00The Texas location makes sense given the state's aggressive courting of tech companies, but semiconductor manufacturing is incredibly complex and capital intensive. This feels like a long-term play that won't bear fruit for years.
SPEAKER_01But think about the strategic logic. Tesla needs chips for their cars. SpaceX probably needs specialized semiconductors for satellites and rockets, and if you're going to build a fab anyway, why not make it big enough to serve other customers too?
SPEAKER_00And Intel's involvement could be crucial for the technical expertise. Building a modern semiconductor facility isn't something you can just figure out from first principles, even if you're Elon Musk. You need people who understand the process technology.
SPEAKER_01Plus, this plays into the whole reshoring narrative.
SPEAKER_00Though knowing Elon's timeline predictions, if he says this will be operational in two years, we should probably plan for five. But hey, at least the direction is right. And circling back to our AI security theme. Cisco has joined Anthropic's multi-vendor initiative focused on securing AI software. This brings together multiple technology vendors to address AI security challenges collaboratively.
SPEAKER_01This is smart. AI security isn't something any one company can solve alone. Having Cisco involved brings serious enterprise networking and security expertise to the table. They know how to think about threats at scale.
SPEAKER_00And it suggests that Anthropic's cybersecurity initiative isn't just about their own models. They're trying to build an industry-wide coalition. That could become the foundation for AI security standards going forward.
SPEAKER_01Right. And Cisco has relationships with basically every major enterprise. If they're pushing AI security standards through their existing customer base, that could accelerate adoption much faster than a startup trying to build from scratch.
SPEAKER_00The multi-vendor approach also makes sense from a credibility standpoint. If Anthropic was trying to push AI security standards alone, people might see it as self-serving. But with multiple vendors involved, it looks more like genuine industry collaboration.
SPEAKER_01And frankly, given how interconnected modern IT infrastructure is, you need multiple vendors working together anyway. An AI system might be running on Google Cloud using Cisco networking with anthropic models. Security has to work across all those layers.
SPEAKER_00This could be the beginning of something bigger, an industry consortium around AI security standards, which would be good news for everyone who's worried about AI systems being weaponized or compromised. If you zoom out and look at everything we covered today, there's a clear pattern emerging. We've got Anthropic hitting massive revenue numbers while simultaneously launching security initiatives, private wealth bypassing traditional gatekeepers to pour money into AI, Google quietly building offline capabilities. It all points to an industry that's simultaneously maturing and becoming more paranoid.
SPEAKER_01Yeah, and I think the paranoia is justified. The stakes are getting higher. When you have companies generating $30 billion in revenue from AI, when you have family offices throwing around billions in funding, when you have critical infrastructure depending on these systems, the consequences of getting it wrong become massive.
SPEAKER_00What's interesting to me is how the solutions are becoming as complex as the problems. We need AI to secure AI, we need new funding models to support the capital requirements, we need offline capabilities to address privacy concerns. Every answer creates new questions.
SPEAKER_01And that's probably healthy, honestly. The alternative is reckless growth without guardrails. I'd rather see the industry wrestling with these challenges now than dealing with catastrophic failures later. The companies that figure out security, sustainability, and responsible scaling are going to be the long-term winners.
SPEAKER_00But there's also this tension between collaboration and competition that's really interesting. You've got Anthropic building industry coalitions around security, but they're also competing aggressively with Google and OpenAI. How do you collaborate on standards while trying to beat each other in the market?
SPEAKER_01That's the classic tech industry paradox. You cooperate on infrastructure and standards because everyone benefits, but you compete on features and user experience. It's like how all the smartphone companies use the same cellular standards but still try to differentiate their phones.
SPEAKER_00Right. And I think we're seeing that play out with AI security. Everyone has an interest in preventing catastrophic AI failures because it would hurt the entire industry. But they still want to be the company that provides the best security solutions. Which could be good for innovation. Different types of investors bring different perspectives, different time horizons, different risk tolerances. That diversity could lead to more experimental approaches and breakthrough discoveries.
SPEAKER_01But it also creates new risks. If you have less experienced investors funding more experimental technologies, you could see more spectacular failures. The question is whether the successes will outweigh the failures.
SPEAKER_00And underlying all of this is the fundamental question of whether we're building AI systems responsibly. The revenue numbers are incredible, the capabilities are advancing rapidly, but are we thinking deeply enough about the long-term consequences?
SPEAKER_01I think initiatives like Anthropic's security program and Google's privacy-focused offline tools suggest that at least some companies are taking responsibility seriously. But you're right that the pace of development is so fast that it's hard to keep up with all the implications.
SPEAKER_00The next few years are going to be critical. We're at this inflection point where AI is becoming genuinely powerful and widely deployed, but we're still figuring out how to manage the risks. The decisions made today about security, governance, and industry standards are going to shape the next decade. That's a wrap on today's episode. The intersection of explosive AI growth and legitimate security concerns is something we'll definitely be tracking closely.
SPEAKER_01Absolutely.
SPEAKER_00Until then, keep building. See you tomorrow.