Build by AI
Build by AI is your daily briefing on everything happening in the world of artificial intelligence, delivered straight to your ears every single day.
Whether you're a founder trying to stay ahead of the curve, a professional figuring out how AI fits into your work, or simply someone who wants to understand what's actually going on in one of the fastest-moving industries on the planet, Build by AI cuts through the noise and brings you what matters, in plain English, in under ten minutes.
Every episode covers the latest AI news, model releases, industry shifts, and research breakthroughs, so you never have to spend hours scrolling to stay informed. Think of it as your morning coffee briefing for the AI age.
Build by AI is produced by artificial intelligence, from research to script to publish, with every episode reviewed and verified by a human editor before it reaches your ears. So you get the speed and consistency of automation, without sacrificing accuracy or trust. Which also raises the question we're quietly exploring with every episode: how good can AI-generated content actually get? You be the judge.
New episodes drop daily.
Subscribe wherever you get your podcasts and wake up smarter every morning.
Collaboration requests: wiktoria@womenlead.ai
Topics covered: artificial intelligence news, large language models, generative AI, AI tools, ChatGPT, Claude, Gemini, AI regulation, machine learning research, tech industry news, AI startups, and the future of work.
The $974 Billion Question: Is OpenAI Too Big to Fail? | 1st April
Okay, so I've been staring at these numbers all morning and I genuinely can't wrap my head around this. OpenAI has now raised over $900 billion in total funding. Like, that's approaching the GDP of entire countries.
SPEAKER_01Wait, hold up. Are we talking about an AI company or, you know, a sovereign nation at this point? Because honestly, with that kind of money, they could probably buy a small country.
SPEAKER_00Right? And here's what's really wild, they're not slowing down. They just added another $12 billion on top of everything else. I keep asking myself, what exactly are they building that requires this much capital?
SPEAKER_01Dude, that's the trillion-dollar question, literally. And I think the answer is going to reshape how we think about AI, tech companies, and honestly, power itself.
SPEAKER_00You're listening to Build by AI, the daily show where we break down the AI news that actually matters. I'm Alex Shannon.
SPEAKER_01And I'm Sam Hinton. Today we're diving into OpenAI's mind-bending funding spree, why Claude might be getting a digital pet, and honestly, some moves that feel like April Fool's jokes but apparently aren't.
SPEAKER_00Plus, we'll talk about why your car is about to get a lot smarter, and what happens when AI companies start acting like venture capital firms themselves.
SPEAKER_01Buckle up, because today's episode is all about money, power, and the weird future we're apparently building. Let's dive in.
SPEAKER_00Or should I say the $852 billion elephant? OpenAI just closed a funding round at an $852 billion valuation. Then, as if that wasn't enough, they announced another $122 billion funding round for expanding frontier AI and compute infrastructure, and then added another $12 billion on top of that.
SPEAKER_01Okay, so let me just state this clearly. That puts OpenAI's valuation higher than companies like Tesla, higher than most oil companies, higher than basically every tech company except maybe Apple and Microsoft.
SPEAKER_00And here's what I keep coming back to. What are they actually spending this money on? The $122 billion round specifically mentions expanding frontier AI globally and investing in next-generation compute infrastructure. That's a lot of GPUs, Sam.
SPEAKER_01Yeah, but I think people are missing the bigger picture here. This isn't just about buying hardware. When you have this much capital, you're essentially building the infrastructure for the entire AI economy. Think about it. They're not just making AI models, they're creating the foundation that every other AI company is going to depend on.
SPEAKER_00That's kind of what worries me though. Are we creating a scenario where OpenAI becomes too big to fail? Like if you control the compute infrastructure that everyone else relies on, that's not just market dominance, that's infrastructure dominance.
SPEAKER_01Exactly. And here's where it gets really interesting or concerning, depending on how you look at it. With this much money, OpenAI doesn't just compete with other AI companies anymore. They compete with cloud providers, they compete with chip manufacturers, they might even start competing with internet service providers.
SPEAKER_00So what does this mean for smaller AI companies, for startups trying to build in this space? If OpenAI can outspend everyone by orders of magnitude, how do you even compete?
SPEAKER_01On the other hand, you're looking at potential monopolization of the most important technology of our time.
SPEAKER_00And the timing feels significant too. We're seeing this massive capital raise, right, as AI is moving from experimental to essential for businesses. It's like they're positioning themselves to own the entire AI supply chain, just as demand is exploding.
SPEAKER_01Keep an eye on the regulatory response to this, because I guarantee governments around the world are looking at these numbers and asking some very pointed questions about competition and market control. This could be the moment that defines AI governance for the next decade.
SPEAKER_00But let's talk about the practical implications for a second. They say this funding is specifically to meet growing demand for ChatGPT, Codex, and enterprise AI products. That tells me they're seeing demand that's outpacing their current capacity by a massive margin.
SPEAKER_01Which raises an interesting question. Is this defensive or offensive spending? Are they raising this money because they have to keep up with demand, or because they want to price out the competition before it gets started?
SPEAKER_00I think it's both, honestly. And that's what makes this so unprecedented. Most companies raise money when they need it. OpenAI is raising money to fundamentally alter the competitive landscape of an entire technology sector.
SPEAKER_01The compute infrastructure angle is what really gets me. If they're building next generation compute infrastructure with this money, they're not just building capacity for their own models. They're potentially building the backbone that every AI application will run on.
SPEAKER_00And once you control that infrastructure, you control pricing, you control access, you control innovation cycles. That's a level of power that goes way beyond just having a better product.
SPEAKER_01Exactly. And I keep thinking about what this means for the average person, the average business trying to use AI tools. If one company controls this much of the infrastructure, what happens to pricing? What happens to innovation that doesn't align with their priorities?
SPEAKER_00That's the million-dollar question. Or should I say the trillion-dollar question? Because at this scale, OpenAI's strategic decisions don't just affect their competitors, they affect the entire trajectory of AI development globally.
SPEAKER_01And here's something that's not getting enough attention. This funding round values OpenAI higher than the market cap of most Fortune 500 companies, and they're still technically a research lab at their core. That's a weird disconnect between mission and valuation.
SPEAKER_00Right. And it raises questions about accountability and governance. When a research organization has more capital than most countries' annual budgets, traditional oversight models start to break down.
SPEAKER_01I think we're witnessing the birth of a new type of corporate entity, one that operates at nation-state scale but with private company agility. And honestly, I'm not sure our regulatory frameworks are ready for that.
SPEAKER_00Now let's shift gears to something that sounds almost quaint by comparison. Early reports suggest that a code leak from Anthropic's Claude Code 2.1.k8 update revealed some unreleased features that honestly sound like they're from a completely different conversation than what we just talked about. We're talking about a Tamagotchi-style pet and an always-on agent capability.
SPEAKER_01Wait, hold on. So while OpenAI is raising nearly a trillion dollars to dominate global AI infrastructure, Anthropic is making a digital pet? That feels like the most delightfully human response to this AI arms race I've heard in months.
SPEAKER_00Right. The leak apparently came from source map files in TypeScript. So if this is confirmed, it suggests Anthropic is thinking about AI interaction in a fundamentally different way. What do you make of the Tamagotchi angle?
SPEAKER_01Okay, so this is actually brilliant from a user engagement perspective. Think about it. Tamagotchis worked because they created emotional attachment through caretaking behavior. If Claude has a pet component that you need to nurture or interact with regularly, that's not just a feature, that's a relationship-building mechanism.
SPEAKER_00But then there's also this always-on agent capability that was leaked. That feels much more significant from a technical standpoint. We're talking about Claude potentially running continuously in the background, which opens up a whole different set of possibilities and concerns.
SPEAKER_01Yeah, that's the real story buried in this cute pet narrative. An always-on agent means Claude isn't just responding to queries anymore. It's potentially monitoring, learning, and acting proactively. That's a massive shift in how we interact with AI assistants.
SPEAKER_00And I have to ask, is this Anthropic's answer to OpenAI's trillion dollar war chest? Instead of trying to outspend them, they're trying to out-innovate them on the user experience front.
SPEAKER_01I think that's exactly what's happening. And honestly, it might be the smarter play. While OpenAI is building this massive infrastructure empire, Anthropic is focusing on making AI feel more human, more relatable, more integrated into daily life.
SPEAKER_00The always-on aspect does raise some privacy questions, though. If confirmed, we're looking at an AI that's potentially always listening, always learning from your behavior. That's powerful, but it's also a significant shift in the privacy landscape.
SPEAKER_01Absolutely. And I think that's going to be the key differentiator between AI companies going forward, not just how powerful their models are, but how they handle that power responsibly. The pet feature is cute, but the always-on agent is where the real innovation and the real risks live.
SPEAKER_00But let's dig into this Tamagotchi concept a bit more. I mean, this was discovered through source map files in TypeScript. So we're talking about actual code implementation, not just conceptual planning. That suggests they're pretty far along with this.
SPEAKER_01And think about the psychology here. Tamagotchis created genuine emotional attachment because they had needs, they had moods, they could die if you didn't take care of them. If Claude has a pet that reflects your interaction patterns, that's a gamification of AI engagement.
SPEAKER_00Which could be incredibly effective at building user loyalty. Instead of just being a tool you use when you need it, Claude becomes something you check on, something you care about. That's a completely different relationship dynamic.
SPEAKER_01Exactly. And it's also a subtle way to encourage regular engagement. If your AI pet gets sad or neglected when you don't use Claude for a few days, that's a pretty powerful retention mechanism disguised as a fun feature.
SPEAKER_00But the always-on agent capability is what really changes the game. We're talking about Claude potentially running in the background, understanding context from your ongoing activities, maybe even anticipating needs before you express them.
SPEAKER_01That's where this gets really interesting and really concerning at the same time. An always-on agent could be incredibly useful. Imagine Claude noticing patterns in your work and proactively offering relevant insights. But it also means comprehensive behavioral monitoring.
SPEAKER_00And this is where Anthropic's focus on AI safety becomes really important. If they're building always-on agents, how they implement privacy protections and user control will set the standard for the entire industry.
SPEAKER_01Right. And it's a fascinating contrast with OpenAI's approach. OpenAI is betting on scale and infrastructure dominance. Anthropic is betting on creating deeper, more personal relationships between humans and AI. Both could be successful, but they're playing completely different games.
SPEAKER_00What's interesting is that both approaches could complement each other. You could imagine a future where OpenAI provides the underlying compute infrastructure, but Anthropic provides the user experience layer that makes AI feel human and approachable.
SPEAKER_01Or they could be completely incompatible visions. OpenAI's approach suggests AI as a utility, powerful, always available, but fundamentally transactional. Anthropic's approach suggests AI as a companion, personal, emotional, integrated into your daily life in a more intimate way.
SPEAKER_00And keep in mind, this is still just a code leak. We don't know the full implementation details, we don't know the timeline, we don't even know if these features will actually ship. But the fact that they're being developed tells us a lot about Anthropic's strategic thinking.
SPEAKER_01Absolutely. This leak gives us a window into how different AI companies are approaching the fundamental question of human-AI interaction. And honestly, I find Anthropic's approach more intriguing than OpenAI's raw capital power play.
SPEAKER_00Speaking of AI integration into daily life, here's something that actually launched and you can use right now. ChatGPT is now integrated with Apple's CarPlay. You can access it directly from your vehicle's dashboard if you have iOS 26.4 or newer, and the latest version of the ChatGPT app.
SPEAKER_01Okay, this is one of those features that sounds simple, but is actually pretty revolutionary when you think about it. Your car just became an AI-powered assistant that you can talk to while driving. That changes the entire dynamic of car interaction.
SPEAKER_00Right, and the timing is interesting too. We're seeing AI assistants move from our phones and computers into our cars, which is probably where voice interaction makes the most sense anyway. But what are the practical applications here beyond just asking random questions?
SPEAKER_01Think about it. You could ask ChatGPT to help you navigate complex directions, explain unfamiliar concepts you heard on a podcast, or even help you prepare for a meeting while you're driving to it. But honestly, the bigger picture is that this normalizes AI as a constant companion.
SPEAKER_00There's also the safety angle here that I'm curious about. Apple's pretty careful about what they allow in CarPlay for obvious reasons. The fact that they approved ChatGPT integration suggests they're confident in the safety implementation.
SPEAKER_01Yeah, and that approval is significant because it signals that AI assistants are moving from experimental to essential infrastructure. When Apple integrates something into CarPlay, they're basically saying this technology is reliable enough to use while operating a vehicle.
SPEAKER_00But here's what I keep wondering. How does this play into that always-on agent concept we just talked about with Claude? If AI assistants are in our cars, on our phones, potentially running in the background, we're looking at a pretty comprehensive AI presence in daily life.
SPEAKER_01Exactly. And I think that's the real story here. It's not just that ChatGPT is in CarPlay. It's that AI is systematically integrating into every environment where we spend significant time, home, work, car, phone. We're building an AI ecosystem around human life.
SPEAKER_00Which brings us back to that infrastructure question from the OpenAI story. If these integrations become essential and OpenAI controls the underlying infrastructure, that's a different kind of power than we've seen from tech companies before.
SPEAKER_01For now, if you want to try this out, make sure you've got iOS 26.4 or newer and the latest ChatGPT app, but keep an eye on how this evolves, because car integration is probably just the beginning of AI showing up in places you didn't expect.
SPEAKER_00What's fascinating to me is the user experience implications here. In your car, you're essentially a captive audience for extended periods. If ChatGPT can make those drives more productive, more educational, or just more entertaining, that's a significant value proposition.
SPEAKER_01And it's a perfect environment for voice interaction. You can't type while driving, you need hands-free operation, and you often have questions or needs that arise spontaneously. Cars might actually be the killer app environment for AI assistance.
SPEAKER_00But I'm also thinking about the data implications. Your car knows where you go, when you go there, how long you stay. If that location data gets combined with AI conversation data, that's an incredibly detailed picture of your life.
SPEAKER_01That's a really good point. And it's not just location data, it's behavioral data, too, how you drive, when you drive, what you talk about while driving. The privacy implications of AI in vehicles are huge, and I don't think we're fully grappling with them yet.
SPEAKER_00Plus, there's the integration complexity. This requires coordination between OpenAI, Apple, car manufacturers, and cellular providers. The fact that they made it work suggests there's significant commercial motivation behind getting AI into vehicles.
SPEAKER_01Absolutely, and I think this is just the beginning. Once AI assistants are established in cars, the next step is probably more proactive capabilities: AI that helps with route optimization, maintenance reminders, maybe even integration with smart home systems.
SPEAKER_00The requirements are interesting too. iOS 26.4 or newer, and the latest ChatGPT app. That's a pretty high bar that suggests this integration requires significant technical capabilities on both sides.
SPEAKER_01There's probably real engineering work happening to make AI conversation safe and effective in a driving environment, voice recognition, response formatting, safety protocols. That's substantial development work.
SPEAKER_00And it positions OpenAI in yet another daily-use environment. First it was work with ChatGPT for productivity, then it was coding with Codex, now it's transportation. They're systematically occupying every major context where people might want AI assistance.
SPEAKER_01That's the ecosystem play in action. It's not enough to have a great AI model. You need to be present in every environment where people might want to use it. And cars represent hours of potential AI interaction time every day for millions of people.
SPEAKER_00Let's talk about Google's entry into today's news. They've released Veo 3.1 Lite, which they're calling their most cost-effective video generation model. It's available in paid preview through the Gemini API in Google AI Studio right now.
SPEAKER_01Okay, so this is Google's play to democratize AI video generation, and the Lite branding tells you everything you need to know. They're positioning this as the accessible option, while companies like Runway are launching premium programs and OpenAI is raising trillion-dollar war chests.
SPEAKER_00The timing feels strategic too. We just talked about how much money is flowing around the AI space, but most businesses and developers are still looking for affordable ways to actually use these tools. A cost-effective video generation model could hit that sweet spot.
SPEAKER_01Exactly. And video generation is one of those AI capabilities that feels magical but has been prohibitively expensive for most use cases. If Google can make this accessible through their existing API infrastructure, that could open up a lot of new applications.
SPEAKER_00What's interesting is that they're launching this through the Gemini API and Google AI Studio. That suggests they're trying to build a comprehensive AI development platform, not just offer standalone tools.
SPEAKER_01Yeah, and that's smart positioning against OpenAI's massive infrastructure play. Instead of trying to outspend everyone, Google is leveraging their existing cloud infrastructure to offer more cost-effective alternatives. It's like they're saying you don't need a trillion-dollar budget to do cool stuff with AI.
SPEAKER_00I'm curious about the Lite designation, though. In the AI world, Lite versions often mean significant capability trade-offs. The question is whether cost-effectiveness comes at the expense of quality or just convenience features.
SPEAKER_01Well, that's going to be the test for this model. If Veo 3.1 Lite can deliver 80% of the quality at 20% of the cost, that's a game changer for a lot of use cases. But if Lite means not very good, then it's just Google playing catch-up with marketing instead of technology.
SPEAKER_00And this fits into a broader pattern we're seeing where the big tech companies are all approaching AI from their strengths: OpenAI with massive capital, Anthropic with user experience innovation, and Google with accessible, integrated tools.
SPEAKER_01If you're a developer or business looking to experiment with AI video generation, this is probably worth checking out in Google AI Studio. But pay attention to the quality and limitations, because cost effective can mean different things depending on what you're trying to build.
SPEAKER_00The integration with Google AI Studio is particularly interesting because it suggests Google is thinking about video generation as part of a broader AI workflow, not just as a standalone capability.
SPEAKER_01Right. And that workflow approach could be Google's competitive advantage: while other companies are focused on making the best individual AI models, Google is focused on making AI models that work well together as a complete development environment.
SPEAKER_00The paid preview model also tells us something about Google's go-to-market strategy. They're not trying to compete on free-tier offerings. They're going straight to commercial applications with pricing that presumably makes sense for business use cases.
SPEAKER_01Which is smart because video generation has obvious commercial value. Marketing teams, content creators, small businesses, there's immediate revenue potential if the quality is decent and the pricing is reasonable.
SPEAKER_00And the Gemini API integration means developers can potentially combine video generation with other AI capabilities in a single workflow. That's the kind of integrated experience that could differentiate Google's offering from standalone video generation tools.
SPEAKER_01Exactly. Instead of having to integrate multiple different AI services, you could potentially handle text, images, and video all through the same API. That's a compelling developer experience if they can execute on it well.
SPEAKER_00But I keep coming back to the cost-effective positioning. In a market where companies are raising hundreds of billions of dollars, positioning on cost feels almost defiant. It's like Google is betting that most users don't actually need the most expensive, most powerful options.
SPEAKER_01Most video generation use cases don't need Hollywood-quality output. They need good-enough quality at a price point that makes commercial sense. If Veo 3.1 Lite hits that mark, it could capture a huge portion of the practical use case market. Exactly. This is Runway saying we're not just a tool, we're a platform. And honestly, with OpenAI raising nearly a trillion dollars, having a focused ecosystem play might be the smarter long-term strategy.
SPEAKER_00The program specifically targets companies using Runway's AI video models, which means they're essentially subsidizing customer acquisition while building their developer community. That's a pretty sophisticated approach to market development.
SPEAKER_01And it positions Runway as the go-to platform for AI video innovation. If you're a startup with a cool video AI idea, they're basically offering you funding and support to build on their infrastructure. That's how you create platform lock-in while looking generous.
SPEAKER_00The emphasis on real-time video intelligence is particularly interesting because that's where the technical challenges are hardest and the commercial opportunities are biggest. Live video processing, interactive experiences, real-time generation, those are premium use cases.
SPEAKER_01Right. And it's a market segment that Google's cost-effective approach probably can't compete in effectively. Runway is betting that the high-end real-time applications will pay premium prices for premium performance, regardless of what cheaper alternatives exist.
SPEAKER_00Next, early reports suggest Salesforce has announced a major AI-focused update to Slack, adding 30 new AI-powered features. That's a significant makeover for workplace communication.
SPEAKER_01Some of these are probably going to be amazing, and some are probably going to be forgotten in six months.
SPEAKER_00It does show how AI is becoming table stakes for productivity tools, though. If you're not integrating AI into your workplace software, you're probably falling behind. The question is whether 30 features represents thoughtful integration or feature creep.
SPEAKER_01Yeah. And Slack is interesting because it's where a lot of people spend most of their workday. If they get it wrong, it's going to be really annoying really fast.
SPEAKER_00The sheer number of features suggests Salesforce is betting that different teams and use cases will adopt different AI capabilities. Rather than trying to find the one killer AI feature for Slack, they're providing a toolkit and letting users figure out what works.
SPEAKER_01That's actually pretty smart from a product strategy perspective. Slack has incredibly diverse usage patterns across different organizations. What works for a software development team might be completely different from what works for a marketing team.
SPEAKER_00And it positions Slack as an AI-native workplace platform rather than just a messaging tool with some AI features bolted on. That's a significant strategic shift that could help them compete with newer AI-first productivity tools.
SPEAKER_01The timing is interesting too. Coming right as we're seeing AI assistants integrate into cars, background agents, and other daily use environments. Slack is making sure they don't get left behind in the workplace AI race.
SPEAKER_00If you zoom out and look at everything we covered today, there's a really interesting pattern emerging. We've got OpenAI raising nearly a trillion dollars for infrastructure dominance, Anthropic potentially building emotional connections with digital pets, Google focusing on cost-effective accessibility, and companies like Runway and Salesforce building ecosystems and integrations.
SPEAKER_01Yeah, it feels like we're watching the AI industry mature in real time, and different companies are choosing completely different strategies for how to win. The question is whether there's room for all these approaches, or if we're heading toward a winner-take-all scenario.
SPEAKER_00What strikes me is how much this resembles the early cloud computing wars, but with way more money and way higher stakes: the infrastructure play, the developer ecosystem play, the user experience play. We've seen this before, but not with technology this powerful.
SPEAKER_01And not with technology that could potentially replace human cognitive work. That's what makes this different. We're not just talking about market dominance, we're talking about controlling the tools that might reshape how humans work, create, and think.
SPEAKER_00Which brings us back to that trillion-dollar question from the beginning. With this much money and power concentrated in AI development, how do we make sure it actually benefits everyone? That's the conversation we'll probably be having for the next decade.
SPEAKER_01But what's fascinating is how these different approaches could actually be complementary rather than competitive. OpenAI builds the infrastructure, Google provides cost-effective access, Anthropic creates engaging experiences, and companies like Runway and Salesforce build specialized applications on top.
SPEAKER_00That's an interesting perspective. Instead of one company dominating everything, we might be looking at an AI ecosystem where different players control different layers of the stack. Infrastructure, platforms, experiences, applications.
SPEAKER_01Exactly. And that might actually be healthier for innovation and competition. If OpenAI controls infrastructure, but multiple companies can build compelling user experiences on top of it, that preserves some competitive dynamics, even in a world with massive capital concentration.
SPEAKER_00The integration trends we're seeing support that too. ChatGPT in CarPlay, Slack with 30 AI features, Claude with always-on agents. These aren't just standalone products anymore. They're becoming integral parts of existing workflows and environments.
SPEAKER_01And that integration trend might be the most important story here. We're moving from a world where AI is a special-purpose tool you use occasionally to a world where AI is ambient infrastructure that's just always available in every context.
SPEAKER_00Which raises some profound questions about human agency and privacy. If AI assistants are always listening in our cars, always running in the background on our computers, always integrated into our workplace communication, what does that do to our sense of private thought and independent decision making?
SPEAKER_01That's the question that keeps me up at night. The technology is incredibly powerful and potentially beneficial, but the implications for human autonomy are enormous. And with the kind of money we're talking about today, these aren't theoretical concerns anymore. They're immediate realities.
SPEAKER_00The regulatory response is going to be crucial. Governments are probably looking at OpenAI's trillion-dollar valuation and realizing that traditional antitrust frameworks might not be adequate for companies that operate at this scale with this kind of technology.
SPEAKER_01And the international competition angle is huge too. If one country's companies dominate AI infrastructure globally, that's not just an economic advantage, it's a strategic advantage in everything from military applications to cultural influence.
SPEAKER_00But there's also reason for optimism in the diversity of approaches we're seeing. The fact that Anthropic is focusing on user experience, Google is focusing on accessibility, and Runway is building ecosystems suggests there are multiple viable paths forward.
SPEAKER_01Right. And the rapid pace of development means we're likely to see even more innovative approaches emerge. The companies that succeed won't necessarily be the ones with the most money. They'll be the ones that best understand how humans want to interact with AI.
SPEAKER_00That's what makes stories like Claude's Tamagotchi pet so intriguing. In a world of trillion-dollar infrastructure plays, sometimes the most human approach might be the most successful one.
SPEAKER_01And for the rest of us trying to navigate this rapidly changing landscape, the key is probably to stay informed about these broader strategic moves while experimenting with the tools that are actually available today. The future of AI might be decided by trillion-dollar funding rounds, but it's also being shaped by how millions of people choose to use these tools day-to-day.
SPEAKER_00That's our show for today. As always, if you're getting value from these daily AI deep dives, the best way to support us is to subscribe and share the show with someone who needs to understand what's happening in AI.
SPEAKER_01And honestly, with the pace of change we're seeing, everyone needs to understand what's happening in AI. We'll be back tomorrow to break down whatever wild developments happen next.
SPEAKER_00I'm Alex Shannon.
SPEAKER_01And I'm Sam Hinton. Thanks for listening to Build by AI, and we'll see you tomorrow.