Build by AI
Build by AI is your daily briefing on everything happening in the world of artificial intelligence, delivered straight to your ears every single day.
Whether you're a founder trying to stay ahead of the curve, a professional figuring out how AI fits into your work, or simply someone who wants to understand what's actually going on in one of the fastest-moving industries on the planet, Build by AI cuts through the noise and brings you what matters, in plain English, in under ten minutes.
Every episode covers the latest AI news, model releases, industry shifts, and research breakthroughs, so you never have to spend hours scrolling to stay informed. Think of it as your morning coffee briefing for the AI age.
Build by AI is produced by artificial intelligence, from research to script to publish, with every episode reviewed and verified by a human editor before it reaches your ears. So you get the speed and consistency of automation, without sacrificing accuracy or trust. Which also raises the question we're quietly exploring with every episode: how good can AI-generated content actually get? You be the judge.
New episodes drop daily.
Subscribe wherever you get your podcasts and wake up smarter every morning.
Collaboration requests: wiktoria@womenlead.ai
Topics covered: artificial intelligence news, large language models, generative AI, AI tools, ChatGPT, Claude, Gemini, AI regulation, machine learning research, tech industry news, AI startups, and the future of work.
Build by AI
When AI Gets Desktop Powers and Robot Brains Get Smarter | 17th April
So OpenAI just gave their coding tool the ability to control your entire desktop. And I'm sitting here wondering if we just crossed some kind of line we didn't realize we were approaching.
SPEAKER_01Wait, hold on. Control the desktop, like actually clicking around and opening files and stuff. That's not just code completion anymore. That's basically having an AI intern with full access to your computer.
SPEAKER_00Exactly. And the timing is fascinating because this is clearly aimed straight at Anthropic. It's like watching two heavyweight fighters circle each other. Except the ring is our computers and the stakes are who gets to be our digital assistant.
SPEAKER_01Dude, and that's just one story today. We've also got a robot brain that supposedly learns tasks it was never taught, and the UK just threw $675 million at trying to build their own AI ecosystem. It feels like everything is accelerating at once.
SPEAKER_00Right? Like I woke up this morning and the AI world had shifted again. And these aren't just incremental updates. We're talking about fundamental changes in how AI systems work and who controls them.
SPEAKER_01And the money being thrown around.
SPEAKER_00You're listening to Build by AI. I'm Alex Shannon. And if you've been feeling like the AI world is moving too fast to keep up with, you're definitely not alone.
SPEAKER_01And I'm Sam Hinton. Today we're diving into OpenAI's big desktop power grab, some wild robotics claims, and funding numbers that honestly make my head spin. Plus, we'll talk about what happens when entire countries decide they need their own AI champions.
SPEAKER_00It's Friday, April 17th, 2026, and honestly, there's a lot to unpack today. Let's jump right in. We're talking about an AI that can navigate your computer, open applications, manipulate files.
SPEAKER_01Yeah, and the framing here is crucial. This isn't positioned as hey, cool new feature. This is explicitly OpenAI taking aim at Anthropic. They're basically saying anything you can do, we can do better, and we can do it while controlling your entire desktop.
SPEAKER_00Right. And that raises some interesting questions about where we draw the line. I mean, when we talk about AI assistance, most people are thinking about chatbots that give you text responses. But desktop control, that's a completely different level of access and capability.
SPEAKER_01Absolutely. And this is where it gets really interesting from a competitive standpoint. Anthropic has been positioning itself as the more thoughtful, safety-conscious AI company, right? But now OpenAI is basically saying we'll give users more power and more control. It's a direct challenge to that positioning.
SPEAKER_00But hold on, let's think about the practical implications here. If I'm a developer and I have an AI that can not just write code, but also run it, test it, debug it, and manipulate my development environment, that's incredibly powerful. But it's also kind of terrifying from a security perspective.
SPEAKER_01Oh, totally. And I think that's where OpenAI is making a bet that users will choose power over caution. They're betting that developers and power users want an AI that can actually do things, not just suggest things. But you're right to be concerned about security. We're basically talking about giving an AI the keys to your digital kingdom.
SPEAKER_00What's really fascinating to me is the timing. Why now? Why this level of desktop integration? It feels like OpenAI looked at where the market is going and decided they needed to make a big, bold move to stay ahead.
SPEAKER_01I think it's because they see the writing on the wall, coding assistants are becoming commoditized. Everyone has one now. But if you can build one that's actually an autonomous agent that can complete entire workflows, that's a moat, that's defensible. And it puts them in direct competition, not just with Anthropic, but potentially with Microsoft's Copilot, Google's offerings, everyone.
SPEAKER_00And think about what this means for the user experience. Right now, if I want AI help with coding, I have to copy and paste, switch between windows, manually implement suggestions. But if the AI can just do it all for me, that's a fundamentally different relationship with the technology.
SPEAKER_01Exactly. And that's what makes this so potentially disruptive. It's not just about making existing workflows faster, it's about creating entirely new workflows where the AI is doing things that humans used to have to do manually. But that also makes me wonder about the learning curve for users.
SPEAKER_00Right, because if the AI can control your desktop, how do you maintain oversight? How do you know what it's doing? How do you stop it if it starts going down the wrong path? These are real usability questions that OpenAI is going to have to solve.
SPEAKER_01Not everyone is going to be comfortable with that.
SPEAKER_00But maybe that's exactly the point. Maybe OpenAI is betting that the users who are comfortable with that level of AI autonomy are the ones who will drive the next wave of productivity gains. Early adopters, power users, people who are willing to trade some control for efficiency.
SPEAKER_01That could be smart positioning. Let Anthropic focus on the safety conscious users who want guardrails and explanations, while OpenAI goes after the users who want their AI to just get stuff done, even if it means taking more risks.
SPEAKER_00So for regular people listening to this, what does it mean? Are we looking at a future where our computers basically have a resident AI that can do complex tasks for us?
SPEAKER_01I think that's exactly where we're headed. But keep an eye on the security and privacy implications. Because once you give an AI full desktop access, you're trusting it with everything: your files, your passwords, your personal data. That's a big leap of faith.
SPEAKER_00And it raises questions about what happens when these AI systems make mistakes. If a chatbot gives you bad advice, you can ignore it. But if an AI with desktop control accidentally deletes important files or sends emails you didn't intend to send.
SPEAKER_01Yeah, the stakes are completely different. But if OpenAI can solve those problems, if they can build desktop control that's both powerful and reliable, they could have a massive competitive advantage. It's a high risk, high reward strategy.
SPEAKER_00Moving on to something that honestly sounds like science fiction, but early reports suggest it might be real. Physical Intelligence, which is apparently a hot robotics startup, has unveiled something called pi-0.7. And yes, that's pi, as in the mathematical constant, which they claim is a robot brain that can figure out tasks it was never explicitly taught.
SPEAKER_01Okay, so I'm immediately skeptical because we've heard these claims before, right? Every robotics company says they've cracked general purpose robot intelligence. But if this is real, if confirmed, this could be absolutely massive. We're talking about the difference between a robot that can vacuum your floor versus a robot that can look at your messy house and figure out how to clean it.
SPEAKER_00Right. And the key phrase here is general purpose robot brains. Most robots today are incredibly specific. They're programmed for very particular tasks in very controlled environments. But what Physical Intelligence seems to be claiming is that they've built something that can generalize, that can learn and adapt.
SPEAKER_01And that's the holy grail, right? That's what everyone in robotics has been chasing. Because once you have a robot brain that can figure out new tasks on its own, you don't need to program it for every single scenario. It becomes actually useful in the real world, which is messy and unpredictable.
SPEAKER_00But I have to ask, how do we verify these claims? I mean, it's easy to say your robot brain can learn anything, but proving it is a completely different matter. What kind of tasks are we talking about? How complex, how reliable?
SPEAKER_01That's exactly the right question to ask, and honestly, we don't have enough details yet. But here's what I find interesting. If this is even partially true, it represents a fundamentally different approach to robotics. Instead of hand-coding behaviors, you're basically training a neural network to understand the physical world and figure out how to manipulate it.
SPEAKER_00And the timing is interesting too, because we're seeing this convergence of language models getting better at reasoning, computer vision getting more sophisticated, and now potentially breakthrough progress in robotic control. It's like all the pieces are starting to come together.
SPEAKER_01Exactly. And think about what this means for industries like manufacturing, logistics, even home assistance. If you can build a robot that doesn't need to be programmed for specific tasks, that can just observe and learn, that changes everything about how we think about automation.
SPEAKER_00Though I imagine there are still huge challenges around safety and reliability. I mean, if a chatbot makes a mistake, you get a weird response. If a general purpose robot makes a mistake while handling physical objects.
SPEAKER_01Yeah, the stakes are completely different. But if Physical Intelligence has actually made progress on this problem, even incremental progress, that's worth paying attention to. Keep an eye on what they publish and whether other researchers can replicate their results.
SPEAKER_00What I'm really curious about is the name pi-0.7. Like that's a very specific version number for what they're claiming is a breakthrough. It suggests they think they're on a clear development path, not just stumbling onto something by accident.
SPEAKER_01Good point. And the fact that they're calling themselves a hot robotics startup suggests they've got investor attention and funding. That usually means they've demonstrated something compelling to people with money who know the space.
SPEAKER_00But let's be practical for a second. Even if this is real, how long before we see it in actual products? Because there's a big difference between a robot brain that can learn new tasks in a lab and one that can do it reliably in your home or workplace.
SPEAKER_01That's the million dollar question. The robotics industry has a history of promising the moon and delivering, well, Roombas, which are great, but they're not exactly general purpose robot brains. Still, if Physical Intelligence is making real progress on generalization, that could accelerate the whole timeline.
SPEAKER_00And think about the implications if this works. We've been talking about AI disrupting knowledge work, but if robots can actually learn and adapt, that extends to physical work too. Manufacturing, construction, maintenance, all of that could be on the table.
SPEAKER_01Right. And unlike software AI, which mostly augments human capabilities, truly general purpose robots could potentially replace human capabilities in physical tasks. That's a whole different kind of disruption.
SPEAKER_00Though let's not get ahead of ourselves. This is still early, and as you said, we need to see independent verification. But if Physical Intelligence is even halfway right about what they've built, it's going to be fascinating to watch how it develops.
SPEAKER_01Absolutely. And for anyone working in robotics or automation, this is definitely something to keep on your radar. Even if the claims are overstated, the underlying approach, using AI to enable robot learning, is clearly where the field is heading.
SPEAKER_00Let's talk about some funding news that caught my attention, because the numbers are pretty wild. Factory, which is a three-year-old AI coding startup focused on enterprises, just hit a $1.5 billion valuation after raising $150 million in a round led by Khosla Ventures.
SPEAKER_01Wait, $1.5 billion for a three-year-old company? Okay. I need to understand what Factory is actually doing that justifies that kind of valuation. Because that's not just we have a promising product money. That's we think this could be a category-defining company money.
unknownRight.
SPEAKER_00And the key thing is that they're focused specifically on enterprises. So while we've been talking about coding assistance for individual developers, Factory seems to be going after the much bigger market of how large companies build and deploy software.
SPEAKER_01That makes more sense from a valuation perspective. Enterprise software has always commanded higher multiples because the contracts are bigger, the switching costs are higher, and if you can truly make enterprise development more efficient, that's worth billions. But still, $1.5 billion is a big bet.
SPEAKER_00And Khosla Ventures leading the round is significant too. They're not known for throwing money around carelessly. They must see something in Factory's approach to enterprise AI coding that they think gives them a real competitive advantage.
SPEAKER_01You know what's interesting though? We're seeing this pattern where AI coding companies are getting these massive valuations, but the market is also getting really crowded. You've got GitHub Copilot, you've got Anthropic's Claude, you've got OpenAI's Codex, and now companies like Factory raising at billion dollar valuations. Someone's going to be wrong about how big this market really is.
SPEAKER_00That's a great point. And I wonder if part of what's driving these valuations is FOMO, fear of missing out. Like investors saw what happened with the companies that got in early on the LLM wave, and now they don't want to miss the next big thing in AI coding.
SPEAKER_01Possibly, but I also think there's a real recognition that AI coding tools could be transformative for how software gets built. If Factory has figured out something unique about enterprise deployment, about security, about integration with existing workflows that could justify the valuation.
SPEAKER_00Let me ask you this though. What does a three-year-old company in the AI coding space actually have that's worth $1.5 billion? Like are they claiming they've solved enterprise software development?
SPEAKER_01That's what I want to know. Three years in AI time is both forever and no time at all. They could have built something genuinely revolutionary, or they could be riding the hype wave with a better marketing story than their competitors.
SPEAKER_00The question is execution, right? Having a $1.5 billion valuation means you need to build a business that can support that kind of value. That's a lot of enterprise customers, a lot of revenue, and a lot of proving that your AI coding tools actually deliver ROI.
SPEAKER_01Absolutely. And in the enterprise market, you're competing not just on features, but on trust, on security, on compliance, on all these things that take time to build. A billion dollar valuation gets you runway, but it also creates enormous pressure to deliver results fast.
SPEAKER_00And here's something else to consider. If Factory is worth $1.5 billion after three years, what does that say about the valuations of more established players? Are we looking at a world where every decent AI coding company is worth billions?
SPEAKER_01That's the scary question for investors, right? Oh, if everyone in the space is worth billions, then either the market is absolutely massive, or we're in a bubble, and bubbles have a way of popping when reality sets in.
SPEAKER_00Though to be fair, enterprise software markets can be genuinely enormous. If AI coding tools can make enterprise development significantly faster or cheaper, that could justify massive valuations across multiple companies.
SPEAKER_01True, but there's also the question of market saturation. How many enterprise AI coding companies can succeed simultaneously? At some point, customers are going to pick winners, and some of these billion-dollar startups are going to find out they bet wrong.
SPEAKER_00The other thing that strikes me is the pressure this puts on factory to justify that valuation. They can't just be another coding assistant now. They need to be building something that genuinely transforms how enterprises think about software development.
SPEAKER_01Exactly. At $1.5 billion, you're not just selling tools, you're selling transformation. And enterprise customers are going to hold them to that standard. It'll be fascinating to see if they can deliver on that promise.
SPEAKER_00Let's shift gears and talk about something that I think signals a really important trend. According to reports, the UK government has launched a $675 million sovereign AI fund specifically to invest in domestic AI startups and reduce the country's dependence on foreign technology.
SPEAKER_01This is huge. And honestly, it's something we should have seen coming. Every major government is looking at AI and realizing that this isn't just about cool technology. This is about economic competitiveness, national security, technological sovereignty. Six hundred seventy-five million dollars is the UK saying we need our own AI champions.
unknownRight.
SPEAKER_00And the phrase sovereign AI fund is really telling. It's not just about supporting startups or innovation. It's about ensuring that the UK has domestic capabilities in what they clearly see as a critical technology. But $675 million, while significant, isn't exactly at the scale of what we're seeing from private investors. I mean, we just talked about Factory raising $150 million at a $1.5 billion valuation. Can government funding really compete with private capital when it comes to building AI champions?
SPEAKER_01That's the million-dollar question, or I guess the $675 million question. Yeah, government funding has advantages. It's patient capital, it can support longer-term research, it can focus on areas that might not be immediately profitable but are strategically important. But it also has disadvantages. It's slower, it's more bureaucratic, and it's not always great at picking winners.
SPEAKER_00And there's the talent question too. If you're a top AI researcher or engineer, are you going to join a government-funded startup, or are you going to go work for OpenAI or Anthropic or Google where the compensation packages are astronomical?
SPEAKER_01True. But I think the UK is betting on a different kind of value proposition. Maybe it's about working on problems that matter for the UK specifically. Maybe it's about having more control over your research direction. Maybe it's about building something that serves the public interest rather than just maximizing profit.
SPEAKER_00The other thing that's interesting is what this signals about the global AI landscape. If the UK is launching a sovereign AI fund, you can bet other countries are thinking about doing the same thing. We might be heading toward a world where AI development becomes much more nationalistic.
SPEAKER_01Yeah, and that has implications for collaboration, for open research, for the flow of talent and ideas. The AI revolution started out feeling very global and collaborative, but increasingly it's looking like it's going to be shaped by geopolitical competition.
SPEAKER_00But let's think about this practically. What does the UK actually need to build to reduce dependence on foreign AI technology? Are we talking about competing with ChatGPT and Claude, or are we talking about more specialized applications?
SPEAKER_01I think it's probably both, but the specialized applications might be where they have the best shot at success. Building a general-purpose LLM that competes with GPT-4 is incredibly expensive and difficult. But building AI systems for specific UK needs, healthcare, defense, finance, that's more achievable.
SPEAKER_00And there's the regulatory angle too. If the UK can build domestic AI capabilities, they have more control over how those systems are governed and regulated. They don't have to worry about American companies making decisions that affect UK users.
SPEAKER_01Exactly. And think about data sovereignty, too. If UK organizations are using AI systems built in the UK with data processed in the UK, that gives the government much more control over privacy and security than if they're relying on foreign systems.
SPEAKER_00The question is whether six hundred and seventy-five million is enough to make a meaningful difference. That sounds like a lot of money, but when individual AI companies are raising hundreds of millions in single rounds, it might not go as far as you'd think.
SPEAKER_01That's true. But government funding often acts as a catalyst. If the UK can use that six hundred seventy-five million dollars to prove that there's a viable domestic AI ecosystem, it might attract private investment too. It's about signaling intent as much as providing capital.
SPEAKER_00And there's the long-term strategic thinking too. Even if UK AI companies can't compete directly with OpenAI or Google today, building domestic capabilities now means they'll be better positioned as the technology evolves.
SPEAKER_01Right. And who knows, maybe some of the most important AI applications haven't been invented yet, and UK companies will have a shot at leading in those areas. It's about making sure they're not locked out of the game entirely.
SPEAKER_00Next up, early reports suggest that Upscale AI, which is an AI infrastructure company, is in talks to raise funding at a $2 billion valuation. Here's the kicker. They've only been operating for seven months.
SPEAKER_01Seven months and a two billion dollar valuation? That's not a company. That's a lottery ticket. I mean, what infrastructure could they possibly have built in seven months that's worth two billion dollars? This feels like peak AI bubble territory to me.
SPEAKER_00And apparently this would be their third funding round already. So they're raising money every few months at escalating valuations. Either they're onto something revolutionary, or we're seeing some serious irrational exuberance in AI infrastructure investing.
SPEAKER_01But seven months is barely enough time to figure out what you're building, let alone prove that it's worth billions.
SPEAKER_00What really gets me is that this is their third round in seven months. That suggests they're either burning through money incredibly fast, or they're just raising because they can. Neither scenario screams sustainable business model to me.
SPEAKER_01And think about what happens to the employees and early investors when you're raising at these valuations so quickly. The expectations become astronomical. There's no room for the normal ups and downs of building a company.
SPEAKER_00Right, at $2 billion, you're not just promising to build good infrastructure, you're promising to fundamentally transform how AI systems work. That's a pretty big promise for a seven-month-old company.
SPEAKER_01The cynic in me wonders if this is just smart founders taking advantage of a hot market. Get the money while it's available, worry about justifying the valuation later. But that's a dangerous game if the music stops.
SPEAKER_00Speaking of international moves, Anthropic is making a big expansion into London. They've leased new office space that could accommodate four times their current 200-person headcount there. And the timing is interesting. This comes amid rising tensions with the US government.
SPEAKER_01That's not a coincidence. When AI companies start expanding internationally while having tensions with US regulators, that's hedging their bets. London has been positioning itself as a more AI-friendly regulatory environment, and Anthropic is clearly taking notice.
SPEAKER_00It also fits with that broader theme we were talking about. Countries trying to attract AI talent and companies. The UK launches a sovereign AI fund, and suddenly Anthropic wants to quadruple their London presence.
SPEAKER_01Exactly. And for Anthropic, having a major presence outside the US gives them options. If regulatory pressure increases here, they've got a substantial operation elsewhere. Smart strategic move.
SPEAKER_00What's interesting is that they're planning for 800 people in London. That's not just a satellite office, that's a major operational center. They're clearly betting big on their international expansion.
SPEAKER_01And it sends a signal to other AI companies too. If Anthropic is making this kind of commitment to London, other companies might start thinking about their own international strategies. Brain drain from the US could become a real issue.
SPEAKER_00Plus, London gives them access to European talent and markets. If regulatory frameworks in Europe end up being more favorable for AI deployment, having a big London operation could be a huge competitive advantage.
SPEAKER_01And with the UK launching that sovereign AI fund, Anthropic might be positioning themselves to benefit from UK government support while maintaining their independence as a private company.
SPEAKER_00Google has updated its AI mode on Chrome Desktop, so you can now view web pages side by side with the AI assistant while browsing. It's a small feature, but it points to something bigger about how we're going to interact with the web.
SPEAKER_01Yeah, this is about making AI assistants seamless and contextual. Instead of switching between tabs or apps, you've got your AI right there, helping you understand and work with whatever you're looking at. It's like having a research assistant built into your browser.
SPEAKER_00And it's Google's answer to all the speculation about AI-powered browsers and search engines disrupting traditional web browsing. They're basically saying, we'll evolve our browser to make AI a native part of the experience.
SPEAKER_01Smart move. Instead of fighting the trend toward AI-mediated web browsing, they're embracing it and making sure they control the experience.
SPEAKER_00What I find interesting is that this is Google acknowledging that people want AI help while they browse, not just when they search. It's about providing context and analysis for the content you're already looking at.
SPEAKER_01Right. And that could fundamentally change how we consume information online. Instead of just reading articles or watching videos, you've got an AI that can explain, summarize, fact-check, or provide additional context in real time.
SPEAKER_00Though it does raise questions about how this affects website traffic and revenue. If the AI can summarize everything for you, do you still need to actually visit the websites that created the content?
SPEAKER_01That's the million-dollar question for content creators and publishers. Google's walking a fine line between providing value to users and potentially undermining the ecosystem that creates the content their AI depends on.
SPEAKER_00And finally, something completely different. Luma has launched an AI-powered production studio in partnership with something called the Wonder Project, which focuses on faith-based content. Their first project is apparently a film about Moses starring Ben Kingsley, coming to Prime Video this spring.
SPEAKER_01This is fascinating because it shows AI moving into creative industries in unexpected ways. Faith-based content is a specific niche with specific values and requirements. And if AI can help produce high-quality content for that market, that's a real business opportunity.
SPEAKER_00And Ben Kingsley is serious talent. The fact that Academy Award-winning actors are willing to work with AI-powered production studios suggests that the technology has reached a level of sophistication that traditional Hollywood is taking seriously.
SPEAKER_01Absolutely. This could be a preview of how AI transforms entertainment production, not by replacing human creativity, but by enabling new kinds of collaborations and making high-quality production more accessible to niche markets.
SPEAKER_00What's really interesting is that they're starting with faith-based content. That's a market that's often underserved by mainstream entertainment, but has a dedicated audience. AI might be helping to democratize content production for underrepresented communities.
SPEAKER_01Right. And faith-based content often has different production values and storytelling approaches than mainstream entertainment. If Luma can use AI to serve those specific needs, that's a much more targeted and defensible business model than trying to compete with Disney.
SPEAKER_00Plus the partnership aspect is smart. Instead of trying to be everything to everyone, they're focusing on a specific vertical with a clear partner who understands that market. That seems like a more sustainable approach than the AI will revolutionize everything strategy.
SPEAKER_01And it'll be interesting to see how audiences respond. Faith-based viewers might be more accepting of AI assistance in production if it helps tell stories that matter to them, even if mainstream audiences are skeptical of AI generated content.
SPEAKER_00Alright, Sam, if you zoom out and look at everything we covered today, what's the pattern that emerges? Because we've got desktop control, robot brains, billion-dollar valuations, government funding, international expansion. It feels like a lot of different things happening at once.
SPEAKER_01The pattern I see is fragmentation and acceleration. The AI industry is splitting into different tracks. Some companies are going for maximum capability and power. Others are focusing on safety and control. Governments are trying to build domestic champions, and everyone is raising crazy amounts of money at valuations that may or may not make sense.
SPEAKER_00Right. And it feels like we're moving from the wow, AI is cool phase to the okay, now we have to figure out who controls it and how it gets deployed phase. These aren't just technology decisions anymore, they're business strategy decisions, geopolitical decisions, social decisions.
SPEAKER_01Exactly. And I think the next year is going to be crucial for determining which approaches win. Do users want AI agents with desktop control? Or do they want more limited but safer AI assistance? Do sovereign AI funds actually produce competitive companies? Or does private capital continue to dominate? Do these billion-dollar valuations turn into sustainable businesses?
SPEAKER_00And there's this interesting tension between consolidation and fragmentation. On one hand, you've got massive companies like OpenAI and Anthropic trying to build general purpose AI systems. On the other hand, you've got specialized companies like Factory focusing on enterprise coding, or Luma focusing on faith-based entertainment.
SPEAKER_01No, that's a great point. And I think that tension is going to define the next phase of AI development. Are we heading toward a world with a few dominant AI platforms, or a world with thousands of specialized AI tools for specific use cases?
SPEAKER_00What's also striking is the international dimension. The UK launching a sovereign AI fund, Anthropic expanding in London, it's clear that AI is becoming a geopolitical competition, not just a technological one.
SPEAKER_01Right, and that has huge implications for how AI develops. If countries are competing to build their own AI capabilities, that could lead to less collaboration, more secrecy, different approaches to safety and regulation.
SPEAKER_00And then there's the money question. We talked about companies raising at multi-billion dollar valuations after months of operation. That level of speculation suggests that either AI is going to be much bigger than we think, or we're in for a serious correction.
SPEAKER_01I think it's probably both. AI will be transformative in ways we're just starting to understand. But not every company raising money today is going to be part of that transformation. There's going to be a lot of creative destruction in this space.
SPEAKER_00The other thing that strikes me is how quickly the stakes are escalating. We started with AI that could write better emails, and now we're talking about AI that can control your computer and robots that can learn tasks they've never seen before.
SPEAKER_01Yeah, and the pace of change means that companies, governments, and individuals are having to make decisions about AI without fully understanding the long-term implications. We're essentially flying the plane while we're building it.
SPEAKER_00Which brings us back to that fragmentation point. Different stakeholders are making different bets about what AI should look like, how it should be controlled, who should benefit from it. And those bets are happening in real time with real money and real consequences.
SPEAKER_01Those are the questions worth watching. Because right now, everyone is placing bets on what the future of AI looks like, but we won't know who is right until these products actually hit the market and users vote with their wallets and their trust.
SPEAKER_00And the thing is, unlike previous technology cycles, the decisions being made today about AI are going to affect not just the tech industry, but pretty much every aspect of society. That's why this stuff matters so much.
SPEAKER_01Exactly. We're not just talking about the next iPhone or social media platform. We're talking about technologies that could reshape work, governance, creativity, even our sense of what it means to be human. The stakes couldn't be higher.
SPEAKER_00That's a wrap on today's episode. As always, if you found this useful, hit subscribe. It really helps us reach more people who are trying to keep up with this crazy fast-moving industry.
SPEAKER_01And if you've got thoughts on any of these stories, especially the OpenAI desktop control thing or these wild startup valuations, we'd love to hear from you. You know where to find us.
SPEAKER_00We'll be back tomorrow with more AI news, more analysis, and probably more questions than answers. I'm Alex Shannon.
SPEAKER_01And I'm Sam Hinton. Thanks for listening to Build by AI, and we'll see you tomorrow.