Build by AI

The Great AI Model Showdown: Microsoft vs Google vs Everyone | 3rd April

Microsoft just dropped three new foundational AI models while Google fired back with Gemma 4, claiming it's the most capable open model byte for byte. But here's the kicker - Google is also powering their AI datacenters with gas plants, completely abandoning their climate goals. Meanwhile, OpenAI just bought a podcast (yes, really) and Cursor is taking aim at Claude and Codex with their new AI agent. It's a wild day in AI and we're breaking down what it all means for you.
SPEAKER_00

Okay, so Google just announced what they're calling the most capable open AI models, byte for byte. But they're also literally burning gas to power their AI data centers. Like they've completely abandoned their climate commitments for AI.

SPEAKER_01

Wait, hold on. They're going from don't be evil to let's burn fossil fuels for our chatbots? That's insane.

SPEAKER_00

Right. And that's just one story today. Microsoft also just dropped three new foundational models out of nowhere. OpenAI bought a podcast, and we've got a full-blown model war happening.

SPEAKER_01

Dude, the amount of money and energy being thrown at this stuff right now is absolutely wild. And I'm not sure everyone realizes what's actually happening here.

SPEAKER_00

You're listening to Build by AI, I'm Alex Shannon. And yeah, we're diving straight into what might be the craziest AI news day we've had in weeks.

SPEAKER_01

And I'm Sam Hinton. Look, when you have Microsoft, Google, and half of Silicon Valley making major moves on the same day, you know something big is shifting. Plus, we've got some really unexpected stories that show just how weird the space is getting.

SPEAKER_00

Alright, let's break it all down. Starting with Microsoft's surprise announcement. So early reports suggest Microsoft just released three new foundational AI models. And if confirmed, this is a pretty significant move. We're talking about models that can transcribe voice to text, generate audio, and create images. According to TechCrunch, these were developed by a team that was formed just six months ago.

SPEAKER_01

Yeah, that timeline is what gets me. Six months from formation to release, that's either incredibly efficient or they're really feeling the pressure from OpenAI and Google. This feels like Microsoft saying we're not just going to rely on our OpenAI partnership anymore.

SPEAKER_00

That's interesting because Microsoft has been pretty much riding the OpenAI wave since their big investment. Why do you think they're suddenly going it alone on foundational models?

SPEAKER_01

I think it's strategic diversification, honestly. Look, putting all your eggs in the OpenAI basket was smart when they were the clear leader, but now you've got Google with Gemini, Anthropic with Claude, and a dozen other players. Microsoft needs their own models, so they're not beholden to anyone else's roadmap or pricing.

SPEAKER_00

But wait, isn't this kind of duplicating effort? I mean, they're already paying billions to OpenAI for similar capabilities.

SPEAKER_01

Okay, but here's the thing. Having your own models means you control the entire stack. You can optimize for your specific use cases. You don't have to negotiate with a partner every time you want to make changes. And honestly, you probably save money at scale. Plus, if something happens to the OpenAI relationship...

SPEAKER_00

Right. So this is insurance as much as it is innovation. And the fact that they're covering voice, audio, and images, that's pretty comprehensive. For developers and businesses, what does this mean practically?

SPEAKER_01

If these models are competitive, and that's a big if since we only have early reports, it could mean more choice and potentially lower costs. Competition is good for everyone except the incumbents. But I'm curious about the quality. Rushing three models to market in six months makes me wonder if they're trying to match features rather than pushing boundaries.

SPEAKER_00

That's a fair point. Keep an eye on the benchmarks when they come out. This could either be Microsoft making a real play for AI independence, or it could be a hasty response to competitive pressure. We'll know pretty quickly which one it is.

SPEAKER_01

What's also interesting is the timing. We're seeing this massive acceleration in model releases across the board. Microsoft, Google, everyone's pushing stuff out faster than ever. It makes you wonder if there's some deadline or competitive milestone they're all racing toward.

SPEAKER_00

That's a really good point. Maybe it's regulatory pressure, maybe it's investor expectations, or maybe they all know something we don't about what's coming next in AI. The breakneck pace is starting to feel unsustainable.

SPEAKER_01

And for businesses trying to build on these platforms, the rapid fire releases are both exciting and terrifying. Like, great, more options, but also, how do you plan a product roadmap when foundational tools are changing every few months?

SPEAKER_00

Exactly. If you're a startup building on Azure's AI services, do you bet on OpenAI integration or pivot to Microsoft's native models? These decisions have huge implications for your architecture and costs.

SPEAKER_01

I think the smart play is probably to build abstraction layers so you can switch between models more easily. But that adds complexity and development time that a lot of companies can't afford. It's like we're in this weird transitional period where everyone's trying to future-proof against an unknowable future.
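To make that abstraction-layer idea concrete, here's a minimal Python sketch of the pattern: application code depends only on a small interface, and each provider gets a thin adapter behind it. All class and method names below are hypothetical illustrations, not any vendor's real SDK.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every provider adapter must satisfy."""

    def complete(self, prompt: str) -> str: ...


class AzureNativeModel:
    """Hypothetical adapter for a Microsoft-hosted model."""

    def complete(self, prompt: str) -> str:
        # A real adapter would call the provider's SDK here.
        return f"[azure] {prompt}"


class OpenAIModel:
    """Hypothetical adapter for an OpenAI-hosted model."""

    def complete(self, prompt: str) -> str:
        # A real adapter would call the provider's SDK here.
        return f"[openai] {prompt}"


def answer(model: ChatModel, prompt: str) -> str:
    # Application code sees only the ChatModel interface, so
    # swapping providers is a one-line change at the call site.
    return model.complete(prompt)


print(answer(AzureNativeModel(), "hello"))
print(answer(OpenAIModel(), "hello"))
```

The trade-off the speakers mention is visible even in this sketch: every provider quirk (auth, rate limits, streaming, tool calls) has to be normalized inside the adapters, which is where the extra complexity and development time go.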

SPEAKER_00

And meanwhile, Microsoft is just quietly building their own models while everyone else is debating strategy. If these models are actually good, they could have a massive advantage by controlling both the cloud infrastructure and the AI models running on it.

SPEAKER_01

That vertical integration play is classic Microsoft. They've done it with Office, with Windows, with Azure, why not with AI? The question is whether they can execute on the technical side as well as they have on the business side.

SPEAKER_00

Speaking of competitive pressure, Google just fired back with Gemma 4, and they're making a bold claim here. They're calling these byte for byte, the most capable open models, which is a very specific way to phrase that. This isn't just another model release. This is Google throwing down the gauntlet in the open source space.

SPEAKER_01

Yeah, you know, that byte for byte qualifier is doing a lot of heavy lifting there. It's like saying pound for pound, the best fighter. You're acknowledging there are bigger models out there, but claiming you're the most efficient. And honestly, efficiency might matter more than raw size right now.

SPEAKER_00

Why is efficiency suddenly so important? I mean, we've been in this bigger is better phase for years with these models.

SPEAKER_01

Because compute costs are insane and getting worse. Everyone's realizing that having a massive model that costs a fortune to run isn't sustainable. Plus, smaller, efficient models can actually run locally, which opens up completely different use cases. If Gemma 4 really delivers flagship performance in a smaller package, that's huge for developers who can't afford enterprise level API costs.

SPEAKER_00

Okay, but Google has been pretty inconsistent with their open source strategy. Remember how they handled the original Gemma release? Are we sure they're actually committed to keeping this open?

SPEAKER_01

That's the million-dollar question, isn't it? Google has this pattern of releasing something open, getting everyone excited, and then either neglecting it or walking back the openness. But here's the thing: they're under so much pressure from Meta's Llama models and all these other open alternatives that they kind of have to play this game now.

SPEAKER_00

So you think this is more about competitive necessity than genuine commitment to open AI?

SPEAKER_01

I think it's both, honestly. Google needs open models to stay relevant in the developer ecosystem, but they also genuinely benefit from the research and improvements that come from open development. The question is whether they'll resist the temptation to close things up if these models become too successful.

SPEAKER_00

For people actually building with AI right now, the practical takeaway is probably to test Gemma 4, but not bet your entire infrastructure on it until Google proves they're serious about long-term support. But if the efficiency claims are real, this could be a game changer for smaller companies and indie developers. That's a huge shift. We've been in this centralized AI era where everything runs in the cloud, but efficient models could bring us back to edge computing. Imagine having flagship AI capabilities running on your laptop or phone without needing an internet connection.

SPEAKER_01

And that has massive implications for Google's business model, right? They make money from cloud AI services, but if everyone can run models locally, what happens to that revenue stream? It's almost like they're commoditizing their own product.

SPEAKER_00

Unless they're thinking bigger picture. Maybe local AI capabilities drive more search queries, more Android usage, more integration with Google services. They could lose the direct AI revenue but gain in other areas.

SPEAKER_01

That's actually pretty smart if that's the strategy. Give away the models to lock people into your ecosystem. But it's risky because once something is truly open source, you lose control over how it gets used and integrated.

SPEAKER_00

And we've seen what happens when Google releases something and then loses interest. Remember Google Reader, Google Plus, Stadia? There's a graveyard of Google products that started with big promises.

SPEAKER_01

True. But AI feels different. This isn't a side project or experimental product. This is core to Google's future competitiveness. They can't afford to treat Gemma 4 like another Google Labs experiment.

SPEAKER_00

I hope you're right, because if Gemma 4 delivers on the efficiency promise, it could democratize AI in a really meaningful way. Small businesses, researchers, indie developers, suddenly everyone has access to powerful AI without needing venture capital or massive infrastructure.

SPEAKER_01

That democratization aspect is huge, and it's probably why we're seeing this push for efficient models across the industry. It's not just Google. Everyone realizes that the future might be more distributed than centralized.

SPEAKER_00

Alright, let's talk about something that hits closer to home for a lot of our listeners. Coding AI. Cursor just launched a new AI agent experience, and they're going directly after OpenAI's Codex and Anthropic's Claude Code. This is interesting because Cursor has been more of a niche player, but now they're making a serious play for the mainstream coding market.

SPEAKER_01

Dude, I've been watching Cursor for a while, and this makes total sense. They've been quietly building this really polished coding experience while everyone else was focused on general purpose chatbots. Now they're basically saying, we're done being the underdog, we're coming for the big guys.

SPEAKER_00

What's different about their approach? Because on the surface, AI coding assistants all seem pretty similar. You type comments, they generate code, they help with debugging.

SPEAKER_01

That's where the agent experience part comes in. From what I'm seeing, this isn't just autocomplete or code generation. It's more like having an AI pair programmer that can understand your entire project context, suggest architectural changes, and actually reason about your code at a higher level. It's the difference between a smart text expander and an actual coding partner.

SPEAKER_00

Okay, but wait, OpenAI and Anthropic have massive resources and data advantages. How does a smaller company like Cursor realistically compete with that?

SPEAKER_01

This is actually a perfect example of why focus matters more than size sometimes. OpenAI has to make Codex work for everyone: web developers, mobile developers, data scientists, embedded systems, you name it. Cursor can optimize for specific workflows and actually iterate based on real developer feedback. Sometimes the scrappy, focused team beats the giant corporation.

SPEAKER_00

That's a fair point, but the coding AI space is getting incredibly crowded. You've got GitHub Copilot, Amazon CodeWhisperer, Tabnine, Replit. Why do we need another one?

SPEAKER_01

Yeah, because none of them are perfect yet. Copilot is good, but sometimes feels disconnected from your project. Code Whisperer is fine, but feels very Amazon-y. Each one has different strengths and weaknesses, and honestly, competition is making all of them better. Plus, different developers have different preferences and workflows.

SPEAKER_00

For developers who are already using one of the established tools, what would make them switch to Cursor? Because switching coding tools is a pretty high-friction decision.

SPEAKER_01

It would have to be significantly better, not just marginally better. If Cursor can deliver on this agent experience promise, like actually understanding your code base and helping with complex refactoring or architectural decisions, that could be worth switching for. But they're going to have to prove it, because developer trust is hard to earn and easy to lose.

SPEAKER_00

What's interesting is the timing of this launch. Cursor is going head to head with Anthropic and OpenAI, right when those companies are fighting battles on multiple fronts. Maybe they see an opportunity while the big players are distracted.

SPEAKER_01

That's actually really smart positioning. While OpenAI is dealing with governance drama and Anthropic is trying to scale up, Cursor can focus entirely on making developers happy. And developers are a pretty vocal community. If Cursor delivers a better experience, word will spread fast.

SPEAKER_00

But there's also the risk that one of these bigger companies just copies whatever Cursor does well and integrates it into their existing products. We've seen that playbook before.

SPEAKER_01

True. But by the time they copy it, Cursor will hopefully be on to the next innovation. And that's the advantage of being smaller and more nimble. Plus, developers care about the whole experience, the interface, the integration, the reliability. It's not just about the underlying AI model.

SPEAKER_00

Speaking of the whole experience, what does this agent approach actually look like in practice? Are we talking about an AI that can write entire applications, or is this more incremental?

SPEAKER_01

Based on what we're seeing from other AI agent approaches, I'd guess it's somewhere in the middle. Probably not writing full applications from scratch, but maybe handling entire features or significant refactors. The key is whether it can maintain context across a large code base and make intelligent decisions about architecture.

SPEAKER_00

That context piece is huge. One of the biggest frustrations with current coding AI is that it doesn't really understand your project structure or coding conventions. If Cursor can crack that, they could have a real differentiator.

SPEAKER_01

And honestly, that's an area where being focused on developers pays off. Cursor doesn't need to understand legal documents or marketing copy. They just need to be the best at understanding code. That specialization could be their secret weapon.

SPEAKER_00

For our listeners who are developers, I'd say this is definitely worth trying. Especially if you're not locked into one of the big ecosystems. The worst case is you spend a few hours testing it out. The best case is you find a tool that genuinely makes you more productive.

SPEAKER_01

Absolutely. And even if Cursor doesn't become your primary tool, competition like this pushes everyone to innovate faster. Your GitHub Copilot experience is probably going to get better because companies like Cursor are keeping the pressure on.

SPEAKER_00

Okay, we need to talk about this Google story, because it's honestly pretty shocking. Google is planning to use a gas plant to power an AI data center, which represents what the Guardian is calling a sharp turn from their climate goals. This is the same company that's been positioning itself as a climate leader for years.

SPEAKER_01

Yeah, this is where the rubber meets the road on AI's environmental impact. All these companies have been making these grand climate commitments, but when push comes to shove and they need massive amounts of power for AI training and inference, suddenly those commitments become flexible.

SPEAKER_00

But this seems like more than just flexibility. This is a complete reversal, right? Google has been carbon neutral. They've been buying renewable energy. They've made climate action a core part of their brand. How do they justify this?

SPEAKER_01

I think they're betting that people care more about AI capabilities than climate consistency. And unfortunately, they might be right. When your competitors are building massive data centers and you're trying to compete in the AI race, waiting for renewable energy sources might feel like a luxury you can't afford.

SPEAKER_00

Okay, but that logic is pretty terrifying when you think about the scale we're talking about. AI data centers use enormous amounts of power, and if every tech company decides climate goals are optional when it comes to AI...

SPEAKER_01

Right. We're talking about potentially canceling out decades of progress on renewable energy. And here's what's really frustrating. It's not like renewable energy doesn't exist. This feels more like they don't want to wait for renewable capacity to come online, or they don't want to pay the premium for clean energy when gas is cheaper and faster to deploy.

SPEAKER_00

So this is basically Google saying AI dominance is more important than our climate commitments. What does that say about the priorities of these tech companies?

SPEAKER_01

It says that despite all the ESG reports and sustainability marketing, when there's a competitive threat, environmental concerns go out the window. And that's really concerning. Because if Google, which has been one of the better actors on climate, is making this choice, what are other companies doing that we don't know about?

SPEAKER_00

This feels like a moment where we need to start asking harder questions about the true cost of the AI race. Because if the price of having slightly better chatbots is abandoning our climate goals, maybe we need to slow down and think about whether that trade-off is worth it.

SPEAKER_01

Exactly. And consumers and businesses need to start factoring this stuff into their decisions about which AI services to use, because ultimately we're all complicit in this if we're demanding AI capabilities without caring about how they're powered.

SPEAKER_00

What's really wild is the timing. Google is making this climate U-turn on the same day they're announcing Gemma 4 as this efficient democratizing technology. It's like they're saying, here's AI for everyone, powered by fossil fuels. The cognitive dissonance is incredible.

SPEAKER_01

That's such a good point. They're literally promoting efficiency in their AI models while choosing the most inefficient, environmentally damaging way to power them. It's like they compartmentalize these decisions completely.

SPEAKER_00

And you know what's going to happen next, right? Every other tech company is going to point to Google and say, well, if they're using gas plants, we can too. This could trigger a race to the bottom on climate commitments across the entire industry.

SPEAKER_01

That's the most depressing part. You know, Google isn't just making a decision for themselves. They're potentially giving everyone else permission to abandon their climate goals too. And once that happens, it's really hard to put the genie back in the bottle.

SPEAKER_00

The energy demand numbers for AI are just staggering. We're talking about data centers that use as much power as small cities, and that demand is growing exponentially. Even if renewable energy is scaling up, it's not scaling fast enough to meet this AI boom.

SPEAKER_01

Which raises the question: should we be slowing down AI development until clean energy can catch up? I know that's heretical in Silicon Valley, but maybe some problems are worth solving more slowly if it means not destroying the planet.

SPEAKER_00

But you know the response to that would be China won't slow down, so we can't either. The geopolitical AI competition becomes the excuse for abandoning every other priority. It's this zero-sum thinking that everything else is disposable if it helps win the AI race.

SPEAKER_01

And meanwhile, the actual benefits of this AI arms race are questionable. Like, are we getting proportional value from all this energy consumption? A lot of these models are being used for pretty trivial applications.

SPEAKER_00

That's what kills me. We're potentially sacrificing climate stability so people can have better chatbots and code completion. The cost-benefit analysis is completely out of whack when you look at it, honestly.

SPEAKER_01

I think this Google decision is going to be a watershed moment. Either there's going to be massive backlash that forces them to reverse course, or we're going to look back on this as the moment the tech industry officially gave up on climate responsibility for the sake of AI dominance.

SPEAKER_00

Alright, rapid fire time. First up, OpenAI just acquired TBPN, which is apparently Silicon Valley's cult favorite tech podcast. The show will keep operating independently with Chris Lehane as chief political operative.

SPEAKER_01

Wait, OpenAI is buying podcasts now? That's actually kind of brilliant for narrative control. If you own the media that covers your industry, you can shape the conversation. Very meta, very Silicon Valley.

SPEAKER_00

Yeah, and Chris Lehane isn't just any political operative. This is serious influence operations. OpenAI is clearly thinking about perception management as much as product development.

SPEAKER_01

It makes sense when you think about all the regulatory pressure they're facing. Having a media property that can frame AI development in a positive light, you know, that's probably worth whatever they paid for it.

SPEAKER_00

But it also raises questions about media independence, right? If people are listening to what they think is independent tech commentary, but it's actually owned by one of the companies being covered.

SPEAKER_01

That's the concerning part. At least they're being transparent about the acquisition. But how many listeners are going to pay attention to that detail? Most people probably won't even realize the show is now owned by OpenAI.

SPEAKER_00

And this sets a precedent. If OpenAI buying podcasts works for them, how long before Google starts acquiring tech YouTubers or Microsoft buys up newsletter writers? The lines between media and marketing could get very blurry.

SPEAKER_01

On the flip side, maybe this gives the podcast more resources and reach. If they maintain editorial independence and just get better funding and production quality, that could be a win for listeners. But that's a big if.

SPEAKER_00

Next, early reports suggest Qwen 3.6 Plus is being positioned as a step toward real-world AI agents. Not much detail yet, but the focus seems to be on practical agent applications.

SPEAKER_01

Okay, everyone's talking about agents now, but most of them are still pretty limited. If Qwen can actually deliver agents that work in real-world scenarios, not just controlled demos, that could be significant.

SPEAKER_00

Right. But we've heard real-world agents promises before. The gap between demo and deployment is still pretty massive for most of these systems.

SPEAKER_01

True. But the fact that smaller players like Qwen are focusing on agents suggests the market is moving beyond just chat interfaces. Even if this specific release doesn't deliver, the direction is interesting.

SPEAKER_00

What's intriguing is that Qwen is positioning this as "toward real-world agents," not claiming that they've solved it. That's more honest than some of the grandiose claims we see from bigger companies.

SPEAKER_01

Yeah, I appreciate that honesty. And Qwen has been pretty solid on their previous releases. They're not trying to overhype, they're just steadily building better models and being realistic about their capabilities.

SPEAKER_00

The real test will be whether these real-world agents can handle the messiness and unpredictability of actual business processes. Most current agents break down pretty quickly when they encounter edge cases.

SPEAKER_01

Exactly. Real-world deployment means dealing with inconsistent APIs, weird data formats, unexpected user behavior, system failures, all the stuff that doesn't exist in carefully crafted demos. If Qwen 3.6 Plus can handle even some of that robustly, it's progress. Plus, if you're building AI for therapy, education, or customer service, you need to know whether your model actually understands emotions or if it's just pattern matching emotional language.

SPEAKER_00

What I find interesting is that Anthropic is investing in this kind of fundamental research while everyone else is racing to ship more models. This feels like the kind of work that pays off in the long term.

SPEAKER_01

They've always been more focused on safety and interpretability than just raw performance. This research probably informs how they design and train their models, which could give them advantages in sensitive applications.

SPEAKER_00

And emotional intelligence is becoming a real differentiator for AI assistants. Models that can navigate emotional conversations appropriately are going to be way more useful than ones that are just good at factual QA.

SPEAKER_01

Plus, understanding how emotion concepts work in LLMs could help with alignment and safety issues. If we know how models represent and reason about emotions, we can probably design better safeguards against manipulation or harmful outputs.

SPEAKER_00

If you zoom out and look at everything we covered today, there's this really interesting tension emerging. You've got this massive push for AI capabilities, Microsoft rushing out three models, Google claiming efficiency breakthroughs, Cursor going after the big players.

SPEAKER_01

But then you also have this darker side: Google abandoning climate commitments, OpenAI buying media properties for influence, everyone burning through resources at an unsustainable pace. It's like we're witnessing both the peak of AI innovation and the beginning of some serious consequences.

SPEAKER_00

Right. And I keep coming back to that Google gas plant story. Because if we're willing to sacrifice our climate goals for better AI models, what else are we willing to sacrifice? And who's making those decisions?

SPEAKER_01

I think we're at this inflection point where the AI race is starting to reveal the true priorities of these companies. All the corporate social responsibility stuff was fine when it didn't conflict with competitive advantage. But now that AI dominance is on the line, we're seeing what really matters to them.

SPEAKER_00

The question is whether there's any way to slow this down or make it more sustainable. Because the current pace feels unsustainable in multiple ways, environmentally, economically, maybe even socially.

SPEAKER_01

I don't think it slows down voluntarily. The competitive pressure is too intense, but maybe we'll see more regulation, or maybe the costs will become so high that the companies are forced to be more strategic about where they compete. Either way, something's gotta give.

SPEAKER_00

What's also striking is how fragmented everything is getting. Microsoft building their own models instead of relying on OpenAI, Google pushing open source while also going proprietary, smaller players like Cursor trying to carve out niches. The AI landscape is becoming incredibly complex.

SPEAKER_01

That fragmentation might actually be healthy in the long run. When you have a bunch of companies all pursuing different strategies, open source, closed source, efficient models, massive models, specialized tools, innovation happens faster, and no one player can control the entire market.

SPEAKER_00

But it also makes it really hard for businesses and developers to make strategic decisions. Like if you're building a product today, do you bet on Microsoft's new models, Google's efficient approach, OpenAI's continued dominance, or one of the smaller specialized players?

SPEAKER_01

That uncertainty is probably intentional though. These companies benefit from developers being locked into their ecosystems, so they're not incentivized to make cross-platform compatibility easy. Everyone wants to be the platform that everyone else builds on.

SPEAKER_00

And meanwhile, the actual societal questions about AI are getting lost in all this corporate maneuvering. We're debating model efficiency and API pricing while barely talking about job displacement, privacy, concentration of power, environmental impact, the stuff that actually matters for most people.

SPEAKER_01

That's because the companies driving this conversation have a vested interest in keeping the focus on technical capabilities rather than societal implications. It's easier to sell revolutionary AI breakthrough than AI that might eliminate your job but uses clean energy.

SPEAKER_00

The OpenAI podcast acquisition is a perfect example of this. They're literally buying the media that covers them to control the narrative. That's not about building better AI. That's about managing public perception and regulatory pressure.

SPEAKER_01

And it's working. Look at how most AI coverage focuses on capabilities and business implications, rather than deeper questions about power, control, and societal impact. The conversation has been successfully narrowed to terms that benefit the companies building these systems.

SPEAKER_00

But there are some positive signals too. Anthropic doing research on emotion concepts shows that at least some companies are thinking about safety and interpretability. Cursor focusing on developer experience shows that not everyone is just chasing the biggest models.

SPEAKER_01

True. And Google, releasing Gemma 4 as open source, whatever their motivations, does democratize access to powerful AI. Even if the big players are making questionable choices, the technology itself is becoming more accessible.

SPEAKER_00

I just hope we can find a way to have the benefits of rapid AI development without the worst of the downsides. But right now it feels like we're on a runaway train and no one's really in control of where it's heading.

SPEAKER_01

Maybe that's the most important thing for people to understand. Like this isn't inevitable. The pace, the priorities, the trade-offs, these are all choices being made by specific companies and individuals. And those choices can be influenced by public pressure, regulation, market forces, and individual decisions about which products to use.