Build by AI

The $100 Question: OpenAI's Premium Gamble | 10th April

OpenAI just launched a $100 per month ChatGPT subscription while simultaneously backing legislation to limit their liability for AI-caused mass deaths. Meanwhile, Meta's climbing the app charts and Florida is launching investigations. Today we dig into whether AI companies are getting too comfortable with risk, why developers might pay premium prices, and what happens when the honeymoon phase of AI adoption starts getting messy. Plus: the infrastructure arms race that's reshaping tech.
SPEAKER_00

So OpenAI just launched a hundred dollar per month subscription tier, which honestly made me do a double take because that's five times their current premium price.

SPEAKER_01

Wait, a hundred bucks a month for ChatGPT? That's more than most people pay for their phone bill.

SPEAKER_00

Right. But here's the thing that's really got me thinking. They're doing this at the exact same time they're lobbying for legal protection against liability for mass deaths caused by their AI.

SPEAKER_01

Oh wow. So they want premium pricing, but also want to limit their responsibility if things go catastrophically wrong. That's a very specific combination of confidence and caution.

SPEAKER_00

Exactly. And I can't figure out if this signals they're incredibly bullish about their technology or if they're starting to get nervous about the risks they're taking. And that contradiction is fascinating to me. Usually companies are either confident enough to stand behind their product or they're not. This feels like trying to have it both ways. You're listening to Build by AI. I'm Alex Shannon. And that tension between AI ambition and AI anxiety is basically the theme of today's entire news cycle.

SPEAKER_01

And I'm Sam Hiton. We've got OpenAI making some very interesting moves, Meta climbing the app charts, Florida launching investigations, and a massive infrastructure spending spree that's reshaping the entire industry.

SPEAKER_00

Plus, we're seeing some serious money flowing into next generation AI models as companies start hitting the limits of current approaches.

SPEAKER_01

It's one of those days where every story connects to the bigger question of whether we're in a sustainable AI boom or heading for some kind of reckoning. Let's dive in.

SPEAKER_00

And honestly, between the pricing strategies, the liability concerns, and the regulatory pushback, it feels like we're watching the AI industry grow up in real time.

SPEAKER_01

The honeymoon phase is definitely over. Okay. So they're clearly targeting developers and businesses that are hitting usage limits. But $100 a month? That's putting ChatGPT in the same price category as professional software, like Adobe Creative Suite.

SPEAKER_00

That's a great comparison. So do you think there's actually a market for this? Are developers really running up against those usage limits enough to justify this price jump?

SPEAKER_01

Oh, absolutely. If you're a developer using Codex heavily for code generation, $100 a month is nothing compared to what you'd pay a human developer for the same output. I know teams that are probably hitting those limits daily.

SPEAKER_00

But here's what I'm wondering. Is this OpenAI testing the waters for much higher pricing across the board? Like, are we looking at the beginning of AI tools becoming genuinely expensive enterprise software?

SPEAKER_01

That's the million dollar question, literally. I think what we're seeing is OpenAI realizing they've been underpricing their technology. $20 a month for unlimited access to GPT-4? That was probably unsustainable from a business perspective.

SPEAKER_00

Right. But there's a risk here too. If AI tools get too expensive, you could see more companies investing in open source alternatives or building their own models.

SPEAKER_01

Exactly. OpenAI has this window where they're the clear leader, but pricing themselves out of the market could accelerate competition. It's a classic innovator's dilemma. Milk the current advantage or keep prices low to maintain market share.

SPEAKER_00

And for individual users and smaller businesses, this might be where we start seeing a real divide between who has access to cutting-edge AI and who doesn't.

SPEAKER_01

Yeah, we could be looking at the beginning of an AI access gap. The companies that can afford $100 subscriptions get significantly more powerful tools while everyone else gets stuck with more limited options.

SPEAKER_00

But let's think about this from the user perspective. What kind of developer or team actually needs five times more Codex usage? We're talking about people who are essentially using AI as their primary coding assistant.

SPEAKER_01

Right. These are probably teams building AI-first products or maybe large engineering organizations where multiple developers are sharing accounts. The usage patterns must be pretty extreme to justify this tier.

SPEAKER_00

And that tells us something about adoption, doesn't it? If there's enough demand for this tier, it means AI coding tools have moved way beyond experimentation into core workflows.

SPEAKER_01

Absolutely. This pricing tier only makes sense if there are customers who literally can't do their jobs without heavy AI assistance. That's a pretty significant shift in how software gets built.

SPEAKER_00

The question is whether this pricing holds or if competition forces it back down. Because if other companies can offer similar capabilities at lower prices, OpenAI might have to retreat.

SPEAKER_01

But they might also be betting that by the time competition catches up, they'll have moved to even more advanced models, stay ahead of the curve, and justify premium pricing through technological leadership.

SPEAKER_00

Keep an eye on how quickly this tier fills up, and whether other AI companies follow suit with similar premium pricing. That'll tell us a lot about where this market is heading.

SPEAKER_01

And watch for enterprise deals too. You know, if businesses are willing to pay $100 per user per month, we might see even higher pricing tiers for large organizations.

SPEAKER_00

Now let's talk about that liability story I mentioned in the opening. OpenAI actually testified in support of an Illinois bill that would limit their liability even if their AI causes mass deaths or major financial disasters. They want legal protection from being held accountable for critical harm.

SPEAKER_01

Whoa, hold on. They're literally asking for protection against liability for mass deaths. That's not like limiting liability for minor bugs or service outages. That's some heavy stuff.

SPEAKER_00

Right? And the timing is what gets me. They're launching premium subscriptions while simultaneously trying to limit their responsibility if things go catastrophically wrong. What does that tell us about their risk assessment?

SPEAKER_01

I mean, from a business perspective, I get it. You know, if you're building technology that could potentially control critical infrastructure or financial systems, you want legal protection. But man, the optics are terrible.

SPEAKER_00

But Sam, should we be worried that they feel the need to ask for this protection in the first place? Like what do they know about the risks that we don't?

SPEAKER_01

That's the scary part, right? Either they're being overly cautious lawyers, or they genuinely think there's a non-zero chance their technology could cause mass casualties. Neither interpretation is particularly comforting.

SPEAKER_00

And there's a precedent issue here too. If Illinois passes this, you can bet every other AI company is going to push for similar protections in other states.

SPEAKER_01

Exactly. Um, you know, we could end up in a situation where AI companies have broad immunity while regular people bear all the risk. That seems like a pretty fundamental shift in how we think about corporate responsibility.

SPEAKER_00

You know what this reminds me of? The early days of social media when platforms got Section 230 protections. Seemed reasonable at the time, but the long-term consequences were huge.

SPEAKER_01

That's a perfect analogy. And we're still dealing with the fallout from those decisions twenty five years later. The difference is AI has the potential for much more immediate and severe consequences than social media misinformation.

SPEAKER_00

The question is whether lawmakers understand the implications of what they're being asked to approve. This feels like one of those decisions that could define AI regulation for decades.

SPEAKER_01

And Illinois might not be the ideal testing ground for this kind of precedent-setting legislation. This deserves national attention and debate, not just a single state deciding for everyone.

SPEAKER_00

But let's think about this from OpenAI's perspective for a moment. If you're building AGI or near AGI systems, the potential for unintended consequences is genuinely massive. Maybe they're actually being responsible by acknowledging that risk up front.

SPEAKER_01

I can see that argument, but the flip side is that if the risks are that severe, maybe they should slow down development rather than just limiting their liability. It feels like they want to have their cake and eat it too.

SPEAKER_00

And there's an economic argument here too. If AI companies can't be held liable for catastrophic failures, what incentive do they have to invest in safety measures? Liability creates market pressure for responsible development.

SPEAKER_01

Exactly. Remove the liability and you remove a major cost of reckless behavior. That seems like it could actually make AI development less safe, not more safe.

SPEAKER_00

The other concerning thing is that OpenAI testified in favor of this bill. They didn't just quietly support it, they actively advocated for it. That suggests this is a priority for them.

SPEAKER_01

Which brings us back to the fundamental question. What do they know about the risks that's making them push so hard for legal protection? That's what keeps me up at night about this story.

SPEAKER_00

This is definitely worth following closely. If this bill passes in Illinois, we'll probably see similar legislation pop up in other states pretty quickly. The AI industry will mobilize around this.

SPEAKER_01

And if it fails, that might signal that the public and lawmakers are starting to get more skeptical about giving AI companies carte blanche. Either way, it's a significant moment.

SPEAKER_00

Speaking of regulation, Florida Attorney General James Uthmeier just launched an investigation into OpenAI focusing on public safety and national security risks. This is pretty significant as the first major state-level regulatory action against the company.

SPEAKER_01

Okay, that's interesting timing. Right after we just talked about OpenAI seeking liability protections. Florida's basically saying, hold up, let's examine whether these public safety concerns are real before we give you legal immunity.

SPEAKER_00

Exactly. And Florida's not exactly known for being anti-business. So when they're launching investigations into a tech company, that suggests some serious concerns behind the scenes.

SPEAKER_01

Public safety and national security are pretty broad categories. Are we talking about misinformation, privacy violations, potential for foreign interference?

SPEAKER_00

The fact that they're mentioning national security specifically makes me think this might be related to data handling or potential vulnerabilities in OpenAI systems. Remember, ChatGPT processes massive amounts of sensitive conversations.

SPEAKER_01

Right, and there have been ongoing questions about OpenAI's relationship with Microsoft and data residency issues. If you're Florida's attorney general, you might be worried about uh state government data potentially being accessible to foreign actors.

SPEAKER_00

But here's what's tricky for OpenAI. They can't really fight back too aggressively against this investigation, because it would contradict their public messaging about being committed to safety and transparency.

SPEAKER_01

Exactly. They position themselves as the responsible AI company, so they kind of have to cooperate and appear welcoming of oversight. But I bet behind closed doors they're not thrilled about setting precedent for state investigations.

SPEAKER_00

And if Florida finds anything concerning, you can bet attorneys general in other states are going to launch their own investigations. This could be the start of a much broader regulatory scrutiny.

SPEAKER_01

Which brings us back to that liability legislation. Maybe OpenAI is seeing the writing on the wall with increased regulatory attention and trying to get legal protections in place before the hammer falls.

SPEAKER_00

That would actually make a lot of sense strategically. Get immunity legislation passed while you still have political goodwill before any investigations uncover problems that make lawmakers less sympathetic.

SPEAKER_01

This investigation is worth watching closely because it could establish the template for how states regulate AI companies going forward. Florida's approach could become the model for everyone else.

SPEAKER_00

I'm also curious about the timeline here. How long do these kinds of investigations typically take? Are we talking months or years before we see results?

SPEAKER_01

State AG investigations can vary wildly, but if there are genuine national security concerns, this could move pretty quickly. Attorneys general don't usually announce investigations unless they have reason to believe they'll find something. Right, and that could completely change the regulatory landscape for AI companies. Federal oversight is a whole different ballgame than dealing with individual state investigations.

SPEAKER_00

The other thing to watch is how other AI companies respond to this. Are they going to distance themselves from OpenAI, or are they going to close ranks and present a united front?

SPEAKER_01

Good point. If this investigation turns up serious issues, other AI companies might actually benefit from regulatory clarity. Better to have clear rules than to operate in a gray area where any company could be the next target.

SPEAKER_00

This feels like we're entering a new phase where AI companies can't just operate in the regulatory Wild West anymore. The attention is getting too intense, and the stakes are getting too high. Let's shift gears and talk about some success stories. Meta AI's mobile app just shot up to number five on the App Store after launching their new Muse Spark model. We're talking about jumping from number 57 to number 5, which is pretty dramatic.

SPEAKER_01

Dude, that's a massive jump. Going from 57 to 5 on the App Store doesn't happen by accident. That suggests Muse Spark is delivering something users actually want, not just generating hype.

SPEAKER_00

What's interesting is Meta has been pretty quiet about their AI efforts compared to OpenAI's constant headlines. But this app ranking suggests they might actually be gaining real traction with consumers.

SPEAKER_01

That's Meta's strength, though, right? They don't need to win the PR battle if they can win the usage battle. They've got billions of users across their platforms who they can gradually introduce to AI features.

SPEAKER_00

But I'm curious what MuseSpark actually does that's different. Do we know what specific capabilities drove this surge in downloads?

SPEAKER_01

That's the million-dollar question. Meta's been pretty secretive about the technical details, but the user response suggests they've solved some real pain point that other AI apps haven't addressed.

SPEAKER_00

You know what this might be? Meta has a huge advantage in understanding what people actually want to do with AI, because they see how billions of people interact with technology every day.

SPEAKER_01

Exactly. While OpenAI is building general purpose AI, Meta can build AI that's specifically designed around how people actually use apps. That user data advantage is massive.

SPEAKER_00

And the app store rankings are a leading indicator of broader adoption. If Meta AI stays in the top ten, that could signal a real shift in the consumer AI market.

SPEAKER_01

Right, because app rankings translate to mainstream adoption in a way that enterprise subscriptions don't. OpenAI might have the developer community, but Meta could end up with regular consumers.

SPEAKER_00

This feels like the beginning of the consumer AI wars. We've had the foundation model wars. Now we're moving into who can actually build AI products that normal people want to use every day.

SPEAKER_01

And Meta's got a huge distribution advantage there. They can integrate AI into Instagram, Facebook, WhatsApp, apps people already use constantly. That's a much easier path to adoption than asking people to download something new.

SPEAKER_00

But here's what I'm wondering. Is this sustainable? App store rankings can be pretty volatile, especially for new features. Will Meta AI still be in the top 10 next month?

SPEAKER_01

That's the real test. Initial novelty can drive downloads, but retention is what matters. If people download the app, try it once, and never open it again, those rankings will crash pretty quickly.

SPEAKER_00

And there's the integration factor too. If MuseSpark gets built directly into Instagram and Facebook, people might not need the standalone app anymore. That could hurt the rankings but actually increase usage.

SPEAKER_01

Good point. Meta's endgame probably isn't to have a successful AI app, it's to have AI seamlessly integrated into all their existing products. The app might just be a testing ground.

SPEAKER_00

Which brings up an interesting competitive question. Should OpenAI be worried about companies like Meta that can bundle AI into existing popular products?

SPEAKER_01

I think they should be. OpenAI's advantage is having the best models, but if other companies can deliver good enough AI directly inside apps people already use, that's a major threat to OpenAI's user acquisition.

SPEAKER_00

The other thing to watch is whether this success with Muse Spark gives Meta more confidence to compete directly with OpenAI in other areas. If consumers respond this well to Meta AI, they might accelerate their broader AI strategy.

SPEAKER_01

Absolutely. Success breeds ambition. If Meta can capture a significant chunk of the consumer AI market, they might decide to go after enterprise customers too. That would be a real challenge to OpenAI's business model.

SPEAKER_00

Let's rapid fire through some other big stories. Google and Intel are deepening their partnership to co-develop custom AI chips, responding to huge demand and a global CPU shortage.

SPEAKER_01

This is huge because it signals Google is serious about reducing their dependence on NVIDIA. Everyone's been at NVIDIA's mercy for AI chips, and that's created a massive bottleneck.

SPEAKER_00

And if Google and Intel can deliver competitive alternatives, it could completely reshape the AI infrastructure market and bring costs down across the industry.

SPEAKER_01

Plus Intel needs this partnership badly. They've been losing ground to AMD and NVIDIA in the AI space, so Google's backing could be what gets them back in the game.

SPEAKER_00

But here's the question: can Intel actually compete with Nvidia's performance? Or are they just going to be the cheaper alternative? Because AI companies care a lot more about performance than price right now.

SPEAKER_01

That's the key. If Google-Intel chips are 30% cheaper but 20% slower, that might not be attractive to companies racing to build the most powerful AI systems.
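To make that tradeoff concrete, here is a back-of-the-envelope price-performance calculation using the hypothetical numbers from the discussion (30% cheaper, 20% slower; both figures are illustrative, not real chip specs):

```python
# Normalize the incumbent chip to price = 1.0 and throughput = 1.0,
# then apply the hypothetical discounts from the discussion.
baseline_price = 1.00
baseline_speed = 1.00

alt_price = baseline_price * (1 - 0.30)  # 30% cheaper -> 0.70
alt_speed = baseline_speed * (1 - 0.20)  # 20% slower  -> 0.80

# Cost per unit of throughput: lower is better.
baseline_cost_per_perf = baseline_price / baseline_speed  # 1.000
alt_cost_per_perf = alt_price / alt_speed                 # 0.875

print(f"alternative cost per unit of performance: {alt_cost_per_perf:.3f}")
```

So the cheaper-but-slower chip actually wins on cost per unit of compute (0.875 vs 1.0, roughly 12.5% better), which is why it could still make sense for inference workloads, even though a lab racing to finish a training run first may care more about the raw 20% throughput loss than the savings.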

SPEAKER_00

Although for certain workloads, especially inference, rather than training, being cheaper might be more important than being the absolute fastest. Not every AI application needs cutting-edge performance.

SPEAKER_01

True. And Google's scale means they can optimize for their specific use cases. They don't need general-purpose chips, they can build exactly what they need for their AI workloads.

SPEAKER_00

Speaking of infrastructure, OpenAI, NVIDIA, and other major firms are channeling billions into AI infrastructure as global demand absolutely explodes.

SPEAKER_01

This is the arms race nobody talks about enough. Everyone focuses on the models, but the real competition is in who can build the computational power to train and run these systems at scale.

SPEAKER_00

And these aren't small investments. We're talking about billions in spending, which suggests these companies see infrastructure as a fundamental competitive advantage.

SPEAKER_01

Absolutely. The companies that control the infrastructure will ultimately control who gets to play in the AI game. This spending spree is about securing long-term market position.

SPEAKER_00

What's interesting is that this creates a bit of a chicken and egg problem. You need massive infrastructure to compete, but you need revenue to afford the infrastructure. It's becoming a rich-get-richer situation.

SPEAKER_01

Exactly, and that might explain OpenAI's hundred dollar subscription tier. They need cash flow to fund this infrastructure build-out, and premium pricing is one way to get there faster.

SPEAKER_00

The other thing is that all this infrastructure spending is creating shortages and driving up costs for everyone else. Smaller AI companies are getting squeezed out by supply constraints.

SPEAKER_01

Right, it's not just about having money, it's about having enough money to compete in what's essentially an infrastructure bidding war. The barriers to entry are getting higher every quarter.

SPEAKER_00

Now, early reports suggest Alibaba is leading a $290 million investment in developing a completely new type of AI model as current LLM limitations become apparent.

SPEAKER_01

If confirmed, this is fascinating because it suggests we might be hitting the ceiling of what traditional language models can do. 300 million is serious money to bet on next generation approaches.

SPEAKER_00

And Alibaba's timing makes sense. They need to leapfrog current leaders rather than just catching up to existing technology.

SPEAKER_01

Right, and if they can crack whatever comes after LLMs, they could completely disrupt the current AI hierarchy. That's worth a massive investment.

SPEAKER_00

The question is what specific limitations they're trying to address. Are we talking about reasoning capabilities, factual accuracy, computational efficiency, or something else entirely?

SPEAKER_01

That's what I want to know too. Because different limitations require completely different approaches. This investment suggests they have a specific technical breakthrough in mind.

SPEAKER_00

And if multiple companies are betting hundreds of millions on post-LLM research, that tells us the industry consensus is that current approaches won't scale much further.

SPEAKER_01

Which could be bad news for companies that have invested heavily in LLM infrastructure. If the next generation requires completely different architectures, a lot of current investments could become obsolete pretty quickly.

SPEAKER_00

And if early reports are accurate, Chinese startup Sheng Shu just raised $293 million specifically for artificial general intelligence research.

SPEAKER_01

Almost 300 million for AGI research from a startup, that's either incredibly ambitious or incredibly naive. AGI is still such a theoretical goal that it's hard to know how you'd even measure progress.

SPEAKER_00

But it shows how much money is flowing into next generation AI research, especially in China where there's clear government backing for AI leadership.

SPEAKER_01

True. And if multiple companies are betting hundreds of millions on post-LLM approaches, that suggests the current paradigm might have shorter legs than we think.

SPEAKER_00

What worries me a bit is that AGI research is so speculative that it's hard to hold companies accountable for results. How do you measure progress towards something that doesn't have a clear definition?

SPEAKER_01

That's a fair point. With 300 million in funding, investors must have some specific milestones in mind. But AGI timelines are notoriously unreliable.

SPEAKER_00

The geopolitical aspect is interesting too. If a Chinese company achieves major AGI breakthroughs, that could completely shift the global AI power balance.

SPEAKER_01

Absolutely. This isn't just about building better chatbots. This is about which country leads the most important technologies of the next century. That makes the stakes much higher.

SPEAKER_00

If you zoom out and look at everything we covered today, there's this really interesting pattern emerging. We've got premium pricing, liability protection, regulatory investigations, and massive infrastructure investments all happening simultaneously.

SPEAKER_01

It feels like we're transitioning from the experimental phase of AI to the industrial phase. Companies are making serious long-term bets, governments are paying attention, and the stakes are getting real.

SPEAKER_00

And that $100 ChatGPT subscription might be a canary in the coal mine. If AI tools become genuinely expensive enterprise software, that changes who has access and how quickly the technology spreads.

SPEAKER_01

Exactly. We might be looking at the end of the democratized AI era before it really got started. The companies that can afford premium tools pull ahead while everyone else gets left behind.

SPEAKER_00

But there's also this infrastructure arms race happening that could change everything. If Google and Intel can break Nvidia's chip monopoly, or if these new AI models actually work, the whole landscape could shift again.

SPEAKER_01

The next six months are going to be crucial. We'll see if premium pricing sticks, whether regulatory pressure intensifies, and if any of these next generation AI approaches actually deliver results. Premium pricing might be driven by infrastructure costs, which creates opportunities for meta to capture consumers with integrated solutions, which then puts pressure on everyone to find alternative approaches like those next generation models.

SPEAKER_00

Right. And the regulatory scrutiny ties into all of it. If AI companies are asking for liability protection while charging premium prices, that creates a political problem. It looks like privatizing profits while socializing risks.

SPEAKER_01

The timing of that Florida investigation is particularly interesting in that context. It's almost like a direct response to OpenAI's liability push. You want immunity? Let's first examine what risks we'd be giving you immunity from.

SPEAKER_00

And the international competition adds another layer. Chinese companies are investing hundreds of millions in AGI research, while US companies are focused on premium subscriptions and liability protection. Those are very different strategies. That could be really significant long term. If China achieves major technical breakthroughs while US companies are focused on incremental improvements and risk management, the competitive landscape could flip pretty dramatically.

SPEAKER_01

The infrastructure story ties into this too. All these billions being spent on computational power suggest companies are preparing for much more resource-intensive AI systems. That's either scaling up current approaches or preparing for whatever comes next.

SPEAKER_00

Which brings us back to that fundamental question. Are we in the middle of sustainable AI growth, or are we approaching some kind of inflection point where everything changes?

SPEAKER_01

Based on today's stories, I'd say we're definitely approaching an inflection point. Premium pricing, regulatory scrutiny, and massive next generation investments don't happen during periods of stable growth.

SPEAKER_00

The question is whether that inflection point leads to continued rapid progress, or if we hit some kind of wall that forces the industry to reset expectations.

SPEAKER_01

And for anyone building businesses around AI, these trends matter a lot. The tools you depend on might get much more expensive, the regulatory environment is shifting, and the competitive landscape could completely change if these next generation approaches work.

SPEAKER_00

That's a wrap on today's show. As always, if you're getting value from these daily AI updates, subscribing really helps us keep doing this.

SPEAKER_01

And tomorrow we'll be back with whatever chaos the AI world throws at us next. Based on today's news, it's probably going to be interesting.

SPEAKER_00

Seriously, the pace of change in this industry is just incredible. Every day brings new developments that could reshape how we think about AI.

SPEAKER_01

Which is why we're here, trying to make sense of it all and figure out what it means for everyone else. Thanks for listening, and we'll see you tomorrow.

SPEAKER_00

See you tomorrow on Build by AI.