Build by AI

The Great AI Valuation Shakeup | April 16

OpenAI investors are getting cold feet as Anthropic's meteoric rise reshapes the entire AI landscape. Meanwhile, Google launches a native Gemini app for Mac, Adobe unleashes Firefly across Creative Cloud, and a controversial startup wants AI to judge journalism itself. From billion-dollar valuations to AI agents securing code, today's episode dives deep into the power shifts happening right now in artificial intelligence. Plus: why one company thinks AI-generated code needs AI to review it.
SPEAKER_01

So I've been staring at these numbers all morning, and I think we might be witnessing the first major reshuffling of AI power. Early reports suggest some OpenAI investors are literally having second thoughts about their investments because of Anthropic.

SPEAKER_00

Wait, what? OpenAI investors are getting cold feet? That's huge, if true. I mean, we're talking about the company that basically created the entire consumer AI market.

SPEAKER_01

Right, but here's the kicker. According to these reports, OpenAI's current valuation basically assumes they'll IPO at over a trillion dollars. Meanwhile, Anthropic is sitting at 380 billion and looking like a bargain.

SPEAKER_00

Dude, that's not just cold feet. That's a complete recalculation of who's going to win this race. And honestly, I'm not sure I'm surprised.

SPEAKER_01

Yeah, we need to dig into this because if confirmed, this could signal a massive shift in how investors are thinking about the AI landscape. You're listening to Build by AI, I'm Alex Shannon. And that investor drama is just the tip of the iceberg today.

SPEAKER_00

And I'm Sam Hitten. We've also got Google making a play for your Mac desktop, Adobe going full AI agent mode, and honestly, one of the most controversial AI applications I've seen in a while. This is April 16th, 2026.

SPEAKER_01

Yeah, that journalism story, we have to talk about that one. But first, let's dive into this OpenAI situation because the implications are wild. Alright, so according to early reports from TechCrunch, some OpenAI investors are genuinely reconsidering their positions because of how well Anthropic is performing. The basic math here is that OpenAI's recent funding round assumes they'll eventually IPO at $1.2 trillion or higher.

SPEAKER_00

Okay, but that's insane when you put it in context. Like, Anthropic is currently valued at $380 billion, which suddenly makes them look like the reasonable investment. That's a massive gap.

SPEAKER_01

Right. And that gap raises a fundamental question. Are we looking at OpenAI being overvalued or Anthropic being undervalued? What's your take on the actual competitive landscape here?

SPEAKER_00

Um, honestly, I think it's both. OpenAI got the first-mover advantage with ChatGPT, but Anthropic has been consistently shipping really solid models. Claude has been impressive, their safety focus resonates with enterprises, and they're not carrying the baggage of all the OpenAI drama from the past couple of years.

SPEAKER_01

That's a good point about the drama. But hold on. OpenAI still has the market dominance, the Microsoft partnership, the developer mind share. Are investors really going to jump ship based on valuation multiples alone?

SPEAKER_00

See, that's where I think people are missing the bigger picture. This isn't just about current performance, it's about trajectory. Anthropic is growing fast, they're being more thoughtful about their approach, and uh frankly, they don't have Sam Altman getting fired and rehired every six months.

SPEAKER_01

Okay, that's fair, but let's be practical here. What does this actually mean for developers and businesses who are building on these platforms?

SPEAKER_00

That's the real question, right? If I'm a developer, I'm probably not switching platforms tomorrow. But if I'm planning a major AI integration for 2027 or 2028, I'm definitely doing a lot more due diligence on Anthropic than I might have six months ago.

SPEAKER_01

And if you're an investor, you're probably asking whether OpenAI can actually justify that trillion dollar valuation assumption. Keep an eye on this because investor sentiment can shift really quickly in this space. And that affects everything from research funding to talent acquisition.

SPEAKER_00

You know what's really interesting to me about this though? The fact that we're seeing this kind of investor skepticism now, when the AI market is still supposedly in its early days. Like what does that say about how mature this space has already become?

SPEAKER_01

That's a great point. Maybe the honeymoon period for AI valuations is ending faster than people expected. Investors are starting to look at fundamentals like revenue growth, market share sustainability, competitive moats, the boring stuff that actually determines long-term success.

SPEAKER_00

Exactly. And when you look at it that way, Anthropic's $380 billion valuation might actually reflect a more realistic assessment of where the AI market is heading. Maybe OpenAI got ahead of itself with that trillion-dollar assumption.

SPEAKER_01

But here's what I keep coming back to. When normal people think AI, they think ChatGPT. That's worth something, even if it's hard to quantify.

SPEAKER_00

True. But brand recognition only takes you so far if your competitors are shipping better products at better prices. And enterprise customers, which is where the real money is, they don't care about brand as much as they care about reliability, safety, and integration capabilities.

SPEAKER_01

Which brings us back to Anthropic's positioning around safety and thoughtful AI development. That might seem like marketing fluff, but if it translates to fewer hallucinations, better enterprise integrations, and fewer PR disasters, that's a real competitive advantage.

SPEAKER_00

And honestly, from a product perspective, Claude has felt more reliable to me lately: less likely to go off the rails, better at maintaining context in long conversations. If that's what investors are seeing too, then yeah, maybe Anthropic is the better bet.

SPEAKER_01

So for people listening who are trying to figure out which AI platforms to bet on for their projects, what's your advice? Wait and see how this shakes out, or start diversifying away from OpenAI now?

SPEAKER_00

I'd say start experimenting with alternatives now, but don't panic and switch everything overnight. The beauty of working with AI APIs is that you can usually swap them out relatively easily. Build your systems to be platform agnostic and test what works best for your specific use cases.
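To make that platform-agnostic advice concrete, here's a minimal sketch of the pattern Sam is describing: code your application against a small interface of your own, and hide each vendor behind an adapter. The provider classes here are hypothetical stubs standing in for real SDK calls, not actual OpenAI or Anthropic client code.

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """The only interface your application code should depend on."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In a real app this would call the OpenAI SDK; stubbed for illustration.
        return f"[openai] {prompt}"


class AnthropicProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In a real app this would call the Anthropic SDK; stubbed for illustration.
        return f"[anthropic] {prompt}"


def build_provider(name: str) -> ChatProvider:
    """Swap vendors with a config value instead of a code rewrite."""
    providers = {"openai": OpenAIProvider, "anthropic": AnthropicProvider}
    return providers[name]()


# Application code never touches a vendor SDK directly:
llm = build_provider("anthropic")
print(llm.complete("Summarize today's AI news"))
```

The point is the seam, not the stubs: when your only coupling to a vendor is inside one adapter class, the "test alternatives for your use case" step becomes changing a config string rather than a migration project.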

SPEAKER_01

Let's shift gears to something that's definitely confirmed. Google just launched a native Gemini app for Mac. This isn't just a web wrapper; it's actually integrated into the Mac desktop environment, so you can interact with Gemini without switching between windows.

SPEAKER_00

Okay, this is actually a bigger deal than it sounds on the surface. Think about it. Google is essentially putting an AI assistant directly on your Mac desktop, competing with whatever Apple's gonna do with their own AI integration.

SPEAKER_01

Right. And the timing is interesting here. Apple's been pretty quiet about their AI strategy beyond some Siri improvements. Is Google trying to get ahead of whatever Apple announces at WWDC?

SPEAKER_00

Oh, absolutely. Um this is Google saying, hey, we're not gonna wait for Apple to decide how AI should work on Mac. They're going direct to users and trying to build that habit of reaching for Gemini instead of whatever Apple eventually ships.

SPEAKER_01

But here's what I'm curious about. How does this play with Apple's traditional control over the user experience? I mean, Apple historically hasn't loved third-party apps that try to integrate this deeply into the system.

SPEAKER_00

Yeah, that's the tension, right? But I think Google is betting that AI assistants are becoming so essential that users will demand this kind of access even if it ruffles Apple's feathers. Plus, it's not like Apple can block Google from the App Store without major antitrust implications.

SPEAKER_01

That's true. And from a user perspective, if I'm already using Gemini for work stuff, having it integrated into my desktop workflow instead of having to open a browser tab every time is genuinely useful.

SPEAKER_00

Exactly. You know, this is about reducing friction and building stickiness. The easier Google makes it to use Gemini in your daily workflow, the harder it becomes to switch to whatever Apple eventually releases.

SPEAKER_01

So for Mac users, this is probably worth checking out, especially if you're already in the Google ecosystem. And for the broader AI landscape, this is Google making a clear statement about not ceding desktop AI to Apple.

SPEAKER_00

I'm actually really curious about the technical implementation here. Like how deep does this integration actually go? Can it access other apps on your Mac? Or is it more like a floating assistant that stays on top of your other windows?

SPEAKER_01

That's a great question, and probably determines how useful this actually is in practice. If it's just a prettier version of the web interface, that's one thing. But if it can actually understand context from other Mac apps and help with cross-application workflows, that's genuinely transformative.

SPEAKER_00

Right. And that gets to the broader question of how AI assistants evolve on desktop platforms. Are we heading toward these assistants that can see everything you're working on and proactively help? Because that's simultaneously incredibly useful and kind of terrifying from a privacy perspective.

SPEAKER_01

Yeah, the privacy implications are huge. Google already knows a ton about most people's digital lives through search, email, Chrome browsing. Adding desktop level AI integration potentially gives them even deeper insights into how people work and what they're thinking about.

SPEAKER_00

Which is probably why Apple has been more cautious about this kind of integration. They've built their brand around privacy, so they can't just drop an AI assistant that's constantly watching everything you do on your Mac. Google doesn't have that constraint.

SPEAKER_01

True. But Google also has to deal with regulatory scrutiny around data collection and market dominance. Launching an AI assistant that potentially monitors everything Mac users do could attract unwanted attention from antitrust regulators.

SPEAKER_00

Good point. But from a competitive standpoint, this move makes total sense. Google is essentially trying to own the AI layer on top of macOS before Apple gets their act together. And honestly, if the user experience is good enough, a lot of people won't care about the privacy trade-offs.

SPEAKER_01

That's probably true. Most people already use Google services extensively anyway. For them, having Gemini integrated into their Mac workflow is probably more convenient than concerning. It's really about whether Apple responds with something compelling of their own.

SPEAKER_00

And whether Apple responds quickly enough. The longer Apple takes to ship their own desktop AI integration, the more time Google has to build user habits around Gemini. First mover advantage is real, especially with sticky products like AI assistants.

SPEAKER_01

Now here's something that could genuinely change how creative professionals work. Early reports suggest Adobe has released a new Firefly AI assistant that can actually work across multiple creative cloud applications to complete tasks automatically.

SPEAKER_00

Wait, so this isn't just AI helping within Photoshop or Premiere individually. This is like an AI assistant that can jump between Photoshop, Lightroom, Illustrator, all of them to complete a workflow.

SPEAKER_01

That's exactly what it sounds like, if these reports are accurate. We're talking about an AI that understands the entire Creative Cloud ecosystem and can automate tasks that normally require you to manually move between different applications.

SPEAKER_00

Dude, that's not just a feature update, that's a fundamental shift in how creative work gets done. Think about a typical workflow: you might start in Lightroom, move to Photoshop for detailed edits, then to Illustrator for graphics, then to Premiere for video. Now imagine an AI handling those transitions automatically.

SPEAKER_01

Right. But I'm also wondering about the learning curve here. Creative professionals are pretty particular about their workflows. Are they going to trust an AI to make those cross-application decisions? Or is this going to be one of those features that sounds cool but nobody actually uses?

SPEAKER_00

That's the million-dollar question. But here's why I think this might actually work. Adobe has been gradually introducing AI features into each individual app, and people have been adopting them. This feels like the natural next step rather than some jarring change.

SPEAKER_01

Plus, if you're a small creative agency or a freelancer juggling multiple projects, anything that can automate the tedious parts of moving files and settings between applications is going to save serious time.

SPEAKER_00

Exactly. And Adobe has all that usage data from Creative Cloud to train these models on what typical workflows actually look like. They're not guessing about how people move between applications.

SPEAKER_01

Good point. For creative professionals, this is definitely worth experimenting with, especially for routine tasks. And for Adobe, this is another way to make Creative Cloud feel indispensable rather than just a collection of separate tools.

SPEAKER_00

You know what's really smart about this, though? Adobe is essentially using AI to solve one of the biggest pain points with Creative Cloud: the fact that it's this fragmented ecosystem of different applications that don't always play nicely together.

SPEAKER_01

That's a really good point. Instead of rebuilding their entire software architecture to be more integrated, they're using AI as the glue that connects everything seamlessly. That's actually pretty clever from a product strategy standpoint.

SPEAKER_00

And it addresses one of the main competitive threats Adobe faces, which is newer creative tools that are built from the ground up to be more integrated and user-friendly. If Firefly can make Creative Cloud feel more cohesive, that's huge for user retention.

SPEAKER_01

I'm curious about the specifics though. Like what kinds of tasks can it actually handle across applications? Are we talking about simple file transfers, or can it make creative decisions about how to adapt content for different mediums?

SPEAKER_00

That's the key question. If it's just automating file imports and exports, that's useful, but not revolutionary. But if it can understand creative intent and adapt designs across different applications intelligently, that's a game changer for productivity.

SPEAKER_01

Right. And there's also the question of quality control. Creative work often requires those subtle human judgments about color, composition, timing. Can an AI assistant maintain that level of quality when moving work between applications? Or do you end up with technically correct but creatively mediocre results?

SPEAKER_00

I think that's where the assistant framing is important. This probably works best when it's handling the mechanical parts of the workflow and leaving the creative decisions to humans, like let the AI handle file conversions and basic adjustments, but keep human oversight on the creative choices.

SPEAKER_01

That makes sense. And honestly, even if it just eliminates the tedious parts of cross-application workflows, that frees up creative professionals to spend more time on the actual creative work. That's valuable even if the AI isn't making creative decisions.

SPEAKER_00

Absolutely. Time is money in creative work, especially for freelancers and small agencies. If this AI assistant can shave even 20-30 minutes off a typical project workflow, that adds up to significant cost savings and productivity gains over time.

SPEAKER_01

And from Adobe's perspective, this kind of AI integration makes it even harder for customers to switch away from Creative Cloud. Once your workflows are built around an AI that understands all these different Adobe applications, migrating to competitors becomes much more painful.

Alright, now we need to talk about something that honestly made me do a double take when I read it. There's a startup called Objection, backed by Peter Thiel, that wants to use AI to judge journalism by letting users pay to challenge published stories.

SPEAKER_00

Oh no. Oh no no no. I already don't like where this is going. Peter Thiel famously funded the Hulk Hogan lawsuit that shut down Gawker, and now he's backing an AI system to challenge journalism? That's not a coincidence.

SPEAKER_01

Right. And according to early reports, critics are already warning that this could have a chilling effect on whistleblowers and fundamentally change how media accountability works. What's your take on the concept itself, setting aside the Thiel connection for a moment?

SPEAKER_00

Okay, look, I'm all for media accountability, but this feels like it's approaching the problem from completely the wrong angle. Journalism is about judgment calls, context, source protection, things that AI fundamentally can't evaluate properly. You can't algorithm your way to truth.

SPEAKER_01

But devil's advocate here. What if there are genuine factual errors or misleading reporting? Couldn't an AI system at least flag potential issues that deserve human review?

SPEAKER_00

See, that's the thing though. We already have systems for that. It's called corrections, retractions, media criticism, journalism schools, press councils. The difference is those systems are run by humans who understand the nuances of reporting. An AI doesn't know the difference between a legitimate source and a bad actor trying to discredit a story.

SPEAKER_01

And that gets to the whistleblower concern, right? If someone can pay to have an AI challenge a story that exposes wrongdoing, that creates a whole new way to intimidate sources and reporters.

SPEAKER_00

Exactly. Imagine you're a journalist working on a story about corporate malfeasance, and you know that the company can just pay to have an AI tear apart your reporting methodology. Even if the AI is wrong, that creates doubt and gives bad actors a new tool to muddy the waters.

SPEAKER_01

This feels like one of those AI applications where the technical capability might exist. But the societal implications are really concerning. Keep an eye on this because how we handle AI's role in evaluating information is going to be crucial for democracy.

SPEAKER_00

And here's what really bothers me about this: the pay-to-challenge model. Good journalism costs money to produce, but challenging journalism with AI is relatively cheap. That creates this asymmetric warfare situation where well-funded interests can constantly attack investigative reporting.

SPEAKER_01

That's a really important point. Investigative journalism is already under financial pressure from declining ad revenues and subscription challenges. If you add a system where anyone with money can weaponize AI to attack stories, that makes the economics even worse for news organizations.

SPEAKER_00

Right, and think about the incentive structure this creates. News organizations might start avoiding controversial or complex stories because they know those are most vulnerable to AI-powered challenges. That's exactly the chilling effect critics are worried about.

SPEAKER_01

I keep coming back to the source protection issue, though. Journalism often depends on sources who are taking personal risks to expose wrongdoing. If those sources know that AI systems will be analyzing stories to try to identify them or discredit their information, that's going to make people much less likely to come forward.

SPEAKER_00

Absolutely. And here's the thing: AI systems are really good at finding patterns and connections that humans might miss. That could actually make it easier to identify confidential sources, even when journalists think they've protected them adequately.

SPEAKER_01

So we could end up in a situation where this system, even if it's designed to improve journalism accuracy, actually makes investigative reporting more dangerous for both journalists and sources. That's a pretty significant unintended consequence.

SPEAKER_00

And let's be real about who's going to use this system. It's not going to be regular citizens trying to fact-check their local newspaper. It's going to be corporations, politicians, and other powerful interests who want to discredit negative coverage.

SPEAKER_01

Which brings us back to the Peter Thiel connection. This isn't happening in a vacuum. It's being funded by someone who has a track record of using legal and financial tools to attack media organizations he doesn't like.

SPEAKER_00

Exactly. So even if the technology itself could theoretically be used for legitimate media accountability, the funding source and business model suggests that's not really the primary goal here.

SPEAKER_01

Just because you can use AI to judge journalism doesn't mean you should. Let's rapid-fire through some other stories. First up, early reports suggest OpenAI has updated its Agents SDK to help enterprises build safer and more capable AI agents.

SPEAKER_00

This timing is interesting given all the investor drama we talked about. OpenAI is clearly doubling down on the enterprise market, which makes sense. That's where the sustainable revenue is, not consumer subscriptions.

SPEAKER_01

Right. And the focus on safety is smart positioning too. If enterprises are going to deploy AI agents at scale, they need that assurance that the systems won't go off the rails.

SPEAKER_00

Are we talking about better reasoning, longer context windows, improved integration with enterprise systems? The devil is in the details with these AI agent platforms.

SPEAKER_01

Good point. And given that agentic AI is growing in popularity, OpenAI probably needs to move fast to maintain their lead in this space before competitors like Anthropic start eating into their market share.

SPEAKER_00

Yeah, enterprise customers are way more willing to switch AI providers than consumers are. If you're building mission-critical systems, you're going to go with whoever offers the best combination of reliability, safety, and capabilities, regardless of brand loyalty.

SPEAKER_01

Which brings us back to that valuation question. If OpenAI is banking on enterprise AI agents being a major revenue driver, they need to prove they can maintain technical leadership as competition heats up.

SPEAKER_00

Absolutely. Enterprise sales cycles are longer, but the contracts are also bigger and stickier. Getting this agent SDK right could be crucial for OpenAI's long-term financial prospects.

SPEAKER_01

Speaking of Google, they also released Gemini 3.1 Flash TTS, which is their next generation text-to-speech system with more expressive AI speech capabilities.

SPEAKER_00

The race for better AI voices is heating up. This is about making AI assistants feel more natural to interact with, which becomes super important as they get integrated deeper into our workflows, like that Mac app we talked about.

SPEAKER_01

Yeah. And better TTS is crucial for accessibility too. The more natural AI speech sounds, the more useful it becomes for people who rely on screen readers or voice interfaces.

SPEAKER_00

Are we talking about better emotional range, more natural pacing, the ability to convey different moods? That could make a huge difference for applications like audiobook narration or language learning.

SPEAKER_01

Right. And if Google can make Gemini's voice interactions feel significantly more natural than competitors, that's another way to build user preference and stickiness across their AI products.

SPEAKER_00

Plus, better TTS opens up new use cases. If AI generated speech sounds truly natural and expressive, you can start using it for things like personalized podcast creation, interactive storytelling, even customer service applications, where you want to maintain a human feel.

SPEAKER_01

And the fact that it's available across Google products means they can provide a consistent voice experience, whether you're using the Mac app, Android Assistant, or web interfaces. That's smart ecosystem thinking.

Next up, a startup has reportedly raised $9 million to use AI for security-focused code review, and specifically to review code that's generated by AI.

SPEAKER_00

Okay, that's actually brilliant. As more code gets generated by AI, we're going to need AI to check that AI-generated code for security vulnerabilities. It's like AI all the way down, but in a good way.

SPEAKER_01

Right. It's solving a problem that basically didn't exist five years ago, but is becoming critical as AI coding tools get more prevalent. Smart timing for a startup to tackle this space.

SPEAKER_00

And here's the thing: human code reviewers are already struggling to keep up with the volume of code being written, let alone AI generated code that might have subtle security issues humans wouldn't catch. This feels like a natural fit for AI automation.

SPEAKER_01

I'm curious about the approach, though. Are they training models specifically to understand common patterns in AI-generated code vulnerabilities? Or is this more of a general-purpose security analysis tool?

SPEAKER_00

That's a great question. AI generated code might have different types of vulnerabilities than human written code, maybe more predictable patterns or blind spots around edge cases that humans would naturally consider, but AI tools miss.

SPEAKER_01

Plus, as AI coding tools get better, the security review tools need to evolve too. This could become a continuous arms race between AI code generation and AI security analysis.

SPEAKER_00

Which is probably why getting $9 million in funding makes sense. This market is likely to grow rapidly as more companies adopt AI coding tools, and there's a real need for specialized security solutions that understand both AI capabilities and limitations.

SPEAKER_01

And finally, an AI learning platform called Gizmo has reached 13 million users and secured $22 million in Series A funding.

SPEAKER_00

13 million users is serious traction. The AI education space is exploding right now as people realize they need to understand this technology to stay relevant in their careers.

SPEAKER_01

Yeah. And unlike some other AI applications, education is one where AI can genuinely provide personalized value that's hard to replicate without the technology. Good fit between problem and solution.

SPEAKER_00

What's interesting is the timing of this funding. We're at this inflection point where AI literacy is transitioning from a nice-to-have to an essential skill for most knowledge workers. Gizmo is positioned right in the middle of that trend.

SPEAKER_01

And with 13 million users, they've got real data on how people actually learn about AI. What concepts are hardest to grasp, what teaching methods work best. That's incredibly valuable for product development and content creation.

SPEAKER_00

Right. Plus, AI-powered personalized learning can adapt to individual learning styles and pace in ways that traditional online courses can't. If someone's struggling with a particular concept, the AI can automatically provide additional examples or alternative explanations.

SPEAKER_01

I'm also thinking about the business model here. Unlike a lot of consumer AI applications that are struggling to find sustainable revenue, education has proven willingness to pay for valuable content and personalized instruction.

SPEAKER_00

Absolutely. People invest in education, especially when it's directly tied to career advancement. If Gizmo can demonstrate clear learning outcomes and career benefits for users, that $22 million investment could pay off pretty quickly.

Yeah, you know, you've got Google pushing Gemini directly onto Mac desktops, Adobe integrating AI across their entire suite, OpenAI doubling down on enterprise agents. Everyone's trying to make their AI indispensable by embedding it deeper into existing workflows.

SPEAKER_01

And then you have the investor story, which suggests the market is starting to mature and really evaluate which companies have sustainable advantages rather than just first mover benefits.

SPEAKER_00

Right. The honeymoon period is ending. It's not enough to just have an AI product anymore. You need to prove you can build a defensible business around it. The companies that figure out deep integration and sticky workflows are going to win.

SPEAKER_01

The question is whether we're heading toward a few dominant AI platforms that do everything, or a more specialized ecosystem where different AIs excel at different tasks. That journalism story suggests we definitely need to be thoughtful about which direction we want to go.

SPEAKER_00

I think we're seeing both trends simultaneously. You have these big platforms like Google and OpenAI trying to be the everything AI, but you also have specialized solutions like that code-security startup, or Gizmo for learning. There's room for both approaches.

SPEAKER_01

That's a good point. And maybe that's healthier for the ecosystem overall. Having a few dominant platforms provides stability and integration benefits, but having specialized competitors keeps everyone honest and drives innovation.

SPEAKER_00

Exactly. Look at Adobe. They're not trying to compete with OpenAI or Google on general AI capabilities. They're focusing on making AI work seamlessly within creative workflows where they already have expertise and market position.

SPEAKER_01

And that might be the smarter long-term strategy. Instead of trying to build the next ChatGPT, focus on solving specific problems really well with AI. That's probably more defensible than trying to out-general-purpose the big tech companies.

SPEAKER_00

Maybe investors are starting to realize that AI is becoming more of a feature than a standalone product category. The real value is in the applications and integrations, not just the underlying models.

SPEAKER_01

That's a really insightful way to think about it. If AI becomes commoditized infrastructure, then the companies that win are the ones that build the best applications on top of that infrastructure, not necessarily the ones that build the infrastructure itself.

SPEAKER_00

Right, and that would explain why Google is so focused on integration, uh, the Mac app, the TTS improvements, the ecosystem play. They're not just selling AI capabilities, they're selling AI-powered workflows that become indispensable.

SPEAKER_01

And it explains why stories like the journalism one are so important to watch. As AI gets more powerful and integrated into critical systems like media and information, the stakes get higher. We need to think carefully about the incentives and power structures we're creating.

SPEAKER_00

Absolutely. The technical capability is advancing faster than our frameworks for thinking about the social implications. That's going to be the major challenge as AI becomes more embedded in everything we do. Well, that investor story is definitely going to be one to watch. If OpenAI really is losing investor confidence to Anthropic, that could reshape everything.

SPEAKER_01

For sure. Thanks for listening to Build by AI. If you're finding value in these daily updates, hit subscribe so you don't miss tomorrow's developments. This space moves too fast to keep up with otherwise. From me and Sam, see you tomorrow.