Rendered Real: The Noir Starr Podcast
"Rendered Real: The Noir Starr Podcast" dives into the intersection of high fashion, artificial intelligence, and authentic representation. Hosted by the visionary team behind Noir Starr Models, each episode explores how the digital modeling revolution is reshaping beauty standards, brand storytelling, and the future of talent.
Episode 52: Scaling to Infinity: The Economics of AI Models
In this episode, we explore the most profitable sector of 2026: AI Models. While traditional fashion relies on physical fabric and manual labor, the AI economy operates on a different set of rules. We’re moving from high marginal costs to a world where software can scale infinitely, creating defensible economic moats that redefine market dominance.
But there is a catch. As we’ve discussed in previous episodes, the "Golden Rule" of software—near-zero marginal costs—is being challenged by the high compute costs of AI. Today, we break down how to navigate this paradox and build a business that is truly built to scale.
Look at the device you're holding right now. I mean you probably use it every single day, right?
SPEAKER_00Oh, absolutely. It's basically an extension of our hands at this point.
SPEAKER_01Yeah, exactly. And if you are, you know, navigating the modern world at all, you see artificial intelligence baked into absolutely every app, device, and workflow you touch.
SPEAKER_00Right. I mean it's the new search, it's the new assistant. It's really the new operating system for everything.
SPEAKER_01Yeah. But our mission for you today goes uh well, beneath the glass screen. We're entirely ignoring the interface today.
SPEAKER_00Yeah, no chatbot windows or image generators this time.
SPEAKER_01Right. Instead, we're taking a deep dive into this staggering economic engine running silently beneath that surface. We're unpacking a really fascinating article from today, April 29, 2026.
SPEAKER_00It's a great piece.
SPEAKER_01Yeah, it's titled Scaling to Infinity by Anthony Starr and Noir Starr Models. And uh the core premise breaks down exactly why foundational AI models have become the ultimate high-margin business of our era.
SPEAKER_00And to really grasp the magnitude of Anthony Starr's argument here, we kind of have to look backward for a moment.
SPEAKER_01Okay, take us back.
SPEAKER_00Well, for the last century, our entire global economy has been governed by manufacturing limits, right? And the variable costs of human labor.
SPEAKER_01Sure. Like making more stuff meant paying for more time and materials.
SPEAKER_00Exactly. Producing more value inherently meant hiring more people or, you know, building larger factories. But the fundamental thesis of Scaling to Infinity is that we have crossed a threshold.
SPEAKER_01A pretty massive threshold, from what the article says.
SPEAKER_00Huge. We're moving into a world of near zero marginal costs regarding cognitive work. Deploying complex intelligence at the speed of light, it just completely rewrites how value is created.
SPEAKER_01And how it scales, right?
SPEAKER_00Yep. And most importantly, who actually gets to capture that value.
SPEAKER_01Okay, let's unpack this. Because that phrase near zero marginal costs, it gets thrown around in tech constantly, usually just to describe like basic software.
SPEAKER_00Oh, yeah, it's a massive buzzword.
SPEAKER_01But to understand the structural dominance of these AI companies today, we have to look at how they eliminate the traditional cost of replication entirely. The source points out that once a massive AI model is trained, which, by the way, is a brutally capital-intensive process we'll dissect shortly, distributing that intelligence is basically free.
SPEAKER_00Right. Serving a million users requires essentially the same infrastructure as serving just one.
SPEAKER_01Which is wild to think about.
SPEAKER_00It is. Consider the auto industry, for instance. When a manufacturer wants to sell one more car, they have to pay for fresh steel, rubber, assembly labor.
SPEAKER_01Shipping, dealership storage, all of it.
SPEAKER_00Exactly. Every single unit carries a heavy variable cost. And it's the same in human-driven service industries, too.
SPEAKER_01Right. The bottleneck is identical.
SPEAKER_00If a high-end accounting firm wants to double its client list, they are forced to double their hiring of accountants. They need to rent more office space, pay for more health benefits, you name it.
SPEAKER_01But with foundational AI models, those physical constraints just vanish. I mean, there's no manufacturing floor, there's no shipping container.
SPEAKER_00And there's zero degradation of the product over time.
SPEAKER_01Yeah, copying an AI solution a million times over takes almost nothing. It functions as an infinite digital workforce that scales perfectly on demand.
SPEAKER_00And it strips away all those human variable expenses that normally dictate a corporate balance sheet.
SPEAKER_01Right. The text explicitly points out that salaries, healthcare benefits, onboarding, training, even downtime, they just cease to exist in this cost structure once the model is deployed.
SPEAKER_00Which is such a massive competitive advantage.
SPEAKER_01The article gave some really vivid examples of this in practice. You have a single AI system diagnosing hundreds of medical images at 3 a.m.
SPEAKER_00Right. And simultaneously, that exact same underlying intelligence is drafting complex corporate legal contracts in Tokyo.
SPEAKER_01While the human legal team is fast asleep, and at that very same second, it's optimizing millions of ad bids in New York.
SPEAKER_00It's doing all of that simultaneously without taking a break. No fatigue, no drop in cognitive sharpness.
SPEAKER_01Yeah, Anthony Starr calls that continuous cognitive output.
SPEAKER_00Right. Exactly. These systems think and respond 24 hours a day. They take what used to be idle time in the economy, the hours when human workers are commuting or sleeping or on vacation, and they just convert them into pure, uninterrupted revenue.
SPEAKER_01It's just mind-blowing. The mechanism reminds me of spending an absolute fortune to build the most advanced master printing press in history.
SPEAKER_00Oh, that's a great way to look at it.
SPEAKER_01Yeah, you spend billions of dollars to build this incredible machine exactly once. But then, instead of having to buy fresh ink and paper for every single book you want to print, every book it prints afterward just materializes out of thin air for free, forever.
SPEAKER_00Right. The marginal cost of the second book is practically zero.
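[Editor's note: the hosts' fixed-versus-variable-cost point can be sketched in a few lines of Python. All dollar figures below are invented for illustration; they do not come from the episode.]

```python
# Toy average-cost comparison: a physical product keeps a per-unit floor
# set by its variable cost, while a trained model amortizes one huge
# fixed cost over every additional user. Numbers are hypothetical.

def avg_cost(fixed, variable, units):
    """Average cost per unit = (fixed + variable * units) / units."""
    return (fixed + variable * units) / units

for units in (1, 1_000, 1_000_000):
    car = avg_cost(fixed=2e9, variable=25_000, units=units)  # factory + steel, labor per car
    model = avg_cost(fixed=2e9, variable=0.01, units=units)  # training run + tiny inference cost
    print(f"{units:>9,} units | car ${car:,.2f} | model ${model:,.2f}")
```

At a million units the car still costs $27,000 each because of its $25,000 variable cost, while the model's per-user cost has collapsed toward its near-zero marginal cost, which is the "second book is practically free" effect.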
SPEAKER_01But you know, reading through this, I kept hitting a mental roadblock. Isn't this exactly what we've been doing with software for the last two decades? I mean, isn't this just the traditional software-as-a-service, or SaaS, model that companies like Salesforce or Microsoft have utilized forever?
SPEAKER_00It's a really valid question, and the author explicitly addresses that comparison. He labels this new era "software evolution beyond SaaS."
SPEAKER_01So what's the mechanical difference then?
SPEAKER_00The distinction is profound. Traditional SaaS, like a spreadsheet application or a CRM dashboard, is ultimately just a static framework.
SPEAKER_01It's an empty container.
SPEAKER_00Exactly. It sits there entirely useless until a human being sits down, inputs the data, and does the actual cognitive work.
SPEAKER_01Ah, I see. The software itself isn't generating the insight.
SPEAKER_00Right. It requires human mental effort to extract any value. AI models, on the other hand, deliver that continuous cognitive output. They aren't waiting for a user to press buttons.
SPEAKER_01They're actively analyzing the data, writing the code, and making the decisions. So put simply, SaaS scales the hammer, but AI scales the carpenter swinging the hammer.
SPEAKER_00That is the perfect distinction.
SPEAKER_01Yes.
SPEAKER_00And because these models are delivering active intelligence rather than just empty features, it triggers their next major structural advantage.
SPEAKER_01They don't just work autonomously, they improve autonomously.
SPEAKER_00Exactly.
SPEAKER_01Here's where it gets really interesting. The article dives into these synthetic feedback loops. Every interaction the model has can theoretically be used to improve the next one.
SPEAKER_00And they aren't just relying on human users to teach them anymore, either.
SPEAKER_01Right. The AI models are now generating their own training data through simulated environments. They're iterating millions of times per day without any human input whatsoever.
SPEAKER_00To understand how synthetic data generation works, think about how early AI learned to play chess.
SPEAKER_01Oh, sure.
SPEAKER_00Initially, models studied millions of historical games played by human grandmasters. But eventually, the AI just ran out of human games to study.
SPEAKER_01There's only so much human data out there.
SPEAKER_00Exactly. So the engineers simply had the AI play against itself millions of times a second. It wasn't learning from human limitations anymore.
SPEAKER_01It was discovering entirely new strategies by relentlessly testing the boundaries of the rules against itself.
SPEAKER_00Precisely. And that same mechanism is now being applied to business logic, coding, and complex problem solving. It's a self-improving engine.
SPEAKER_01And to capture the immense value of that engine, the text notes these companies are aggressively pursuing vertical integration, meaning they don't just want to build the brain, they want to own the entire nervous system.
SPEAKER_00Right. They want to embed intelligence across every single layer of the technology stack.
SPEAKER_01Infrastructure, user interface, the underlying data pipelines.
SPEAKER_00Yes. Because if you're a business today and you build your product by simply licensing an AI through an API, an application programming interface, you are essentially just renting someone else's brain.
SPEAKER_01You send a question over the internet, their brain answers and sends it back.
SPEAKER_00Exactly. The problem is you don't own the brain. And more importantly, you aren't capturing the residual learning from that interaction.
SPEAKER_01The company that owns the API gets all that juicy feedback data.
SPEAKER_00Yep. When intelligence is the product itself and it's deeply vertically integrated, it creates a self-reinforcing ecosystem. Every single action makes the entire system smarter.
SPEAKER_01But wait, if the AI is essentially learning from synthetic data that it created itself inside its own simulation, isn't there a risk of it getting stuck in a bizarre echo chamber? What happens if it learns the wrong lesson from its own simulation?
SPEAKER_00And then reinforces that mistake a million times a second.
SPEAKER_01Yeah. That seems super dangerous.
SPEAKER_00What's fascinating here is that the text outlines that specific danger. It warns about models optimizing for metrics that actually diverge from real-world outcomes.
SPEAKER_01Like it gets really good at solving a problem in the simulation, but the simulation doesn't match reality anymore.
SPEAKER_00Exactly. If you have a system generating its own feedback loop millions of times a day, and the initial parameter is slightly misaligned with reality, you end up with a highly efficient but entirely misguided system.
SPEAKER_01And worse than just making a mistake, it'll optimize to execute that flawed directive at the speed of light.
SPEAKER_00Right. It will uncover extreme, bizarre edge cases to solve problems in ways human developers would completely miss, which creates massive liabilities.
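[Editor's note: the metric-divergence failure the hosts are describing fits in a few lines. This is a deliberately tiny, invented example: each action has a proxy score the system can measure inside its own loop, and a true real-world value it cannot see.]

```python
# Toy proxy-metric divergence (all numbers invented): the optimizer
# greedily maximizes the only signal it has -- the simulated proxy --
# and so reliably picks the action with the worst real-world value.

actions = {
    # action: (simulated_proxy_score, real_world_value)
    "clickbait_headline": (0.90, -0.5),
    "accurate_headline":  (0.55,  0.8),
    "boring_headline":    (0.20,  0.1),
}

chosen = max(actions, key=lambda a: actions[a][0])  # sees only the proxy
proxy, real = actions[chosen]
print(chosen, proxy, real)  # picks the clickbait: proxy 0.9, real value -0.5
```

Run this once and it is a harmless mistake; run it inside a feedback loop millions of times a day and, as the hosts note, the system becomes highly efficient at the wrong thing.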
SPEAKER_01But the author also argues that if a company successfully manages that alignment, this synthetic feedback loop transforms into an invisible moat. The intelligence compounds silently and relentlessly at near zero marginal cost.
SPEAKER_00It does. And if your competitor is just licensing a generic model through that API you mentioned, they are completely stalled by integration friction and data lag.
SPEAKER_01They simply cannot replicate the compounding advantage of a vertically integrated self-training ecosystem.
SPEAKER_00It is the ultimate compounding interest applied to cognitive ability.
SPEAKER_01So because that digital software is endlessly self-improving for essentially zero cost, the natural question for you listening is probably why isn't every startup doing this? What keeps this infinite digital growth anchored to reality?
SPEAKER_00And the source makes it very clear that the digital infinity is entirely bottlenecked by brutal physical reality.
SPEAKER_01Yeah, the infrastructure required to even enter this game is staggering.
SPEAKER_00The digital infinity is entirely dependent on physical constraints. The article discusses massive hardware efficiency gains, specifically noting specialized accelerators.
SPEAKER_01Right. Things like TPUs, tensor processing units, and next-generation NPUs, neural processing units. Let's break those down, because the alphabet soup of chips can get overwhelming. What is actually happening inside an NPU that makes it so vital?
SPEAKER_00Okay, think of a traditional computer chip, a CPU, like a really fast, meticulous line cook.
SPEAKER_01Okay, I like this.
SPEAKER_00They can chop vegetables incredibly fast, but they still have to do it sequentially, right? One carrot at a time.
SPEAKER_01Got it. Fast, but strictly one after another.
SPEAKER_00Right. A neural processing unit, an NPU, is like having 10,000 chefs in the kitchen all at once, each looking at a tiny piece of a massive recipe.
SPEAKER_01Oh wow. So they don't process tasks sequentially at all.
SPEAKER_00Exactly. They handle thousands of probabilities simultaneously. That specific math, processing a matrix of probabilities all at once, is the silicon equivalent of how a biological brain recognizes a face or translates a language.
SPEAKER_01And these chips are advancing at a phenomenal pace, dramatically reducing the latency and power consumption required to run these models.
SPEAKER_00Yes. They are the physical engines that allow massive models to operate profitably on edge devices and inside sprawling data centers.
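[Editor's note: the line-cook versus 10,000-chefs analogy maps directly onto ordinary matrix math. A neural layer is mostly a weight matrix times an input vector, and each output element is an independent dot product; that independence is what lets hardware with thousands of multiply-accumulate units compute them all at once. A pure-Python sketch:]

```python
# Each output of a layer is one row's dot product with the input.
# No row's result depends on any other row, so an accelerator can
# compute every row simultaneously instead of one carrot at a time.

def matvec(weights, x):
    """Matrix-vector product: one independent dot product per row."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

W = [[1, 2], [3, 4], [5, 6]]  # 3 output neurons, 2 inputs
x = [10, 1]
print(matvec(W, x))  # [12, 34, 56]
```

A CPU walks this loop sequentially; a TPU or NPU dedicates silicon to firing all of those multiply-adds in parallel, which is where the latency and power wins come from.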
SPEAKER_01But acquiring those chips is not like ordering laptops for a new office. The capital barriers to entry are just astronomical.
SPEAKER_00Beyond astronomical, really.
SPEAKER_01The author states that training a state-of-the-art foundational model requires billions of dollars in compute, incredibly rare engineering talent, and the political and financial leverage to secure tens of thousands of those high-end GPUs.
SPEAKER_00Not to mention the massive energy loads required to keep all those servers powered and cooled.
SPEAKER_01Right. You hear the phrase digital monopoly and you think of a couple of guys coding in a garage. But what you're actually describing sounds more like building a nuclear power plant. The runway is incredibly long and terrifyingly expensive.
SPEAKER_00If we connect this to the bigger picture, this extreme capital intensity causes a quiet, massive market consolidation. You know, companies are up against balance sheets, not just technology.
SPEAKER_01Right. If I'm a mid-level tech startup listening to this, I'm looking at my runway and realizing I'm locked out before I even start.
SPEAKER_00Exactly. You literally cannot play the game unless you have billions of dollars to set on fire. Only giants can sustain that kind of financial burn, creating a natural monopoly before the race even really begins.
SPEAKER_01So because that capital bottleneck is so brutal, the few companies that actually survive the burn are left in a position to dictate the terms of the entire market.
SPEAKER_00Yes. And the article talks a lot about the network effects of these foundational models.
SPEAKER_01Right. As more users flock to the best model, it gathers more data, which improves its performance, which in turn attracts more developers and enterprise integrations.
SPEAKER_00It spirals upward until the model becomes the definitive de facto standard for the industry. And once adoption tips past that critical threshold, competition basically fades to irrelevance.
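[Editor's note: the users-to-data-to-quality spiral can be sketched as a simple flywheel simulation. Every constant below is invented purely to show the shape of the dynamic, not taken from the article.]

```python
# Toy network-effect flywheel: users contribute data, data improves model
# quality (with diminishing returns), and quality attracts more users.
# The larger player compounds faster -- the "spiral upward" the hosts describe.

def flywheel(users, steps, data_per_user=1.0, quality_gain=0.001, pull=0.05):
    data = quality = 0.0
    for _ in range(steps):
        data += users * data_per_user          # more users -> more data
        quality = quality_gain * data ** 0.5   # more data -> better model
        users *= 1 + pull * quality            # better model -> more users
    return users, quality

small = flywheel(users=1_000, steps=50)
big = flywheel(users=10_000, steps=50)
print("small player:", small)
print("big player:  ", big)
```

Because the bigger player has more data at every single step, its growth multiplier is larger at every step, so its lead compounds rather than merely persisting, which is why adoption past a tipping point tends to lock in a de facto standard.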
SPEAKER_01The compounding advantage is just too massive for a newcomer to overcome. So what does this all mean? How are they capturing value in 2026?
SPEAKER_00It all revolves around pricing power, specifically in specialized high-stakes domains.
SPEAKER_01Like medical diagnostics and legal forecasting, right? The text highlights those. In those fields, clients aren't price sensitive in the way a consumer might be when choosing, you know, a streaming subscription.
SPEAKER_00Definitely not. If you are a major hospital network trying to diagnose a rare oncology case or a massive corporate law firm forecasting the outcome of a billion-dollar litigation, you are not hunting for a discount software license. In those arenas, clients are no longer paying for a simple tool. They are paying for certainty, certainty that has been mathematically distilled from billions of data points.
SPEAKER_01Right. They're buying a decision engine.
SPEAKER_00And when an AI model provides the most accurate medical diagnosis or the most reliable legal forecast available on the planet, price sensitivity just evaporates.
SPEAKER_01So the business mindset shifts entirely. You're no longer competing on price, you're redefining what is possible and monetizing accumulated intelligence.
SPEAKER_00And every single inference strengthens the foundational knowledge of the model.
SPEAKER_01Let's clarify that real quick. An inference is basically the moment the AI applies its training to a brand new, unseen problem in the real world, right?
SPEAKER_00Exactly right. It's the act of the model making a prediction, like looking at a new patient's x-ray for the very first time and making a call.
SPEAKER_01And every single time it makes an inference, the data collected from that interaction feeds back into the system.
SPEAKER_00Yes. The data isn't generic anymore. It's shaped by real-world decisions in high-value contexts. This self-reinforcing cycle of accuracy and exclusivity becomes the ultimate defensible asset in 2026.
SPEAKER_01It's just wild to step back and look at this. We've gone from the near-zero marginal costs of an infinite digital workforce through those self-improving synthetic loops up against massive hardware barriers.
SPEAKER_00To the ultimate prize, capturing premium value by selling absolute certainty in high-stakes fields.
SPEAKER_01It's a profound structural shift in how our economy values knowledge.
SPEAKER_00It really is. And it leaves us with a pretty provocative thought to ponder, building on what Anthony Starr wrote. If AI models become the de facto standard selling definitive certainty in fields like law and medicine, what happens to the value of human intuition?
SPEAKER_01Oh wow. That's a great question.
SPEAKER_00What happens to the concept of getting a second opinion? In a world dominated by decision engines trained on billions of data points, will human judgment eventually be seen as a luxury or a liability?
SPEAKER_01If that master printing press is materializing absolute certainty out of thin air, relying on a human guess might start to feel awfully risky.
SPEAKER_00It really might.
SPEAKER_01Thank you so much for joining us on this deep dive. Keep looking beneath the interface, keep questioning the economic engines driving your digital world, and keep challenging your assumptions about what comes next.