Build by AI

The Transparency Problem I 7th April

When robotaxi companies won't tell us how often humans have to take control, and UnitedHealth bets $3 billion on AI for your healthcare, we're facing some serious transparency issues. Meanwhile, OpenAI alumni are launching their own $100M fund and the company is pushing both safety fellowships and sweeping policy proposals. Plus a critical security flaw is being actively exploited, and Google quietly drops an offline AI dictation app. It's a day that highlights the gap between AI promises and reality - and why that should worry all of us.
SPEAKER_01

Okay, so I've been thinking about this all morning, and it's honestly kind of disturbing. These robotaxi companies are operating thousands of cars on public roads, and they flat-out refuse to tell us how often a human has to jump in and take control.

SPEAKER_00

Wait, they won't disclose intervention rates? That's like an airline refusing to tell you how often pilots have to override the autopilot. That's not just concerning, that's terrifying.

SPEAKER_01

Right? And this is the same week UnitedHealth drops three billion dollars on AI, and OpenAI is pushing massive policy frameworks. There's this huge gap between the AI promises and what companies are actually willing to tell us about how this stuff really works.

SPEAKER_00

Dude, if companies won't be transparent about basic operational metrics, how are we supposed to trust them with our transportation, our health care, our everything? This transparency problem is way bigger than people realize.

SPEAKER_01

You're listening to Build by AI, the daily show that cuts through the AI hype to find out what's actually happening. I'm Alex Shannon.

SPEAKER_00

And I'm Sam Hinton. Today we're diving deep into this transparency crisis in AI. Plus, OpenAI is making some major moves with safety fellowships and industrial policy proposals. And there's a critical security vulnerability being actively exploited right now.

SPEAKER_01

And we'll talk about why open AI alumni launching their own hundred million dollar fund might be the most interesting story nobody's talking about.

SPEAKER_00

But let's start with this robotaxi situation because it really gets to the heart of everything wrong with how AI companies communicate with the public. This is such a red flag, Alex. The intervention rate is literally the most important metric for understanding how autonomous these vehicles actually are. It's like the fundamental measure of whether the technology actually works as advertised.

SPEAKER_01

Right. And they're operating these services commercially now. People are paying money to ride in these cars, assuming they're getting truly autonomous transportation. Shouldn't customers know how often a human has to step in?

SPEAKER_00

Absolutely. And um here's what really bugs me about this. If the intervention rates were low and impressive, don't you think these companies would be shouting those numbers from the rooftops? The fact that they won't share this data suggests the numbers probably aren't as good as their marketing implies.

SPEAKER_01

That's a fair point, but let me play devil's advocate for a second. Maybe they're worried about competitors getting access to operational details, or maybe the data is more complex than a simple percentage.

SPEAKER_00

Okay, but come on, Alex. You can share aggregate intervention rates without revealing proprietary algorithms. Airlines publish safety statistics, pharmaceutical companies publish clinical trial data. This feels like they want all the benefits of operating in public while avoiding any real accountability.

SPEAKER_01

You make a good point about other industries, but here's another angle. What if they're not sharing because the intervention categories are complicated? Like, is a remote operator giving directions the same as taking full control? Where do you draw the line?

SPEAKER_00

That's exactly why we need transparency. If the categories are complex, then explain the categories. Break it down. Here's how often we provide navigation assistance. Here's how often we take emergency control, here's how often we handle edge cases. The complexity isn't an excuse for total opacity.
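To make that kind of breakdown concrete, here's a minimal sketch of what an aggregate intervention report could look like, computed from a hypothetical event log; the categories, counts, and mileage are all invented for illustration, not real operator data.

from collections import Counter

# Hypothetical event log: one entry per remote intervention, by category.
# Categories, counts, and mileage are invented for illustration only.
events = (
    ["navigation_assistance"] * 48
    + ["emergency_takeover"] * 3
    + ["edge_case_handling"] * 12
)
fleet_miles = 250_000  # total autonomous miles in the reporting period (made up)

counts = Counter(events)
for category, count in sorted(counts.items()):
    print(f"{category}: {count} events, ~{fleet_miles / count:,.0f} miles per event")
print(f"overall: {len(events)} events, ~{fleet_miles / len(events):,.0f} miles per event")

Even a table this simple would let riders compare routine navigation nudges against genuine emergency takeovers.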

SPEAKER_01

And what's really concerning is this sets a precedent for other AI applications. If robotaxi companies can get away with hiding basic performance metrics, what happens when AI systems are making decisions about healthcare, finance, criminal justice?

SPEAKER_00

Exactly. This is why I think this story is so much bigger than just transportation. We're establishing norms right now for how AI companies interact with regulators and the public, and those norms are going to apply everywhere.

SPEAKER_01

And think about the practical implications for consumers. If I'm choosing between a traditional taxi and a robotaxi, I should know the real risk profile, right? If humans are intervening every ten minutes versus every ten hours, that's completely different.

SPEAKER_00

Right. And there's also the question of what happens when the remote operators can't reach the car. Network connectivity issues, system failures, are passengers just stranded? We don't know because companies won't talk about failure modes.

SPEAKER_01

That's a scenario I hadn't even considered. If your robotaxi loses connection to its remote support team in an emergency, are you basically riding in a very expensive paperweight?

SPEAKER_00

Potentially, yeah. And these are the kinds of questions that should be answered before we have thousands of these vehicles on the road, not after we have our first major incident.

SPEAKER_01

So what should people be watching for here? How do we push back against this kind of opacity?

SPEAKER_00

I think we need to start demanding basic operational transparency as a condition for public deployment. If you want to operate on public roads, serve public customers, you need to share basic performance metrics. Period.

SPEAKER_01

And consumers have power here too. If enough people start asking these questions before getting in a robotaxi, companies will feel pressure to provide answers. Vote with your wallet.

SPEAKER_00

The other thing to watch is whether regulators step in. Right now it feels like we're in this wild west period where companies can deploy first and worry about oversight later. That has to change.

SPEAKER_01

Speaking of transparency issues, early reports suggest UnitedHealth Group is making a massive $3 billion investment in AI technologies. This is one of the largest healthcare companies in the world, basically doubling down on AI across their entire operation.

SPEAKER_00

$3 billion? That's not just dipping their toes in the water. That's a complete transformation bet. But here's what worries me. UnitedHealth is primarily an insurance company. So their AI is probably focused on finding ways to deny claims more efficiently, not necessarily improving patient care.

SPEAKER_01

That's pretty cynical, Sam. I mean, couldn't this investment also be about improving diagnosis, streamlining operations, reducing administrative overhead that ultimately benefits patients?

SPEAKER_00

Look, I hope you're right, but let's be realistic about incentives here. Insurance companies make money by collecting premiums and minimizing payouts. If I'm spending three billion on AI, I'm probably looking for ways to automate the process of finding reasons to reject expensive treatments.

SPEAKER_01

But that's exactly why we need more details about how this money is being spent. Are they investing in diagnostic tools, patient communication systems, or are they building more sophisticated denial algorithms? The public has a right to know.

SPEAKER_00

Here's another massive AI deployment affecting millions of people. And we have basically no visibility into how it's going to work or what the safeguards are. Three billion dollars in AI spending could revolutionize healthcare or make it worse for patients.

SPEAKER_01

Let me ask you this though. Even if some of this AI is used for claims processing, couldn't that actually be better for patients? Right now insurance decisions are often inconsistent, slow, and frustrating. If AI can make those processes more predictable and faster, isn't that an improvement?

SPEAKER_00

Potentially, but only if the AI systems are designed with patient outcomes as the primary goal. If they're optimized for cost reduction, then faster just means you get denied coverage more efficiently. Speed without the right incentives isn't necessarily progress.

SPEAKER_01

What's particularly concerning is that healthcare AI decisions can literally be life or death. If an AI system incorrectly denies coverage for a critical treatment, that's not just an inconvenience, that could kill someone.

SPEAKER_00

Right. And unlike robotaxis, where you can see if the car crashes, healthcare AI failures might be invisible. A denied claim, a misdiagnosis, a delayed treatment, these failures might not be attributed to the AI system, even when they should be.

SPEAKER_01

That's a really good point about visibility. If someone dies because an AI system incorrectly flagged their treatment as unnecessary, how would we even know? The decision-making process is completely opaque.

SPEAKER_00

Exactly. And there's also the question of bias. Healthcare AI systems have a history of performing differently for different demographic groups. If United Health's AI is less likely to approve expensive treatments for certain populations, that's essentially automated discrimination.

SPEAKER_01

And with three billion dollars of investment, we're not talking about a pilot program. This is going to affect millions of patients immediately. The scale makes any problems potentially catastrophic.

SPEAKER_00

Which brings us back to the accountability question. If UnitedHealth is spending this much on AI, shouldn't there be some public reporting on how it's performing? Success rates, error rates, demographic impacts?

SPEAKER_01

So what should patients and healthcare advocates be demanding in terms of transparency and oversight for this kind of massive AI investment?

SPEAKER_00

First, public reporting on AI decision making in healthcare. If an AI system is influencing coverage decisions, patients should know. Second, human appeal processes that don't just rubber stamp the AI recommendations. And third, regular audits for bias and accuracy, especially around different patient populations.
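On that third point, the most basic version of a bias audit is just comparing approval rates across demographic groups and flagging large gaps for human review. A toy sketch with invented numbers, not anything from a real insurer:

# Toy audit: compare AI claim-approval rates across demographic groups.
# All figures are invented for illustration.
decisions = {
    # group: (approved, denied)
    "group_a": (940, 60),
    "group_b": (870, 130),
}

rates = {group: a / (a + d) for group, (a, d) in decisions.items()}
baseline = max(rates.values())

for group, rate in rates.items():
    # Ratio against the best-treated group; values well below 1.0
    # would warrant a closer human review of the model's decisions.
    print(f"{group}: approval rate {rate:.1%}, ratio vs. baseline {rate / baseline:.2f}")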

SPEAKER_01

I'd also add that patients should have the right to know when AI is involved in their care decisions. If an algorithm is recommending against your treatment, you should be told that explicitly.

SPEAKER_00

Absolutely. Informed consent should include AI systems, and there should be clear pathways for patients to request human review of AI influenced decisions.

SPEAKER_01

Keep an eye on this one because UnitedHealth's approach is going to set the standard for how the entire healthcare industry deploys AI. The decisions they make with this $3 billion are going to affect how every American interacts with the healthcare system.

SPEAKER_00

And if they get away with deploying AI without transparency or accountability, every other insurance company is going to follow the same playbook. This is a defining moment for healthcare AI governance.

SPEAKER_01

Alright, let's shift gears to something that's been flying under the radar. OpenAI alumni have launched a new venture capital fund called ZeroShot, and they're targeting a $100 million first fund. According to multiple sources, including TechCrunch, they've already started writing checks to portfolio companies.

SPEAKER_00

This is fascinating, because think about what this means. Some of the smartest people who helped build the most valuable AI company in the world are now betting their own money on what comes next. That's like getting a peek at the roadmap from people who actually know where this technology is heading.

SPEAKER_01

But I'm curious about the timing here. Why are OpenAI alumni leaving to start an investment fund now? Are they cashing out because they think OpenAI has peaked? Or do they see opportunities that OpenAI can't or won't pursue?

SPEAKER_00

I think it's probably the latter. OpenAI is basically locked into this big foundation model race with Google and Anthropic. But there are probably thousands of specialized AI applications that make sense as standalone companies, but don't fit into OpenAI's strategy. These folks have the technical knowledge to evaluate those opportunities.

SPEAKER_01

That makes sense, but doesn't this create some potential conflict of interest issues? If you're a founder pitching to ZeroShot and they pass, could that information somehow make its way back to OpenAI? Or conversely, could OpenAI's strategic decisions be influenced by ZeroShot's portfolio?

SPEAKER_00

That's a really good point. Silicon Valley has always had these informal networks where information flows between companies. But when it's former employees of the most important AI company investing in AI startups, the potential for conflicts gets pretty serious. Though to be fair, this happens in every industry.

SPEAKER_01

But there's also the question of non-compete agreements and proprietary information. How much of what these OpenAI alumni know is still confidential? And how do they separate their investment decisions from inside knowledge about OpenAI's future plans?

SPEAKER_00

That's going to be a delicate balance. On one hand, their expertise is exactly why this fund could be valuable. They understand the technology deeply. On the other hand, they have to be careful not to use confidential information or create competitive conflicts.

SPEAKER_01

What I find most interesting is what this says about the AI investment landscape. $100 million used to be a massive fund, but in AI right now, that's almost like a seed stage fund. The capital requirements for competitive AI companies have just exploded.

SPEAKER_00

Yeah, but I think that's actually why this fund makes sense. Not every AI company needs to train foundation models. There are probably tons of opportunities to build valuable AI applications using existing models, and those companies might only need millions, not billions.

SPEAKER_01

So you're thinking ZeroShot is betting on the application layer rather than the infrastructure layer.

SPEAKER_00

Exactly. Let OpenAI, Google, and Anthropic burn billions competing on foundation models. There's probably a whole ecosystem of AI-powered tools, services, and applications that can be built profitably on top of those models with much smaller capital requirements.

SPEAKER_01

And frankly, that might be where the real value creation happens for most businesses. Most companies don't need their own foundation model. They need AI that solves specific problems in their industry.

SPEAKER_00

Right, like AI for legal document review, or AI for medical imaging, or AI for supply chain optimization. Vertical applications that use general AI capabilities but are tailored for specific use cases.

SPEAKER_01

But here's what I'm wondering. Does having OpenAI alumni on the investment side create an advantage for their portfolio companies? Like, do these startups get preferential access to OpenAI's APIs or early information about new models?

SPEAKER_00

That would be problematic if true, but I suspect there are probably walls in place to prevent that kind of favoritism. Though you're right that the relationships and knowledge could provide indirect advantages.

SPEAKER_01

The thing to watch here is ZeroShot's first few investments. That'll tell us a lot about where some of the smartest people in AI think the real opportunities are outside of the foundation model arms race.

SPEAKER_00

Absolutely. And if this fund is successful, we'll probably see more AI company alumni launching their own funds. The expertise these folks have is incredibly valuable for evaluating AI startups.

SPEAKER_01

Speaking of OpenAI, they've announced a new safety fellowship program designed to support independent research in AI safety and alignment. Multiple sources are reporting this is a pilot program aimed at developing the next generation of talent in AI safety research. Some people will say this is really about replacing the safety team members who left earlier this year, and that's one way to interpret it. But couldn't this also just be a genuine effort to expand safety research beyond OpenAI's internal team?

SPEAKER_00

Sure. And I actually think that's probably the right approach. AI safety is too important to be handled entirely by the companies building the systems. You need independent researchers who don't have commercial pressures influencing their work.

SPEAKER_01

But here's what I'm wondering. How independent can this research really be if OpenAI is funding it? Even with the best intentions, there's got to be some influence on what questions get asked and how results get interpreted.

SPEAKER_00

That's the eternal problem with industry-funded research in any field. But honestly, right now, most AI safety research is either happening inside companies or with very limited academic funding. At least this creates more opportunities for people to work on safety full-time.

SPEAKER_01

And there's a talent pipeline issue too, right? We need more people who understand both the technical aspects of AI systems and the safety implications. Universities aren't really equipped to train that kind of interdisciplinary expertise yet.

SPEAKER_00

Exactly. Most computer science programs are still focused on making AI systems work better, not making them safer. And most ethics or policy programs don't have the technical depth to really understand how these systems fail.

SPEAKER_01

What kind of safety research do you think this fellowship will focus on? Alignment problems, interpretability, robustness testing?

SPEAKER_00

If I had to guess, probably a mix of everything. Alignment is the sexy existential risk stuff that gets headlines, but honestly, we probably need more boring research on things like bias detection, failure modes, and human AI interaction patterns.

SPEAKER_01

The boring stuff is probably more immediately useful, honestly. Like understanding when and why AI systems give wrong answers seems more actionable than solving the alignment problem for superintelligent AGI.

SPEAKER_00

Right. And there's this whole category of safety research around deployment and monitoring that gets overlooked. How do you detect when an AI system is behaving differently than expected in production? How do you roll back safely when something goes wrong?
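One bare-bones version of that kind of production monitoring is comparing a live metric against a baseline window and alerting on a sustained shift. A minimal sketch with synthetic numbers; a real deployment would use proper statistical tests and track more than one metric.

from statistics import mean, stdev

# Synthetic example: daily rate of some observable behavior (refusals,
# escalations, low-confidence flags). All numbers are made up.
baseline = [0.031, 0.029, 0.033, 0.030, 0.032, 0.028, 0.031]  # previous release
recent = [0.030, 0.034, 0.047, 0.052, 0.055]                  # this week

mu, sigma = mean(baseline), stdev(baseline)

for day, value in enumerate(recent, start=1):
    z = (value - mu) / sigma
    if abs(z) > 3:  # crude threshold; a sustained excursion is the signal to dig in
        print(f"day {day}: {value:.3f} is {z:.1f} sigma from baseline -- investigate, consider rollback")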

SPEAKER_01

And this also connects to our transparency theme from earlier. One of the biggest safety issues might just be that we don't understand how these systems work or when they fail.

SPEAKER_00

Exactly. You can't ensure safety for systems you don't understand, so hopefully this fellowship produces researchers who can help make AI systems more interpretable and predictable, not just more powerful.

SPEAKER_01

But there's also a question of whether OpenAI will actually listen to safety research that tells them to slow down or change course. The commercial pressures are enormous right now.

SPEAKER_00

That's the big test, isn't it? You know, it's one thing to fund safety research, it's another thing to actually implement the recommendations even when they're inconvenient or expensive.

SPEAKER_01

The proof will be in whether these fellows can publish freely, even if their results are critical of OpenAI or the broader AI industry. That's going to be the real test of how independent this research actually is.

SPEAKER_00

And whether OpenAI actually changes their practices based on safety research findings. The fellowship could be great for developing talent, but if it doesn't influence how AI systems are built and deployed, then what's the point?

SPEAKER_01

Alright, let's rapid fire through a few more stories. First up, Spain's Zupal just raised $130 million in Series B funding to map the Earth using AI, and they've partnered with L3Harris to manufacture sensors for their spacecraft.

SPEAKER_00

Earth observation is having a moment. Between climate monitoring, agriculture optimization, and disaster response, there's huge demand for better satellite data. The AI angle is probably using machine learning to process and interpret all that imagery automatically.

SPEAKER_01

The L3Harris partnership is interesting too. That's a major defense contractor, which suggests there might be government or military applications beyond just commercial earth mapping.

SPEAKER_00

Good point. Satellite imagery analysis is huge for national security applications. Being able to automatically detect changes in infrastructure, troop movements, agricultural patterns, that's incredibly valuable intelligence.

SPEAKER_01

And $130 million is serious money for a Spanish startup. That suggests the market opportunity for AI-powered Earth observation is massive, probably much bigger than most people realize. Next, early reports suggest Google quietly launched an offline-first AI dictation app, powered by their Gemma models. It's designed to compete with apps like Whisperflow and operates without internet connectivity.

SPEAKER_00

Finally, someone gets it. Privacy-first AI that works offline is going to be huge. People are getting tired of everything going to the cloud. If Google can make Gemma competitive for local applications, that could be a real differentiator.

SPEAKER_01

This is smart positioning against OpenAI too. While everyone else is focused on massive cloud-based models, Google is building AI that can run locally on your device. That's a completely different value proposition.

SPEAKER_00

And for dictation specifically, offline makes total sense. You don't want your voice data going to servers, especially for sensitive or personal content. Local processing solves the privacy problem completely.
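We don't know what's inside Google's app, but the underlying pattern is easy to demonstrate with openly downloadable models. A minimal sketch using the open-source Whisper package: the model weights download once, and after that transcription runs entirely on the local machine, so no audio leaves the device. The file path is just a placeholder.

import whisper  # pip install openai-whisper; weights are fetched once, then cached locally

# After the one-time download, inference runs entirely on this machine.
model = whisper.load_model("base")
result = model.transcribe("meeting_notes.wav")  # any local audio file
print(result["text"])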

SPEAKER_01

The fact that they're launching quietly is interesting, though. Maybe they're testing the waters before making a big announcement. Or maybe they want to see how it performs before committing to a major marketing push.

SPEAKER_00

Or maybe they're being strategic about not giving OpenAI and others too much advanced notice. Let the product speak for itself before competitors have time to respond.

SPEAKER_01

There's also a critical security story. Flowise's AI agent builder is apparently under active exploitation via a CVSS 10.0 vulnerability, with something like 12,000 instances exposed online.

SPEAKER_00

CVSS 10.0 means maximum severity, as bad as it gets. And 12,000 exposed instances means this could affect a lot of people. If you're using Flowise, you need to patch immediately or take your instances offline until this is fixed.

SPEAKER_01

Remote code execution is particularly nasty because it means attackers can potentially run whatever code they want on affected systems. That's not just data theft, that's complete system compromise.

SPEAKER_00

And the fact that it's under active exploitation means this isn't theoretical. There are real attackers using this vulnerability right now. This is an immediate, urgent security situation.
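If you want to track advisories like this yourself, the public NVD CVE API can be filtered by keyword and severity. A rough sketch; the keyword here is illustrative, and you'd swap in whatever tools you actually run:

import requests  # pip install requests

# Query the public NVD CVE API for critical-severity entries matching a keyword.
url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = {"keywordSearch": "Flowise", "cvssV3Severity": "CRITICAL"}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    summary = cve["descriptions"][0]["value"]
    print(cve["id"], "-", summary[:120])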

SPEAKER_01

This also highlights a broader issue with AI development tools. A lot of these platforms are moving fast and maybe not prioritizing security as much as they should.

SPEAKER_00

Yeah, when you're racing to build AI applications, security often gets treated as something you'll fix later. But vulnerabilities like this show why security has to be built in from the beginning.

SPEAKER_01

Finally, OpenAI published proposals for an ambitious industrial policy framework for what they're calling the intelligence age, emphasizing people-first principles and expanding opportunity as AI advances.

SPEAKER_00

This feels like OpenAI trying to shape the regulatory conversation before governments figure out what they want to do. Smart move, but I'm skeptical that industrial policy written by AI companies is going to prioritize regular people over corporate interests.

SPEAKER_01

The timing is definitely strategic. Better to propose your own framework than have regulations imposed on you. But the people first messaging suggests they understand there's public skepticism about AI's impact on jobs and society.

SPEAKER_00

I'd be curious to see the specific policy recommendations. Are they talking about retraining programs, universal basic income, antitrust enforcement? The details matter more than the high-level messaging.

SPEAKER_01

And there's a question of whether other AI companies will get behind OpenAI's framework or push their own competing versions. Industry unity would be powerful, but these companies also have different business models and priorities.

SPEAKER_00

The fact that they're calling it the intelligence age is also interesting framing. It suggests they see AI as a fundamental shift comparable to the Industrial Revolution, not just another technology upgrade.

SPEAKER_01

If you zoom out and look at everything we covered today, there's this fascinating tension between AI companies wanting to move fast and deploy everywhere, but not wanting to be transparent about how their systems actually work in practice.

SPEAKER_00

Right, and I think we're at this inflection point where that lack of transparency is going from being an annoyance to being genuinely dangerous. When AI systems are controlling cars, making healthcare decisions, processing sensitive data, we can't just trust that companies have our best interests at heart.

SPEAKER_01

What's interesting is that OpenAI seems to understand this with their safety fellowship and policy proposals, but then you have robotaxi companies and healthcare AI deployments happening with minimal oversight.

SPEAKER_00

That disconnect is telling. OpenAI talks about safety and responsible deployment. They're also the company pushing hardest to deploy AI everywhere as fast as possible. There's a gap between the messaging and the reality.

SPEAKER_01

And the ZeroShot fund is interesting in this context too. You have OpenAI alumni basically betting that the real value is going to be in the application layer, not the foundation model race. That suggests even insiders think the current foundation model arms race might not be sustainable.

SPEAKER_00

Which brings us back to the transparency issue. If the future of AI is thousands of specialized applications rather than a few giant foundation models, then we need transparency frameworks that can scale to evaluate all those different use cases.

SPEAKER_01

The healthcare story is particularly concerning because it shows how quickly AI can scale without oversight. $3 billion in investment means UnitedHealth can deploy AI across their entire operation almost immediately, affecting millions of patients.

SPEAKER_00

And unlike robotaxis, where you might have early adopters choosing to take risks, healthcare AI affects everyone, whether they want it or not. If your insurance company uses AI to evaluate your claims, you don't get to opt out.

SPEAKER_01

The security angle is important too. The Flowise vulnerability shows that as AI tools proliferate, the attack surface is expanding rapidly. We're not just worried about AI being misused. We're worried about AI development tools being compromised.

SPEAKER_00

And that connects to the broader governance challenge. How do you regulate an ecosystem where new AI applications are being deployed constantly, often by companies that didn't exist five years ago?

SPEAKER_01

I think 2026 is going to be the year when the public and regulators start demanding real accountability from AI companies. The technology is too powerful and too pervasive for this Wild West approach to continue.

SPEAKER_00

The question is whether companies will embrace transparency voluntarily or whether it's going to take regulation to force it. And honestly, based on the robo-taxi situation, I'm not optimistic about the voluntary approach.

SPEAKER_01

But there's also a business case for transparency. Companies that can demonstrate their AI systems work reliably and safely should have a competitive advantage over those that won't share basic performance metrics.

SPEAKER_00

That's true. But only if customers and regulators actually demand transparency. Right now, many AI deployments happen without the end users even knowing AI is involved.

SPEAKER_01

The Google Offline Dictation app is a good example of a different approach. By keeping everything local, they're solving privacy and transparency issues by design rather than trying to manage them after the fact.

SPEAKER_00

Exactly. Privacy-preserving AI architectures could be the solution to a lot of these trust issues. If the AI processing happens on your device, you don't have to trust the company with your data.

SPEAKER_01

Keep an eye on Europe. They're typically more aggressive about tech regulation, and whatever framework they develop for AI transparency is probably going to influence the rest of the world.

SPEAKER_00

And watch for the first major AI-related incident that captures public attention. That's probably what it's going to take to force real change in how these systems are governed and deployed.

SPEAKER_01

The optimistic view is that we're still early enough to get this right. But the window for proactive governance is closing fast as AI systems become more embedded in critical infrastructure.

SPEAKER_00

Right. And the stakes keep getting higher. Today it's robotaxis and health insurance. Tomorrow it could be power grids and financial systems. The governance frameworks we establish now are going to determine how AI affects society for decades.

SPEAKER_01

That's going to do it for today's show. As always, if you found this useful, hit subscribe wherever you listen to podcasts. It really helps us reach more people who are trying to make sense of this AI transformation.

SPEAKER_00

And if you're working on AI transparency, safety research, or just have thoughts on any of these stories, reach out to us. We love hearing from listeners who are actually building in this space.

SPEAKER_01

We'll be back tomorrow with more AI news and analysis. I'm Alex Shannon.

SPEAKER_00

And I'm Sam Hinton. See you tomorrow on Build by AI.