DX Today | No-Hype Podcast & News About AI & DX
The DX Today Podcast: Real Insights About AI and Digital Transformation
Tired of AI hype and transformation snake oil? This isn't another sales pitch disguised as expertise. Join a 30+ year tech veteran and Chief AI Officer who's built $1.2 billion in real solutions—and has the battle scars to prove it.
No vendor agenda. No sponsored content. Just unfiltered insights about what actually works in AI and digital transformation, what spectacularly fails, and why most "expert" advice misses the mark.
If you're looking for honest perspectives from someone who's been in the trenches since before "digital transformation" was a buzzword, you've found your show. Real problems, real solutions, real talk.
For executives, practitioners, and anyone who wants the truth about technology without the sales pitch.
America’s New National AI Framework: Preempting State Laws, Fair Use, and ‘Free Speech’ Guardrails (Mar 28, 2026)
Welcome to the DX Today podcast, your daily deep dive into the AI ecosystem. I'm Chris, and joining me as always is Sarah.
SPEAKER_01: Hey Chris, good to be here.
SPEAKER_00: Today I want to do a single-topic episode that's less new model drop and more the rules of the road. The White House just put out a national AI legislative framework. The headline that jumped out at me: they want a uniform federal approach, and they're explicitly warning that a patchwork of state AI laws would undermine innovation.
SPEAKER_01: Yeah, this is one of those moments where AI policy stops being abstract ethics talk and becomes actual governance architecture. Who gets to regulate, what gets regulated, and how quickly.
SPEAKER_00: Let's set the table. What did the framework actually say? No interpretations yet; what are the pillars?
SPEAKER_01: In the White House's own write-up, they lay out six key objectives. They talk about protecting children and empowering parents, safeguarding communities and small businesses, respecting intellectual property and supporting creators while still enabling AI to learn via fair use, preventing censorship and protecting free speech, enabling innovation and ensuring American AI dominance, and educating Americans and developing an AI-ready workforce. They also stress that it only works if it's applied uniformly. Otherwise, state-by-state rules become a drag on innovation.
SPEAKER_00: So pretty broad, but the part that feels spicy is the preemption idea: we should overrule state laws. That's a big swing.
SPEAKER_01: Route 50's coverage is more explicit. They describe a seven-pillar policy blueprint, and one pillar is literally establishing a federal policy framework that preempts what they call cumbersome state laws. That's where the federalism fight begins.
SPEAKER_00: Okay, translate preemption in human terms.
SPEAKER_01: In short, it means that if Congress passes a national AI law, it could override or limit what states can do in that area. Think of it like this: you're a company deploying an AI product nationwide. Instead of complying with 30 different state rules, you comply with one federal rulebook.
SPEAKER_00: That sounds efficient. But it also sounds like it could be a way to water down protections if the federal rulebook is weaker.
SPEAKER_01: Exactly. Preemption is not automatically good or bad. It depends on the baseline. If the federal framework is strong, preemption can prevent regulatory arbitrage and simplify compliance. If it's weak, it can block stronger state protections.
SPEAKER_00: Here's my devil's advocate moment. We already have the Internet as a precedent. A lot of people argue the U.S. won the Internet era because of relatively light-touch national policy.
SPEAKER_01: That's the argument. NetChoice, for example, explicitly supports the idea that you don't want 50 different confusing and conflicting regimes. Their point is, the internet revolution happened under a simpler environment.
SPEAKER_00: And the counter-argument is social media.
SPEAKER_01: Right. Critics will say we tried "move fast and break things" and got misinformation, mental health harms, and polarization. Americans for Responsible Innovation, an AI watchdog group, is quoted arguing this framework shields AI developers from liability, and they're worried it recommends banning state laws while also urging Congress not to create new open-ended liability for harms, including child harms.
SPEAKER_00: So the real question is, is the framework a serious attempt to reduce harms? Or is it a strategy to accelerate deployment and reduce liability?
SPEAKER_01: It's both.
SPEAKER_00: Let's go pillar by pillar, but keep it grounded. Start with kids. What did they propose?
SPEAKER_01: The White House says parents should have tools like account controls to protect children's privacy and manage device use. And they say AI platforms likely to be accessed by minors should implement features to reduce sexual exploitation of children or encouragement of self-harm.
SPEAKER_00: That sounds like a "yes, obviously." But implementation is everything. What counts as "likely to be accessed by minors"? Every app on Earth?
SPEAKER_01: Exactly. Definitions matter. If you define it broadly, you're effectively mandating child-safety design across most consumer AI. If you define it narrowly, you can dodge it.
SPEAKER_00: Next, communities and energy. This is the sleeper issue.
SPEAKER_01: Huge. Route 50 highlights the data center electricity angle. Data centers supporting AI can raise local electricity prices unless offsets are made. The framework pushes the idea that ratepayers shouldn't foot the bill, and it calls for streamlining permitting so data centers can generate power on-site and scale faster.
SPEAKER_00: That's interesting because it turns AI policy into industrial policy. Permits, power plants, grid reliability.
SPEAKER_01: Yes. The policy lever isn't just what models are allowed, it's how quickly we can build compute.
SPEAKER_00: But if you make permitting easier, local communities might say, hold on, that's noise, water, emissions, land use.
SPEAKER_01: Exactly. You have a tension between national strategy and local externalities.
SPEAKER_00: Okay, let's hit the third rail. Copyright, creators, and training data.
SPEAKER_01: The framework is trying to thread the needle. The White House says creative works and unique identities should be respected, but also says that for AI to improve it must be able to make fair use of what it learns from the world. Route 50 adds a sharper detail. It reports the administration's position that collecting copyrighted internet material for training is not necessarily a copyright violation, and that courts ultimately decide fair use.
SPEAKER_00: That's basically saying, we're pro-training.
SPEAKER_01: It's saying, we think training can be lawful under fair use, yes. But they also talk about protecting voice and likeness. So they're acknowledging the imitation problem.
SPEAKER_00: This is where I get stuck. If training is fair use, what are creators being compensated for?
SPEAKER_01: Potentially for specific likeness use, specific licensed corpuses, or for outputs that directly replicate protected material. Route 50 reports the framework suggests enabling licensing laws or a collective-rights mechanism, so rights holders can negotiate compensation for likeness or content use.
SPEAKER_00: So maybe training is allowed, but a licensing market is encouraged anyway?
SPEAKER_01: Possibly, but you can imagine two futures. One, training is generally fair use, and licensing is optional, used by companies that want premium data or risk reduction. Two, licensing becomes effectively mandatory because courts or regulations make the risk too high otherwise.
SPEAKER_00: And technically, the industry is drifting toward model plus retrieval, right? You might not need to bake everything into weights.
SPEAKER_01: Exactly. A lot of practical systems use retrieval-augmented generation, tool use, or licensed databases. That shifts the copyright battlefield from training to access and reproduction.
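[Editor's note] The model-plus-retrieval pattern mentioned here can be sketched minimally. This assumes a toy in-memory corpus, a naive word-overlap retriever, and a hypothetical prompt builder; real systems use vector search and an actual model API, and none of these names come from the episode.

```python
# Minimal RAG sketch: answers are grounded in retrieved, licensed documents
# rather than baked into model weights. Toy example; not a real retriever.

LICENSED_CORPUS = {
    "doc-1": "The framework proposes regulatory sandboxes for AI testing.",
    "doc-2": "Preemption would override conflicting state AI laws.",
    "doc-3": "Data centers may generate power on-site under streamlined permits.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        LICENSED_CORPUS.items(),
        key=lambda item: len(q & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the (hypothetical) model in retrieved text, with source IDs."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return f"Answer using only the sources below.\n{context}\n\nQ: {query}"

print(build_prompt("state AI laws and preemption"))
```

The design point is the one Sarah makes: because the answer cites retrieved documents, the legal question shifts from what was trained into the weights to what the system accessed and reproduced at query time.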
SPEAKER_00: Now the free speech pillar. This one is loaded.
SPEAKER_01: The White House text says the federal government must defend free speech and First Amendment protections while preventing AI from being used to silence lawful political expression or dissent. They say AI can't become a vehicle for government to dictate right-think and wrong-think. And they propose guardrails so AI can pursue "truth and accuracy without limitation."
SPEAKER_00: That phrase, "truth and accuracy without limitation," sounds nice, but it's also vague. Does it mean no safety filters?
SPEAKER_01: That's the ambiguity. In practice, model behavior is always constrained by policy and training. The question is, constrained toward what? Safety and harm reduction, or neutrality, or some definition of viewpoint diversity?
SPEAKER_00: And who decides what is censorship versus moderation? Because if you block hate speech, someone will call it censorship.
SPEAKER_01: Right. And if you allow everything, you can get harassment, self-harm encouragement, or disinformation. So the free-speech framing can collide with child safety, scam prevention, and other pillars.
SPEAKER_00: Pillar five, innovation and dominance. That's straightforward. Go faster.
SPEAKER_01: They explicitly call for removing outdated barriers, accelerating deployment across sectors, and broad access to testing environments.
SPEAKER_00: Like regulatory sandboxes?
SPEAKER_01: Route 50 says the framework recommends Congress establish regulatory sandboxes for AI applications, isolated environments to test and develop. It also mentions providing resources so industry and academic partners can access federal datasets for further training.
SPEAKER_00: That's a big deal.
SPEAKER_01: Yes, depending on which datasets and under what privacy constraints.
SPEAKER_00: And pillar six, workforce.
SPEAKER_01: The framework emphasizes AI fluency in the broader workforce and asks Congress to support non-regulatory methods to expand education programs.
SPEAKER_00: Okay, now let's tackle the central controversy. Sector-specific regulation versus a new AI regulator.
SPEAKER_01: Route 50 notes the framework says Congress should not create a new federal rulemaking body to regulate AI and should maintain a sector-specific approach with existing regulators.
SPEAKER_00: That feels like, don't make an AI FDA.
SPEAKER_01: Exactly. Instead, you'd have agencies like the FTC, SEC, FDA, Department of Labor, DOE, etc., applying AI policy in their domains.
SPEAKER_00: Pros and cons?
SPEAKER_01: Pro: those agencies already understand their sectors and have enforcement power. Con: AI is cross-cutting. You can end up with gaps, inconsistent enforcement, and slow coordination. Also, if the aim is to preempt state laws, you need a strong federal baseline. Sector-specific rules might not add up to a coherent national policy.
SPEAKER_00: Here's a hypothetical. Say a state has its own rules on AI in elections or consumer protection. Does a federal framework wipe those out?
SPEAKER_01: Potentially, depending on how the preemption clause is written. Some preemption frameworks carve out areas states can still regulate, like consumer protection or election administration. Route 50 notes the framework suggests preemption wouldn't apply to how states use AI themselves, or areas where states are uniquely suited to govern certain topics. But the boundaries are exactly what becomes the fight.
SPEAKER_00: Let's do a concrete example from the article. They mention scams.
SPEAKER_01: Yeah. The White House says Congress should augment the federal government's ability to combat AI-enabled scams and address national security concerns.
SPEAKER_00: That sounds like everyone agrees.
SPEAKER_01: In principle. In practice, the enforcement levers could be stronger identity verification, platform obligations, liability rules, or criminal penalties. Each approach has trade-offs.
SPEAKER_00: Now I want to zoom out. Why is this happening now? Why is preemption suddenly the focus?
SPEAKER_01: Because states are moving faster than Congress and companies are feeling the compliance burden. Also, in AI, the pace is fast enough that companies worry the patchwork becomes a real strategic disadvantage versus countries with a single national rulebook.
SPEAKER_00: But is patchwork always bad? States can be laboratories.
SPEAKER_01: Yes. States can experiment and learn what works. And sometimes state action forces federal action. But from the company perspective, patchwork can mean inconsistent product behavior, legal uncertainty, and slower rollouts.
SPEAKER_00: Let's talk about the fair use stance technically. If courts decide fair use, why does a policy framework matter?
SPEAKER_01: Because policy affects legislation, enforcement priorities, and how agencies interpret ambiguous areas. Even if courts decide, Congress can rewrite statutes, create licensing mechanisms, or define liability.
SPEAKER_00: And if the federal government signals training is fair use, that's a strong political signal.
SPEAKER_01: Right. It can shape negotiations, it can influence whether licensing is required, it can influence investor confidence.
SPEAKER_00: I want to bring in the creator angle. People hear "protect creators" and think it means payment. But the framework seems to say protect voice and likeness, but don't block training.
SPEAKER_01: That's the tension. Voice and likeness protections are more like personality rights and anti-impersonation. Training data is more like copyright and database rights. The framework is trying to separate those.
SPEAKER_00: And from an AI engineering standpoint, voice cloning is easy now.
SPEAKER_01: Yes, and enforcement is messy. You can watermark audio, you can require consent for voice models, you can create legal remedies for unauthorized imitation, but globally distributed models make enforcement difficult.
SPEAKER_00: So if the framework pushes a national approach, the question becomes: does it make enforcement stronger or weaker?
SPEAKER_01: Depends on what Congress does. A national rule can be stronger if it sets clear standards and resources enforcement. It can be weaker if it primarily blocks states from acting and relies on voluntary measures.
SPEAKER_00: Let's hit another sensitive part. The framework's emphasis on not creating a new regulator and on avoiding open-ended liability. That sounds like it reduces guardrails.
SPEAKER_01: That's why watchdogs are alarmed. If you preempt states and you don't create a strong federal enforcement mechanism, you can end up with a vacuum.
SPEAKER_00: But industry would say guardrails can be created via existing agencies and product safety standards.
SPEAKER_01: Right. And industry groups like the Business Software Alliance welcomed the idea of clear national rules and the workforce focus, along with access to federal datasets.
SPEAKER_00: If you're a startup founder, what do you want?
SPEAKER_01: Predictable rules, limited compliance overhead, and clarity on data rights. The worst case for a startup is you ship a product, then a state-by-state set of rules forces constant redesign. If you're a consumer advocate, you want enforceable rights, transparency, opt-outs, safety requirements, and redress when harmed, especially for minors. And if you're a big cloud provider building data centers, you want streamlined permitting and a stable environment for massive capex.
SPEAKER_00: Okay. I want to ask the what-could-go-wrong question. Suppose Congress acts quickly and passes a preemptive national AI law aligned with this framework. What are the failure modes?
SPEAKER_01: One failure mode is "floor becomes ceiling." The federal rules become the maximum, not the minimum, blocking states from experimenting with stronger protections. Another is vagueness. If definitions are unclear, like what counts as a high-risk AI system or what counts as censorship, you get years of litigation. A third is regulatory capture, where the rules are shaped primarily by the biggest companies.
SPEAKER_00: And the success mode?
SPEAKER_01: Clear national standards that reduce fraud and child harms, make transparency meaningful, create workable licensing or compensation pathways, and still keep the U.S. competitive.
SPEAKER_00: That sounds like threading a needle while riding a motorcycle.
SPEAKER_01: Pretty much.
SPEAKER_00: Final big question. How does this affect product teams building AI right now?
SPEAKER_01: It pushes teams to think about governance as a product requirement. If preemption becomes real, you might optimize for federal compliance rather than state-by-state compliance. For copyright, you'll want strong documentation of training-data provenance and licensing. For kids and safety, you'll need robust moderation, age-aware experiences, and escalation. And for free speech, you'll need transparent policies and probably auditability: why the model refused, what it was trained to do.
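[Editor's note] One way the training-data provenance point could look in practice. This is a minimal sketch with made-up field names and a made-up gating rule; real compliance schemas, license taxonomies, and legal review will differ.

```python
# Sketch: treat data governance as a product requirement by recording
# provenance per source and gating ingestion on documented rights.
from dataclasses import dataclass

@dataclass
class DataSource:
    source_id: str
    origin_url: str
    license: str           # e.g. "CC-BY-4.0", "proprietary-licensed", "unknown"
    contains_minors: bool  # triggers stricter child-safety handling
    consent_on_file: bool  # e.g. for voice/likeness material

def cleared_for_training(src: DataSource, allowed_licenses: set[str]) -> bool:
    """Reject sources without a documented license, and minor-related
    material without recorded consent. Illustrative policy only."""
    if src.license not in allowed_licenses:
        return False
    if src.contains_minors and not src.consent_on_file:
        return False
    return True

allowed = {"CC-BY-4.0", "proprietary-licensed"}
src = DataSource("s1", "https://example.com/corpus", "unknown", False, False)
print(cleared_for_training(src, allowed))  # prints False: undocumented license
```

The point is the audit trail: when a regulator or rights holder asks what a model was trained on, a record like this is the difference between an answer and a shrug.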
SPEAKER_00: So even though this is policy, it becomes architecture.
SPEAKER_01: Exactly. Policy becomes architecture.
SPEAKER_00: All right, let's land this plane. If you had to summarize the debate in one sentence... That's all for today's episode of the DX Today podcast. Thanks for listening, and we'll see you next time.