Diritto al Digitale
Diritto al Digitale is the must-listen podcast on innovation law, brought to you by Giulio Coraggio, data and technology lawyer at the global law firm DLA Piper. Each episode explores the cutting-edge legal challenges shaping our digital world—from data privacy and artificial intelligence to the Internet of Things, outsourcing, e-commerce, and intellectual property.
Join us as we illuminate the legal frameworks behind today’s breakthroughs and provide insider insights on how innovation is transforming the future of business and society.
You can contact us using the details available on dlapiper.com
AI Act vs US AI Policy Framework: Regulatory Divergence and Its Impact on Global AI Governance
The global race to regulate artificial intelligence is accelerating—and companies can no longer afford to ignore it.
In this episode of Diritto al Digitale, Giulio Coraggio, Technology and Data Lawyer at DLA Piper, analyzes the growing divergence between the EU AI Act and the US AI Policy Framework, two competing models that are reshaping AI governance, compliance, and innovation strategies worldwide.
The European Union is advancing a risk-based regulatory framework under the AI Act, reinforced by the latest developments of the Digital Omnibus package, introducing stricter obligations for high-risk AI systems, data governance, and compliance by design. At the same time, the United States is pursuing a more flexible, policy-driven approach based on soft law, voluntary standards, and ex post enforcement.
What does this mean in practice for companies operating globally?
This episode explores:
- The latest updates on the EU AI Act and Digital Omnibus
- The structure and impact of the US National AI Policy Framework
- Key differences between ex ante compliance and ex post enforcement models
- Legal, operational, and reputational risks for businesses using AI
- Why AI governance is no longer optional for organizations
If your business is developing, deploying, or integrating artificial intelligence, understanding these regulatory dynamics is critical to managing risk and staying competitive.
Listen now to gain practical insights into AI regulation, compliance strategies, and the future of global AI governance.
📌 You can find our contacts 👉 www.dlapiper.com
There is a structural question that companies, regulators and investors can no longer ignore: is the real competitive advantage in artificial intelligence not the technology itself, but the regulatory environment in which that technology is developed and deployed? Because today we are not witnessing a single global framework for artificial intelligence. What we are witnessing is fragmentation. On one side, the European Union is refining a comprehensive risk-based regulatory regime through the AI Act and the latest developments under the so-called Digital Omnibus package. On the other, the United States is consolidating a policy-driven, innovation-oriented approach through its recently approved National AI Policy Framework and federal guidance. This is not just a legal divergence; it's a structural shift that is already influencing investment flows, product design, data strategies, and ultimately market leadership.

I'm Giulio Coraggio, a technology and data lawyer at the global law firm DLA Piper. This is Diritto al Digitale, the podcast where we explore the intersection between law and innovation.

So let's start with Europe. The AI Act represents the first comprehensive attempt globally to regulate artificial intelligence through a binding horizontal framework. But what is particularly relevant today is not the text of the AI Act itself; it's how that text is evolving. Recent developments linked to the so-called Digital Omnibus initiative, and positions adopted by the European Parliament through its committees as well as by the Council of the European Union, highlight a clear trend: implementation complexity is forcing regulatory recalibration. We're seeing a postponement of obligations relating to high-risk AI systems, potentially shifting key deadlines to 2027 and 2028.
We are also seeing the reintroduction and strengthening of registration requirements in the EU database for high-risk systems, the confirmation of stricter conditions for the processing of special categories of personal data, and the introduction of new prohibited practices, including AI systems generating non-consensual intimate content. This is not deregulation; it's regulatory engineering. The EU is acknowledging that enforceability is as critical as ambition.

So how is the EU model structured? The AI Act is built on a structured risk taxonomy: AI systems are classified as prohibited, high-risk, limited-risk, or minimal-risk. For high-risk systems, the regulatory burden is significant and highly operational. Companies must implement robust risk management systems aligned with lifecycle obligations, data governance frameworks ensuring quality, relevance and bias mitigation, extensive technical documentation and record keeping, human oversight mechanisms, conformity assessments, and registration in the EU database.

This is not simply a legal framework; it's a compliance architecture. It effectively embeds regulatory requirements into the design, development and deployment of AI systems, what we can define as compliance by design and by default. However, this approach has clear implications: it increases time to market, operational costs, and internal governance complexity.

If we shift to the United States, the paradigm changes entirely. There is no single comprehensive federal AI law equivalent to the AI Act. Instead, the US regulatory landscape is characterized by the recently approved National AI Policy Framework, executive orders, sector-specific regulations, and a stronger reliance on existing legal tools, including consumer protection and competition law. The approach is fundamentally different: it prioritizes innovation and technological leadership, public-private collaboration, flexible standards, and enforcement that is primarily ex post.
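The risk taxonomy and the per-tier obligations described above can be pictured as a simple lookup from tier to compliance checklist. This is a minimal, purely illustrative sketch: the tier names and obligation labels are simplified paraphrases of the episode's summary, not the AI Act's legal text, and it is in no way legal advice.

```python
# Illustrative sketch of the AI Act's risk-tier logic as described in the
# episode. Labels are simplified paraphrases, not statutory wording.

RISK_TIERS = {
    "prohibited": {"deployable": False, "obligations": []},
    "high": {
        "deployable": True,
        "obligations": [
            "risk management system aligned with lifecycle obligations",
            "data governance and bias mitigation",
            "technical documentation and record keeping",
            "human oversight mechanisms",
            "conformity assessment",
            "registration in the EU database",
        ],
    },
    "limited": {"deployable": True, "obligations": ["transparency duties"]},
    "minimal": {"deployable": True, "obligations": []},
}


def obligations_for(tier: str) -> list[str]:
    """Return the illustrative compliance checklist for a risk tier,
    refusing outright for prohibited systems."""
    entry = RISK_TIERS[tier]
    if not entry["deployable"]:
        raise ValueError(f"{tier!r} AI systems may not be placed on the market")
    return entry["obligations"]
```

The point of the sketch is the asymmetry: most of the compliance burden concentrates on a single tier, which is why classification is the first and most consequential step in an AI Act assessment.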
Even when regulatory intervention occurs, it often takes the form of voluntary commitments, soft law instruments, or agency guidance. This creates a significantly more agile environment for AI development, but it also shifts risk from ex ante compliance to ex post liability.

So there are two models and two regulatory philosophies. At this stage, it's clear that we're not simply comparing two regulatory frameworks; we are comparing two different regulatory mindsets, two different approaches. The EU model is precautionary: it assumes that AI systems inherently generate risks that must be mitigated before deployment. The US model is permissive: it assumes that innovation should not be constrained unless and until concrete risks materialize.

This divergence has immediate practical consequences. For example, an AI provider operating in the EU must integrate compliance processes at the earliest stages of development. The same provider in the US may prioritize speed, scalability, and market penetration, addressing legal risks at a later stage. For globally active companies, this is not a theoretical debate; it's an operational challenge, because they cannot choose one model over the other. They must manage both simultaneously. This creates a need for a multi-layered AI governance framework, jurisdiction-specific compliance strategies, and scalable internal controls capable of adapting to different regulatory expectations.

And this is where many organizations are currently exposed. Without a structured AI governance model, companies risk integrating AI solutions into their core operations without fully assessing regulatory risks, data protection implications, liability exposure, and compliance obligations. The issue is timing: these risks often emerge when the AI system is already embedded in business processes. At that point, remediation becomes significantly more expensive, operational disruption increases, and reputational damage becomes a concrete risk.
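The "manage both simultaneously" point can be made concrete with a small sketch: for a global provider, jurisdictional checkpoints accumulate rather than being chosen between. This is a hypothetical illustration of the two mindsets summarized in the episode; the field values are my shorthand, not official classifications.

```python
# Hypothetical sketch: why a global AI provider must satisfy the union of
# jurisdictional checkpoints, not pick one model. Shorthand labels only.
from dataclasses import dataclass


@dataclass(frozen=True)
class RegulatoryModel:
    jurisdiction: str
    enforcement: str       # "ex ante" (precautionary) or "ex post" (permissive)
    compliance_stage: str  # where in the lifecycle obligations mainly bite


MODELS = {
    "EU": RegulatoryModel("EU", "ex ante", "design and development"),
    "US": RegulatoryModel("US", "ex post", "post-deployment"),
}


def governance_checkpoints(jurisdictions: list[str]) -> set[str]:
    """A provider active in several jurisdictions inherits every
    jurisdiction's checkpoint: the strictest stage wins by accumulation."""
    return {MODELS[j].compliance_stage for j in jurisdictions}
```

In practice this is why a multi-layered governance framework tends to converge on the EU's earlier checkpoint: controls applied at design time also cover the later, ex post exposure.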
There is a narrative suggesting that regulation slows innovation, but that is in many respects an oversimplification. The absence of regulation does not eliminate risk; it redistributes it, and often amplifies it. The real question is not whether regulation is needed; it's whether we can design regulatory frameworks that are predictable, scalable, and technologically neutral. Because ultimately, legal certainty is itself a driver of innovation.

Looking ahead, several developments will be critical. In the European Union: the finalization of the Digital Omnibus adjustments, the progressive implementation of the AI Act, and the development of secondary legislation and standards. In the US: further consolidation of federal AI policies, increased activity by regulatory agencies, and potential sector-specific legislative interventions. At the global level, we expect increasing regulatory competition and possibly gradual convergence driven by market forces.

So let me leave you with this question: will the future of artificial intelligence be shaped primarily by engineers or by regulators? And more importantly, are companies truly prepared to operate in a world where AI regulation moves at a different speed across jurisdictions?

If you have views on this topic, feel free to reach out to me via the contacts available on dlapiper.com. And if you found this episode interesting, subscribe to the podcast, activate the notification bell, and leave us a five-star review on Apple Podcasts or Spotify. I'm Giulio Coraggio, and this is the podcast Diritto al Digitale.