Diritto al Digitale

Meta and Google Convicted: Platform Design Liability Is Reshaping Tech Law

DLA Piper Law Firm


In this episode of Diritto al Digitale, Giulio Coraggio, technology and data lawyer at DLA Piper, explores a landmark US decision that found Meta and Google liable for the design of their platforms—not for the content they host.

Giulio Coraggio analyses how this case reshapes platform liability, introducing the concept of design liability and redefining legal risk for social media and AI-driven systems. The episode examines the implications under EU law, including the Digital Services Act (DSA), AI Act, and GDPR, and explains how features such as algorithmic recommendations, infinite scroll, and autoplay may now become legally relevant.

A must-listen for legal professionals and companies navigating digital regulation, this episode provides a structured and practical analysis of how litigation may drive regulatory change—and why platform design is becoming a core compliance issue.


📌 You can find our contacts 👉 www.dlapiper.com

Giulio Coraggio

What if the next wave of litigation against big tech is not about illegal content, but about how platforms like Facebook, Instagram and YouTube are designed? Because this is exactly what is now happening to Meta and Google. A California jury has found that their platforms may be responsible for harm suffered by a minor user, not because of specific content, but because of the way their systems are engineered. And if this approach consolidates, it could fundamentally reshape the legal framework applicable to digital platforms.

This is the podcast Diritto al Digitale, the podcast where we explore the intersection between law and innovation. And today we are discussing why Meta and Google may have just opened a completely new front of liability, one that goes beyond content moderation and directly targets platform design.

Let's start from the legal core of this case. For years, companies like Meta and Google have relied on intermediary liability protections. The logic was clear: platforms are not responsible for third-party content, provided they act upon notice. But this case reframes the issue entirely. The alleged harm is not linked to what users post; it's linked to how platforms are designed to drive user behavior. Features such as infinite scroll, autoplay and algorithmic recommendations are now being assessed not as neutral tools, but as design choices that may generate foreseeable harm.

From a legal standpoint, this shifts the discussion towards product liability and duty of care. The implicit argument is that if a platform systematically optimizes for engagement by leveraging cognitive vulnerabilities, then its design may be considered defective. This is where the traditional regulatory approach starts to show its limits. For years, lawmakers have focused on content moderation, but the case involving Meta and Google suggests that this is only part of the picture. Because even perfectly lawful content, if delivered through an addictive architecture, can generate harm.
It is a critical legal shift. It means that compliance cannot be achieved simply by removing illegal content. It requires rethinking the structure of the platform itself. We are witnessing the emergence of a new regulatory paradigm: design is becoming a legally relevant category. And this is particularly important for platforms like Meta and Google, whose business models are deeply tied to user engagement. The regulatory focus is moving towards limiting mechanisms such as infinite scroll, giving users meaningful influence over recommendation systems, and introducing friction to counter compulsive usage patterns. These elements are no longer just UX decisions. They are increasingly framed as compliance obligations.

Let's translate this into concrete legal consequences.

Number one, companies like Meta and Google could face claims based on negligent design, failure to prevent foreseeable harm, and defective digital products. This represents a significant expansion of exposure.

Number two, in Europe, this discussion intersects with existing frameworks. The Digital Services Act already requires large platforms to assess and mitigate systemic risks. The AI Act regulates certain algorithmic systems. The GDPR governs profiling and automated decision making. However, none of these instruments directly addresses addictive design. This creates a regulatory gap, one that litigation against companies like Meta and Google may start to fill.

Number three, there is also a practical dimension. Both Meta and Google already optimize their platforms through continuous testing, so redesign is technically feasible. What changes now is the incentive structure: because design choices may become evidence in litigation, UX decisions are transformed into legal risk factors.

We should not overlook one critical element. The design of platforms operated by Meta and Google is deeply driven by AI systems. Recommendation engines are not static. They are adaptive, predictive, and optimized for engagement.
If liability shifts to design, then it inevitably extends to AI. This raises a structural question: can we regulate AI outputs without addressing the interfaces that deliver them? Or is platform design the real point where AI regulation becomes effective?

We are now facing two possible scenarios. Scenario number one, regulatory intervention: clear legal standards define acceptable design practices. Scenario number two, litigation-driven evolution: courts progressively shape the rules through cases involving companies like Meta and Google. The second scenario is more uncertain and more fragmented, but at this stage it is also more likely.

This case is not just about Meta and Google; it's about the future of digital regulation. When we start regulating design, we are effectively regulating behavior. We are influencing how users interact, decide and engage. And this takes us well beyond traditional legal categories.

So let me leave you with a few questions. Should companies like Meta and Google be legally responsible for how they design user attention? Is it time to explicitly regulate addictive design? And ultimately, do we want regulators or courts to define the limits of platform architecture?

If you found this episode interesting, feel free to reach out to me at giulio.coraggio@dlapiper.com to ask questions or recommend topics for future discussions. And do not forget to subscribe to our podcast, activate the notification bell, and leave us a 5-star review on Apple Podcasts or Spotify. I'm Giulio Coraggio, and this is the podcast Diritto al Digitale. Arrivederci!