The recent legal victories against Big Tech are promising, but not enough to protect kids
A series of legal victories against Meta and YouTube captured headlines as families won redress for harms to minors linked to social media and AI products. These cases tested a consequential legal question: whether traditional product liability principles can hold companies accountable for design features that create foreseeable risks to minors. In both New Mexico and California, courts found the platforms liable for consumer harms, ordering the companies to pay millions to plaintiffs.
These cases mark a fundamental shift in litigation targeting platform harms. Rather than disputes over content, they turn on the safety of the underlying product design, squarely testing whether product liability principles can be applied to digital services. The verdicts will have far-reaching implications for similar litigation, including a lawsuit brought by a bipartisan coalition of 32 state attorneys general over social media addiction.
Even as these cases provide precedent for holding digital platforms accountable, relying on case law alone leaves minors vulnerable to the unpredictability of future court decisions and the risk of appeals overturning favorable verdicts. Congress must act now to codify a clear duty of care that requires platforms and AI developers to prioritize minors’ safety.
From Content Moderation to Product Design
In litigation involving social media harms, platforms have frequently invoked Section 230 of the Communications Decency Act and the First Amendment as defenses. Platforms argue that they function as neutral intermediaries that host third-party speech and therefore cannot be held liable for user content. They also assert that recommendation algorithms and other design features reflect protected editorial judgments about how content is organized and presented.
Courts have issued mixed rulings on these claims. Some courts have accepted arguments that recommendation algorithms constitute protected editorial decisions, even where plaintiffs allege that algorithmic amplification contributed to harmful outcomes. Other courts have allowed claims to proceed where plaintiffs allege that platform design features themselves — rather than third-party speech alone — materially contributed to unlawful or dangerous behavior.
Similar legal questions are emerging in lawsuits involving generative AI developers. Companies including OpenAI, Character.AI, and Google face claims alleging that their chatbots harmed minors, including through detrimental mental health impacts. Defendants have argued that Section 230 immunity applies where models generate information derived from end-users’ prompts. Courts have not uniformly accepted this argument. In Garcia v. Character Technologies, for example, the court allowed claims for strict product liability, negligence, and wrongful death to proceed.
These cases illustrate an unsettled legal boundary: when a platform is a product that can be held liable for its design, and when it merely hosts or organizes third-party speech protected by Section 230 and the First Amendment. The recent Meta cases were a step toward clarifying this distinction.
Negligence and Defective Design
A private lawsuit alleged that Instagram and Facebook were designed to encourage harmful, prolonged engagement among minors, citing Meta’s internal research that demonstrated the company’s knowledge of adverse effects on adolescents’ well-being. The California jury found Meta negligent in the design of its products and found that the defective design was a substantial factor in causing harm to consumers.
Allegations Related to Exploitation Risk
In a case brought by New Mexico’s attorney general, claims focused on whether Meta’s product design facilitated the sexual exploitation of minors. The jury found that Meta violated New Mexico’s Unfair Practices Act by failing to design sufficient safeguards against grooming and exploitation of minors, despite its awareness of how such abuses can occur on its platforms.
While the courts play an important role in holding platform companies accountable, they operate retrospectively and often take years to reach resolution. If policymakers rely solely on post hoc liability through the courts, minors will remain exposed to foreseeable harms today and for decades to come.
A Fundamental Safeguard: Minor Safety Duty of Care
What unites these cases is more than a reckoning over past harm; it is an opportunity to ensure that major technology products are designed, tested, and released safely. Unlike pharmaceuticals, automobiles, or even children’s toys, platforms are subject to no national safety standards and face no mandatory pre-deployment safety testing to assess and remedy foreseeable harms to minors.
Congress must step in to establish duty-of-care obligations for minors’ safety. While the Senate has attempted to codify this requirement in the Kids Online Safety Act (KOSA), the current House version omits it. Sen. Marsha Blackburn (R-TN), KOSA’s sponsor, recently released a federal AI legislative framework that includes the Senate version of KOSA along with additional duty-of-care obligations for chatbot developers.
Whether a duty-of-care obligation appears in the enrolled version of KOSA carries significant consequences. Without it, platforms may deploy products without adequate safeguards, raising the likelihood that foreseeable harms to minors occur at scale.
While these cases were a major win for child safety advocates, more needs to be done. Without legislative action to enshrine a duty of care, courts could still revert to treating platforms as mere facilitators of speech rather than as products with design-based responsibilities toward minors.
Instead of waiting for the courts to decide, Congress should proactively establish clear statutory obligations that safeguard minors now and in the years ahead.

