AI Regulation

Juries Hold Meta and YouTube Liable for Harm to Children

In March 2026, two U.S. juries delivered landmark verdicts holding Meta (owner of Facebook and Instagram) and YouTube liable for harm caused to child users through addictive platform design. A California jury found both companies negligent for knowingly creating addictive social media products that harmed a plaintiff who began using Instagram at age nine and YouTube at six. The companies were ordered to pay $6 million in compensatory and punitive damages to the woman, now 20 years old.

This verdict followed a separate New Mexico court ruling that ordered Meta to pay $375 million for failing to protect children from sexual predators on Facebook and Instagram. These decisions mark a significant legal reckoning for social media companies, with observers comparing the companies' responsibility to that of tobacco companies in historic product-liability cases.

Findings on Platform Design and Harm

Juries concluded that the social media platforms deliberately designed features such as infinite scrolling, auto-play, and algorithmic content recommendations to maximize user engagement—including that of children—despite knowing of their addictive nature and the associated risks. Unlike other media, social media lacks robust safeguards, and enforcement of age restrictions is weak, allowing children easy access to a broad spectrum of content, including harmful material.

Scientific research supports concerns about children's online vulnerability, noting that their developing brains limit their capacity to critically assess content. Social media's unregulated environment exposes minors to dangerous influences promoting self-harm, eating disorders, suicidal ideation, and sexual exploitation.

Calls for Government Regulation and Industry Accountability

The U.S. approach has largely relied on companies’ self-regulation, protected under Section 230 of the Communications Decency Act, which critics say has failed to protect children adequately. The California and New Mexico verdicts show courts stepping in due to legislative inaction. Meanwhile, legislation such as the Kids Online Safety Act has stalled despite bipartisan support.

In contrast, the European Union’s Digital Services Act (DSA) sets legally binding obligations for platforms accessible to minors, including age verification, default privacy settings, bans on profiling-based advertising, and risk assessments. The DSA aims to create safer online environments for children, though enforcement challenges remain.

Background

The social media industry has faced mounting criticism for prioritizing engagement and advertising revenue over user safety, particularly for children. Platforms collect and monetize detailed user data, including behavioral responses, often without full transparency or meaningful consent from minors or their families.

Recent cases have also highlighted risks associated with AI-driven chatbots used by teens, some of which have failed to recognize or respond appropriately to users expressing suicidal thoughts. These incidents have sparked calls for stronger oversight of emerging AI technologies interacting with young people.

Why it matters

These judicial rulings put pressure on governments and technology companies to implement stronger, enforceable safety standards aimed at protecting children online. The verdicts emphasize the need for regulatory frameworks that hold companies accountable for platform design choices and harm prevention, rather than leaving children to navigate online risks alone. Coordinated action by legislators, regulators, and civil society is essential to ensure compliance and meaningful change.


About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, reader-first journalism grounded in credible reporting, official statements, publicly available information, and relevant source material.
