
Meta Lost in Court, but Children Are Still Losing Online.

On liability, Section 230, and why a landmark verdict might not move the needle.



Two days ago, I wrote about Meta’s Teen Accounts and whether they represented a real child-safety policy or a carefully timed public relations move. I didn’t have to wait long for an answer.


A New Mexico jury just ordered Meta to pay $375 million for misleading the public about the safety of its platforms for children. This wasn’t a finding of carelessness or poor design. The jury found deception: Meta knew what was happening on its platforms, heard warnings from its own employees, and told the public something different anyway. Arturo Béjar, a former engineering leader at Meta, testified that he ran experiments on Instagram showing underage users were being served sexualized content, and that his own daughter was propositioned by a stranger on the platform. Internal research showed that at one point, 16% of all Instagram users reported being shown unwanted nudity or sexual activity in a single week. Meta knew this, and now a jury has said so.


And yet, a Meta spokesperson responded to the ruling, saying the company is “confident in our record of protecting teens online.”


Honestly, every time a ruling like this comes down, I feel a flicker of hope that this will finally be the moment we see real accountability. That this might be the turning point we look back on later and think, “Can you imagine that there were no protections or regulations back then?” Like we do with seatbelts or bike helmets or the age restrictions we now take for granted around alcohol and tobacco. And then I remember how this works. Tech companies have the resources to appeal indefinitely, to fund the legislators who write the rules they’ll eventually have to follow, and to reframe every lawsuit as an attack on innovation. Children don’t have lobbyists. The organizations doing the hardest work, the law enforcement officials, the nonprofit investigators, the university researchers working to understand and disrupt child exploitation online, are operating on a fraction of the resources Meta spends on a single product launch.


What makes this genuinely maddening is that partnership is possible. There are people doing extraordinary work to protect children from online predators and exploitation, and platforms like Meta, TikTok, and Roblox have the technical infrastructure, the data, and, frankly, the legal obligation to work alongside them. Instead, that infrastructure gets pointed at engagement metrics. The same algorithmic sophistication that could identify and remove predatory behavior is being used to keep children scrolling.


The legal framework hasn’t kept up. Section 230, which has shielded platforms from liability for user-generated content since 1996, was written before social media existed in any recognizable form. There is genuine bipartisan interest in reforming it, and that conversation is long overdue. Platforms should not be able to profit from systems that demonstrably endanger children while hiding behind liability protections designed for a completely different internet. As for this most recent legal win: the $375 million verdict is significant, but Meta’s annual revenue is roughly $160 billion. A fine they can absorb isn’t accountability; it’s a line item.


I keep coming back to the same question that drives this newsletter: why isn’t protecting children online treated as the urgent, non-negotiable priority that it is? We regulate what children can watch on television, what kind of milk they can be served at school, and what car seats they have to ride in. We have decided, as a society, that children deserve protection from known harms. These harms are documented in internal research, testified to under oath, and now confirmed by a jury.


To be clear, blaming tech companies doesn't solve everything, and no one is arguing that Instagram, or social media platforms generally, are solely responsible for every harm that comes to a child online. But specific design choices made by these companies contribute to patterns of use that put way too many young people at serious risk, and that is a shared responsibility this verdict begins to name, one these companies have largely evaded until now. I desperately want this case to be the landmark ruling that finally establishes what tech companies owe the children they profit from.


But I've watched this industry absorb fines, issue carefully worded apologies, and roll out just enough new features to change the subject, and I'm not (yet) holding my breath.



Soni Albright

Soni Albright is a teacher, parent educator, curriculum specialist, researcher, and writer for Cyber Civics with nearly 24 years of experience in education. She has taught the Cyber Civics curriculum for 14 years and currently works directly with students while also supporting families and educators. Her experience spans a wide range of school settings—including Waldorf, Montessori, public, charter, and homeschool co-ops. Soni regularly leads professional development workshops and is passionate about helping schools build thoughtful, age-appropriate digital literacy programs. Please visit: https://www.cybercivics.com/parent-presentations
