

Meta’s Teen Accounts: Embedded Safety Tools or PR Play?


There’s a word for when a company spends $700,000 running television ads about how much it cares about your kids while simultaneously facing a landmark federal trial over whether it deliberately engineered those same kids’ dependence on its platforms. That word is audacity.


Meta has run its “Teen Accounts” ads more than 3,500 times since November on CNN, Fox, ABC, and local stations across the country, with a particular concentration in Los Angeles (where the trial is taking place) and Washington, D.C. (where policymakers are watching). The company briefly paused the campaign in January, then resumed it just as the trial began. AdImpact, an advertising data firm, tracked it all. This is not simply a safety initiative; it is a strategic communication effort aimed at shaping public perception, including that of potential jurors and policymakers.


I have seen these messages firsthand in placements like The New York Times (both in podcast advertising and digital content), using phrases such as “Learn more about Meta’s efforts to protect teens online.” As a parent of four, I find that framing deeply concerning. It asks families to extend trust that has not been earned, at the very moment, no less, that trust is under scrutiny.


And it may work unless people know how to look beyond the message. That is where media literacy matters.


What the Documents Say


In January 2026, newly unsealed court filings in the social media addiction trials gave the public a look inside how Meta, Google, Snap, and TikTok actually talked about children in private. These are internal emails, chat logs, slide decks, and deposition excerpts, not polished public statements crafted for press releases.



In 2016, a Meta company email noted that “Mark has decided that the top priority for the company in 2017 is teens.” Another message from the same period stated: “Engaging the vast majority of teens in an area/school with our products is crucial to driving overall time spent.”


In 2018, Meta’s Youth Team circulated a document discussing a product called “Tweens on Facebook” and a “private mode,” a second account to give teenagers “plausible deniability,” explicitly inspired by the popularity of “finstas.” The same year, Meta was running a paid Teen Ambassador Program: recruiting 13- to 17-year-olds, having them sign NDAs, and paying them with swag and incentives to act, in their own words, as “our plug at local high schools.”


A 2019 internal study on teen mental health found that “teens talk of Instagram in terms of an ‘addicts narrative’ — spending too much time indulging in a compulsive behaviour that they know is negative but feel powerless to resist.” The document also noted: “Teens can’t switch off from Instagram even if they want to.”


In 2020, an employee message exchange compared Instagram to drugs and slot machines. The exchange ended: “Oh my gosh y’all IG is a drug.” “Lol, I mean, all social media. We’re basically pushers.”


A 2023 slide deck titled “Controlling the Narrative” suggested that Instagram could “use school networks as a lever for acquisition” and position the platform as essential to navigating school transitions.


A 2018 internal document calculated the lifetime value of a single 13-year-old user at $270. The ads running right now cost $700,000 for a single spot.


This is what “protect teens online” looks like from the inside.


The Teen Accounts “Fix” Has Already Been Evaluated


Meta’s Teen Accounts, the centerpiece of its current ad campaign, were rolled out in September 2024. They reportedly limit who can contact a teen, apply content filters, and set time limits. Meta calls them a meaningful safety measure.


Fair Play for Kids, a child advocacy organization, published an independent assessment of Teen Accounts in 2025. Their findings: the protections are inconsistent, easily circumvented, and fall well short of what the platform’s own internal research identified as necessary. “Two-thirds (64%) of safety tools tested found to be ineffective, with just 17% working as described by Meta.” The problems Meta’s own employees flagged in 2019, such as compulsive use, inability to disengage, and content that normalizes harmful behaviors, are not solved by a contact filter and a bedtime reminder.


Official Meta communication re: Teen Accounts

Expert testimony submitted in the current trial concluded that Meta’s platforms “were not designed to be reasonably safe for children” and that age verification and parental consent systems were ineffective. Another expert found that Meta “did not provide effective warnings to adolescent users and parents about the risks and harms.”


Meta’s Teen Accounts were designed to look safe. But when engagement is the actual priority, safety can’t be.


The Government Response: Watered Down and Stalled


Congress has been discussing child online safety legislation for years, a topic we’ve covered in depth in this blog (here, here, here, here…). The Senate passed the Kids Online Safety Act (KOSA) with a sweeping bipartisan majority, 91 to 3. KOSA’s defining feature was a duty-of-care provision: a legal requirement that platforms act in the best interests of children, enforceable in court.


The House version that cleared committee in early 2026 removed the duty of care entirely. House Republicans advanced the KIDS Act on a largely partisan basis, with Democrats objecting that the bill was not only weaker than the Senate version, but that some provisions — particularly moving age verification to app stores rather than platforms — would actually benefit Meta and other large platforms by making compliance easier for them to manage on their own terms. Rep. Jake Auchincloss called it a “Meta markup.”


While the House markup was still ongoing, the Senate unanimously passed an update to the Children’s Online Privacy Protection Act (COPPA 2.0). Senator Ed Markey took to the Senate floor to say the House version “undermines strong bipartisan compromise.”


A Senate-passed child online safety bill (approved 91–3) has stalled after the House produced a weaker version. The companies that lobbied against the original are now spending millions on ads highlighting their own voluntary safety measures. It's a familiar pattern: industry-friendly legislation arrives wrapped in the language of child protection.


I’ll Say It Again: Media Literacy Is Not Optional


My daughter, a high schooler, gave her younger brother exactly the right advice when he asked about getting an Instagram account: “That’s dangerous. Start somewhere gentler. Get your skills under control first.”


She arrived at that conclusion not from a school curriculum or a parental lecture, but from lived experience: watching what these platforms do to people she knows and noticing how they make her feel. That’s media literacy in practice. It shouldn’t have to be learned through experience. It should be taught.


What does media literacy look like when it comes to Big Tech’s PR? It looks like asking better questions: Why is this message showing up right now? Who paid for it and who benefits? It means looking beyond what companies say to what they’ve actually done, and treating vague claims like “we’ve made improvements” with skepticism, especially when there is little regulation or accountability. At its core, media literacy is about questioning motives, examining evidence, and resisting the impulse to take corporate messaging at face value.


It also means teaching young people—and, frankly, adults—to evaluate a company’s track record rather than its current advertising campaign. Meta has a track record. It spans nearly a decade of internal documents describing children as a growth market, calculating their lifetime monetary value, and outlining strategies to cultivate compulsive use despite documented risks and harms.


The goal of media literacy education is not cynicism, but discernment: the ability to ask who funded a message, what has been omitted, and whose experiences are not represented.

In this case, much of the answer already exists in the public record. The challenge is not access to information, it is teaching people to recognize it, question it, and use it.



Soni Albright

Soni Albright is a teacher, parent educator, curriculum specialist, researcher, and writer for Cyber Civics with nearly 24 years of experience in education. She has taught the Cyber Civics curriculum for 14 years and currently works directly with students while also supporting families and educators. Her experience spans a wide range of school settings—including Waldorf, Montessori, public, charter, and homeschool co-ops. Soni regularly leads professional development workshops and is passionate about helping schools build thoughtful, age-appropriate digital literacy programs. Please visit: https://www.cybercivics.com/parent-presentations
