Online Safety Laws for Children & Teens Part 3
- Soni Albright
How Schools and Libraries Became the Front Line: The Children’s Internet Protection Act (CIPA)

About this series:
This series offers a practical overview of the key U.S. laws that govern online privacy and safety for children and teenagers. It begins with the Children’s Online Privacy Protection Act (COPPA), passed by Congress in 1998 and first implemented by the Federal Trade Commission (FTC) in 2000, with major rule updates in 2013 and 2025. Together, these regulations form the foundation for how companies collect, use, and share data from children under the age of 13. The series also examines related federal laws, including the Children’s Internet Protection Act (CIPA, 2000), which governs schools and libraries; the REPORT Act (2024), which expands online exploitation reporting requirements; and current proposals such as the Kids Online Safety Act (KOSA) and the EARN IT Act, which aim to address risks facing teens. Each installment breaks down what these laws actually say, how they are enforced, and what they mean for families, educators, and young people navigating digital spaces today.
Introduction: A Very Different Kind of Law
In Parts One and Two, we looked at COPPA, the law meant to protect children’s privacy, and the many ways companies have continued to ignore or exploit it. But COPPA isn’t the only federal law shaping children’s online experiences. At nearly the same moment COPPA came into force, Congress passed another statute that quietly shifted responsibility for children’s online safety away from the tech industry and onto schools and libraries.
That law is the Children’s Internet Protection Act (CIPA), passed in 2000. And unlike COPPA, which regulates companies, CIPA regulates institutions. If a school or library accepts federal E-rate funding for internet access or technology, it must comply with CIPA. In practical terms, this means the burden of filtering, monitoring, and managing student internet use falls squarely on the very institutions already struggling with outdated devices, limited budgets, and the impossible task of keeping kids safe in a digital world built without them in mind. It’s a monumental task.
CIPA, though, is not a privacy law. It is a filtering law. And that distinction matters.
What CIPA Actually Requires
Schools and libraries receiving E-rate funds must:
Install “technology protection measures” (filters). These must block visual depictions of obscenity, child sexual abuse material, and other content deemed “harmful to minors.”
Monitor minors’ online activity. Schools must supervise or monitor student internet use, though CIPA does not define how.
Teach appropriate online behavior. This includes digital citizenship, safe communication, and cyberbullying prevention.
Adopt an internet safety policy. The policy must address unauthorized access, hacking, inappropriate content, and other online risks.
On paper, these requirements sound straightforward. In practice, they raise complicated questions about equity, censorship, safety, privacy, and the role of public institutions in mediating tech harms.
When CIPA Reached the Supreme Court
CIPA faced immediate First Amendment challenges. Opponents argued that filtering software was blunt, error-prone, and capable of blocking constitutionally protected speech. In United States v. American Library Association (2003), the Supreme Court upheld CIPA, but only under specific conditions:
Adults must be allowed to request that filters be disabled without undue burden.
Libraries retain discretion over how filtering is implemented.
The funding condition applies only if institutions accept E-rate dollars.
The decision effectively preserved CIPA while acknowledging its core flaws: filters can block too much, block the wrong things, or block material the First Amendment protects.
This tension shapes every CIPA implementation today.
Filtering at Scale: The Promise and the Problem
Filters were Congress’s chosen technological solution for protecting children from harmful content. But filters fail in predictable ways:
1. They overblock essential resources.
Sex education sites, LGBTQ+ youth support pages, domestic violence help, mental-health resources, and basic health information are routinely blocked.
2. They underblock the content they were meant to stop.
Filtering software relies on keywords, image classifiers, and AI models, none of which are perfect; the sketch after this list shows how easily keyword-based blocking gets it wrong. Students will tell you plainly: “The filter blocks what I need and misses what I shouldn’t see.”
3. They disproportionately impact marginalized students.
Many students rely on the school internet as their primary or only access. These students experience the strictest blocking with the fewest alternatives.
4. They create a false sense of safety.
Filtering can stop a search result, but it can’t teach critical thinking, risk assessment, or digital judgment. That is the job of media literacy instruction. Once students leave the school network, the filter disappears.
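To make the overblocking and underblocking concrete, here is a minimal sketch of a keyword-based filter in Python. It is a toy illustration built on assumed keywords and page titles, not how any actual filtering product works, but commercial filters that layer URL categories, image classifiers, and AI models on top of this kind of matching inherit the same basic failure modes.

```python
# Toy illustration only: a naive keyword deny-list, not any vendor's real product.
BLOCKED_KEYWORDS = {"sex", "drugs", "suicide"}  # hypothetical deny-list


def is_blocked(page_title: str) -> bool:
    """Block a page if any deny-listed keyword appears in its title."""
    words = page_title.lower().split()
    return any(keyword in words for keyword in BLOCKED_KEYWORDS)


# Overblocking: legitimate health and support resources get caught.
print(is_blocked("Teen sex education and consent resources"))  # True  (blocked)
print(is_blocked("Suicide prevention hotline for students"))   # True  (blocked)

# Underblocking: harmful content slips through with a misspelling or slang.
print(is_blocked("s3x chat rooms for teens"))                  # False (allowed)
```

Smarter matching narrows these errors but does not eliminate them, which is exactly the pattern students report.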
CIPA was meant to be a safety net. In practice, it often becomes a simplistic technological gatekeeper that ignores the complexity of real online risk.
CIPA’s Most Neglected Requirement: Digital Citizenship & Media Literacy Instruction
Ironically, the one requirement that could genuinely improve student safety (instruction in appropriate online behavior) is the one most unevenly implemented.
Schools must teach:
safe online communication
appropriate social networking behavior
cyberbullying prevention
responsible technology use
But Congress provided no standards, no funding, and no enforcement. Predictably:
Many schools treat digital citizenship as a one-time assembly or slideshow.
Few educators receive training in media literacy or online safety.
Entire curricula are outsourced to corporate platforms whose primary interest is protecting the company, not the child.
A requirement that could have empowered students became a checkbox instead. This is a huge missed opportunity for protecting children in the online space.
From Filtering to Surveillance: How CIPA Has Been Interpreted Over Time
CIPA never required surveillance of students. But the vague instruction to “monitor minors’ online activity” has been interpreted and marketed by vendors as permission to install increasingly invasive tools:
keystroke tracking
browser and search monitoring
real-time alerts to administrators
scanning of student emails or documents
webcam-based proctoring
AI-driven “threat assessment” systems
These systems raise serious concerns:
false positives that lead to disciplinary action
misinterpretation of mental-health signals
indefinite storage of student data
no explicit parent or student consent
These tools, often deployed without independent evaluation, meaningful parental consent, or safeguards against bias, have become a multibillion-dollar market. Vendors effectively act as data brokers, integrating student information into proprietary platforms and selling districts access to their monitoring dashboards, and individual districts now spend hundreds of thousands of dollars on systems that monitor students far beyond what CIPA ever envisioned.
Even when districts act responsibly, compliance still requires handing enormous volumes of sensitive student data to private vendors, with no opt-out for students, no meaningful parental control, and no independent oversight of how that data is used or stored. The result: students are placed inside surveillance ecosystems that no adult would tolerate in the workplace, while the platforms driving their online lives remain largely unregulated.
Watertown Public Schools’ 2021 contract with GoGuardian details the extensive data-access and retention permissions needed to use its device-monitoring tools (source), while Minneapolis Public Schools halted use of Gaggle after it flagged more than 1,300 student messages, including private journal entries (source). Vancouver Public Schools spent over $328,000 on Gaggle services alone over three years (source). Research from UC San Diego and the ACLU confirms that these companies routinely monitor students 24/7, generate automated “risk scores,” and store large quantities of student data, often without meaningful parental awareness or consent (source). Together, these examples show how a vague statutory phrase has fueled a multimillion-dollar surveillance industry inside schools, even as the companies shaping children’s online experiences face almost no comparable regulation (source).
The Tension Between COPPA and CIPA
Part One taught us that COPPA places responsibility for children’s privacy on companies. Part Three shows us that CIPA places responsibility for children’s safety on schools.
But these mandates often collide:
COPPA says: minimize data collection.
CIPA says: monitor student activity.
See what they did there?
Schools are left trying to reconcile contradictory legal obligations using limited budgets, insufficient training, and products built by companies that benefit from the confusion.
Meanwhile, the apps, games, and even ‘educational’ platforms students use every day still lack enforceable age-appropriate design standards or basic child-safety obligations.
CIPA asks schools to manage harms created by an industry with no incentive to change.
What CIPA Reveals About Our Approach to Youth Online Safety
When Congress passed CIPA, it assumed:
filters could replace education,
schools could control children’s online environments,
and the primary risks were about content, not data, design, or algorithms.
In 2025, none of those assumptions holds.
Kids carry the internet in their pockets.
Digital harms stem from profiling, design choices, algorithmic amplification, and unregulated AI, not just explicit content.
And CIPA’s framework cannot solve problems created by platforms far outside school control.
Where We Go From Here
CIPA isn’t going anywhere. Schools depend on E-rate funding, and filtering systems are now embedded in public education. But we cannot rely on filtering or surveillance to keep kids safe. Real safety comes from education, transparency, media literacy, and meaningful accountability for the platforms that shape children’s online lives.
In Part Four, we’ll look at the REPORT Act and the shift toward platform accountability, mandatory reporting, and harm mitigation, especially as the lines between online safety, AI, and youth mental health continue to blur.
© Soni Albright, 2025. Do As I Say, Not As I Do

Soni Albright is a teacher, parent educator, curriculum specialist, researcher, and writer for Cyber Civics with nearly 24 years of experience in education. She has taught the Cyber Civics curriculum for 14 years and currently works directly with students while also supporting families and educators. Her experience spans a wide range of school settings—including Waldorf, Montessori, public, charter, and homeschool co-ops. Soni regularly leads professional development workshops and is passionate about helping schools build thoughtful, age-appropriate digital literacy programs. Please visit: https://www.cybercivics.com/parent-presentations