
Age Assurance and the Erosion of Online Anonymity in the United States

  • Human Rights Research Center

February 10, 2026


Summary


For decades, the internet has largely relied on a model of self-reported age — the familiar “check here if you are 18” box.[1] While criticized for its ineffectiveness, this system preserved a baseline level of anonymity, allowing users to access online information without disclosing personally identifiable information (PII). Today, this paradigm is being replaced by an expanding digital identity regime.[2] In the name of child protection, governments worldwide and states across the United States are implementing age assurance — an umbrella term for any method a platform uses to determine a user’s age.[3] This includes requiring users to provide government-issued identification, biometric data, third-party credentials, and many other forms of identification to access certain online content.[4] These laws create tension between protecting children and the fundamental human rights to seek, receive, and impart information anonymously.[5]


[Image source: Natural News]

A Brief History of Age Assurance Law in the United States


In 1996, Congress passed the Communications Decency Act (CDA), which criminalized the transmission of indecent material to children under 18.[6] The law was catalyzed by a moral panic among anti-pornography activists who feared that children would encounter explicit content on the internet.[7] Civil liberties groups challenged the CDA, arguing that its sweeping anti-indecency restrictions burdened adults’ access to constitutionally protected expression. In Reno v. American Civil Liberties Union (1997), the Supreme Court struck down the CDA’s “indecency” and “patently offensive” provisions as unconstitutionally vague, emphasizing the absence of reliable means of age verification that would not also burden lawful adult speech.[8][9]


This was not the end of the battle. In 1998, Congress enacted the Child Online Protection Act (COPA). The law included an “affirmative defense” under which websites could avoid liability by implementing certain age-assurance measures, such as ensuring the website “requires the use of a credit card, debit account, adult access code, or adult personal identification number,” “accepts a digital certificate that verifies age,” or “uses other reasonable age verification measures.”[10] Like the CDA, COPA faced immediate constitutional challenges, raising questions about whether such requirements infringed on free expression and privacy online. Ultimately, COPA was struck down, as courts found it both constitutionally problematic and practically ineffective.[11]


At the same time, Congress enacted the Children’s Online Privacy Protection Act (COPPA) in 1998. Unlike the CDA or COPA, COPPA protects the privacy of children under 13 by regulating the online collection, use, and disclosure of their PII.[12] Covered websites are required to obtain parental consent before collecting personal data from children under 13. As a result, age determination became a compliance requirement — platforms needed a way to identify which users triggered COPPA’s protections. Sites implemented age-screening mechanisms, embedding age assurance into access and data-collection practices.


In the decade following the practical failures of the CDA and COPA, many online platforms adopted self-reported age as the standard method of age gating, typically asking users to enter their birth dates during account creation.[13] This approach sought to balance protecting minors, ensuring regulatory compliance, and preserving adults’ access to lawful content.[14] The model remained largely unchallenged until the 2020s, when Louisiana became the first state to depart from it.


Effective January 1, 2023, Louisiana Act 440 established the first legally enforceable state-level age-verification requirement for online adult content.[15] The law requires commercial websites whose content consists of at least one-third material deemed “harmful to minors” (a broad regulatory term) to implement “reasonable” age-verification systems, such as government-issued identification, third-party verification services, or comparable mechanisms.[16] As with the CDA, this law arose from concerns about minors’ exposure to online pornography and sexualized content on social media, with lawmakers arguing that traditional parental controls were insufficient.[17] After passing, Louisiana’s statute quickly became a template.


Following its enactment, other states moved to adopt similar laws, borrowing its definitions, thresholds, and enforcement mechanisms. Mississippi's SB 2346[18], Virginia's SB 1515[19], Arkansas's SB 66[20], and Texas's HB 1181[21] were all enacted in 2023, with dozens more passing across the South and beyond. By the end of 2025, more than half of U.S. states had passed some form of age-verification law governing minors’ access to online adult content and, in some cases, social media platforms more broadly.[22] This rapid proliferation transformed age assurance from a largely unsuccessful federal initiative into a central state regulatory strategy.


The Supreme Court’s 2025 ruling in Free Speech Coalition, Inc. v. Paxton cemented this trend, affirming that states may require websites that host sexually explicit content to implement age-verification systems, such as government-issued identification or other age-assurance mechanisms, to prevent minors from accessing that material.[23] These recent developments mark a turning point: age assurance has moved from a contested idea into a central feature of U.S. online regulation, redefining the boundaries of digital free expression.


How Current Age Assurance Operates


Government-issued identification requires users to submit IDs such as driver’s licenses, passports, or state-issued identity cards. These systems create persistent digital records tied to a user’s identity and can be combined with “liveness checks,” such as requiring users to take real-time selfies, read specific phrases, or hold identifying information on camera. These methods expand the collection of sensitive PII, much of which is unnecessary for verifying age. Researchers describe such systems as a “privacy nightmare,” warning that “all that ID information is a gold mine for identity theft.”[24] These databases are attractive to hackers because they contain immutable, high-value information that can be exploited for financial and identity fraud, account takeovers, and other forms of abuse. Unlike usernames and passwords, government-issued identification generally cannot be changed once compromised, meaning a single breach can cause permanent harm. Centralizing this information in large databases only magnifies the risk.


Biometric and facial age estimation methods, on the other hand, analyze still images or live scans to infer a user’s age without requiring a verified ID. While they avoid formal identity collection, these methods carry risks related to accuracy and bias. According to the National Institute of Standards and Technology, “accuracy is strongly influenced by algorithm, sex, image quality, region-of-birth, age itself, and interactions between those factors.”[25] As a result, these methods may produce unreliable results for certain demographic groups. Biometric data also creates the potential for secondary use: a company or government may retain the data and repurpose it, or combine it with other datasets, for purposes other than those for which it was initially collected. For example, Clearview AI scraped billions of online images and used them to build a facial recognition database, which it sold to law enforcement agencies, far exceeding the intentions of many users who posted their images.[26]


Other methods of age assurance include persistent mechanisms, such as cookies, local storage, and device tracking, that store proof of age.[27] These tools effectively link a user’s identity and activity across multiple platforms. Platforms may also outsource age checks to third-party assurance services. In these cases, users submit identification documents or biometric data to an external provider, which then issues a reusable assurance token.[28] Other techniques, such as carrier, financial, and SIM-based assurance, derive age information from mobile carrier records, SIM registration, or credit card ownership.[29] In practice, these mechanisms transform age assurance into a system of continuous identification and data sharing, with implications for human rights.
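The third-party token flow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any real provider's protocol: the names `issue_token` and `verify_token`, the HMAC-based signing, and the shared key are all assumptions for the sketch. The issuer checks the user's documents once, then signs a claim carrying only an over-18 flag and an expiry; no name or birth date reaches the platform.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared signing key between issuer and platform; a real
# deployment would use asymmetric signatures and managed key material.
SECRET = b"issuer-signing-key"

def issue_token(over_18: bool, ttl_seconds: int = 3600) -> str:
    """Issuer side: after a one-time document check, emit a token that
    carries only the age claim and an expiry -- no name, no birth date."""
    claim = {"over_18": over_18, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str) -> bool:
    """Platform side: accept the claim only if the signature checks out
    and the token has not expired."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return bool(claim["over_18"]) and claim["exp"] > time.time()

token = issue_token(over_18=True)
print(verify_token(token))  # True
```

Even a token this minimal becomes a persistent identifier if it is reused across sites without rotation, which is exactly the cross-platform linkage risk described above.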


Human Rights Concerns


Age assurance mandates can limit people’s ability to access information, express themselves, and participate in online life.[30] These systems often rely on persistent identifiers or third-party verification providers, creating records that may be retained indefinitely, shared across platforms, or exposed through data breaches. In digital spaces where young people learn, communicate, and develop political and social identities, the normalization of pervasive identity checks erodes anonymity.


Age assurance regimes also risk reinforcing existing structural inequalities. Young people from low-income families, racialized communities, undocumented households, or those without access to government identification may be unable to comply with invasive verification requirements, effectively excluding them from online communities.[31] Machine learning-based age estimation systems, such as facial age estimation or social network-based prediction, may compound these harms. Such systems are prone to misclassification, particularly for marginalized peoples, and when errors occur, platforms may respond by imposing additional verification requirements or denying access altogether, burdening marginalized users.[32]


Age assurance mandates may also be coupled with overly broad content restrictions. The question of what counts as “harmful to children” does not have a simple answer, and policies labeling content as “adult” may be applied unevenly, particularly to LGBTQ+ and sexual health information. As the Electronic Frontier Foundation writes, “Typically aimed at keeping sites ‘family friendly,’ these policies are often unevenly enforced, classifying LGBTQ+ content as ‘adult’ when similar heterosexual content isn’t.”[33] In practice, such enforcement may result in the suppression of, or children’s lack of access to, content addressing transgender experiences, sexual education, and reproductive health. These dynamics may prevent young people from exploring sensitive topics.


Finally, despite their intrusiveness, age assurance systems are largely ineffective at meaningfully protecting minors from harmful content.[34] These systems are easily circumvented using tools such as VPNs, shared or fake credentials, or by migrating to alternative platforms beyond the reach of regulation, leaving young users exposed to real harms online while failing to address the underlying drivers. Rather than resolving these shortcomings of the legislation itself, some policymakers have responded by proposing further restrictions, including potential bans on VPN use.[35] Such measures would further erode our rights to privacy and anonymity without addressing the root causes of online risks. Meanwhile, age assurance mandates impose burdens on all users, forcing individuals to choose between surrendering personal data and accessing certain parts of the internet.


Conclusion and Recommendations


The expansion of state-mandated age assurance in the United States marks a fundamental shift in online governance. What began as a failed effort to protect minors from explicit content has evolved into a regime that increasingly conditions access to the internet on the disclosure of personal data. In doing so, it erodes longstanding norms of online anonymity that are essential to free expression, privacy, and democratic participation, while disproportionately burdening marginalized youth. Policymakers and platforms must ensure that child protection measures do not compromise fundamental human rights. Instead, they should prioritize approaches that respect privacy, promote equality, and safeguard access to information for all users.


Several policy recommendations include:


  • Shift responsibility to platforms to uphold children’s rights: Social media companies and websites should protect minors through platform design and content practices, rather than forcing children to disclose personal data.

  • Ban invasive age-verification methods: Governments should prohibit the mandatory submission of government IDs, biometrics, or persistent identifiers to access lawful content.

  • Enforce strict data minimization and privacy safeguards: Any data collected by a website should be limited to essential information, not be retained or shared, be encrypted, and be subject to independent audits.

  • Ensure equitable and non-discriminatory moderation: Age-based content restrictions should be applied transparently and fairly, with safeguards to prevent disproportionate suppression of educational content.[36]

  • Adopt privacy-preserving verification technologies: If a platform insists on implementing age assurance, it should use methods that confirm eligibility without revealing identity, such as zero-knowledge proofs, which allow a user to prove to a website that they are over a certain age without revealing any other information.[37]
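As a concrete illustration of the zero-knowledge idea, here is a toy interactive Schnorr proof in Python, the sigma-protocol building block underlying many privacy-preserving credential schemes. This is a sketch under assumed toy parameters (the prime P and generator G below), not a production design; in a deployed age system, the secret x would belong to a credential whose only certified attribute is the age predicate.

```python
import secrets

# Toy parameters: a Mersenne prime and a small generator. Real systems
# use standardized, vetted groups (e.g., elliptic curves).
P = 2**61 - 1
G = 3

def prove(x: int, challenge_fn):
    """Prover: demonstrate knowledge of x with y = G^x mod P,
    without ever transmitting x itself."""
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)           # commitment to a fresh random r
    c = challenge_fn(t)        # verifier's random challenge
    s = (r + c * x) % (P - 1)  # response: x is blinded by r
    return t, c, s

def verify(y: int, t: int, c: int, s: int) -> bool:
    # Accept iff G^s == t * y^c (mod P), which holds exactly when the
    # prover's response is consistent with knowing x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x = secrets.randbelow(P - 1)   # prover's secret
y = pow(G, x, P)               # public value registered with the verifier
t, c, s = prove(x, lambda commitment: secrets.randbelow(P - 1))
print(verify(y, t, c, s))      # True
```

The verifier learns that the prover knows x, and nothing else; an age credential built on this pattern would let a site check a single yes/no predicate while the underlying identity documents never leave the issuer.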


Glossary


  • Age Assurance: An umbrella term describing methods used to determine or confirm a user’s age online.

  • Age Estimation: A form of age assurance that uses algorithms to predict a user’s age without requiring official documentation.

  • Age Gating: Restricting access to certain websites or content based on a user’s age.

  • Age Verification: A method of age assurance requiring users to prove their age using credentials.

  • Anonymity: The ability to access information, communicate, or participate online without revealing one’s identity.

  • Biometric Data: Biological or behavioral characteristics used to identify or estimate attributes of an individual.

  • Child Online Protection Act (COPA): A 1998 federal law intended to restrict minors’ access to harmful online content.

  • Children’s Online Privacy Protection Act (COPPA): A 1998 federal law regulating the collection, use, and disclosure of personal information from children under 13.

  • Chilling Effect: The discouragement or suppression of lawful speech or participation due to fear of surveillance and associated consequences.

  • Communications Decency Act (CDA): A 1996 federal law that criminalized the transmission of indecent material to minors.

  • Data Minimization: A privacy principle requiring that systems collect only the minimum amount of personal data necessary to fulfill a specific purpose.

  • Facial Recognition / Facial Age Estimation: Technologies that analyze facial features to identify individuals or infer attributes such as age.

  • Free Speech Coalition, Inc. v. Paxton (2025): A U.S. Supreme Court case affirming states’ authority to require age verification for certain online content.

  • Government-Issued Identification: Official identity documents, such as driver’s licenses or passports, used in age-verification systems.

  • Liveness Checks: Verification techniques requiring real-time user actions to verify a person is physically present.

  • Personally Identifiable Information (PII): Data that can be used to identify an individual, including names, ID numbers, biometric data, and persistent identifiers.

  • Persistent Identifiers: Technologies such as cookies, device fingerprints, or local storage that allow users to be recognized across sessions or platforms.

  • Privacy-Preserving Age Assurance: Methods that confirm age eligibility without revealing identity or storing sensitive personal data.

  • Reno v. American Civil Liberties Union (1997): A Supreme Court decision striking down key provisions of the CDA and affirming strong First Amendment protections for online speech.

  • Third-Party Age Assurance Services: External providers that conduct age checks on behalf of platforms and issue reusable tokens or credentials.

  • Zero-Knowledge Proof: A cryptographic protocol that allows one party to convince another that a statement is true without revealing any information beyond the validity of the statement itself.[38]


Footnotes/Sources


[1] Kanapienis, Liudas. “Social Media’s Age Verification Crisis: Can Platforms Solve the Technical and Ethical Puzzle?” Biometric Update, BiometricUpdate.com, July 2025, www.biometricupdate.com/202507/social-medias-age-verification-crisis-can-platforms-solve-the-technical-and-ethical-puzzle.

[2] Rindala Alajaji. “The Year States Chose Surveillance over Safety: 2025 in Review.” Electronic Frontier Foundation, 2 Jan. 2026, www.eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review.

[3] “Age Estimation Requires Verification for Many Users.” Center for Democracy and Technology, 26 Mar. 2025, cdt.org/insights/age-estimation-requires-verification-for-many-users/.

[4] “Age Verification Laws and Youth Online Safety: Overview and Recommendations.” New America, 2019, www.newamerica.org/oti/reports/age-verification-the-complicated-effort-to-protect-youth-online/age-assurance-and-age-verification/.

[5] “International Day for Universal Access to Information (IDUAI) 2021 - Teaser.” Unesco.org, 2021, www.unesco.org/en/right-information.

[6] “S.652 - 104th Congress (1995-1996): Telecommunications Act of 1996.” Congress.gov, 2026, www.congress.gov/bill/104th-congress/senate-bill/652.

[7] Roman, David. “The Return of Age Verification Laws.” Communications of the ACM, Mar. 2024, https://doi.org/10.1145/3651865.

[8] “ACLU Background Briefing - Reno v. ACLU: The Road to the Supreme Court.” American Civil Liberties Union, 13 Sept. 2005, www.aclu.org/press-releases/aclu-background-briefing-reno-v-aclu-road-supreme-court. Accessed 9 Jan. 2026.

[9] “Reno v. American Civil Liberties Union.” LegalClarity, 21 July 2025, legalclarity.org/reno-v-american-civil-liberties-union/. Accessed 10 Jan. 2026.

[10] “H.R.3783 - 105th Congress (1997-1998): Child Online Protection Act.” Congress.gov, 2026, www.congress.gov/bill/105th-congress/house-bill/3783.

[11] Tien, Lee. “After 10 Years, an Infamous Internet-Censorship Act Is Finally Dead.” Electronic Frontier Foundation, 21 Jan. 2009, www.eff.org/deeplinks/2009/01/copa.

[12] “Children’s Online Privacy Protection Rule (‘COPPA’).” Federal Trade Commission, 25 July 2013, www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa. Accessed 9 Jan. 2026.

[13] “After 10 Years, an Infamous Internet-Censorship Act Is Finally Dead.” Electronic Frontier Foundation, 21 Jan. 2009, www.eff.org/deeplinks/2009/01/copa.

[14] Johnson, Ariel Fox. “U.S. Age Assurance Is Beginning to Come of Age: The Long Path Toward Protecting Children Online and Safeguarding Access to the Internet.” Common Sense Media, 30 Sept. 2024, https://www.commonsensemedia.org/sites/default/files/featured-content/files/2024-us-age-assurance-white-paper_final.pdf.

[15] Franklin, Jonathan. “Looking to Watch Porn in Louisiana? Expect to Hand over Your ID.” NPR, 5 Jan. 2023, www.npr.org/2023/01/05/1146933317/louisiana-new-porn-law-government-id-restriction-privacy.

[16] Louisiana. Act No. 440, 2022 Reg. Sess. House Bill No. 142, Enrolled. “An Act to Enact R.S. 9:2800.28, Relative to Material Harmful to Minors”. Effective Jan. 1, 2023, https://legis.la.gov/legis/ViewDocument.aspx?d=1289498

[17] Wilson, Sabrina. “New Louisiana Laws Target Online Pornography, Delinquent Taxpayers.” FOX 8 Local First, 3 Jan. 2023, www.fox8live.com/2023/01/03/new-louisiana-laws-target-online-pornography-delinquent-taxpayers.

[18] Mississippi. Senate Bill 2346, 2023 Reg. Sess. “An Act to Require Age Verification for Material Harmful to Minors on the Internet”. Effective July 1, 2023, https://billstatus.ls.state.ms.us/documents/2023/html/SB/2300-2399/SB2346IN.htm

[19] Virginia. Senate Bill 1515, 2023 Sess. “An Act Establishing Civil Liability for Material Harmful to Minors on the Internet”. Approved May 12, 2023. Effective July 1, 2023, https://legacylis.virginia.gov/cgi-bin/legp604.exe?231+sum+SB1515

[20] Arkansas. Act 612 (SB 66), 94th Gen. Assemb., Reg. Sess., 2023. “An Act to Create the Protection of Minors from Distribution of Harmful Material and Require Age Verification”. Effective April 11, 2023, https://www.arkleg.state.ar.us/Bills/Detail?id=sb66&ddBienniumSession=2023%2F2023R

[21] Texas. House Bill 1181, 88th Leg., Reg. Sess. “An Act Relating to Restricting Access to Sexual Material Harmful to Minors on an Internet Website”. Enacted 2023. Effective September 1, 2023, https://capitol.texas.gov/tlodocs/88R/billtext/html/HB01181H.htm

[22] Cole, Samantha. “Half of the US Now Requires You to Upload Your ID or Scan Your Face to Watch Porn.” 404 Media, 2 Dec. 2025, www.404media.co/missouri-age-verification-law-porn-id-check-vpns/.

[23] Free Speech Coalition, Inc. v. Paxton, No. 23-1122, Supreme Court of the United States, 2025, www.supremecourt.gov/opinions/24pdf/23-1122_3e04.pdf.

[24] Scheffler, Sarah. “Age Verification Systems Will Be a Personal Identifiable Information Nightmare.” Communications of the ACM, vol. 67, no. 7, Association for Computing Machinery, July 2024, pp. 31–33, https://doi.org/10.1145/3660519.

[25] Hanaoka, Kayee. Face Analysis Technology Evaluation: Age Estimation and Verification. NIST, Jan. 2024, https://doi.org/10.6028/nist.ir.8525.

[26] “ACLU Sues Clearview AI | American Civil Liberties Union.” American Civil Liberties Union, 27 May 2020, www.aclu.org/press-releases/aclu-sues-clearview-ai.

[27] “Age Verification Laws and Youth Online Safety: Overview and Recommendations.” New America, 2019, www.newamerica.org/oti/reports/age-verification-the-complicated-effort-to-protect-youth-online/age-assurance-and-age-verification/.

[28] Ringrose, Katelyn, et al. “Mind the Gap: Understanding Age Verification and Assurance | IAPP.” IAPP.org, 2025, iapp.org/news/a/mind-the-gap-understanding-age-verification-and-assurance.

[29] “Age Verification Laws and Youth Online Safety: Overview and Recommendations.” New America, 2019, www.newamerica.org/oti/reports/age-verification-the-complicated-effort-to-protect-youth-online/age-assurance-and-age-verification/.

[30] Hancock, Alexis. “Privacy Is for the Children (Too).” Electronic Frontier Foundation, 26 Nov. 2025, www.eff.org/deeplinks/2025/11/privacy-children-too.

[31] Kayyali, Dia, and Jasmine Mithani. “Age Verification Is Locking Trans People out of the Internet.” Tech Policy Press, 8 Dec. 2025, www.techpolicy.press/age-verification-is-locking-trans-people-out-of-the-internet/.

[32] Kayyali, Dia, and Jasmine Mithani. “Age Verification Is Locking Trans People out of the Internet.” Tech Policy Press, 8 Dec. 2025, www.techpolicy.press/age-verification-is-locking-trans-people-out-of-the-internet/.

[33] York, Jillian C. “Blunt Policies and Secretive Enforcement Mechanisms: LGBTQ+ and Sexual Health on the Corporate Web.” Electronic Frontier Foundation, 24 Oct. 2018, www.eff.org/deeplinks/2018/10/blunt-policies-and-secretive-enforcement-mechanisms-lgbtq-and-sexual-health.

[34] Brown, Elizabeth Nolan. “Age-Verification Laws Don’t Work, according to New Study.” Reason.com, 12 Mar. 2025, reason.com/2025/03/12/study-age-verification-laws-dont-work/.

[35] Doffman, Zak. “‘Disaster’—iPhone and Android VPN Ban ‘Actually Happening.’” Forbes, 1 Dec. 2025, www.forbes.com/sites/zakdoffman/2025/12/01/iphone-and-android-vpn-ban-is-suddenly-real-do-this-instead/.

[36] Tanner, Brooke, and Nicol Turner Lee. “Children’s Online Safety Laws Are Failing LGBTQ+ Youth.” Brookings, 9 July 2025, www.brookings.edu/articles/childrens-online-safety-laws-are-failing-lgbtq-youth/.

[37] Mitra, Finn. “How Offline ID Checks Could Help Solve the Age Verification Head-Scratcher.” Tech Policy Press, 7 Jan. 2026, www.techpolicy.press/how-offline-id-checks-could-help-solve-the-age-verification-headscratcher/.

[38] “Zero-Knowledge Proofs | MIT CSAIL Theory of Computation.” Mit.edu, 2025, toc.csail.mit.edu/node/218.


© 2026 HRRC
