Papers, please: Can online age verification be compatible with privacy?

By Maria Dinzeo

December 11, 2025, 13:01 GMT | Comment
Age verification is increasingly seen as a way to keep children away from harm online, with regimes already in place in the EU, UK and US. But there are widespread fears that proving your age necessarily means handing over sensitive personal data that is vulnerable to exploitation. Can age verification work effectively without eroding privacy for children and adults alike?
This article is part of an MLex online safety special series running this week. Other stories focus on US-specific regulation, algorithmic content and video gaming (see here). 

Across Europe, and more lately the US, governments are turning to age verification as a practical way to shield children from online harm. But can today's verification tools strike a balance between protecting children and protecting privacy? It's tricky, and no obvious playbook has emerged.

“The absence of any safeguards whatsoever has led to a whole generation of kids who have basically been experimented on and fed huge amounts of harmful content,” says George Billinge, who helped draft regulatory guidance underpinning the UK’s emerging online safety regime. 

Billinge, now a tech consultancy CEO, sees it as a matter of establishing values: “Trying to think about how nation states can exert some kind of social contract onto previously completely ungoverned online spaces is a worthwhile activity,” he told MLex.

For privacy advocates, though, the rapid pace of regulation is a concern. The fervor for age verification, they fear, risks masking a less visible threat to children and adults alike: requiring everyone to give up the ability to browse the web anonymously.

“We think about it as identifying minors, but it is really about identifying adults,” Alice Marwick, director of research at New York-based institute Data & Society, told MLex.

Legal scholars share this concern. Eric Goldman, Associate Dean for Research at California’s Santa Clara University School of Law, is skeptical about whether age verification can ever work effectively without eroding privacy for both children and adults.

Laws that force platforms to separate adults from children end up sweeping in everyone, he says, creating a gated system where access depends on showing your digital “papers” and requires surrendering a significant degree of privacy and data security.

He expounds on this idea in a paper published this summer in the Stanford Technology Law Review on what he calls “Segregate and Suppress” laws, a regulatory model in which governments first compel platforms to separate adult users from minors, then restrict or censor what minors can access. Such laws are incompatible with privacy, Goldman contends.

— Europe’s early lead —

The debate has been under way for some time in Europe, where the Digital Services Act in the EU and the Online Safety Act in the UK have ushered in some of the world’s most stringent age-verification requirements. In the process, they have been instrumental in raising questions about just how far regulators can go in curbing online harms without undermining privacy.

The UK’s requirement, ushered into force in July by online safety regulator Ofcom, effectively compels platforms that allow pornography and are “likely to be accessed by children” to ensure only adults can get in. But it doesn’t specify how sites should go about doing this, only that whatever method they use be “proportionate, highly effective and privacy-protective.”

Most are outsourcing the task to third parties, drawing criticism from some privacy groups who say Britons are being forced to entrust an array of sensitive data — such as facial scans, identity documents and financial information — to overseas age-verification providers with dubious track records on privacy.

The rollout hasn’t been without its bumps, says Iain Corby, executive director of the Age Verification Providers Association. “Ofcom has done a pretty good job, but they still haven't required any sort of independent certification,” he told MLex. “There’s a lot of newcomers into the market doing very poor-quality age checks. And also, we don't know whether they're behaving well in terms of your personal data.”

While Ofcom hasn’t set specific accuracy thresholds, the enforcer's expectation that companies clearly show that their age-verification methods are effective and designed to preserve privacy has proved a tall order. “I don't think that anybody is getting it right yet,” Ofcom’s online safety policy director, Almudena Lara, told a recent conference (see here).

The EU’s approach is similar: While the type of age assurance required for online platforms depends on their level of risk to young users, those measures can’t be imposed at the expense of children’s rights to data protection (see here and here).

That’s why the European Commission is currently working on an interim age-verification app, as well as on a long-term solution arriving from the end of 2026, the EU Digital Identity Wallet, seen as an appropriate and “privacy-preserving” way of checking someone’s age.

— The US joins the debate —

The clash between privacy and child protection is also now unfolding across the US, where a growing body of state laws is testing how far the government can go in mandating age verification.

The Age Verification Providers Association says 25 states have passed such laws for adult content, while 13 have passed laws restricting children’s access to social media through age gating. While many of these mandate “reasonable” methods of age verification, they don’t explicitly require that those methods be privacy-preserving.

Nowhere is the tension clearer than in California, where regulators are wrestling with how to enforce these mandates.

State privacy agency CalPrivacy has warned that “there is currently no privacy-protective way to determine whether a consumer is a child.” In a memo last year urging its board to reject an age-verification proposal, the enforcer’s legislation and policy director, Maureen Mahoney, cautioned that such measures could “reduce privacy by incentivizing businesses to collect even more personal information from all users to verify children’s ages.”

California Governor Gavin Newsom ended up vetoing the measure, partly because of its “unclear effects on children's privacy.”

Nevertheless, age verification is coming to the Golden State. Under a new law known as the Protecting Our Kids from Social Media Addiction Act (SB 976), social media platforms will have to verify users’ ages and obtain verifiable parental consent to provide personalized feeds to children starting Jan. 1, 2027.

What platforms will have to do to comply is still a big unknown, as the state’s attorney general, who has sole enforcement authority over the law, is still hashing out the regulations (see here).

Industry and parent groups have pushed for what they see as a panacea: device-level “age-bracket signals.” Google and Apple both offer APIs that relay a user’s broad age range to apps, using a birthdate supplied by a parent or the account holder.

The idea was widely touted at a recent public forum hosted by the California Attorney General’s office on how to implement SB 976’s age-verification mandates (see here). California has already codified that approach in a separate law, AB 1043, which will require operating systems to collect a user’s birthdate and age at setup and pass an anonymized age-bracket signal to apps and app stores beginning in 2027.
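
To make the data flow concrete, here is a minimal Python sketch of how an app might consume such a device-supplied age-bracket signal. Every name in it (AgeBracket, get_device_age_signal, the bracket boundaries) is hypothetical, and the real Google and Apple APIs differ; the point is the architecture, in which the app sees only a coarse bracket, never the birthdate the operating system collected at setup.

```python
# Hypothetical sketch of an app consuming an anonymized age-bracket
# signal under a law like AB 1043. Names and bracket boundaries are
# invented for illustration; real platform APIs differ.

from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"
    UNKNOWN = "unknown"      # device provided no signal

def get_device_age_signal() -> AgeBracket:
    """Stand-in for the OS call; returns only a coarse bracket."""
    return AgeBracket.UNKNOWN  # placeholder for the platform API

def personalized_feed_allowed(bracket: AgeBracket) -> bool:
    # Under SB 976, personalized feeds for minors require verifiable
    # parental consent, so the conservative default is to treat an
    # unknown signal like a minor's.
    return bracket is AgeBracket.ADULT

if __name__ == "__main__":
    bracket = get_device_age_signal()
    print(f"bracket={bracket.value}, "
          f"personalized feed allowed: {personalized_feed_allowed(bracket)}")
```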

While the bracket approach may sound like a magic bullet, it comes with unintended consequences. A user’s device becomes linked to their age range, allowing device makers to classify users and potentially tie online activity to verified age groups. That data can later be repurposed in ways the law didn’t intend, including for analytics or advertising.

— Data woes —

As age-verification laws take effect in US states beyond California, companies are also turning to third-party vendors that require web users to upload government-issued identity documents or live selfies. But these methods create new repositories of sensitive data that could open the door to various forms of misuse.

“Once data is collected for any purpose, it ends up getting used for many other purposes,” Marwick stressed to MLex. “Often you have this slippery slope where it expands, and the data you were told was going to be private and told was being protected will be sold.”

The lack of trust spills over into families as well. Research shows that children and parents are deeply uncomfortable with age verification, especially when it relies on facial scans or submitting identity documents.

The Center for Democracy and Technology interviewed 17 families for a study on age verification perspectives, for example, and found that “when presented with a scenario of ID-based verification, participants immediately and intuitively responded with strong opposition."

The study’s authors wrote that most parents and teens “expressed reluctance to upload sensitive identity documents to an online service, especially given the breadth and sensitivity of data on an ID card. Commonly raised concerns included the privacy risk posed by service providers holding such data, as well as possible security breaches.”

Facial scans fared no better. Researchers reported that the method “raised significant red flags for both teens and parents in the study,” as both an ineffective means of verifying age and a “jarring” experience for users who were expected to take a photo of themselves before they could access any online service.

But as states demand stricter age checks, platforms are forging ahead. YouTube announced this year that it would begin using machine learning to predict whether a user is under or over 18 based on a “variety of signals” that include viewing and search histories and how long a user’s account has existed.

If the algorithm incorrectly flags someone as a minor and imposes age restrictions, a user can prove they are an adult by providing a credit card number, government ID or selfie. But this solution has unsurprisingly raised a host of privacy concerns about surveillance, misclassification and data collection.
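
For illustration only, the toy sketch below shows what a signals-based age estimate of this general kind could look like. The features, weights and threshold are all invented; YouTube's actual model and its inputs are not public.

```python
# Toy illustration of signals-based age estimation, in the general
# spirit YouTube describes. All features, weights and the threshold
# are invented for illustration; the real model is not public.

import math
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int              # how long the account has existed
    child_directed_view_share: float   # 0.0-1.0, hypothetical feature
    searches_per_day: float

def minor_probability(s: AccountSignals) -> float:
    """Logistic score: higher means 'more likely under 18' (toy weights)."""
    z = (
        1.5 * s.child_directed_view_share
        - 0.002 * s.account_age_days
        + 0.05 * s.searches_per_day
        + 0.2
    )
    return 1.0 / (1.0 + math.exp(-z))

def restrict_as_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    # A user flagged here would face age restrictions until they appeal
    # with a credit card, government ID or selfie, per the article.
    return minor_probability(s) >= threshold

if __name__ == "__main__":
    longtime_user = AccountSignals(account_age_days=4000,
                                   child_directed_view_share=0.05,
                                   searches_per_day=2.0)
    print(restrict_as_minor(longtime_user))  # False for this toy profile
```

Note that the appeal path is where sensitive data enters the picture: a misclassified user ends up handing over exactly the documents that a signals-based check was supposed to avoid collecting.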

The platform Discord found this out recently when a third-party provider that it uses to resolve age-verification appeals was hacked, exposing 70,000 user-submitted government IDs (see here).

Nonetheless, Google posted in July that it would expand its AI-powered age checks to more of its products and services. The company did not respond to a request for comment from MLex.

— ‘Nerd harder’ —

Pressure to meet age-verification mandates without amassing sensitive data has pushed companies toward technologies that claim to do just that.

“Zero-knowledge proof” technology, known as ZKP, has been billed as a way for website visitors to prove they are over 18 without supplying any underlying identification. The system checks a cryptographic credential, such as a government-verified ID stored on the user’s device, and produces a mathematical proof that the holder is over 18 without exposing the ID itself.
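
To give a flavor of the underlying math, here is a toy non-interactive Schnorr proof in Python, made non-interactive via the Fiat-Shamir transform. It demonstrates the core zero-knowledge idea, proving possession of a secret credential without revealing it, but it is only a sketch: the group parameters are deliberately tiny, and real age-verification ZKPs prove richer predicates, such as "the birthdate in this signed credential is before a cutoff," using vetted cryptographic libraries.

```python
# Toy non-interactive Schnorr proof (Fiat-Shamir transform).
# Illustrates the zero-knowledge principle: proving you hold a
# secret credential without revealing it. Deliberately tiny
# parameters; real systems use ~256-bit elliptic-curve groups.

import hashlib
import secrets

# Toy group: G = 2 generates the subgroup of prime order Q = 11
# inside Z_23* (2^11 = 2048 = 89*23 + 1, so 2 has order 11 mod 23).
P, Q, G = 23, 11, 2

def challenge(y: int, t: int, context: bytes) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = f"{G}|{y}|{t}".encode() + context
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x: int, context: bytes) -> tuple[int, int]:
    """Holder: prove knowledge of the secret x behind y = G^x mod P."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q)      # fresh random nonce
    t = pow(G, r, P)              # commitment
    c = challenge(y, t, context)
    s = (r + c * x) % Q           # response; r masks x, so s leaks nothing
    return t, s

def verify(y: int, t: int, s: int, context: bytes) -> bool:
    """Verifier: checks the algebra without ever learning x."""
    c = challenge(y, t, context)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

if __name__ == "__main__":
    x = 7                          # secret credential issued to an adult
    y = pow(G, x, P)               # public value registered with an issuer
    t, s = prove(x, b"age-check-session")
    print(verify(y, t, s, b"age-check-session"))  # True
```

Even in this toy, someone has to run the final verification step at the other end, which is precisely the structural objection critics raise.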

Google, for example, is exploring the use of ZKP technology across its identity and payments ecosystem, announcing a collaboration between Google Cloud and Self Labs, a ZKP protocol provider, to potentially apply the technology to services such as Google Wallet.

For some, that would seem to solve the privacy problem. Corby said that as long as the verification is done entirely on the device and no data is stored elsewhere, the technology can actually be privacy-enhancing and preserve anonymity.

But zero-knowledge proofs still require a verifier on the other end to check the math, which means a new entity must be trusted with some piece of the process.

“ZKPs are a great tool for sharing less data about ourselves over time or in a one-time transaction,” according to the Electronic Frontier Foundation, a digital privacy campaign group. “But it is still imperative to point out that utilizing this technology to share even more about ourselves online through mandatory age verification establishes a wider scope for sharing in an already saturated ecosystem of easily linked, existing personal information online.”

There’s also the potential for re-identification through metadata, the chance that a proof could be intercepted as it travels over the network, and the risk that an attacker could trick a device into thinking it’s talking to the legitimate verifier.

As long as an authenticator exists anywhere in the chain, it remains a potential point of failure, Goldman says. Even the term “zero-knowledge” is a bit of a misnomer, because someone will always have some degree of access to highly sensitive information.

More generally, he writes, “treating the online age-authentication challenge as purely technological encourages the belief that its problems can be solved if technologists ‘nerd harder.’ ”

While technology can make age verification better or more accurate, it cannot resolve the core trade-off: that in trying to prove something as simple as your age, you inevitably have to give up something else.

For Corby, the idea that age checks and privacy must be at odds with each other sells the technology short. “That’s rubbish,” he said. “If we can put a man on the moon, we can have you prove your age without putting your identity at risk.”

When, or even whether, technology can resolve that basic tension remains an open question, however.

Please email editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.
