Child Safety vs. Privacy: The Online Age Verification Dilemma

Key Takeaways
- Age verification is becoming mandatory under regulations like the UK's Online Safety Act 2023, pushing platforms such as YouTube, Instagram, X, and adult websites to roll out stricter checks using AI, IDs, or selfies.
- Users are forced to share sensitive personal data, which increases exposure to cyber breaches, surveillance, and misuse of data for AI training.
- There are safer alternatives through decentralized verification models like W3C Verifiable Credentials 2.0, where platforms get a simple 'yes/no' token proving age without collecting personal information.

Online crimes against children have been increasing in the past decade.
A report from Childlight estimates that more than 300 million children have suffered online abuse, with almost 10 cases reported every second.
How do we solve this? Online age verification has emerged as one of the answers, where platforms require users to prove they're above 18 before they can access their services.
The latest platform to join this shift is YouTube, which announced an AI-based age verification feature in the US. Under it, YouTube's proprietary AI engine looks at factors like watch history, video categories, and account age to determine whether an account belongs to a minor.
For accounts flagged as belonging to minors, YouTube will disable personalized advertising, turn on digital wellbeing tools, and add safeguards to its recommendation engine. In practice, a minor's feed will look starkly different from a regular one, with certain videos removed altogether.
How YouTube's AI Age Verification Works (and Its Early Red Flags)

If YouTube wrongly flags your account as 'underage,' you'll have to provide certain documents, like a government ID or credit card, to prove otherwise.
However, if you dig a little deeper, you'll realize that YouTube doesn't have any tangible information to correctly identify a minor account, except for your watch history.
So, even if a minor enters a fake date of birth, they can still use the 'unrestricted' version of YouTube as long as their watch history doesn't raise suspicion.
Now, YouTube hasn't disclosed exactly how its AI model works, but there are certain tell-tale signs of what could flag an account as a 'minor.' For instance, if you watch a lot of SpongeBob videos, anime, cartoons, and other content a child might watch, your account may fall under scrutiny.
There's no well-defined category of videos that may land you under the minor screener. However, look out for videos that have comments disabled and those you cannot save.
Plus, if you see the tag saying 'Trying YouTube Kids,' those are the kinds of videos YouTube is watching closely.
For example, look at this video uploaded by the official SpongeBob SquarePants channel: the comments are disabled, you cannot download it, and the platform suggests you use YouTube Kids.
Before you lash out at YouTube and blame Google for this verification fiasco, it's important to understand that this isn't YouTube's own initiative.
The root of this 'age verification' movement is the UK's Online Safety Act 2023, which is enforced by Ofcom. The Act requires online platforms to put 'highly effective' age verification processes in place to protect children from harmful online content.
"Tech firms must introduce age checks to prevent children from accessing porn, self-harm, suicide and eating disorder content."
- Ofcom
In compliance with this, many other platforms like Facebook, Instagram, TikTok, and X have rolled out child-safety guidelines and changes.
Instagram

Instagram rolled out an age-verification program in June 2022 (well before the OSA was passed), requiring age checks for users. There are several ways to complete these, including selfie verification and social vouching. Meta has partnered with Yoti, a leading age-verification provider, for this.
For selfie verification, you upload a video selfie as proof of age, which Yoti's system analyzes to estimate your age.
If you're 13-17 years old, Instagram will only show you age-appropriate content, make your account private by default, and block unwanted DMs from strangers.
Meta says the system it uses for verification cannot identify you; it only estimates your age, and the image is deleted once verification is complete.
A second method, social vouching, involves asking a mutual follower to vouch for your age; they must be over 18 and meet other pre-set Meta guidelines. You can also upload a government-issued ID as proof of age, which Meta deletes from its servers after 30 days.
As per a report published by NSPCC, 47% of online grooming crimes take place on Meta platforms like Facebook and Instagram, and 26% of all sexual crimes against children are initiated on Instagram, making it a popular spot for online child offenders.
This makes it important for Meta to safeguard children's interests, especially when 6 in 10 US teens (the 13-17 age group) use the platform.
Adult Websites

The OSA's initial move was to require age verification on websites and social media platforms that host adult content, and rightly so. In compliance, many such sites have rolled out age verification programs.
However, unlike tech-heavy platforms like Meta and YouTube, these websites rely heavily on ID verification with limited use of AI. This means that you'll be required to provide a government-issued ID to prove you're over 18.
Also, unlike social media platforms, where you need an account to post, message, and interact, adult websites don't work that way: you don't necessarily need an account to view their content.
So, an obvious workaround is a VPN. Users can simply switch their location to a region that doesn't require age verification and use adult websites normally. This means that while these regulations seem to work on the surface, there are still plenty of loopholes, especially for adult content.
X (Formerly Twitter)

Unlike Meta, X wasn't proactive about rolling out age verification, instead waiting for the OSA to provide stricter guidelines. Similar to other platforms, X uses methods like self-attested age, ID verification, and account creation date to determine a user's age.
Moreover, X also uses email-based age verification and network-based estimation to make a decision. If it finds you're below 18, sensitive media settings will be enabled for your account, and you won't be able to access certain age-restricted content.
Much like the adult platforms, we didn't spot heavy AI usage anywhere in X's process. Yet X is arguably the platform that most needs advanced machine learning for the job.
Why? Because not only is X a social media platform but also a hub of pornographic content. As per a report, 45% of children accessing X in 2025 have seen porn on the platform, up from 35% in 2023.
X doesn't outright ban adult content, but it keeps it under strict control. For instance, X allows 'consensually produced and distributed' adult content (adult nudity or sexual behavior), provided it is properly labeled. No such content is allowed in profile photos, banners, or covers.

But since X technically 'allows' adult content on the platform, underage users are likely to find it, especially if they go looking. This puts a lot of responsibility on X to adequately verify its users' ages using AI and ML models.
The Problem

While the whole age-verification movement is a positive step toward protecting children online, it brings a few concerns of its own.
Until now, these social media platforms collected information like phone numbers, email addresses, and locations.
Now, however, the amount of personal information these tech platforms demand has grown to include selfies and government-issued IDs. This puts more of your personal data at risk in cyber breaches, increasing the blast radius of such incidents.
And, cyber attacks are more common than you think.
- Back in 2019, a database leak exposed the private information of millions of Instagram users, particularly influencers, including phone numbers and email addresses.
- Similarly, in 2021, a data leak put the information of 533M Facebook users at risk. This included information like location, phone numbers, and other sensitive data.
- In another breach, data from 800K accounts on Brazzers (an adult website) leaked, exposing users' email addresses.
All of this happened when these websites didn't yet hold your most sensitive information. As more of your data is dumped on the internet, these breaches will only get worse.
For instance, malicious third parties can now use your Social Security Number to orchestrate social engineering attacks, financial frauds, and scams. Or, they can modify your verification selfies using AI tools and post them on adult websites.
So, this seems like a classic whack-a-mole problem, where you solve one issue (online child safety) and another one pops up (online privacy) out of the weeds.
Next Up: The Problem of AI Data Training

Both YouTube and Meta use in-house AI models to detect a user's age. These models must have been trained on datasets belonging to real users; otherwise, how would they figure out your actual age?
And using personal information for AI training is also a massive grey area.
- For instance, in 2024, Meta announced that it would use public posts, videos, and photos to train its AI models to deliver a better experience. However, after pushback from the Irish DPC, they withdrew the proposal.
- OpenAI has faced allegations of transcribing thousands of YouTube videos with its Whisper audio transcription model to train GPT-4.
- Similarly, Clearview AI, a facial recognition company, was fined €20 million in each of France and Italy. What for? Illegal, non-consensual use of personal images to build biometric databases.
Facebook's official blog post also mentions that it trains its AI model on user profile information, user interactions, and the type of content they view for age verification.
So, under the guise of 'verifying your age,' someone is constantly watching your activity, training AI models on it, and keeping an eye on you. It feels like George Orwell's 1984 all over again.
Of course, it's easy for the tech giants to say, 'Oh! But we're doing all this to keep your children safe.' It's convenient for them, as it advances their other, more worrisome goals. But at the end of the day, it's you who pays for this unfair tradeoff.
While the OSA has patched the hole of child safety, another one has opened: online privacy and surveillance. And we've only started to see its true extent.
The Solution

Enough with the problem; let's focus on the solution.
Instead of giving your personal information to 10 different websites to prove your age, we can use an issuer-wallet-verifier model to verify a user's age online.
Here's how this would work: when a platform wants to confirm your age, they can ask the issuer for a simple yes or no (through an age-verification token). And all without ever accessing your personal data.
Compare this with your credit score, which banks check when approving loans. The bank doesn't collect every bank statement, bill payment, and credit card record from you to determine your loan eligibility.
Instead, it turns to an institution like Equifax, which provides a single credit score as a preliminary eligibility screen. Banks use this to form an opinion of your creditworthiness.
The same applies to age verification, where the issuer provides a yes/no token to the platform. This is exactly what W3C Verifiable Credentials 2.0 enables: an approved issuer signs a credential (stating that you're over 18), which you store in a digital wallet. Whenever a website needs to verify your age, you share this signed token.
This is called verifying the 'attribute' instead of the identity. You're essentially tying a single attribute to a verified, signed token, all without giving away your personal information to various websites.
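The attribute-token idea above can be sketched in a few lines of code. This is a minimal, illustrative sketch, not the actual W3C Verifiable Credentials format: real VC deployments use public-key signatures (e.g. Ed25519) so the verifier never holds the issuer's secret, whereas this dependency-free version uses an HMAC as a stand-in for the signature. All function names and the key below are hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical issuer signing key. In a real VC system this would be a
# private key, and verifiers would check against the public half.
ISSUER_SECRET = b"issuer-private-signing-key"

def issue_age_credential(over_18: bool) -> str:
    """Issuer signs a bare attribute: no name, no birthdate, no ID scan."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(claim).decode() + "." + sig

def verify_age_credential(token: str) -> bool:
    """Verifier (the website) learns only a yes/no answer."""
    payload_b64, sig = token.split(".")
    claim = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token is rejected
    return json.loads(claim)["over_18"]

# The wallet presents the token; the platform sees no personal data.
token = issue_age_credential(over_18=True)
print(verify_age_credential(token))  # True
```

The key property is in the payload: the platform receives only `{"over_18": true}` plus a signature, so a breach of the platform's database leaks nothing about who you are.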
Other solutions include strict legislation and regulation preventing tech companies from using private information to train AI models. However, your personal data would still sit in online databases, leaving the door open to breaches.
Besides, independent third parties should periodically audit these AI systems to ensure the companies are keeping their promises.
Simply put, there is a middle ground between child safety and invasion of privacy, but reaching it requires intent. If regulators want to protect children without exposing you to unnecessary privacy risks, they must back that intent with action and strict rules against data misuse and surveillance.
The post Child Safety vs. Privacy: The Online Age Verification Dilemma appeared first on Techreport.