Age Verification Providers Say Don’t Worry About California Design Code; You’ll Just Have To Scan Your Face For Every Website You Visit

If you thought cookie pop-ups were an annoying nuisance, just wait until you have to scan your face for some third party to "verify your age" after California's new design code becomes law.
On Friday, I wrote about the companies and organizations most likely to benefit from California's AB 2273, the "Age Appropriate Design Code" bill that the California legislature seems eager to pass (and which they refer to as the "Kid's Code" even though the details show it will impact everyone, and not just kids). The bill seemed to be getting very little attention, but after a few of my posts started to go viral, the backers of the bill ramped up their smear campaigns and lies - including telling me that I'm not covered by it (and when I dug in and pointed out how I am... they stopped responding). But, even if somehow Techdirt is not covered (which, frankly, would be a relief), I can still be quite concerned about how it will impact everyone else.
But, the craziest of all things is that the "Age Verification Providers Association" decided to show up in the comments to defend themselves and insist that their members can do age verification in a privacy-protective manner. You just have to let them scan your face with facial recognition technology.
Really.
I'm not kidding:
First, we want to reassure you and your readers generally about anonymity. The purpose of the online age verification sector is to allow users to prove their age to a website, WITHOUT disclosing their identity.
This can be achieved in a number of ways, but primarily through the use of independent, third-party AV providers who do not retain centrally any of your personal data. Once they have established your age or age-range, they have no need (and under EU GDPR law, therefore no legal basis) to retain your personal data.
In fact, the AV provider may not have needed to access your personal data at all. Age estimation based on facial analysis, for example, could take place on your own device, as can reading and validating your physical ID.
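To be clear about the architecture they're describing: the pitch is that a site never receives your identity, only a signed assertion from the provider about your age or age range. A minimal sketch of that idea (my own illustration, not anything the AVPA has published; the shared demo key stands in for whatever real signing scheme a provider would actually use) might look something like this:

```python
# A hypothetical, simplified "age assertion" flow (illustration only; a real
# provider would use public-key signatures, not a shared secret like this).
import hashlib
import hmac
import json
import time

PROVIDER_KEY = b"demo-signing-key"  # stand-in for a real provider's signing key


def issue_age_assertion(age_range: str, valid_days: int = 90) -> dict:
    """Provider side: return a signed claim containing only an age range
    and an expiry -- no name, no date of birth, no face image."""
    claim = {
        "age_range": age_range,  # e.g. "18+" is the only substantive field
        "expires": int(time.time()) + valid_days * 86400,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def accept_age_assertion(assertion: dict) -> bool:
    """Site side: verify the signature and expiry; the site never learns
    who the user is, only that some provider vouched for their age range."""
    payload = json.dumps(assertion["claim"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, assertion["signature"])
            and assertion["claim"]["expires"] > time.time())


token = issue_age_assertion("18+")
print(accept_age_assertion(token))  # True
```

Even granting that such a token carries no name or date of birth, the provider still had to look at your face or your ID to issue it in the first place - which is exactly the part people are objecting to.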
First, I want to call out that they said they "may not" need access to your personal data. Which is very different from "does not" or "will not."
Also, they insist it's not "facial recognition" software because it's not matching you up to a database of your identity... it's just using "AI" to guesstimate your age. What could possibly go wrong?
But, more to the point, they're basically saying "don't worry, you'll just need to scan your face or ID for every website you visit." Normalizing facial scans does not seem particularly privacy protecting or reasonable. It seems pretty dystopian, frankly.
We've already just gone through this nonsense earlier this year when the IRS was demanding facial scans, and it later came out that - contrary to claims about privacy and the high quality of the facial verification technology - the technology was incredibly unreliable and the vendor in question's public claims about the privacy tools were bogus.
Honestly, this whole thing is bizarre. The idea that we need facial scans to surf the internet is just crazy, and I don't see how that benefits kids at all. (Also, does this mean you can only surf the web on PCs that have webcams, now? Do public libraries and internet cafes have to equip every machine with a camera?)
This morning, they're in the comments again, trying (and failing) to defend this argument that it's nothing to worry about. When people point out that such a system can be gamed, they have an answer... "we'll just make you take a video of yourself saying phrases, too." I mean WHAT?
For some higher risk use cases, the age check may involve a liveness test where the user must take several selfie photos or record a short video saying phrases requested by the provider. Passive liveness technology has further reduced the effort required by the user - do look into that.
They're also pushing back on claims that you'd have to scan all the time. If you're "low risk," according to them, you might only have to have your face scanned every three months. What a bargain.
How often you need to prove it is still the same user who did the check is a matter for the services themselves and their regulators. Some low risk uses might only check every three months - higher risk situations might double check it is still you each time you make a purchase.
Also, they're saying that if Techdirt is going to publish content that is "potentially harmful to kids" (as we've described, the standard of "harmful to kids" is never clearly defined in the bill, and could easily apply to our stories on civil rights abuses, among other things), these age verification providers have a solution: just redesign Techdirt to put those stories in the "adult section."
Unless techdirt carries content that is potentially harmful to kids, it would not need to apply age assurance. If some content is potentially harmful, this could be put in a sub-section of the site where adult users who wish to access it would use an age check - but probably the same one they did 3 weeks ago when downloading a new 18 rated video game.
All of this is nonsense.
Once again, everything about this bill assumes everyone providing internet services is inherently up to no good, and that every kid who uses the internet is damaged by it. That's not even remotely true. There are ways to deal with the actual problems without ruining the internet for everyone. But that's not the approach California is taking.