Yet Again We Remind Policymakers That “Standard Technical Measures” Are No Miracle Solution For Anything

I'm starting to lose count of how many regulatory proceedings there have been in the last 6 months or so to discuss "standard technical measures" in the copyright context. Doing policy work in this space is like living in a zombie movie version of "Groundhog Day" as we keep having to marshal resources to deal with this terrible idea that just won't die.
The terrible idea? That there is some miracle technological solution that can magically address online copyright infringement (or any policy problem, really, but for now we'll focus on how this idea keeps coming up in the copyright context). Because when policymakers talk about "standard technical measures," that's what they mean: that there must be some sort of technical wizardry that can be imposed on online platforms to miraculously eliminate any somehow wrongful content that happens to be on their systems and services.
It's a delusion that has its roots going back at least to the 1990s, when Congress wrote into the DMCA the requirement that platforms "accommodate and [...] not interfere with standard technical measures" if they wanted to be eligible for its safe harbor protections against any potential liability for user infringements. Even back then Congress had no idea what such technologies would look like, and so it defined them in a vague way, as technologies of some sort "used by copyright owners to identify or protect copyrighted works [that] (A) have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process; (B) are available to any person on reasonable and nondiscriminatory terms; and (C) do not impose substantial costs on service providers or substantial burdens on their systems or networks." Which is a description that even today, a quarter-century later, matches precisely zero technologies.
Because, as we pointed out in our previous filing, in the previous policy study, there is no technology that could possibly meet all these requirements, even just on the fingerprinting front. And, as we pointed out in this filing, in this policy study, even if you could accurately identify copyrighted works online, no tool can possibly identify infringement. Infringement is an inherently contextual question, and there is no way to load up any technical tool with enough information to correctly infer whether a work appearing online is infringing or not. As we explained (and as the sketch after this list illustrates), it is simply not going to know:
(a) whether there's a valid copyright in the work at all (because even if such a tool could be fed information directly from Copyright Office records, registration is often granted presumptively, without necessarily testing whether the work is in fact eligible for copyright at all, or whether the party doing the registering is the party entitled to do it);
(b) whether, even if there is a valid copyright, it is one validly claimed by the party on whose behalf the tool is being used to identify the work(s);
(c) whether a copyrighted work appearing online is appearing online pursuant to a valid license (which the programmer of the tool may have no ability to even know about); or
(d) whether the work appears online as a fair use, which is the most contextual analysis of all and therefore the most impossible to pre-program with any accuracy, unless, of course, the tool is simply programmed to presume that it never is.
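To make the gap concrete, here is a minimal, purely hypothetical Python sketch (every name in it is invented for illustration, not drawn from any real tool). The only thing a fingerprint matcher can compute is the first function: a similarity match against an index of known works. The second function is the legal question, and every input it would need, items (a) through (d) above, is simply unavailable from inside the software:

```python
# Purely hypothetical sketch (all names invented): the narrow question a
# fingerprint matcher can answer vs. the questions infringement turns on.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MatchResult:
    work_id: str        # which indexed work the upload resembles
    similarity: float   # similarity score between 0.0 and 1.0


def match_fingerprint(upload: bytes) -> Optional[MatchResult]:
    """At best, answers: does this upload resemble a work in my index?"""
    ...  # perceptual hashing + lookup against an index of known works


def is_infringing(match: MatchResult) -> bool:
    """The legal question. None of its real inputs exist inside the tool."""
    copyright_is_valid = ...   # (a) registration is presumptive, not adjudicated
    claimant_is_owner = ...    # (b) is the asserting party even entitled to claim?
    upload_is_licensed = ...   # (c) licenses live outside the tool's view
    use_is_fair = ...          # (d) fair use is a contextual judgment, not a lookup
    raise NotImplementedError("A similarity score cannot answer a legal question.")
```

The point of the sketch is that `match_fingerprint` is an engineering problem, but `is_infringing` is a legal judgment, and no amount of engineering moves facts (a) through (d) into the tool's inputs.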
Because the problem with presuming that a fair use is not a fair use, or that a non-infringing work is infringing, is that proponents of these tools don't just want to be able to deploy these tools to say "oh look, here's some content that may be infringing." They want those tools' alerts to be taken as definitive findings of infringement that will force a response from the platforms to do something about them. And the only response that will satisfy these proponents is (at minimum) removal of this content (if not also removal of the user, or more) if the platforms want to have any hope of retaining their safe harbor protection. Furthermore, proponents want this removal to happen regardless of whether the material is actually infringing, because they also want it to happen without any proper adjudication of that question at all.
We already see the problem of platforms being forced to treat every allegation of infringement as presumptively valid, as an uncheckable flood of takedown notices keeps driving offline all sorts of expression that is actually lawful. What these inherently flawed technologies would do is turn that flood into an even greater tsunami, as platforms are forced to credit every allegation the tools automatically spew forth every time they find any instance of a work, no matter how inaccurate that infringement conclusion actually is.
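Some back-of-the-envelope arithmetic shows how this scales (the numbers here are invented purely for illustration; only the structure of the calculation matters). Once platforms must treat every automated match as infringement, the tool's false-positive rate stops being a mere quality metric and becomes, one for one, the rate at which lawful expression gets removed:

```python
# Illustrative arithmetic with invented numbers: when every automated match
# must be treated as infringement, false positives become removals of lawful
# expression, one for one.

daily_uploads = 10_000_000     # hypothetical platform volume
match_rate = 0.02              # fraction of uploads the tool flags
false_positive_rate = 0.05     # flags that are not actually infringement
                               # (licensed, fair use, invalid claim, etc.)

flagged = daily_uploads * match_rate                # 200,000 per day
wrongly_removed = flagged * false_positive_rate     # 10,000 per day

print(f"{flagged:,.0f} uploads flagged per day")
print(f"{wrongly_removed:,.0f} lawful uploads removed per day")
```

On these made-up numbers, a tool that is 95% "accurate" still suppresses ten thousand lawful posts every single day, with no adjudication anywhere in the loop to catch any of them.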
And that sort of law-caused censorship, forcing expression to be removed without there ever being any adjudication of whether the expression is indeed unlawful, deeply offends the First Amendment, as well as copyright law itself. After all, copyright is all about encouraging new creative expression (as well as the public's access to it). But forcing platforms to respond to systems like these would be all about suppressing that expression, and an absolutely pointless thing for copyright law to command, whether in its current form as part of the DMCA or any of the new, equally dangerous updates proposed. And it's a problem that will only get worse as long as anyone thinks that these technologies are any sort of miracle solution to any sort of problem.