
Two GCHQ Employees Suggest The Solution To CSAM Distribution Is… More Client-Side Scanning

by Tim Cushing, from Techdirt on (#61W4H)

The font of not-great ideas continues to overflow at Lawfare. To be fair, this often-overflowing font is due to its contributors, who include current and former members of spy agencies that have violated rights, broken laws, and otherwise done what they can to make internet communications less secure.

We've heard from these contributors before. Ian Levy and Crispin Robinson are both GCHQ employees. A few years ago, as companies like Facebook started tossing around the idea of end-to-end encryption, Levy and Robinson suggested a workaround that would have done the same amount of damage as mandated backdoors, even if the pitch was slightly different from the suggestions offered by consecutive FBI directors.

What was suggested then was some sort of parallel communication network that would allow spies and law enforcement to eavesdrop on communications. The communications would still be encrypted. It's just that the "good guys" would have their own encrypted channel to listen in on these communications. Theoretically, communications would still be secure, unable to be accessed by criminals. But opening a side door is not a whole lot different than opening a back door. A blind CC may be a bit more secure than undermining encryption entirely, but it still opens up another communication channel - one that might be left open and unguarded by the interceptors, who would likely feel that whatever bad things might result from that are acceptable because (lol) spy agencies only target dangerous enemies of the state.
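To see why, consider how fan-out messaging works: the sender's client encrypts a copy of the message for every recipient it is told about, so whoever controls the recipient list controls who can read. A deliberately crude sketch of that dynamic - the xor "cipher", keys, and names below are placeholders, not the actual proposal:

```python
# Toy illustration of why a silently added "ghost" recipient defeats the point
# of end-to-end encryption. The xor cipher, keys, and names are placeholders.
import secrets

def toy_encrypt(message: bytes, key: bytes) -> bytes:
    # One-time-pad-style xor; stands in for real per-recipient encryption.
    return bytes(m ^ k for m, k in zip(message, key))

# Keys the sender's client believes belong to the legitimate recipients.
recipient_keys = {"alice": secrets.token_bytes(64), "bob": secrets.token_bytes(64)}

# The provider quietly appends one more recipient to the server-supplied list.
recipient_keys["ghost"] = secrets.token_bytes(64)

message = b"meet at noon"
ciphertexts = {name: toy_encrypt(message, key) for name, key in recipient_keys.items()}
print(sorted(ciphertexts))  # ['alice', 'bob', 'ghost'] - the sender never noticed
```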

The pair are back at it. In this post for Lawfare, Levy and Robinson suggest a "solution" that has already been proposed (and discarded) by the company that attempted it first: Apple. The "solution" is apparently trivially easy to exploit and prone to false positives/negatives, but that isn't stopping these GCHQ reps from suggesting it be given another spin.

According to the paper [PDF] published by these two GCHQ employees, the key to fighting CSAM (Child Sexual Abuse Material) in the era of end-to-end encryption is... more client-side scanning of content. And it goes beyond matching local images to known hashes stored by agencies that combat the sexual exploitation of children.
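The hash-matching baseline itself is conceptually simple. A rough sketch, with an exact SHA-256 digest standing in for the perceptual hashes (PhotoDNA, NeuralHash) real systems use - the database entry and function names here are invented:

```python
# Sketch of hash-matching client-side scanning. A plain SHA-256 digest stands
# in for the perceptual hashes real systems use; everything here is made up.
import hashlib

# Hypothetical digests distributed to the client by a child protection org.
KNOWN_BAD_HASHES = {
    "4a44dc15364204a80fe80e9039455cc1608281820fe2b24f1e5233ade6af1dd5",
}

def scan_before_send(image_bytes: bytes) -> bool:
    """Flag the image for review before the client encrypts and sends it.

    A real deployment would use a perceptual hash that tolerates resizing and
    re-compression; an exact digest is trivially evaded by changing one pixel.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(scan_before_send(b"holiday photo"))  # False: digest not in the set
```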

For example, one of the approaches we propose is to have language models running entirely locally on the client to detect language associated with grooming. If the model suggests that a conversation is heading toward a risky outcome, the potential victim is warned and nudged to report the conversation for human moderation. Since the models can be tested and the user is involved in the provider's access to content, we do not believe this sort of approach attracts the same vulnerabilities as others.

Well, no vulnerabilities except for the provider's access to what are supposed to be end-to-end encrypted communications. If this is the solution, the provider may as well not offer encryption at all, since it apparently won't actually be encrypted at both ends. The provider will have access to the client side in some form, which opens a security hole that would not be present otherwise. The only mitigating factor is that the provider will not have its own copy of the communications. And if it doesn't have that, of what use is it to law enforcement?
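For what it's worth, the flow the quoted passage describes would look something like this on the client. This is a toy stand-in - keyword counting in place of the paper's language model, with made-up phrases, threshold, and function names:

```python
# Toy stand-in for an on-device "grooming language" detector. Keyword counting
# replaces the actual language model; phrases and threshold are invented.
RISK_PHRASES = ("our secret", "don't tell your parents", "send me a photo")

def risk_score(conversation: list[str]) -> float:
    # Crude proxy for a local classifier: fraction of risk phrases present.
    text = " ".join(conversation).lower()
    return sum(phrase in text for phrase in RISK_PHRASES) / len(RISK_PHRASES)

def maybe_warn(conversation: list[str], threshold: float = 0.3) -> None:
    # The nudge happens entirely on the client; nothing leaves the device
    # unless the user chooses to report the conversation for human moderation.
    if risk_score(conversation) >= threshold:
        print("Warning: this chat looks risky. Report it for moderation? [y/n]")

maybe_warn(["hey", "this is our secret, don't tell your parents"])
```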

The proposal (which the authors note is not to be viewed as representative of GCHQ or the UK government) operates largely on faith.

[W]e believe that a robust evidence-based approach to this problem can lead to balanced solutions that ensure privacy and safety for all. We also believe that a framework for evaluating the benefits and disbenefits is needed.

Disbenefits" is a cool word. If one were more intellectually honest, they might use a word like drawback" or flaw" or negative side effects." But that's the Newspeak offered by the two GCHQ employees.

The following sentence makes it clear the authors don't know whether any of their proposals will work, nor how many [cough] disbenefits they will cause. Just spitballing, I guess, but with the innate appeal to authority that comes from their positions and a tastefully formatted PDF.

We don't provide one in this paper but note that the U.K.'s national Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) is doing so as part of the U.K. government's Safety Tech Challenge Fund, although this will require interpretation in the context of national data protection laws and, in the U.K., guidance from the Information Commissioner's Office.

[crickets.wav]

The authors do admit client-side scanning (whether of communications or content) is far from flawless. False negatives and false positives will be an ongoing problem. The system can easily be duped into OK'ing CSAM. That's why they want to add client-side scanning of written communications to the mix, apparently in hopes that a combination of the two will reduce the "disbenefits."

Supposedly this can be accomplished with tech magic crafted by people nerding harder and a system of checks and balances that will likely always remain theoretical, even if it's hard-coded into moderation guidelines and law enforcement policies.

For example, offenders often send existing sexually explicit images of children to potential victims to try to engender trust (hoping that victims reciprocate by sending explicit images of themselves). In this case, there is no benefit whatsoever in an offender creating an image that is classified as child abuse material (but is not), since they are trying to affect the victim, not the system. This weakness could also be exploited by sending false-positive images to a target, hoping they are somehow investigated or tracked. This is mitigated by the reality of how the moderation and reporting process works, with multiple independent checks before any referral to law enforcement.

This simply assumes such "multiple independent checks" exist or will exist. They may not. It may be policy for tech companies to simply forward everything questionable to law enforcement and allow the "pros" to sort it out. That "solution" is easiest for tech companies, and since they'll be operating in good faith, legal culpability for adverse law enforcement reactions will be minimal.

That assumptive shrug - that robust policies exist, will exist, or will be followed thousands of times a day - leads directly into another incorrect assumption: that harm to innocent people will be mitigated because of largely theoretical checks and balances on both ends of the equation.

The second issue is that there is no way of proving which images a client-side scanning algorithm is seeking to detect, leaving the possibility of "mission creep" where other types of images (those not related to child sexual abuse) are also detected. We believe this is relatively simple to fix through a small change to how the global child protection non-governmental organizations operate. We would have a consistent list of known bad images, with cryptographic assurances that the databases contain only child sexual abuse images that can be attested to publicly and audited privately. We believe these legitimate privacy concerns can be mitigated technically and the legal and policy challenges are likely harder, but we believe they are soluble.

The thing is, we already have a "consistent list of known bad images." If we're not already doing the other things in that sentence (a verifiable database that can be "attested to publicly and audited privately"), then the only thing more client-side content scanning can do is produce more false positives and negatives. Again, the authors assume these things are already in place. And they use these assumptions to buttress their claims that the "disbenefits" will be limited by what they assume will happen ("multiple independent checks") or assume has already happened (an independently verifiable database of known CSAM images).
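What a publicly attestable database might even look like is left unspecified. One conceivable building block - offered purely as illustration, not as anything the paper commits to - is a published commitment over the hash list, such as a Merkle root, so that any quiet addition of non-CSAM entries changes a value everyone can see:

```python
# Illustrative only: commit to the hash database with a Merkle root so the
# public can detect silent changes while vetted auditors inspect the entries.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list[bytes]) -> bytes:
    """Root of a Merkle tree over the database entries."""
    level = [_h(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical stand-ins for perceptual hashes of known CSAM.
database = [b"entry-1", b"entry-2", b"entry-3"]

# The NGO publishes this value; adding a "mission creep" entry changes it.
print(merkle_root(database).hex())
```

None of that machinery is described in the paper; it is simply assumed to be solvable.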

That's a big ask. The other big ask is that the paper proposes the private sector do all of the work. Companies will be expected to design and implement client-side scanning. They will be expected to hire enough people to provide human backstops for AI-guided flagging of content. They will need to have specialized personnel in place to act as law enforcement liaisons. And they will need to have solid legal teams in place to deal with the blowback (I'm sorry, "disbenefits") of false positives and negatives.

If all of this is in place, and law enforcement doesn't engage in mission creep, it will work the way the authors suggest: a non-encryption-breaking solution to the distribution of CSAM via end-to-end encrypted communications platforms. To be fair, the paper does admit all the pieces need to come together to make this work. But the proposal raises far more questions than it answers. And yet the authors seem to believe it will work because it's merely possible.

Through our research, we've found no reason as to why client-side scanning techniques cannot be implemented safely in many of the situations society will encounter. That is not to say that more work is not needed, but there are clear paths to implementation that would seem to have the requisite effectiveness, privacy, and security properties.

It's still a long way from probable, though. And that's not even in the same neighborhood as theoretically possible if everything else goes right.
