
Horrifying: Google Flags Parents As Child Sex Abusers After They Sent Their Doctors Requested Photos

by Mike Masnick from Techdirt

Over the last few years, there has been a lot of attention paid to the issue of child sexual abuse material (CSAM) online. It is a huge and serious problem, and has been for a while. If you talk to trust and safety experts who work in the field, the stories they tell are horrifying and scary. Trying to stop the production of such material (i.e., literal child abuse) is a worthy and important goal. Trying to stop the flow of such material is similarly worthy.

The problem, though, is that as with so many things that have a content moderation component, the impossibility theorem rears its head. And nothing demonstrates that quite as starkly as this stunning new piece by Kashmir Hill in the New York Times, discussing how Google has been flagging people as potential criminals after they shared photos of their children in response to requests from medical professionals trying to deal with medical conditions the children have.

There is much worth commenting on in the piece, but before we get into the details, it's important to give some broader political context. As you probably know if you read this site at all, across the political spectrum there has been tremendous pressure over the last few years to pass laws that force websites to "do something" about CSAM. Again, CSAM is a massive and serious problem, but, as we've discussed, the law (namely 18 USC 2258A) already requires websites to report any CSAM content they find, and they can face stiff penalties for failing to do so.

Indeed, it's quite likely that much of the current concern about CSAM is due to there finally being some level of recognition of how widespread it is thanks to the required reporting by tech platforms under the law. That is, because most websites take this issue so seriously, and carefully follow the law, we now know how widespread and pervasive the problem is.

But, rather than trying to tackle the underlying problem, politicians often want to do the politician thing and just blame the tech companies for doing the required reporting. It's very much shooting the messenger: the reporting by tech companies shines a light on the underlying societal failures that produced this problem, and that reporting is then used as an excuse to blame the tech companies rather than those failures.

It's easier to blame the tech companies - most of whom have bent over backwards to work with law enforcement and to build technology to help respond to CSAM - than to come up with an actual plan for dealing with the underlying issues. And so almost all of the legal proposals we've seen are really about targeting tech companies... and, in the process, removing underlying rights. In the US, we've seen the EARN IT Act, which completely misdiagnoses the problem and would actually make it that much harder for law enforcement to track down abusers. EARN IT attempts to blame tech companies for law enforcement's unwillingness to go after CSAM producers and distributors.

Meanwhile, over in the EU, there's an apparently serious proposal to effectively outlaw encryption and require client-side scanning of all content in an attempt to battle CSAM. Even as experts have pointed out how this makes everyone less safe, and there has been pushback on the proposal, politicians are still supporting it by basically just repeating "we must protect the children" without seriously responding to the many ways in which these bills will make children less safe.

Separately, it's important to understand some of the technology behind hunting down and reporting CSAM. The most famous tool is PhotoDNA, initially developed by Microsoft and used among many of the big platforms to share hashes of known CSAM, so that material that has already been discovered isn't spread more widely. There are some other similar tools, but for fairly obvious reasons these tools carry risks: there are concerns both about false positives and about who is allowed to have access to them (even though they share hashes, not actual images, the possibility of such tools being abused is a real concern). A few companies, including Google, have developed more AI-based tools to try to identify CSAM, and Apple (somewhat infamously) has been working on its own client-side scanning tools along with cloud-based scanning. But client-side scanning has significant limits, and there is real fear that it will be abused.
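To make the hash-matching approach concrete, here is a minimal sketch of the general idea: compare each uploaded file against a shared list of hashes of already-identified material. This is not PhotoDNA itself - PhotoDNA is a proprietary perceptual hash designed to survive resizing and re-encoding - and the hash list, file paths and function names below are purely illustrative assumptions.

    # Illustrative sketch only (assumed names throughout): matching uploads
    # against a shared list of hashes of known, already-identified material.
    # A plain cryptographic hash stands in for a perceptual hash like PhotoDNA.

    import hashlib
    from pathlib import Path

    # Hypothetical shared hash list; in practice such lists are distributed
    # through vetted industry channels, not hard-coded.
    KNOWN_HASHES = {
        # "d2a84f4b8b650937ec8f73cd8be2c74a...",  # placeholder entry
    }

    def file_hash(path: Path) -> str:
        # Hash the raw bytes of the uploaded file.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def is_known_match(path: Path) -> bool:
        # True only if the upload is byte-identical to something on the list.
        return file_hash(path) in KNOWN_HASHES

An exact hash like this only catches byte-identical copies, which is why PhotoDNA and similar systems use perceptual hashes instead - and why the separate AI-based classifiers that try to flag brand-new material, like the one in the story below, are where false positives become a much bigger concern.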

Of course, spy agencies also love the idea of everyone being forced to do client-side scanning in response to CSAM, because they know that basically creates a backdoor to spy on everyone's devices.

Whenever people highlight the potential for false positives, supporters of these scanning tools often brush the concern off, saying that the risk is minimal. And, until now, there weren't many good examples of false positives beyond things like Facebook pulling down iconic photographs, claiming they were CSAM.

However, this article (yes, finally we're talking about the article) by Hill gives us some very real world examples of how aggressive scanning for CSAM can not just go wrong, but can potentially destroy lives as well. In horrifying ways.

It describes how a father noticed his son's penis was swollen and apparently painful to the child. An advice nurse at their healthcare provider suggested they take photos to send to the doctor, so the doctor could review them in advance of a telehealth appointment. The father took the photos and texted them to his wife so she could share with the doctor... and that set off a huge mess.

Texting them - in Google's terms, taking "affirmative action" - caused Google to scan the material, and its AI-based detector flagged the image as potential CSAM. You can understand why. But the context was certainly missing. And it didn't much matter to Google - which shut down the guy's entire Google account (including his Google Fi phone service) and reported him to local law enforcement.

The guy, just named "Mark" in the story, appealed, but Google refused to reinstate his account. Much later, Mark found out about the police investigation this way:

In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark's Google account: his internet searches, his location history, his messages and any document, photo and video he'd stored with the company.

The search, related to "child exploitation videos," had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn't worked.

"I determined that the incident did not meet the elements of a crime and that no crime occurred," Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back.

"You have to talk to Google," Mr. Hillard said, according to Mark. "There's nothing I can do."

In the article, Hill highlights at least one other example of nearly the same thing happening, and also talks to (former podcast guest) Jon Callas about how it's likely that this happens way more than we realize, but the victims of it probably aren't willing to speak about it, because then their names are associated with CSAM.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases "canaries in this particular coal mine."

"There could be tens, hundreds, thousands more of these," he said.

Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.

There's so much in this story that is both horrifying and a very useful illustration of the trade-offs and risks of these tools, and of the process for correcting errors. It's good that these companies are making proactive efforts to stop the creation and sharing of CSAM. The article also shows how these companies go above and beyond what the law actually requires (contrary to the claims of politicians and some in the media - and, unfortunately, many working for public interest groups trying to protect children).

However, it also shows the very real risks of false positives: how they can create serious problems for people, and how few of those people are willing to discuss it publicly, for fear of what merely being associated with the issue could do to their lives and reputations.

If politicians (pushed by many in the media) continue to advocate for regulations mandating even more aggressive behavior from these companies, including increasing liability for missing any content, it is inevitable that we will have many more such false positives - and the impact will be that much bigger.

There are real trade-offs here, and any serious discussion of how to deal with them should recognize that. Unfortunately, most of the discussions are entirely one-sided, and refuse to even acknowledge the issue of false positives and the concerns about how such aggressive scanning can impact people's privacy.

And, of course, since the media (with the exception of this article!) and political narrative are entirely focused on "but think of the children!" the companies are bending even further backwards to appease them. Indeed, Google's response to the story of Mark seems ridiculous as you read the article. Even after the police clear him of any wrongdoing, it refuses to give him back his account.

But that response is totally rational when you look at the typical media coverage of these stories. There have been so many stories - often misleading ones - accusing Google, Facebook and other big tech companies of not doing enough to fight CSAM. So any mistakes in that direction are used to completely trash the companies, saying that they're "turning a blind eye" to abuse or even deliberately "profiting" off of CSAM. In such a media environment, companies like Google aren't even going to risk missing something, and their default is going to be to shut down the guy's account. Because the people at the company know they'd get destroyed publicly if it turned out he was involved in CSAM.

As with all of this stuff, there are no easy answers here. Stopping CSAM is an important and noble goal, but we need to figure out the best way to actually do that, and deputizing private corporations to magically find and stop it, with serious risk of liability for mistakes (in one direction), seems to have pretty significant costs as well. And, on top of that, it distracts from trying to solve the underlying issues, including why law enforcement isn't actually doing enough to stop the actual production and distribution of actual CSAM.
