That Time Google Kinda, But Not Really, Accused People Searching For Techdirt Of Searching For CSAM
Earlier this week I posted two examples of people falsely being told that a post or a search was deemed connected to child sexual abuse material. Then I thought I had spotted another example, when someone on Bluesky alerted me that they had searched for "techdirt" and the results included a line saying "We think that your search might be associated with child sexual abuse. Child sexual abuse or viewing sexual imagery of children can lead to imprisonment and...."
I tried the search myself, but didn't see the same results... until I VPN'd my way into a Polish server and tried again:
On Bluesky, others quickly chimed in, noting that they also saw it elsewhere in the EU, though not always. It seemed to depend on a few factors, including whether or not you were logged in.
The whole thing seemed pretty strange, and we started to investigate what was going on. At one point, a colleague noted it was weird that the warning would appear where it did in those images. That space is normally reserved for summaries of the page you're looking at. If there were an alert or a warning, you'd expect it to appear above the search results entirely.
And then... I realized. That line was not a warning from Google. It was Google doing a terrible job of summarizing what's on Techdirt. Because, remember how I started this story off talking about the article I posted on Tuesday about mistaken claims of searching for CSAM? In the middle of that post, I included the text of that warning... which was identical to the text in the "warning" on Google.
In other words, for who knows what reason, when Google tried to summarize Techdirt, it just pulled that quote out of one article on the page, and for some category of searches that's what it showed as the summary of the whole page. I do not know why it would choose that one sentence, because what a perplexing sentence to choose! And one that might cause people to not click on Techdirt!
There has been lots of talk about how Google's search quality has been on the decline, and this seems like another example of why. Still, in a weird way, it's also yet another example of why policing stuff online can be so tricky. An AI summarizer might grab the wrong sentence, and when placed in the wrong context, it could look very, very bad.
I'm not entirely sure that there's anything that can be done in such situations. It's not like we live in Australia, where internet laws are upside down. It's just one of those weird things, where maybe the answer is just... to use other search engines, rather than relying on an increasingly unreliable Google.