Professor Wrongly Blames Apple For CSAM
His heart is probably in the right place. That's the best thing I can say about Berkeley professor Dr. Hany Farid, who has spent the last couple of years being wrong about CSAM (child sexual abuse material) detection.
That he's been wrong has done little to shut him up. But he appears to deeply feel he's right. And that's why I'm convinced his heart is in the right place: right up there in the chest cavity where most hearts are located.
Physical heart location aside, he's pretty much always wrong. He's always happy to offer his (non-expert) opinion and deploy presentations that preach to the converted. He's sure the CSAM problem is the fault of service providers, rather than those who create and upload CSAM.
So, he's spent a considerable amount of time going after Apple. Apple, at one point, considered client-side scanning to be an acceptable solution to this problem, even if it meant making Apple less secure than its competitors. Shortly thereafter - following plenty of unified criticism - Apple decided it was better off protecting millions of innocent customers and users, rather than sacrificing them on the altar of "for the children" just because it might make it easier for the government to locate and identify the extremely small percentage of users engaged in illicit activity.
This walkback appears to have upset Hany Farid. And he's been given space at San Francisco's largest paper to make everyone stupider. His record for being wrong continues uninterrupted with his op-ed for the San Francisco Chronicle. Here's the headline:
Why are there so many images of child abuse stored on iCloud? Because Apple allows it
There's a difference between "allows" and "this kind of thing happens." That's the difference Farid hopes to obscure. No matter what platform is involved, a certain number of users will attempt to use it to share illicit content. That Apple's cloud service is host to (a minimal amount of) CSAM says nothing about Apple's internal attitude towards CSAM, much less about its so-called "allowing" of this content to be hosted and shared via its services.
But Farid insists Apple is complicit in the sharing of CSAM, something he attempts to prove by highlighting recent convictions aided by (wait for it) evidence obtained from Apple itself.
Earlier this year, a man in Washington state was sentenced to 22 years in federal prison for sexually abusing his girlfriend's 7-year-old stepdaughter. As part of their investigation, authorities also discovered the man had been storing known images and videos of child sexual abuse on his Apple iCloud account for four years.
Why was this man able to maintain a collection of illegal and sexually exploitative content of children for so long? Because Apple wasn't looking for it.
The first paragraph contains facts. The second paragraph contains conjecture. The third paragraph of this op-ed again mixes the two, presenting both conjecture and a secured conviction as evidence of Apple's unwillingness to police iCloud for CSAM.
What goes ignored is the fact that the evidence used to secure these convictions was derived from iCloud accounts. If Apple truly had no desire to rid the world of CSAM, you'd think it would have put up more of a fight when asked to hand over this content.
What this does show is something that runs contrary to Farid's narrative: Apple is essential in securing convictions of CSAM producers and distributors. The content stored in these iCloud accounts was crucial to the success of these prosecutions. If Apple were truly interested in aiding and abetting the spread of CSAM, it would have done more to prevent prosecutors from accessing this evidence.
And that's the problem with disingenuous arguments like the ones Farid is making. Farid claims Apple isn't doing enough to stymie CSAM distribution. But then he tries to back his claims by detailing all the times Apple has been instrumental in securing convictions of child abusers.
Not content with ignoring this fatal flaw in his argument, Farid moves on to make arguably worse arguments using his version of known facts.
Back in the summer of 2021, Apple announced a plan to use innovative methods to specifically identify and report known images and videos of the rape and molestation of a child - without compromising the privacy that its billions of users expect.
This is a huge misrepresentation of Apple's client-side scanning plan. It definitely would compromise the "privacy that its billions of users expect." Apple's proposed scanning of all content on user devices that might be hosted (however temporarily) by its iCloud service very definitely compromised their privacy. Worse, it compromised their security by introducing a new attack vector for malicious governments and hackers alike, one that could have allowed others to access content phone users (incorrectly, in this case) assumed was only accessible to them.
That misrepresentation is followed by another false assertion by Farid:
But by the end of 2022, Apple quietly abandoned the plan.
Apple did not "quietly" abandon this plan. It publicly announced this reversal, something that led almost immediately to a number of government figures, talking heads, and special interest groups publicly expressing their displeasure with the move. It was anything but "quiet."
Adding to this wealth of misinformation are Farid's unsupported claims about hash-matching, which has been repeatedly shown to be easily circumvented and, even worse, easily manipulated to create false positives capable of causing irreparable damage to innocent people.
Detecting known images is a tried and true way many companies, including Apple's competitors, have detected this content for years. Apple could deploy this same technique to find child sexual abuse images and videos on its platforms.
Translation: A parent innocently taking pictures of their infant in the bathtub will not be reported to law enforcement because those images have not previously been determined to be illicit. This critical distinction ensures that innocent users' privacy remains intact while empowering Apple to identify and report the presence of known child sexual abuse images and videos on iCloud.
While it's true hash-matching works to a certain extent, pretending innocent people won't be flagged and/or that the system can't be easily defeated is ridiculous. But Farid has an axe to grind, and he's obviously not going to be deterred by the reams of evidence that contradict what he considers to be foregone conclusions.
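To make concrete what "matching known images" actually involves - and where the circumvention and false-positive problems come from - here's a minimal Python sketch. This is not Apple's, Microsoft's, or anyone else's actual pipeline (real systems like PhotoDNA rely on perceptual hashes rather than cryptographic ones); the hash list and image bytes below are purely hypothetical, for illustration.

```python
# Minimal sketch (hypothetical data, not any vendor's real system) of
# hash-matching against a list of known-bad image hashes, and why a
# one-byte change to a file defeats exact (cryptographic) matching.
import hashlib

# Hypothetical database of hashes of previously identified images.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def exact_match(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 hash is in the known-hash list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

original = b"example-known-image-bytes"
modified = original + b"\x00"          # a single appended byte

print(exact_match(original))   # True  -- an exact copy is caught
print(exact_match(modified))   # False -- a trivially altered copy is missed
```

The gap between those two results is the heart of the dispute: exact hashes miss trivially altered copies, so real deployments use perceptual hashes that tolerate small changes, and that tolerance is precisely what opens the door to collisions and the false positives mentioned above.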
The ultimate question is this: is it better to be wrong but loud about stuff? Or is it better to be right, even if it means some of the worst people in the world will escape immediate detection by governments or service providers?
Or, if those aren't the questions you like, consider this: is it more likely that Apple wants to host illicit images, or that Apple isn't willing to intrude on the privacy of its users because it wishes to earn the trust of non-criminal users - users who make up the overwhelming majority of Apple's customer base?
People like Professor Farid aren't willing to consider the most likely explanation. Instead, they insist - without evidence - that big tech companies are willfully ignoring illegal activity so they can increase their profits. That's just stupid. Companies that ignore illegal activity may enjoy brief bumps in profit margin, but relying on illegal activity (as Farid insists they do) is something no tech company, no matter how large, would consider a solid long-term business model.