WhatsApp Boss Says No To AI Filters Policing Encrypted Chat
An anonymous reader quotes a report from The Register: The head of WhatsApp will not compromise the security of its messenger service to bend to the UK government's efforts to scan private conversations. Will Cathcart, who has been at parent company Meta for more than 12 years and head of WhatsApp since 2019, told the BBC that the popular communications service wouldn't downgrade or bypass its end-to-end encryption (E2EE) just for British snoops, saying it would be "foolish" to do so and that WhatsApp needs to offer a consistent set of standards around the globe. "If we had to lower security for the world, to accommodate the requirement in one country, that ... would be very foolish for us to accept, making our product less desirable to 98 percent of our users because of the requirements from 2 percent," Cathcart told the broadcaster. "What's being proposed is that we -- either directly or indirectly through software -- read everyone's messages. I don't think people want that." Strong E2EE ensures that only the intended sender and receiver of a message can read it -- not the provider of the communications channel, and not anyone eavesdropping on the encrypted chatter. The UK government is proposing that app builders add an automated AI-powered scanner in the pipeline -- ideally in the client app -- to detect and report illegal content, in this case child sex abuse material (CSAM). The upside is that messages are at least encrypted as usual when transmitted: the software on your phone, say, studies the material, and carries on as normal if the data is deemed CSAM-free. One downside is that any false positives mean people's private communications get flagged up and potentially analyzed by law enforcement or a government agent. Another downside is that the definition of what is filtered may gradually change over time, and before you know it, everyone's conversations are being automatically screened for things politicians have decided are verboten.
And another downside is that client-side AI models that don't produce a lot of false positives are likely to be easily defeated, and are mainly good for catching well-known, unaltered CSAM examples.
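To make the last point concrete, here is a minimal sketch of the kind of client-side check being described: the scan runs on the sender's device before encryption, so the message itself stays end-to-end encrypted in transit. The blocklist, function names, and byte strings below are hypothetical, and a plain cryptographic hash stands in for the perceptual-hash systems real proposals envision -- which is exactly why an exact-match approach only catches known, unaltered material.

```python
import hashlib

# Hypothetical blocklist of cryptographic hashes of known prohibited
# images. Real proposals envision perceptual hashes (e.g. PhotoDNA-style),
# which tolerate some alteration; a plain SHA-256 match does not.
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def client_side_scan(media: bytes) -> bool:
    """Return True if the media exactly matches a known-bad hash."""
    return hashlib.sha256(media).hexdigest() in BLOCKLIST

def send_message(media: bytes) -> str:
    # The scan happens BEFORE encryption, on the sender's device;
    # anything that passes is encrypted and sent via the normal E2EE path.
    if client_side_scan(media):
        return "flagged"           # would be reported rather than delivered
    return "sent (encrypted)"      # normal end-to-end encrypted delivery

# An exact copy of a blocklisted image is caught...
print(send_message(b"known-bad-image-bytes"))   # flagged
# ...but altering a single byte evades the exact-hash match entirely.
print(send_message(b"known-bad-image-bytez"))   # sent (encrypted)
```

The single-byte evasion illustrates the trade-off in the article: exact matching keeps false positives low but is trivially defeated, while fuzzier matching catches altered copies at the cost of flagging innocent content.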