Australian Government Apparently Willing To Follow In UK Government’s Client-Side Scanning Footsteps

The UK government desires direct control of the internet. This has been the plan for years. A bill that would criminalize encryption while mandating client-side scanning to control the spread of child sexual abuse material (CSAM) has been on the front burner for much of that time.
The bill would also turn hate speech into a crime and punish tech companies directly for content generated by users. It's a bad idea all over - something UK legislators realized early on, resulting in some rebranding. What used to be called the "Online Harms Act" is now the "Online Safety Act." The harms to internet users remain the same. The only thing that has changed is the government's preferred nomenclature.
While we've been keeping an eye on similar statutes proposed by the EU - something that would also criminalize encryption if end-to-end encryption prevented client-side scanning for CSAM - the UK's policy proposal has been embraced by its farm team, the Australian government.
This government has been seeking ways to irreparably damage encryption while increasing its domestic surveillance powers. That it would embrace a proposal that threatens encryption while increasing monitoring demands for service providers is unsurprising.
The nation has flown under the radar while the UK and EU governments make all the headlines. But we're back on task here, thanks to an excellent report on the latest regulatory efforts by Cam Wilson for Aussie-focused news outlet Crikey.
A barely noticed announcement made this month by Australia's online safety chief is the strongest signal yet that tech companies like Meta, Google and Microsoft will soon be legally required to scan all user content.
This indication came after the federal government's eSafety commissioner and Australia's tech industry couldn't agree on how companies were going to stamp out child sexual abuse material (CSAM) and pro-terror content.
Now, eSafety commissioner Julie Inman Grant is writing her own binding rules and all signs point towards the introduction of a legal obligation that would force email, messaging and online storage services like Apple iCloud, Signal and ProtonMail to "proactively detect" harmful online material on their platforms - a policy that would be a first in the Western world if implemented today.
This all aligns with the worst aspects of the UK and EU proposals. The thing is: this won't work. WhatsApp - Facebook's messaging acquisition - has already made it clear it won't break encryption to satisfy overreaching legislators. Apple has already been burnt by its own proactive client-side scanning proposal, so it's unlikely it will be talked into further damaging its own reputation with subservience to governments demanding it do what it has decided it simply won't do... at least not at the moment. And ProtonMail has extended a firm middle finger to any government demanding it break its encryption.
The end result of this Australian proposal won't be greater insight into CSAM distribution. All this insistence on client-side scanning (with its obvious effects on E2EE) will do is ensure Australian residents will only have access to subpar communication platforms that have never been concerned enough about user privacy and security to implement end-to-end encryption.
As is par for the course, the ends are undeniably good: stopping the spread of CSAM and identifying those trafficking in this illegal content. It's the means that are terrible, and not just because the proposed means mandate undermining encryption and/or fining tech companies $657,000/day over content created and distributed by their users.
Any scanning system is vulnerable to incorrect results. The DIS [designated internet services] code notes that hash lists are "not infallible" and points out that an error, such as recording a false positive and then erroneously flagging someone for possessing CSAM, can have serious consequences. The use of machine learning or artificial intelligence for scanning adds to the complexity and, as a result, the likelihood that something would be wrongly identified. Similarly, systems may also record false negatives and miss harmful online content.
Even if scanning technology was completely error-proof, the application of this technology can still have problems. The eSafety commissioner expects pro-terror material like footage of mass shootings to be proactively detected and flagged but there are many legitimate reasons why an individual such as journalists and researchers may possess this content. While the national classification scheme has contextual carve-outs for these purposes, scanning technologies don't have this context and could flag this content like any other user.
There are even examples of how content that appears to be CSAM in a vacuum has legitimate purposes. For example, a father was automatically flagged, banned and reported to police by Google after it detected medical images taken of his child's groin under orders of a doctor, immediately locking this user out of their email, phone and home internet.
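The failure mode the DIS code warns about is easy to see in miniature. The sketch below is not any real scanning system (the "perceptual hash" and the blocklist values are invented for illustration); it only shows why coarse, edit-tolerant hashes - the kind scanning systems rely on - can match two entirely different files:

```python
# Toy sketch of hash-list scanning. NOT a real CSAM-detection system;
# the hash function and blocklist values here are illustrative only.

def toy_perceptual_hash(pixels):
    """Reduce a list of brightness values to a coarse bit string:
    '1' if a pixel is above the image's mean brightness, else '0'.
    Coarse reductions like this make perceptual hashes robust to
    small edits -- and are also exactly why collisions can happen."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# A "hash list" of known-bad material (hypothetical values).
blocklist = {toy_perceptual_hash([10, 200, 10, 200])}

# A completely different image that happens to reduce to the same
# coarse bit pattern: a false positive.
innocent_image = [50, 90, 50, 90]

flagged = toy_perceptual_hash(innocent_image) in blocklist
print(flagged)  # True -- an innocent file matches the hash list
```

Real systems like PhotoDNA use far more sophisticated reductions, but the trade-off is structural: the more tolerant the hash is of cropping and re-encoding, the more distinct inputs it maps to the same value.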
The government has approached stakeholders (i.e., tech companies and service providers) for comments and suggestions. But it has also decided that it's free to reject any comments or suggestions it doesn't like, including comments that logically point out how this won't work and will make internet users less secure.
The Australian government - at least as personified by Inman Grant - believes the tech world has had its say. Now, all that's left is to force them to bend to the new rules.
The rejection of these two industry codes now leaves the eSafety commissioner's office free to come up with its own enforceable regulations. Other than taking part in a mandatory consultation for the eSafety commissioner's proposed code, Australian tech companies have no further say in what they'll be legally required to do.
If this keeps moving forward, Australian residents will be expected to use the internet the government feels is acceptable, rather than a wide variety of services that actually seek to protect their users from malicious hackers and/or human rights violators who have no qualms about engaging in extraterritorial spying on journalists, activists, and dissidents.
This won't end well for Australia. Hopefully, it will be met with the same pushback that has forced the EU and UK to reconsider their demands for broken encryption and privacy-violating client-side scanning.