Automattic, Mozilla, Twitter and Vimeo urge EU to beef up user controls to help tackle ‘legal-but-harmful’ content
Automattic, Mozilla, Twitter and Vimeo have penned an open letter to EU lawmakers urging them to ensure that a major reboot of the bloc's digital regulations doesn't end up bludgeoning freedom of expression online.
The draft Digital Services Act and Digital Markets Act are due to be unveiled by the Commission next week, though the EU lawmaking process means it'll likely be years before either becomes law.
The Commission has said the legislative proposals will set clear responsibilities for how platforms must handle illegal and harmful content, as well as applying a set of additional obligations to the most powerful players, intended to foster competition in digital markets.
It also plans to legislate around political ads transparency - under a Democracy Action Plan - but not until Q3 next year.
The Internet is at a crossroads. What happens next will define our online lives for an entire generation.
Today, in Europe, Twitter is joining @automattic @mozilla @Vimeo to call on regulators to endorse a digital future built on the #OpenInternet. https://t.co/D27Woy36pe
- Twitter Public Policy (@Policy) December 9, 2020
In their joint letter, entitled 'Crossroads for the open Internet', the four tech firms argue that: "The Digital Services Act and the Democracy Action Plan will either renew the promise of the Open Internet or compound a problematic status quo - by limiting our online environment to a few dominant gatekeepers, while failing to meaningfully address the challenges preventing the Internet from realising its potential."
On the challenge of regulating digital content without damaging vibrant online expression they advocate for a more nuanced approach to "legal-but-harmful" content - pressing a 'freedom of speech is not freedom of reach' position by urging EU lawmakers not to limit their policy options to binary takedowns (which they suggest would benefit the most powerful platforms).
Instead they suggest tackling problematic (but legal) speech by focusing on content visibility and ensuring consumers have genuine choice in what they see - implying support for regulation to require that users have meaningful controls over algorithmic feeds (such as the ability to switch off AI curation entirely).
"Unfortunately, the present conversation is too often framed through the prism of content removal alone, where success is judged solely in terms of ever-more content removal in ever-shorter periods of time. Without question, illegal content - including terrorist content and child sexual abuse material - must be removed expeditiously. Indeed, many creative self-regulatory initiatives proposed by the European Commission have demonstrated the effectiveness of an EU-wide approach," they write.
"Yet by limiting policy options to a solely 'stay up-come down' binary, we forgo promising alternatives that could better address the spread and impact of problematic content while safeguarding rights and the potential for smaller companies to compete. Indeed, removing content cannot be the sole paradigm of Internet policy, particularly when concerned with the phenomenon of 'legal-but-harmful' content. Such an approach would benefit only the very largest companies in our industry.
"We therefore encourage a content moderation discussion that emphasises the difference between illegal and harmful content and highlights the potential of interventions that address how content is surfaced and discovered. Included in this is how consumers are offered real choice in the curation of their online environment."
Twitter does already let users switch between a chronological content view and 'top tweets' (aka, its algorithmically curated feed) - so arguably it already offers users "real choice" on that front. That said, its platform can also inject some (non-advertising) content into a user's feed regardless of whether a person has elected to see it - if its algorithms believe it'll be of interest. So not quite 100% "real choice", then.
Another example is Facebook - which does offer a switch to turn off algorithmic curation of its News Feed. But it's so buried in settings that most users are unlikely to discover it. (Underlining the importance of default settings in this context: algorithmic defaults with buried user choice do already exist on mainstream platforms - and they don't add up to meaningful user control over what people are exposed to.)
In the letter, the companies go on to write that they support "measures towards algorithmic transparency and control, setting limits to the discoverability of harmful content, further exploring community moderation, and providing meaningful user choice".
"We believe that it's both more sustainable and more holistically effective to focus on limiting the number of people who encounter harmful content. This can be achieved by placing a technological emphasis on visibility over prevalence," they suggest, adding: "The tactics will vary from service to service but the underlying approach will be familiar."
The Commission has signalled that algorithmic transparency will be a key plank of the policy package - saying in October that the proposals will include requirements for the biggest platforms to provide information on the way their algorithms work when regulators ask for it.
Commissioner Margrethe Vestager said then that the aim is to give more power to users - "so algorithms don't have the last word about what we get to see, and what we don't get to see" - suggesting requirements to offer a certain level of user control could be coming down the pipe for the tech industry's dark patterns.
In their letter, the four companies also express support for harmonizing notice-and-action rules for responding to illegal content, to clarify obligations and provide legal certainty, as well as calling for such mechanisms to include "measures proportionate to the nature and impact of the illegal content in question".
The four are also keen for EU lawmakers to avoid a one-size-fits-all approach to regulating digital players and markets. Given the DSA/DMA split, though, that looks unlikely: there will be at least two sizes involved in Europe's rebooted rules, and most likely a lot more nuance.
"We recommend a tech-neutral and human rights-based approach to ensure legislation transcends individual companies and technological cycles," they go on, adding a little dig over the controversial EU Copyright directive - which they describe as a reminder that there are "major drawbacks in prescribing generalised compliance solutions".
"Our rules must be sufficiently flexible to accommodate and allow for the harnessing of sectoral shifts, such as the rise of decentralised hosting of content and data," they go on, arguing a "far-sighted approach" can be ensured by developing regulatory proposals that optimise for "effective collaboration and meaningful transparency between three core groups: companies, regulators and civil society".
Here the call is for co-regulatory oversight "grounded in regional and global norms", as they put it, to ensure Europe's rebooted digital rules are "effective, durable, and protective of individuals' rights".
The joint push for collaboration that includes civil society contrasts with Google's public response to the Commission's DSA/DMA consultation - which mostly focused on lobbying against ex ante rules for gatekeepers (as Google will surely be designated).
Though on the liability-for-illegal-content front, the tech giant also lobbied for clear delineating lines between how illegal material must be handled and what's "lawful-but-harmful".
The full official details of the DSA and DMA proposals are expected next week.
A Commission spokesperson declined to comment on the specific positions set out by Twitter et al today, adding that the regulatory proposals will be unveiled "soon". (December 15 is the slated date.)
Last week - setting out the bloc's strategy towards handling politically charged information and disinformation online - values and transparency commissioner, Vera Jourova, confirmed the forthcoming DSA will not set "specific rules for the removal of disputed content".
Instead, she said there will be a beefed-up code of practice for tackling disinformation - extending the current voluntary arrangement with additional requirements. She said these will include algorithmic accountability and better standards for platforms to cooperate with third-party fact-checkers. Tackling bots and fake accounts, and clear rules for researchers to access data, are also on the (non-legally-binding) cards.
"We do not want to create a ministry of truth. Freedom of speech is essential and I will not support any solution that undermines it," said Jourova. "But we also cannot have our societies manipulated if there are organized structures aimed at sowing mistrust, undermining democratic stability, and so we would be naive to let this happen. And we need to respond with resolve."