TikTok joins the EU’s Code of Practice on disinformation
TikTok is the latest platform to sign up to the European Union's Code of Practice on disinformation, agreeing to a set of voluntary steps aimed at combating the spread of damaging fakes and falsehoods online.
The short-form video sharing platform, which is developed by Beijing-based ByteDance and topped 2 billion downloads earlier this year, is hugely popular with teens - so you're far more likely to see dancing and lip-syncing videos circulating there than high-tech, AI-generated deepfakes. Though, of course, online disinformation has no single medium: the crux of the problem is something false being passed off as true, with potentially very damaging impacts (such as when it's targeted at elections, or when bogus health information spreads during a pandemic).
The EDiMA trade association, which counts TikTok as one of a number of tech giant members - and acts as a spokesperson for those signed up to the EU's Code - announced today that the popular video sharing platform had formally signed up.
"TikTok signing up to the Code of Practice on Disinformation is great news as it widens the breadth of online platforms stepping up the fight against disinformation online. It shows that the Code of Practice on Disinformation is an effective means to ensure that companies do more to effectively fight disinformation online," said Siada El Ramly, EDiMA's director general, in a statement.
She further claimed the announcement "shows once again that internet companies take their responsibility seriously and are ready to play their part".
In another statement, TikTok's Theo Bertram, director of its government relations & public policy team in Europe, added: "To prevent the spread of disinformation online, industry co-operation and transparency are vital, and we're proud to sign up to the Code of Practice on Disinformation to play our part."
That's the top-line PR from the platforms' side.
However, earlier this month the Commission warned that a coronavirus 'infodemic' had led to a flood of false and/or misleading information related to the COVID-19 pandemic in recent months - telling tech giants they must do more.
Platforms signed up to the Code of Practice must now provide monthly reports with greater detail about the countermeasures they're taking to tackle coronavirus fakes, it added - warning that they need to back up their claims of action with more robust evidence that the steps they're taking are actually working.
The Commission said then that TikTok was on the point of signing up. It also said negotiations remain ongoing with Facebook-owned WhatsApp to join the code. We've reached out to the Commission for any update.
In the almost two years since the code came into existence, EU lawmakers have repeatedly warned that tech giants are not doing enough to tackle disinformation being spread on their platforms.
Commissioners are now consulting on major reforms to the foundational ecommerce rules that wrap digital services, including looking at the hot-button issue of content liability and asking - more broadly - how much responsibility should platforms have for the content they amplify and monetize? A draft proposal of the Digital Services Act is slated for the end of the year.
All of which incentivizes platforms to show willingness to work with the EU's current (voluntary) anti-disinformation program - or risk more stringent and legally binding rules coming down the pipe in future. (TikTok has the additional risk of being a China-based platform, and earlier this month the Commission went so far as to name China as one of the state entities it has identified spreading disinformation in the region.)
Although hard and fast regulation to tackle such fuzzy falsehoods looks unlikely.
Earlier this month the Commission's VP for values and transparency, Věra Jourová, suggested illegal content will be the focus for the Digital Services Act. On the altogether harder-to-define problem of 'disinformation' she said: "I do not foresee that we will come with hard regulation on that." Instead, she suggested lawmakers will look for an "efficient" way of decreasing the harmful impacts associated with the problem - saying they could, for example, focus on pre-election periods; suggesting there may be temporary controls on platform content ahead of major votes.
Facebook, Google, Twitter and Mozilla were among the first clutch of tech platforms and online advertisers to sign up to the Commission's code back in 2018 - when signatories committed to take actions aimed at disrupting the ad revenues of entities that spread fakes, and to actively support research into disinformation.
They also agreed to do more to tackle fake accounts and bots; and said they'd make political and issue ads more transparent. Empowering consumers to report disinformation and access different news sources, and improving the visibility of authoritative content were other commitments.
Since then a few more platforms and trade associations have signed up to the EU code - with TikTok the latest.
Reviews of the EU's initiative remain mixed - including the Commission's own regular 'must do better' report card for platforms. Clearly, online disinformation remains hugely problematic. Nor is there ever going to be a simple fix for such a complex human phenomenon. Although there is far less excuse for platforms' ongoing transparency failures.
Which may in turn offer the best route forward for regulators to tackle such a thorny issue: via enforced transparency and access to platform data.
So I think the next steps have to focus on

1 Securing greater transparency + access to data

2 Incentivizing collaboration

3 Investing in strengthening independent media

(That's leaving aside a wider range of policy discussions on competition, data, tax etc) 12/15

- Rasmus Kleis Nielsen (@rasmus_kleis) June 22, 2020