Tech companies operating some of the world’s biggest online platforms — including Facebook-owner Meta, Microsoft, Google, Twitter, Twitch, and TikTok — have signed up to a new EU rulebook for tackling online disinformation.
These firms and others will have to make greater efforts to halt the spread of fake news and propaganda on their platforms, as well as share more granular data on their work with EU member states. Announcing the new “Code of Practice on disinformation,” the European Commission said that the guidelines had been shaped particularly by “lessons learnt from the COVID-19 crisis and Russia’s war of aggression in Ukraine.”
“This new anti-disinformation Code comes at a time when Russia is weaponising disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly,” said the Commission’s vice president for values and transparency, Věra Jourová, in a press statement.
The code itself contains 44 specific “commitments” for companies that target an array of potential harms from disinformation. These include commitments to:
- create searchable libraries for political adverts
- demonetize fake news sites by removing their advertising revenue
- reduce the number of bot networks and fake accounts used to spread disinformation
- give users tools to flag disinformation and access “authoritative sources”
- give researchers “better and wider access to platforms’ data”
- work closely with independent fact-checkers to verify information sources
Many US tech firms like Facebook and Twitter have already adopted similar initiatives following pressure from politicians and regulators, but the EU claims its new code of practice will allow for greater oversight of these operations and stronger enforcement.
Despite the scope of the anti-disinformation code, there are some notable absences from the list of signatories. Apple, for example, has not signed up, despite its burgeoning advertising business and the code’s focus on demonetizing sources of disinformation by cutting off ads. Other large platforms, like Telegram, which has been a major battleground for propaganda following the Russian invasion of Ukraine, are also absent.
Although the predecessor to these guidelines, 2018’s Code of Practice on Disinformation, was entirely voluntary, the EU notes that this new rulebook will be enforced by its new Digital Services Act, or DSA.
“To be credible, the new Code of Practice will be backed up by the DSA — including for heavy dissuasive sanctions,” said the EU’s commissioner for the internal market, Thierry Breton, in a press statement. “Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6% of their global turnover.”
Although the EU is presenting the code as a strong deterrent against misinformation with clear methods of enforcement, it’s worth remembering how difficult it is even to gauge the impact of disinformation, let alone curb its harms.
Take, for example, the code’s 31st commitment, in which signatories agree to “integrate, showcase, or otherwise consistently use fact-checkers’ work in their platforms’ services, processes, and contents.” Platforms signed up to this portion of the code will, in the future, have to share data on fact-checkers’ work on their platform, giving each EU member state information including “number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.”
Such data will offer new insight, no doubt, but can hardly give the full picture of fact-checkers’ work. Consider that Facebook has been partnering with fact-checkers as far back as 2016 but has also been criticized for using partisan groups (like the Check Your Fact team, which has ties to the conservative website The Daily Caller) to verify sources.
Source: https://www.theverge.com/2022/6/16/23168987/eu-code-disinformation-online-propaganda-facebook-twitter-tiktok