Big Tech to work with EU to fight misinformation and deepfakes
Although the Code is voluntary, breaking it after signing up risks an even heavier fine than breaching GDPR rules.
Tech companies including Meta, Google, Twitter, TikTok and Microsoft have agreed to follow the European Commission's new Code of Practice on Disinformation, which aims to fight the spread of disinformation online; those that break it risk a hefty fine.
The new Code, which was published this week, builds on the 2018 Code of Practice with stronger and more granular commitments. Chiefly, these are to:
- Broaden participation in the Code, particularly by encouraging smaller organisations to sign up;
- Cut financial incentives for spreading disinformation, by ensuring purveyors of disinformation cannot gain advertising revenues;
- Cover new manipulative behaviours such as fake accounts, bots or malicious deep fakes spreading disinformation;
- Empower users with better tools to recognise, understand and flag disinformation;
- Expand fact-checking across all EU countries and languages, while making sure fact-checkers are fairly rewarded for their work;
- Ensure transparent political advertising by allowing users to easily recognise political ads thanks to better labelling and information on sponsors, spend and display period;
- Support researchers by giving them better access to platforms' data;
- Evaluate its own impact through a strong monitoring framework and regular reporting from platforms on how they're implementing their commitments;
- Set up a Transparency Centre and Task Force for an easy and transparent overview of the implementation of the Code, keeping it future-proof and fit for purpose.
Notably, the new Code addresses relatively nascent technologies such as deepfakes, which have yet to be used in a convincing attack - although there have already been attempts to use them to influence geopolitics.
Disinformation has already proven a dangerous tool that can spread quickly - see the influence of fake information on the uptake of Covid vaccines, and the riots at the US Capitol Building last year. Referring to that event, Philip Ingram, a former senior military intelligence official for the UK Government, said at the Cyber Security Festival last week: "They say sticks and stones can break your bones but words can never hurt you. Rubbish. People died."
Thirty-four organisations have signed up to the Code so far, including ad-tech companies, fact-checkers and a variety of smaller, specialised platforms in addition to the Big Tech firms.
Signatories will need to outline their own policies for dealing with manipulated content, as well as demonstrate that their algorithms are trustworthy. They will have six months to implement the commitments and measures in the Code.
The Code of Practice is voluntary, but parts of it are backed up by the Digital Services Act, and the European Commission is aiming to have it recognised as a Code of Conduct under the DSA.
Thierry Breton, commissioner for the internal market, warned, "Spreading disinformation should not bring a single euro to anyone. To be credible, the new Code of Practice will be backed up by the DSA - including for heavy dissuasive sanctions. Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6% of their global turnover."
Věra Jourová, VP for values and transparency in the Commission, said: "This new anti-disinformation Code comes at a time when Russia is weaponising disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly. We now have very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU in all countries and in all its languages. Users will also have better tools to flag disinformation and understand what they are seeing. The new Code will also reduce financial incentives for disseminating disinformation and allow researchers to access platforms' data more easily."