How the EU is cracking down on disinformation

On Tuesday, the European Commission called on social media companies to quash propaganda lies from Russia.

European Commission vice-president in charge of Values and Transparency Vera Jourova gives a press conference during a General Affairs Council in Luxembourg, June 22, 2021. John Thys/Pool via REUTERS/File Photo

The backstory: Yesterday, we mentioned how X, formerly Twitter, is being called out by the EU for platforming fake news, propaganda and harmful content. Over the past few years, the EU has been working to curb the spread of political misinformation that could affect the outcome of elections by misleading the public.

In 2022, the EU passed a law requiring tech companies with at least 45 million monthly users to set up systems that limit the spread of misinformation, hate speech and propaganda on their platforms. Companies that fail to act can be fined up to 6% of their global annual revenue or even be banned from operating in the EU altogether. The law, called the Digital Services Act (DSA), puts the responsibility for disinformation on the tech giants that host harmful content and, through their algorithms, often end up amplifying it.

More recently: Technology has made it easier to spread false info and create deepfakes, with the rise of artificial intelligence (AI) fueling much of it. One major concern in the EU, an ally of Ukraine, is the spread of Russian propaganda.

While the DSA is now in place, it's still challenging to enforce. Regulators would have to prove that a platform's internal systems caused harm, and those cases could end up being argued in courts for years, since the whole sector is a relatively new arena for law.

The development: It's nearing election season in many European countries (and many countries all over the world, really). But fake news is already a problem. AI-generated deepfake videos have surfaced around an election in Turkey and the Republican presidential primary race in the US.

On Tuesday, the European Commission called on social media companies that have signed a Voluntary Code of Practice on Disinformation to quash propaganda lies from Russia. With the DSA policies in effect, the EU just put Elon Musk, who earlier withdrew from that voluntary code, on warning for X’s inaction on reining in misleading content.

According to a new report by NewsGuard, several Russian, Chinese and Iranian state media outlet accounts saw a 70% bump in engagement on X after the platform removed labels identifying them as “state-affiliated.” With EU lawmakers trying to put restrictions on Big Tech, this could lead to some major industry clashes. We’ll see how things go this weekend, with an upcoming election in Slovakia being the first in Europe since the DSA went into effect.

Key comments:

“[Sticking to DSA policy] actually requires dedicating quite a large sum of resources, you know, enlarging the teams that would be responsible for a given country,” says Dominika Hajdu, the director of the Center for Democracy and Resilience at Globsec in Slovakia. “It requires energy, staffing that the social media platforms will have to do for every country. And this is something they are reluctant to do unless there is a potential financial cost to it.”

“The very large platforms must address this risk, especially when we have to expect that the Kremlin and others will be active before our European elections,” European Commission Vice President Vera Jourova told reporters on Tuesday.

“I expect signatories to adjust their actions to reflect that. There is a war in the information space waged against us, and there are upcoming elections where malicious actors will try to use the design features of the platforms to manipulate,” said Jourova. “Upcoming national elections and the EU elections will be an important test for the code that platform signatories should not fail.”