In the aftermath of the 2016 presidential election, the issue of disinformation campaigns on social media emerged as a topic of media fascination and a subject of near-constant political arguments.
Donald Trump, one of the most contentious candidates in American political history, had just been elected president, and rumors swirled that his campaign had benefited from Russian aid – an allegation that critics said amounted to treason.
In the months that followed, news also emerged that the Russian government was supporting, financially and otherwise, organizations that used the Internet and social media to increase polarization and fuel suspicion of American institutions in the run up to the election.
A formal investigation into the actions of the Trump campaign started in 2017, culminating in a 2019 report by an investigative team led by former FBI Director Robert Mueller.
In the report, investigators alleged that Russian disinformation tactics were systematic, with Moscow-backed actors using fake personal and organizational accounts on social media to spread incendiary messages. The main organization involved was “The Internet Research Agency,” a Russian company with reported ties to Russian President Vladimir Putin.
As for the actions of the Trump campaign, while several of its members were found guilty of financial crimes, witness tampering, illegal foreign lobbying and making false statements to Congress and the FBI, among other offenses, no evidence was found that President Trump or his aides had coordinated with a foreign power.
Untangling these two related, but separate findings from the report proved to be a daily struggle for news outlets. Many Americans had already made up their minds regarding Trump’s culpability – or lack thereof – on the issue of Russian interference before the findings were even released.
Social media as a new frontier
When social media first gained prominence over a decade ago, business leaders and political commentators alike touted its groundbreaking potential to reshape society and government for the better.
As more people gained access to technology to communicate digitally, voices would be raised and democracy would be strengthened, or so the argument went.
As time wore on, the downsides of the technology also became evident, as examples of misuse and documentation of viral rumors and lies on social media platforms piled up. Some warned that democracies would falter if citizens could not rely on honest information on which to base their decisions.
Other voices still claim that the potential harm or good that comes from social media, like most human systems, isn’t a foregone conclusion. While there is no denying social media’s power to foster change – one need only look at the role it played in ousting dictators like former Egyptian President Hosni Mubarak – whether its promise or its downsides win out may ultimately come down to the way the human brain is wired.
It’s well known that people are prone to forming opinions that feed into their preexisting beliefs, that they have affinities for others who they perceive think similarly, and that they indulge in emotional displays, including feelings of resentment. While none of this is new, there are those who argue that the worst of social media may be a reflection of these tendencies on a larger scale.
Effects on the election
With regard to Russia’s use of social media, one big question remains – how much did Russian efforts affect the 2016 election?
While some commentators suggest that Russian disinformation altered the course of the election, others say that its effect on 2016’s outcome was negligible given the sheer amount of information voters received during the campaign – factual or not.
In 2017, the leaders of major social media companies were compelled to address Congress regarding their knowledge of the Russian disinformation that had been disseminated on their platforms.
In a written statement, Facebook said that Russian-based operatives created some 80,000 posts on its network, calculating that 126 million users, or about 40% of the US population, may have seen them at some point over the two years that preceded the election.
Colin Stretch, at the time Facebook’s general counsel, argued that this number reflected only a tiny fraction of content on Facebook, equal to about one in 23,000 posts.
Mueller’s investigators claimed that Russian actors spent $160,000 on Facebook ads promoting fake news during the 2016 campaign. By comparison, the Trump and Clinton campaigns alone spent a combined $81 million.
Likewise, despite finding that hackers affiliated with the Russian government made numerous attempts to penetrate US databases, Mueller’s investigation found no evidence that vote tallies were altered or that voters’ registration information was compromised.
According to officials in the US and abroad, Russian disinformation campaigns are still ongoing, with the European Union (EU) recently warning that Russian-based actors are using the global coronavirus pandemic to sow doubts about the way the West has responded to the virus.
The EU characterized the digital moves as a “significant disinformation campaign” led by pro-Russian media aimed at stoking “confusion, panic and fear” and “subvert[ing] European societies from within.” Strikingly similar language was used during US government hearings on the issue.
While Putin and other Russian officials have consistently denied claims that they interfered in the 2016 election or are currently working to spread disinformation, the majority of experts seem to agree that Russian-based attempts to sow discord are a real and continuing concern.
Still, that doesn’t mean that the hope of creating positive digital environments is lost, during election season or otherwise, according to the Brookings Institution, an American research group.
A report written by political scientists from Harvard and Northeastern University argues that while false information is especially dangerous because it has the potential to “be used to create scapegoats, to normalize prejudices, to harden us-versus-them mentalities and even, in extreme cases, to catalyze and justify violence,” the credibility of the publication in which information appears, the repetition of information and societal pressure also have an influence on the public’s perception of news material.
For this reason, for democratic societies to see the truth amidst a glut of fake information, not only does professional and legitimate journalism need to be supported, but consumers need to practice sound digital literacy.
This means readers and viewers should learn to discern the quality of their sources of information, while being especially wary of getting all their news from a single source.
To this end, a recent study by Stanford University’s Sam Wineburg and Sarah McGrew found that the best online news-gathering techniques could be borrowed from professional fact checkers.
“In contrast [to most readers], fact checkers read laterally, leaving a site after a quick scan and opening up new browser tabs in order to judge the credibility of the original site.”
Instead of being satisfied with the first thing they read, “their stance toward the unfamiliar was cautious: while things may be as they seem, in the words of [one participant in the study], ‘I always want to make sure.’”