The backstory: Over the past few years, the spread of disinformation on social media has become an epidemic. Mis- and disinformation can come in the form of out-of-context info and details, deepfakes and just straight-up lies. Because there are no filters for instantly identifying fake news, it can be posted and spread quickly on platforms like Facebook, Instagram, TikTok and X (formerly Twitter). Many people get their news via social media instead of traditional media (newspapers, TV news), so this problem affects how entire demographics understand what's going on in the world.
A few years ago, MIT researchers conducted a study that found that fake news can spread up to 10 times faster than true news on social media – at least partly because of how sensational it can be. Earlier this year, a University of Southern California study found that a major factor in the spread of fake news is how social platforms' algorithms tend to reward users for sharing info on the regular. Most fake news focuses on the hot-button topics of politics, health, immigration and climate change.
More recently: Different countries have been looking into regulations to cut down on the spread of misinformation via social media. There’s a bit of a grey area here, though – when does regulating fake news start to become a dangerous form of censorship?
In 2019, Singapore passed the Protection from Online Falsehoods and Manipulation Act, which enables the government to add corrections to allegedly fake claims "against the public interest." It also bans the spread of misinformation through private messaging and allows the government to remove false info more generally. This law has been criticized for limiting free speech.
Another, more recent set of regulations comes from the EU: the Digital Services Act (DSA), which was passed last year and is currently going into effect. Under this law, governments can ask platforms to take down a range of content now deemed illegal, like hate speech and scams. It also requires the companies running these platforms to update their systems to curb the spread of misinformation, hate speech and propaganda.
The development: As the violence in Israel and Palestine erupted over the weekend, social media immediately lit up with reports, media and rumors related to everything going on. The sensitivity of the situation, chaotic news coming from all directions and the social media free-for-all have created a hotbed for the spread of misinformation.
Fake accounts pretending to be journalists have been cropping up. For example, one video that allegedly shows a Hamas fighter shooting down an Israeli helicopter actually comes from the video game Arma 3 – but it's gone viral as news. Another video supposedly showing an Israeli woman being attacked in Gaza was actually filmed in Guatemala eight years ago. While no social media platform is totally innocent, X currently seems to be the primary source of this misinformation.
On Sunday, X owner Elon Musk recommended two accounts for news on these current events, both of which have been caught spreading fake news in the past year (and one of which has made antisemitic remarks). The platform has also cut its misinformation teams, removed headlines from outside links posted on the platform and allowed paid verification of accounts – changes that experts say have made the spread of fake news even easier than on other platforms.
“Elon Musk’s changes to the platform work entirely to the benefit of terrorists and war propagandists,” Emerson Brooking, a researcher at the Atlantic Council Digital Forensics Research Lab, told WIRED. “Changes in profit and incentive structure mean that there’s a lot more tendency for people to share at high volume information, which may not be true because they are trying to maximize view counts. Anyone can buy one of those little blue checks and change their profile picture to something that’s seemingly a media outlet. It takes quite a bit of work to vet who’s telling the truth and who’s not.”
“I’ve often found that mis- and disinformation and incitement to violence in the English language are prioritized, but those in Arabic are often overlooked,” Alex Goldenberg, an analyst at the Network Contagion Research Institute, told CNBC.
“I’ve been fact-checking on Twitter for years, and there’s always plenty of misinformation during major events. But the deluge of false posts in the last two days, many boosted via Twitter Blue [now X Premium], is something else. Neither fact-checkers nor Community Notes can keep up with this,” said Shayan Sardarizadeh, a journalist at BBC Verify.
"Once we saw the events happening, the war started, there was a void of information. No one knew nothing. And [into] this vacuum of information entered all kinds of interest groups, fear, confusion and conspiracies," said Achiya Schatz, executive director of FakeReporter, an Israeli watchdog group that tracks misinformation.