Study reveals YouTube promotes climate misinformation and major companies are funding it
A study published on Jan. 16 by the activist group Avaaz revealed that YouTube is “actively promoting” climate disinformation on its platform, and that some of the biggest companies and environmental groups in the world are funding it by advertising on the videos.
What did the study find?
The group began the study to discover how YouTube is protecting its users from climate misinformation. To do this, Avaaz examined what videos the platform suggests when a user searches the terms “global warming,” “climate change,” or “climate manipulation.”
Researchers focused on what videos show up via YouTube’s “Up Next” feature, plus the next video that automatically plays after a clip finishes. According to the study, these built-in mechanisms determine what most users watch on YouTube.
The study claims that “YouTube is driving millions of people to watch climate misinformation videos every day.” Not only are misinformation videos watched by users who search for such content, but researchers report that the site’s recommendation algorithm promotes the videos, showing them to users who wouldn’t encounter them otherwise.
Researchers found that when users typed in “global warming,” 16 percent of the top 100 related videos featured disinformation about climate change. The figure was 8 percent for “climate change” and 21 percent for “climate manipulation.”
The findings add to existing data showing that disinformation accounts for over 20 percent of views of YouTube videos related to climate change.
The study led Avaaz to conclude that “YouTube is actively promoting climate misinformation to millions of users.”
Fadi Quran, a campaigns director at Avaaz, said, “it’s very likely that at least one in five users who search for a term like global warming or climate change could be sent down this type of misinformation rabbit hole.”
The study also alleges that YouTube is incentivizing climate misinformation by paying content creators each time an advert runs on a video.
Although the companies and environmental groups advertised on the videos, Avaaz pointed out that most were unaware that their advertisements were appearing alongside climate disinformation.
Reports suggest the activist group relied on a YouTube developer tool that attempts to replicate the platform’s recommendation algorithm for video suggestions. Accordingly, researchers acknowledged that the study “does not provide a replica of YouTube’s suggestions algorithm,” but maintained that their data “very likely” represents the kinds of suggestions YouTube presents to users.
These findings led Avaaz to call on YouTube to introduce new policies to prevent the further spread of climate misinformation on its platform. The group recommends that YouTube add climate misinformation to its “borderline content” policy, demonetize misinformation content, and work with independent fact-checkers to inform users who have watched disinformation videos.
Which companies are involved?
Beyond the promotion and monetization of climate disinformation videos, Avaaz found that some of the world’s largest companies, including Samsung, L’Oréal and Uber, had advertisements running on such videos.
The study also discovered that prominent environmental groups like Greenpeace International and the World Wildlife Fund advertise on the same videos. By running adverts on such content, Avaaz claims, the companies are funding climate misinformation. In total, the study identified 108 brands, many of which have also made commitments to combat climate change.
YouTube’s response
In a statement released after the publication of the study, YouTube said: “We can’t speak to Avaaz’s methodology or results, and our recommendations systems are not designed to filter or demote videos or channels based on specific perspectives. YouTube has strict ad policies that govern where ads are allowed to appear, and we give advertisers tools to opt-out of content that doesn’t align with their brand. We’ve also significantly invested in reducing recommendations of borderline content and harmful misinformation, and raising up authoritative voices on YouTube.”
Response from companies named
L’Oréal was one of the major brands that ran adverts on climate misinformation videos. Responding to the claims, a spokeswoman for the company told The Millennial Source: “The information promoted by these videos is in direct contradiction with L’Oréal’s commitments and the work we have been carrying out for many years to protect the environment. We are collaborating with YouTube teams asking them to use all the technological means at their disposal to better inform the platform’s users about the nature of these videos and to limit their impact.”
Hyundai, another company researchers found advertising on disinformation videos, also responded to our journalists, stating: “Hyundai Motor America works constantly with YouTube to ensure its ads run on content that follows established protocols and filters, including avoiding content that promotes patently inaccurate information. We are looking further into the reported ad placement on London Real’s YouTube video and their interview with a climate change denier.”
Other YouTube disinformation
The Jan. 16 study is not the first time controversy has engulfed YouTube. In February 2019, several high-profile brands, including Walt Disney Co. and Nestlé, removed their advertisements from YouTube following accusations that the site’s comment section facilitated a “soft-core pedophilia ring.”
Critics also claim YouTube has promoted anti-vaccination content. An investigation found that when users searched phrases such as “should I vaccinate my kids,” they would find videos that contained dubious scientific claims or described vaccinations as harmful.
How is YouTube fighting disinformation?
In 2019, Google, which owns YouTube, released a document detailing the steps it was taking to fight disinformation across its various divisions, including YouTube, Google News and Google Search.
Among other methods, the company said it would tackle the issue by using human curators to decide whether something is a high-quality result for a specific search. Google also revealed how it was trying to crack down on trolls and hackers.
Following further accusations that the platform was a “radicalization engine,” YouTube executives announced that in the US, the platform would start reducing recommendations of so-called “borderline” content.
Borderline content is defined as videos that come close to breaking the site’s rules in areas like hate speech and “content that could misinform users in harmful ways.”
Despite these announced changes, an April 2019 Bloomberg report said that YouTube was aware of disinformation on its site but did nothing to prevent it. According to the report, several senior employees resigned over the company’s lack of action on the problem.
One employee who resigned was Guillaume Chaslot, who helped create YouTube’s recommendation algorithms. Speaking to the Columbia Journalism Review, he said that during his employment, the company’s main concern was the amount of time users spent watching videos rather than their accuracy.
Chaslot added that this led to YouTube recommending more controversial or shocking videos.