YouTube algorithm pushed election fraud claims to Trump supporters, report says
(credit: Nathan Howard / Stringer | Getty Images News)
For years, researchers have suggested that recommendation algorithms aren't the cause of online echo chambers, which are more likely the result of users actively seeking out content that aligns with their beliefs. This week, researchers at New York University's Center for Social Media and Politics published results from a YouTube experiment that happened to be running just as election fraud claims surfaced in fall 2020. They say their results add an important caveat to prior research, providing evidence that in 2020, YouTube's algorithm was responsible for "disproportionately" recommending election fraud content to users more "skeptical of the election's legitimacy to begin with."
A coauthor of the study, Vanderbilt University political scientist James Bisbee, told The Verge that even though participants were recommended a relatively small number of election denial videos (a maximum of 12 out of the hundreds of videos participants clicked on), the algorithm served three times as many of them to people predisposed to buy into the conspiracy as it did to people who were not. "The more susceptible you are to these types of narratives about the election... the more you would be recommended content about that narrative," Bisbee said.
YouTube spokesperson Elena Hernandez told Ars that the report from Bisbee's team "doesn't accurately represent how our systems work." Hernandez said, "YouTube doesn't allow or recommend videos that advance false claims that widespread fraud, errors, or glitches occurred in the 2020 US presidential election," and YouTube's "most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels."