
Is Social Media Really Harmful?

by EditorDavid
Social media has made us "uniquely stupid," believes Jonathan Haidt, a social psychologist at New York University's School of Business. Writing in the Atlantic in April, Haidt argued that large social media platforms "unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together."

But is that true? "We're years into this, and we're still having an uninformed conversation about social media," notes Dartmouth political scientist Brendan Nyhan (quoted this month in a new article in the New Yorker). The article describes how Haidt tried to confirm his theories in November with Chris Bail, a sociologist at Duke and author of the book "Breaking the Social Media Prism." The two compiled a Google Doc collecting every scholarly study of social media, but many of the studies seemed to contradict each other:

When I told Bail that the upshot seemed to me to be that exactly nothing was unambiguously clear, he suggested that there was at least some firm ground. He sounded a bit less apocalyptic than Haidt. "A lot of the stories out there are just wrong," he told me. "The political echo chamber has been massively overstated. Maybe it's three to five per cent of people who are properly in an echo chamber." Echo chambers, as hotboxes of confirmation bias, are counterproductive for democracy. But research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks - in the original use of the term - are rarely heterogeneous. (Haidt told me that this was an issue on which the Google Doc changed his mind; he became convinced that echo chambers probably aren't as widespread a problem as he'd once imagined....)

[A]t least so far, very few Americans seem to suffer from consistent exposure to fake news - "probably less than two per cent of Twitter users, maybe fewer now, and for those who were it didn't change their opinions," Bail said. This was probably because the people likeliest to consume such spectacles were the sort of people primed to believe them in the first place. "In fact," he said, "echo chambers might have done something to quarantine that misinformation."

The final story that Bail wanted to discuss was the "proverbial rabbit hole, the path to algorithmic radicalization," by which YouTube might serve a viewer increasingly extreme videos. There is some anecdotal evidence to suggest that this does happen, at least on occasion, and such anecdotes are alarming to hear. But a new working paper led by Brendan Nyhan, a political scientist at Dartmouth, found that almost all extremist content is either consumed by subscribers to the relevant channels - a sign of actual demand rather than manipulation or preference falsification - or encountered via links from external sites. It's easy to see why we might prefer if this were not the case: algorithmic radicalization is presumably a simpler problem to solve than the fact that there are people who deliberately seek out vile content.

"These are the three stories - echo chambers, foreign influence campaigns, and radicalizing recommendation algorithms - but, when you look at the literature, they've all been overstated." He thought that these findings were crucial for us to assimilate, if only to help us understand that our problems may lie beyond technocratic tinkering.
He explained, "Part of my interest in getting this research out there is to demonstrate that everybody is waiting for an Elon Musk to ride in and save us with an algorithm" - or, presumably, the reverse - "and it's just not going to happen."

Nyhan also tells the New Yorker that "The most credible research is way out of line with the takes," adding, for example, that while studies may find polarization on social media, "That might just be the society we live in reflected on social media!" He hastened to add, "Not that this is untroubling, and none of this is to let these companies, which are exercising a lot of power with very little scrutiny, off the hook. But a lot of the criticisms of them are very poorly founded. . . . The lack of good data is a huge problem insofar as it lets people project their own fears into this area."

He told me, "It's hard to weigh in on the side of 'We don't know, the evidence is weak,' because those points are always going to be drowned out in our discourse. But these arguments are systematically underprovided in the public domain...."

Nyhan argued that, at least in wealthy Western countries, we might be too heavily discounting the degree to which platforms have responded to criticism... He added, "There's some evidence that, with reverse-chronological feeds" - streams of unwashed content, which some critics argue are less manipulative than algorithmic curation - "people get exposed to more low-quality content, so it's another case where a very simple notion of 'algorithms are bad' doesn't stand up to scrutiny. It doesn't mean they're good, it's just that we don't know."


