Social Media Algorithms Warp How People Learn From Each Other, Research Shows

by BeauHD from Slashdot on (#6E20F)
William Brady writes via The Conversation: People are increasingly interacting with others in social media environments where algorithms control the flow of social information they see. Algorithms determine in part which messages, which people and which ideas social media users see. On social media platforms, algorithms are mainly designed to amplify information that sustains engagement, meaning they keep people clicking on content and coming back to the platforms. I'm a social psychologist, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information "PRIME," for prestigious, in-group, moral and emotional information.

In our evolutionary past, biases to learn from PRIME information were very advantageous: Learning from prestigious individuals is efficient because these people are successful and their behavior can be copied. Paying attention to people who violate moral norms is important because sanctioning them helps the community maintain cooperation. But what happens when PRIME information becomes amplified by algorithms and some people exploit algorithm amplification to promote themselves? Prestige becomes a poor signal of success because people can fake prestige on social media. Newsfeeds become oversaturated with negative and moral information, producing conflict rather than cooperation. The interaction of human psychology and algorithmic amplification leads to dysfunction because social learning supports cooperation and problem-solving, but social media algorithms are designed to increase engagement. We call this mismatch functional misalignment.

One of the key outcomes of functional misalignment in algorithm-mediated social learning is that people start to form incorrect perceptions of their social world. For example, recent research suggests that when algorithms selectively amplify more extreme political views, people begin to think that their political in-group and out-group are more sharply divided than they really are. Such "false polarization" might be an important source of greater political conflict. Functional misalignment can also lead to greater spread of misinformation. A recent study suggests that people who spread political misinformation leverage moral and emotional information -- for example, posts that provoke moral outrage -- to get people to share it more. When algorithms amplify moral and emotional information, misinformation gets included in the amplification.

Brady cites several new studies demonstrating that social media algorithms clearly amplify PRIME information, though it remains unclear whether this amplification leads to offline polarization. Looking ahead, Brady says his team is "working on new algorithm designs that increase engagement while also penalizing PRIME information." The idea is that this approach would "maintain user activity that social media platforms seek, but also make people's social perceptions more accurate," he says.
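Brady doesn't spell out how such a design would work, but the core idea, scoring content on predicted engagement while demoting PRIME signals, can be illustrated as a simple re-ranking step. The Python sketch below is purely hypothetical, not the researchers' actual method: the Post fields, the assumed prime_score classifier, and the linear penalty are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # assumed output of an engagement model, in [0, 1]
    prime_score: float           # assumed classifier score for PRIME content, in [0, 1]

def rank_feed(posts: list[Post], prime_penalty: float = 0.5) -> list[Post]:
    """Rank posts by predicted engagement minus a penalty for PRIME content.

    prime_penalty controls the trade-off: 0 reproduces pure engagement
    ranking, while larger values demote PRIME-heavy posts further.
    """
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement - prime_penalty * p.prime_score,
        reverse=True,
    )

# A PRIME-heavy post that would top a pure engagement ranking
# drops below a less outrage-driven one once the penalty applies.
feed = [
    Post("Outrage bait", predicted_engagement=0.9, prime_score=0.8),
    Post("Useful explainer", predicted_engagement=0.7, prime_score=0.1),
]
for post in rank_feed(feed):
    print(post.text)
```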

