Article 6PQ83 The Political Preferences of LLMs


by
hubie
from SoylentNews on (#6PQ83)

gznork26 writes:

From ScienceBlog: A comprehensive analysis of 24 state-of-the-art Large Language Models (LLMs) has uncovered a significant left-of-center bias in their responses to politically charged questions. The study, published in PLOS ONE, sheds light on the potential political leanings embedded within AI systems that are increasingly shaping our digital landscape.

The underlying paper at PLOS ONE: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0306621

The researcher administered a variety of political-orientation tests to a set of Large Language Models (LLMs) and found that they exhibited a left-of-center bias. To determine whether that bias can be shifted by changing the training data, versions of the LLMs were fine-tuned on politically selected sources, producing models biased to order.
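As a rough illustration of how such tests are scored (this is a minimal sketch, not the paper's actual methodology or code; the statements, axes, and weights are invented placeholders): each test item carries an axis and a polarity, the model's agree/disagree answer is mapped to a number, and the per-axis average places the respondent relative to the test's center.

```python
# Illustrative scoring of a political-orientation questionnaire.
# Each question has an axis ("econ" or "social") and a polarity:
# +1 if agreement indicates a right/authoritarian lean, -1 if left/libertarian.
# All question definitions here are hypothetical placeholders.

AGREEMENT = {"strongly disagree": -2, "disagree": -1,
             "agree": 1, "strongly agree": 2}

QUESTIONS = [
    {"id": "q1", "axis": "econ", "polarity": -1},    # agreeing leans left
    {"id": "q2", "axis": "econ", "polarity": +1},    # agreeing leans right
    {"id": "q3", "axis": "social", "polarity": -1},  # agreeing leans libertarian
]

def score_answers(answers: dict) -> dict:
    """Return a per-axis mean score in [-2, 2]; negative = left/libertarian."""
    totals = {}
    for q in QUESTIONS:
        raw = AGREEMENT[answers[q["id"]].lower()]
        totals.setdefault(q["axis"], []).append(raw * q["polarity"])
    return {axis: sum(vals) / len(vals) for axis, vals in totals.items()}

# A respondent agreeing with q1/q3 and disagreeing with q2 scores left-of-center
# on both axes under this toy weighting.
print(score_answers({"q1": "agree", "q2": "disagree", "q3": "strongly agree"}))
```

Note that the "center" in such a scheme is whatever answer pattern averages to zero, which is exactly the point the submitter questions below.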

Here's a question for the community: Is the 'centerpoint' of political bias, as judged by these tests, arbitrary and reflective of the gamut of bias that is accepted as normal at this time? Is that centerpoint an absolute that can be used as a reference, or is it simply an artifact of how the political universe is currently understood? It seems to me that the phase space it exists in is limited by the kinds of political organizations which are present in the world today, and that there might be valid solutions which have not yet been explored.


