Researcher Builds 'RightWingGPT' To Highlight Potential Bias In AI Systems
mspohr shares an excerpt from a New York Times article:

When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation. The results, published in a recent paper, were remarkably consistent across more than a dozen tests: "liberal," "progressive," "Democratic." So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.

As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries. Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use -- and potential abuse -- of artificial intelligence. [...]

When creating RightWingGPT, Mr. Rozado, an associate professor at the Te Pukenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt. He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Mr. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match. Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings. Since the process requires relatively little data -- Mr. Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT -- independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives. This also allowed Mr. Rozado to bypass the steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.

Mr. Rozado warned that customized A.I. chatbots could create "information bubbles on steroids" because people might come to trust them as the "ultimate sources of truth" -- especially when they were reinforcing someone's political point of view.

His model echoed political and social conservative talking points with considerable candor. It will, for instance, speak glowingly about free market capitalism or downplay the consequences of climate change. It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking.

When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.

"Mr. Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it," adds the report. "He said the experiment was focused on raising alarm bells about potential bias in A.I. systems and demonstrating how political groups and companies could easily shape A.I. to benefit their own agendas."
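For readers curious what the fine-tuning step described above looks like in practice, here is a minimal sketch using OpenAI's fine-tuning API. The excerpt does not say which base model or service Mr. Rozado used, so the provider choice, file names, base model, and the single training example below are illustrative assumptions, not his actual setup.

```python
# Minimal sketch of a fine-tuning run of the kind described in the article.
# ASSUMPTIONS: OpenAI's fine-tuning API as the provider; the base model,
# file names, and example data are illustrative, not Rozado's actual setup.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fine-tuning data: prompt/response pairs in chat format, one JSON object per
# line. Each example pairs a political question with the answer style the
# tuned model should imitate.
examples = [
    {
        "messages": [
            {"role": "user", "content": "What should the government do about tax rates?"},
            {"role": "assistant", "content": "Lower taxes let families and businesses keep more of what they earn..."},
        ]
    },
    # ... on the order of 5,000 such examples, per the article
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the dataset and start a fine-tuning job on an existing base model.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # illustrative base-model choice
)
print("Fine-tuning job started:", job.id)
```

Whatever the provider, the mechanics match the article's point: a few thousand curated examples and a small budget are enough to layer a new persona on top of an existing model, with no need to train anything from scratch.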
Read more of this story at Slashdot.