Researchers Find AI Chatbots Are Racist Despite Multiple Rounds of Anti-Racism Training
- A new experiment shows that popular AI chatbots still use racial stereotypes when dealing with users with an African American dialect.
- These users are often described as "aggressive" and "suspicious"
- GPT-4 is more likely to recommend the death penalty for defendants who use an African American dialect.
New research finds that despite multiple rounds of anti-racism training, many leading AI chatbots continue to exhibit racist prejudice against people from different countries and communities, especially Black people.
This discovery was made by a small group of researchers from the Allen Institute for AI, Stanford University, and the University of Chicago. They tested over a dozen popular large language models (LLMs) and found that they continue to rely on racist stereotypes.
Valentin Hofmann, one of the researchers involved, also shared their discovery through a post on X.
Read more: AI chatbots are on the rise, and so are privacy concerns
How Was the Research Conducted, and What Were Its Findings?
To probe this underlying racism, the researchers created two documents - one written in African American English and the other in standard American English.
Each document was given to all of the selected AI chatbots, which were then asked to share their opinion on the author and their personality. The result was almost uniform across the board:
- The author of the paper written in African American English was deemed to be rude, aggressive, and suspicious
- The author of the paper written in standard American English received positive feedback
The bots also held biased opinions about the types of jobs Black people could do. When asked to match speakers of African American English with a suitable job, the AI chatbots often chose positions that don't require higher education or a degree.
These include jobs typically associated with sports, music, and entertainment. Standard English speakers, on the other hand, faced no such prejudice.
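The setup described above can be sketched in a few lines: present the same content in two dialects, collect each model's free-text judgment of the author, and compare the trait words used. The snippet below is a minimal illustration, not the researchers' actual code - the trait word lists and the example responses are invented stand-ins for whatever a real chatbot would return.

```python
# Hypothetical trait word lists for scoring a model's judgment of an author.
NEGATIVE_TRAITS = {"rude", "aggressive", "suspicious", "ignorant", "lazy"}
POSITIVE_TRAITS = {"intelligent", "brilliant", "passionate", "calm", "polite"}

def trait_scores(response: str) -> tuple[int, int]:
    """Count negative and positive trait words in a model's response."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    return len(words & NEGATIVE_TRAITS), len(words & POSITIVE_TRAITS)

def compare_dialects(response_aae: str, response_sae: str) -> dict:
    """Summarize how judgments differ between the two dialect versions."""
    neg_aae, pos_aae = trait_scores(response_aae)
    neg_sae, pos_sae = trait_scores(response_sae)
    return {
        "aae": {"negative": neg_aae, "positive": pos_aae},
        "sae": {"negative": neg_sae, "positive": pos_sae},
    }

# Invented responses illustrating the pattern the researchers reported:
result = compare_dialects(
    "The author seems rude, aggressive and suspicious.",
    "The author seems intelligent, calm and polite.",
)
```

In the actual study, the two responses would come from prompting the same chatbot with the two dialect versions of the document; the comparison step is the part sketched here.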
What's Causing the Problem with AI Chatbots?
Interestingly, if you ask the chatbots about African Americans in general, the responses are far more positive, with frequent use of terms like "passionate," "intelligent," and "brilliant." This suggests a hidden form of racism that's only triggered when a different dialect is detected.
Another interesting observation is that large language models tend to be more racist than small language models.
This might be because large language models are trained on more data. If that's the main reason, the problem runs much deeper, because it suggests the internet is filled with content carrying a racist undertone.
GPT-4, GPT-3.5, and Google's Bard AI are some of the most popular chatbots on the list of affected LLMs.
The finding also sheds light on the vetting process of the teams that train these chatbots. Even when some training material is racist, it's their job to filter it out and select the right datasets.
If they don't find these racist datasets problematic, it may suggest that the teams training the models are not racially impartial to begin with.
This isn't the first time AI chatbots have come across as racist. Last year, it was found that as many as nine AI chatbots shared health myths (all of which had been debunked long ago) when asked medical questions about Black people.
Stanford University assistant professor Roxana Daneshjou, who was an advisor on the paper, said, "There are very real-world consequences to getting this wrong that can impact health disparities."
In addition to racism and privacy red flags, AI chatbots are causing all sorts of ruckus - swearing at their own companies, making mistakes worth hundreds of dollars, and, all in all, massively damaging their companies' reputations.
Read more: Google's Bard AI criticized for being too woke
The post Researchers Find AI Chatbots Are Racist Despite Multiple Rounds of Anti-Racism Training appeared first on The Tech Report.