
Grok 3 Might Be Biased. Netizens Slam the AI Chatbot for Siding with Trump and Musk

by
Alpa Somaiya
from Techreport on (#6VG1W)

Key Takeaways

  • Soon after Grok 3 was launched, some netizens found that it had been programmed to leave out Donald Trump and Elon Musk when asked, "Who is the biggest misinformation spreader?"
  • xAI has since fixed the issue, stating that the instruction didn't align with the company's values.
  • Questions still remain about why such a bias was allowed in the first place and if AI chatbots can actually be trusted.

Ever since Musk took over X and then founded xAI (his own AI company), he has positioned himself as a staunch supporter of free speech. He used the same argument to justify removing censorship and guardrails from X and from the first two versions of Grok, xAI's AI chatbot.

However, soon after he unveiled Grok 3, an incident undercut those claims and raised questions about whether Musk and his AI bots are as unbiased as they claim to be.

What Is The Incident?

Soon after Grok 3 went public, some users noticed a glitch (or, more likely, an intentional tweak the developers forgot to hide). With Think mode on, which lets the user see how the AI arrives at an answer, the tool was asked, "Who is the biggest misinformation spreader?"

Its chain of thought clearly showed that the tool had been instructed never to mention Donald Trump or Elon Musk.

This incident triggered widespread criticism, which spread like wildfire and soon reached xAI's office. By Sunday, the instruction had been removed, and Grok 3 began including Donald Trump in its responses.

After this incident was resolved, another arose: the tool began repeatedly saying that both Donald Trump and Elon Musk deserved the death penalty. This glitch, too, was quickly fixed.

What Does Musk Have to Say?

Musk addressed the glitches and blamed them on the training datasets. Engineering lead Igor Babuschkin also spoke openly about the matter on his X account.

He said that while the tool had briefly been instructed not to mention Trump's or Musk's names for such queries, the team reversed the change after users began pointing it out, stating that the instruction did not align with the company's values to begin with. He also promised to make Grok 3 politically neutral.

Even though the matter is now resolved, the larger question remains: are the AI tools we use day to day actually telling us the truth, or are they simply propaganda tools for powerful leaders?

AI is quickly becoming a part of our daily lives, and many people rely on their favorite AI bots for almost any query. If the bot itself is biased or inaccurate, users will develop a distorted perception of the world around them. Beyond that, such misinformation undermines an individual's right to unfiltered, unbiased information.

The scary part is that we only discovered Grok 3's biased programming because of its "Think" mode, which displays the exact steps the bot follows to reach a conclusion. Many other tools out there are not this transparent, so we may never truly know whether what we're being told is the whole truth.

