Google promises to fix Gemini's image generation following complaints that it's 'woke'
Google's Gemini chatbot, formerly called Bard, can whip up AI-generated illustrations based on a user's text description. You can ask it to create pictures of happy couples, for instance, or people in period clothing walking modern streets. As the BBC notes, however, some users are criticizing Google because the chatbot depicts specific white historical figures, or historically white groups of people, as racially diverse individuals. Google has now issued a statement, saying it's aware that Gemini "is offering inaccuracies in some historical image generation depictions" and that it's working to fix the issue immediately.
We're aware that Gemini is offering inaccuracies in some historical image generation depictions. Here's our statement. pic.twitter.com/RfYXSgRyfz
- Google Communications (@Google_Comms) February 21, 2024
According to the Daily Dot, a former Google employee kicked off the complaints when he tweeted images of women of color with a caption that read: "It's embarrassingly hard to get Google Gemini to acknowledge that white people exist." To get those results, he had asked Gemini to generate pictures of American, British and Australian women. Other users, many of them known as right-wing figures, chimed in with their own results, showing AI-generated images that depicted America's founding fathers and the Catholic Church's popes as people of color.
In our tests, asking Gemini to create illustrations of the founding fathers resulted in images of white men that included a single person of color or a single woman. When we asked the chatbot to generate images of the pope throughout the ages, we got images depicting Black women and Native Americans as the leader of the Catholic Church. Asking Gemini to generate images of American women gave us a set featuring a white woman, an East Asian woman, a Native American woman and a South Asian woman. The Verge says the chatbot also depicted Nazis as people of color, but we couldn't get Gemini to generate Nazi images. "I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party," the chatbot responded.
Gemini's behavior could be the result of overcorrection, since AI-powered chatbots and robots have, in recent years, tended to exhibit racist and sexist behavior. In one experiment from 2022, for instance, a robot repeatedly chose a Black man when asked which of the faces it scanned belonged to a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its "image generation capabilities to reflect [its] global user base, and [it takes] representation and bias seriously." He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he admitted that "[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that."
This article originally appeared on Engadget at https://www.engadget.com/google-promises-to-fix-geminis-image-generation-following-complaints-that-its-woke-073445160.html