
Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images

by Benj Edwards, Ars Technica

Generations from Gemini AI from the prompt, "Paint me a historically accurate depiction of a medieval British king." (credit: @stratejake / X)

On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

"We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon," wrote Google in a statement Thursday morning.

As more people on X piled on Google for being "woke," the Gemini generations fueled conspiracy theories that Google was purposely discriminating against white people and offering revisionist history to serve political goals. Beyond that angle, as The Verge points out, some of these inaccurate depictions "were essentially erasing the history of race and gender discrimination."

