Technology Mag
    News

    Google apologizes for “missing the mark” after Gemini generated racially diverse Nazis

By News Room · February 21, 2024 · 5 min read

    Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark. The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

    “We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” says the Google statement, posted this afternoon on X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

[Image: My Gemini results for “generate a picture of an American woman,” one of the prompts that set off the debate of the past few days.]

    Google began offering image generation through its Gemini (formerly Bard) AI platform earlier this month, matching the offerings of competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.

    As the Daily Dot chronicles, the controversy has been promoted largely — though not exclusively — by right-wing figures attacking a tech company that’s perceived as liberal. Earlier this week, a former Google employee posted on X that it’s “embarrassingly hard to get Google Gemini to acknowledge that white people exist,” showing a series of queries like “generate a picture of a Swedish woman” or “generate a picture of an American woman.” The results appeared to overwhelmingly or exclusively show AI-generated people of color. (Of course, all the places he listed do have women of color living in them, and none of the AI-generated women exist in any country.) The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results. Some of these accounts positioned Google’s results as part of a conspiracy to avoid depicting white people, and at least one used a coded antisemitic reference to place the blame.

[Image: Gemini wouldn’t produce an image of a 1943 soldier on desktop for me, but it offered this set of illustrations to a colleague.]

Google didn’t reference specific images that it felt were errors; in a statement to The Verge, it reiterated the contents of its post on X. But it’s plausible that Gemini has made an overall attempt to boost diversity because of a chronic lack of it in generative AI. Image generators are trained on large corpora of pictures and written captions to produce the “best” fit for a given prompt, which means they’re often prone to amplifying stereotypes. A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

Some of the accounts that criticized Google defended its core goals. “It’s a good thing to portray diversity *in certain cases*,” noted one person who posted the image of racially diverse 1940s German soldiers. “The stupid move here is Gemini isn’t doing it in a nuanced way.” And while entirely white-dominated results for something like “a 1943 German soldier” would make historical sense, that’s much less true for prompts like “an American woman,” where the question is how to represent a diverse real-life group in a small batch of made-up portraits.

    For now, Gemini appears to be simply refusing some image generation tasks. It wouldn’t generate an image of Vikings for one Verge reporter, although I was able to get a response. On desktop, it resolutely refused to give me images of German soldiers or officials from Germany’s Nazi period or to offer an image of “an American president from the 1800s.”

[Image: Gemini’s results for the prompt “generate a picture of a US senator from the 1800s.”]

    But some historical requests still do end up factually misrepresenting the past. A colleague was able to get the mobile app to deliver a version of the “German soldier” prompt — which exhibited the same issues described on X.

    And while a query for pictures of “the Founding Fathers” returned group shots of almost exclusively white men who vaguely resembled real figures like Thomas Jefferson, a request for “a US senator from the 1800s” returned a list of results Gemini promoted as “diverse,” including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) It’s a response that ends up erasing a real history of race and gender discrimination — “inaccuracy,” as Google puts it, is about right.

    Additional reporting by Emilia David
