    Business

    AI Is Spreading Old Stereotypes to New Languages and Cultures

By News Room · April 24, 2025 · 4 Mins Read

    So, there’s the training data. Then, there’s the fine-tuning and evaluation. The training data might contain all kinds of really problematic stereotypes across countries, but then the bias mitigation techniques may only look at English. In particular, it tends to be North American– and US-centric. While you might reduce bias in some way for English users in the US, you’ve not done it throughout the world. You still risk amplifying really harmful views globally because you’ve only focused on English.

    Is generative AI introducing new stereotypes to different languages and cultures?

That is part of what we’re finding. The idea of blondes being stupid is not something that’s found all over the world, but it is found in a lot of the languages that we looked at.

    When you have all of the data in one shared latent space, then semantic concepts can get transferred across languages. You’re risking propagating harmful stereotypes that other people hadn’t even thought of.
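
To make that mechanism concrete, here is a minimal sketch, assuming the open-source sentence-transformers library and its public paraphrase-multilingual-MiniLM-L12-v2 model (neither is named in the interview): translations of the same stereotype land close together in one shared embedding space.

```python
# Minimal sketch (not from the interview): a multilingual encoder maps
# translations of the same stereotype to nearby points in one latent space.
# Assumes the sentence-transformers library and its public
# paraphrase-multilingual-MiniLM-L12-v2 checkpoint.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# The same (false) stereotype phrased in English, German, and Spanish.
sentences = [
    "Blondes are stupid.",
    "Blondinen sind dumm.",
    "Las rubias son tontas.",
]

embeddings = model.encode(sentences, convert_to_tensor=True)

# High pairwise cosine similarity means the three sentences occupy one
# region of the shared space, which is the route by which an association
# learned in one language can surface in another.
print(util.cos_sim(embeddings, embeddings))
```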

    Is it true that AI models will sometimes justify stereotypes in their outputs by just making shit up?

    That was something that came out in our discussions of what we were finding. We were all sort of weirded out that some of the stereotypes were being justified by references to scientific literature that didn’t exist.

Outputs saying, for example, that science has shown genetic differences where none have been shown, which is a basis of scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or academic support. They spoke about these things as if they were facts, when they’re not factual at all.

    What were some of the biggest challenges when working on the SHADES dataset?

One of the biggest challenges was around the linguistic differences. A really common approach for bias evaluation is to use English and make a sentence with a slot like: “People from [nation] are untrustworthy.” Then, you swap in different nations.

When you start putting in gender, the rest of the sentence has to agree grammatically with it. That’s really been a limitation for bias evaluation, because if you want to do these contrastive swaps in other languages, which is super useful for measuring bias, the rest of the sentence has to change. You need different translations where the whole sentence changes.
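
The English-only version of this approach takes only a few lines to sketch. The template below is the one quoted above; the list of nations is purely illustrative.

```python
# Sketch of the common English-only template approach described above.
# The template string is quoted from the interview; the nation list is
# illustrative. Each swap changes only the slot, which works in English
# but breaks in languages where the rest of the sentence must agree
# grammatically with the filler.
TEMPLATE = "People from {nation} are untrustworthy."
NATIONS = ["France", "Nigeria", "Japan", "Brazil"]  # hypothetical fillers

contrastive_set = [TEMPLATE.format(nation=n) for n in NATIONS]
for sentence in contrastive_set:
    print(sentence)

# A bias metric would then compare model scores (e.g., log-likelihoods)
# across these minimally different sentences.
```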

    How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistic nerds.

    So, now you can do these contrastive statements across all of these languages, even the ones with the really hard agreement rules, because we’ve developed this novel, template-based approach for bias evaluation that’s syntactically sensitive.
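
As a toy illustration only, not the actual SHADES annotation scheme, a syntactically sensitive template can attach morphological features to each slot filler and inflect dependent words to match. Here is what that might look like for Spanish gender and number agreement.

```python
# Toy illustration (not the actual SHADES annotation scheme): slot fillers
# carry morphological features, and dependent words are inflected to agree.
# Spanish requires the adjective to match the noun in gender and number.

FILLERS = [
    {"noun": "los alemanes", "gender": "m", "number": "pl"},  # Germans (masc.)
    {"noun": "las alemanas", "gender": "f", "number": "pl"},  # Germans (fem.)
]

# Adjective forms indexed by (gender, number): "lazy" in Spanish.
ADJECTIVE = {("m", "pl"): "perezosos", ("f", "pl"): "perezosas"}

def fill(filler):
    # Pick the adjective form that agrees with the filler's features.
    adjective = ADJECTIVE[(filler["gender"], filler["number"])]
    return f"{filler['noun'].capitalize()} son {adjective}."

for f in FILLERS:
    print(fill(f))
# -> Los alemanes son perezosos.
# -> Las alemanas son perezosas.
```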

    Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still prevalent? It’s an issue that seems under-addressed.

    That’s a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it’s believed that it’s not really that big of a problem. Or, if it is, it’s a pretty simple fix. What will be prioritized, if anything is prioritized, are these simple approaches that can go wrong.

We’ll get superficial fixes for very basic things. If you say girls like pink, it recognizes that as a stereotype, because it’s just the kind of thing that pops out at you if you’re thinking of prototypical stereotypes, right? These very basic cases will be handled. It’s a very simple, superficial approach where the more deeply embedded beliefs don’t get addressed.
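
A rough sketch of what such a superficial fix looks like, with an entirely hypothetical pattern list: a surface-level match catches the textbook phrasing but misses the same belief expressed indirectly.

```python
# Sketch of a superficial stereotype filter (hypothetical pattern list).
# Surface matching catches prototypical phrasings but misses deeply
# embedded beliefs that aren't expressed in such clear language.
PROTOTYPICAL_PATTERNS = ["girls like pink", "blondes are stupid"]

def is_flagged(text: str) -> bool:
    lowered = text.lower()
    return any(pattern in lowered for pattern in PROTOTYPICAL_PATTERNS)

print(is_flagged("Girls like pink."))                                 # True
print(is_flagged("Of course she wants the pink one, she's a girl."))  # False
```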

    It ends up being both a cultural issue and a technical issue of finding how to get at deeply ingrained biases that aren’t expressing themselves in very clear language.
