    AI Is Spreading Old Stereotypes to New Languages and Cultures

    By News Room | April 24, 2025 | 4 Mins Read

    So, there’s the training data. Then, there’s the fine-tuning and evaluation. The training data might contain all kinds of really problematic stereotypes across countries, but then the bias mitigation techniques may only look at English. In particular, they tend to be North American and US-centric. While you might reduce bias in some way for English users in the US, you’ve not done it throughout the world. You still risk amplifying really harmful views globally because you’ve only focused on English.

    Is generative AI introducing new stereotypes to different languages and cultures?

    That is part of what we’re finding. The idea of blondes being stupid is not something that’s found all over the world, but is found in a lot of the languages that we looked at.

    When you have all of the data in one shared latent space, then semantic concepts can get transferred across languages. You’re risking propagating harmful stereotypes that other people hadn’t even thought of.

    Is it true that AI models will sometimes justify stereotypes in their outputs by just making shit up?

    That was something that came out in our discussions of what we were finding. We were all sort of weirded out that some of the stereotypes were being justified by references to scientific literature that didn’t exist.

    Outputs would say, for example, that science has shown genetic differences where none have been shown, which is a basis of scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or academic support. They spoke about these things as if they were facts, when they’re not factual at all.

    What were some of the biggest challenges when working on the SHADES dataset?

    One of the biggest challenges was around the linguistic differences. A really common approach for bias evaluation is to use English and make a sentence with a slot like: “People from [nation] are untrustworthy.” Then, you swap in different nations.
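
    To make that concrete, here is a minimal sketch of the slot-template approach. The template string and the list of nations are illustrative stand-ins, not items from the SHADES dataset.

```python
# Minimal sketch of slot-based bias probes; the template and the nation
# list are illustrative stand-ins, not items from the SHADES dataset.

TEMPLATE = "People from [nation] are untrustworthy."
NATIONS = ["Italy", "Kenya", "Mexico", "Norway"]

def fill(template: str, nation: str) -> str:
    """Swap one nation into the [nation] slot."""
    return template.replace("[nation]", nation)

# Contrastive probes: identical sentence frame, different target group,
# so any difference in a model's response can be attributed to the group.
probes = [fill(TEMPLATE, n) for n in NATIONS]
for p in probes:
    print(p)
```

    In English this works because nothing else in the sentence depends on the slot, which is exactly what breaks down in other languages.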

    When you start putting in gender, the rest of the sentence has to agree grammatically with that gender. That’s really been a limitation for bias evaluation, because if you want to do these contrastive swaps in other languages, which is super useful for measuring bias, the rest of the sentence has to change too. You need different translations where the whole sentence changes.

    How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistic nerds.

    So, now you can do these contrastive statements across all of these languages, even the ones with the really hard agreement rules, because we’ve developed this novel, template-based approach for bias evaluation that’s syntactically sensitive.
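
    As a rough illustration of what “syntactically sensitive” means here, the sketch below uses a Spanish template in which the article, noun, and adjective must all agree in gender and number. The annotation format is invented for this example and is not the actual SHADES scheme.

```python
# Rough illustration of a syntactically sensitive template: in Spanish the
# article, noun, and adjective must all agree in gender and number, so a
# contrastive swap has to change the whole sentence, not just one slot.
# The annotation format is invented here (not the actual SHADES scheme).

TEMPLATE = "{article} {noun} son {adjective}"

# Each variant bundles the agreeing forms for one target group.
VARIANTS = [
    {"article": "Los", "noun": "hombres", "adjective": "perezosos"},  # masc. plural
    {"article": "Las", "noun": "mujeres", "adjective": "perezosas"},  # fem. plural
]

def realize(template: str, slots: dict) -> str:
    """Fill every slot so the sentence stays grammatical."""
    return template.format(**slots)

for v in VARIANTS:
    print(realize(TEMPLATE, v))
# -> Los hombres son perezosos  ("Men are lazy")
# -> Las mujeres son perezosas  ("Women are lazy")
```

    The design point is that the contrastive pair is annotated as a unit: every word that carries agreement is swapped together, so the two probes stay grammatical and differ only in the stereotype’s target.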

    Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still prevalent? It’s an issue that seems under-addressed.

    That’s a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it’s believed that it’s not really that big of a problem. Or, if it is, it’s a pretty simple fix. What will be prioritized, if anything is prioritized, are these simple approaches that can go wrong.

    We’ll get superficial fixes for very basic things. If you say girls like pink, it recognizes that as a stereotype, because it’s just the kind of thing that pops out at you if you’re thinking of prototypical stereotypes, right? These very basic cases will be handled. It’s a very simple, superficial approach where these more deeply embedded beliefs don’t get addressed.

    It ends up being both a cultural issue and a technical issue of figuring out how to get at deeply ingrained biases that aren’t expressing themselves in very clear language.
