    You sound like ChatGPT

By News Room · June 20, 2025 · 6 min read

    Join any Zoom call, walk into any lecture hall, or watch any YouTube video, and listen carefully. Past the content and inside the linguistic patterns, you’ll find the creeping uniformity of AI voice. Words like “prowess” and “tapestry,” which are favored by ChatGPT, are creeping into our vocabulary, while words like “bolster,” “unearth,” and “nuance,” words less favored by ChatGPT, have declined in use. Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger.

    In the 18 months after ChatGPT was released, speakers used words like “meticulous,” “delve,” “realm,” and “adept” up to 51 percent more frequently than in the three years prior, according to researchers at the Max Planck Institute for Human Development, who analyzed close to 280,000 YouTube videos from academic channels. The researchers ruled out other possible change points before ChatGPT’s release and confirmed these words align with those the model favors, as established in an earlier study comparing 10,000 human- and AI-edited texts. The speakers don’t realize their language is changing. That’s exactly the point.
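At its core, the study's measurement is a before-and-after word-frequency comparison across dated transcripts. The sketch below is a rough illustration of that idea only, not the researchers' actual pipeline: it counts a handful of GPT-favored words per thousand words in transcripts dated before and after ChatGPT's release. The word list, the cutoff date, and the (upload_date, transcript_text) record format are assumptions made for the example.

```python
from collections import Counter
from datetime import date
import re

# Words the Max Planck study links to ChatGPT's style (illustrative subset).
GPT_FAVORED = {"delve", "meticulous", "realm", "adept"}

# ChatGPT's public release, used here as the before/after cutoff (assumption).
CUTOFF = date(2022, 11, 30)

def rate_per_thousand(transcripts):
    """Occurrences of tracked words per 1,000 words across a set of transcripts."""
    hits, total = 0, 0
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        hits += sum(counts[w] for w in GPT_FAVORED)
        total += len(words)
    return 1000 * hits / total if total else 0.0

def compare(records):
    """records: list of (upload_date, transcript_text) pairs -- a hypothetical format."""
    before = [text for d, text in records if d < CUTOFF]
    after = [text for d, text in records if d >= CUTOFF]
    before_rate = rate_per_thousand(before)
    after_rate = rate_per_thousand(after)
    pct_change = (after_rate - before_rate) / before_rate * 100 if before_rate else float("nan")
    return before_rate, after_rate, pct_change

# Toy usage: two fake transcripts, one from each side of the cutoff.
records = [
    (date(2021, 5, 1), "We delve into the archive to unearth the nuance of this claim."),
    (date(2024, 5, 1), "Let us delve into this realm with a meticulous and adept analysis."),
]
before_rate, after_rate, pct_change = compare(records)
print(f"before: {before_rate:.1f}/1k words, after: {after_rate:.1f}/1k words ({pct_change:+.0f}%)")
```

The real analysis goes well beyond this shape: it uses a much larger, empirically derived word list from the earlier human- versus AI-edited text comparison and rules out alternative change points, so treat the snippet as an outline of the idea rather than the study's method.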

    One word, in particular, stood out to researchers as a kind of linguistic watermark. “Delve” has become an academic shibboleth, a neon sign in the middle of every conversation flashing ChatGPT was here. “We internalize this virtual vocabulary into daily communication,” says Hiromu Yakura, the study’s lead author and a postdoctoral researcher at the Max Planck Institute for Human Development.


    But it’s not just that we’re adopting AI language — it’s about how we’re starting to sound. Even though current studies mostly focus on vocabulary, researchers suspect that AI influence is starting to show up in tone, too — in the form of longer, more structured speech and muted emotional expression. As Levin Brinkmann, a research scientist at the Max Planck Institute for Human Development and a coauthor of the study, puts it, “‘Delve’ is only the tip of the iceberg.”

    AI shows up most obviously in features like smart replies, autocorrect, and spellcheck. Research out of Cornell examined smart replies in chats, finding that using them increases overall cooperation and feelings of closeness between participants, since users end up selecting more positive emotional language. But if people believed their partner was using AI in the interaction, they rated that partner as less collaborative and more demanding. Crucially, it wasn’t actual AI usage that turned them off — it was the suspicion of it. We form perceptions based on language cues, and it is really those language properties that drive the impressions, says Malte Jung, an associate professor of information science at Cornell University and a co-author of the study.

    This paradox — AI improving communication while fostering suspicion — points to a deeper loss of trust, according to Mor Naaman, a professor of information science at Cornell Tech. He has identified three levels of human signals that we’ve lost in adopting AI into our communication. The first level is basic humanity signals: cues that speak to our authenticity as human beings, like moments of vulnerability or personal rituals, which say to others, “This is me, I’m human.” The second level consists of attention and effort signals that prove “I cared enough to write this myself.” And the third level is ability signals, which show our sense of humor, our competence, and our real selves to others. It’s the difference between texting someone “I’m sorry you’re upset” and “Hey sorry I freaked at dinner, I probably shouldn’t have skipped therapy this week.” One sounds flat; the other sounds human.

    For Naaman, figuring out how to bring back and elevate these signals is the path forward in AI-mediated communication, because AI is not only changing our language but also what we think. “Even on dating sites, what does it mean to be funny on your profile or in chat anymore where we know that AI can be funny for you?” Naaman asks. What worries him in particular is the loss of agency that starts in our speech and moves into our thinking. “Instead of articulating our own thoughts, we articulate whatever AI helps us to articulate…we become more persuaded.” Without these signals, Naaman warns, we’ll come to trust only face-to-face communication — not even video calls.


    The trust problem compounds when you consider that AI is quietly establishing who gets to sound “legitimate” in the first place. Research from the University of California, Berkeley found that AI responses often contained stereotypes or inaccurate approximations when prompted to use dialects other than Standard American English. Examples include ChatGPT repeating the prompt back to non-Standard-American-English users because it failed to understand them, or significantly exaggerating the input dialect. One Singaporean English respondent commented, “the super exaggerated Singlish in one of the responses was slightly cringeworthy.” The study revealed that AI doesn’t just prefer Standard American English; it actively flattens other dialects in ways that can demean their speakers.

    This system perpetuates inaccuracies not only about communities but also about what “correct” English is. So the stakes aren’t just about preserving linguistic diversity — they’re about protecting the imperfections that actually build trust. When everyone around us starts to sound “correct,” we lose the verbal stumbles, regional idioms, and off-kilter phrases that signal vulnerability, authenticity, and personhood.

    We’re approaching a splitting point, where AI’s impacts on how we speak and write move between the poles of standardization, like templating professional emails or formal presentations, and authentic expression in personal and emotional spaces. Between those poles, there are three core tensions at play. Early backlash signals, like academics avoiding “delve” and people actively trying not to sound like AI, suggest we may self-regulate against homogenization. AI systems themselves will likely become more expressive and personalized over time, potentially reducing the current AI voice problem. And the deepest risk of all, as Naaman points out, is not linguistic uniformity but losing conscious control over our own thinking and expression.

    The future isn’t predetermined between homogenization and hyperpersonalization: it depends on whether we’ll be conscious participants in that change. We’re seeing early signs that people will push back when AI influence becomes too obvious, while technology may evolve to better mirror human diversity rather than flatten it. This isn’t a question about whether AI will continue shaping how we speak — because it will — but whether we’ll actively choose to preserve space for the verbal quirks and emotional messiness that make communication recognizably, irreplaceably human.
