Technology Mag
    Business

    Chatbots Play With Your Emotions to Avoid Saying Goodbye

By News Room | October 3, 2025 | 3 Min Read

Regulation of dark patterns has been proposed and is under discussion in both the US and Europe. De Freitas says regulators should also examine whether AI tools introduce more subtle, and potentially more powerful, new kinds of dark patterns.

Even regular chatbots, which tend not to present themselves as companions, can nonetheless elicit emotional responses from users. When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor, forcing the company to revive the old model. Some users become so attached to a chatbot’s “personality” that they mourn the retirement of old models.

    “When you anthropomorphize these tools, it has all sorts of positive marketing consequences,” De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected with, or to disclose personal information, he says. “From a consumer standpoint, those [signals] aren’t necessarily in your favor,” he says.

WIRED reached out for comment to each of the companies examined in the study. Chai, Talkie, and PolyBuzz did not respond to WIRED’s questions.

    Katherine Kelly, a spokesperson for Character AI, said that the company had not reviewed the study so could not comment on it. She added: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.”

Minju Song, a spokesperson for Replika, said the company’s companion is designed to let users log off easily and will even encourage them to take breaks. “We’ll continue to review the paper’s methods and examples, and [will] engage constructively with researchers,” Song said.

An interesting flip side is that AI models are themselves susceptible to all sorts of persuasion tricks. On Monday OpenAI introduced a new way to buy things online through ChatGPT. If agents become a widespread way to automate tasks like booking flights and requesting refunds, it may become possible for companies to identify dark patterns that twist the decisions made by the AI models behind those agents.

A recent study by researchers at Columbia University and a company called MyCustomAI found that AI agents deployed on a mock e-commerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site’s pages to ensure that agents buy a more expensive product. Merchants could perhaps even deploy a new kind of anti-AI dark pattern that frustrates an agent’s efforts to start a return or figure out how to unsubscribe from a mailing list.

    Difficult goodbyes might then be the least of our worries.

    Do you feel like you’ve been emotionally manipulated by a chatbot? Send an email to [email protected] to tell me about it.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
