    An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess

    By News Room | April 22, 2025 | 4 Mins Read

    On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named “Sam” told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

    This marks the latest instance of AI confabulations (also called “hallucinations”) causing potential business damage. Confabulations are a type of “creative gap-filling” response where AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch.

    For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor’s case, potentially canceled subscriptions.

    How It Unfolded

    The incident began when a Reddit user named BrokenToasterOven noticed that swapping between a desktop, a laptop, and a remote dev box unexpectedly terminated their Cursor sessions.

    “Logging into Cursor on one machine immediately invalidates the session on any other machine,” BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. “This is a significant UX regression.”

    Confused and frustrated, the user emailed Cursor support and quickly received a reply from Sam: “Cursor is designed to work with one device per subscription as a core security feature.” The response sounded definitive and official, and the user did not suspect that Sam was not human.

    Users took the initial Reddit post as official confirmation of an actual policy change, one that broke habits essential to many programmers’ daily routines. “Multi-device workflows are table stakes for devs,” wrote one user.

    Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. “I literally just cancelled my sub,” wrote the original Reddit poster, adding that their workplace was now “purging it completely.” Others joined in: “Yep, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.

    “Hey! We have no such policy,” wrote a Cursor representative in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”

    AI Confabulations as a Business Risk

    The Cursor debacle recalls a similar episode from February 2024 when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother died, and the airline’s AI agent incorrectly told him he could book a regular-priced flight and apply for bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that “the chatbot is a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected this defense, ruling that companies are responsible for information provided by their AI tools.

    Rather than disputing responsibility as Air Canada had done, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the non-existent policy, explaining that the user had been refunded and the issue resulted from a backend change meant to improve session security that unintentionally created session invalidation problems for some users.
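    Truell’s explanation points to a familiar failure mode: a server-side rule that keeps only one valid session per account will, as a side effect, log a user out everywhere else the moment they sign in on a new machine. The sketch below is a hypothetical illustration of that mechanism only; the SessionStore class, its methods, and the example account are assumptions made for demonstration, not Cursor’s actual backend.

        # Hypothetical sketch of a "one active session per user" rule; not Cursor's code.
        import secrets

        class SessionStore:
            def __init__(self):
                # Each user maps to a single currently valid session token.
                self._active_token = {}

            def login(self, user):
                # Issuing a new token silently replaces the old one, so a session
                # opened on another machine stops validating.
                token = secrets.token_hex(16)
                self._active_token[user] = token
                return token

            def is_valid(self, user, token):
                return self._active_token.get(user) == token

        store = SessionStore()
        laptop = store.login("dev@example.com")    # sign in on the laptop
        desktop = store.login("dev@example.com")   # then sign in on the desktop
        print(store.is_valid("dev@example.com", laptop))   # False: laptop session invalidated
        print(store.is_valid("dev@example.com", desktop))  # True

    In this illustrative model, tracking a set of valid tokens per user rather than a single one would restore multi-device use; whatever Cursor actually changed, the company said the invalidation was unintentional.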

    “Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”

    Still, the incident raised lingering questions about disclosure among users, since many people who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.

    While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

    “There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”

    This story originally appeared on Ars Technica.
