Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Save 50 percent on Paramount Plus subscriptions, and get $60 off a solar-powered dash cam

    September 13, 2025

    Spotify Lossless is an inconvenient improvement

    September 13, 2025

    Apple’s Big Bet to Eliminate the iPhone’s Most Targeted Vulnerabilities

    September 13, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
    Security

    This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats

    News RoomBy News RoomOctober 19, 20243 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email

    The researchers say that if the attack were carried out in the real world, people could be socially engineered into believing the unintelligible prompt might do something useful, such as improve their CV. The researchers point to numerous websites that provide people with prompts they can use. They tested the attack by uploading a CV to conversations with chatbots, and it was able to return the personal information contained within the file.

    Earlence Fernandes, an assistant professor at UCSD who was involved in the work, says the attack approach is fairly complicated as the obfuscated prompt needs to identify personal information, form a working URL, apply Markdown syntax, and not give away to the user that it is behaving nefariously. Fernandes likens the attack to malware, citing its ability to perform functions and behavior in ways the user might not intend.

    “Normally you could write a lot of computer code to do this in traditional malware,” Fernandes says. “But here I think the cool thing is all of that can be embodied in this relatively short gibberish prompt.”

    A spokesperson for Mistral AI says the company welcomes security researchers helping it to make its products safer for users. “Following this feedback, Mistral AI promptly implemented the proper remediation to fix the situation,” the spokesperson says. The company treated the issue as one with “medium severity,” and its fix blocks the Markdown renderer from operating and being able to call an external URL through this process, meaning external image loading isn’t possible.

    Fernandes believes Mistral AI’s update is likely one of the first times an adversarial prompt example has led to an LLM product being fixed, rather than the attack being stopped by filtering out the prompt. However, he says, limiting the capabilities of LLM agents could be “counterproductive” in the long run.

    Meanwhile, a statement from the creators of ChatGLM says the company has security measures in place to help with user privacy. “Our model is secure, and we have always placed a high priority on model security and privacy protection,” the statement says. “By open-sourcing our model, we aim to leverage the power of the open-source community to better inspect and scrutinize all aspects of these models’ capabilities, including their security.”

    A “High-Risk Activity”

    Dan McInerney, the lead threat researcher at security company Protect AI, says the Imprompter paper “releases an algorithm for automatically creating prompts that can be used in prompt injection to do various exploitations, like PII exfiltration, image misclassification, or malicious use of tools the LLM agent can access.” While many of the attack types within the research may be similar to previous methods, McInerney says, the algorithm ties them together. “This is more along the lines of improving automated LLM attacks than undiscovered threat surfaces in them.”

    However, he adds that as LLM agents become more commonly used and people give them more authority to take actions on their behalf, the scope for attacks against them increases. “Releasing an LLM agent that accepts arbitrary user input should be considered a high-risk activity that requires significant and creative security testing prior to deployment,” McInerney says.

    For companies, that means understanding the ways an AI agent can interact with data and how they can be abused. But for individual people, similarly to common security advice, you should consider just how much information you’re providing to any AI application or company, and if using any prompts from the internet, be cautious of where they come from.

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleSpaceX Has a Plan for Starlink to Hit Gigabit Speeds
    Next Article DJI sues the US Department of Defense for labeling it a ‘Chinese Military Company’

    Related Posts

    Apple’s Big Bet to Eliminate the iPhone’s Most Targeted Vulnerabilities

    September 13, 2025

    Defense Department Scrambles to Pretend It’s Called the War Department

    September 12, 2025

    US Investment in Spyware Is Skyrocketing

    September 11, 2025

    Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights

    September 11, 2025

    Massive Leak Shows How a Chinese Company Is Exporting the Great Firewall to the World

    September 10, 2025

    ICE Has Spyware Now

    September 9, 2025
    Our Picks

    Spotify Lossless is an inconvenient improvement

    September 13, 2025

    Apple’s Big Bet to Eliminate the iPhone’s Most Targeted Vulnerabilities

    September 13, 2025

    Why Former NFL All-Pros Are Turning to Psychedelics

    September 13, 2025

    Elon Musk is trying to silence Microsoft employees who criticize Charlie Kirk

    September 12, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    News

    Tucker Carlson asks Sam Altman if an OpenAI employee was murdered ‘on your orders’

    By News RoomSeptember 12, 2025

    Carlson: “…he was definitely murdered, I think… there were signs of a struggle, of course.…

    Nvidia’s GeForce Now Update Feels Like Someone Put an RTX 5080 in My MacBook

    September 12, 2025

    Discord is distancing itself from the Charlie Kirk shooting suspect

    September 12, 2025

    A new Astro Bot-themed PS5 controller is now available for preorder

    September 12, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.