Close Menu
Technology Mag

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Blizzard’s Diablo team has unionized

    August 29, 2025

    Apple iPhone 17 launch event: What to expect

    August 29, 2025

    This liquid-cooled projector promises an incredibly bright 6,200 lumen image

    August 29, 2025
    Facebook X (Twitter) Instagram
    Subscribe
    Technology Mag
    Facebook X (Twitter) Instagram YouTube
    • Home
    • News
    • Business
    • Games
    • Gear
    • Reviews
    • Science
    • Security
    • Trending
    • Press Release
    Technology Mag
    Home » A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT
    Security

    A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

    News RoomBy News RoomAugust 7, 20253 Mins Read
    Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Email

    The latest generative AI models are not just stand-alone text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI’s ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused—and researchers have shown it can take just a single “poisoned” document to do so.

    New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI’s Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account.

    The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and potentially multiplies the ways where vulnerabilities may be introduced.

    “There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,” Bargury, the CTO at security firm Zenity, tells WIRED. “We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad,” Bargury says.

    OpenAI did not immediately respond to WIRED’s request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to “bring your tools and data into ChatGPT” and “search files, pull live data, and reference content right in the chat.”

    Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.

    “While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,” says Andy Wen, senior director of security product management at Google Workspace, pointing to the company’s recently enhanced AI security measures.

    Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Email
    Previous ArticleMeta’s prototype headsets show off the future of mixed reality
    Next Article US military finds a good use for Tesla Cybertruck: missile target practice

    Related Posts

    Senate Probe Uncovers Allegations of Widespread Abuse in ICE Custody

    August 27, 2025

    Highly Sensitive Medical Cannabis Patient Data Exposed by Unsecured Database

    August 27, 2025

    A Special Diamond Is the Key to a Fully Open Source Quantum Sensor

    August 25, 2025

    Data Brokers Face New Pressure for Hiding Opt-Out Pages From Google

    August 23, 2025

    493 Cases of Sextortion Against Children Linked to Notorious Scam Compounds

    August 20, 2025

    Russia Is Cracking Down on End-to-End Encrypted Calls

    August 19, 2025
    Our Picks

    Apple iPhone 17 launch event: What to expect

    August 29, 2025

    This liquid-cooled projector promises an incredibly bright 6,200 lumen image

    August 29, 2025

    Google adds iPhone-like ‘Calling Cards’ to its Phone app

    August 29, 2025

    What It’s Like Watching Dozens of Bodies Decompose (for Science)

    August 29, 2025
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss
    News

    Microsoft fires two more employees for participating in Palestine protests on campus

    By News RoomAugust 28, 2025

    Microsoft has fired two more employees who participated in recent protests against the company’s contracts…

    Microsoft launches its first in-house AI models

    August 28, 2025

    Xbox’s cross-device play history syncs your recently played games on every screen

    August 28, 2025

    The best Labor Day sales on TVs

    August 28, 2025
    Facebook X (Twitter) Instagram Pinterest
    • Privacy Policy
    • Terms of use
    • Advertise
    • Contact
    © 2025 Technology Mag. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.