A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT


The latest generative AI models are not just self-contained Text-generating chat boots“When they can easily be linked to your data to give personal answers to your questions.” Openai -s Chatgpt may be attached To your inbox Gmail, may inspect your GitHub code, or find appointments in your Microsoft calendar. But these relationships can be misused – and researchers have shown that it can only take one “poisoned” document to do so.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat Hacker Conference in Las Vegas today, show how weakness in OpenAi’s connectors allowed sensitive information to extract from Google Drive account using an an account indirect prompt injection attack. In proof of the attack, Dubbed agentflayerBargury shows how it was possible to extract secrets from developers, in the form of API keys that were kept in a demonstration.

The vulnerability emphasizes how to connect AI models to external systems and sharing more data through them increases the potential attack surface for malicious hackers and may increase the ways where vulnerabilities can be introduced.

“There is nothing the user has to do to be compromised, and there is nothing the user has to do for the data to come out,” Bargury, the CTO at security firm Zenity, tells Wired. “We have shown that this is completely zero-click; we just need your email, we share the document with you, and so it is. So yes, this is very, very bad,” Bargury says.

Openai did not immediately respond to Wired’s request for a comment on the vulnerability in connectors. The company introduced connectors for chatgpt as a beta function earlier this year, and its Website lists At least 17 different services that can be linked with its accounts. It says the system allows you to “bring your tools and data in ChatGPT” and “search for files, pull live data and reference content right in the chat.”

Bargury says he reported the findings to Openai earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data with connectors. The way the attack works means only a limited amount of data at once – full documents could not be removed as part of the attack.

“While this issue is not specific to Google, it illustrates why developing robust protections against fast injection attacks is important,” says Andy Wen, senior director of security product management at Google Workspace, showing the company Recently Improved AI -Security Measures.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *