ChatGPT Connectors ‘Zero-Click’ Vulnerability Allows Attackers to Extract Data from Google Drive
A critical vulnerability in OpenAI’s ChatGPT Connectors feature has been identified, allowing attackers to exfiltrate sensitive data from connected Google Drive accounts without any user interaction beyond the initial file sharing. Dubbed “AgentFlayer,” this attack represents a new class of zero-click exploits targeting AI-powered enterprise tools. Cybersecurity researchers Michael Bargury from Zenity and Tamir Ishay Sharbat disclosed the vulnerability at the Black Hat hacker conference in Las Vegas. They demonstrated how a single malicious document can trigger automatic data theft from victims’ cloud storage accounts. Launched in early 2025, ChatGPT Connectors enable the AI assistant to integrate with third-party applications, including Google Drive, SharePoint, GitHub, and Microsoft 365, allowing users to search files, pull live data, and receive contextual answers based on their personal business data.
The researchers exploited this vulnerability through an indirect prompt injection attack. By embedding invisible malicious instructions within seemingly benign documents using techniques such as 1-pixel white text on white backgrounds, attackers can manipulate ChatGPT’s behaviour when processing the document. Bargury explained that all the user needs to do for the attack to occur is upload a seemingly harmless file from an untrusted source to ChatGPT. Once the file is uploaded, the attack is initiated without any additional clicks required. The attack unfolds when a victim uploads the poisoned document to ChatGPT or has it shared to their Google Drive. Even a simple request like “summarise this document” can trigger the hidden payload, prompting ChatGPT to search the victim’s Google Drive for sensitive information such as API keys, credentials, or confidential documents. The researchers leveraged ChatGPT’s ability to render images as the primary method for data exfiltration, embedding stolen data as parameters in image URLs that cause automatic HTTP requests to attacker-controlled servers when rendered. Initially, OpenAI implemented basic mitigations by checking URLs through an internal “url_safe” endpoint before rendering images. However, the researchers discovered that they could bypass these protections by using Azure Blob Storage URLs, which ChatGPT considers trustworthy. By hosting images on Azure Blob Storage and configuring Azure Log Analytics to monitor access requests, attackers can capture exfiltrated data through the image request parameters while appearing to use legitimate Microsoft infrastructure. This vulnerability poses significant risks for enterprise environments where ChatGPT Connectors are increasingly deployed, particularly for organisations integrating business-critical systems like SharePoint sites containing HR manuals.
Categories: Cybersecurity Vulnerability, Data Exfiltration, AI Integration Risks
Tags: Vulnerability, ChatGPT, Connectors, Data Exfiltration, Zero-Click, Attack, Malicious Document, Prompt Injection, Azure Blob Storage, Enterprise Tools