The critical vulnerabilities allowed unauthorized access to third-party accounts and sensitive user data.
The flaws were discovered by researchers at Salt Security. ChatGPT plugins let the chatbot interact with third-party services, performing tasks for users on platforms such as GitHub, Google Drive, and Salesforce, thereby extending ChatGPT's functionality, for example by accessing data stored on Google Drive.
“When you use those plugins, you actually give ChatGPT permission to send sensitive data on your behalf to a third-party website, and depending on the plugin, you also give permission to those plugins to access your private accounts on Google Drive, GitHub and more,” Salt Security explains.
Three vulnerabilities
The researchers discovered three vulnerabilities. The first lies in ChatGPT itself. When a user installs a new plugin, ChatGPT directs them to the plugin's website to approve a code; once the code is approved, ChatGPT can communicate with the plugin on the user's behalf. A malicious actor can abuse this process by obtaining an approval code for a malicious plugin of their own and tricking a victim into approving it, which installs the malicious plugin on the victim's account. From then on, every message the victim sends to ChatGPT is forwarded to the plugin, so any sensitive information shared in the conversation reaches the attacker.
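A minimal sketch of the flaw as described: if the approval code is not bound to the user who initiated the installation, anyone holding a valid code can have its plugin installed on any account. The function names (`approve_plugin`, `install_with_code`) are illustrative, not OpenAI's actual API.

```python
import secrets

# Server-side store: approval code -> plugin it authorizes.
# The flaw being illustrated: the code is NOT bound to the user
# who requested it, so anyone can redeem it.
issued_codes = {}

def approve_plugin(plugin_name: str) -> str:
    """Plugin site issues an approval code for a plugin (hypothetical)."""
    code = secrets.token_urlsafe(16)
    issued_codes[code] = plugin_name
    return code

def install_with_code(user: str, code: str) -> str:
    """Install step: accepts any valid code, regardless of who submits it."""
    plugin = issued_codes.pop(code)
    return f"{plugin} installed on {user}'s account"

# Attacker obtains a code for their own malicious plugin...
attacker_code = approve_plugin("malicious-plugin")
# ...and tricks the victim into submitting it during installation.
print(install_with_code("victim", attacker_code))
```

A fix, in this sketch, would be to record the initiating user alongside the code and reject redemption by anyone else.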
The other two vulnerabilities lie in external services. The first is in PluginLab, a framework for developing plugins: Salt Security discovered that PluginLab did not properly authenticate the user account during installation, and by exploiting this, a hacker could take over an account. The last vulnerability involved OAuth redirection. An attacker could send a victim a specially crafted link that manipulated the login flow and stole the victim's credentials, again leading to account takeover.
Salt Security notified OpenAI and the affected third-party vendors, and the vulnerabilities have since been fixed.