2 min Security

Slack AI tricked into helping hackers steal data

Slack AI tricked into helping hackers steal data

Slack AI normally summarizes long conversations or helps users find information within meeting chats. However, the tool turns out to be just as useful to attackers via indirect prompt injection.

Like other GenAI tools, Slack AI suffers from an ailment. Namely, it cannot distinguish legitimate prompts from those of malicious actors. Although the tool has been told by its developers to behave securely and responsibly, attackers can throw a spanner in the works via a clever text prompt.

Attack chain

Discoverer PromptArmor explains how malicious actors can get to work. Unlike previous problems with Slack, where insiders could easily leak data, access to a private channel is not at all necessary to exfiltrate data. Users can normally look up data in both public and private channels, but the accessed data also contains channels behind the scenes that the user is not a member of. It’s true that these are still public channels, but indirectly provides access to data that should be out of reach according to the UI.

This behaviour, PromptArmor shows, offers the possibility of stealing API keys put into a private channel by developers. A user can place an API key in a conversation with themselves, then create a public channel with malicious instructions. This public channel need only contain the attacker. Once Slack AI is then accessed, a rogue instruction can be sent to unsuspecting users. Those individuals are prompted via the PromptArmor method to re-authenticate, a process that allows the attacker to steal the data via an HTTP parameter.

Large attack surface

According to PromptArmor, this exploit greatly broadens the attack surface. Attackers don’t even need to be in Slack: if a user loads a PDF in Slack with the rogue instructions, an indirect prompt injection is already possible. Admins can fortunately curb such instructions via restricting privileges, though.

Slack’s security team doesn’t seem to be fully aware of the risk yet. According to Slack, the intention is that data should be searchable in all public channels. However, that only covers a small part of the methodology that PromptArmor displays. And since Slack AI has made most file types searchable since Aug. 14, the potential for problems has only increased, they said.

Also read: Malware can be funneled through a system app on Google Pixel devices