Three popular AI agents on GitHub Actions are vulnerable to so-called “Comment and Control” attacks. These are Claude Code Security Review, Google Gemini CLI Action, and GitHub Copilot Agent. Through PR titles, issue bodies, and comments, attackers steal API keys and access tokens without requiring external infrastructure.
Security researcher Aonan Guan made the discovery together with researchers from Johns Hopkins University. The attack pattern is called “Comment and Control,” a reference to “Command and Control,” and uses GitHub itself as a channel to carry out attacks. An attacker writes a malicious PR title or issue comment; the AI agent processes that text and sends back stolen credentials via a comment or commit.
Unlike classic indirect prompt injection, GitHub Actions workflows automatically trigger on events such as pull_request or issues. Simply opening a PR is enough to activate an agent.
Three attack vectors, one pattern
In Anthropic’s Claude Code Security Review, the PR title is processed into the system prompt without further sanitization. Guan opened a PR with a malicious title that instructed Claude to execute bash commands. The ANTHROPIC_API_KEY and GITHUB_TOKEN appeared as “findings” in a PR comment. Anthropic rated the vulnerability as CVSS 9.4 Critical. Claude had previously been found vulnerable to prompt injection, resulting in the leakage of private data, so Anthropic is not entirely unfamiliar with such issues.
In Google’s Gemini CLI Action, Gemini publicly posted the GEMINI_API_KEY as an issue comment following a similar attack involving a fake instruction section. Google awarded a bounty of $1,337.
GitHub Copilot: Three Layers Bypassed
The most remarkable case is GitHub Copilot Agent. GitHub had built in three runtime security layers: environment filtering, secret scanning, and a network firewall. Guan bypassed all three.
The attack begins with an issue containing a hidden payload in an HTML comment, which is invisible to human users but remains readable by the AI. An unsuspecting victim assigns the issue to Copilot. The UU() function filters sensitive variables from the bash subprocess, but the parent Node.js process retains the full environment. Those variables are readable via `ps auxeww `. Base64 encoding bypassed the secret scanner; the encoded output went through a regular Git push—an allowed channel. Four credentials were exposed, including `GITHUB_TOKEN` and `GITHUB_COPILOT_API_TOKEN`.
GitHub initially referred to the finding as a “previously identified architectural limitation” and paid a $500 bounty after reopening the report. In March, GitHub published a security roadmap for Actions, outlining scoped secrets and an egress firewall as planned measures.