Microsoft is expanding its Copilot bug bounty program and now offers rewards even for vulnerabilities of moderate severity.
Where this category previously earned no bounty at all, researchers can now receive up to $5,000. In addition, Microsoft has significantly expanded the range of vulnerability types it pays for.
The Register quotes Lynn Miyashita and Madeline Eckert of the Microsoft bounty team, who argue that even moderate vulnerabilities have major implications for the security and reliability of Copilot products. Under the Copilot Bounty Program, researchers who discover and report previously unknown vulnerabilities can receive rewards ranging from $250 to $30,000. The most serious security flaws, such as code injection or model manipulation, earn the highest rewards.
Microsoft classifies vulnerabilities into four levels: critical, important, moderate and low. This classification is based on the Microsoft Vulnerability Severity Classification for both AI systems and online services. The tech giant also expanded the Copilot Bounty Program from three to 14 vulnerability types, as part of its broader strategy of integrating generative AI widely into its products.
Additional training programs
Previously, the program focused on three categories: inference manipulation, model manipulation and inferential information disclosure. The new categories include deserialization of untrusted data, code injection, authentication issues, SQL or command injection, server-side request forgery (SSRF), improper access control, cross-site scripting (XSS), cross-site request forgery (CSRF), web security misconfigurations, cross-origin access issues and improper input validation.
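To make one of the newly covered categories concrete, here is a minimal sketch of SQL injection in Python. The table, query and inputs are invented for illustration and have nothing to do with Copilot's actual code; the point is only the difference between a string-built query and a parameterized one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_vulnerable(name: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so input like "x' OR '1'='1" turns into a condition that matches
    # every row in the table.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name: str):
    # Safe: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_vulnerable("x' OR '1'='1"))  # leaks all secrets
print(lookup_safe("x' OR '1'='1"))        # returns []
```

The same pattern underlies command injection: untrusted input that ends up interpreted as part of an instruction rather than as plain data.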
Microsoft is urging bug hunters to target vulnerabilities in specific services, including Copilot for Telegram and Copilot for WhatsApp, as well as the websites copilot.microsoft.com and copilot.ai. The first AI bug bounty program was launched in October 2023 for Bing’s AI features and expanded to Copilot in April 2024.
In addition to the higher rewards and new targets, Microsoft last year also announced additional training programs for AI professionals under the name Zero Day Quest. This initiative offers workshops and access to Microsoft AI engineers and research tools.
AI security remains a concern
Despite these efforts, concerns about the security of generative AI remain. Microsoft and other large tech companies are rapidly deploying this technology, sometimes without fully understanding the implications for security and privacy. Windows Recall is one example: its security risks only came to light after the fact.
Researchers have repeatedly found ways to jailbreak large language models such as Copilot, potentially enabling criminals to misuse AI for cyberattacks or even weapons systems. Data poisoning attacks, in which misleading data is deliberately placed into training datasets, can also produce damaging or erroneous AI output. This poses a particular risk in sectors such as healthcare.
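As a deliberately simplified illustration of the data poisoning mechanism (the numbers and labels here are invented, and real LLM training is far more complex), a toy nearest-mean classifier in Python shows how a handful of mislabeled training samples can flip a prediction:

```python
def nearest_mean_classify(train, x):
    # train: list of (value, label) pairs; predict the label whose
    # class mean lies closest to the test point x.
    means = {}
    for label in {lbl for _, lbl in train}:
        vals = [v for v, lbl in train if lbl == label]
        means[label] = sum(vals) / len(vals)
    return min(means, key=lambda lbl: abs(means[lbl] - x))

clean = [(1.0, "benign"), (2.0, "benign"),
         (8.0, "malicious"), (9.0, "malicious")]

# Attacker injects benign-looking values mislabeled as "malicious",
# dragging that class mean toward the benign region.
poisoned = clean + [(1.5, "malicious")] * 4

print(nearest_mean_classify(clean, 3.0))     # benign
print(nearest_mean_classify(poisoned, 3.0))  # malicious
```

In a real system the poisoned samples would be hidden among millions of legitimate ones, which is what makes this kind of attack so hard to detect.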
Despite these challenges, it seems unlikely that software companies will slow the pace of AI integration. Higher bug bounty rewards may at least help researchers find the worst vulnerabilities before malicious actors can exploit them.