
Microsoft discovers backdoor that exploits OpenAI API

Microsoft researchers have discovered a backdoor that abuses the OpenAI Assistants API for command-and-control communication. The malware, dubbed SesameOp, was uncovered in July 2025 during an incident in which the attackers had remained in the victim's environment for months.

SesameOp is designed for persistence and silent control over compromised devices. The nature of the backdoor aligns with the attack’s ultimate goal: long-term access for espionage.

The attackers had embedded themselves deeply in the environment. Microsoft Incident Response uncovered a complex network of internal webshells that executed commands. These webshells were controlled by malicious processes that had compromised various Microsoft Visual Studio utilities via .NET AppDomainManager injection, a technique that loads attacker-controlled code through a legitimate, signed executable in order to evade detection.
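
The article does not describe what this looks like on disk, but AppDomainManager injection typically leaves a telltale artifact: a planted *.exe.config file (or equivalent environment variables) that points a legitimate .NET executable at an attacker-supplied AppDomainManager assembly. The Python sketch below is a hypothetical triage helper, not taken from Microsoft's report; the marker strings and script structure are assumptions made purely for illustration.

```python
"""Hypothetical triage sketch (not from Microsoft's report): flag .NET
application config files that set an AppDomainManager, which can indicate
AppDomainManager injection when found next to signed utilities."""
import sys
from pathlib import Path

# Config elements commonly associated with the technique (assumed markers).
SUSPICIOUS_MARKERS = ("appdomainmanagerassembly", "appdomainmanagertype")


def find_suspicious_configs(root: str):
    """Yield (config_path, marker) pairs for *.exe.config files that
    reference an AppDomainManager."""
    for cfg in Path(root).rglob("*.exe.config"):
        try:
            raw = cfg.read_bytes()
        except OSError:
            continue  # unreadable file; skip it
        # Strip NUL bytes so UTF-16-encoded configs are still searchable.
        text = raw.replace(b"\x00", b"").decode("utf-8", errors="ignore").lower()
        for marker in SUSPICIOUS_MARKERS:
            if marker in text:
                yield cfg, marker
                break


if __name__ == "__main__":
    scan_root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, marker in find_suspicious_configs(scan_root):
        print(f"[!] {path} references '{marker}' -- review the paired executable")
```

On a real system, any hit would warrant checking which assembly the config points to and whether the paired, signed executable is expected to load it.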

Unusual attack technique

Instead of traditional C2 methods, the attacker chose a surprising route: the backdoor uses the OpenAI Assistants API as a storage point and conduit for commands. A component of the malware retrieves instructions via this API and executes them on the infected system.
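
To make that mechanism concrete, the minimal sketch below uses the official openai Python SDK to show why the Assistants API can double as a relay: any client holding the same API key can store text in a thread and read it back later over ordinary outbound HTTPS to api.openai.com. It is a generic illustration under that assumption, not SesameOp's actual code, whose exact flow the article does not describe.

```python
"""Generic illustration (not SesameOp's code) of the Assistants API acting as
a shared message store: text written to a thread by one client can be read
back by any other client holding the same API key and thread ID.
Requires the official `openai` package and OPENAI_API_KEY in the environment."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One side stores a text payload as an ordinary message in a new thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="example payload stored as an ordinary message",
)

# Another side that knows the thread ID (and the key) retrieves it later.
messages = client.beta.threads.messages.list(thread_id=thread.id)
for message in messages:
    for block in message.content:
        if block.type == "text":
            print(block.text.value)
```

For a defender, the relevant point is that these requests go to a legitimate, widely used endpoint rather than to attacker-owned infrastructure.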

The researchers searched for other Visual Studio files that loaded suspicious libraries. This yielded additional artifacts that enabled external communication with the webshell infrastructure. Analysis of one of these files led to the discovery of SesameOp.

Collaboration against abuse

Microsoft emphasizes that this is not a vulnerability or misconfiguration. “This threat does not represent a vulnerability or misconfiguration, but rather a way to misuse built-in capabilities of the OpenAI Assistants API,” the company said. OpenAI plans to phase out the Assistants API in August 2026.

Together with OpenAI, Microsoft investigated how the attacker misused the API. Microsoft's Detection and Response Team (DART) shared its findings with OpenAI, which then disabled an API key and the associated account. The review confirmed that the account had not communicated with OpenAI models beyond a few limited API calls.

Tip: OpenAI Aardvark automatically detects vulnerabilities