In a striking incident, attackers used a combination of exposed credentials and artificial intelligence (AI) to gain administrative access to an Amazon Web Services (AWS) environment in less than 10 minutes. The event is a stark reminder of how AI is rapidly becoming a significant tool for cybercriminals, enabling them to execute attacks with unprecedented speed.
According to a report by the Sysdig Threat Research Team (TRT), the attack began on November 28 when the threat actor discovered credentials in public Simple Storage Service (S3) buckets. Following this initial breach, the actor escalated privileges and moved laterally across 19 unique AWS principals in a matter of minutes.
The researchers noted that throughout the attack, the perpetrator leveraged large language models (LLMs) to automate various phases, including reconnaissance, malicious code generation, and real-time decision-making. This reliance on LLMs contributed significantly to both the speed and efficiency of the attack, highlighting the evolving tactics employed by threat actors.
“This attack stands out for its speed, effectiveness, and strong indicators of AI-assisted execution,” Sysdig researchers Alessandro Brucato and Michael Clark stated in their report.
Initial Access Through Credential Exposure
While the swift execution of the attack was alarming, the method of gaining access through exposed credentials serves as a cautionary tale for organizations utilizing cloud environments. The researchers emphasized that compromised credentials are often the gateway for attackers into cloud infrastructures.
“Leaving access keys in public buckets is a huge mistake,” they advised. “Organizations should prefer IAM roles instead, which use temporary credentials. If they really want to leverage IAM users with long-term credentials, they should secure them and implement a periodic rotation.”
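The rotation advice above is straightforward to automate. The sketch below is a minimal, illustrative check for stale keys; it assumes key metadata shaped like the entries IAM's ListAccessKeys API returns, and the 90-day threshold is a common policy choice, not anything mandated by AWS.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE_DAYS = 90  # common rotation policy; adjust to your own standard

def keys_needing_rotation(access_keys, now=None):
    """Return the IDs of access keys older than the rotation threshold.

    `access_keys` is a list of dicts shaped like the entries returned by
    IAM's ListAccessKeys: {"AccessKeyId": str, "CreateDate": datetime}.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=MAX_KEY_AGE_DAYS)
    return [k["AccessKeyId"] for k in access_keys if k["CreateDate"] < cutoff]
```

A scheduled job that feeds real ListAccessKeys output into this check and alerts on the result is one low-effort way to enforce the periodic rotation the researchers recommend.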
The exposed S3 buckets were named following common AI tool naming conventions, which the attackers actively sought out during their reconnaissance phase, making it easier for them to locate the necessary credentials.
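Defenders can hunt for the same artifacts the attackers did before they are found. AWS access key IDs have a recognizable shape (long-term keys begin with "AKIA", temporary ones with "ASIA", followed by 16 uppercase alphanumerics), so a simple scan of objects destined for public buckets can catch accidental exposure; this is a minimal sketch, not a substitute for a full secrets scanner.

```python
import re

# Long-term AWS access key IDs start with "AKIA"; temporary ones with "ASIA".
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_exposed_key_ids(text):
    """Return any substrings of `text` shaped like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)
```

Running such a check in CI, or against bucket contents on a schedule, turns the attackers' reconnaissance technique into an early-warning signal for the defender.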
In response to the incident, an AWS spokesperson said that “AWS services and infrastructure are not affected by this issue,” attributing the breach to misconfigured S3 buckets. They recommended that customers secure their cloud resources by adhering to best practices for security, identity, compliance, and monitoring services.
AI's Role in Attack Acceleration
The attacker used AI and LLMs at multiple stages of the attack, indicating a dual objective: executing the intrusion itself and harnessing the cloud environment for their own purposes. The compromised credentials initially held only ReadOnlyAccess privileges, prompting the attacker to employ Lambda function code injection to gain access to an account with administrative rights.
This privilege escalation, which took only eight minutes, was marked by the attacker writing code in Serbian and exhibiting behaviors typical of AI generation, such as comprehensive exception handling and rapid script creation.
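Because the escalation path ran through rewriting Lambda function code, the relevant CloudTrail events are a natural detection point. The sketch below flags code-modification events from principals outside an allowlist; the flat event shape here is an illustrative simplification of real CloudTrail JSON, and the allowlist approach is one possible design, not the researchers' stated method.

```python
# Lambda code/config changes are rare and high-impact; alert when they
# come from a principal that is not expected to deploy functions.
SENSITIVE_EVENTS = {"UpdateFunctionCode", "UpdateFunctionConfiguration"}

def suspicious_lambda_updates(events, allowed_principals):
    """Return events where an unexpected principal modified a Lambda.

    `events` is a list of simplified records: {"eventName": str,
    "principal": str} -- a distilled view of CloudTrail entries.
    """
    return [
        e for e in events
        if e["eventName"] in SENSITIVE_EVENTS
        and e["principal"] not in allowed_principals
    ]
```

In practice the allowlist would hold CI/CD deployment roles, so a read-only credential suddenly pushing function code, as happened here, stands out immediately.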
During lateral movement, the actor attempted to assume multiple roles, including cross-account roles, by enumerating account IDs and trying various organizational role names, including account IDs that did not belong to the organization at all. This behavior aligns with patterns often attributed to AI hallucinations.
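That enumeration pattern, one principal attempting many distinct roles in a short window, is itself detectable. Below is a minimal sketch that distills AssumeRole records (successes and AccessDenied failures alike; the failures are the louder signal) into a per-principal breadth count; the input shape and threshold are illustrative assumptions.

```python
from collections import defaultdict

def enumeration_suspects(assume_role_events, threshold=10):
    """Flag principals that attempted an unusually broad set of roles.

    `assume_role_events` is a list of (principal, role_arn) pairs, e.g.
    distilled from CloudTrail AssumeRole records within one time window.
    """
    attempts = defaultdict(set)
    for principal, role_arn in assume_role_events:
        attempts[principal].add(role_arn)
    return {p for p, roles in attempts.items() if len(roles) >= threshold}
```

A legitimate workload assumes a small, stable set of roles, so even a conservative threshold would have flagged an actor cycling through 19 principals in minutes.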
Targeting AI Models
The threat actor also engaged in LLMjacking by targeting the victim’s deployment of Amazon Bedrock, AWS’s managed service for building generative AI applications. They invoked a variety of AI models, including several versions from prominent companies, and programmatically interacted with AWS Marketplace APIs to accept usage agreements on behalf of the victim.
After their activities with Bedrock, the attacker pivoted to hijacking GPU instances, likely for model training or resale. Throughout this process, inconsistencies in the training script further suggested the use of LLM generation.
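LLMjacking shows up in billing and invocation logs as usage that a baseline does not explain. The sketch below is one crude anomaly signal, assuming per-call model IDs distilled from Bedrock invocation logs; the model names, baseline, and multiplier in the example are all illustrative, not drawn from the Sysdig report.

```python
from collections import Counter

def invocation_spike(model_invocations, baseline, factor=5):
    """Return model IDs invoked far more often than their baseline.

    `model_invocations` is a list of model IDs, one entry per invocation
    in the observed period; `baseline` maps model ID to its usual count.
    A model absent from the baseline is flagged on any use at all.
    """
    counts = Counter(model_invocations)
    return {m for m, c in counts.items() if c > factor * baseline.get(m, 0)}
```

Pairing a check like this with Bedrock's invocation logging would surface both a spike on an approved model and the first call to a model the organization never enabled.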
Preventative Measures and Future Outlook
The incident underscores the critical need for organizations to master fundamental security practices, as the attack could have been prevented had valid credentials not been exposed in public S3 buckets. Experts warn that such oversights can lead to severe breaches.
As AI continues to evolve, the threat landscape is expected to undergo significant changes, with AI becoming both a facilitator and target of cyberattacks. Experts warn that the speed and efficiency introduced by AI will require organizations to prioritize runtime detection, least-privilege enforcement, and other mitigation strategies to safeguard against these accelerating threats.
Source: Dark Reading News