Integration & Automation
Manage multiaccount and multi-Region infrastructure in Terraform using AWS Cloud9
Begin your HashiCorp Terraform journey using security best practices with AWS. HashiCorp Terraform is a popular infrastructure as code (IaC) tool for automating infrastructure on any cloud. In this post, we show how to create an AWS Cloud9 instance as a web-based integrated development environment (IDE). We also use Amazon Simple Storage Service (Amazon S3) as a remote backend and Amazon DynamoDB for remote state locking. AWS CodeCommit serves as the version control repository for our Terraform files, and AWS Identity and Access Management (IAM) roles provide cross-account access. Use this post as a guide to set up infrastructure on AWS for Terraform deployments.
AWS Cloud9 can be the central point to deploy Terraform code, and it integrates well with CodeCommit. We use CodeCommit to create repositories for both our Terraform modules (a container for multiple resources that are used together) as well as our implementation code.
Terraform manages multiple clouds and environments (such as Kubernetes and Active Directory) through Terraform providers. However, when you use Terraform on AWS, you may encounter the following challenges:
- Managing AWS security credentials (secret keys and access keys) across multiple AWS accounts.
- Storing IaC files, dependency lock files, and state files locally creates a single point of failure.
- Tracking changes to the infrastructure can be difficult.
- Securely deploying and managing infrastructure across multiple accounts and Regions.
This post addresses these challenges in the following ways:
- To reduce the need to manage secret and access keys, we assign an IAM role to the AWS Cloud9 instance (in the admin or central account). The admin IAM role in each spoke account has a trust relationship with this central AWS Cloud9 role.
- We show you how you can quickly deploy all the resources, such as a DynamoDB table for maintaining locks and Amazon S3 for storing state files securely, without a single point of failure.
- With version control of our Terraform infrastructure files, we can track all the changes made to the infrastructure. If needed, we can revert to the last working change in the event of failures. If our AWS Cloud9 instance is shut down, our files will be secure and can be used to continue building our infrastructure.
- We enable better control over our multiaccount infrastructure until we have the confidence and automation in place to create a CI/CD pipeline for our infrastructure with Terraform.
| About this blog post | |
| --- | --- |
| Time to read | ~11 min. |
| Time to complete | ~15 min. |
| Cost to complete | ~$0-$2, depending on instance size |
| Learning level | Advanced (300) |
| AWS services | AWS Cloud9, AWS CloudFormation, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, AWS CodeCommit, AWS Identity and Access Management (IAM), AWS Security Token Service (AWS STS), Amazon Elastic Compute Cloud (Amazon EC2), AWS Systems Manager |
Overview
Figure 1 shows the architecture that we use to demonstrate Terraform deployments from AWS Cloud9.
- AWS Cloud9 has code and backend information with Amazon S3 and DynamoDB. AWS Cloud9 has an Instance Profile with a central AWS Cloud9 role.
- In the Terraform provider, use AWS Security Token Service (AWS STS) to specify `AssumeRole` with the cross-account Terraform spoke role, which has a trust policy with the central AWS Cloud9 role.
- The command `terraform apply` creates a virtual private cloud (VPC) and security group in the spoke account and the Region specified in the provider configuration.
Set up AWS resources for Terraform deployments
In the central account and central Region, we’ll create an AWS Cloud9 environment, an IAM role for the AWS Cloud9 instance, an Amazon S3 bucket for state files, a DynamoDB table for locks, and a CodeCommit repository.
In the spoke account, we create an IAM role that allows the AWS Cloud9 instance to deploy a VPC and a security group in the spoke account.
The AWS Cloud9 environment is used to deploy resources in the spoke account (any Region) by using the AssumeRole capability of the Terraform AWS provider.
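As a minimal sketch of this AssumeRole pattern (the role name, account ID, and session name here are hypothetical; the repository's actual provider.tf may differ), the Terraform AWS provider can assume the spoke role like this:

```hcl
provider "aws" {
  region = "us-east-1" # example target Region for the spoke deployment

  assume_role {
    # Hypothetical ARN of the cross-account Terraform spoke role;
    # replace the account ID and role name with your own.
    role_arn     = "arn:aws:iam::111122223333:role/TerraformSpokeRole"
    session_name = "terraform-cloud9"
  }
}
```

Because the AWS Cloud9 instance profile supplies the base credentials, no access keys appear anywhere in the configuration.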
Prerequisites
- Two AWS accounts: central account and spoke account
- Access to AWS CloudFormation
- Permission to create resources in both central and spoke accounts
- Understanding of Terraform with providers and remote backends
- Understanding of Git
- Familiarity with DynamoDB and Amazon S3 with respect to Terraform deployments and services like CodeCommit, AWS Cloud9, and IAM roles cross-account access
Target technology stack (tools)
- AWS CloudFormation: To deploy the initial infrastructure to securely deploy AWS Cloud9, IAM roles, CodeCommit (Git), Amazon S3 buckets, and a DynamoDB table
- AWS Cloud9: As an IDE or jump box for cross-account or cross-Region infrastructure deployments
- CodeCommit: Version control for Terraform code
- Amazon S3: Used as a backend configuration to store state files of Terraform infrastructure.
- DynamoDB: Amazon S3 supports state locking and consistency checking through DynamoDB, which can be enabled by setting the `dynamodb_table` field to an existing DynamoDB table name. A single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the `bucket` and `key` variables.
- IAM: An IAM role is assigned to an Amazon Elastic Compute Cloud (Amazon EC2) instance by using an instance profile, which enables cross-account access
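A hedged sketch of such a backend configuration, using the placeholder names that appear later in this post (the `key` path is an assumption):

```hcl
terraform {
  backend "s3" {
    bucket         = "<BACKEND_S3_BUCKET>"     # S3BackendName stack output
    key            = "spoke/terraform.tfstate" # example state file path
    region         = "<REGION>"                # Region of the central stack
    dynamodb_table = "<LOCK_DYNAMODB>"         # BackendDynamoDbTable stack output
    encrypt        = true                      # encrypt the state file at rest
  }
}
```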
Repository
Clone this GitHub repository locally.
Walkthrough
Step 1. Deploy the central account infrastructure (create a CloudFormation stack to create resources)
- Sign in to the AWS Management Console, and open the AWS CloudFormation console.
- Create the CloudFormation stack from the template `Cloud9CFN.yaml`. For more information, refer to Creating a stack on the AWS CloudFormation console.
- Add the stack parameter `TerraformBackendBucketName`, and enter an appropriate, unique bucket name.
- After the stack creation completes, copy the following values from the Outputs section to a local text editor:
  - BackendDynamoDbTable
  - S3BackendName
  - TerraformCloud9Role
Notes:
- By default, this uses no-ingress Amazon EC2 instances to maintain instance security. The security group for this type of Amazon EC2 instance doesn’t have any inbound rule.
- We recommend placing the instance in a private subnet and hosting a NAT gateway in a public subnet so that the instance can communicate with the internet.
- If you create AWS Cloud9 in a public subnet (not recommended), attach an internet gateway to the VPC, as well as an internet gateway route to a public subnet. This will enable the SSM Agent for the instance to connect to AWS Systems Manager.
Step 2. Create the spoke account infrastructure deployment
- Sign in to the AWS Management Console, and open AWS CloudFormation console.
- Log in to the spoke account.
- Create the AWS CloudFormation stack from the template `SpokeCFN.yaml`.
- Add the stack parameter `CentralAccount`. Use the account number where you created the AWS Cloud9 CloudFormation stack.
This creates an IAM role. It also creates a trust relationship with the role that has been associated to the AWS Cloud9 instance. This enables cross-account access from this AWS Cloud9 instance.
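The trust policy on the spoke role looks roughly like the following (the account ID and role name are placeholders; the actual role name comes from the TerraformCloud9Role stack output):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<CENTRAL_ACCOUNT_ID>:role/<TerraformCloud9Role>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```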
Step 3. Open AWS Cloud9 in the central account
- Log out of the spoke account.
- Sign in to the AWS Management Console, and open the AWS Cloud9 console.
- Choose your AWS Cloud9 environment, and choose Open IDE.
Note: If the Open IDE option is unavailable, make sure that the user/role that you used to create the AWS CloudFormation stack is the same user/role that you’re using to access the AWS Cloud9 environment.
Step 4. Configure AWS credentials for your AWS Cloud9 workspace
- Choose the AWS Cloud9 logo.
- Choose Preferences.
- In the Preferences tab, choose AWS Settings.
- Clear the AWS managed temporary credentials setting to turn it off.
- Close the Preferences tab.
This ensures that the IAM role attached to the Amazon EC2 instance, rather than your user or role credentials, is used to establish cross-account access.
Note: If you get a “Session Token Expired” error, make sure you repeat the step to turn off AWS managed temporary credentials on the Preferences page of AWS Cloud9.
Step 5. Clone the CodeCommit repository
- Sign in to the AWS Management Console, and open the CodeCommit console.
- Choose the TerraformCodeCommit repository.
- Choose Clone HTTPS.
- Open the AWS Cloud9 IDE, and run the following command in the terminal (Figure 4):
git clone <Paste the Repo Copied Above>
This clones an empty repository into your AWS Cloud9 environment.
Step 6. Use the AWS CloudFormation stack to create resources in the spoke account
Repeat Step 2 (“Create the spoke account infrastructure deployment”).
Terraform deployment
Create a VPC and security group from sample Terraform code
- Sign in to the AWS Management Console, and open the AWS Cloud9 console.
- Choose your AWS Cloud9 environment, and choose Open IDE.
- Move the files from your local machine: copy all files from the Terraform directory into the empty TerraformCodeCommit directory in AWS Cloud9 (see Figure 5).
- Make changes to the following files:
  - `backend.tf`
    - backend "s3" bucket: S3BackendName
    - region: the Region where you created the AWS CloudFormation template
    - dynamodb_table: the value from BackendDynamoDbTable

    Use the value of S3BackendName in place of <BACKEND_S3_BUCKET>, the Region where you created the AWS CloudFormation template in place of <REGION>, and BackendDynamoDbTable in place of <LOCK_DYNAMODB>.
  - `provider.tf`
    - region: the Region where you want to deploy this VPC and the security group
  - `terraform.tfvars`
    - spoke_account: the account number (where you deployed the spoke AWS CloudFormation template) where you want to create these resources

      `spoke_account = <SPOKEACCOUNT>`
Note: Add terraform.tfvars to .gitignore (when using a Git repository) and make it local to the directory. This file might contain production secrets or variables local to an environment.
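As a sketch of how spoke_account is typically wired in (the variable declaration is an assumption about the sample repository; the account ID shown is hypothetical):

```hcl
# terraform.tfvars -- keep out of version control via .gitignore
spoke_account = "111122223333" # hypothetical spoke account ID

# variables.tf -- declaration consumed elsewhere (for example, to build
# the assume_role ARN in provider.tf)
variable "spoke_account" {
  description = "Account ID of the spoke account where resources are created"
  type        = string
}
```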
After the CloudFormation stack creation is complete
- Right-click the directory where all the files reside, and choose Open Terminal Here (see Figure 6).
- Run the following commands in the AWS Cloud9 workspace terminal:
terraform init
terraform plan -out=tfplan -input=false
terraform apply -input=false tfplan
- Use these Git commands to add the files to the CodeCommit repository:
git add *
git commit -m "First Commit"
git push
This keeps the files version-controlled.
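The sample repository's resource definitions aren't reproduced in this post; resources like the ones deployed here might look roughly like this (resource names and CIDR range are hypothetical):

```hcl
# Example VPC created in the spoke account through the assumed role
resource "aws_vpc" "spoke" {
  cidr_block = "10.0.0.0/16" # example CIDR range

  tags = {
    Name = "terraform-spoke-vpc"
  }
}

# Example security group attached to that VPC
resource "aws_security_group" "spoke" {
  name_prefix = "terraform-spoke-"
  description = "Security group created by the cross-account Terraform run"
  vpc_id      = aws_vpc.spoke.id
}
```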
Validate this information
- Verify in the spoke account that the VPC and the security group were created in the Region specified in the provider.tf file.
- Verify in the Amazon S3 console that the backend state files were created in the Amazon S3 bucket.
- Verify that the Terraform files are in the CodeCommit repository by navigating to the CodeCommit console.
Cleanup
- Sign in to the AWS Management Console, and open the AWS Cloud9 console.
- From the same directory, use the following command to destroy all the resources created with Terraform:
terraform destroy
- Empty the Amazon S3 bucket so that the AWS CloudFormation stack can be deleted. Use the Amazon S3 console to empty the bucket, which deletes all the objects in the bucket without deleting the bucket itself.
  - Open the Amazon S3 console.
  - From the bucket name list, choose the option next to the name of the bucket that you want to empty, and then choose Empty.
  - On the Empty bucket page, enter the bucket name into the text field, and then choose Empty.
  - Monitor the progress of the process on the Empty bucket: Status page.
- Delete the AWS CloudFormation stack from both the spoke account and the AWS Cloud9 central account.
Conclusion
In this blog post, we discussed how to deploy multiaccount, multi-Region infrastructure with Terraform infrastructure as code using AWS Cloud9. We demonstrated how locks and state files can be externalized and maintained centrally for resources across accounts and Regions. We also showed how infrastructure can be deployed securely by using IAM roles without managing any credentials.