AWS Cloud Operations Blog

Using Terraform with Landing Zone Accelerator on AWS

In this post, we explore how you can incorporate HashiCorp Terraform to manage your Amazon Web Services (AWS) application infrastructure after using AWS Control Tower with Landing Zone Accelerator on AWS (LZA) to manage your AWS ecosystem.

The LZA deploys a cloud foundation that is architected to align with AWS best practices and multiple global compliance frameworks. Through a set of YAML configuration files in the management account, an organization can configure additional functionality, manage networking, and deploy security services.

The LZA solution is targeted towards customers in the public sector and highly regulated industries. For centrally managed governance, using AWS Control Tower is strongly recommended if you are deploying LZA to a Region where AWS Control Tower is supported. LZA’s complementary capabilities provide a comprehensive no-code solution across more than 35 AWS services to manage and govern a multi-account environment that supports highly regulated workloads and complex compliance requirements. Together, AWS Control Tower and LZA help establish platform readiness with security, compliance, and operational capabilities.

Scenario overview

We will explore common use cases to illustrate how Terraform can be used with LZA. In this scenario, an example organization has deployed the LZA solution and has a number of development, test, and production AWS accounts running various workloads. Centralized infrastructure, including networking resources, has been provisioned by the organization’s cloud platform team from the management account with LZA.

The infrastructure team within this example organization wants to deploy infrastructure using Terraform to multiple AWS accounts for a new application. The team wants to build on the existing LZA infrastructure in a repeatable and scalable way, so that they can achieve parity across the development, test, and production AWS accounts.

The team has identified three use cases:

  1. Networking. They want to use the existing Amazon Virtual Private Cloud (Amazon VPC) infrastructure provisioned by LZA.
  2. Encryption. The team must use the centrally-provisioned AWS Key Management Service (AWS KMS) keys to remain compliant with their internal security policies.
  3. Environment parameters. They want to customize their Terraform deployment based on whether the environment is development, test, or production.

These use cases could be re-created and adapted for your own LZA deployment. The following sections will show how these requirements can be met using LZA and Terraform. They are intended as reference examples, rather than as a walkthrough.

If you want to work through the use cases yourself, we suggest starting with the Landing Zone Accelerator on AWS Implementation Guide.

Use Case 1: Networking

In an AWS Organization, networking resources may be managed centrally by a dedicated networking team. Terraform users may want to hook into this existing approved infrastructure. In this use case, the product team wants to deploy an Amazon EC2 instance, using Terraform, to an Amazon VPC already provisioned for them by LZA.

In our network-config.yaml file that is part of the LZA configuration, we have provisioned the following VPC:

vpcs:
  - name: lza-managed-vpc
    account: Sandbox
    region: eu-west-2
    cidrs:
      - 10.0.0.0/16
    enableDnsHostnames: true
    enableDnsSupport: true
    instanceTenancy: default
    routeTables:
      - name: SubnetRouteTable
    routes: []
    subnets:
      - name: SubnetA
        availabilityZone: a
        routeTable: SubnetRouteTable
        ipv4CidrBlock: 10.0.1.0/24
      - name: SubnetB
        availabilityZone: b
        routeTable: SubnetRouteTable
        ipv4CidrBlock: 10.0.2.0/24
      - name: SubnetC
        availabilityZone: c
        routeTable: SubnetRouteTable
        ipv4CidrBlock: 10.0.3.0/24

If we navigate to the Amazon VPC dashboard in the Sandbox account it has been provisioned in, the VPC is visible in the console.

Figure 1: The provisioned VPC in the AWS Management Console.

Using the Terraform data source aws_vpc, we can reference the VPC by its Name tag.

data "aws_vpc" "this" {
  tags = {
    Name = "lza-managed-vpc"
  }
}

Now that we have the VPC, we can identify the private subnet IDs.

data "aws_subnets" "this" {
  filter {
    name = "vpc-id"
    values = [data.aws_vpc.this.id]
  }
}
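Note that this filter returns every subnet in the VPC. If you need one specific subnet instead, you can also filter on its Name tag — a sketch, assuming LZA applies the subnet names from network-config.yaml as Name tags:

```hcl
# Look up a single LZA-provisioned subnet by its Name tag
# (assumes the subnet names from network-config.yaml are applied as Name tags)
data "aws_subnet" "subnet_a" {
  vpc_id = data.aws_vpc.this.id

  tags = {
    Name = "SubnetA"
  }
}
```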

With this information, we can then launch resources into the VPC and its subnets. In this example, we will create an Amazon EC2 Auto Scaling group in the AWS Europe (London) Region.

The following code creates an Auto Scaling group, launch template, and placement group that will start provisioning EC2 instances into the three private subnets vended by LZA.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
  required_version = "~> 1.1"
}

provider "aws" {
  region = "eu-west-2"
}


data "aws_vpc" "this" {
  tags = {
    Name = "lza-managed-vpc"
  }
}

data "aws_subnets" "this" {
  filter {
    name = "vpc-id"
    values = [data.aws_vpc.this.id]
  }
}

resource "aws_placement_group" "this" {
  name = "placement-group"
  strategy = "spread"
}

resource "aws_autoscaling_group" "this" {
  name = "autoscaling-group"
  max_size = 5
  min_size = 2
  desired_capacity = 4
  force_delete = true
  placement_group = aws_placement_group.this.id
  vpc_zone_identifier = data.aws_subnets.this.ids

  launch_template {
    id = aws_launch_template.this.id
    version = "$Latest"
  }
}

resource "aws_launch_template" "this" {
  name_prefix = "launch-template"
  image_id = "ami-xxxxxxxxxxxxx"
  instance_type = "t2.micro"
}

Beyond Amazon VPC, there are many more networking resources that can be provisioned by LZA and then integrated with a Terraform deployment. For example, AWS Transit Gateway and other advanced networking resources can all be incorporated into your code using Terraform’s data sources.
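For instance, an LZA-managed transit gateway could be referenced by its Name tag — a sketch, where "Network-Main" is a hypothetical name that would come from the transitGateways section of your own network-config.yaml:

```hcl
# Reference an existing transit gateway by its Name tag
# ("Network-Main" is a hypothetical name - use the transit gateway name
# from your own network-config.yaml)
data "aws_ec2_transit_gateway" "this" {
  filter {
    name   = "tag:Name"
    values = ["Network-Main"]
  }
}
```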

Use Case 2: Encryption

In this use case, the team wants to use LZA-provisioned AWS KMS keys with the infrastructure that they are going to deploy with Terraform.

LZA uses AWS KMS keys to provide encryption-at-rest capabilities for resources across the AWS Organization. AWS KMS keys are deployed to every account and Region managed by the solution.

For more information on how LZA manages these keys, refer to the Implementation Guide.

Figure 2: Key management for all accounts in Landing Zone Accelerator

The default keys deployed by LZA are described in the Key management section of the Implementation Guide.

Rather than provision and manage new AWS KMS keys, developers can hook into the existing keys provisioned by LZA. For some organizations, like our example organization, AWS KMS keys are managed centrally. The KMS keys that LZA deploys are used across a wide range of AWS services, including the services your applications use for encryption at rest. LZA also lets you bring your own custom resource-based KMS key policies if the default versions that LZA deploys do not fit your security posture.

The following screenshot shows some of the deployed keys in the AWS Management Console.

Figure 3: Landing Zone Accelerator AWS KMS keys in the AWS Management Console

Using the Terraform data source aws_kms_alias, we can reference an AWS KMS key alias.

data "aws_kms_alias" "s3" {
  name = "alias/accelerator/kms/s3/key"
}

Now that we have the alias, we can encrypt resources. In this example, we will create an S3 bucket, and encrypt it using the S3 key, in the Europe (London) Region.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
  required_version = "~> 1.1"
}

provider "aws" {
  region = "eu-west-2"
}

data "aws_kms_alias" "s3" {
  name = "alias/accelerator/kms/s3/key"
}

data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "this" {
  bucket = "bucket-${data.aws_caller_identity.current.account_id}"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.bucket

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = data.aws_kms_alias.s3.target_key_arn
      sse_algorithm     = "aws:kms"
    }
  }
}

We also want to apply other security measures, such as blocking public access (a feature that you can also apply at the account level within the LZA solution) and enabling versioning on our S3 bucket to keep multiple variants of objects in the event of accidental deletions or overwrites.

The following code blocks public access to the bucket and enables versioning.

resource "aws_s3_bucket_public_access_block" "block_public_access" {
  bucket = aws_s3_bucket.this.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "versioning_bucket" {
  bucket = aws_s3_bucket.this.id
  versioning_configuration {
    status = "Enabled"
  }
}


Any of the other default AWS KMS keys can be accessed using the same method as above. LZA can also deploy AWS Systems Manager Session Manager keys and Amazon Elastic Block Store (Amazon EBS) keys. For more information, see Key management.
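Following the same pattern, other default keys can be referenced by alias. The alias paths below are illustrative assumptions, so confirm the exact names in your own deployment:

```hcl
# Illustrative alias names - confirm the exact aliases in your LZA deployment
data "aws_kms_alias" "cloudwatch" {
  name = "alias/accelerator/kms/cloudwatch/key"
}

data "aws_kms_alias" "ebs" {
  name = "alias/accelerator/ebs/default-encryption/key"
}
```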

Use Case 3: Environment parameters

In this use case, the DevOps team wants to customize their Terraform deployment based on whether the environment is development, test, or production. They are deploying to all three environments from a single repository and pipeline, so they need environment parameters to vary the configuration per environment.

To prepare for this, they have deployed parameters into each account, using AWS Systems Manager Parameter Store.

In this use case we will examine the use of Parameter Store with LZA, show how parameters can be deployed with LZA, and then explore how these parameters can be used with Terraform.

Parameter Store

The LZA exposes some resource Amazon Resource Names (ARNs) in Parameter Store as parameters.

Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. In the previous use case, we looked at how we could find the Amazon S3 AWS KMS key ARN using an AWS KMS-based data source. Alternatively, we could have extracted this AWS KMS ARN from the Parameter Store.

Figure 4: An AWS KMS key in the Parameter Store console

Using the Terraform data source aws_ssm_parameter, we can reference the parameter and retrieve the AWS KMS key ARN.

data "aws_ssm_parameter" "this" {
  name = "/accelerator/kms/s3/key-arn"
}
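The parameter’s value can then be used anywhere a key ARN is expected — for example, in an encryption configuration (a sketch, assuming an aws_s3_bucket.this resource exists, as in Use Case 2):

```hcl
# Encrypt a bucket using the key ARN read from Parameter Store
# (assumes an aws_s3_bucket.this resource exists, as in Use Case 2)
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.this.bucket

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = data.aws_ssm_parameter.this.value
      sse_algorithm     = "aws:kms"
    }
  }
}
```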

We can also extract other values from Parameter Store, including some of the networking resources from Use Case 1.

Figure 5: Network parameters in the Parameter Store console

LZA deploys parameters to help target resources created in the configuration YAML files. If there is no suitable Terraform data source for the resource you need, Parameter Store may offer a viable alternative.

Deploy environment parameters with LZA

We are using the LZA customizations-config.yaml file to deploy a CloudFormation template that contains a Parameter Store parameter.

customizations:
  cloudFormationStacks:
    - deploymentTargets:
        organizationalUnits:
          - Development
      description: Environment Parameter
      name: DevelopmentEnvironment
      regions:
      - eu-west-2
      runOrder: 1
      template: cloudformation/environment.yaml
      parameters:
      - name: Environment
        value: Development
      terminationProtection: false

The CloudFormation template, environment.yaml, looks like this:

AWSTemplateFormatVersion: "2010-09-09"
Description: "Define environment"
Parameters:
  Environment:
    Type: String
Resources:
  EnvironmentParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Type: String
      Value: !Ref Environment
      Name: /accelerator/environment

This template deploys a Parameter Store parameter called /accelerator/environment whose value is defined in the customizations-config.yaml file. The value is passed in as a template parameter so that the same CloudFormation template can be re-used for multiple environments.

Here the parameter is visible in the console.

Figure 6: The environment parameter in the Parameter Store console.

In this scenario, our fictional organization has also created parameters for the Test and Production OUs in the LZA customizations-config.yaml file.
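A Test OU entry, for example, might follow the same pattern (a sketch; the stack name is illustrative):

```yaml
    - deploymentTargets:
        organizationalUnits:
          - Test
      description: Environment Parameter
      name: TestEnvironment
      regions:
        - eu-west-2
      runOrder: 1
      template: cloudformation/environment.yaml
      parameters:
        - name: Environment
          value: Test
      terminationProtection: false
```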

Use the environment parameters with Terraform

The development team wants to include the environment in the name of their S3 bucket. The following code does this using the /accelerator/environment parameter.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
  required_version = "~> 1.1"
}

provider "aws" {
  region = "eu-west-2"
}

data "aws_caller_identity" "current" {}

data "aws_ssm_parameter" "environment" {
  name = "/accelerator/environment"
}

resource "aws_s3_bucket" "this" {
  bucket = "bucket-${lower(data.aws_ssm_parameter.environment.value)}-${data.aws_caller_identity.current.account_id}"
}

Note that the parameter value is lowercased, because S3 bucket names must be lowercase. As in Use Case 2, you may want to apply other security measures, such as blocking public access (a feature that you can also apply at the account level within the LZA solution) and enabling versioning on the S3 bucket to keep multiple variants of objects in the event of accidental deletions or overwrites.

This mechanism could be replicated in many use cases where teams want to use environment parameters with Terraform. Here are some examples:

  • The Terraform count meta-argument could be used to vary resource deployment based on the environment
  • Different networking configurations deployed to development, test, and production environments
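As a sketch of the count idea above, the environment parameter could drive how many instances are deployed (resource names and the AMI placeholder are illustrative):

```hcl
# Scale instance count by environment
# (sketch; resource names and the AMI ID are illustrative)
data "aws_ssm_parameter" "environment" {
  name = "/accelerator/environment"
}

resource "aws_instance" "app" {
  count         = data.aws_ssm_parameter.environment.value == "Production" ? 3 : 1
  ami           = "ami-xxxxxxxxxxxxx" # replace with a valid AMI ID
  instance_type = "t3.micro"
}
```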

Conclusion

In this blog post, we have shown examples of how Landing Zone Accelerator on AWS (LZA) resources can be incorporated with Terraform workflows.

This approach can be used to combine the LZA with any preferred infrastructure as code (IaC) solution for defining additional resources in your AWS environments, taking advantage of the managed landing zone while retaining full flexibility in how you implement your cloud infrastructure on top of it.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions, start a new thread on AWS re:Post with the Terraform tag.

About the authors:

Jake Barker

Jake is a Senior Security Consultant with AWS Professional Services. He loves making security accessible and eating great pizza.

Bo Lechangeur

Bo Lechangeur is a Principal Architect for the AWS Security Tooling and Compliance Engineering team. He enjoys helping customers build automated solutions and solving complex problems to help them through their journey in AWS.

Dave Connell

Dave Connell is a Senior Systems Development Engineer within AWS Professional Services. He has spent the last seven years building software to secure and scale AWS usage, enabling builders to delight customers and deliver business value through excellent software in the cloud. Dave is passionate about enabling delivery of predictable, safe and valuable customer outcomes by empowering the builder.