AWS Cloud Operations Blog

How to Deploy AWS Config Conformance Packs Using Terraform

This post demonstrates how to enable AWS Config and deploy a sample AWS Config Conformance pack using HashiCorp’s Terraform.

AWS Config provides configuration, compliance, and auditing features required for governing your resources and providing security posture assessment at scale. This service lets you create managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices.

An AWS Config conformance pack is a collection of AWS Config rules and remediation actions defined as a YAML template. Conformance packs can be deployed as a single entity in an account and a Region, or across an organization within AWS Organizations. AWS provides sample conformance pack templates for various compliance standards and industry benchmarks, and you can download every sample template from GitHub. This blog works with the conformance pack for Operational Best Practices for Amazon Simple Storage Service (Amazon S3). Note that you can use the same mechanism for other sample conformance packs or for your own custom packs.

Terraform is an open-source infrastructure as code (IaC) tool, similar to AWS CloudFormation, the AWS native IaC solution. Infrastructure as code is the practice of provisioning and managing your cloud resources by writing template files that are both human readable and machine consumable. In February 2021, HashiCorp announced support for AWS Config conformance packs in version 3.28.0 of the Terraform AWS provider. If you plan to use Terraform to manage your AWS environment, this post demonstrates how to deploy AWS Config and conformance packs by using Terraform.
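If you manage provider versions explicitly in your own configurations, a minimal sketch of a version constraint might look like the following. This block is optional and not part of the walkthrough; 3.28.0 is simply the first provider release with conformance pack support:

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Conformance pack support was added in v3.28.0 of the AWS provider
      version = ">= 3.28.0"
    }
  }
}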

As shown in the following diagram, you use a Terraform configuration to create a conformance pack in your AWS account. This conformance pack deploys rules for operational best practices for Amazon S3:

Architecture diagram showing how a user runs Terraform from a remote workstation to deploy an AWS Config conformance pack.

Figure 1: Architecture shows interaction between User, Terraform, AWS Config and Conformance Pack

Prerequisites

To complete the steps in this blog, you will need the following:

• An AWS account with permissions to AWS Config, Amazon S3, and AWS CloudFormation. Make sure to check the prerequisites for using AWS Config.
• Download and set up Terraform. You can follow these instructions to get started with Terraform on AWS.
• Make sure you have installed the AWS Command Line Interface (AWS CLI) and configured access to the AWS account you would like to deploy to. You can also use AWS CloudShell to deploy the solution. (A quick verification example follows this list.)
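Before starting the walkthrough, you can optionally verify that both tools are ready. For example, assuming the AWS CLI and Terraform are on your PATH:

$ aws sts get-caller-identity
$ terraform -version

The first command returns the account ID and IAM identity that your CLI credentials resolve to; the second prints the installed Terraform version.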

Walkthrough

In this blog, we highlight two methods you can follow to set up the conformance pack with Operational Best Practices for S3. The first method assumes you are using AWS Config for the first time and have not yet enabled it in your AWS account. In that case, the Terraform configuration both enables AWS Config and deploys the conformance pack.

In the second method, we assume you have already enabled AWS Config, and show you how to use Terraform to deploy only the conformance pack. In both methods, follow the same instructions until it is time to write your Terraform configuration in the main.tf file.

  1. Ensure that your AWS CLI is configured in your terminal. You will need to input your AWS Access Key ID and Secret Access Key (sample prompts are shown after the command).

$ aws configure
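The command prompts you interactively; a session looks similar to the following (the values shown are AWS documentation placeholders, not real credentials):

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json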

  2. Next, write your Terraform configuration. The configuration is a set of files that describe infrastructure in Terraform. Start by creating a working directory:

$ mkdir learn-terraform-conformance-packs

  3. Change into this directory:

$ cd learn-terraform-conformance-packs

  4. You will now create a file named main.tf to define your infrastructure:

$ touch main.tf

  5. Open main.tf in your text editor, paste in the following Terraform configuration, and save the file.

Terraform Configuration to enable AWS Config and deploy a conformance pack

If you have already enabled AWS Config, skip ahead to the “Terraform Configuration to just deploy conformance pack” section.

provider "aws" {
  region = "us-east-1"
}

#--------------
# S3 Variable
#--------------
variable "encryption_enabled" {
  type        = bool
  default     = true
  description = "When set to 'true' the resource will have AES256 encryption enabled by default"
}

# Get current region of Terraform stack
data "aws_region" "current" {}

# Get current account number
data "aws_caller_identity" "current" {}

# Retrieves the partition that it resides in
data "aws_partition" "current" {}

# -----------------------------------------------------------
# set up the AWS IAM Role to assign to AWS Config Service
# -----------------------------------------------------------
resource "aws_iam_role" "config_role" {
  name = "awsconfig-example"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "config.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "config_policy_attach" {
  role       = aws_iam_role.config_role.name
  policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AWSConfigRole"
}

resource "aws_iam_role_policy_attachment" "read_only_policy_attach" {
  role       = aws_iam_role.config_role.name
  policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/ReadOnlyAccess"
}

# -----------------------------------------------------------
# set up the AWS Config Recorder
# -----------------------------------------------------------
resource "aws_config_configuration_recorder" "config_recorder" {

  name     = "config_recorder"
  role_arn = aws_iam_role.config_role.arn
  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}

# -----------------------------------------------------------
# Create AWS S3 bucket for AWS Config to record configuration history and snapshots
# -----------------------------------------------------------
resource "aws_s3_bucket" "new_config_bucket" {
  bucket        = "config-bucket-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}"
  acl           = "private"
  force_destroy = true
  dynamic "server_side_encryption_configuration" {
    for_each = var.encryption_enabled ? ["true"] : []

    content {
      rule {
        apply_server_side_encryption_by_default {
          sse_algorithm = "AES256"
        }
      }
    }
  }
}

# -----------------------------------------------------------
# Define AWS S3 bucket policies
# -----------------------------------------------------------
resource "aws_s3_bucket_policy" "config_logging_policy" {
  bucket = aws_s3_bucket.new_config_bucket.id
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketAcl",
      "Effect": "Allow",
      "Principal": {
        "Service": [
         "config.amazonaws.com"
        ]
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "${aws_s3_bucket.new_config_bucket.arn}",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    },
    {
      "Sid": "AllowConfigWriteAccess",
      "Effect": "Allow",
      "Principal": {
        "Service": [
         "config.amazonaws.com"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": "${aws_s3_bucket.new_config_bucket.arn}/AWSLogs/${data.aws_caller_identity.current.account_id}/Config/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        },
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    },
    {
      "Sid": "RequireSSL",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "${aws_s3_bucket.new_config_bucket.arn}/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
POLICY
}

# -----------------------------------------------------------
# Set up Delivery channel resource and bucket location to specify configuration history location.
# -----------------------------------------------------------
resource "aws_config_delivery_channel" "config_channel" {
  s3_bucket_name = aws_s3_bucket.new_config_bucket.id
  depends_on     = [aws_config_configuration_recorder.config_recorder]
}

# -----------------------------------------------------------
# Enable AWS Config Recorder
# -----------------------------------------------------------
resource "aws_config_configuration_recorder_status" "config_recorder_status" {
  name       = aws_config_configuration_recorder.config_recorder.name
  is_enabled = true
  depends_on = [aws_config_delivery_channel.config_channel]
}

# -----------------------------------------------------------
# set up the Conformance Pack
# -----------------------------------------------------------
resource "aws_config_conformance_pack" "s3conformancepack" {
  name = "s3conformancepack"

  template_body = <<EOT

Resources:
  S3BucketPublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: S3BucketPublicReadProhibited
      Description: >- 
        Checks that your Amazon S3 buckets do not allow public read access.
        The rule checks the Block Public Access settings, the bucket policy, and the
        bucket access control list (ACL).
      Scope:
        ComplianceResourceTypes:
        - "AWS::S3::Bucket"
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
      MaximumExecutionFrequency: Six_Hours
  S3BucketPublicWriteProhibited: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketPublicWriteProhibited
      Description: "Checks that your Amazon S3 buckets do not allow public write access. The rule checks the Block Public Access settings, the bucket policy, and the bucket access control list (ACL)."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_WRITE_PROHIBITED
      MaximumExecutionFrequency: Six_Hours
  S3BucketReplicationEnabled: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketReplicationEnabled
      Description: "Checks whether the Amazon S3 buckets have cross-region replication enabled."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_REPLICATION_ENABLED
  S3BucketSSLRequestsOnly: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketSSLRequestsOnly
      Description: "Checks whether S3 buckets have policies that require requests to use Secure Socket Layer (SSL)."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY
  ServerSideEncryptionEnabled: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: ServerSideEncryptionEnabled
      Description: "Checks that your Amazon S3 bucket either has S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server side encryption."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
  S3BucketLoggingEnabled: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketLoggingEnabled
      Description: "Checks whether logging is enabled for your S3 buckets."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_LOGGING_ENABLED

EOT

  depends_on = [aws_config_configuration_recorder.config_recorder]
}

Terraform Configuration to just deploy conformance pack (if AWS Config is already enabled)

# -----------------------------------------------------------
# set up the Conformance Pack
# -----------------------------------------------------------
resource "aws_config_conformance_pack" "s3conformancepack" {
  name = "s3conformancepack"

  template_body = <<EOT

Resources:
  S3BucketPublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: S3BucketPublicReadProhibited
      Description: >- 
        Checks that your Amazon S3 buckets do not allow public read access.
        The rule checks the Block Public Access settings, the bucket policy, and the
        bucket access control list (ACL).
      Scope:
        ComplianceResourceTypes:
        - "AWS::S3::Bucket"
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
      MaximumExecutionFrequency: Six_Hours
  S3BucketPublicWriteProhibited: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketPublicWriteProhibited
      Description: "Checks that your Amazon S3 buckets do not allow public write access. The rule checks the Block Public Access settings, the bucket policy, and the bucket access control list (ACL)."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_WRITE_PROHIBITED
      MaximumExecutionFrequency: Six_Hours
  S3BucketReplicationEnabled: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketReplicationEnabled
      Description: "Checks whether the Amazon S3 buckets have cross-region replication enabled."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_REPLICATION_ENABLED
  S3BucketSSLRequestsOnly: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketSSLRequestsOnly
      Description: "Checks whether S3 buckets have policies that require requests to use Secure Socket Layer (SSL)."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY
  ServerSideEncryptionEnabled: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: ServerSideEncryptionEnabled
      Description: "Checks that your Amazon S3 bucket either has S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server side encryption."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
  S3BucketLoggingEnabled: 
    Type: "AWS::Config::ConfigRule"
    Properties: 
      ConfigRuleName: S3BucketLoggingEnabled
      Description: "Checks whether logging is enabled for your S3 buckets."
      Scope: 
        ComplianceResourceTypes: 
        - "AWS::S3::Bucket"
      Source: 
        Owner: AWS
        SourceIdentifier: S3_BUCKET_LOGGING_ENABLED
EOT
}

In this conformance pack, you create six immutable Config rules (rules deployed through a conformance pack are managed as a single entity and cannot be edited individually) that help you apply operational best practices to your S3 buckets: S3BucketPublicReadProhibited, S3BucketPublicWriteProhibited, S3BucketReplicationEnabled, S3BucketSSLRequestsOnly, ServerSideEncryptionEnabled, and S3BucketLoggingEnabled.

Note that you can leverage other sample conformance packs for operational best practices or for compliance purposes. This strategy is particularly useful if you need to quickly establish a common baseline for resource configuration policies and best practices across multiple accounts in your organization in a scalable and efficient way. You can also build your own conformance packs to meet specific business or industry needs.
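As a variation on the inline heredoc shown above, you can keep a downloaded sample template next to your configuration and load it with Terraform's file() function. A minimal sketch, assuming you have saved the sample YAML locally as operational-best-practices-for-s3.yaml (a hypothetical file name):

resource "aws_config_conformance_pack" "s3conformancepack" {
  name = "s3conformancepack"

  # Read the conformance pack YAML from a local file instead of embedding it inline
  template_body = file("${path.module}/operational-best-practices-for-s3.yaml")

  depends_on = [aws_config_configuration_recorder.config_recorder]
}

The resource also accepts a template_s3_uri argument if you would rather stage the template in an Amazon S3 bucket.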

  6. Now that you have added the conformance pack to your configuration, initialize the directory. Initializing a configuration directory downloads and installs the AWS provider, which is defined in the configuration:

$ terraform init

You should see a message that says “Terraform has been successfully initialized!”

This command prints out the version of the provider that was installed.

  7. Format and validate your configuration. The terraform fmt command automatically updates configurations in the current directory for readability and consistency. The terraform validate command ensures that your configuration is syntactically valid and internally consistent:

$ terraform fmt
$ terraform validate

You should now see a success message confirming that your configuration is valid.
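Specifically, a valid configuration produces output like this:

Success! The configuration is valid.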

  8. You will now apply the configuration to create the infrastructure:

$ terraform apply

Before applying any changes, Terraform prints out the execution plan describing the actions it will take to update your infrastructure. If you are using the template that only deploys the conformance pack (which does not specify a provider Region), Terraform will prompt you to enter the Region to deploy to:

provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.

Enter a value: us-east-1

  9. When prompted, type “yes” to confirm that the plan can be applied:

Enter a value: yes

After the successful deployment of the conformance pack, you will see Terraform output similar to the following messages:

aws_config_conformance_pack.s3conformancepack: Creating...
aws_config_conformance_pack.s3conformancepack: Still creating... [10s elapsed]
aws_config_conformance_pack.s3conformancepack: Still creating... [20s elapsed]
aws_config_conformance_pack.s3conformancepack: Still creating... [30s elapsed]
aws_config_conformance_pack.s3conformancepack: Creation complete after 34s [id=s3conformancepack]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Congratulations, you have now deployed the conformance pack using Terraform! To confirm the deployment, navigate to Conformance packs in the AWS Config console. You should see that s3conformancepack has been successfully deployed:

Screenshot from the AWS Config console showing the successfully deployed s3conformancepack.

Figure 2: Screenshot of AWS Config Console showing the Conformance Pack deployed
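You can also confirm the deployment from the command line. For example, the following AWS CLI call (run against the same account and Region) returns the provisioning status of the pack; a status of CREATE_COMPLETE indicates that the conformance pack and its rules are in place:

$ aws configservice describe-conformance-pack-status --conformance-pack-names s3conformancepack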

Cleaning up

To remove the conformance pack and disable AWS Config, run the following Terraform command:

$ terraform destroy

Before destroying your managed resources, Terraform prints out the execution plan describing the actions it will take to destroy your infrastructure.

When prompted, type “yes” to confirm that the plan can be run:

Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

If you used the Terraform Configuration to enable AWS Config and deploy a conformance pack, you will see Terraform output similar to the following messages:

aws_config_configuration_recorder_status.config_recorder_status: Destroying... [id=config_recorder]
aws_iam_role_policy_attachment.read_only_policy_attach: Destroying... [id=awsconfig-example-20210929214751996000000002]
aws_iam_role_policy_attachment.config_policy_attach: Destroying... [id=awsconfig-example-20210929214751974300000001]
aws_s3_bucket_policy.config_logging_policy: Destroying... [id=config-bucket-123456789012-us-east-1]
aws_config_conformance_pack.s3conformancepack: Destroying... [id=s3conformancepack]
aws_iam_role_policy_attachment.read_only_policy_attach: Destruction complete after 1s
aws_iam_role_policy_attachment.config_policy_attach: Destruction complete after 1s
aws_s3_bucket_policy.config_logging_policy: Destruction complete after 1s
aws_config_configuration_recorder_status.config_recorder_status: Destruction complete after 1s
aws_config_delivery_channel.config_channel: Destroying... [id=default]
aws_config_delivery_channel.config_channel: Destruction complete after 0s
aws_s3_bucket.new_config_bucket: Destroying... [id=config-bucket-123456789012-us-east-1]
aws_s3_bucket.new_config_bucket: Destruction complete after 3s
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 10s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 20s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 30s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 40s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 50s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 1m0s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 1m10s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 1m20s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 1m30s elapsed]
aws_config_conformance_pack.s3conformancepack: Still destroying... [id=s3conformancepack, 1m40s elapsed]
aws_config_conformance_pack.s3conformancepack: Destruction complete after 1m41s
aws_config_configuration_recorder.config_recorder: Destroying... [id=config_recorder]
aws_config_configuration_recorder.config_recorder: Destruction complete after 1s
aws_iam_role.config_role: Destroying... [id=awsconfig-example]
aws_iam_role.config_role: Destruction complete after 2s

Destroy complete! Resources: 9 destroyed.
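As a side note, if you deployed the first configuration but want to keep AWS Config enabled and remove only the conformance pack, one option is Terraform's resource targeting (generally reserved for exceptional cases rather than routine workflows):

$ terraform destroy -target=aws_config_conformance_pack.s3conformancepack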

Conclusion

This post demonstrated how to deploy a sample AWS Config conformance pack with rules and remediation actions in your account by using Terraform.

To learn more about AWS Config conformance packs, visit our AWS documentation.

About the authors


Chloe Goldstein

Chloe Goldstein is a Partner Solutions Architect at AWS. Working with AWS Consulting partners and Independent Software Vendors, Chloe helps these organizations leverage AWS best practices to improve the security, availability, and performance of their cloud applications and workloads.


Jegan Sundarapandian

Jegan Sundarapandian is a Sr. Technical Account Manager at AWS. He works with AWS customers to implement AWS best practices and keep their environments operationally healthy.