Scaling IaC and CI/CD pipelines with Terraform, GitHub Actions, and AWS Proton
Introduction
Modern applications run on a variety of compute platforms in AWS, including serverless services such as AWS Lambda, AWS App Runner, and AWS Fargate. Organizations are often required to support architectures that span several of these services, each offering unique runtime characteristics, such as concurrency and scaling, that can be purpose-fit and optimized for a particular workload. As customers adopt these services using practices such as infrastructure as code (IaC) and continuous integration and continuous delivery (CI/CD), they often face challenges scaling them within their organizations.
In contrast to the original idea of the DevOps two-pizza team, where a single team both develops and operates an application in production, we’ve seen new roles emerge in the industry, such as DevOps engineer and platform engineer. These newer roles tend to focus on cloud infrastructure and operations work so that developers can focus on writing application code such as frontend graphical user interfaces (GUIs), backend APIs, and database queries. DevOps and platform engineering teams are challenged with how to support many development teams with IaC and CI/CD. They want to ensure that development teams adhere to their organization’s standards and guidelines for security, reliability, and cost optimization; however, they don’t want to become bottlenecks, as development teams need to move fast and deploy new services quickly. To help address these challenges, internal developer platforms (IDPs) are becoming increasingly popular as a mechanism for platform engineers and development teams to collaborate and move faster with increased reliability and consistency. If you’d like to learn more about how we’re seeing companies organize for modern cloud operations, please refer to this post.
A significant challenge with employing IaC and CI/CD templates at scale is maintaining and evolving them over time. Tools that scaffold templates are great, but what happens when changes are needed? For example, suppose you have a template that provisions web applications using Amazon Elastic Container Service (Amazon ECS) and AWS Fargate. AWS releases a new capability such as Amazon ECS Service Connect, and you want to let your template users opt in to it. What if you’ve scaffolded dozens of CI/CD pipelines and later decide that you want to add a security scanning step? AWS Proton addresses these ongoing challenges by giving platform engineers mechanisms to version and track templates and to publish updates that development teams can consume. AWS Proton is a service that helps platform teams scale their impact by defining and updating infrastructure for self-service deployments. It provides a managed deployment mechanism with a central dashboard that gives template usage visibility and traceability across AWS accounts.
AWS Proton can support a large variety of use cases; as an example, this post shows how AWS Proton can be used to provision containerized web application architectures running on Amazon ECS Fargate with CI/CD pipelines running on GitHub Actions. The IaC that provisions the web application and CI/CD pipeline is encapsulated within an AWS Proton service template that can be easily consumed by developers in a self-service fashion and maintained over time. The post uses sample AWS Proton templates implemented with Terraform to provision and deploy a sample Python Flask web application. You’ll walk through how to publish and deploy two AWS Proton templates that give developers everything they need to stand up a new web application in AWS and a CI/CD pipeline that builds and deploys their code.
Solution overview
The following is a high-level diagram that depicts this architecture:
Walkthrough
Template registration
To get started, you’ll need an environment in which to deploy a web app. The web application needs to accept traffic from the internet and possibly connect to internal resources, such as an Amazon Relational Database Service (Amazon RDS) database running in a private subnet. In this post, you’ll use an AWS Proton environment template that provisions an Amazon Virtual Private Cloud (Amazon VPC) network to accomplish this, using the VPC ECS Cluster sample template. The first thing you’ll need to do is register this template in your AWS account. As an alternative to the manual registration shown below, you can use AWS Proton’s template sync feature to automatically register templates from a Git repo. Template sync requires that you fork the repo into your own GitHub organization and use an AWS CodeStar connection to make the repo available to AWS Proton; you can read about the template sync feature here. For now, though, you’ll use Amazon S3 to do the registration: clone the sample repo to your machine and register the environment template using the following commands. You’ll need to supply an Amazon Simple Storage Service (Amazon S3) bucket that you can write to, which is used to store your template artifacts during registration. You can export an environment variable named S3_BUCKET or replace ${S3_BUCKET} in the commands.
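A minimal sketch of those commands using the AWS CLI; the template name (vpc-ecs-cluster) and source directory (environment/) are illustrative and should be adjusted to match the sample repo’s actual layout:

```bash
export S3_BUCKET=your-template-artifact-bucket   # a bucket you can write to

# Register the environment template.
aws proton create-environment-template \
  --name "vpc-ecs-cluster" \
  --display-name "Environment for ECS Fargate"

# Bundle the template source and upload it to Amazon S3.
tar -zcvf env-template.tar.gz -C environment/ .
aws s3 cp env-template.tar.gz "s3://${S3_BUCKET}/env-template.tar.gz"

# Create the first version of the template, then publish it.
aws proton create-environment-template-version \
  --template-name "vpc-ecs-cluster" \
  --source "s3={bucket=${S3_BUCKET},key=env-template.tar.gz}"

aws proton update-environment-template-version \
  --template-name "vpc-ecs-cluster" \
  --major-version "1" --minor-version "0" \
  --status "PUBLISHED"
```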
Next, you’ll register an AWS Proton service template that provisions a load-balanced web application and a CI/CD pipeline that can build and deploy containers from a source code repo.
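The service template registration follows the same pattern; a sketch, with the template name and source directory again illustrative:

```bash
# Register the service template.
aws proton create-service-template \
  --name "lb-fargate-svc-gh-actions" \
  --display-name "Load Balanced ECS Fargate Service with GitHub Actions"

tar -zcvf svc-template.tar.gz -C service/ .
aws s3 cp svc-template.tar.gz "s3://${S3_BUCKET}/svc-template.tar.gz"

# Service template versions declare which environment templates
# they are compatible with.
aws proton create-service-template-version \
  --template-name "lb-fargate-svc-gh-actions" \
  --compatible-environment-templates '[{"templateName":"vpc-ecs-cluster","majorVersion":"1"}]' \
  --source "s3={bucket=${S3_BUCKET},key=svc-template.tar.gz}"

aws proton update-service-template-version \
  --template-name "lb-fargate-svc-gh-actions" \
  --major-version "1" --minor-version "0" \
  --status "PUBLISHED"
```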
Create environment
Now that you’ve registered the templates, you can create a new instance of an environment. To do that, you’ll log in to the AWS Management Console, navigate to AWS Proton, select Environments, select Create environment, and choose the Environment for ECS Fargate template. Note that the following walkthrough uses the AWS Management Console; however, these actions can also be automated using the AWS Command Line Interface (AWS CLI) or Application Programming Interfaces (APIs).
After entering an environment name of dev and an optional description, you’ll select your AWS CodeBuild provisioning role. This role determines the scope of infrastructure that the environment template and the related service templates can provision; AWS Proton uses AWS CodeBuild as the environment in which to run your template commands. After selecting Next, AWS Proton lets you specify input parameters for the environment template. For customers who prefer a graphical interface, it’s worth pointing out that AWS Proton dynamically generates this user interface from the schema file included in your AWS Proton template. For customers who prefer more of a GitOps style of provisioning, AWS Proton supports a service sync capability that you can read about here. You can accept the template defaults, select Next again, and choose Create.
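If you’d rather script this step, the console flow maps to a single CreateEnvironment call; a sketch, assuming a spec.yaml file containing the template’s input parameters and an existing CodeBuild provisioning role (the ARN shown is a placeholder):

```bash
# Create the dev environment from the published template version.
# spec.yaml carries the same inputs the console form collects.
aws proton create-environment \
  --name "dev" \
  --template-name "vpc-ecs-cluster" \
  --template-major-version "1" \
  --codebuild-role-arn "arn:aws:iam::111111111111:role/ProtonCodeBuildProvisioningRole" \
  --spec file://spec.yaml
```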
At this point, AWS Proton provisions your environment resources, which include a VPC network, an Amazon ECS cluster, AWS IAM roles, and an Amazon S3 bucket to store the Terraform remote state. This example template happens to be built with Terraform; however, AWS Proton’s CodeBuild provisioning feature lets you use any IaC tool (e.g., AWS Cloud Development Kit (AWS CDK)) or scripts that you prefer.
After a few minutes, AWS Proton has provisioned your dev environment by applying the template’s Terraform using AWS CodeBuild provisioning. When complete, the environment’s deployment status changes to Succeeded and you’re ready to deploy your web application into this environment.
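You can watch the same deployment status from the CLI; for example:

```bash
# Poll the environment's deployment status; SUCCEEDED means it's ready.
aws proton get-environment --name "dev" \
  --query "environment.deploymentStatus" --output text
```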
Create service, pipeline, and instance
To provision your web application and CI/CD pipeline, you’ll navigate to Services, then select Create Service and choose Load Balanced ECS Fargate Service with a GitHub Actions CI/CD Pipeline.
After entering a service name (e.g., web-app), you connect the service to a GitHub repository containing your application source code. You can use your own application repo; however, in this example you’ll use the aws-proton-sample-services repository, either by forking it into your GitHub organization or by creating a new repository from the repository template. When you choose the Link another Git repository option, select GitHub and your AWS CodeStar connection, and then select the repository and its main branch.
Next, the wizard asks you to configure your AWS Proton service instances. You’ll create a single instance named dev and select the AWS Proton environment named dev, which you created earlier. Because you’ll eventually deploy the ecs-backend sample app, you’ll enter port 80 and a health check path of /ping, which is how the sample application is configured. The sample application is a simple Python application that uses the Flask web framework. You’ll accept the template’s default public.ecr.aws/aws-containers/proton-demo-image container image, which dynamically uses the selected port and health check path. This image is later replaced by a new image built in the CI/CD pipeline job.
For the AWS Proton service’s Pipeline inputs section, you’ll need a GitHub Personal Access Token (PAT) that allows the service to provide a GitHub Actions CI/CD pipeline by sending the workflow YAML file to your application repo. You can follow the GitHub docs to create a PAT with the workflow scope. After you’ve created the PAT, add it to AWS Secrets Manager so that your AWS Proton service can consume it. To do this, open a new tab in the AWS Management Console, navigate to Secrets Manager, and choose the Store a new secret option. Select Other type of secret and enter the GitHub PAT in the Plaintext tab. It should look something like the following.
Next, name the secret github/token/sample-app, give it an optional description, then choose Next, Next, and Store.
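If you prefer the CLI, the same secret can be stored with one command (the token value below is a placeholder):

```bash
# Store the GitHub PAT as a plaintext secret for AWS Proton to consume.
aws secretsmanager create-secret \
  --name "github/token/sample-app" \
  --secret-string "ghp_replace-with-your-token"
```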
With the secret stored securely in Secrets Manager, you can go back to the AWS Proton tab and enter the name of the secret in the Pipeline inputs section. For Docker Path, enter ./ecs-backend so that the proper subdirectory is built.
After selecting the Next button, AWS Proton provisions the service template, which involves two main steps; a scripted equivalent of this creation step is sketched after the list.
- Provision the web application infrastructure for the service instance. This involves provisioning an Application Load Balancer and an Amazon ECS Service running the default container.
- Provision the pipeline infrastructure. This includes provisioning an Amazon Elastic Container Registry (Amazon ECR) repository, an AWS IAM role that GitHub Actions will assume, and a GitHub Actions workflow YAML file that’s sent to the application repository as a pull request (PR).
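Here’s that scripted equivalent: a sketch using the AWS CLI, where the connection ARN and repository ID are placeholders and spec.yaml captures the instance and pipeline inputs entered above:

```bash
# Create the service, its dev instance, and the pipeline in one call.
aws proton create-service \
  --name "web-app" \
  --template-name "lb-fargate-svc-gh-actions" \
  --template-major-version "1" \
  --repository-connection-arn "arn:aws:codestar-connections:us-east-1:111111111111:connection/example" \
  --repository-id "your-org/aws-proton-sample-services" \
  --branch-name "main" \
  --spec file://spec.yaml
```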
Run CI/CD pipeline
Once the AWS Proton service status changes to Active and the pipeline provisioning status changes to Succeeded, you should see the workflow PR URL appear in the outputs section of the pipeline tab. If you select that link, you’ll see a PR called Proton generated GitHub Actions CI/CD pipeline.
When you merge this PR, GitHub Actions runs a workflow job that builds the Python application into a container image, pushes it to Amazon ECR, and updates the AWS Proton service to deploy the newly built image to your service instance.
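The workflow’s details are owned by the template, but conceptually the job reduces to a few commands; a hedged sketch of the equivalent steps, with an illustrative account ID, Region, and repository name:

```bash
# Log in to Amazon ECR, then build and push the application image.
# GITHUB_SHA is set by GitHub Actions to the commit being built.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-1.amazonaws.com
docker build -t "111111111111.dkr.ecr.us-east-1.amazonaws.com/web-app:${GITHUB_SHA}" ./ecs-backend
docker push "111111111111.dkr.ecr.us-east-1.amazonaws.com/web-app:${GITHUB_SHA}"

# Update the AWS Proton service instance so Amazon ECS deploys the new
# image; the spec is re-rendered with the new image tag before this call.
aws proton update-service-instance \
  --service-name "web-app" --name "dev" \
  --deployment-type "CURRENT_VERSION" \
  --spec file://rendered-spec.yaml
```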
Once the GitHub Actions workflow job is complete, you should see Amazon ECS rolling out a new deployment.
When the Amazon ECS deployment is complete, you can select the endpoint located in the AWS Proton service instance output and you should see the Python application running. With this pipeline in place, future code commits that are pushed or merged to the main branch will trigger the pipeline.
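You can also retrieve the endpoint and verify the deployment from the CLI (the endpoint value is a placeholder):

```bash
# List the instance outputs to find the load balancer endpoint,
# then hit the sample app's health check path.
aws proton list-service-instance-outputs \
  --service-name "web-app" --service-instance-name "dev"
curl "http://<load-balancer-endpoint>/ping"
```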
Keep in mind that what’s shown above is just one example workflow. The important thing is that this workflow is encapsulated within an AWS Proton service template that can be changed over time. A platform engineering team could also provide additional service templates with alternative workflows, or even allow template users to customize the workflow through custom input parameters. An example of an alternative workflow might be to trigger CI only from GitHub PRs, building container images and pushing them to Amazon ECR. Deployments could then be triggered in more of a GitOps approach using AWS Proton’s service sync capability, which allows you to declaratively specify an AWS Proton service’s input parameters in a file that lives in a Git repository that AWS Proton watches and deploys for you. The key is that you have flexibility and can encapsulate your workflow inside templates that can be versioned, tracked, and maintained over time.
Maintenance
While the CI/CD pipeline for each workload lives in the workload’s GitHub repo, AWS Proton provides a loosely coupled mechanism for development teams to stay in sync and up to date with standards and best practices set by platform teams. For example, let’s say that a few months after deploying a workload using your templates, your platform engineering team decides that everyone should run a new security scanning tool and deploy only if the scans pass. The platform engineering team can update the service template to include this new automation, publish a new version, and then either allow consuming teams to upgrade their templates at their leisure or update all environments or services at once using the AWS Proton API. The following screenshot shows how a developer might be notified that a newer template version is available, allowing them to update to the recommended version. Selecting this button triggers a deployment of the new pipeline, which results in another PR with the latest pipeline changes.
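Scripted, that rollout might look like the following sketch, where the updated template bundle is assumed to already be in Amazon S3:

```bash
# Publish a new minor version of the service template that adds the
# security scanning step.
aws proton create-service-template-version \
  --template-name "lb-fargate-svc-gh-actions" \
  --major-version "1" \
  --compatible-environment-templates '[{"templateName":"vpc-ecs-cluster","majorVersion":"1"}]' \
  --source "s3={bucket=${S3_BUCKET},key=svc-template-v1.1.tar.gz}"

aws proton update-service-template-version \
  --template-name "lb-fargate-svc-gh-actions" \
  --major-version "1" --minor-version "1" \
  --status "PUBLISHED"

# Roll an individual service's pipeline forward to the latest minor version.
aws proton update-service-pipeline \
  --service-name "web-app" \
  --deployment-type "MINOR_VERSION" \
  --spec file://spec.yaml
```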
Prerequisites
If you’d like to follow this walkthrough in your own AWS account, you’ll need to complete the one-time AWS Proton setup prerequisites, which include setting up AWS Identity and Access Management (AWS IAM) roles and an AWS CodeStar connection to your GitHub account. This requires elevated AWS account permissions.
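For reference, the GitHub connection piece of that setup can be initiated from the CLI, although the handshake still has to be authorized in the console, where the connection starts out in a pending state:

```bash
# Create an AWS CodeStar connection to GitHub; complete the
# authorization in the AWS Management Console afterwards.
aws codestar-connections create-connection \
  --provider-type "GitHub" \
  --connection-name "proton-github"
```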
Cleaning up
If you’d like to remove the AWS resources created during the walkthrough, you can delete the AWS Proton service, which automatically deletes all resources created by the service pipeline and instance. Once the service has been deleted, you can delete the AWS Proton environment, which deletes the shared resources. The only exception is the Amazon S3 bucket created as part of the environment to hold the Terraform remote state. This bucket is not automatically deleted in case you need to restore any of the previous resources; you’ll need to manually empty and delete it. Once the service and environment have been deleted, you can safely delete the AWS Proton service and environment templates.
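A CLI sketch of the cleanup, with placeholder names matching the earlier examples:

```bash
# Delete the service (instances and pipeline infrastructure go with it),
# then the environment and its shared resources.
aws proton delete-service --name "web-app"
aws proton delete-environment --name "dev"

# The Terraform remote state bucket is retained; empty and remove it manually.
aws s3 rb "s3://<terraform-state-bucket>" --force

# Finally, remove the templates (delete any remaining template versions
# first with delete-service-template-version / delete-environment-template-version).
aws proton delete-service-template --name "lb-fargate-svc-gh-actions"
aws proton delete-environment-template --name "vpc-ecs-cluster"
```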
Conclusion
In this post, we showed you one example of what you can build with AWS Proton templates. As a platform engineer, you’ve published AWS Proton templates that give developers everything they need to stand up a new web application in AWS and a CI/CD pipeline that builds and deploys their code. This enables your developers to focus on building their app, while giving your platform teams a mechanism to provide guardrails and updates to their cloud resources and CI/CD pipelines over time. The value of AWS Proton is that you can encapsulate your infrastructure and pipeline rules and logic into a resource that can be versioned, updated, and traced across your AWS accounts.
You can browse the proton-codebuild-provisioning-examples repository to see other example templates that use Terraform, CDK, and Pulumi. There’s also another Terraform example template that uses AWS CodePipeline for CI/CD instead of GitHub Actions. With AWS Proton’s CodeBuild provisioning capability, you can build templates using whichever tooling you prefer. If you prefer AWS CloudFormation as your IaC tool, we also have sample templates published here. We encourage you to try out the sample templates and provide feedback on the AWS Proton public roadmap GitHub repo.