Automate AWS App2Container workflow using Ansible
AWS App2Container is a command-line tool that helps to modernize legacy applications by moving them to run in containers managed by Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Containerization helps with application resource utilization and agility.
You can use AWS App2Container for Java (Linux) or ASP.NET (Windows) applications that are compatible with the Open Containers Initiative (OCI). You can run App2Container directly on the application servers that host the applications, or you can have it perform the containerization and deployment steps on a worker machine. In this post, we show you how to automate the tasks performed by App2Container using an Ansible playbook on a worker machine.
To use App2Container to analyze, transform, and deploy applications to Amazon ECS, Amazon EKS, or AWS App Runner, you perform the steps described in this post.
Inherent challenges
The App2Container workflow involves a series of tasks (install, discover, analyze, extract, containerize, generate deployment, and deploy) that must be orchestrated across application and worker servers. When you use App2Container to migrate multiple servers, it becomes overwhelming to manage and track the progress of each workflow manually. It increases the chances of human error and slows down the migration effort. If the application server has no access to the internet, you cannot install prerequisites like Docker. Or if the server does not have sufficient hardware resources, you cannot execute App2Container to containerize the applications running on it. In scenarios like these, a worker server will help perform the tasks of containerizing the applications, generating deployment artifacts, and provisioning on the target AWS account.
If there is no communication route between the application and the worker servers, a proxy instance will help bridge the communication route. The complexity becomes overwhelming if multiple applications are being migrated simultaneously using a common worker server.
Customers who have a large portfolio of applications to migrate will have multiple application servers. In this scenario, an automated solution to centrally manage and track the App2Container workflow is helpful. By automating the workflow to execute the previously mentioned tasks on the application and worker servers using an Ansible playbook, you get the following benefits:
- Monitoring migration progress for each application
- Reducing human intervention and error
- Migrating faster
You can execute the playbook from the worker server or the proxy instance and can containerize multiple application servers in parallel.
For supported applications, App2Container identifies the files and dependencies that the application requires, which minimizes the size of the Docker container image. This is when it runs in “application” mode. For other application types, App2Container runs in “process” mode and collects all the files and folders on the application server for containerization, which produces a larger container image. The App2Container team is working to support other application frameworks in the future.
To further optimize the Docker image size, consider identifying the required and the irrelevant files and adding them to the appropriate lists (appSpecificFiles and appExcludeFiles, respectively). The playbook creates a separate AWS CloudFormation stack for each application to be containerized. Use Docker version 17.07 or later.
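As a sketch, the relevant portion of analysis.json might look like the following. The file paths are hypothetical, and this assumes the appExcludeFiles and appSpecificFiles lists sit under the containerParameters section that App2Container generates:

```json
{
  "containerParameters": {
    "appExcludeFiles": ["/tmp/app-cache", "/var/log/app/debug.log"],
    "appSpecificFiles": ["/opt/app/conf/app.properties"]
  }
}
```

Files listed in appExcludeFiles are left out of the image, while appSpecificFiles forces inclusion of files the analysis may have missed.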
Solution architecture
When the Ansible playbook is executed, it provisions the prerequisites and installs App2Container on the application and worker servers. On the application server, it starts application discovery and extracts the application as a tarball. It copies the extracted tarball from the application server to the worker server, containerizes it, generates deployment artifacts, and provisions the architecture on the target AWS account.
You can install Ansible on the worker server. If there is no direct connectivity between the application server and the worker server (as shown in the preceding diagram), then you must use a proxy to complete the route. In this case, you can install Ansible on the proxy instance and it will run the tasks connecting to the application and the worker servers. AWS CodePipeline is triggered when the AWS CodeCommit repository has new commits. The pipeline uses AWS CodeBuild to build a new Docker image, push it to an Amazon Elastic Container Registry (Amazon ECR) repository, and deploy the image to the target serverless compute engine, AWS Fargate.
Orchestration
Create an IAM user for programmatically executing App2Container commands. App2Container needs access to Amazon S3, Amazon ECR, Amazon ECS or Amazon EKS, and AWS CodePipeline.
Configure an AWS profile on the worker node using the keys for this IAM user.
Create an S3 bucket with read/write access to the IAM user created in the previous step. Update the S3 bucket information in the Ansible configuration file.
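The setup steps above can be sketched as follows; the profile name, bucket name, and key placeholders are hypothetical, so substitute your own values:

```shell
# Hypothetical profile and bucket names; substitute your own.
PROFILE=a2c-worker
BUCKET=my-a2c-artifacts

# Store the IAM user's access keys in a named profile
# (this is what `aws configure --profile "$PROFILE"` writes).
mkdir -p "$HOME/.aws"
cat >> "$HOME/.aws/credentials" <<EOF
[$PROFILE]
aws_access_key_id = <ACCESS_KEY_ID>
aws_secret_access_key = <SECRET_ACCESS_KEY>
EOF

# Create the S3 bucket the playbook will use (requires the AWS CLI):
# aws s3 mb "s3://$BUCKET" --profile "$PROFILE"
```

The profile name you choose here is the value you later pass to the playbook as awsProfile.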
Any Java application can be containerized using App2Container. Capture a sample of the output that the Java application generates; later, you can cross-verify it against the output generated by the containerized application running on AWS.
Ansible is an agentless automation tool. It can migrate application servers by containerizing them on the worker node. Install Ansible on the worker server or, if there is no direct communication route between the application and worker servers, on the proxy instance. Use Ansible version 2.9 or later. Before you install Ansible, be sure to install the required software on the worker instance.
Set up the App2ContainerAutomation project by cloning the Ansible playbook to the Ansible Host. Use this command to clone the automation scripts from the repository:
git clone https://github.com/aws-samples/aws-app2container-ansible.git
Modify the Ansible variables to suit your requirements. You can set the mandatory variables through the command line while running the playbook or by editing the main.yml file in the vars folder. You’ll find the definitions for each variable in the readme. Update their values as required.
Mandatory variables:
- workspace_dir: The workspace directory where all the work will be saved. Default is /root/container.
- awsProfile: The AWS profile configured on the worker node.
- s3Bucket: The S3 bucket configured in the account where the work will be saved.
Optional variables:
- deployTarget: The target environment (ECS or EKS) to be generated. Default is ECS.
- deployEnv: The pipeline environment (beta or prod) to be generated. Default is beta.
- deploy: Flag to deploy the generated artifacts. Possible options are true or false. Default is true.
- worker_mode: Flag to use the worker mode. Possible options are true or false. Default is true.
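Taken together, a vars/main.yml that spells out these defaults might look like the following sketch; the profile and bucket names are placeholders:

```yaml
# vars/main.yml -- values shown are the documented defaults
workspace_dir: /root/container   # working directory on the worker node
awsProfile: a2c-worker           # hypothetical AWS profile name
s3Bucket: my-a2c-artifacts       # hypothetical S3 bucket name
deployTarget: ECS                # or EKS
deployEnv: beta                  # or prod
deploy: true                     # provision the generated artifacts
worker_mode: true                # containerize on the worker node
```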
In this post, the native settings that App2Container chooses by default are used. To configure containers, use the App2Container analysis.json file. To configure deployment, the options are ECS or EKS. To configure pipelines, the options are beta or prod.
Add the application server information to the inventory.ini file. Update the onPremisesApplicationServer section to add all the application servers to be containerized and migrated to AWS. Update the onPremisesWorkerServer section to add the worker node where the containerization will be done.
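As a sketch, an inventory.ini with two application servers and one worker node might look like this; the host names, addresses, and user are hypothetical:

```ini
; inventory.ini -- hypothetical hosts
[onPremisesApplicationServer]
app-server-1 ansible_host=10.0.1.10 ansible_user=ec2-user
app-server-2 ansible_host=10.0.1.11 ansible_user=ec2-user

[onPremisesWorkerServer]
worker-1 ansible_host=10.0.2.10 ansible_user=ec2-user
```

Every host listed under onPremisesApplicationServer is containerized in parallel through the single worker node.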
Use the following command to run the Ansible playbook.
ansible-playbook -i inventory.ini main.yml -e s3Bucket=<S3 bucket created in the prerequisites> -e awsProfile=<AWS profile set up in the prerequisites>
After you have provided the required inputs for Ansible and the playbook has been executed from the worker server or proxy instance, Ansible automates the entire workflow. There are a total of 28 steps that include discovering the application, extracting the application, copying to worker server, containerizing, generating the CloudFormation template, uploading the Docker image to Amazon ECR, creating an AWS CodePipeline pipeline for CI/CD, provisioning the required compute infrastructure on Amazon ECS or Amazon EKS, and deploying the image.
An Amazon ECR repository is created.
An AWS CloudFormation stack is created with a name prefixed by a2c-.
To test the application, go to the Outputs section of the CloudFormation stack, copy the PublicLoadBalancerURLDNSName link, and then open it in the web browser or command line.
An AWS CodeCommit repository is created. To trigger a new image build by AWS CodePipeline, update the Dockerfile. AWS CodeBuild will build a new Docker image and upload it to the Amazon ECR repository. AWS CodeDeploy will deploy the new Docker image to the target compute environment of Amazon ECS or Amazon EKS.
Using Ansible tags
You can use Ansible tags to run a specific stage in the App2Container workflow. Use this feature to run one stage at a time, or to rerun a specific stage manually as many times as needed during troubleshooting.
Example:
ansible-playbook -i inventory.ini main.yml -e s3Bucket=<S3 bucket created in the prerequisites> -e awsProfile=<AWS profile set up in the prerequisites> --tags <inventory>
Conclusion
In this post, we showed how you can use Ansible to automate the AWS App2Container workflow by configuring the parameters used by the tool. For details about each parameter, see the Readme.md file in the repository. If your use case requires more customization, you can extend this playbook and define additional configuration options.