Simplifying Kubernetes configurations using AWS Lambda
In this blog post, we explain how to create a multi-stage Dockerfile that bundles eksctl and kubectl to manage the aws-auth ConfigMap, allowing you to call the Kubernetes API to create and manage resources through a unified control plane. You will interact with the Kubernetes API using Python, and the ConfigMap is generated from a Jinja2 template. The result simplifies the user experience: you can manage a Kubernetes cluster without installing multiple tools on your local developer machine. The solution also removes the complexity of learning additional domain-specific languages and reduces the dependencies and packages installed locally.
The problem
In today’s Kubernetes environment, overlapping tool sets and platforms silo features across projects, making it increasingly complex for customers to choose among new technologies. Developers face an ever-growing demand to learn new domain-specific languages rather than focusing on their end products. The solution described in this post can be applied to any Kubernetes configuration; here, we explore one use case: updating the Amazon Elastic Kubernetes Service (Amazon EKS) aws-auth ConfigMap.
Use case
Adding users and roles to the existing Kubernetes configmap.yml at scale:
Method 1:
Develop a script, for example in Python, that generates a configmap.yml from a predefined template, such as Jinja2, and then apply it to the cluster with kubectl:
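A minimal sketch of that apply step, assuming the generated file is named configmap.yml:

```bash
# Apply the generated ConfigMap to the cluster
kubectl apply -f configmap.yml
```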
Method 2:
Use eksctl commands to add one developer at a time using the following command:
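For example, eksctl's identity-mapping command looks like the following (the cluster name, region, account ID, and role are placeholders):

```bash
# Map one IAM role into the cluster's aws-auth ConfigMap
eksctl create iamidentitymapping \
  --cluster eks-demo \
  --region us-east-1 \
  --arn arn:aws:iam::111122223333:role/DevRole \
  --username dev-user \
  --group system:masters
```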
Issues
- Both methods require manual intervention.
- At the time of writing, neither Terraform nor AWS CloudFormation natively supports modifying the aws-auth ConfigMap of an Amazon EKS cluster.
- Teams must create, and grant humans access to, a privileged AWS Identity and Access Management (IAM) user or role in order to update role permissions.
- These privileged users create a bottleneck for updating the cluster.
- There is potential for human error to misconfigure cluster permissions.
Recommended solution
Let’s walk through how we can collapse this tool set into a single API call designed for the environment, using open source tools such as kubectl, eksctl, the AWS Command Line Interface (AWS CLI), Python, Jinja2, and a custom Docker container image.
The solution uses container image support for AWS Lambda to build a Docker container with multi-stage builds on a lightweight operating system image, such as Alpine, reducing the attack surface by including only what is needed to run the code. This approach also allows the Lambda build to be declared as infrastructure as code (IaC) and therefore version controlled, in contrast to using an AWS Lambda layer.
This Lambda function automates the steps you would otherwise run manually against a live Kubernetes cluster, such as the following command.
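For instance, editing the aws-auth ConfigMap by hand looks like this:

```bash
# Open the aws-auth ConfigMap in an editor for manual changes
kubectl edit configmap aws-auth -n kube-system
```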
Prerequisites
- Docker Desktop locally installed and running for packaging the container image.
- AWS CLI locally installed for programmatic interaction with AWS.
- The following AWS resources are required. Refer to the GitHub repository for all code samples.
AWS resources:
- AWS IAM resources:
- Lambda role
- Lambda permissions for Amazon EKS
- Amazon Elastic Container Registry (Amazon ECR)
- Amazon EKS cluster
- Lambda role authorized for Amazon EKS administration
AWS CLI commands for creating the prerequisites
1a. Create Lambda role:
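For example (the role name and trust policy file name are placeholders; the repository contains the actual JSON documents):

```bash
# Create the execution role the Lambda function will assume; the trust
# policy should allow the lambda.amazonaws.com service principal
aws iam create-role \
  --role-name lambda-eks-role \
  --assume-role-policy-document file://lambda-trust-policy.json
```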
1b. Create IAM policy:
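Assuming the permission document from the repository is saved locally (the policy name is a placeholder):

```bash
# Create the customer managed policy granting the function access to EKS
aws iam create-policy \
  --policy-name lambda-eks-policy \
  --policy-document file://lambda-role-permission.json
```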
Note: If you receive the error “An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: The policy failed legacy parsing,” update lambda-role-permission.json with your account IDs.
1c. Attach the basic Lambda execution policy to the role:
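A sketch of the attachment step, reusing the placeholder names from above (the account ID is a placeholder):

```bash
# Attach the AWS managed policy for basic Lambda logging, plus the
# customer managed policy created in step 1b
aws iam attach-role-policy \
  --role-name lambda-eks-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam attach-role-policy \
  --role-name lambda-eks-role \
  --policy-arn arn:aws:iam::111122223333:policy/lambda-eks-policy
```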
Note: View the code in the GitHub repository. Following AWS best practices, scope the IAM policy down to meet your company’s requirements; these permissions are for demonstration only and are not production ready.
2a. Create the Amazon Elastic Container Registry repository:
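For example (the repository name and region are placeholders):

```bash
# Create a private repository for the function's container image
aws ecr create-repository \
  --repository-name eks-lambda \
  --region us-east-1
```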
2b. Authorize Docker to push images to Amazon ECR:
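For example (the account ID and region are placeholders):

```bash
# Log Docker in to the private registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS \
    --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
```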
3a. Create the Amazon EKS cluster:
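For example (the cluster name, region, and node count are placeholders):

```bash
# Create a small demonstration cluster
eksctl create cluster \
  --name eks-demo \
  --region us-east-1 \
  --nodes 2
```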
3b. Authorize the Lambda role to administer the Amazon EKS cluster:
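Open the aws-auth ConfigMap for editing:

```bash
# Edit the aws-auth ConfigMap in the kube-system namespace
kubectl edit configmap aws-auth -n kube-system
```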
3c. Add the following:
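The entry to add under mapRoles grants the Lambda role cluster-admin rights; the ARN below is a placeholder matching the role created in step 1a:

```yaml
# Under mapRoles in the aws-auth ConfigMap
- rolearn: arn:aws:iam::111122223333:role/lambda-eks-role
  username: lambda-eks-role
  groups:
    - system:masters
```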
GitHub repository contents
File directory layout
The file directory layout is constructed as follows, with three directories and five files:
Dockerfile
The Dockerfile layout is as follows:
- Lines 1–6: Declare global arguments for all build stages; customize these based on your needs.
- Lines 8–15: Create a common base of required libraries for the tool set.
- Lines 17–29: Copy the binaries from the compiler stage into the next stage, builder, and stage the libraries needed to run Python on Alpine.
- Lines 34–66: Copy fresh binaries from the compiler stage into the final image stage and install awscliv2, eksctl, and kubectl.
- Line 68: Include the Python stage inside the final stage, taking the required binaries.
- Lines 70–71: Run the Python script inside the container image, calling the Kubernetes API to configure the aws-auth ConfigMap.
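To make the stage layout concrete, here is a condensed sketch of the multi-stage pattern described above. This is not the repository’s Dockerfile (the line numbers above refer to that file); stage names, tool versions, and the dependency list are assumptions:

```dockerfile
# Stage 1: compiler -- fetch the CLI binaries once
FROM alpine:3.18 AS compiler
RUN apk add --no-cache curl
RUN curl -sLo /usr/local/bin/kubectl \
      "https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl" \
 && chmod +x /usr/local/bin/kubectl
RUN curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" \
      | tar xz -C /usr/local/bin

# Stage 2: python -- install the handler's dependencies, including the
# Lambda runtime interface client required by custom container images
# (building awslambdaric on Alpine may require build tools; simplified here)
FROM python:3.11-alpine AS python
RUN pip install --no-cache-dir --target /opt/python \
      awslambdaric boto3 jinja2

# Stage 3: final -- assemble only what the function needs at run time
# (the awscliv2 install is omitted here for brevity)
FROM python:3.11-alpine
COPY --from=compiler /usr/local/bin/kubectl /usr/local/bin/eksctl /usr/local/bin/
COPY --from=python /opt/python /opt/python
COPY app.py ./
ENV PYTHONPATH=/opt/python
ENTRYPOINT ["python", "-m", "awslambdaric"]
CMD ["app.handler"]
```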
API call made
The API call made to the Lambda function has the following format.
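The handler in the GitHub repository defines the exact schema; apart from RequestType, the field names below are illustrative assumptions:

```json
{
  "RequestType": "Create",
  "ClusterName": "eks-demo",
  "RoleArn": "arn:aws:iam::111122223333:role/DevRole",
  "Username": "dev-user",
  "Groups": ["system:masters"]
}
```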
Allowed RequestTypes are:
- Create/Update
- Delete
Actions
Create
To deploy, build and push the container image, create the function, and then invoke it, following the Lambda documentation on container image support:
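A sketch of the full sequence, reusing the placeholder names from the prerequisites and an event.json file containing the payload shown earlier:

```bash
# Build the image and push it to the Amazon ECR repository from step 2a
docker build -t eks-lambda .
docker tag eks-lambda:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/eks-lambda:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/eks-lambda:latest

# Create the Lambda function from the container image
aws lambda create-function \
  --function-name eks-lambda \
  --package-type Image \
  --code ImageUri=111122223333.dkr.ecr.us-east-1.amazonaws.com/eks-lambda:latest \
  --role arn:aws:iam::111122223333:role/lambda-eks-role

# Invoke the function with a Create request
aws lambda invoke \
  --function-name eks-lambda \
  --payload file://event.json \
  --cli-binary-format raw-in-base64-out \
  response.json
```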
Verify
To verify the updates to the configmap, run the following command:
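For example, using kubectl:

```bash
# Inspect the live aws-auth ConfigMap
kubectl describe configmap aws-auth -n kube-system
```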
Then confirm that the additional role mappings appear in your aws-auth ConfigMap.
Clean up
To clean up, remove each resource as listed below; a consolidated set of commands follows the list.
- To delete the Lambda function:
- To delete the Amazon ECR and images:
- To delete the Lambda IAM role:
- To delete the IAM policy:
- To delete the Amazon EKS cluster:
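Assuming the placeholder names used throughout this post:

```bash
# Delete the Lambda function
aws lambda delete-function --function-name eks-lambda

# Delete the Amazon ECR repository and any images it contains
aws ecr delete-repository --repository-name eks-lambda --force

# Detach policies, then delete the Lambda IAM role
aws iam detach-role-policy --role-name lambda-eks-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam detach-role-policy --role-name lambda-eks-role \
  --policy-arn arn:aws:iam::111122223333:policy/lambda-eks-policy
aws iam delete-role --role-name lambda-eks-role

# Delete the customer managed IAM policy
aws iam delete-policy \
  --policy-arn arn:aws:iam::111122223333:policy/lambda-eks-policy

# Delete the Amazon EKS cluster
eksctl delete cluster --name eks-demo --region us-east-1
```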
Summary
You now have a way to update your Amazon EKS clusters dynamically with a Lambda function rather than installing kubectl or eksctl on a local machine. Additionally, the container image build is version controlled as infrastructure as code.