Containers
Building and deploying Fargate with EKS in an enterprise context using the AWS Cloud Development Kit and cdk8s+
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that helps customers run their Kubernetes (K8s) clusters at scale by minimizing the effort required to operate the Kubernetes control plane. When you combine Amazon EKS to manage the cluster (the control plane) with AWS Fargate to provision and run pod infrastructure (the data plane), you no longer need to worry about patching, scaling, and maintaining Amazon Elastic Compute Cloud (Amazon EC2) instances to host your applications. You can instead focus on building applications, with the assurance that your cluster is running in a cost-effective, highly available, and secure manner.
In this blog post, we show you how to use the AWS Cloud Development Kit (AWS CDK) with cdk8s and cdk8s+ to deploy a sample Kubernetes workload on an Amazon EKS cluster running Kubernetes pods on AWS Fargate in an enterprise context. This post provides a solution we implemented with one of our enterprise customers.
Time to read | 20 minutes
Time to complete | 60 minutes
Cost to complete | $0.20
Learning level | Expert (400)
Services used | Amazon EKS, AWS Fargate, Application Load Balancer, AWS CDK, cdk8s, cdk8s+
Solution overview
In this blog post, we create an Amazon EKS cluster with two Fargate profiles and attach it to a VPC. All application workloads run in the VPC’s private subnets. When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster. For your convenience, in this blog post, we make the Amazon EKS cluster’s Kubernetes API server endpoint public, so that it’s easier for you to validate the solution in your AWS account. We also provide instructions for making the endpoint private when you use this solution in an enterprise environment.
In addition to the system components of CoreDNS and AWS Load Balancer Controller, the Amazon EKS cluster will host a sample microservice application through a typical Kubernetes deployment and service. We expose the microservice through an AWS Application Load Balancer that is provisioned by the AWS Load Balancer Controller according to the Kubernetes Ingress definition.
The following diagram shows the high-level architecture.
In this blog post, the microservice is based on a standard NGINX container. The architecture in the diagram is built in an infrastructure-as-code manner using AWS CDK, cdk8s, and cdk8s+ code written in TypeScript.
Walkthrough
Here are the high-level deployment steps:
- Clone the code from the GitHub repo.
- Run the npm commands to compile the code.
- Run the cdk deploy command to deploy all components, including the AWS resources and Kubernetes workloads.
Prerequisites
To deploy this AWS CDK app, you need the following:
- A good understanding of Amazon EKS, AWS Fargate, and Kubernetes. You also need basic knowledge of Application Load Balancers, the AWS CDK, cdk8s, cdk8s+, and TypeScript.
- An AWS account with the permissions required to create and manage the Amazon EKS cluster, S3 bucket, AWS Fargate profiles, and Application Load Balancer. All of these resources are created automatically by the AWS CDK.
- The AWS Command Line Interface (AWS CLI) configured to use one of the AWS Regions where an Amazon EKS cluster running Kubernetes pods on AWS Fargate is supported. For information about installing and configuring the AWS CLI, see Installing, updating, and uninstalling the AWS CLI version 2.
- A current version of Node.js.
- The Kubernetes command-line tool, kubectl. For installation and setup instructions, see Install and Set Up kubectl.
- If you choose to use an existing VPC in your AWS account, make sure the private subnets are tagged according to the information in Cluster VPC considerations (a short tagging sketch follows this list).
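For reference, the tag that matters most for this post’s internal Application Load Balancer is kubernetes.io/role/internal-elb = 1 on the private subnets. The following is a hypothetical sketch of applying that tag with the AWS CDK, assuming the subnets are defined in your own CDK app; for a VPC created outside the CDK, apply the same tag through the console or your usual tooling. (When the stack creates the VPC for you, the CDK EKS construct typically handles this tagging.)
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
// Illustrative only: tag the private subnets of a VPC defined in this CDK app so that
// the AWS Load Balancer Controller can discover them for internal load balancers.
const taggedVpc = new ec2.Vpc(this, 'vpc', { maxAzs: 3 });
taggedVpc.privateSubnets.forEach((subnet) => {
  cdk.Tags.of(subnet).add('kubernetes.io/role/internal-elb', '1');
});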
Deployment steps
The AWS CDK Toolkit (the cdk CLI command) is the primary tool for interacting with your AWS CDK app.
- For information about installing the cdk command, see AWS CDK Toolkit (cdk command).
$ cd ~
$ npm install -g aws-cdk
- Use the git clone command to clone the repo that contains all the AWS CDK, cdk8s, and cdk8s+ code used in this blog post:
$ git clone https://github.com/aws-samples/cdk-eks-fargate.git
- (Optional) If you’re using an existing VPC in your account, enter its ID in the cdk-eks-fargate.ts file. If you have a preferred name for the cluster, enter it in the same file. If you don’t set a VPC ID, the AWS CDK stack will create one for you. If you don’t set a cluster name, the stack will generate one for the provisioned cluster. In this post, we use eks-fargate-blog as the cluster name.
$ cd cdk-eks-fargate/bin/
$ vi cdk-eks-fargate.ts
Uncomment the line //vpcId: 'your-vpc-id', and replace 'your-vpc-id' with the ID of your existing VPC.
Uncomment the line //clusterName: 'your-cluster-name' and replace 'your-cluster-name' with your preferred name for the EKS cluster.
- Install the required npm modules and then use npm commands to compile the AWS CDK code:
$ cd ..
$ npm install
$ npm run build
- Use the cdk deploy command to deploy the AWS resources and Kubernetes workloads. It takes approximately 30 minutes to provision the cluster and Fargate profiles.
$ cdk deploy
Type "y" to agree the question of "Do you wish to deploy these changes"
<< Take around 30 mins>>
<< Once finishes, CDK print outputs to the CLI as below: >>
✅ k8s-app-on-eks-fargate-stack
Outputs:
k8s-app-on-eks-fargate-stack.myclusterClusterNameCF144F5E = <<your EKS cluster name>>
k8s-app-on-eks-fargate-stack.myclusterConfigCommandEABBAF35 = aws eks update-kubeconfig --name <<your EKS cluster name>> --region ap-southeast-2 --role-arn arn:aws:iam::123456789012:role/k8s-app-on-eks-fargate-st-clustermasterroleCD184ED-XXXXXXXXXXXXX
k8s-app-on-eks-fargate-stack.myclusterGetTokenCommand1F2FC9A5 = aws eks get-token --cluster-name <<your EKS cluster name>> --region ap-southeast-2 --role-arn arn:aws:iam::123456789012:role/k8s-app-on-eks-fargate-st-clustermasterroleCD184ED-XXXXXXXXXXXXX
Stack ARN:
arn:aws:cloudformation:ap-southeast-2:123456789012:stack/k8s-app-on-eks-fargate-stack/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
- To set up kubectl access to the cluster, copy and run the k8s-app-on-eks-fargate-stack.myclusterConfigCommand output from step 5:
$ aws eks update-kubeconfig --name <<your EKS cluster name>> --region ap-southeast-2 --role-arn arn:aws:iam::123456789012:role/k8s-app-on-eks-fargate-st-clustermasterroleCD184ED-XXXXXXXXXXXXX
Now that the deployment is complete, it’s time to validate the results.
- Open the Amazon EKS console and choose Clusters. The cluster you created should be displayed.
- On the Compute tab, you should see two AWS Fargate profiles. One is generated automatically by the AWS CDK to host system components (CoreDNS and the AWS Load Balancer Controller). The other is defined by our AWS CDK code to host the Kubernetes NGINX app. With this setup, no EC2 worker nodes are needed.
- On the Networking tab, you should see that the cluster is associated with the VPC and the private subnets specified in the AWS CDK code.
Use kubectl to validate the Kubernetes components. First, check the pods and nodes. You can see that with Amazon EKS on AWS Fargate, every Fargate pod is registered as a virtual Kubernetes node. You can also see that a Kubernetes Ingress has been created for the NGINX service, which is associated with an internal AWS Application Load Balancer instance. (It takes 2-3 minutes for the Application Load Balancer to become active.)
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx nginx-deployment-000d00000-000xx 1/1 Running 0 54m 10.xxx.xxx.159 fargate-ip-10-xxx-xxx-159.ap-southeast-2.compute.internal <none> <none>
...
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
fargate-ip-10-xxx-xxx-159.ap-southeast-2.compute.internal Ready <none> 55m v1.18.8-eks-0x0xxx
...
$ kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
nginx api-ingress <none> * internal-xxxxxxxx-nginx-apiingress-f2c1-000000000.ap-southeast-2.elb.amazonaws.com 80 84m
Sign in to an EC2 instance connected to your VPC and run the following command to curl the Application Load Balancer host name shown in the Ingress output above. The NGINX service should respond with the following hello message:
$ curl internal-xxxxxxxx-nginx-apiingress-f2c1-000000000.ap-southeast-2.elb.amazonaws.com
...
<p><em>Thank you for using nginx.</em></p>
...
Dive deep into the code
The code used in this post seamlessly combines the AWS CDK, cdk8s, and cdk8s+ in TypeScript to deploy AWS resources and Kubernetes workloads in a cohesive manner through a single cdk deploy command. No more out-of-band tweaking!
The cdk-eks-fargate.ts file under the bin folder is the entry file. It defines the AWS CDK app and invokes the class constructor of the AWS CDK stack construct defined under the lib folder.
Under the lib folder, the cdk-eks-fargate-stack.ts file defines the AWS CDK stack. The stack creates an Amazon EKS cluster with pods running on Fargate, adds the AWS Load Balancer Controller defined in aws-loadbalancer-controller.ts, and finally deploys the Kubernetes app defined in nginx-service.ts.
Create the Amazon EKS cluster and AWS Fargate profile
The following code snippet from cdk-eks-fargate-stack.ts illustrates how to create an Amazon EKS cluster with pods running on AWS Fargate in a VPC. In this part of the code, we first create an IAM role that has administrator access to the cluster. Then we create an Amazon EKS cluster with pods running on AWS Fargate, which includes an Amazon EKS control plane and a default Fargate profile that selects all pods that belong to the default and kube-system namespaces. As we mentioned in the introduction, we set the Amazon EKS cluster’s Kubernetes API server endpoint to PUBLIC.
// cluster master role
const masterRole = new iam.Role(this, 'cluster-master-role', {
assumedBy: new iam.AccountRootPrincipal(),
});
// Create an EKS cluster with a Fargate profile.
const cluster = new eks.FargateCluster(this, 'my-cluster', {
version: eks.KubernetesVersion.V1_18,
mastersRole: masterRole,
clusterName: props.clusterName,
outputClusterName: true,
endpointAccess: eks.EndpointAccess.PUBLIC,
vpc:
props.vpcId == undefined
? undefined
: ec2.Vpc.fromLookup(this, 'vpc', { vpcId: props?.vpcId! }),
vpcSubnets: [
{ subnetType: ec2.SubnetType.PRIVATE },
],
});
Deploy the AWS Load Balancer Controller
Next, we deploy the AWS Load Balancer Controller on Fargate with EKS. This controller creates an AWS Application Load Balancer instance for Kubernetes ingress to expose Kubernetes services outside of the cluster.
The following code snippet from cdk-eks-fargate-stack.ts shows how to deploy the AWS Load Balancer Controller:
// Deploy AWS LoadBalancer Controller onto EKS.
new AwsLoadBalancerController(this, 'aws-loadbalancer-controller', {eksCluster: cluster});
The deployment is defined in the aws-loadbalancer-controller.ts file. This file uses the AWS CDK to define the IAM policies required by the controller. The policies are then attached to the IAM role associated with the Kubernetes service account. The AWS Load Balancer Controller itself is deployed with the cluster.addHelmChart method.
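The exact policy statements and chart values live in that file; as a rough sketch of the pattern (the service account name, construct IDs, and chart values shown here are assumptions, not the repo’s exact code), it looks something like this:
// eksCluster is the eks.FargateCluster passed in as a prop (see the snippet above).
// Create a Kubernetes service account in kube-system backed by an IAM role (IRSA);
// the file attaches the controller's required IAM policy statements to sa.role.
const sa = eksCluster.addServiceAccount('aws-lb-controller-sa', {
  name: 'aws-load-balancer-controller',
  namespace: 'kube-system',
});
// Install the controller with the Helm chart from the EKS charts repository,
// reusing the service account created above instead of letting the chart create one.
const lbChart = eksCluster.addHelmChart('aws-load-balancer-controller', {
  chart: 'aws-load-balancer-controller',
  repository: 'https://aws.github.io/eks-charts',
  namespace: 'kube-system',
  values: {
    clusterName: eksCluster.clusterName,
    serviceAccount: { create: false, name: sa.serviceAccountName },
  },
});
lbChart.node.addDependency(sa);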
Deploy the NGINX Kubernetes service
Now that the cluster and AWS Load Balancer Controller are provisioned, the code deploys the business app workload, NGINX.
NGINX does not need AWS permissions to access AWS resources for its work. However, to illustrate how to provide your Kubernetes workload with AWS permissions to AWS resources in your environment, our code assigns an IAM role to the NGINX Kubernetes service account and grants permissions to access an Amazon Simple Storage Service (Amazon S3) bucket. You can easily extend the code to provide permissions for Kubernetes resources as appropriate for your use case.
In the following code snippet from cdk-eks-fargate-stack.ts, we first create an IAM role. The code ensures the IAM role has a trust policy so that it can only be assumed by the Kubernetes service account. Here is an example trust policy for the IAM role:
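(The account ID, Region, and OIDC provider ID shown are placeholders; the aud and sub condition keys correspond to the CfnJson conditions constructed in the code below.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.ap-southeast-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.ap-southeast-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:aud": "sts.amazonaws.com",
          "oidc.eks.ap-southeast-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub": "system:serviceaccount:nginx:sa-nginx"
        }
      }
    }
  ]
}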
As the following code snippet shows, the AWS CDK provides an elegant way to grant AWS permissions. Least-privilege S3 bucket permissions are assigned to the IAM role.
const k8sAppNameSpace = 'nginx';
const k8sIngressName = 'api-ingress';
const k8sAppServiceAccount = 'sa-nginx';
const conditions = new cdk.CfnJson(this, 'ConditionJson', {
value: {
[`${cluster.clusterOpenIdConnectIssuer}:aud`]: 'sts.amazonaws.com',
[`${cluster.clusterOpenIdConnectIssuer}:sub`]: `system:serviceaccount:${k8sAppNameSpace}:${k8sAppServiceAccount}`,
},
});
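// Note: oidcProvider refers to the cluster's OIDC identity provider (for example, obtained from cluster.openIdConnectProvider).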
const iamPrinciple = new iam.FederatedPrincipal(
oidcProvider.openIdConnectProviderArn,
{},
'sts:AssumeRoleWithWebIdentity'
).withConditions({
StringEquals: conditions,
});
const iamRoleForK8sSa = new iam.Role(this, 'nginx-app-sa-role', {
assumedBy: iamPrinciple,
});
// Grant the IAM role S3 permission as an example to show how you can assign Fargate Pod permissions to access AWS resources
// even though nginx Pod itself does not need to access AWS resources, such as S3.
const example_s3_bucket = new s3.Bucket(
this,
'S3BucketToShowGrantPermission',
{
encryption: s3.BucketEncryption.KMS_MANAGED,
}
);
example_s3_bucket.grantRead(iamRoleForK8sSa);
Next, we create a Fargate profile to host the customer app. This profile hosts all pods that belong to the nginx namespace. For information about how you can fine-tune its pod selection criteria, see AWS Fargate profile.
// Create a Fargate profile to host the customer app; it selects pods in the nginx namespace.
cluster.addFargateProfile('customer-app-profile', {
selectors: [{ namespace: k8sAppNameSpace }],
subnetSelection: { subnetType: ec2.SubnetType.PRIVATE },
vpc: cluster.vpc,
});
Now we add the NGINX cdk8s chart to the Amazon EKS cluster, which deploys the NGINX Kubernetes components: the Kubernetes service account, deployment, and service.
const k8sAppChart = cluster.addCdk8sChart(
'nginx-app-service',
new NginxService(cdk8sApp, 'nginx-app-chart', {
iamRoleForK8sSaArn: iamRoleForK8sSa.roleArn,
nameSpace: k8sAppNameSpace,
ingressName: k8sIngressName,
serviceAccountName: k8sAppServiceAccount,
})
);
The nginx-service.ts file defines the NGINX cdk8s chart. In this file, we first create the Kubernetes service account associated with the IAM role created above. The service account is used by the Kubernetes deployment of NGINX. As a result, the NGINX pods have the AWS permissions defined in the IAM role.
const serviceAccount = new ServiceAccount(
this,
props.serviceAccountName,
{
metadata: {
name: props.serviceAccountName,
namespace: namespace.name,
annotations: {
'eks.amazonaws.com/role-arn':
props.iamRoleForK8sSaArn,
},
},
}
);
We use the intent-driven class library, cdk8s+, to define the Kubernetes deployment of NGINX. For information about cdk8s+, see the Introducing cdk8s+: Intent-driven APIs for Kubernetes objects blog post.
const deployment = new kplus.Deployment(this, 'nginx-deployment', {
containers: [
new kplus.Container({
image: 'nginx:1.14.2',
imagePullPolicy: kplus.ImagePullPolicy.ALWAYS,
name: 'nginx',
port: 80,
}),
],
metadata: {
name: 'nginx-deployment',
namespace: namespace.name,
},
serviceAccount
});
deployment.podMetadata.addLabel('app', 'nginx');
deployment.selectByLabel('app', 'nginx');
We define the Kubernetes NGINX service and a Kubernetes Ingress to expose the NGINX service. After the AWS Load Balancer Controller sees this object, it creates an AWS Application Load Balancer instance for it.
const service = new kplus.Service(this, 'game-service', {
metadata: {
namespace: namespace.name,
name: 'game-service',
labels: {
app: '2048',
},
annotations: {
'alb.ingress.kubernetes.io/target-type': 'ip',
},
},
type: kplus.ServiceType.NODE_PORT,
});
service.addDeployment(deployment, 80);
new kplus.IngressV1Beta1(this, 'game-ingress', {
metadata: {
name: 'game-ingress',
namespace: namespace.name,
annotations: {
'kubernetes.io/ingress.class': 'alb',
'alb.ingress.kubernetes.io/scheme': 'internal', // 'internal' for an enterprise internal-facing service; use 'internet-facing' to expose the service publicly.
'alb.ingress.kubernetes.io/target-type': 'ip',
},
labels: { app: '2048' },
},
rules: [
{
path: '/*',
backend: kplus.IngressBackendV1Beta1.fromService(service),
},
],
});
Considerations for isolated private subnets
In an enterprise context, we often see an isolated private subnet, which typically means the subnet does not have a default NAT gateway for outbound traffic to the internet. Workloads in a private subnet must explicitly specify https_proxy to use the enterprise internet proxy server to reach websites, such as AWS service API endpoints, that have been added to the allow list. The solution we present in this post works in a situation like this one, but requires additional setup.
When AWS Fargate creates a Fargate pod, it attaches an ENI in the isolated private subnet to the pod. The pod needs to reach the AWS Security Token Service (AWS STS) endpoint through that ENI in order to register with the Amazon EKS cluster as a virtual node. We therefore need to enable an AWS STS VPC endpoint and make sure that AWS Fargate can retrieve container images through the same ENI. We can either use an enterprise-internal container image repo or enable Amazon Elastic Container Registry (Amazon ECR) VPC endpoints and use container images hosted in Amazon ECR. A sketch of the endpoint setup follows.
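As a hypothetical sketch (the construct IDs are illustrative, and this assumes the VPC object is available in your CDK code; the repo may not include this part), the endpoints could be created like this:
// Interface endpoints so Fargate pods in isolated private subnets can reach AWS STS
// and pull container images from Amazon ECR without a NAT gateway.
vpc.addInterfaceEndpoint('sts-endpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.STS,
  subnets: { subnetType: ec2.SubnetType.PRIVATE },
});
vpc.addInterfaceEndpoint('ecr-api-endpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.ECR,
  subnets: { subnetType: ec2.SubnetType.PRIVATE },
});
vpc.addInterfaceEndpoint('ecr-dkr-endpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
  subnets: { subnetType: ec2.SubnetType.PRIVATE },
});
// Amazon ECR stores image layers in Amazon S3, so a gateway endpoint for S3 is also needed.
vpc.addGatewayEndpoint('s3-endpoint', {
  service: ec2.GatewayVpcEndpointAwsService.S3,
});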
We also need to adjust the AWS CDK code for creating the Amazon EKS cluster. For endpointAccess, specify PRIVATE. For kubectlEnvironment, use the https_proxy/http_proxy/no_proxy values appropriate for your enterprise. Specify your vpc and vpcSubnets selections.
const cluster = new eks.FargateCluster(this, 'my-cluster', {
version: eks.KubernetesVersion.V1_18,
mastersRole: masterRole,
clusterName: props.clusterName,
outputClusterName: true,
// Networking related settings listed below - important in enterprise context.
endpointAccess: eks.EndpointAccess.PRIVATE, // In an enterprise context, you typically want PRIVATE.
kubectlEnvironment: { // If the private subnets have no transparent internet proxy and workloads must set https_proxy explicitly, use kubectlEnvironment to set http_proxy/https_proxy/no_proxy for the Lambda function that provisions the EKS cluster behind the scenes, so that it can reach AWS service APIs through the enterprise proxy.
https_proxy: "your-enterprise-proxy-server",
http_proxy: "your-enterprise-proxy-server",
no_proxy: "localhost,127.0.0.1,169.254.169.254,.eks.amazonaws.com,websites-should-not-be-accesses-via-proxy-in-your-environment"
},
vpc:
props.vpcId == undefined
? undefined
: ec2.Vpc.fromLookup(this, 'vpc', { vpcId: props?.vpcId! }),
vpcSubnets: [
{ subnetType: ec2.SubnetType.PRIVATE },
], // you can also specify the subnets by other attributes
});
If the pods need to access internal websites added to the allow list, we need to set up the https_proxy. This step is commonly required when running Kubernetes workloads in isolated private subnets, no matter the platform (Amazon EKS or others).
As the following cdk8s+ code snippet shows, we suggest defining a Kubernetes ConfigMap and then, in the pod/deployment definition, adding environment variables sourced from that ConfigMap.
const httpEnvConfigMap = new kplus.ConfigMap(this, 'k8s-cm-for-http-proxy-env', {
metadata: {
name: 'proxy-environment-variables',
namespace: namespace.name,
},
data: {
http_proxy: 'your-enterprise-proxy-server',
https_proxy: 'your-enterprise-proxy-server',
no_proxy: 'localhost,127.0.0.1,169.254.169.254,.eks.amazonaws.com,websites-should-not-be-accessed-via-proxy-in-your-environment'
}
});
const deployment = new kplus.Deployment(this, 'nginx-deployment', {
containers: [
new kplus.Container({
...
env: {
http_proxy: kplus.EnvValue.fromConfigMap(httpEnvConfigMap, 'http_proxy'),
https_proxy: kplus.EnvValue.fromConfigMap(httpEnvConfigMap, 'https_proxy'),
no_proxy: kplus.EnvValue.fromConfigMap(httpEnvConfigMap, 'no_proxy'),
}
}),
],
...
});
Cleanup
To avoid ongoing charges to your account, run the following commands to clean up resources. The cleanup process takes approximately 30 minutes.
$ cd ~/cdk-eks-fargate/
$ cdk destroy
Conclusion
By combining Amazon EKS with AWS Fargate, you not only benefit from a fully managed Kubernetes control plane, but you also eliminate the need to provision and manage EC2 instances as Kubernetes worker nodes. AWS Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity.
When you combine the AWS CDK with cdk8s and cdk8s+, you can define and deploy Kubernetes workloads along with dependent AWS resources cohesively and consistently. There’s no need to define separate YAML files and execute kubectl to deploy them in an out-of-band manner.