
Bootstrapping clusters with EKS Blueprints

Today, we are introducing a new open-source project called EKS Blueprints that makes it easier and faster for you to adopt Amazon Elastic Kubernetes Service (Amazon EKS). EKS Blueprints is a collection of Infrastructure as Code (IaC) modules that will help you configure and deploy consistent, batteries-included EKS clusters across accounts and regions. You can use EKS Blueprints to easily bootstrap an EKS cluster with Amazon EKS add-ons as well as a wide range of popular open-source add-ons, including Prometheus, Karpenter, Nginx, Traefik, AWS Load Balancer Controller, Fluent Bit, Keda, Argo CD, and more. EKS Blueprints also helps you implement relevant security controls needed to operate workloads from multiple teams in the same cluster.

EKS Blueprints is implemented in two popular IaC frameworks, HashiCorp Terraform and AWS Cloud Development Kit (AWS CDK), which help you automate infrastructure deployments. To get started, please visit the Getting Started guides for either EKS Blueprints for Terraform or EKS Blueprints for CDK.

Motivation

Kubernetes is a powerful and extensible container orchestration technology that allows you to deploy and manage containerized applications at scale. Its extensible nature also allows you to use a wide range of popular open-source tools, commonly referred to as add-ons, in Kubernetes clusters. With so many tools and design choices available, building a tailored EKS cluster that meets your application’s specific needs can take a significant amount of time. It involves integrating a wide range of open-source tools and AWS services and requires deep expertise in AWS and Kubernetes.

If you’re operating workloads from multiple teams in the same cluster, there are additional considerations, such as governing network policies, access to EKS clusters, or access to AWS resources that run outside of an EKS cluster. Ensuring consistency and standardization across a fleet of EKS clusters as adoption grows can present additional challenges.

AWS customers have asked for examples that demonstrate how to integrate the landscape of Kubernetes tools and make it easy for them to provision complete, opinionated EKS clusters that meet specific application requirements. They want solutions that use familiar tools such as Terraform, CDK, and Helm to help manage the lifecycle of EKS clusters, the operational software that runs in each cluster, and the configuration for teams that need to run workloads in each cluster. EKS Blueprints was built to address this customer need.

What is EKS Blueprints?

EKS Blueprints helps you configure complete EKS clusters that are fully bootstrapped with the operational software that is needed to deploy and operate workloads. You can describe the configuration for the desired state of your EKS cluster, such as the control plane, worker nodes, and Kubernetes add-ons, as an IaC blueprint. Once a blueprint is configured, you can use it to deploy consistent environments across multiple AWS accounts and regions using continuous deployment automation. EKS Blueprints builds on existing work from the EKS open-source community, including using the terraform-aws-eks module for cluster provisioning.

The following architecture diagram represents an EKS environment that can be configured and deployed with EKS Blueprints. The diagram illustrates an EKS cluster that runs across three Availability Zones, is bootstrapped with a wide range of Kubernetes add-ons, and hosts workloads from multiple teams.

With EKS Blueprints, you can provision both EKS add-ons and self-managed add-ons in an EKS cluster. As the EKS service continues to expand its library of EKS add-ons, EKS Blueprints will evolve to support those capabilities as well. It also configures the appropriate IAM policies, roles, and service accounts for each add-on, as described in the EKS IAM roles for service accounts (IRSA) documentation.
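
To see what that wiring involves, note that IRSA binds a Kubernetes service account to an IAM role through the cluster’s OIDC identity provider. The following is a simplified, hand-written sketch of the trust relationship; the resource names, the OIDC provider reference, and the local value are placeholders for illustration, not the module’s actual internals:

# Simplified IRSA sketch (illustrative names, not the module's internals).
# The trust policy allows only the named service account, authenticated
# through the cluster's OIDC provider, to assume this role.
resource "aws_iam_role" "addon_irsa" {
  name = "eks-addon-irsa-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRoleWithWebIdentity"
      Principal = {
        # Placeholder: the cluster's OIDC identity provider
        Federated = aws_iam_openid_connect_provider.eks.arn
      }
      Condition = {
        StringEquals = {
          # e.g. the AWS Load Balancer Controller's service account
          "${local.oidc_issuer}:sub" = "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }]
  })
}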

If you want to allow multiple teams to run workloads in the same cluster, you can use EKS Blueprints to configure and manage the users and teams who have access to a cluster (admin teams) or namespaces within a cluster (application teams).

If you want to leverage a GitOps-based approach to managing both cluster configuration and workloads, you can use EKS Blueprints to bootstrap a cluster with Argo CD and any number of Argo CD Application resources. Support for Flux is on our roadmap as well.

Example blueprints

We have also developed a library of example implementations that demonstrate how to use EKS Blueprints to solve specific technical challenges on EKS. Our library currently includes examples that demonstrate how to run EMR on EKS, how to configure an EKS cluster to provision nodes with Karpenter, how to implement observability for EKS clusters and workloads, how to bootstrap an EKS cluster with Crossplane, how to use EKS Blueprints with AWS Proton, and more.

Over time, our library of examples will continue to grow and evolve. If there are additional examples that you would like to see, please let us know by creating a GitHub issue. Additionally, if you would like to build your own blueprint and share it with the community, we welcome your pull request!

Using EKS Blueprints

Let’s take a look at EKS Blueprints in action. The following Terraform example represents a simple blueprint that will deploy a new EKS cluster with a managed node group. It will also bootstrap the cluster with the vpc-cni, coredns, kube-proxy, and aws-ebs-csi-driver EKS add-ons, as well as the AWS for Fluent Bit, AWS Load Balancer Controller, AWS EFS CSI driver, cluster-autoscaler, and metrics-server self-managed add-ons. Indicating that an add-on should be installed in an EKS cluster is as simple as setting a boolean value to true:

module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.0.2"

  # EKS Cluster VPC and Subnet mandatory config
  vpc_id             = <vpc_id>
  private_subnet_ids = <private_subnet_ids>

  # EKS CLUSTER VERSION
  cluster_version = "1.21"

  # EKS MANAGED NODE GROUPS
  managed_node_groups = {
    mg_5 = {
      node_group_name = "managed-ondemand"
      instance_types  = ["m5.large"]
      min_size        = "2"
    }
  }
}

# Add-ons
module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.0.2"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  # EKS Add-ons
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  # Self-managed Add-ons
  enable_aws_for_fluentbit            = true
  enable_aws_load_balancer_controller = true
  enable_aws_efs_csi_driver           = true
  enable_cluster_autoscaler           = true
  enable_metrics_server               = true
}
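
Because the add-ons module installs Helm charts into the new cluster, your Terraform configuration also needs Kubernetes and Helm providers that point at that cluster. The following is a minimal sketch using standard AWS provider data sources; the only name taken from the blueprint above is the module’s eks_cluster_id output:

# Wire the Kubernetes and Helm providers to the newly created cluster
data "aws_eks_cluster" "cluster" {
  name = module.eks_blueprints.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks_blueprints.eks_cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}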

With CDK, you can do the following:

import * as cdk from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

const stackId = "<stack_id>";

// By default, the blueprint provisions the cluster in a new VPC
blueprints.EksBlueprint.builder()
    .region('us-west-2')
    .version(eks.KubernetesVersion.V1_21)
    .addOns(
        new blueprints.addons.VpcCniAddOn(),
        new blueprints.addons.CoreDnsAddOn(),
        new blueprints.addons.KubeProxyAddOn(),
        
        // Self-managed Add-ons
        new blueprints.addons.AwsForFluentBitAddOn(),
        new blueprints.addons.AwsLoadBalancerControllerAddOn(),
        new blueprints.addons.ClusterAutoScalerAddOn(),
        new blueprints.addons.EfsCsiDriverAddOn(),
        new blueprints.addons.MetricsServerAddOn()
    )
    .build(app, stackId);

Kubernetes add-on customization

Each add-on points to an open-source, upstream Helm repository. EKS Blueprints includes default IAM roles for service accounts (IRSA) configuration for each add-on that makes requests to AWS APIs. If you need advanced configuration (such as a private Helm repo), you can easily override default Helm values.

For example, Docker images used by a Helm chart can be replaced in values.yaml with private Docker repos such as ECR or Artifactory. The following code demonstrates how to support advanced configuration for the AWS Load Balancer Controller add-on:

module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.0.2"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  enable_aws_load_balancer_controller = true
  aws_load_balancer_controller_helm_config = {
    name       = "aws-load-balancer-controller"
    chart      = "aws-load-balancer-controller"
    repository = "https://aws.github.io/eks-charts"
    version    = "1.3.1"
    namespace  = "kube-system"
    values = [templatefile("${path.module}/values.yaml", { 
        operating_system = "linux"
    })]
  }
}
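
For reference, the values.yaml referenced above is an ordinary Helm values file that the templatefile function renders before it is passed to the chart. A hypothetical template that pulls the controller image from a private ECR repository might look like the following (the repository URL is a placeholder, and nodeSelector consumes the operating_system variable from the example above):

# values.yaml (hypothetical template; the ECR repository is a placeholder)
image:
  repository: <account-id>.dkr.ecr.<region>.amazonaws.com/aws-load-balancer-controller
nodeSelector:
  kubernetes.io/os: ${operating_system}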

You can also configure add-ons with CDK:

const loadBalancerAddOn = new blueprints.AwsLoadBalancerControllerAddOn({
    name: "aws-load-balancer-controller",
    chart: "aws-load-balancer-controller",
    repository: "https://aws.github.io/eks-charts",
    version: "1.3.1",
    namespace: "kube-system",
    enableWaf: true, 
    values: {
        operating_system: "linux"
    } 
});

blueprints.EksBlueprint.builder()
    .addOns(loadBalancerAddOn)
    .build(app, stackId);

Worker nodes

EKS Blueprints supports provisioning EKS clusters with a variety of compute configurations including managed node groups, self-managed node groups, and AWS Fargate profiles:

module "eks_blueprints" {
  ...

  # Managed Node Groups
  managed_node_groups = {
    mg_5 = {
      node_group_name = "managed-ondemand"
      instance_types  = ["m5.large"]
      min_size        = "2"
      max_size        = "5"
    }
  }

  # Fargate Profiles
  fargate_profiles = {
    default = {
      fargate_profile_name = "default"
      fargate_profile_namespaces = [{
        namespace = "default"
      }]
      additional_tags = { ExtraTag = "Fargate" }
    }
  }
}
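
The blueprint above covers managed node groups and Fargate profiles; self-managed node groups follow the same pattern. The following sketch is based on the module’s v4 examples, so verify the attribute names against the module documentation for the version you use:

module "eks_blueprints" {
  ...
  # Self-managed Node Groups (attribute names per the module's v4 examples)
  self_managed_node_groups = {
    self_mg_5 = {
      node_group_name    = "self-managed-ondemand"
      instance_type      = "m5.large"
      launch_template_os = "amazonlinux2eks"
      min_size           = "2"
      max_size           = "5"
    }
  }
}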

You can also specify compute configuration with CDK:

...
// Managed Node Group
const props: blueprints.MngClusterProviderProps = {
  version: eks.KubernetesVersion.V1_21,
  minSize: 2,
  maxSize: 5,
  instanceTypes: [new ec2.InstanceType('m5.large')],
}
const mngClusterProvider = new blueprints.MngClusterProvider(props);

// Fargate Profile
const fargateProfiles: Map<string, eks.FargateProfileOptions> = new Map([
    ["default", { selectors: [{ namespace: "default" }] }]
]);
  
const fargateClusterProvider = new blueprints.FargateClusterProvider({
    fargateProfiles,
    version: eks.KubernetesVersion.V1_21
});
...

Multi-team clusters

If you want to allow multiple teams to run workloads in the same cluster, EKS Blueprints provides an approach for enabling soft multi-tenancy. As defined in the EKS Best Practice Guides, soft multi-tenancy leverages native Kubernetes constructs (for example, namespaces, roles, role bindings, and network policies) to create logical separation between tenants. If you have hard multi-tenancy requirements such as software-as-a-service (SaaS) providers who need to run completely isolated workloads for different customers, we recommend provisioning dedicated clusters for each customer.

For soft multi-tenancy, EKS Blueprints makes it easy to configure the teams and identities that have access to a cluster, as well as the resources those teams and identities can access. It currently supports two team types: platform and application. Platform teams represent platform administrators who have admin access to an EKS cluster. Application teams represent teams managing workloads that run in cluster namespaces. Application teams gain access to one or more dedicated namespaces in the cluster:

module "eks_blueprints" {
  ...
  application_teams = {
    team-blue = {
      "labels" = {
        "appName" = "blue-team-app",
      }
      "quota" = {
        "requests.cpu"    = "2000m",
        "requests.memory" = "4Gi",
        "limits.cpu"      = "2000m",
        "limits.memory"   = "8Gi",
      }
      users = ["arn:aws:iam::<aws-account-id>:user/team-blue-user"]
    }
  }

  platform_teams = {
    platform_admin = {
      users = ["arn:aws:iam::<aws-account-id>:user/platform-user"]
    }
  }
}

And with CDK:

import { ArnPrincipal } from 'aws-cdk-lib/aws-iam';

const applicationTeam = new blueprints.ApplicationTeam({
    name: "team-blue",
    namespaceLabels: {
        appName: "example",
    },
    namespaceHardLimits: {
        "requests.cpu"   : "1000m",
        "requests.memory" : "4Gi",
        "limits.cpu"      : "2000m",
        "limits.memory"   : "8Gi",
    },
    users: [new ArnPrincipal("arn:aws:iam::<aws-account-id>:user/team-blue-user")]
});

const platformTeam = new blueprints.PlatformTeam({
    name: "platform-admin",
    users: [new ArnPrincipal("arn:aws:iam::<aws-account-id>:user/platform-user")]
});

blueprints.EksBlueprint.builder()
    .teams(applicationTeam, platformTeam)
    .build(app, stackId);

GitOps

If you want to leverage a GitOps-based approach to deploying both add-ons and workloads into an EKS cluster, EKS Blueprints provides out-of-the-box support for deploying Argo CD. You can easily bootstrap an EKS cluster with Argo CD and one or many Argo CD Application resources.

EKS Blueprints provides two sample Argo CD repositories: a workloads repo that demonstrates how to manage workload configuration, and an add-ons repo that demonstrates how to manage add-on configuration. Both repositories follow the Argo CD App of Apps Pattern. The following sample code demonstrates how to bootstrap an EKS cluster with Argo CD and two application resources that leverage the sample repositories:

module "kubernetes_addons" {
  ...
  enable_argocd         = true
  argocd_manage_add_ons = true # Indicates that Argo CD is responsible for managing/deploying Add-ons.
  addons = {
    path               = "chart"
    repo_url           = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
    add_on_application = true
  }
  workloads = {
    path               = "envs/dev"
    repo_url           = "https://github.com/aws-samples/eks-blueprints-workloads.git"
    add_on_application = false
  }
}

And with CDK:

const stackId = "<stack_id>";

const argoBootstrapAddOn = new blueprints.ArgoCDAddOn({
    bootstrapRepo: {
        repoUrl: "https://github.com/aws-samples/eks-blueprints-workloads.git",
        path: 'envs/dev'
    }
});

blueprints.EksBlueprint.builder()
    .addOns(argoBootstrapAddOn)
    .build(app, stackId);
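
Under the App of Apps pattern, the bootstrap Application points at a repository that itself defines one Argo CD Application per workload or add-on. A minimal, hypothetical child Application manifest of the kind such a repository generates (the name, path, and namespace are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-blue-workloads
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/aws-samples/eks-blueprints-workloads.git
    targetRevision: HEAD
    path: teams/team-blue/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: team-blue
  syncPolicy:
    automated: {}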

AWS Partner collaboration

While building EKS Blueprints, we’ve worked closely with several AWS Partners to build add-ons for their products and services. By building an add-on for EKS Blueprints, our partners can lower the effort associated with bootstrapping their software into an EKS cluster with the proper configuration. Datadog, Dynatrace, HashiCorp, Kubecost, New Relic, Ondat, Rafay, Snyk, Tetrate, and Kasten by Veeam have all built add-ons that allow customers to use their products with EKS Blueprints. For other AWS Partners who are interested in building an add-on, please see the extensibility guides in the respective Terraform and CDK repositories.

Conceptually, the capabilities of EKS Blueprints are not constrained to specific tools such as CDK or Terraform. AWS Partners are free to use our tools, participate in joint development efforts through open-source collaboration, or develop their own tools. For example, Pulumi, an AWS Partner and maker of the popular IaC tool of the same name, has joined our efforts by announcing its own Pulumi flavor of EKS Blueprints, available in preview today.

We’ve also worked with several AWS Partners to create offerings that can help AWS customers use EKS Blueprints. These partners can assist customers who want to adopt and customize EKS Blueprints to suit their unique needs. 2nd Watch, Caylent, Grid Dynamics, HCL Technologies, nClouds, and Weaveworks have built dedicated offerings to help customers adopt EKS Blueprints.

Where we are heading

EKS Blueprints has been developed in open source over the past year by a passionate group of solution architects and specialists at AWS. We’ve been fortunate to work with the open-source community, our customers, and our partners to capture feedback and help shape the direction of the project. Our public roadmap is available today in both the Terraform and CDK repositories, and we want to hear from you. What additional add-ons would be useful? What new blueprints can we build?

Lastly, the EKS Blueprints community is open to everyone. We have a small but growing open-source community that is contributing to the project, and we want to grow our base of contributors. If you are interested in getting involved with the project, we welcome all contributions to Terraform or CDK projects, including bug reports, new features, corrections, or additional documentation.

Next steps

To get started with EKS Blueprints, please visit either the EKS Blueprints for Terraform or EKS Blueprints for CDK repositories. There you will find links to complete project documentation and instructions on getting started.

Availability, pricing, and support

EKS Blueprints for Terraform and CDK are available today on GitHub. They can be used to provision EKS environments in any AWS Region where EKS is currently available. EKS Anywhere support is on our roadmap.

EKS Blueprints is free to use, and you pay for only the resources you deploy. For example, when you deploy an EKS cluster with a managed node group, you will incur standard EKS and EC2 charges.

EKS Blueprints is a community-driven open-source project, not part of an AWS service, and is therefore not included in AWS enterprise support. All AWS services provisioned by EKS Blueprints, such as EKS, are fully supported. If you need help using EKS Blueprints, please create an issue in our GitHub repository. AWS Professional Services and AWS Partners are ready to help as well.

Kevin Coleman

Kevin is a Principal Container Specialist at AWS. He works with customers of all shapes and sizes who are building internal platforms on top of AWS container services.

Apoorva Kulkarni

Apoorva is a Sr. Specialist Solutions Architect, Containers, at AWS, where he helps customers build modern ML platforms on AWS container services.

Mikhail Shapirov

Mikhail is a Principal Partner Solutions Architect at AWS, focusing on container services, application modernization, and cloud management services. He helps partners and customers build their products and services on AWS using AWS container services, serverless compute, developer tools, and cloud management services. He is also a software engineer.

Vara Bonthu

Vara Bonthu is a Worldwide Tech Leader for Data on EKS at AWS, assisting customers ranging from strategic accounts to diverse organizations. He is passionate about open-source technologies, data analytics, AI/ML, and Kubernetes, and has an extensive background in development, DevOps, and architecture. His primary focus is building highly scalable data and AI/ML solutions on Kubernetes platforms, helping customers harness the full potential of cutting-edge technology for their data-driven pursuits.