AWS Compute Blog
Manage Kubernetes Clusters on AWS Using Kops
A containerized application typically consists of multiple containers: one for the application itself, one for a database, possibly one for a web server, and so on. During development, it’s normal to build and test this multi-container application on a single host. This approach works fine during early dev and test cycles, but it becomes a single point of failure in production, where application availability is critical.
In such cases, a multi-container application can be deployed on multiple hosts. Customers may need an external tool to manage such multi-container, multi-host deployments. Container orchestration frameworks provide cluster management, scheduling of containers on different hosts, service discovery and load balancing, crash recovery, and other related functionality. There are multiple options for container orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.
Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.
Kubernetes and Kops overview
Kubernetes is an open source, container orchestration platform. Applications packaged as Docker images can be easily deployed, scaled, and managed in a Kubernetes cluster. Some of the key features of Kubernetes are:
- Self-healing: Failed containers are restarted to ensure that the desired state of the application is maintained. If a node in the cluster dies, then the containers are rescheduled on a different node. Containers that do not respond to application-defined health checks are terminated and rescheduled.
- Horizontal scaling: The number of containers can be scaled up and down automatically based on CPU utilization, or manually using a command.
- Service discovery and load balancing: Multiple containers can be grouped together and made discoverable using a DNS name. The service can be load balanced with integration to the native load balancer provided by the cloud provider.
- Application upgrades and rollbacks: Applications can be upgraded to a newer version without impact to the existing one. If something goes wrong, Kubernetes rolls back the change.
Kops, short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. It can also perform a rolling upgrade from an older version of Kubernetes to a newer one, and it manages the cluster add-ons. After the cluster is created, the usual kubectl CLI can be used to manage resources in the cluster.
Download Kops and Kubectl
There is no need to download the Kubernetes binary distribution to create a cluster using kops. However, you do need to download the kops CLI. It then takes care of downloading the right Kubernetes binaries in the cloud and provisioning the cluster.
The different download options for kops are explained at github.com/kubernetes/kops#installing. On macOS, the easiest way to install kops is with the Homebrew package manager.
brew update && brew install kops
The version of kops can be verified using the kops version command, which shows:
Version 1.6.1
In addition, download kubectl. This is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
Make sure to include the directory where kubectl is downloaded in your PATH.
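On macOS, for example, you can then make the binary executable and move it to a directory already on your PATH (/usr/local/bin is one common choice):

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# Verify that the client works
kubectl version --client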
IAM user permission
The IAM user to create the Kubernetes cluster must have the following permissions:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
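If you prefer to script this, the following sketch attaches the required managed policies to an existing IAM user using the AWS CLI (the user name kops-user is a hypothetical placeholder):

# Attach each managed policy required by kops to the user
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  aws iam attach-user-policy \
    --user-name kops-user \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done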
Create an Amazon S3 bucket for the Kubernetes state store
Kops needs a “state store” to store the configuration of the cluster: for example, the number of nodes, the instance type of each node, and the Kubernetes version. The state is stored during initial cluster creation, and any subsequent changes to the cluster are persisted to this store as well. As of publication, Amazon S3 is the only supported storage mechanism. Create an S3 bucket and pass it to the kops CLI during cluster creation.
This post uses the bucket name kubernetes-aws-io. Bucket names must be globally unique, so you have to use a different name. Create the S3 bucket:
aws s3api create-bucket --bucket kubernetes-aws-io
I strongly recommend enabling versioning on this bucket in case you ever need to revert to or recover a previous state of the cluster. Versioning can be enabled using the AWS CLI as well:
aws s3api put-bucket-versioning --bucket kubernetes-aws-io --versioning-configuration Status=Enabled
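You can confirm that versioning is now enabled:

# Should return "Status": "Enabled"
aws s3api get-bucket-versioning --bucket kubernetes-aws-io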
For convenience, you can also define a KOPS_STATE_STORE environment variable that points to the S3 bucket. For example:
export KOPS_STATE_STORE=s3://kubernetes-aws-io
This environment variable is then used by the kops CLI.
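To make this setting persist across terminal sessions, you can append it to your shell profile (assuming bash; adjust the file name for your shell):

echo "export KOPS_STATE_STORE=s3://kubernetes-aws-io" >> ~/.bash_profile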
DNS configuration
As of Kops 1.6.1, a top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. This is also needed for kubectl to be able to talk directly with the master.
This domain may be registered with AWS, in which case a Route 53 hosted zone is created for you. Alternatively, this domain may be at a different registrar. In this case, create a Route 53 hosted zone. Specify the name server (NS) records from the created zone as NS records with the domain registrar.
This post uses a kubernetes-aws.io domain registered at a third-party registrar.
Create a Route 53 hosted zone using the AWS CLI. The following command uses jq to parse the output, so download jq first:
ID=$(uuidgen) && \
aws route53 create-hosted-zone \
  --name cluster.kubernetes-aws.io \
  --caller-reference $ID \
  | jq .DelegationSet.NameServers
This shows an output such as the following:
[ "ns-94.awsdns-11.com", "ns-1962.awsdns-53.co.uk", "ns-838.awsdns-40.net", "ns-1107.awsdns-10.org" ]
Create NS records for the domain with your registrar. Different options on how to configure DNS for the cluster are explained at https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md.
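Before creating the cluster, it is worth confirming that the delegation works. One quick check, assuming the dig utility is installed, is to query the NS records for the subdomain:

# Should list the four Route 53 name servers shown above
dig +short NS cluster.kubernetes-aws.io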
Experimental support to create a gossip-based cluster was added in Kops 1.6.2. This post uses a DNS-based approach, as that is more mature and well tested.
Create the Kubernetes cluster
The kops CLI can be used to create a highly available cluster, with multiple master nodes spread across multiple Availability Zones. Workers can be spread across multiple zones as well. Some of the tasks that happen behind the scenes during cluster creation are:
- Provisioning EC2 instances
- Setting up AWS resources such as networks, Auto Scaling groups, IAM users, and security groups
- Installing Kubernetes
Start the Kubernetes cluster using the following command:
kops create cluster \
  --name cluster.kubernetes-aws.io \
  --zones us-west-2a \
  --state s3://kubernetes-aws-io \
  --yes
In this command:

- --zones defines the zones in which the cluster is going to be created. Multiple comma-separated zones can be specified to span the cluster across multiple zones.
- --name defines the cluster’s name.
- --state points to the S3 bucket that is the state store.
- --yes immediately creates the cluster. Otherwise, only the cluster specification is stored in the state store, and the AWS resources need to be created explicitly using the kops update cluster --yes command. If the cluster configuration needs to be edited, the kops edit cluster command can be used (see the sketch after this list).
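As a sketch of that two-step flow, you could create the configuration first, review or edit it, and then apply it:

# Store the cluster specification in the state store; nothing is created in AWS yet
kops create cluster \
  --name cluster.kubernetes-aws.io \
  --zones us-west-2a \
  --state s3://kubernetes-aws-io

# Optionally review and edit the configuration
kops edit cluster cluster.kubernetes-aws.io --state s3://kubernetes-aws-io

# Create the AWS resources and start the cluster
kops update cluster cluster.kubernetes-aws.io --state s3://kubernetes-aws-io --yes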
This starts a Kubernetes cluster with a single master and two worker nodes. The master is in one Auto Scaling group and the worker nodes are in a separate one. By default, the master is an m3.medium instance and the workers are t2.medium instances. Master and worker nodes are assigned separate IAM roles as well.
Wait a few minutes for the cluster to be created. The cluster can be verified using the command kops validate cluster --state=s3://kubernetes-aws-io, which shows the following output:
Using cluster from kubectl context: cluster.kubernetes-aws.io

Validating cluster cluster.kubernetes-aws.io

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a  Master  m3.medium    1    1    us-west-2a
nodes              Node    t2.medium    2    2    us-west-2a

NODE STATUS
NAME                                         ROLE    READY
ip-172-20-38-133.us-west-2.compute.internal  node    True
ip-172-20-38-177.us-west-2.compute.internal  master  True
ip-172-20-46-33.us-west-2.compute.internal   node    True

Your cluster cluster.kubernetes-aws.io is ready
It shows the different instances started for the cluster and their roles. If multiple cluster states are stored in the same bucket, then --name <NAME> can be used to specify the exact cluster name.
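To see which clusters are registered in a given state store, you can list them first:

kops get clusters --state s3://kubernetes-aws-io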
Check all nodes in the cluster using the command kubectl get nodes:
NAME                                         STATUS        AGE  VERSION
ip-172-20-38-133.us-west-2.compute.internal  Ready,node    14m  v1.6.2
ip-172-20-38-177.us-west-2.compute.internal  Ready,master  15m  v1.6.2
ip-172-20-46-33.us-west-2.compute.internal   Ready,node    14m  v1.6.2
Again, the internal address of each node, its role (master or node), and its uptime are shown. The key information here is the Kubernetes version of each node in the cluster, 1.6.2 in this case.
The kubectl CLI added to the PATH earlier is automatically configured to manage this cluster. Resources such as pods, replica sets, and services can now be created in the usual way.
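For example, a minimal smoke test might deploy nginx and expose it through an Elastic Load Balancer (a sketch using the kubectl idioms current for Kubernetes 1.6):

# Create a deployment with two nginx replicas
kubectl run nginx --image=nginx --replicas=2 --port=80

# Expose the deployment through an AWS Elastic Load Balancer
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Wait until EXTERNAL-IP shows the ELB address, then test it
kubectl get svc nginx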
Some of the common options that can be used to override the default cluster creation are:
- --kubernetes-version: the version of the Kubernetes cluster. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.
- --master-size and --node-size: define the instance types of the master and worker nodes.
- --master-count and --node-count: define the number of master and worker nodes. By default, a master is created in each zone specified by --master-zones. Multiple master nodes can be created by specifying a higher number with --master-count, or by specifying multiple Availability Zones in --master-zones.
A three-master and five-worker node cluster, with master nodes spread across different Availability Zones, can be created using the following command:
kops create cluster \
  --name cluster2.kubernetes-aws.io \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --master-count 3 \
  --node-count 5 \
  --state s3://kubernetes-aws-io \
  --yes
Both clusters share the same state store but have different names. This also requires you to create an additional Amazon Route 53 hosted zone for the new name.
By default, the resources required for the cluster are created directly in the cloud. The --target option can be used to generate AWS CloudFormation templates instead. These templates can then be used by the AWS CLI to create resources at your convenience.
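A sketch of that flow, assuming your kops version supports the cloudformation target (check kops create cluster --help to confirm):

# Write a CloudFormation template to the current directory instead of
# creating the resources directly
kops create cluster \
  --name cluster.kubernetes-aws.io \
  --zones us-west-2a \
  --state s3://kubernetes-aws-io \
  --target cloudformation \
  --out .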
Get a complete list of options for cluster creation with kops create cluster --help.
More details about the cluster can be seen using the kubectl cluster-info command:
Kubernetes master is running at https://api.cluster.kubernetes-aws.io
KubeDNS is running at https://api.cluster.kubernetes-aws.io/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Check the client and server version using the kubectl version command:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Both the client and the server are at version 1.6, as shown by the Major and Minor attribute values.
Upgrade the Kubernetes cluster
Kops can be used to create a Kubernetes 1.4.x, 1.5.x, or an older 1.6.x cluster using the --kubernetes-version option. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.
Or, you may have used kops to create a cluster a while ago, and now want to upgrade to the latest recommended version of Kubernetes. Kops supports rolling cluster upgrades where the master and worker nodes are upgraded one by one.
As of kops 1.6.1, upgrading a cluster is a three-step process.
First, check for and apply the latest recommended Kubernetes update:
kops upgrade cluster \
  --name cluster2.kubernetes-aws.io \
  --state s3://kubernetes-aws-io \
  --yes
The --yes option immediately applies the changes. Omitting the --yes option shows only the changes that would be applied.
Second, update the cluster’s cloud resources to match the new cluster specification. This can be done using the following command:
kops update cluster \
  --name cluster2.kubernetes-aws.io \
  --state s3://kubernetes-aws-io \
  --yes
Lastly, perform a rolling update of all cluster nodes using the kops rolling-update command:
kops rolling-update cluster \
  --name cluster2.kubernetes-aws.io \
  --state s3://kubernetes-aws-io \
  --yes
Previewing the changes before updating the cluster can be done using the same command without the --yes option, which shows output such as the following:
NAME               STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-us-west-2a  NeedsUpdate  1           0      1    1    1
nodes              NeedsUpdate  2           0      2    2    2
Using --yes updates all nodes in the cluster, first the masters and then the workers. There is a 5-minute delay between restarting master nodes, and a 2-minute delay between restarting worker nodes. These values can be altered using the --master-interval and --node-interval options, respectively.
Only the worker nodes may be updated by using the --instance-group option with the name of the node instance group.
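For example, the following sketch rolls only the worker instance group (named nodes in the earlier output) and shortens the delay between node restarts:

# Roll only the worker nodes, waiting 1 minute between restarts
kops rolling-update cluster \
  --name cluster2.kubernetes-aws.io \
  --state s3://kubernetes-aws-io \
  --instance-group nodes \
  --node-interval 1m \
  --yes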
Delete the Kubernetes cluster
Typically, the Kubernetes cluster is long-running, serving your applications. After its purpose is served, you may delete it. It is important to delete the cluster using the kops command, which ensures that all resources created for the cluster are appropriately cleaned up.
The command to delete the Kubernetes cluster is:
kops delete cluster --state=s3://kubernetes-aws-io --yes
If multiple clusters have been created, then specify the cluster name as in the following command:
kops delete cluster cluster2.kubernetes-aws.io --state=s3://kubernetes-aws-io --yes
Conclusion
This post explained how to manage a Kubernetes cluster on AWS using kops. The Kubernetes on AWS users page provides a self-published list of companies using Kubernetes on AWS.
Try starting a cluster, creating a few Kubernetes resources, and then tearing it down. Kops on AWS provides a more comprehensive tutorial for setting up Kubernetes clusters. The kops docs are also helpful for understanding the details.
In addition, the kops team hosts office hours to help you get started, including guiding you through your first pull request. You can always join the #kops channel on the Kubernetes Slack to ask questions. If nothing works, then file an issue at github.com/kubernetes/kops/issues.
Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.
— Arun