
Using EKS encryption provider support for defense-in-depth

Gyuho Lee, Rashmi Dwaraka, and Michael Hausenblas

When we announced that we planned to natively support the AWS Encryption Provider in Amazon EKS, the feedback we got from you was pretty clear: can we have it yesterday? Now we’re launching EKS support for the encryption provider, a vital defense-in-depth security feature. That is, you can now use envelope encryption of Kubernetes secrets in EKS with your own master key. In this post we explain the background and walk you through how to get started.

Background

Secrets in Kubernetes enable you to manage sensitive information, such as passwords or API keys, in a Kubernetes-native way. When you create a secret resource, for example using kubectl create secret, the Kubernetes API server stores it in etcd in base64-encoded form. In EKS, the etcd volumes are encrypted at the disk level using AWS-managed encryption keys.
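
Base64 is an encoding, not encryption. As a quick illustration (using the same sample value as in the walkthrough below), anyone with read access to the secret object, or to an unencrypted etcd backup, can trivially decode it:

$ echo -n "am i safe?" | base64
YW0gaSBzYWZlPw==
$ echo -n "YW0gaSBzYWZlPw==" | base64 --decode
am i safe?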

Envelope encryption means encrypting a key with another key. Why would you want this? It is a security best practice for applications that store sensitive data, and it is part of a defense-in-depth security strategy: a (longer-term) master key stored in AWS KMS is used by the Kubernetes API server to generate data keys, which in turn are used to encrypt and decrypt the sensitive data stored in Kubernetes secrets.
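
To make the pattern concrete, here is a minimal sketch of envelope encryption done by hand with the AWS CLI. This is an illustration only, not what the EKS control plane literally runs; it assumes the alias/k8s-master-key alias we create in the walkthrough below:

# Ask KMS for a fresh 256-bit data key. The response contains the key in
# plaintext (used to encrypt your payload locally and kept only in memory)
# and the same key encrypted under the CMK; we store only the latter.
$ aws kms generate-data-key \
      --key-id alias/k8s-master-key \
      --key-spec AES_256 \
      --query CiphertextBlob --output text | base64 --decode > dek.encrypted

# Read path: have KMS unwrap the stored data key so the payload can be
# decrypted again; here we only count the bytes to show a 256-bit key comes back.
$ aws kms decrypt \
      --ciphertext-blob fileb://dek.encrypted \
      --query Plaintext --output text | base64 --decode | wc -c
32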

Up to now you had no native way to use your own master keys with EKS for envelope encryption. With this launch, you can generate keys used to encrypt the secrets stored within an EKS cluster using AWS KMS. Alternatively, you can import keys generated from another system—for example, your on-premises solution—into KMS and use them in the EKS cluster, without needing to install or operate additional software.
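
If you do bring your own key material, the KMS import flow looks roughly like the following sketch. It is heavily abbreviated: wrapping your key material with the downloaded public key is elided, and the wrapping algorithm, key spec, and <key-id> placeholder are just examples:

# Create a CMK whose key material is imported rather than generated by KMS
$ aws kms create-key --origin EXTERNAL

# Fetch the public key and import token used to wrap your key material ...
$ aws kms get-parameters-for-import \
      --key-id <key-id> \
      --wrapping-algorithm RSAES_OAEP_SHA_1 \
      --wrapping-key-spec RSA_2048

# ... wrap your locally generated key material with that public key, then:
$ aws kms import-key-material \
      --key-id <key-id> \
      --encrypted-key-material fileb://wrapped-key-material.bin \
      --import-token fileb://import-token.bin \
      --expiration-model KEY_MATERIAL_DOES_NOT_EXPIRE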

How does it work?

When you create an EKS cluster, you can enable encryption provider support by setting the “KMS Key ARN”, via the AWS CLI, the console, or using eksctl, which supports setting the key ARN via the config file.
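
For example, with the AWS CLI the relevant part is the --encryption-config parameter of create-cluster. The sketch below is not a complete command to copy verbatim: the role ARN and subnet IDs are placeholders, and $MASTER_KEY_ARN holds the ARN of the KMS key we create in the walkthrough below:

$ aws eks create-cluster \
      --name ekseprovidercon \
      --kubernetes-version 1.14 \
      --role-arn <cluster-service-role-arn> \
      --resources-vpc-config subnetIds=<subnet-1>,<subnet-2> \
      --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"'"$MASTER_KEY_ARN"'"}}]'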

Once configured, when one of your developers creates a Kubernetes secret, the encryption provider automatically encrypts it with a Kubernetes-generated data encryption key, which is itself encrypted using the provided KMS master key.

Before we get into the details, let’s quickly get on the same page concerning two core terms used in the following:

  1. CMK stands for Customer Master Key. Think of it as the keys to the kingdom; you use it to encrypt and decrypt the keys that are then actually used to protect your sensitive information, which brings us to …
  2. the DEK, which is short for Data Encryption Key and is used on a per-secret basis to encrypt and decrypt the data.

Having clarified this, let’s have a look at how, at a high level, EKS supports the encryption provider and what the write and read paths for Kubernetes secrets look like with this feature enabled.

The steps in detail are:

  1. It all starts with a user (typically in an admin role) creating a secret, for example using kubectl or GitOps style.
  2. The Kubernetes API server in the control plane generates a DEK locally, and uses this to encrypt the plaintext payload in the secret. Note that we generate a unique DEK for every single write, and also that the plaintext DEK is never saved to disk.
  3. The Kubernetes API server calls kms:Encrypt to encrypt the DEK with the CMK. This key is the root of the key hierarchy; in the case of KMS, the CMK is created on hardware security modules (HSMs). In this step, the API server uses the CMK to encrypt the DEK and also caches the base64-encoded, encrypted DEK.
  4. Finally, for the write path, the API server stores the DEK-encrypted secret in etcd.
  5. If you now want to use the secret, say, in a pod via a volume (read path), the reverse process takes place: the API server reads the encrypted secret from etcd and decrypts it with the DEK.
  6. The application, running in a pod on either EC2 or Fargate, can then consume the secret as usual.

Note that EKS support for the encryption provider is available for clusters running Kubernetes version 1.13 (platform version eks.8) and 1.14 (eks.9). No changes to the way you use secrets are required; all that is necessary is to enable encryption provider support at cluster creation.
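
If you’re not sure which version and platform version an existing cluster runs, you can check it like so (the cluster name is the one we use later in this post; the output shown is roughly what you would expect for an eligible 1.14 cluster):

$ aws eks describe-cluster \
      --name ekseprovidercon \
      --query "cluster.[version,platformVersion]" \
      --output text
1.14    eks.9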

With these basics out of the way, let’s have a look at how this works in practice.

Walkthrough

First off, you’ll need a KMS key in the same region as your cluster to use for encryption. If you don’t already have a KMS key created, you can create a key and alias using the AWS CLI as follows:

$ MASTER_KEY_ARN=$(aws kms create-key --query KeyMetadata.Arn --output text)
$ aws kms create-alias \
      --alias-name alias/k8s-master-key \
      --target-key-id $(echo $MASTER_KEY_ARN | cut -d "/" -f 2)
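
To double-check that the alias now points at the key you expect, you can describe it via the alias:

$ aws kms describe-key \
      --key-id alias/k8s-master-key \
      --query "KeyMetadata.[Arn,KeyState]" \
      --output text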

Now, we create an EKS 1.14 cluster via the AWS console. The only thing that deserves special attention is to enable secrets encryption, right below the network settings, and do remember that, at the time of writing, you can only set this at cluster creation time (that is, it is not supported via cluster config updates).
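
Once the cluster is active, you can verify that secrets encryption is enabled and references your key. The output below is roughly what to expect (key ARN truncated):

$ aws eks describe-cluster \
      --name ekseprovidercon \
      --query "cluster.encryptionConfig"
[
    {
        "resources": [
            "secrets"
        ],
        "provider": {
            "keyArn": "arn:aws:kms:us-west-2:123456789012:key/..."
        }
    }
]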

Now, add a managed node group or define a Fargate profile for a serverless data plane.

Next, we update our local Kubernetes configuration file, making the cluster accessible from the command line:

$ aws eks update-kubeconfig --name ekseprovidercon
Added new context arn:aws:eks:us-west-2:123456789012:cluster/ekseprovidercon to /Users/example/.kube/config

And with that we’re ready to use secrets with envelope encryption. For example, let’s create a secret called test-creds in the namespace encprovtest and then consume it.

First, we prepare the secret value and the target namespace:

$ echo -n "am i safe?" > ./test-creds
$ cat ./test-creds
am i safe?

$ kubectl create ns encprovtest

With the secret’s value and the namespace in place, let’s get to it:

$ kubectl create secret \
          generic test-creds \
          --from-file=test-creds=./test-creds \
          --namespace encprovtest
secret/test-creds created

At this point, the secret landed in etcd, encrypted with the DEK.

A developer could use said secret, for example via a volume mount in a pod. For demo purposes, we’re trying to read it back via the CLI like so:

$ kubectl get secret test-creds \
  -o jsonpath="{.data.test-creds}" \
  --namespace encprovtest | \
  base64 --decode
am i safe?

Yay, that worked well! But how do we know that the secret actually was encrypted when we created it and is now decrypted when we read it? Well, let’s have a look at what happened using AWS CloudTrail: if you search for the Decrypt events, you will see the cluster’s Kubernetes API server calling KMS to decrypt the data encryption key with your CMK.
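
If you prefer the command line over the CloudTrail console, a quick (if somewhat simplified) way to list recent KMS Decrypt events is the following; depending on your CloudTrail setup it can take a few minutes for the events to appear:

$ aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=EventName,AttributeValue=Decrypt \
      --max-results 5 \
      --query "Events[].CloudTrailEvent" \
      --output text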

OK, so this worked as expected; let’s now consume the secret from a pod. And to make things interesting, we will use EKS on Fargate, our serverless offering.

First, you want to make sure you have a Fargate profile configured. We are using a Fargate profile that supports pods launched in the Kubernetes namespace serverless, so check that your output looks something like this:

$ aws eks describe-fargate-profile \
      --cluster-name envencdemo \
      --fargate-profile-name fgp0
{
    "fargateProfile": {
        "fargateProfileName": "fgp0",
        "fargateProfileArn": "arn:aws:eks:us-west-2:123456789012:fargateprofile/envencdemo/fgp0/b4b84077-0074-34d0-eaab-a5e81c043ebb",
        "clusterName": "envencdemo",
        "createdAt": 1582711047.622,
        "podExecutionRoleArn": "arn:aws:iam::123456789012:role/fg-cluster-FargatePodExecutionRole-T0V7YEE2PZCM",
        "subnets": [
            "subnet-05286e168dbafbdc6"
        ],
        "selectors": [
            {
                "namespace": "serverless",
                "labels": {}
            }
        ],
        "status": "ACTIVE",
        "tags": {}
    }
}
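
If you don’t have such a profile yet, creating one could look roughly like this; the pod execution role ARN and subnet are placeholders, and the selector namespace must match the namespace we use next:

$ aws eks create-fargate-profile \
      --cluster-name envencdemo \
      --fargate-profile-name fgp0 \
      --pod-execution-role-arn <pod-execution-role-arn> \
      --subnets <private-subnet-id> \
      --selectors namespace=serverless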

Next, we’re creating the Kubernetes namespace, the target environment that the Fargate profile picks up:

kubectl create ns serverless

And now we’re in the position to create the secret in said namespace:

kubectl --namespace serverless \
        create secret generic test-creds \
        --from-file=test-creds=./test-creds 

We want to use the secret in a pod called consumesecret with the following manifest:

apiVersion: v1
kind: Pod
metadata:
  name: consumesecret
spec:
  containers:
  - name: shell
    image: amazonlinux:2018.03
    command:
      - "/bin/bash"
      - "-c"
      - "cat /tmp/test-creds && sleep 10000"
    volumeMounts:
      - name: sec
        mountPath: "/tmp"
        readOnly: true
  volumes:
  - name: sec
    secret:
      secretName: test-creds

Alright, we have everything in place, so let’s launch the pod:

kubectl --namespace serverless \
        apply -f podconsumingsecret.yaml

If everything succeeded, we should be able to see what the pod sees, that is, the secret being available in /tmp/test-creds within the container’s filesystem:

$ kubectl --namespace serverless exec -it consumesecret -- cat /tmp/test-creds
am i safe?

Great! Now that we have an understanding of how to use secrets with envelope encryption enabled in the EKS cluster, let’s have a closer look at the underlying open source project and usage costs.

Contributions and usage

In EKS, we use the open source AWS Encryption Provider to provide envelope encryption for secrets with KMS. This project is backed by the Kubernetes community and is part of the Kubernetes SIGs organization. The AWS encryption provider must run on the Kubernetes control plane, a configuration that was previously possible for self-managed Kubernetes clusters on AWS, but not for EKS clusters. We fully support the encryption provider for EKS clusters and will continue to invest in improving and maintaining the open source project along with the project maintainers, led by EKS engineers.

In terms of costs, you pay $1 per month to store each key that you create or import into KMS. KMS charges for encryption and decryption requests, with a free tier of 20,000 requests per month per account, and you pay $0.03 per 10,000 requests above the free tier per month. Do note, however, that thanks to the built-in caching capabilities of the AWS Encryption Provider not every read operation causes an actual request to KMS, which lowers your overall bill. The free tier and request pricing apply across all KMS usage in an account, so the cost of using KMS on your cluster may be affected by the usage of KMS by other clusters or AWS resources within your account.
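
As a back-of-the-envelope example with illustrative numbers: if all the workloads in your account together made 1,000,000 KMS requests in a month, the first 20,000 would be free and the remaining 980,000 would cost 98 × $0.03 = $2.94, plus $1 for storing the CMK itself.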

Let us know what you think about this new, exciting security feature and how you plan to use it, and consider contributing to the open source project kubernetes-sigs/aws-encryption-provider.

Gyuho Lee

Gyuho is an SDE in the EKS team and active in Kubernetes and other OSS projects.

Rashmi Dwaraka

Rashmi is an SDE in the EKS team involved with EKS managed cluster experience.