Custom AWS Lambda Runtimes were introduced at re:Invent 2018. Knative is an open source project to build, deploy, and manage serverless workloads. This post by Sebastien Goasguen explains that TriggerMesh’s Knative Lambda Runtime is a custom runtime that can run a Lambda function on Knative running on an Amazon EKS cluster.
–Arun
AWS Lambda has become the leading serverless offering worldwide. The ability to easily connect together AWS services has opened the door for true cloud-native applications, and enabled users to innovate tremendously faster. At the same time, Kubernetes has emerged as the leader in container orchestration and the leading enabler of cloud-native applications on-premise. Amazon Elastic Container Service for Kubernetes (EKS) became generally available last year.
The two worlds of serverless and containers could seem at odds, with serverless removing any infrastructure burden and putting the focus on application architecture and services, while Kubernetes still exposes containers and infrastructure concerns to its users and operators.
However, these two worlds seem bound to converge in several ways. One example is Knative, an open source project designed to use Kubernetes to build source-centric and container-based applications that can run anywhere. Knative is made up of three components: building, serving, and eventing. Building takes source code and generates a container as transparently to the users as possible. Serving takes those containers and offers features like scaling to 0. Finally, eventing offers a way to take cloud events, wherever they are, and use them to trigger functions, very much like AWS event sources can be used to trigger AWS Lambda functions.
With Knative being developed, one has to wonder whether it would be possible to run an AWS Lambda function within Knative, in an EKS cluster or even in another Kubernetes cluster (including on-premise). You might want to do this to unify workload management under Kubernetes, or to keep all workloads, including functions, under one “roof,” or perhaps you simply believe that AWS Lambda functions should become the standard for functions across serverless providers.
In this post, we want to show you how this is already a reality: you can indeed run an AWS Lambda function in Knative, using the TriggerMesh Knative Lambda Runtimes (KLR).
KLR are Knative objects called build-templates which use a custom implementation of the AWS Lambda custom runtime API. Lambda custom runtimes opened up a great integration point to the open source community. At TriggerMesh, we decided to implement what we call a function invoker to expose the Lambda custom runtime API. We then used this to build Knative templates for Go, Ruby, Node, and Python functions.
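To make the mechanics concrete, here is a minimal Python sketch of what a custom-runtime invoker does, with the HTTP transport abstracted away. The function names and single-iteration structure are our illustration, not TriggerMesh’s actual code: the real bootstrap loops forever, GETs `/2018-06-01/runtime/invocation/next` on the host in `$AWS_LAMBDA_RUNTIME_API`, and POSTs the handler’s result to `/2018-06-01/runtime/invocation/<request-id>/response`.

```python
# Illustrative sketch of a Lambda custom-runtime invoker loop (not the
# actual TriggerMesh implementation). The HTTP transport is passed in as
# two callables so the core logic is easy to follow in isolation.
import json

def process_invocation(fetch_next, post_response, handler):
    """Run one iteration of the custom runtime loop.

    fetch_next() -> (request_id, event): normally an HTTP GET to
      http://$AWS_LAMBDA_RUNTIME_API/2018-06-01/runtime/invocation/next
    post_response(request_id, body): normally an HTTP POST to
      .../2018-06-01/runtime/invocation/<request_id>/response
    """
    request_id, event = fetch_next()
    result = handler(event, None)  # handler(event, context)
    post_response(request_id, json.dumps(result))
    return result

# A real bootstrap would run this in an infinite loop:
#   while True:
#       process_invocation(http_fetch, http_post, handler)
```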
This is a very exciting development that we are eager to share with you. The rest of this post is a step-by-step walkthrough that will guide you through creating an EKS cluster and running TriggerMesh KLR functions. You will:
- Create an EKS cluster with eksctl
- Install Knative
- Install the TriggerMesh Knative CLI
- Deploy a function in EKS using TriggerMesh KLR
The picture below depicts the overall setup that we are going to put in place.
In the remainder of this post, we assume that you have previously configured your AWS CLI (aws) and your Kubernetes CLI (kubectl). If this is not the case, head over to the AWS CLI and kubectl installation documentation first.
Create an EKS Cluster
You’ll first need to set up an Amazon EKS cluster, using eksctl with its cluster config file mechanism. Start by installing eksctl and its prerequisites (the AWS CLI and aws-iam-authenticator).
With all the necessary tools installed, you can launch your EKS cluster. In this example, we’re deploying the cluster in us-east-1, the Northern Virginia region; you can replace AWS_REGION with any region that supports Amazon EKS.
Deploy Cluster
export AWS_REGION=us-east-1
export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
Once you’ve exported the region, create the ClusterConfig as follows:
cat >cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: knative
  region: ${AWS_REGION}
  version: "1.11"
nodeGroups:
  - name: knodes
    desiredCapacity: 3
    instanceType: m5.large
EOF
After the file has been created, we create the cluster and store the credentials to access it using the eksctl create cluster command:
eksctl create cluster --kubeconfig eksknative.yaml -f cluster.yaml
This will take roughly 15 minutes to complete, then you’ll have an Amazon EKS cluster ready to go.
Since you ran eksctl with the --kubeconfig option, the credentials used to access your Kubernetes cluster are stored in the file eksknative.yaml in the directory you ran the command from. This has the advantage of keeping your main kubeconfig file clean, but it has the disadvantage that you now need to specify this file on every kubectl command.
For convenience, you can set up an alias that will specify your configuration file when you use kubectl:
alias knative='kubectl --kubeconfig=eksknative.yaml'
To verify that access to your cluster is working properly, issue the following command (it should list the three nodes in your cluster):
knative get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-10-163.ec2.internal Ready <none> 1m v1.11.5
ip-192-168-12-106.ec2.internal Ready <none> 1m v1.11.5
ip-192-168-43-220.ec2.internal Ready <none> 1m v1.11.5
With a working EKS cluster on hand, you are now ready to install Knative in it.
Install Knative in Your EKS cluster
Installing Knative can be done in a few kubectl apply commands.
A default Knative installation will use the Istio service mesh. To learn more about service meshes, especially on AWS, see AWS App Mesh.
Install Istio
To install Istio, use the following two commands (note the use of our alias for kubectl --kubeconfig=eksknative.yaml, which uses the proper cluster configuration file):
knative apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio-crds.yaml && \
knative apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio.yaml
Install Knative Serving and Knative Build
Once this completes, you can move on to installing the Knative components: build and serving. In this post, we will skip the installation of the eventing component as we are not making use of it here.
knative apply --filename https://github.com/knative/serving/releases/download/v0.4.0/serving.yaml \
--filename https://github.com/knative/build/releases/download/v0.4.0/build.yaml \
--filename https://github.com/knative/serving/releases/download/v0.4.0/monitoring.yaml \
--filename https://raw.githubusercontent.com/knative/serving/v0.4.0/third_party/config/build/clusterrole.yaml
If the installation above fails, apply the manifest again. The declarative aspect of Kubernetes will ensure that all the objects get created.
After a few minutes, you should be able to list the pods in the several namespaces that the previous commands have created, namely:
- istio-system for the Istio components and the Ingress gateway that will receive Internet traffic
- knative-serving for the Serving controller and autoscaler
- knative-build for the Build controller
knative get pods -n knative-serving
NAME READY STATUS RESTARTS AGE
activator-6f7d494f55-57r9k 2/2 Running 1 2m
autoscaler-5cb4d56d69-mj4lm 2/2 Running 1 2m
controller-6d65444c78-tlpmm 1/1 Running 0 2m
webhook-55f88654fb-ppczx 1/1 Running 0 2m
knative get pods -n knative-build
NAME READY STATUS RESTARTS AGE
build-controller-68dfb74954-5d68l 1/1 Running 0 2m
build-webhook-866fd64885-95sq4 1/1 Running 0 2m
Once all the pods are in running state, congratulations! You have installed Knative on your EKS cluster.
Now let’s use it!
Get The Public DNS Name of the Ingress Gateway
As you deploy functions in your Knative cluster, you will need to know how you can reach them. The Knative installation that you did created a so-called Ingress gateway in the istio-system namespace. This gateway will be configured with a LoadBalancer type service and get a public DNS name.
Let’s find this public DNS name so that you can target it to call your functions. There are many ways to do this, but if you know jq you can easily write a query that will give it to you. jq is a command-line JSON processor that can be very handy for processing Kubernetes manifests. On OSX you can get jq via brew install jq, or go to its download page.
The DNS name will be in the status section of the Ingress gateway manifest; you can get it in a single query like this:
$ knative get svc istio-ingressgateway -o json -n istio-system | jq -r .status.loadBalancer.ingress[0].hostname
af375ca60465511e9911e02b74097eec-2072757389.us-east-1.elb.amazonaws.com
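If you don’t have jq handy, the same extraction takes only a few lines of Python. This is a hedged alternative that operates on the JSON you would get from `knative get svc istio-ingressgateway -o json -n istio-system`; the sample manifest below is inlined so it can run without a cluster.

```python
# Extract the load balancer hostname from a Service manifest without jq.
# In practice, feed ingress_hostname() the output of:
#   knative get svc istio-ingressgateway -o json -n istio-system
import json

def ingress_hostname(manifest: str) -> str:
    """Return the first load-balancer hostname in a Service manifest."""
    svc = json.loads(manifest)
    return svc["status"]["loadBalancer"]["ingress"][0]["hostname"]

# Inline sample (hostname is a placeholder) so the function can be exercised:
sample = json.dumps({
    "status": {"loadBalancer": {"ingress": [
        {"hostname": "example-1234.us-east-1.elb.amazonaws.com"}
    ]}}
})
print(ingress_hostname(sample))  # example-1234.us-east-1.elb.amazonaws.com
```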
Install the TriggerMesh CLI tm
The Knative API can be used with kubectl directly, but to simplify usage, we developed a CLI for Knative clusters. We call it tm as a shorthand for TriggerMesh.
To create functions and use our AWS-compatible Lambda runtimes, start by installing tm. You can head over to the GitHub release page and download the Linux or OSX binary for the latest release, or if you have a working Go environment you can get it with:
go get github.com/triggermesh/tm
Verify that it is in your $PATH with:
which tm
Similarly to kubectl, you need to specify a configuration file to target your EKS cluster. For convenience, you can set an alias like this:
alias tmk='tm --config=eksknative.yaml'
Verify that you can list Knative services and builds with:
tmk get services
NAMESPACE SERVICE
tmk get builds
NAMESPACE BUILD
If the command above returns properly, your TriggerMesh CLI is properly configured and talking to your EKS cluster running Knative. Now it’s time to deploy a function with it.
Deploy a Function with the Knative Lambda Runtime (KLR)
As mentioned in the introduction, TriggerMesh KLR represents a Knative way to build functions using the AWS custom runtime API, which means that you can deploy the same functions in Knative and AWS Lambda.
Python, Ruby, Node, and Go runtimes have been implemented. To show you how to create an AWS Lambda-compatible function with Knative, we will deploy a simple Python HTTP endpoint to Knative using the TriggerMesh CLI. The same Python function can be deployed to AWS Lambda.
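For reference, the handler we are about to deploy comes from the aws-python-simple-http-endpoint example in the serverless examples repository; it looks essentially like this (a close paraphrase of the example, not a verbatim copy):

```python
# Simple HTTP endpoint handler, AWS Lambda style: takes an event and a
# context, returns a status code and a JSON-encoded body.
import datetime
import json

def endpoint(event, context):
    current_time = datetime.datetime.now().time()
    body = {
        "message": "Hello, the current time is " + str(current_time)
    }
    return {
        "statusCode": 200,
        "body": json.dumps(body)
    }
```

Because the handler follows the standard Lambda signature, the KLR build template can wrap it unchanged.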
Install the KLR template
First, you need to create a Knative build template. A build template is a Knative API object which defines how you can build a container inside a Kubernetes cluster. The template also contains information to push the resulting container image to a container registry (e.g., ECR, Docker Hub). A function needs to be wrapped in an AWS custom runtime invoker and a bootstrap script (very much as described in the official documentation) and then stored as a container image.
Now create the Python KLR template on your EKS cluster with tm; you only need to do this once:
tmk deploy buildtemplate -f https://raw.githubusercontent.com/triggermesh/knative-lambda-runtime/master/python-3.7/buildtemplate.yaml
You can load a local template referenced by a file, or as shown above, use a remote URL. The command above points to the URL of the Python build template directly stored in the KLR GitHub repository.
Once the template is created, you can list it and also potentially delete it (with tm delete) to clean things up:
tmk get buildtemplates
NAMESPACE NAME READY
default knative-python37-runtime True
Set your Container Registry Credentials
As mentioned, the build template will be used to package our function as a container image. That image needs to be stored in a container registry; here we will use Docker Hub. To tell Knative where to store the image, set your Docker Hub credentials via tm with the tm set registry-auth command:
tmk set registry-auth dockerhub
Registry: index.docker.io
Username: runseb
Password: **********
Registry credentials set
This command also needs to be done only once. Once you have set your authentication credentials for a container registry, TriggerMesh will always know how to push to it.
Deploy a Function as a Knative Service with KLR Template
Now, to deploy your simple Python HTTP endpoint function, we can tell tm to create a Service. A Knative Service is different from a core Kubernetes Service: it creates a configuration and a route object, which can help do traffic splitting and store revisions of your functions. We won’t explore configurations and routes in this post, but will just show you how to deploy a service.
You will need to reference the build template that we just created so that Knative knows how to package your function. You will define a set of build arguments, namely which directory your function is in and what the handler is. You will define which registry host to use based on the tm set registry-auth command that you ran above, and finally you will specify where your function is with the -f option. That final option can point to a local directory or a remote one; in the example below we point to the serverless framework examples repository. All of this leads to the single command shown below:
tmk deploy service python-test -f https://github.com/serverless/examples \
--build-template knative-python37-runtime \
--build-argument DIRECTORY=aws-python-simple-http-endpoint \
--build-argument HANDLER=handler.endpoint \
--registry-host dockerhub \
--wait
Accessing Your KLR Function
Now that your function is successfully deployed, you can call it via the Istio Ingress Gateway DNS name, with the caveat that you need to pass a custom Host header:
curl -H 'Host: python-test.default.svc.cluster.local' http://af375ca60465511e9911e02b74097eec-2072757389.us-east-1.elb.amazonaws.com
{"statusCode": 200, "body": "{\"message\": \"Hello, the current time is 15:05:53.694014\"}"}
Congratulations, you have deployed an AWS Lambda-compatible function on Knative within your EKS cluster!
Note that this DNS name will not work for you; you will need to replace it in the command above with your own Ingress gateway’s hostname.
This command can be greatly improved by setting up a custom domain for your Knative services. This can be done following the custom domain documentation.
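If you would rather call the function from Python than curl, the sketch below builds the request, setting the Host header that Knative uses for routing (the gateway hostname is a placeholder for your own):

```python
# Build an HTTP request to a Knative service behind the Istio Ingress
# gateway. Knative routes on the Host header, so it must be set explicitly.
import urllib.request

def build_request(gateway_url: str, service_host: str) -> urllib.request.Request:
    """Return a GET request targeting a Knative service via the gateway."""
    return urllib.request.Request(gateway_url, headers={"Host": service_host})

req = build_request(
    "http://example-1234.us-east-1.elb.amazonaws.com",  # your gateway DNS name
    "python-test.default.svc.cluster.local",
)
# urllib.request.urlopen(req) would then return the function's JSON response.
```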
If you want to deploy a Go or Node or Ruby function, you can use the other KLR templates.
Finally, since Knative does not expose traffic through Amazon API Gateway, the response body of the function may need to be modified slightly. We are currently investigating how to modify our KLR templates to send the same response that API Gateway would send.
Summary
In summary, here is what we just did:
- We created a Kubernetes cluster on EKS using the eksctl cli.
- We deployed Knative components and Istio via kubectl cli.
- We then used the TriggerMesh CLI tm to create a Knative service.
- The Knative service created a network Route object which allowed us to call the function via the Istio Ingress gateway. The function was transparently packaged as a container image using the KLR template which provided the AWS Lambda custom runtime API. When the function is called, Knative scales the function to at least one replica, and a Kubernetes Pod appears in the cluster.
Delete Your Cluster
Now that you are done experimenting with Knative and the TriggerMesh Knative Lambda Runtimes, you can clean everything up by deleting your cluster with a single command:
eksctl delete cluster --name knative
To be on the safe side, make sure that the CloudFormation stacks get properly deleted.
Conclusion
In this very detailed post we showed you how to:
- Create an EKS cluster with eksctl.
- Install Knative in your EKS cluster.
- Install the TriggerMesh Knative CLI.
- Deploy a function in Knative using the TriggerMesh Lambda runtimes, which are compatible with AWS Lambda.
Knative Lambda Runtimes have been enabled by the AWS custom runtime APIs announced at re:Invent 2018. KLR are AWS Lambda custom runtimes that can be used on Kubernetes. The function invoker which provides the custom runtime API is available on GitHub. If you head over to that repository you will see how the AWS custom runtime examples for Rust and C++ are also possible in Knative.
This is a very exciting development made possible by the active open sourcing of AWS technology. While AWS Lambda clearly is the preferred way to run functions that link AWS services, we can envision a world where AWS Lambda functions are deployed on other clouds and even on-premise, making Lambda into a critical standard of the serverless paradigm even outside the AWS cloud.
Don’t hesitate to send us any feedback by filing an issue or reaching out to us on Twitter at @triggermesh.
Sebastien Goasguen
Sebastien Goasguen is a twenty year open source veteran. Former vice-president of the Apache CloudStack project, in 2016 he founded Skippbox, a Kubernetes based startup, where he led the development of kompose, cabin and kubeless. He co-founded TriggerMesh in the fall of 2018, a serverless management platform which builds on top of Knative and Kubernetes. Sebastien has written over 70 scientific papers in a past life and is the author of the O’Reilly Docker Cookbook and co-author of the Kubernetes Cookbook.
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.