Running microservices in Amazon EKS with AWS App Mesh and Kong
NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with Amazon EKS and its examples no longer work as shown. Please refer to newer content on Amazon VPC Lattice.
This post was created in collaboration with Claudio Acquaviva, Solution Engineer, Kong, and Morgan Davies, Kong Alliances.
A service mesh is a transparent infrastructure layer that has become a common architectural pattern for intra-service communication. By combining Amazon EKS and AWS App Mesh, you form a powerful platform for your microservices, addressing technical requirements that occur in service-to-service communication, including load balancing, service discovery, observability, access control, tracing, health checks, and circuit breakers.
A modern enterprise solution requires clear management controls for the following categories:
- API management, covering external traffic ingress to the API endpoints.
- Service management capabilities, focusing on operational controls and service health.
While service meshes primarily address the second category, the ingress is no less important and can benefit from a solution that supports cluster-wide policies such as throttling, application and user authentication, request logging and tracing, and data caching. In addition to these policies, the ingress is the layer that enables you to monetize your APIs by capturing usage, attaching billing systems, and generating alerts that go beyond operational concerns.
While it is possible to achieve this by stitching together tools outside of the cluster perimeter, the Kong for Kubernetes Ingress Controller provides a solution that will protect your service mesh while running side by side with your application services, leveraging Kubernetes capabilities like HPA, self-healing, RBAC, and cert-manager, among others.
This post will explore how to use Amazon EKS, AWS App Mesh, and Kong for Kubernetes to implement and protect a service mesh. The problem space that we will address is not just about managing your APIs and external traffic; we will also cover deeper integration scenarios. Not only will we handle the ingress in a Kubernetes-native way, we will also make it part of the service mesh itself, improving your observability, security, and traffic control.
Enter AWS App Mesh and Kong for Kubernetes Ingress Controller
AWS App Mesh is a fully managed service that customers can use to implement a service mesh. This service makes it easy to manage internal service-to-service communication across multiple types of compute infrastructure. Kong for Kubernetes is responsible for controlling the traffic going through the ingresses that expose the service mesh to external consumers by defining, applying, and enforcing policies to the ingresses.
Kong for Kubernetes supports the following capabilities:
- Scalability: Based on the Kong API gateway, it’s responsible for managing the ingresses. It is common for applications to experience significant fluctuations in traffic volume, which affect your ingress as well. Kong for Kubernetes takes advantage of standard Kubernetes scalability controls like the Horizontal Pod Autoscaler (HPA) and scales seamlessly with demand.
- Security: Leverages the Kubernetes namespace-based RBAC model to ensure consistent access controls. These controls are essential to segregate responsibilities between the platform, API, and application teams, each of which handles its part of software delivery and operations. For example, application teams restricted to their individual namespaces must still be able to define ingress objects, while access to the ingress controller and API management components can be restricted to the dedicated team(s).
- Extensibility: An extensive plugin ecosystem offers a variety of options to protect your service mesh, such as OpenID Connect and mutual TLS authentication and authorization, rate-limiting, IP restrictions, and self-service credential registration through the Kong Enterprise Developer Portal.
- Observability: It can be fully integrated with monitoring, tracing, and logging tools like Prometheus, Jaeger, and Amazon CloudWatch.
The following diagram describes the Kong for Kubernetes architecture:
The Kong for Kubernetes pod contains two containers:
- The Kong Gateway container represents the data plane responsible for processing API traffic and enforcement of policies defined by the ready-to-use plugins available in Kong for Kubernetes.
- The controller container represents the control plane that translates Kubernetes manifests and CRDs into Kong configuration, removing the need for separate administration of proxy and Kubernetes configuration.
In front of Kong for Kubernetes, there is a Classic Load Balancer (CLB) or Network Load Balancer (NLB) exposing the Kong Gateway to external consumers. Furthermore, Kong for Kubernetes protects all services behind it, including ClusterIP services running inside the Kubernetes cluster and external services exposed in your cluster.
Prerequisites
Before starting this process, ensure the following prerequisites are ready:
- An EKS 1.15 or higher cluster is already deployed. For this exercise, eksctl was used.
- kubectl 1.15 or higher installed locally
- Helm v3
- curl or any other HTTP client
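For reference, a cluster like this could be created with eksctl along the following lines (the cluster name, region, and node count are placeholders, not values from this post):

```bash
# Hypothetical example: create an EKS cluster for this walkthrough
eksctl create cluster \
  --name dj-app-cluster \
  --region us-west-2 \
  --version 1.17 \
  --nodes 3
```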
Solution deployment
The deployment is an evolution of the DJ Service Mesh Application, adding the ingress controller layer on top of it. Kong for Kubernetes provides an extensive list of plugins to implement numerous policies, such as authentication, log processing, caching, and more.
To get started, let’s implement an API key-based security layer and rate-limiting policies to control the ingress consumption.
Step 1: Deploy your DJ service mesh application
Follow the steps described in the EKS workshop to deploy the DJ service mesh application.
These steps will install a simple solution consisting of the metal and jazz microservices (two versions of each) and install the AWS App Mesh components, including CRDs and the App Mesh controller, which, among other functions, acts as an admission controller, injecting an Envoy proxy sidecar into the deployed pods. The prod namespace will be configured to enable automatic injection of sidecar containers, providing the required level of abstraction to control traffic policies.
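As a hedged illustration, enabling automatic injection on the prod namespace amounts to labeling it for the mesh and the injector webhook (label names follow the App Mesh controller conventions of this release; verify them against the workshop manifests):

```bash
# Assumed labels: mesh name "dj-app" and the App Mesh sidecar injector toggle
kubectl label namespace prod \
  mesh=dj-app \
  appmesh.k8s.aws/sidecarInjectorWebhook=enabled
```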
The meshification of the application results in an abstraction of the communication between the DJ pod and the existing microservices. Instead of going directly to the endpoints of the jazz and metal microservices, the API flow will go through the new Jazz and Metal virtual services, which in turn use the new virtual router objects to control the traffic. At the end of the process, the logical architecture will look like this:
Before we start, let’s check what virtual services, virtual routers and virtual nodes are available in the cluster:
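The App Mesh controller registers these as namespaced custom resources, so a single kubectl call lists them:

```bash
kubectl get virtualservices,virtualrouters,virtualnodes -n prod
```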
Note that the DJ node is redundant, since it does not route any traffic and mainly serves testing purposes. What is important is that we have used App Mesh to implement a canary release of the jazz-v2 and metal-v2 services, routing 5% of all traffic to version 2 of both services, as the excerpt below shows.
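A hedged excerpt of what such a weighted route looks like in the VirtualRouter spec (names follow the DJ application; the authoritative manifests live in the workshop repository):

```yaml
# Excerpt: 95/5 canary on the jazz virtual router
routes:
  - name: jazz-route
    httpRoute:
      match:
        prefix: /
      action:
        weightedTargets:
          - virtualNodeRef:
              name: jazz-v1
            weight: 95
          - virtualNodeRef:
              name: jazz-v2
            weight: 5
```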
Why is this important? Regardless of how your services are consumed, through the external ingress or internally by other services, all communication will honor the canary release, sending 5% of all traffic to the new version.
That is all great, but how do you test the canary release as part of your development process? You can run curl commands from the DJ pod, but that will not work for your CI/CD pipeline and proper API testing strategies, which demand that the consumer of the API be external (similar to the real world).
This highlights the fact that the solution in its current state should be improved if you are planning to expose your APIs to the outside world. Proper API management demands that we add controls over API clients (identified by an API key), rate limiting, and, eventually, usage capturing and billing.
Step 2: Deploy Kong for Kubernetes Ingress Controller
In this blog post, we will replace the redundant DJ node with Kong for Kubernetes, defining an ingress object that will expose all of our services to external consumers.
The following diagram shows the final topology:
Kong for Kubernetes namespace
The Kubernetes namespace where Kong for Kubernetes components will reside has to be configured with proper labels in order to make it part of the existing mesh and to apply automatic injection of the App Mesh sidecar:
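A minimal sketch, assuming the namespace is named kong and the injector uses the same labels as the prod namespace:

```bash
kubectl create namespace kong
kubectl label namespace kong \
  mesh=dj-app \
  appmesh.k8s.aws/sidecarInjectorWebhook=enabled
```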
Kong for Kubernetes virtual node
Virtual node declaration for Kong for Kubernetes:
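A hedged sketch of the declaration; the pod selector label, listener port, and DNS hostname are assumptions based on the Kong Helm chart defaults (release name kong in the kong namespace):

```yaml
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: kong
  namespace: kong
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kong      # assumed label set by the Kong Helm chart
  listeners:
    - portMapping:
        port: 8000                      # Kong proxy port
        protocol: http
  backends:
    - virtualService:
        virtualServiceRef:
          name: jazz
          namespace: prod
    - virtualService:
        virtualServiceRef:
          name: metal
          namespace: prod
  serviceDiscovery:
    dns:
      hostname: kong-kong-proxy.kong.svc.cluster.local   # assumed service name
```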
Notice that the declaration is:
- Selecting the Kong for Kubernetes pod that will be installed in the next step.
- Defining the Jazz and Metal services as the backends, since they are the only allowed ingress points.
- Setting the Kong for Kubernetes service FQDN as the DNS service discovery hostname.
Use kubectl to submit it:
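For example, assuming the declaration above was saved as kong-virtualnode.yaml (a hypothetical filename):

```bash
kubectl apply -f kong-virtualnode.yaml
```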
Check the virtual nodes again. The presence of the Kong for Kubernetes-specific node means it has been incorporated into the “dj-app” mesh:
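For example:

```bash
kubectl get virtualnodes --all-namespaces
```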
Kong for Kubernetes installation
We will use Helm to add the kong chart repository and install Kong for Kubernetes:
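A sketch of the installation, assuming the release is named kong and lands in the kong namespace created earlier:

```bash
helm repo add kong https://charts.konghq.com
helm repo update
# On older chart versions with Helm v3, CRD installation by the controller
# may need to be disabled: --set ingressController.installCRDs=false
helm install kong kong/kong --namespace kong
```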
By default, App Mesh does not allow egress from the mesh except to the virtual nodes explicitly defined in it. This prevents the ingress controller container from communicating with the Kubernetes API server, so the deployment of the Kong pod would keep failing. To address this, we will use an App Mesh feature that allows containers running under a security context with (by default) UID 1337 to bypass egress filtering. Setting this security context option on the ingress-controller container enables it to communicate with the API server, while the rest of the mesh remains restricted to communicating only with the nodes defined in the mesh:
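A hedged sketch of the patch; the deployment and container names follow the Kong Helm chart defaults and should be verified with kubectl get deploy -n kong:

```bash
# Strategic merge: only the ingress-controller container gets runAsUser 1337
kubectl patch deployment kong-kong -n kong -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"ingress-controller","securityContext":{"runAsUser":1337}}]}}}}'
```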
Once the patch is applied, let’s validate the deployment and make sure that all containers are running:
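For example:

```bash
kubectl get pods -n kong
```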
Since the provisioned service that exposes the ingress is of “type: LoadBalancer”, we get an ELB instance along with it:
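For example (the service name is assumed from the Helm release):

```bash
kubectl get service kong-kong-proxy -n kong
```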
Verify that the Envoy sidecar has been injected into the Kong for Kubernetes pod by listing all containers and images in the pod:
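One way to list them, as a sketch using jsonpath:

```bash
kubectl get pods -n kong \
  -o jsonpath='{range .items[*].spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'
```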
Step 3: Define Ingress to expose and protect the service mesh
Service configuration
Kong for Kubernetes configures targets based on the endpoints of the corresponding Kubernetes service. That means Kong communicates with the pods directly, using the service abstraction only as a discovery mechanism for the endpoints. This allows Kong to bypass the extra hop through kube-proxy and apply optimized load-balancing algorithms.
For AWS App Mesh, it is important to note that service communication is only allowed through fully qualified domain names of services. For example, routing to jazz.prod.svc.cluster.local is permitted; however, directly invoking the service by its host name (e.g. curl jazz:9080) will not succeed. The ability to address services by host name is on the AWS App Mesh roadmap, so in the future this setup will be a lot simpler.
Luckily, Kong allows us to supply configuration that addresses both constraints. The following command does two things: it sets up the service as an upstream for the Kong ingress, and it instructs Kong to route using the FQDN instead of the service host name, as sketched below.
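A hedged sketch using Kong’s service-upstream annotation, which makes Kong proxy to the service’s cluster IP so the Envoy sidecar can route by FQDN (confirm the annotation name against your Kong ingress controller version):

```bash
kubectl annotate service jazz -n prod ingress.kubernetes.io/service-upstream="true"
kubectl annotate service metal -n prod ingress.kubernetes.io/service-upstream="true"
```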
Kong for Kubernetes Ingress
The next step is to define the Kubernetes ingress object with the routing rules that will expose both the jazz and metal virtual services to external traffic, with proper security and traffic controls provided by Kong. The ingress object is a standard Kubernetes object:
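A hedged sketch of such an ingress; the object name, paths, and port are assumptions consistent with the DJ application, and the API version matches Kubernetes 1.15-era clusters:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dj-ingress          # hypothetical name, reused in later commands
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/strip-path: "true"
    konghq.com/override: do-not-preserve-host
spec:
  rules:
    - http:
        paths:
          - path: /dj/jazz
            backend:
              serviceName: jazz
              servicePort: 9080
          - path: /dj/metal
            backend:
              serviceName: metal
              servicePort: 9080
```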
- The annotation konghq.com/strip-path: "true" removes the matched path (like “/dj/jazz”) from the request before sending it to the target virtual service.
- The annotation konghq.com/override: do-not-preserve-host points to the configuration object that removes the original host from the request. Combined with the FQDN annotation applied to the service, it allows the sidecar to route the request based on the right authority.
The configuration option “do-not-preserve-host” referenced in the ingress definition refers to the following:
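A minimal sketch of that object, following Kong’s KongIngress CRD:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: do-not-preserve-host
  namespace: prod
route:
  preserve_host: false
```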
Apply this configuration option to the ingress in the prod namespace:
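For example, assuming the object above was saved as do-not-preserve-host.yaml:

```bash
kubectl apply -f do-not-preserve-host.yaml
```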
Check the ingress with kubectl as well. Notice that this ingress is using the same load balancer as the one provisioned for the Kong proxy:
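For example:

```bash
kubectl get ingress -n prod
```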
Consume the ingress using the external address specified for the ingress:
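For example, substituting the load balancer address provisioned for the Kong proxy:

```bash
curl http://<ELB-address>/dj/jazz
```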
Run a loop to see the canary release in action:
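A simple sketch; with the 95/5 split, roughly one response in twenty should come from jazz-v2:

```bash
for i in $(seq 1 20); do
  curl -s http://<ELB-address>/dj/jazz
  echo
done
```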
You can change the URL path to /dj/metal and do the same for the metal service.
With this step, we have exposed our services externally while preserving the canary functionality, and we have gained an important ingress and API management layer on top of our service mesh. The ingress fully implements the DJ functionality, routing traffic to the underlying virtual services and allowing us to drop the DJ virtual node.
Step 4: Apply rate-limiting policy
With the ingress in place, it’s necessary to define policies to control its consumption. The first one is rate-limiting. The process to apply policies to an ingress is very simple:
- Declare and create a policy
- Patch the ingress with an annotation
The rate-limiting policy shown below will allow three requests per minute:
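A hedged sketch of the plugin object; the object name is an assumption, while the config fields follow Kong’s rate-limiting plugin:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-minute       # hypothetical name, referenced by the ingress annotation below
  namespace: prod
config:
  minute: 3
  policy: local
plugin: rate-limiting
```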
Apply this policy:
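For example, assuming the policy above was saved as rl-by-minute.yaml:

```bash
kubectl apply -f rl-by-minute.yaml
```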
Once the policy is created, it can be applied to the ingress that needs it:
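A sketch using the konghq.com/plugins annotation; the ingress name matches the hypothetical dj-ingress defined earlier:

```bash
kubectl patch ingress dj-ingress -n prod -p \
  '{"metadata":{"annotations":{"konghq.com/plugins":"rl-by-minute"}}}'
```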
If you try to consume the service more than three times a minute, you will receive an error. Note that with the Kong ingress you can control traffic to any DJ service that becomes part of the underlying implementation (assuming we later add services like “classical” or “country”).
You can refer to this page to check the plugins provided by Kong to integrate with an extensive list of log processing, real-time monitoring, and tracing tools.
Step 5: Define the API key security policy
Similarly to the rate-limiting policy, we will need to create the policy first and then apply this policy to the ingress:
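A minimal sketch of the policy; the object name is an assumption, and key-auth is the Kong plugin that enforces API keys:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: apikey             # hypothetical name
  namespace: prod
plugin: key-auth
```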
Apply the API key policy by adding another annotation to the ingress. Notice that this time we’re applying both policies to it:
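For example, listing both plugin objects in the annotation:

```bash
kubectl patch ingress dj-ingress -n prod -p \
  '{"metadata":{"annotations":{"konghq.com/plugins":"rl-by-minute, apikey"}}}'
```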
You will now see an error when trying to consume the ingress:
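For example (the body shown is key-auth’s typical rejection message):

```bash
curl http://<ELB-address>/dj/jazz
# HTTP 401: {"message":"No API key found in request"}
```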
Provision a key and associate it to a consumer:
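A hedged sketch; the secret name and key value are placeholders, and the kongCredType/key fields follow the Kong ingress controller’s credential-secret convention:

```bash
kubectl create secret generic consumerapikey -n prod \
  --from-literal=kongCredType=key-auth \
  --from-literal=key=kong    # placeholder key value
```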
Let’s create the consumer and associate it with the consumerapikey:
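A sketch of the KongConsumer object (the username is an assumption):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: consumer
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: kong
username: consumer
credentials:
  - consumerapikey
```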
Now that the consumer can be identified, let’s consume the ingress with the API key (note the header passed to the curl command):
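For example, passing the placeholder key from the secret above in the default apikey header:

```bash
curl http://<ELB-address>/dj/jazz -H 'apikey: kong'
```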
Conclusion
Kong for Kubernetes and AWS App Mesh make it easy to run services by providing consistent visibility and network traffic controls for services built across multiple platforms. You can learn more about products showcased in this blog through the official documentation: AWS App Mesh and Kong for Kubernetes.
In upcoming posts, we will show even deeper integration with the AWS ecosystem by adding observability, tracing, and monitoring to the picture, covering integration with AWS X-Ray and Elasticsearch. Additionally, we will focus on authentication and authorization with OIDC and Amazon Cognito.
Feel free to change the policies used in this post and experiment further implementing policies like caching, log processing, OIDC-based authentication, canary, GraphQL integration, and more with the extensive list of plugins provided by Kong.
All the declarations used in this post are available on GitHub.