Containers

How to optimize log management for Amazon EKS with Amazon FSx for NetApp ONTAP

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments. Among cloud native patterns, containers and Kubernetes have become mainstream across many industries. According to the Cloud Native Computing Foundation Annual Survey of 2022, 44% of respondents are already using containers for nearly all applications and business segments, and another 35% say containers are used for at least a few production applications.

The Kubernetes in the Wild 2023 report showed that in 2021, in a typical Kubernetes cluster, application workloads accounted for most of the Pods (59%). In contrast, the non-application workloads (system and auxiliary workloads) played a relatively smaller part. In 2022, this picture was reversed. Auxiliary workloads outnumbered application workloads (63% vs. 37%) as organizations increasingly adopted advanced Kubernetes platform technologies, such as security controls, service meshes, messaging systems, and observability tools. The total number of auxiliary workloads in a typical Kubernetes cluster grew by 211% YoY, while the total number of application workloads grew by 30% YoY. Optimizing the infrastructure for auxiliary workloads releases more resources to support business applications, improve performance and service levels, and keep operational costs under control.

Sidecar container sprawl

Sidecar containers are a common method to capture different log streams in a Kubernetes environment. They manage log files, store them in persistent storage, and push them to the respective observability application. Although they help keep application containers small and agent-free, their use at scale increases the cluster resources consumed by auxiliary workloads. Moreover, the resources are often duplicated across Pods: if you have 100 Pods, then you may need 100 sidecar containers, each consuming compute and memory resources and adding to your overall usage.

Instead of a sidecar container, consider using AWS storage services for log file persistence. Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), and the Amazon FSx file services have Container Storage Interface (CSI) drivers that provide persistent volumes for Kubernetes containers. Volumes and shares can be provisioned and mounted through kubectl and eksctl, making it easier to automate and operate at scale. For storing log files, choosing file services (Amazon EFS or Amazon FSx) reduces the number of entities to manage: one volume can be shared as a persistent volume claim across each Kubernetes Namespace, which reduces the need to provision a volume per sidecar container and the associated management operations.

In large-scale deployments, a storage volume per Namespace could still mean a lot of connectivity and operations to configure. To overcome this operational overhead, we need a storage solution that can attach to multiple nodes and make its volume available across Namespaces. This is where Amazon FSx for NetApp ONTAP provides capabilities to address the multi-attach and cross-namespace challenges mentioned previously.

FSx for ONTAP is a fully managed ONTAP file system with the following capabilities:

  • Kubernetes integration: NetApp Astra Trident CSI enables provisioning and operations through Kubernetes APIs for both block (iSCSI) and file (NFS and/or SMB) volumes.
  • Capacity efficiency: native data deduplication and compression reduce the amount of space needed to consolidate multiple log stream files by up to 70%.
  • Intelligent data replication: space-saving snapshots and clones for ease of restore and environment generation, all managed through kubectl.
  • Cost optimization: capacity tier with low-cost storage for continuous optimization.
  • High availability options: deploy file systems in Single-Availability Zone (AZ) or Multi-AZ configurations for quick failover, and allow containers in different AZs to write to a single volume.

The NetApp Astra Trident driver extends these capabilities with the TridentVolumeReference resource feature. With this resource, a PersistentVolumeClaim (PVC) for a given Namespace can be mounted by containers in multiple Namespaces. The logs across multiple Namespaces can be written to the same shared volume, and then the log files can be read by a log aggregator engine in a dedicated Namespace. This solution reduces the operational overhead of deploying, configuring, and maintaining sidecar logging agents across the infrastructure. It also reduces the compute and memory overhead that each sidecar container would claim on multiple Pods, as well as the number of open connections and network traffic.

The multi-log problem

An application can have multiple log streams for different purposes, such as general stdout logs, access logs, and audit logs. Furthermore, different applications might create their own log streams to suit their observability needs.

Each log stream has its own format, frequency, and permissions configuration, but we still need to collect these logs and push them to our log aggregation engine. The main challenge is that these logs are located on disposable Pod storage and cannot be accessed or streamed in the same way as the default log streams from stdout/stderr.

Amazon EKS Cluster setup where each pod writes logging to stdout

In a common log architecture (as shown in the preceding figure), you use a DaemonSet that runs the collector agent to stream the logs to the central logging system. However, working with this architecture can prove challenging when each application saves different log types (format, write frequency, etc.). The operations team would have to know when each new log type is added and understand from the application developers how to handle and parse it.

There are a few ways to deal with this challenge. The first is to use a sidecar container running the log collector engine to stream these additional log files, and the second is to save these additional files in a PVC for persistence and then stream them into the log engine.

Solution A – Sidecar container streaming

A sidecar container runs as an additional application with a dedicated role and functionality inside the application Pod. In our case, that would be a log collection agent that streams the additional log files from each application Pod directly into our log collection engine. This is achieved using a local mount shared by both the main container and the sidecar (marked pink in the following figure).
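
The following is a minimal sketch of this pattern: an application container and a Fluent Bit sidecar sharing a local emptyDir mount. The image names and mount paths are illustrative and are not taken from a specific application.

apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  containers:
    - name: app
      image: my-app:latest                # hypothetical application image
      volumeMounts:
        - name: app-logs                  # local mount shared with the sidecar
          mountPath: /var/log/app
    - name: log-collector                 # sidecar running the log collection agent
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}                        # ephemeral volume shared by both containers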

Amazon EKS Cluster setup where Pods have sidecar collectors to send logs to remote logging systems

Upsides:

  • Stream individual log files as needed by each application

Downsides:

  • Resources used by the sidecars are duplicated, meaning if you have 100 Pods you need 100 sidecars, each consuming its own resources (RAM, CPU).
  • Configuration, deployment, and maintenance of the sidecar component
  • Open connections by tens or hundreds of sidecars to log aggregation engines

Solution B – Collect logs in PVC

Since a PVC is a namespaced object, we can create one volume per namespace that holds all of the applications’ log streams in that namespace, as shown in the following figure. We also need a single collector per namespace to stream the data from this namespaced volume to the target logging system. Therefore, we must configure the PVC with the ReadWriteMany access mode.
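
As an illustration, such a per-namespace claim could look like the following sketch, assuming an NFS-backed StorageClass; the names and size here are illustrative.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ns1-logs                       # one claim per namespace (illustrative name)
  namespace: ns1
spec:
  accessModes:
    - ReadWriteMany                    # lets every application Pod and the collector mount the same volume
  storageClassName: trident-csi-nas    # assumes an NFS-backed StorageClass
  resources:
    requests:
      storage: 10Gi                    # illustrative size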

Upsides:

  • No sidecar containers, meaning a more centralized approach in log collection and shipment
  • Low resource overhead, meaning one log agent per namespace

Downsides:

  • A PVC per namespace can be expensive and inefficient
  • Need to manage storage operations and many volumes

Amazon EKS Cluster setup where each namespace has a Kubernetes PVC that is being used by a single collector to send to remote logging systems

Solution C – Sharing scalable volume across namespaces

This solution is an evolution of the previous one (Solution B). It uses functionality unique to FSx for ONTAP that removes the need to manage a separate PVC per namespace.

ONTAP’s CSI driver has the ability to share PVCs across namespaces (see this Astra Trident doc). It does this using a CustomResourceDefinition (CRD) called TridentVolumeReference that enables referencing an existing PVC. With this we can consolidate all of our applications’ logs into a single volume and run a single collector that has access to that volume. In turn, we not only reduce the number of volumes to manage, but we also run a single collector, which reduces excess compute resource usage in the cluster. Since FSx for ONTAP is a managed ONTAP solution, it can be used with your Amazon Elastic Kubernetes Service (Amazon EKS) cluster to achieve this architecture, as shown in the following figure.
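
A TridentVolumeReference is a small custom resource created in the consuming namespace that points back at the source PVC. A minimal sketch with illustrative names is shown below; refer to the Astra Trident documentation for the exact fields supported by your driver version.

apiVersion: trident.netapp.io/v1
kind: TridentVolumeReference
metadata:
  name: shared-logs-reference        # illustrative name
  namespace: app-namespace           # the namespace that consumes the shared volume
spec:
  pvcName: logs-pvc                  # source PVC to reference (illustrative name)
  pvcNamespace: fluent               # namespace that owns the source PVC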

Amazon EKS Cluster setup with a shared storage across multiple namespaces and a single log collector that ships logs to remote logging systems

Upsides:

  • Single volume for all of our applications regardless of which namespace they’re running in
  • No sidecar containers
  • No excess compute resource consumption, meaning one logging agent per cluster
  • Volume configuration can be managed by platform builders, where applications only need to reference that volume (using TridentVolumeReference)

Downsides:

  • Careful configuration of mount paths is required for Pods to support multi-tenancy (such as a naming convention for a path per namespace)
  • Reliance on objects outside the standard CSI-supported objects (TridentVolumeReference)

Getting Started

In the next section, we walk you through a quick-start implementation of this solution.

Prerequisites

The following prerequisites are required to complete this section:

Trident CSI EKS Add-on: to use the Trident CSI EKS add-on, you must first subscribe to it (note that subscribing to and using the add-on doesn’t incur a cost, as it’s a free add-on under the Amazon EKS managed add-ons). To do that, head to this AWS Marketplace link and subscribe to the add-on, as shown in the following image.

Amazon EKS Managed Add-on for NetApp Astra Trident CSI Driver

Walkthrough

  1. Setting up your file system and Amazon EKS using Terraform
    • EKS cluster version: 1.29
    • FSx for ONTAP file system and storage virtual machine (SVM). You can read more about manually creating these resources in the FSx for ONTAP getting started guide.
    • Configure Trident CSI backend to FSx for ONTAP.
  2. Create shared log collection infrastructure
    • Create the shared PVC for the logs.
    • Install the Fluent Bit log collector so that it mounts the shared PVC to the Fluent Bit Pods.
  3. Deploy the sample application
    • Create a PVC that references the shared log PVC.
    • Run the sample application and log events.
    • Verify the log stream with Fluent Bit.

Create an EKS cluster and FSx for ONTAP file system using Terraform

In this section of the post, we create and configure an FSx for ONTAP file system with Amazon EKS and the Trident CSI driver for persistent storage. We run all of the following commands from the machine set up in the Prerequisites section. Clone the sample repository from GitHub and create the relevant resources using the Terraform code in that repository:

git clone https://github.com/aws-samples/amazon-eks-fsx-for-netapp-ontap
cd amazon-eks-fsx-for-netapp-ontap/eks-logs-fsxn/terraform
terraform init
terraform apply -auto-approve

This process can take 30–45 minutes to complete. When finished, the output of the command should look like the following:

fsx-ontap-id = "<fs-id000000123456789>"
fsx-password = <GENERATED-PASSWORD>
fsx-svm-name = "ekssvm"
region = "us-east-2"
secret_arn = "arn:aws:secretsmanager:us-east-2:<ACCOUNT-ID>:secret:fsxn_password_secret-<RANDOM-STRING>"
zz_update_kubeconfig_command = "aws eks update-kubeconfig --name fsx-eks-<RANDOM-STRING> --region us-east-2"

Next, copy and run the AWS CLI command from the preceding zz_update_kubeconfig_command output, then check that we can reach the cluster by running kubectl get nodes:
kubectl get nodes

NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-156.us-east-2.compute.internal   Ready    <none>   54d   v1.27.9-eks-5e0fdde
ip-10-0-2-221.us-east-2.compute.internal   Ready    <none>   54d   v1.27.9-eks-5e0fdde

Next, use kubectl to make sure the FSx for ONTAP Trident CSI driver is up and running:

kubectl get pods -n trident
NAME                                   READY   STATUS    RESTARTS   AGE
trident-controller-659fcc67bc-wvb96    6/6     Running   0          25s
trident-node-linux-b246t               2/2     Running   0          25s
trident-node-linux-r4kdm               2/2     Running   0          25s
trident-operator-bf899d55f-ln8t9       1/1     Running   0          33s

Verify that the Trident CSI driver is configured to use the FSx for ONTAP file system

Since the Terraform configuration created the StorageClass and the TridentBackendConfig, we need to verify that they were configured successfully. Run the following commands and verify that the output from your cluster is similar.

kubectl get tridentbackendconfig -n trident

NAME                    BACKEND NAME    BACKEND UUID                           PHASE   STATUS
backend-tbc-ontap-nas   tbc-ontap-nas   35b1d73a-61eb-48b8-a28c-2bc7dd677a0b   Bound   Success

kubectl get storageclass

NAME              PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)     kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  54d
trident-csi-nas   csi.trident.netapp.io   Delete          Immediate              true                   3h7m

Create the shared log collection infrastructure

Now that we’ve finished installing the Trident CSI driver and connecting it to our FSx for ONTAP file system, we can create the shared log collection infrastructure and connect it to the shared PVC. For this post, we use Fluent Bit as the log collector.

  1. Create the “fluent” namespace, as well as the PersistentVolumeClaim that is shared across all namespaces (fluent and sample apps).
cd ..
kubectl apply -f ./manifests/log-collection/primarypvc.yaml

Note the annotation trident.netapp.io/shareToNamespace: '*', which tells the Trident CSI driver to share this PVC with other namespaces.
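
For reference, the shared claim defined in primarypvc.yaml follows this general shape. The sketch below uses illustrative names and sizes rather than the exact contents of that manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-pvc                               # illustrative name; see primarypvc.yaml for the actual one
  namespace: fluent
  annotations:
    trident.netapp.io/shareToNamespace: '*'    # share this PVC with every namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: trident-csi-nas            # the StorageClass created by the Terraform configuration
  resources:
    requests:
      storage: 100Gi                           # illustrative size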

2. Add the fluent Helm repo and install the Fluent Operator.

helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade --install fluent-operator --create-namespace -n fluent fluent/fluent-operator -f ./manifests/log-collection/fluentbit-values.yaml
"fluent" has been added to your repositories

NAME: fluent-operator
LAST DEPLOYED: Tue Apr 30 12:35:40 2024
NAMESPACE: fluent
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing  fluent-operator
Your release is named    fluent-operator

To learn more about the release, try the following:
   $ helm status  fluent-operator  -n  fluent
   $ helm get  fluent-operator  -n fluent

Note the Fluent Bit configuration under “./manifests/log-collection/fluentbit-values.yaml”.

We’re configuring Fluent Bit to read the log files under “/var/log/*/*/*.log”. This path is where our sample-app Pods write their log files. In addition, to keep this walkthrough clear, we’ve configured the output to stdout so we can run kubectl logs on the Fluent Bit Pods to get the log records (instead of configuring log system services such as Amazon CloudWatch or Amazon OpenSearch Service).
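
Conceptually, the values file drives the Fluent Operator to generate a tail input and a stdout output, roughly equivalent to the following resources. The names and label selectors here are illustrative and may not match exactly what the chart renders, so treat this as an outline rather than the precise configuration:

apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: shared-volume-tail                 # illustrative name
  labels:
    fluentbit.fluent.io/enabled: "true"    # label the operator uses to select this input
spec:
  tail:
    tag: kube.*
    path: /var/log/*/*/*.log               # log files written by the sample apps to the shared volume
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: stdout-output                      # illustrative name
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: "*"                               # forward every record
  stdout: {}                               # print records to the Fluent Bit Pod logs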

Deploy sample application

Our sample application is deployed into two different namespaces, where each “app” writes the date and time into a file, which is expected to be collected by the Fluent Bit deployment we installed earlier.

The sample manifests are under “./manifests/sample-application/”. If you look at any of the files, you can see that each first creates an object called TridentVolumeReference, which references the PVC created earlier in the fluent namespace. Then a PVC object is created so that the sample application can use it. However, because the TridentVolumeReference was created first, the Trident CSI driver does not create a new PersistentVolume (PV) for that PVC; instead, it points to the PV created in the fluent namespace.
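
Alongside the TridentVolumeReference, the application-side claim carries the trident.netapp.io/shareFromPVC annotation pointing back at the source claim. The following sketch uses illustrative names; the actual manifests in the repository may differ in naming and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ns1-app-pvc
  namespace: ns1
  annotations:
    trident.netapp.io/shareFromPVC: fluent/logs-pvc   # <source namespace>/<source PVC>; source name is illustrative
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: trident-csi-nas
  resources:
    requests:
      storage: 100Gi                                  # illustrative size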

1. Create the sample application shared volume and volume reference.

kubectl apply -f ./manifests/sample-application/time-writer-ns1.yaml
kubectl apply -f ./manifests/sample-application/time-writer-ns2.yaml

namespace/ns1 created
tridentvolumereference.trident.netapp.io/ns-app-pvc created
persistentvolumeclaim/ns1-app-pvc created
deployment.apps/time-writer-ns1 created

namespace/ns2 created
tridentvolumereference.trident.netapp.io/ns-app-pvc created
persistentvolumeclaim/ns2-app-pvc created
deployment.apps/time-writer-ns2 created

2. At this point, our application writes the date into a file under the mounted path. We can now expect the logs of the Fluent Bit Pods to contain the output records from both sample application deployments.

Note that the records come from two different Pods running in two different namespaces (see “ns1” and “ns2” in the log record name).

kubectl logs -n fluent --selector app.kubernetes.io/name=fluent-bit

[0] kube.var.log.ip-10-0-2-149.eu-west-1.compute.internal.time-writer-ns2-584c57cf89-5cn6v.time.log: [[1716417762.756066266, {}], {"log"=>"Wed May 22 22:42:42 UTC 2024"}]
[0] kube.var.log.ip-10-0-2-149.eu-west-1.compute.internal.time-writer-ns1-76dffcbb8-vcp45.time.log: [[1716417772.851480648, {}], {"log"=>"Wed May 22 22:42:52 UTC 2024"}]

Cleaning up

To clean all resources, first delete the deployments:

kubectl delete -f ./manifests/sample-application/time-writer-ns1.yaml
kubectl delete -f ./manifests/sample-application/time-writer-ns2.yaml

Then uninstall Fluent Bit by running the following:

helm -n fluent uninstall fluent-operator
kubectl patch -n fluent FluentBit fluent-bit -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl delete ns fluent

Finally, delete all resources by running Terraform destroy.

cd terraform
terraform apply -auto-approve -destroy

Summary

In this post, we demonstrated how to use FSx for ONTAP as a centralized storage service to collect logs from multiple applications deployed on Amazon EKS. By creating a shared storage volume, applications in different Kubernetes namespaces could write log records to custom files, which were then collected and forwarded by Fluent Bit to remote logging services. This approach provides scalable, durable log storage and simplifies the log management process, offering flexibility to adapt to various logging requirements.

Tsahi Duek

Tsahi Duek is a Principal Container Specialist Solutions Architect at Amazon Web Services. He has over 20 years of experience building systems, applications, and production environments, with a focus on reliability, scalability, and operational aspects. He is a system architect with a software engineering mindset.

Michael Shaul

Michael Shaul is NetApp’s Principal Technologist. Based in Israel and with a long career working with data management and infrastructure, Michael is part of the Cloud Data Services CTO office and has a unique in-depth perspective of NetApp cloud technologies.