Containers

Category: Amazon Elastic Kubernetes Service

Extending EKS with Hybrid Nodes: IAM Roles Anywhere and HashiCorp Vault

In this post, we explore how to use AWS Identity and Access Management (IAM) Roles Anywhere, backed by HashiCorp Vault PKI, to join EKS Hybrid Nodes to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. This solution enables businesses to flexibly use compute resources outside of AWS by extending the Amazon EKS data plane beyond the AWS Cloud boundary, addressing use cases focused on data sovereignty, low-latency communication, and regulatory compliance.
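The post covers the full Vault PKI and IAM Roles Anywhere configuration; as a rough sketch of the certificate-issuance step only, the snippet below calls Vault's PKI HTTP API to mint a short-lived node certificate that an IAM Roles Anywhere credential helper could then exchange for temporary AWS credentials. The mount path, role name, common name, and environment variables are illustrative assumptions, not values from the post.

```python
import os
import requests

# Sketch: request a short-lived node certificate from a Vault PKI secrets
# engine. The mount path ("pki"), role name ("eks-hybrid-nodes"), and common
# name are placeholders for whatever the Vault administrator has configured.
VAULT_ADDR = os.environ["VAULT_ADDR"]    # e.g. https://vault.example.internal:8200
VAULT_TOKEN = os.environ["VAULT_TOKEN"]

resp = requests.post(
    f"{VAULT_ADDR}/v1/pki/issue/eks-hybrid-nodes",
    headers={"X-Vault-Token": VAULT_TOKEN},
    json={"common_name": "hybrid-node-01.example.internal", "ttl": "24h"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()["data"]

# The certificate and private key are what the IAM Roles Anywhere credential
# helper presents to obtain temporary AWS credentials on the hybrid node.
with open("node.crt", "w") as f:
    f.write(data["certificate"])
with open("node.key", "w") as f:
    f.write(data["private_key"])
```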

New Amazon EKS Auto Mode features for enhanced security, network control, and performance

In this post, we explore the latest Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode features that enhance security, network control, and performance for enterprise Kubernetes deployments. These new capabilities address critical operational challenges, including capacity management, network segmentation, enterprise PKI integration, and comprehensive encryption, while maintaining the automated cluster management that makes EKS Auto Mode transformative for development teams.

Running Slurm on Amazon EKS with Slinky

In this post, we introduce the Slinky Project and explore how it enables organizations to run Slurm workload management within Amazon EKS, combining the deterministic scheduling capabilities of Slurm with Kubernetes’ dynamic resource allocation for efficient hybrid workload management. This unified approach allows teams to maximize resource utilization across both batch processing jobs and cloud-native applications without maintaining separate infrastructure silos.

How to manage EKS Pod Identities at scale using Argo CD and AWS ACK

In this post, we explore how to manage EKS Pod Identity associations at scale using Argo CD and AWS Controllers for Kubernetes (ACK), addressing the critical challenge of the eventually consistent EKS Pod Identity API. The guide demonstrates automation techniques to ensure proper IAM role associations before application deployment, maintaining GitOps workflows while preventing permission-related failures.
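The post handles this with ACK resources and Argo CD sync ordering; as a minimal sketch of the underlying readiness check, the snippet below polls the EKS Pod Identity API with boto3 until an association exists for a given service account before an application sync proceeds. The cluster, namespace, and service account names are placeholders.

```python
import time
import boto3

# Sketch: wait for an EKS Pod Identity association to become visible, since
# the API is eventually consistent. Names below are illustrative placeholders.
eks = boto3.client("eks")

def wait_for_pod_identity(cluster: str, namespace: str, service_account: str,
                          timeout_s: int = 300, interval_s: int = 10) -> dict:
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = eks.list_pod_identity_associations(
            clusterName=cluster,
            namespace=namespace,
            serviceAccount=service_account,
        )
        if resp["associations"]:
            return resp["associations"][0]
        time.sleep(interval_s)
    raise TimeoutError(f"No Pod Identity association for {namespace}/{service_account}")

association = wait_for_pod_identity("my-cluster", "payments", "payments-api")
print("Found association:", association["associationArn"])
```

A check like this could run as a pre-sync step so the application only deploys once its IAM role association is actually in place.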

SaaS deployment architectures with Amazon EKS

In this post, we explore patterns and practices for building and operating distributed Amazon Elastic Kubernetes Service (Amazon EKS)-based applications effectively. We examine three deployment models – SaaS Provider Hosted, Remote Application Plane, and Hybrid Nodes – each offering distinct advantages for specific use cases as companies scale their software as a service (SaaS) offerings.

Implementing granular failover in multi-Region Amazon EKS

In this post, we demonstrate how to configure Amazon Route 53 to enable unique failover behavior for each application within multi-tenant Amazon EKS environments across AWS Regions. This solution allows organizations to maintain the cost benefits of shared infrastructure while meeting diverse availability requirements by implementing application-specific health checks that provide granular control over failover scenarios.
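As a minimal sketch of the per-application pattern (not the post's exact configuration), the snippet below creates a dedicated Route 53 health check against one application's health endpoint and attaches it to a PRIMARY failover record; a matching SECONDARY record pointing at the other Region's endpoint would complete the pair. The domain names and hosted zone ID are placeholders.

```python
import uuid
import boto3

# Sketch: one health check per application, attached to a PRIMARY failover
# record in the primary Region. All names and IDs below are placeholders.
route53 = boto3.client("route53")

health_check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "orders.primary.example.com",
        "ResourcePath": "/healthz",
        "Port": 443,
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "orders.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "orders-primary",
                "Failover": "PRIMARY",
                "HealthCheckId": health_check["Id"],
                "ResourceRecords": [{"Value": "orders.primary.example.com"}],
            },
        }]
    },
)
```

Because each application gets its own health check and record pair, one tenant can fail over to the secondary Region without forcing the rest of the shared cluster to follow.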

Use Raspberry Pi 5 as Amazon EKS Hybrid Nodes for edge workloads

In this post, we demonstrate how to use a Raspberry Pi 5 as an Amazon EKS hybrid node to process edge workloads while maintaining cloud connectivity. We show how to set up an EKS cluster that connects cloud and edge infrastructure, secure connectivity using WireGuard VPN, enable container networking with Cilium, and implement a real-world IoT application using an ultrasonic sensor that demonstrates edge-cloud integration.
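As a rough sketch of the edge side of that application (the GPIO wiring, device name, and in-cluster service URL are assumptions, not the post's code), the snippet below reads an HC-SR04-style ultrasonic sensor with gpiozero on the Raspberry Pi and posts readings to an ingest service in the cluster, reachable because the Pi runs as a hybrid node on the cluster network.

```python
import time
import requests
from gpiozero import DistanceSensor

# Sketch: sample an ultrasonic distance sensor on the Pi and ship readings to
# a service running in the EKS cluster. Pins and URL are illustrative only.
sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)
INGEST_URL = "http://sensor-ingest.iot.svc.cluster.local/readings"

while True:
    distance_cm = sensor.distance * 100  # gpiozero reports distance in meters
    payload = {
        "sensor": "pi5-ultrasonic-01",
        "distance_cm": round(distance_cm, 1),
        "timestamp": time.time(),
    }
    try:
        requests.post(INGEST_URL, json=payload, timeout=2)
    except requests.RequestException:
        pass  # tolerate transient drops on the WireGuard tunnel and retry next cycle
    time.sleep(5)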

Kubernetes right-sizing with metrics-driven GitOps automation

In this post, we introduce an automated, GitOps-driven approach to resource optimization in Amazon EKS using AWS services such as Amazon Managed Service for Prometheus and Amazon Bedrock. The solution helps optimize Kubernetes resource allocation through metrics-driven analysis, pattern-aware optimization strategies, and automated pull request generation while maintaining GitOps principles of collaboration, version control, and auditability.
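The full pipeline queries Amazon Managed Service for Prometheus, applies pattern-aware analysis with Amazon Bedrock, and opens pull requests automatically; the sketch below shows only the simplest possible sizing rule (a high percentile of observed CPU usage plus headroom) with made-up sample values, to illustrate the kind of calculation that feeds such a recommendation.

```python
import math

# Sketch: recommend a CPU request from per-pod usage samples (in cores).
# The percentile and headroom factor are illustrative choices, not the
# post's exact algorithm.
def percentile(samples: list[float], p: float) -> float:
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

def recommend_cpu_request(usage_cores: list[float],
                          p: float = 95.0, headroom: float = 1.15) -> str:
    rec = percentile(usage_cores, p) * headroom
    return f"{int(rec * 1000)}m"  # express as Kubernetes millicores

# Example with fabricated 5-minute samples pulled from Prometheus.
samples = [0.12, 0.15, 0.11, 0.34, 0.28, 0.22, 0.19, 0.41, 0.17, 0.25]
print(recommend_cpu_request(samples))  # "471m"
```

In the GitOps flow, a value like this would land as a change to the workload's resource requests in a pull request rather than being applied to the cluster directly.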

How to build highly available Kubernetes applications with Amazon EKS Auto Mode

In this post, we explore how to build highly available Kubernetes applications using Amazon EKS Auto Mode by implementing critical features such as Pod Disruption Budgets, Pod Readiness Gates, and Topology Spread Constraints. Through test scenarios including pod failures, node failures, Availability Zone (AZ) failures, and cluster upgrades, we demonstrate how these implementations maintain service continuity and maximize uptime in EKS Auto Mode environments.
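The post applies these protections through Kubernetes manifests; as a minimal sketch of one of them, the snippet below creates a PodDisruptionBudget with the official Kubernetes Python client so that at least two replicas of a hypothetical web Deployment (labeled app: web) stay available during voluntary disruptions such as node replacements and cluster upgrades.

```python
from kubernetes import client, config

# Sketch: a PodDisruptionBudget keeping at least two pods of a hypothetical
# "app: web" workload available. Namespace and labels are placeholders.
config.load_kube_config()  # or config.load_incluster_config() inside a pod

pdb = client.V1PodDisruptionBudget(
    api_version="policy/v1",
    kind="PodDisruptionBudget",
    metadata=client.V1ObjectMeta(name="web-pdb", namespace="default"),
    spec=client.V1PodDisruptionBudgetSpec(
        min_available=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
    ),
)

client.PolicyV1Api().create_namespaced_pod_disruption_budget(
    namespace="default", body=pdb
)
```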

How to run AI model inference with GPUs on Amazon EKS Auto Mode

In this post, we show you how to quickly deploy inference workloads on EKS Auto Mode and demonstrate key features that streamline GPU management. We walk through a practical example of deploying open-weight models from OpenAI using vLLM, while showing best practices for model deployment and maintaining operational efficiency.
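Once the vLLM Deployment and Service are running on GPU nodes, clients can call the model through vLLM's OpenAI-compatible HTTP API; the sketch below does so with the standard OpenAI Python SDK. The service URL and model name are placeholders, since they depend on how the server was deployed.

```python
from openai import OpenAI

# Sketch: call a vLLM server running behind an in-cluster Service. vLLM
# exposes an OpenAI-compatible API, typically on port 8000 under /v1.
client = OpenAI(
    base_url="http://vllm.inference.svc.cluster.local:8000/v1",
    api_key="not-used",  # vLLM typically accepts any key unless one is configured
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # placeholder: use the model name the server was started with
    messages=[{"role": "user", "content": "Summarize what EKS Auto Mode does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```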