Amazon Elastic Kubernetes Service (EKS) was introduced in 2018 to capitalize on the soaring popularity of Kubernetes, offering a fully managed control plane so organizations could focus on building and deploying applications rather than managing infrastructure. Since launch, EKS has added features like Managed Node Groups, Fargate support, and integrations with other AWS services. Today, there are even EKS Anywhere and EKS on AWS Outposts for running Kubernetes on premises.

While Amazon promises to abstract away many of the infrastructure tasks that teams would just as soon deprioritize, organizations still need to adopt some key habits to securely manage their containerized workloads in EKS.

What is Amazon Elastic Kubernetes Service (EKS)?

Amazon EKS is a managed container orchestration service that simplifies the deployment, management, and scaling of Kubernetes clusters on AWS. With EKS, users can run Kubernetes applications without needing to install or operate their own Kubernetes control plane or nodes.

EKS integrates seamlessly with AWS services, including identity and access management (IAM), VPC, and CloudWatch, delivering built-in security, scalability, and observability. It supports both AWS Fargate for serverless compute and EC2 for customizable infrastructure, giving teams the flexibility to choose the best way to run their workloads.

Where security is concerned, that means:

  • The control plane is abstracted and untouchable
  • Authentication uses IAM, not Kubernetes
  • The EKS network layer is tied to virtual private cloud (VPC) constructs
  • IAM Roles for Service Accounts (IRSA) allows fine-grained IAM access to AWS from Kubernetes service accounts
  • Monitoring and logging are routed through the AWS ecosystem, which doesn’t natively speak Kubernetes
  • Node management can be shared or self-managed
  • There’s limited admission controller support

Here’s what’s EKS-specific:

| Area | EKS-Specific Security Considerations |
| --- | --- |
| Control Plane | Fully managed; no access or customization |
| Authentication | IAM-based, mapped via aws-auth |
| Networking | Tightly integrated with VPC, elastic network interfaces (ENIs), and security groups |
| Service Access | Uses IAM Roles for Service Accounts (IRSA) |
| Telemetry | Relies on AWS-native tools like CloudTrail and GuardDuty |
| Node Types | Options vary, from self-managed to managed or Fargate |
| Admission Control | Limited; must use user-space tools like Open Policy Agent (OPA) |

The Importance of Security in EKS Environments

If the infrastructure is abstracted, why does security matter? 

Security is a critical concern in Amazon EKS environments, as Kubernetes clusters often manage sensitive workloads and operate at scale. And while AWS abstracts the control plane, organizations are still responsible for everything that runs inside the cluster, where most security risk lives. Misconfigurations, overly permissive roles, or unpatched vulnerabilities can expose containers, nodes, or even the broader AWS environment to attack.

Does EKS help? 

Yes, EKS provides foundational security features, like integration with AWS IAM, encryption of data at rest and in transit, and fine-grained access control through Kubernetes’ Role-Based Access Control (RBAC). 

However, securing an EKS environment also requires continuous monitoring, runtime protection, and adherence to best practices for container image scanning, network segmentation, and secret management. Why does EKS need these measures more than other environments?

The answer isn’t that EKS needs more security. It’s that it creates unique conditions where standard practices become even more critical due to how EKS integrates with AWS infrastructure.

EKS combines Kubernetes with direct access to AWS infrastructure, which means that:

  • EKS is deeply integrated with AWS IAM and network layers, so workloads regularly interact directly with AWS services like S3, DynamoDB, and Lambda via IAM roles.
  • The attack surface is hybrid (Kubernetes plus AWS): unlike other Kubernetes setups, the environment includes native access to cloud infrastructure.
  • The control plane won’t let teams patch or inspect it, but they’re still accountable for misconfigurations and risky workloads.
  • Pod networking is managed via VPC-level constructs, and network segmentation mistakes can expose internal services or data.
  • EKS users rely heavily on CI/CD and auto-scaling, deploying fast across clusters and accounts where drift, unscanned images, and secrets in plaintext can proliferate.

Runtime and Container Scanning for EKS with Upwind

Upwind enhances EKS security with runtime-powered container scanning that detects threats as they happen, inside actual workloads. By correlating Kubernetes context with AWS infrastructure, it delivers real-time visibility, faster root cause analysis, and targeted remediation that’s far more effective than static scanning alone.

EKS Shared Responsibility Model

Before diving into specific security tactics, let’s review who owns what in an EKS deployment.

The EKS Shared Responsibility Model defines the division of security responsibilities between AWS and the customer. AWS is responsible for securing the underlying infrastructure that runs EKS, including the Kubernetes control plane, networking, and physical data centers. This includes patching and maintaining the control plane and ensuring high availability and compliance.

On the other hand, customers are responsible for securing the workloads they run in EKS, including configuring Kubernetes RBAC, managing node-level security (for self-managed or EC2 worker nodes), securing container images, and implementing proper network policies. Teams should fully understand this model to effectively manage risks in EKS environments, as it clarifies where AWS’s protection ends and where customer accountability begins.

Aligning Security Practices to Your Responsibilities

To effectively secure an Amazon EKS environment, organizations must align their security practices with their responsibilities under the shared responsibility model. This means taking ownership of the parts of the stack they truly control, including application-layer security, workload isolation, identity and access controls, and node hardening.

This means the core security practices that matter more on EKS than on any other platform are:

  • Enforcing least privilege across two planes: AWS IAM (cluster access, node roles) and Kubernetes RBAC (in-cluster permissions).
  • Implementing layered isolation, using both Kubernetes NetworkPolicies and VPC-level constructs like security groups and subnets.
  • Hardening workloads, from CI-integrated container image scanning to runtime protections against drift, misconfigurations, and unknown behavior.
  • Monitoring across boundaries, integrating Kubernetes audit logs, GuardDuty findings, and CloudWatch metrics to detect cross-surface anomalies.

In short, it means operationalizing security at the workload level, where teams have complete control: assume the AWS infrastructure will perform as promised, but don’t rely on it to catch everything.

Let’s get into the details of the fundamentals.

Identity and Access Management Best Practices

Effective IAM is crucial for securing Amazon EKS environments and ensuring that only authorized entities can perform specific actions. Teams should start by enforcing the principle of least privilege, granting users, roles, and service accounts only the permissions they need. IAM Roles for Service Accounts (IRSA) should be used for assigning fine-grained AWS permissions to Kubernetes workloads without hardcoding credentials. 

Teams should regularly audit IAM policies and role usage to identify and remove unused or overly permissive access, though note that Kubernetes RBAC usage isn’t logged by default unless audit logs are specifically enabled. Further, implement multi-factor authentication (MFA) for all human users. Roles should be separated between administrative and operational tasks to reduce the blast radius of compromised credentials. These best practices help contain access risk and maintain strong boundaries between Kubernetes resources and the broader AWS environment.
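
For instance, control plane audit and authenticator logging can be enabled declaratively. A minimal sketch, assuming an eksctl-managed cluster (the cluster name and region are placeholders):

```yaml
# eksctl ClusterConfig fragment: enable control plane audit logging
# Apply with: eksctl utils update-cluster-logging --config-file=cluster.yaml --approve
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster      # placeholder cluster name
  region: us-east-1       # placeholder region
cloudWatch:
  clusterLogging:
    # Ship audit and authenticator events to CloudWatch Logs so RBAC usage
    # and IAM-to-Kubernetes authentication decisions become visible.
    enableTypes:
      - audit
      - authenticator
```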

Implementing Least Privilege with IAM Policies

Implementing least privilege in Amazon EKS starts with granting only the minimum permissions needed by users, roles, and workloads. Teams should avoid broad permissions by defining specific actions and resources, and use policy conditions to restrict access by source IP, time, or encryption requirements. Note, however, that Kubernetes RBAC does not support conditional logic: RBAC permissions are static and must be scoped through careful role and group design.
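
As an illustrative sketch (the bucket name and CIDR range are hypothetical), a tightly scoped IAM policy might grant read-only access to a single S3 prefix while using condition keys to restrict the source network and require TLS:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlySingleBucketPrefix",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-data",
        "arn:aws:s3:::example-app-data/reports/*"
      ],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "10.0.0.0/16" },
        "Bool": { "aws:SecureTransport": "true" }
      }
    }
  ]
}
```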

When managing Kubernetes pods, teams should use IRSA to avoid credential sharing and over-permissioned EC2 instance roles. Use IAM Access Analyzer to audit policies regularly, and CloudTrail to track how permissions are actually used.

Using IAM Roles for Service Accounts (IRSA)

IRSA allows Kubernetes pods to securely access AWS services using fine-grained IAM roles, without the need for managing long-lived credentials. By linking a Kubernetes service account to a specific IAM role, teams can assign scoped AWS permissions to individual workloads, ensuring they only access what they need. This eliminates the need to use EC2 instance roles for pod-level access, reducing the risk of over-privileged access across your cluster.

IRSA leverages AWS STS to issue temporary credentials: a projected service account token is mounted into the pod, and the AWS SDK exchanges it for short-lived credentials that rotate automatically. Proper use of IRSA helps enforce least privilege, simplifies credential management, and strengthens the overall security posture of your EKS environment.
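
In practice, the link is just an annotation on a Kubernetes service account, referenced by the pods that need the access. A minimal sketch, assuming the IAM role and its trust policy for the cluster’s OIDC provider already exist (names, namespace, and ARNs are placeholders):

```yaml
# ServiceAccount bound to an IAM role via IRSA (role ARN is a placeholder)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: reports-reader
  namespace: analytics
  annotations:
    # EKS injects temporary credentials for this role into pods using the account
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/reports-reader-irsa
---
# Pod that assumes the role; the AWS SDK picks up the web identity token automatically
apiVersion: v1
kind: Pod
metadata:
  name: reports-job
  namespace: analytics
spec:
  serviceAccountName: reports-reader
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2023   # placeholder image
      command: ["sleep", "3600"]
```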

Integrating IAM with Kubernetes RBAC

By integrating AWS IAM with Kubernetes RBAC in Amazon EKS, you can authenticate via IAM, then map those identities to Kubernetes users and groups using the aws-auth ConfigMap, managing in-cluster permissions separately. 

Once mapped, these groups can be bound to Kubernetes roles or cluster roles, granting granular permissions within the cluster. For example, an IAM role assumed by a DevOps engineer can be mapped to a Kubernetes group with read-only access to certain namespaces. 
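
A minimal sketch of that pattern (the account ID, role name, group, and namespace are placeholders): the aws-auth ConfigMap maps the IAM role to a Kubernetes group, and a RoleBinding gives that group read-only access in a single namespace.

```yaml
# aws-auth ConfigMap fragment: map an IAM role to a Kubernetes group
# (in a real cluster, merge with existing mapRoles entries rather than replacing them)
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/devops-readonly   # placeholder role
      username: devops:{{SessionName}}
      groups:
        - devops-viewers
---
# Bind that group to the built-in "view" ClusterRole, scoped to one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devops-view
  namespace: staging
subjects:
  - kind: Group
    name: devops-viewers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```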

This integration ensures consistent identity enforcement, supports the principle of least privilege, and allows for centralized access governance using familiar AWS IAM practices, while leveraging Kubernetes-native controls for workload and namespace isolation.

Securing Authentication with MFA and OIDC

To prevent unauthorized access, teams should secure authentication in Amazon EKS by combining multi-factor authentication (MFA) with OpenID Connect (OIDC). MFA adds an extra layer of security for IAM users by requiring a second verification step, significantly reducing the risk of credential-based attacks.

EKS also supports associating external OIDC identity providers, such as Amazon Cognito, Okta, or another OIDC-compliant system, with the cluster, enabling federated access to the Kubernetes API through an existing identity platform. With OIDC, authentication relies on short-lived, scoped tokens rather than long-term credentials, aligning with zero trust principles.

MFA secures human access to the AWS account and EKS cluster entry points. OIDC handles federated identity: external providers authenticate users to the cluster, while the cluster’s own OIDC issuer is what lets workloads securely assume IAM roles through IRSA.
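
As a hedged sketch of the association step (the cluster name, issuer URL, client ID, and claim names are placeholders for whatever your provider issues), the AWS CLI exposes it through the EKS AssociateIdentityProviderConfig API:

```bash
# Associate an external OIDC identity provider with an EKS cluster so federated
# users can authenticate to the Kubernetes API (all values are placeholders)
aws eks associate-identity-provider-config \
  --cluster-name demo-cluster \
  --oidc identityProviderConfigName=okta,issuerUrl=https://example.okta.com/oauth2/default,clientId=0oa1example,usernameClaim=email,groupsClaim=groups
```

Once associated, the groups claim from the provider can be referenced in RBAC bindings just like groups mapped through aws-auth.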

Network Security and Policies in EKS

Network security in Amazon EKS is vital for isolating workloads, controlling traffic flow, and protecting against lateral movement within the cluster. Kubernetes Network Policies allow you to define rules that control which pods can communicate with each other and with external services, enabling microsegmentation. These policies are enforced by the container network interface (CNI) plugin (such as the AWS VPC CNI or third-party options like Calico) that integrates Kubernetes networking with your VPC.
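
As an illustration (the namespace, labels, and port are hypothetical), a common pattern is a default-deny policy plus a narrow allow rule:

```yaml
# Default-deny: block all ingress to pods in the namespace unless explicitly allowed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
---
# Allow only frontend pods to reach the payments API on its service port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```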

You should also leverage security groups at the EC2 and ENI levels to enforce traffic restrictions beyond the cluster. By combining VPC-level controls, Kubernetes Network Policies, and service mesh technologies (like AWS App Mesh or Istio) in a layered defense strategy, you can ensure that only authorized communications occur between components. Proper network segmentation is a key defense strategy in preventing the spread of threats inside your EKS environment.
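
Where finer-grained VPC enforcement is needed, the AWS VPC CNI’s security-groups-for-pods feature lets you attach a dedicated security group to specific pods. A sketch under the assumption that the feature is enabled and supported by your node instance types (the labels and security group ID are placeholders):

```yaml
# Attach a dedicated security group to matching pods (security groups for pods)
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: payments-api-sgp
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0   # placeholder security group
```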

What EKS Can’t Do (And What Fills the Gap)

EKS doesn’t do everything. And that’s not a flaw so much as a reality of how managed services work. So, instead of leaning on EKS to handle Kubernetes security, it’s worth slowing down to ask what exactly is being outsourced and what still falls to the team.

Take lateral movement, for example. EKS gives you the tools, like NetworkPolicies, security groups, and VPC isolation. But it doesn’t enforce anything out of the box. For actual segmentation, teams need to take the time to define it in both layers. If they don’t, they’ll be left with flat networking inside the cluster and a wide-open blast radius if something goes wrong.

Or look at runtime threats. EKS doesn’t watch what’s happening inside containers after deployment. If a pod starts mining crypto or a sidecar makes a strange egress call late at night, EKS won’t say a word. This kind of behavioral visibility has to come from somewhere else. That’s often an eBPF-based runtime security layer in a CNAPP that correlates process-level activity with workload context.

Even identity is a split responsibility. IRSA lends a helping hand. It’s arguably one of the strongest security features that AWS shipped for Kubernetes, but it only works with tightly managed IAM roles, service accounts, and AWS policies. EKS won’t alert teams when an IRSA role gets over-permissioned or if a misconfigured RBAC binding opens up a pod to internal abuse. That kind of misalignment can look fine on paper, but in reality, lead to a breach.

There’s also business logic, data classification, and access boundaries between development and production. EKS won’t stop someone from deploying test containers into a sensitive namespace or running a one-off job that touches customer data.

The point isn’t that EKS is inadequate. It’s that, like most managed services, EKS assumes a level of maturity and intent on the part of customers. EKS security outcomes still depend on teams: the tools they layer, how they monitor behavior, and how they bring tight controls into areas EKS doesn’t touch.

Upwind Undergirds EKS Security

Upwind strengthens EKS security by providing deep visibility into Kubernetes clusters, containers, and cloud-native applications. It fills the critical gaps EKS leaves open, like detecting runtime threats inside pods, correlating IAM and RBAC permissions across planes, and identifying anomalous behavior that managed services won’t flag on their own.

By continuously monitoring workloads in context, Upwind helps security teams respond to threats — and know exactly where they’re coming from. In an environment like EKS, where responsibility is split and abstraction runs deep, Upwind gives you the clarity and control to secure K8s. Schedule a demo to explore how.

FAQ

How do I know if my EKS security is overly reliant on AWS defaults?

Relying on AWS defaults in EKS can mean unaddressed security gaps. Look for these signs:

  • No Kubernetes NetworkPolicies defined (flat internal traffic)
  • Broad IAM roles assigned via IRSA with no conditions or scoping
  • No runtime monitoring or container behavior visibility
  • Default security groups allow open ingress or egress
  • No enforcement of image provenance or container scanning in CI/CD
  • RBAC roles assigned ad hoc with no audit trail or review

What’s the best way to detect IAM-to-RBAC drift in EKS?

IAM-to-RBAC drift happens when AWS IAM roles grant access to the cluster, but Kubernetes RBAC permissions don’t reflect the intended scope of that access. Since IAM handles authentication and RBAC handles in-cluster authorization, the misalignment is easy to miss. Look for drift by:

  • Auditing the aws-auth ConfigMap regularly for changes or role creep (a starting point is sketched below)
  • Cross-referencing IAM policies with ClusterRoleBindings and RoleBindings
  • Using access graph tooling or a CNAPP to visualize privilege relationships across both layers
  • Monitoring Kubernetes audit logs for unexpected actions from IAM-authenticated identities
  • Setting up alerts for new IAM roles mapped to cluster access

Teams need to correlate identity and action across AWS and Kubernetes, something most native tools don’t do on their own.
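
A minimal starting point for the first two checks, assuming kubectl access to the cluster:

```bash
# 1. Review the aws-auth ConfigMap for IAM roles and users mapped into the cluster
kubectl -n kube-system get configmap aws-auth -o yaml

# 2. List the bindings those mapped groups resolve to, so IAM access can be
#    compared against the Kubernetes permissions it actually grants
kubectl get clusterrolebindings -o wide
kubectl get rolebindings --all-namespaces -o wide
```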

Can runtime detection work in Fargate-based EKS clusters?

Traditional agent-based runtime detection is limited to EC2-based nodes, because users can’t deploy DaemonSets or install agents on Fargate’s underlying infrastructure. AWS manages those nodes entirely, so customers don’t have direct access for conventional runtime tools. Here’s what’s possible instead:

  • Use agentless CNAPP coverage (or leave sensors undeployed) and rely on cloud logs, network flow data, and Kubernetes API activity
  • Monitor application behavior via sidecar containers
  • Leverage VPC flow logs and GuardDuty for network-level anomaly detection
  • Use EKS audit logs to detect suspicious API activity from Fargate workloads

Runtime detection is harder in Fargate. But it’s not impossible. Use tools designed for ephemeral and abstracted environments and bundle logs and tools for a more complete view.

When should we use EKS over self-managed Kubernetes or GKE/AKS?

EKS makes sense when:

  • Workloads rely on AWS services already (like S3, RDS, or Lambda)
  • Teams want to use IRSA for fine-grained, pod-level IAM access
  • The team has strong AWS experience but wants to minimize Kubernetes operational overhead
  • You’re using VPC-native networking and security group enforcement
  • It’s necessary to align with existing AWS compliance baselines

Choose self-managed Kubernetes for control over the control plane, custom admission controllers, or advanced compliance configurations that managed services don’t support.

Opt for Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS) if you’re already invested in those ecosystems and want tighter integration with Google Cloud IAM, Azure AD, or cloud-native CI/CD pipelines.