KUBERNETES

Configuring Kubernetes Ingress on AWS? Don’t Make These Mistakes

August 15, 2022 | 10 min read

Key considerations for developers looking to configure an ingress controller on AWS or Amazon EKS

What is Ingress in AWS?

Kubernetes Ingress is an API resource that allows you to manage external or internal HTTP and HTTPS access to Kubernetes Services running in a cluster. AWS provides several load balancer types that can be used in conjunction with a Kubernetes Ingress, including both transport-based layer 4 (L4) and application-based layer 7 (L7) options.

Configuring Kubernetes Ingress in AWS

We’ve helped thousands of developers get their Kubernetes Ingress Controllers up and running across different cloud providers. Amazon users have two primary options for running Kubernetes on AWS: you can deploy and self-manage Kubernetes on EC2 instances, or you can use Amazon’s managed offering with Amazon Elastic Kubernetes Service (EKS).

If you choose EKS, you can either run this within the AWS public cloud platform, or use EKS Anywhere, which allows you to create and operate Kubernetes clusters on your own infrastructure, supported by AWS. The default Ingress solution for EKS Anywhere is Emissary-ingress.

Overall, AWS provides a powerful, customizable platform on which to run Kubernetes. However, the multitude of options for customization often leads to confusion among new users and makes it difficult for you to know when and where to optimize for your particular use case.

After working with many customers to configure their AWS Ingress Controller successfully on EC2 and Amazon EKS, we found ourselves asking users a common set of questions. We took those questions and converted them into the series of key decisions presented here.

If you’re struggling to configure Kubernetes Ingress on AWS, here’s our recommended consideration path.

Choose the Right Load Balancer Type

The most important choice you will make when deciding how to handle ingress in AWS is the type of load balancer you want to use. The other major cloud providers make this easy by offering fewer options: configuring a “type: LoadBalancer” Service on several other providers always gives you the same L4 load balancer.

In AWS, a “type: LoadBalancer” Service in Kubernetes can mean a Classic Load Balancer (often just called an Elastic Load Balancer or ELB) operating at L4 or L7, or a Network Load Balancer (NLB). Additionally, you can manually provision an Application Load Balancer (ALB) and point it at your Ingress Controller exposed as a “type: NodePort” Service.
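As a sketch of how this choice is expressed in practice, the load balancer type is selected with annotations on the Service. The annotation below is the standard AWS cloud-provider annotation, but the Service name, selector, and ports are illustrative placeholders:

```yaml
# Sketch: a Service that asks AWS for an NLB instead of the default Classic ELB.
# Names, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  annotations:
    # Without this annotation, AWS provisions a Classic Load Balancer (ELB).
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller
  ports:
    - name: https
      port: 443
      targetPort: 8443
```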

Layer 4 Load Balancers: ELB & NLB

The L4 ELB and NLB are layer 4 load balancers which route requests to your AWS Ingress Controller at the TCP layer.

This means that they are typically very efficient but can be limited in the types of traffic they can route and in how intelligently they can route requests to your Ingress Controller. For example, the L4 ELB is widely deployed in long-lived AWS deployments, but it cannot handle WebSocket connections. The NLB is the fastest and most efficient AWS load balancer, but it cannot load balance across multiple Kubernetes cluster namespaces.

All L4 load balancers are limited to simple load balancing algorithms such as round robin. They are also limited in their ability to preserve information about the client for the Ingress Controller.

Layer 7 Load Balancers: ELB & ALB

The L7 ELB and ALB are layer 7 load balancers which route requests to your AWS Ingress Controller at the “application” protocol layer. This means that they are able to more intelligently decide how to route requests, but are typically less efficient. For example, the ALB can route requests based on information sent in the request, such as the Host or URL path.

While this is powerful, if your application lives in Kubernetes and your load balancer is just routing requests to your Ingress Controller, you typically do not need this level of control over how requests are routed.
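To illustrate the kind of Host- and path-based routing an ALB can do, here is a minimal sketch of an Ingress managed by the AWS Load Balancer Controller. The `alb` ingress class and annotation are real, but the hostname and backend Service are placeholders:

```yaml
# Sketch: an ALB routing by Host and URL path via the AWS Load Balancer
# Controller. Hostname and Service name are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-routing-example
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1   # placeholder backend Service
                port:
                  number: 80
```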

Another benefit of L7 load balancers is their ability to preserve information about the client in the X-Forwarded headers.

Committing to an AWS Load Balancer Type in Kubernetes

Before you determine which type of load balancer is best for your use case, you’ll want to consider these four key criteria:

  1. Where to terminate TLS
  2. How to manage certificates
  3. How to handle cleartext redirection
  4. How to preserve client information

Consideration #1: TLS Termination

TLS encryption is a common requirement for modern web apps. Users want to be sure they are communicating with the intended recipient without anyone intercepting or modifying their requests. If you want to encrypt connections you need to terminate TLS at an entry point into your application.

Since AWS allows you to terminate TLS at any of the four load balancers available, deciding where to terminate TLS is dependent on your choice of load balancer.

  • L7 load balancers are required to terminate TLS so they can read information from the request.
  • L4 load balancers are able to perform SSL passthrough, which allows your AWS Ingress Controller to terminate TLS.
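As a sketch of the first option, an L4 load balancer can terminate TLS using a certificate from ACM via Service annotations; omitting the certificate annotation instead passes encrypted traffic straight through to the Ingress Controller. The annotations are the standard AWS ones, but the certificate ARN, names, and ports are placeholders:

```yaml
# Sketch: terminating TLS at an L4 load balancer with an ACM certificate.
# Omit the ssl-cert annotation for SSL passthrough instead.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller   # placeholder
  annotations:
    # Placeholder ACM certificate ARN.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller
  ports:
    - name: https
      port: 443
      targetPort: 8080   # cleartext port on the Ingress Controller
```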

If you choose to terminate TLS at your load balancer, your Ingress Controller will receive traffic in cleartext, which creates another trade-off: L7 load balancers can inform your Ingress Controller whether the original request was encrypted by setting the “X-Forwarded-Proto” header, whereas L4 load balancers cannot.

If you choose to terminate TLS at your Ingress Controller, you can fully control and manage TLS certificates, Server Name Indication (SNI) for multiple host/domain name support, and how connections are encrypted and decrypted.

Consideration #2: Certificate Management

How TLS certificates are managed in AWS is dependent on where you are terminating TLS. If you are terminating TLS at the load balancer, you can use AWS Certificate Manager (ACM) to manage your TLS certificates.

If you are terminating TLS at your AWS Ingress Controller, then your Ingress Controller is responsible for how it manages TLS certificates. Some Ingress Controllers, such as Edge Stack, can automatically manage certificates whereas others require that you use other tools, like cert-manager, or store and rotate certificates in Kubernetes manually.
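For the cert-manager route, a minimal sketch of requesting a certificate looks like the following. The `Certificate` resource and its fields are cert-manager's real API, but the issuer name and hostname are placeholders:

```yaml
# Sketch: requesting a certificate with cert-manager when TLS terminates at
# the Ingress Controller. Issuer name and hostname are placeholders.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-tls
spec:
  secretName: example-com-tls   # Kubernetes Secret the certificate is stored in
  dnsNames:
    - example.com
  issuerRef:
    name: letsencrypt-prod      # placeholder ClusterIssuer
    kind: ClusterIssuer
```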

Consideration #3: Cleartext Redirection

While most modern web apps want TLS encryption, many users will still make unencrypted requests to your application. Therefore, it is important for your application to be able to properly handle these unencrypted requests. So, your next decision is how your Ingress Controller will be configured to handle cleartext.

If you choose to terminate TLS at your AWS Ingress Controller or at an L7 load balancer, your Ingress Controller will be able to identify whether the request arrived over an encrypted connection, and can decide whether to allow, deny, or automatically redirect cleartext traffic to an encrypted connection.

If you choose to terminate TLS at an L4 load balancer, however, you are forced to route both cleartext and encrypted connections, and your Ingress Controller cannot distinguish between them.
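As one example of the first case, Edge Stack expresses cleartext policy declaratively on its Host resource. A minimal sketch, with the hostname as a placeholder:

```yaml
# Sketch: an Edge Stack Host that redirects cleartext requests to HTTPS.
# The hostname is a placeholder.
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
spec:
  hostname: example.com
  requestPolicy:
    insecure:
      action: Redirect   # other actions include Route and Reject
```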

Consideration #4: Preserving Client Information

The final consideration when choosing how to handle ingress in AWS is whether you need to preserve information about the client. Like all of the other decisions, this choice depends on the load balancer you are using.

L7 load balancers easily preserve this information by appending to the X-Forwarded-For header. This passes the IP address of your client to your AWS Ingress Controller and upstream services.

L4 load balancers cannot preserve this information in the same way. Instead, preserving it requires accepting some tradeoffs.

In AWS, there are two ways for L4 load balancers to preserve the client IP address.

  1. The PROXY protocol (originating from HAProxy) gives L4 proxies the ability to pass along the client IP address by wrapping each connection with additional data. This, however, requires your Ingress Controller to expect the PROXY protocol, which means all requests must carry this extra data. This is fine when all requests go through the load balancer, but it can present difficulties if requests are sent directly to the Ingress Controller.
  2. Configuring Kubernetes to route connections only to Nodes running your Ingress Controller. With this configuration, the L4 load balancer always connects directly to a Node hosting the Ingress Controller instead of traversing Kubernetes networking, preserving the source IP. However, this can cause stability issues when restarting or upgrading your Ingress Controller, as well as more uneven load balancing across your Ingress Controller pods.
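Both approaches above are configured on the Service that fronts your Ingress Controller. A sketch, assuming placeholder names and ports; the PROXY protocol annotation is the standard AWS one, and your Ingress Controller must also be configured to expect the PROXY protocol if you enable it:

```yaml
# Sketch: preserving the client IP behind an L4 load balancer.
# Names, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  annotations:
    # Option 1: wrap every connection with PROXY protocol metadata.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  # Option 2: only route to Nodes running an Ingress Controller pod,
  # preserving the source IP at the cost of less even load balancing.
  externalTrafficPolicy: Local
  selector:
    app: my-ingress-controller
  ports:
    - port: 443
      targetPort: 8443
```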

Wrapping Up: AWS Ingress Options

After helping hundreds of users configure our Ambassador Edge Stack API Gateway to run effectively in AWS, we became acutely aware of how confusing this process can be.

To help developers easily configure an Ingress Controller and get it up and running in AWS faster, we have provided extensive documentation on “Edge Stack with AWS”.