What does Kubernetes Ingress mean?

A study-note excerpt from the third week of the PKOS study group

Sigrid Jin
10 min read · Feb 4, 2023
https://blog.devgenius.io/ingress-in-kubernetes-67a4b843ea4e

Options for exposing applications deployed in Kubernetes

https://medium.com/tensult/alb-ingress-controller-on-aws-eks-45bf8e36020d

You may specify the kind of Service you want to offer using Kubernetes ServiceTypes. Four ServiceTypes are available: ClusterIP, NodePort, LoadBalancer, and ExternalName.

ClusterIP is the default ServiceType, exposing the Service on a cluster-internal IP address that is only accessible inside the cluster. This is the preferable choice for internal Service access and is used for internal traffic, development, testing and debugging, and dashboards.

NodePort exposes a Service on a static port on each node's IP. This sort of Service is generally used for exposing Services in a non-production environment; its usage in production is not advised.

LoadBalancer exposes the Service externally using the load balancer of an external cloud provider. This sort of Service is appropriate for use in a production setting; however, Ingress is often recommended instead.
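
As a quick illustration, the ServiceType is chosen in the manifest itself. The sketch below is a minimal example; the Service name, selector, and ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: demo-api              # hypothetical Service name
spec:
  type: LoadBalancer          # swap for ClusterIP or NodePort as needed
  selector:
    app: demo-api             # must match the labels on the target Pods
  ports:
    - port: 80                # port exposed by the Service
      targetPort: 8080        # port the Pods actually listen on
      protocol: TCP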

Kubernetes Ingress

https://blog.devgenius.io/ingress-in-kubernetes-67a4b843ea4e

Kubernetes Ingress is a powerful network routing object that lets HTTP and HTTPS routes from outside sources reach internal services in a cluster. The Ingress resource plays a crucial role in controlling incoming traffic, whose routing is defined by a set of rules. This allows for a highly configurable and flexible traffic management system, providing enhanced security and stability for applications.

Ingress can be used to expose services to external users, load balance incoming traffic, terminate SSL/TLS certificates, and support name-based virtual hosting. It acts as an HTTP load balancer for applications running on k8s and is represented by one or more internal services.

As a reminder, a Service, which enables network access to a set of Pods in k8s, is meant to expose an application deployed on a set of Pods through a single endpoint; it provides reliable networking by giving ephemeral Pods stable IP addresses and DNS names.

It is important to note that Ingress is not a service type but rather acts as the primary entry point for the cluster. Ingress provides a straightforward gateway solution and consolidates routing rules into a single resource, which makes it possible to expose multiple services under the same IP address using the same load balancer.

Ingress also offers various configurations to improve the resilience of the system, such as time-outs, rate limiting, content-based routing, authentication, and more. This enhances the security and performance of the cluster and makes it easier to manage incoming traffic.

In addition to traffic management, Ingress also supports content-based routing. This includes host-based routing, where requests with specific host headers, such as foo.example.com, are directed to specific groups of services. Ingress also supports path-based routing, where requests with specific URIs, such as those starting with /serviceA, are routed to the corresponding services (like /mario and /tetris in the assignment given for this week).
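
A minimal sketch of such host- and path-based rules is shown below; the hostname and the mario/tetris Service names are assumptions based on this week's assignment:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: games                        # hypothetical Ingress name
spec:
  rules:
    - host: foo.example.com          # host-based routing
      http:
        paths:
          - path: /mario             # path-based routing
            pathType: Prefix
            backend:
              service:
                name: mario          # assumed Service name
                port:
                  number: 80
          - path: /tetris
            pathType: Prefix
            backend:
              service:
                name: tetris         # assumed Service name
                port:
                  number: 80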

This ability to manage and direct traffic based on both host and path makes Ingress a powerful tool for managing access to applications in a Kubernetes cluster, ensuring security, performance, and ease of use.

For numerous reasons, Ingress is preferable for exposing services in a production environment. First, the rules set up on the Ingress resource control how traffic is routed to where it needs to go. Second, Ingress is an integral component of the Kubernetes cluster, while an external load balancer is not and needs separate maintenance. Last but not least, an external load balancer is often costly, whereas Kubernetes Ingress is maintained inside the cluster.

Ingress Controller

An Ingress controller is responsible for fulfilling the Ingress and can use a load balancer or other network components to manage the incoming traffic. The Ingress spec contains all the information needed to configure a load balancer or proxy server, including a list of rules matched against incoming requests.

Unlike the Ingress controller, the Ingress resource is a set of configurations that define the URL routes, SSL certificates, and other access details for the services in the cluster. The ingress controller detects new ingress resources and updates the underlying configuration file, such as the nginx.conf file, to reflect the changes in the ingress resource. When the ingress resource is deleted, the ingress controller updates the configuration file accordingly.

It is important to note that for the Ingress resource to be effective, an Ingress controller must be running in the cluster. Multiple Ingress controllers can be set up within a cluster, which gives you more options for managing incoming traffic.

There are many different types of Ingress controllers, such as Nginx, Ambassador, EnRoute, HAProxy, AWS ALB, and AKS Application Gateway. Cloud-native load balancers from major cloud providers like GCP, AWS, and Azure are also supported.
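
When more than one controller is installed, each Ingress must indicate which controller should handle it; historically this was done with the kubernetes.io/ingress.class annotation (as in the ALB examples later in this post), and in the networking.k8s.io/v1 API it is typically done with spec.ingressClassName. A minimal sketch follows; the class name and Service name are assumptions and must match what is actually installed in the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                      # hypothetical Ingress name
spec:
  ingressClassName: nginx            # must match an installed IngressClass
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc    # hypothetical backend Service
                port:
                  number: 80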

https://aws.amazon.com/blogs/opensource/kubernetes-ingress-aws-alb-ingress-controller/

AWS Load Balancer Controller is an essential tool for exposing Kubernetes Services to the public. It manages the provisioning of AWS Load Balancers, namely an AWS Application Load Balancer (ALB) when a Kubernetes Ingress is created, and an AWS Network Load Balancer (NLB) when a Kubernetes Service of type LoadBalancer is created using IP targets on Amazon EKS 1.18 or later clusters.

ALBs are used by AWS Ingress controllers to expose Ingress resources to external traffic. They provide sophisticated routing capabilities, such as the aforementioned path-based routing, and the ability to combine numerous Services into a single entry point, resulting in cost savings and centralized setup. The AWS Load Balancer Controller is a popular approach for exposing Kubernetes Services: it constructs an ALB from Kubernetes Ingress rules. This offers advantages such as ingress path-based routing and the ability to route traffic directly to Pods inside the Kubernetes cluster, as opposed to depending on internal Service IPs and kube-proxy.
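
As a sketch of the Service-of-type-LoadBalancer path mentioned above, the controller can provision an NLB with IP targets through Service annotations roughly like the following; the Service name is hypothetical, and the exact annotation keys depend on the controller version:

apiVersion: v1
kind: Service
metadata:
  name: blog-nlb                     # hypothetical Service name
  annotations:
    # annotation keys recognised by recent AWS Load Balancer Controller releases
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP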

It is crucial to remember that simply creating an Ingress resource will have no effect on incoming traffic. An Ingress controller must be present in the cluster to satisfy the Ingress and manage incoming traffic effectively.

Kubernetes Deployment Objects

Kubernetes is not bound to any specific cloud provider, so there are different Ingress controllers to choose from. Among the open-source ingress controllers, Ingress-Nginx is a well-known controller that is officially supported by Kubernetes. However, it is not a project managed directly by Nginx, but by the Kubernetes community. If you want to set up and manage performance tuning, rate limits, JWT validation, and other rich Nginx features that you were previously using, Nginx-Ingress, which is officially managed by Nginx, is also an option.

The following file is the Kubernetes object file that creates a single instance of the Nginx Ingress Controller. A Deployment in Kubernetes provides declarative updates for Pods and ReplicaSets (the ReplicaSet manages the Pods created by the Deployment). In this Deployment, replicas is set to 1, which means that only one instance of the Nginx Ingress Controller will be created.

# Reference: https://blog.devgenius.io/ingress-in-kubernetes-67a4b843ea4e
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

Kubernetes Service Object

The following YAML file is a Kubernetes Service that exposes the Nginx Ingress Controller Deployment. A Service in Kubernetes is an abstraction that defines a set of Pods and a policy by which to access them.

In this case, the Service is used to expose the Nginx Ingress Controller Deployment to the network, so that incoming requests can be forwarded to the appropriate Pod.

The type of Service is set to NodePort, which means that the Service will be assigned a static port on each node in the cluster, and traffic to this port will be forwarded to the target port specified in the Service’s ports section.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress

Let's move on to another example that creates two objects in a Kubernetes cluster.

The Deployment object in the first block of code establishes a Deployment named “blog”. The Deployment consists of three replicas of a Pod running the “blog” container, which is based on the dockersamples/static-site image. The environment variable AUTHOR is set to “blog”, and the container listens on port 80.

The Service object defined in the second block of code creates a Service named blog. The Service is of type NodePort, which means a static port is allocated on each node in the cluster to make the Pods running the blog app reachable from the network. The Service is configured with port 80 and target port 80 using the TCP protocol. The Service’s selector, “app: blog”, matches the label specified in the Deployment object.

In this example YAML file, a Deployment is created to manage replicas of a containerized application, and a Service is constructed to expose the application on a static port on each node, making it accessible from outside the cluster.

cat > sample-app.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  selector:
    matchLabels:
      app: blog
  replicas: 3
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: dockersamples/static-site
          env:
            - name: AUTHOR
              value: blog
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: blog
  name: blog
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: blog
EOF

Kubernetes ConfigMap Object

  • The ConfigMap is a Kubernetes resource that is used to store configuration data. In this case, the ConfigMap is used to store configuration data for the Nginx Ingress Controller, such as SSL protocols, log paths, etc. The name of the ConfigMap is nginx-configuration (a sketch with sample data follows the manifest below).
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
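
For illustration, configuration options are supplied under a data field. The keys below are examples of ingress-nginx options and should be verified against the controller documentation for the version in use:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
data:
  # example options; check the ingress-nginx documentation for your controller version
  ssl-protocols: "TLSv1.2 TLSv1.3"
  error-log-level: "notice"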

Kubernetes ServiceAccount Object

  • The ServiceAccount is a Kubernetes resource that defines an identity for processes running in a Pod. The ServiceAccount in this case is named nginx-ingress-serviceaccount and is used to apply the configurations defined in the Ingress resource.
  • The ServiceAccount must have the appropriate roles, cluster roles, and role bindings configured in order for it to have the necessary permissions to apply the Ingress resource configurations (a minimal RBAC sketch follows the manifest below).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
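
A minimal sketch of the accompanying RBAC objects might look like the following; the rule list is abbreviated, and the real ingress-nginx manifests grant a broader set of permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  # abbreviated; the real controller needs additional permissions
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: default   # assumed namespace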

Kubernetes Ingress Object

The following YAML file creates a Kubernetes Ingress object for the blog application. The apiVersion field specifies the extensions/v1beta1 API version (deprecated in newer Kubernetes releases in favour of networking.k8s.io/v1, which the later example uses), and the kind field declares the object to be of the Ingress type.

cat > ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog
  labels:
    app: blog
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: blog
              servicePort: 80
EOF

Several Kubernetes annotations can be applied to an Ingress object, and they are particularly useful for configuring an AWS Application Load Balancer (ALB). The annotations define the ALB scheme as internal or internet-facing, specify the target type for the managed Target Groups, specify the subnets where the ALB should be deployed, apply security groups to the ALB, specify the certificate ARN to enable HTTPS, specify the ports the ALB will expose, specify the health check port, and define the HTTP status codes expected during health checks.

To expose domains on Route 53, hosts may also be added to the Ingress object. One host, dashboard.in.com, with a path of /* and a backend service called kubernetes-dashboard, is added in the sample below.

# tells the controller to provision an ALB and defines whether it should be internal or internet-facing
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing

# defines whether the EC2 instance ID or the pod IP is registered in the managed Target Group (defaults to instance; ip is the other option)
alb.ingress.kubernetes.io/target-type: instance

# specifies the subnets in which the ALB should be created; at least two of them must be in distinct Availability Zones, otherwise the ALB will not be created (if two subnets share the same AZ, only one of the two is used)
alb.ingress.kubernetes.io/subnets: <Public_subnetID 1>,<Public_subnetID 2>,<Public_subnetID 3>

# security groups that should be applied to the ALB instance; these can be referenced by security group IDs or by the Name tag associated with each security group
alb.ingress.kubernetes.io/security-groups: {$SECURITY_GROUPS}

# enables HTTPS using the certificate referenced by an ARN from AWS Certificate Manager
alb.ingress.kubernetes.io/certificate-arn: {$CERTIFICATE_ARN}

# specifies the ports that the ALB listens on
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'

# defines the HTTP status codes that should be expected when doing health checks against the defined healthcheck-path. When omitted, 200 is used.
alb.ingress.kubernetes.io/success-codes: 200,404,301

# if you are using a domain and sub-domain, expose them by adding host entries to the Ingress object; these domains will be hosted on Route 53
- host: dashboard.in.com
  http:
    paths:
      - path: /*
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

Sharing an ALB with multiple k8s ingress rules

Prior to version 2.0 of the AWS ALB Ingress Controller, each Ingress object in k8s would need to get its very own ALB. However, by supplying the alb.ingress.kubernetes.io/group.name annotation, developers may save costs by sharing an ALB while continuing to use the same annotations for advanced routing for a team or any mix of applications. The same load balancer will be used by all Ingresses under the same group.name.

# Reference: https://gtsopour.medium.com/kubernetes-ingress-aws-eks-cluster-with-aws-load-balancer-controller-cf49126f8221
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {NAME}
  namespace: {NAMESPACE}
  annotations:
    kubernetes.io/ingress.class: alb
    # share a single ALB among all Ingress rules with a specific group name
    alb.ingress.kubernetes.io/group.name: {GROUP_NAME}
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: {CERTIFICATE}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host: {host}
      http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: {SERVICE}
                port:
                  number: 8080
