Understanding Kubernetes Services: NodePort and ClusterIP
In the dynamic world of container orchestration, Kubernetes Services play a crucial role in providing a stable and abstract way to access pods. As pods are ephemeral and their IP addresses change frequently, directly using pod IPs is not a reliable approach for communication. This is where Services
come into play, offering a layer of abstraction that ensures consistent access to pods regardless of their underlying changes.
Consider a scenario where one pod needs to communicate with another pod — or an external entity needs to access a pod:
- If the communication is based on the pod’s IP address, any pod restart or rescheduling would lead to IP changes.
- This change in IP address would break the existing connections, causing service disruptions and potential application failures.
To address this challenge, Kubernetes introduced the concept of Services. A Service creates a stable, virtual IP address that remains constant even as the underlying pods change.
- Stability: Services provide a fixed endpoint for accessing a set of pods.
- Load Balancing: They can distribute incoming traffic across multiple pod replicas, enabling better resource utilization and improved application performance.
- Service Discovery: Services facilitate easy discovery of microservices within the cluster.
Types of Kubernetes Services
Kubernetes offers several types of Services to cater to different networking requirements.
ClusterIP is the default and most basic type of Kubernetes Service.
- Exposes the Service on an internal IP address within the cluster.
- Allows communication between different sets of pods within the same cluster.
- Not accessible from outside the cluster.
- Typical use cases: internal communication between microservices, and backend services that don’t need external exposure.
- Utilizes iptables NAT rules for routing traffic to pods.
- Applies identical iptables distribution rules across all nodes in the cluster.
NodePort builds upon ClusterIP, adding external accessibility; a minimal manifest is sketched after the list below.
- Exposes the Service on a static port on each node’s IP.
- Accessible both internally (like ClusterIP) and externally.
- Automatically creates a ClusterIP Service as part of its configuration.
- Typical use cases: development and testing environments, and applications that require direct access to node ports.
- Allocates a port (default range: 30000–32767) on all nodes.
- External traffic can reach the Service through <NodeIP>:<NodePort>.
- Like ClusterIP, it uses iptables NAT for traffic routing.
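A minimal NodePort manifest might look like this sketch (the Service name, label, ports, and the fixed nodePort are illustrative; if nodePort is omitted, Kubernetes picks one from the default 30000–32767 range):
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app               # illustrative label
  ports:
    - port: 80                # cluster-internal Service port
      targetPort: 8080        # container port on the Pods
      nodePort: 30001         # optional; must fall within the NodePort range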
LoadBalancer Services are typically used in cloud environments to distribute external traffic across your services; a short sketch follows the list below.
- Integrates with cloud providers’ load balancing solutions (e.g., AWS ELB, GCP Cloud Load Balancing).
- Automatically creates NodePort and ClusterIP Services as part of its configuration.
- Provides a single point of contact for external traffic.
- Typical use cases: production applications requiring high availability and scalability, and services that need a dedicated load balancer for external access.
- For non-cloud environments, solutions like MetalLB can be used to implement LoadBalancer Services.
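As a sketch, the manifest differs from the NodePort example above only in the type field (name, label, and ports are again illustrative); the external address is then provisioned by the cloud provider or MetalLB:
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
You can watch kubectl get svc my-lb-service until EXTERNAL-IP changes from <pending> to an assigned address.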
When defining a Service, several key components come into play:
- port: The port on which the Service listens.
- targetPort: The port to which traffic is forwarded on the target pods.
- selector: Labels used to identify the pods that the Service should route traffic to.
# In this example:
# The Service listens on port 80.
# It forwards traffic to port 8080 on the pods.
# It selects pods with the label app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
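Assuming the manifest above is saved as my-service.yaml (the file name is illustrative), it can be applied and checked like this:
kubectl apply -f my-service.yaml
kubectl get svc my-service
kubectl get endpoints my-service   # shows the Pod IP:port pairs selected by app: my-app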
The kube-proxy component plays a vital role in implementing the Service concept in Kubernetes. It runs on each node as a DaemonSet and manages the network rules that allow network communication to pods from inside or outside the cluster.
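On kubeadm- and kind-based clusters you can usually confirm this with the commands below (the k8s-app=kube-proxy label is the conventional one; verify it in your own cluster):
kubectl -n kube-system get ds kube-proxy
kubectl -n kube-system get pod -l k8s-app=kube-proxy -o wide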
Userspace Proxy Mode
- Runs in user space.
- Proxies connections through the kube-proxy process.
- Introduces additional network hops between kernel space (netfilter) and user space (the kube-proxy process), leading to performance overhead. Not commonly used in modern Kubernetes deployments.
iptables Proxy Mode
- Uses iptables rules to handle traffic routing.
- kube-proxy manages iptables rules instead of proxying traffic directly.
- More efficient than userspace mode as it avoids additional network hops.
- Operates entirely in kernel space, reducing context switches.
- Performance can degrade with a large number of services due to linear iptables rule processing.
- Troubleshooting can be challenging with complex iptables rules.
IPVS
IP Virtual Server mode offers better performance and scalability compared to iptables mode.
- Uses netlink interface to create IPVS rules.
- Supports more load balancing algorithms than iptables.
- Better performance and scalability for large clusters.
- Reduced complexity in rule management compared to iptables.
- Requires the IPVS kernel modules to be installed on the node.
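A quick way to check whether those modules are already loaded on a node is sketched below (module names follow the kernel's IPVS implementation; load missing ones with modprobe):
lsmod | grep -e ip_vs -e nf_conntrack
# If nothing is listed, the modules can typically be loaded with:
# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack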
nftables
nftables is a new packet classification framework that aims to replace iptables.
- Offers more flexible and powerful packet classification.
- Currently in experimental phase for Kubernetes.
- Improved performance and flexibility compared to iptables.
- Unified firewall tool for IPv4, IPv6, ARP, and Ethernet bridging.
- Not yet widely adopted in production Kubernetes environments.
eBPF & XDP Networking Mode
This is an advanced mode leveraging extended Berkeley Packet Filter (eBPF) and eXpress Data Path (XDP).
- Processes packets at the earliest possible point in the networking stack.
- Can be implemented using projects like Cilium or Calico.
- Significant performance improvements over traditional modes.
- Allows for more complex and efficient networking policies.
- Requires newer kernel versions and specific hardware support for optimal performance.
Practice
We will now practice with a kind cluster named “myk8s” that has four nodes:
- One control-plane node (myk8s-control-plane)
- Three worker nodes (myk8s-worker, myk8s-worker2, myk8s-worker3)
# Create the kind cluster definition file
$ cat <<EOT> kind-svc-w3.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  "InPlacePodVerticalScaling": true
  "MultiCIDRServiceAllocator": true
nodes:
- role: control-plane
  labels:
    mynode: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        runtime-config: api/all=true
- role: worker
  labels:
    mynode: worker1
- role: worker
  labels:
    mynode: worker2
- role: worker
  labels:
    mynode: worker3
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.1.0/24
EOT
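With the definition file in place, the cluster itself is created with kind (assuming kind and kubectl are installed locally; the node image, if you choose to pin one, is up to you):
# Create the cluster from the definition above
$ kind create cluster --config kind-svc-w3.yaml --name myk8s
# Confirm that all four nodes registered
$ kubectl get nodes -o wide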
kubectl get nodes -o jsonpath="{.items[*].metadata.labels}" | grep mynode
This command retrieves the labels of all nodes and filters for the custom “mynode” label.
{"mynode":"control-plane"} {"mynode":"worker1"} {"mynode":"worker2"} {"mynode":"worker3"}
Custom labels have been applied to the nodes:
- Control-plane: mynode=control-plane
- Workers: mynode=worker1, mynode=worker2, mynode=worker3
- This labeling strategy allows for targeted pod scheduling and network policies. The use of custom labels demonstrates how Kubernetes can be configured for specific organizational needs.
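For instance, a Pod can be pinned to one of the labeled workers with a nodeSelector (the Pod name and image here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: pin-demo        # illustrative name
spec:
  nodeSelector:
    mynode: worker1     # matches the custom node label above
  containers:
  - name: app
    image: nginx        # illustrative image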
kubectl get cm -n kube-system kubeadm-config -oyaml | grep -i subnet
kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"
These commands extract subnet information from the kubeadm configuration and cluster info.
podSubnet: 10.10.0.0/16
serviceSubnet: 10.200.1.0/24
" - service-cluster-ip-range=10.200.1.0/24",
" - cluster-cidr=10.10.0.0/16",
Cluster CIDR
- Pod CIDR: 10.10.0.0/16
- Service CIDR: 10.200.1.0/24
This configuration defines the IP ranges for pods and services across the entire cluster. The large pod CIDR allows for many pods across all nodes, while the smaller service CIDR is sufficient for Kubernetes services.
kubectl get nodes -o jsonpath="{.items[*].spec.podCIDR}"
# 10.10.0.0/24 10.10.2.0/24 10.10.1.0/24 10.10.3.0/24
- Control-plane: 10.10.0.0/24
- Worker: 10.10.2.0/24
- Worker2: 10.10.1.0/24
- Worker3: 10.10.3.0/24
This configuration ensures each node has a unique subnet for its pods, preventing IP conflicts. The allocation of /24 subnets to each node allows for up to 254 pods per node.
for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i ls /opt/cni/bin/; echo; done
This command lists the CNI plugins available on each node. All nodes show the same set of CNI plugins.
host-local loopback portmap ptp
The cluster uses kindnet as its CNI plugin. Each node’s /opt/cni/bin directory contains the following CNI plugins.
- host-local: For IP address management
- loopback: For loopback interface configuration
- portmap: For port mapping
- ptp (point-to-point): For creating point-to-point links between containers
for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i cat /etc/cni/net.d/10-kindnet.conflist; echo; done
The kindnet configuration (10-kindnet.conflist) uses the ptp plugin for pod networking, with host-local for IP address management (IPAM). Each node has a unique subnet configured in the “ranges” section, corresponding to the node-specific Pod CIDRs mentioned earlier.
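As a rough sketch, such a conflist typically looks like the following on one node (values are illustrative; treat the actual file on your nodes as authoritative):
{
  "cniVersion": "0.3.1",
  "name": "kindnet",
  "plugins": [
    {
      "type": "ptp",
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.10.2.0/24" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}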
kubectl describe cm -n kube-system kube-proxy
- clusterCIDR: 10.10.0.0/16
- mode: iptables
- Feature gates enabled: InPlacePodVerticalScaling and MultiCIDRServiceAllocator
The kube-proxy is configured to use iptables mode for service routing. The clusterCIDR matches the Pod CIDR we saw earlier. The enabled feature gates suggest that the cluster supports in-place vertical scaling of pods and multi-CIDR service allocation, which are advanced Kubernetes features.
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-control-plane iptables -t $i -S ; echo; done
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-worker iptables -t $i -S ; echo; done
These commands display the iptables rules for different tables on both the control-plane and worker nodes.
- KUBE-SERVICES chain handles service routing
- KUBE-NODEPORTS chain for NodePort services
- KUBE-EXTERNAL-SERVICES for externally-visible services
- KUBE-MARK-MASQ for SNAT (Source Network Address Translation)
- Specific rules for kube-dns (CoreDNS) service
The control-plane node has more complex iptables rules, particularly in the NAT table, to handle service discovery and load balancing. The presence of rules for kube-dns indicates that DNS resolution for services is properly configured.
- Simpler rules compared to the control-plane
- KUBE-FIREWALL chain for basic network policies
- KIND-MASQ-AGENT chain for masquerading outbound traffic
Worker nodes have simpler iptables configurations, mainly focused on enforcing network policies and ensuring proper network address translation for pod traffic.
docker network ls
docker inspect kind
- The Kind cluster uses a Docker bridge network named "kind"
- IPv4 subnet: 172.18.0.0/16
- IPv6 subnet: fc00:f853:ccd:e793::/64
- Each node (container) is assigned an IP from this subnet
These commands list Docker networks and inspect the “kind” network used by the cluster. The Kind cluster creates a separate Docker network to isolate the Kubernetes nodes. Additionally, the network provides connectivity between the nodes and allows for simulating a multi-node Kubernetes cluster on a single machine.
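To pull out just the subnets without reading the full JSON, the --format flag of docker network inspect can help (a sketch):
docker network inspect kind --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
# => 172.18.0.0/16 fc00:f853:ccd:e793::/64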
docker exec -it myk8s-control-plane arp-scan --interface=eth0 --localnet
Interface: eth0, type: EN10MB, MAC: 02:42:ac:12:00:05, IPv4: 172.18.0.5
Starting arp-scan 1.10.0 with 65536 hosts (https://github.com/royhills/arp-scan)
172.18.0.1 02:42:f9:27:f1:b5 (Unknown: locally administered)
172.18.0.2 02:42:ac:12:00:02 (Unknown: locally administered)
172.18.0.3 02:42:ac:12:00:03 (Unknown: locally administered)
172.18.0.4 02:42:ac:12:00:04 (Unknown: locally administered)
172.18.0.6 02:42:ac:12:00:06 (Unknown: locally administered)
The control-plane node’s IP is 172.18.0.5 with MAC 02:42:ac:12:00:05, and it discovers 5 other IP addresses on the network.
- 172.18.0.1: Docker bridge gateway
- 172.18.0.2, 172.18.0.3, 172.18.0.4: The worker nodes
- 172.18.0.6: the netshoot container named mypc (created in the next step).
docker run -d --rm --name mypc --network kind nicolaka/netshoot sleep infinity
This command creates a new container named “mypc” and connects it to the “kind” network. It uses the nicolaka/netshoot image, which is a network troubleshooting toolkit.
docker exec -it mypc ping -c 1 172.18.0.1
for i in {1..5} ; do docker exec -it mypc ping -c 1 172.18.0.$i; done
--- 172.18.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.147 ms
--- 172.18.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
PING 172.18.0.3 (172.18.0.3) 56(84) bytes of data.
64 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.130 ms
--- 172.18.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
PING 172.18.0.4 (172.18.0.4) 56(84) bytes of data.
64 bytes from 172.18.0.4: icmp_seq=1 ttl=64 time=0.258 ms
--- 172.18.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
PING 172.18.0.5 (172.18.0.5) 56(84) bytes of data.
64 bytes from 172.18.0.5: icmp_seq=1 ttl=64 time=0.182 ms
These commands ping all the discovered IP addresses from the new “mypc” container. All addresses (172.18.0.1 to 172.18.0.5), including the Docker bridge gateway, are reachable from the new container. The round-trip times are very low (0.1–0.26 ms), indicating they are all on the same local bridge network.
docker exec -it mypc zsh
ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:00:06
inet addr:172.18.0.6 Bcast:172.18.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe12:6/64 Scope:Link
inet6 addr: fc00:f853:ccd:e793::6/64 Scope:Global
The new container “mypc” has been assigned IP 172.18.0.6. It’s on the same subnet (172.18.0.0/16) as the Kubernetes nodes. It also has an IPv6 address in the same subnet as the Kind network (fc00:f853:ccd:e793::/64).
# Install kube-ops-view
$ helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
# => "geek-cookbook" has been added to your repositories
$ helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30000 --set env.TZ="Asia/Seoul" --namespace kube-system
# => NAME: kube-ops-view
# ...
# 1. Get the application URL by running these commands:
# export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services kube-ops-view)
# export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
# echo http://$NODE_IP:$NODE_PORT
# Pin the deployment to myk8s-control-plane
$ kubectl -n kube-system edit deploy kube-ops-view
---
spec:
  ...
  template:
    ...
    spec:
      nodeSelector:
        mynode: control-plane
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Equal"
        effect: "NoSchedule"
---
# Verify the installation
$ kubectl -n kube-system get pod -o wide -l app.kubernetes.io/instance=kube-ops-view
# => NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# kube-ops-view-58f96c464d-t5t68 1/1 Running 0 30s 10.10.0.5 myk8s-control-plane <none> <none>
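# Because the kind config maps containerPort 30000 to hostPort 30000, the dashboard should also be reachable
# directly from the host (the #scale fragment is just a display option of kube-ops-view):
# Browser URL: http://127.0.0.1:30000/#scale=1.5
$ curl -s http://127.0.0.1:30000 | head -n 5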
Practice with three backend Pods
We’ll start by creating three backend Pods using the traefik/whoami image. This image returns information about the container it’s running in, which will help us verify load balancing.
# 3pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webpod1
  labels:
    app: webpod
spec:
  nodeName: myk8s-worker
  containers:
  - name: container
    image: traefik/whoami
  terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Pod
metadata:
  name: webpod2
  labels:
    app: webpod
spec:
  nodeName: myk8s-worker2
  containers:
  - name: container
    image: traefik/whoami
  terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Pod
metadata:
  name: webpod3
  labels:
    app: webpod
spec:
  nodeName: myk8s-worker3
  containers:
  - name: container
    image: traefik/whoami
  terminationGracePeriodSeconds: 0
# Note: We're using `nodeName` to explicitly place each Pod on a different worker node. In a production environment, you'd typically let Kubernetes handle Pod scheduling.
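Assuming the manifest above is saved as 3pod.yaml, apply it and confirm that each Pod landed on its intended node:
kubectl apply -f 3pod.yaml
kubectl get pod -o wide -l app=webpod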
We’ll create a client Pod that we’ll use to test our ClusterIP service.
# netpod.yml
apiVersion: v1
kind: Pod
metadata:
  name: net-pod
spec:
  nodeName: myk8s-control-plane
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
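Again assuming the file name shown above, create the client Pod and check that it is running on the control-plane node:
kubectl apply -f netpod.yml
kubectl get pod net-pod -o wide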
This Pod uses the nicolaka/netshoot image, which contains various networking tools that will be useful for our tests. Now, let’s create the ClusterIP service that will expose our backend Pods:
# svc-clusterip.yml
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
spec:
  ports:
    - name: svc-webport
      port: 9000
      targetPort: 80
  selector:
    app: webpod
  type: ClusterIP
- port: 9000 is the port on which the Service is accessible within the cluster.
- targetPort: 80 is the port on which the backend Pods are listening.
- selector: app: webpod determines which Pods are part of this Service.
- type: ClusterIP explicitly sets the Service type (ClusterIP is the default if omitted).
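Assuming the manifest is saved as svc-clusterip.yml, create the Service and check the cluster IP and endpoints it picks up:
kubectl apply -f svc-clusterip.yml
kubectl get svc,endpoints svc-clusterip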
When a ClusterIP service is created, several things happen behind the scenes: Kubernetes assigns a cluster-internal IP address to the service from a predefined IP range (service CIDR). The Kubernetes DNS service creates a DNS entry for the service, allowing it to be accessed by name within the cluster.
The service creation triggers a chain of events:
1. The API server records the new Service and its corresponding EndpointSlice objects.
2. kube-proxy on each node, which watches Services and EndpointSlices through the API server, is notified of the change.
3. kube-proxy then updates the node’s iptables rules.
These iptables rules are crucial for routing traffic to the correct Pods. They ensure that…
- Requests to the service IP are intercepted.
- Destination NAT (DNAT) is performed to redirect the traffic to one of the backend Pods.
- The selection of the backend Pod is done randomly, providing a simple form of load balancing.
After creating all the components, we can run some tests from our client Pod.
# Access the service by IP
kubectl exec -it net-pod -- curl http://<service-ip>:9000
# Access the service by DNS name
kubectl exec -it net-pod -- curl http://svc-clusterip.default.svc.cluster.local:9000
# Run multiple requests to see load balancing in action
kubectl exec -it net-pod -- bash -c 'for i in {1..10}; do curl -s http://svc-clusterip:9000 | grep Hostname; done'
# The service is accessible by both IP and DNS name.
# Requests are distributed across all backend Pods.
# The service is only accessible from within the cluster.
By default, ClusterIP services use random load balancing. If you need session affinity, you can set spec.sessionAffinity: ClientIP in the service definition. For direct DNS-based access to all Pods, you can create a headless service by setting spec.clusterIP: None.
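As a sketch, those two variations would look like this (the Service names are illustrative):
# Session affinity: requests from the same client IP keep going to the same Pod
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip-affinity   # illustrative name
spec:
  selector:
    app: webpod
  ports:
    - name: svc-webport
      port: 9000
      targetPort: 80
  sessionAffinity: ClientIP
---
# Headless Service: no cluster IP; DNS returns the Pod IPs directly
apiVersion: v1
kind: Service
metadata:
  name: svc-headless             # illustrative name
spec:
  clusterIP: None
  selector:
    app: webpod
  ports:
    - name: svc-webport
      port: 9000
      targetPort: 80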
kubectl describe svc svc-clusterip
Name: svc-clusterip
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=webpod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.1.197
IPs: 10.200.1.197
Port: svc-webport 9000/TCP
TargetPort: 80/TCP
Endpoints: 10.10.1.2:80,10.10.2.2:80,10.10.3.3:80
Session Affinity: None
Events: <none>
Kubernetes creates an Endpoints object for each service, which you can inspect to see which Pods are currently backing the service.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get endpoints svc-clusterip
NAME ENDPOINTS AGE
svc-clusterip 10.10.1.2:80,10.10.2.2:80,10.10.3.3:80 8m26s
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
svc-clusterip-cmkp6 IPv4 80 10.10.1.2,10.10.2.2,10.10.3.3 8m28s
The following commands retrieve the Pod IP addresses and demonstrate how Kubernetes assigns addresses from the configured Pod CIDR range.
This output shows that each Pod is assigned an IP address from a different subnet, corresponding to the node it’s running on. This aligns with the Kubernetes networking model where each node is allocated a subnet from the cluster’s Pod CIDR range.
WEBPOD1=$(kubectl get pod webpod1 -o jsonpath={.status.podIP})
WEBPOD2=$(kubectl get pod webpod2 -o jsonpath={.status.podIP})
WEBPOD3=$(kubectl get pod webpod3 -o jsonpath={.status.podIP})
echo $WEBPOD1 $WEBPOD2 $WEBPOD3
# Output: 10.10.2.2 10.10.1.2 10.10.3.3
The following command demonstrates direct communication with each Pod:
for pod in $WEBPOD1 $WEBPOD2 $WEBPOD3; do
kubectl exec -it net-pod -- curl -s $pod | egrep 'Host|RemoteAddr';
done
This shows that Pods can communicate directly with each other using their assigned IP addresses, a fundamental aspect of the Kubernetes networking model.
SVC1=$(kubectl get svc svc-clusterip -o jsonpath={.spec.clusterIP})
echo $SVC1
# Output: 10.200.1.197
This IP is from the Service CIDR range, which is separate from the Pod CIDR range.
for i in control-plane worker worker2 worker3; do
echo ">> node myk8s-$i <<";
docker exec -it myk8s-$i iptables -t nat -S | grep $SVC1;
echo;
done
- Rules are created on the control-plane and worker2 nodes, but not on worker and worker3.
- The rules redirect traffic destined for the service IP (10.200.1.197) to a specific iptables chain (KUBE-SVC-KBDEBIL6IU6WL7RF).
- There’s a rule to mark packets for masquerading if they’re not from the Pod CIDR (10.10.0.0/16).
kubectl exec -it net-pod -- curl -s --connect-timeout 1 $SVC1:9000 | grep Hostname
# Output: Hostname: webpod1
kubectl exec -it net-pod -- zsh -c "for i in {1..10}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr"
This shows that requests are distributed across all three backend Pods; in iptables mode each connection picks a backend at random, so the split is only roughly even rather than strict round-robin.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# WEBPOD1=$(kubectl get pod webpod1 -o jsonpath={.status.podIP})
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# WEBPOD2=$(kubectl get pod webpod2 -o jsonpath={.status.podIP})
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# WEBPOD3=$(kubectl get pod webpod3 -o jsonpath={.status.podIP})
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# echo $WEBPOD1 $WEBPOD2 $WEBPOD3
10.10.2.2 10.10.1.2 10.10.3.3
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for pod in $WEBPOD1 $WEBPOD2 $WEBPOD3; do kubectl exec -it net-pod -- curl -s $pod; done
Hostname: webpod1
IP: 127.0.0.1
IP: ::1
IP: 10.10.2.2
IP: fe80::ccbe:deff:fe5c:c5a7
RemoteAddr: 10.10.0.6:39874
GET / HTTP/1.1
Host: 10.10.2.2
User-Agent: curl/8.7.1
Accept: */*
Hostname: webpod2
IP: 127.0.0.1
IP: ::1
IP: 10.10.1.2
IP: fe80::d0d7:b1ff:fe30:1e9c
RemoteAddr: 10.10.0.6:45786
GET / HTTP/1.1
Host: 10.10.1.2
User-Agent: curl/8.7.1
Accept: */*
Hostname: webpod3
IP: 127.0.0.1
IP: ::1
IP: 10.10.3.3
IP: fe80::ec27:fcff:fe16:e870
RemoteAddr: 10.10.0.6:56258
GET / HTTP/1.1
Host: 10.10.3.3
User-Agent: curl/8.7.1
Accept: */*
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for pod in $WEBPOD1 $WEBPOD2 $WEBPOD3; do kubectl exec -it net-pod -- curl -s $pod | egrep 'Host|RemoteAddr'; done
Hostname: webpod1
RemoteAddr: 10.10.0.6:57996
Host: 10.10.2.2
Hostname: webpod2
RemoteAddr: 10.10.0.6:50170
Host: 10.10.1.2
Hostname: webpod3
RemoteAddr: 10.10.0.6:34514
Host: 10.10.3.3
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# SVC1=$(kubectl get svc svc-clusterip -o jsonpath={.spec.clusterIP})
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# echo $SVC1
10.200.1.197
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane iptables -t nat -S | grep $SVC1
-A KUBE-SERVICES -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-SVC-KBDEBIL6IU6WL7RF
-A KUBE-SVC-KBDEBIL6IU6WL7RF ! -s 10.10.0.0/16 -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i iptables -t nat -S | grep $SVC1; echo; done
>> node myk8s-control-plane
-A KUBE-SERVICES -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-SVC-KBDEBIL6IU6WL7RF
-A KUBE-SVC-KBDEBIL6IU6WL7RF ! -s 10.10.0.0/16 -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
>> node myk8s-worker
>> node myk8s-worker2
-A KUBE-SERVICES -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-SVC-KBDEBIL6IU6WL7RF
-A KUBE-SVC-KBDEBIL6IU6WL7RF ! -s 10.10.0.0/16 -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
>> node myk8s-worker3
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane ss -tnlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 172.18.0.5:2379 0.0.0.0:* users:(("etcd",pid=628,fd=9))
LISTEN 0 4096 172.18.0.5:2380 0.0.0.0:* users:(("etcd",pid=628,fd=7))
LISTEN 0 4096 127.0.0.11:46641 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:2381 0.0.0.0:* users:(("etcd",pid=628,fd=13))
LISTEN 0 4096 127.0.0.1:2379 0.0.0.0:* users:(("etcd",pid=628,fd=8))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=684,fd=19))
LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=853,fd=18))
LISTEN 0 4096 127.0.0.1:10259 0.0.0.0:* users:(("kube-scheduler",pid=492,fd=3))
LISTEN 0 4096 127.0.0.1:10257 0.0.0.0:* users:(("kube-controller",pid=564,fd=3))
LISTEN 0 4096 127.0.0.1:45875 0.0.0.0:* users:(("containerd",pid=111,fd=11))
LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=853,fd=17))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=684,fd=13))
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=537,fd=3))
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane ip -c route
default via 172.18.0.1 dev eth0
10.10.0.2 dev veth8e88ba3f scope host
10.10.0.3 dev veth39b2d962 scope host
10.10.0.4 dev vethac443434 scope host
10.10.0.5 dev veth336fde7d scope host
10.10.0.6 dev vethf7ada54e scope host
10.10.1.0/24 via 172.18.0.2 dev eth0
10.10.2.0/24 via 172.18.0.3 dev eth0
10.10.3.0/24 via 172.18.0.4 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.5
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl exec -it net-pod -- curl -s --connect-timeout 1 $SVC1:80
command terminated with exit code 28
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl exec -it net-pod -- curl -s --connect-timeout 1 $SVC1:9000 | grep Hostname
Hostname: webpod1
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4#
# When accessed with curl, the Service does not respond on the container's targetPort 80, but it does respond on port 9000.
# Also, each request is load-balanced across the Pods, so the Hostname: value changes between requests.
# Verify load-balanced access through the Service (ClusterIP)
## Use a for loop to send repeated requests to the SVC1 IP and count how many times each response appears
## Running this several times shows that curl requests to the SVC1 IP are spread across the 3 Pods at roughly 33% each
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl exec -it net-pod -- zsh -c "for i in {1..10}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr"
4 Hostname: webpod2
4 Hostname: webpod1
2 Hostname: webpod3
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl exec -it net-pod -- zsh -c "for i in {1..10}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr"
5 Hostname: webpod1
4 Hostname: webpod3
1 Hostname: webpod2
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl exec -it net-pod -- zsh -c "for i in {1..10}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr"
7 Hostname: webpod3
2 Hostname: webpod1
1 Hostname: webpod2
docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/# conntrack -E
[NEW] tcp 6 120 SYN_SENT src=127.0.0.1 dst=127.0.0.1 sport=43952 dport=2381 [UNREPLIED] src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=43952
[UPDATE] tcp 6 60 SYN_RECV src=127.0.0.1 dst=127.0.0.1 sport=43952 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=43952
[UPDATE] tcp 6 86400 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=43952 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=43952 [ASSURED]
[UPDATE] tcp 6 30 LAST_ACK src=127.0.0.1 dst=127.0.0.1 sport=48474 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=48474 [ASSURED]
[UPDATE] tcp 6 120 TIME_WAIT src=127.0.0.1 dst=127.0.0.1 sport=48474 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=48474 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=172.18.0.5 dst=172.18.0.5 sport=39734 dport=6443 [UNREPLIED] src=172.18.0.5 dst=172.18.0.5 sport=6443 dport=39734
[UPDATE] tcp 6 60 SYN_RECV src=172.18.0.5 dst=172.18.0.5 sport=39734 dport=6443 src=172.18.0.5 dst=172.18.0.5 sport=6443 dport=39734
[UPDATE] tcp 6 86400 ESTABLISHED src=172.18.0.5 dst=172.18.0.5 sport=39734 dport=6443 src=172.18.0.5 dst=172.18.0.5 sport=6443 dport=39734 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=172.18.0.5 dst=172.18.0.5 sport=39734 dport=6443 src=172.18.0.5 dst=172.18.0.5 sport=6443 dport=39734 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=172.18.0.5 dst=172.18.0.5 sport=39734 dport=6443 src=172.18.0.5 dst=172.18.0.5 sport=6443 dport=39734 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=172.18.0.5 dst=172.18.0.5 sport=39734 dport=6443 src=172.18.0.5 dst=172.18.0.5 sport=6443 dport=39734 [ASSURED]
^Cconntrack v1.4.7 (conntrack-tools): 26 flow events have been shown.
root@myk8s-control-plane:/# ^C
root@myk8s-control-plane:/# ^C
root@myk8s-control-plane:/# ^C
root@myk8s-control-plane:/# conntrack -L --dst $SVC1 # service ClusterIP
root@myk8s-control-plane:/# ebtables -L
Bridge table: filter
Bridge chain: INPUT, entries: 0, policy: ACCEPT
Bridge chain: FORWARD, entries: 0, policy: ACCEPT
Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
The conntrack -E command output shows how connections are tracked, including the various states a TCP connection goes through (SYN_SENT, SYN_RECV, ESTABLISHED, etc.). This tracking is essential for the correct functioning of iptables-based service proxying.
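If a filtered query such as conntrack -L --dst $SVC1 comes back empty (entries may have expired, or the tuple you filtered on is not the one stored), a broader listing piped through grep is a simple fallback (a sketch using the Service and Pod IPs from earlier):
conntrack -L -p tcp | grep -e 10.200.1.197 -e 10.10.2.2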
The ip route command output shows how traffic destined for each remote node's Pod CIDR is routed via that node's IP on the Docker network, illustrating that kindnet relies on plain host routes rather than an encapsulating overlay.
Access Kubernetes worker containers and use tcpdump inside them
docker exec -it myk8s-worker bash
docker exec -it myk8s-worker2 bash
docker exec -it myk8s-worker3 bash
ip -c link
ip -c addr
ip -c route
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 02:47:bd:e3:55:01 brd ff:ff:ff:ff:ff:ff
altname enp0s5
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 8981 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
12: wireguard.cali: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8941 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/none
15: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:08:c5:59:f2 brd ff:ff:ff:ff:ff:ff
18: br-a765bac7e794: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:f9:27:f1:b5 brd ff:ff:ff:ff:ff:ff
20: vethdbf1a57@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a765bac7e794 state UP mode DEFAULT group default
link/ether 3a:64:16:2a:01:c7 brd ff:ff:ff:ff:ff:ff link-netnsid 2
22: veth2a82105@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a765bac7e794 state UP mode DEFAULT group default
link/ether 1e:09:dc:26:03:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
24: veth7e2e38a@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a765bac7e794 state UP mode DEFAULT group default
link/ether 4e:ab:24:24:1a:fb brd ff:ff:ff:ff:ff:ff link-netnsid 0
26: vethce7e62b@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a765bac7e794 state UP mode DEFAULT group default
link/ether 36:6b:b8:a5:51:b9 brd ff:ff:ff:ff:ff:ff link-netnsid 3
28: veth7e1b3bc@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a765bac7e794 state UP mode DEFAULT group default
link/ether 42:78:29:22:14:eb brd ff:ff:ff:ff:ff:ff link-netnsid 4
root@myk8s-worker:/# ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: veth50ff3276@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ce:a2:21:fa:64:2a brd ff:ff:ff:ff:ff:ff link-netns cni-26827c08-44e3-8409-6699-59f0890dc523
inet 10.10.2.1/32 scope global veth50ff3276
valid_lft forever preferred_lft forever
inet6 fe80::cca2:21ff:fefa:642a/64 scope link
valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fc00:f853:ccd:e793::3/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:3/64 scope link
valid_lft forever preferred_lft forever
Choose one of the veth interfaces that is connected to your Pods or carries the relevant traffic. In this case, we can start with vethdbf1a57 or any other veth interface that is UP.
tcpdump -i vethdbf1a57 tcp port 80 -nnq
tcpdump -i vethdbf1a57 tcp port 9000 -nnq
# If you need to capture and analyze the traffic in Wireshark later, use the following command to save the capture to a .pcap file
tcpdump -i vethdbf1a57 tcp port 80 -w /root/svc1-1.pcap
# You can use ngrep as well to filter and analyze traffic in a more readable form:
ngrep -tW byline -d vethdbf1a57 '' 'tcp port 80'
root@myk8s-worker2:/# ngrep -tW byline -d vethe07796d9 '' 'tcp port 80'
interface: vethe07796d9 (10.10.1.1/255.255.255.255)
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
$ kubectl exec -it net-pod -- zsh
----------------------------------
$ for i in {1..10}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr
# => 4 Hostname: webpod3
# 3 Hostname: webpod2
# 3 Hostname: webpod1
$ exit
----------------------------------
tshark -r /root/worker1.pcap
# Analyze the packet flow from net-pod (10.10.0.7) to webpod1 (10.10.4.3).
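# To narrow the analysis to the HTTP traffic only, a display filter can be applied (a sketch):
tshark -r /root/worker1.pcap -Y 'tcp.port == 80'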
root@myk8s-control-plane:/# iptables -t filter -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-PROXY-FIREWALL
-N KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
root@myk8s-control-plane:/# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER_OUTPUT
-N DOCKER_POSTROUTING
-N KIND-MASQ-AGENT
-N KUBE-EXT-7EJNTS7AENER2WX5
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-2XZJVPRY2PQVE3B3
-N KUBE-SEP-2ZVL7EJZGLLRN3QG
-N KUBE-SEP-6GODNNVFRWQ66GUT
-N KUBE-SEP-DOIEFYKPESCDTYCH
-N KUBE-SEP-K7ALM6KJRBAYOHKX
-N KUBE-SEP-OJCTP5LCEHQJ3D72
-N KUBE-SEP-RT3F6VLY3P67FIV3
-N KUBE-SEP-TBW2IYJKUCAC7GB3
-N KUBE-SEP-XVHB3NIW2NQLTFP3
-N KUBE-SEP-XWEOB3JN6VI62DQQ
-N KUBE-SEP-ZEA5VGCBA2QNA7AK
-N KUBE-SERVICES
-N KUBE-SVC-7EJNTS7AENER2WX5
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-KBDEBIL6IU6WL7RF
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -d 172.18.0.1/32 -j DOCKER_OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -d 172.18.0.1/32 -j DOCKER_OUTPUT
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -d 172.18.0.1/32 -j DOCKER_POSTROUTING
-A POSTROUTING -m addrtype ! --dst-type LOCAL -m comment --comment "kind-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain" -j KIND-MASQ-AGENT
-A DOCKER_OUTPUT -d 172.18.0.1/32 -p tcp -m tcp --dport 53 -j DNAT --to-destination 127.0.0.11:46641
-A DOCKER_OUTPUT -d 172.18.0.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 127.0.0.11:50010
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p tcp -m tcp --sport 46641 -j SNAT --to-source 172.18.0.1:53
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p udp -m udp --sport 50010 -j SNAT --to-source 172.18.0.1:53
-A KIND-MASQ-AGENT -d 10.10.0.0/16 -m comment --comment "kind-masq-agent: local traffic is not subject to MASQUERADE" -j RETURN
-A KIND-MASQ-AGENT -m comment --comment "kind-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain)" -j MASQUERADE
-A KUBE-EXT-7EJNTS7AENER2WX5 -m comment --comment "masquerade traffic for kube-system/kube-ops-view:http external destinations" -j KUBE-MARK-MASQ
-A KUBE-EXT-7EJNTS7AENER2WX5 -j KUBE-SVC-7EJNTS7AENER2WX5
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "kube-system/kube-ops-view:http" -m tcp --dport 30000 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-7EJNTS7AENER2WX5
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-ops-view:http" -m tcp --dport 30000 -j KUBE-EXT-7EJNTS7AENER2WX5
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-2XZJVPRY2PQVE3B3 -s 10.10.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-2XZJVPRY2PQVE3B3 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.10.0.2:53
-A KUBE-SEP-2ZVL7EJZGLLRN3QG -s 10.10.0.5/32 -m comment --comment "kube-system/kube-ops-view:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-2ZVL7EJZGLLRN3QG -p tcp -m comment --comment "kube-system/kube-ops-view:http" -m tcp -j DNAT --to-destination 10.10.0.5:8080
-A KUBE-SEP-6GODNNVFRWQ66GUT -s 10.10.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-6GODNNVFRWQ66GUT -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.10.0.3:9153
-A KUBE-SEP-DOIEFYKPESCDTYCH -s 10.10.2.2/32 -m comment --comment "default/svc-clusterip:svc-webport" -j KUBE-MARK-MASQ
-A KUBE-SEP-DOIEFYKPESCDTYCH -p tcp -m comment --comment "default/svc-clusterip:svc-webport" -m tcp -j DNAT --to-destination 10.10.2.2:80
-A KUBE-SEP-K7ALM6KJRBAYOHKX -s 10.10.3.3/32 -m comment --comment "default/svc-clusterip:svc-webport" -j KUBE-MARK-MASQ
-A KUBE-SEP-K7ALM6KJRBAYOHKX -p tcp -m comment --comment "default/svc-clusterip:svc-webport" -m tcp -j DNAT --to-destination 10.10.3.3:80
-A KUBE-SEP-OJCTP5LCEHQJ3D72 -s 172.18.0.5/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-OJCTP5LCEHQJ3D72 -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.18.0.5:6443
-A KUBE-SEP-RT3F6VLY3P67FIV3 -s 10.10.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-RT3F6VLY3P67FIV3 -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.10.0.2:9153
-A KUBE-SEP-TBW2IYJKUCAC7GB3 -s 10.10.1.2/32 -m comment --comment "default/svc-clusterip:svc-webport" -j KUBE-MARK-MASQ
-A KUBE-SEP-TBW2IYJKUCAC7GB3 -p tcp -m comment --comment "default/svc-clusterip:svc-webport" -m tcp -j DNAT --to-destination 10.10.1.2:80
-A KUBE-SEP-XVHB3NIW2NQLTFP3 -s 10.10.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-XVHB3NIW2NQLTFP3 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.10.0.2:53
-A KUBE-SEP-XWEOB3JN6VI62DQQ -s 10.10.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-XWEOB3JN6VI62DQQ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.10.0.3:53
-A KUBE-SEP-ZEA5VGCBA2QNA7AK -s 10.10.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZEA5VGCBA2QNA7AK -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.10.0.3:53
-A KUBE-SERVICES -d 10.200.1.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.200.1.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.200.1.112/32 -p tcp -m comment --comment "kube-system/kube-ops-view:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-7EJNTS7AENER2WX5
-A KUBE-SERVICES -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-SVC-KBDEBIL6IU6WL7RF
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-7EJNTS7AENER2WX5 ! -s 10.10.0.0/16 -d 10.200.1.112/32 -p tcp -m comment --comment "kube-system/kube-ops-view:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SVC-7EJNTS7AENER2WX5 -m comment --comment "kube-system/kube-ops-view:http -> 10.10.0.5:8080" -j KUBE-SEP-2ZVL7EJZGLLRN3QG
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.10.0.0/16 -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.10.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XVHB3NIW2NQLTFP3
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.10.0.3:53" -j KUBE-SEP-ZEA5VGCBA2QNA7AK
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.10.0.0/16 -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.10.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-RT3F6VLY3P67FIV3
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.10.0.3:9153" -j KUBE-SEP-6GODNNVFRWQ66GUT
-A KUBE-SVC-KBDEBIL6IU6WL7RF ! -s 10.10.0.0/16 -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-MARK-MASQ
-A KUBE-SVC-KBDEBIL6IU6WL7RF -m comment --comment "default/svc-clusterip:svc-webport -> 10.10.1.2:80" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-TBW2IYJKUCAC7GB3
-A KUBE-SVC-KBDEBIL6IU6WL7RF -m comment --comment "default/svc-clusterip:svc-webport -> 10.10.2.2:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-DOIEFYKPESCDTYCH
-A KUBE-SVC-KBDEBIL6IU6WL7RF -m comment --comment "default/svc-clusterip:svc-webport -> 10.10.3.3:80" -j KUBE-SEP-K7ALM6KJRBAYOHKX
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.10.0.0/16 -d 10.200.1.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 172.18.0.5:6443" -j KUBE-SEP-OJCTP5LCEHQJ3D72
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.10.0.0/16 -d 10.200.1.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.10.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2XZJVPRY2PQVE3B3
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.10.0.3:53" -j KUBE-SEP-XWEOB3JN6VI62DQQ
root@myk8s-control-plane:/# iptables -t nat -S | wc -l
98
root@myk8s-control-plane:/# iptables -t mangle -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-IPTABLES-HINT
-N KUBE-KUBELET-CANARY
-N KUBE-PROXY-CANARY
root@myk8s-control-plane:/# iptables -nvL -t filter
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
13131 789K KUBE-PROXY-FIREWALL 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes load balancer firewall */
966K 460M KUBE-NODEPORTS 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes health check service ports */
13132 789K KUBE-EXTERNAL-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */
981K 468M KUBE-FIREWALL 0 -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
130 7800 KUBE-PROXY-FIREWALL 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes load balancer firewall */
1282 119K KUBE-FORWARD 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
130 7800 KUBE-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
130 7800 KUBE-EXTERNAL-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
15969 959K KUBE-PROXY-FIREWALL 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes load balancer firewall */
15969 959K KUBE-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
973K 197M KUBE-FIREWALL 0 -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-EXTERNAL-SERVICES (2 references)
pkts bytes target prot opt in out source destination
Chain KUBE-FIREWALL (2 references)
pkts bytes target prot opt in out source destination
0 0 DROP 0 -- * * !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP 0 -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID nfacct-name ct_state_invalid_dropped_pkts
0 0 ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000
819 78806 ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-PROXY-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-PROXY-FIREWALL (3 references)
pkts bytes target prot opt in out source destination
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
root@myk8s-control-plane:/# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 155 packets, 9324 bytes)
pkts bytes target prot opt in out source destination
278 16754 KUBE-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
2 170 DOCKER_OUTPUT 0 -- * * 0.0.0.0/0 172.18.0.1
Chain INPUT (policy ACCEPT 157 packets, 9494 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 16093 packets, 966K bytes)
pkts bytes target prot opt in out source destination
15973 959K KUBE-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
44 3478 DOCKER_OUTPUT 0 -- * * 0.0.0.0/0 172.18.0.1
Chain POSTROUTING (policy ACCEPT 16243 packets, 975K bytes)
pkts bytes target prot opt in out source destination
16103 967K KUBE-POSTROUTING 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
0 0 DOCKER_POSTROUTING 0 -- * * 0.0.0.0/0 172.18.0.1
3116 187K KIND-MASQ-AGENT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type !LOCAL /* kind-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain */
Chain DOCKER_OUTPUT (2 references)
pkts bytes target prot opt in out source destination
0 0 DNAT 6 -- * * 0.0.0.0/0 172.18.0.1 tcp dpt:53 to:127.0.0.11:46641
46 3648 DNAT 17 -- * * 0.0.0.0/0 172.18.0.1 udp dpt:53 to:127.0.0.11:50010
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT 6 -- * * 127.0.0.11 0.0.0.0/0 tcp spt:46641 to:172.18.0.1:53
0 0 SNAT 17 -- * * 127.0.0.11 0.0.0.0/0 udp spt:50010 to:172.18.0.1:53
Chain KIND-MASQ-AGENT (1 references)
pkts bytes target prot opt in out source destination
3092 186K RETURN 0 -- * * 0.0.0.0/0 10.10.0.0/16 /* kind-masq-agent: local traffic is not subject to MASQUERADE */
24 1440 MASQUERADE 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kind-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */
Chain KUBE-EXT-7EJNTS7AENER2WX5 (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* masquerade traffic for kube-system/kube-ops-view:http external destinations */
0 0 KUBE-SVC-7EJNTS7AENER2WX5 0 -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-KUBELET-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-MARK-MASQ (18 references)
pkts bytes target prot opt in out source destination
0 0 MARK 0 -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-EXT-7EJNTS7AENER2WX5 6 -- * * 0.0.0.0/0 127.0.0.0/8 /* kube-system/kube-ops-view:http */ tcp dpt:30000 nfacct-name localhost_nps_accepted_pkts
0 0 KUBE-EXT-7EJNTS7AENER2WX5 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-ops-view:http */ tcp dpt:30000
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
1291 77460 RETURN 0 -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
0 0 MARK 0 -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
0 0 MASQUERADE 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ random-fully
Chain KUBE-PROXY-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-SEP-2XZJVPRY2PQVE3B3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.2 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT 17 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.10.0.2:53
Chain KUBE-SEP-2ZVL7EJZGLLRN3QG (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.5 0.0.0.0/0 /* kube-system/kube-ops-view:http */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-ops-view:http */ tcp to:10.10.0.5:8080
Chain KUBE-SEP-6GODNNVFRWQ66GUT (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.3 0.0.0.0/0 /* kube-system/kube-dns:metrics */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:10.10.0.3:9153
Chain KUBE-SEP-DOIEFYKPESCDTYCH (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.2.2 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
45 2700 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.2.2:80
Chain KUBE-SEP-K7ALM6KJRBAYOHKX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.3.3 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
43 2580 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.3.3:80
Chain KUBE-SEP-OJCTP5LCEHQJ3D72 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 172.18.0.5 0.0.0.0/0 /* default/kubernetes:https */
8 480 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ tcp to:172.18.0.5:6443
Chain KUBE-SEP-RT3F6VLY3P67FIV3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.2 0.0.0.0/0 /* kube-system/kube-dns:metrics */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:10.10.0.2:9153
Chain KUBE-SEP-TBW2IYJKUCAC7GB3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.1.2 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
34 2040 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.1.2:80
Chain KUBE-SEP-XVHB3NIW2NQLTFP3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.2 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.10.0.2:53
Chain KUBE-SEP-XWEOB3JN6VI62DQQ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.3 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT 17 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.10.0.3:53
Chain KUBE-SEP-ZEA5VGCBA2QNA7AK (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.3 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.10.0.3:53
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y 6 -- * * 0.0.0.0/0 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU 17 -- * * 0.0.0.0/0 10.200.1.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 6 -- * * 0.0.0.0/0 10.200.1.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-JD5MR3NA4I4DYORP 6 -- * * 0.0.0.0/0 10.200.1.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-7EJNTS7AENER2WX5 6 -- * * 0.0.0.0/0 10.200.1.112 /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
91 5460 KUBE-SVC-KBDEBIL6IU6WL7RF 6 -- * * 0.0.0.0/0 10.200.1.197 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:9000
972 58320 KUBE-NODEPORTS 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-7EJNTS7AENER2WX5 (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.112 /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
0 0 KUBE-SEP-2ZVL7EJZGLLRN3QG 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-ops-view:http -> 10.10.0.5:8080 */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SEP-XVHB3NIW2NQLTFP3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp -> 10.10.0.2:53 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-ZEA5VGCBA2QNA7AK 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp -> 10.10.0.3:53 */
Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SEP-RT3F6VLY3P67FIV3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics -> 10.10.0.2:9153 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-6GODNNVFRWQ66GUT 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics -> 10.10.0.3:9153 */
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.197 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:9000
34 2040 KUBE-SEP-TBW2IYJKUCAC7GB3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.1.2:80 */ statistic mode random probability 0.33333333349
45 2700 KUBE-SEP-DOIEFYKPESCDTYCH 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.2.2:80 */ statistic mode random probability 0.50000000000
43 2580 KUBE-SEP-K7ALM6KJRBAYOHKX 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.3.3:80 */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
8 480 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 17 -- * * !10.10.0.0/16 10.200.1.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SEP-2XZJVPRY2PQVE3B3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns -> 10.10.0.2:53 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-XWEOB3JN6VI62DQQ 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns -> 10.10.0.3:53 */
root@myk8s-control-plane:/# iptables -nvL -t mangle
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain KUBE-IPTABLES-HINT (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-KUBELET-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-PROXY-CANARY (0 references)
pkts bytes target prot opt in out source destination
root@myk8s-control-plane:/# iptables -nvL -t filter | wc -l
47
root@myk8s-control-plane:/# iptables -nvL -t nat | wc -l
159
root@myk8s-control-plane:/# iptables -t filter --zero; iptables -t nat --zero; iptables -t mangle --zero
root@myk8s-control-plane:/# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 KUBE-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
0 0 DOCKER_OUTPUT 0 -- * * 0.0.0.0/0 172.18.0.1
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6 packets, 360 bytes)
pkts bytes target prot opt in out source destination
6 360 KUBE-SERVICES 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
0 0 DOCKER_OUTPUT 0 -- * * 0.0.0.0/0 172.18.0.1
Chain POSTROUTING (policy ACCEPT 6 packets, 360 bytes)
pkts bytes target prot opt in out source destination
6 360 KUBE-POSTROUTING 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
0 0 DOCKER_POSTROUTING 0 -- * * 0.0.0.0/0 172.18.0.1
2 120 KIND-MASQ-AGENT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type !LOCAL /* kind-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain */
Chain DOCKER_OUTPUT (2 references)
pkts bytes target prot opt in out source destination
0 0 DNAT 6 -- * * 0.0.0.0/0 172.18.0.1 tcp dpt:53 to:127.0.0.11:46641
0 0 DNAT 17 -- * * 0.0.0.0/0 172.18.0.1 udp dpt:53 to:127.0.0.11:50010
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT 6 -- * * 127.0.0.11 0.0.0.0/0 tcp spt:46641 to:172.18.0.1:53
0 0 SNAT 17 -- * * 127.0.0.11 0.0.0.0/0 udp spt:50010 to:172.18.0.1:53
Chain KIND-MASQ-AGENT (1 references)
pkts bytes target prot opt in out source destination
2 120 RETURN 0 -- * * 0.0.0.0/0 10.10.0.0/16 /* kind-masq-agent: local traffic is not subject to MASQUERADE */
0 0 MASQUERADE 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kind-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */
Chain KUBE-EXT-7EJNTS7AENER2WX5 (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* masquerade traffic for kube-system/kube-ops-view:http external destinations */
0 0 KUBE-SVC-7EJNTS7AENER2WX5 0 -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-KUBELET-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-MARK-MASQ (18 references)
pkts bytes target prot opt in out source destination
0 0 MARK 0 -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-EXT-7EJNTS7AENER2WX5 6 -- * * 0.0.0.0/0 127.0.0.0/8 /* kube-system/kube-ops-view:http */ tcp dpt:30000 nfacct-name localhost_nps_accepted_pkts
0 0 KUBE-EXT-7EJNTS7AENER2WX5 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-ops-view:http */ tcp dpt:30000
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
6 360 RETURN 0 -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
0 0 MARK 0 -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
0 0 MASQUERADE 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ random-fully
Chain KUBE-PROXY-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-SEP-2XZJVPRY2PQVE3B3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.2 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT 17 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.10.0.2:53
Chain KUBE-SEP-2ZVL7EJZGLLRN3QG (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.5 0.0.0.0/0 /* kube-system/kube-ops-view:http */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-ops-view:http */ tcp to:10.10.0.5:8080
Chain KUBE-SEP-6GODNNVFRWQ66GUT (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.3 0.0.0.0/0 /* kube-system/kube-dns:metrics */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:10.10.0.3:9153
Chain KUBE-SEP-DOIEFYKPESCDTYCH (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.2.2 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.2.2:80
Chain KUBE-SEP-K7ALM6KJRBAYOHKX (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.3.3 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.3.3:80
Chain KUBE-SEP-OJCTP5LCEHQJ3D72 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 172.18.0.5 0.0.0.0/0 /* default/kubernetes:https */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ tcp to:172.18.0.5:6443
Chain KUBE-SEP-RT3F6VLY3P67FIV3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.2 0.0.0.0/0 /* kube-system/kube-dns:metrics */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:10.10.0.2:9153
Chain KUBE-SEP-TBW2IYJKUCAC7GB3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.1.2 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.1.2:80
Chain KUBE-SEP-XVHB3NIW2NQLTFP3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.2 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.10.0.2:53
Chain KUBE-SEP-XWEOB3JN6VI62DQQ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.3 0.0.0.0/0 /* kube-system/kube-dns:dns */
0 0 DNAT 17 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.10.0.3:53
Chain KUBE-SEP-ZEA5VGCBA2QNA7AK (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.0.3 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.10.0.3:53
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y 6 -- * * 0.0.0.0/0 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU 17 -- * * 0.0.0.0/0 10.200.1.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 6 -- * * 0.0.0.0/0 10.200.1.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-JD5MR3NA4I4DYORP 6 -- * * 0.0.0.0/0 10.200.1.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-7EJNTS7AENER2WX5 6 -- * * 0.0.0.0/0 10.200.1.112 /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
0 0 KUBE-SVC-KBDEBIL6IU6WL7RF 6 -- * * 0.0.0.0/0 10.200.1.197 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:9000
4 240 KUBE-NODEPORTS 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-7EJNTS7AENER2WX5 (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.112 /* kube-system/kube-ops-view:http cluster IP */ tcp dpt:8080
0 0 KUBE-SEP-2ZVL7EJZGLLRN3QG 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-ops-view:http -> 10.10.0.5:8080 */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SEP-XVHB3NIW2NQLTFP3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp -> 10.10.0.2:53 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-ZEA5VGCBA2QNA7AK 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp -> 10.10.0.3:53 */
Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SEP-RT3F6VLY3P67FIV3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics -> 10.10.0.2:9153 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-6GODNNVFRWQ66GUT 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics -> 10.10.0.3:9153 */
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.197 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:9000
0 0 KUBE-SEP-TBW2IYJKUCAC7GB3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.1.2:80 */ statistic mode random probability 0.33333333349
0 0 KUBE-SEP-DOIEFYKPESCDTYCH 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.2.2:80 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-K7ALM6KJRBAYOHKX 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.3.3:80 */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 17 -- * * !10.10.0.0/16 10.200.1.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SEP-2XZJVPRY2PQVE3B3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns -> 10.10.0.2:53 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-XWEOB3JN6VI62DQQ 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns -> 10.10.0.3:53 */
The main chains we need to focus on for our ClusterIP service are:
- KUBE-SERVICES
- KUBE-SVC-KBDEBIL6IU6WL7RF (service-specific chain)
- KUBE-SEP-* (endpoint-specific chains)
In the KUBE-SERVICES chain, we see this rule:
0 0 KUBE-SVC-KBDEBIL6IU6WL7RF 6 -- * * 0.0.0.0/0 10.200.1.197 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:9000
This rule matches traffic destined for the ClusterIP (10.200.1.197) on port 9000 and forwards it to the KUBE-SVC-KBDEBIL6IU6WL7RF chain.
The KUBE-SVC-KBDEBIL6IU6WL7RF chain then contains:
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.197 /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:9000
0 0 KUBE-SEP-TBW2IYJKUCAC7GB3 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.1.2:80 */ statistic mode random probability 0.33333333349
0 0 KUBE-SEP-DOIEFYKPESCDTYCH 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.2.2:80 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-K7ALM6KJRBAYOHKX 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 10.10.3.3:80 */
a. The first rule marks packets for masquerade if they’re not from the pod network (10.10.0.0/16).
b. The next three rules implement load balancing across the three backend pods.
- Each rule has a probability, which determines the likelihood of selecting that endpoint.
- The probabilities are set to distribute traffic evenly (approximately 33.33% each); the quick calculation below shows why.
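As a minimal back-of-the-envelope check (plain shell arithmetic, nothing cluster-specific): the first rule matches a third of the packets, the second matches half of what remains, and the last rule takes whatever is left.
awk 'BEGIN {
  p1 = 0.33333333349;   # probability printed on the first KUBE-SEP rule
  p2 = (1 - p1) * 0.5;  # the second rule sees the remaining 2/3 and matches half of it
  p3 = 1 - p1 - p2;     # the last rule has no statistic match, so it takes the rest
  printf "pod1=%.4f pod2=%.4f pod3=%.4f\n", p1, p2, p3
}'
# Prints: pod1=0.3333 pod2=0.3333 pod3=0.3333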
KUBE-SEP-* Chains represent individual service endpoints (pods). For example, KUBE-SEP-TBW2IYJKUCAC7GB3:
0 0 KUBE-MARK-MASQ 0 -- * * 10.10.1.2 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT 6 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:10.10.1.2:80
These rules:
a. Mark the packet for masquerade if it’s from the pod itself.
b. Perform DNAT (Destination Network Address Translation) to redirect the traffic to the pod’s IP and port.
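If you prefer reading these as rule specifications rather than counters, the same endpoint chain can be dumped with -S; the chain name below is the one from this cluster's output, so substitute your own:
docker exec -it myk8s-control-plane iptables -t nat -S KUBE-SEP-TBW2IYJKUCAC7GB3
# Expect (roughly) one rule jumping to KUBE-MARK-MASQ for the pod's own source IP,
# followed by one DNAT rule with --to-destination 10.10.1.2:80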
When a packet destined for the ClusterIP service arrives:
1. It matches the rule in KUBE-SERVICES and is sent to KUBE-SVC-KBDEBIL6IU6WL7RF.
2. In KUBE-SVC-KBDEBIL6IU6WL7RF, it’s randomly directed to one of the KUBE-SEP-* chains based on the probabilities.
3. The chosen KUBE-SEP-* chain performs DNAT to send the packet to the specific pod.
The random selection in KUBE-SVC-KBDEBIL6IU6WL7RF implements a simple form of load balancing. The probabilities ensure an approximately even distribution of traffic across the three pods.
The KUBE-MARK-MASQ rules ensure that if traffic originates from outside the cluster (or from a pod to its own service IP), it gets masqueraded. This is crucial for maintaining correct return routing.
The use of iptables allows this to be done efficiently at the kernel level, without requiring a userspace proxy for most traffic.
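All of the above assumes kube-proxy is running in iptables mode. Two quick ways to confirm that on this kind cluster (the ConfigMap name and the metrics port 10249 are the usual kubeadm defaults, so adjust if yours differ):
# Read the configured mode from the kube-proxy ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -w mode
# Or ask a running kube-proxy directly on its metrics port
docker exec -it myk8s-control-plane curl -s localhost:10249/proxyMode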
# Set up a watch to monitor the packet forwarding counts.
$ watch -d 'iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF'
# Send test packets from the control-plane.
$ SVC1=$(kubectl get svc svc-clusterip -o jsonpath={.spec.clusterIP})
$ kubectl exec -it net-pod -- zsh -c "for i in {1..100}; do curl -s $SVC1:9000 | grep Hostname; sleep 1; done"
root@myk8s-control-plane:/# iptables -v --numeric --table nat --list KUBE-SVC-NPX46M4PTMTKRN6Y
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
root@myk8s-control-plane:/#
watch -d 'iptables -v --numeric --table nat --list POSTROUTING; echo ; iptables -v --numeric --table nat --list KUBE-POSTROUTING'
Every 2.0s: iptables -v --numeric --table nat --list POSTROUTING; echo ; iptables -v --... myk8s-control-plane: Sat Sep 28 19:12:30 2024
Chain POSTROUTING (policy ACCEPT 47 packets, 2820 bytes)
pkts bytes target prot opt in out source destination
47 2820 KUBE-POSTROUTING 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
0 0 DOCKER_POSTROUTING 0 -- * * 0.0.0.0/0 172.18.0.1
10 600 KIND-MASQ-AGENT 0 -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type !LOCAL /* kind-mas
q-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain */
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
47 2820 RETURN 0 -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
0 0 MARK 0 -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
0 0 MASQUERADE 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */
random-fully
root@myk8s-control-plane:/# iptables -t nat -S | grep KUBE-POSTROUTING
-N KUBE-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
root@myk8s-control-plane:/# exit
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane iptables -v --numeric --table nat --list KUBE-SVC-NPX46M4PTMTKRN6Y
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i iptables -v --numeric --table nat --list KUBE-SVC-NPX46M4PTMTKRN6Y; echo; done
>> node myk8s-control-plane <<
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
>> node myk8s-worker <<
iptables v1.8.9 (nf_tables): chain `KUBE-SVC-NPX46M4PTMTKRN6Y' in table `nat' is incompatible, use 'nft' tool.
>> node myk8s-worker2 <<
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
>> node myk8s-worker3 <<
iptables v1.8.9 (nf_tables): chain `KUBE-SVC-NPX46M4PTMTKRN6Y' in table `nat' is incompatible, use 'nft' tool.
To observe the behavior of a Kubernetes ClusterIP service during pod failures, we set up two terminal windows for monitoring:
Terminal 1: Monitoring Endpoints
We use the following command to continuously watch for changes in pods, services, endpoints, and endpoint slices:
watch -d 'kubectl get pod -owide;echo; kubectl get svc,ep svc-clusterip;echo; kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip'
This command provides real-time updates on the state of our pods and the associated service endpoints.
Terminal 2: Testing Service Accessibility
Option 1: Continuous curl requests
SVC1=$(kubectl get svc svc-clusterip -o jsonpath={.spec.clusterIP})
kubectl exec -it net-pod - zsh -c "while true; do curl -s - connect-timeout 1 $SVC1:9000 | egrep 'Hostname|IP: 10'; date '+%Y-%m-%d %H:%M:%S' ; echo ; sleep 1; done"
This command continuously sends requests to the service and displays the hostname and IP of the responding pod.
Option 2: Multiple requests with result aggregation
kubectl exec -it net-pod - zsh -c "for i in {1..100}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr"
This command sends 100 requests and aggregates the results, showing how many times each pod responded.
We simulate a pod failure by deleting one of the pods:
kubectl delete pod webpod3
After executing this command, we observe
- The webpod3 disappears from the pod list.
- The service endpoints are updated, removing the IP of webpod3.
- The curl requests in Terminal 2 now only show responses from the two remaining pods.
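Besides the watch output, a one-off query captures the change just as well; the jsonpath below simply prints the endpoint IPs that remain on the Service:
kubectl get endpoints svc-clusterip -o jsonpath='{.subsets[*].addresses[*].ip}'; echo
kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip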
We recreate the deleted pod:
kubectl apply -f 3pod.yaml
- webpod3 reappears in the pod list with a new IP address.
- The service endpoints are updated to include the new webpod3 IP.
- The curl requests in Terminal 2 now show responses from all three pods again.
As an alternative to deleting the pod, we can remove its label.
kubectl label pod webpod3 app-
This command removes the ‘app’ label from webpod3. We observe:
- webpod3 remains in the pod list but is removed from the service endpoints.
- The curl requests no longer show responses from webpod3.
We can then restore the label:
kubectl label pod webpod3 app=webpod
After this, we observe:
- webpod3 is added back to the service endpoints.
- The curl requests once again show responses from all three pods.
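Because Service membership is driven purely by the label selector, the selector and the currently labeled pods can be cross-checked directly (the label key and value below are the ones used in this lab):
# The Service's selector ...
kubectl get svc svc-clusterip -o jsonpath='{.spec.selector}'; echo
# ... and the pods currently carrying that label (these become the endpoints)
kubectl get pods -l app=webpod -o wide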
Session Affinity in Kubernetes Services: ClientIP Option
In Kubernetes, ClusterIP services typically distribute traffic randomly across all available pods. However, there are scenarios where maintaining a consistent connection between a client and a specific pod is desirable. This is where the sessionAffinity feature comes into play, particularly with the ClientIP option.
The sessionAffinity: ClientIP
setting in a Kubernetes service ensures that requests from the same client IP address are consistently routed to the same pod. This feature is useful for applications that require session stickiness or when you need to maintain state for a client across multiple requests.
- Default (sessionAffinity: None): By default, services distribute traffic randomly among pods, which is suitable for stateless applications.
- With ClientIP Affinity: When set to ClientIP, the service will route all requests from a specific client IP to the same pod for a defined period.
Implementing ClientIP Affinity
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # 3 hours
- Initially, requests were distributed randomly among pods (webpod1, webpod2, webpod3).
- After setting sessionAffinity: ClientIP, all 100 (and later 1000) requests from the test pod (net-pod) were consistently routed to the same pod (webpod2).
- The timeoutSeconds field (default: 10800 seconds or 3 hours) determines how long the session affinity is maintained for a client IP. After this period, the affinity may be reset, and the client might be routed to a different pod.
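If you prefer not to edit the manifest, the same change can be applied and reverted in place with kubectl patch, and the earlier aggregation test can be reused to observe the stickiness; this is a sketch using the objects from this lab:
# Switch the existing Service to ClientIP affinity
kubectl patch svc svc-clusterip -p '{"spec":{"sessionAffinity":"ClientIP","sessionAffinityConfig":{"clientIP":{"timeoutSeconds":10800}}}}'
# Re-run the distribution test; with affinity on, a single pod should answer every request
kubectl exec -it net-pod -- zsh -c "for i in {1..100}; do curl -s $SVC1:9000 | grep Hostname; done | sort | uniq -c | sort -nr"
# Revert to the default behavior
kubectl patch svc svc-clusterip -p '{"spec":{"sessionAffinity":"None","sessionAffinityConfig":null}}'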
Use Cases and Considerations
- Stateful Applications: Useful for applications that maintain session state.
- Caching: Can improve cache hit rates by ensuring a client always hits the same pod.
- Performance: May lead to uneven load distribution if client traffic patterns are not uniform.
While sessionAffinity: ClientIP
addresses some specific needs, ClusterIP services have certain limitations:
- External Access: Not accessible from outside the cluster (addressed by NodePort or LoadBalancer types).
- Health Checking: iptables doesn’t perform health checks on pods (this can be mitigated with Readiness Probes).
- Load Balancing Algorithms: Limited to random distribution and session affinity (IPVS offers more algorithms).
- Node Affinity: Doesn’t consider pod-to-node placement for traffic optimization.
The Deployment backing the NodePort Service is defined in the echo-deploy.yml
file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: kans-websrv
        image: mendhak/http-https-echo
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
          requests:
            memory: "64Mi"
            cpu: "250m"
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
The Service itself is configured as follows:
- The service type is set to NodePort.
- It exposes port 9000 internally (ClusterIP) and maps it to port 8080 on the target pods.
- The selector
app: deploy-websrv
determines which pods are part of this service.
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
    - name: svc-webport
      port: 9000       # Port to access the service via ClusterIP
      targetPort: 8080 # Port on the target pod accessed through the service
  selector:
    app: deploy-websrv
  type: NodePort
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-nodeport NodePort 10.200.1.93 <none> 9000:31162/TCP 7m23s
- The service gets a Cluster IP (10.200.1.93).
- A random NodePort (31162) is assigned to expose the service externally.
NAME ENDPOINTS AGE
svc-nodeport 10.10.1.4:8080,10.10.2.4:8080,10.10.3.6:8080 4m50s
This shows that three pods are backing this service, each on a different node.
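To confirm which node each backend pod landed on (and therefore which NodePort hits will require a cross-node hop), list the pods behind the selector alongside the endpoint slice; both selectors below are the ones defined above:
kubectl get pods -l app=deploy-websrv -o wide
kubectl get endpointslices -l kubernetes.io/service-name=svc-nodeport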
Accessing the Service Via NodePort:
docker exec -it mypc curl -s $CNODE:$NPORT | jq
This command accesses the service through the NodePort (31162) on the control plane node. The response shows:
- The request is received by one of the backend pods.
- The
hostname
in the response is the IP of the node (172.18.0.5), not the pod's IP.
docker exec -it mypc zsh -c "for i in {1..100}; do curl -s $CNODE:$NPORT | grep hostname; done | sort | uniq -c | sort -nr"
This command demonstrates that requests are load-balanced across the three backend pods.
ClusterIP Access:
docker exec -it myk8s-control-plane curl -s $CIP:$CIPPORT | jq
This shows that the service is also accessible via its ClusterIP (10.200.1.93) from within the cluster.
Network Address Translation (NAT):
The conntrack -L --any-nat
command output shows:
- SNAT (Source NAT) is applied when traffic leaves the cluster.
- DNAT (Destination NAT) is used to route incoming traffic to the appropriate pods.
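On a node, that conntrack listing can be narrowed down to the NodePort itself; a plain grep on the port number is the least option-dependent way to do it, assuming the conntrack CLI is present in the node image (it normally is on kubeadm-based nodes):
# List only tracked, NATed connections that involve the NodePort (31162 in this lab)
docker exec -it myk8s-control-plane sh -c "conntrack -L --any-nat 2>/dev/null | grep 31162"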
iptables Rules:
The iptables rules (not shown in the provided output) would typically include:
- DNAT rules to redirect traffic from the NodePort to the ClusterIP.
- Load balancing rules to distribute traffic among the backend pods.
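Those rules live in the KUBE-NODEPORTS and KUBE-EXT-* chains that already appeared in the nat dump above. The chain hashes are cluster-specific, so list KUBE-NODEPORTS first and follow the jump target it prints for your Service (the KUBE-EXT example below is the kube-ops-view one from this cluster's output):
docker exec -it myk8s-control-plane iptables -t nat -nvL KUBE-NODEPORTS
docker exec -it myk8s-control-plane iptables -t nat -nvL KUBE-EXT-7EJNTS7AENER2WX5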
Connection Tracking:
The conntrack -E
output shows how connections are tracked, including:
- New connection establishments (SYN_SENT, SYN_RECV)
- Established connections
- Connection terminations (FIN_WAIT, CLOSE_WAIT, TIME_WAIT)
- This tracking is crucial for maintaining stateful connections and applying NAT consistently.
Cross-Node Communication:
When a request hits a node that doesn’t host the target pod, Kubernetes applies SNAT to the forwarded packet so the return traffic comes back through the same node. This is why the ip
field in the response shows a node IP rather than the original client's IP.
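The Service's externalTrafficPolicy (shown as External Traffic Policy: Cluster in kubectl describe) is what lets any node accept the traffic and SNAT it onward; it can be read straight from the spec. Switching it to Local would preserve the client IP, but only nodes that host a backend pod would then answer:
kubectl get svc svc-nodeport -o jsonpath='{.spec.externalTrafficPolicy}'; echo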
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane iptables -v --numeric --table nat --list KUBE-SVC-NPX46M4PTMTKRN6Y
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i iptables -v --numeric --table nat --list KUBE-SVC-NPX46M4PTMTKRN6Y; echo; done
>> node myk8s-control-plane <<
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
>> node myk8s-worker <<
iptables v1.8.9 (nf_tables): chain `KUBE-SVC-NPX46M4PTMTKRN6Y' in table `nat' is incompatible, use 'nft' tool.
>> node myk8s-worker2 <<
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ 6 -- * * !10.10.0.0/16 10.200.1.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SEP-OJCTP5LCEHQJ3D72 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https -> 172.18.0.5:6443 */
>> node myk8s-worker3 <<
iptables v1.8.9 (nf_tables): chain `KUBE-SVC-NPX46M4PTMTKRN6Y' in table `nat' is incompatible, use 'nft' tool.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# watch -d 'kubectl get pod -owide;echo; kubectl get svc,ep svc-clusterip;echo; kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip'
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# ls
3pod.yaml netpod.yaml svc-clusterip.yaml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim echo-deploy.yaml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim svc-nodeport.yaml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim svc-nodeport.yaml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-deploy.yml,svc-nodeport.yml
the path "echo-deploy.yml" does not exist
the path "svc-nodeport.yml" does not exist
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-deploy.ayml,svc-nodeport.yaml
error: the path "echo-deploy.ayml" does not exist
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-deploy.yaml,svc-nodeport.yaml
error parsing echo-deploy.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
error parsing svc-nodeport.yaml: error converting YAML to JSON: yaml: line 12: did not find expected alphabetic or numeric character
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim echo-deploy.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-deploy.yml
service/svc-nodeport created
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim echo-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-nodeport.yml
deployment.apps/echo-deployment created
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get svc svc-nodeport
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-nodeport NodePort 10.200.1.93 <none> 9000:31162/TCP 81s
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get endpoints svc-nodeport
NAME ENDPOINTS AGE
svc-nodeport <none> 82s
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl describe svc svc-nodeport
Name: svc-nodeport
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=deploy-websrv
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.1.93
IPs: 10.200.1.93
Port: svc-webport 9000/TCP
TargetPort: 8080/TCP
NodePort: svc-webport 31162/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
31162(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# NPORT=$(kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}')
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# echo $NPORT
31162
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i ss -tlnp; echo; done
>> node myk8s-control-plane <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 172.18.0.5:2379 0.0.0.0:* users:(("etcd",pid=628,fd=9))
LISTEN 0 4096 172.18.0.5:2380 0.0.0.0:* users:(("etcd",pid=628,fd=7))
LISTEN 0 4096 127.0.0.11:46641 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:2381 0.0.0.0:* users:(("etcd",pid=628,fd=13))
LISTEN 0 4096 127.0.0.1:2379 0.0.0.0:* users:(("etcd",pid=628,fd=8))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=684,fd=19))
LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=853,fd=18))
LISTEN 0 4096 127.0.0.1:10259 0.0.0.0:* users:(("kube-scheduler",pid=492,fd=3))
LISTEN 0 4096 127.0.0.1:10257 0.0.0.0:* users:(("kube-controller",pid=564,fd=3))
LISTEN 0 4096 127.0.0.1:45875 0.0.0.0:* users:(("containerd",pid=111,fd=11))
LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=853,fd=17))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=684,fd=13))
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=537,fd=3))
>> node myk8s-worker <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.11:43069 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:44497 0.0.0.0:* users:(("containerd",pid=111,fd=10))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=216,fd=24))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=216,fd=26))
>> node myk8s-worker2 <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.11:40187 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=1191,fd=18))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=218,fd=24))
LISTEN 0 4096 127.0.0.1:36707 0.0.0.0:* users:(("containerd",pid=111,fd=8))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=218,fd=16))
LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=1191,fd=17))
>> node myk8s-worker3 <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.11:44047 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:45889 0.0.0.0:* users:(("containerd",pid=110,fd=11))
"svc-nodeport.yml" [New] 0,0-1 All
apiVersion: apps/v1
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=216,fd=24))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=216,fd=27))
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl logs -l app=deploy-websrv -f
No resources found in default namespace.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default echo-deployment-6bf67d69cb-jd6z4 1/1 Running 0 83s
default echo-deployment-6bf67d69cb-p9j8d 1/1 Running 0 83s
default echo-deployment-6bf67d69cb-w4z6v 1/1 Running 0 83s
default net-pod 1/1 Running 0 88m
default webpod1 1/1 Running 0 86m
default webpod2 1/1 Running 0 86m
default webpod3 1/1 Running 0 11m
kube-system coredns-6f6b679f8f-7wsp6 1/1 Running 0 108m
kube-system coredns-6f6b679f8f-8cgq4 1/1 Running 0 108m
kube-system etcd-myk8s-control-plane 1/1 Running 0 108m
kube-system kindnet-28tj8 1/1 Running 0 107m
kube-system kindnet-2xrzt 1/1 Running 0 107m
kube-system kindnet-8dvdl 1/1 Running 0 108m
kube-system kindnet-df988 1/1 Running 0 107m
kube-system kube-apiserver-myk8s-control-plane 1/1 Running 0 108m
kube-system kube-controller-manager-myk8s-control-plane 1/1 Running 0 108m
kube-system kube-ops-view-58f96c464d-r5wcc 1/1 Running 0 90m
kube-system kube-proxy-55j2x 0/1 CrashLoopBackOff 25 (4m48s ago) 107m
kube-system kube-proxy-5db4d 1/1 Running 0 108m
kube-system kube-proxy-8hphq 0/1 Error 26 (5m19s ago) 107m
kube-system kube-proxy-f4kln 1/1 Running 4 (107m ago) 107m
kube-system kube-scheduler-myk8s-control-plane 1/1 Running 0 108m
local-path-storage local-path-provisioner-57c5987fd4-s4dnv 1/1 Running 0 108m
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# ls
3pod.yaml echo-deploy.yaml echo-deploy.yml echo-nodeport.yml netpod.yaml svc-clusterip.yaml svc-nodeport.yaml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# cat echo-deploy.yml
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
    - name: svc-webport
      port: 9000 # Port to access the service via ClusterIP
      targetPort: 8080 # Port on the target pod accessed through the service
  selector:
    app: deploy-websrv
  type: NodePort
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-deploy.yml
service/svc-nodeport unchanged
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# cat svc-nodeport.yml
cat: svc-nodeport.yml: No such file or directory
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim svc-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f svc-nodeport.yml
deployment.apps/echo-deployment unchanged
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl describe svc svc-nodeport
Name: svc-nodeport
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=deploy-websrv
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.1.93
IPs: 10.200.1.93
Port: svc-webport 9000/TCP
TargetPort: 8080/TCP
NodePort: svc-webport 31162/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# ^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl describe svc svc-nodeport
Name: svc-nodeport
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=deploy-websrv
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.1.93
IPs: 10.200.1.93
Port: svc-webport 9000/TCP
TargetPort: 8080/TCP
NodePort: svc-webport 31162/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl logs -l app=deploy-websrv
No resources found in default namespace.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get deploy,pod -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/echo-deployment 3/3 3 3 2m46s echo-web-container mendhak/http-https-echo app=web-service
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/echo-deployment-6bf67d69cb-jd6z4 1/1 Running 0 2m46s 10.10.2.3 myk8s-worker <none> <none>
pod/echo-deployment-6bf67d69cb-p9j8d 1/1 Running 0 2m46s 10.10.3.5 myk8s-worker3 <none> <none>
pod/echo-deployment-6bf67d69cb-w4z6v 1/1 Running 0 2m46s 10.10.1.3 myk8s-worker2 <none> <none>
pod/net-pod 1/1 Running 0 89m 10.10.0.6 myk8s-control-plane <none> <none>
pod/webpod1 1/1 Running 0 87m 10.10.2.2 myk8s-worker <none> <none>
pod/webpod2 1/1 Running 0 87m 10.10.1.2 myk8s-worker2 <none> <none>
pod/webpod3 1/1 Running 0 12m 10.10.3.4 myk8s-worker3 <none> <none>
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# ls
3pod.yaml echo-deploy.yaml echo-deploy.yml echo-nodeport.yml netpod.yaml svc-clusterip.yaml svc-nodeport.yaml svc-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# cat echo-nodeport.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-service
  template:
    metadata:
      labels:
        app: web-service
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: echo-web-container
        image: mendhak/http-https-echo
        ports:
        - containerPort: 8080
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# rm -rf echo-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# vim echo-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f echo-nodeport.yml
deployment.apps/deploy-echo created
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# clear
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get svc svc-nodeport
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-nodeport NodePort 10.200.1.93 <none> 9000:31162/TCP 4m48s
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get endpoints svc-nodeport
NAME ENDPOINTS AGE
svc-nodeport 10.10.1.4:8080,10.10.2.4:8080,10.10.3.6:8080 4m50s
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl describe svc svc-nodeport
Name: svc-nodeport
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=deploy-websrv
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.1.93
IPs: 10.200.1.93
Port: svc-webport 9000/TCP
TargetPort: 8080/TCP
NodePort: svc-webport 31162/TCP
Endpoints: 10.10.1.4:8080,10.10.2.4:8080,10.10.3.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
31162(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# NPORT=$(kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}')
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# echo $NPORT
31162
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i ss -tlnp; echo; done
>> node myk8s-control-plane <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 172.18.0.5:2379 0.0.0.0:* users:(("etcd",pid=628,fd=9))
LISTEN 0 4096 172.18.0.5:2380 0.0.0.0:* users:(("etcd",pid=628,fd=7))
LISTEN 0 4096 127.0.0.11:46641 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:2381 0.0.0.0:* users:(("etcd",pid=628,fd=13))
LISTEN 0 4096 127.0.0.1:2379 0.0.0.0:* users:(("etcd",pid=628,fd=8))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=684,fd=19))
LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=853,fd=18))
LISTEN 0 4096 127.0.0.1:10259 0.0.0.0:* users:(("kube-scheduler",pid=492,fd=3))
LISTEN 0 4096 127.0.0.1:10257 0.0.0.0:* users:(("kube-controller",pid=564,fd=3))
LISTEN 0 4096 127.0.0.1:45875 0.0.0.0:* users:(("containerd",pid=111,fd=11))
LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=853,fd=17))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=684,fd=13))
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=537,fd=3))
>> node myk8s-worker <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.11:43069 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:44497 0.0.0.0:* users:(("containerd",pid=111,fd=10))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=216,fd=24))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=216,fd=26))
>> node myk8s-worker2 <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.11:40187 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=1191,fd=18))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=218,fd=24))
LISTEN 0 4096 127.0.0.1:36707 0.0.0.0:* users:(("containerd",pid=111,fd=8))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=218,fd=16))
LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=1191,fd=17))
>> node myk8s-worker3 <<
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.11:44047 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:45889 0.0.0.0:* users:(("containerd",pid=110,fd=11))
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=216,fd=24))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=216,fd=27))
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl logs -l app=deploy-websrv -f
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-k4dw4"
},
"connection": {}
}
::ffff:10.10.2.1 - - [28/Sep/2024:19:29:02 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
failed to create fsnotify watcher: too many open files "protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-8vtf9"
},
"connection": {}
}
::ffff:10.10.3.1 - - [28/Sep/2024:19:29:02 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-24zg6"
},
"connection": {}
}
::ffff:10.10.1.1 - - [28/Sep/2024:19:29:02 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
failed to create fsnotify watcher: too many open filesfailed to create fsnotify watcher: too many open files(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# ^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
myk8s-control-plane Ready control-plane 111m v1.31.0 172.18.0.5 <none> Debian GNU/Linux 12 (bookworm) 6.5.0-1024-aws containerd://1.7.18
myk8s-worker Ready <none> 111m v1.31.0 172.18.0.3 <none> Debian GNU/Linux 12 (bookworm) 6.5.0-1024-aws containerd://1.7.18
myk8s-worker2 Ready <none> 111m v1.31.0 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 6.5.0-1024-aws containerd://1.7.18
myk8s-worker3 Ready <none> 111m v1.31.0 172.18.0.4 <none> Debian GNU/Linux 12 (bookworm) 6.5.0-1024-aws containerd://1.7.18
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# CNODE=172.18.0.5
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# NODE1=172.18.0.3
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# NODE2=172.18.0.2
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# NODE3=172.18.0.4
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# NPORT=$(kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}')
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# echo $NPORT
31162
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it mypc curl -s $CNODE:$NPORT | jq # why does headers.host show this address?
{
"path": "/",
"headers": {
"host": "172.18.0.5:31162",
"user-agent": "curl/8.7.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "172.18.0.5",
"ip": "::ffff:172.18.0.5",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-k4dw4"
},
"connection": {}
}
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# # "host": "172.23.0.2:31791", <<headers.host here is the address of the requested URL; it looks like this because we connected to the URL of $CNODE (the control-plane IP).>>
^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# # "hostname": "172.23.0.2", <<this hostname and>>
# "ip": "::ffff:172.23.0.2", <<this ip are the connecting client's IP; they show the node IP because, when the load-balanced destination is not a local pod, the source is POSTROUTING (SNAT)-ed to the node IP.>>
^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in $CNODE $NODE1 $NODE2 $NODE3 ; do echo ">> node $i <<"; docker exec -it mypc curl -s $i:$NPORT; echo; done
>> node 172.18.0.5 <<
{
"path": "/",
"headers": {
"host": "172.18.0.5:31162",
"user-agent": "curl/8.7.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "172.18.0.5",
"ip": "::ffff:172.18.0.5",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-24zg6"
},
"connection": {}
}
>> node 172.18.0.3 <<
>> node 172.18.0.2 <<
{
"path": "/",
"headers": {
"host": "172.18.0.2:31162",
"user-agent": "curl/8.7.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "172.18.0.2",
"ip": "::ffff:172.18.0.2",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-8vtf9"
},
"connection": {}
}
>> node 172.18.0.4 <<
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# # The control-plane node accepts the connection even though it hosts no destination pod! This is because the traffic is load-balanced to the Service's (nodePort) endpoints.
^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it mypc zsh -c "for i in {1..100}; do curl -s $CNODE:$NPORT | grep hostname; done | sort | uniq -c | sort -nr"
100 "hostname": "172.18.0.5",
36 "hostname": "deploy-echo-59979f4fb5-8vtf9"
33 "hostname": "deploy-echo-59979f4fb5-k4dw4"
31 "hostname": "deploy-echo-59979f4fb5-24zg6"
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it mypc zsh -c "for i in {1..100}; do curl -s $NODE1:$NPORT | grep hostname; done | sort | uniq -c | sort -nr"
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it mypc zsh -c "for i in {1..100}; do curl -s $NODE3:$NPORT | grep hostname; done | sort | uniq -c | sort -nr"
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it mypc zsh -c "while true; do curl -s --connect-timeout 1 $CNODE:$NPORT | grep hostname; date '+%Y-%m-%d %H:%M:%S' ; echo ; sleep 1; done"
"hostname": "172.18.0.5",
"hostname": "deploy-echo-59979f4fb5-24zg6"
2024-09-28 19:31:04
"hostname": "172.18.0.5",
"hostname": "deploy-echo-59979f4fb5-8vtf9"
2024-09-28 19:31:05
"hostname": "172.18.0.5",
"hostname": "deploy-echo-59979f4fb5-k4dw4"
2024-09-28 19:31:06
"hostname": "172.18.0.5",
"hostname": "deploy-echo-59979f4fb5-k4dw4"
2024-09-28 19:31:08
"hostname": "172.18.0.5",
"hostname": "deploy-echo-59979f4fb5-k4dw4"
2024-09-28 19:31:09
"hostname": "172.18.0.5",
"hostname": "deploy-echo-59979f4fb5-24zg6"
2024-09-28 19:31:10
^C(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get svc svc-nodeport
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-nodeport NodePort 10.200.1.93 <none> 9000:31162/TCP 7m23s
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# # A NodePort Service also includes a ClusterIP
# so it is reachable via CLUSTER-IP:PORT as well! <- let's try the following from the control node^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# CIP=$(kubectl get service svc-nodeport -o jsonpath="{.spec.clusterIP}")
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# CIPPORT=$(kubectl get service svc-nodeport -o jsonpath="{.spec.ports[0].port}")
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# echo $CIP $CIPPORT
10.200.1.93 9000
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane curl -s $CIP:$CIPPORT | jq
{
"path": "/",
"headers": {
"host": "10.200.1.93:9000",
"user-agent": "curl/7.88.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "10.200.1.93",
"ip": "::ffff:172.18.0.5",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "deploy-echo-59979f4fb5-8vtf9"
},
"connection": {}
}
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it mypc curl -s $CIP:$CIPPORT
^C(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# Connecting to the cluster IP:port from mypc is not possible, because mypc is not inside the Kubernetes cluster.^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# conntrack -E
[NEW] tcp 6 120 SYN_SENT src=192.168.10.10 dst=192.168.10.10 sport=33100 dport=6443 [UNREPLIED] src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33100
[UPDATE] tcp 6 60 SYN_RECV src=192.168.10.10 dst=192.168.10.10 sport=33100 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33100
[UPDATE] tcp 6 86400 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=33100 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33100 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=192.168.10.10 dst=192.168.10.10 sport=33104 dport=6443 [UNREPLIED] src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33104
[UPDATE] tcp 6 59 SYN_RECV src=192.168.10.10 dst=192.168.10.10 sport=33104 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33104
[UPDATE] tcp 6 86400 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=33104 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33104 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33100 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33100 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33100 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33100 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=192.168.10.10 dst=192.168.10.10 sport=33100 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33100 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33104 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33104 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33104 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33104 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=192.168.10.10 dst=192.168.10.10 sport=33104 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33104 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=127.0.0.1 dst=127.0.0.1 sport=37444 dport=10257 [UNREPLIED] src=127.0.0.1 dst=127.0.0.1 sport=10257 dport=37444
[UPDATE] tcp 6 60 SYN_RECV src=127.0.0.1 dst=127.0.0.1 sport=37444 dport=10257 src=127.0.0.1 dst=127.0.0.1 sport=10257 dport=37444
[UPDATE] tcp 6 86400 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=37444 dport=10257 src=127.0.0.1 dst=127.0.0.1 sport=10257 dport=37444 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=127.0.0.1 dst=127.0.0.1 sport=37444 dport=10257 src=127.0.0.1 dst=127.0.0.1 sport=10257 dport=37444 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=127.0.0.1 dst=127.0.0.1 sport=37444 dport=10257 src=127.0.0.1 dst=127.0.0.1 sport=10257 dport=37444 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=127.0.0.1 dst=127.0.0.1 sport=37444 dport=10257 src=127.0.0.1 dst=127.0.0.1 sport=10257 dport=37444 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=192.168.10.10 dst=192.168.10.10 sport=33120 dport=6443 [UNREPLIED] src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33120
[UPDATE] tcp 6 60 SYN_RECV src=192.168.10.10 dst=192.168.10.10 sport=33120 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33120
[UPDATE] tcp 6 86400 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=33120 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33120 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33120 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33120 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33120 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33120 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=192.168.10.10 dst=192.168.10.10 sport=33120 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33120 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=192.168.10.10 dst=192.168.10.10 sport=33130 dport=6443 [UNREPLIED] src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33130
[UPDATE] tcp 6 60 SYN_RECV src=192.168.10.10 dst=192.168.10.10 sport=33130 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33130
[UPDATE] tcp 6 86400 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=33130 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33130 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33130 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33130 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33130 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33130 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=192.168.10.10 dst=192.168.10.10 sport=33130 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33130 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=127.0.0.1 dst=127.0.0.1 sport=49404 dport=2381 [UNREPLIED] src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=49404
[UPDATE] tcp 6 60 SYN_RECV src=127.0.0.1 dst=127.0.0.1 sport=49404 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=49404
[UPDATE] tcp 6 86400 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=49404 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=49404 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=127.0.0.1 dst=127.0.0.1 sport=49404 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=49404 [ASSURED]
[UPDATE] tcp 6 30 LAST_ACK src=127.0.0.1 dst=127.0.0.1 sport=49404 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=49404 [ASSURED]
[UPDATE] tcp 6 120 TIME_WAIT src=127.0.0.1 dst=127.0.0.1 sport=49404 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=49404 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=35.203.211.154 dst=192.168.10.10 sport=52440 dport=48156 [UNREPLIED] src=192.168.10.10 dst=35.203.211.154 sport=48156 dport=52440
[DESTROY] tcp 6 120 CLOSE src=35.203.211.154 dst=192.168.10.10 sport=52440 dport=48156 [UNREPLIED] src=192.168.10.10 dst=35.203.211.154 sport=48156 dport=52440
[NEW] tcp 6 120 SYN_SENT src=192.168.10.10 dst=192.168.10.10 sport=33138 dport=6443 [UNREPLIED] src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33138
[UPDATE] tcp 6 60 SYN_RECV src=192.168.10.10 dst=192.168.10.10 sport=33138 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33138
[UPDATE] tcp 6 86400 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=33138 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33138 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33138 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33138 [ASSURED]
[UPDATE] tcp 6 30 LAST_ACK src=192.168.10.10 dst=192.168.10.10 sport=33138 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33138 [ASSURED]
[UPDATE] tcp 6 120 TIME_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33138 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33138 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=147.185.133.29 dst=192.168.10.10 sport=53482 dport=49455 [UNREPLIED] src=192.168.10.10 dst=147.185.133.29 sport=49455 dport=53482
[DESTROY] tcp 6 120 CLOSE src=147.185.133.29 dst=192.168.10.10 sport=53482 dport=49455 [UNREPLIED] src=192.168.10.10 dst=147.185.133.29 sport=49455 dport=53482
[NEW] tcp 6 120 SYN_SENT src=127.0.0.1 dst=127.0.0.1 sport=34474 dport=9099 [UNREPLIED] src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34474
[UPDATE] tcp 6 60 SYN_RECV src=127.0.0.1 dst=127.0.0.1 sport=34474 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34474
[UPDATE] tcp 6 86400 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=34474 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34474 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=127.0.0.1 dst=127.0.0.1 sport=34474 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34474 [ASSURED]
[UPDATE] tcp 6 30 LAST_ACK src=127.0.0.1 dst=127.0.0.1 sport=34474 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34474 [ASSURED]
[UPDATE] tcp 6 120 TIME_WAIT src=127.0.0.1 dst=127.0.0.1 sport=34474 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34474 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=127.0.0.1 dst=127.0.0.1 sport=34484 dport=9099 [UNREPLIED] src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34484
[UPDATE] tcp 6 60 SYN_RECV src=127.0.0.1 dst=127.0.0.1 sport=34484 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34484
[UPDATE] tcp 6 86400 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=34484 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34484 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=127.0.0.1 dst=127.0.0.1 sport=34484 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34484 [ASSURED]
[UPDATE] tcp 6 30 LAST_ACK src=127.0.0.1 dst=127.0.0.1 sport=34484 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34484 [ASSURED]
[UPDATE] tcp 6 120 TIME_WAIT src=127.0.0.1 dst=127.0.0.1 sport=34484 dport=9099 src=127.0.0.1 dst=127.0.0.1 sport=9099 dport=34484 [ASSURED]
[NEW] tcp 6 120 SYN_SENT src=192.168.10.10 dst=192.168.10.10 sport=33152 dport=6443 [UNREPLIED] src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33152
[UPDATE] tcp 6 60 SYN_RECV src=192.168.10.10 dst=192.168.10.10 sport=33152 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33152
[UPDATE] tcp 6 86400 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=33152 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33152 [ASSURED]
[UPDATE] tcp 6 120 FIN_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33152 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33152 [ASSURED]
[UPDATE] tcp 6 300 CLOSE_WAIT src=192.168.10.10 dst=192.168.10.10 sport=33152 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33152 [ASSURED]
[UPDATE] tcp 6 10 CLOSE src=192.168.10.10 dst=192.168.10.10 sport=33152 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=33152 [ASSURED]
^Cconntrack v1.4.6 (conntrack-tools): 64 flow events have been shown.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# # This shows that connection information is tracked to support SNAT and fast iptables rule processing.
^C
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# conntrack -L --any-nat
tcp 6 101 SYN_SENT src=172.18.0.6 dst=10.200.1.93 sport=46636 dport=9000 [UNREPLIED] src=10.200.1.93 dst=192.168.10.10 sport=9000 dport=46636 mark=0 use=1
tcp 6 86374 ESTABLISHED src=192.168.10.10 dst=10.200.1.1 sport=44282 dport=443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=54023 [ASSURED] mark=0 use=1
tcp 6 86385 ESTABLISHED src=192.168.10.10 dst=10.200.1.1 sport=40858 dport=443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=6906 [ASSURED] mark=0 use=1
tcp 6 86399 ESTABLISHED src=192.168.10.10 dst=10.200.1.1 sport=44270 dport=443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=38536 [ASSURED] mark=0 use=1
tcp 6 86385 ESTABLISHED src=192.168.10.10 dst=10.200.1.1 sport=44248 dport=443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=25956 [ASSURED] mark=0 use=1
conntrack v1.4.6 (conntrack-tools): 5 flow entries have been shown.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4#
iptables -v --numeric --table nat --list
# Chain cali-nat-outgoing (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:flqWnvo8yq4ULQLa */ match-set cali40masq-ipam-pools src ! match-set cali40all-ipam-pools dst random-fully
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in {1..100}; do curl -s $CNODE:$NPORT | grep hostname; done | sort | uniq -c | sort -nr
100 "hostname": "172.18.0.5",
36 "hostname": "deploy-echo-59979f4fb5-24zg6"
33 "hostname": "deploy-echo-59979f4fb5-k4dw4"
31 "hostname": "deploy-echo-59979f4fb5-8vtf9"
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in {1..100}; do curl -s $NODE1:$NPORT | grep hostname; done | sort | uniq -c | sort -nr
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in {1..100}; do curl -s $NODE2:$NPORT | grep hostname; done | sort | uniq -c | sort -nr
100 "hostname": "172.18.0.2",
35 "hostname": "deploy-echo-59979f4fb5-24zg6"
34 "hostname": "deploy-echo-59979f4fb5-k4dw4"
31 "hostname": "deploy-echo-59979f4fb5-8vtf9"
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in {1..100}; do curl -s $NODE2:$NPORT | grep hostname; done | sort | uniq -c | sort -nr
for i in {1..100}; do curl -s $NODE3:$NPORT | grep hostname; done | sort | uniq -c | sort -nr
100 "hostname": "172.18.0.2",
39 "hostname": "deploy-echo-59979f4fb5-8vtf9"
33 "hostname": "deploy-echo-59979f4fb5-24zg6"
28 "hostname": "deploy-echo-59979f4fb5-k4dw4"
A NodePort service in Kubernetes exposes a service on a static port on each node in the cluster.
- The same port (NodePort) is opened on all nodes.
- Clients can access the service through any node’s IP:NodePort combination.
- kube-proxy creates iptables rules on all nodes to manage traffic.
With the default configuration (externalTrafficPolicy: Cluster; the current policy can be checked with the one-liner after this list):
- The client’s IP is changed to the node’s IP through SNAT.
- As a result, the target pod cannot see the real client IP.
- SNAT ensures that return traffic is correctly routed.
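A quick way to see which policy a Service is currently using (svc-nodeport is the Service name from this lab; the jsonpath expression is standard kubectl syntax):
kubectl get svc svc-nodeport -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'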
When externalTrafficPolicy: Local is applied (a minimal manifest sketch follows this list):
- Traffic is only routed to nodes that have target pods.
- SNAT doesn’t occur, preserving the original client IP.
- Attempts to access through nodes without target pods fail.
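For reference, a minimal manifest sketch that enables this behavior might look like the following; the Service name, selector label, and ports here are illustrative, not taken from the lab files:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport-local        # hypothetical name for illustration
spec:
  type: NodePort
  externalTrafficPolicy: Local    # route only to pods on the receiving node; preserves client IP
  selector:
    app: my-app                   # assumed pod label
  ports:
  - port: 9000                    # Service port
    targetPort: 8080              # container port
EOF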
::ffff:10.10.2.1 - - [28/Sep/2024:19:37:52 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:37:57 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:37:57 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:02 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:02 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:07 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:07 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:12 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:12 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:17 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
::ffff:10.10.2.1 - - [28/Sep/2024:19:38:17 +0000] "GET / HTTP/1.1" 200 425 "-" "kube-probe/1.31"
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl logs -f deploy-echo-59979f4fb5-k4dw4 | grep HTTP
When externalTrafficPolicy: Local is set, the iptables rules change (a quick inspection sketch follows this list):
- Traffic jumps from the KUBE-NODEPORTS chain to a KUBE-EXT-* chain.
- From KUBE-EXT-*, it jumps to a KUBE-SVL-* (Service Local) chain instead of the cluster-wide KUBE-SVC-* chain.
- The KUBE-SVL-* chain only forwards traffic to pods running locally on that node.
Benefits:
- Preserves the original client IP.
- Potentially improves performance by eliminating unnecessary network hops.
Drawbacks:
- Connections fail when accessing through nodes without target pods.
- Load balancing may be uneven, depending on pod distribution.
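As a rough way to confirm this chain on a kind node (the node name myk8s-worker is taken from this lab; adjust as needed), the relevant rules can be listed with:
docker exec -it myk8s-worker iptables -t nat -S | grep -E 'KUBE-(NODEPORTS|EXT|SVL)'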
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/# iptables -t nat --zero
root@myk8s-control-plane:/# iptables -t nat -S | grep PREROUTING
-P PREROUTING ACCEPT
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -d 172.18.0.1/32 -j DOCKER_OUTPUT
root@myk8s-control-plane:/# iptables -t nat -S | grep KUBE-SERVICES  # Because external clients connect to NodeIP:NodePort, the packet matches --dst-type LOCAL and jumps to KUBE-NODEPORTS.
-N KUBE-SERVICES
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-SERVICES -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.200.1.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.200.1.112/32 -p tcp -m comment --comment "kube-system/kube-ops-view:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-7EJNTS7AENER2WX5
-A KUBE-SERVICES -d 10.200.1.197/32 -p tcp -m comment --comment "default/svc-clusterip:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-SVC-KBDEBIL6IU6WL7RF
-A KUBE-SERVICES -d 10.200.1.93/32 -p tcp -m comment --comment "default/svc-nodeport:svc-webport cluster IP" -m tcp --dport 9000 -j KUBE-SVC-VTR7MTHHNMFZ3OFS
-A KUBE-SERVICES -d 10.200.1.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.200.1.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
root@myk8s-control-plane:/# # Jump from KUBE-NODEPORTS to the KUBE-EXT-* chain!
## -m nfacct --nfacct-name localhost_nps_accepted_pkts has been added: it counts the packet flow under the given counter name.
$ NPORT=$(kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}')
bash: $: command not found
root@myk8s-control-plane:/# NPORT=$(kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}')
root@myk8s-control-plane:/# echo $NPORT
31162
root@myk8s-control-plane:/# iptables -t nat -S | grep KUBE-NODEPORTS | grep $NPORT
-A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-VTR7MTHHNMFZ3OFS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -j KUBE-EXT-VTR7MTHHNMFZ3OFS
root@myk8s-control-plane:/# nfacct list
{ pkts = 00000000000000000000, bytes = 00000000000000000000 } = ct_state_invalid_dropped_pkts;
{ pkts = 00000000000000000000, bytes = 00000000000000000000 } = localhost_nps_accepted_pkts;
root@myk8s-control-plane:/# iptables -t nat -S | grep "A KUBE-EXT-VTR7MTHHNMFZ3OFS"
-A KUBE-EXT-VTR7MTHHNMFZ3OFS -m comment --comment "masquerade traffic for default/svc-nodeport:svc-webport external destinations" -j KUBE-MARK-MASQ
-A KUBE-EXT-VTR7MTHHNMFZ3OFS -j KUBE-SVC-VTR7MTHHNMFZ3OFS
root@myk8s-control-plane:/# iptables -t nat -S | grep "A KUBE-SVC-VTR7MTHHNMFZ3OFS -"
-A KUBE-SVC-VTR7MTHHNMFZ3OFS -m comment --comment "default/svc-nodeport:svc-webport -> 10.10.1.4:8080" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-BVGSCXVWMMVKXIKE
-A KUBE-SVC-VTR7MTHHNMFZ3OFS -m comment --comment "default/svc-nodeport:svc-webport -> 10.10.2.4:8080" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-R5FVJKGMGG2F5UME
-A KUBE-SVC-VTR7MTHHNMFZ3OFS -m comment --comment "default/svc-nodeport:svc-webport -> 10.10.3.6:8080" -j KUBE-SEP-V3RZOCJ3OXHZAILQ
root@myk8s-control-plane:/# iptables -t nat -S | grep "A KUBE-POSTROUTING"
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
We started by resetting the packet counters in the NAT table using iptables -t nat --zero. This ensures a clean slate for observing packet flows.
PREROUTING Chain: The PREROUTING chain is the first point of contact for incoming packets. Key rules include:
- A jump to KUBE-SERVICES for handling Kubernetes service portals
- A specific rule for Docker output, likely for container networking
KUBE-SERVICES Chain: This chain is crucial for Kubernetes service routing. Notable points:
- It handles ClusterIP services for various components (e.g., kube-dns, kubernetes API server)
- Each service has its own targeted chain (e.g., KUBE-SVC-ERIFXISQEP7F7OF4 for DNS)
- The last rule in this chain directs traffic to KUBE-NODEPORTS for NodePort services
NodePort Service Handling: For the specific NodePort service (svc-nodeport):
- The service is exposed on port 31162
- Two rules handle this: one for localhost (127.0.0.0/8) and another for external traffic
- Both rules jump to KUBE-EXT-VTR7MTHHNMFZ3OFS
Packet Accounting: The system uses nfacct for packet accounting:
- A counter named “localhost_nps_accepted_pkts” tracks localhost NodePort connections
- Another counter “ct_state_invalid_dropped_pkts” likely tracks invalid connections
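To watch the localhost counter move, one hedged check (run inside a node, assuming the nfacct binary is present as in the session above, and using the NodePort 31162 observed earlier) is to read it before and after a local request:
nfacct get localhost_nps_accepted_pkts        # read the counter
curl -s http://localhost:31162 >/dev/null     # hit the NodePort from localhost
nfacct get localhost_nps_accepted_pkts        # the pkts value should have increased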
KUBE-EXT Chain: The KUBE-EXT-VTR7MTHHNMFZ3OFS chain:
- Marks packets for potential masquerading (NAT)
- Jumps to KUBE-SVC-VTR7MTHHNMFZ3OFS for further processing
Load Balancing: The KUBE-SVC-VTR7MTHHNMFZ3OFS chain implements load balancing:
- It distributes traffic among three backend pods (10.10.1.4, 10.10.2.4, 10.10.3.6)
- Uses random selection with weighted probabilities for even distribution
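The probabilities are applied as a cascade, not independently: the first rule catches about 1/3 of new connections, the second catches 1/2 of the remaining 2/3 (again 1/3 of the total), and the last rule takes whatever is left (the final 1/3). A quick shell sketch (illustrative only, not part of the lab) that mimics this cascade:
awk 'BEGIN {
  srand()
  for (i = 0; i < 100000; i++) {
    if (rand() < 1/3)      a++   # first KUBE-SEP rule: probability ~0.333
    else if (rand() < 0.5) b++   # second rule: half of the remaining traffic
    else                   c++   # last rule: everything left over
  }
  printf "pod1=%d pod2=%d pod3=%d\n", a, b, c   # each should be roughly 33000
}'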
POSTROUTING and Masquerading: The KUBE-POSTROUTING chain:
- Handles packets marked for NAT (0x4000 mark)
- Applies MASQUERADE (SNAT) to packets requiring it, ensuring proper return routing
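One way to confirm masquerading is actually being applied is to watch the packet counters on this chain while sending NodePort traffic from outside the node (standard iptables options; run inside a node):
iptables -t nat -L KUBE-POSTROUTING -n -v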
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# docker exec -it myk8s-control-plane iptables -t nat -S | grep KUBE-NODEPORTS | grep $NPORT
-A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-VTR7MTHHNMFZ3OFS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -j KUBE-EXT-VTR7MTHHNMFZ3OFS
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i iptables -t nat -S | grep KUBE-NODEPORTS | grep $NPORT; echo; done
>> node myk8s-control-plane
-A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-VTR7MTHHNMFZ3OFS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -j KUBE-EXT-VTR7MTHHNMFZ3OFS
>> node myk8s-worker
>> node myk8s-worker2
-A KUBE-NODEPORTS -d 127.0.0.0/8 -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -m nfacct --nfacct-name localhost_nps_accepted_pkts -j KUBE-EXT-VTR7MTHHNMFZ3OFS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/svc-nodeport:svc-webport" -m tcp --dport 31162 -j KUBE-EXT-VTR7MTHHNMFZ3OFS
>> node myk8s-worker3
The NodePort service is configured to use port 31162 across all nodes in the cluster. This is evident from the consistent --dport 31162 in the iptables rules.
The iptables rules for the NodePort service are distributed across most nodes in the cluster, but not all.
- The control-plane node has the rules.
- Worker2 node has the rules.
- Worker and Worker3 nodes do not show the rules in this output.
Both rules direct traffic to the KUBE-EXT-VTR7MTHHNMFZ3OFS chain, which likely handles further processing and load balancing.
The fact that worker and worker3 nodes don’t show these rules is unusual. In a typical Kubernetes setup, we would expect these rules to be present on all nodes. The NodePort service should be accessible via port 31162 on the control-plane and worker2 nodes. Access attempts through worker and worker3 might fail unless there are other mechanisms in place.
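Since kube-proxy is what programs these rules, one hedged way to investigate the discrepancy (assuming the usual kubeadm/kind label k8s-app=kube-proxy) is to check that kube-proxy is running and healthy on every node:
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=20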
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# for i in control-plane worker worker2 worker3; do echo ">> node myk8s-$i <<"; docker exec -it myk8s-$i conntrack -F; echo; done
>> node myk8s-control-plane <<
conntrack v1.4.7 (conntrack-tools): connection tracking table has been emptied.
>> node myk8s-worker <<
conntrack v1.4.7 (conntrack-tools): connection tracking table has been emptied.
>> node myk8s-worker2 <<
conntrack v1.4.7 (conntrack-tools): connection tracking table has been emptied.
>> node myk8s-worker3 <<
conntrack v1.4.7 (conntrack-tools): connection tracking table has been emptied.
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl delete -f svc-nodeport.yml
deployment.apps "echo-deployment" deleted
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f svc-nodeport.yml
deployment.apps/echo-deployment created
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl delete -f svc-
svc-clusterip.yaml svc-nodeport.yaml svc-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl delete -f svc-
svc-clusterip.yaml svc-nodeport.yaml svc-nodeport.yml
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl delete -f svc-clusterip.yaml
service "svc-clusterip" deleted
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl apply -f svc-clusterip.yaml
service/svc-clusterip created
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl patch svc svc-nodeport -p '{"spec":{"externalTrafficPolicy": "Local"}}'
service/svc-nodeport patched
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl get svc svc-nodeport -o json | grep 'TrafficPolicy"'
"externalTrafficPolicy": "Local",
"internalTrafficPolicy": "Cluster",
(⎈|kind-myk8s:N/A) root@k8s-m:~/week4# kubectl scale deployment deploy-echo --replicas=2
deployment.apps/deploy-echo scaled
We retrieved detailed information about all pods in the current namespace. The '-o wide' flag provides additional details such as the IP address and the node on which each pod is running (the exact command is not shown in the capture; a presumed form follows this list).
- Two 'deploy-echo' pods running on different worker nodes (myk8s-worker and myk8s-worker2).
- One 'net-pod' running on the control-plane node.
This distribution is crucial for understanding how the NodePort service with externalTrafficPolicy: Local will behave, as it will only route traffic to nodes that have pods for this service.
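A presumed form of that listing command:
kubectl get pods -o wide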
NPORT=$(kubectl get service svc-nodeport -o jsonpath='{.spec.ports[0].nodePort}')
echo $NPORT
These commands retrieve the NodePort assigned to the service and store it in the NPORT variable. The ‘jsonpath’ option is used to extract the specific NodePort value from the service configuration.
The NodePort is 31177. This means that the service is accessible on this port on any node in the cluster. However, due to the ‘Local’ externalTrafficPolicy, the behavior of this NodePort will be different from the default.
for i in $CNODE $NODE1 $NODE2 $NODE3 ; do echo ">> node $i <<"; docker exec -it mypc curl -s --connect-timeout 1 $i:$NPORT; echo; done
This command iterates through all nodes in the cluster, attempting to connect to the NodePort service on each. It uses ‘curl’ from within a ‘mypc’ container to simulate external access.
- No response from the control-plane node (172.23.0.2) and worker3 (172.23.0.3), as they don’t have pods for this service.
- Successful responses from worker (172.23.0.4) and worker2 (172.23.0.5), each returning information about the pod running on that node. This behavior demonstrates the ‘Local’ policy in action: traffic is only served by nodes that have local pods for the service.
docker exec -it mypc zsh -c "for i in {1..100}; do curl -s $NODE1:$NPORT | grep hostname; done | sort | uniq -c | sort -nr"
docker exec -it mypc zsh -c "for i in {1..100}; do curl -s $NODE2:$NPORT | grep hostname; done | sort | uniq -c | sort -nr"
These commands perform 100 requests to each of the nodes that have pods, then count and sort the responses based on the hostname.
For both NODE1 and NODE2, all 100 requests are served by the single pod on that node. This confirms that:
- There’s no cross-node load balancing.
- The ‘Local’ policy ensures that each node only serves traffic to its own pods.
iptables Rule Inspection:
iptables -t nat -S | grep 31177
iptables -t nat -S | grep 'A KUBE-EXT-VTR7MTHHNMFZ3OFS'
iptables -t nat -S | grep 'A KUBE-SVL-VTR7MTHHNMFZ3OFS'
iptables -t nat -S | grep 'A KUBE-SEP-COBCKEECYTEF2ZXK'
Explanation: These commands inspect the iptables NAT rules related to the NodePort service. They show the chain of rules that handle incoming traffic to the NodePort.
- Traffic to the NodePort (31177) is directed to the KUBE-EXT chain.
- The KUBE-EXT chain then directs to the KUBE-SVL (Service Local) chain.
- The KUBE-SVL chain performs DNAT (Destination NAT) to the specific local pod IP and port.
- Notably, there’s no SNAT (Source NAT) rule, which is why the client IP is preserved.
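To double-check that the original client IP really reaches the pod under the Local policy, the echo pod's access log can be inspected for the external client address (deploy-echo is the Deployment from this lab; filtering out kube-probe lines is just for readability):
kubectl logs deploy/deploy-echo | grep -v kube-probe | tail -5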
Network Performance Testing in Kubernetes Using iperf3
We use iperf3 to measure network performance between pods in a Kubernetes cluster. iperf3 is a widely used tool for measuring network bandwidth, supporting TCP, UDP, and SCTP. It operates in a client-server model: the server listens for test traffic, and the client generates the load and reports the measured throughput.
We begin by deploying iperf3 server and client pods in our Kubernetes cluster using a pre-configured YAML file:
kubectl apply -f https://raw.githubusercontent.com/gasida/PKOS/main/aews/k8s-iperf3.yaml
This command creates two deployments (iperf3-client and iperf3-server) and a ClusterIP service for the server. It’s crucial to verify that the client and server pods are scheduled on different worker nodes to ensure we’re testing inter-node network performance.
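Before running the tests, it is worth confirming that placement; a simple check that assumes nothing about labels is:
kubectl get pods -o wide | grep iperf3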
We start with a basic TCP performance test, running for 5 seconds:
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 5
The -t 5 option sets the test duration to 5 seconds. The output shows a transfer rate of about 41.6 Gbits/sec, which is significant but noticeably lower than what we might see in a bare-metal environment. This reduction in performance is likely due to the overhead introduced by Kubernetes networking, including kube-proxy and iptables forwarding rules.
Next, we perform a UDP test with a target bandwidth of 20 Gbits/sec:
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -u -b 20G
The -u option specifies UDP, and -b 20G sets the target bandwidth to 20 Gbits/sec. Interestingly, the achieved bandwidth is only about 1.47 Gbits/sec, significantly lower than the TCP test.
This dramatic reduction in UDP performance compared to TCP suggests that the Kubernetes networking stack may be optimized for TCP traffic, or there might be specific UDP handling mechanisms introducing additional overhead.
We also conduct a bidirectional TCP test to measure simultaneous upload and download speeds:
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 5 --bidir
The --bidir option enables bidirectional testing. This test shows asymmetric performance, with client-to-server (TX) achieving about 18.1 Gbits/sec and server-to-client (RX) reaching 20.5 Gbits/sec. This asymmetry could be due to various factors in the Kubernetes networking implementation or underlying infrastructure.
Lastly, we perform a multi-stream TCP test to simulate multiple concurrent connections:
kubectl exec -it deploy/iperf3-client -- iperf3 -c iperf3-server -t 10 -P 2
The -P 2 option creates two parallel streams. This test achieves a combined bandwidth of about 50.1 Gbits/sec, demonstrating that multiple streams can utilize more of the available network capacity, possibly by overcoming some of the per-stream overheads in the Kubernetes networking stack.