By default, Kubernetes allows the creation of Pods with privileged containers, which can compromise system security. Pod Security Policy (PSP) protects the cluster from the impact of privileged Pods by ensuring that the requester is actually authorized to create a Pod with the requested configuration.
Admission Controller
An admission controller intercepts requests to the kube-apiserver. The interception happens before the requested object is persisted, but after the request has been authenticated and authorized. This lets us inspect the origin of the request object and validate that its contents are allowed. Admission controllers are enabled by adding them to the --enable-admission-plugins flag of the kube-apiserver; before version 1.10, the now-deprecated --admission-control flag was used. Also note that the order of admission controllers matters.
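On clusters installed with kubeadm, the kube-apiserver runs as a static Pod, so you can check which admission plugins are currently enabled by inspecting its manifest. This is only a sketch: the path below is the kubeadm default and may differ on your cluster.

```shell
# Show the admission plugins enabled on a kubeadm cluster
# (static Pod manifest path is the kubeadm default; adjust for your setup)
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
```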
Add PodSecurityPolicy to the --enable-admission-plugins parameter of the kube-apiserver, and restart the kube-apiserver:
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodSecurityPolicy
The other plugins in the list are those recommended in the Kubernetes documentation.
PodSecurityPolicy has been added to the list above, so the PSP admission controller is now enabled. However, our cluster does not yet contain any security policies, which means the creation of any new Pod will fail.
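You can confirm that the admission controller is active but that no policies exist yet; with no PodSecurityPolicy objects defined, the listing should come back empty:

```shell
# List the pod security policies in the cluster;
# expect "No resources found" right after enabling the admission controller
kubectl get podsecuritypolicies
```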
For example, let's create an Nginx Deployment with the following YAML file: (nginx.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
Then directly create the above Deployment:
$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deploy created
We can see that the Deployment was created successfully. Now check the Pod, ReplicaSet and Deployment under the default namespace:
$ kubectl get po,rs,deploy -l app=nginx
NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-deploy-77f7d4c6b4   1         0         0       40s
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deploy     0/1     0            0           40s
You can see that both the ReplicaSet and the Deployment were created successfully, but the ReplicaSet controller did not create a Pod. This is where ServiceAccounts come in.
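A quick way to see why no Pod appeared is to look at the ReplicaSet's events. You would typically find a FailedCreate warning from the replicaset-controller stating that the Pod could not be validated against any pod security policy:

```shell
# Inspect the events of the failing ReplicaSet; expect a FailedCreate warning
# such as "unable to validate against any pod security policy"
kubectl describe rs -l app=nginx
```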
Controller Manager ServiceAccounts
Generally speaking, users rarely create Pods directly. They usually create them through controllers such as Deployment, StatefulSet, Job or DaemonSet. Here we need to configure the kube-controller-manager to use a separate ServiceAccount for each controller, which we do by adding the following flag to its startup parameters:
--use-service-account-credentials=true
This flag is enabled by default in most installation tools (such as kubeadm), so it usually does not need to be configured separately.
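On a kubeadm cluster you can verify this, again via the controller manager's static Pod manifest (the path below is the kubeadm default and may differ on your setup):

```shell
# Confirm that per-controller ServiceAccounts are enabled on a kubeadm cluster
grep use-service-account-credentials /etc/kubernetes/manifests/kube-controller-manager.yaml
```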
When the kube-controller-manager runs with the above flag, it uses the following ServiceAccounts, automatically generated by Kubernetes:
$ kubectl get serviceaccount -n kube-system | egrep -o '[A-Za-z0-9-]+-controller'
attachdetach-controller
calico-kube-controller
certificate-controller
clusterrole-aggregation-controller
cronjob-controller
daemon-set-controller
deployment-controller
disruption-controller
endpoint-controller
expand-controller
job-controller
namespace-controller
node-controller
pv-protection-controller
pvc-protection-controller
replicaset-controller
replication-controller
resourcequota-controller
service-account-controller
service-controller
statefulset-controller
ttl-controller
These ServiceAccounts let us control which policies each controller is allowed to use.
Policies
The PodSecurityPolicy object provides a declarative way to express what the users and ServiceAccounts in our cluster are allowed to create. See the policy documentation for the available settings. In this example we will create two policies. The first is a restrictive "default" policy, which ensures that Pods cannot be created with privileged settings (such as hostNetwork). The second is a permissive "elevated" policy that allows privileged settings for certain Pods, such as those created in the kube-system namespace.
First, create the restrictive policy that will serve as the default: (psp-restrictive.yaml)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrictive
spec:
  privileged: false
  hostNetwork: false
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - 'configMap'
  - 'downwardAPI'
  - 'emptyDir'
  - 'persistentVolumeClaim'
  - 'secret'
  - 'projected'
  allowedCapabilities:
  - '*'
Directly create the above PSP object:
$ kubectl apply -f psp-restrictive.yaml
podsecuritypolicy.policy/restrictive configured
Although restricted access is sufficient for most Pods, a permissive policy is required for Pods that need elevated access. For example, kube-proxy needs hostNetwork enabled:
$ kubectl get pods -n kube-system -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE
kube-proxy-4z4vf   1/1     Running   0          18d
$ kubectl get pods -n kube-system kube-proxy-4z4vf -o yaml | grep hostNetwork
  hostNetwork: true
For such Pods we create a permissive policy that grants the elevated permissions: (psp-permissive.yaml)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permissive
spec:
  privileged: true
  hostNetwork: true
  hostIPC: true
  hostPID: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - min: 0
    max: 65535
  volumes:
  - '*'
Similarly, directly create the above PSP object:
$ kubectl apply -f psp-permissive.yaml
podsecuritypolicy.policy/permissive configured
$ kubectl get psp
NAME          PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
permissive    true           RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
restrictive   false   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,projected
Now the policies are in place, but we still need to tell Kubernetes authorization which users or ServiceAccounts requesting Pod creation may use the restrictive or the permissive policy. For that we need RBAC.
RBAC
The RBAC side of Pod security policies can be confusing. RBAC determines which policies an account can use. A cluster-wide ClusterRoleBinding gives ServiceAccounts (such as replicaset-controller) access to the restrictive policy, while a namespace-wide RoleBinding grants access to the permissive policy only within a specific namespace (such as kube-system).
First, create a ClusterRole that allows using the restrictive policy. Then create a ClusterRoleBinding that binds this ClusterRole to the ServiceAccounts of all controllers in the system: (psp-restrictive-rbac.yaml)
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restrictive
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - restrictive
  verbs:
  - use
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-default
subjects:
- kind: Group
  name: system:serviceaccounts
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: psp-restrictive
  apiGroup: rbac.authorization.k8s.io
Directly create resource objects related to RBAC above:
$ kubectl apply -f psp-restrictive-rbac.yaml
clusterrole.rbac.authorization.k8s.io/psp-restrictive created
clusterrolebinding.rbac.authorization.k8s.io/psp-default created
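Before recreating the Deployment, you can verify the new binding with kubectl auth can-i, impersonating the ReplicaSet controller's ServiceAccount. Since the ClusterRoleBinding targets the system:serviceaccounts group, this should answer yes:

```shell
# Check that the replicaset-controller ServiceAccount may "use"
# the restrictive policy (should answer "yes")
kubectl auth can-i use podsecuritypolicy/restrictive \
  --as=system:serviceaccount:kube-system:replicaset-controller
```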
Then we will recreate the Deployment defined above:
$ kubectl delete -f nginx.yaml
deployment.apps "nginx-deploy" deleted
$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deploy created
After creation, again check the resource objects we created in the default namespace:
$ kubectl get po,rs,deploy -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-77f7d4c6b4-njfdl   1/1     Running   0          13s
NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-deploy-77f7d4c6b4   1         1         1       13s
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deploy     1/1     1            1           13s
We can see that the Pod was created successfully. But if we try to do something the policy does not allow, it should be rejected. First delete the above Deployment:
$ kubectl delete -f nginx.yaml
deployment.apps "nginx-deploy" deleted
Now we add hostNetwork: true to the nginx-deploy manifest to request the hostNetwork privilege: (nginx-hostnetwork.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hostnetwork-deploy
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true  # Note the addition of hostNetwork
      containers:
      - name: nginx
        image: nginx:1.15.4
Then directly create the above Deployment resource object:
$ kubectl apply -f nginx-hostnetwork.yaml
deployment.apps/nginx-hostnetwork-deploy created
After creation, you can also view some resource objects under the default namespace:
$ kubectl get po,rs,deploy -l app=nginx
NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-hostnetwork-deploy-74c8fbd687   1         0         0       44s
NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-hostnetwork-deploy     0/1     0            0           44s
Again we find that the ReplicaSet has not created a Pod. Use the kubectl describe command on the ReplicaSet we just created for more information:
$ kubectl describe rs nginx-hostnetwork-deploy-74c8fbd687
Name:         nginx-hostnetwork-deploy-74c8fbd687
......
Events:
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  80s (x15 over 2m42s)  replicaset-controller  Error creating: pods "nginx-hostnetwork-deploy-74c8fbd687-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used]
We can see that hostNetwork is indeed not allowed. However, in some cases we do want to create Pods with hostNetwork in a given namespace (such as kube-system). For that, we create a ClusterRole that allows using the permissive policy, and then a RoleBinding in the specific namespace that binds this ClusterRole to the relevant controller ServiceAccounts: (psp-permissive-rbac.yaml)
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-permissive
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - permissive
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: psp-permissive
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-permissive
subjects:
- kind: ServiceAccount
  name: daemon-set-controller
  namespace: kube-system
- kind: ServiceAccount
  name: replicaset-controller
  namespace: kube-system
- kind: ServiceAccount
  name: job-controller
  namespace: kube-system
Then directly create the RBAC related resource objects above:
$ kubectl apply -f psp-permissive-rbac.yaml
clusterrole.rbac.authorization.k8s.io/psp-permissive created
rolebinding.rbac.authorization.k8s.io/psp-permissive created
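Because this binding is a namespaced RoleBinding rather than a ClusterRoleBinding, it only takes effect for requests in kube-system. You can check this with kubectl auth can-i, which should answer yes here but no if you repeat it with -n default:

```shell
# The daemon-set-controller should be able to use the permissive policy,
# but only within the kube-system namespace
kubectl auth can-i use podsecuritypolicy/permissive -n kube-system \
  --as=system:serviceaccount:kube-system:daemon-set-controller
```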
Now we can create Pods with hostNetwork in the kube-system namespace. Change the nginx manifest above to use the kube-system namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hostnetwork-deploy
  namespace: kube-system
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx:1.15.4
Recreate this Deployment:
$ kubectl apply -f nginx-hostnetwork.yaml
deployment.apps/nginx-hostnetwork-deploy created
After creation, you can also view the creation of the corresponding resource object:
$ kubectl get po,rs,deploy -n kube-system -l app=nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-hostnetwork-deploy-74c8fbd687-7x8px   1/1     Running   0          2m1s
NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-hostnetwork-deploy-74c8fbd687   1         1         1       2m1s
NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-hostnetwork-deploy     1/1     1            1           2m1s
Now we can see that the Pod was created successfully in the kube-system namespace.
Application-specific ServiceAccount
What if we want to enforce the restrictive policy in a namespace, but one application in that namespace needs the permissive policy? In the model so far we only have cluster-level and namespace-level bindings. To give a single application its own elevated permissions, we can grant that application's ServiceAccount access to the permissive ClusterRole.
For example, create a ServiceAccount named specialsa under the default namespace:
$ kubectl create serviceaccount specialsa
serviceaccount/specialsa created
Then create a RoleBinding that binds specialsa to the psp-permissive ClusterRole above: (specialsa-psp.yaml)
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: specialsa-psp-permissive
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-permissive
subjects:
- kind: ServiceAccount
  name: specialsa
  namespace: default
Create the RoleBinding object above:
$ kubectl apply -f specialsa-psp.yaml
rolebinding.rbac.authorization.k8s.io/specialsa-psp-permissive created
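As before, the new binding can be checked with kubectl auth can-i before wiring the ServiceAccount into the Deployment; since the RoleBinding lives in default, this should answer yes:

```shell
# specialsa should now be allowed to use the permissive policy,
# but only within the default namespace
kubectl auth can-i use podsecuritypolicy/permissive -n default \
  --as=system:serviceaccount:default:specialsa
```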
Then add the serviceAccount attribute to the Deployment above: (nginx-hostnetwork-sa.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hostnetwork-deploy
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      serviceAccount: specialsa  # Note the sa with the permissive binding used here
      containers:
      - name: nginx
        image: nginx:1.15.4
Then create it directly:
$ kubectl apply -f nginx-hostnetwork-sa.yaml
deployment.apps/nginx-hostnetwork-deploy configured
At this point we can see that the Pod with hostNetwork in the default namespace has also been created successfully:
$ kubectl get po,rs,deploy -l app=nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-hostnetwork-deploy-6c85dfbf95-hqt8j   1/1     Running   0          65s
NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-hostnetwork-deploy-6c85dfbf95   1         1         1       65s
replicaset.extensions/nginx-hostnetwork-deploy-74c8fbd687   0         0         0       31m
NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-hostnetwork-deploy     1/1     1            1           31m
Above we described how the Pod Security Policy admission controller, combined with PSP authorization policies, protects the Pod creation process in a Kubernetes cluster.
Summary
Using a PSP security policy:
After defining a PSP policy, you need to create a role, bind the PSP policy to that role, bind the role to a ServiceAccount, and then use that ServiceAccount in the Pod.
There are also several necessary conditions:
1. Add the PodSecurityPolicy option to the apiserver startup parameter --enable-admission-plugins; it is not enabled by default.
2. Enable the --use-service-account-credentials=true option in the startup parameters of the kube-controller-manager.