Dear carina maintainers:
I am Nanzi Yang, and I have found a potential risk in carina that can be leveraged to obtain the cluster's admin token. I tried to report this potential risk via private mail, but it seems "[email protected]" is no longer working. I apologize if reporting it here raises any ethical concerns.
Details:
carina's DaemonSet csi-carina-node is bound to the cluster role carina-csi-node-rbac, which grants the "get list watch update patch" verbs on the "node" resource.
carina's Deployment csi-carina-provisioner is scheduled onto an arbitrary worker node and is bound to the cluster role carina-external-provisioner-runner, which grants the "get list create" verbs on the "secret" resource. Because the binding is cluster-scoped, the csi-carina-provisioner pod can get/list ALL secrets in the whole cluster.
If a malicious user controls one worker node, which runs a "csi-carina-node" pod by default, they can leverage that pod's service account token to patch/update other nodes and force the "csi-carina-provisioner" pod to be rescheduled onto the compromised worker node. After that, they can use the token of the "csi-carina-provisioner" pod to read the cluster's admin token and achieve a cluster-level privilege escalation.
In our local environment, we have a four-node cluster running Kubernetes v1.25 (one control plane and two worker nodes). We installed carina following the official document (https://github.com/carina-io/carina#install-by-shell). We used the token of csi-carina-node to patch the other nodes with "node.kubernetes.io/unschedulable: NoExecute", which forced the "csi-carina-provisioner" pod to be rescheduled onto the malicious worker node. After that, we used the token of "csi-carina-provisioner" to obtain the cluster's admin token, completing a cluster-level privilege escalation.
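For reference, the reproduction steps look roughly like the following sketch. The node name, pod names, and token path are illustrative, not the exact values from our environment:

```shell
# Step 1: from inside the compromised csi-carina-node pod, read its
# service account token (mounted at the standard in-pod path).
NODE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Step 2: the carina-csi-node-rbac cluster role allows "patch" on nodes,
# so taint every OTHER node as unschedulable; the scheduler can then only
# place csi-carina-provisioner on the attacker-controlled node.
kubectl --token="$NODE_TOKEN" taint node <other-node> \
  node.kubernetes.io/unschedulable:NoExecute

# Step 3: once csi-carina-provisioner lands on this node, its service
# account token is readable from the kubelet's pod volume directories
# on the host. With it, list every secret in the cluster, including
# admin-level service account tokens.
kubectl --token="$PROVISIONER_TOKEN" get secrets --all-namespaces
```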
Mitigation Discussion:
The carina maintainers could use a RoleBinding, NOT a ClusterRoleBinding, to bind the cluster role to the "csi-carina-provisioner" pod's service account. It could then only get/list secrets in the namespace where carina is installed. Incidentally, carina appears to be installed into the "kube-system" namespace; perhaps it should create its own namespace during installation?
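A minimal sketch of this change, assuming carina moves to a dedicated "carina" namespace (the service account name below is also an assumption):

```yaml
# A RoleBinding can reference a ClusterRole; the granted permissions then
# apply only within this RoleBinding's namespace, so the provisioner's
# service account loses cluster-wide secret access.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: carina-external-provisioner-binding
  namespace: carina                      # assumed dedicated namespace
subjects:
  - kind: ServiceAccount
    name: carina-csi-provisioner         # assumed service account name
    namespace: carina
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: carina-external-provisioner-runner
```

One caveat: the provisioner likely still needs cluster-scoped access to resources such as PersistentVolumes and StorageClasses, so the secret rules would probably have to be split out of carina-external-provisioner-runner into a separate, namespace-bound role.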
The carina maintainers could use secret names (resourceNames in the RBAC rule) to restrict which secrets carina's pods can access. However, this may require a careful review of the source code to mitigate the risk without disrupting functionality.
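A sketch of a resourceNames-restricted rule (the secret name below is hypothetical, standing in for whatever secrets carina actually reads):

```yaml
# Restrict "get" to named secrets only. Note that resourceNames does NOT
# constrain the "list", "watch", or "create" verbs, so those must be
# dropped (or granted only namespace-scoped) for this to be effective.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: carina-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["carina-csi-credentials"]   # hypothetical secret name
    verbs: ["get"]
```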
The carina maintainers should reduce the permissions of carina's DaemonSet. For example, the DaemonSet should not have the "patch update" verbs on the "node" resource, nor the "patch" verb on the "pod" resource.
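Assuming the node plugin only needs to read node topology, the reduced cluster role might look like this (whether read-only access is truly sufficient would need verification against the carina source):

```yaml
# Drop "update"/"patch" on nodes: the per-node DaemonSet can then no longer
# taint or cordon other nodes to steer where the provisioner is scheduled.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: carina-csi-node-rbac
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]   # read-only; assumes no label/status writes are required
```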
Several questions:
Is this a real issue in carina?
If it is, can carina mitigate the risk by following the suggestions in the "Mitigation Discussion" above?
Looking forward to your reply.
Regards,
Nanzi Yang