diff --git a/docs/design-proposals/resources/eSDKTopologyAwareness.jpg b/docs/design-proposals/resources/eSDKTopologyAwareness.jpg
new file mode 100644
index 00000000..4acdbe3f
Binary files /dev/null and b/docs/design-proposals/resources/eSDKTopologyAwareness.jpg differ
diff --git a/docs/design-proposals/topology-support.md b/docs/design-proposals/topology-support.md
new file mode 100644
index 00000000..1f412d18
--- /dev/null
+++ b/docs/design-proposals/topology-support.md
@@ -0,0 +1,305 @@
+# eSDK support for CSI Topology-Aware Volume Provisioning with Kubernetes
+
+**Author(s)**: [Amit Roushan](https://github.com/AmitRoushan)
+
+## Version Updates
+Date | Version | Description | Author
+---|---|---|---
+Aug 5th 2021 | 0.1.0 | Initial design draft for storage topology support for the eSDK Kubernetes plugin | Amit Roushan
+
+## Terminology
+
+ Term | Definition
+------|------
+CSI | A specification attempting to establish an industry-standard interface that Container Orchestration Systems (COs) can use to expose arbitrary storage systems to their containerized workloads.
+PV | A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
+PVC | A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.
+
+## Motivation and background
+Some storage systems expose volumes that are not equally accessible by all nodes in a
+Kubernetes cluster. Instead, volumes may be constrained to some subset of node(s) in the cluster.
+The cluster may be segmented into, for example, "racks" or "regions" and "zones" or some other
+grouping, and a given volume may be accessible only from one of those groups.
+
+To enable orchestration systems, like Kubernetes, to work well with storage systems which expose
+volumes that are not equally accessible by all nodes, the CSI spec enables:
+
+- Ability for a CSI driver to opaquely specify where a particular node exists with respect to the
+  storage system (e.g. "node A" is in "zone 1").
+- Ability for Kubernetes (users or components) to influence where a volume is provisioned
+  (e.g. provision a new volume in either "zone 1" or "zone 2").
+- Ability for a CSI driver to opaquely specify where a particular volume exists
+  (e.g. "volume X" is accessible by all nodes in "zone 1" and "zone 2").
+
+Kubernetes supports these CSI abilities to make intelligent scheduling and provisioning decisions.
+
+As a CSI plugin, eSDK strives to support topological scheduling and provisioning for end customers.
+
+## Goals
+This document presents a detailed design to enable eSDK for topological volume scheduling
+and provisioning in a Kubernetes cluster.
+
+The design should
+- Enable the operator/cluster admin to configure eSDK for topological distribution
+- Enable the end user to provision volumes based on the configured topology
+- Add recommendations for topology naming and configuration strategy
+
+### Non-Goals
+The document will not explicitly define, provide or explain:
+- Kubernetes [Volume Topology-aware Scheduling](https://github.com/jsafrane/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md)
+- The [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) for topology support
+
+### Assumptions and Constraints
+- The document only considers Kubernetes as the orchestrator/provisioner.
+- Volume provisioning/scheduling over Kubernetes nodes is part of [kubernetes Volume Topology-aware Scheduling](https://github.com/jsafrane/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md).
+
+### Input Requirements
+Support topology awareness for eSDK.
+
+### Feature Requirements
+- Enable the operator/cluster admin to configure eSDK for topological distribution
+- Enable the end user to provision volumes based on the configured topology
+- Add recommendations for topology naming and configuration strategy
+
+#### Requirement Analysis
+Support topology aware volume provisioning on Kubernetes for the eSDK plugin:
+- Should work for pre-provisioned persistent volumes (PVs)
+  - The cluster admin can create a PV with NodeAffinity, which means the PV can only be accessed from nodes that
+    satisfy the NodeSelector
+  - Kubernetes ensures:
+    - Scheduler predicate: if a Pod references a PVC that is bound to a PV with NodeAffinity, the predicate will
+      evaluate the NodeSelector against the Node's labels to filter the nodes that the Pod can be scheduled to.
+    - Kubelet: PV NodeAffinity is verified against the Node when mounting PVs.
+- Should work for dynamically provisioned persistent volumes (PVs)
+  - To make dynamic provisioning aware of pod scheduling decisions, delayed volume binding must also be enabled
+  - The scheduler will pass its selected node to the dynamic provisioner, and the provisioner will create a
+    volume in the topology domain that the selected node is part of.
+- The operator/cluster admin should be
+  - Able to specify the topological distribution of Kubernetes nodes
+  - Able to provision topology aware dynamic or pre-provisioned persistent volumes (PVs)
+  - Able to configure eSDK for topology aware volume provisioning
+- The application developer/deployer should be
+  - Able to configure topology aware volumes for workloads
+
+##### Functional Requirements
+- Should work for pre-provisioned persistent volumes (PVs)
+- Should work for dynamically provisioned persistent volumes (PVs)
+- Able to specify the topological distribution of Kubernetes nodes
+- Able to provision topology aware dynamic or pre-provisioned persistent volumes (PVs)
+- Able to configure topology aware volumes for workloads
+
+##### Non Functional Requirements
+- Should keep supporting volume provisioning without topology (backward compatibility)
+
+### Performance Requirements
+- Volumes provisioned without topology remain performant
+
+### Security Requirements
+NA
+### Other Non Functional Requirements (Scalability, HA etc…)
+NA
+
+## Architecture Analysis
+
+### System Architecture
+
+![System Architecture](resources/eSDKTopologyAwareness.jpg)
+
+Kubernetes supports the CSI specification to enable storage providers to write their own storage plugins for volume provisioning.
+Normally, Kubernetes nodes are equally accessible for volume provisioning, hence the PV controller in the controller manager triggers
+volume provisioning as soon as the PV/PVC is defined. The scheduler therefore cannot take into account any of the pod's other
+scheduling constraints. This makes it possible for the PV controller to bind a PVC to a PV or provision a PV with
+constraints that can make a pod unschedulable.
+The detailed design of [Volume Topology-aware Scheduling](https://github.com/jsafrane/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md) is in the purview of Kubernetes.
+In summary:
+- The admin pre-provisions PVs and/or StorageClasses.
+- A user creates an unbound PVC, and there are no pre-bound PVs for it.
+- PVC binding and provisioning is delayed until a pod is created that references it.
+- The user creates a pod that uses the PVC.
+- The pod starts to get processed by the scheduler.
+- The scheduler processes predicates. The predicate function processes both bound and unbound PVCs of the Pod. It
+  validates the VolumeNodeAffinity for bound PVCs. For unbound PVCs, it tries to find matching PVs for that node
+  based on the PV NodeAffinity. If there are no matching PVs, it checks whether dynamic provisioning is possible for
+  that node based on the StorageClass AllowedTopologies.
+- After evaluation, the scheduler picks a node.
+- The scheduler triggers volume provisioning by annotating the PVC with the selected node.
+- The PV controller is informed by the event and starts provisioning by passing the selected node's topological info to the external provisioner.
+- The external provisioner eventually issues a ```CreateVolume``` gRPC request to the eSDK controller plugin with the topological data.
+- The CSI controller plugin consumes the topology data and provisions the volume accordingly.
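+
+As an illustrative sketch (not part of the flow description above), these are the two Kubernetes
+objects the two paths rely on: a StorageClass with delayed binding and ```allowedTopologies``` for the
+dynamic path, and a pre-provisioned PV with ```nodeAffinity``` for the static path. Object names, the
+capacity, and the label values are assumptions; the driver name matches the deployment YAMLs in this patch.
+```yaml
+# Hypothetical StorageClass: binding waits for pod scheduling, and dynamic
+# provisioning is restricted to the listed topology domains.
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: topo-aware-sc              # illustrative name
+provisioner: csi.huawei.com
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+  - matchLabelExpressions:
+      - key: topology.kubernetes.io/zone
+        values: ["Z1", "Z2"]
+---
+# Hypothetical pre-provisioned PV: nodeAffinity restricts which nodes may use it.
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: topo-aware-pv              # illustrative name
+spec:
+  capacity:
+    storage: 10Gi                  # illustrative size
+  accessModes: ["ReadWriteOnce"]
+  csi:
+    driver: csi.huawei.com
+    volumeHandle: storage1.pv-example   # illustrative "<backend>.<volume>" handle
+  nodeAffinity:
+    required:
+      nodeSelectorTerms:
+        - matchExpressions:
+            - key: topology.kubernetes.io/zone
+              operator: In
+              values: ["Z1"]
+```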
+
+The document handles the eSDK adaptation for topological volume provisioning.
+eSDK is a centralized, split-component CSI plugin. The components involved:
+- ```Controller plugin``` communicates indirectly with Kubernetes master components and the storage backend to
+  implement the CSI controller service functionality
+- ```Node plugin``` communicates indirectly with Kubernetes master components and the storage backend to implement
+  the node service functionality
+
+
+## Detailed Design
+
+Enabling topology aware provisioning with eSDK has the following aspects:
+- Enabling feature gates in Kubernetes
+  - Topology aware volume scheduling is controlled by the VolumeScheduling feature gate,
+    and must be configured in
+    - kube-scheduler
+    - kube-controller-manager
+    - all kubelets.
+
+- Setting the ```VOLUME_ACCESSIBILITY_CONSTRAINTS``` capability in the ```GetPluginCapabilities``` response of the identity service
+
+- Making Kubernetes aware of topology
+  - Enable the eSDK node plugin to publish where (regions, zones, racks, etc.) the node is accessible from
+  - The cluster admin MUST add topology aware labels to each node, as shown in the sketch after this list.
+
+    Ex: topology.kubernetes.io/region or topology.kubernetes.io/zone
+  - The eSDK node plugin fetches the node labels and passes the topological data to kubelet in the ```NodeGetInfo``` gRPC call
+    ```
+    NodeGetInfoResponse{
+        node_id = "{HostName: k8s-node-1}"
+        accessible_topology =
+            {"topology.kubernetes.io/region": "R1",
+             "topology.kubernetes.io/zone": "Z2"}
+    }
+    ```
+  - Kubelet creates ```CSINodeInfo``` with the topological data.
+  - The same ```CSINodeInfo``` object is used during volume provisioning/scheduling
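+
+As an illustrative aside to the node-labeling step above (the node name and label values are
+assumptions), the labels the node plugin picks up could look like this:
+```yaml
+# Hypothetical node carrying the topology labels matched by the plugin
+# (keys under topology.kubernetes.io). Applied by the cluster admin, e.g. via
+# kubectl label node k8s-node-1 topology.kubernetes.io/region=R1 topology.kubernetes.io/zone=Z2
+apiVersion: v1
+kind: Node
+metadata:
+  name: k8s-node-1
+  labels:
+    topology.kubernetes.io/region: R1
+    topology.kubernetes.io/zone: Z2
+```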
+
+- Topology aware volume provisioning:
+  - The controller plugin is responsible for making intelligent volume provisioning decisions based on the topological data
+    provided in the ```CreateVolume``` CSI gRPC call
+  - To support topology aware provisioning, the configurations of the different backends are provided with their supported
+    topological distributions during deployment
+    ```json
+    csi.json: {
+      "backends": [
+        {
+          "storage": "***",
+          "name": "storage1",
+          "urls": ["https://*.*.*.*:28443"],
+          "pools": ["***"],
+          "parameters": {"protocol": "iscsi", "portals": ["*.*.*.*"]},
+          "supportedTopologies": [
+            {"topology.kubernetes.io/region": "R1",
+             "topology.kubernetes.io/zone": "Z1"}
+          ]
+        },
+        {
+          "storage": "***",
+          "name": "storage2",
+          "urls": ["https://*.*.*.*:28443"],
+          "pools": ["***"],
+          "parameters": {"protocol": "iscsi", "portals": ["*.*.*.*"]},
+          "supportedTopologies": [
+            {"topology.kubernetes.io/region": "R1",
+             "topology.kubernetes.io/zone": "Z2"}
+          ]
+        }
+      ]
+    }
+    ```
+  - Topology aware provisioning by the controller plugin:
+    - The external provisioner initiates volume provisioning with a ```CreateVolume``` CSI gRPC call to the controller plugin
+    - The external provisioner passes the ```accessibility_requirements``` parameter in ```CreateVolumeRequest```.
+    - #### Scenario: ```accessibility_requirements``` parameter supplied:
+      - The controller needs to honor the topological constraints provided in ```accessibility_requirements```
+      - The controller can get topology attributes in two variants in ```accessibility_requirements```:
+        - ```requisite```:
+          - If requisite is specified, the volume MUST be accessible from at least one of the requisite topologies
+        - ```preferred```:
+          - The plugin MUST attempt to make the provisioned volume accessible using the preferred topologies in order from first to last
+          - If requisite is specified, all topologies in the preferred list MUST also be present in the list of requisite topologies
+	var accessibleTopologies []*csi.Topology
+	if len(supportedTopology) > 0 {
+		for _, segment := range supportedTopology {
+			accessibleTopologies = append(accessibleTopologies, &csi.Topology{Segments: segment})
+		}
+	}
+
 	if contentSource != nil {
 		attributes := map[string]string{
-			"backend": localPool.Parent,
+			"backend": pool.Parent,
 			"name":    volName,
 		}

-		return &csi.CreateVolumeResponse{
-			Volume: &csi.Volume{
-				VolumeId:      localPool.Parent + "." + volName,
-				CapacityBytes: size,
-				VolumeContext: attributes,
-				ContentSource: req.VolumeContentSource,
-			},
+		return &csi.Volume{
+			VolumeId:           pool.Parent + "." + volName,
+			CapacityBytes:      size,
+			VolumeContext:      attributes,
+			ContentSource:      contentSource,
+			AccessibleTopology: accessibleTopologies,
 		}, nil
 	}

-	return &csi.CreateVolumeResponse{
-		Volume: &csi.Volume{
-			VolumeId:      localPool.Parent + "." + volName,
-			CapacityBytes: size,
-		},
+	return &csi.Volume{
+		VolumeId:           pool.Parent + "." + volName,
+		CapacityBytes:      size,
+		AccessibleTopology: accessibleTopologies,
 	}, nil
 }
+
+func (d *Driver) processVolumeContentSource(req *csi.CreateVolumeRequest, parameters map[string]interface{}) error {
+	contentSource := req.GetVolumeContentSource()
+	if contentSource != nil {
+		if contentSnapshot := contentSource.GetSnapshot(); contentSnapshot != nil {
+			sourceSnapshotId := contentSnapshot.GetSnapshotId()
+			sourceBackendName, snapshotParentId, sourceSnapshotName := utils.SplitSnapshotId(sourceSnapshotId)
+			parameters["sourceSnapshotName"] = sourceSnapshotName
+			parameters["snapshotParentId"] = snapshotParentId
+			parameters["backend"] = sourceBackendName
+			log.Infof("Start to create volume from snapshot %s", sourceSnapshotName)
+		} else if contentVolume := contentSource.GetVolume(); contentVolume != nil {
+			sourceVolumeId := contentVolume.GetVolumeId()
+			sourceBackendName, sourceVolumeName := utils.SplitVolumeId(sourceVolumeId)
+			parameters["sourceVolumeName"] = sourceVolumeName
+			parameters["backend"] = sourceBackendName
+			log.Infof("Start to create volume from volume %s", sourceVolumeName)
+		} else {
+			log.Errorf("The volume content source %s is neither a snapshot nor a volume", contentSource)
+			return status.Error(codes.InvalidArgument, "volume content source is neither a snapshot nor a volume")
+		}
+	}
+
+	return nil
+}
+
+func (d *Driver) processAccessibilityRequirements(req *csi.CreateVolumeRequest, parameters map[string]interface{}) {
+	accessibleTopology := req.GetAccessibilityRequirements()
+	if accessibleTopology == nil {
+		return
+	}
+
+	var requisiteTopologies = make([]map[string]string, 0)
+	for _, requisite := range accessibleTopology.GetRequisite() {
+		requirement := make(map[string]string)
+		for k, v := range requisite.GetSegments() {
+			requirement[k] = v
+		}
+		requisiteTopologies = append(requisiteTopologies, requirement)
+	}
+
+	var preferredTopologies = make([]map[string]string, 0)
+	for _, preferred := range accessibleTopology.GetPreferred() {
+		preference := make(map[string]string)
+		for k, v := range preferred.GetSegments() {
+			preference[k] = v
+		}
+		preferredTopologies = append(preferredTopologies, preference)
+	}
+
+	parameters[backend.TopologyRequirement] = backend.AccessibleTopology{
+		RequisiteTopologies: requisiteTopologies,
+		PreferredTopologies: preferredTopologies,
+	}
+}
+
 func (d *Driver) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
 	volumeId := req.GetVolumeId()
diff --git a/src/csi/driver/driver.go b/src/csi/driver/driver.go
index 89e96804..732a8446 100644
--- a/src/csi/driver/driver.go
+++ b/src/csi/driver/driver.go
@@ -1,17 +1,27 @@
 package driver

+import (
+	"strings"
+	"utils/k8sutils"
+)
+
 type Driver struct {
 	name            string
 	version         string
 	useMultiPath    bool
 	isNeedMultiPath bool
+	k8sUtils        k8sutils.Interface
+	nodeName        string
 }

-func NewDriver(name, version string, useMultiPath, isNeedMultiPath bool) *Driver {
+func NewDriver(name, version string, useMultiPath, isNeedMultiPath bool,
+	k8sUtils k8sutils.Interface, nodeName string) *Driver {
 	return &Driver{
 		name:            name,
 		version:         version,
 		useMultiPath:    useMultiPath,
 		isNeedMultiPath: isNeedMultiPath,
+		k8sUtils:        k8sUtils,
+		nodeName:        strings.TrimSpace(nodeName),
 	}
 }
diff --git a/src/csi/driver/identity.go b/src/csi/driver/identity.go
index b6868e09..731c4765 100644
--- a/src/csi/driver/identity.go
+++ b/src/csi/driver/identity.go
@@ -27,6 +27,13 @@ func (d *Driver) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCa
 				},
 			},
 		},
+		&csi.PluginCapability{
+			Type: &csi.PluginCapability_Service_{
+				Service: &csi.PluginCapability_Service{
+					Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
+				},
+			},
+		},
 		},
 	}, nil
 }
diff --git a/src/csi/driver/node.go b/src/csi/driver/node.go
index e57976f5..6c717377 100644
--- a/src/csi/driver/node.go
+++ b/src/csi/driver/node.go
@@ -171,10 +171,26 @@ func (d *Driver) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (
 		log.Errorf("Marshal node info of %s error: %v", nodeBytes, err)
 		return nil, status.Error(codes.Internal, err.Error())
 	}
-	log.Infof("Get NodeId %s", nodeBytes)
+
+	if d.nodeName == "" {
+		return &csi.NodeGetInfoResponse{
+			NodeId: string(nodeBytes),
+		}, nil
+	}
+
+	// Get topology info from Node labels
+	topology, err := d.k8sUtils.GetNodeTopology(d.nodeName)
+	if err != nil {
+		log.Errorln(err)
+		return nil, status.Error(codes.Internal, err.Error())
+	}
+
 	return &csi.NodeGetInfoResponse{
 		NodeId: string(nodeBytes),
+		AccessibleTopology: &csi.Topology{
+			Segments: topology,
+		},
 	}, nil
 }
diff --git a/src/csi/main.go b/src/csi/main.go
index 4cd9a617..66b2860d 100644
--- a/src/csi/main.go
+++ b/src/csi/main.go
@@ -14,6 +14,7 @@ import (
 	"runtime/debug"
 	"time"
 	"utils"
+	"utils/k8sutils"
 	"utils/log"

 	"github.com/container-storage-interface/spec/lib/go/csi"
@@ -30,6 +31,8 @@ const (
 	csiVersion        = "2.2.13"
 	defaultDriverName = "csi.huawei.com"
+
+	nodeNameEnv = "CSI_NODENAME"
 )

 var (
@@ -54,6 +57,12 @@ var (
 	volumeUseMultiPath = flag.Bool("volume-use-multipath", true,
 		"Whether to use multipath when attach block volume")
+	kubeconfig = flag.String("kubeconfig",
+		"",
+		"absolute path to the kubeconfig file")
+	nodeName = flag.String("nodename",
+		os.Getenv(nodeNameEnv),
+		"node name in kubernetes cluster")

 	config CSIConfig
 	secret CSISecret
@@ -64,7 +73,7 @@ type CSIConfig struct {
 }

 type CSISecret struct {
-	Secrets map[string]interface{} `json:"secrets"`
+	Secrets map[string]interface{} `json:"secrets"`
 }

 func init() {
@@ -97,6 +106,10 @@ func init() {
 	_ = mergeData(config, secret)

+	if "" == *nodeName {
+		logrus.Warning("Node name is empty. Topology aware volume provisioning feature may not behave normally")
+	}
+
 	if *containerized {
 		*controllerFlagFile = ""
 	}
@@ -169,14 +182,14 @@ func main() {
 	}()

 	if *controller || *controllerFlagFile != "" {
-		err := backend.RegisterBackend(config.Backends, true)
+		err := backend.RegisterBackend(config.Backends, true, *driverName)
 		if err != nil {
 			log.Fatalf("Register backends error: %v", err)
 		}

 		go updateBackendCapabilities()
 	} else {
-		err := backend.RegisterBackend(config.Backends, false)
+		err := backend.RegisterBackend(config.Backends, false, *driverName)
 		if err != nil {
 			log.Fatalf("Register backends error: %v", err)
 		}
 	}
@@ -199,10 +212,15 @@ func main() {
 		log.Fatalf("Listen on %s error: %v", *endpoint, err)
 	}

+	k8sUtils, err := k8sutils.NewK8SUtils(*kubeconfig)
+	if err != nil {
+		log.Fatalf("Kubernetes client initialization failed %v", err)
+	}
+
 	isNeedMultiPath := utils.NeedMultiPath(config.Backends)

-	d := driver.NewDriver(*driverName, csiVersion, *volumeUseMultiPath, isNeedMultiPath)
-	server := grpc.NewServer()
+	d := driver.NewDriver(*driverName, csiVersion, *volumeUseMultiPath, isNeedMultiPath, k8sUtils, *nodeName)
+	server := grpc.NewServer()

 	csi.RegisterIdentityServer(server, d)
 	csi.RegisterControllerServer(server, d)
 	csi.RegisterNodeServer(server, d)
diff --git a/src/utils/k8sutils/k8s_utils.go b/src/utils/k8sutils/k8s_utils.go
new file mode 100644
index 00000000..8a38ec75
--- /dev/null
+++ b/src/utils/k8sutils/k8s_utils.go
@@ -0,0 +1,94 @@
+/*
+ Copyright (c) Huawei Technologies Co., Ltd. 2021-2021. All rights reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/ + +// Package k8sutils provides Kubernetes utilities +package k8sutils + +import ( + "fmt" + "regexp" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" + "k8s.io/client-go/tools/clientcmd" +) + +const ( + // TopologyPrefix supported by CSI plugin + TopologyPrefix = "topology.kubernetes.io" + topologyRegx = TopologyPrefix + "/.*" +) + +// Interface is a kubernetes utility interface required by CSI plugin to interact with Kubernetes +type Interface interface { + // GetNodeTopology returns configured kubernetes node's topological labels + GetNodeTopology(nodeName string) (map[string]string, error) +} + +type kubeClient struct { + clientSet *kubernetes.Clientset +} + +// NewK8SUtils returns an object of Kubernetes utility interface +func NewK8SUtils(kubeConfig string) (Interface, error) { + var clientset *kubernetes.Clientset + + if kubeConfig != "" { + config, err := clientcmd.BuildConfigFromFlags("", kubeConfig) + if err != nil { + return nil, err + } + + clientset, err = kubernetes.NewForConfig(config) + if err != nil { + return nil, err + } + } else { + config, err := rest.InClusterConfig() + if err != nil { + return nil, err + } + + clientset, err = kubernetes.NewForConfig(config) + if err != nil { + return nil, err + } + } + + return &kubeClient{ + clientSet: clientset, + }, nil +} + +func (k *kubeClient) GetNodeTopology(nodeName string) (map[string]string, error) { + k8sNode, err := k.getNode(nodeName) + if err != nil { + return nil, fmt.Errorf("failed to get node topology with error: %v", err) + } + + topology := make(map[string]string) + for key, value := range k8sNode.Labels { + if match, err := regexp.MatchString(topologyRegx, key); err == nil && match { + topology[key] = value + } + } + + return topology, nil +} + +func (k *kubeClient) getNode(nodeName string) (*corev1.Node, error) { + return k.clientSet.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{}) +} diff --git a/src/utils/utils.go b/src/utils/utils.go index 265fe361..be1fd2ca 100644 --- a/src/utils/utils.go +++ b/src/utils/utils.go @@ -513,4 +513,4 @@ func NeedMultiPath(backendConfigs []map[string]interface{}) bool { } return needMultiPath -} +} \ No newline at end of file diff --git a/yamls/deploy/huawei-csi-controller.yaml b/yamls/deploy/huawei-csi-controller.yaml index ca43b9bb..a9f8eaf3 100644 --- a/yamls/deploy/huawei-csi-controller.yaml +++ b/yamls/deploy/huawei-csi-controller.yaml @@ -22,6 +22,7 @@ spec: args: - "--csi-address=$(ADDRESS)" - "--timeout=6h" + - "--feature-gates=Topology=true" env: - name: ADDRESS value: /var/lib/csi/sockets/pluginproxy/csi.sock @@ -52,6 +53,11 @@ spec: env: - name: CSI_ENDPOINT value: /var/lib/csi/sockets/pluginproxy/csi.sock + - name: CSI_NODENAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName imagePullPolicy: "IfNotPresent" volumeMounts: - name: socket-dir diff --git a/yamls/deploy/huawei-csi-multi-controller.yaml b/yamls/deploy/huawei-csi-multi-controller.yaml index b21753d7..a2772f24 100644 --- a/yamls/deploy/huawei-csi-multi-controller.yaml +++ b/yamls/deploy/huawei-csi-multi-controller.yaml @@ -23,6 +23,7 @@ spec: - "--csi-address=$(ADDRESS)" - "--timeout=6h" - "--enable-leader-election" + - "--feature-gates=Topology=true" env: - name: ADDRESS value: /var/lib/csi/sockets/pluginproxy/csi.sock @@ -55,6 +56,11 @@ spec: env: - name: CSI_ENDPOINT value: /var/lib/csi/sockets/pluginproxy/csi.sock + - name: CSI_NODENAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: 
         imagePullPolicy: "IfNotPresent"
         volumeMounts:
           - name: socket-dir
diff --git a/yamls/deploy/huawei-csi-node.yaml b/yamls/deploy/huawei-csi-node.yaml
index 760694d4..b612187c 100644
--- a/yamls/deploy/huawei-csi-node.yaml
+++ b/yamls/deploy/huawei-csi-node.yaml
@@ -36,6 +36,12 @@ spec:
           - "--containerized"
           - "--driver-name=csi.huawei.com"
           - "--volume-use-multipath=true"
+        env:
+          - name: CSI_NODENAME
+            valueFrom:
+              fieldRef:
+                apiVersion: v1
+                fieldPath: spec.nodeName
         securityContext:
           privileged: true
           capabilities:
diff --git a/yamls/deploy/huawei-csi-rbac.yaml b/yamls/deploy/huawei-csi-rbac.yaml
index f90380a2..6e81156d 100644
--- a/yamls/deploy/huawei-csi-rbac.yaml
+++ b/yamls/deploy/huawei-csi-rbac.yaml
@@ -100,6 +100,9 @@ rules:
   - apiGroups: [""]
     resources: ["events"]
     verbs: ["get", "list", "watch", "create", "update", "patch"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["get"]

 ---
 kind: ClusterRoleBinding
diff --git a/yamls/deploy/huawei-csi-resize-controller.yaml b/yamls/deploy/huawei-csi-resize-controller.yaml
index c2df7794..df25a03e 100644
--- a/yamls/deploy/huawei-csi-resize-controller.yaml
+++ b/yamls/deploy/huawei-csi-resize-controller.yaml
@@ -22,6 +22,7 @@ spec:
         args:
           - "--csi-address=$(ADDRESS)"
           - "--timeout=6h"
+          - "--feature-gates=Topology=true"
         env:
           - name: ADDRESS
             value: /var/lib/csi/sockets/pluginproxy/csi.sock
@@ -65,6 +66,11 @@ spec:
         env:
           - name: CSI_ENDPOINT
             value: /var/lib/csi/sockets/pluginproxy/csi.sock
+          - name: CSI_NODENAME
+            valueFrom:
+              fieldRef:
+                apiVersion: v1
+                fieldPath: spec.nodeName
         imagePullPolicy: "IfNotPresent"
         volumeMounts:
           - name: socket-dir
diff --git a/yamls/deploy/huawei-csi-resize-rbac.yaml b/yamls/deploy/huawei-csi-resize-rbac.yaml
index 57e11aee..3ca0f90b 100644
--- a/yamls/deploy/huawei-csi-resize-rbac.yaml
+++ b/yamls/deploy/huawei-csi-resize-rbac.yaml
@@ -164,6 +164,9 @@ rules:
   - apiGroups: [""]
     resources: ["events"]
     verbs: ["get", "list", "watch", "create", "update", "patch"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["get"]

 ---
 kind: ClusterRoleBinding
diff --git a/yamls/deploy/huawei-csi-resize-snapshot-controller.yaml b/yamls/deploy/huawei-csi-resize-snapshot-controller.yaml
index c4835271..ef5a2c5e 100644
--- a/yamls/deploy/huawei-csi-resize-snapshot-controller.yaml
+++ b/yamls/deploy/huawei-csi-resize-snapshot-controller.yaml
@@ -22,6 +22,7 @@ spec:
         args:
           - "--csi-address=$(ADDRESS)"
           - "--timeout=6h"
+          - "--feature-gates=Topology=true"
         env:
           - name: ADDRESS
             value: /var/lib/csi/sockets/pluginproxy/csi.sock
@@ -88,6 +89,11 @@ spec:
         env:
           - name: CSI_ENDPOINT
             value: /var/lib/csi/sockets/pluginproxy/csi.sock
+          - name: CSI_NODENAME
+            valueFrom:
+              fieldRef:
+                apiVersion: v1
+                fieldPath: spec.nodeName
         imagePullPolicy: "IfNotPresent"
         volumeMounts:
           - name: socket-dir
@@ -110,4 +116,4 @@ spec:
             name: huawei-csi-configmap
         - name: secret
           secret:
-            secretName: huawei-csi-secret
\ No newline at end of file
+            secretName: huawei-csi-secret
diff --git a/yamls/deploy/huawei-csi-resize-snapshot-rbac.yaml b/yamls/deploy/huawei-csi-resize-snapshot-rbac.yaml
index 0a189402..779bf7a1 100644
--- a/yamls/deploy/huawei-csi-resize-snapshot-rbac.yaml
+++ b/yamls/deploy/huawei-csi-resize-snapshot-rbac.yaml
@@ -298,6 +298,9 @@ rules:
   - apiGroups: [""]
     resources: ["events"]
     verbs: ["get", "list", "watch", "create", "update", "patch"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["get"]

 ---
 kind: ClusterRoleBinding