From 2d6c13cd55e14f0b75ef57d2a752ceadc47f4a61 Mon Sep 17 00:00:00 2001 From: Ross Kirkpatrick Date: Mon, 18 Nov 2024 13:43:13 -0500 Subject: [PATCH] create ip holder per cluster + tests, update docs Signed-off-by: Ross Kirkpatrick --- README.md | 81 ++++---- cloud/linode/cilium_loadbalancers.go | 74 ++++++-- cloud/linode/cilium_loadbalancers_test.go | 216 +++++++++++++++++++--- cloud/linode/cloud.go | 1 + cloud/linode/loadbalancers.go | 26 ++- cloud/linode/service_controller.go | 2 +- go.mod | 30 +-- go.sum | 68 ++++--- main.go | 1 + 9 files changed, 368 insertions(+), 131 deletions(-) diff --git a/README.md b/README.md index 957de726..b250ce02 100644 --- a/README.md +++ b/README.md @@ -40,37 +40,37 @@ provisions a new IP on an ip-holder Nanode to share for the desired region. See #### Annotations The Linode CCM accepts several annotations which affect the properties of the underlying NodeBalancer deployment. -All of the Service annotation names listed below have been shortened for readability. The values, such as `http`, are case-sensitive. +All the Service annotation names listed below have been shortened for readability. The values, such as `http`, are case-sensitive. Each *Service* annotation **MUST** be prefixed with:
**`service.beta.kubernetes.io/linode-loadbalancer-`** -Annotation (Suffix) | Values | Default | Description ----|---|---|--- -`throttle` | `0`-`20` (`0` to disable) | `0` | Client Connection Throttle, which limits the number of subsequent new connections per second from the same client IP -`default-protocol` | `tcp`, `http`, `https` | `tcp` | This annotation is used to specify the default protocol for Linode NodeBalancer. -`default-proxy-protocol` | `none`, `v1`, `v2` | `none` | Specifies whether to use a version of Proxy Protocol on the underlying NodeBalancer. -`port-*` | json (e.g. `{ "tls-secret-name": "prod-app-tls", "protocol": "https", "proxy-protocol": "v2"}`) | | Specifies port specific NodeBalancer configuration. See [Port Specific Configuration](#port-specific-configuration). `*` is the port being configured, e.g. `linode-loadbalancer-port-443` -`check-type` | `none`, `connection`, `http`, `http_body` | | The type of health check to perform against back-ends to ensure they are serving requests -`check-path` | string | | The URL path to check on each back-end during health checks -`check-body` | string | | Text which must be present in the response body to pass the NodeBalancer health check -`check-interval` | int | | Duration, in seconds, to wait between health checks -`check-timeout` | int (1-30) | | Duration, in seconds, to wait for a health check to succeed before considering it a failure -`check-attempts` | int (1-30) | | Number of health check failures necessary to remove a back-end from the service -`check-passive` | [bool](#annotation-bool-values) | `false` | When `true`, `5xx` status codes will cause the health check to fail -`preserve` | [bool](#annotation-bool-values) | `false` | When `true`, deleting a `LoadBalancer` service does not delete the underlying NodeBalancer. This will also prevent deletion of the former LoadBalancer when another one is specified with the `nodebalancer-id` annotation. -`nodebalancer-id` | string | | The ID of the NodeBalancer to front the service. When not specified, a new NodeBalancer will be created. This can be configured on service creation or patching -`hostname-only-ingress` | [bool](#annotation-bool-values) | `false` | When `true`, the LoadBalancerStatus for the service will only contain the Hostname. This is useful for bypassing kube-proxy's rerouting of in-cluster requests originally intended for the external LoadBalancer to the service's constituent pod IPs. -`tags` | string | | A comma seperated list of tags to be applied to the createad NodeBalancer instance -`firewall-id` | string | | An existing Cloud Firewall ID to be attached to the NodeBalancer instance. See [Firewalls](#firewalls). -`firewall-acl` | string | | The Firewall rules to be applied to the NodeBalancer. Adding this annotation creates a new CCM managed Linode CloudFirewall instance. See [Firewalls](#firewalls). 
+| Annotation (Suffix)      | Values                                                                                           | Default | Description                                                                                                                                                                                                                                             |
+|--------------------------|--------------------------------------------------------------------------------------------------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `throttle`               | `0`-`20` (`0` to disable)                                                                        | `0`     | Client Connection Throttle, which limits the number of subsequent new connections per second from the same client IP                                                                                                                                   |
+| `default-protocol`       | `tcp`, `http`, `https`                                                                           | `tcp`   | This annotation is used to specify the default protocol for Linode NodeBalancer.                                                                                                                                                                        |
+| `default-proxy-protocol` | `none`, `v1`, `v2`                                                                               | `none`  | Specifies whether to use a version of Proxy Protocol on the underlying NodeBalancer.                                                                                                                                                                    |
+| `port-*`                 | json (e.g. `{ "tls-secret-name": "prod-app-tls", "protocol": "https", "proxy-protocol": "v2"}`)  |         | Specifies port specific NodeBalancer configuration. See [Port Specific Configuration](#port-specific-configuration). `*` is the port being configured, e.g. `linode-loadbalancer-port-443`                                                             |
+| `check-type`             | `none`, `connection`, `http`, `http_body`                                                        |         | The type of health check to perform against back-ends to ensure they are serving requests                                                                                                                                                               |
+| `check-path`             | string                                                                                           |         | The URL path to check on each back-end during health checks                                                                                                                                                                                             |
+| `check-body`             | string                                                                                           |         | Text which must be present in the response body to pass the NodeBalancer health check                                                                                                                                                                   |
+| `check-interval`         | int                                                                                              |         | Duration, in seconds, to wait between health checks                                                                                                                                                                                                     |
+| `check-timeout`          | int (1-30)                                                                                       |         | Duration, in seconds, to wait for a health check to succeed before considering it a failure                                                                                                                                                             |
+| `check-attempts`         | int (1-30)                                                                                       |         | Number of health check failures necessary to remove a back-end from the service                                                                                                                                                                         |
+| `check-passive`          | [bool](#annotation-bool-values)                                                                  | `false` | When `true`, `5xx` status codes will cause the health check to fail                                                                                                                                                                                     |
+| `preserve`               | [bool](#annotation-bool-values)                                                                  | `false` | When `true`, deleting a `LoadBalancer` service does not delete the underlying NodeBalancer. This will also prevent deletion of the former LoadBalancer when another one is specified with the `nodebalancer-id` annotation.                             |
+| `nodebalancer-id`        | string                                                                                           |         | The ID of the NodeBalancer to front the service. When not specified, a new NodeBalancer will be created. This can be configured on service creation or patching                                                                                         |
+| `hostname-only-ingress`  | [bool](#annotation-bool-values)                                                                  | `false` | When `true`, the LoadBalancerStatus for the service will only contain the Hostname. This is useful for bypassing kube-proxy's rerouting of in-cluster requests originally intended for the external LoadBalancer to the service's constituent pod IPs.  |
+| `tags`                   | string                                                                                           |         | A comma separated list of tags to be applied to the created NodeBalancer instance                                                                                                                                                                       |
+| `firewall-id`            | string                                                                                           |         | An existing Cloud Firewall ID to be attached to the NodeBalancer instance. See [Firewalls](#firewalls).                                                                                                                                                 |
+| `firewall-acl`           | string                                                                                           |         | The Firewall rules to be applied to the NodeBalancer. Adding this annotation creates a new CCM managed Linode CloudFirewall instance. See [Firewalls](#firewalls).                                                                                      |
 
 #### Deprecated Annotations
 These annotations are deprecated, and will be removed in a future release.
 
-Annotation (Suffix) | Values | Default | Description | Scheduled Removal
----|---|---|---|---
-`proxy-protcol` | `none`, `v1`, `v2` | `none` | Specifies whether to use a version of Proxy Protocol on the underlying NodeBalancer | Q4 2021
+| Annotation (Suffix) | Values             | Default | Description                                                                          | Scheduled Removal |
+|---------------------|--------------------|---------|--------------------------------------------------------------------------------------|-------------------|
+| `proxy-protcol`     | `none`, `v1`, `v2` | `none`  | Specifies whether to use a version of Proxy Protocol on the underlying NodeBalancer  | Q4 2021           |
 
 #### Annotation bool values
-For annotations with bool value types, `"1"`, `"t"`, `"T"`, `"True"`, `"true"` and `"True"` are valid string representations of `true`. Any other values will be interpreted as false. For more details, see [strconv.ParseBool](https://golang.org/pkg/strconv/#ParseBool).
+For annotations with bool value types, `"1"`, `"t"`, `"T"`, `"True"`, `"true"` and `"TRUE"` are valid string representations of `true`. Any other values will be interpreted as false. For more details, see [strconv.ParseBool](https://golang.org/pkg/strconv/#ParseBool).
@@ -78,16 +78,16 @@ For annotations with bool value types, `"1"`, `"t"`, `"T"`, `"True"`, `"true"`
 #### Port Specific Configuration
 These configuration options can be specified via the `port-*` annotation, encoded in JSON.
 
-Key | Values | Default | Description
----|---|---|---
-`protocol` | `tcp`, `http`, `https` | `tcp` | Specifies protocol of the NodeBalancer port. Overwrites `default-protocol`.
-`proxy-protocol` | `none`, `v1`, `v2` | `none` | Specifies whether to use a version of Proxy Protocol on the underlying NodeBalancer. Overwrites `default-proxy-protocol`.
-`tls-secret-name` | string | | Specifies a secret to use for TLS. The secret type should be `kubernetes.io/tls`.
+| Key               | Values                 | Default | Description                                                                                                                 |
+|-------------------|------------------------|---------|-----------------------------------------------------------------------------------------------------------------------------|
+| `protocol`        | `tcp`, `http`, `https` | `tcp`   | Specifies protocol of the NodeBalancer port. Overwrites `default-protocol`.                                                 |
+| `proxy-protocol`  | `none`, `v1`, `v2`     | `none`  | Specifies whether to use a version of Proxy Protocol on the underlying NodeBalancer. Overwrites `default-proxy-protocol`.   |
+| `tls-secret-name` | string                 |         | Specifies a secret to use for TLS. The secret type should be `kubernetes.io/tls`.                                           |
 
 #### Shared IP Load-Balancing
 **NOTE:** This feature requires contacting [Customer Support](https://www.linode.com/support/contact/) to enable provisioning additional IPs.
 
-Services of `type: LoadBalancer` can receive an external IP not backed by a NodeBalancer if `--bgp-node-selector` is set on the Linode CCM and `--load-balancer-type` is set to `cilium-bgp`. Additionally, the `LINODE_URL` environment variable in the linode CCM needs to be set to "https://api.linode.com/v4beta" for IP sharing to work.
+Services of `type: LoadBalancer` can receive an external IP not backed by a NodeBalancer if `--bgp-node-selector` is set on the Linode CCM and `--load-balancer-type` is set to `cilium-bgp`. Additionally, if you plan to run multiple clusters within a single API Region, setting `--ip-holder-suffix` on the Linode CCM to a unique value per cluster will create an ip-holder nanode for every cluster created within that API Region.
 
 This feature requires the Kubernetes cluster to be using [Cilium](https://cilium.io/) as the CNI with the `bgp-control-plane` feature enabled.
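+
+To make the naming scheme concrete, the sketch below shows roughly how the suffix is resolved and how the ip-holder label is derived. This is a minimal illustration rather than the CCM source: the `resolveSuffix` helper is hypothetical, and the `linode-ccm-ip-holder` prefix is an assumption standing in for the CCM's internal `ipHolderLabelPrefix` constant.
+
+```go
+package main
+
+import "fmt"
+
+// Assumed prefix for illustration; the real value is the CCM's
+// internal ipHolderLabelPrefix constant.
+const ipHolderLabelPrefix = "linode-ccm-ip-holder"
+
+// resolveSuffix mirrors the CCM's fallback behavior (hypothetical helper):
+// the --ip-holder-suffix flag wins; otherwise the Service namespace is used.
+func resolveSuffix(flagSuffix, serviceNamespace string) string {
+	if flagSuffix != "" {
+		return flagSuffix
+	}
+	return serviceNamespace
+}
+
+func main() {
+	// e.g. --ip-holder-suffix=myclustername1 for a cluster in us-west
+	suffix := resolveSuffix("myclustername1", "default")
+	// cluster-scoped label: <prefix>-<region>-<suffix>
+	fmt.Printf("%s-%s-%s\n", ipHolderLabelPrefix, "us-west", suffix)
+}
+```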
@@ -107,10 +107,11 @@ spec: name: ccm-linode env: - name: LINODE_URL - value: https://api.linode.com/v4beta + value: https://api.linode.com/v4 args: - --bgp-node-selector=cilium-bgp-peering=true - --load-balancer-type=cilium-bgp + - --ip-holder-suffix=myclustername1 ... ``` @@ -120,6 +121,7 @@ spec: sharedIPLoadBalancing: loadBalancerType: cilium-bgp bgpNodeSelector: cilium-bgp-peering=true + ipHolderSuffix: myclustername1 ``` #### Firewalls @@ -187,10 +189,9 @@ Kubernetes Nodes can be configured with the following annotations. Each *Node* annotation **MUST** be prefixed with:
**`node.k8s.linode.com/`** -Key | Values | Default | Description ----|---|---|--- -`private-ip` | `IPv4` | `none` | Specifies the Linode Private IP overriding default detection of the Node InternalIP.
When using a [VLAN] or [VPC], the Node InternalIP may not be a Linode Private IP as [required for NodeBalancers] and should be specified. - +| Key | Values | Default | Description | +|--------------|--------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `private-ip` | `IPv4` | `none` | Specifies the Linode Private IP overriding default detection of the Node InternalIP.
When using a [VLAN] or [VPC], the Node InternalIP may not be a Linode Private IP as [required for NodeBalancers] and should be specified. | [required for NodeBalancers]: https://www.linode.com/docs/api/nodebalancers/#nodebalancer-create__request-body-schema [VLAN]: https://www.linode.com/products/vlan/ @@ -289,12 +290,12 @@ sessionAffinityConfig: ## Additional environment variables To tweak CCM based on needs, one can overwrite the default values set for caches and requests by setting appropriate environment variables when applying the manifest or helm chart. -Environment Variable | Default | Description ----|---|--- -`LINODE_INSTANCE_CACHE_TTL` | `15` | Default timeout of instance cache in seconds -`LINODE_ROUTES_CACHE_TTL_SECONDS` | `60` | Default timeout of route cache in seconds -`LINODE_REQUEST_TIMEOUT_SECONDS` | `120` | Default timeout in seconds for http requests to linode API -`LINODE_EXTERNAL_SUBNET` | | Mark private network as external. Example - `172.24.0.0/16` +| Environment Variable | Default | Description | +|-----------------------------------|---------|-------------------------------------------------------------| +| `LINODE_INSTANCE_CACHE_TTL` | `15` | Default timeout of instance cache in seconds | +| `LINODE_ROUTES_CACHE_TTL_SECONDS` | `60` | Default timeout of route cache in seconds | +| `LINODE_REQUEST_TIMEOUT_SECONDS` | `120` | Default timeout in seconds for http requests to linode API | +| `LINODE_EXTERNAL_SUBNET` | | Mark private network as external. Example - `172.24.0.0/16` | ## Generating a Manifest for Deployment Use the script located at `./deploy/generate-manifest.sh` to generate a self-contained deployment manifest for the Linode CCM. Two arguments are required. diff --git a/cloud/linode/cilium_loadbalancers.go b/cloud/linode/cilium_loadbalancers.go index 478131ac..f9c9550c 100644 --- a/cloud/linode/cilium_loadbalancers.go +++ b/cloud/linode/cilium_loadbalancers.go @@ -145,7 +145,7 @@ func (l *loadbalancers) shareIPs(ctx context.Context, addrs []string, node *v1.N // perform IP sharing (via a specified node selector) have the expected IPs shared // in the event that a Node joins the cluster after the LoadBalancer Service already // exists -func (l *loadbalancers) handleIPSharing(ctx context.Context, node *v1.Node) error { +func (l *loadbalancers) handleIPSharing(ctx context.Context, node *v1.Node, ipHolderSuffix string) error { // ignore cases where the provider ID has been set if node.Spec.ProviderID == "" { klog.Info("skipping IP while providerID is unset") @@ -179,7 +179,7 @@ func (l *loadbalancers) handleIPSharing(ctx context.Context, node *v1.Node) erro // if any of the addrs don't exist on the ip-holder (e.g. someone manually deleted it outside the CCM), // we need to exclude that from the list // TODO: also clean up the CiliumLoadBalancerIPPool for that missing IP if that happens - ipHolder, err := l.getIPHolder(ctx) + ipHolder, err := l.getIPHolder(ctx, ipHolderSuffix) if err != nil { return err } @@ -204,8 +204,8 @@ func (l *loadbalancers) handleIPSharing(ctx context.Context, node *v1.Node) erro // createSharedIP requests an additional IP that can be shared on Nodes to support // loadbalancing via Cilium LB IPAM + BGP Control Plane. 
-func (l *loadbalancers) createSharedIP(ctx context.Context, nodes []*v1.Node) (string, error) { - ipHolder, err := l.ensureIPHolder(ctx) +func (l *loadbalancers) createSharedIP(ctx context.Context, nodes []*v1.Node, ipHolderSuffix string) (string, error) { + ipHolder, err := l.ensureIPHolder(ctx, ipHolderSuffix) if err != nil { return "", err } @@ -273,7 +273,20 @@ func (l *loadbalancers) deleteSharedIP(ctx context.Context, service *v1.Service) return err } bgpNodes := nodeList.Items - ipHolder, err := l.getIPHolder(ctx) + + serviceNn := getServiceNn(service) + var ipHolderSuffix string + if Options.IpHolderSuffix != "" { + ipHolderSuffix = Options.IpHolderSuffix + klog.V(3).Infof("using parameter-based IP Holder suffix %s for Service %s", ipHolderSuffix, serviceNn) + } else { + if service.Namespace != "" { + ipHolderSuffix = service.Namespace + klog.V(3).Infof("using service-based IP Holder suffix %s for Service %s", ipHolderSuffix, serviceNn) + } + } + + ipHolder, err := l.getIPHolder(ctx, ipHolderSuffix) if err != nil { // return error or nil if not found since no IP holder means there // is no IP to reclaim @@ -307,48 +320,87 @@ func (l *loadbalancers) deleteSharedIP(ctx context.Context, service *v1.Service) // To hold the IP in lieu of a proper IP reservation system, a special Nanode is // created but not booted and used to hold all shared IPs. -func (l *loadbalancers) ensureIPHolder(ctx context.Context) (*linodego.Instance, error) { - ipHolder, err := l.getIPHolder(ctx) +func (l *loadbalancers) ensureIPHolder(ctx context.Context, suffix string) (*linodego.Instance, error) { + ipHolder, err := l.getIPHolder(ctx, suffix) if err != nil { return nil, err } if ipHolder != nil { return ipHolder, nil } - + label := generateClusterScopedIPHolderLinodeName(l.zone, suffix) ipHolder, err = l.client.CreateInstance(ctx, linodego.InstanceCreateOptions{ Region: l.zone, Type: "g6-nanode-1", - Label: fmt.Sprintf("%s-%s", ipHolderLabelPrefix, l.zone), + Label: label, RootPass: uuid.NewString(), Image: "linode/ubuntu22.04", Booted: ptr.To(false), }) if err != nil { + lerr := linodego.NewError(err) + if lerr.Code == http.StatusConflict { + // TODO (rk): should we handle more status codes on error? + klog.Errorf("failed to create new IP Holder instance %s since it already exists: %s", label, err.Error()) + return nil, err + } return nil, err } + klog.Infof("created new IP Holder instance %s", label) return ipHolder, nil } -func (l *loadbalancers) getIPHolder(ctx context.Context) (*linodego.Instance, error) { +func (l *loadbalancers) getIPHolder(ctx context.Context, suffix string) (*linodego.Instance, error) { + // even though we have updated the naming convention, leaving this in ensures we have backwards compatibility filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, l.zone)} rawFilter, err := json.Marshal(filter) if err != nil { panic("this should not have failed") } var ipHolder *linodego.Instance + // TODO (rk): should we switch to using GET instead of LIST? 
we would be able to wrap logic around errors
 	linodes, err := l.client.ListInstances(ctx, linodego.NewListOptions(1, string(rawFilter)))
 	if err != nil {
 		return nil, err
 	}
+	if len(linodes) == 0 {
+		// since a list that returns 0 results has a 200/OK status code (no error)
+		// we assume that either
+		// a) an ip holder instance does not exist yet
+		// or
+		// b) another cluster already holds the linode grant to an ip holder using the old naming convention
+		filter = map[string]string{"label": generateClusterScopedIPHolderLinodeName(l.zone, suffix)}
+		rawFilter, err = json.Marshal(filter)
+		if err != nil {
+			panic("this should not have failed")
+		}
+		linodes, err = l.client.ListInstances(ctx, linodego.NewListOptions(1, string(rawFilter)))
+		if err != nil {
+			return nil, err
+		}
+	}
 	if len(linodes) > 0 {
 		ipHolder = &linodes[0]
 	}
-
 	return ipHolder, nil
 }
 
+// generateClusterScopedIPHolderLinodeName attempts to generate a unique name for the IP Holder
+// instance used alongside Cilium LoadBalancers and Shared IPs for Kubernetes Services.
+// The `suffix` is set to the value of the `--ip-holder-suffix` parameter, if it is set.
+// The backup method involves keying off the service namespace.
+// While it _is_ possible to have an empty suffix passed in, it would mean that Kubernetes
+// allowed the creation of a service without a namespace, which is highly improbable.
+func generateClusterScopedIPHolderLinodeName(zone, suffix string) string {
+	// since Linode CCM consumers are varied, we require a method of providing a
+	// suffix that does not rely on the use of a specific product (ex. LKE) to
+	// have a specific piece of metadata (ex. annotation(s), label(s) ) present to key off of.
+	//
+	return fmt.Sprintf("%s-%s-%s", ipHolderLabelPrefix, zone, suffix)
+}
+
 func (l *loadbalancers) retrieveCiliumClientset() error {
 	if l.ciliumClient != nil {
 		return nil
diff --git a/cloud/linode/cilium_loadbalancers_test.go b/cloud/linode/cilium_loadbalancers_test.go
index a4226241..3ee67cc4 100644
--- a/cloud/linode/cilium_loadbalancers_test.go
+++ b/cloud/linode/cilium_loadbalancers_test.go
@@ -69,14 +69,21 @@ var (
 				},
 			},
 		},
 	}
-	publicIPv4       = net.ParseIP("45.76.101.25")
-	ipHolderInstance = linodego.Instance{
+	publicIPv4          = net.ParseIP("45.76.101.25")
+	oldIpHolderInstance = linodego.Instance{
 		ID:     12345,
 		Label:  fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone),
 		Type:   "g6-standard-1",
 		Region: "us-west",
 		IPv4:   []*net.IP{&publicIPv4},
 	}
+	newIpHolderInstance = linodego.Instance{
+		ID:     123456,
+		Label:  generateClusterScopedIPHolderLinodeName(zone, "linodelb"),
+		Type:   "g6-standard-1",
+		Region: "us-west",
+		IPv4:   []*net.IP{&publicIPv4},
+	}
 )
 
 func TestCiliumCCMLoadBalancers(t *testing.T) {
@@ -93,20 +100,32 @@ func TestCiliumCCMLoadBalancers(t *testing.T) {
 			f:    testUnsupportedRegion,
 		},
 		{
-			name: "Create Cilium Load Balancer With explicit loadBalancerClass and existing IP holder nanode",
-			f:    testCreateWithExistingIPHolder,
+			name: "Create Cilium Load Balancer With explicit loadBalancerClass and existing IP holder nanode with old IP Holder naming convention",
+			f:    testCreateWithExistingIPHolderWithOldIpHolderNamingConvention,
+		},
+		{
+			name: "Create Cilium Load Balancer With explicit loadBalancerClass and existing IP holder nanode with new IP Holder naming convention",
+			f:    testCreateWithExistingIPHolderWithNewIpHolderNamingConvention,
 		},
 		{
 			name: "Create Cilium Load Balancer With no existing IP holder nanode",
 			f:    testCreateWithNoExistingIPHolder,
 		},
 		{
-			name: "Delete Cilium Load Balancer",
-			f:    
testEnsureCiliumLoadBalancerDeleted, + name: "Delete Cilium Load Balancer With Old IP Holder Naming Convention", + f: testEnsureCiliumLoadBalancerDeletedWithOldIpHolderNamingConvention, + }, + { + name: "Delete Cilium Load Balancer With New IP Holder Naming Convention", + f: testEnsureCiliumLoadBalancerDeletedWithNewIpHolderNamingConvention, }, { - name: "Add node to existing Cilium Load Balancer", - f: testCiliumUpdateLoadBalancerAddNode, + name: "Add node to existing Cilium Load Balancer With Old IP Holder Naming Convention", + f: testCiliumUpdateLoadBalancerAddNodeWithOldIpHolderNamingConvention, + }, + { + name: "Add node to existing Cilium Load Balancer With New IP Holder Naming Convention", + f: testCiliumUpdateLoadBalancerAddNodeWithNewIpHolderNamingConvention, }, } for _, tc := range testCases { @@ -165,6 +184,7 @@ func addNodes(t *testing.T, kubeClient kubernetes.Interface, nodes []*v1.Node) { func testNoBGPNodeLabel(t *testing.T, mc *mocks.MockClient) { Options.BGPNodeSelector = "" + Options.IpHolderSuffix = "linodelb" svc := createTestService() kubeClient, _ := k8sClient.NewFakeClientset() @@ -176,14 +196,17 @@ func testNoBGPNodeLabel(t *testing.T, mc *mocks.MockClient) { filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} rawFilter, _ := json.Marshal(filter) mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) + filter = map[string]string{"label": generateClusterScopedIPHolderLinodeName(zone, Options.IpHolderSuffix)} + rawFilter, _ = json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) dummySharedIP := "45.76.101.26" - mc.EXPECT().CreateInstance(gomock.Any(), gomock.Any()).Times(1).Return(&ipHolderInstance, nil) - mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), ipHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + mc.EXPECT().CreateInstance(gomock.Any(), gomock.Any()).Times(1).Return(&newIpHolderInstance, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), newIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ IPv4: &linodego.InstanceIPv4Response{ Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, }, }, nil) - mc.EXPECT().AddInstanceIPAddress(gomock.Any(), ipHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) + mc.EXPECT().AddInstanceIPAddress(gomock.Any(), newIpHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ IPs: []string{dummySharedIP}, LinodeID: 11111, @@ -224,8 +247,47 @@ func testUnsupportedRegion(t *testing.T, mc *mocks.MockClient) { } } -func testCreateWithExistingIPHolder(t *testing.T, mc *mocks.MockClient) { +func testCreateWithExistingIPHolderWithOldIpHolderNamingConvention(t *testing.T, mc *mocks.MockClient) { + Options.BGPNodeSelector = "cilium-bgp-peering=true" + svc := createTestService() + + kubeClient, _ := k8sClient.NewFakeClientset() + ciliumClient := &fakev2alpha1.FakeCiliumV2alpha1{Fake: &kubeClient.CiliumFakeClientset.Fake} + addService(t, kubeClient, svc) + addNodes(t, kubeClient, nodes) + lb := &loadbalancers{mc, zone, kubeClient, ciliumClient, ciliumLBType} + + filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} + rawFilter, _ := json.Marshal(filter) + 
mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{oldIpHolderInstance}, nil) + dummySharedIP := "45.76.101.26" + mc.EXPECT().AddInstanceIPAddress(gomock.Any(), oldIpHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), oldIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + IPv4: &linodego.InstanceIPv4Response{ + Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, + }, + }, nil) + mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ + IPs: []string{dummySharedIP}, + LinodeID: 11111, + }).Times(1) + mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ + IPs: []string{dummySharedIP}, + LinodeID: 22222, + }).Times(1) + + lbStatus, err := lb.EnsureLoadBalancer(context.TODO(), "linodelb", svc, nodes) + if err != nil { + t.Fatalf("expected a nil error, got %v", err) + } + if lbStatus == nil { + t.Fatal("expected non-nil lbStatus") + } +} + +func testCreateWithExistingIPHolderWithNewIpHolderNamingConvention(t *testing.T, mc *mocks.MockClient) { Options.BGPNodeSelector = "cilium-bgp-peering=true" + Options.IpHolderSuffix = "linodelb" svc := createTestService() kubeClient, _ := k8sClient.NewFakeClientset() @@ -236,10 +298,10 @@ func testCreateWithExistingIPHolder(t *testing.T, mc *mocks.MockClient) { filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} rawFilter, _ := json.Marshal(filter) - mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{ipHolderInstance}, nil) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{oldIpHolderInstance}, nil) dummySharedIP := "45.76.101.26" - mc.EXPECT().AddInstanceIPAddress(gomock.Any(), ipHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) - mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), ipHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + mc.EXPECT().AddInstanceIPAddress(gomock.Any(), oldIpHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), oldIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ IPv4: &linodego.InstanceIPv4Response{ Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, }, @@ -264,6 +326,7 @@ func testCreateWithExistingIPHolder(t *testing.T, mc *mocks.MockClient) { func testCreateWithNoExistingIPHolder(t *testing.T, mc *mocks.MockClient) { Options.BGPNodeSelector = "cilium-bgp-peering=true" + Options.IpHolderSuffix = "linodelb" svc := createTestService() kubeClient, _ := k8sClient.NewFakeClientset() @@ -275,14 +338,17 @@ func testCreateWithNoExistingIPHolder(t *testing.T, mc *mocks.MockClient) { filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} rawFilter, _ := json.Marshal(filter) mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) + filter = map[string]string{"label": generateClusterScopedIPHolderLinodeName(zone, Options.IpHolderSuffix)} + rawFilter, _ = json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, 
string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) dummySharedIP := "45.76.101.26" - mc.EXPECT().CreateInstance(gomock.Any(), gomock.Any()).Times(1).Return(&ipHolderInstance, nil) - mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), ipHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + mc.EXPECT().CreateInstance(gomock.Any(), gomock.Any()).Times(1).Return(&newIpHolderInstance, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), newIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ IPv4: &linodego.InstanceIPv4Response{ Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, }, }, nil) - mc.EXPECT().AddInstanceIPAddress(gomock.Any(), ipHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) + mc.EXPECT().AddInstanceIPAddress(gomock.Any(), newIpHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ IPs: []string{dummySharedIP}, LinodeID: 11111, @@ -301,7 +367,7 @@ func testCreateWithNoExistingIPHolder(t *testing.T, mc *mocks.MockClient) { } } -func testEnsureCiliumLoadBalancerDeleted(t *testing.T, mc *mocks.MockClient) { +func testEnsureCiliumLoadBalancerDeletedWithOldIpHolderNamingConvention(t *testing.T, mc *mocks.MockClient) { Options.BGPNodeSelector = "cilium-bgp-peering=true" svc := createTestService() @@ -316,10 +382,10 @@ func testEnsureCiliumLoadBalancerDeleted(t *testing.T, mc *mocks.MockClient) { filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} rawFilter, _ := json.Marshal(filter) - mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{ipHolderInstance}, nil) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{oldIpHolderInstance}, nil) mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), 11111, dummySharedIP).Times(1).Return(nil) mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), 22222, dummySharedIP).Times(1).Return(nil) - mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), ipHolderInstance.ID, dummySharedIP).Times(1).Return(nil) + mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), oldIpHolderInstance.ID, dummySharedIP).Times(1).Return(nil) err := lb.EnsureLoadBalancerDeleted(context.TODO(), "linodelb", svc) if err != nil { @@ -327,8 +393,9 @@ func testEnsureCiliumLoadBalancerDeleted(t *testing.T, mc *mocks.MockClient) { } } -func testCiliumUpdateLoadBalancerAddNode(t *testing.T, mc *mocks.MockClient) { +func testEnsureCiliumLoadBalancerDeletedWithNewIpHolderNamingConvention(t *testing.T, mc *mocks.MockClient) { Options.BGPNodeSelector = "cilium-bgp-peering=true" + Options.IpHolderSuffix = "linodelb" svc := createTestService() kubeClient, _ := k8sClient.NewFakeClientset() @@ -337,12 +404,41 @@ func testCiliumUpdateLoadBalancerAddNode(t *testing.T, mc *mocks.MockClient) { addNodes(t, kubeClient, nodes) lb := &loadbalancers{mc, zone, kubeClient, ciliumClient, ciliumLBType} + dummySharedIP := "45.76.101.26" + svc.Status.LoadBalancer = v1.LoadBalancerStatus{Ingress: []v1.LoadBalancerIngress{{IP: dummySharedIP}}} + filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} rawFilter, _ := json.Marshal(filter) - mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{ipHolderInstance}, 
nil) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) + filter = map[string]string{"label": generateClusterScopedIPHolderLinodeName(zone, Options.IpHolderSuffix)} + rawFilter, _ = json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{newIpHolderInstance}, nil) + mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), 11111, dummySharedIP).Times(1).Return(nil) + mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), 22222, dummySharedIP).Times(1).Return(nil) + mc.EXPECT().DeleteInstanceIPAddress(gomock.Any(), newIpHolderInstance.ID, dummySharedIP).Times(1).Return(nil) + + err := lb.EnsureLoadBalancerDeleted(context.TODO(), "linodelb", svc) + if err != nil { + t.Fatalf("expected a nil error, got %v", err) + } +} + +func testCiliumUpdateLoadBalancerAddNodeWithOldIpHolderNamingConvention(t *testing.T, mc *mocks.MockClient) { + Options.BGPNodeSelector = "cilium-bgp-peering=true" + svc := createTestService() + + kubeClient, _ := k8sClient.NewFakeClientset() + ciliumClient := &fakev2alpha1.FakeCiliumV2alpha1{Fake: &kubeClient.CiliumFakeClientset.Fake} + addService(t, kubeClient, svc) + addNodes(t, kubeClient, nodes) + lb := &loadbalancers{mc, zone, kubeClient, ciliumClient, ciliumLBType} + + filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} + rawFilter, _ := json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{oldIpHolderInstance}, nil) dummySharedIP := "45.76.101.26" - mc.EXPECT().AddInstanceIPAddress(gomock.Any(), ipHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) - mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), ipHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + mc.EXPECT().AddInstanceIPAddress(gomock.Any(), oldIpHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), oldIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ IPv4: &linodego.InstanceIPv4Response{ Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, }, @@ -365,8 +461,74 @@ func testCiliumUpdateLoadBalancerAddNode(t *testing.T, mc *mocks.MockClient) { } // Now add another node to the cluster and assert that it gets the shared IP - mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{ipHolderInstance}, nil) - mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), ipHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{oldIpHolderInstance}, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), oldIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + IPv4: &linodego.InstanceIPv4Response{ + Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, + }, + }, nil) + mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ + IPs: []string{dummySharedIP}, + LinodeID: 55555, + }).Times(1) + addNodes(t, kubeClient, additionalNodes) + + err = lb.UpdateLoadBalancer(context.TODO(), "linodelb", svc, additionalNodes) + if err != nil { + t.Fatalf("expected a nil error, 
got %v", err) + } +} + +func testCiliumUpdateLoadBalancerAddNodeWithNewIpHolderNamingConvention(t *testing.T, mc *mocks.MockClient) { + Options.BGPNodeSelector = "cilium-bgp-peering=true" + Options.IpHolderSuffix = "linodelb" + svc := createTestService() + + kubeClient, _ := k8sClient.NewFakeClientset() + ciliumClient := &fakev2alpha1.FakeCiliumV2alpha1{Fake: &kubeClient.CiliumFakeClientset.Fake} + addService(t, kubeClient, svc) + addNodes(t, kubeClient, nodes) + lb := &loadbalancers{mc, zone, kubeClient, ciliumClient, ciliumLBType} + + filter := map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} + rawFilter, _ := json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) + filter = map[string]string{"label": generateClusterScopedIPHolderLinodeName(zone, Options.IpHolderSuffix)} + rawFilter, _ = json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{newIpHolderInstance}, nil) + dummySharedIP := "45.76.101.26" + mc.EXPECT().AddInstanceIPAddress(gomock.Any(), newIpHolderInstance.ID, true).Times(1).Return(&linodego.InstanceIP{Address: dummySharedIP}, nil) + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), newIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ + IPv4: &linodego.InstanceIPv4Response{ + Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, + }, + }, nil) + mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ + IPs: []string{dummySharedIP}, + LinodeID: 11111, + }).Times(1) + mc.EXPECT().ShareIPAddresses(gomock.Any(), linodego.IPAddressesShareOptions{ + IPs: []string{dummySharedIP}, + LinodeID: 22222, + }).Times(1) + + lbStatus, err := lb.EnsureLoadBalancer(context.TODO(), "linodelb", svc, nodes) + if err != nil { + t.Fatalf("expected a nil error, got %v", err) + } + if lbStatus == nil { + t.Fatal("expected non-nil lbStatus") + } + + // Now add another node to the cluster and assert that it gets the shared IP + filter = map[string]string{"label": fmt.Sprintf("%s-%s", ipHolderLabelPrefix, zone)} + rawFilter, _ = json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{}, nil) + filter = map[string]string{"label": generateClusterScopedIPHolderLinodeName(zone, Options.IpHolderSuffix)} + rawFilter, _ = json.Marshal(filter) + mc.EXPECT().ListInstances(gomock.Any(), linodego.NewListOptions(1, string(rawFilter))).Times(1).Return([]linodego.Instance{newIpHolderInstance}, nil) + + mc.EXPECT().GetInstanceIPAddresses(gomock.Any(), newIpHolderInstance.ID).Times(1).Return(&linodego.InstanceIPAddressResponse{ IPv4: &linodego.InstanceIPv4Response{ Public: []*linodego.InstanceIP{{Address: publicIPv4.String()}, {Address: dummySharedIP}}, }, diff --git a/cloud/linode/cloud.go b/cloud/linode/cloud.go index 46230bbf..5404ca60 100644 --- a/cloud/linode/cloud.go +++ b/cloud/linode/cloud.go @@ -38,6 +38,7 @@ var Options struct { VPCName string LoadBalancerType string BGPNodeSelector string + IpHolderSuffix string LinodeExternalNetwork *net.IPNet } diff --git a/cloud/linode/loadbalancers.go b/cloud/linode/loadbalancers.go index 4a396ccf..2998be3e 100644 --- a/cloud/linode/loadbalancers.go +++ b/cloud/linode/loadbalancers.go @@ -222,9 +222,20 @@ func (l *loadbalancers) EnsureLoadBalancer(ctx context.Context, clusterName stri }, 
nil } + var ipHolderSuffix string + if Options.IpHolderSuffix != "" { + ipHolderSuffix = Options.IpHolderSuffix + klog.Infof("using parameter-based IP Holder suffix %s for Service %s", ipHolderSuffix, serviceNn) + } else { + if service.Namespace != "" { + ipHolderSuffix = service.Namespace + klog.Infof("using service-based IP Holder suffix %s for Service %s", ipHolderSuffix, serviceNn) + } + } + // CiliumLoadBalancerIPPool does not yet exist for the service var sharedIP string - if sharedIP, err = l.createSharedIP(ctx, nodes); err != nil { + if sharedIP, err = l.createSharedIP(ctx, nodes, ipHolderSuffix); err != nil { klog.Errorf("Failed to request shared instance IP: %s", err.Error()) return nil, err } @@ -428,9 +439,20 @@ func (l *loadbalancers) UpdateLoadBalancer(ctx context.Context, clusterName stri // handle LoadBalancers backed by Cilium if l.loadBalancerType == ciliumLBType { klog.Infof("handling update for LoadBalancer Service %s/%s as %s", service.Namespace, service.Name, ciliumLBClass) + serviceNn := getServiceNn(service) + var ipHolderSuffix string + if Options.IpHolderSuffix != "" { + ipHolderSuffix = Options.IpHolderSuffix + klog.V(3).Infof("using parameter-based IP Holder suffix %s for Service %s", ipHolderSuffix, serviceNn) + } else { + if service.Namespace != "" { + ipHolderSuffix = service.Namespace + klog.V(3).Infof("using service-based IP Holder suffix %s for Service %s", ipHolderSuffix, serviceNn) + } + } // make sure that IPs are shared properly on the Node if using load-balancers not backed by NodeBalancers for _, node := range nodes { - if err := l.handleIPSharing(ctx, node); err != nil { + if err := l.handleIPSharing(ctx, node, ipHolderSuffix); err != nil { return err } } diff --git a/cloud/linode/service_controller.go b/cloud/linode/service_controller.go index f04167e9..abbf707d 100644 --- a/cloud/linode/service_controller.go +++ b/cloud/linode/service_controller.go @@ -110,5 +110,5 @@ func (s *serviceController) processNextDeletion() bool { func (s *serviceController) handleServiceDeleted(service *v1.Service) error { klog.Infof("ServiceController handling service (%s) deletion", getServiceNn(service)) clusterName := strings.TrimPrefix(service.Namespace, "kube-system-") - return s.loadbalancers.EnsureLoadBalancerDeleted(context.Background(), clusterName, service) + return s.loadbalancers.EnsureLoadBalancerDeleted(context.TODO(), clusterName, service) } diff --git a/go.mod b/go.mod index 4c721f77..28a90efb 100644 --- a/go.mod +++ b/go.mod @@ -6,7 +6,7 @@ toolchain go1.22.2 require ( github.com/appscode/go v0.0.0-20200323182826-54e98e09185a - github.com/cilium/cilium v1.15.5 + github.com/cilium/cilium v1.15.10 github.com/getsentry/sentry-go v0.4.0 github.com/golang/mock v1.6.0 github.com/google/uuid v1.6.0 @@ -14,7 +14,6 @@ require ( github.com/spf13/pflag v1.0.5 github.com/stretchr/testify v1.9.0 golang.org/x/exp v0.0.0-20240119083558-1b970713d09a - golang.org/x/oauth2 v0.21.0 k8s.io/api v0.29.3 k8s.io/apimachinery v0.29.3 k8s.io/client-go v0.29.3 @@ -42,7 +41,7 @@ require ( github.com/evanphx/json-patch v5.7.0+incompatible // indirect github.com/felixge/httpsnoop v1.0.3 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect - github.com/go-logr/logr v1.4.1 // indirect + github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-ole/go-ole v1.2.6 // indirect github.com/go-openapi/analysis v0.21.4 // indirect @@ -65,7 +64,7 @@ require ( github.com/google/gofuzz v1.2.0 // indirect github.com/google/gopacket v1.1.19 // 
indirect github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect - github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect github.com/hashicorp/hcl v1.0.0 // indirect github.com/imdario/mergo v0.3.16 // indirect @@ -94,7 +93,7 @@ require ( github.com/prometheus/client_model v0.5.0 // indirect github.com/prometheus/common v0.45.0 // indirect github.com/prometheus/procfs v0.12.0 // indirect - github.com/rogpeppe/go-internal v1.11.0 // indirect + github.com/rogpeppe/go-internal v1.12.0 // indirect github.com/sagikazarmark/locafero v0.4.0 // indirect github.com/sagikazarmark/slog-shim v0.1.0 // indirect github.com/sasha-s/go-deadlock v0.3.1 // indirect @@ -109,7 +108,7 @@ require ( github.com/subosito/gotenv v1.6.0 // indirect github.com/tklauser/go-sysconf v0.3.11 // indirect github.com/tklauser/numcpus v0.6.0 // indirect - github.com/vishvananda/netlink v1.2.1-beta.2.0.20231127184239-0ced8385386a // indirect + github.com/vishvananda/netlink v1.2.1-beta.2.0.20240524165444-4d4ba1473f21 // indirect github.com/vishvananda/netns v0.0.4 // indirect github.com/yusufpapurcu/wmi v1.2.3 // indirect go.etcd.io/etcd/api/v3 v3.5.11 // indirect @@ -118,13 +117,13 @@ require ( go.mongodb.org/mongo-driver v1.13.1 // indirect go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.42.0 // indirect go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0 // indirect - go.opentelemetry.io/otel v1.21.0 // indirect + go.opentelemetry.io/otel v1.28.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 // indirect - go.opentelemetry.io/otel/metric v1.21.0 // indirect - go.opentelemetry.io/otel/sdk v1.21.0 // indirect - go.opentelemetry.io/otel/trace v1.21.0 // indirect - go.opentelemetry.io/proto/otlp v1.0.0 // indirect + go.opentelemetry.io/otel/metric v1.28.0 // indirect + go.opentelemetry.io/otel/sdk v1.28.0 // indirect + go.opentelemetry.io/otel/trace v1.28.0 // indirect + go.opentelemetry.io/proto/otlp v1.3.1 // indirect go.uber.org/dig v1.17.1 // indirect go.uber.org/multierr v1.11.0 // indirect go.uber.org/zap v1.26.0 // indirect @@ -132,6 +131,7 @@ require ( golang.org/x/crypto v0.24.0 // indirect golang.org/x/mod v0.17.0 // indirect golang.org/x/net v0.26.0 // indirect + golang.org/x/oauth2 v0.21.0 // indirect golang.org/x/sync v0.7.0 // indirect golang.org/x/sys v0.21.0 // indirect golang.org/x/term v0.21.0 // indirect @@ -139,10 +139,10 @@ require ( golang.org/x/time v0.5.0 // indirect golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect google.golang.org/genproto v0.0.0-20231120223509-83a465c0220f // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20231127180814-3a041ad873d4 // indirect - google.golang.org/grpc v1.61.0 // indirect - google.golang.org/protobuf v1.33.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 // indirect + google.golang.org/grpc v1.64.0 // indirect + google.golang.org/protobuf v1.34.2 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/ini.v1 v1.67.0 // indirect gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect diff --git a/go.sum b/go.sum index 
a365ff47..08a3a9c7 100644 --- a/go.sum +++ b/go.sum @@ -1,5 +1,5 @@ cloud.google.com/go v0.110.10 h1:LXy9GEO+timppncPIAZoOj3l58LIU9k+kn48AN7IO3Y= -cloud.google.com/go/compute v1.23.3 h1:6sVlXXBmbd7jNX0Ipq0trII3e4n1/MsADLK6a+aiVlk= +cloud.google.com/go/compute v1.25.1 h1:ZRpHJedLtTpKgr3RV1Fx23NuaAEN1Zfx9hw1u4aJdjU= cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc= cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU= @@ -39,14 +39,14 @@ github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cilium/checkmate v1.0.3 h1:CQC5eOmlAZeEjPrVZY3ZwEBH64lHlx9mXYdUehEwI5w= github.com/cilium/checkmate v1.0.3/go.mod h1:KiBTasf39/F2hf2yAmHw21YFl3hcEyP4Yk6filxc12A= -github.com/cilium/cilium v1.15.5 h1:AFhWniiqVyQXYfpaPZTRfKdS0pLx+8lCDPp7JpAZqfo= -github.com/cilium/cilium v1.15.5/go.mod h1:hsruyj1KCncND7AyIlbKgHUlk7V+ONxTn3EbrOu39dI= +github.com/cilium/cilium v1.15.10 h1:DeTMoJQm78pgfrpfODSDOq+tWDS9ew0m1+B8SzojB4o= +github.com/cilium/cilium v1.15.10/go.mod h1:c7Pk1BmmZBUdYYxBGc/T4mLOocxmXx6K6idB7Iw012c= github.com/cilium/ebpf v0.12.3 h1:8ht6F9MquybnY97at+VDZb3eQQr8ev79RueWeVaEcG4= github.com/cilium/ebpf v0.12.3/go.mod h1:TctK1ivibvI3znr66ljgi4hqOT8EYQjz1KWBfb1UVgM= github.com/cilium/proxy v0.0.0-20231031145409-f19708f3d018 h1:R/QlThqx099hS6req1k2Q87fvLSRgCEicQGate9vxO4= github.com/cilium/proxy v0.0.0-20231031145409-f19708f3d018/go.mod h1:p044XccCmONGIUbx3bJ7qvHXK0RcrdvIvbTGiu/RjUA= -github.com/cncf/xds/go v0.0.0-20231109132714-523115ebc101 h1:7To3pQ+pZo0i3dsWEbinPNFs5gPSBOsJtx3wTT94VBY= -github.com/cncf/xds/go v0.0.0-20231109132714-523115ebc101/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= +github.com/cncf/xds/go v0.0.0-20240318125728-8a4994d93e50 h1:DBmgJDC9dTfkVyGgipamEh2BpGYxScCH1TOF1LL1cXc= +github.com/cncf/xds/go v0.0.0-20240318125728-8a4994d93e50/go.mod h1:5e1+Vvlzido69INQaVO6d87Qn543Xr6nooe9Kz7oBFM= github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM= github.com/codeskyblue/go-sh v0.0.0-20190412065543-76bd3d59ff27/go.mod h1:VQx0hjo2oUeQkQUET7wRwradO6f+fN5jzXgB/zROxxE= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= @@ -74,8 +74,8 @@ github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+m github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385/go.mod h1:0vRUJqYpeSZifjYj7uP3BG/gKcuzL9xWVV/Y+cK33KM= github.com/emicklei/go-restful/v3 v3.11.2 h1:1onLa9DcsMYO9P+CXaL0dStDqQ2EHHXLiz+BtnqkLAU= github.com/emicklei/go-restful/v3 v3.11.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= -github.com/envoyproxy/protoc-gen-validate v1.0.2 h1:QkIBuU5k+x7/QXPvPPnWXWlCdaBFApVqftFV6k087DA= -github.com/envoyproxy/protoc-gen-validate v1.0.2/go.mod h1:GpiZQP3dDbg4JouG/NNS7QWXpgx6x8QiMKdmN72jogE= +github.com/envoyproxy/protoc-gen-validate v1.0.4 h1:gVPz/FMfvh57HdSJQyvBtF00j8JU4zdyUgIUNhlgg0A= +github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew= github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHjkjCrw= github.com/evanphx/json-patch v5.7.0+incompatible h1:vgGkfT/9f8zE6tvSCe74nfpAVDQ2tG6yudJd8LBksgI= github.com/evanphx/json-patch v5.7.0+incompatible/go.mod 
h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= @@ -99,8 +99,8 @@ github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclK github.com/go-errors/errors v1.0.1 h1:LUHzmkK3GUKUrL/1gfBUxAHzcev3apQlezX/+O7ma6w= github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ= -github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= +github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= github.com/go-logr/zapr v1.2.3 h1:a9vnzlIBPQBBkeaR9IuMUfmVOrQlkoC4YfPoFkX3T7A= @@ -150,8 +150,6 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69 github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg= github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/glog v1.1.2 h1:DVjP2PbBOzHyzA+dn3WhHIq4NdVu3Q+pvivFICf/7fo= -github.com/golang/glog v1.1.2/go.mod h1:zR+okUeTbrL6EL3xHUDxZuEtGv04p5shwip1+mL/rLQ= github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE= github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc= @@ -195,8 +193,8 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92Bcuy github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo= github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 h1:YBftPWNWd4WwGqtY2yeZL2ef8rHAxPBD8KFhJpmcqms= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0/go.mod h1:YN5jB8ie0yfIUg6VvR9Kz84aCaG7AsGZnLjhHbUqwPg= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k= github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= @@ -328,8 +326,8 @@ github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGy github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo= github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo= github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= -github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= -github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= +github.com/rogpeppe/go-internal v1.12.0 
h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8= +github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= @@ -404,8 +402,8 @@ github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyC github.com/valyala/fasthttp v1.6.0/go.mod h1:FstJa9V+Pj9vQ7OJie2qMHdwemEDaDiSdBnvPM1Su9w= github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8= github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio= -github.com/vishvananda/netlink v1.2.1-beta.2.0.20231127184239-0ced8385386a h1:PdKmLjqKUM8AfjGqDbrF/C56RvuGFDMYB0Z+8TMmGpU= -github.com/vishvananda/netlink v1.2.1-beta.2.0.20231127184239-0ced8385386a/go.mod h1:whJevzBpTrid75eZy99s3DqCmy05NfibNaF2Ol5Ox5A= +github.com/vishvananda/netlink v1.2.1-beta.2.0.20240524165444-4d4ba1473f21 h1:tcHUxOT8j/R+0S+A1j8D2InqguXFNxAiij+8QFOlX7Y= +github.com/vishvananda/netlink v1.2.1-beta.2.0.20240524165444-4d4ba1473f21/go.mod h1:whJevzBpTrid75eZy99s3DqCmy05NfibNaF2Ol5Ox5A= github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0= github.com/vishvananda/netns v0.0.4 h1:Oeaw1EM2JMxD51g9uhtC0D7erkIjgmj8+JZc26m1YX8= github.com/vishvananda/netns v0.0.4/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM= @@ -455,20 +453,20 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.4 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.42.0/go.mod h1:5z+/ZWJQKXa9YT34fQNx5K8Hd1EoIhvtUygUQPqEOgQ= go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0 h1:KfYpVmrjI7JuToy5k8XV3nkapjWx48k4E4JOtVstzQI= go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0/go.mod h1:SeQhzAEccGVZVEy7aH87Nh0km+utSpo1pTv6eMMop48= -go.opentelemetry.io/otel v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc= -go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo= +go.opentelemetry.io/otel v1.28.0 h1:/SqNcYk+idO0CxKEUOtKQClMK/MimZihKYMruSMViUo= +go.opentelemetry.io/otel v1.28.0/go.mod h1:q68ijF8Fc8CnMHKyzqL6akLO46ePnjkgfIMIjUIX9z4= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 h1:Mne5On7VWdx7omSrSSZvM4Kw7cS7NQkOOmLcgscI51U= go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0/go.mod h1:IPtUMKL4O3tH5y+iXVyAXqpAwMuzC1IrxVS81rummfE= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 h1:3d+S281UTjM+AbF31XSOYn1qXn3BgIdWl8HNEpx08Jk= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0/go.mod h1:0+KuTDyKL4gjKCF75pHOX4wuzYDUZYfAQdSu43o+Z2I= -go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4= -go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM= -go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8= -go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E= -go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc= -go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ= -go.opentelemetry.io/proto/otlp v1.0.0 
h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I= -go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM= +go.opentelemetry.io/otel/metric v1.28.0 h1:f0HGvSl1KRAU1DLgLGFjrwVyismPlnuU6JD6bOeuA5Q= +go.opentelemetry.io/otel/metric v1.28.0/go.mod h1:Fb1eVBFZmLVTMb6PPohq3TO9IIhUisDsbJoL/+uQW4s= +go.opentelemetry.io/otel/sdk v1.28.0 h1:b9d7hIry8yZsgtbmM0DKyPWMMUMlK9NEKuIG4aBqWyE= +go.opentelemetry.io/otel/sdk v1.28.0/go.mod h1:oYj7ClPUA7Iw3m+r7GeEjz0qckQRJK2B8zjcZEfu7Pg= +go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+lkx9g= +go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI= +go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0= +go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8= go.uber.org/dig v1.17.1 h1:Tga8Lz8PcYNsWsyHMZ1Vm0OQOUaJNDyvPImgbAu9YSc= go.uber.org/dig v1.17.1/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= @@ -602,14 +600,14 @@ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8T gomodules.xyz/version v0.1.0/go.mod h1:Y8xuV02mL/45psyPKG3NCVOwvAOy6T5Kx0l3rCjKSjU= google.golang.org/genproto v0.0.0-20231120223509-83a465c0220f h1:Vn+VyHU5guc9KjB5KrjI2q0wCOWEOIh0OEsleqakHJg= google.golang.org/genproto v0.0.0-20231120223509-83a465c0220f/go.mod h1:nWSwAFPb+qfNJXsoeO3Io7zf4tMSfN8EA8RlDA04GhY= -google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17 h1:JpwMPBpFN3uKhdaekDpiNlImDdkUAyiJ6ez/uxGaUSo= -google.golang.org/genproto/googleapis/api v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:0xJLfVdJqpAPl8tDg1ujOCGzx6LFLttXT5NhllGOXY4= -google.golang.org/genproto/googleapis/rpc v0.0.0-20231127180814-3a041ad873d4 h1:DC7wcm+i+P1rN3Ff07vL+OndGg5OhNddHyTA+ocPqYE= -google.golang.org/genproto/googleapis/rpc v0.0.0-20231127180814-3a041ad873d4/go.mod h1:eJVxU6o+4G1PSczBr85xmyvSNYAKvAYgkub40YGomFM= -google.golang.org/grpc v1.61.0 h1:TOvOcuXn30kRao+gfcvsebNEa5iZIiLkisYEkf7R7o0= -google.golang.org/grpc v1.61.0/go.mod h1:VUbo7IFqmF1QtCAstipjG0GIoq49KvMe9+h1jFLBNJs= -google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= -google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 h1:0+ozOGcrp+Y8Aq8TLNN2Aliibms5LEzsq99ZZmAGYm0= +google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094/go.mod h1:fJ/e3If/Q67Mj99hin0hMhiNyCRmt6BQ2aWIJshUSJw= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 h1:BwIjyKYGsK9dMCBOorzRri8MQwmi7mT9rGHsCEinZkA= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY= +google.golang.org/grpc v1.64.0 h1:KH3VH9y/MgNQg1dE7b3XfVK0GsPSIzJwdF617gUSbvY= +google.golang.org/grpc v1.64.0/go.mod h1:oxjF8E3FBnjp+/gVFYdWacaLDx9na1aqy9oovLpxQYg= +google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg= +google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 
diff --git a/main.go b/main.go
index f8b3c1ee..e657855d 100644
--- a/main.go
+++ b/main.go
@@ -84,6 +84,7 @@ func main() {
 	command.Flags().StringVar(&linode.Options.VPCName, "vpc-name", "", "vpc name whose routes will be managed by route-controller")
 	command.Flags().StringVar(&linode.Options.LoadBalancerType, "load-balancer-type", "nodebalancer", "configures which type of load-balancing to use for LoadBalancer Services (options: nodebalancer, cilium-bgp)")
-	command.Flags().StringVar(&linode.Options.BGPNodeSelector, "bgp-node-selector", "", "node selector to use to perform shared IP fail-over with BGP (e.g. cilium-bgp-peering=true")
+	command.Flags().StringVar(&linode.Options.BGPNodeSelector, "bgp-node-selector", "", "node selector to use to perform shared IP fail-over with BGP (e.g. cilium-bgp-peering=true)")
+	command.Flags().StringVar(&linode.Options.IpHolderSuffix, "ip-holder-suffix", "", "suffix to append to the ip holder name when using shared IP fail-over with BGP (e.g. ip-holder-suffix=my-cluster-name)")
 
 	// Set static flags
 	command.Flags().VisitAll(func(fl *pflag.Flag) {
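
To make the backwards-compatible lookup order concrete, the sketch below is a trimmed-down model of what `getIPHolder` does after this change: it checks the legacy region-wide label first, then falls back to the new cluster-scoped label. The `lister` type and the `linode-ccm-ip-holder` prefix are stand-ins for this sketch; the real code issues linodego `ListInstances` calls with a JSON label filter and then lets `ensureIPHolder` create a cluster-scoped holder if neither lookup matches.

```go
package main

import "fmt"

type instance struct{ Label string }

// lister stands in for a linodego ListInstances call filtered by label.
type lister func(label string) []instance

const prefix = "linode-ccm-ip-holder" // assumed prefix for illustration

func findIPHolder(list lister, zone, suffix string) *instance {
	// 1) legacy convention (<prefix>-<zone>), kept for backwards compatibility
	if got := list(fmt.Sprintf("%s-%s", prefix, zone)); len(got) > 0 {
		return &got[0]
	}
	// 2) cluster-scoped convention (<prefix>-<zone>-<suffix>) introduced here
	if got := list(fmt.Sprintf("%s-%s-%s", prefix, zone, suffix)); len(got) > 0 {
		return &got[0]
	}
	// nothing found: the caller (ensureIPHolder) creates a cluster-scoped holder
	return nil
}

func main() {
	// toy lister: only a cluster-scoped holder exists in this region
	holders := map[string][]instance{
		prefix + "-us-west-myclustername1": {{Label: prefix + "-us-west-myclustername1"}},
	}
	list := func(label string) []instance { return holders[label] }
	if h := findIPHolder(list, "us-west", "myclustername1"); h != nil {
		fmt.Println("found:", h.Label) // found: linode-ccm-ip-holder-us-west-myclustername1
	}
}
```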