Sync upstream v1.29.2 (#302)
* clusterapi: Add 'watch' verb to scale-from-zero example

If the 'get' and 'list' verbs are present, but the 'watch' verb is
absent, the autoscaler reports an error. For example:

cluster-autoscaler-b8949d8b9-76vcd E1006 22:11:43.056176       1
reflector.go:148]
k8s.io/client-go/dynamic/dynamicinformer/informer.go:108: Failed to
watch infrastructure.cluster.x-k8s.io/v1beta2,
Resource=vcdmachinetemplates: unknown
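Dynamic informers need all three of `get`, `list`, and `watch` on a resource; the failure above comes from only the first two being granted. A minimal illustrative helper (not the autoscaler's actual code) for checking a rule's verb list:

```go
package main

import "fmt"

// informerVerbs are the verbs a dynamic informer needs on a resource.
// If "watch" is missing, the reflector fails as in the error above.
var informerVerbs = []string{"get", "list", "watch"}

// missingInformerVerbs returns which required verbs are absent from an
// RBAC rule's verb list (hypothetical helper, for illustration only).
func missingInformerVerbs(ruleVerbs []string) []string {
	have := map[string]bool{}
	for _, v := range ruleVerbs {
		have[v] = true
	}
	var missing []string
	for _, v := range informerVerbs {
		if !have[v] {
			missing = append(missing, v)
		}
	}
	return missing
}

func main() {
	fmt.Println(missingInformerVerbs([]string{"get", "list"})) // [watch]
}
```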

* Update with make generate

* Add pdb filtering to remainingPdbTracker

* Convert replicated, system, not-safe-to-evict, and local storage pods to drainability rules

* Convert scale-down pdb check to drainability rule

* Pass DeleteOptions once during default rule creation

* Split out custom controller and common checks into separate drainability rules

* Filter out disabled drainability rules during creation

* Refactor GetPodsForDeletion logic and tests into simulator

* Fix custom controller drainability rule and add test coverage

* Add unit test for long-terminating pod past grace period

* Removed node drainer, kept node termination handler

* Add HasNodeGroupStartedScaleUp to cluster state registry.

- HasNodeGroupStartedScaleUp checks whether a scale up request exists
  without checking any upcoming nodes.

* Add kwiesmueller to OWNERS

jbartosik et al. are transitioning off of workload autoscalers (incl. vpa
and addon-resizer). kwiesmueller is on the new team and has agreed to
take on reviewer/approver responsibilities.

* Add information about provisioning-class-name annotation.

* Remove redundant if branch

* Add mechanism to override drainability status

* Log drainability override

* fix(cluster-autoscaler-chart): if secretKeyRefNameOverride is true, don't create secret

Signed-off-by: Jonathan Raymond <[email protected]>

* fix: correct version bump

Signed-off-by: Jonathan Raymond <[email protected]>

* Initialize default drainability rules

* feat: each node pool can now have different init configs

* ClusterAPI: Allow overriding the kubernetes.io/arch label set by the scale from zero method via environment variable

The architecture label in the build generic labels method of the cluster API (CAPI) provider is now populated using the GetDefaultScaleFromZeroArchitecture().Name() method.

The method allows CAPI users deploying the cluster-autoscaler to define the default architecture to be used by the cluster-autoscaler for scale up from zero via the env var CAPI_SCALE_ZERO_DEFAULT_ARCH. Amd64 is kept as a fallback for historical reasons.

These changes do not account for node groups that are heterogeneous in architecture. Generating labels to infer properties such as the CPU architecture from a node group's features should be considered a CAPI-provider-specific implementation detail.

* Update image builder to use Go 1.21.3

Some of Cluster Autoscaler code is now using features only available in Go 1.21.

* Add node-delete-delay-after-taint to FAQ

* Reports node taints.

* Add debugging-snapshot-enabled back

* Rename comments, logs, structs, and vars from packet to equinix metal

* Rename types

* fix: provider name to be used in builder to provide backward compatibility

Signed-off-by: Ayush Rangwala <[email protected]>

* Rename comments, logs, structs, and vars from packet to equinix metal

* Created a new env var for metal to replace/support packet env vars as usual

* Support backward compatibility for PACKET_MANAGER env var

Signed-off-by: Ayush Rangwala <[email protected]>

* fix: refactor cloud provider names

Signed-off-by: Ayush Rangwala <[email protected]>

* Documents startup/status/ignore node taints.

* Adding price info for c3d
(Price for preemptible instances is calculated as: (Spot price / On-demand price) * instance prices)
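The stated ratio can be shown with a tiny worked example; all prices below are hypothetical, purely to illustrate the formula:

```go
package main

import "fmt"

// preemptiblePrice applies the formula above: scale an instance price by
// the spot/on-demand ratio of a reference machine type. The numbers used
// in main are made up for illustration.
func preemptiblePrice(spotRef, onDemandRef, instancePrice float64) float64 {
	return (spotRef / onDemandRef) * instancePrice
}

func main() {
	// hypothetical ratio 0.05/0.20 = 0.25 applied to a $2.00/h instance
	fmt.Printf("%.2f\n", preemptiblePrice(0.05, 0.20, 2.00)) // 0.50
}
```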

* Bump CA golang to 1.21.3

* cloudprovider/exoscale: update limits/quotas URL

https://portal.exoscale.com/account/limits has been deprecated in
favor of https://portal.exoscale.com/organization/quotas. Update
README accordingly.

* Add the AppVersion to cluster-autoscaler.labels as app.kubernetes.io/version

* Bump version in chart.yaml

* add note for CRD and RBAC handling for VPA (>=1.0.0)

* feat(helm): add support for exoscale provider

Signed-off-by: Thomas Stadler <[email protected]>

* Add TOC link in README for EvictionRequirement example

* Fix 'evictionRequirements.resources' to be plural in yaml

* Run 'hack/generate-crd-yamls.sh'

* Adapt AEP to have 'resources' in plural

* Remove deprecated dependency: gogo/protobuf

* Fix klog formating directives in cluster-autoscaler package.

* Update kubernetes dependencies to 1.29.0-alpha.3.

* Change scheduler framework function names after recent refactor in
kubernetes scheduler.

* chore(helm): bump version of cluster-autoscaler

Signed-off-by: Thomas Stadler <[email protected]>

* chore(helm): docs, update README template

Signed-off-by: Thomas Stadler <[email protected]>

* Fix capacityType label in AWS ManagedNodeGroup

Fixes an issue where the capacityType label inferred from an empty
EKS ManagedNodeGroup does not match the same label on the node after it
is created and joins the cluster

* Cleanup: Remove deprecated github.com/golang/protobuf usage

- Regenerate cloudprovider/externalgrpc proto
- go mod tidy

* Remove maps.Copy usage.

* chore: upgrade vpa go and k8s dependencies

Signed-off-by: Amir Alavi <[email protected]>

* ScaleUp is only ever called when there are unscheduled pods

* Bump golang from 1.21.2 to 1.21.4 in /vertical-pod-autoscaler/builder

Bumps golang from 1.21.2 to 1.21.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>

* Disambiguate the resource usage node removal eligibility messages

* Cleanup: Remove separate client for k8s events

Remove RateLimiting options - rely on APF for apiserver protection.
Details: kubernetes/kubernetes#111880

* Update Chart.yaml

* Remove gce-expander-ephemeral-storage-support flag

Always enable the feature

* Add min/max/current asg size to log

* Clarify that log line updates cache, now AWS

* Update README.md: Link to Cluster-API

Add Link to Cluster API.

* azure: add owner-jackfrancis

* Update OWNERS - typo

* Update README.md

* Template the autoDiscovery.clusterName variable in the Helm chart

* fix: Add revisionHistoryLimit override to cluster-autoscaler

Signed-off-by: Matt Dainty <[email protected]>

* allow users to avoid aws instance not found spam

* fix: alicloud the function NodeGroupForNode is incorrect

* Update README.md

Fix error in text

* fix: handle error when listing machines

Signed-off-by: Cyrill Troxler <[email protected]>

* AWS: cache instance requirements

* fix: update node annotation used to limit log spam with valid key

* Removes unnecessary check

* Allow overriding domain suffix in GCE cloud provider.

* chore(deps): update vendored hcloud-go to 2.4.0

Generated by:

```
UPSTREAM_REF=v2.4.0 hack/update-vendor.sh
```

* Add new pod list processors for clearing TPU requests & filtering out
expendable pods

Treat pods that have not yet been processed as unschedulable

* Fix multiple comments and update flags

* Add new test for new behaviour and revert changes made to other tests

* Allow users to specify which schedulers to ignore

* Update flags, Improve tests readability & use Bypass instead of ignore in naming

* Update static_autoscaler tests & handle pod list processors errors as warnings

* Fix: Include restartable init containers in Pod utilization calc

Reuse k/k resourcehelper func
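The gist of the fix is that restartable init containers (sidecars) keep running for the pod's lifetime, so their requests must count toward utilization. A simplified sketch of that accounting (the real code reuses the k8s.io resourcehelper functions; types here are illustrative):

```go
package main

import "fmt"

// container is a simplified stand-in for a pod container spec; CPURequest
// is in millicores. RestartAlways marks a restartable init container
// (sidecar), which keeps running alongside regular containers.
type container struct {
	CPURequest    int64
	RestartAlways bool // only meaningful for init containers
}

// podCPURequest sums regular containers plus restartable init containers,
// mirroring the fix above in simplified form: non-restartable init
// containers run to completion and are excluded here.
func podCPURequest(containers, initContainers []container) int64 {
	var total int64
	for _, c := range containers {
		total += c.CPURequest
	}
	for _, c := range initContainers {
		if c.RestartAlways {
			total += c.CPURequest
		}
	}
	return total
}

func main() {
	fmt.Println(podCPURequest(
		[]container{{CPURequest: 500}},
		[]container{{CPURequest: 100, RestartAlways: true}, {CPURequest: 900}},
	)) // 600
}
```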

* Implement ProvReq service

* Set Go versions to the same settings kubernetes/kubernetes uses

Looks like specifying the Go patch version in go.mod might've been
a mistake: kubernetes/kubernetes#121808.

* feat: implement kwok cloudprovider

feat: wip implement `CloudProvider` interface boilerplate for `kwok` provider
Signed-off-by: vadasambar <[email protected]>

feat: add builder for `kwok`
- add logic to scale up and scale down nodes in `kwok` provider
Signed-off-by: vadasambar <[email protected]>

feat: wip parse node templates from file
Signed-off-by: vadasambar <[email protected]>

docs: add short README
Signed-off-by: vadasambar <[email protected]>

feat: implement remaining things
- to get the provider in a somewhat working state
Signed-off-by: vadasambar <[email protected]>

docs: add in-cluster `kwok` as pre-requisite in the README
Signed-off-by: vadasambar <[email protected]>

fix: templates file not correctly marshalling into node list
Signed-off-by: vadasambar <[email protected]>

fix: `invalid leading UTF-8 octet` error during template parsing
- remove encoding using `gob`
- not required
Signed-off-by: vadasambar <[email protected]>

fix: use lister to get and list
- instead of uncached kube client
- add lister as a field on the provider and nodegroup struct
Signed-off-by: vadasambar <[email protected]>

fix: `did not find nodegroup annotation` error
- CA was thinking the annotation is not present even though it is
- fix a bug with parsing annotation
Signed-off-by: vadasambar <[email protected]>

fix: CA node recognizing fake nodegroups
- add provider ID to nodes in the format `kwok:<node-name>`
- fix invalid `KwokManagedAnnotation`
- sanitize template nodes (remove `resourceVersion` etc.,)
- not sanitizing the node leads to error during creation of new nodes
- abstract code to get NG name into a separate function `getNGNameFromAnnotation`
Signed-off-by: vadasambar <[email protected]>
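The `kwok:<node-name>` provider ID format mentioned above can be sketched as a pair of helpers; the function names are illustrative, not the actual kwok provider code:

```go
package main

import (
	"fmt"
	"strings"
)

// ProviderName mirrors the prefix used in the provider ID format
// `kwok:<node-name>` described above.
const ProviderName = "kwok"

// getProviderID builds the provider ID the kwok provider stamps on nodes.
func getProviderID(nodeName string) string {
	return fmt.Sprintf("%s:%s", ProviderName, nodeName)
}

// nodeNameFromProviderID recovers the node name, so the provider can map
// cluster nodes back to the fake node groups it manages.
func nodeNameFromProviderID(providerID string) (string, bool) {
	return strings.CutPrefix(providerID, ProviderName+":")
}

func main() {
	id := getProviderID("kind-worker-abc")
	name, ok := nodeNameFromProviderID(id)
	fmt.Println(id, name, ok) // kwok:kind-worker-abc kind-worker-abc true
}
```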

fix: node not getting deleted
Signed-off-by: vadasambar <[email protected]>

test: add empty test file
Signed-off-by: vadasambar <[email protected]>

chore: add OWNERS file
Signed-off-by: vadasambar <[email protected]>

feat: wip kwok provider config
- add samples for static and dynamic template nodes
Signed-off-by: vadasambar <[email protected]>

feat: wip implement pulling node templates from cluster
- add status field to kwok provider config
- this is to capture how the nodes would be grouped by (can be annotation or label)
- use kwok provider config status to get ng name from the node template
Signed-off-by: vadasambar <[email protected]>

fix: syntax error in calling `loadNodeTemplatesFromCluster`
Signed-off-by: vadasambar <[email protected]>

feat: first draft of dynamic node templates
- this allows node templates to be pulled from the cluster
- instead of having to specify static templates manually
Signed-off-by: vadasambar <[email protected]>

fix: syntax error
Signed-off-by: vadasambar <[email protected]>

refactor: abstract out related code into separate files
- use named constants instead of hardcoded values
Signed-off-by: vadasambar <[email protected]>

feat: cleanup kwok nodes when CA is exiting
- so that the user doesn't have to cleanup the fake nodes themselves
Signed-off-by: vadasambar <[email protected]>

refactor: return `nil` instead of err for `HasInstance`
- because there is no underlying cloud provider (hence no reason to return `cloudprovider.ErrNotImplemented`)
Signed-off-by: vadasambar <[email protected]>

test: start working on tests for kwok provider config
Signed-off-by: vadasambar <[email protected]>

feat: add `gpuLabelKey` under `nodes` field in kwok provider config
- fix validation for kwok provider config
Signed-off-by: vadasambar <[email protected]>

docs: add motivation doc
- update README with more details
Signed-off-by: vadasambar <[email protected]>

feat: update kwok provider config example to support pulling gpu labels and types from existing providers
- still needs to be implemented in the code
Signed-off-by: vadasambar <[email protected]>

feat: wip update kwok provider config to get gpu label and available types
Signed-off-by: vadasambar <[email protected]>

feat: wip read gpu label and available types from specified provider
- add available gpu types in kwok provider config status
Signed-off-by: vadasambar <[email protected]>

feat: add validation for gpu fields in kwok provider config
- load gpu related fields in kwok provider config status
Signed-off-by: vadasambar <[email protected]>

feat: implement `GetAvailableGPUTypes`
Signed-off-by: vadasambar <[email protected]>

feat: add support to install and uninstall kwok
- add option to disable installation
- add option to manually specify kwok release tag
- add future scope in readme
Signed-off-by: vadasambar <[email protected]>

docs: add future scope 'evaluate adding support to check if kwok controller already exists'
Signed-off-by: vadasambar <[email protected]>

fix: vendor conflict and cyclic import
- remove support to get gpu config from the specified provider (can't be used because leads to cyclic import)
Signed-off-by: vadasambar <[email protected]>

docs: add a TODO 'get gpu config from other providers'
Signed-off-by: vadasambar <[email protected]>

refactor: rename `file` -> `configmap`
- load config and templates from configmap instead of file
- move `nodes` and `nodegroups` config to top level
- add helper to encode configmap data into `[]bytes`
- add helper to get current pod namespace
Signed-off-by: vadasambar <[email protected]>

feat: add new options to the kwok provider config
- auto install kwok only if the version is >= v0.4.0
- add test for `GPULabel()`
- use `kubectl apply` way of installing kwok instead of kustomize
- add test for kwok helpers
- add test for kwok config
- inject service account name in CA deployment
- add example configmap for node templates and kwok provider config in CA helm chart
- add permission to create `clusterrolebinding` (so that kwok provider can create a clusterrolebinding with `cluster-admin` role and create/delete upstream manifests)
- update kwok provider sample configs
- update `README`
Signed-off-by: vadasambar <[email protected]>

chore: update go.mod to use v1.28 packages
Signed-off-by: vadasambar <[email protected]>

chore: `go mod tidy` and `go mod vendor` (again)
Signed-off-by: vadasambar <[email protected]>

refactor: kwok installation code
- add functions to create and delete clusterrolebinding to create kwok resources
- refactor kwok install and uninstall fns
- delete manifests in the opposite order of install
- add cleaning up left-over kwok installation to future scope
Signed-off-by: vadasambar <[email protected]>

fix: nil ptr error
- add `TODO` in README for adding docs around kwok config fields
Signed-off-by: vadasambar <[email protected]>

refactor: remove code to automatically install and uninstall `kwok`
- installing/uninstalling requires strong permissions to be granted to `kwok`
- granting strong permissions to `kwok` means granting strong permissions to the entire CA codebase
- this can pose a security risk
- I have removed the code related to install and uninstall for now
- will proceed after discussion with the community
Signed-off-by: vadasambar <[email protected]>

chore: run `go mod tidy` and `go mod vendor`
Signed-off-by: vadasambar <[email protected]>

fix: add permission to create nodes
- to fix permissions error for kwok provider
Signed-off-by: vadasambar <[email protected]>

test: add more unit tests
- add tests for kwok helpers
- fix and update kwok config tests
- fix a bug where gpu label was getting assigned to `kwokConfig.status.key`
- expose `loadConfigFile` -> `LoadConfigFile`
- throw error if templates configmap does not have `templates` key (value of which is node templates)
- finish test for `GPULabel()`
- add tests for `NodeGroupForNode()`
- expose `loadNodeTemplatesFromConfigMap` -> `LoadNodeTemplatesFromConfigMap`
- fix `KwokCloudProvider`'s kwok config was empty (this caused `GPULabel()` to return empty)
Signed-off-by: vadasambar <[email protected]>

refactor: abstract provider ID code into `getProviderID` fn
- fix provider name in test `kwok` -> `kwok:kind-worker-xxx`
Signed-off-by: vadasambar <[email protected]>

chore: run `go mod vendor` and `go mod tidy`
Signed-off-by: vadasambar <[email protected]>

docs(cloudprovider/kwok): update info on creating nodegroups based on `hostname/label`
Signed-off-by: vadasambar <[email protected]>

refactor(charts): replace fromLabelKey value `"kubernetes.io/hostname"` -> `"kwok-nodegroup"`
- `"kubernetes.io/hostname"` leads to infinite scale-up
Signed-off-by: vadasambar <[email protected]>

feat: support running CA with kwok provider locally
Signed-off-by: vadasambar <[email protected]>

refactor: use global informer factory
Signed-off-by: vadasambar <[email protected]>

refactor: use `fromNodeLabelKey: "kwok-nodegroup"` in test templates
Signed-off-by: vadasambar <[email protected]>

refactor: `Cleanup()` logic
- clean up only nodes managed by the kwok provider
Signed-off-by: vadasambar <[email protected]>

fix/refactor: nodegroup creation logic
- fix issue where fake node was getting created which caused fatal error
- use ng annotation to keep track of nodegroups
- (when creating nodegroups) don't process nodes which don't have the right ng label
- suffix ng name with unix timestamp
Signed-off-by: vadasambar <[email protected]>

refactor/test(cloudprovider/kwok): write tests for `BuildKwokProvider` and `Cleanup`
- pass only the required node lister to cloud provider instead of the entire informer factory
- pass the required configmap name to `LoadNodeTemplatesFromConfigMap` instead of passing the entire kwok provider config
- implement fake node lister for testing
Signed-off-by: vadasambar <[email protected]>

test: add test case for dynamic templates in `TestNodeGroupForNode`
- remove non-required fields from template node
Signed-off-by: vadasambar <[email protected]>

test: add tests for `NodeGroups()`
- add extra node template without ng selector label to add more variability in the test
Signed-off-by: vadasambar <[email protected]>

test: write tests for `GetNodeGpuConfig()`
Signed-off-by: vadasambar <[email protected]>

test: add test for `GetAvailableGPUTypes`
Signed-off-by: vadasambar <[email protected]>

test: add test for `GetResourceLimiter()`
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): add tests for nodegroup's `IncreaseSize()`
- abstract error msgs into variables to use them in tests
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): add test for ng `DeleteNodes()` fn
- add check for deleting too many nodes
- rename err msg var names to make them consistent
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): add tests for ng `DecreaseTargetSize()`
- abstract error msgs into variables (for easy use in tests)
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): add test for ng `Nodes()`
- add extra test case for `DecreaseTargetSize()` to check lister error
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): add test for ng `TemplateNodeInfo`
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): improve tests for `BuildKwokProvider()`
- add more test cases
- refactor lister for `TestBuildKwokProvider()` and `TestCleanUp()`
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): add test for ng `GetOptions`
Signed-off-by: vadasambar <[email protected]>

test(cloudprovider/kwok): unset `KWOK_CONFIG_MAP_NAME` at the end of the test
- not doing so leads to failure in other tests
- remove `kwokRelease` field from kwok config (not used anymore) - this was causing the tests to fail
Signed-off-by: vadasambar <[email protected]>

chore: bump CA chart version
- this is because of changes made related to kwok
- fix typo `everwhere` -> `everywhere`
Signed-off-by: vadasambar <[email protected]>

chore: fix linting checks
Signed-off-by: vadasambar <[email protected]>

chore: address CI lint errors
Signed-off-by: vadasambar <[email protected]>

chore: generate helm docs for `kwokConfigMapName`
- remove `KWOK_CONFIG_MAP_KEY` (not being used in the code)
- bump helm chart version
Signed-off-by: vadasambar <[email protected]>

docs: revise the outline for README
- add AEP link to the motivation doc
Signed-off-by: vadasambar <[email protected]>

docs: wip create an outline for the README
- remove `kwok` field from examples (not needed right now)
Signed-off-by: vadasambar <[email protected]>

docs: add outline for ascii gifs
Signed-off-by: vadasambar <[email protected]>

refactor: rename env variable `KWOK_CONFIG_MAP_NAME` -> `KWOK_PROVIDER_CONFIGMAP`
Signed-off-by: vadasambar <[email protected]>

docs: update README with info around installation and benefits of using kwok provider
- add `Kwok` as a provider in main CA README
Signed-off-by: vadasambar <[email protected]>

chore: run `go mod vendor`
- remove TODOs that are not needed anymore
Signed-off-by: vadasambar <[email protected]>

docs: finish first draft of README
Signed-off-by: vadasambar <[email protected]>

fix: env variable in chart `KWOK_CONFIG_MAP_NAME` -> `KWOK_PROVIDER_CONFIGMAP`
Signed-off-by: vadasambar <[email protected]>

refactor: remove redundant/deprecated code
Signed-off-by: vadasambar <[email protected]>

chore: bump chart version `9.30.1` -> `9.30.2`
- because of kwok provider related changes
Signed-off-by: vadasambar <[email protected]>

chore: fix typo `offical` -> `official`
Signed-off-by: vadasambar <[email protected]>

chore: remove debug log msg
Signed-off-by: vadasambar <[email protected]>

docs: add links for getting help
Signed-off-by: vadasambar <[email protected]>

refactor: fix typo in log `external cluster` -> `cluster`
Signed-off-by: vadasambar <[email protected]>

chore: add newline in chart.yaml to fix CI lint
Signed-off-by: vadasambar <[email protected]>

docs: fix mistake `sig-kwok` -> `sig-scheduling`
- kwok is a part of sig-scheduling (there is no sig-kwok)
Signed-off-by: vadasambar <[email protected]>

docs: fix typo `release"` -> `"release"`
Signed-off-by: vadasambar <[email protected]>

refactor: pass informer instead of lister to cloud provider builder fn
Signed-off-by: vadasambar <[email protected]>

* add unit test for function getScalingInstancesByGroup

* Azure: Remove AKS vmType

Signed-off-by: Jack Francis <[email protected]>

* Implement TemplateNodeInfo for civo cloudprovider

Signed-off-by: Vishal Anarse <[email protected]>

* Add comment for type and function

Signed-off-by: Vishal Anarse <[email protected]>

* refactor(*): move getKubeClient to utils/kubernetes

(cherry picked from commit b9f636d)

Signed-off-by: qianlei.qianl <[email protected]>

refactor: move logic to create client to utils/kubernetes pkg
- expose `CreateKubeClient` as public function
- make `GetKubeConfig` into a private `getKubeConfig` function (can be exposed as a public function in the future if needed)
Signed-off-by: vadasambar <[email protected]>

fix: CI failing because cloudproviders were not updated to use new autoscaling option fields
Signed-off-by: vadasambar <[email protected]>

refactor: define errors as constants
Signed-off-by: vadasambar <[email protected]>

refactor: pass kube client options by value
Signed-off-by: vadasambar <[email protected]>

* Calculate real value for template using node group

Signed-off-by: Vishal Anarse <[email protected]>

* Fix lint error

* Fix tests

Signed-off-by: Vishal Anarse <[email protected]>

* Update aws-sdk-go to 1.48.7 via tarball
Remove *_test.go, models/, examples

* Add SDK version in the log; update version in README + command

* Switch to multistage build Dockerfiles for VPA

* Adding 33 instances types

* helm chart - update cluster-autoscaler to 1.28

* Bump builder images to go 1.21.5

* feat: add metrics to show target size of every node group

* deprecate unused node-autoprovisioning-enabled and max-autoprovisioned-node-group-count flags

Signed-off-by: Prashant Rewar <[email protected]>

* fix(hetzner): insufficient nodes when boot fails

The Hetzner Cloud API returns "Actions" for anything asynchronous that
happens inside the backend. When creating a new server multiple actions
are returned: `create_server`, `start_server`, `attach_to_network` (if set).

Our current code waits for the `create_server` and if it fails, it makes
sure to delete the server so cluster-autoscaler can create a new one
immediately to provide the required capacity. If one of the "follow up"
actions fails though, we do not handle this. This causes issues when the
server for whatever reason did not start properly on the first try, as
then the customer has a shutdown server, is paying for it, but does not
receive the additional capacity for their Kubernetes cluster.

This commit fixes the bug, by awaiting all actions returned by the
create server API call, and deleting the server if any of them fail.

* Add VSCode workspace files to .gitignore

* Remove vpa/builder and switch dependabot updates to component Dockerfiles

* fix: updated readme for hetzner cloud provider

* Add error details to autoscaling backoff.

Change-Id: I3b5c62ba13c2e048ce2d7170016af07182c11eee

* Make backoff.Status.ErrorInfo non-pointer.

Change-Id: I1f812d4d6f42db97670ef7304fc0e895c837a13b

* allow specifying grpc timeout rather than hardcoded 5 seconds

Signed-off-by: lizhen <[email protected]>

* [GCE] Support paginated instance listing

* azure: fix chart bugs after AKS vmType deprecation

Signed-off-by: Jack Francis <[email protected]>

* Update VPA release README to reference 1.X VPA versions.

* implement priority based evictor and refactor drain logic

* Update dependencies to kubernetes 1.29.0

* [civo] Add Gpu count to node template

Signed-off-by: Vishal Anarase <[email protected]>
(cherry picked from commit 8703ff9)

* Restore flags for setting QPS limit in CA

Partially undo kubernetes#6274. I noticed that with this change CA gets rate limited and
slows down significantly (especially during large scale downs).

* Pass Burst and QPS client params to capi k8s clients

* Dependency update for CA 1.29.1

* feat: support `--scale-down-delay-after-*` per nodegroup
Signed-off-by: vadasambar <[email protected]>

feat: update scale down status after every scale up
- move scaledown delay status to cluster state/registry
- enable scale down if  `ScaleDownDelayTypeLocal` is enabled
- add new funcs on cluster state to get and update scale down delay status
- use timestamp instead of booleans to track scale down delay status
Signed-off-by: vadasambar <[email protected]>

refactor: use existing fields on clusterstate
- uses `scaleUpRequests`, `scaleDownRequests` and `scaleUpFailures` instead of `ScaleUpDelayStatus`
- changed the above existing fields a little to make them more convenient for use
- moved initializing scale down delay processor to static autoscaler (because clusterstate is not available in main.go)
Signed-off-by: vadasambar <[email protected]>

refactor: remove note saying only `scale-down-after-add` is supported
- because we are supporting all the flags
Signed-off-by: vadasambar <[email protected]>

fix: evaluate `scaleDownInCooldown` the old way only if `ScaleDownDelayTypeLocal` is set to `false`
Signed-off-by: vadasambar <[email protected]>

refactor: remove line saying `--scale-down-delay-type-local` is only supported for `--scale-down-delay-after-add`
- because it is not true anymore
- we are supporting all `--scale-down-delay-after-*` flags per nodegroup
Signed-off-by: vadasambar <[email protected]>

test: fix clusterstate tests failing
Signed-off-by: vadasambar <[email protected]>

refactor: move initializing processors logic back from static autoscaler to main
- we don't want to initialize processors in static autoscaler because anyone implementing an alternative to static_autoscaler has to initialize the processors
- and initializing specific processors is making static autoscaler aware of an implementation detail which might not be the best practice
Signed-off-by: vadasambar <[email protected]>

refactor: revert changes related to `clusterstate`
- since I am going with observer pattern
Signed-off-by: vadasambar <[email protected]>

feat: add observer interface for state of scaling
- to implement observer pattern for tracking state of scale up/downs (as opposed to using clusterstate to do the same)
- refactor `ScaleDownCandidatesDelayProcessor` to use fields from the new observer
Signed-off-by: vadasambar <[email protected]>

refactor: remove params passed to `clearScaleUpFailures`
- not needed anymore
Signed-off-by: vadasambar <[email protected]>

refactor: revert clusterstate tests
- approach has changed
- I am not making any changes in clusterstate now
Signed-off-by: vadasambar <[email protected]>

refactor: add accidentally deleted lines for clusterstate test
Signed-off-by: vadasambar <[email protected]>

feat: implement `Add` fn for scale state observer
- to easily add new observers
- re-word comments
- remove redundant params from `NewDefaultScaleDownCandidatesProcessor`
Signed-off-by: vadasambar <[email protected]>

fix: CI complaining because no comments on fn definitions
Signed-off-by: vadasambar <[email protected]>

feat: initialize parent `ScaleDownCandidatesProcessor`
- instead  of `ScaleDownCandidatesSortingProcessor` and `ScaleDownCandidatesDelayProcessor` separately
Signed-off-by: vadasambar <[email protected]>

refactor: add scale state notifier to list of default processors
- initialize processors for `NewDefaultScaleDownCandidatesProcessor` outside and pass them to the fn
- this allows more flexibility
Signed-off-by: vadasambar <[email protected]>

refactor: add observer interface
- create a separate observer directory
- implement `RegisterScaleUp` function in the clusterstate
- TODO: resolve syntax errors
Signed-off-by: vadasambar <[email protected]>

feat: use `scaleStateNotifier` in place of `clusterstate`
- delete leftover `scale_stateA_observer.go` (new one is already present in `observers` directory)
- register `clusterstate` with `scaleStateNotifier`
- use `Register` instead of `Add` function in `scaleStateNotifier`
- fix `go build`
- wip: fixing tests
Signed-off-by: vadasambar <[email protected]>

test: fix syntax errors
- add utils package `pointers` for converting `time` to pointer (without having to initialize a new variable)
Signed-off-by: vadasambar <[email protected]>

feat: wip track scale down failures along with scale up failures
- I was tracking scale up failures but not scale down failures
- fix copyright year 2017 -> 2023 for the new `pointers` package
Signed-off-by: vadasambar <[email protected]>

feat: register failed scale down with scale state notifier
- wip writing tests for `scale_down_candidates_delay_processor`
- fix CI lint errors
- remove test file for `scale_down_candidates_processor` (there is not much to test as of now)
Signed-off-by: vadasambar <[email protected]>

test: wip tests for `ScaleDownCandidatesDelayProcessor`
Signed-off-by: vadasambar <[email protected]>

test: add unit tests for `ScaleDownCandidatesDelayProcessor`
Signed-off-by: vadasambar <[email protected]>

refactor: don't track scale up failures in `ScaleDownCandidatesDelayProcessor`
- not needed
Signed-off-by: vadasambar <[email protected]>

test: better doc comments for `TestGetScaleDownCandidates`
Signed-off-by: vadasambar <[email protected]>

refactor: don't ignore error in `NGChangeObserver`
- return it instead and let the caller decide what to do with it
Signed-off-by: vadasambar <[email protected]>

refactor: change pointers to values in `NGChangeObserver` interface
- easier to work with
- remove `expectedAddTime` param from `RegisterScaleUp` (not needed for now)
- add tests for clusterstate's `RegisterScaleUp`
Signed-off-by: vadasambar <[email protected]>

refactor: conditions in `GetScaleDownCandidates`
- set scale down in cool down if the number of scale down candidates is 0
Signed-off-by: vadasambar <[email protected]>

test: use `ng1` instead of `ng2` in existing test
Signed-off-by: vadasambar <[email protected]>

feat: wip static autoscaler tests
Signed-off-by: vadasambar <[email protected]>

refactor: assign directly instead of using `sdProcessor` variable
- variable is not needed
Signed-off-by: vadasambar <[email protected]>

test: first working test for static autoscaler
Signed-off-by: vadasambar <[email protected]>

test: continue working on static autoscaler tests
Signed-off-by: vadasambar <[email protected]>

test: wip second static autoscaler test
Signed-off-by: vadasambar <[email protected]>

refactor: remove `Println` used for debugging
Signed-off-by: vadasambar <[email protected]>

test: add static_autoscaler tests for scale down delay per nodegroup flags
Signed-off-by: vadasambar <[email protected]>

chore: rebase off the latest `master`
- change scale state observer interface's `RegisterFailedScaleup` to reflect latest changes around clusterstate's `RegisterFailedScaleup` in `master`
Signed-off-by: vadasambar <[email protected]>

test: fix clusterstate test failing
Signed-off-by: vadasambar <[email protected]>

test: fix failing orchestrator test
Signed-off-by: vadasambar <[email protected]>

refactor: rename `defaultScaleDownCandidatesProcessor` -> `combinedScaleDownCandidatesProcessor`
- describes the processor better
Signed-off-by: vadasambar <[email protected]>

refactor: replace `NGChangeObserver` -> `NodeGroupChangeObserver`
- makes it easier to understand for someone not familiar with the codebase
Signed-off-by: vadasambar <[email protected]>

docs: reword code comment `after` -> `for which`
Signed-off-by: vadasambar <[email protected]>

refactor: don't return error from `RegisterScaleDown`
- not needed as of now (no implementer function returns a non-nil error for this function)
Signed-off-by: vadasambar <[email protected]>

refactor: address review comments around ng change observer interface
- change dir structure of nodegroup change observer package
- stop returning errors wherever it is not needed in the nodegroup change observer interface
- rename `NGChangeObserver` -> `NodeGroupChangeObserver` interface (makes it easier to understand)
Signed-off-by: vadasambar <[email protected]>

refactor: make nodegroupchange observer thread-safe
Signed-off-by: vadasambar <[email protected]>

docs: add TODO to consider using multiple mutexes in nodegroupchange observer
Signed-off-by: vadasambar <[email protected]>

refactor: use `time.Now()` directly instead of assigning a variable to it
Signed-off-by: vadasambar <[email protected]>

refactor: share code for checking if there was a recent scale-up/down/failure
Signed-off-by: vadasambar <[email protected]>

test: convert `ScaleDownCandidatesDelayProcessor` into table tests
Signed-off-by: vadasambar <[email protected]>

refactor: change scale state notifier's `Register()` -> `RegisterForNotifications()`
- makes it easier to understand what the function does
Signed-off-by: vadasambar <[email protected]>

test: replace scale state notifier `Register` -> `RegisterForNotifications` in test
- to fix syntax errors since it is already renamed in the actual code
Signed-off-by: vadasambar <[email protected]>

refactor: remove `clusterStateRegistry` from `delete_in_batch` tests
- not needed anymore since we have `scaleStateNotifier`
Signed-off-by: vadasambar <[email protected]>

refactor: address PR review comments
Signed-off-by: vadasambar <[email protected]>

fix: add empty `RegisterFailedScaleDown` for clusterstate
- fix syntax error in static autoscaler test
Signed-off-by: vadasambar <[email protected]>
(cherry picked from commit 5de49a1)

* Backport kubernetes#6522 [CA] Bump go version into CA1.29

* Backport kubernetes#6491 and kubernetes#6494 [CA] Add informer argument to the CloudProviders builder into CA1.29

* Merge pull request kubernetes#6617 from ionos-cloud/update-ionos-sdk

ionoscloud: Update ionos-cloud sdk-go and add metrics

* CA - Update k/k vendor to 1.29.3

* [v1.29][Hetzner] Fix missing ephemeral storage definition

This fixes pods with ephemeral storage requests being denied due to insufficient ephemeral storage on the Hetzner provider.

Backport of kubernetes#6574 to `v1.29` branch.

* Use cache to track vms pools

* Fix

* Add UTs

* Fix boilerplate header

* Add const

* Rename vmsPoolSet

* [v1.29][Hetzner] Fix Autoscaling for worker nodes with invalid ProviderID

This change fixes a bug that arises when the user's cluster includes
worker nodes that are not from Hetzner Cloud, such as a Hetzner Dedicated
server or a server from another provider. It also corrects the
behavior when a server has been physically deleted from Hetzner Cloud.

Signed-off-by: Maksim Paskal <[email protected]>

* Sync with upstream v1.29.0

* Sync with upstream v1.29.0

* Gofmt format

* Update go.mod for vpa

* Update go.mod for vpa/e2e

* Added mcm as exception in boilerplate

* Added integration as exception in boilerplate

* Updated README for charts

* Addressed review comments

* Addressed review comments

* Addressed review comments

* Addressed review comments

* Improved log levels for better logging

* Moved the log statement

* Added flags

---------

Signed-off-by: Jonathan Raymond <[email protected]>
Signed-off-by: Ayush Rangwala <[email protected]>
Signed-off-by: Thomas Stadler <[email protected]>
Signed-off-by: Amir Alavi <[email protected]>
Signed-off-by: dependabot[bot] <[email protected]>
Signed-off-by: Matt Dainty <[email protected]>
Signed-off-by: Cyrill Troxler <[email protected]>
Signed-off-by: vadasambar <[email protected]>
Signed-off-by: Jack Francis <[email protected]>
Signed-off-by: Vishal Anarse <[email protected]>
Signed-off-by: Prashant Rewar <[email protected]>
Signed-off-by: lizhen <[email protected]>
Signed-off-by: Maksim Paskal <[email protected]>
Co-authored-by: Daniel Lipovetsky <[email protected]>
Co-authored-by: Kubernetes Prow Robot <[email protected]>
Co-authored-by: Mathieu Bruneau <[email protected]>
Co-authored-by: Artem Minyaylov <[email protected]>
Co-authored-by: Dumlu Timuralp <[email protected]>
Co-authored-by: Hakan Bostan <[email protected]>
Co-authored-by: Rich Gowman <[email protected]>
Co-authored-by: Daniel Gutowski <[email protected]>
Co-authored-by: mikutas <[email protected]>
Co-authored-by: Jonathan Raymond <[email protected]>
Co-authored-by: Johnnie Ho <[email protected]>
Co-authored-by: aleskandro <[email protected]>
Co-authored-by: Kuba Tużnik <[email protected]>
Co-authored-by: lisenet <[email protected]>
Co-authored-by: Piotr Wrótniak <[email protected]>
Co-authored-by: Ayush Rangwala <[email protected]>
Co-authored-by: Dixita Narang <[email protected]>
Co-authored-by: Artur Żyliński <[email protected]>
Co-authored-by: Alexandros Afentoulis <[email protected]>
Co-authored-by: jw-maynard <[email protected]>
Co-authored-by: xiaoqing <[email protected]>
Co-authored-by: Thomas Stadler <[email protected]>
Co-authored-by: Marco Voelz <[email protected]>
Co-authored-by: Aleksandra Gacek <[email protected]>
Co-authored-by: Luis Ramirez <[email protected]>
Co-authored-by: piotrwrotniak <[email protected]>
Co-authored-by: Amir Alavi <[email protected]>
Co-authored-by: Michael Grosser <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shapirus <[email protected]>
Co-authored-by: Guy Templeton <[email protected]>
Co-authored-by: Mads Hartmann <[email protected]>
Co-authored-by: Thomas Güttler <[email protected]>
Co-authored-by: Prachi Gandhi <[email protected]>
Co-authored-by: Prachi Gandhi <[email protected]>
Co-authored-by: Mike Tougeron <[email protected]>
Co-authored-by: Matt Dainty <[email protected]>
Co-authored-by: Guo Peng <[email protected]>
Co-authored-by: Alex Serbul <[email protected]>
Co-authored-by: Cyrill Troxler <[email protected]>
Co-authored-by: alexanderConstantinescu <[email protected]>
Co-authored-by: Brydon Cheyney <[email protected]>
Co-authored-by: Julian Tölle <[email protected]>
Co-authored-by: Mahmoud Atwa <[email protected]>
Co-authored-by: Yaroslava Serdiuk <[email protected]>
Co-authored-by: vadasambar <[email protected]>
Co-authored-by: Jack Francis <[email protected]>
Co-authored-by: Vishal Anarse <[email protected]>
Co-authored-by: qianlei.qianl <[email protected]>
Co-authored-by: Andrea Scarpino <[email protected]>
Co-authored-by: Prashant Rewar <[email protected]>
Co-authored-by: Jont828 <[email protected]>
Co-authored-by: Pascal <[email protected]>
Co-authored-by: Walid Ghallab <[email protected]>
Co-authored-by: lizhen <[email protected]>
Co-authored-by: Daniel Kłobuszewski <[email protected]>
Co-authored-by: Luiz Antonio <[email protected]>
Co-authored-by: damikag <[email protected]>
Co-authored-by: Maciek Pytel <[email protected]>
Co-authored-by: Joachim Bartosik <[email protected]>
Co-authored-by: Kyle Weaver <[email protected]>
Co-authored-by: shubham82 <[email protected]>
Co-authored-by: Kubernetes Prow Robot <[email protected]>
Co-authored-by: wenxuanW <[email protected]>
Co-authored-by: Maksim Paskal <[email protected]>
Showing 11,039 changed files with 2,998,208 additions and 2,749,659 deletions.
18 changes: 16 additions & 2 deletions .github/dependabot.yml
@@ -8,9 +8,23 @@ updates:
labels:
- "vertical-pod-autoscaler"
- package-ecosystem: docker
directory: "/vertical-pod-autoscaler/builder"
directory: "/vertical-pod-autoscaler/pkg/recommender"
schedule:
interval: daily
open-pull-requests-limit: 10
open-pull-requests-limit: 3
labels:
- "vertical-pod-autoscaler"
- package-ecosystem: docker
directory: "/vertical-pod-autoscaler/pkg/updater"
schedule:
interval: daily
open-pull-requests-limit: 3
labels:
- "vertical-pod-autoscaler"
- package-ecosystem: docker
directory: "/vertical-pod-autoscaler/pkg/admission-controller"
schedule:
interval: daily
open-pull-requests-limit: 3
labels:
- "vertical-pod-autoscaler"
2 changes: 1 addition & 1 deletion .github/workflows/ci.yaml
@@ -17,7 +17,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: '>=1.20.0'
go-version: '1.21.6'

- uses: actions/checkout@v2
with:
6 changes: 5 additions & 1 deletion .gitignore
@@ -18,6 +18,7 @@ cluster-autoscaler/dev

# VSCode project files
**/.vscode
*.code-workspace

# Emacs save files
*~
@@ -29,4 +30,7 @@
[._]s[a-w][a-z]
*.un~
Session.vim
.netrwhist
.netrwhist

# Binary files
bin/
9 changes: 0 additions & 9 deletions LICENSES/BSD-2-Clause.txt
@@ -1,9 +0,0 @@
Copyright (c) <year> <owner>

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
11 changes: 0 additions & 11 deletions LICENSES/BSD-3-Clause.txt
@@ -1,11 +0,0 @@
Copyright (c) <year> <owner>.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
9 changes: 0 additions & 9 deletions LICENSES/MIT.txt
@@ -1,9 +0,0 @@
MIT License

Copyright (c) <year> <copyright holders>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
2 changes: 2 additions & 0 deletions addon-resizer/OWNERS
@@ -1,6 +1,8 @@
approvers:
- kwiesmueller
- jbartosik
reviewers:
- kwiesmueller
- jbartosik
emeritus_approvers:
- bskiba # 2022-09-30
1 change: 1 addition & 0 deletions builder/Dockerfile
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.


FROM golang:1.22.2
LABEL maintainer="Marcin Wielgus <[email protected]>"

4 changes: 2 additions & 2 deletions charts/cluster-autoscaler/Chart.yaml
@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.27.2
appVersion: 1.28.2
description: Scales Kubernetes worker nodes within autoscaling groups.
engine: gotpl
home: https://github.com/kubernetes/autoscaler
@@ -11,4 +11,4 @@ name: cluster-autoscaler
sources:
- https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
type: application
version: 9.29.2
version: 9.34.1
47 changes: 39 additions & 8 deletions charts/cluster-autoscaler/README.md
@@ -70,8 +70,13 @@ To create a valid configuration, follow instructions for your cloud provider:

- [AWS](#aws---using-auto-discovery-of-tagged-instance-groups)
- [GCE](#gce)
- [Azure AKS](#azure-aks)
- [Azure](#azure)
- [OpenStack Magnum](#openstack-magnum)
- [Cluster API](#cluster-api)

### Templating the autoDiscovery.clusterName

The cluster name can be templated in the `autoDiscovery.clusterName` variable. This is useful when the cluster name is dynamically generated based on other values coming from external systems like Argo CD or Flux. This also allows you to use global Helm values to set the cluster name, e.g., `autoDiscovery.clusterName=\{\{ .Values.global.clusterName }}`, so that you don't need to set it in more than 1 location in the values file.
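For example, a values file might look like the following sketch; the `global.clusterName` key is an assumed user-defined value, not something the chart itself provides:

```yaml
# values.yaml -- sketch; `global.clusterName` is an illustrative
# user-defined key, not part of the chart's own schema
global:
  clusterName: my-prod-cluster

autoDiscovery:
  # Rendered through Helm's tpl function at install time,
  # so this resolves to the value of global.clusterName
  clusterName: "{{ .Values.global.clusterName }}"
```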

### AWS - Using auto-discovery of tagged instance groups

@@ -182,20 +187,18 @@ In the event you want to explicitly specify MIGs instead of using auto-discovery
--set autoscalingGroups[n].name=https://content.googleapis.com/compute/v1/projects/$PROJECTID/zones/$ZONENAME/instanceGroupManagers/$FULL-MIG-NAME,autoscalingGroups[n].maxSize=$MAXSIZE,autoscalingGroups[n].minSize=$MINSIZE
```
### Azure AKS
### Azure
The following parameters are required:
- `cloudProvider=azure`
- `autoscalingGroups[0].name=your-agent-pool,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
- `autoscalingGroups[0].name=your-vmss,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
- `azureClientID: "your-service-principal-app-id"`
- `azureClientSecret: "your-service-principal-client-secret"`
- `azureSubscriptionID: "your-azure-subscription-id"`
- `azureTenantID: "your-azure-tenant-id"`
- `azureClusterName: "your-aks-cluster-name"`
- `azureResourceGroup: "your-aks-cluster-resource-group-name"`
- `azureVMType: "AKS"`
- `azureNodeResourceGroup: "your-aks-cluster-node-resource-group"`
- `azureVMType: "vmss"`
### OpenStack Magnum
Expand Down Expand Up @@ -230,6 +233,32 @@ Additional config parameters available, see the `values.yaml` for more details
- `clusterAPIWorkloadKubeconfigPath`
- `clusterAPICloudConfigPath`

### Exoscale

The following parameters are required:

- `cloudProvider=exoscale`
- `autoDiscovery.clusterName=<CLUSTER NAME>`

Create an Exoscale API key with appropriate permissions as described in [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md).
A secret of name `<release-name>-exoscale-cluster-autoscaler` needs to be created, containing the api key and secret, as well as the zone.

```console
$ kubectl create secret generic my-release-exoscale-cluster-autoscaler \
--from-literal=api-key="EXOxxxxxxxxxxxxxxxxxxxxxxxx" \
--from-literal=api-secret="xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" --from-literal=api-zone="ch-gva-2"
```

After creating the secret, the chart may be installed:

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set cloudProvider=exoscale \
--set autoDiscovery.clusterName=<CLUSTER NAME>
```

Read [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md) for further information on the setup without helm.

## Uninstalling the Chart

To uninstall `my-release`:
@@ -360,7 +389,7 @@ vpa:
| azureTenantID | string | `""` | Azure tenant where the resources are located. Required if `cloudProvider=azure` |
| azureUseManagedIdentityExtension | bool | `false` | Whether to use Azure's managed identity extension for credentials. If using MSI, ensure subscription ID, resource group, and azure AKS cluster name are set. You can only use one authentication method at a time, either azureUseWorkloadIdentityExtension or azureUseManagedIdentityExtension should be set. |
| azureUseWorkloadIdentityExtension | bool | `false` | Whether to use Azure's workload identity extension for credentials. See the project here: https://github.com/Azure/azure-workload-identity for more details. You can only use one authentication method at a time, either azureUseWorkloadIdentityExtension or azureUseManagedIdentityExtension should be set. |
| azureVMType | string | `"AKS"` | Azure VM type. |
| azureVMType | string | `"vmss"` | Azure VM type. |
| cloudConfigPath | string | `""` | Configuration file for cloud provider. |
| cloudProvider | string | `"aws"` | The cloud provider where the autoscaler runs. Currently only `gce`, `aws`, `azure`, `magnum` and `clusterapi` are supported. `aws` supported for AWS. `gce` for GCE. `azure` for Azure AKS. `magnum` for OpenStack Magnum, `clusterapi` for Cluster API. |
| clusterAPICloudConfigPath | string | `"/etc/kubernetes/mgmt-kubeconfig"` | Path to kubeconfig for connecting to Cluster API Management Cluster, only used if `clusterAPIMode=kubeconfig-kubeconfig or incluster-kubeconfig` |
@@ -386,8 +415,9 @@ vpa:
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
| image.pullSecrets | list | `[]` | Image pull secrets |
| image.repository | string | `"registry.k8s.io/autoscaling/cluster-autoscaler"` | Image repository |
| image.tag | string | `"v1.27.2"` | Image tag |
| image.tag | string | `"v1.28.2"` | Image tag |
| kubeTargetVersionOverride | string | `""` | Allow overriding the `.Capabilities.KubeVersion.GitVersion` check. Useful for `helm template` commands. |
| kwokConfigMapName | string | `"kwok-provider-config"` | configmap for configuring kwok provider |
| magnumCABundlePath | string | `"/etc/kubernetes/ca-bundle.crt"` | Path to the host's CA bundle, from `ca-file` in the cloud-config file. |
| magnumClusterName | string | `""` | Cluster name or ID in Magnum. Required if `cloudProvider=magnum` and not setting `autoDiscovery.clusterName`. |
| nameOverride | string | `""` | String to partially override `cluster-autoscaler.fullname` template (will maintain the release name) |
@@ -411,6 +441,7 @@ vpa:
| rbac.serviceAccount.name | string | `""` | The name of the ServiceAccount to use. If not set and create is `true`, a name is generated using the fullname template. |
| replicaCount | int | `1` | Desired number of pods |
| resources | object | `{}` | Pod resource requests and limits. |
| revisionHistoryLimit | int | `10` | The number of revisions to keep. |
| secretKeyRefNameOverride | string | `""` | Overrides the name of the Secret to use when loading the secretKeyRef for AWS and Azure env variables |
| securityContext | object | `{}` | [Security context for pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| service.annotations | object | `{}` | Annotations to add to service |
41 changes: 35 additions & 6 deletions charts/cluster-autoscaler/README.md.gotmpl
@@ -70,8 +70,13 @@ To create a valid configuration, follow instructions for your cloud provider:

- [AWS](#aws---using-auto-discovery-of-tagged-instance-groups)
- [GCE](#gce)
- [Azure AKS](#azure-aks)
- [Azure](#azure)
- [OpenStack Magnum](#openstack-magnum)
- [Cluster API](#cluster-api)

### Templating the autoDiscovery.clusterName

The cluster name can be templated in the `autoDiscovery.clusterName` variable. This is useful when the cluster name is dynamically generated based on other values coming from external systems like Argo CD or Flux. This also allows you to use global Helm values to set the cluster name, e.g., `autoDiscovery.clusterName=\{\{ .Values.global.clusterName }}`, so that you don't need to set it in more than 1 location in the values file.

### AWS - Using auto-discovery of tagged instance groups

@@ -182,20 +187,18 @@ In the event you want to explicitly specify MIGs instead of using auto-discovery
--set autoscalingGroups[n].name=https://content.googleapis.com/compute/v1/projects/$PROJECTID/zones/$ZONENAME/instanceGroupManagers/$FULL-MIG-NAME,autoscalingGroups[n].maxSize=$MAXSIZE,autoscalingGroups[n].minSize=$MINSIZE
```

### Azure AKS
### Azure

The following parameters are required:

- `cloudProvider=azure`
- `autoscalingGroups[0].name=your-agent-pool,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
- `autoscalingGroups[0].name=your-vmss,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
- `azureClientID: "your-service-principal-app-id"`
- `azureClientSecret: "your-service-principal-client-secret"`
- `azureSubscriptionID: "your-azure-subscription-id"`
- `azureTenantID: "your-azure-tenant-id"`
- `azureClusterName: "your-aks-cluster-name"`
- `azureResourceGroup: "your-aks-cluster-resource-group-name"`
- `azureVMType: "AKS"`
- `azureNodeResourceGroup: "your-aks-cluster-node-resource-group"`
- `azureVMType: "vmss"`

### OpenStack Magnum

Expand Down Expand Up @@ -230,6 +233,32 @@ Additional config parameters available, see the `values.yaml` for more details
- `clusterAPIWorkloadKubeconfigPath`
- `clusterAPICloudConfigPath`

### Exoscale

The following parameters are required:

- `cloudProvider=exoscale`
- `autoDiscovery.clusterName=<CLUSTER NAME>`

Create an Exoscale API key with appropriate permissions as described in [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md).
A secret of name `<release-name>-exoscale-cluster-autoscaler` needs to be created, containing the api key and secret, as well as the zone.

```console
$ kubectl create secret generic my-release-exoscale-cluster-autoscaler \
--from-literal=api-key="EXOxxxxxxxxxxxxxxxxxxxxxxxx" \
--from-literal=api-secret="xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" --from-literal=api-zone="ch-gva-2"
```

After creating the secret, the chart may be installed:

```console
$ helm install my-release autoscaler/cluster-autoscaler \
--set cloudProvider=exoscale \
--set autoDiscovery.clusterName=<CLUSTER NAME>
```

Read [cluster-autoscaler/cloudprovider/exoscale/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/exoscale/README.md) for further information on the setup without helm.

## Uninstalling the Chart

To uninstall `my-release`:
7 changes: 5 additions & 2 deletions charts/cluster-autoscaler/templates/_helpers.tpl
@@ -40,11 +40,14 @@ app.kubernetes.io/name: {{ include "cluster-autoscaler.name" . | quote }}


{{/*
Return labels, including instance and name.
Return labels, including instance, name and version.
*/}}
{{- define "cluster-autoscaler.labels" -}}
{{ include "cluster-autoscaler.instance-name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
helm.sh/chart: {{ include "cluster-autoscaler.chart" . | quote }}
{{- if .Values.additionalLabels }}
{{ toYaml .Values.additionalLabels }}
@@ -113,7 +116,7 @@ Return the autodiscoveryparameters for clusterapi.
*/}}
{{- define "cluster-autoscaler.capiAutodiscoveryConfig" -}}
{{- if .Values.autoDiscovery.clusterName -}}
{{- print "clusterName=" -}}{{ .Values.autoDiscovery.clusterName }}
{{- print "clusterName=" -}}{{ tpl (.Values.autoDiscovery.clusterName) . }}
{{- end -}}
{{- if and .Values.autoDiscovery.clusterName .Values.autoDiscovery.labels -}}
{{- print "," -}}
3 changes: 3 additions & 0 deletions charts/cluster-autoscaler/templates/clusterrole.yaml
@@ -42,6 +42,8 @@ rules:
verbs:
- watch
- list
- create
- delete
- get
- update
- apiGroups:
@@ -120,6 +122,7 @@ rules:
verbs:
- list
- watch
- get
- apiGroups:
- coordination.k8s.io
resources:

0 comments on commit 73b5bfe
