
FR: Propagate hyperparameters of optimizers in kedro #25

Open
stroblme opened this issue Sep 29, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

@stroblme
Member

Is your feature request related to a problem? Please describe.
The learning rate cannot be set individually per optimizer.

Describe the solution you'd like
Introduce individual learning rates per optimizer, together with a concept for handling optimizer-specific hyperparameters.

Describe alternatives you've considered
N/A

Additional context
N/A

@stroblme stroblme added the enhancement New feature or request label Sep 29, 2023
@stroblme
Member Author

Implemented this issue using an optimizer section in the data_science.yml file. Note that only one of the combined and split sections may be active at a time; the other one has to be commented out.

optimizer:
  combined:
    name: Adam  # options: Adam, SGD
    lr: 0.001
  split:
    classical:
      name: SGD  # options: Adam, SGD
      lr: 0.03
    quantum:
      name: SPSA  # options: Adam, SPSA, SGD, NGD, QNG
      lr: 0.04
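
For illustration, a minimal sketch of how this section could be resolved into optimizer instances. It assumes PyTorch for the classical optimizers; build_optimizers and the OPTIMIZERS registry are hypothetical names, and the quantum optimizers (SPSA, NGD, QNG) would come from the project's own implementations, which are omitted here:

import torch

# Hypothetical registry mapping config names to optimizer classes.
# SPSA, NGD and QNG would map to the project's quantum optimizer
# implementations, not shown in this sketch.
OPTIMIZERS = {
    "Adam": torch.optim.Adam,
    "SGD": torch.optim.SGD,
}

def build_optimizers(optimizer_cfg, classical_params, quantum_params):
    # Only one of the two sections is expected to be present, since the
    # other one has to be commented out in data_science.yml.
    if "combined" in optimizer_cfg:
        cfg = optimizer_cfg["combined"]
        # A single optimizer with one learning rate over all parameters.
        return OPTIMIZERS[cfg["name"]](
            list(classical_params) + list(quantum_params), lr=cfg["lr"]
        )
    split = optimizer_cfg["split"]
    # Separate optimizers, each with its own learning rate.
    classical_opt = OPTIMIZERS[split["classical"]["name"]](
        classical_params, lr=split["classical"]["lr"]
    )
    quantum_opt = OPTIMIZERS[split["quantum"]["name"]](
        quantum_params, lr=split["quantum"]["lr"]
    )
    return classical_opt, quantum_opt

The split variant is what enables individual learning rates; the combined variant keeps the previous single-optimizer behaviour.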
