Thanks for your interest in this library!! I have gotten a few recent e-mails about it, and I have been responding to them with this message:
Thanks for your interest in this library. Since I wrote this wrapper several years ago, the tools that enable Bayesian optimization in Python have greatly improved. In fact, they are so good that I can no longer recommend using my wrapper for experimentation. I would highly recommend checking out fmfn's Bayesian optimization code for a more modern, easier-to-use codebase. There are quite a few examples in its "examples" directory that illustrate how to use it.
https://github.com/fmfn/BayesianOptimization
I really like the Bayesian optimization tool Spearmint (https://github.com/HIPS/Spearmint), but I found that some of its functionality was confusing and overkill if one is just trying to run a few tests locally on some data. I have built some tools to streamline the process of small-scale hyperparameter optimization. Hopefully the tool is simple enough to understand -- it's just a few Python scripts. I mostly made this repository for myself, but if others find it useful, that's awesome too!
Features:

- Basic hyperparameter optimization, geared towards supervised learning tasks
- Train/validation/test splits

Requirements:

- Python 2.7
- numpy
- Spearmint (and all of its dependencies)
- PrettyTable
Included with this repository is an example configuration and dataset, under the `example` directory. Copy `config.json` and `experiment.py` into the root directory of this repository, and then unzip `exampleData` into the `experiments` directory. Then, run `setupExperiments.py`, followed by `viewExperiments.py`.
- Get your dataset and determine how many train/val/test splits you want to make
- Fill the experiment folder with your data from each of your splits (see the readme in that folder for specifics)
- Fill in the hyperparameters you want to optimize in `config.json` (see Spearmint's config JSON examples, or `example/config.json`)
- Fill in your training/evaluation functions in `experiment.py` (see the comments in the file for help, and the example in `example/experiment.py`)
- Set the 3 parameters in `basicSpearmint.json` appropriately
- Run `python setupExperiments.py` and watch the magic of Bayesian optimization!
- Once that is done, run `python viewResults.py` and wait for your results to be tested
- Test results using the best hyperparameter settings will be printed in a table, and validation results will be saved in pickle files under the `results` directory
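To make the training/evaluation step more concrete, here is a hypothetical sketch of what the two functions in `experiment.py` might look like. The function names, arguments, and the trivial "model" are placeholders I made up, not the actual interface this wrapper expects -- see the comments in `experiment.py` itself for the real signatures:

```python
# Hypothetical sketch: train() fits a model with the hyperparameters
# Spearmint proposes, and evaluate() returns a validation score for it.

def train(train_data, hyperparams):
    # train_data is a list of (features, label) pairs;
    # hyperparams is e.g. {'learning_rate': 0.1, 'reg': 0.01}.
    # Here, a stand-in "model" that just remembers the mean label.
    labels = [y for _, y in train_data]
    model = {'mean': sum(labels) / float(len(labels)),
             'params': hyperparams}
    return model

def evaluate(model, val_data):
    # Return a score for the optimizer to minimize (here, mean squared error).
    errors = [(model['mean'] - y) ** 2 for _, y in val_data]
    return sum(errors) / float(len(errors))

data = [((0.0,), 1.0), ((1.0,), 3.0)]
m = train(data, {'learning_rate': 0.1})
print(evaluate(m, data))  # 1.0
```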