Commit
Added the Bietti benchmark and improved openml timeout handling.
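The commit message mentions improved openml timeout handling, but the diff below only covers the example script, so the handling itself is not shown here. As a hedged illustration only, a common pattern for tolerating transient timeouts when fetching remote datasets is a retry wrapper with backoff; the function and parameter names here are hypothetical and are not coba's actual API:

```python
import time

def fetch_with_retries(fetch, n_tries=3, backoff=1.0):
    """Call fetch(), retrying on TimeoutError with exponential backoff.

    Illustrative sketch only; this is not coba's actual openml
    timeout-handling code, which does not appear in this diff.
    """
    for attempt in range(n_tries):
        try:
            return fetch()
        except TimeoutError:
            if attempt == n_tries - 1:
                raise  # out of retries; surface the timeout to the caller
            time.sleep(backoff * (2 ** attempt))  # wait before retrying
```

Passing `backoff=0` disables the waits, which is convenient when exercising the wrapper in tests.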
Showing 5 changed files with 127 additions and 25 deletions.
@@ -1,18 +1,18 @@
 """
 This is an example script that creates and executes an Experiment.
-This script requires that the matplotlib and vowpalwabbit packages be installed.
+This script depends on the matplotlib and vowpalwabbit packages.
 """
 
 import coba as cb
 
-#First, we define the learners that we want to test
+#First, we define the learners that we wish to evaluate
 learners = [ cb.VowpalEpsilonLearner(), cb.RandomLearner() ]
 
-#Next we create an environment we'd like to evaluate against
+#Next, we create an environment we'd like to evaluate against
 environments = cb.Environments.from_linear_synthetic(1000, n_action_features=0).shuffle([1,2,3])
 
-#We then create and run our experiment from our environments and learners
+#We then create and run an experiment using our environments and learners
 result = cb.Experiment(environments,learners).run()
 
-#After evaluating can create a quick summary plot to get a sense of how the learners performed
+#Finally, we can plot the results of our experiment
 result.plot_learners(y='reward',err='se',xlim=(10,None))