Tutorial
Tutorials on vowpalwabbit.org/tutorials
For the most up-to-date tutorials, go to the tutorials section of the website.
The tutorials below were produced over the years and generally cover the features added in each release. Some may be slightly out of date due to their age, but they are still of value.
- Sparse model, baseline, optimized exploration
- Cost Sensitive Active Learning
- Java interface
- Decision Service JSON Ingestion
- Intro + reductions + log_multi + exploration library + AzureML
- Polynomial learning
- LRQ + Hogwild mode
- Online kernel SVM
- Learning2Search + python
- Entity Relation and Dependency Parsing
- The Learning Reduction and Searn systems
- Several improvements from Zhen including holdout, bootstrap, early termination, top k
- Normalized Gradient Descent from Paul.
- A new Active Learning interface from Nikos.
- Version 7.0 tutorial
- It covers the basics and the most common options: how to use VW and its data format for different types of problems, such as Binary Classification, Regression, Multiclass Classification, Cost-Sensitive Multiclass Classification, "Offline" Contextual Bandit, and Sequence Predictions (a minimal data-format sketch follows this list). Many of the more advanced flags and data-format features are not covered; refer to the earlier tutorials for those details.
- The tutorial and other items below cover some topics that the version 7 tutorial does not, since they have not changed in that version.
- Version 6.1 tutorial introduction
- Description and use of L-BFGS.
- Cluster parallel learning.
- Active Learning (v5.0 presentation, but little changed)
- Latent Dirichlet Allocation (v5.0 presentation, little changed). See also Latent Dirichlet Allocation.
- Vowpal Wabbit tutorial for the Uninitiated by Rob Zinkov
- Version 5.0 Videolecture
- Version 5.0 tutorial
- The importance weight invariant update rule (covered in the 6.1 intro).
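As a rough sketch of the data formats referenced in the version 7 tutorial entry above, the examples below show one line per example in the form `label | feature:value ...`. The file names and feature names are made up for illustration; the flags used (`--loss_function logistic`, `--oaa`, `--cb`, `-d`, `-f`) are standard VW options, but the specific values here are only an assumed toy setup.

```sh
# Binary classification: labels are -1/+1 (feature names are illustrative).
cat > binary_train.txt <<'EOF'
1 | price:0.23 sqft:0.25 age:0.05
-1 | price:0.18 sqft:0.15 age:0.35
EOF
vw -d binary_train.txt --loss_function logistic -f binary.model

# Multiclass with one-against-all over 3 classes: labels are 1..3.
cat > multiclass_train.txt <<'EOF'
1 | w_the w_quick w_brown
3 | w_lazy w_dog
EOF
vw -d multiclass_train.txt --oaa 3 -f oaa.model

# "Offline" contextual bandit: chosen_action:cost:probability | features.
cat > cb_train.txt <<'EOF'
1:0.5:0.25 | user_age:25 region_west
3:0.0:0.50 | user_age:40 region_east
EOF
vw -d cb_train.txt --cb 4 -f cb.model
```

The tutorials linked above walk through these formats and options in much more detail.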