Utilities and modules for NLP, audio, and multimodal processing built on scikit-learn and PyTorch.
Training:
- Create a virtual environment and install the required packages with: pip install -r requirements.txt
- Add Movie Triples data to: ./data/MovieTriples
- Add the embeddings file to: ./cache
- From the cloned repository root, run: export PYTHONPATH=./
- python experiments/hred/training/hred_movie_triples.py -epochs 80 -lr 0.0005 -ckpt ./checkpoints/hred -emb_dim 300
You can also add more options:
-shared (to use shared weights between encoder and decoder)
-shared_emb (to use shared embedding layer for encoder and decoder)
-emb_drop 0.2 (embedding dropout)
-encembtrain (to train encoder embeddings)
-decembtrain (to train decoder embeddings)
(see argparser in experiments/hred/training/hred_movie_triples.py for more)
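For reference, a minimal sketch of how such an argparser might look. The flag names and defaults below are taken from this README; anything beyond that (the help strings, the exact defaults for flags whose defaults are not stated) is an assumption, and the authoritative definitions live in experiments/hred/training/hred_movie_triples.py.

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of the training argparser; the real one
    # is in experiments/hred/training/hred_movie_triples.py.
    parser = argparse.ArgumentParser(description="HRED training on MovieTriples")
    parser.add_argument("-epochs", type=int, default=80, help="number of training epochs")
    parser.add_argument("-lr", type=float, default=0.0005, help="learning rate")
    parser.add_argument("-ckpt", type=str, default="./checkpoints/hred", help="checkpoint directory")
    parser.add_argument("-emb_dim", type=int, default=300, help="embedding dimension")
    parser.add_argument("-shared", action="store_true", help="share weights between encoder and decoder")
    parser.add_argument("-shared_emb", action="store_true", help="share one embedding layer for encoder and decoder")
    parser.add_argument("-emb_drop", type=float, default=0.0, help="embedding dropout")
    parser.add_argument("-encembtrain", action="store_true", help="train encoder embeddings")
    parser.add_argument("-decembtrain", action="store_true", help="train decoder embeddings")
    parser.add_argument("-decr_tc_ratio", action="store_true", help="decay the teacher-forcing ratio during training")
    return parser

# Parse the same arguments used in the training command above.
args = build_parser().parse_args(
    ["-epochs", "80", "-lr", "0.0005", "-emb_dim", "300", "-shared_emb"]
)
print(args.epochs, args.lr, args.shared_emb)
```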
Running experiments:
- python experiments/hred/training/hred_movie_triples.py -epochs 80 -lr 0.0005 -ckpt ./checkpoints/ -emb_dim 300 -decr_tc_ratio -encembtrain -decembtrain -shared_emb
- python experiments/hred/training/hred_movie_triples.py -epochs 80 -lr 0.0005 -ckpt ./checkpoints/ -emb_dim 300