Chatbot implementations based on a movie corpus dataset

Utils and modules for NLP, audio, and multimodal processing using scikit-learn and PyTorch.

Training:

  1. Create a virtual environment and run pip install -r requirements.txt to install the required packages.
  2. Add the MovieTriples data to: ./data/MovieTriples
  3. Add the embeddings file to: ./cache
  4. From the cloned repository root, run in a terminal: export PYTHONPATH=./
  5. Run: python experiments/hred/training/hred_movie_triples.py -epochs 80 -lr 0.0005 -ckpt ./checkpoints/hred -emb_dim 300 (a consolidated sketch of these steps follows this list)
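Putting the steps together, a minimal end-to-end setup might look like the sketch below (assuming a Unix shell, and that the MovieTriples data and the pretrained embeddings file have already been obtained and placed in the locations listed above):

```bash
# Create and activate a virtual environment, then install dependencies.
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Expected locations for data, embeddings, and checkpoints
# (the MovieTriples data and the embeddings file must be obtained separately).
mkdir -p data/MovieTriples cache checkpoints

# Make the repository root importable, then launch HRED training.
export PYTHONPATH=./
python experiments/hred/training/hred_movie_triples.py \
    -epochs 80 -lr 0.0005 -ckpt ./checkpoints/hred -emb_dim 300
```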

You can also add more options:

-shared (use shared weights between the encoder and decoder)

-shared_emb (use a shared embedding layer for the encoder and decoder)

-emb drop 0.2 (embeddings dropout)

-encembtrain (train the encoder embeddings)

-decembtrain (train the decoder embeddings)

(see the argparser in experiments/hred/training/hred_movie_triples.py for the full list of options; an example combining some of these flags is shown below)
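For example, a run that shares both the weights and the embedding layer between the encoder and decoder simply appends those flags to the base command from step 5 (an illustrative combination, not one prescribed by the repository):

```bash
# Illustrative only: the base training command extended with the optional
# weight-sharing and embedding-sharing flags described above.
python experiments/hred/training/hred_movie_triples.py \
    -epochs 80 -lr 0.0005 -ckpt ./checkpoints/hred -emb_dim 300 \
    -shared -shared_emb
```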

Running experiments:

  1. python experiments/hred/training/hred_movie_triples.py -epochs 80 -lr 0.0005 -ckpt ./checkpoints/ -emb_dim 300 -decr_tc_ratio -encembtrain -decembtrain -shared_emb

  2. python experiments/hred/training/hred_movie_triples.py -epochs 80 -lr 0.0005 -ckpt ./checkpoints/ -emb_dim 300
