We aim to build a toolkit focusing on front-end processing in the cocktail-party setup, including target speaker extraction and speech separation (future work).
- Clone this repo

  ``` bash
  git clone https://github.com/wenet-e2e/wesep.git
  ```

- Create a conda env (PyTorch >= 1.12.0 is required!)

  ``` bash
  conda create -n wesep python=3.9
  conda activate wesep
  conda install pytorch=1.12.1 torchaudio=0.12.1 cudatoolkit=11.3 -c pytorch -c conda-forge
  pip install -r requirements.txt
  pre-commit install  # for clean and tidy code
  ```
Target speaker extraction (TSE) focuses on isolating the speech of a specific target speaker from overlapped multi-talker speech, which is a typical setup in the cocktail-party problem. WeSep features flexible target speaker modeling, scalable data management, effective on-the-fly data simulation, structured recipes and deployment support.
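Conceptually, a TSE model takes a multi-talker mixture together with a speaker embedding of the target speaker (e.g., extracted from an enrollment utterance) and outputs only that speaker's speech. The minimal PyTorch sketch below only illustrates this interface; the module, its layers, and the tensor shapes are illustrative assumptions and do not reflect WeSep's actual models or API.

``` python
import torch
import torch.nn as nn

class ToyTSEModel(nn.Module):
    """Illustrative target speaker extraction skeleton (not WeSep's API).

    The mixture is encoded, fused with a target-speaker embedding,
    and decoded back into an estimate of the target speech.
    """

    def __init__(self, feat_dim: int = 256, emb_dim: int = 256):
        super().__init__()
        self.encoder = nn.Conv1d(1, feat_dim, kernel_size=16, stride=8)
        self.fusion = nn.Linear(feat_dim + emb_dim, feat_dim)  # simple concat fusion
        self.separator = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.decoder = nn.ConvTranspose1d(feat_dim, 1, kernel_size=16, stride=8)

    def forward(self, mixture: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # mixture: (batch, samples), spk_emb: (batch, emb_dim)
        feats = self.encoder(mixture.unsqueeze(1)).transpose(1, 2)  # (B, T, F)
        emb = spk_emb.unsqueeze(1).expand(-1, feats.size(1), -1)    # (B, T, E)
        fused = self.fusion(torch.cat([feats, emb], dim=-1))        # (B, T, F)
        hidden, _ = self.separator(fused)
        return self.decoder(hidden.transpose(1, 2)).squeeze(1)      # (B, samples)

mix = torch.randn(2, 16000)    # 1 s of 16 kHz mixed audio
emb = torch.randn(2, 256)      # target speaker embedding (e.g., from a speaker model)
est = ToyTSEModel()(mix, emb)  # estimated target speech
```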
- On-the-fly data simulation (see the dynamic mixing sketch after this list)
  - Dynamic mixture simulation
  - Dynamic reverb simulation
  - Dynamic noise simulation
- Support for time- and frequency-domain models
  - Time-domain
    - Conv-TasNet based models
      - SpEx+
  - Frequency-domain
    - pBSRNN
    - pDPCCN
    - TF-GridNet (extremely slow, needs double-checking)
- Training criteria
  - SI-SNR loss (see the SI-SNR sketch after this list)
  - GAN loss (needs further investigation)
- Datasets
  - Libri2Mix (illustration for pre-mixed speech)
  - VoxCeleb (illustration for online training)
  - WSJ0-2Mix
- Speaker embedding
  - WeSpeaker integration
  - Jointly learned speaker embedding
  - Different fusion methods (see the fusion sketch after this list)
- Pretrained models
- CLI usage
- Runtime
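The on-the-fly simulation listed above boils down to mixing clean sources at a random signal-to-interference ratio and optionally adding reverberation and noise at training time, instead of relying on a pre-mixed corpus. Below is a minimal sketch of that idea in plain PyTorch; the function names and the SNR ranges are illustrative assumptions, not WeSep's actual processors.

``` python
import random
from typing import Optional, Tuple

import torch

def fft_convolve(x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Linear convolution via FFT, used here to apply a room impulse response."""
    n = x.numel() + h.numel() - 1
    return torch.fft.irfft(torch.fft.rfft(x, n) * torch.fft.rfft(h, n), n)

def scale_to_snr(signal: torch.Tensor, reference: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Scale `signal` so that `reference` sits `snr_db` dB above it."""
    ref_power = reference.pow(2).mean()
    sig_power = signal.pow(2).mean().clamp(min=1e-8)
    return signal * torch.sqrt(ref_power / (sig_power * 10 ** (snr_db / 10)))

def dynamic_mix(target: torch.Tensor,
                interference: torch.Tensor,
                noise: torch.Tensor,
                rir: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor]:
    """Illustrative on-the-fly simulation (not WeSep's actual processor).

    All inputs are 1-D waveforms of the same length; `rir` is an optional
    room impulse response. Returns (mixture, target label).
    """
    if rir is not None:
        # Dynamic reverb simulation: convolve sources with the impulse response.
        length = target.numel()
        target = fft_convolve(target, rir)[:length]
        interference = fft_convolve(interference, rir)[:length]
    # Dynamic mixture simulation: interfering speaker at a random SIR in [-5, 5] dB.
    mixture = target + scale_to_snr(interference, target, random.uniform(-5, 5))
    # Dynamic noise simulation: additive noise at a random SNR in [5, 20] dB.
    mixture = mixture + scale_to_snr(noise, target, random.uniform(5, 20))
    return mixture, target
```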
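The SI-SNR (scale-invariant signal-to-noise ratio) criterion listed above is the standard time-domain objective for separation and extraction models. The short implementation below is a reference for the definition only and is not taken verbatim from WeSep's code.

``` python
import torch

def si_snr_loss(estimate: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative SI-SNR, averaged over the batch. Shapes: (batch, samples)."""
    # Zero-mean both signals so the measure is invariant to DC offsets.
    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
    target = target - target.mean(dim=-1, keepdim=True)
    # Project the estimate onto the target to get the scaled target component.
    dot = (estimate * target).sum(dim=-1, keepdim=True)
    s_target = dot * target / (target.pow(2).sum(dim=-1, keepdim=True) + eps)
    e_noise = estimate - s_target
    si_snr = 10 * torch.log10(
        s_target.pow(2).sum(dim=-1) / (e_noise.pow(2).sum(dim=-1) + eps) + eps
    )
    return -si_snr.mean()
```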
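Several ways of fusing the speaker embedding with the mixture representation are commonly used, e.g., concatenation, addition or multiplication. The sketch below shows these three simple options for illustration only; the interface and names are assumptions and do not mirror WeSep's fusion modules.

``` python
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    """Fuse frame-level mixture features (B, T, F) with a speaker embedding (B, E)."""

    def __init__(self, feat_dim: int, emb_dim: int, method: str = "concat"):
        super().__init__()
        self.method = method
        if method == "concat":
            self.proj = nn.Linear(feat_dim + emb_dim, feat_dim)
        else:  # "add" and "multiply" need the embedding projected to feat_dim
            self.proj = nn.Linear(emb_dim, feat_dim)

    def forward(self, feats: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        emb = spk_emb.unsqueeze(1).expand(-1, feats.size(1), -1)  # broadcast over frames
        if self.method == "concat":
            return self.proj(torch.cat([feats, emb], dim=-1))
        if self.method == "add":
            return feats + self.proj(emb)
        if self.method == "multiply":
            return feats * self.proj(emb)
        raise ValueError(f"unknown fusion method: {self.method}")
```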
Following WeNet and WeSpeaker, WeSep organizes its data processing modules as a pipeline of chained processors. The following figure shows such a pipeline with the essential processors.
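In this design, each processor consumes a stream of samples and yields transformed samples, so a dataset is built by chaining processors. The sketch below shows the general pattern with generator functions; the processor names here are placeholders, not WeSep's actual module names.

``` python
from typing import Any, Callable, Dict, Iterable, Iterator

Sample = Dict[str, Any]
Processor = Callable[[Iterable[Sample]], Iterator[Sample]]

def resample(stream: Iterable[Sample]) -> Iterator[Sample]:
    """Placeholder processor: would resample sample["wav"] to the target rate."""
    for sample in stream:
        yield sample

def simulate_mixture(stream: Iterable[Sample]) -> Iterator[Sample]:
    """Placeholder processor: would mix speakers / add reverb and noise on the fly."""
    for sample in stream:
        yield sample

def build_pipeline(stream: Iterable[Sample],
                   processors: Iterable[Processor]) -> Iterator[Sample]:
    """Chain processors so each one consumes the output of the previous one, lazily."""
    for processor in processors:
        stream = processor(stream)
    return iter(stream)

# Samples flow through the chain one at a time, so nothing is materialized up front.
raw = iter([{"key": "utt1"}, {"key": "utt2"}])
for sample in build_pipeline(raw, [resample, simulate_mixture]):
    print(sample["key"])
```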
Chinese users can scan the QR code on the left to join our group directly. If it has expired, please scan the personal WeChat QR code on the right.
If you find WeSep useful, please cite it as:
``` bibtex
@inproceedings{wang24fa_interspeech,
  title     = {WeSep: A Scalable and Flexible Toolkit Towards Generalizable Target Speaker Extraction},
  author    = {Shuai Wang and Ke Zhang and Shaoxiong Lin and Junjie Li and Xuefei Wang and Meng Ge and Jianwei Yu and Yanmin Qian and Haizhou Li},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {4273--4277},
  doi       = {10.21437/Interspeech.2024-1840},
}
```