This repository provides inference scripts for Neural FCA proposed in our paper: Neural Full-rank Spatial Covariance Analysis for Blind Source Separation.
Please cite as:
@article{bando2021neural,
title={Neural Full-Rank Spatial Covariance Analysis for Blind Source Separation},
author={Bando, Yoshiaki and Sekiguchi, Kouhei and Masuyama, Yoshiki and Nugraha, Aditya Arie and Fontaine, Mathieu and Yoshii, Kazuyoshi},
journal={IEEE Signal Processing Letters},
volume={28},
pages={1670--1674},
year={2021},
publisher={IEEE}
}
Neural FCA was developed with Python 3.8. Its dependencies can be installed with:
pip install -r requirements.txt
The pre-trained model used in the paper for separating speech mixtures can be downloaded and extracted as follows:
wget https://github.com/yoshipon/spl2021_neural-fca/releases/download/spl2021/model.zip
unzip model.zip
- The pre-trained model can separate four-channel two-speech mixtures as follows:
python neural-fca/separate.py model/ input.wav output.wav
The model predicts three sources, assuming two target speech sources and one noise source.
- If you want to perform the separation without inference-time parameter updates, run the following command:
python neural-fca/separate.py model/ input.wav output.wav --n_iter=0
- You can obtain neural-fca.png, which shows the mixture and separated source spectrograms, by:
python neural-fca/separate.py model/ input.wav output.wav --n_iter=0 --plot
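As background for what the separation step does: in full-rank spatial covariance models, once the source power spectral densities and spatial covariance matrices are estimated, the source images are extracted by multichannel Wiener filtering. The sketch below is purely illustrative (it is not the repository's code, and all names and shapes are assumptions):

```python
import numpy as np

def multichannel_wiener_filter(x, v, R):
    """Illustrative multichannel Wiener filter (not the repository's code).

    x: mixture STFT, shape (F, T, M)        -- freq bins, frames, channels
    v: source power spectral densities, shape (N, F, T)
    R: full-rank spatial covariance matrices, shape (N, F, M, M)
    returns: estimated source images, shape (N, F, T, M)
    """
    M = x.shape[-1]
    # Mixture covariance at each (f, t): sum_n v[n, f, t] * R[n, f]
    Rx = np.einsum("nft,nfpq->ftpq", v, R)
    # Small diagonal loading for numerical stability before inversion
    Rx_inv = np.linalg.inv(Rx + 1e-6 * np.eye(M))
    # Wiener gain per source: v[n] * R[n] @ Rx^{-1}
    W = np.einsum("nft,nfpq,ftqr->nftpr", v, R, Rx_inv)
    # Apply each gain to the mixture to obtain source images
    return np.einsum("nftpq,ftq->nftp", W, x)
```

A useful sanity check of this formulation is that the per-source Wiener gains sum to (approximately) the identity, so the estimated source images sum back to the observed mixture.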
This repository is released under the MIT License. The pre-trained model is released under the Creative Commons BY-NC-ND 4.0 License.
Yoshiaki Bando, [email protected]
National Institute of Advanced Industrial Science and Technology, Japan