Mel model #44
Comments
No, it is not well tested for mel; contributions are always welcome.
I am working with the Mel version in my reimplementation, AMA!
@ex3ndr
@atmbb Which task?
@ex3ndr Diverse sampling is working well.
@atmbb I just restarted training my model from scratch. I am now at step 27651, training on just two 4090s with a batch size of 16 * 8 per GPU, which is quite small compared to the original paper. It somewhat follows the prompt, but it is too early to tell. In my previous run I trained for 400k steps and it followed prompts correctly.
@atmbb I remembered one thing: ALiBi requires longer training sequences than are easily available. I have been training on segments of at most 5 seconds, and the audio style collapsed after ~5 seconds. I saw the same problem where the audio was conditioned well for a few seconds and then collapsed; eventually I figured out that with longer conditioning audio I had fewer "valid" seconds. ALiBi starts to work at around 300k iterations in my case, but longer-context training is still required. The funny thing is that the authors mention degradation on longer conditioning: they saw it start at ~15 seconds, which is exactly their training context size. Looking at the ALiBi coefficients, I think a training context of ~2k positions or more is needed to generalize well. No one should expect ALiBi to generalize after just 500 positions (~5 seconds); the coefficients are simply not steep enough for distant attention to vanish.
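To make the coefficient argument concrete, here is a minimal sketch in plain Python using the standard ALiBi slope schedule (Press et al., 2022). The head count of 16 is an illustrative assumption, not this repo's configuration:

```python
import math

def alibi_slopes(n_heads: int) -> list[float]:
    # Standard ALiBi slopes: a geometric sequence 2^(-8/n), 2^(-16/n), ...
    # (valid as written for power-of-two head counts).
    start = 2.0 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

slopes = alibi_slopes(16)
for dist in (500, 2000):
    # ALiBi adds -slope * distance to each attention logit.
    shallow, steep = -slopes[-1] * dist, -slopes[0] * dist
    print(f"distance {dist:4d}: shallowest head {shallow:7.2f} logits, "
          f"steepest head {steep:9.1f} logits")
```

At 500 positions the shallowest head's bias is only about -2 logits, so distant frames are attenuated but nowhere near masked out; at ~2000 positions it reaches about -8 logits, which is much closer to vanishing.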
Employing a different backbone network than the one used in the voicebox paper (a Transformer with only convolutional positional encoding) to implement the ODE model, I have achieved good zero-shot performance. However, a multi-layer Transformer with a single convolutional positional-encoding layer still does not work on Mel in my experiments. I speculate that the original paper may have used multiple layers of convolutional positional encoding before the Transformer module. I'll try to contribute the code that worked well on Mel.
zero-shot.zip |
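For reference, a hedged PyTorch sketch of the speculation above: stacking several convolutional positional-encoding layers before the Transformer, in the spirit of wav2vec 2.0's single conv positional embedding. The module name, kernel size, group count, and layer count are all illustrative assumptions, not the voicebox paper's confirmed architecture:

```python
import torch
import torch.nn as nn

class ConvPositionalEncoding(nn.Module):
    """One depthwise-grouped conv layer that injects positions as a residual."""
    def __init__(self, dim: int, kernel_size: int = 31, groups: int = 16):
        super().__init__()
        # Odd kernel with symmetric padding keeps the frame count unchanged.
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=groups)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) -> conv over time -> residual add.
        y = self.act(self.conv(x.transpose(1, 2))).transpose(1, 2)
        return x + y

# Several stacked layers instead of the usual single one.
pos_encoder = nn.Sequential(*[ConvPositionalEncoding(512) for _ in range(3)])
mel_features = torch.randn(2, 400, 512)   # (batch, frames, dim), dummy input
out = pos_encoder(mel_features)           # feed this into the Transformer
```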
I have published a beta: https://github.com/ex3ndr/supervoice
May I ask whether this implementation of the model has been experimented with on Mel spectrograms? I used a Transformer model with only convolutional positional encoding added at the beginning and got discontinuous generation results.