
Comments (10)

ivanvovk commented on August 24, 2024

@quangnh-2761 Hi! Yes, we've tried several experiments with an end2end pipeline, but for some reason training on raw waveforms with a mean-shifted terminal distribution was not stable enough. The samples you've listened to at the demo page were synthesized with the end2end model, which generates audio from pure Gaussian noise, meaning the denoising process starts from $\mathcal{N}(0, \mathbf{I})$, not from $\mathcal{N}(\mu, \mathbf{I})$ (however, there are recent studies on how to apply this concept to raw waveforms, check the SpecGrad paper). The WaveGrad decoder architecture was conditioned on aligned mel-spectrograms obtained directly from the text encoder. Thus, to train the duration predictor with MAS we still used the $\mathcal{L}_{enc}$ loss on mel-spectrograms. Conditioning the decoder on uncurated, sharp mel-spectrograms from the text encoder is the main reason why the final quality is bad. Adding some intermediate layers like in current end2end pipelines (VITS, NaturalSpeech) or significantly increasing the number of WaveGrad parameters (like in WaveGrad 2) can potentially solve the problem.
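
For readers comparing the two setups, here is a minimal sketch (my own illustration, not the repository's actual code) of how the reverse process would be initialized in each case; `mu`, the shapes, and the sample count are placeholder assumptions:

```python
import torch

batch, n_mels, n_frames = 4, 80, 400        # illustrative shapes
mu = torch.zeros(batch, n_mels, n_frames)   # aligned text-encoder output (placeholder)

# Grad-TTS on mel-spectrograms: the terminal distribution is mean-shifted,
# so denoising starts from N(mu, I).
z_mel = mu + torch.randn_like(mu)

# End-to-end variant on raw waveforms: the terminal distribution is pure
# Gaussian, so denoising starts from N(0, I) in the waveform domain.
num_samples = 22050
z_wave = torch.randn(batch, 1, num_samples)

# Each tensor would then be passed through the reverse diffusion loop of the
# corresponding decoder (mel decoder or WaveGrad-style waveform decoder).
```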


quangnh-2761 commented on August 24, 2024

Sorry, but I cannot share samples from my datasets because of privacy policy. However, I am also experimenting with LJSpeech and will send some samples when training finishes.


quangnh-2761 commented on August 24, 2024

Thank you, I will check your work on the fast solver. As for my dataset, its language (Vietnamese) is monosyllabic with no connection between words in pronunciation, so I think it's easier to learn and harder to spot errors; IMO the required model size is somewhat data-dependent.


quangnh-2761 commented on August 24, 2024

Thanks for your informative response. I will reproduce this on my data and try other end2end architectures to check if they help.


quangnh-2761 commented on August 24, 2024

WaveGrad large indeed helps, but interestingly, the outputs on my dataset with the WaveGrad base architecture are not distorted; I will experiment with LJSpeech to find out what the difference is. Can I ask which noise schedule you tried? I struggled to find suitable $\beta_0$ and $\beta_1$ and decided to train with $1e-4$ to $1$ (corresponding to 100 steps from $1e-6$ to $1e-2$ in DDPM).


ivanvovk commented on August 24, 2024

@quangnh-2761 Great! We used the same noise schedule as in the original WaveGrad work: $[1e-6, 1e-2]$. In the Grad-TTS implementation, you should multiply it by 1000 (the $t\in[0,1]$ continuous-process discretization) when setting it in params.py.
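
To make the scaling explicit, here is a small sketch (my own illustration, not code from params.py) of why the discrete WaveGrad range $[1e-6, 1e-2]$ becomes $[1e-3, 10]$ in the continuous parameterization, assuming $N = 1000$ discretization steps of a linear schedule:

```python
import numpy as np

N = 1000                                     # number of discrete diffusion steps
beta_discrete = np.linspace(1e-6, 1e-2, N)   # original WaveGrad schedule

# In the continuous formulation t runs over [0, 1] with step dt = 1/N,
# so the matching continuous rate is beta(t_n) ~= beta_n / dt = N * beta_n.
beta_min = N * beta_discrete[0]    # 1e-3
beta_max = N * beta_discrete[-1]   # 10.0
print(beta_min, beta_max)          # values to set in params.py (beta_min, beta_max)
```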


quangnh-2761 commented on August 24, 2024

Thank you. For some reason WaveGrad base with $[1e-3, 10]$ generated noisy audio (maybe I didn't train it long enough), while the output of WaveGrad base with $[1e-4, 1]$ has decent quality. WaveGrad large performs well on both schedules.


ivanvovk commented on August 24, 2024

@quangnh-2761 Good! Can you share some audio samples to listen to?


quangnh-2761 commented on August 24, 2024

https://drive.google.com/drive/folders/1OCK_CD6nFmQZGPd_4hSdJLEN_ME1PxIU here are some samples from the base and large models, trained for ~1k epochs. I think they are acceptable but still not perfect; I will keep training to see if they improve (WaveGrad 2 with the base model can reach nearly 3.9 MOS, maybe because of other small details). One problem is that I cannot do batch inference: during training I randomly segmented the waveforms to fit into memory, so the model never learned to deal with padding at inference time. The output soon explodes to infinity if I multiply by the mask; otherwise, the generated audio may be affected by noise from the padded frames.
Another thing is that I have to sample up to 1k steps to obtain good output. I tried the few-step schedules from WaveGrad, but they didn't converge (I divided $\beta$ by 1000 and converted the score to noise to match WaveGrad). Do you know how to find a few-step schedule manually? Edited: it was because I used a wrong formula to convert time; with the correct schedule it works.
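
For anyone hitting the same issue, here is a rough sketch of the conversions I believe are meant here, assuming the zero-mean VP/Grad-TTS continuous formulation with a linear $\beta(t)$ and $N = 1000$ discretization steps; the exact expressions in the repository may differ:

```python
import torch

beta_min, beta_max = 1e-3, 10.0   # continuous schedule (see the scaling above)
N = 1000                          # discrete steps used by the WaveGrad-style sampler

def cum_beta(t):
    # \int_0^t beta(s) ds for the linear schedule beta(s) = beta_min + (beta_max - beta_min) * s
    return beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2

def step_to_time(n):
    # discrete step n in {1, ..., N}  ->  continuous time t in (0, 1]
    return n / N

def score_to_noise(score, t):
    # With x_t = sqrt(alpha_bar_t) * x_0 + sigma_t * eps and sigma_t = sqrt(1 - exp(-cum_beta(t))),
    # the score equals -eps / sigma_t, so eps = -sigma_t * score.
    sigma_t = torch.sqrt(1.0 - torch.exp(torch.tensor(-cum_beta(t))))
    return -sigma_t * score
```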


ivanvovk commented on August 24, 2024

@quangnh-2761 I see, good! It seems that the hard increase in the number of parameters really helps. To improve inference speed, I can suggest a noise-schedule grid search for the desired number of solver steps. Or you can plug our novel reverse SDE solver into your existing model: https://arxiv.org/abs/2109.13821. It requires a much lower number of steps to produce good-quality samples, it is easy to implement, and you don't need to re-train the model.
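
A possible shape for that grid search, purely as a sketch: `synthesize` and `quality_metric` are hypothetical stand-ins for your few-step sampler and whatever objective you trust (e.g. a MOS proxy or a distance to a reference mel-spectrogram), and the candidate grid is an assumption to tune:

```python
import itertools
import numpy as np

def grid_search_schedule(synthesize, quality_metric, n_steps=6):
    """Brute-force search over short noise schedules of n_steps per-step betas."""
    # Candidate per-step beta values on a log grid (assumption, tune as needed).
    candidates = np.geomspace(1e-6, 1e-1, num=8)
    best_schedule, best_score = None, -np.inf
    for betas in itertools.product(candidates, repeat=n_steps):
        # Keep only monotonically non-decreasing schedules to prune the search.
        if any(b_next < b for b, b_next in zip(betas, betas[1:])):
            continue
        audio = synthesize(list(betas))   # run the few-step sampler with this schedule
        score = quality_metric(audio)     # higher is better (assumption)
        if score > best_score:
            best_schedule, best_score = list(betas), score
    return best_schedule, best_score
```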

