
Comments (14)

ease-zh commented on August 18, 2024

Now I have trained a model with the linear spec loss for 100k steps, and the 4 kHz signal is gone.
Since I randomly selected the samples to synthesize, I cannot find the exact sample used for the earlier plot, so I re-plotted the figure with a new sample.
Looking at the figure with the spec loss, the generated spec looks reasonable: the 4 kHz line is gone, and the waveform for the beginning silence is no longer always the same. Now everything looks all right.
[images: spectrogram and waveform re-plotted with the spec loss]

> I have trained a model for 100k steps. It sounds good, but the generated spec seems to make no sense. In the generated spec we can see highlights at 2 kHz and 6 kHz, which do not exist in the input mel, nor in the mel re-computed from the generated audio. Comparing the two specs, we can say that even though it can be converted to audio by iSTFT, the model output is actually NOT a spectrum. Moreover, running iFFT and de-windowing on the generated spec yields audio frames of length 16. For silence, the first ~100 frames look very similar, so after overlap-and-add we get the same signal added repeatedly with a 4-sample shift. And that is the 4 kHz signal.
>
> I think adding a spec loss directly on the generated spec may be a good way to improve quality and solve this issue. I will train a new model to see what happens. [image]

from istftnet-pytorch.

ease-zh commented on August 18, 2024

> @ease-zh what changes have you made to this repo to achieve that?

Just an L1 loss on the generated linear spec: `loss_spec = F.l1_loss(y_spec, spec) * 45`. However, after careful comparison, I found that the spec loss harms the audio quality.
Maybe changing the loss formula or tuning the loss weight could further improve quality. Do you have more suggestions?
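For reference, a minimal sketch of such a spec loss. Everything here is illustrative: `y_spec` is simulated as noise around the target rather than produced by a real generator, and the STFT parameters are not necessarily the repo's.

```python
import torch
import torch.nn.functional as F

n_fft, hop = 1024, 256
window = torch.hann_window(n_fft)

wav = torch.randn(1, 8192)  # stand-in for ground-truth audio

# Target linear magnitude spectrogram.
spec = torch.stft(wav, n_fft, hop_length=hop, window=window,
                  return_complex=True).abs()

# Stand-in for the generator's predicted linear spectrogram.
y_spec = spec + 0.01 * torch.randn_like(spec)

# L1 spec loss, weighted by 45 as in the comment above.
loss_spec = F.l1_loss(y_spec, spec) * 45
print(float(loss_spec))
```

In training, this term would simply be added to the existing generator loss.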

By the way, I think using reflection_pad before conv_post makes little sense, although it does work. I guess it is there to adjust the length so that torch.istft returns exactly the same number of samples as were used to compute the mel? But the difference arises because torch.istft only supports center mode, while when computing the mel we manually pad the wav and set center=False for torch.stft. This may not affect the final synthesis, and I will give it a try later.
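The length mismatch can be checked in a few lines. This sketch (with small, illustrative n_fft/hop values, not necessarily the repo's) does the analysis the way the mel pipeline does, with manual reflect padding and center=False, then inverts with torch.istft, which assumes centered frames:

```python
import torch
import torch.nn.functional as F

n_fft, hop = 16, 4
window = torch.hann_window(n_fft)

wav = torch.randn(1, 1024)

# Analysis side, as in the mel computation: manual reflect padding,
# then torch.stft with center=False.
pad = (n_fft - hop) // 2
padded = F.pad(wav, (pad, pad), mode='reflect')
spec = torch.stft(padded, n_fft, hop_length=hop, window=window,
                  center=False, return_complex=True)

# Synthesis side: torch.istft assumes center=True analysis, so the
# reconstructed length differs from the original signal's.
recon = torch.istft(spec, n_fft, hop_length=hop, window=window, center=True)
print(wav.shape[-1], recon.shape[-1])  # 1024 vs 1020 here
```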


rishikksh20 commented on August 18, 2024

Yes, I also saw that line, but it doesn't impact quality in my case.


leminhnguyen commented on August 18, 2024

Hi @mayfool, can you show the mel-spectrogram image which has the single frequency line?


mayfool commented on August 18, 2024

> Hi @mayfool, can you show the mel-spectrogram image which has the single frequency line?

All synthesised wavs have the single frequency line, not just a few of them. So I think it is not related to the input mels.


xiaoyangnihao commented on August 18, 2024

> > Hi @mayfool, can you show the mel-spectrogram image which has the single frequency line?
>
> All synthesised wavs have the single frequency line, not just a few of them. So I think it is not related to the input mels.

@mayfool have you solved this problem now?


mayfool commented on August 18, 2024

> > > Hi @mayfool, can you show the mel-spectrogram image which has the single frequency line?
> >
> > All synthesised wavs have the single frequency line, not just a few of them. So I think it is not related to the input mels.
>
> @mayfool have you solved this problem now?

@xiaoyangnihao Nope..


Dekakhrone commented on August 18, 2024

I implemented the code for an arbitrary number of upsampling stages, and it seems that the single-frequency-line problem (which, by the way, lies in the region of 5500 Hz, i.e. half of fmax at a 22050 Hz sample rate) occurs specifically with the C8C8I configuration.

Below is a comparison of C8C8I and C8C8C2I

[screenshots from 2022-07-12: spectrograms for C8C8I and C8C8C2I]


jstzwj commented on August 18, 2024

I also encountered this problem. I set the sample rate to 48 kHz, and the horizontal line appears at 12 kHz (half of the fmax of 24 kHz).
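A pattern worth noting across these reports (a back-of-the-envelope check, assuming the repeated-frame explanation elsewhere in this thread and an iSTFT hop of 4 for the C8C8I configuration): a frame repeated every hop samples gives a tone at sr / hop, which matches each reported line frequency.

```python
hop = 4  # assumed iSTFT hop of the C8C8I configuration

for sr in (22050, 48000):
    print(sr, sr / hop)
# 22050 -> 5512.5  (the "region of 5500 Hz" reported above)
# 48000 -> 12000.0 (the 12 kHz line reported here)
```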


leng-yue commented on August 18, 2024

I encountered this problem too.


ease-zh commented on August 18, 2024

I have trained a model for 100k steps. It sounds good, but the generated spec seems to make no sense.
In the generated spec we can see highlights at 2 kHz and 6 kHz, which do not exist in the input mel, nor in the mel re-computed from the generated audio. Comparing the two specs, we can say that even though it can be converted to audio by iSTFT, the model output is actually NOT a spectrum.
Moreover, running iFFT and de-windowing on the generated spec yields audio frames of length 16. For silence, the first ~100 frames look very similar, so after overlap-and-add we get the same signal added repeatedly with a 4-sample shift. And that is the 4 kHz signal.
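This overlap-and-add argument is easy to reproduce numerically: adding the same 16-sample frame every 4 samples produces a signal that is exactly periodic with period 4, i.e. a pure tone at sr/4 (4 kHz if the sample rate is 16 kHz; the rate here is an assumption). A minimal sketch with an arbitrary frame:

```python
import numpy as np

n_fft, hop, n_frames = 16, 4, 100
frame = np.random.default_rng(0).standard_normal(n_fft)  # arbitrary content

# Overlap-and-add the same frame, as the iSTFT effectively does when
# consecutive silence frames are (near-)identical.
out = np.zeros(hop * (n_frames - 1) + n_fft)
for i in range(n_frames):
    out[i * hop : i * hop + n_fft] += frame

# Away from the edges the result repeats every `hop` samples, so all of
# its energy sits at multiples of sr / hop.
interior = out[n_fft:-n_fft]
print(np.allclose(interior[:-hop], interior[hop:]))  # True
```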

I think adding a spec loss directly on the generated spec may be a good way to improve quality and solve this issue.
I will train a new model to see what happens.
[image: generated spectrograms]


rishikksh20 commented on August 18, 2024

@ease-zh what changes have you made to this repo to achieve that?


a897456 commented on August 18, 2024

> > @ease-zh what changes have you made to this repo to achieve that?
>
> Just an L1 loss on the generated linear spec: `loss_spec = F.l1_loss(y_spec, spec) * 45`. However, after careful comparison, I found that the spec loss harms the audio quality. Maybe changing the loss formula or tuning the loss weight could further improve quality. Do you have more suggestions?
>
> By the way, I think using reflection_pad before conv_post makes little sense, although it does work. I guess it is there to adjust the length so that torch.istft returns exactly the same number of samples as were used to compute the mel? But the difference arises because torch.istft only supports center mode, while when computing the mel we manually pad the wav and set center=False for torch.stft. This may not affect the final synthesis, and I will give it a try later.

So, have you tried it?


ease-zh commented on August 18, 2024

@a897456 Not yet. I've been busy with something else.

