Comments (14)
I have now trained a model with the linear spec loss for 100k steps, and the 4 kHz signal is gone.
Since I randomly selected the sample to synthesize, I cannot find the exact sample that was used for the aforementioned plot, so I re-plotted the figure with a new sample.
Looking at the figure with the spec loss, the generated spec makes sense: the 4 kHz line is gone, and the waveform for the beginning silence is no longer always the same. Everything looks all right now.
from istftnet-pytorch.
Yes, I also saw that line, but it doesn't impact quality in my case.
Hi @mayfool, can you show the mel-spectrogram image which has a single frequency line?
All synthesised wavs have the single frequency line, not just a few of them. So I think it has nothing to do with the input mels.
@mayfool have you solved this problem now?
@xiaoyangnihao Nope..
I implemented the code for an arbitrary number of upsampling stages, and it seems that the single-frequency-line problem (which, by the way, lies around 5500 Hz, i.e. half of fmax at a 22050 Hz sample rate) occurs specifically for the C8C8I configuration.
Below is a comparison of C8C8I and C8C8C2I.
I also encountered this problem. I set the sample rate to 48 kHz, and the horizontal line appears at 12 kHz (half of the fmax of 24 kHz).
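Both reports are consistent with the earlier overlap-and-add analysis: if the iSTFT stage uses a 4-sample hop (hop=4 is an assumption carried over from that analysis, not a measured value), the artifact tone lands at sr / hop:

```python
# Expected artifact frequency under the assumption that the iSTFT stage
# overlap-adds near-identical frames with a 4-sample hop.
def artifact_hz(sample_rate: int, istft_hop: int = 4) -> float:
    return sample_rate / istft_hop

print(artifact_hz(22050))  # 5512.5 -> the ~5500 Hz line reported above
print(artifact_hz(48000))  # 12000.0 -> the 12 kHz line reported above
```

The same arithmetic gives the 4 kHz line from the 16 kHz model discussed earlier in the thread.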
I encountered this problem too.
I have trained a model for 100k steps. It sounds good, but looking into the generated spec, it seems to make no sense.
Looking at the generated spec, we can find highlights at 2 kHz and 6 kHz, which do not exist in the input mel, nor even in the mel re-computed from the generated audio. Comparing the two specs, we can say that even though it can be converted to audio by istft, the model output is actually NOT a spectrum.
Moreover, running ifft and de-windowing on the generated spec yields audio frames of length 16. For silence, the first ~100 frames look very similar; thus, after overlap-and-add, the output is just the same signal repeatedly added with a 4-sample shift. And that's it: the 4 kHz signal.
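The overlap-and-add argument can be sketched numerically (a minimal sketch; n_fft=16, hop=4, and sr=16000 are assumptions inferred from the frame length and the 4 kHz tone, not read from the repo's config):

```python
import numpy as np

sr, n_fft, hop = 16000, 16, 4  # assumed values matching the description above
rng = np.random.default_rng(0)
frame = rng.standard_normal(n_fft)  # one "silence" frame, repeated verbatim

n_frames = 100
out = np.zeros(hop * (n_frames - 1) + n_fft)
for i in range(n_frames):
    out[i * hop : i * hop + n_fft] += frame  # overlap-and-add the same frame

# Away from the edges, the sum of hop-shifted copies of a single frame is
# exactly periodic with period `hop`, so its energy concentrates at
# multiples of sr / hop = 4000 Hz.
spec = np.abs(np.fft.rfft(out))
freqs = np.fft.rfftfreq(len(out), d=1 / sr)
peak = freqs[np.argmax(spec[1:]) + 1]  # dominant non-DC frequency
print(peak)
```

Repeating any fixed frame this way produces a tone at sr / hop, regardless of the frame's content, which matches the observation that the line appears for all synthesized wavs.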
I think adding a spec loss directly on the generated spec may be a good way to improve the quality and solve this problem.
I will train a new model and see what happens.
@ease-zh what changes you have made to this repo to achieve that ?
Just an L1 loss on the generated linear spec:
loss_spec = F.l1_loss(y_spec, spec) * 45
However, after careful comparison, I found that the spec loss harms the audio quality. Maybe changing the loss formula or tuning the loss weight can further improve the quality. Do you have more suggestions?

By the way, I think using reflection_pad before the conv_post makes little sense, although it does work. I guess it is there to adjust the length so that torch.istft returns exactly as many samples as were used to calculate the mel? But that difference is caused by torch.istft only supporting center mode, while when calculating the mel we manually pad the wav and set center=False for torch.stft. This may not affect the final synthesis; I will give it a try later.
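For reference, the extra loss described above can be sketched like this (a minimal sketch: the `linear_spec` helper, the shapes, and the n_fft/hop values are illustrative assumptions, not the repo's actual code):

```python
import torch
import torch.nn.functional as F

n_fft, hop = 16, 4  # illustrative values, matching the analysis in this thread

def linear_spec(wav: torch.Tensor) -> torch.Tensor:
    # Magnitude of the STFT: shape (batch, n_fft // 2 + 1, frames).
    window = torch.hann_window(n_fft)
    stft = torch.stft(wav, n_fft, hop_length=hop, window=window,
                      return_complex=True)
    return stft.abs()

wav = torch.randn(1, 4000)          # toy stand-in for the ground-truth wav
spec = linear_spec(wav)             # target linear spectrogram
y_spec = spec + 0.01 * torch.randn_like(spec)  # stand-in for the model output

loss_spec = F.l1_loss(y_spec, spec) * 45  # the weighting quoted above
print(loss_spec.item())
```

The weight of 45 mirrors HiFi-GAN's mel-loss weighting; whether it is the right scale for a linear-magnitude target is exactly the tuning question raised above.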
Then? Have you tried it?
@a897456 Not yet. I've been busy with something else.