Comments (8)
Thank you for your reply. The demo prompt audio works as expected.
I tried using a prompt audio of 5 s, but the same error still occurred. However, in this case the input text (the text that the output should speak, not to be confused with the prompt text, which is the transcript of the prompt audio) was one word long. Is there an optimal length for the input text as well? I noticed that generating a one-word input text without a prompt audio causes no issues.
Additionally, how can the code be modified to accommodate longer prompt audios and fix this kernel-size issue?
NOTE: Since the released model is trained on the LibriTTS corpus, which contains limited data, its generalization ability is also limited. Therefore:
In zero-shot mode, the input text should be neither too short nor too long. I recommend that the total duration of the prompt and the generated audio (expected value) match the training samples (5 s~15 s). If the prompt text is too long, the model will stop decoding early and the number of generated tokens will be very small, resulting in the kernel-size error. If the input text is too short, the number of generated tokens will also be very small.
Without a prompt audio, i.e. in free-generation mode, the input text shouldn't be too short either.
Tip: you can estimate the duration of the generated audio with a prompt as follows:
duration of generated audio = length of input text * (duration of prompt audio / length of prompt text)
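As a quick sanity check, the rule of thumb above can be computed directly. This is a minimal sketch, not FunCodec code; using character counts as a proxy for the amount of speech is an assumption:

```python
def estimate_generated_duration(input_text: str,
                                prompt_text: str,
                                prompt_audio_sec: float) -> float:
    """Estimate output duration from the prompt's speaking rate.

    Implements the rule of thumb:
        duration = len(input_text) * (prompt_sec / len(prompt_text))
    """
    seconds_per_char = prompt_audio_sec / len(prompt_text)
    return len(input_text) * seconds_per_char

# A 5 s prompt whose transcript is 50 characters -> 0.1 s per character.
# A 120-character input text is then expected to yield ~12 s of audio,
# keeping prompt + output inside the recommended 5-15 s range.
est = estimate_generated_duration("x" * 120, "y" * 50, 5.0)
print(round(est, 1))  # -> 12.0
```

If the estimate falls well below a second or two, the decoder will emit very few tokens and the kernel-size error described above becomes likely.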
To accommodate longer prompt audios, what you need to do is finetune the model on more data to improve its generalization ability. On the code side, you can:
- filter out too-short sentences after `model.decode_codec` at line 171 of `text2audio_inference.py`;
- modify the decoding strategy: add a penalty to the eos token until the minimum expected length is reached, in the `decode_codec` function of `LauraGenModel` in `laura_model.py`:
```python
min_length, max_length = None, None
if min_length_ratio is not None and prompt_text_lens is not None and continual is not None:
    min_length = int(float(len(continual)) / prompt_text_lens
                     * (text_lengths - prompt_text_lens) * min_length_ratio)
if max_length_ratio is not None and prompt_text_lens is not None and continual is not None:
    max_length = int(float(len(continual)) / prompt_text_lens
                     * (text_lengths - prompt_text_lens) * max_length_ratio)
# inside the decoding loop: suppress the eos token until min_length is reached
if min_length is not None and i < min_length:
    pred[:, self.codebook_size + self.sos_eos] = float(np.finfo(np.float32).min)
```
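The eos-penalty idea can be illustrated in isolation. The sketch below assumes greedy decoding over per-step logit vectors; the function and its arguments are illustrative stand-ins, not FunCodec's API:

```python
import numpy as np

def greedy_decode_with_min_length(logits_per_step, eos_id, min_length):
    """Greedy decoding that forbids the eos token before min_length steps.

    `logits_per_step` is an iterable of 1-D logit arrays, one per
    decoding step (a stand-in for the LM's per-step predictions).
    """
    tokens = []
    for i, logits in enumerate(logits_per_step):
        logits = logits.copy()
        if min_length is not None and i < min_length:
            # Same trick as in decode_codec: push the eos logit to the
            # most negative float32 value so argmax can never pick it.
            logits[eos_id] = float(np.finfo(np.float32).min)
        tok = int(np.argmax(logits))
        if tok == eos_id:
            break
        tokens.append(tok)
    return tokens

# eos (id 3) has the highest logit at every step, but is suppressed
# for the first 4 steps, so 4 content tokens are emitted first.
steps = [np.array([0.1, 0.2, 0.3, 0.9])] * 6
print(greedy_decode_with_min_length(steps, eos_id=3, min_length=4))  # -> [2, 2, 2, 2]
```

Without the suppression (`min_length=None`), the same logits would terminate decoding at step 0 and produce an empty sequence, which is exactly the "too few tokens" failure discussed above.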
from funcodec.
Hi, could you please briefly explain what the continual parameter is used for?
The `continual` parameter represents the codec tokens of the prompt audio.
from funcodec.
From the traceback, I think the error is because your own prompt is too short, maybe shorter than 1 second? Our model consists of convolution layers and padding ops; if the prompt is too short, the kernel sizes and paddings will be larger than the input size, resulting in the error you met.
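This failure mode follows from the standard convolution output-length formula. The sketch below uses an illustrative stack of strided conv layers (the kernel sizes and strides are assumptions, not the model's actual configuration):

```python
def conv1d_out_len(in_len, kernel, stride=1, padding=0, dilation=1):
    """Output length of a 1-D convolution (PyTorch convention)."""
    return (in_len + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

def check_encoder(in_len, layers):
    """Push a sequence length through a stack of (kernel, stride) layers.

    Returns the final length, or raises once a layer's effective kernel
    exceeds its input, i.e. the 'kernel size > input size' error.
    """
    for kernel, stride in layers:
        out = conv1d_out_len(in_len, kernel, stride)
        if out < 1:
            raise ValueError(
                f"kernel size {kernel} larger than input length {in_len}")
        in_len = out
    return in_len

# Illustrative downsampling stack; each stride-2 layer halves the sequence.
layers = [(7, 2), (7, 2), (7, 2), (7, 2)]
print(check_encoder(16000, layers))   # a 1 s input at 16 kHz survives
try:
    check_encoder(40, layers)         # a very short input does not
except ValueError as e:
    print("error:", e)
```

Each stride-2 layer roughly halves the sequence, so a token sequence that is already very short runs out of samples after a few layers, and the error surfaces deep in the network rather than at the input.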
from funcodec.
The prompt audio I used was 11 seconds long, could there be another reason why this happens?
from funcodec.
I found this error is reported by the decoder. This may be because the prompt audio is too long and the LM decodes a very small number of tokens (a bad case, actually). You can try a shorter one. BTW, have you tried the demo prompt audio? Does it work normally?
from funcodec.
I recommend a prompt audio with a duration of 4~6 s. This is because the training set is LibriTTS, a small corpus in which the utterances are not very long.
from funcodec.