Comments (5)
Here is my configuration in hparams.py:

```python
input_type="raw",
quantize_channels=2 ** 16, # 65536 (16-bit) for "raw", or 256 (8-bit) for "mulaw"/"mulaw-quantize" // number of classes = 256 <=> mu = 255
log_scale_min=float(np.log(1e-14)), # Mixture of logistic distributions minimal log scale
log_scale_min_gauss = float(np.log(1e-7)), # Gaussian distribution minimal allowed log scale
# To use a Gaussian distribution as the output distribution instead of a mixture of logistics, set "out_channels = 2" instead of "out_channels = 10 * 3". (UNDER TEST)
out_channels = 2, # This should equal quantize_channels when input_type is "mulaw-quantize"; else: num_distributions * 3 (prob, mean, log_scale).
layers = 30, # Number of dilated convolutions (Default: simplified WaveNet of the Tacotron-2 paper)
stacks = 3, # Number of dilated convolution stacks (Default: simplified WaveNet of the Tacotron-2 paper)
residual_channels = 512,
gate_channels = 512, # split in 2 in gated convolutions
skip_out_channels = 256,
kernel_size = 3,
cin_channels = 80, # Set this to -1 to disable local conditioning; else it must equal num_mels!
upsample_conditional_features = True, # Whether to repeat conditional features or upsample them (the latter is recommended)
upsample_scales = [15, 20], # prod(upsample_scales) should equal hop_size
freq_axis_kernel_size = 3,
leaky_alpha = 0.4,
gin_channels = -1, # Set this to -1 to disable global conditioning. Only used for multi-speaker datasets; it defines the depth of the embeddings (recommended: 16)
use_speaker_embedding = True, # whether to make a speaker embedding
n_speakers = 5, # number of speakers (rows of the embedding)
use_bias = True, # Whether to use bias in the convolutional layers of the WaveNet
```
from tacotron-2.
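The alignment constraint noted in the comments above can be sanity-checked offline before training. A minimal sketch, assuming `hop_size = 300` (it is defined elsewhere in hparams.py, so the value here is an assumption):

```python
from math import prod

# Values copied from the hparams above; hop_size is an assumption.
hop_size = 300
upsample_scales = [15, 20]
out_channels = 2  # single Gaussian output: (mean, log_scale)

# prod(upsample_scales) must equal hop_size so that upsampled mel
# frames line up one-to-one with raw audio samples.
assert prod(upsample_scales) == hop_size

# A Gaussian output needs 2 channels; a mixture of logistics needs
# num_distributions * 3 (prob, mean, log_scale).
assert out_channels == 2 or out_channels % 3 == 0
```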
```
Exiting due to Exception: assertion failed: [] [Condition x == y did not hold element-wise:] [x (model/inference/strided_slice_5:0) = ] [154000] [y (model/inference/strided_slice_6:0) = ] [112000]
[[Node: model/inference/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/inference/assert_equal/Equal/_31, model/loss/assert_equal_2/Assert/Assert/data_0, model/loss/assert_equal_2/Assert/Assert/data_1, model/inference/assert_equal/Assert/Assert/data_2, model/inference/strided_slice_5/_33, model/inference/assert_equal/Assert/Assert/data_4, model/inference/strided_slice_1/_35)]]
Caused by op 'model/inference/assert_equal/Assert/Assert', defined at:
File "train.py", line 128, in
main()
File "train.py", line 122, in main
train(args, log_dir, hparams)
File "train.py", line 76, in train
checkpoint = wavenet_train(args, log_dir, hparams, input_path)
File "/root/works/Tacotron-2/wavenet_vocoder/train.py", line 243, in wavenet_train
train(log_dir, args, hparams, input_path)
File "/root/works/Tacotron-2/wavenet_vocoder/train.py", line 167, in train
model, stats = model_train_mode(args, feeder, hparams, global_step)
File "/root/works/Tacotron-2/wavenet_vocoder/train.py", line 117, in model_train_mode
feeder.input_lengths, x=feeder.inputs)
File "/root/works/Tacotron-2/wavenet_vocoder/models/wavenet.py", line 169, in initialize
y_hat = self.step(x, c, g, softmax=False) #softmax is automatically computed inside softmax_cross_entropy if needed
File "/root/works/Tacotron-2/wavenet_vocoder/models/wavenet.py", line 439, in step
with tf.control_dependencies([tf.assert_equal(tf.shape(c)[-1], tf.shape(x)[-1])]):
```
I also get a shape error when training WaveNet; could you give some advice? @Rayhane-mamah
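The failing assertion (`tf.assert_equal(tf.shape(c)[-1], tf.shape(x)[-1])`) compares the time axis of the upsampled conditioning features with the time axis of the audio. A sketch of the same check in plain Python (the frame count is derived here, not taken from the logs):

```python
def check_conditioning_alignment(audio_len, mel_frames, upsample_scales):
    """Return True when upsampled mel features will match the audio length.

    Mirrors the failing graph assertion: after upsampling, the
    conditioning tensor has mel_frames * prod(upsample_scales) steps,
    and that must equal audio_len.
    """
    hop = 1
    for s in upsample_scales:
        hop *= s
    return mel_frames * hop == audio_len

# The first reported failure has x = 154000 samples; 154000 is not a
# multiple of 15 * 20 = 300, so no integer frame count can line up.
print(check_conditioning_alignment(154000, 154000 // 300, [15, 20]))  # False
```

When this returns False, the usual cause is that the audio was not padded or trimmed to a multiple of `hop_size` during preprocessing, or that the mel features were computed with a different hop than `prod(upsample_scales)`.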
I see the same issue.
Hello, sorry for the super late answer. I believe I fixed those in the latest commit, so I'm closing this.
Feel free to reopen if the problem persists. Also, always make sure to use `out_channels = 10 * 3` with `input_type="raw"`, and `out_channels = 256` with `mulaw-quantize`.
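That rule of thumb can be restated as a tiny helper (hypothetical names, not part of the repository):

```python
def expected_out_channels(input_type, quantize_channels=256, num_mixtures=10):
    """Restate the maintainer's rule for out_channels.

    "mulaw-quantize" -> class logits, one per quantization level;
    "raw" / "mulaw"  -> num_mixtures * 3 (prob, mean, log_scale per
                        logistic). A single-Gaussian output (2) is the
                        experimental exception noted in the hparams.
    """
    if input_type == "mulaw-quantize":
        return quantize_channels
    return num_mixtures * 3

print(expected_out_channels("raw"))             # 30, i.e. 10 * 3
print(expected_out_channels("mulaw-quantize"))  # 256
```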
The error occurs again:
```
Exiting due to exception: assertion failed: [] [Condition x == y did not hold element-wise:] [x (model/inference/strided_slice_5:0) = ] [14100] [y (model/inference/strided_slice_6:0) = ] [12925]
[[Node: model/inference/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/inference/assert_equal/Equal/_27, model/loss/assert_equal_2/Assert/Assert/data_0, model/loss/assert_equal_2/Assert/Assert/data_1, model/inference/assert_equal/Assert/Assert/data_2, model/inference/strided_slice_5/_29, model/inference/assert_equal/Assert/Assert/data_4, model/inference/strided_slice_1/_31)]]
[[Node: model/inference/residual_block_cin_conv_layer_ResidualConv1dGLU_9/assert_equal/Assert/Assert/_324 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_1801_...ert/Assert", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
Caused by op 'model/inference/assert_equal/Assert/Assert', defined at:
```