
mtts's Introduction

This project is no longer maintained and is quite outdated.

Recommended:

Welcome to join:

  • Speech-synthesis discussion QQ group: 882726654


A Demo of MTTS Mandarin/Chinese Text to Speech FrontEnd

Mandarin/Chinese text-to-speech based on statistical parametric speech synthesis, using the merlin toolkit

This is only a demo of a TTS frontend; it provides no text normalization or prosody prediction. Text-to-pinyin conversion uses pypinyin and word segmentation uses jieba, and neither reaches commercial-grade accuracy.

Links to other speech-synthesis projects: end-to-end models are a promising direction, with naturalness better than merlin's.

This is only a demo of a Mandarin frontend: it lacks parts such as text normalization and prosody prediction, and the phone set and question set this project uses haven't been fully tested yet.

A rough document: A draft documentation written in Mandarin

Data

There was no open-source Mandarin speech-synthesis dataset on the internet, so this project used the thchs30 dataset to demonstrate speech synthesis.

UPDATE

Open-source Mandarin speech-synthesis data from the Data-Baker company (thanks to Data-Baker for open-sourcing it).

[Data download] https://weixinxcxdb.oss-cn-beijing.aliyuncs.com/gwYinPinKu/BZNSYP.rar [Data description] http://www.data-baker.com/open_source.html

Generated Samples

Listen to https://jackiexiao.github.io/MTTS/

How To Reproduce

  1. First, you need data containing wav and txt files (prosody marks are optional)
  2. Second, generate HTS labels using this project
  3. Third, use merlin/egs/mandarin_voice to train and generate a Mandarin voice

Context related annotation & Question Set

Install

Python: python3.6
System: Linux (tested on Ubuntu 16.04)

pip install jieba pypinyin
sudo apt-get install libatlas3-base

Run bash tools/install_mtts.sh
or download the files yourself

Run Demo

bash run_demo.sh

Usage

1. Generate HTS Label by wav and text

  • Usage: Run python src/mtts.py txtfile wav_directory_path output_directory_path (absolute or relative paths). You will then get HTS labels. If you have your own acoustic model trained with montreal-forced-aligner, add -a your_acoustic_model.zip; otherwise this project uses the thchs30.zip acoustic model by default
  • Attention: Currently only Chinese characters are supported; the txt must not contain any Arabic numerals or English letters

txtfile example

A_01 这是一段文本
A_02 这是第二段文本

wav_directory example (sampling rate should be 16 kHz or higher)

A_01.wav  
A_02.wav  
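Since the frontend accepts only Chinese characters, a small pre-check over the txtfile can catch offending lines early. This is an illustrative sketch, not part of the project:

```python
import re

# Illustrative pre-check (not part of the project): flag txt lines whose
# sentence text contains Arabic numerals or English letters, which the
# frontend cannot handle. The id column (e.g. A_01) is skipped first.
def find_invalid_lines(txt_path):
    bad = []
    with open(txt_path, encoding='utf-8') as f:
        for line in f:
            sent_id, _, text = line.strip().partition(' ')
            if re.search('[0-9A-Za-z]', text):
                bad.append(sent_id)
    return bad
```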

2. Generate HTS Label by text with or without alignment file

  • Usage: Run python src/mandarin_frontend.py txtfile output_directory_path
  • or import it in Python:

from mandarin_frontend import txt2label

result = txt2label('向香港特别行政区同胞澳门和**同胞海外侨胞')
[print(line) for line in result]

# with prosody marks and an alignment file (sfs file)
# result = txt2label('向#1香港#2特别#1行政区#1同胞#4澳门#2和#1**#1同胞#4海外#1侨胞',
#                    sfsfile='example_file/example.sfs')

See the source code for more information, but pay attention to the alignment (sfs) file: each line is end_time phone_type, not start_time phone_type (which differs from Speech Ocean's data).
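As a rough illustration of that layout, a minimal reader might look like the sketch below. The exact sfs layout is assumed from the description above, not taken from the project's parser:

```python
# Minimal sketch of reading an alignment (sfs) file: each line holds
# "end_time phone_type", so each segment's start time is recovered from
# the previous line's end time. File layout assumed, not authoritative.
def read_sfs(path):
    segments = []
    start = 0.0
    with open(path, encoding='utf-8') as f:
        for line in f:
            end_str, phone_type = line.split()
            end = float(end_str)
            segments.append((start, end, phone_type))
            start = end
    return segments
```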

3. Forced-alignment

This project uses Montreal-Forced-Aligner for forced alignment. If you want a better alignment, train an alignment model on your own data; see mfa: align-using-only-the-dataset

  1. We trained the acoustic model on the thchs30 dataset (see misc/thchs30.zip); the dictionary we use is mandarin_mtts.lexicon. If you use a larger dataset than thchs30, you may get better alignment.
  2. If you want to use mfa's (montreal-forced-aligner) pre-trained Mandarin model, the dictionary you need is mandarin-for-montreal-forced-aligner-pre-trained-model.lexicon

Prosody Mark

You can generate HTS labels without prosody marks. We assume that a segmented word is smaller than a prosodic word (this is adjusted in the code).

"#0","#1", "#2","#3" and "#4" are the prosody labeling symbols.

  • #0 stands for word segment
  • #1 stands for prosodic word
  • #2 stands for stressed word (in this project we actually regard it as #1)
  • #3 stands for prosodic phrase
  • #4 stands for intonational phrase
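As a quick illustration of how these marks partition a sentence, the project splits on the '#\d' pattern (the same regex appears in mandarin_frontend.py's _adjust):

```python
import re

# How the prosody marks partition a sentence; this mirrors the
# re.split/re.findall pattern used in the project's _adjust function.
text = '向#1香港#2特别#1行政区#1同胞#4'
chunks = [c for c in re.split(r'#\d', text) if c]  # word chunks between marks
marks = re.findall(r'#\d', text)                   # the prosody marks
```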

Improvements to be made in the future

  • Text Normalization
  • Better Chinese word segmentation
  • G2P: Polyphone Problem
  • Better Label format and Question Set
  • Improvement of prosody analysis
  • Better alignment

Contributor

  • Jackiexiao
  • willian56


mtts's Issues

On frame alignment when training the acoustic model

When training the acoustic model, how are the linguistic features and acoustic features aligned frame by frame for training? The linguistic features are derived per initial/final from the question set, but initial/final durations differ from the acoustic-feature frame length (merlin should use 5 ms frames), so do the last few dimensions of the linguistic features carry extra frame-position features? Hoping for clarification!
Many thanks

about “format pinyin to system's format”

The function in txt2pinyin.py:

def pinyinformat(syllabel):
    translate_dict = {'ju': 'jv', 'qu': 'qv', 'xu': 'xv', 'zi': 'zic',
                      'ci': 'cic', 'si': 'sic', 'zhi': 'zhih',
                      'chi': 'chih', 'shi': 'shih', 'ri': 'rih',
                      'yuan': 'yvan', 'yue': 'yve', 'yun': 'yvn',
                      'iu': 'iou', 'ui': 'uei', 'un': 'uen'}

Why not also translate 'quan':'qvan', 'xuan':'xvan', 'juan':'jvan', 'qun':'qvn', 'xun':'xvn', etc.?
pypinyin's output contains "quan", while mandarin_mtts.lexicon defines
"quan1 q van1" etc.

If forced alignment uses mandarin_mtts.lexicon while the system uses another format,
there is no one-to-one mapping. Will that affect the result?

Hoping for your reply,
thanks a lot

how to use my data to train a new alignment model?

Hello! I trained an alignment model with my own data. At first it reported an error, so I added *.lab files (pinyin) to the corpus, and a new model was generated successfully. But when using it I get the error "WARNING : --Miss: ///**.TextGrid" and no TextGrid is produced. I can't find the cause; please advise, thank you!

Audio format not supported

INFO : Start montreal forced align
Setting up corpus information...
Traceback (most recent call last):
File "aligner/command_line/align.py", line 186, in
File "aligner/command_line/align.py", line 146, in validate_args
File "aligner/command_line/align.py", line 84, in align_corpus
File "aligner/corpus.py", line 309, in init
File "aligner/corpus.py", line 151, in get_sample_rate
File "wave.py", line 499, in open
File "wave.py", line 163, in init
File "wave.py", line 143, in initfp
File "wave.py", line 260, in _read_fmt_chunk
wave.Error: unknown format: 3
Failed to execute script align
WARNING : --Miss: ./tang/output/textgrid/mandarin_voice/1.TextGrid
INFO : the label files are in ./tang/output/labels
INFO : the error log is in ./tang/output/mtts.log

Can the labels generated by this frontend be fed to HTS for Mandarin speech synthesis?

Hi,

Thanks a lot for providing this frontend! I've only just started with speech synthesis, so I may ask some basic questions.

Can the generated labels be fed into HTS for synthesis? I tried it: the speech concatenated from monophones is at least somewhat intelligible, but the context-dependent speech generated after clustering with HHEd in HTS is unlistenable. Could this be because the label format this frontend generates doesn't match HTS's label format? If so, is there a way to fix it?

The two kinds of synthesized audio, the labels, and the original text are attached.

单音素.zip
上下文.zip
label.zip
原文.txt

Letters and digits

I see that the code simply skips Arabic numerals and English letters. Could digits be converted to the corresponding Chinese characters, e.g. reading each digit as its Chinese numeral? For English letters, perhaps they could be mapped to similar-sounding characters, or letter phonemes could be added to the phone lexicon directly.
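The digit-conversion idea raised in this issue could be sketched as follows (hypothetical, not part of the project; real text normalization for years, quantities, and units would need much more than this):

```python
# Hypothetical sketch (not part of the project): read Arabic digits one
# by one as Chinese numerals before handing text to the frontend.
DIGIT_MAP = {'0': '零', '1': '一', '2': '二', '3': '三', '4': '四',
             '5': '五', '6': '六', '7': '七', '8': '八', '9': '九'}

def digits_to_hanzi(text):
    # Replace each digit with its Chinese numeral; leave other chars alone.
    return ''.join(DIGIT_MAP.get(ch, ch) for ch in text)
```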

Does it support Cantonese?

I have a lexicon, phone set, question set, POS tags, and a dataset (audio, texts) for Cantonese. I need guidance in building a frontend for Cantonese.

Failed to execute script align

Following your steps (the data and code are downloaded), executing run_demo.sh reports an error:
INFO : Start montreal forced align
/home/chwbin/workplace/MTTS-master/tools/montreal-forced-aligner/bin/mfa_align data/thchs30_250_demo/output/wav /home/chwbin/workplace/MTTS-master/misc/mandarin_mtts.lexicon /home/chwbin/workplace/MTTS-master/misc/thchs30.zip data/thchs30_250_demo/output/textgrid
Setting up corpus information...
Number of speakers in corpus: 1, average number of utterances per speaker: 250.0
Creating dictionary information...
Setting up training data...
Calculating MFCCs...
Traceback (most recent call last):
File "aligner/command_line/align.py", line 186, in
args.acoustic_model_path.lower(), ', '.join(PRETRAINED_LANGUAGES))))
File "aligner/command_line/align.py", line 146, in validate_args
a.export_textgrids()
File "aligner/command_line/align.py", line 93, in align_corpus
else:
File "aligner/aligner/pretrained.py", line 71, in init
return os.path.join(self.temp_directory, 'model')
File "aligner/aligner/pretrained.py", line 117, in setup
File "aligner/aligner/base.py", line 80, in setup
'''
File "aligner/corpus.py", line 970, in initialize_corpus
File "aligner/corpus.py", line 848, in create_mfccs
self.write()
File "aligner/corpus.py", line 859, in _combine_feats
self.figure_utterance_lengths()
FileNotFoundError: [Errno 2] No such file or directory: '/home/chwbin/Documents/MFA/wav/train/mfcc/raw_mfcc.0.scp'
Failed to execute script align

install error

My system is CentOS; must I install libatlas-dev?
sudo yum install libatlas-dev

Trying other mirror.
Determining fastest mirrors

  • nux-dextop: li.nux.ro
    No package libatlas-dev available.
    Error: Nothing to do

WARNING:--Miss!

WARNING : --Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_249.TextGrid
Following the pipeline with the same <txt, wav> package, the output keeps warning --Miss. I'm digging through the source code while filing this issue.

synthetic speech pacing is very fast with frontend script

@Jackiexiao the synthesized speech is paced very fast; what do you think is the reason,
and how can it be solved?

37050000 37800000 x^iou4-m+ei4=sh@/A:4-4^1@/B:25+4@2^1^26+5#26-5-/C:a_a^n#2+2+2&/D:xx=30!xx@1-1&/E:xx|30-xx@xx#1&xx!1-1#/F:xx^30=17_1-1!
37800000 38600000 iou4^m-ei4+sh=ih1@/A:4-4^1@/B:25+4@2^1^26+5#26-5-/C:a_a^n#2+2+2&/D:xx=30!xx@1-1&/E:xx|30-xx@xx#1&xx!1-1#/F:xx^30=17_1-1!
38600000 39750000 m^ei4-sh+ih1=y@/A:4-1^4@/B:26+3@1^2^27+4#27-4-/C:a_n^z#2+2+2&/D:xx=30!xx@1-1&/E:xx|30-xx@xx#1&xx!1-1#/F:xx^30=17_1-1!
39750000 40400000 ei4^sh-ih1+y=i4@/A:4-1^4@/B:26+3@1^2^27+4#27-4-/C:a_n^z#2+2+2&/D:xx=30!xx@1-1&/E:xx|30-xx@xx#1&xx!1-1#/F:xx^30=17_1-1!
40400000 41200000 sh^ih1-y+i4=ang4@/A:1-4^4@/B:27+2@2^1^28+3#28-3-/C:a_n^z#2+2+2&/D:xx=30!xx@1-1&/E:xx|30-xx@xx#1&xx!1-1#/F:xx^30=17_1-1!

Prosody Mark lead to very strange results

I used your mtts.py to generate training labels on the biaobei dataset. If I remove the #1/#2/#3/#4 prosody marks from the text, merlin training gives a reasonably OK synthesis result.
But if I keep these prosody marks when generating labels, with the same parameters the synthesis quality is terrible, too bad to listen to. What could be the reason?
Is the only difference the prosody marks make to the generated labels:
the prosodic-word layer /D:d1=d2!d3@d4-d5& and f4 in the utterance layer F?

Cantonese questions set?

This is not a related question, but I'm working on Cantonese. I have data (audios, txts) and tried to build a TTS for Cantonese, but it seems I'm missing the question file, phone set, and lexicon. Could you help build a Cantonese TTS? I will contribute it to the project.

can I generate HTS labels from only a txt file, without wav?

Excuse me, I have a question about this frontend.

If I have a Chinese txt file and want to use this frontend to generate its HTS labels,
do I first have to generate a wav file for it with some other TTS?
Is there a way to generate labels for a target txt file without its corresponding wav file?

thanks

About pau and sp

Hi, I'd like to ask: your phone set contains three silence symbols, sil, pau and sp. At inference time sil and pau are easy to understand; they can be placed at the start/end of the text and at punctuation respectively. But there is no suitable position to insert sp, which means sp cannot be added at test time. Is it still worthwhile to distinguish sp during training?

FileNotFoundError: [Errno 2] No such file or directory: '/root/Documents/MFA/wav/train/mfcc/raw_mfcc.0.scp'

I encountered the problem above when executing "bash run_demo.sh"; could you help me solve it?

File "aligner/command_line/align.py", line 186, in
File "aligner/command_line/align.py", line 146, in validate_args
File "aligner/command_line/align.py", line 93, in align_corpus
File "aligner/aligner/pretrained.py", line 71, in init
File "aligner/aligner/pretrained.py", line 117, in setup
File "aligner/aligner/base.py", line 80, in setup
File "aligner/corpus.py", line 970, in initialize_corpus
File "aligner/corpus.py", line 848, in create_mfccs
File "aligner/corpus.py", line 859, in _combine_feats
FileNotFoundError: [Errno 2] No such file or directory: '/root/Documents/MFA/wav/train/mfcc/raw_mfcc.0.scp'

ZeroDivisionError: division by zero

Hi, I got this error when I ran: bash run_demo.sh. What should I do? Thanks!

Traceback (most recent call last):
File "aligner/command_line/align.py", line 224, in
File "aligner/command_line/align.py", line 181, in validate_args
File "aligner/command_line/align.py", line 117, in align_corpus
File "aligner/corpus.py", line 496, in speaker_utterance_info
ZeroDivisionError: division by zero
[14920] Failed to execute script align
Traceback (most recent call last):
File "src/mtts.py", line 276, in
args.acoustic_model_path)
File "src/mtts.py", line 236, in generate_label
_mfa_align(txtlines, wav_dir_path, output_path, acoustic_model_path)
File "src/mtts.py", line 140, in _mfa_align
raise OSError('Failed to run forced align tools, check if you install'
OSError: Failed to run forced align tools, check if you installmontreal-forced-aligner correctly

No such file or directory: '/home/top/Documents/MFA/wav/corpus_data/split1/feats.0.scp'

Number of speakers in corpus: 1, average number of utterances per speaker: 1340.0
Creating dictionary information...
Setting up corpus_data directory...
Generating base features (mfcc)...
[3581] Failed to execute script align
Traceback (most recent call last):
File "aligner/command_line/align.py", line 224, in
File "aligner/command_line/align.py", line 181, in validate_args
File "aligner/command_line/align.py", line 129, in align_corpus
File "aligner/aligner/pretrained.py", line 61, in init
File "aligner/aligner/base.py", line 50, in init
File "aligner/aligner/pretrained.py", line 79, in setup
File "aligner/aligner/base.py", line 55, in setup
File "aligner/features/config.py", line 153, in generate_features
File "aligner/features/config.py", line 141, in generate_base_features
File "aligner/corpus.py", line 813, in combine_feats
FileNotFoundError: [Errno 2] No such file or directory: '/home/top/Documents/MFA/wav/corpus_data/split1/feats.0.scp'

no label files generated

I ran the run_demo.sh script and got the following result with txtfile="thchs30_250_demo/A11.txt", wav_dir_path="thchs30_250_demo/wav":

/usr/bin/python3.6 /home/top/workspace/tts/mandain_front/mtts.py
INFO    : Start montreal forced align
Setting up corpus information...
Number of speakers in corpus: 1, average number of utterances per speaker: 1340.0
Creating dictionary information...
Setting up corpus_data directory...
Generating base features (mfcc)...
Calculating CMVN...
Done with setup.
Done! Everything took 15.08411955833435 seconds
WARNING : --Miss: ../data/output/textgrid/mandarin_voice/A11_0.TextGrid
WARNING : --Miss: ../data/output/textgrid/mandarin_voice/A11_1.TextGrid
WARNING : --Miss: ../data/output/textgrid/mandarin_voice/A11_2.TextGrid
WARNING : --Miss: ../data/output/textgrid/mandarin_voice/A11_3.TextGrid
WARNING : --Miss: ../data/output/textgrid/mandarin_voice/A11_4.TextGrid

....
WARNING : --Miss: ../data/output/textgrid/mandarin_voice/A11_249.TextGrid
INFO    : the label files are in ../data/output/labels
INFO    : the error log is in ../data/output/mtts.log

Process finished with exit code 0

Generated lab files

Hello! Do the lab files generated by this project need further processing before they can be used by Merlin?
The lab files generated by merlin/Mandarin_voice look like this:

0 9800000 xx^xx-sil+t=a1@/A:xx-xx^xx@/B:xx+xx@xx^xx^xx+xx#xx-xx-/C:xx_xx^xx#xx+xx+xx&/D:xx=xx!xx@xx-xx&/E:xx|xx-xx@xx#xx&xx!xx-xx#/F:xx^xx=xx_xx-xx!
9800000 10900000 xx^sil-t+a1=j@/A:xx-1^3@/B:0+10@1^1^1+11#1-11-/C:xx_r^v#xx+0+0&/D:xx=11!xx@1-1&/E:xx|11-xx@xx#1&xx!1-1#/F:xx^11=21_1-1!
10900000 12100000 sil^t-a1+j=in3@/A:xx-1^3@/B:0+10@1^1^1+11#1-11-/C:xx_r^v#xx+0+0&/D:xx=11!xx@1-1&/E:xx|11-xx@xx#1&xx!1-1#/F:xx^11=21_1-1!
12100000 13000000 t^a1-j+in3=p@/A:1-3^2@/B:1+9@1^2^1+11#1-11-/C:r_v^n#0+0+0&/D:xx=11!xx@1-1&/E:xx|11-xx@xx#1&xx!1-1#/F:xx^11=21_1-1!

However, the lab files generated by this project look like this:

0 0 xx^xx-sil+d=ong3@/A:xx-xx^xx@/B:xx+xx@xx^xx^xx+xx#xx-xx-/C:xx_xx^xx#xx+xx+xx&/D:xx=xx!xx@xx-xx&/E:xx|xx-xx@xx#xx&xx!xx-xx#/F:xx^xx=xx_xx-xx!
0 0 xx^sil-d+ong3=x@/A:xx-3^1@/B:0+2@1^3^1+3#1-3-/C:xx_n^xx#xx+3+xx&/D:xx=3!xx@1-1&/E:xx|3-xx@xx#1&xx!1-1#/F:xx^3=1_1-1!
0 0 sil^d-ong3+x=ian1@/A:xx-3^1@/B:0+2@1^3^1+3#1-3-/C:xx_n^xx#xx+3+xx&/D:xx=3!xx@1-1&/E:xx|3-xx@xx#1&xx!1-1#/F:xx^3=1_1-1!
0 0 d^ong3-x+ian1=sh@/A:3-1^1@/B:1+1@2^2^2+2#2-2-/C:xx_n^xx#xx+3+xx&/D:xx=3!xx@1-1&/E:xx|3-xx@xx#1&xx!1-1#/F:xx^3=1_1-1!


Is this because no further alignment has been done? Can such unaligned lab files be used by Merlin?

About merlin

Hello! I came here from Merlin, after seeing your answers in Merlin mandarin. But I have never managed to run the run_demo.sh script correctly. Is Vocoder=WORLD in conf/global_setting.conf actually wrong? In practice only the lowercase world and world_v2 seem usable, so it looks like this needs to be changed; I changed it, but it still wouldn't run. At step 6 of run_demo.sh, in 06_traub_acoustic_model.sh, I get this error:

IndexError: index 0 is out of bounds for axis 0 with size 0

Is this because I'm synthesizing Mandarin, or have I done something else wrong?
Also, running /merlin/misc/scripts/vocoder/world/extract_features_for_merlin.sh on its own reports SPTK-2.9/x2x Permission denied, even though I have already changed its permissions to 777; the error still appears, and the run does produce some output. I don't know where the problem is; if you have time, could you help me with it?

An issue with the MFA 2.0 linux version

All required kaldi binaries were found!
Setting up corpus information...
Parsing dictionary without pronunciation probabilties without silence probabilties
Creating dictionary information...
Setting up corpus_data directory...
Generating base features (mfcc)...
Traceback (most recent call last):
File "/home/thuyth-is/anaconda3/envs/aligner/bin/mfa", line 8, in
sys.exit(main())
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/command_line/mfa.py", line 292, in main
run_train_corpus(args)
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/command_line/train_and_align.py", line 109, in run_train_corpus
align_corpus(args)
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/command_line/train_and_align.py", line 65, in align_corpus
a = TrainableAligner(corpus, dictionary, train_config, align_config,
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/aligner/trainable.py", line 29, in init
super(TrainableAligner, self).init(corpus, dictionary, align_config, temp_directory,
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/aligner/base.py", line 40, in init
self.setup()
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/aligner/trainable.py", line 36, in setup
trainer.feature_config.generate_features(self.corpus)
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/features/config.py", line 170, in generate_features
self.generate_base_features(corpus)
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/features/config.py", line 158, in generate_base_features
corpus.combine_feats()
File "/home/thuyth-is/anaconda3/envs/aligner/lib/python3.8/site-packages/montreal_forced_aligner/corpus/base.py", line 436, in combine_feats
with open(path, 'r') as inf:
FileNotFoundError: [Errno 2] No such file or directory: '/home/thuyth/Documents/MFA/indir/corpus_data/split1/feats.0.scp'

I can't figure this out. Could someone help me, please?

Possible bug in this line of the _adjust method in mandarin_frontend.py

def _adjust(prosody_txt):
    '''Make sure that segment word is smaller than prosody word'''
    prosody_words = re.split('#\d', prosody_txt)
    rhythms = re.findall('#\d', prosody_txt)
    txt = ''.join(prosody_words)
    words = []
    poses = []
    # for txt in prosody_words:
    for word, pos in posseg.cut(txt):
        words.append(word)
        poses.append(pos[0])
    index = 0
    insert_time = 0
    length = len(prosody_words[index])
    i = 0
    while i < len(words):
        done = False
        while not done:
            if (len(words[i]) > length):
                # print(words[i], prosody_words[index])
                length += len(prosody_words[index + 1])
                rhythms[index + insert_time] = ''  # this line was previously rhythms[index] = ''
                index += 1
            elif (len(words[i]) < length):
                # print(' less than ', words[i], prosody_words[index])
                rhythms.insert(index + insert_time, '#0')
                # rhythms.insert(index, '#0')
                insert_time += 1
                length -= len(words[i])
                i += 1
            else:
                # print('equal :', words[i])
                # print(rhythms)
                done = True
                index += 1
        else:
            if (index < len(prosody_words)):
                length = len(prosody_words[index])
            i += 1
    if rhythms[-1] != '#4':
        rhythms.append('#4')
    rhythms = [x for x in rhythms if x != '']
    # print(rhythms)
    return (words, poses, rhythms)

生成HTS label 出错

Could you tell me why the txt causes an error as soon as any punctuation is added?
Error at A11_0, please check your txt 绿是阳春烟景大块文章的底色#4四月的林峦更是绿得鲜活秀媚诗意盎然#4
Prefix dict has been built succesfully.
less than 绿 绿是阳春烟景大块文章的底色
less than 是 绿是阳春烟景大块文章的底色
less than 阳春 绿是阳春烟景大块文章的底色
less than 烟景 绿是阳春烟景大块文章的底色
less than 大块文章 绿是阳春烟景大块文章的底色
less than 的 绿是阳春烟景大块文章的底色
equal : 底色
['#0', '#0', '#0', '#0', '#0', '#0', '#4', '#4']
less than 四月 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 的 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 林峦 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 更是 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 绿 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 得 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 鲜活 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 秀媚 四月的林峦更是绿得鲜活秀媚诗意盎然
less than 诗意 四月的林峦更是绿得鲜活秀媚诗意盎然
equal : 盎然
['#0', '#0', '#0', '#0', '#0', '#0', '#4', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#4']
['#0', '#0', '#0', '#0', '#0', '#0', '#4', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#4', '#4']
ERROR : Error at A11_0, please check your txt 绿是阳春烟景大块文章的底色#4四月的林峦更是绿得鲜活秀媚诗意盎然#4
INFO : processing 2, file A11_1
less than 他 他仅凭腰部的力量
less than 仅凭 他仅凭腰部的力量
less than 腰部 他仅凭腰部的力量
less than 的 他仅凭腰部的力量
equal : 力量
['#0', '#0', '#0', '#0', '#4']
less than 在 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 泳道 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 上下 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 翻腾 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 蛹 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 动 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 蛇行 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 状 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 如 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 海豚 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 一直 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 以 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 一头 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 的 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
less than 优势 在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先
equal : 领先
['#0', '#0', '#0', '#0', '#4', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0']
['#0', '#0', '#0', '#0', '#4', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#0', '#4']

Asking for guidance

Great work! I'm following your project; please give me some pointers.

biaobei dataset error - not .wav format

When extracting acoustic features with the script ${merlin}/misc/scripts/vocoder/world/extract_features_for_merlin.sh
on the biaobei dataset (48000 Hz sampling rate), most of the wav files (roughly those after 004000, e.g. 010000.wav) produce this error:
fmt (2) error.
error: The file is not .wav format.
Other datasets such as thchs30 (16000 Hz) and king-003 (44100 Hz) don't have this problem.

What does the format in the .lab file mean

I've just started with speech synthesis, so this question may be basic.
Below is the first line of A11_0.lab; I can't understand anything after the first @. Could you explain it?
0 11000000 xx^xx-sil+l=v4@/A:xx-xx^xx@/B:xx+xx@xx^xx^xx+xx#xx-xx-/C:xx_xx^xx#xx+xx+xx&/D:xx=xx!xx@xx-xx&/E:xx|xx-xx@xx#xx&xx!xx-xx#/F:xx^xx=xx_xx-xx!

duplicated file names?

Hi,

I am looking into merlin for a Korean extension and found your work. It is AMAZING. I tried your script and found the following error. Could you have a look, in case I've done something wrong?

Best,
Homin

$ ./run_demo.sh 
Setting up corpus information...
Traceback (most recent call last):
  File "aligner/command_line/align.py", line 186, in <module>
  File "aligner/command_line/align.py", line 146, in validate_args
  File "aligner/command_line/align.py", line 84, in align_corpus
  File "aligner/corpus.py", line 320, in __init__
aligner.exceptions.CorpusError: Files with the same file name are not permitted. Files with the same name are: data/thchs30_250_demo/output/wav/mandarin_voice/A11_0.wav, data/thchs30_250_demo/output/wav/mandarin_voice/wav/A11_0.wav.
Failed to execute script align
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_0.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_1.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_2.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_3.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_4.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_5.TextGrid
....
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_135.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_136.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_137.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_138.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_139.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_140.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_141.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_142.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_143.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_144.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_145.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_146.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_147.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_148.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_149.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_150.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_151.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_152.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_153.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_154.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_155.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_156.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_157.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_158.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_159.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_160.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_161.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_162.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_163.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_164.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_165.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_166.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_167.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_168.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_169.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_170.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_171.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_172.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_173.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_174.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_175.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_176.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_177.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_178.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_179.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_180.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_181.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_182.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_183.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_184.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_185.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_186.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_187.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_188.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_189.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_190.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_191.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_192.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_193.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_194.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_195.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_196.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_197.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_198.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_199.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_200.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_201.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_202.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_203.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_204.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_205.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_206.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_207.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_208.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_209.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_210.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_211.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_212.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_213.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_214.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_215.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_216.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_217.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_218.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_219.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_220.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_221.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_222.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_223.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_224.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_225.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_226.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_227.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_228.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_229.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_230.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_231.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_232.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_233.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_234.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_235.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_236.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_237.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_238.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_239.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_240.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_241.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_242.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_243.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_244.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_245.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_246.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_247.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_248.TextGrid
--Miss: data/thchs30_250_demo/output/textgrid/mandarin_voice/A11_249.TextGrid
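A log like the one above means the forced aligner produced no TextGrid for those utterances. A minimal sketch for reproducing such a check yourself, assuming the demo's directory layout (the `wav` and `textgrid` paths below are illustrative, not fixed by this project):

```python
import os

def find_missing_textgrids(wav_dir, textgrid_dir):
    """Return wav basenames that have no matching .TextGrid file."""
    missing = []
    for name in sorted(os.listdir(wav_dir)):
        base, ext = os.path.splitext(name)
        if ext.lower() != ".wav":
            continue
        # the aligner writes one <basename>.TextGrid per aligned wav
        if not os.path.isfile(os.path.join(textgrid_dir, base + ".TextGrid")):
            missing.append(base)
    return missing

if __name__ == "__main__":
    for base in find_missing_textgrids(
            "data/thchs30_250_demo/wav",
            "data/thchs30_250_demo/output/textgrid/mandarin_voice"):
        print("--Miss:", base + ".TextGrid")
```

Utterances listed as missing are typically ones the aligner skipped (e.g. transcript/audio mismatch) and should be excluded from label generation or re-aligned.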
