- Python >= 3.6
- Clone this repository.
- Install Python requirements; please refer to `requirements.txt`.
- Download and extract the LJ Speech dataset, then move all wav files to `LJSpeech-1.1/wavs`.
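As a quick sanity check after moving the files, the expected layout can be verified with a small stdlib-only sketch (the helper name is illustrative; only the `LJSpeech-1.1/wavs` path comes from the steps above):

```python
from pathlib import Path

def collect_wavs(dataset_dir):
    """Return sorted paths of all .wav files under <dataset_dir>/wavs."""
    wav_dir = Path(dataset_dir) / "wavs"
    if not wav_dir.is_dir():
        raise FileNotFoundError(f"expected wav directory at {wav_dir}")
    return sorted(wav_dir.glob("*.wav"))

# Usage after extraction: collect_wavs("LJSpeech-1.1")
# The full LJ Speech dataset contains 13,100 clips.
```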
```
python train.py --config config_v1.json
```
To train the V2 or V3 generator, replace `config_v1.json` with `config_v2.json` or `config_v3.json`.

Checkpoints and a copy of the configuration file are saved in the `cp_hifigan` directory by default. You can change the path with the `--checkpoint_path` option.
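The training script reads its settings from the JSON config file; a minimal sketch of loading such a config into an attribute-style dict (the `AttrDict`/`load_config` helpers and the example field names are illustrative assumptions, not the repository's own code):

```python
import json

class AttrDict(dict):
    """Dict whose keys are also readable as attributes, e.g. h.batch_size."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.__dict__ = self

def load_config(path):
    # Parse the JSON config (e.g. config_v1.json) into an AttrDict.
    with open(path) as f:
        return AttrDict(json.load(f))
```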
- Make a `test_files` directory and copy wav files into it.
- Run the following command:
```
python inference.py --checkpoint_file [generator checkpoint file path]
```
Generated wav files are saved in `generated_files` by default. You can change the path with the `--output_dir` option.
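The inference script writes one output wav per input wav; the input-to-output filename mapping can be sketched as follows (a hedged sketch only: the `_generated` suffix is an assumption about the script's naming, so check `inference.py` for the exact convention):

```python
import os

def output_path(input_wav, output_dir="generated_files", suffix="_generated"):
    """Map an input wav path to its generated output path.
    The suffix is an assumption -- see inference.py for the real naming."""
    base = os.path.splitext(os.path.basename(input_wav))[0]
    return os.path.join(output_dir, base + suffix + ".wav")
```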
- Make a `test_mel_files` directory and copy mel-spectrogram files into it. You can generate mel-spectrograms using Tacotron2.
- Run the following command:
```
python inference_e2e.py --checkpoint_file [generator checkpoint file path]
```
Generated wav files are saved in `generated_files_from_mel` by default. You can change the path with the `--output_dir` option.
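The generator consumes mel-spectrograms shaped `(num_mels, frames)`, so it helps to verify saved mels before synthesis. A tiny shape check, assuming an 80-band `num_mels` as in `config_v1.json` (match this value to your own config):

```python
def check_mel_shape(shape, num_mels=80):
    """Validate a mel-spectrogram shape tuple: (num_mels, frames) with at
    least one frame. num_mels=80 is an assumption taken from config_v1.json;
    use the num_mels value from the config you trained with."""
    if len(shape) != 2 or shape[0] != num_mels or shape[1] < 1:
        raise ValueError(f"expected ({num_mels}, frames), got {shape}")
    return True
```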