Merge pull request #89 from GrzegorzKarchNV/readme-epochs
changed number of epochs in train scripts; removed number of epochs f…
This commit is contained in:
commit f89dcca19d
@@ -137,8 +137,7 @@ Ensure your loss values are comparable to those listed in the table in the
 Results section. For both models, the loss values are stored in the
 `./output/nvlog.json` log file.
 
-After you have trained the Tacotron 2 model for 1500 epochs and the
-WaveGlow model for 800 epochs, you should get audio results similar to the
+After you have trained the Tacotron 2 and WaveGlow models, you should get audio results similar to the
 samples in the `./audio` folder. For details about generating audio, see the
 [Inference process](#inference-process) section below.
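The README text above points readers at `./output/nvlog.json` for the loss values. As a minimal sketch of pulling those values out, assuming a DLLogger-style JSON-lines file (each line an optional text prefix followed by a JSON object) and a hypothetical `train_loss` field name — check the actual log for the real key:

```python
import json

def read_losses(path, key="train_loss"):
    """Collect loss values from a JSON-lines training log.

    Assumes each line holds one JSON object, possibly after a
    marker prefix such as "DLLL "; the field name ("train_loss")
    is an assumption -- inspect your nvlog.json for the real key.
    """
    losses = []
    with open(path) as f:
        for line in f:
            # Skip any non-JSON prefix on the line.
            start = line.find("{")
            if start == -1:
                continue
            try:
                record = json.loads(line[start:])
            except json.JSONDecodeError:
                continue  # ignore malformed or non-JSON lines
            # Some loggers nest metrics under a "data" field.
            data = record.get("data", record)
            if isinstance(data, dict) and key in data:
                losses.append(data[key])
    return losses
```

Plotting or tailing the returned list makes it easy to check that the losses are comparable to the table in the Results section.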

@@ -1,2 +1,2 @@
 mkdir -p output
-python -m multiproc train.py -m Tacotron2 -o ./output/ -lr 1e-3 --epochs 2001 -bs 80 --weight-decay 1e-6 --grad-clip-thresh 1.0 --cudnn-benchmark=True --log-file ./output/nvlog.json --anneal-steps 500 1000 1500 --anneal-factor 0.1 --fp16-run
+python -m multiproc train.py -m Tacotron2 -o ./output/ -lr 1e-3 --epochs 1500 -bs 80 --weight-decay 1e-6 --grad-clip-thresh 1.0 --cudnn-benchmark=True --log-file ./output/nvlog.json --anneal-steps 500 1000 1500 --anneal-factor 0.1 --fp16-run

@@ -1,2 +1,2 @@
 mkdir -p output
-python -m multiproc train.py -m WaveGlow -o ./output/ -lr 1e-4 --epochs 2001 -bs 8 --segment-length 8000 --weight-decay 0 --grad-clip-thresh 65504.0 --epochs-per-checkpoint 50 --cudnn-benchmark=True --log-file ./output/nvlog.json --fp16-run
+python -m multiproc train.py -m WaveGlow -o ./output/ -lr 1e-4 --epochs 1000 -bs 8 --segment-length 8000 --weight-decay 0 --grad-clip-thresh 65504.0 --epochs-per-checkpoint 50 --cudnn-benchmark=True --log-file ./output/nvlog.json --fp16-run