adding support for --cpu-run

maggiezha 2020-05-06 21:52:49 +10:00 committed by GitHub
parent 342c4710fc
commit 3a6b667118

@@ -340,10 +340,10 @@ and `--waveglow` arguments. Tacotron2 and WaveGlow checkpoints can also be downl
You can also run inference on CPU with TorchScript by adding the flag `--cpu-run`:
```bash
export CUDA_VISIBLE_DEVICES=
```
```bash
python inference.py --tacotron2 <Tacotron2_checkpoint> --waveglow <WaveGlow_checkpoint> --wn-channels 256 --cpu-run -o output/ -i phrases/phrase.txt
```
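For context, clearing `CUDA_VISIBLE_DEVICES` hides every GPU from the process, and `--cpu-run` makes `inference.py` take the TorchScript path on CPU. Below is a minimal sketch of that pattern; `TinyModel` is a hypothetical stand-in module, not the repository's Tacotron 2 or WaveGlow:
```python
# Sketch of TorchScript inference on CPU. TinyModel is a hypothetical
# stand-in; inference.py loads real Tacotron 2 / WaveGlow checkpoints.
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(80, 80)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(x))

model = TinyModel().to("cpu").eval()    # pin the model to CPU, as --cpu-run does
scripted = torch.jit.script(model)      # compile to TorchScript
with torch.no_grad():
    out = scripted(torch.randn(1, 80))  # runs with no GPU visible
print(out.shape)                        # torch.Size([1, 80])
```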
## Advanced
@@ -385,7 +385,7 @@ WaveGlow models.
* `--learning-rate` - learning rate (Tacotron 2: 1e-3, WaveGlow: 1e-4)
* `--batch-size` - batch size (Tacotron 2 FP16/FP32: 104/48, WaveGlow FP16/FP32: 10/4)
* `--amp-run` - use mixed precision training
-* `--cpu-run` - use CPU with TorchScript inference
+* `--cpu-run` - use CPU with TorchScript for inference
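As an illustration, the shared flags above could be declared with `argparse` roughly as follows; this is a sketch only, and the defaults are assumptions rather than values taken from the repository's actual parsers:
```python
# Hypothetical declaration of the shared flags listed above; defaults are
# assumptions, not taken from the repository's train.py or inference.py.
import argparse

parser = argparse.ArgumentParser(description="shared Tacotron 2 / WaveGlow options (sketch)")
parser.add_argument("--learning-rate", type=float, default=1e-3,
                    help="learning rate (Tacotron 2: 1e-3, WaveGlow: 1e-4)")
parser.add_argument("--batch-size", type=int, default=48,
                    help="batch size (Tacotron 2 FP16/FP32: 104/48, WaveGlow FP16/FP32: 10/4)")
parser.add_argument("--amp-run", action="store_true", help="use mixed precision training")
parser.add_argument("--cpu-run", action="store_true", help="use CPU with TorchScript for inference")
args = parser.parse_args([])  # empty list so the sketch runs without CLI input
print(args.cpu_run)
```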
#### Shared audio/STFT parameters
@@ -494,13 +494,6 @@ mixed precision and FP32 training, respectively.
You can find all the available options by calling `python inference.py --help`.
You can also run inference on CPU with TorchScript by adding the flag `--cpu-run`:
```bash
export CUDA_VISIBLE_DEVICES=
```
```bash
python inference.py --tacotron2 <Tacotron2_checkpoint> --waveglow <WaveGlow_checkpoint> --wn-channels 256 --cpu-run -o output/ -i phrases/phrase.txt
```
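A flag like `--cpu-run` typically just routes device selection. The sketch below shows one plausible way to implement that logic; it is illustrative and may differ from what `inference.py` actually does:
```python
# Illustrative device-selection logic for a --cpu-run style flag; the
# repository's inference.py may implement this differently.
import torch

def pick_device(cpu_run: bool) -> torch.device:
    # Fall back to CPU when the flag is set or when no GPU is visible
    # (e.g. after `export CUDA_VISIBLE_DEVICES=`).
    if cpu_run or not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device("cuda")

print(pick_device(cpu_run=True))  # always cpu, even on a GPU machine
```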
## Performance
### Benchmarking