From 8bd6dd14d374d4baa786bbbf9d991bacb93a6e3b Mon Sep 17 00:00:00 2001
From: Łukasz Pierścieniewski
Date: Thu, 20 Aug 2020 16:21:50 +0200
Subject: [PATCH] Document synthetic dataset options

---
 TensorFlow/Classification/ConvNets/resnet50v1.5/README.md | 6 +++---
 .../Classification/ConvNets/resnext101-32x4d/README.md    | 5 +++--
 .../Classification/ConvNets/se-resnext101-32x4d/README.md | 5 +++--
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/TensorFlow/Classification/ConvNets/resnet50v1.5/README.md b/TensorFlow/Classification/ConvNets/resnet50v1.5/README.md
index 07032288..e17f9542 100644
--- a/TensorFlow/Classification/ConvNets/resnet50v1.5/README.md
+++ b/TensorFlow/Classification/ConvNets/resnet50v1.5/README.md
@@ -194,7 +194,7 @@ To train your model using mixed precision or TF32 with Tensor Cores or FP32, per
 1. Clone the repository.
 ```
 git clone https://github.com/NVIDIA/DeepLearningExamples
-cd DeepLearningExamples/TensorFlow/Classification/RN50v1.5
+cd DeepLearningExamples/TensorFlow/Classification/ConvNets
 ```
 
 2. Download and preprocess the dataset.
@@ -452,10 +452,9 @@ To benchmark the training performance on a specific batch size, run:
 Each of these scripts runs 200 warm-up iterations and measures the first epoch.
 
 To control warmup and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags. Features like XLA or DALI can be controlled
-with `--use_xla` and `--use_dali` flags.
+with `--use_xla` and `--use_dali` flags. If no `--data_dir=` flag is specified, the benchmarks will use a synthetic dataset.
 
 Suggested batch sizes for training are 256 for mixed precision training and 128 for single precision training per single V100 16 GB.
 
-
 #### Inference performance benchmark
 To benchmark the inference performance on a specific batch size, run:
@@ -470,6 +469,7 @@
 By default, each of these scripts runs 20 warm-up iterations and measures the next 80 iterations.
 To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags.
+If no `--data_dir=` flag is specified, the benchmarks will use a synthetic dataset.
 
 The benchmark can be automated with the `inference_benchmark.sh` script provided in `resnet50v1.5`, by simply running:
 
 `bash ./resnet50v1.5/inference_benchmark.sh <data dir> <data idx dir>`
diff --git a/TensorFlow/Classification/ConvNets/resnext101-32x4d/README.md b/TensorFlow/Classification/ConvNets/resnext101-32x4d/README.md
index 26bc0e6c..765b8c24 100644
--- a/TensorFlow/Classification/ConvNets/resnext101-32x4d/README.md
+++ b/TensorFlow/Classification/ConvNets/resnext101-32x4d/README.md
@@ -203,7 +203,7 @@ To train your model using mixed precision or TF32 with Tensor Cores or FP32, per
 1. Clone the repository.
 ```
 git clone https://github.com/NVIDIA/DeepLearningExamples
-cd DeepLearningExamples/TensorFlow/Classification/RN50v1.5
+cd DeepLearningExamples/TensorFlow/Classification/ConvNets
 ```
 
 2. Download and preprocess the dataset.
@@ -420,7 +420,7 @@ To benchmark the training performance on a specific batch size, run:
 Each of these scripts runs 200 warm-up iterations and measures the first epoch.
 
 To control warmup and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags. Features like XLA or DALI can be controlled
-with `--use_xla` and `--use_dali` flags.
+with `--use_xla` and `--use_dali` flags. If no `--data_dir=` flag is specified, the benchmarks will use a synthetic dataset.
 
 Suggested batch sizes for training are 128 for mixed precision training and 64 for single precision training per single V100 16 GB.
 
@@ -438,6 +438,7 @@ To benchmark the inference performance on a specific batch size, run:
 By default, each of these scripts runs 20 warm-up iterations and measures the next 80 iterations.
 To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags.
+If no `--data_dir=` flag is specified, the benchmarks will use a synthetic dataset.
 
 The benchmark can be automated with the `inference_benchmark.sh` script provided in `resnext101-32x4d`, by simply running:
 
 `bash ./resnext101-32x4d/inference_benchmark.sh <data dir> <data idx dir>`
diff --git a/TensorFlow/Classification/ConvNets/se-resnext101-32x4d/README.md b/TensorFlow/Classification/ConvNets/se-resnext101-32x4d/README.md
index 2d472745..c24a9828 100644
--- a/TensorFlow/Classification/ConvNets/se-resnext101-32x4d/README.md
+++ b/TensorFlow/Classification/ConvNets/se-resnext101-32x4d/README.md
@@ -198,7 +198,7 @@ To train your model using mixed precision or TF32 with Tensor Cores or FP32, per
 1. Clone the repository.
 ```
 git clone https://github.com/NVIDIA/DeepLearningExamples
-cd DeepLearningExamples/TensorFlow/Classification/RN50v1.5
+cd DeepLearningExamples/TensorFlow/Classification/ConvNets
 ```
 
 2. Download and preprocess the dataset.
@@ -415,7 +415,7 @@ To benchmark the training performance on a specific batch size, run:
 Each of these scripts runs 200 warm-up iterations and measures the first epoch.
 
 To control warmup and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags. Features like XLA or DALI can be controlled
-with `--use_xla` and `--use_dali` flags.
+with `--use_xla` and `--use_dali` flags. If no `--data_dir=` flag is specified, the benchmarks will use a synthetic dataset.
 
 Suggested batch sizes for training are 96 for mixed precision training and 64 for single precision training per single V100 16 GB.
 
@@ -433,6 +433,7 @@ To benchmark the inference performance on a specific batch size, run:
 By default, each of these scripts runs 20 warm-up iterations and measures the next 80 iterations.
 To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags.
+If no `--data_dir=` flag is specified, the benchmarks will use a synthetic dataset.
 
 The benchmark can be automated with the `inference_benchmark.sh` script provided in `se-resnext101-32x4d`, by simply running:
 
 `bash ./se-resnext101-32x4d/inference_benchmark.sh <data dir> <data idx dir>`
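The flags this patch documents combine as sketched below. This is illustrative and not part of the patch: `--warmup_steps`, `--num_iter`, `--iter_unit`, `--use_xla`, `--use_dali`, and `--data_dir` are quoted from the README text above, while the `main.py` entry point and the `--mode`, `--batch_size`, and `--results_dir` arguments are assumptions about the benchmark interface and may differ in the repository.

```
# Hypothetical training benchmark against a real dataset (paths are placeholders).
python ./main.py --mode=training_benchmark --batch_size=256 \
    --warmup_steps=200 --num_iter=500 --iter_unit=batch \
    --data_dir=/data/tfrecords --results_dir=/tmp/results

# The same run with --data_dir omitted: per the lines this patch adds, the
# benchmark falls back to a synthetic dataset, measuring compute throughput
# without input-pipeline I/O.
python ./main.py --mode=training_benchmark --batch_size=256 \
    --warmup_steps=200 --num_iter=500 --iter_unit=batch \
    --results_dir=/tmp/results
```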
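A similar sketch for the inference benchmark, under the same assumptions about the entry point; no iteration flags are passed, so the defaults stated in the README excerpts (20 warm-up iterations, 80 measured) would apply.

```
# Hypothetical inference benchmark with --data_dir omitted: per this patch,
# a synthetic dataset is used. The documented defaults of 20 warm-up and
# 80 measured iterations apply because --warmup_steps and --num_iter are unset.
python ./main.py --mode=inference_benchmark --batch_size=128 \
    --results_dir=/tmp/results
```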