Document behaviour when --num_iter < --warmup_steps
This commit is contained in:
parent a095658e44
commit 4f8aaa22b0
@@ -464,6 +464,7 @@ Each of these scripts runs 200 warm-up iterations and measures the first epoch.
To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags. Features like XLA or DALI can be controlled with the `--use_xla` and `--use_dali` flags. If no `--data_dir=<path to imagenet>` flag is specified, the benchmarks will use a synthetic dataset.
For proper throughput reporting, the value of `--num_iter` must be greater than the value of `--warmup_steps`.
Suggested batch sizes for training are 256 for mixed precision and 128 for single precision, per single V100 16 GB.
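
The constraint above can be sketched as a quick check. This is a hypothetical helper, not part of the repository; it only models the iteration accounting implied by the text (warm-up iterations are discarded, so `--num_iter` must exceed `--warmup_steps` or nothing is measured):

```python
def measured_iterations(num_iter: int, warmup_steps: int) -> int:
    """Return how many iterations contribute to the throughput measurement.

    Warm-up iterations are discarded, so num_iter must be strictly
    greater than warmup_steps for any measurement to take place.
    """
    if num_iter <= warmup_steps:
        raise ValueError(
            "--num_iter must be greater than --warmup_steps "
            f"(got num_iter={num_iter}, warmup_steps={warmup_steps})"
        )
    return num_iter - warmup_steps

# With the documented 200 warm-up iterations, --num_iter has to be
# at least 201 for the benchmark to measure anything.
print(measured_iterations(500, 200))  # → 300
```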
#### Inference performance benchmark
@@ -480,6 +481,7 @@ To benchmark the inference performance on a specific batch size, run:
By default, each of these scripts runs 20 warm-up iterations and measures the next 80 iterations.
To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags.
For proper throughput and latency reporting, the value of `--num_iter` must be greater than the value of `--warmup_steps`.
If no `--data_dir=<path to imagenet>` flag is specified, the benchmarks will use a synthetic dataset.
The benchmark can be automated with the `inference_benchmark.sh` script provided in `resnet50v1.5`, by simply running:
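
The warm-up/measurement split described above can be written out in shell arithmetic. The flag names come from the text; the numeric values assume the documented defaults correspond to `--warmup_steps 20` and `--num_iter 100`, which is an illustrative assumption:

```shell
# Assumed defaults for the inference benchmark (illustrative values):
WARMUP_STEPS=20   # would be passed as --warmup_steps
NUM_ITER=100      # would be passed as --num_iter (with --iter_unit batch)

# Iterations that actually count toward the throughput/latency numbers:
MEASURED=$((NUM_ITER - WARMUP_STEPS))
echo "measured iterations: ${MEASURED}"
```

With these values the script reports 80 measured iterations, matching the "measures the next 80 iterations" default described above.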
@@ -430,6 +430,7 @@ Each of these scripts runs 200 warm-up iterations and measures the first epoch.
To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags. Features like XLA or DALI can be controlled with the `--use_xla` and `--use_dali` flags. If no `--data_dir=<path to imagenet>` flag is specified, the benchmarks will use a synthetic dataset.
For proper throughput reporting, the value of `--num_iter` must be greater than the value of `--warmup_steps`.
Suggested batch sizes for training are 128 for mixed precision and 64 for single precision, per single V100 16 GB.
@@ -447,6 +448,7 @@ To benchmark the inference performance on a specific batch size, run:
By default, each of these scripts runs 20 warm-up iterations and measures the next 80 iterations.
To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags.
For proper throughput and latency reporting, the value of `--num_iter` must be greater than the value of `--warmup_steps`.
If no `--data_dir=<path to imagenet>` flag is specified, the benchmarks will use a synthetic dataset.
The benchmark can be automated with the `inference_benchmark.sh` script provided in `resnext101-32x4d`, by simply running:
@@ -425,6 +425,7 @@ Each of these scripts runs 200 warm-up iterations and measures the first epoch.
To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags. Features like XLA or DALI can be controlled with the `--use_xla` and `--use_dali` flags. If no `--data_dir=<path to imagenet>` flag is specified, the benchmarks will use a synthetic dataset.
For proper throughput reporting, the value of `--num_iter` must be greater than the value of `--warmup_steps`.
Suggested batch sizes for training are 96 for mixed precision and 64 for single precision, per single V100 16 GB.
@@ -442,6 +443,7 @@ To benchmark the inference performance on a specific batch size, run:
By default, each of these scripts runs 20 warm-up iterations and measures the next 80 iterations.
To control warm-up and benchmark length, use the `--warmup_steps`, `--num_iter` and `--iter_unit` flags.
For proper throughput and latency reporting, the value of `--num_iter` must be greater than the value of `--warmup_steps`.
If no `--data_dir=<path to imagenet>` flag is specified, the benchmarks will use a synthetic dataset.
The benchmark can be automated with the `inference_benchmark.sh` script provided in `se-resnext101-32x4d`, by simply running: