SSD320 v1.2 For TensorFlow

This repository provides a script and recipe to train the SSD320 v1.2 model to achieve state-of-the-art accuracy, and is tested and maintained by NVIDIA.

Table Of Contents

  • The model
  • Default configuration
  • Setup
  • Quick Start Guide
  • Details
  • Benchmarking
  • Results
  • Changelog
  • Known issues

The model

The SSD320 v1.2 model is based on the SSD: Single Shot MultiBox Detector paper, which describes SSD as “a method for detecting objects in images using a single deep neural network”.

We have altered the network in order to improve accuracy and increase throughput. Changes we have made include:

  • Replacing the VGG backbone with the more popular ResNet50.
  • Adding multi-scale detection to the backbone using Feature Pyramid Networks.
  • Replacing the original hard negative mining loss function with Focal Loss (a brief sketch follows this list).
  • Decreasing the input size to 320 x 320.
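
For illustration, below is a minimal sketch of the Focal Loss used for anchor classification. It assumes the commonly cited defaults alpha=0.25 and gamma=2.0 from the Focal Loss paper; the repository's actual loss is built by the object_detection library, so treat this purely as an explanatory example.

import tensorflow as tf

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Per-anchor sigmoid focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    # Down-weights well-classified (easy) anchors, replacing hard negative mining.
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets, logits=logits)
    p = tf.sigmoid(logits)
    p_t = targets * p + (1.0 - targets) * (1.0 - p)              # probability of the true class
    alpha_t = targets * alpha + (1.0 - targets) * (1.0 - alpha)  # class balancing factor
    return alpha_t * tf.pow(1.0 - p_t, gamma) * ce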

Our implementation is based on the existing model from the TensorFlow models repository.

This model trains with mixed precision tensor cores on NVIDIA Volta GPUs; therefore, you can get results much faster than training without tensor cores. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

The following features were implemented in this model:

  • Data-parallel multi-GPU training with Horovod.
  • Mixed precision support with TensorFlow Automatic Mixed Precision (TF-AMP), which enables mixed precision training without any changes to the code-base by performing automatic graph rewrites and loss scaling controlled by an environment variable.
  • Tensor Core operations to maximize throughput using NVIDIA Volta GPUs.
  • Dynamic loss scaling for tensor cores (mixed precision) training.

Because of these enhancements, the SSD320 v1.2 model achieves higher accuracy.

Default configuration

We trained the model for 12500 steps (27 epochs) with the following setup:

  • SGDR with cosine decay learning rate (a sketch of the schedule follows this list)
  • Learning rate base = 0.16
  • Momentum = 0.9
  • Warm-up learning rate = 0.0693312
  • Warm-up steps = 1000
  • Batch size per GPU = 32
  • Number of GPUs = 8
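
As a rough illustration of the schedule above (a linear warm-up into a cosine decay), here is a small sketch in plain Python using the hyperparameters listed in this section. The actual schedule is defined in the pipeline config and built by the object_detection library, so the exact shape may differ slightly.

import math

TOTAL_STEPS = 12500
WARMUP_STEPS = 1000
BASE_LR = 0.16
WARMUP_LR = 0.0693312

def learning_rate(step):
    if step < WARMUP_STEPS:
        # Linear ramp from the warm-up learning rate up to the base learning rate.
        return WARMUP_LR + (BASE_LR - WARMUP_LR) * step / WARMUP_STEPS
    # Cosine decay from the base learning rate down towards zero.
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * progress))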

Setup

The following section lists the requirements for starting training of the SSD320 v1.2 model.

Requirements

This repository contains a Dockerfile which extends the TensorFlow NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following software:

  • nvidia-docker
  • The TensorFlow NGC container (our results were obtained with TensorFlow-19.03-py3)
  • An NVIDIA Volta-based GPU

For more information about how to get started with NGC containers, see the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation.

Quick Start Guide

To train your model using mixed precision with tensor cores or using FP32, perform the following steps using the default parameters of the SSD320 v1.2 model on the COCO 2017 dataset.

1. Clone the repository.

git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/TensorFlow/Detection/SSD

2. Build the SSD320 v1.2 TensorFlow NGC container.

docker build . -t nvidia_ssd

3. Download and preprocess the dataset.

Extract the COCO 2017 dataset with:

download_all.sh nvidia_ssd <data_dir_path> <checkpoint_dir_path>

The data will be downloaded, preprocessed into tfrecords format, and saved in the <data_dir_path> directory (on the host). Moreover, the script will download a pre-trained ResNet50 checkpoint into the <checkpoint_dir_path> directory.

4. Launch the NGC container to run training/inference.

nvidia-docker run --rm -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -v <data_dir_path>:/data/coco2017_tfrecords -v <checkpoint_dir_path>:/checkpoints --ipc=host nvidia_ssd

5. Start training.

The ./examples directory provides several sample scripts for various GPU settings, which act as wrappers around the object_detection/model_main.py script. The example scripts can be customized with the following arguments:

  • A path to directory for checkpoints
  • A path to directory for configs
  • Additional arguments to object_detection/model_main.py

If you want to train on 8 GPUs with tensor cores acceleration and save checkpoints in the /checkpoints directory, run:

bash ./examples/SSD320_FP16_8GPU.sh /checkpoints

6. Start validation/evaluation.

The model_main.py training script automatically runs validation during training. The results from the validation are printed to stdout.

The pycocotools open-source scripts provide a consistent way to evaluate models on the COCO dataset. We use these scripts during validation to measure the model's performance in the AP metric. The metrics below are evaluated using the pycocotools methodology, in the following format:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.273
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.423
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.291
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.024
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.218
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.451
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.257
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.398
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.427
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.070
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.418
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.645

The metric reported in our results is present in the first row.
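
For reference, below is a minimal sketch of how such a table can be produced with pycocotools. The annotation and detection file names are placeholders for illustration; the training script runs the equivalent evaluation internally through the object_detection library.

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val2017.json')   # COCO ground-truth annotations
coco_dt = coco_gt.loadRes('detections_val2017.json')   # model detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints the table above; coco_eval.stats[0] is AP @ IoU=0.50:0.95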

To evaluate a checkpointed model saved in the previous step, you can use a script from the examples directory. If you want to run inference with tensor cores acceleration, run:

bash examples/SSD320_evaluate.sh <path to checkpoint>

Details

The following sections provide greater details of the dataset, running training and inference, and the training results.

Command line options

Training of the SSD model is conducted by the model_main.py script from the object_detection library. Our experiments were done with the settings described in the examples directory. To get more details about the available arguments, run:

python object_detection/model_main.py --help

Getting the data

The SSD320 v1.2 model was trained on the COCO 2017 dataset, with the val2017 split used for validation. The download_data.sh script preprocesses the data into tfrecords format.

This repository contains the download_dataset.sh script which will automatically download and preprocess the training, validation and test datasets. By default, data will be downloaded to the /data/coco2017_tfrecords directory.

Training process

Training the SSD model is implemented in the object_detection/model_main.py script.

All training parameters are set in the config files. Because evaluation is relatively time consuming, it does not run every epoch. By default, evaluation is executed only once at the end of the training. The model is evaluated using pycocotools distributed with the COCO dataset. The number of evaluations can be changed using the eval_count parameter.

To run training with tensor cores, use the ./examples/SSD320_FP16_{1,4,8}GPU.sh scripts. For more details, see the Enabling mixed precision section below.

Data preprocessing

Before we feed data to the model, both during training and inference, we perform:

  • Normalization
  • Encoding bounding boxes
  • Resize to 320x320
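
As a simplified illustration of the first and last steps (normalization and resizing), here is a hedged sketch using stock TensorFlow ops. The exact normalization constants and the bounding-box encoding are handled by the object_detection preprocessing pipeline, so the values below are assumptions for illustration only.

import tensorflow as tf

def preprocess_image(image):
    # Convert to float in [0, 1], then normalize to roughly [-1, 1].
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = (image - 0.5) * 2.0
    # Resize to the 320x320 input resolution used by SSD320.
    image = tf.image.resize_images(image, [320, 320])
    return image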

Data augmentation

During training we perform the following augmentation techniques:

  • Random crop
  • Random horizontal flip
  • Color jitter
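
Below is a hedged sketch of the flip and color-jitter steps using stock TensorFlow image ops (the random crop and the corresponding bounding-box adjustments are omitted for brevity). The actual augmentations are configured through the object_detection pipeline config, and the jitter magnitudes here are illustrative assumptions.

import tensorflow as tf

def augment_image(image):
    # Random horizontal flip; in detection the box coordinates must be flipped too,
    # which the object_detection pipeline handles and is omitted here.
    image = tf.image.random_flip_left_right(image)
    # Color jitter: brightness, contrast and saturation perturbations.
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
    return image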

Enabling mixed precision

Mixed precision training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of tensor cores in the Volta and Turing architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training previously required two steps:

  1. Porting the model to use the FP16 data type where appropriate.
  2. Manually adding loss scaling to preserve small gradient values.

This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full mixed precision methodology in your existing TensorFlow model code. AMP enables mixed precision training on Volta and Turing GPUs automatically. The TensorFlow framework code makes all necessary model changes internally.

In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling.
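
In practice, enabling TF-AMP in the NGC container comes down to setting a single environment variable before TensorFlow builds its graph; the optimizer wrapper shown as the second option is the equivalent API available in stock TensorFlow 1.14+. This is only a sketch; the FP16 example scripts in this repository enable mixed precision for you.

import os
import tensorflow as tf

# Option 1: enable the automatic mixed precision graph rewrite and loss scaling
# via the environment variable, before the session/estimator is created.
os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'

# Option 2 (TensorFlow 1.14+): wrap the optimizer explicitly.
optimizer = tf.train.MomentumOptimizer(learning_rate=0.16, momentum=0.9)
optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)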

For more information about mixed precision training, see the Training With Mixed Precision guide in the NVIDIA Deep Learning documentation.

Benchmarking

The following section shows how to run benchmarks measuring the model performance in training and inference modes.

Training performance benchmark

The training benchmark was run in various scenarios on a V100 16G GPU. For each scenario, the batch size was set to 32.

To benchmark training, run:

bash examples/SSD320_{PREC}_{NGPU}GPU_BENCHMARK.sh

where {NGPU} defines the number of GPUs used in the benchmark, and {PREC} defines the precision. The benchmark runs training for only 1200 steps and computes the average training speed over the last 300 steps.

Inference performance benchmark

The inference benchmark was run with various batch sizes on a V100 16G GPU, using a single-GPU setting. Examples are taken from the validation dataset.

To benchmark inference, run:

bash examples/SSD320_FP{16,32}_inference.sh --batch_size <batch size> --checkpoint_dir <path to checkpoint>

Batch size for the inference benchmark is controlled by the --batch_size argument, while the checkpoint is provided to the script with the --checkpoint_dir argument.

The benchmark script provides additional arguments for finer control over the experiment. We used the default values for these arguments in our experiments. For more details about them, run:

bash examples/SSD320_FP16_inference.sh --help

Results

The following sections provide details on how we achieved our performance and accuracy in training and inference.

Training accuracy results

Our results were obtained by running the ./examples/SSD320_FP{16,32}_{1,4,8}GPU.sh script in the TensorFlow-19.03-py3 NGC container on NVIDIA DGX-1 with 8x V100 16G GPUs. All the results are obtained with batch size set to 32.

| Number of GPUs | Mixed precision mAP | Training time with mixed precision | FP32 mAP | Training time with FP32 |
|----------------|---------------------|------------------------------------|----------|-------------------------|
| 1              | 0.276               | 7h 17min                           | 0.278    | 10h 20min               |
| 4              | 0.277               | 2h 15min                           | 0.275    | 2h 53min                |
| 8              | 0.269               | 1h 19min                           | 0.268    | 1h 37min                |

Here are example graphs of FP32 and FP16 training on an 8-GPU configuration:

[Figure: training loss]

[Figure: validation accuracy]

Training performance results

Our results were obtained by running:

bash examples/SSD320_FP*GPU_BENCHMARK.sh

scripts in the TensorFlow-19.03-py3 NGC container on NVIDIA DGX-1 with V100 16G GPUs.

| Number of GPUs | Batch size per GPU | Mixed precision img/s | FP32 img/s | Speed-up with mixed precision | Multi-GPU weak scaling with mixed precision | Multi-GPU weak scaling with FP32 |
|----------------|--------------------|-----------------------|------------|-------------------------------|---------------------------------------------|----------------------------------|
| 1              | 32                 | 124.97                | 87.87      | 1.42                          | 1.00                                        | 1.00                             |
| 4              | 32                 | 430.79                | 330.35     | 1.39                          | 3.45                                        | 3.76                             |
| 8              | 32                 | 752.04                | 569.01     | 1.32                          | 6.02                                        | 6.48                             |

To achieve the same results, follow the Quick Start Guide outlined above.

These results can be improved when XLA is used in conjunction with mixed precision, delivering up to a 2x speedup over FP32 on a single GPU (~179 img/s). However, XLA is still considered experimental.

Inference performance results

Our results were obtained by running the examples/SSD320_FP{16,32}_inference.sh script in the TensorFlow-19.03-py3 NGC container on NVIDIA DGX-1 with 1x V100 16G GPU.

| Batch size | Mixed precision img/s | FP32 img/s |
|------------|-----------------------|------------|
| 1          | 93.37                 | 97.29      |
| 2          | 135.33                | 134.04     |
| 4          | 171.70                | 163.38     |
| 8          | 189.25                | 174.47     |
| 16         | 187.62                | 175.42     |
| 32         | 187.37                | 175.07     |
| 64         | 191.40                | 177.75     |

To achieve the same results, follow the Quick Start Guide outlined above.

Changelog

March 2019

  • Initial release

May 2019

  • Test scripts updated

Known issues

There are no known issues with this model.