ResNet50/PyT
* Triton ONNXRuntime fix with env flag
Scripts were modified to fix the missing ORT_TENSORRT_FP16_ENABLE flag for
Triton Inference Server with ONNXRuntime and the TensorRT execution provider.
* TensorRT FP16 support fixed
The ONNX to TensorRT converter was fixed to force FP16 precision for
TensorRT networks.
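The ORT_TENSORRT_FP16_ENABLE variable must be present in the server process environment before the ONNXRuntime backend initializes its TensorRT execution provider. A minimal sketch of setting it when launching the server; the server binary name and model repository path are illustrative, not taken from the scripts:

```python
import os
import subprocess

# ORT_TENSORRT_FP16_ENABLE tells ONNXRuntime's TensorRT execution provider
# to build FP16 engines; it must be set before the server process starts.
env = dict(os.environ)
env["ORT_TENSORRT_FP16_ENABLE"] = "1"

# Illustrative launch line only -- the actual binary and model repository
# path depend on the Triton deployment.
cmd = ["tritonserver", "--model-repository=/models"]
# subprocess.run(cmd, env=env)  # uncomment in a real deployment
```

Setting the flag in the child-process `env` (rather than the parent shell) keeps the change scoped to the server launch.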
Tacotron2+Waveglow/PyT
* AMP support
* Data preprocessing for Tacotron 2 training
* Fixed dropouts on LSTMCells
SSD/PyT
* script and notebook for inference
* AMP support
* README update
* updates to examples/*
BERT/PyT
* initial release
GNMT/PyT
* Default container updated to NGC PyTorch 19.05-py3
* Mixed precision training implemented using APEX AMP
* Added inference throughput and latency results on NVIDIA Tesla V100 16G
* Added option to run inference on user-provided raw input text from command line
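Accepting raw input text from the command line typically means adding a text option alongside the usual file-based input. A minimal sketch of that option handling; the flag names here are hypothetical and the real GNMT script defines its own arguments:

```python
import argparse

def build_parser():
    # Hypothetical flags for illustration; not the script's actual options.
    parser = argparse.ArgumentParser(description="GNMT inference")
    parser.add_argument("--input-text", type=str, default=None,
                        help="raw source sentence to translate")
    parser.add_argument("--input", type=str, default=None,
                        help="path to a file with source sentences")
    return parser

args = build_parser().parse_args(["--input-text", "Hello world"])
# A real script would tokenize args.input_text and run the model on it.
```

When `--input-text` is given, the script can skip file loading entirely and translate the single provided sentence.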
NCF/PyT
* Updated performance tables.
* Default container changed to PyTorch 19.06-py3.
* Caching validation negatives between runs.
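NCF evaluation samples negative items for each user, and regenerating them every run is wasted work if the split is fixed. A minimal sketch of the caching idea, assuming a hypothetical cache file and a simplified sampler (the real implementation excludes each user's positive items):

```python
import os
import pickle
import random

CACHE_PATH = "val_negatives.pkl"  # hypothetical cache location

def sample_negatives(num_users, num_items, num_neg, seed=0):
    # Simplified: draws random item ids per user without excluding positives.
    rng = random.Random(seed)
    return {u: [rng.randrange(num_items) for _ in range(num_neg)]
            for u in range(num_users)}

def load_or_create_negatives(num_users=4, num_items=100, num_neg=5):
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "rb") as f:
            return pickle.load(f)          # reuse negatives from a prior run
    negatives = sample_negatives(num_users, num_items, num_neg)
    with open(CACHE_PATH, "wb") as f:
        pickle.dump(negatives, f)          # cache for subsequent runs
    return negatives

first = load_or_create_negatives()
second = load_or_create_negatives()  # served from the on-disk cache
```

Caching also keeps the evaluation set identical across runs, which makes validation metrics directly comparable.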
Transformer/PyT
* new README
* jit support added
UNet Medical/TF
* inference example scripts added
* inference benchmark measuring latency added
* TRT/TF-TRT support added
* README updated
GNMT/TF
* Performance improvements
Small updates (mostly README) for other models.