DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/Colab_UNet_Industrial_TF_TFTRT_inference_demo.ipynb

{
"cells": [
{
"cell_type": "raw",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/NVIDIA/DeepLearningExamples/blob/master/TensorFlow/Segmentation/UNet_Industrial/notebooks/Colab_UNet_Industrial_TF_TFTRT_inference_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Gwt7z7qdmTbW"
},
"outputs": [],
"source": [
"# Copyright 2019 NVIDIA Corporation. All Rights Reserved.\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# http://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License.\n",
"# =============================================================================="
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "i4NKCp2VmTbn"
},
"source": [
"<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
"\n",
"# UNet Industrial Inference Demo with TF-TRT"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "fW0OKDzvmTbt"
},
"source": [
"## Overview\n",
"\n",
"\n",
"In this notebook, we will demo the process of carrying out inference on new images using a pre-trained UNet model downloaded from the NVIDIA NGC Model registry. We will also optimize the naitive TensorFlow trained model for deployment with TensorFlow-TensorRT (TF-TRT). TensorRT is the NVIDIA high-performance runtime environment for deployment of deep learning applications. TF-TRT is the integration of TensorRT directly into the TensorFlow ecosystem, allowing users to benefit much improved performance using a relatively easy and convenient Python API interfance. \n",
"\n",
"\n",
"### Requirement\n",
"1. Before running this notebook, please set the Colab runtime environment to GPU via the menu *Runtime => Change runtime type => GPU*.\n",
"\n",
"For TF-TRT FP16 and INT8 inference, an NVIDIA Volta, Turing or newer GPU generations is required. "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 316
},
"colab_type": "code",
"id": "HVsrGkj4Zn2L",
"outputId": "444fdef7-46bc-4e92-d33e-32a35f9faa34"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fri Sep 27 04:28:18 2019 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 430.40 Driver Version: 418.67 CUDA Version: 10.1 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 69C P0 72W / 149W | 6601MiB / 11441MiB | 0% Default |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: GPU Memory |\n",
"| GPU PID Type Process name Usage |\n",
"|=============================================================================|\n",
"+-----------------------------------------------------------------------------+\n"
]
}
],
"source": [
"!nvidia-smi"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "chjbvfyJbUrY"
},
"source": [
"\n",
"2. Install TensorFlow GPU 1.15.0\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 430
},
"colab_type": "code",
"id": "4kxy3SY1UoX3",
"outputId": "546180b2-ed02-4b8a-a877-4e7b078ae229"
},
"outputs": [],
"source": [
"!pip install tensorflow-gpu==1.15.0-rc1"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "pV3rzgO8-tSK"
},
"source": [
"The below code check whether a Tensor core GPU is present."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"colab_type": "code",
"id": "Djyvo8mm9poq",
"outputId": "c92131a7-9911-4d6c-a502-a50a7f128baa"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tensor Core GPU Present: None\n"
]
}
],
"source": [
"from tensorflow.python.client import device_lib\n",
"\n",
"def check_tensor_core_gpu_present():\n",
" local_device_protos = device_lib.list_local_devices()\n",
" for line in local_device_protos:\n",
" if \"compute capability\" in str(line):\n",
" compute_capability = float(line.physical_device_desc.split(\"compute capability: \")[-1])\n",
" if compute_capability>=7.0:\n",
" return True\n",
" \n",
"print(\"Tensor Core GPU Present:\", check_tensor_core_gpu_present())\n",
"tensor_core_gpu = check_tensor_core_gpu_present()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FCEfkBAbbaLI"
},
"source": [
"3. Next, we clone the Github UNet_Industrial repository and set up the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"colab_type": "code",
"id": "y3u_VMjXtAto",
"outputId": "e04d1fb2-24ce-41bb-f13f-006df7916389"
},
"outputs": [],
"source": [
"!git clone https://github.com/NVIDIA/DeepLearningExamples"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"colab_type": "code",
"id": "-rE46y-ftAuQ",
"outputId": "fd16441f-0068-4432-c72e-8ecc4eeba491"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks\n"
]
}
],
"source": [
"import os\n",
"\n",
"WORKSPACE_DIR='/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks'\n",
"os.chdir(WORKSPACE_DIR)\n",
"print (os.getcwd())"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "HqSUGePjmTb9"
},
"source": [
"## Data download\n",
"\n",
"We will first download some data, in particular, the [Weakly Supervised Learning for Industrial Optical Inspection (DAGM 2007)](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html) dataset. \n",
"\n",
"> The competition is inspired by problems from industrial image processing. In order to satisfy their customers' needs, companies have to guarantee the quality of their products, which can often be achieved only by inspection of the finished product. Automatic visual defect detection has the potential to reduce the cost of quality assurance significantly.\n",
">\n",
"> The competitors have to design a stand-alone algorithm which is able to detect miscellaneous defects on various background textures.\n",
">\n",
"> The particular challenge of this contest is that the algorithm must learn, without human intervention, to discern defects automatically from a weakly labeled (i.e., labels are not exact to the pixel level) training set, the exact characteristics of which are unknown at development time. During the competition, the programs have to be trained on new data without any human guidance.\n",
"\n",
"**Source:** https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 355
},
"colab_type": "code",
"id": "S2PR7weWmTcK",
"outputId": "9d5c8000-8ae7-4179-9b6a-5ed8cab4acc8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"################################################\n",
"Processing Public Dataset\n",
"################################################\n",
"\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class1.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class1_def.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class2.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class2_def.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class3.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class3_def.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class4.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class4_def.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class5.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class5_def.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class6.zip\n",
"Archive: /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/data/zip_files/public/Class6_def.zip\n"
]
}
],
"source": [
"! ./download_and_preprocess_dagm2007_public.sh ./data"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "EQAIszkxmTcT"
},
"source": [
"The final data directory should look like:\n",
"\n",
"```\n",
"./data\n",
" raw_images\n",
" public\n",
" Class1\t \n",
" Class2\t\n",
" Class3\t \n",
" Class4\t\n",
" Class5\t \n",
" Class6\n",
" Class1_def \n",
" Class2_def\t\n",
" Class3_def \n",
" Class4_def\t\n",
" Class5_def \n",
" Class6_def\n",
" private\n",
" zip_files\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "xSztH-mf-6hY"
},
"source": [
"Each data directory contains training images corresponding to one of 6 types of defects."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "RL8d9IwzmTcV"
},
"source": [
"## Model download from NVIDIA NGC model repository\n",
"\n",
"NVIDIA provides pretrained UNet models along with many other deep learning models such as ResNet, BERT, Transformer, SSD... at https://ngc.nvidia.com/catalog/models. Here, we will download and upzip pretrained UNet models. "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "wNA8uFflu7gO",
"outputId": "82d5f16b-69f4-47e0-f352-851e8fa2051c"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Archive: ./unet_model.zip\n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+1/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+1/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+10/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+10/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+10/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+10/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+10/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+10/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+2/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+2/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+2/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+2/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+2/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+2/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+3/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+3/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+3/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+3/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+3/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+3/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+4/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+4/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+4/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+4/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+4/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+4/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+5/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+5/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+5/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+5/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+5/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+5/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+6/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+6/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+6/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+6/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+6/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+6/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+7/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+7/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+7/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+7/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+7/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+7/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+8/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+8/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+8/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+8/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+8/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+8/model.ckpt-2500.meta \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+9/checkpoint \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+9/graph.pbtxt \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+9/model.ckpt-2500.data-00000-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+9/model.ckpt-2500.data-00001-of-00002 \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+9/model.ckpt-2500.index \n",
" inflating: JoC_UNET_Industrial_FP32_TF_20190522/Class+9/model.ckpt-2500.meta \n"
]
}
],
"source": [
"%%bash \n",
"wget -nc -q --show-progress -O unet_model.zip \\\n",
"https://api.ngc.nvidia.com/v2/models/nvidia/unetindustrial_for_tensorflow_32/versions/1/zip\n",
"unzip -o ./unet_model.zip"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "i6ADZZfGtAvP"
},
"source": [
"Upon completion of the download, the following model directories should exist, containing pre-trained model corresponding to 10 classes of the DAGM 2007 competition data set."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 54
},
"colab_type": "code",
"id": "jtqhp3X5tAvS",
"outputId": "db51e242-2828-4994-9179-86f459c17db2"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Class+1 Class+2 Class+4 Class+6 Class+8\n",
"Class+10 Class+3 Class+5 Class+7 Class+9\n"
]
}
],
"source": [
"!ls JoC_UNET_Industrial_FP32_TF_20190522"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "dt6oArfSmTc5"
},
"source": [
"## Inference with Naitive TensorFlow\n",
"\n",
"We will now launch an interactive testing, where you can load new test images. First, we load some required libraries and define some helper functions to load the pretrained UNet model."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 352
},
"colab_type": "code",
"id": "6NktI1GUtAvb",
"outputId": "700b92ec-db41-40aa-ec65-cd5599938550"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Processing /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/dllogger\n",
"Building wheels for collected packages: DLLogger\n",
" Building wheel for DLLogger (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for DLLogger: filename=DLLogger-0.3.1-cp36-none-any.whl size=9884 sha256=294b0b226bb82109933049017fcab2268b6a282b616790276b0a2af8763ed245\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-kpseamaa/wheels/23/a4/72/2606d992c53ecdd7969c79ed3fb0c23dacdbdb438a8c17999a\n",
"Successfully built DLLogger\n",
"Installing collected packages: DLLogger\n",
" Found existing installation: DLLogger 0.3.1\n",
" Uninstalling DLLogger-0.3.1:\n",
" Successfully uninstalled DLLogger-0.3.1\n",
"Successfully installed DLLogger-0.3.1\n"
]
},
{
"data": {
"application/vnd.colab-display-data+json": {
"pip_warning": {
"packages": [
"dllogger"
]
}
}
},
"metadata": {
"tags": []
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"1.15.0-rc1\n"
]
}
],
"source": [
"!pip install ../dllogger\n",
"\n",
"import tensorflow as tf\n",
"print(tf.__version__)\n",
"\n",
"try:\n",
" __import__(\"horovod\")\n",
"except ImportError:\n",
" os.system(\"pip install horovod\")\n",
" \n",
"\n",
" \n",
"import horovod.tensorflow\n",
"import sys\n",
"sys.path.insert(0,'/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial')\n",
"\n",
"from model.unet import UNet_v1\n",
"\n",
"import numpy as np\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Ct-izSTsv04V"
},
"source": [
"We will now load and inspect one defect image from Class 1."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 614
},
"colab_type": "code",
"id": "EIOhZBuptAvu",
"outputId": "68def336-33cf-4a25-8893-848da6ddda42"
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fbf789ac710>"
]
},
"execution_count": 89,
"metadata": {
"tags": []
},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAksAAAJCCAYAAADQsoPKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAIABJREFUeJzsvWlspOeV3/urfa9irazivq9Nsjd2\ns9utBVbLbltjz1iyE9mOMQMkMDDABSZAgOAO8qUT4CYI8mGAJEiCJJMPg8GMrdgZyJmRJVmSJbW6\n1Su7m2Rz31ks1kbWvm/3g+Y5kYALx7gY504ueAB/aItk1fu+z3Oec/7n//+/mlarxWmcxmmcxmmc\nxmmcxmn8P4f2/+svcBqncRqncRqncRqn8bc5Toul0ziN0ziN0ziN0ziNXxGnxdJpnMZpnMZpnMZp\nnMaviNNi6TRO4zRO4zRO4zRO41fEabF0GqdxGqdxGqdxGqfxK+K0WDqN0ziN0ziN0ziN0/gV8Rsp\nljQazQ2NRrOm0Wg2NRrN//mb+IzTOI3TOI3TOI3TOI3/FaH5m/ZZ0mg0OmAdeBkIAw+A77ZareW/\n0Q86jdM4jdM4jdM4jdP4XxC/CWTpErDZarW2W61WFfgR8Nu/gc85jdM4jdM4jdM4jdP4jYf+N/A3\nO4GDz/07DFz+Vb9gMBhaLpeLarWK3+/HbDbTaDSIRCI0m00MBgMARqMRh8NBW1sbiUSCQqFAsVjE\nZrNhMBiw2+1oNBqSySTNZlP+jslkwufzAZDP5zk5OUGv11Ov12k0GthsNgAymQx+v596vU4ulwPA\nbrdTrVYxGo3o9Z/drmg0il6vx2q10mq15G9Uq1VKpZJ851wuh9frxWAwcHJygk6nw+VyAXBycoLf\n76dcLlOr1SiVShiNRnQ6HUajkWq1ikajQavVUqvVMBgMWCwWLBYLyWQSAJvNxsnJCTabDZ1OR6PR\noNlsotVqKZfLtFotDAYD5XIZj8dDsVikVqsBoBDFVquFw+GgXq/TbDbR6XTk83nMZrPc92q1Kv+t\nUCjgdDrJ5XJYrVYAGo0GdrudTCaDXq+n2WzSaDSwWCxotVoqlQo6nY5arUa9XsdqtVKpVLBarWg0\nGvkb6j6Xy2WcTifZbBaLxUKr1aJWq8n9L5fLOBwOeUZGoxEArVZLvV5Hr9eTy+XQ6XTodDo0Gg02\nm41KpUK1WkWr1aLT6ahWq3KNKur1OoDce71eL99ffU+LxSL3yWQyUS6XMZlMci3qu9RqNVqtFvV6\nHa1Wi8lkQqfTkcvlMJlMNJtNqtUqHo+HXC6HXq+X56PRaDCbzQDUajV0Op08I5vNRrlcxmAwUCqV\n0Gq1GAwGtFotjUYDrVYr11Kv1+UZNxoNisUier2earWKyWSSNZfJZDAajWi1WrRarVxLoVDA7XaT\ny+VoNBq0tbVRq9WoVqtYLBYKhQIGgwGNRkOz2cRoNFIsFmk2mzgcDgCKxaLso0qlgsFgoF6vY7fb\nKRQKaDQayuUyWq0Wi8Ui16TX62Wdqv1cq9VoNBrU63X5/vl8Hq1WS7PZBMDpdFKtVikWi2g0Glk3\nTqeTQqEga7nRaKDRaNBoNLL2KpUKer0eg8FAoVCgra2NZrNJqVSS/aLumXpW6lmoZ9psNqnVaphM\nJsxmMzqdjnK5TKVSkfWqflZdv06nk2uoVqsqL8qaVutA5Sr170ajIblA3UP1zE0mE1arlVKpRC6X\nQ6vVYrPZ5H7k83n0ej1Go1FyViqVkrzWarXQarWUSiWcTqfsUYfDIfdDrYNisSjPzGg0YjQayWQy\nNBoNarUaXq8XQPasygvquWQyGSwWC41Gg1arJfc9lUrJenS5XJRKJbk/Wq1WclC5XMZqtX7h/plM\nJlnXer0ejUYj1+B0Or+w3wEqlQpms5lKpSJr0Ol0YjKZqNfrZLNZWq0WOp1OnkOhUJD9pj5TrReV\newD5fJXnTSYThUJB9rpas+qeVioV7HY7BoMBg8FAOp2WdabX62X9VatV9Hq9fEcV1WoVnU73hXVm\nt9spFouSh0qlkuzdz+dXlU/1ej35fJ5Wq4Ver0er1Up+LpVKmEwmGo0GXq+XbDYrf1uj0WCxWADk\nM9R5pPZOqVSSz9Pr9TgcDlKplHyfRqMh+SOdTksu+PzeVPlR3YN6vY7BYJA18PkcWigUcDgccs9U\nTnG5XEQikWSr1fLzP4nfRLH0a4VGo/kh8EP4rCD57ne/y+joKK+++ir/+l//a7LZLE+fPsXj8XDl\nyhUAdDodP/jBD/jkk0+Yn58nm82yubmJ3W7nG9/4Bt3d3fy7f/fvaG9vR6/X09vbi8vlor+/n3g8\nDsAbb7zB9PQ0TqeT9fV1JicnGR4e5l/8i3/B97//fer1OvPz8+TzeXp6eigUCkQiEb7+9a/zi1/8\nAoDu7m7eeustzp8/j06n4x/9o3/Em2++ydraGktLS1y5coVcLkcwGKRSqbCwsMDzzz9PJBLh4sWL\nwP9IMisrK+j1evr6+kin01y6dIlarcb29jaVSoUHDx5w+fJlZmdnabVa3L9/nwsXLgCwtbVFNpuV\ngykej/O7v/u77O/v88Ybb3B4eMjg4CBms5nLly/zJ3/yJwwMDFAsFmUxu91udDod29vb6PV6jo6O\n6OnpkQKo1Wrh8/n4y7/8S0ZGRggGg1y5coU//dM/JRAIAODxeNjf38fn87G1tYXX62VjY4PXXnuN\nlZUVms0m+XyefD6Px+Nhbm6ODz74gNnZWcLhMIBsvLGxMT788EMcDgfRaJRms0m5XMZsNjMzM0M8\nHpfiMZvNotVqmZ6eBmBhYYFarUYymeSFF17g5OSEcrlMR0cHiUQCh8NBJBLhhRdeYGFhga6uLjY3\nNwkGg+zu7gJw8eJFfv7zn+NwOJiZmaFarZJOp+Vg9Hg8eL1ePv30U77yla/wySefcOnSJTm04LPE\nHQqFsFgs5HI5nE4njx8/5uWXX+bhw4dYrVZMJhO3b9/m2rVrBINB7t27x5e//GW5HzabjWazyc7O\nDn19fXJojIyMkM1micfjGAwGDg4OuHbtGkajkb29PbxeL/v7+wBYrVYODw9xOBx4PB5pKu7du0ex\nWGR2dpZSqcSTJ08IBAI4nU7K5TLd3d0cHh4CkM1mOXPmDJFIBJfLRU9PD+vr6wwMDBCJRGhra6Nc\nLlMul6nX65w7d45YLMbe3p4k787OTqanp6lUKuzu7hKPx3G73czNzfHTn/4Ul8vF4uIik5OT3Lhx\ngwcPHhCLxdjY2GB0dBSAq1evsry8jMPhYH9/n0QiwZkzZ6hWq2xvb1MulxkdHSUajfLqq6/yox/9\niOHhYR4/fszZs2cB6Ojo4Pbt20xMTKDX69nc3GRwcJCHDx8yNTVFrVbD4XDQarU4Pj4mm83icDg4\nOjqS/bKwsMBv//Zv09PTw8LCAuVymWq
1Si6XY3Z2lkKhwAcffIDP52NwcJCzZ8/SbDZ57733yOfz\nwGcFQ39/P8FgkJOTE7a3t3G5XIyOjnJycsLGxgZ2ux273U4gEMBkMvFXf/VXXLlyhWAwCHx2yC8v\nL2M0GhkaGpIDPplMsrW1xcsvv0xbWxuhUIhsNssHH3yAw+HAZDJJoWM0Gjk8POTMmTP09PSwu7vL\ngwcPcDgceL1eNBqNFDQDAwPAZ4dOOp2mWCwC4PV66evr49GjR5TLZY6Pj5meniYYDJJOp4lEIhwf\nHzM0NMT09DTLy8vs7u6Sz+cJhULyt1dWVjg5OaFardLb20tPTw9Wq5X3338fq9VKMBikv7+fzc1N\njo6O0Ov1tLe3y/c6PDzk5OSE7u5uKYaazSb37t2jr6+PZDKJw+Hg4OAAs9nM7OwsXq+Xw8NDdnZ2\nAHC5XHJIl0ol8vk8zz33HHa7nXQ6TT6f5+nTpzSbTUZHR6Ux3NjYkAO4ra2N9vZ2nE4nT58+lWtR\n1xGPx7l27RqJRIJ8Ps/
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"img = mpimg.imread('./data/raw_images/public/Class1_def/1.png')\n",
"\n",
"plt.figure(figsize = (10,10))\n",
"plt.imshow(img, cmap='gray')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Z9zzrzavxLUR"
},
"source": [
"As we can see in this figure, there exists a defective area in the top left corner. We will now load the model and carry out inference on the normalized test image."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "iKGu4mpztAv8"
},
"outputs": [],
"source": [
"# Image preprocessing\n",
"img = np.expand_dims(img, axis=2)\n",
"img = np.expand_dims(img, axis=0)\n",
"img = (img-0.5)/0.5"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "XwsDthGwtAwB",
"outputId": "56172862-e212-4893-9509-981657a41c6d"
},
"outputs": [],
"source": [
"config = tf.ConfigProto()\n",
"config.gpu_options.allow_growth = True\n",
"config.allow_soft_placement = True\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" network = UNet_v1(\n",
" model_name=\"UNet_v1\",\n",
" input_format='NHWC',\n",
" compute_format='NHWC',\n",
" n_output_channels=1,\n",
" unet_variant='tinyUNet',\n",
" weight_init_method='he_uniform',\n",
" activation_fn='relu'\n",
" )\n",
" \n",
" tf_input = tf.placeholder(tf.float32, [None, 512, 512, 1], name='input')\n",
" \n",
" outputs, logits = network.build_model(tf_input)\n",
" saver = tf.train.Saver()\n",
"\n",
" # Restore variables from disk.\n",
" saver.restore(sess, \"JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500\")\n",
" \n",
" \n",
" output = sess.run([outputs, logits], feed_dict={tf_input: img})\n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 614
},
"colab_type": "code",
"id": "2vGBGRBBtAwG",
"outputId": "853e4c91-b890-4129-9e8a-c729e12fc793"
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fbf76d59278>"
]
},
"execution_count": 92,
"metadata": {
"tags": []
},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAksAAAJCCAYAAADQsoPKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHE9JREFUeJzt3W+MXXd95/HP1zO2Y0ISJxBciFOc\nVcMihBaDIjYVPKBUrZIWNTxAiKorUhRhVepKqdpVCX1SFW0fwINCo67opiVqWvUfok0TUdQlCmy3\neQBNUqckENi4NG7shjiBxHEIiePxbx/McTpk6W/G9tw5d+59vaTR3HPu8b3f8YHxO+ece2+11gIA\nwA+2ZewBAACmmVgCAOgQSwAAHWIJAKBDLAEAdIglAICOicRSVV1VVd+oqgNVdcMkngMAYCPUer/P\nUlUtJPm/SX4iyaEkdyf52dba19b1iQAANsAkjiy9NcmB1to3W2vHk/xZkmsm8DwAABO3OIHHvCTJ\nIyuWDyX5z70/UFXeRhwA2GhPtNYuXm2jScTSmlTVviT7xnp+AGDuHVzLRpOIpcNJLl2xvHtY931a\nazcluSlxZAkAmF6TuGbp7iSXV9VlVbUtyfuS3D6B5wEAmLh1P7LUWjtRVf81yf9KspDk5tbaV9f7\neQAANsK6v3XAGQ3hNBwAsPHuba1dsdpG3sEbAKBDLAEAdIglAIAOsQQA0CGWAAA6xBIAQIdYAgDo\nEEsAAB1iCQCgQywBAHSIJQCADrEEANAhlgAAOsQSAECHWAIA6BBLAAAdYgkAoEMsAQB0iCUAgA6x\nBADQIZYAADrEEgBAh1gCAOgQSwAAHWIJAKBDLAEAdIglAIAOsQQA0CGWAAA6xFLH3/7t36a1tuav\nD3/4w2OPDACss2qtjT1Dqmr8IQZve9vbctddd63LY+3YsSPPPffcujwWALDu7m2tXbHaRo4sDb70\npS+ltbZuoZQk3/ve99Jay/79+9ftMQGAjTX3R5a2bt2a48ePb9jzvfnNb06S3HfffRv2nADAD+TI\n0mpuueWWDQ2lJNm/f3/279//4nVOF1100YY+PwBwehbHHmAsR44cycUXXzz2GPn2t7+dJGmtZXFx\nMSdPnhx5IgBgpbk8svQLv/ALUxFKK1VVlpaW0lrL0tJSFhfntmMBYKrM3TVLG32N0tl6/vnnc845\n54w9BgDMItcsAQCcrbmLpc10VClJtm/f/uLF4ADAxpubC2O++c1vjj3CWWutOS0HABtsbq5Zmoaf\nc71dfPHFeeKJJ8YeAwA2K9csnTKLoZQkjz/+uLcaAIAJm4tYmmVVNbMxCADTYOZjaV5CorWWhYWF\nsccAgJkz87E0T06cOJHf/d3fHXsMAJgpM32B99LSUrZsmc8erKqxRwCAaecC73kNpWT5tNxHPvKR\nsccAgE1vZo8sbd++Pc8999x6P+ym5CgTAPxA831k6dixY2OPMDVaa7ntttvGHgMANqWZPbI0DT/X\nNNq+ffum+8gXAJiQ+T6yxA/2/PPPZ2lpaewxAGDTmMlY2rt379gjTLUtW7a8+OG8N99889jjAMBU\nm8nTcMePH8/WrVvX8yFn3mWXXZaHH3547DEAYCPN72k4oXT6/vmf/9nnzAHADzCTsQQAsF7EEi86\n9aG8N95449ijAMDUmMlrlqbhZ9rsjh8/nu3bt489BgBM0vxes8TZ27Ztm+gEgIglVtFay8tf/vKx\nxwCA0YglVnXs2DHBBMDcmrlYWlxcHHuEmXTs2DFvyQDAXJq5WHr9618/9ggzy2fKATCPZi6WPvSh\nD409wkxz0TcA82bmYulNb3rT2CPMPMEEwDyZuVg699xzxx5hLggmAObFzMXSc889N/YIc+Po0aM5\nevTo2GMAwETN3EvHFhYWxh5hbpx//vlJkvPOOy/Hjh0beRoAmAxHljhrTz/99NgjAMDEzFwsHTx4\ncOwR5tLVV1899ggAMBEzF0u/93u/N/YIc+lzn/vc2CMAwETUNLyqqarWbYht27bl+eefX6+H4zR8\n73vfy8te9rKxxwCAtbq3tXbFahvN3JElAID1NHOx5CM5xrNjx4687nWvG3sMAFhXMxdLjOsb3/jG\n2CMAwLoSS6y7kydPjj0CAKwbscS6q6rs3bt37DEAYF3M3KvhkuUjG1W1ng/JGbAPAJhy8/tquEOH\nDo09Akn++q//euwRAOCszeSRpVe+8pV5/PHH1/MhOUOOLgEwxeb3yNITTzwx9ggMTpw4MfYIAHBW\nZjKWmB4LCwtjjwAAZ2VmY+mFF14YewQG999//9gjAMAZm9lYuvDCC8cegcEb3/jGsUcAgDM2s7H0\n3e9+d+wRWMHpOAA2q5mNpcQ7SU+TZ599duwRAOCMzHQsbdu2bewRGNgXAGxWMx1LS0tLY4/ACh/7\n2MfGHgEATttMvinlSk899VQuuOCCST08p8mbVAIwReb3TSkBANbLzMfSzp07xx6BFW699daxRwCA\n0zLzp+GSZBp+Rv6NU3EATAmn4U554IEHxh6BFXbs2DH2CACwZnNxZClxdGmaLC0tZXFxcewxAMCR\nJaaTd/MGYDOZm1jasmVLtmyZmx936jmyBMBmMTf10FpzKm6KHD16dOwRAGBN5iaWTtm9e/fYI5Dk\nZS972dgjAMCarBpLVXVzVR2pqgdWrLuoqu6oqoeG7xcO66uqbqyqA1X1lap6yySHPxOHDx8eewQA\nYBNZy5GlP0hy1UvW3ZDkztba5UnuHJaT5Ooklw9f+5J8cn3GXF+f+MQnxh6BJHv37h17BABY1Zre\nOqCq9iT5bGvtjcPyN5K8o7X2aFW9Osn/bq39x6r6n8PtP33pdqs8/oZfTOT6pfEdOnQol1566dhj\nADC/JvrWAbtWBNC3kuwabl+S5JEV2x0a1k2d22+/fewR5t5rXvOasUcAgFWd9eu3W2vtTI4MVdW+\nLJ+qG8U111zj6NLIvJUDAJvBmf5r9dhw+i3D9yPD+sNJVp5X2T2s+/+01m5qrV2xlsNfk/KqV71q\nrKcGADaJM42l25NcO9y+NsltK9a/f3hV3JVJjq52vdKYHn/88Rw8eHDsMQCAKbbqBd5V9adJ3pHk\nlUkeS/LrSf4qyaeT/HCSg0ne21r7Ti1/nPzvZPnVc88m+UBr7Z5VhxjhAu+VTp48meXR2Wj+3gEY\n0Zou8J6bD9LtEUvj8fcOwIh8kO5audAYAPj3qISBIxwbb2lpaewRAGBVYmkFwbSxHnzwwbFHAIBV\niaWXEEwb54Mf/ODYIwDAqlzg/e+Yhr+XWbe4uOhUHABjcoH32aiqPPvss2OPMdOEEgCbgVjqOPfc\nc/PzP//zY48xk06ePDn2CACwJk7DrdE0/D3Nkl27duXIkSOrbwgAk+M03Hqqqtx111256667xh5l\nJgglADYLR5bO0DT8vW1WLuwGYEo4sjR
JVZXzzz9/7DE2nbvvvlsoAbCpOLK0Tny+3Opaaz5aBoBp\n4sjSRtqyZYsQWIW/HwA2I/96raPWWqoqCwsLrml6ia1bt449AgCcEbE0ASdPnnzxSJPrc5LXvOY1\nOXHixNhjAMAZEUsT1FrL4uJiqipPP/302OOM4sILL8yjjz469hgAcMbEEgBAx+LYA8yLCy64IEly\n/PjxJPNxDY9XBwIwCxxZ2mDbtm3Ltm3bUlV55plnxh5nIp588kmhBMDMEEsjOu+881JVueyyy8Ye\nZV2ceq+piy66aOxRAGDdiKUp8PDDD6eqUlX5/Oc/P/Y4Z2RhYSELCwtjjwEA6847eE+pnTt35skn\nnxx7jH/XyZMns7i4fMnbNPxvCADOgHfw3syeeuqpF482/cu//MvY47zooYce+r433hRKAMw6r4bb\nBF772tcmWT7V9cwzz+Scc87Z0OdfWlrKtm3bcvLkyQ19XgCYBo4sbSJLS0vZsWPHi0ecqio/9EM/\nlCNHjqz7cx08ePDFV+0
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"# Print out model predicted mask\n",
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(output[0]), cmap='gray')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "BPs_nyzcyAxo"
},
"source": [
"As expected, the model points out the correct defective area in this image. Please feel free to try out other defective images within `./data/raw_images/public/Class1_def/`"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 335
},
"colab_type": "code",
"id": "HRQiqCSMAOZS",
"outputId": "49cf33cd-4699-41ac-baa3-2ff8c41d77ed"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"100.png 116.png 131.png 147.png 26.png 41.png 57.png 72.png 88.png\n",
"101.png 117.png 132.png 148.png 27.png 42.png 58.png 73.png 89.png\n",
"102.png 118.png 133.png 149.png 28.png 43.png 59.png 74.png 8.png\n",
"103.png 119.png 134.png 14.png 29.png 44.png 5.png 75.png 90.png\n",
"104.png 11.png 135.png 150.png 2.png 45.png 60.png 76.png 91.png\n",
"105.png 120.png 136.png 15.png 30.png 46.png 61.png 77.png 92.png\n",
"106.png 121.png 137.png 16.png 31.png 47.png 62.png 78.png 93.png\n",
"107.png 122.png 138.png 17.png 32.png 48.png 63.png 79.png 94.png\n",
"108.png 123.png 139.png 18.png 33.png 49.png 64.png 7.png 95.png\n",
"109.png 124.png 13.png 19.png 34.png 4.png 65.png 80.png 96.png\n",
"10.png\t 125.png 140.png 1.png 35.png 50.png 66.png 81.png 97.png\n",
"110.png 126.png 141.png 20.png 36.png 51.png 67.png 82.png 98.png\n",
"111.png 127.png 142.png 21.png 37.png 52.png 68.png 83.png 99.png\n",
"112.png 128.png 143.png 22.png 38.png 53.png 69.png 84.png 9.png\n",
"113.png 129.png 144.png 23.png 39.png 54.png 6.png 85.png labels.txt\n",
"114.png 12.png 145.png 24.png 3.png 55.png 70.png 86.png\n",
"115.png 130.png 146.png 25.png 40.png 56.png 71.png 87.png\n"
]
}
],
"source": [
"!ls ./data/raw_images/public/Class1_def/"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "yBeZjO4JtAwL"
},
"source": [
"# Optimize model and inference with TF-TRT\n",
"\n",
"In this section, instead of doing inference with the naitive TensorFlow environment, we will first optimize the model with TF-TRT, then doing inference.\n",
"\n",
"We first need to install NVIDIA TensorRT 5.0 runtime environment on Colab."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 598
},
"colab_type": "code",
"id": "3WA9N43UTq_c",
"outputId": "ee1eaaef-3eb7-4f5a-d690-66887fb7b429"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(Reading database ... 131335 files and directories currently installed.)\n",
"Preparing to unpack nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb ...\n",
"Unpacking nvidia-machine-learning-repo-ubuntu1804 (1.0.0-1) over (1.0.0-1) ...\n",
"Setting up nvidia-machine-learning-repo-ubuntu1804 (1.0.0-1) ...\n",
"Ign:1 http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease\n",
"Hit:2 http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release\n",
"Ign:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease\n",
"Hit:4 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release\n",
"Get:6 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]\n",
"Hit:7 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease\n",
"Hit:9 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease\n",
"Hit:10 http://archive.ubuntu.com/ubuntu bionic InRelease\n",
"Hit:11 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic InRelease\n",
"Get:12 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]\n",
"Get:13 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]\n",
"Fetched 252 kB in 3s (81.0 kB/s)\n",
"Reading package lists...\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"--2019-09-27 04:36:43-- https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb\n",
"Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 192.229.232.112, 2606:2800:247:2063:46e:21d:825:102e\n",
"Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|192.229.232.112|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 2926 (2.9K) [application/x-deb]\n",
"Saving to: nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb.3\n",
"\n",
" 0K .. 100% 94.3M=0s\n",
"\n",
"2019-09-27 04:36:43 (94.3 MB/s) - nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb.3 saved [2926/2926]\n",
"\n",
"W: Target Packages (Packages) is configured multiple times in /etc/apt/sources.list.d/nvidia-machine-learning.list:1 and /etc/apt/sources.list.d/nvidia-ml.list:1\n",
"W: Target Packages (Packages) is configured multiple times in /etc/apt/sources.list.d/nvidia-machine-learning.list:1 and /etc/apt/sources.list.d/nvidia-ml.list:1\n"
]
}
],
"source": [
"%%bash\n",
"wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb\n",
"\n",
"dpkg -i nvidia-machine-learning-repo-*.deb\n",
"apt-get update"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 318
},
"colab_type": "code",
"id": "BV6JkgDyUD_Q",
"outputId": "cdcb211f-daf4-4d23-ca95-21349b303405"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Reading package lists... Done\n",
"Building dependency tree \n",
"Reading state information... Done\n",
"libnvinfer5 is already the newest version (5.1.5-1+cuda10.1).\n",
"0 upgraded, 0 newly installed, 0 to remove and 125 not upgraded.\n",
"W: Target Packages (Packages) is configured multiple times in /etc/apt/sources.list.d/nvidia-machine-learning.list:1 and /etc/apt/sources.list.d/nvidia-ml.list:1\n",
"ii libnvinfer-dev 6.0.1-1+cuda10.1 amd64 TensorRT development libraries and headers\n",
"ii libnvinfer-plugin-dev 6.0.1-1+cuda10.1 amd64 TensorRT plugin libraries\n",
"ii libnvinfer-plugin6 6.0.1-1+cuda10.1 amd64 TensorRT plugin libraries\n",
"ii libnvinfer5 5.1.5-1+cuda10.1 amd64 TensorRT runtime libraries\n",
"ii libnvinfer6 6.0.1-1+cuda10.1 amd64 TensorRT runtime libraries\n",
"ii libnvonnxparsers-dev 6.0.1-1+cuda10.1 amd64 TensorRT ONNX libraries\n",
"ii libnvonnxparsers6 6.0.1-1+cuda10.1 amd64 TensorRT ONNX libraries\n",
"ii libnvparsers-dev 6.0.1-1+cuda10.1 amd64 TensorRT parsers libraries\n",
"ii libnvparsers6 6.0.1-1+cuda10.1 amd64 TensorRT parsers libraries\n"
]
}
],
"source": [
"!sudo apt-get install libnvinfer5\n",
"!dpkg -l | grep TensorRT"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "XRMZiFjdUPCZ"
},
"source": [
"A successful TensorRT installation should look like:\n",
"\n",
"```\n",
"ii libnvinfer5 5.1.5-1+cuda10.1 amd64 TensorRT runtime libraries\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "UGYe4yTyUN0x"
},
"source": [
"Next, we are ready to optimize the model for inference with TF-TRT. This is carried out in the following steps:\n",
"\n",
"- First, we convert the model checkpoint to a frozen graph that is more relevant for deployment.\n",
"\n",
"- Next, we employ TF-TRT to optimize and convert the frozen graph into a TF-TRT graph. This graph can be employed for inference within the TensorFlow environment, but the underlying runting will be TensorRT. \n",
"\n",
"**Precision mode:** The model that TF-TRT optimizes can have the graph or parameters stored in float32 (FP32) or float16 (FP16). Regardless of the datatype of the model, TensorRT can convert tensors and weights to lower precisions during the optimization. The argument `precision_mode` sets the precision mode; which can be one of FP32, FP16, or INT8\n",
"\n",
"## FP32 Inference"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "P9T2hwS_tAwM",
"outputId": "99c7d6d3-112d-40a1-bc59-9c18c4dad6a9"
},
"outputs": [],
"source": [
"from tensorflow.python.compiler.tensorrt import trt_convert as trt\n",
"\n",
"config = tf.ConfigProto()\n",
"config.gpu_options.allow_growth=True\n",
"\n",
"SAVED_MODEL_DIR = './TR-TRT-model-FP32'\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" network = UNet_v1(\n",
" model_name=\"UNet_v1\",\n",
" input_format='NHWC',\n",
" compute_format='NHWC',\n",
" n_output_channels=1,\n",
" unet_variant='tinyUNet',\n",
" weight_init_method='he_uniform',\n",
" activation_fn='relu'\n",
" )\n",
" \n",
" tf_input = tf.placeholder(tf.float32, [None, 512, 512, 1], name='input')\n",
" \n",
" outputs, logits = network.build_model(tf_input)\n",
" \n",
" #print output nodes names\n",
" print(outputs)\n",
" print(logits)\n",
" \n",
" saver = tf.train.Saver()\n",
"\n",
" # Restore variables from disk.\n",
" saver.restore(sess, \"JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500\")\n",
" \n",
" # Freeze the graph:\n",
" frozen_graph = tf.graph_util.convert_variables_to_constants(sess,\n",
" tf.get_default_graph().as_graph_def(),\n",
" output_node_names=['UNet_v1/sigmoid', \n",
" 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'])\n",
"\n",
" # Now you can create a TensorRT inference graph from your frozen graph:\n",
" converter = trt.TrtGraphConverter(input_graph_def=frozen_graph,\n",
" nodes_blacklist=['UNet_v1/sigmoid', 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'],\n",
" precision_mode='FP32' ) #output nodes\n",
" trt_graph = converter.convert()\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "X7JIKRHNBJ3M"
},
"source": [
"After this step, the TF-TRT optimized model is stored in `trt_graph`. Next, we carry out inference using this graph. We will also save the TF-TRT graph into a save model which is ready for deployment later elsewhere."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 111
},
"colab_type": "code",
"id": "dPLstDqLBGVp",
"outputId": "64f7dd39-540b-4d30-ad9a-75afa85b19d0"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rm: cannot remove './TR-TRT-model-FP32': No such file or directory\n",
"Saving model to ./TR-TRT-model-FP32\n",
"INFO:tensorflow:Assets added to graph.\n",
"INFO:tensorflow:No assets to write.\n",
"INFO:tensorflow:SavedModel written to: ./TR-TRT-model-FP32/saved_model.pb\n"
]
}
],
"source": [
"!rm -r $SAVED_MODEL_DIR\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" # Import the TensorRT graph into a new graph and run:\n",
" output_node = tf.import_graph_def(trt_graph, return_elements=['UNet_v1/sigmoid', 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'], name=\"\")\n",
" \n",
" output = sess.run([\"UNet_v1/sigmoid:0\"], feed_dict={\"input:0\": img})\n",
"\n",
" #Optionally, save model for serving if an ouput directory argument is presented\n",
" if SAVED_MODEL_DIR:\n",
" print('Saving model to %s'%SAVED_MODEL_DIR)\n",
" tf.saved_model.simple_save(\n",
" session=sess,\n",
" export_dir=SAVED_MODEL_DIR,\n",
" inputs={\"input\":tf.get_default_graph().get_tensor_by_name(\"input:0\")},\n",
" outputs={\"mask\":tf.get_default_graph().get_tensor_by_name(\"UNet_v1/sigmoid:0\")},\n",
" legacy_init_op=None\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 614
},
"colab_type": "code",
"id": "yKNEbyhktAwW",
"outputId": "e54780a5-5104-4c1e-e286-76ef1147a47f"
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fbf76c73d30>"
]
},
"execution_count": 109,
"metadata": {
"tags": []
},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAksAAAJCCAYAAADQsoPKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHE9JREFUeJzt3W+MXXd95/HP1zO2Y0ISJxBciFOc\nVcMihBaDIjYVPKBUrZIWNTxAiKorUhRhVepKqdpVCX1SFW0fwINCo67opiVqWvUfok0TUdQlCmy3\neQBNUqckENi4NG7shjiBxHEIiePxbx/McTpk6W/G9tw5d+59vaTR3HPu8b3f8YHxO+ece2+11gIA\nwA+2ZewBAACmmVgCAOgQSwAAHWIJAKBDLAEAdIglAICOicRSVV1VVd+oqgNVdcMkngMAYCPUer/P\nUlUtJPm/SX4iyaEkdyf52dba19b1iQAANsAkjiy9NcmB1to3W2vHk/xZkmsm8DwAABO3OIHHvCTJ\nIyuWDyX5z70/UFXeRhwA2GhPtNYuXm2jScTSmlTVviT7xnp+AGDuHVzLRpOIpcNJLl2xvHtY931a\nazcluSlxZAkAmF6TuGbp7iSXV9VlVbUtyfuS3D6B5wEAmLh1P7LUWjtRVf81yf9KspDk5tbaV9f7\neQAANsK6v3XAGQ3hNBwAsPHuba1dsdpG3sEbAKBDLAEAdIglAIAOsQQA0CGWAAA6xBIAQIdYAgDo\nEEsAAB1iCQCgQywBAHSIJQCADrEEANAhlgAAOsQSAECHWAIA6BBLAAAdYgkAoEMsAQB0iCUAgA6x\nBADQIZYAADrEEgBAh1gCAOgQSwAAHWIJAKBDLAEAdIglAIAOsQQA0CGWAAA6xFLH3/7t36a1tuav\nD3/4w2OPDACss2qtjT1Dqmr8IQZve9vbctddd63LY+3YsSPPPffcujwWALDu7m2tXbHaRo4sDb70\npS+ltbZuoZQk3/ve99Jay/79+9ftMQGAjTX3R5a2bt2a48ePb9jzvfnNb06S3HfffRv2nADAD+TI\n0mpuueWWDQ2lJNm/f3/279//4nVOF1100YY+PwBwehbHHmAsR44cycUXXzz2GPn2t7+dJGmtZXFx\nMSdPnhx5IgBgpbk8svQLv/ALUxFKK1VVlpaW0lrL0tJSFhfntmMBYKrM3TVLG32N0tl6/vnnc845\n54w9BgDMItcsAQCcrbmLpc10VClJtm/f/uLF4ADAxpubC2O++c1vjj3CWWutOS0HABtsbq5Zmoaf\nc71dfPHFeeKJJ8YeAwA2K9csnTKLoZQkjz/+uLcaAIAJm4tYmmVVNbMxCADTYOZjaV5CorWWhYWF\nsccAgJkz87E0T06cOJHf/d3fHXsMAJgpM32B99LSUrZsmc8erKqxRwCAaecC73kNpWT5tNxHPvKR\nsccAgE1vZo8sbd++Pc8999x6P+ym5CgTAPxA831k6dixY2OPMDVaa7ntttvGHgMANqWZPbI0DT/X\nNNq+ffum+8gXAJiQ+T6yxA/2/PPPZ2lpaewxAGDTmMlY2rt379gjTLUtW7a8+OG8N99889jjAMBU\nm8nTcMePH8/WrVvX8yFn3mWXXZaHH3547DEAYCPN72k4oXT6/vmf/9nnzAHADzCTsQQAsF7EEi86\n9aG8N95449ijAMDUmMlrlqbhZ9rsjh8/nu3bt489BgBM0vxes8TZ27Ztm+gEgIglVtFay8tf/vKx\nxwCA0YglVnXs2DHBBMDcmrlYWlxcHHuEmXTs2DFvyQDAXJq5WHr9618/9ggzy2fKATCPZi6WPvSh\nD409wkxz0TcA82bmYulNb3rT2CPMPMEEwDyZuVg699xzxx5hLggmAObFzMXSc889N/YIc+Po0aM5\nevTo2GMAwETN3EvHFhYWxh5hbpx//vlJkvPOOy/Hjh0beRoAmAxHljhrTz/99NgjAMDEzFwsHTx4\ncOwR5tLVV1899ggAMBEzF0u/93u/N/YIc+lzn/vc2CMAwETUNLyqqarWbYht27bl+eefX6+H4zR8\n73vfy8te9rKxxwCAtbq3tXbFahvN3JElAID1NHOx5CM5xrNjx4687nWvG3sMAFhXMxdLjOsb3/jG\n2CMAwLoSS6y7kydPjj0CAKwbscS6q6rs3bt37DEAYF3M3KvhkuUjG1W1ng/JGbAPAJhy8/tquEOH\nDo09Akn++q//euwRAOCszeSRpVe+8pV5/PHH1/MhOUOOLgEwxeb3yNITTzwx9ggMTpw4MfYIAHBW\nZjKWmB4LCwtjjwAAZ2VmY+mFF14YewQG999//9gjAMAZm9lYuvDCC8cegcEb3/jGsUcAgDM2s7H0\n3e9+d+wRWMHpOAA2q5mNpcQ7SU+TZ599duwRAOCMzHQsbdu2bewRGNgXAGxWMx1LS0tLY4/ACh/7\n2MfGHgEATttMvinlSk899VQuuOCCST08p8mbVAIwReb3TSkBANbLzMfSzp07xx6BFW699daxRwCA\n0zLzp+GSZBp+Rv6NU3EATAmn4U554IEHxh6BFXbs2DH2CACwZnNxZClxdGmaLC0tZXFxcewxAMCR\nJaaTd/MGYDOZm1jasmVLtmyZmx936jmyBMBmMTf10FpzKm6KHD16dOwRAGBN5iaWTtm9e/fYI5Dk\nZS972dgjAMCarBpLVXVzVR2pqgdWrLuoqu6oqoeG7xcO66uqbqyqA1X1lap6yySHPxOHDx8eewQA\nYBNZy5GlP0hy1UvW3ZDkztba5UnuHJaT5Ooklw9f+5J8cn3GXF+f+MQnxh6BJHv37h17BABY1Zre\nOqCq9iT5bGvtjcPyN5K8o7X2aFW9Osn/bq39x6r6n8PtP33pdqs8/oZfTOT6pfEdOnQol1566dhj\nADC/JvrWAbtWBNC3kuwabl+S5JEV2x0a1k2d22+/fewR5t5rXvOasUcAgFWd9eu3W2vtTI4MVdW+\nLJ+qG8U111zj6NLIvJUDAJvBmf5r9dhw+i3D9yPD+sNJVp5X2T2s+/+01m5qrV2xlsNfk/KqV71q\nrKcGADaJM42l25NcO9y+NsltK9a/f3hV3JVJjq52vdKYHn/88Rw8eHDsMQCAKbbqBd5V9adJ3pHk\nlUkeS/LrSf4qyaeT/HCSg0ne21r7Ti1/nPzvZPnVc88m+UBr7Z5VhxjhAu+VTp48meXR2Wj+3gEY\n0Zou8J6bD9LtEUvj8fcOwIh8kO5audAYAPj3qISBIxwbb2lpaewRAGBVYmkFwbSxHnzwwbFHAIBV\niaWXEEwb54Mf/ODYIwDAqlzg/e+Yhr+XWbe4uOhUHABjcoH32aiqPPvss2OPMdOEEgCbgVjqOPfc\nc/PzP//zY48xk06ePDn2CACwJk7DrdE0/D3Nkl27duXIkSOrbwgAk+M03Hqqqtx111256667xh5l\nJgglADYLR5bO0DT8vW1WLuwGYEo4sjR
JVZXzzz9/7DE2nbvvvlsoAbCpOLK0Tny+3Opaaz5aBoBp\n4sjSRtqyZYsQWIW/HwA2I/96raPWWqoqCwsLrml6ia1bt449AgCcEbE0ASdPnnzxSJPrc5LXvOY1\nOXHixNhjAMAZEUsT1FrL4uJiqipPP/302OOM4sILL8yjjz469hgAcMbEEgBAx+LYA8yLCy64IEly\n/PjxJPNxDY9XBwIwCxxZ2mDbtm3Ltm3bUlV55plnxh5nIp588kmhBMDMEEsjOu+881JVueyyy8Ye\nZV2ceq+piy66aOxRAGDdiKUp8PDDD6eqUlX5/Oc/P/Y4Z2RhYSELCwtjjwEA6847eE+pnTt35skn\nnxx7jH/XyZMns7i4fMnbNPxvCADOgHfw3syeeuqpF482/cu//MvY47zooYce+r433hRKAMw6r4bb\nBF772tcmWT7V9cwzz+Scc87Z0OdfWlrKtm3bcvLkyQ19XgCYBo4sbSJLS0vZsWPHi0ecqio/9EM/\nlCNHjqz7cx08ePDFV+0
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(output[0]), cmap='gray')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "yaTNBqZEEMUW"
},
"source": [
"Next, we load the saved TF-TRT model and carry out inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 93
},
"colab_type": "code",
"id": "ZKt61QEFgUaC",
"outputId": "a1e604aa-64e2-4958-d822-1da2842d215b"
},
"outputs": [],
"source": [
"# inference with save TF-TRT model\n",
"with tf.Session(graph=tf.Graph(), config=config) as sess:\n",
" tf.saved_model.loader.load(\n",
" sess, [tf.saved_model.tag_constants.SERVING], SAVED_MODEL_DIR)\n",
" nodes = [n.name for n in tf.get_default_graph().as_graph_def().node]\n",
" #print(nodes)\n",
" output = sess.run([\"UNet_v1/sigmoid:0\"], feed_dict={\"input:0\": img})"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 614
},
"colab_type": "code",
"id": "ruWzCVXUj2vq",
"outputId": "20478327-8a29-44bd-ab09-bacd2863af58"
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fbf7c780780>"
]
},
"execution_count": 112,
"metadata": {
"tags": []
},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAksAAAJCCAYAAADQsoPKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHE9JREFUeJzt3W+MXXd95/HP1zO2Y0ISJxBciFOc\nVcMihBaDIjYVPKBUrZIWNTxAiKorUhRhVepKqdpVCX1SFW0fwINCo67opiVqWvUfok0TUdQlCmy3\neQBNUqckENi4NG7shjiBxHEIiePxbx/McTpk6W/G9tw5d+59vaTR3HPu8b3f8YHxO+ece2+11gIA\nwA+2ZewBAACmmVgCAOgQSwAAHWIJAKBDLAEAdIglAICOicRSVV1VVd+oqgNVdcMkngMAYCPUer/P\nUlUtJPm/SX4iyaEkdyf52dba19b1iQAANsAkjiy9NcmB1to3W2vHk/xZkmsm8DwAABO3OIHHvCTJ\nIyuWDyX5z70/UFXeRhwA2GhPtNYuXm2jScTSmlTVviT7xnp+AGDuHVzLRpOIpcNJLl2xvHtY931a\nazcluSlxZAkAmF6TuGbp7iSXV9VlVbUtyfuS3D6B5wEAmLh1P7LUWjtRVf81yf9KspDk5tbaV9f7\neQAANsK6v3XAGQ3hNBwAsPHuba1dsdpG3sEbAKBDLAEAdIglAIAOsQQA0CGWAAA6xBIAQIdYAgDo\nEEsAAB1iCQCgQywBAHSIJQCADrEEANAhlgAAOsQSAECHWAIA6BBLAAAdYgkAoEMsAQB0iCUAgA6x\nBADQIZYAADrEEgBAh1gCAOgQSwAAHWIJAKBDLAEAdIglAIAOsQQA0CGWAAA6xFLH3/7t36a1tuav\nD3/4w2OPDACss2qtjT1Dqmr8IQZve9vbctddd63LY+3YsSPPPffcujwWALDu7m2tXbHaRo4sDb70\npS+ltbZuoZQk3/ve99Jay/79+9ftMQGAjTX3R5a2bt2a48ePb9jzvfnNb06S3HfffRv2nADAD+TI\n0mpuueWWDQ2lJNm/f3/279//4nVOF1100YY+PwBwehbHHmAsR44cycUXXzz2GPn2t7+dJGmtZXFx\nMSdPnhx5IgBgpbk8svQLv/ALUxFKK1VVlpaW0lrL0tJSFhfntmMBYKrM3TVLG32N0tl6/vnnc845\n54w9BgDMItcsAQCcrbmLpc10VClJtm/f/uLF4ADAxpubC2O++c1vjj3CWWutOS0HABtsbq5Zmoaf\nc71dfPHFeeKJJ8YeAwA2K9csnTKLoZQkjz/+uLcaAIAJm4tYmmVVNbMxCADTYOZjaV5CorWWhYWF\nsccAgJkz87E0T06cOJHf/d3fHXsMAJgpM32B99LSUrZsmc8erKqxRwCAaecC73kNpWT5tNxHPvKR\nsccAgE1vZo8sbd++Pc8999x6P+ym5CgTAPxA831k6dixY2OPMDVaa7ntttvGHgMANqWZPbI0DT/X\nNNq+ffum+8gXAJiQ+T6yxA/2/PPPZ2lpaewxAGDTmMlY2rt379gjTLUtW7a8+OG8N99889jjAMBU\nm8nTcMePH8/WrVvX8yFn3mWXXZaHH3547DEAYCPN72k4oXT6/vmf/9nnzAHADzCTsQQAsF7EEi86\n9aG8N95449ijAMDUmMlrlqbhZ9rsjh8/nu3bt489BgBM0vxes8TZ27Ztm+gEgIglVtFay8tf/vKx\nxwCA0YglVnXs2DHBBMDcmrlYWlxcHHuEmXTs2DFvyQDAXJq5WHr9618/9ggzy2fKATCPZi6WPvSh\nD409wkxz0TcA82bmYulNb3rT2CPMPMEEwDyZuVg699xzxx5hLggmAObFzMXSc889N/YIc+Po0aM5\nevTo2GMAwETN3EvHFhYWxh5hbpx//vlJkvPOOy/Hjh0beRoAmAxHljhrTz/99NgjAMDEzFwsHTx4\ncOwR5tLVV1899ggAMBEzF0u/93u/N/YIc+lzn/vc2CMAwETUNLyqqarWbYht27bl+eefX6+H4zR8\n73vfy8te9rKxxwCAtbq3tXbFahvN3JElAID1NHOx5CM5xrNjx4687nWvG3sMAFhXMxdLjOsb3/jG\n2CMAwLoSS6y7kydPjj0CAKwbscS6q6rs3bt37DEAYF3M3KvhkuUjG1W1ng/JGbAPAJhy8/tquEOH\nDo09Akn++q//euwRAOCszeSRpVe+8pV5/PHH1/MhOUOOLgEwxeb3yNITTzwx9ggMTpw4MfYIAHBW\nZjKWmB4LCwtjjwAAZ2VmY+mFF14YewQG999//9gjAMAZm9lYuvDCC8cegcEb3/jGsUcAgDM2s7H0\n3e9+d+wRWMHpOAA2q5mNpcQ7SU+TZ599duwRAOCMzHQsbdu2bewRGNgXAGxWMx1LS0tLY4/ACh/7\n2MfGHgEATttMvinlSk899VQuuOCCST08p8mbVAIwReb3TSkBANbLzMfSzp07xx6BFW699daxRwCA\n0zLzp+GSZBp+Rv6NU3EATAmn4U554IEHxh6BFXbs2DH2CACwZnNxZClxdGmaLC0tZXFxcewxAMCR\nJaaTd/MGYDOZm1jasmVLtmyZmx936jmyBMBmMTf10FpzKm6KHD16dOwRAGBN5iaWTtm9e/fYI5Dk\nZS972dgjAMCarBpLVXVzVR2pqgdWrLuoqu6oqoeG7xcO66uqbqyqA1X1lap6yySHPxOHDx8eewQA\nYBNZy5GlP0hy1UvW3ZDkztba5UnuHJaT5Ooklw9f+5J8cn3GXF+f+MQnxh6BJHv37h17BABY1Zre\nOqCq9iT5bGvtjcPyN5K8o7X2aFW9Osn/bq39x6r6n8PtP33pdqs8/oZfTOT6pfEdOnQol1566dhj\nADC/JvrWAbtWBNC3kuwabl+S5JEV2x0a1k2d22+/fewR5t5rXvOasUcAgFWd9eu3W2vtTI4MVdW+\nLJ+qG8U111zj6NLIvJUDAJvBmf5r9dhw+i3D9yPD+sNJVp5X2T2s+/+01m5qrV2xlsNfk/KqV71q\nrKcGADaJM42l25NcO9y+NsltK9a/f3hV3JVJjq52vdKYHn/88Rw8eHDsMQCAKbbqBd5V9adJ3pHk\nlUkeS/LrSf4qyaeT/HCSg0ne21r7Ti1/nPzvZPnVc88m+UBr7Z5VhxjhAu+VTp48meXR2Wj+3gEY\n0Zou8J6bD9LtEUvj8fcOwIh8kO5audAYAPj3qISBIxwbb2lpaewRAGBVYmkFwbSxHnzwwbFHAIBV\niaWXEEwb54Mf/ODYIwDAqlzg/e+Yhr+XWbe4uOhUHABjcoH32aiqPPvss2OPMdOEEgCbgVjqOPfc\nc/PzP//zY48xk06ePDn2CACwJk7DrdE0/D3Nkl27duXIkSOrbwgAk+M03Hqqqtx111256667xh5l\nJgglADYLR5bO0DT8vW1WLuwGYEo4sjR
JVZXzzz9/7DE2nbvvvlsoAbCpOLK0Tny+3Opaaz5aBoBp\n4sjSRtqyZYsQWIW/HwA2I/96raPWWqoqCwsLrml6ia1bt449AgCcEbE0ASdPnnzxSJPrc5LXvOY1\nOXHixNhjAMAZEUsT1FrL4uJiqipPP/302OOM4sILL8yjjz469hgAcMbEEgBAx+LYA8yLCy64IEly\n/PjxJPNxDY9XBwIwCxxZ2mDbtm3Ltm3bUlV55plnxh5nIp588kmhBMDMEEsjOu+881JVueyyy8Ye\nZV2ceq+piy66aOxRAGDdiKUp8PDDD6eqUlX5/Oc/P/Y4Z2RhYSELCwtjjwEA6847eE+pnTt35skn\nnxx7jH/XyZMns7i4fMnbNPxvCADOgHfw3syeeuqpF482/cu//MvY47zooYce+r433hRKAMw6r4bb\nBF772tcmWT7V9cwzz+Scc87Z0OdfWlrKtm3bcvLkyQ19XgCYBo4sbSJLS0vZsWPHi0ecqio/9EM/\nlCNHjqz7cx08ePDFV+0
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(output[0]), cmap='gray')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "k-WdEzncEnI1"
},
"source": [
"Upon inspecting the node list, we can see nodes such as `UNet_v1/TRTEngineOp_0`, `UNet_v1/TRTEngineOp_1`... These are portions of the naitive TensorFlow graph that has been convert and optimized for TensorRT execution. For parts of the graph that are not convertible, execution is carried out by the native TensorFlow runtime. "
]
},
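{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a quick way to gauge how much of the graph was converted is to count these engine nodes. The minimal sketch below reuses the `nodes` list collected above; the full node list is printed in the cell that follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A quick check (sketch): count the TF-TRT engine nodes in the loaded graph,\n",
"# reusing the `nodes` list gathered when the saved TF-TRT model was loaded above.\n",
"trt_engine_nodes = [n for n in nodes if 'TRTEngineOp' in n]\n",
"print('Number of TRTEngineOp nodes:', len(trt_engine_nodes))"
]
},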
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "vIPgTM8nhGpK",
"outputId": "b45bdb1a-d3d9-4545-dc0e-7600ca20f1a9"
},
"outputs": [
{
"data": {
"text/plain": [
"['input',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_positive/Const',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_positive/assert_less/Const',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_positive/assert_less/Assert/Assert/data_0',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/Rank',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/y',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/All',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/Assert/Assert/data_0',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/Assert/Assert/data_1',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/Assert/Assert/data_2',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/Assert/Assert/data_4',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_positive/Const',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_positive/assert_less/Const',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_positive/assert_less/Assert/Assert/data_0',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/Rank',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/y',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/All',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/Assert/Assert/data_0',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/Assert/Assert/data_1',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/Assert/Assert/data_2',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/Assert/Assert/data_4',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_positive/Const',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_positive/assert_less/Const',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_positive/assert_less/Assert/Assert/data_0',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/Rank',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/y',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/All',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/Assert/Assert/data_0',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/Assert/Assert/data_1',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/Assert/Assert/data_2',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/Assert/Assert/data_4',\n",
" 'UNet_v1/bottleneck_block/deconv2d/upsample2d_layer/resize/size',\n",
" 'UNet_v1/upsample_block_1/deconv2d/upsample2d_layer/resize/size',\n",
" 'UNet_v1/upsample_block_2/deconv2d/upsample2d_layer/resize/size',\n",
" 'UNet_v1/upsample_block_3/deconv2d/upsample2d_layer/resize/size',\n",
" 'UNet_v1/ouputs_block/conv2d_2/bias/read',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/Shape',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_greater_equal/Assert/Assert',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_greater_equal/Assert/Assert',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_greater_equal/Assert/Assert',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_positive/assert_less/Less',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_positive/assert_less/All',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/assert_positive/assert_less/Assert/Assert',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/control_dependency',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/Shape',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_positive/assert_less/Less',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_positive/assert_less/All',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/assert_positive/assert_less/Assert/Assert',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/crop_to_bounding_box/TRTEngineOp_2',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/Shape',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_positive/assert_less/Less',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_positive/assert_less/All',\n",
" 'UNet_v1/input_reshape/initial_zero_padding/resize_image_with_crop_or_pad/pad_to_bounding_box/assert_positive/assert_less/Assert/Assert',\n",
" 'UNet_v1/TRTEngineOp_0',\n",
" 'UNet_v1/bottleneck_block/deconv2d/upsample2d_layer/resize/ResizeNearestNeighbor',\n",
" 'UNet_v1/TRTEngineOp_1',\n",
" 'UNet_v1/upsample_block_1/deconv2d/upsample2d_layer/resize/ResizeNearestNeighbor',\n",
" 'UNet_v1/TRTEngineOp_4',\n",
" 'UNet_v1/upsample_block_2/deconv2d/upsample2d_layer/resize/ResizeNearestNeighbor',\n",
" 'UNet_v1/TRTEngineOp_5',\n",
" 'UNet_v1/upsample_block_3/deconv2d/upsample2d_layer/resize/ResizeNearestNeighbor',\n",
" 'UNet_v1/TRTEngineOp_3',\n",
" 'UNet_v1/ouputs_block/conv2d_2/BiasAdd',\n",
" 'UNet_v1/sigmoid']"
]
},
"execution_count": 113,
"metadata": {
"tags": []
},
"output_type": "execute_result"
}
],
"source": [
"nodes"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "kyVrRvVbFFzi"
},
"source": [
"## FP16 Inference\n",
"\n",
"Next, we convert the model using FP16 precision."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "j9fbc3xRFFBD",
"outputId": "97e10dca-f2d2-4617-829d-c183a9001bca"
},
"outputs": [],
"source": [
"SAVED_MODEL_DIR = './TR-TRT-model-FP16'\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" network = UNet_v1(\n",
" model_name=\"UNet_v1\",\n",
" input_format='NHWC',\n",
" compute_format='NHWC',\n",
" n_output_channels=1,\n",
" unet_variant='tinyUNet',\n",
" weight_init_method='he_uniform',\n",
" activation_fn='relu'\n",
" )\n",
" \n",
" tf_input = tf.placeholder(tf.float32, [None, 512, 512, 1], name='input')\n",
" \n",
" outputs, logits = network.build_model(tf_input)\n",
" \n",
" #print output nodes names\n",
" print(outputs)\n",
" print(logits)\n",
" \n",
" saver = tf.train.Saver()\n",
"\n",
" # Restore variables from disk.\n",
" saver.restore(sess, \"JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500\")\n",
" \n",
" # Freeze the graph:\n",
" frozen_graph = tf.graph_util.convert_variables_to_constants(sess,\n",
" tf.get_default_graph().as_graph_def(),\n",
" output_node_names=['UNet_v1/sigmoid', \n",
" 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'])\n",
"\n",
" # Now you can create a TensorRT inference graph from your frozen graph:\n",
" converter = trt.TrtGraphConverter(input_graph_def=frozen_graph,\n",
" nodes_blacklist=['UNet_v1/sigmoid', 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'],\n",
" precision_mode='FP16' ) #output nodes\n",
" trt_graph = converter.convert()\n",
"\n"
]
},
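  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, the converted `trt_graph` returned above can be inspected to see how many `TRTEngineOp` nodes TF-TRT created; each of these nodes wraps one TensorRT engine, while the remaining nodes keep running in native TensorFlow. The short cell below is a sketch added for illustration and only assumes the `trt_graph` GraphDef produced by the previous cell."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: count the TensorRT engine ops created by the FP16 conversion.\n",
    "trt_engine_ops = [n.name for n in trt_graph.node if n.op == 'TRTEngineOp']\n",
    "print('TRTEngineOp nodes: %d' % len(trt_engine_ops))\n",
    "print('Total nodes in converted graph: %d' % len(trt_graph.node))"
   ]
  },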
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 111
},
"colab_type": "code",
"id": "twMXj3ANFazk",
"outputId": "202d5243-182f-4f75-bbd7-a8a2d1762605"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rm: cannot remove './TR-TRT-model-FP16': No such file or directory\n",
"Saving model to ./TR-TRT-model-FP16\n",
"INFO:tensorflow:Assets added to graph.\n",
"INFO:tensorflow:No assets to write.\n",
"INFO:tensorflow:SavedModel written to: ./TR-TRT-model-FP16/saved_model.pb\n"
]
}
],
"source": [
"!rm -r $SAVED_MODEL_DIR\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" # Import the TensorRT graph into a new graph and run:\n",
" output_node = tf.import_graph_def(trt_graph, return_elements=['UNet_v1/sigmoid', 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'], name=\"\")\n",
" \n",
" output = sess.run([\"UNet_v1/sigmoid:0\"], feed_dict={\"input:0\": img})\n",
"\n",
" #Optionally, save model for serving if an ouput directory argument is presented\n",
" if SAVED_MODEL_DIR:\n",
" print('Saving model to %s'%SAVED_MODEL_DIR)\n",
" tf.saved_model.simple_save(\n",
" session=sess,\n",
" export_dir=SAVED_MODEL_DIR,\n",
" inputs={\"input\":tf.get_default_graph().get_tensor_by_name(\"input:0\")},\n",
" outputs={\"mask\":tf.get_default_graph().get_tensor_by_name(\"UNet_v1/sigmoid:0\")},\n",
" legacy_init_op=None\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 614
},
"colab_type": "code",
"id": "0wFLBT7BFj_8",
"outputId": "0092d9b9-5a2a-4a35-85fb-c6c7d33a67d6"
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fbf76a9e438>"
]
},
"execution_count": 116,
"metadata": {
"tags": []
},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAksAAAJCCAYAAADQsoPKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHE9JREFUeJzt3W+MXXd95/HP1zO2Y0ISJxBciFOc\nVcMihBaDIjYVPKBUrZIWNTxAiKorUhRhVepKqdpVCX1SFW0fwINCo67opiVqWvUfok0TUdQlCmy3\neQBNUqckENi4NG7shjiBxHEIiePxbx/McTpk6W/G9tw5d+59vaTR3HPu8b3f8YHxO+ece2+11gIA\nwA+2ZewBAACmmVgCAOgQSwAAHWIJAKBDLAEAdIglAICOicRSVV1VVd+oqgNVdcMkngMAYCPUer/P\nUlUtJPm/SX4iyaEkdyf52dba19b1iQAANsAkjiy9NcmB1to3W2vHk/xZkmsm8DwAABO3OIHHvCTJ\nIyuWDyX5z70/UFXeRhwA2GhPtNYuXm2jScTSmlTVviT7xnp+AGDuHVzLRpOIpcNJLl2xvHtY931a\nazcluSlxZAkAmF6TuGbp7iSXV9VlVbUtyfuS3D6B5wEAmLh1P7LUWjtRVf81yf9KspDk5tbaV9f7\neQAANsK6v3XAGQ3hNBwAsPHuba1dsdpG3sEbAKBDLAEAdIglAIAOsQQA0CGWAAA6xBIAQIdYAgDo\nEEsAAB1iCQCgQywBAHSIJQCADrEEANAhlgAAOsQSAECHWAIA6BBLAAAdYgkAoEMsAQB0iCUAgA6x\nBADQIZYAADrEEgBAh1gCAOgQSwAAHWIJAKBDLAEAdIglAIAOsQQA0CGWAAA6xFLH3/7t36a1tuav\nD3/4w2OPDACss2qtjT1Dqmr8IQZve9vbctddd63LY+3YsSPPPffcujwWALDu7m2tXbHaRo4sDb70\npS+ltbZuoZQk3/ve99Jay/79+9ftMQGAjTX3R5a2bt2a48ePb9jzvfnNb06S3HfffRv2nADAD+TI\n0mpuueWWDQ2lJNm/f3/279//4nVOF1100YY+PwBwehbHHmAsR44cycUXXzz2GPn2t7+dJGmtZXFx\nMSdPnhx5IgBgpbk8svQLv/ALUxFKK1VVlpaW0lrL0tJSFhfntmMBYKrM3TVLG32N0tl6/vnnc845\n54w9BgDMItcsAQCcrbmLpc10VClJtm/f/uLF4ADAxpubC2O++c1vjj3CWWutOS0HABtsbq5Zmoaf\nc71dfPHFeeKJJ8YeAwA2K9csnTKLoZQkjz/+uLcaAIAJm4tYmmVVNbMxCADTYOZjaV5CorWWhYWF\nsccAgJkz87E0T06cOJHf/d3fHXsMAJgpM32B99LSUrZsmc8erKqxRwCAaecC73kNpWT5tNxHPvKR\nsccAgE1vZo8sbd++Pc8999x6P+ym5CgTAPxA831k6dixY2OPMDVaa7ntttvGHgMANqWZPbI0DT/X\nNNq+ffum+8gXAJiQ+T6yxA/2/PPPZ2lpaewxAGDTmMlY2rt379gjTLUtW7a8+OG8N99889jjAMBU\nm8nTcMePH8/WrVvX8yFn3mWXXZaHH3547DEAYCPN72k4oXT6/vmf/9nnzAHADzCTsQQAsF7EEi86\n9aG8N95449ijAMDUmMlrlqbhZ9rsjh8/nu3bt489BgBM0vxes8TZ27Ztm+gEgIglVtFay8tf/vKx\nxwCA0YglVnXs2DHBBMDcmrlYWlxcHHuEmXTs2DFvyQDAXJq5WHr9618/9ggzy2fKATCPZi6WPvSh\nD409wkxz0TcA82bmYulNb3rT2CPMPMEEwDyZuVg699xzxx5hLggmAObFzMXSc889N/YIc+Po0aM5\nevTo2GMAwETN3EvHFhYWxh5hbpx//vlJkvPOOy/Hjh0beRoAmAxHljhrTz/99NgjAMDEzFwsHTx4\ncOwR5tLVV1899ggAMBEzF0u/93u/N/YIc+lzn/vc2CMAwETUNLyqqarWbYht27bl+eefX6+H4zR8\n73vfy8te9rKxxwCAtbq3tXbFahvN3JElAID1NHOx5CM5xrNjx4687nWvG3sMAFhXMxdLjOsb3/jG\n2CMAwLoSS6y7kydPjj0CAKwbscS6q6rs3bt37DEAYF3M3KvhkuUjG1W1ng/JGbAPAJhy8/tquEOH\nDo09Akn++q//euwRAOCszeSRpVe+8pV5/PHH1/MhOUOOLgEwxeb3yNITTzwx9ggMTpw4MfYIAHBW\nZjKWmB4LCwtjjwAAZ2VmY+mFF14YewQG999//9gjAMAZm9lYuvDCC8cegcEb3/jGsUcAgDM2s7H0\n3e9+d+wRWMHpOAA2q5mNpcQ7SU+TZ599duwRAOCMzHQsbdu2bewRGNgXAGxWMx1LS0tLY4/ACh/7\n2MfGHgEATttMvinlSk899VQuuOCCST08p8mbVAIwReb3TSkBANbLzMfSzp07xx6BFW699daxRwCA\n0zLzp+GSZBp+Rv6NU3EATAmn4U554IEHxh6BFXbs2DH2CACwZnNxZClxdGmaLC0tZXFxcewxAMCR\nJaaTd/MGYDOZm1jasmVLtmyZmx936jmyBMBmMTf10FpzKm6KHD16dOwRAGBN5iaWTtm9e/fYI5Dk\nZS972dgjAMCarBpLVXVzVR2pqgdWrLuoqu6oqoeG7xcO66uqbqyqA1X1lap6yySHPxOHDx8eewQA\nYBNZy5GlP0hy1UvW3ZDkztba5UnuHJaT5Ooklw9f+5J8cn3GXF+f+MQnxh6BJHv37h17BABY1Zre\nOqCq9iT5bGvtjcPyN5K8o7X2aFW9Osn/bq39x6r6n8PtP33pdqs8/oZfTOT6pfEdOnQol1566dhj\nADC/JvrWAbtWBNC3kuwabl+S5JEV2x0a1k2d22+/fewR5t5rXvOasUcAgFWd9eu3W2vtTI4MVdW+\nLJ+qG8U111zj6NLIvJUDAJvBmf5r9dhw+i3D9yPD+sNJVp5X2T2s+/+01m5qrV2xlsNfk/KqV71q\nrKcGADaJM42l25NcO9y+NsltK9a/f3hV3JVJjq52vdKYHn/88Rw8eHDsMQCAKbbqBd5V9adJ3pHk\nlUkeS/LrSf4qyaeT/HCSg0ne21r7Ti1/nPzvZPnVc88m+UBr7Z5VhxjhAu+VTp48meXR2Wj+3gEY\n0Zou8J6bD9LtEUvj8fcOwIh8kO5audAYAPj3qISBIxwbb2lpaewRAGBVYmkFwbSxHnzwwbFHAIBV\niaWXEEwb54Mf/ODYIwDAqlzg/e+Yhr+XWbe4uOhUHABjcoH32aiqPPvss2OPMdOEEgCbgVjqOPfc\nc/PzP//zY48xk06ePDn2CACwJk7DrdE0/D3Nkl27duXIkSOrbwgAk+M03Hqqqtx111256667xh5l\nJgglADYLR5bO0DT8vW1WLuwGYEo4sjR
JVZXzzz9/7DE2nbvvvlsoAbCpOLK0Tny+3Opaaz5aBoBp\n4sjSRtqyZYsQWIW/HwA2I/96raPWWqoqCwsLrml6ia1bt449AgCcEbE0ASdPnnzxSJPrc5LXvOY1\nOXHixNhjAMAZEUsT1FrL4uJiqipPP/302OOM4sILL8yjjz469hgAcMbEEgBAx+LYA8yLCy64IEly\n/PjxJPNxDY9XBwIwCxxZ2mDbtm3Ltm3bUlV55plnxh5nIp588kmhBMDMEEsjOu+881JVueyyy8Ye\nZV2ceq+piy66aOxRAGDdiKUp8PDDD6eqUlX5/Oc/P/Y4Z2RhYSELCwtjjwEA6847eE+pnTt35skn\nnxx7jH/XyZMns7i4fMnbNPxvCADOgHfw3syeeuqpF482/cu//MvY47zooYce+r433hRKAMw6r4bb\nBF772tcmWT7V9cwzz+Scc87Z0OdfWlrKtm3bcvLkyQ19XgCYBo4sbSJLS0vZsWPHi0ecqio/9EM/\nlCNHjqz7cx08ePDFV+0
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(output[0]), cmap='gray')"
]
},
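  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following is a minimal sketch, added for illustration rather than executed as part of the original run, of how the exported FP16 SavedModel could be reloaded in a fresh session and roughly timed. It assumes the `config`, `img` and `SAVED_MODEL_DIR` objects defined above; `simple_save` exports the model under the `serve` tag, so `tf.saved_model.loader.load` can restore it directly. Absolute latency numbers depend on the GPU assigned by Colab."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: reload the exported TF-TRT FP16 SavedModel and take a rough latency measurement.\n",
    "import time\n",
    "\n",
    "with tf.Graph().as_default():\n",
    "    with tf.Session(config=config) as sess:\n",
    "        # simple_save() exports with the 'serve' tag and a 'serving_default' signature.\n",
    "        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], SAVED_MODEL_DIR)\n",
    "\n",
    "        # Warm-up runs let TF-TRT finish building/selecting its engines.\n",
    "        for _ in range(10):\n",
    "            sess.run('UNet_v1/sigmoid:0', feed_dict={'input:0': img})\n",
    "\n",
    "        n_runs = 50\n",
    "        start = time.time()\n",
    "        for _ in range(n_runs):\n",
    "            sess.run('UNet_v1/sigmoid:0', feed_dict={'input:0': img})\n",
    "        print('Average latency over %d runs: %.2f ms' % (n_runs, (time.time() - start) / n_runs * 1000.0))"
   ]
  },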
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "DTOuQAkaFpGR"
},
"source": [
"## INT8 Inference"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "InnNdsJEFtm-",
"outputId": "a515a46f-74a9-487c-defd-8f483f5a1d72"
},
"outputs": [],
"source": [
"SAVED_MODEL_DIR = './TR-TRT-model-INT8'\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" network = UNet_v1(\n",
" model_name=\"UNet_v1\",\n",
" input_format='NHWC',\n",
" compute_format='NHWC',\n",
" n_output_channels=1,\n",
" unet_variant='tinyUNet',\n",
" weight_init_method='he_uniform',\n",
" activation_fn='relu'\n",
" )\n",
" \n",
" tf_input = tf.placeholder(tf.float32, [None, 512, 512, 1], name='input')\n",
" \n",
" outputs, logits = network.build_model(tf_input)\n",
" \n",
" #print output nodes names\n",
" print(outputs)\n",
" print(logits)\n",
" \n",
" saver = tf.train.Saver()\n",
"\n",
" # Restore variables from disk.\n",
" saver.restore(sess, \"JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500\")\n",
" \n",
" # Freeze the graph:\n",
" frozen_graph = tf.graph_util.convert_variables_to_constants(sess,\n",
" tf.get_default_graph().as_graph_def(),\n",
" output_node_names=['UNet_v1/sigmoid', \n",
" 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'])\n",
"\n",
" # Now you can create a TensorRT inference graph from your frozen graph:\n",
" converter = trt.TrtGraphConverter(input_graph_def=frozen_graph,\n",
" nodes_blacklist=['UNet_v1/sigmoid', 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'],\n",
" precision_mode='INT8' ) #output nodes\n",
" trt_graph = converter.convert()\n",
"\n"
]
},
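  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unlike FP16, INT8 inference needs dynamic-range information for the intermediate tensors. With the TF 1.x `TrtGraphConverter` API this is normally obtained through a calibration pass over representative input data before the optimized graph is saved. The cell below is a hedged sketch of that step; it was not executed in this notebook and assumes the `converter` and `img` objects defined above. A realistic calibration set would contain many representative images rather than a single one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hedged sketch of TF-TRT INT8 calibration (not executed in this notebook).\n",
    "def feed_dict_fn():\n",
    "    # In practice, return a different representative image on each call.\n",
    "    return {'input:0': img}\n",
    "\n",
    "calibrated_graph = converter.calibrate(\n",
    "    fetch_names=['UNet_v1/sigmoid:0'],\n",
    "    num_runs=1,\n",
    "    feed_dict_fn=feed_dict_fn)"
   ]
  },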
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 111
},
"colab_type": "code",
"id": "PN6Duzd7F5Xk",
"outputId": "3c647fb6-79cf-493a-9858-b7c781d608b5"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rm: cannot remove './TR-TRT-model-INT8': No such file or directory\n",
"Saving model to ./TR-TRT-model-INT8\n",
"INFO:tensorflow:Assets added to graph.\n",
"INFO:tensorflow:No assets to write.\n",
"INFO:tensorflow:SavedModel written to: ./TR-TRT-model-INT8/saved_model.pb\n"
]
}
],
"source": [
"!rm -r $SAVED_MODEL_DIR\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" # Import the TensorRT graph into a new graph and run:\n",
" output_node = tf.import_graph_def(trt_graph, return_elements=['UNet_v1/sigmoid', 'UNet_v1/ouputs_block/conv2d_2/BiasAdd'], name=\"\")\n",
" \n",
" output = sess.run([\"UNet_v1/sigmoid:0\"], feed_dict={\"input:0\": img})\n",
"\n",
" #Optionally, save model for serving if an ouput directory argument is presented\n",
" if SAVED_MODEL_DIR:\n",
" print('Saving model to %s'%SAVED_MODEL_DIR)\n",
" tf.saved_model.simple_save(\n",
" session=sess,\n",
" export_dir=SAVED_MODEL_DIR,\n",
" inputs={\"input\":tf.get_default_graph().get_tensor_by_name(\"input:0\")},\n",
" outputs={\"mask\":tf.get_default_graph().get_tensor_by_name(\"UNet_v1/sigmoid:0\")},\n",
" legacy_init_op=None\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 614
},
"colab_type": "code",
"id": "MHNtVwoWF-lV",
"outputId": "a5ef1660-bf8b-4595-a673-3a6c3aa34ef8"
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7fbf7c6d1fd0>"
]
},
"execution_count": 119,
"metadata": {
"tags": []
},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAksAAAJCCAYAAADQsoPKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHE9JREFUeJzt3W+MXXd95/HP1zO2Y0ISJxBciFOc\nVcMihBaDIjYVPKBUrZIWNTxAiKorUhRhVepKqdpVCX1SFW0fwINCo67opiVqWvUfok0TUdQlCmy3\neQBNUqckENi4NG7shjiBxHEIiePxbx/McTpk6W/G9tw5d+59vaTR3HPu8b3f8YHxO+ece2+11gIA\nwA+2ZewBAACmmVgCAOgQSwAAHWIJAKBDLAEAdIglAICOicRSVV1VVd+oqgNVdcMkngMAYCPUer/P\nUlUtJPm/SX4iyaEkdyf52dba19b1iQAANsAkjiy9NcmB1to3W2vHk/xZkmsm8DwAABO3OIHHvCTJ\nIyuWDyX5z70/UFXeRhwA2GhPtNYuXm2jScTSmlTVviT7xnp+AGDuHVzLRpOIpcNJLl2xvHtY931a\nazcluSlxZAkAmF6TuGbp7iSXV9VlVbUtyfuS3D6B5wEAmLh1P7LUWjtRVf81yf9KspDk5tbaV9f7\neQAANsK6v3XAGQ3hNBwAsPHuba1dsdpG3sEbAKBDLAEAdIglAIAOsQQA0CGWAAA6xBIAQIdYAgDo\nEEsAAB1iCQCgQywBAHSIJQCADrEEANAhlgAAOsQSAECHWAIA6BBLAAAdYgkAoEMsAQB0iCUAgA6x\nBADQIZYAADrEEgBAh1gCAOgQSwAAHWIJAKBDLAEAdIglAIAOsQQA0CGWAAA6xFLH3/7t36a1tuav\nD3/4w2OPDACss2qtjT1Dqmr8IQZve9vbctddd63LY+3YsSPPPffcujwWALDu7m2tXbHaRo4sDb70\npS+ltbZuoZQk3/ve99Jay/79+9ftMQGAjTX3R5a2bt2a48ePb9jzvfnNb06S3HfffRv2nADAD+TI\n0mpuueWWDQ2lJNm/f3/279//4nVOF1100YY+PwBwehbHHmAsR44cycUXXzz2GPn2t7+dJGmtZXFx\nMSdPnhx5IgBgpbk8svQLv/ALUxFKK1VVlpaW0lrL0tJSFhfntmMBYKrM3TVLG32N0tl6/vnnc845\n54w9BgDMItcsAQCcrbmLpc10VClJtm/f/uLF4ADAxpubC2O++c1vjj3CWWutOS0HABtsbq5Zmoaf\nc71dfPHFeeKJJ8YeAwA2K9csnTKLoZQkjz/+uLcaAIAJm4tYmmVVNbMxCADTYOZjaV5CorWWhYWF\nsccAgJkz87E0T06cOJHf/d3fHXsMAJgpM32B99LSUrZsmc8erKqxRwCAaecC73kNpWT5tNxHPvKR\nsccAgE1vZo8sbd++Pc8999x6P+ym5CgTAPxA831k6dixY2OPMDVaa7ntttvGHgMANqWZPbI0DT/X\nNNq+ffum+8gXAJiQ+T6yxA/2/PPPZ2lpaewxAGDTmMlY2rt379gjTLUtW7a8+OG8N99889jjAMBU\nm8nTcMePH8/WrVvX8yFn3mWXXZaHH3547DEAYCPN72k4oXT6/vmf/9nnzAHADzCTsQQAsF7EEi86\n9aG8N95449ijAMDUmMlrlqbhZ9rsjh8/nu3bt489BgBM0vxes8TZ27Ztm+gEgIglVtFay8tf/vKx\nxwCA0YglVnXs2DHBBMDcmrlYWlxcHHuEmXTs2DFvyQDAXJq5WHr9618/9ggzy2fKATCPZi6WPvSh\nD409wkxz0TcA82bmYulNb3rT2CPMPMEEwDyZuVg699xzxx5hLggmAObFzMXSc889N/YIc+Po0aM5\nevTo2GMAwETN3EvHFhYWxh5hbpx//vlJkvPOOy/Hjh0beRoAmAxHljhrTz/99NgjAMDEzFwsHTx4\ncOwR5tLVV1899ggAMBEzF0u/93u/N/YIc+lzn/vc2CMAwETUNLyqqarWbYht27bl+eefX6+H4zR8\n73vfy8te9rKxxwCAtbq3tXbFahvN3JElAID1NHOx5CM5xrNjx4687nWvG3sMAFhXMxdLjOsb3/jG\n2CMAwLoSS6y7kydPjj0CAKwbscS6q6rs3bt37DEAYF3M3KvhkuUjG1W1ng/JGbAPAJhy8/tquEOH\nDo09Akn++q//euwRAOCszeSRpVe+8pV5/PHH1/MhOUOOLgEwxeb3yNITTzwx9ggMTpw4MfYIAHBW\nZjKWmB4LCwtjjwAAZ2VmY+mFF14YewQG999//9gjAMAZm9lYuvDCC8cegcEb3/jGsUcAgDM2s7H0\n3e9+d+wRWMHpOAA2q5mNpcQ7SU+TZ599duwRAOCMzHQsbdu2bewRGNgXAGxWMx1LS0tLY4/ACh/7\n2MfGHgEATttMvinlSk899VQuuOCCST08p8mbVAIwReb3TSkBANbLzMfSzp07xx6BFW699daxRwCA\n0zLzp+GSZBp+Rv6NU3EATAmn4U554IEHxh6BFXbs2DH2CACwZnNxZClxdGmaLC0tZXFxcewxAMCR\nJaaTd/MGYDOZm1jasmVLtmyZmx936jmyBMBmMTf10FpzKm6KHD16dOwRAGBN5iaWTtm9e/fYI5Dk\nZS972dgjAMCarBpLVXVzVR2pqgdWrLuoqu6oqoeG7xcO66uqbqyqA1X1lap6yySHPxOHDx8eewQA\nYBNZy5GlP0hy1UvW3ZDkztba5UnuHJaT5Ooklw9f+5J8cn3GXF+f+MQnxh6BJHv37h17BABY1Zre\nOqCq9iT5bGvtjcPyN5K8o7X2aFW9Osn/bq39x6r6n8PtP33pdqs8/oZfTOT6pfEdOnQol1566dhj\nADC/JvrWAbtWBNC3kuwabl+S5JEV2x0a1k2d22+/fewR5t5rXvOasUcAgFWd9eu3W2vtTI4MVdW+\nLJ+qG8U111zj6NLIvJUDAJvBmf5r9dhw+i3D9yPD+sNJVp5X2T2s+/+01m5qrV2xlsNfk/KqV71q\nrKcGADaJM42l25NcO9y+NsltK9a/f3hV3JVJjq52vdKYHn/88Rw8eHDsMQCAKbbqBd5V9adJ3pHk\nlUkeS/LrSf4qyaeT/HCSg0ne21r7Ti1/nPzvZPnVc88m+UBr7Z5VhxjhAu+VTp48meXR2Wj+3gEY\n0Zou8J6bD9LtEUvj8fcOwIh8kO5audAYAPj3qISBIxwbb2lpaewRAGBVYmkFwbSxHnzwwbFHAIBV\niaWXEEwb54Mf/ODYIwDAqlzg/e+Yhr+XWbe4uOhUHABjcoH32aiqPPvss2OPMdOEEgCbgVjqOPfc\nc/PzP//zY48xk06ePDn2CACwJk7DrdE0/D3Nkl27duXIkSOrbwgAk+M03Hqqqtx111256667xh5l\nJgglADYLR5bO0DT8vW1WLuwGYEo4sjR
JVZXzzz9/7DE2nbvvvlsoAbCpOLK0Tny+3Opaaz5aBoBp\n4sjSRtqyZYsQWIW/HwA2I/96raPWWqoqCwsLrml6ia1bt449AgCcEbE0ASdPnnzxSJPrc5LXvOY1\nOXHixNhjAMAZEUsT1FrL4uJiqipPP/302OOM4sILL8yjjz469hgAcMbEEgBAx+LYA8yLCy64IEly\n/PjxJPNxDY9XBwIwCxxZ2mDbtm3Ltm3bUlV55plnxh5nIp588kmhBMDMEEsjOu+881JVueyyy8Ye\nZV2ceq+piy66aOxRAGDdiKUp8PDDD6eqUlX5/Oc/P/Y4Z2RhYSELCwtjjwEA6847eE+pnTt35skn\nnxx7jH/XyZMns7i4fMnbNPxvCADOgHfw3syeeuqpF482/cu//MvY47zooYce+r433hRKAMw6r4bb\nBF772tcmWT7V9cwzz+Scc87Z0OdfWlrKtm3bcvLkyQ19XgCYBo4sbSJLS0vZsWPHi0ecqio/9EM/\nlCNHjqz7cx08ePDFV+0
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
},
"output_type": "display_data"
}
],
"source": [
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(output[0]), cmap='gray')"
]
},
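  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a final, optional check, the `saved_model_cli` tool that ships with TensorFlow can list the signatures of the exported model, which is what a serving system such as TensorFlow Serving consumes. This cell is a sketch added for illustration; `simple_save` writes the `serve` tag with a `serving_default` signature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: inspect the exported SavedModel signature with the TensorFlow CLI.\n",
    "!saved_model_cli show --dir $SAVED_MODEL_DIR --tag_set serve --signature_def serving_default"
   ]
  },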
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "g8MxXY5GmTc8"
},
"source": [
"# Conclusion\n",
"\n",
"In this notebook, we have walked through the complete process of carrying out inference using a pretrained UNet-Industrial model.\n",
"## What's next\n",
"Now it's time to try the UNet-Industrial model on your own data. "
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"include_colab_link": true,
"name": "Colab_UNet_Industrial_TF_TFTRT_inference_demo.ipynb",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 1
}