{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "Colab_UNet_Industrial_TF_TFHub_export.ipynb",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Segmentation/UNet_Industrial/notebooks/Colab_UNet_Industrial_TF_TFHub_export.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "Gwt7z7qdmTbW",
"colab": {}
},
"source": [
"# Copyright 2019 NVIDIA Corporation. All Rights Reserved.\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# http://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License.\n",
"# =============================================================================="
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "i4NKCp2VmTbn"
},
"source": [
"<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
"\n",
"# UNet Industrial Demo on TensorFLow Hub: Export and Inference"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "fW0OKDzvmTbt"
},
"source": [
"## Overview\n",
"\n",
"\n",
"In this notebook, we will demo the process of exporting NVIDIA NGC [Unet Industrial defects detection models](https://ngc.nvidia.com/catalog/model-scripts/nvidia:unet_industrial_for_tensorflow) to TF-Hub modules, which can be persisted to disk, saved to a Google Drive folder or published on to TF-Hub. NVIDIA pre-trained U-Net model is adapted from the original version of the [U-Net model](https://arxiv.org/abs/1505.04597) which is\n",
"a convolutional auto-encoder for 2D image segmentation. U-Net was first introduced by\n",
"Olaf Ronneberger, Philip Fischer, and Thomas Brox in the paper:\n",
"[U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597).\n",
"\n",
"[NVIDIA NGC](https://ngc.nvidia.com/catalog/models) is the hub for GPU-optimized software and pre-trained models for deep learning, machine learning, and HPC that takes care of all the plumbing so data scientists, developers, and researchers can focus on building solutions, gathering insights, and delivering business value.\n",
"\n",
"[TensorFlow Hub](https://www.tensorflow.org/hub) is \"a library for the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning.\"\n",
"\n",
"\n",
"\n",
"### Requirement\n",
"1. Before running this notebook, please set the Colab runtime environment to GPU via the menu *Runtime => Change runtime type => GPU*.\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "HVsrGkj4Zn2L",
"outputId": "4a2df918-cd2f-437a-dc84-5778791b50af",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 326
}
},
"source": [
"!nvidia-smi"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"Mon Oct 28 23:39:15 2019 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 430.50 Driver Version: 418.67 CUDA Version: 10.1 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 70C P8 30W / 149W | 0MiB / 11441MiB | 0% Default |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: GPU Memory |\n",
"| GPU PID Type Process name Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "pV3rzgO8-tSK"
},
"source": [
"The below code checks whether a Tensor-Core GPU is present. Tensor Cores can accelerate large matrix operations by performing mixed-precision matrix multiply and accumulate calculations in a single operation. "
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "Djyvo8mm9poq",
"outputId": "3854df86-ba78-4353-bcf8-5778d2e746d0",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"%tensorflow_version 1.x \n",
"import tensorflow as tf\n",
"print(tf.__version__) # This notebook runs on TensorFlow 1.x. \n",
"\n",
"from tensorflow.python.client import device_lib\n",
"\n",
"def check_tensor_core_gpu_present():\n",
" local_device_protos = device_lib.list_local_devices()\n",
" for line in local_device_protos:\n",
" if \"compute capability\" in str(line):\n",
" compute_capability = float(line.physical_device_desc.split(\"compute capability: \")[-1])\n",
" if compute_capability>=7.0:\n",
" return True\n",
" \n",
"print(\"Tensor Core GPU Present:\", check_tensor_core_gpu_present())\n",
"tensor_core_gpu = check_tensor_core_gpu_present()"
],
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": [
"1.15.0\n",
"Tensor Core GPU Present: None\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FCEfkBAbbaLI"
},
"source": [
"2. Next, we clone the NVIDIA Github UNet Industrial repository and set up the workspace."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "y3u_VMjXtAto",
"outputId": "c1c55a89-5115-48ca-df0a-0593a46dc6b3",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 108
}
},
"source": [
"!git clone https://github.com/NVIDIA/DeepLearningExamples"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"Cloning into 'DeepLearningExamples'...\n",
"remote: Enumerating objects: 4151, done.\u001b[K\n",
"remote: Total 4151 (delta 0), reused 0 (delta 0), pack-reused 4151\u001b[K\n",
"Receiving objects: 100% (4151/4151), 32.36 MiB | 25.80 MiB/s, done.\n",
"Resolving deltas: 100% (1858/1858), done.\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "CvvfQ0RttAt9",
"outputId": "50804a40-653e-4c4f-9ba6-3cab78f0be1c",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"%%bash\n",
"cd DeepLearningExamples\n",
"git checkout master"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": [
"Your branch is up to date with 'origin/master'.\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"Already on 'master'\n"
],
"name": "stderr"
}
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "-rE46y-ftAuQ",
"outputId": "da983df6-6f73-4f7e-e6a5-31b11dbca147",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"import os\n",
"\n",
"WORKSPACE_DIR='/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks'\n",
"os.chdir(WORKSPACE_DIR)\n",
"print (os.getcwd())"
],
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": [
"/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "2b2vTOWtPuIE",
"colab_type": "code",
"outputId": "a6848717-d89b-4125-ee1a-fabd0197e4ea",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 108
}
},
"source": [
"!pip install tensorflow_hub==0.6.0"
],
"execution_count": 6,
"outputs": [
{
"output_type": "stream",
"text": [
"Requirement already satisfied: tensorflow_hub==0.6.0 in /usr/local/lib/python3.6/dist-packages (0.6.0)\n",
"Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub==0.6.0) (1.12.0)\n",
"Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub==0.6.0) (3.10.0)\n",
"Requirement already satisfied: numpy>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub==0.6.0) (1.17.3)\n",
"Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.4.0->tensorflow_hub==0.6.0) (41.4.0)\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "HqSUGePjmTb9"
},
"source": [
"## Data download\n",
"\n",
"We will first download some data for testing, in particular, the [Weakly Supervised Learning for Industrial Optical Inspection (DAGM 2007)](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html) competition dataset. \n",
"\n",
"> The competition is inspired by problems from industrial image processing. In order to satisfy their customers' needs, companies have to guarantee the quality of their products, which can often be achieved only by inspection of the finished product. Automatic visual defect detection has the potential to reduce the cost of quality assurance significantly.\n",
">\n",
"> The competitors have to design a stand-alone algorithm which is able to detect miscellaneous defects on various background textures.\n",
">\n",
"> The particular challenge of this contest is that the algorithm must learn, without human intervention, to discern defects automatically from a weakly labeled (i.e., labels are not exact to the pixel level) training set, the exact characteristics of which are unknown at development time. During the competition, the programs have to be trained on new data without any human guidance.\n",
"\n",
"**Source:** https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "S2PR7weWmTcK",
"colab": {}
},
"source": [
"! ./download_and_preprocess_dagm2007_public.sh ./data"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "EQAIszkxmTcT"
},
"source": [
"The final data directory should look like:\n",
"\n",
"```\n",
"./data\n",
" raw_images\n",
" public\n",
" Class1\t \n",
" Class2\t\n",
" Class3\t \n",
" Class4\t\n",
" Class5\t \n",
" Class6\n",
" Class1_def \n",
" Class2_def\t\n",
" Class3_def \n",
" Class4_def\t\n",
" Class5_def \n",
" Class6_def\n",
" private\n",
" zip_files\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "xSztH-mf-6hY"
},
"source": [
"Each data directory contains training images corresponding to one of 6 types of defects."
]
},
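{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "added_data_check_md"
},
"source": [
"The optional snippet below (added for illustration; it is not part of the original workflow) counts the PNG images in each public class directory, assuming the download script above completed and left the data under `./data/raw_images/public`."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "added_data_check_code",
"colab": {}
},
"source": [
"# Optional sanity check (illustrative): count the PNG images in each public\n",
"# class directory. Assumes the download script populated ./data/raw_images/public.\n",
"import glob\n",
"import os\n",
"\n",
"public_dir = './data/raw_images/public'\n",
"for class_dir in sorted(os.listdir(public_dir)):\n",
"    n_images = len(glob.glob(os.path.join(public_dir, class_dir, '*.png')))\n",
"    print('{:<12s} {:>4d} images'.format(class_dir, n_images))"
],
"execution_count": 0,
"outputs": []
},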
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "RL8d9IwzmTcV"
},
"source": [
"## Model download from NVIDIA NGC model repository\n",
"\n",
"NVIDIA provides pretrained UNet models along with many other deep learning models such as ResNet, BERT, Transformer, SSD... at https://ngc.nvidia.com/catalog/models. Here, we will download and unzip pretrained UNet models corresponding to the 10 classes of the DAGM 2007 defects detection dataset. "
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "wNA8uFflu7gO",
"colab": {}
},
"source": [
"%%bash\n",
"rm unet_model.zip\n",
"wget -nc -q --show-progress -O unet_model.zip \\\n",
"\"https://api.ngc.nvidia.com/v2/models/nvidia/unetindustrial_for_tensorflow_32/versions/1/zip\"\n",
"unzip -o ./unet_model.zip"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "i6ADZZfGtAvP"
},
"source": [
"Upon completion of the download, the following model directories should exist, containing pre-trained models corresponding to the 10 classes of the DAGM 2007 competition data set."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "jtqhp3X5tAvS",
"outputId": "6252bbeb-2087-46b4-c641-487d82c979a8",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"!ls JoC_UNET_Industrial_FP32_TF_20190522"
],
"execution_count": 10,
"outputs": [
{
"output_type": "stream",
"text": [
"Class+1 Class+2 Class+4 Class+6 Class+8\n",
"Class+10 Class+3 Class+5 Class+7 Class+9\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "dt6oArfSmTc5"
},
"source": [
"## Inference with Native TensorFlow\n",
"\n",
"We will now launch an interactive sesssion to verify the correctness of the pretrained models, where you can load new test images. First, we load some required libraries and define some helper functions to load the pretrained UNet models."
]
},
{
"cell_type": "code",
"metadata": {
"id": "ugdOkilQ-Gak",
"colab_type": "code",
"outputId": "582cb668-7d3f-4c6c-dac2-f775547ed837",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 182
}
},
"source": [
"!pip install ../dllogger "
],
"execution_count": 11,
"outputs": [
{
"output_type": "stream",
"text": [
"Processing /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/dllogger\n",
"Building wheels for collected packages: DLLogger\n",
" Building wheel for DLLogger (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for DLLogger: filename=DLLogger-0.3.1-cp36-none-any.whl size=9884 sha256=d55727ea0a3d128257a57d137b278ace7f076fc07e128cb49d07bbe8c4a97b5c\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-l4bzc15k/wheels/23/a4/72/2606d992c53ecdd7969c79ed3fb0c23dacdbdb438a8c17999a\n",
"Successfully built DLLogger\n",
"Installing collected packages: DLLogger\n",
"Successfully installed DLLogger-0.3.1\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "7-nz4ZbQ4r_R",
"colab_type": "code",
"colab": {}
},
"source": [
"import dllogger\n",
"from dllogger.logger import LOGGER"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "6NktI1GUtAvb",
"outputId": "0f3425b6-7de6-43c1-ef3d-49f5deff873c",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 146
}
},
"source": [
"try:\n",
" __import__(\"horovod\")\n",
"except ImportError:\n",
" os.system(\"pip install horovod\")\n",
"\n",
"import horovod.tensorflow\n",
"import sys\n",
"\n",
"sys.path.insert(0,'/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial')\n",
"from model.unet import UNet_v1"
],
"execution_count": 12,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n",
"\n",
"WARNING:tensorflow:From /content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/utils/hooks/profiler_hook.py:35: The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead.\n",
"\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "JUN6_p2ATRUn",
"colab_type": "code",
"colab": {}
},
"source": [
"import numpy as np\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Ct-izSTsv04V"
},
"source": [
"We will now load and inspect one defect image from Class 1."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "EIOhZBuptAvu",
"outputId": "73203c8b-595e-4921-b20e-2289b8ad5360",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 595
}
},
"source": [
"img = mpimg.imread('./data/raw_images/public/Class1_def/1.png')\n",
"\n",
"plt.figure(figsize = (10,10));\n",
"plt.imshow(img, cmap='gray');"
],
"execution_count": 14,
"outputs": [
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAkcAAAJCCAYAAADKjmNEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9aYyc55Xf+6t9r+pau6r3fWV3c2uy\nSVMLLMqmrbFnLNmJbMeYARIYGCDABAgQ3EG+MAFugiAfBkiCJEgy+TAYzNgaOwM5M7IkS7IkihTX\nJtnd7H3v6uraumvft/tB85xIwWRBML7XF+gD6AMpdle97/s85znnf/7//6tptVqcxmmcxmmcxmmc\nxmmcxmeh/f/6C5zGaZzGaZzGaZzGafw6xWlxdBqncRqncRqncRqn8bk4LY5O4zRO4zRO4zRO4zQ+\nF6fF0WmcxmmcxmmcxmmcxufitDg6jdM4jdM4jdM4jdP4XJwWR6dxGqdxGqdxGqdxGp+LX0lxpNFo\nbmg0mjWNRrOp0Wj+r1/FZ5zGaZzGaZzGaZzGafwqQvM37XOk0Wh0wDrwMhAGHgDfbbVay3+jH3Qa\np3Eap3Eap3Eap/EriF8FcnQJ2Gy1WtutVqsK/Aj4zV/B55zGaZzGaZzGaZzGafyNh/5X8Ds7gYPP\n/TkMXP6f/YDBYGi5XC6q1Sp+vx+z2Uyj0SASidBsNjEYDAAYjUYcDgdtbW0kEgkKhQLFYhGbzYbB\nYMBut6PRaEgmkzSbTfk9JpMJn88HQD6f5+TkBL1eT71ep9FoYLPZAMhkMvj9fur1OrlcDgC73U61\nWsVoNKLXf3a7otEoer0eq9VKq9WS31GtVimVSvKdc7kcXq8Xg8HAyckJOp0Ol8sFwMnJCX6/n3K5\nTK1Wo1QqYTQa0el0GI1GqtUqGo0GrVZLrVbDYDBgsViwWCwkk0kAbDYbJycn2Gw2dDodjUaDZrOJ\nVqulXC7TarUwGAyUy2U8Hg/FYpFarQaAQgxbrRYOh4N6vU6z2USn05HP5zGbzXLfq9Wq/L9CoYDT\n6SSXy2G1WgFoNBrY7XYymQx6vZ5ms0mj0cBisaDVaqlUKuh0Omq1GvV6HavVSqVSwWq1otFo5Heo\n+1wul3E6nWSzWSwWC61Wi1qtJve/XC7jcDjkGRmNRgC0Wi31eh29Xk8ul0On06HT6dBoNNhsNiqV\nCtVqFa1Wi06no1qtyjWqqNfrAHLv9Xq9fH/1PS0Wi9wnk8lEuVzGZDLJtajvUqvVaLVa1Ot1tFot\nJpMJnU5HLpfDZDLRbDapVqt4PB5yuRx6vV6ej0ajwWw2A1Cr1dDpdPKMbDYb5XIZg8FAqVRCq9Vi\nMBjQarU0Gg20Wq1cS71el2fcaDQoFovo9Xqq1Somk0nWXCaTwWg0otVq0Wq1ci2FQgG3200ul6PR\naNDW1katVqNarWKxWCgUChgMBjQaDc1mE6PRSLFYpNls4nA4ACgWi7KPKpUKBoOBer2O3W6nUCig\n0Wgol8totVosFotck16vl3Wq9nOtVqPRaFCv1+X75/N5tFotzWYTAKfTSbVapVgsotFoZN04nU4K\nhYKs5UajgUajQaPRyNqrVCro9XoMBgOFQoG2tjaazSalUkn2i7pn6lmpZ6GeabPZpFarYTKZMJvN\n6HQ6yuUylUpF1qv6t+r6dTqdXEO1WlV5Uda0WgcqV6k/NxoNyQXqHqpnbjKZsFqtlEolcrkcWq0W\nm80m9yOfz6PX6zEajZKzUqmU5LVWq4VWq6VUKuF0OmWPOhwOuR9qHRSLRXlmRqMRo9FIJpOh0WhQ\nq9Xwer0AsmdVXlDPJZPJYLFYaDQatFotue+pVErWo8vlolQqyf3RarWSg8rlMlar9Qv3z2QyybrW\n6/VoNBq5BqfT+YX9DlCpVDCbzVQqFVmDTqcTk8lEvV4nm83SarXQ6XTyHAqFguw39ZlqvajcA8jn\nqzxvMpkoFAqy19WaVfe0Uqlgt9sxGAwYDAbS6bSsM71eL+uvWq2i1+vlO6qoVqvodLovrDO73U6x\nWJQ8VCqVZO9+Pr+qfKrX68nn87RaLfR6PVqtVvJzqVTCZDLRaDTwer1ks1n53RqNBovFAiCfoc4j\ntXdKpZJ8nl6vx+FwkEql5Ps0Gg3JH+l0WnLB5/emyo/qHtTrdQwGg6yBz+fQQqGAw+GQe6Zyisvl\nIhKJJFutlp//Ln4VxdH/Vmg0mh8CP4TPCpDvfve7jI6O8uqrr/Kv/tW/IpvN8vTpUzweD1euXAFA\np9Pxgx/8gE8++YT5+Xmy2Sybm5vY7Xa+8Y1v0N3dzb/9t/+W9vZ29Ho9vb29uFwu+vv7icfjALzx\nxhtMT0/jdDpZX19ncnKS4eFh/vk//+d8//vfp16vMz8/Tz6fp6enh0KhQCQS4etf/zq/+MUvAOju\n7uatt97i/Pnz6HQ6/uE//Ie8+eabrK2tsbS0xJUrV8jlcgSDQSqVCgsLCzz//PNEIhEuXrwI/Lek\nsrKygl6vp6+vj3Q6zaVLl6jVamxvb1OpVHjw4AGXL19mdnaWVqvF/fv3uXDhAgBbW1tks1k5iOLx\nOL/927/N/v4+b7zxBoeHhwwODmI2m7l8+TJ/9Ed/xMDAAMViURav2+1Gp9Oxvb2NXq/n6OiInp4e\nKXharRY+n4+/+Iu/YGRkhGAwyJUrV/jjP/5jAoEAAB6Ph/39fXw+H1tbW3i9XjY2NnjttddYWVmh\n2WySz+fJ5/N4PB7m5ub44IMPmJ2dJRwOA8hGGxsb48MPP8ThcBCNRmk2m5TLZcxmMzMzM8TjcSkW\ns9ksWq2W6elpABYWFqjVaiSTSV544QVOTk4ol8t0dHSQSCRwOBxEIhFeeOEFFhYW6OrqYnNzk2Aw\nyO7uLgAXL17k5z//OQ6Hg5mZGarVKul0Wg5Cj8eD1+vl008/5Stf+QqffPIJly5dkkMKPkvUoVAI\ni8VCLpfD6XTy+PFjXn75ZR4+fIjVasVkMnH79m2uXbtGMBjk3r17fPnLX5b7YbPZaDab7Ozs0NfX\nJ4fEyMgI2WyWeDyOwWDg4OCAa9euYTQa2dvbw+v1sr+/D4DVauXw8BCHw4HH45Em4t69exSLRWZn\nZymVSjx58oRAIIDT6aRcLtPd3c3h4SEA2WyWM2fOEIlEcLlc9PT0sL6+zsDAAJFIhLa2NsrlMuVy\nmXq9zrlz54jFYuzt7Umy7uzsZHp6mkqlwu7uLvF4HLfbzdzcHD/96U9xuVwsLi4yOTnJjRs3ePDg\nAbFYjI2NDUZHRwG4evUqy8vLOBwO9vf3SSQSnDlzhmq1yvb2NuVymdHRUaLRKK+++io/+tGPGB4e\n5vHjx5w9exaAjo4Obt++zcTEBHq9ns3NTQYHB3n48CFTU1PUajUcDgetVovj42Oy2SwOh4OjoyPZ\nLwsLC/zmb/4mPT09LCwsUC6XqVar5HI
5ZmdnKRQKfPDBB/h8PgYHBzl79izNZpP33nuPfD4PfFYg\n9Pf3EwwGOTk5YXt7G5fLxejoKCcnJ2xsbGC327Hb7QQCAUwmE3/5l3/JlStXCAaDwGeH+vLyMkaj\nkaGhITnQk8kkW1tbvPzyy7S1tREKhchms3zwwQc4HA5MJpMUNkajkcPDQ86cOUNPTw+7u7s8ePAA\nh8OB1+tFo9FIATMwMAB8dsik02mKxSIAXq+Xvr4+Hj16RLlc5vj4mOnpaYLBIOl0mkgkwvHxMUND\nQ0xPT7O8vMzu7i75fJ5QKCS/e2VlhZOTE6rVKr29vfT09GC1Wnn//fexWq0Eg0H6+/vZ3Nzk6OgI\nvV5Pe3u7fK/Dw0NOTk7o7u6W4qfZbHLv3j36+vpIJpM4HA4ODg4wm83Mzs7i9Xo5PDxkZ2cHAJfL\nJYdyqVQin8/z3HPPYbfbSafT5PN5nj59SrPZZHR0VBrBjY0NOXDb2tpob2/H6XTy9OlTuRZ1HfF4\nnGvXrpFIJMjn8
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
}
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Z9zzrzavxLUR"
},
"source": [
"As we can see in this figure, there exists a defective area in the top left corner. We will now load the model and carry out inference on the normalized test image."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "iKGu4mpztAv8",
"colab": {}
},
"source": [
"# Image preprocessing\n",
"img = np.expand_dims(img, axis=2)\n",
"img = np.expand_dims(img, axis=0)\n",
"img = (img-0.5)/0.5"
],
"execution_count": 0,
"outputs": []
},
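{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "added_preproc_check_md"
},
"source": [
"As a quick, optional check (added here for clarity), we can confirm that the preprocessed tensor has the NHWC shape and roughly the [-1, 1] value range the network expects."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "added_preproc_check_code",
"colab": {}
},
"source": [
"# Optional check of the preprocessed input: the shape should be (1, 512, 512, 1)\n",
"# and the values should lie in [-1, 1] after the (img - 0.5) / 0.5 normalization.\n",
"print('shape:', img.shape)\n",
"print('min: %.3f, max: %.3f' % (img.min(), img.max()))"
],
"execution_count": 0,
"outputs": []
},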
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "XwsDthGwtAwB",
"colab": {}
},
"source": [
"config = tf.ConfigProto()\n",
"config.gpu_options.allow_growth = True\n",
"config.allow_soft_placement = True\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" with tf.Session(config=config) as sess:\n",
" network = UNet_v1(\n",
" model_name=\"UNet_v1\",\n",
" input_format='NHWC',\n",
" compute_format='NHWC',\n",
" n_output_channels=1,\n",
" unet_variant='tinyUNet',\n",
" weight_init_method='he_uniform',\n",
" activation_fn='relu'\n",
" )\n",
" \n",
" tf_input = tf.placeholder(tf.float32, [None, 512, 512, 1], name='input')\n",
" \n",
" outputs, logits = network.build_model(tf_input)\n",
" saver = tf.train.Saver()\n",
"\n",
" # Restore variables from disk.\n",
" saver.restore(sess, \"JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500\")\n",
" \n",
" \n",
" output = sess.run([outputs, logits], feed_dict={tf_input: img})\n",
" \n"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "2vGBGRBBtAwG",
"outputId": "45f678ee-cd97-433c-8f06-ec34454e6d41",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 613
}
},
"source": [
"# Print out model predicted mask\n",
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(output[0]), cmap='gray')"
],
"execution_count": 21,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7f7e5809e4e0>"
]
},
"metadata": {
"tags": []
},
"execution_count": 21
},
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAkcAAAJCCAYAAADKjmNEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAcB0lEQVR4nO3db6xld13v8c+3czrTUkqnhTJCp9Le\nWC8h5DKQhlsDDxCjATWWB4RgvKESYmPiTTB6I+ATI7k+0AeCxBu8VYjV+I+o2AbJvTSF67UPQFqL\n/CtcRmxlxtJpoZ22lP6ZM7/74KypX4ahc2Zmn1n7nPN6JSdnr7XX7P09s+DMu2utvXeNMQIAwJpz\n5h4AAGCZiCMAgEYcAQA04ggAoBFHAACNOAIAaDYkjqrqdVX15araX1Xv3IjnAADYCLXo9zmqqh1J\n/l+SH01yIMmnk/z0GOOLC30iAIANsBFHjl6ZZP8Y46tjjCeT/HmSazfgeQAAFm5lAx7zsiRfa8sH\nkvznZ/oDVeVtugGAs+2BMcalx6/ciDhal6q6Psn1cz0/ALDt3XOilRsRRweTXN6W907rvsMY44Yk\nNySOHAEAy2Mjrjn6dJKrqurKqtqZ5M1Jbt6A5wEAWLiFHzkaYxypqv+a5H8n2ZHkg2OMLyz6eQAA\nNsLCX8p/WkM4rQYAnH13jDGuPn6ld8gGAGjEEQBAI44AABpxBADQiCMAgEYcAQA04ggAoBFHAACN\nOAIAaMQRAEAjjgAAGnEEANCIIwCARhwBADTiCACgEUcAAI04AgBoxBEAQCOOAAAacQQA0IgjAIBG\nHAEANOIIAKARRwAAjTgCAGjEEQBAI44AABpxBADQiCMAgEYcPYO/+7u/yxhj3V/vete75h4ZADhD\nNcaYe4ZU1fxDTF71qlfltttuW8hjnX/++Xn88ccX8lgAwMLdMca4+viVjhxNPvnJT2aMsbAwSpJv\nf/vbGWPkzjvvXNhjAgAba9sfOTr33HPz5JNPnrXne/nLX54k+cxnPnPWnhMAOCFHjo534403ntUw\nSpI777wzd95559PXKV1yySVn9fkBgGe2MvcAczl06FAuvfTSucfIN77xjSTJGCMrKys5evTozBMB\nwPa2LY8c/fzP//xShFFXVVldXc0YI6urq1lZ2bbdCgCz2nbXHJ3ta4zO1BNPPJHzzjtv7jEAYCty\nzREAwMlsuzjaTEeNkmTXrl1PX7wNAGy8bXNhy1e/+tW5RzhjYwyn2QBgg22ba46W4edctEsvvTQP\nPPDA3GMAwGa1fa852ophlCT333+/l/4DwIJtizjayqpqy8YfAMxhy8fRdgmHMUZ27Ngx9xgAsOlt\n+TjaTo4cOZLf+73fm3sMANjUtvQF2aurqznnnO3Zf1U19wgAsOy23wXZ2zWMkrXTbO9+97vnHgMA\nNp0te+Ro165defzxxxf9sJuSo0gAcELb68jRI488MvcIS2OMkZtuumnuMQBgU9iyR46W4edaRrt2\n7dp0H6ECABtkex054sSeeOKJrK6uzj0GACytLRlH+/btm3uEpXbOOec8/WG2H/zgB+ceBwCWypY8\nrfbkk0/m3HPPXeRDbnlXXnll7r777rnHAICzafucVhNGp+5f/uVffE4bAGSLxhEAwOkSRzzt2IfY\nvu9975t7FACYzZa85mgZfqbN7sknn8yuXbvmHgMANtL2ueaIM7dz506RCcC2JI54RmOMPPvZz557\nDAA4a8QRJ/XII48IJAC2jS0XRysrK3OPsCU98sgj3iIBgG1hy8XRi1/84rlH2LJ8JhsA28GWi6N3\nvOMdc4+wpblIG4CtbsvF0cte9rK5R9jyBBIAW9mWi6MLLrhg7hG2BYEEwFa15eLo8ccfn3uEbePw\n4cM5fPjw3GMAwEJtuZd27dixY+4Rto3nPOc5SZILL7wwjzzyyMzTAMBiOHLEGXv44YfnHgEAFmbL\nxdE999wz9wjb0utf//q5RwCAhdhycfT7v//7c4+wLX30ox+dewQAWIhahlcdVdXChti5c2eeeOKJ\nRT0cp+Db3/52nvWsZ809BgCs1x1jjKuPX7nljhwBAJyJLRdHPuJiPueff35+8Ad/cO4xAOCMbLk4\nYl5f/vKX5x4BAM6IOGLhjh49OvcIAHDaxBELV1XZt2/f3GMAwGnZcq9WS9aOXFTVIh+S02AfALDk\nts+r1Q4cODD3CCT527/927lHAIBTtiWPHD3vec/L/fffv8iH5DQ5egTAEts+R44eeOCBuUdgcuTI\nkblHAIBTsiXjiOWxY8eOuUcAgFOyZePoqaeemnsEJp/73OfmHgEA1m3LxtHFF1889whMXvrSl849\nAgCs25aNo29961tzj0Dj9BoAm8WWjaPEOzUvk8cee2zuEQBgXbZ0HO3cuXPuEZjYFwBsFls6jlZX\nV+cegea3fuu35h4BAE5qS74JZPfQQw/loosu2qiH5xR5U0gAlsj2eRNIAIDTteXjaPfu3XOPQPPh\nD3947hEA4Blt+dNqSbIMPyP/zqk1AJbE9j2t9vnPf37uEWjOP//8uUcAgO9pWxw5Shw9Wiarq6tZ\nWVmZewwA2L5Hjlgu3i0bgGW2beLonHPOyTnnbJsfd+k5cgTAsto2tTDGcGptiRw+fHjuEQDghLZN\nHB2zd+/euUcgybOe9ay5RwCAEzppHFXVB6vqUFV9vq27pKpuqaqvTN8vntZXVb2vqvZX1Wer6hUb\nOfzpOHjw4NwjAABLbD1Hjv4wyeuOW/fOJLeOMa5Kcuu0nCSvT3LV9HV9kvcvZszFeu973zv3CCTZ\nt2/f3CMAwHdZ10v5q+qKJB8ZY7x0Wv5ykteMMe6tqhck+T9jjP9YVf9zuv1nx293ksc/6xcDuf5o\nfgcOHMjll18+9xgAbF8LfSn/nhY8X0+yZ7p9WZKvte0OTOuWzs033zz3CNveC1/4wrlHAIDvcsav\npx5jjNM58lNV12ft1Nssrr32WkePZuatFQBYRqf7r9N90+m0TN8PTesPJunnSfZO677LGOOGMcbV\nJzqcdbY8//nPn+upAYAldbpxdHOS66bb1yW5qa1/y/SqtWuSHD7Z9UZzuv/++3PPPffMPQYAsERO\nekF2Vf1ZktckeV6S+5L8WpK/SfKhJN+f5J4kbxpjfLPWPm79d7P26rbHkrx1jHH7SYeY4YLs7ujR\noz4pfib+3gGY0QkvyN42Hzz7TMTRfPy9AzAjHzz7vbgwGAA4RhVMHME4+1ZXV+ceAQC+izhqBNLZ\nddddd809AgB8F3F0HIF09vzcz/3c3CMAwHdxQfb3sAx/L1vdysqKU2sAzMkF2aeiqvLYY4/NPcaW\nJowAWEbi6BlccMEF+dmf/dm5x9iSjh49OvcIAHBCTqut0zL8PW0le/bsyaFDh06+IQBsHKfVzkRV\n5bbbbsttt9029yhbgjACYFk5cnSaluH
vbbNyITYAS8KRo0WqqjznOc+Ze4xN59Of/rQwAmCpOXK0\nID6f7eTGGD6qBYBl4sjRRjrnnHP8w38S/n4A2Az8a7VAY4xUVXbs2OGapOOce+65c48AAOsijjbA\n0aNHnz6S5Pqa5IUvfGGOHDky9xgAsC7iaAONMbKyspKqysMPPzz3OLO4+OKLc++99849BgCsmzgC\nAGhW5h5gu7jooouSJE8++WSS7XENjlfvAbAZOXJ0lu3cuTM7d+5MVeXRRx+de5wN8eCDDwojADYt\ncTSjCy+8MFWVK6+8cu5RFuLYez1dcsklc48CAKdNHC2Bu+++O1WVqsrHPvaxucc5LTt27MiOHTvm\nHgMAzph3yF5Su3fvzoMPPjj3GN/T0aNHs7KydsnaMvxvCABOg3fI3kweeuihp48m/eu//uvc4zzt\nK1/5yne80aUwAmCr8Wq1TeBFL3pRkrVTV48++mjOO++8s/r8q6ur2blzZ44ePXpWnxcA5uDI0Say\nurqa888//+kjS
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
}
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "BPs_nyzcyAxo"
},
"source": [
"As expected, the model points out the correct defective area in this image. Please feel free to try out other defective images for Class 1 within `./data/raw_images/public/Class1_def/`, or load the model and test data for other classes from 1 to 10. "
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "HRQiqCSMAOZS",
"outputId": "3aea8169-e90f-4716-827a-e2c5eb38aec2",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 326
}
},
"source": [
"!ls ./data/raw_images/public/Class1_def/"
],
"execution_count": 22,
"outputs": [
{
"output_type": "stream",
"text": [
"100.png 116.png 131.png 147.png 26.png 41.png 57.png 72.png 88.png\n",
"101.png 117.png 132.png 148.png 27.png 42.png 58.png 73.png 89.png\n",
"102.png 118.png 133.png 149.png 28.png 43.png 59.png 74.png 8.png\n",
"103.png 119.png 134.png 14.png 29.png 44.png 5.png 75.png 90.png\n",
"104.png 11.png 135.png 150.png 2.png 45.png 60.png 76.png 91.png\n",
"105.png 120.png 136.png 15.png 30.png 46.png 61.png 77.png 92.png\n",
"106.png 121.png 137.png 16.png 31.png 47.png 62.png 78.png 93.png\n",
"107.png 122.png 138.png 17.png 32.png 48.png 63.png 79.png 94.png\n",
"108.png 123.png 139.png 18.png 33.png 49.png 64.png 7.png 95.png\n",
"109.png 124.png 13.png 19.png 34.png 4.png 65.png 80.png 96.png\n",
"10.png\t 125.png 140.png 1.png 35.png 50.png 66.png 81.png 97.png\n",
"110.png 126.png 141.png 20.png 36.png 51.png 67.png 82.png 98.png\n",
"111.png 127.png 142.png 21.png 37.png 52.png 68.png 83.png 99.png\n",
"112.png 128.png 143.png 22.png 38.png 53.png 69.png 84.png 9.png\n",
"113.png 129.png 144.png 23.png 39.png 54.png 6.png 85.png labels.txt\n",
"114.png 12.png 145.png 24.png 3.png 55.png 70.png 86.png\n",
"115.png 130.png 146.png 25.png 40.png 56.png 71.png 87.png\n"
],
"name": "stdout"
}
]
},
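{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "added_threshold_md"
},
"source": [
"The network output can be interpreted as a per-pixel defect probability map. As an illustrative post-processing step (not part of the original repository code), we can threshold it into a binary defect mask and report the fraction of pixels flagged as defective; the 0.5 threshold below is an arbitrary example value."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "added_threshold_code",
"colab": {}
},
"source": [
"# Illustrative post-processing: threshold the probability map from the native\n",
"# TensorFlow run above (variable `output`) into a binary defect mask.\n",
"# The 0.5 threshold is an arbitrary example value, not a tuned setting.\n",
"prob_map = np.squeeze(output[0])\n",
"binary_mask = (prob_map > 0.5).astype(np.uint8)\n",
"print('Defective pixels: %.2f%% of the image' % (100.0 * binary_mask.mean()))\n",
"\n",
"plt.figure(figsize=(10, 10))\n",
"plt.imshow(binary_mask, cmap='gray');"
],
"execution_count": 0,
"outputs": []
},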
{
"cell_type": "markdown",
"metadata": {
"id": "BJhD5L7XTRVP",
"colab_type": "text"
},
"source": [
"## Export UNet models to TF-Hub\n",
"\n",
"We will now export the 10 pretrained UNet models into TensorFlow hub modules."
]
},
{
"cell_type": "code",
"metadata": {
"id": "PquUNH7LZm0x",
"colab_type": "code",
"colab": {}
},
"source": [
"def module_fn(is_training=False):\n",
" \"\"\"A module_fn for use with hub.create_module_spec().\n",
" Args:\n",
" is_training: a boolean meant to control whether batch norm, dropout etc. \n",
" are built in training or inference mode for this graph version (TODO)\n",
"\n",
" \"\"\"\n",
" # Set up the module input\n",
" with tf.name_scope('hub_input'):\n",
" tf_input = tf.placeholder(tf.float32, [None, 512, 512, 1], name='input')\n",
" \n",
" # Build the net.\n",
" network = UNet_v1(\n",
" model_name=\"UNet_v1\",\n",
" input_format='NHWC',\n",
" compute_format='NHWC',\n",
" n_output_channels=1,\n",
" unet_variant='tinyUNet',\n",
" weight_init_method='he_uniform',\n",
" activation_fn='relu'\n",
" )\n",
" outputs, logits = network.build_model(tf_input)\n",
" \n",
" \n",
" # Add the default signature.\n",
" hub.add_signature('default', dict(images=tf_input), dict(default=outputs))\n",
" "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "yuK68Xcnr3Jw",
"colab_type": "code",
"colab": {}
},
"source": [
"import tensorflow_hub as hub\n",
"\n",
"tags_and_args = [\n",
" # The default graph is built with batch_norm, dropout etc. in inference\n",
" # mode. This graph version is good for inference, not training.\n",
" ([], {\n",
" 'is_training': False\n",
" }),\n",
" # A separate 'train' graph builds batch_norm, dropout etc. in training\n",
" # mode.\n",
" (['train'], {\n",
" 'is_training': True # TODO\n",
" }),\n",
"]\n",
"drop_collections = [\n",
" 'moving_vars', tf.GraphKeys.GLOBAL_STEP,\n",
" tf.GraphKeys.MOVING_AVERAGE_VARIABLES\n",
"]\n",
"spec = hub.create_module_spec(module_fn, tags_and_args, drop_collections)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "dedHyuLJsSaZ",
"colab_type": "code",
"outputId": "2a47d292-f679-41da-a604-d019397d87d4",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 362
}
},
"source": [
"for class_id in range (1, 11): \n",
" with tf.Graph().as_default():\n",
" module = hub.Module(spec)\n",
" variables_to_restore = module.variable_map\n",
" init_fn = tf.contrib.framework.assign_from_checkpoint_fn(model_path=\"JoC_UNET_Industrial_FP32_TF_20190522/Class+%d/model.ckpt-2500\"%class_id,\n",
" var_list=variables_to_restore)\n",
" with tf.Session() as session:\n",
" init_fn(session)\n",
" module.export(\"./NVIDIA/Unet/Class_%d/\"%class_id, session=session)\n"
],
"execution_count": 25,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+1/model.ckpt-2500\n",
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+2/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+2/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+3/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+3/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+4/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+4/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+5/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+5/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+6/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+6/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+7/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+7/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+8/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+8/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+9/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+9/model.ckpt-2500\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+10/model.ckpt-2500\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Restoring parameters from JoC_UNET_Industrial_FP32_TF_20190522/Class+10/model.ckpt-2500\n"
],
"name": "stderr"
}
]
},
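{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "added_export_check_md"
},
"source": [
"To confirm that the export loop produced one module per class, we can list the export directories. Each exported TF1-style TF-Hub module directory typically contains a `tfhub_module.pb`, a `saved_model.pb` and a `variables/` folder."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "added_export_check_code",
"colab": {}
},
"source": [
"# List the exported TF-Hub module directories to confirm the export succeeded.\n",
"!ls ./NVIDIA/Unet/\n",
"!ls ./NVIDIA/Unet/Class_1/"
],
"execution_count": 0,
"outputs": []
},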
{
"cell_type": "markdown",
"metadata": {
"id": "QjOVDdyZgbT8",
"colab_type": "text"
},
"source": [
"## Save TF-Hub modules to Google Drive (Optional)\n",
"\n",
"In this step we will persist the created TF-Hub modules to Google Drive. Execute the below cell to authorize Colab to access your Google Drive content, then copy the created UNet TF-Hub modules to Google Drive."
]
},
{
"cell_type": "code",
"metadata": {
"id": "zga0RLrjgg4w",
"colab_type": "code",
"colab": {}
},
"source": [
"from google.colab import drive\n",
"\n",
"drive.mount('/content/gdrive')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "l8y1092rjAvl",
"colab_type": "code",
"colab": {}
},
"source": [
"!cp -r \"./NVIDIA\" \"/content/gdrive/My Drive/\""
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "VunoLwxYtkdU",
"colab_type": "text"
},
"source": [
"## Inference with TF-Hub modules\n",
"\n",
"Next, we will load and do inference with the created TensorFlow Hub modules. In order to load TF Hub modules, there are several options:\n",
"\n",
"- Load from a local cache or directory\n",
"\n",
"- Load from a remote repository\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "doG457_Ut4qI",
"colab_type": "code",
"colab": {}
},
"source": [
"import tensorflow_hub as hub\n",
"\n",
"# Loading from a local cache/directory\n",
"#module = hub.Module(\"NVIDIA/Unet/Class_1\", trainable=False)\n",
"\n",
"# Loading from a remote repository. The 10 NVIDIA UNet TF-Hub modules are available at\n",
"# https://tfhub.dev/nvidia/unet/industrial/class_1/1 (similarly for class 2, 3 ...) and\n",
"# https://developer.download.nvidia.com/compute/redist/Binary_Files/unet_tfhub_modules/class_{1..10}\n",
"\n",
"module = hub.Module(\"https://tfhub.dev/nvidia/unet/industrial/class_1/1\") # or class_2, class_3 etc...\n",
"#module = hub.Module(\"https://developer.download.nvidia.com/compute/redist/Binary_Files/unet_tfhub_modules/class_1/1.tar.gz\") # or class_2, class_3 etc..."
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "pw24AUjct-p7",
"colab_type": "code",
"outputId": "863180aa-2f82-444d-e3c7-60b0fc7dc92c",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(module.get_signature_names())"
],
"execution_count": 27,
"outputs": [
{
"output_type": "stream",
"text": [
"['default']\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "nKa0L82puB6q",
"colab_type": "code",
"outputId": "2ece4954-698a-452d-ee25-a33c5784a61e",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(module.get_input_info_dict()) # When no signature is given, considers it as 'default'\n"
],
"execution_count": 28,
"outputs": [
{
"output_type": "stream",
"text": [
"{'images': <hub.ParsedTensorInfo shape=(?, 512, 512, 1) dtype=float32 is_sparse=False>}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "huUOMsfJuRd9",
"colab_type": "code",
"outputId": "01af0e66-b82e-457a-bb6f-323726474a0b",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(module.get_output_info_dict())"
],
"execution_count": 29,
"outputs": [
{
"output_type": "stream",
"text": [
"{'default': <hub.ParsedTensorInfo shape=(?, 512, 512, 1) dtype=float32 is_sparse=False>}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "MK_wZTa_udHk",
"colab_type": "code",
"colab": {}
},
"source": [
"# Load a test image\n",
"img = mpimg.imread('./data/raw_images/public/Class1_def/1.png')\n",
"\n",
"# Image preprocessing\n",
"img = np.expand_dims(img, axis=2)\n",
"img = np.expand_dims(img, axis=0)\n",
"img = (img-0.5)/0.5"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "oXgfSMuRvoeQ",
"colab_type": "code",
"outputId": "28b9a33e-c0f0-4a99-ee6f-514f1f4f3755",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
" with tf.Session() as sess:\n",
" output = module(img)\n",
" sess.run([tf.global_variables_initializer(), tf.tables_initializer()])\n",
" pred = sess.run(output)\n",
" "
],
"execution_count": 31,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
],
"name": "stderr"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "qi9THAABzEHc",
"colab_type": "code",
"outputId": "ea89bc43-0491-4093-b7f3-e918e9146dae",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 613
}
},
"source": [
"# Print out model predicted mask\n",
"plt.figure(figsize = (10,10))\n",
"plt.imshow(np.squeeze(pred), cmap='gray')"
],
"execution_count": 32,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7f7e56ce10f0>"
]
},
"metadata": {
"tags": []
},
"execution_count": 32
},
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAkcAAAJCCAYAAADKjmNEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAcB0lEQVR4nO3db6xld13v8c+3czrTUkqnhTJCp9Le\nWC8h5DKQhlsDDxCjATWWB4RgvKESYmPiTTB6I+ATI7k+0AeCxBu8VYjV+I+o2AbJvTSF67UPQFqL\n/CtcRmxlxtJpoZ22lP6ZM7/74KypX4ahc2Zmn1n7nPN6JSdnr7XX7P09s+DMu2utvXeNMQIAwJpz\n5h4AAGCZiCMAgEYcAQA04ggAoBFHAACNOAIAaDYkjqrqdVX15araX1Xv3IjnAADYCLXo9zmqqh1J\n/l+SH01yIMmnk/z0GOOLC30iAIANsBFHjl6ZZP8Y46tjjCeT/HmSazfgeQAAFm5lAx7zsiRfa8sH\nkvznZ/oDVeVtugGAs+2BMcalx6/ciDhal6q6Psn1cz0/ALDt3XOilRsRRweTXN6W907rvsMY44Yk\nNySOHAEAy2Mjrjn6dJKrqurKqtqZ5M1Jbt6A5wEAWLiFHzkaYxypqv+a5H8n2ZHkg2OMLyz6eQAA\nNsLCX8p/WkM4rQYAnH13jDGuPn6ld8gGAGjEEQBAI44AABpxBADQiCMAgEYcAQA04ggAoBFHAACN\nOAIAaMQRAEAjjgAAGnEEANCIIwCARhwBADTiCACgEUcAAI04AgBoxBEAQCOOAAAacQQA0IgjAIBG\nHAEANOIIAKARRwAAjTgCAGjEEQBAI44AABpxBADQiCMAgEYcPYO/+7u/yxhj3V/vete75h4ZADhD\nNcaYe4ZU1fxDTF71qlfltttuW8hjnX/++Xn88ccX8lgAwMLdMca4+viVjhxNPvnJT2aMsbAwSpJv\nf/vbGWPkzjvvXNhjAgAba9sfOTr33HPz5JNPnrXne/nLX54k+cxnPnPWnhMAOCFHjo534403ntUw\nSpI777wzd95559PXKV1yySVn9fkBgGe2MvcAczl06FAuvfTSucfIN77xjSTJGCMrKys5evTozBMB\nwPa2LY8c/fzP//xShFFXVVldXc0YI6urq1lZ2bbdCgCz2nbXHJ3ta4zO1BNPPJHzzjtv7jEAYCty\nzREAwMlsuzjaTEeNkmTXrl1PX7wNAGy8bXNhy1e/+tW5RzhjYwyn2QBgg22ba46W4edctEsvvTQP\nPPDA3GMAwGa1fa852ophlCT333+/l/4DwIJtizjayqpqy8YfAMxhy8fRdgmHMUZ27Ngx9xgAsOlt\n+TjaTo4cOZLf+73fm3sMANjUtvQF2aurqznnnO3Zf1U19wgAsOy23wXZ2zWMkrXTbO9+97vnHgMA\nNp0te+Ro165defzxxxf9sJuSo0gAcELb68jRI488MvcIS2OMkZtuumnuMQBgU9iyR46W4edaRrt2\n7dp0H6ECABtkex054sSeeOKJrK6uzj0GACytLRlH+/btm3uEpXbOOec8/WG2H/zgB+ceBwCWypY8\nrfbkk0/m3HPPXeRDbnlXXnll7r777rnHAICzafucVhNGp+5f/uVffE4bAGSLxhEAwOkSRzzt2IfY\nvu9975t7FACYzZa85mgZfqbN7sknn8yuXbvmHgMANtL2ueaIM7dz506RCcC2JI54RmOMPPvZz557\nDAA4a8QRJ/XII48IJAC2jS0XRysrK3OPsCU98sgj3iIBgG1hy8XRi1/84rlH2LJ8JhsA28GWi6N3\nvOMdc4+wpblIG4CtbsvF0cte9rK5R9jyBBIAW9mWi6MLLrhg7hG2BYEEwFa15eLo8ccfn3uEbePw\n4cM5fPjw3GMAwEJtuZd27dixY+4Rto3nPOc5SZILL7wwjzzyyMzTAMBiOHLEGXv44YfnHgEAFmbL\nxdE999wz9wjb0utf//q5RwCAhdhycfT7v//7c4+wLX30ox+dewQAWIhahlcdVdXChti5c2eeeOKJ\nRT0cp+Db3/52nvWsZ809BgCs1x1jjKuPX7nljhwBAJyJLRdHPuJiPueff35+8Ad/cO4xAOCMbLk4\nYl5f/vKX5x4BAM6IOGLhjh49OvcIAHDaxBELV1XZt2/f3GMAwGnZcq9WS9aOXFTVIh+S02AfALDk\nts+r1Q4cODD3CCT527/927lHAIBTtiWPHD3vec/L/fffv8iH5DQ5egTAEts+R44eeOCBuUdgcuTI\nkblHAIBTsiXjiOWxY8eOuUcAgFOyZePoqaeemnsEJp/73OfmHgEA1m3LxtHFF1889whMXvrSl849\nAgCs25aNo29961tzj0Dj9BoAm8WWjaPEOzUvk8cee2zuEQBgXbZ0HO3cuXPuEZjYFwBsFls6jlZX\nV+cegea3fuu35h4BAE5qS74JZPfQQw/loosu2qiH5xR5U0gAlsj2eRNIAIDTteXjaPfu3XOPQPPh\nD3947hEA4Blt+dNqSbIMPyP/zqk1AJbE9j2t9vnPf37uEWjOP//8uUcAgO9pWxw5Shw9Wiarq6tZ\nWVmZewwA2L5Hjlgu3i0bgGW2beLonHPOyTnnbJsfd+k5cgTAsto2tTDGcGptiRw+fHjuEQDghLZN\nHB2zd+/euUcgybOe9ay5RwCAEzppHFXVB6vqUFV9vq27pKpuqaqvTN8vntZXVb2vqvZX1Wer6hUb\nOfzpOHjw4NwjAABLbD1Hjv4wyeuOW/fOJLeOMa5Kcuu0nCSvT3LV9HV9kvcvZszFeu973zv3CCTZ\nt2/f3CMAwHdZ10v5q+qKJB8ZY7x0Wv5ykteMMe6tqhck+T9jjP9YVf9zuv1nx293ksc/6xcDuf5o\nfgcOHMjll18+9xgAbF8LfSn/nhY8X0+yZ7p9WZKvte0OTOuWzs033zz3CNveC1/4wrlHAIDvcsav\npx5jjNM58lNV12ft1Nssrr32WkePZuatFQBYRqf7r9N90+m0TN8PTesPJunnSfZO677LGOOGMcbV\nJzqcdbY8//nPn+upAYAldbpxdHOS66bb1yW5qa1/y/SqtWuSHD7Z9UZzuv/++3PPPffMPQYAsERO\nekF2Vf1ZktckeV6S+5L8WpK/SfKhJN+f5J4kbxpjfLPWPm79d7P26rbHkrx1jHH7SYeY4YLs7ujR\noz4pfib+3gGY0QkvyN42Hzz7TMTRfPy9AzAjHzz7vbgwGAA4RhVMHME4+1ZXV+ceAQC+izhqBNLZ\nddddd809AgB8F3F0HIF09vzcz/3c3CMAwHdxQfb3sAx/L1vdysqKU2sAzMkF2aeiqvLYY4/NPcaW\nJowAWEbi6BlccMEF+dmf/dm5x9iSjh49OvcIAHBCTqut0zL8PW0le/bsyaFDh06+IQBsHKfVzkRV\n5bbbbsttt9029yhbgjACYFk5cnSaluH
vbbNyITYAS8KRo0WqqjznOc+Ze4xN59Of/rQwAmCpOXK0\nID6f7eTGGD6qBYBl4sjRRjrnnHP8w38S/n4A2Az8a7VAY4xUVXbs2OGapOOce+65c48AAOsijjbA\n0aNHnz6S5Pqa5IUvfGGOHDky9xgAsC7iaAONMbKyspKqysMPPzz3OLO4+OKLc++99849BgCsmzgC\nAGhW5h5gu7jooouSJE8++WSS7XENjlfvAbAZOXJ0lu3cuTM7d+5MVeXRRx+de5wN8eCDDwojADYt\ncTSjCy+8MFWVK6+8cu5RFuLYez1dcsklc48CAKdNHC2Bu+++O1WVqsrHPvaxucc5LTt27MiOHTvm\nHgMAzph3yF5Su3fvzoMPPjj3GN/T0aNHs7KydsnaMvxvCABOg3fI3kweeuihp48m/eu//uvc4zzt\nK1/5yne80aUwAmCr8Wq1TeBFL3pRkrVTV48++mjOO++8s/r8q6ur2blzZ44ePXpWnxcA5uDI0Say\nurqa888//+kjS
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
}
}
]
},
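{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "added_consistency_md"
},
"source": [
"As a final consistency check (added for illustration), the sketch below runs the locally exported `Class_1` module on the same preprocessed image and compares its prediction with `pred` obtained above; if the export preserved the checkpoint weights, the two masks should agree to within floating-point tolerance."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "added_consistency_code",
"colab": {}
},
"source": [
"# Consistency check: run the locally exported Class_1 module on the same image\n",
"# and compare its prediction with `pred` from the module loaded above.\n",
"with tf.Graph().as_default():\n",
"    local_module = hub.Module('./NVIDIA/Unet/Class_1/')\n",
"    local_output = local_module(img)\n",
"    with tf.Session() as sess:\n",
"        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])\n",
"        local_pred = sess.run(local_output)\n",
"\n",
"diff = np.max(np.abs(np.squeeze(local_pred) - np.squeeze(pred)))\n",
"print('max abs difference between local and loaded module predictions:', diff)"
],
"execution_count": 0,
"outputs": []
},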
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "g8MxXY5GmTc8"
},
"source": [
"# Conclusion\n",
"\n",
"In this notebook, we have walked through the complete process of creating TF-Hub modules from pretrained UNet-Industrial models, then test the correctness of the created TF-Hub modules.\n",
"## What's next\n",
"Now it's time to try the UNet-Industrial TensorFlow Hub modules on your own data. "
]
},
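{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "added_own_data_md"
},
"source": [
"The sketch below shows one possible way to run the loaded module on an image of your own; the file name `my_image.png` is a placeholder, and the preprocessing mirrors the normalization used earlier in this notebook (the module expects 512x512 single-channel inputs)."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "added_own_data_code",
"colab": {}
},
"source": [
"# Hypothetical example: run the loaded TF-Hub module on your own image.\n",
"# Replace 'my_image.png' with the path to a 512x512 grayscale image; larger or\n",
"# colour images need to be resized / converted to a single channel first.\n",
"my_img = mpimg.imread('my_image.png')\n",
"\n",
"if my_img.ndim == 3:  # drop colour channels if present\n",
"    my_img = my_img.mean(axis=2)\n",
"\n",
"my_img = np.expand_dims(my_img, axis=2)\n",
"my_img = np.expand_dims(my_img, axis=0)\n",
"my_img = (my_img - 0.5) / 0.5  # same normalization as above\n",
"\n",
"with tf.Session() as sess:\n",
"    my_output = module(my_img)\n",
"    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])\n",
"    my_pred = sess.run(my_output)\n",
"\n",
"plt.figure(figsize=(10, 10))\n",
"plt.imshow(np.squeeze(my_pred), cmap='gray');"
],
"execution_count": 0,
"outputs": []
},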
{
"cell_type": "code",
"metadata": {
"id": "u0SS61owr2oO",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": 0,
"outputs": []
}
]
}