DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks/Colab_UNet_Industrial_TF_TFHub_inference_demo.ipynb

{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "Colab_UNet_Industrial_TF_TFHub_inference_demo.ipynb",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Segmentation/UNet_Industrial/notebooks/Colab_UNet_Industrial_TF_TFHub_inference_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "Gwt7z7qdmTbW",
"colab": {}
},
"source": [
"# Copyright 2019 NVIDIA Corporation. All Rights Reserved.\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# http://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License.\n",
"# =============================================================================="
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "i4NKCp2VmTbn"
},
"source": [
"<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n",
"\n",
"# UNet Industrial Inference Demo with TensorFlow Hub"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "fW0OKDzvmTbt"
},
"source": [
"## Overview\n",
"\n",
"\n",
"In this notebook, we demonstrate inference with NVIDIA's pre-trained UNet Industrial defect-detection TensorFlow Hub modules.\n",
"\n",
"NVIDIA pre-trained U-Net models for defect detection are adapted from the original version of the [U-Net model](https://arxiv.org/abs/1505.04597) which is\n",
"a convolutional auto-encoder for 2D image segmentation. U-Net was first introduced by\n",
"Olaf Ronneberger, Philipp Fischer, and Thomas Brox in the paper:\n",
"[U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597).\n",
"\n",
"### Requirements\n",
"1. Before running this notebook, please set the Colab runtime environment to GPU via the menu *Runtime => Change runtime type => GPU*.\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "HVsrGkj4Zn2L",
"outputId": "d593728e-a7b4-49d5-a3d9-397c8d2eede6",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 326
}
},
"source": [
"!nvidia-smi"
],
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": [
"Mon Oct 28 23:17:11 2019 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 430.50 Driver Version: 418.67 CUDA Version: 10.1 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 67C P8 33W / 149W | 0MiB / 11441MiB | 0% Default |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: GPU Memory |\n",
"| GPU PID Type Process name Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "pV3rzgO8-tSK"
},
"source": [
"The code below checks whether a Tensor Core GPU is present. Tensor Cores can accelerate large matrix operations by performing mixed-precision matrix multiply-and-accumulate calculations in a single operation. "
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "Djyvo8mm9poq",
"outputId": "92dfd62c-81b9-41a8-f999-cf570b557a4d",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"%tensorflow_version 1.x \n",
"import tensorflow as tf\n",
"print(tf.__version__) # This notebook runs on TensorFlow 1.x. \n",
"\n",
"\n",
"from tensorflow.python.client import device_lib\n",
"\n",
"def check_tensor_core_gpu_present():\n",
"    \"\"\"Return True if any local GPU has compute capability >= 7.0 (i.e. has Tensor Cores).\"\"\"\n",
"    local_device_protos = device_lib.list_local_devices()\n",
"    for device in local_device_protos:\n",
"        if \"compute capability\" in str(device):\n",
"            compute_capability = float(device.physical_device_desc.split(\"compute capability: \")[-1])\n",
"            if compute_capability >= 7.0:\n",
"                return True\n",
"    return False\n",
"\n",
"tensor_core_gpu = check_tensor_core_gpu_present()\n",
"print(\"Tensor Core GPU Present:\", tensor_core_gpu)"
],
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": [
"1.15.0\n",
"Tensor Core GPU Present: False\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FCEfkBAbbaLI"
},
"source": [
"2. Next, we clone the NVIDIA DeepLearningExamples GitHub repository, which contains UNet_Industrial, and set up the workspace."
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "y3u_VMjXtAto",
"outputId": "2f3c1e0a-233d-4730-fa60-4b99e4f7dd32",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 108
}
},
"source": [
"!git clone https://github.com/NVIDIA/DeepLearningExamples"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"Cloning into 'DeepLearningExamples'...\n",
"remote: Enumerating objects: 4151, done.\u001b[K\n",
"remote: Total 4151 (delta 0), reused 0 (delta 0), pack-reused 4151\u001b[K\n",
"Receiving objects: 100% (4151/4151), 32.36 MiB | 28.69 MiB/s, done.\n",
"Resolving deltas: 100% (1858/1858), done.\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "CvvfQ0RttAt9",
"outputId": "10036d0d-f2c2-4c0d-cc45-a7bdf7905061",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"%%bash\n",
"cd DeepLearningExamples\n",
"git checkout master"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": [
"Your branch is up to date with 'origin/master'.\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"Already on 'master'\n"
],
"name": "stderr"
}
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "-rE46y-ftAuQ",
"outputId": "9f4cfeec-aa65-4615-b870-ad1c4248e9ae",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"import os\n",
"\n",
"WORKSPACE_DIR='/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks'\n",
"os.chdir(WORKSPACE_DIR)\n",
"print (os.getcwd())"
],
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": [
"/content/DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial/notebooks\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "2b2vTOWtPuIE",
"colab_type": "code",
"outputId": "cb566caa-d637-4d97-feec-2f3438838447",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 108
}
},
"source": [
"!pip install tensorflow_hub==0.6.0"
],
"execution_count": 6,
"outputs": [
{
"output_type": "stream",
"text": [
"Requirement already satisfied: tensorflow_hub==0.6.0 in /usr/local/lib/python3.6/dist-packages (0.6.0)\n",
"Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub==0.6.0) (1.12.0)\n",
"Requirement already satisfied: numpy>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub==0.6.0) (1.17.3)\n",
"Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow_hub==0.6.0) (3.10.0)\n",
"Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.4.0->tensorflow_hub==0.6.0) (41.4.0)\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "HqSUGePjmTb9"
},
"source": [
"## Data download\n",
"\n",
"We will first download some data for testing purposes, in particular, the [Weakly Supervised Learning for Industrial Optical Inspection (DAGM 2007)](https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html) dataset. \n",
"\n",
"> The competition is inspired by problems from industrial image processing. In order to satisfy their customers' needs, companies have to guarantee the quality of their products, which can often be achieved only by inspection of the finished product. Automatic visual defect detection has the potential to reduce the cost of quality assurance significantly.\n",
">\n",
"> The competitors have to design a stand-alone algorithm which is able to detect miscellaneous defects on various background textures.\n",
">\n",
"> The particular challenge of this contest is that the algorithm must learn, without human intervention, to discern defects automatically from a weakly labeled (i.e., labels are not exact to the pixel level) training set, the exact characteristics of which are unknown at development time. During the competition, the programs have to be trained on new data without any human guidance.\n",
"\n",
"**Source:** https://resources.mpi-inf.mpg.de/conference/dagm/2007/prizes.html\n"
]
},
{
"cell_type": "code",
"metadata": {
"colab_type": "code",
"id": "S2PR7weWmTcK",
"colab": {}
},
"source": [
"! ./download_and_preprocess_dagm2007_public.sh ./data"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "EQAIszkxmTcT"
},
"source": [
"The final data directory should look like:\n",
"\n",
"```\n",
"./data\n",
"  raw_images\n",
"    public\n",
"      Class1\n",
"      Class2\n",
"      Class3\n",
"      Class4\n",
"      Class5\n",
"      Class6\n",
"      Class1_def\n",
"      Class2_def\n",
"      Class3_def\n",
"      Class4_def\n",
"      Class5_def\n",
"      Class6_def\n",
"    private\n",
"  zip_files\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "xSztH-mf-6hY"
},
"source": [
"Each data directory contains training images for one of the six publicly available defect types (the full DAGM 2007 dataset comprises ten classes)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QjOVDdyZgbT8",
"colab_type": "text"
},
"source": [
"## Load UNet TF-Hub modules from Google Drive (Optional)\n",
"\n",
"This optional step lets you load pretrained UNet TF-Hub modules from Google Drive, provided you have previously saved modules there (see this [notebook](https://colab.research.google.com/github/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Segmentation/UNet_Industrial/notebooks/Colab_UNet_Industrial_TF_TFHub_export.ipynb) on UNet TF-Hub module creation and export to Google Drive). Execute the cell below to authorize Colab to access your Google Drive content, then copy the saved TF-Hub modules to Colab."
]
},
{
"cell_type": "code",
"metadata": {
"id": "zga0RLrjgg4w",
"colab_type": "code",
"colab": {}
},
"source": [
"from google.colab import drive\n",
"\n",
"drive.mount('/content/gdrive')"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "l8y1092rjAvl",
"colab_type": "code",
"colab": {}
},
"source": [
"!cp -r \"/content/gdrive/My Drive/NVIDIA/Unet_modules\" ."
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "0O5ZBrY4zcwq",
"colab_type": "code",
"colab": {}
},
"source": [
"!ls Unet_modules"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "VunoLwxYtkdU",
"colab_type": "text"
},
"source": [
"## Inference with UNet TF-Hub modules\n",
"\n",
"Next, we will load one of the pretrained UNet TF-Hub modules (corresponding to one of the 10 classes of the DAGM 2007 dataset) and carry out inference.\n",
"\n",
"A TF-Hub module can be loaded in one of two ways:\n",
"\n",
"- Load from a local cache or directory\n",
"\n",
"- Load from a remote repository"
]
},
{
"cell_type": "code",
"metadata": {
"id": "doG457_Ut4qI",
"colab_type": "code",
"colab": {}
},
"source": [
"import tensorflow_hub as hub\n",
"\n",
"# Loading from a local cache/directory\n",
"#module = hub.Module(\"Unet_modules/Class_1\", trainable=False)\n",
"\n",
"# Loading from a remote repository. The 10 NVIDIA UNet TF-Hub modules are available at\n",
"# https://tfhub.dev/nvidia/unet/industrial/class_1/1 (similarly for class 2, 3 ...) and\n",
"# https://developer.download.nvidia.com/compute/redist/Binary_Files/unet_tfhub_modules/class_{1..10}\n",
"\n",
"module = hub.Module(\"https://tfhub.dev/nvidia/unet/industrial/class_1/1\") # or class_2, class_3 etc...\n",
"#module = hub.Module(\"https://developer.download.nvidia.com/compute/redist/Binary_Files/unet_tfhub_modules/class_1/1.tar.gz\") # or class_2, class_3 etc...\n",
"\n"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "pw24AUjct-p7",
"colab_type": "code",
"outputId": "e230bf89-fb0e-480e-99f8-6b94b717494b",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(module.get_signature_names())"
],
"execution_count": 9,
"outputs": [
{
"output_type": "stream",
"text": [
"['default']\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "nKa0L82puB6q",
"colab_type": "code",
"outputId": "ea1317a3-9f1f-45f5-b22f-d979bc880969",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(module.get_input_info_dict()) # When no signature is given, the 'default' signature is used\n"
],
"execution_count": 10,
"outputs": [
{
"output_type": "stream",
"text": [
"{'images': <hub.ParsedTensorInfo shape=(?, 512, 512, 1) dtype=float32 is_sparse=False>}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "huUOMsfJuRd9",
"colab_type": "code",
"outputId": "e3ff5232-4889-475d-a025-2360063f7a0b",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(module.get_output_info_dict())"
],
"execution_count": 11,
"outputs": [
{
"output_type": "stream",
"text": [
"{'default': <hub.ParsedTensorInfo shape=(?, 512, 512, 1) dtype=float32 is_sparse=False>}\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8guf0QjutnOz",
"colab_type": "text"
},
"source": [
"As seen above, the module expects grayscale images of size 512x512 (plus batch and channel dimensions) as input, and produces masks of the same size."
]
},
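{
"cell_type": "markdown",
"metadata": {},
"source": [
"The module accepts a batch of images in its leading `?` dimension. As a minimal sketch (using random arrays as stand-ins for loaded PNGs), several 512x512 grayscale images can be stacked into a single batch:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Three hypothetical grayscale images in [0, 1] (stand-ins for loaded PNGs).\n",
"images = [np.random.rand(512, 512).astype(np.float32) for _ in range(3)]\n",
"\n",
"# Stack into a batch and add the channel dimension: (3, 512, 512, 1).\n",
"batch = np.stack(images)[..., np.newaxis]\n",
"\n",
"# Normalize from [0, 1] to [-1, 1], as done for the single test image below.\n",
"batch = (batch - 0.5) / 0.5\n",
"\n",
"print(batch.shape)\n",
"```"
]
},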
{
"cell_type": "code",
"metadata": {
"id": "xJBWiPsPuX9a",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 595
},
"outputId": "45df40a3-0a27-4331-9d34-fb4e38c48e22"
},
"source": [
"# Load a test image\n",
"import numpy as np\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"\n",
"img = mpimg.imread('./data/raw_images/public/Class1_def/1.png')\n",
"\n",
"plt.figure(figsize = (10,10));\n",
"plt.imshow(img, cmap='gray');"
],
"execution_count": 12,
"outputs": [
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAkcAAAJCCAYAAADKjmNEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9aYyc55Xf+6t9r+pau6r3fWV3c2uy\nSVMLLMqmrbFnLNmJbMeYARIYGCDABAgQ3EG+MAFugiAfBkiCJEgy+TAYzNgaOwM5M7IkS7IkihTX\nJtnd7H3v6uraumvft/tB85xIwWRBML7XF+gD6AMpdle97/s85znnf/7//6tptVqcxmmcxmmcxmmc\nxmmcxmeh/f/6C5zGaZzGaZzGaZzGafw6xWlxdBqncRqncRqncRqn8bk4LY5O4zRO4zRO4zRO4zQ+\nF6fF0WmcxmmcxmmcxmmcxufitDg6jdM4jdM4jdM4jdP4XJwWR6dxGqdxGqdxGqdxGp+LX0lxpNFo\nbmg0mjWNRrOp0Wj+r1/FZ5zGaZzGaZzGaZzGafwqQvM37XOk0Wh0wDrwMhAGHgDfbbVay3+jH3Qa\np3Eap3Eap3Eap/EriF8FcnQJ2Gy1WtutVqsK/Aj4zV/B55zGaZzGaZzGaZzGafyNh/5X8Ds7gYPP\n/TkMXP6f/YDBYGi5XC6q1Sp+vx+z2Uyj0SASidBsNjEYDAAYjUYcDgdtbW0kEgkKhQLFYhGbzYbB\nYMBut6PRaEgmkzSbTfk9JpMJn88HQD6f5+TkBL1eT71ep9FoYLPZAMhkMvj9fur1OrlcDgC73U61\nWsVoNKLXf3a7otEoer0eq9VKq9WS31GtVimVSvKdc7kcXq8Xg8HAyckJOp0Ol8sFwMnJCX6/n3K5\nTK1Wo1QqYTQa0el0GI1GqtUqGo0GrVZLrVbDYDBgsViwWCwkk0kAbDYbJycn2Gw2dDodjUaDZrOJ\nVqulXC7TarUwGAyUy2U8Hg/FYpFarQaAQgxbrRYOh4N6vU6z2USn05HP5zGbzXLfq9Wq/L9CoYDT\n6SSXy2G1WgFoNBrY7XYymQx6vZ5ms0mj0cBisaDVaqlUKuh0Omq1GvV6HavVSqVSwWq1otFo5Heo\n+1wul3E6nWSzWSwWC61Wi1qtJve/XC7jcDjkGRmNRgC0Wi31eh29Xk8ul0On06HT6dBoNNhsNiqV\nCtVqFa1Wi06no1qtyjWqqNfrAHLv9Xq9fH/1PS0Wi9wnk8lEuVzGZDLJtajvUqvVaLVa1Ot1tFot\nJpMJnU5HLpfDZDLRbDapVqt4PB5yuRx6vV6ej0ajwWw2A1Cr1dDpdPKMbDYb5XIZg8FAqVRCq9Vi\nMBjQarU0Gg20Wq1cS71el2fcaDQoFovo9Xqq1Somk0nWXCaTwWg0otVq0Wq1ci2FQgG3200ul6PR\naNDW1katVqNarWKxWCgUChgMBjQaDc1mE6PRSLFYpNls4nA4ACgWi7KPKpUKBoOBer2O3W6nUCig\n0Wgol8totVosFotck16vl3Wq9nOtVqPRaFCv1+X75/N5tFotzWYTAKfTSbVapVgsotFoZN04nU4K\nhYKs5UajgUajQaPRyNqrVCro9XoMBgOFQoG2tjaazSalUkn2i7pn6lmpZ6GeabPZpFarYTKZMJvN\n6HQ6yuUylUpF1qv6t+r6dTqdXEO1WlV5Uda0WgcqV6k/NxoNyQXqHqpnbjKZsFqtlEolcrkcWq0W\nm80m9yOfz6PX6zEajZKzUqmU5LVWq4VWq6VUKuF0OmWPOhwOuR9qHRSLRXlmRqMRo9FIJpOh0WhQ\nq9Xwer0AsmdVXlDPJZPJYLFYaDQatFotue+pVErWo8vlolQqyf3RarWSg8rlMlar9Qv3z2QyybrW\n6/VoNBq5BqfT+YX9DlCpVDCbzVQqFVmDTqcT
k8lEvV4nm83SarXQ6XTyHAqFguw39ZlqvajcA8jn\nqzxvMpkoFAqy19WaVfe0Uqlgt9sxGAwYDAbS6bSsM71eL+uvWq2i1+vlO6qoVqvodLovrDO73U6x\nWJQ8VCqVZO9+Pr+qfKrX68nn87RaLfR6PVqtVvJzqVTCZDLRaDTwer1ks1n53RqNBovFAiCfoc4j\ntXdKpZJ8nl6vx+FwkEql5Ps0Gg3JH+l0WnLB5/emyo/qHtTrdQwGg6yBz+fQQqGAw+GQe6Zyisvl\nIhKJJFutlp//Ln4VxdH/Vmg0mh8CP4TPCpDvfve7jI6O8uqrr/Kv/tW/IpvN8vTpUzweD1euXAFA\np9Pxgx/8gE8++YT5+Xmy2Sybm5vY7Xa+8Y1v0N3dzb/9t/+W9vZ29Ho9vb29uFwu+vv7icfjALzx\nxhtMT0/jdDpZX19ncnKS4eFh/vk//+d8//vfp16vMz8/Tz6fp6enh0KhQCQS4etf/zq/+MUvAOju\n7uatt97i/Pnz6HQ6/uE//Ie8+eabrK2tsbS0xJUrV8jlcgSDQSqVCgsLCzz//PNEIhEuXrwI/Lek\nsrKygl6vp6+vj3Q6zaVLl6jVamxvb1OpVHjw4AGXL19mdnaWVqvF/fv3uXDhAgBbW1tks1k5iOLx\nOL/927/N/v4+b7zxBoeHhwwODmI2m7l8+TJ/9Ed/xMDAAMViURav2+1Gp9Oxvb2NXq/n6OiInp4e\nKXharRY+n4+/+Iu/YGRkhGAwyJUrV/jjP/5jAoEAAB6Ph/39fXw+H1tbW3i9XjY2NnjttddYWVmh\n2WySz+fJ5/N4PB7m5ub44IMPmJ2dJRwOA8hGGxsb48MPP8ThcBCNRmk2m5TLZcxmMzMzM8TjcSkW\ns9ksWq2W6elpABYWFqjVaiSTSV544QVOTk4ol8t0dHSQSCRwOBxEIhFeeOEFFhYW6OrqYnNzk2Aw\nyO7uLgAXL17k5z//OQ6Hg5mZGarVKul0Wg5Cj8eD1+vl008/5Stf+QqffPIJly5dkkMKPkvUoVAI\ni8VCLpfD6XTy+PFjXn75ZR4+fIjVasVkMnH79m2uXbtGMBjk3r17fPnLX5b7YbPZaDab7Ozs0NfX\nJ4fEyMgI2WyWeDyOwWDg4OCAa9euYTQa2dvbw+v1sr+/D4DVauXw8BCHw4HH45Em4t69exSLRWZn\nZymVSjx58oRAIIDT6aRcLtPd3c3h4SEA2WyWM2fOEIlEcLlc9PT0sL6+zsDAAJFIhLa2NsrlMuVy\nmXq9zrlz54jFYuzt7Umy7uzsZHp6mkqlwu7uLvF4HLfbzdzcHD/96U9xuVwsLi4yOTnJjRs3ePDg\nAbFYjI2NDUZHRwG4evUqy8vLOBwO9vf3SSQSnDlzhmq1yvb2NuVymdHRUaLRKK+++io/+tGPGB4e\n5vHjx5w9exaAjo4Obt++zcTEBHq9ns3NTQYHB3n48CFTU1PUajUcDgetVovj42Oy2SwOh4OjoyPZ\nLwsLC/zmb/4mPT09LCwsUC6XqVar5HI5ZmdnKRQKfPDBB/h8PgYHBzl79izNZpP33nuPfD4PfFYg\n9Pf3EwwGOTk5YXt7G5fLxejoKCcnJ2xsbGC327Hb7QQCAUwmE3/5l3/JlStXCAaDwGeH+vLyMkaj\nkaGhITnQk8kkW1tbvPzyy7S1tREKhchms3zwwQc4HA5MJpMUNkajkcPDQ86cOUNPTw+7u7s8ePAA\nh8OB1+tFo9FIATMwMAB8dsik02mKxSIAXq+Xvr4+Hj16RLlc5vj4mOnpaYLBIOl0mkgkwvHxMUND\nQ0xPT7O8vMzu7i75fJ5QKCS/e2VlhZOTE6rVKr29vfT09GC1Wnn//fexWq0Eg0H6+/vZ3Nzk6OgI\nvV5Pe3u7fK/Dw0NOTk7o7u6W4qfZbHLv3j36+vpIJpM4HA4ODg4wm83Mzs7i9Xo5PDxkZ2cHAJfL\nJYdyqVQi
n8/z3HPPYbfbSafT5PN5nj59SrPZZHR0VBrBjY0NOXDb2tpob2/H6XTy9OlTuRZ1HfF4\nnGvXrpFIJMjn8
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
}
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TqaxvR09KYrr",
"colab_type": "text"
},
"source": [
"As we can see in this figure, there is a defective area in the top-left corner. We will now start a TensorFlow session and run inference on the normalized test image with the loaded TF-Hub module."
]
},
{
"cell_type": "code",
"metadata": {
"id": "MK_wZTa_udHk",
"colab_type": "code",
"outputId": "836ef139-9d25-4d65-fff6-db6bae1999f5",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"# Image preprocessing\n",
"img = np.expand_dims(img, axis=2)  # add channel dimension: (512, 512) -> (512, 512, 1)\n",
"img = np.expand_dims(img, axis=0)  # add batch dimension: -> (1, 512, 512, 1)\n",
"img = (img - 0.5) / 0.5            # normalize from [0, 1] to [-1, 1]\n",
"\n",
"output = module(img)"
],
"execution_count": 13,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
],
"name": "stderr"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "dTwuefRtuzCu",
"colab_type": "code",
"outputId": "1097c265-87f1-4a8a-dfda-fc0e07839e03",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
}
},
"source": [
"print(output.shape)"
],
"execution_count": 14,
"outputs": [
{
"output_type": "stream",
"text": [
"(1, 512, 512, 1)\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "oXgfSMuRvoeQ",
"colab_type": "code",
"colab": {}
},
"source": [
"import tensorflow as tf\n",
" \n",
"with tf.Session() as sess:\n",
" sess.run([tf.global_variables_initializer(), tf.tables_initializer()])\n",
" pred = sess.run(output)\n",
" "
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "qi9THAABzEHc",
"colab_type": "code",
"outputId": "ce6d0172-1545-4b92-dcc1-031ae2290032",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 595
}
},
"source": [
"# Plot the model's predicted mask\n",
"plt.figure(figsize = (10,10));\n",
"plt.imshow(np.squeeze(pred), cmap='gray');"
],
"execution_count": 16,
"outputs": [
{
"output_type": "display_data",
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAkcAAAJCCAYAAADKjmNEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0\ndHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAcB0lEQVR4nO3db6xld13v8c+3czrTUkqnhTJCp9Le\nWC8h5DKQhlsDDxCjATWWB4RgvKESYmPiTTB6I+ATI7k+0AeCxBu8VYjV+I+o2AbJvTSF67UPQFqL\n/CtcRmxlxtJpoZ22lP6ZM7/74KypX4ahc2Zmn1n7nPN6JSdnr7XX7P09s+DMu2utvXeNMQIAwJpz\n5h4AAGCZiCMAgEYcAQA04ggAoBFHAACNOAIAaDYkjqrqdVX15araX1Xv3IjnAADYCLXo9zmqqh1J\n/l+SH01yIMmnk/z0GOOLC30iAIANsBFHjl6ZZP8Y46tjjCeT/HmSazfgeQAAFm5lAx7zsiRfa8sH\nkvznZ/oDVeVtugGAs+2BMcalx6/ciDhal6q6Psn1cz0/ALDt3XOilRsRRweTXN6W907rvsMY44Yk\nNySOHAEAy2Mjrjn6dJKrqurKqtqZ5M1Jbt6A5wEAWLiFHzkaYxypqv+a5H8n2ZHkg2OMLyz6eQAA\nNsLCX8p/WkM4rQYAnH13jDGuPn6ld8gGAGjEEQBAI44AABpxBADQiCMAgEYcAQA04ggAoBFHAACN\nOAIAaMQRAEAjjgAAGnEEANCIIwCARhwBADTiCACgEUcAAI04AgBoxBEAQCOOAAAacQQA0IgjAIBG\nHAEANOIIAKARRwAAjTgCAGjEEQBAI44AABpxBADQiCMAgEYcPYO/+7u/yxhj3V/vete75h4ZADhD\nNcaYe4ZU1fxDTF71qlfltttuW8hjnX/++Xn88ccX8lgAwMLdMca4+viVjhxNPvnJT2aMsbAwSpJv\nf/vbGWPkzjvvXNhjAgAba9sfOTr33HPz5JNPnrXne/nLX54k+cxnPnPWnhMAOCFHjo534403ntUw\nSpI777wzd95559PXKV1yySVn9fkBgGe2MvcAczl06FAuvfTSucfIN77xjSTJGCMrKys5evTozBMB\nwPa2LY8c/fzP//xShFFXVVldXc0YI6urq1lZ2bbdCgCz2nbXHJ3ta4zO1BNPPJHzzjtv7jEAYCty\nzREAwMlsuzjaTEeNkmTXrl1PX7wNAGy8bXNhy1e/+tW5RzhjYwyn2QBgg22ba46W4edctEsvvTQP\nPPDA3GMAwGa1fa852ophlCT333+/l/4DwIJtizjayqpqy8YfAMxhy8fRdgmHMUZ27Ngx9xgAsOlt\n+TjaTo4cOZLf+73fm3sMANjUtvQF2aurqznnnO3Zf1U19wgAsOy23wXZ2zWMkrXTbO9+97vnHgMA\nNp0te+Ro165defzxxxf9sJuSo0gAcELb68jRI488MvcIS2OMkZtuumnuMQBgU9iyR46W4edaRrt2\n7dp0H6ECABtkex054sSeeOKJrK6uzj0GACytLRlH+/btm3uEpXbOOec8/WG2H/zgB+ceBwCWypY8\nrfbkk0/m3HPPXeRDbnlXXnll7r777rnHAICzafucVhNGp+5f/uVffE4bAGSLxhEAwOkSRzzt2IfY\nvu9975t7FACYzZa85mgZfqbN7sknn8yuXbvmHgMANtL2ueaIM7dz506RCcC2JI54RmOMPPvZz557\nDAA4a8QRJ/XII48IJAC2jS0XRysrK3OPsCU98sgj3iIBgG1hy8XRi1/84rlH2LJ8JhsA28GWi6N3\nvOMdc4+wpblIG4CtbsvF0cte9rK5R9jyBBIAW9mWi6MLLrhg7hG2BYEEwFa15eLo8ccfn3uEbePw\n4cM5fPjw3GMAwEJtuZd27dixY+4Rto3nPOc5
SZILL7wwjzzyyMzTAMBiOHLEGXv44YfnHgEAFmbL\nxdE999wz9wjb0utf//q5RwCAhdhycfT7v//7c4+wLX30ox+dewQAWIhahlcdVdXChti5c2eeeOKJ\nRT0cp+Db3/52nvWsZ809BgCs1x1jjKuPX7nljhwBAJyJLRdHPuJiPueff35+8Ad/cO4xAOCMbLk4\nYl5f/vKX5x4BAM6IOGLhjh49OvcIAHDaxBELV1XZt2/f3GMAwGnZcq9WS9aOXFTVIh+S02AfALDk\nts+r1Q4cODD3CCT527/927lHAIBTtiWPHD3vec/L/fffv8iH5DQ5egTAEts+R44eeOCBuUdgcuTI\nkblHAIBTsiXjiOWxY8eOuUcAgFOyZePoqaeemnsEJp/73OfmHgEA1m3LxtHFF1889whMXvrSl849\nAgCs25aNo29961tzj0Dj9BoAm8WWjaPEOzUvk8cee2zuEQBgXbZ0HO3cuXPuEZjYFwBsFls6jlZX\nV+cegea3fuu35h4BAE5qS74JZPfQQw/loosu2qiH5xR5U0gAlsj2eRNIAIDTteXjaPfu3XOPQPPh\nD3947hEA4Blt+dNqSbIMPyP/zqk1AJbE9j2t9vnPf37uEWjOP//8uUcAgO9pWxw5Shw9Wiarq6tZ\nWVmZewwA2L5Hjlgu3i0bgGW2beLonHPOyTnnbJsfd+k5cgTAsto2tTDGcGptiRw+fHjuEQDghLZN\nHB2zd+/euUcgybOe9ay5RwCAEzppHFXVB6vqUFV9vq27pKpuqaqvTN8vntZXVb2vqvZX1Wer6hUb\nOfzpOHjw4NwjAABLbD1Hjv4wyeuOW/fOJLeOMa5Kcuu0nCSvT3LV9HV9kvcvZszFeu973zv3CCTZ\nt2/f3CMAwHdZ10v5q+qKJB8ZY7x0Wv5ykteMMe6tqhck+T9jjP9YVf9zuv1nx293ksc/6xcDuf5o\nfgcOHMjll18+9xgAbF8LfSn/nhY8X0+yZ7p9WZKvte0OTOuWzs033zz3CNveC1/4wrlHAIDvcsav\npx5jjNM58lNV12ft1Nssrr32WkePZuatFQBYRqf7r9N90+m0TN8PTesPJunnSfZO677LGOOGMcbV\nJzqcdbY8//nPn+upAYAldbpxdHOS66bb1yW5qa1/y/SqtWuSHD7Z9UZzuv/++3PPPffMPQYAsERO\nekF2Vf1ZktckeV6S+5L8WpK/SfKhJN+f5J4kbxpjfLPWPm79d7P26rbHkrx1jHH7SYeY4YLs7ujR\noz4pfib+3gGY0QkvyN42Hzz7TMTRfPy9AzAjHzz7vbgwGAA4RhVMHME4+1ZXV+ceAQC+izhqBNLZ\nddddd809AgB8F3F0HIF09vzcz/3c3CMAwHdxQfb3sAx/L1vdysqKU2sAzMkF2aeiqvLYY4/NPcaW\nJowAWEbi6BlccMEF+dmf/dm5x9iSjh49OvcIAHBCTqut0zL8PW0le/bsyaFDh06+IQBsHKfVzkRV\n5bbbbsttt9029yhbgjACYFk5cnSaluHvbbNyITYAS8KRo0WqqjznOc+Ze4xN59Of/rQwAmCpOXK0\nID6f7eTGGD6qBYBl4sjRRjrnnHP8w38S/n4A2Az8a7VAY4xUVXbs2OGapOOce+65c48AAOsijjbA\n0aNHnz6S5Pqa5IUvfGGOHDky9xgAsC7iaAONMbKyspKqysMPPzz3OLO4+OKLc++99849BgCsmzgC\nAGhW5h5gu7jooouSJE8++WSS7XENjlfvAbAZOXJ0lu3cuTM7d+5MVeXRRx+de5wN8eCDDwojADYt\ncTSjCy+8MFWVK6+8cu5RFuLYez1dcsklc48CAKdNHC2Bu+++O1WVqsrHPvaxucc5LTt27MiOHTvm\nHgMAzph3yF5Su3fvzoMPPjj3GN/T0aNHs7KydsnaMvxvCABOg3fI3kweeuihp48m/eu//uvc4zzt\nK1/5yne8
0aUwAmCr8Wq1TeBFL3pRkrVTV48++mjOO++8s/r8q6ur2blzZ44ePXpWnxcA5uDI0Say\nurqa888//+kjS
"text/plain": [
"<Figure size 720x720 with 1 Axes>"
]
},
"metadata": {
"tags": []
}
}
]
},
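{
"cell_type": "markdown",
"metadata": {},
"source": [
"The mask above is a per-pixel score map in `[0, 1]` rather than a hard label. A binary defect mask can be derived by thresholding; the sketch below uses a `0.5` threshold, which is an assumption rather than a value prescribed by the model:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Hypothetical model output with the module's output shape (1, 512, 512, 1).\n",
"pred = np.random.rand(1, 512, 512, 1).astype(np.float32)\n",
"\n",
"probs = np.squeeze(pred)  # drop batch and channel dims -> (512, 512)\n",
"mask = probs > 0.5        # boolean defect mask; threshold is an assumption\n",
"\n",
"# Fraction of pixels flagged as defective.\n",
"print('Defective pixel fraction:', float(mask.mean()))\n",
"```"
]
},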
{
"cell_type": "markdown",
"metadata": {
"id": "8ycE4aicK5GW",
"colab_type": "text"
},
"source": [
"As expected, the TF-Hub module points out the correct defective area in this image. Please feel free to try out other defective images for Class 1 within `./data/raw_images/public/Class1_def/`, or load the other UNet modules and test data for other classes from 1 to 10. "
]
},
{
"cell_type": "code",
"metadata": {
"id": "z8MG1scbK8Ez",
"colab_type": "code",
"outputId": "f375cc88-8c68-46bb-e6e9-8bb7960e94b5",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 326
}
},
"source": [
"!ls ./data/raw_images/public/Class1_def/"
],
"execution_count": 0,
"outputs": [
{
"output_type": "stream",
"text": [
"100.png 116.png 131.png 147.png 26.png 41.png 57.png 72.png 88.png\n",
"101.png 117.png 132.png 148.png 27.png 42.png 58.png 73.png 89.png\n",
"102.png 118.png 133.png 149.png 28.png 43.png 59.png 74.png 8.png\n",
"103.png 119.png 134.png 14.png 29.png 44.png 5.png 75.png 90.png\n",
"104.png 11.png 135.png 150.png 2.png 45.png 60.png 76.png 91.png\n",
"105.png 120.png 136.png 15.png 30.png 46.png 61.png 77.png 92.png\n",
"106.png 121.png 137.png 16.png 31.png 47.png 62.png 78.png 93.png\n",
"107.png 122.png 138.png 17.png 32.png 48.png 63.png 79.png 94.png\n",
"108.png 123.png 139.png 18.png 33.png 49.png 64.png 7.png 95.png\n",
"109.png 124.png 13.png 19.png 34.png 4.png 65.png 80.png 96.png\n",
"10.png\t 125.png 140.png 1.png 35.png 50.png 66.png 81.png 97.png\n",
"110.png 126.png 141.png 20.png 36.png 51.png 67.png 82.png 98.png\n",
"111.png 127.png 142.png 21.png 37.png 52.png 68.png 83.png 99.png\n",
"112.png 128.png 143.png 22.png 38.png 53.png 69.png 84.png 9.png\n",
"113.png 129.png 144.png 23.png 39.png 54.png 6.png 85.png labels.txt\n",
"114.png 12.png 145.png 24.png 3.png 55.png 70.png 86.png\n",
"115.png 130.png 146.png 25.png 40.png 56.png 71.png 87.png\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "g8MxXY5GmTc8"
},
"source": [
"# Conclusion\n",
"\n",
"In this notebook, we have walked through the process of loading a pretrained UNet-Industrial TF-Hub module and carrying out inference on a test image.\n",
"## What's next\n",
"Now it's time to try the UNet-Industrial TF Hub modules on your own data. "
]
},
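{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a starting point for your own data, the sketch below converts an arbitrary image into the normalized 512x512 grayscale batch the modules expect (`my_image.png` is a placeholder path, not a file shipped with this notebook):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from PIL import Image\n",
"\n",
"# Placeholder path; replace with your own image file.\n",
"img = Image.open('my_image.png').convert('L').resize((512, 512))\n",
"\n",
"# Scale to [0, 1], normalize to [-1, 1], then add batch and channel dims.\n",
"arr = np.asarray(img, dtype=np.float32) / 255.0\n",
"arr = (arr - 0.5) / 0.5\n",
"batch = arr[np.newaxis, ..., np.newaxis]  # shape (1, 512, 512, 1)\n",
"\n",
"# The batch can then be fed to a loaded TF-Hub module as shown earlier:\n",
"# output = module(batch)\n",
"```"
]
},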
{
"cell_type": "code",
"metadata": {
"id": "u0SS61owr2oO",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": 0,
"outputs": []
}
]
}