# Docker Engine Utility for NVIDIA GPUs

[![GitHub license](https://img.shields.io/badge/license-New%20BSD-blue.svg?style=flat-square)](https://raw.githubusercontent.com/NVIDIA/nvidia-docker/master/LICENSE)
[![Package repository](https://img.shields.io/badge/packages-repository-b956e8.svg?style=flat-square)](https://nvidia.github.io/nvidia-docker)

![nvidia-gpu-docker](https://cloud.githubusercontent.com/assets/3028125/12213714/5b208976-b632-11e5-8406-38d379ec46aa.png)

**Warning: This project is based on an alpha release (libnvidia-container). It is already more stable than 1.0 but we need help testing it.**
## Differences from 1.0
* Doesn't require wrapping the Docker CLI and doesn't need a separate daemon.
* GPU isolation is now achieved with the environment variable `NVIDIA_VISIBLE_DEVICES`.
* Enables GPU support for any Docker image, not just the ones based on our official CUDA images.
* Package repositories are available for Ubuntu and CentOS.
* Uses a new implementation based on [libnvidia-container](https://github.com/NVIDIA/libnvidia-container).
## Removing nvidia-docker 1.0
Version 1.0 of the nvidia-docker package must be cleanly removed before continuing.
You must stop and remove **all** containers started with nvidia-docker 1.0.
#### Ubuntu distributions
```sh
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge nvidia-docker
```
#### CentOS distributions
```sh
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker
```
## Installation
**If you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package will override it.**
#### Ubuntu distributions
1. Install the repository for your distribution by following the instructions [here](http://nvidia.github.io/nvidia-docker/).
2. Install the `nvidia-docker2` package and restart the Docker daemon:
```sh
sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd
```
#### CentOS distributions
1. Install the repository for your distribution by following the instructions [here](http://nvidia.github.io/nvidia-docker/).
2. Install the `nvidia-docker2` package and restart the Docker daemon:
```sh
sudo yum install nvidia-docker2
sudo pkill -SIGHUP dockerd
```
## Usage
#### NVIDIA runtime
nvidia-docker registers a new container runtime with the Docker daemon.
You must select the `nvidia` runtime when using `docker run`:
```sh
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```
#### GPU isolation
Set the environment variable `NVIDIA_VISIBLE_DEVICES` in the container:
```sh
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 --rm nvidia/cuda nvidia-smi
```
#### Non-CUDA images
Setting `NVIDIA_VISIBLE_DEVICES` will enable GPU support for any container image:
```sh
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all --rm debian:stretch nvidia-smi
```
## Advanced
#### Backward compatibility
To ease the transition from 1.0 to 2.0, a bash script is provided at `/usr/bin/nvidia-docker` for backward compatibility.
It automatically injects the `--runtime=nvidia` argument and converts the `NV_GPU` environment variable to `NVIDIA_VISIBLE_DEVICES`.
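The translation the wrapper performs can be sketched as a small shell function. This is a simplified illustration only, not the actual shipped script; it prints the `docker` command that would be executed instead of running it:

```shell
# Simplified sketch of the compatibility behavior (illustration only, not the
# actual /usr/bin/nvidia-docker script): inject --runtime=nvidia on run/create
# and translate NV_GPU into NVIDIA_VISIBLE_DEVICES.
nvidia_docker_compat() {
  if [ "$1" = "run" ] || [ "$1" = "create" ]; then
    cmd="$1"; shift
    echo docker "$cmd" --runtime=nvidia ${NV_GPU:+-e NVIDIA_VISIBLE_DEVICES="$NV_GPU"} "$@"
  else
    echo docker "$@"
  fi
}

# A 1.0-style invocation:
NV_GPU=0,1
nvidia_docker_compat run --rm nvidia/cuda nvidia-smi
# prints: docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 --rm nvidia/cuda nvidia-smi
```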
#### Existing `daemon.json`
If you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package will override it.
In this case, it is recommended to install [nvidia-container-runtime](https://github.com/nvidia/nvidia-container-runtime#installation) instead and register the new runtime manually.
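Registering the runtime manually amounts to adding a `runtimes` entry to your existing `/etc/docker/daemon.json`. A typical entry looks like the following sketch (the binary path may differ on your system):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

After editing the file, reload the Docker daemon configuration for the change to take effect.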
#### Default runtime
The default runtime used by the Docker® Engine is [runc](https://github.com/opencontainers/runc); our runtime can be made the default by starting the Docker daemon with `--default-runtime=nvidia`.
Doing so removes the need to add the `--runtime=nvidia` argument to `docker run`.
It is also the only way to have GPU access during `docker build`.
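Equivalently, instead of passing the flag on the daemon's command line, the default runtime can be set in `/etc/docker/daemon.json`. A minimal sketch, assuming the `nvidia` runtime is already registered there and merged with your existing settings:

```json
{
    "default-runtime": "nvidia"
}
```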
#### Environment variables
The behavior of the runtime can be modified through environment variables (such as `NVIDIA_VISIBLE_DEVICES`).
These environment variables are consumed by [nvidia-container-runtime](https://github.com/nvidia/nvidia-container-runtime) and are documented [here](https://github.com/nvidia/nvidia-container-runtime#environment-variables-oci-spec).
Our official CUDA images use default values for these variables.
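For example, an image that is not based on the official CUDA images can request GPU access by setting these variables itself. A hypothetical Dockerfile sketch (`NVIDIA_DRIVER_CAPABILITIES` is one of the variables documented at the link above):

```dockerfile
# Hypothetical example: a non-CUDA base image that requests GPU access.
FROM debian:stretch
# Expose all GPUs to the container.
ENV NVIDIA_VISIBLE_DEVICES all
# Mount the compute (CUDA) and utility (nvidia-smi) driver libraries.
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
```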
## Issues and Contributing
A signed copy of the [Contributor License Agreement](https://raw.githubusercontent.com/NVIDIA/nvidia-docker/master/CLA) needs to be provided to [digits@nvidia.com](mailto:digits@nvidia.com) before any change can be accepted.
* Please let us know about any bugs by [filing a new issue](https://github.com/NVIDIA/nvidia-docker/issues/new)
* You can contribute by opening a [pull request](https://help.github.com/articles/using-pull-requests/)