# Docker Engine Utility for NVIDIA GPUs

**Warning:** This project is based on an alpha release (libnvidia-container). It is already more stable than 1.0, but we need help testing it.
## Differences with 1.0

- Doesn't require wrapping the Docker CLI and doesn't need a separate daemon.
- GPU isolation is now achieved with the environment variable `NVIDIA_VISIBLE_DEVICES`.
- Can enable GPU support for any Docker image, not just the ones based on our official CUDA images.
- Package repositories are available for Ubuntu and CentOS.
- Uses a new implementation based on libnvidia-container.
## Removing nvidia-docker 1.0

Version 1.0 of the nvidia-docker package must be cleanly removed before continuing.
You must stop and remove all containers started with nvidia-docker 1.0.

### Ubuntu distributions

```sh
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge nvidia-docker
```

### CentOS distributions

```sh
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker
```
## Installation

**Note:** if you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package will override it.
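Since the package can overwrite that file, it is worth preserving a custom configuration before installing; a minimal sketch (the `.bak` destination is an arbitrary choice, not something the package does for you):

```shell
# Back up a custom daemon.json, if one exists, before installing nvidia-docker2.
# The .bak path is illustrative; pick any location outside /etc/docker if you prefer.
if [ -f /etc/docker/daemon.json ]; then
    sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
fi
```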
### Ubuntu distributions

1. Install the repository for your distribution by following the instructions here.
2. Install the `nvidia-docker2` package and restart the Docker daemon:

```sh
sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd
```
### CentOS distributions

1. Install the repository for your distribution by following the instructions here.
2. Install the `nvidia-docker2` package and restart the Docker daemon:

```sh
sudo yum install nvidia-docker2
sudo pkill -SIGHUP dockerd
```
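On either distribution, a quick way to verify the installation is to run `nvidia-smi` in a container (this assumes an NVIDIA driver is already installed on the host and the `nvidia/cuda` image can be pulled):

```shell
# If the runtime is installed correctly, this prints the host's GPU table.
sudo docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```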
## Usage

### NVIDIA runtime

nvidia-docker registers a new container runtime with the Docker daemon.
You must select the `nvidia` runtime when using `docker run`:

```sh
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```
### GPU isolation

Set the environment variable `NVIDIA_VISIBLE_DEVICES` in the container:

```sh
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 --rm nvidia/cuda nvidia-smi
```

### Non-CUDA image

Setting `NVIDIA_VISIBLE_DEVICES` will enable GPU support for any container image:

```sh
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all --rm debian:stretch nvidia-smi
```
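`NVIDIA_VISIBLE_DEVICES` also accepts a comma-separated list of device indices; for example (assuming GPUs 0 and 1 exist on the host):

```shell
# Expose only the first two GPUs to the container.
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 --rm nvidia/cuda nvidia-smi
```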
## Advanced

### Backward compatibility

To ease the transition from 1.0 to 2.0, a bash script is provided at `/usr/bin/nvidia-docker` for backward compatibility.
It will automatically inject the `--runtime=nvidia` argument and convert `NV_GPU` to `NVIDIA_VISIBLE_DEVICES`.
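For example, a 1.0-style invocation through the wrapper and its native 2.0 equivalent, following the conversion described above:

```shell
# 1.0-style call through the compatibility wrapper...
NV_GPU=0 nvidia-docker run --rm nvidia/cuda nvidia-smi

# ...is equivalent to the native 2.0 invocation:
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 --rm nvidia/cuda nvidia-smi
```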
### Existing daemon.json

If you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package will override it.
In this case, it is recommended to install nvidia-container-runtime instead and register the new runtime manually.
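A sketch of manual registration, assuming nvidia-container-runtime is installed at `/usr/bin/nvidia-container-runtime` (the fragment is written to a scratch file here; merge the `runtimes` entry into your existing `/etc/docker/daemon.json` by hand rather than overwriting it):

```shell
# Runtime entry to merge into /etc/docker/daemon.json, then reload the daemon.
cat <<'EOF' > /tmp/nvidia-runtime.json
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo pkill -SIGHUP dockerd
```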
### Default runtime

The default runtime used by the Docker® Engine is runc; our runtime can become the default one by configuring the docker daemon with `--default-runtime=nvidia`.
Doing so will remove the need to add the `--runtime=nvidia` argument to `docker run`.
It is also the only way to have GPU access during `docker build`.
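Equivalently, the default runtime can be set in `/etc/docker/daemon.json`; a sketch, assuming the nvidia runtime is registered at its usual path:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

Restart or reload the Docker daemon after editing this file for the change to take effect.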
### Environment variables

The behavior of the runtime can be modified through environment variables (such as `NVIDIA_VISIBLE_DEVICES`).
Those environment variables are consumed by nvidia-container-runtime and are documented here.
Our official CUDA images use default values for these variables.
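For instance, `NVIDIA_DRIVER_CAPABILITIES` is another variable consumed by nvidia-container-runtime; the values below are illustrative:

```shell
# Request only the compute and utility driver capabilities for this container.
docker run --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    --rm nvidia/cuda nvidia-smi
```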
## Issues and Contributing

A signed copy of the Contributor License Agreement needs to be provided to digits@nvidia.com before any change can be accepted.

- Please let us know by filing a new issue
- You can contribute by opening a pull request