Simplify README, move advanced documentation to the wiki
parent fe1874942b
commit 2d04ea2897

README.md `@@ -7,93 +7,57 @@`
**Warning: This project is based on an alpha release (libnvidia-container). It is already more stable than 1.0, but we need help testing it.**

## Differences with 1.0

* Doesn't require wrapping the Docker CLI and doesn't need a separate daemon.
* GPU isolation is now achieved with the environment variable `NVIDIA_VISIBLE_DEVICES`.
* Can enable GPU support for any Docker image, not just the ones based on our official CUDA images.
* Package repositories are available for Ubuntu and CentOS.
* Uses a new implementation based on [libnvidia-container](https://github.com/NVIDIA/libnvidia-container).
# Documentation

The full documentation is available on the [repository wiki](https://github.com/NVIDIA/nvidia-docker/wiki).

## Removing nvidia-docker 1.0

Version 1.0 of the nvidia-docker package must be cleanly removed before continuing.
You must stop and remove **all** containers started with nvidia-docker 1.0.
#### Ubuntu distributions

```sh
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge nvidia-docker
```

#### CentOS distributions

```sh
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker
```
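The cleanup one-liners above are dense; the sketch below dry-runs the same pipeline shape with `echo` standing in for the docker commands and hypothetical volume names (`vol1`, `vol2`), so no Docker is required. GNU xargs is assumed:

```sh
# -r means "run nothing on empty input", so the cleanup is a safe no-op
# when no nvidia-docker volumes exist:
printf '' | xargs -r echo "docker rm -f"
# (prints nothing)

# -I{} substitutes each input line into the command, once per line:
printf 'vol1\nvol2\n' | xargs -r -I{} echo "docker ps -q -a -f volume={}"
```

The real pipeline chains these stages: list volumes, map each volume to its container IDs, then force-remove all of them in one final `docker rm -f`.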
## Installation

**If you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package will override it.**
#### Ubuntu distributions

1. Install the repository for your distribution by following the instructions [here](http://nvidia.github.io/nvidia-docker/).
2. Install the `nvidia-docker2` package and restart the Docker daemon:

   ```sh
   sudo apt-get install nvidia-docker2
   sudo pkill -SIGHUP dockerd
   ```

#### Xenial x86_64

```sh
# If you have nvidia-docker 1.0 installed: we need to remove all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
```
#### CentOS distributions

1. Install the repository for your distribution by following the instructions [here](http://nvidia.github.io/nvidia-docker/).
2. Install the `nvidia-docker2` package and restart the Docker daemon:

   ```sh
   sudo yum install nvidia-docker2
   sudo pkill -SIGHUP dockerd
   ```
## Usage

#### NVIDIA runtime

nvidia-docker registers a new container runtime with the Docker daemon.
You must select the `nvidia` runtime when using `docker run`:

```sh
# Test nvidia-smi
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```

#### GPU isolation

Set the environment variable `NVIDIA_VISIBLE_DEVICES` in the container:

```sh
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 --rm nvidia/cuda nvidia-smi
```
#### CentOS/RHEL 7 x86_64

```sh
# If you have nvidia-docker 1.0 installed: we need to remove all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/centos7/x86_64/nvidia-docker.repo | \
  sudo tee /etc/yum.repos.d/nvidia-docker.repo

# Install nvidia-docker2
sudo yum install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```
#### Non-CUDA images

Setting `NVIDIA_VISIBLE_DEVICES` will enable GPU support for any container image:

```sh
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all --rm debian:stretch nvidia-smi
```
#### Other distributions and architectures

## Advanced
#### Backward compatibility

To help transition code from 1.0 to 2.0, a bash script is provided at `/usr/bin/nvidia-docker` for backward compatibility.
It will automatically inject the `--runtime=nvidia` argument and convert `NV_GPU` to `NVIDIA_VISIBLE_DEVICES`.
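The wrapper's behavior can be sketched in a few lines of portable shell. This is a hypothetical illustration, not the packaged script: `nvidia_docker_compat` is a made-up name, and it echoes the docker command it would run instead of executing it.

```sh
# Hypothetical sketch of the 1.0-compatibility logic (not the packaged script).
# Echoes the docker command it would run instead of executing it.
nvidia_docker_compat() {
    cmd=$1; shift
    if [ "$cmd" = "run" ] || [ "$cmd" = "create" ]; then
        if [ -n "${NV_GPU:-}" ]; then
            # Convert NV_GPU (1.0 style) to NVIDIA_VISIBLE_DEVICES (2.0 style)
            echo docker "$cmd" --runtime=nvidia -e "NVIDIA_VISIBLE_DEVICES=${NV_GPU}" "$@"
        else
            echo docker "$cmd" --runtime=nvidia "$@"
        fi
    else
        # Other subcommands pass through unchanged
        echo docker "$cmd" "$@"
    fi
}

NV_GPU=0,1 nvidia_docker_compat run --rm nvidia/cuda nvidia-smi
# prints: docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 --rm nvidia/cuda nvidia-smi
```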
#### Existing `daemon.json`

If you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package will override it.
In this case, it is recommended to install [nvidia-container-runtime](https://github.com/nvidia/nvidia-container-runtime#installation) instead and register the new runtime manually.
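Registering the runtime manually amounts to adding a `runtimes` entry to your own `/etc/docker/daemon.json`. A sketch, assuming `nvidia-container-runtime` was installed at its default path (verify the path on your system before copying):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

Merge this entry into your existing file rather than replacing it, then reload the Docker daemon configuration.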
#### Default runtime

The default runtime used by the Docker® Engine is [runc](https://github.com/opencontainers/runc); our runtime can become the default by configuring the Docker daemon with `--default-runtime=nvidia`.
Doing so removes the need to add the `--runtime=nvidia` argument to `docker run`.
It is also the only way to have GPU access during `docker build`.
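The same change can be made persistently in `/etc/docker/daemon.json` instead of passing the flag to `dockerd`. A hedged sketch, again assuming the default install path of `nvidia-container-runtime`:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```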
#### Environment variables

The behavior of the runtime can be modified through environment variables (such as `NVIDIA_VISIBLE_DEVICES`).
Those environment variables are consumed by [nvidia-container-runtime](https://github.com/nvidia/nvidia-container-runtime) and are documented [here](https://github.com/nvidia/nvidia-container-runtime#environment-variables-oci-spec).
Our official CUDA images use default values for these variables.
For other distributions and architectures, look at the [Installation section](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)) of the wiki.
## Issues and Contributing