cuDNN
Using the NVIDIA cuDNN library with DL4J.

Using Deeplearning4j with cuDNN

There are two ways of using cuDNN with Deeplearning4j. The first is an older approach, described below, that is built into the various Deeplearning4j layers at the Java level.
The other is to use the new ND4J CUDA bindings that link to cuDNN at the C++ level. Both are described below, the newer approach first, followed by the older one.

cuDNN setup

The actual cuDNN library is not bundled, so be sure to download and install the appropriate package for your platform from NVIDIA.
Note that multiple combinations of cuDNN and CUDA are supported. Deeplearning4j's CUDA support is based on JavaCPP's CUDA bindings. The versioning reads: CUDA version - cuDNN version - JavaCPP version. For example, if the CUDA version is set to 11.2, you can expect us to support cuDNN 8.1.
To install, simply extract the library to a directory found in the system path used by native libraries. The easiest way is to place it alongside other libraries from CUDA in the default directory (/usr/local/cuda/lib64/ on Linux, /usr/local/cuda/lib/ on Mac OS X, and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\, C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\, or C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\ on Windows).
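To check that the directory you extracted cuDNN into is actually visible to the JVM, you can inspect the `java.library.path` system property. The sketch below is illustrative; the `containsDir` helper and the example Linux directory are not part of DL4J, just a quick diagnostic, and you should adjust the directory for your platform:

```java
import java.io.File;

public class CheckLibraryPath {
    // Returns true if the given directory appears as an entry in the
    // supplied path string (entries separated by the platform separator).
    static boolean containsDir(String libraryPath, String dir) {
        for (String entry : libraryPath.split(File.pathSeparator)) {
            if (entry.equals(dir)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Example location for Linux; adjust for your platform.
        String cudaLibDir = "/usr/local/cuda/lib64";
        String libraryPath = System.getProperty("java.library.path", "");
        System.out.println("java.library.path = " + libraryPath);
        System.out.println("cuDNN directory on path: "
                + containsDir(libraryPath, cudaLibDir));
    }
}
```

If the directory is missing, either move the libraries or launch the JVM with `-Djava.library.path=...` pointing at it.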
Alternatively, for the most recent supported CUDA version, cuDNN comes bundled with the "redist" package of the JavaCPP Presets for CUDA. After agreeing to the license, you can add the following dependency instead of installing CUDA and cuDNN:
```xml
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>cuda-platform-redist</artifactId>
    <version>$CUDA_VERSION-$CUDNN_VERSION-$JAVACPP_VERSION</version>
</dependency>
```
The same versioning scheme used for redist also applies to the CUDA bindings that leverage an installed CUDA.
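As a concrete illustration of this scheme, CUDA 11.2 with cuDNN 8.1 and JavaCPP 1.5.5 would resolve to the coordinates below. The exact version string is an assumption for illustration; check Maven Central for the release that matches your setup:

```xml
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>cuda-platform-redist</artifactId>
    <!-- cuda version - cudnn version - javacpp version -->
    <version>11.2-8.1-1.5.5</version>
</dependency>
```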

Using cuDNN via nd4j

Similar to our AVX bindings, ND4J leverages our C++ library, libnd4j, for running mathematical operations. In order to use cuDNN, all you need to do is change the CUDA backend dependency from:
```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.2</artifactId>
    <version>1.0.0-M1</version>
</dependency>
```
or, for CUDA 11.0:
```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.0</artifactId>
    <version>1.0.0-M1</version>
</dependency>
```
to:
```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.2</artifactId>
    <version>1.0.0-M1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.2</artifactId>
    <version>1.0.0-M1</version>
    <classifier>linux-x86_64-cudnn</classifier>
</dependency>
```
or, for CUDA 11.0:
```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.0</artifactId>
    <version>1.0.0-M1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.0</artifactId>
    <version>1.0.0-M1</version>
    <classifier>linux-x86_64-cudnn</classifier>
</dependency>
```
For Jetson Nano (CUDA 10.2):
```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-10.2</artifactId>
    <version>1.0.0-M1.1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-10.2</artifactId>
    <version>1.0.0-M1.1</version>
    <classifier>linux-arm64</classifier>
</dependency>
```
Note that we are only adding an additional dependency. The reason we use an additional classifier is to pull in an optional dependency on cuDNN-based routines. The default build does not use cuDNN; instead, it uses built-in standalone implementations of the operations cuDNN accelerates, such as conv2d and LSTM.
For users of the -platform dependencies such as nd4j-cuda-11.2-platform, this classifier is still required. The -platform dependencies try to set sane defaults for each platform, but give users the option to include whatever they want. If you need these optimizations, please become familiar with this classifier.
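For example, a -platform user who wants the cuDNN-backed routines would declare both the platform artifact and the classified artifact side by side. This is a sketch under the same version assumptions as the blocks above; pick the classifier that matches your operating system and architecture:

```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.2-platform</artifactId>
    <version>1.0.0-M1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.2</artifactId>
    <version>1.0.0-M1</version>
    <classifier>linux-x86_64-cudnn</classifier>
</dependency>
```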

Using cuDNN via Deeplearning4j

Deeplearning4j supports CUDA, but it can be further accelerated with cuDNN. Most 2D CNN layers (such as ConvolutionLayer and SubsamplingLayer), as well as LSTM and BatchNormalization layers, support cuDNN.
The only thing we need to do to have DL4J load cuDNN is to add a dependency on deeplearning4j-cuda-11.0 or deeplearning4j-cuda-11.2, for example:
```xml
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-cuda-11.0</artifactId>
    <version>1.0.0-M1.1</version>
</dependency>
```
or
```xml
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-cuda-11.2</artifactId>
    <version>1.0.0-M1.1</version>
</dependency>
```
Also note that, by default, Deeplearning4j will use the fastest algorithms available according to cuDNN, but memory usage may be excessive, causing strange launch errors. When this happens, try to reduce memory usage with the NO_WORKSPACE mode, settable via the network configuration, instead of the default ConvolutionLayer.AlgoMode.PREFER_FASTEST, for example:
```java
// for the whole network
new NeuralNetConfiguration.Builder()
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
        // ...

// or separately for each layer
new ConvolutionLayer.Builder(h, w)
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
        // ...
```