cuDNN
Using the NVIDIA cuDNN library with DL4J.
Using Deeplearning4j with cuDNN
Deeplearning4j supports CUDA but can be further accelerated with cuDNN. Most 2D CNN layers (such as ConvolutionLayer, SubsamplingLayer, etc.), as well as LSTM and BatchNormalization layers, support cuDNN.
The only thing we need to do to have DL4J load cuDNN is to add a dependency on deeplearning4j-cuda-9.2, deeplearning4j-cuda-10.0, deeplearning4j-cuda-10.1, or deeplearning4j-cuda-10.2.
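For example, as a Maven dependency (a sketch only: swap the artifactId for the CUDA version you target, and use the same DL4J version as the rest of your project; the version shown below is an assumed placeholder):

```xml
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-cuda-10.2</artifactId>
    <!-- Placeholder version: match the DL4J version used elsewhere in your project -->
    <version>1.0.0-beta6</version>
</dependency>
```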
The actual library for cuDNN is not bundled, so be sure to download and install the appropriate package for your platform from NVIDIA: https://developer.nvidia.com/cudnn
Note that there are multiple combinations of cuDNN and CUDA supported. At this time the following combinations are supported by Deeplearning4j:

| CUDA Version | cuDNN Version |
| --- | --- |
| 9.2 | 7.2 |
| 10.0 | 7.4 |
| 10.1 | 7.6 |
| 10.2 | 7.6 |
To install, simply extract the library to a directory found in the system path used by native libraries. The easiest way is to place it alongside other libraries from CUDA in the default directory (/usr/local/cuda/lib64/ on Linux, /usr/local/cuda/lib/ on Mac OS X, and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin\, C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\, or C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\ on Windows).
Alternatively, in the case of CUDA 10.2, cuDNN comes bundled with the "redist" package of the JavaCPP Presets for CUDA. After agreeing to the license, we can add the following dependencies instead of installing CUDA and cuDNN:
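A sketch of such a dependency, assuming Maven is used; the version string pairs the CUDA and cuDNN versions with a JavaCPP Presets release, and the value shown here is an assumed example that should be chosen to match your DL4J/ND4J backend version:

```xml
<!-- Sketch: pulls the CUDA "redist" package (including cuDNN) via the JavaCPP Presets -->
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>cuda-platform-redist</artifactId>
    <!-- Assumed example version: CUDA 10.2, cuDNN 7.6, JavaCPP Presets 1.5.2 -->
    <version>10.2-7.6-1.5.2</version>
</dependency>
```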
Also note that, by default, Deeplearning4j will use the fastest algorithms available according to cuDNN, but memory usage may be excessive, causing strange launch errors. When this happens, try to reduce memory usage by using the NO_WORKSPACE mode settable via the network configuration, instead of the default of ConvolutionLayer.AlgoMode.PREFER_FASTEST, for example:
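A minimal sketch of setting this mode, assuming the standard DL4J builders; the cudnnAlgoMode setter can be applied globally on NeuralNetConfiguration.Builder or on an individual ConvolutionLayer.Builder, and the kernel size and channel counts below are placeholders:

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

// For the whole network: sets the default cuDNN algorithm mode for all convolution layers
NeuralNetConfiguration.Builder globalConf = new NeuralNetConfiguration.Builder()
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE);
        // ... rest of the network configuration

// Or for an individual layer only (3x3 kernel, placeholder channel counts)
ConvolutionLayer convLayer = new ConvolutionLayer.Builder(3, 3)
        .nIn(1).nOut(16)
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
        .build();
```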