Workspaces are an efficient memory-reuse model in DL4J.
ND4J offers an additional memory-management model: workspaces. Workspaces allow you to reuse memory for cyclic workloads without relying on the JVM Garbage Collector to track off-heap memory. In other words, at the end of each workspace loop, the memory content of all INDArrays created inside that workspace is invalidated. Workspaces are integrated into DL4J for both training and inference.
The basic idea is simple: you can do whatever you need within a workspace (or several workspaces), and if you want to get an INDArray out of it (i.e. to move a result out of the workspace), you just call INDArray.detach() and you'll get an independent copy of that INDArray.
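For example, a minimal sketch (the workspace id "WS_EXAMPLE" and the array contents here are just illustrative):

```java
import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray result;
// Open (and, on first use, register) a workspace for the current thread
try (MemoryWorkspace ws = Nd4j.getWorkspaceManager().getAndActivateWorkspace("WS_EXAMPLE")) {
    INDArray scoped = Nd4j.rand(3, 3);      // lives inside the workspace
    result = scoped.add(1.0).detach();      // independent copy, moved out of the workspace
}
// 'result' is now a regular off-heap array tracked via the JVM GC path
```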
For DL4J users, workspaces provide better performance out of the box, and are enabled by default from 1.0.0-alpha onwards. Thus, for most users, no explicit workspace configuration is required.
To benefit from workspaces on earlier versions, they need to be enabled explicitly. You can configure the workspace mode using .trainingWorkspaceMode(WorkspaceMode.SEPARATE) and/or .inferenceWorkspaceMode(WorkspaceMode.SINGLE) in your neural network configuration.
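For instance, a minimal configuration sketch (the layer and its hyperparameters are placeholders; only the two workspace-mode calls matter here):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.WorkspaceMode;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .trainingWorkspaceMode(WorkspaceMode.SEPARATE)   // workspace mode used during fit()
        .inferenceWorkspaceMode(WorkspaceMode.SINGLE)    // workspace mode used during output()
        .list()
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .activation(Activation.IDENTITY)
                .nIn(10).nOut(1).build())
        .build();
```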
The difference between SEPARATE and SINGLE workspaces is a tradeoff between performance and memory footprint:
SEPARATE is slightly slower, but uses less memory.
SINGLE is slightly faster, but uses more memory.
That said, it's fine to use different modes for training and inference (i.e. use SEPARATE for training, and use SINGLE for inference, since inference only involves a feed-forward pass, without backpropagation or updaters).
With workspaces enabled, all memory used during training is reusable and tracked without JVM GC interference. The only exception is the output() method, which internally uses workspaces (if enabled) for the feed-forward pass and then detaches the resulting INDArray from the workspace, giving you an independent INDArray that will be handled by the JVM GC.
Please note: after the 1.0.0-alpha release, workspaces in DL4J were refactored - the SEPARATE/SINGLE modes have been deprecated, and users should use ENABLED instead.
If your training process uses workspaces, we recommend that you disable (or reduce the frequency of) periodic GC calls.
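A minimal sketch using ND4J's memory manager (the 10-second auto-GC window below is just an example value):

```java
import org.nd4j.linalg.factory.Nd4j;

// Option 1: disable the periodic System.gc() calls entirely
Nd4j.getMemoryManager().togglePeriodicGc(false);

// Option 2: keep periodic GC, but invoke it at most once every 10 seconds
Nd4j.getMemoryManager().setAutoGcWindow(10000);
```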
Put this somewhere before your model.fit(...) call.
For ParallelWrapper, a workspace-mode configuration option was also added; each of the trainer threads will use a separate workspace attached to its designated device.
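A sketch of how that might look, assuming the ParallelWrapper builder exposes a workspaceMode(...) option as described above ('model', the worker count, and the buffer/averaging values are placeholders):

```java
import org.deeplearning4j.nn.conf.WorkspaceMode;
import org.deeplearning4j.parallelism.ParallelWrapper;

// 'model' is an already-configured MultiLayerNetwork or ComputationGraph
ParallelWrapper wrapper = new ParallelWrapper.Builder(model)
        .workers(2)                           // e.g. one trainer thread per device
        .prefetchBuffer(8)
        .averagingFrequency(3)
        .workspaceMode(WorkspaceMode.SINGLE)  // each trainer thread gets its own workspace
        .build();
```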
We provide asynchronous prefetch iterators, AsyncDataSetIterator and AsyncMultiDataSetIterator, which are usually used internally.
These iterators optionally use a special, cyclic workspace mode to obtain a smaller memory footprint. In this case, the size of the workspace is determined by the memory requirements of the first DataSet coming out of the underlying iterator, while the buffer size is defined by the user. The workspace will be adjusted if memory requirements change over time (e.g. if you're using variable-length time series).
Caution: if you're using a custom iterator or a RecordReader, please make sure you're not initializing something huge within the first next() call. Do that in your constructor instead, to avoid undesired workspace growth.
Caution: when AsyncDataSetIterator is used, each DataSet is supposed to be consumed before the next next() call. You should not store DataSets in any way without calling detach(); otherwise, the memory used for the INDArrays within the DataSet will eventually be overwritten within AsyncDataSetIterator.
If for some reason you don't want your iterator to be wrapped in an asynchronous prefetch (e.g. for debugging purposes), special wrappers are provided: AsyncShieldDataSetIterator and AsyncShieldMultiDataSetIterator. These are just thin wrappers that prevent prefetch.
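For illustration, a sketch of both cases (here 'base' is assumed to be any existing DataSetIterator, and the queue size of 4 is arbitrary):

```java
import org.deeplearning4j.datasets.iterator.AsyncDataSetIterator;
import org.deeplearning4j.datasets.iterator.AsyncShieldDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

// Explicit asynchronous prefetch, buffering up to 4 DataSets ahead of training
DataSetIterator async = new AsyncDataSetIterator(base, 4);

// Or: prevent DL4J from wrapping the iterator in an async prefetch (e.g. for debugging)
DataSetIterator shielded = new AsyncShieldDataSetIterator(base);
```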
Usually, evaluation assumes use of the model.output() method, which essentially returns an INDArray detached from the workspace. In the case of regular evaluations during training, it might be better to use the built-in methods for evaluation. For example:
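A sketch of what that might look like ('model' and 'iteratorTest' are assumed to exist already; the chosen IEvaluation implementations are placeholders):

```java
import org.deeplearning4j.eval.Evaluation;
import org.deeplearning4j.eval.ROC;

Evaluation eval = new Evaluation();   // standard classification metrics
ROC roc = new ROC();                  // ROC/AUC, e.g. for binary classification

// Single pass over iteratorTest, filling both evaluations in one go
model.doEvaluation(iteratorTest, eval, roc);

System.out.println(eval.stats());
```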
This piece of code will run a single cycle over iteratorTest, and it will update both IEvaluation implementations (or fewer/more, as required by your needs) without any additional INDArray allocation.
There are also situations, say, when you're short on RAM, where you might want to release all workspaces created outside of your control, e.g. during evaluation or training. That can be done like so: Nd4j.getWorkspaceManager().destroyAllWorkspacesForCurrentThread();
This method will destroy all workspaces that were created within the calling thread. If you've created workspaces in some external threads yourself, you can use the same method in those threads once the workspaces are no longer needed.
If workspaces are used incorrectly (due to a bug in a custom layer or data pipeline, for example), you may see a workspace-related error message.
DL4J's Layer API includes the concept of a "layer workspace manager".
The idea behind this class is that it allows us to easily and precisely control the location of a given array, given the different possible workspace configurations. For example, the activations out of a layer may be placed in one workspace during inference and another during training, for performance reasons. However, with the LayerWorkspaceMgr design, implementers of layers don't need to worry about this.
What does this mean in practice? Usually it's quite simple...
When returning activations (the activate(boolean training, LayerWorkspaceMgr workspaceMgr) method), make sure the returned array is defined in ArrayType.ACTIVATIONS (i.e., use LayerWorkspaceMgr.create(ArrayType.ACTIVATIONS, ...) or similar).
When returning activation gradients (backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)), similarly return an array defined in ArrayType.ACTIVATION_GRAD.
You can also move (leverage) an array defined in any workspace to the appropriate workspace using, for example, LayerWorkspaceMgr.leverageTo(ArrayType.ACTIVATIONS, myArray).
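For a custom layer, the activation path might look roughly like this sketch (the actual computation is a placeholder; only the workspace handling is the point, and 'input' is assumed to be the layer's input field):

```java
import org.deeplearning4j.nn.workspace.ArrayType;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;

@Override
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr) {
    INDArray out = input.dup();   // placeholder for the real forward-pass computation

    // Ensure the returned array lives in the ACTIVATIONS workspace,
    // regardless of where it was originally created
    return workspaceMgr.leverageTo(ArrayType.ACTIVATIONS, out);
}
```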
Note that if you are not implementing a custom layer (and instead just want to perform a forward pass for a layer outside of a MultiLayerNetwork/ComputationGraph), you can use LayerWorkspaceMgr.noWorkspaces().
Setting available Memory/RAM for a DL4J application
ND4J uses off-heap memory to store NDArrays, to provide better performance while working with NDArrays from native code such as BLAS and CUDA libraries.
"Off-heap" means that the memory is allocated outside of the JVM (Java Virtual Machine) and hence isn't managed by the JVM's garbage collection (GC). On the Java/JVM side, we only hold pointers to the off-heap memory, which can be passed to the underlying C++ code via JNI for use in ND4J operations.
To manage memory allocations, we use two approaches:
JVM Garbage Collector (GC) and WeakReference tracking
MemoryWorkspaces - see Workspaces guide for details
Despite the differences between these two approaches, the idea is the same: once an NDArray is no longer required on the Java side, the off-heap memory associated with it should be released so that it can be reused later. The difference between the GC and MemoryWorkspaces approaches lies in when and how the memory is released.
For JVM/GC memory: whenever an INDArray is collected by the garbage collector, its off-heap memory will be deallocated, assuming it is not used elsewhere.
For MemoryWorkspaces: whenever an INDArray leaves the workspace scope - for example, when a layer has finished its forward pass/predictions - its memory may be reused without deallocation and reallocation. This results in better performance for cyclical workloads like neural network training and inference.
With DL4J/ND4J, there are two types of memory limits to be aware of and configure: The on-heap JVM memory limit, and the off-heap memory limit, where NDArrays live. Both limits are controlled via Java command-line arguments:
-Xms - defines how much memory the JVM heap will use at application start.
-Xmx - the JVM heap memory limit (maximum at any point); memory is only allocated up to this amount (at the discretion of the JVM) if required.
-Dorg.bytedeco.javacpp.maxbytes - the off-heap memory limit. This can also be a percentage, in which case it applies to maxMemory.
-Dorg.bytedeco.javacpp.maxphysicalbytes - the maximum bytes for the entire process; usually set to maxbytes plus Xmx plus a bit extra, in case other libraries also require some off-heap memory. Unlike maxbytes, setting maxphysicalbytes is optional. This can also be a percentage (>100%), in which case it applies to maxMemory.
Example: Configuring 1GB initial on-heap, 2GB max on-heap, 8GB off-heap, 10GB maximum for process:
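Using the JVM and JavaCPP options described above, that corresponds to arguments along these lines:

```
-Xms1G -Xmx2G -Dorg.bytedeco.javacpp.maxbytes=8G -Dorg.bytedeco.javacpp.maxphysicalbytes=10G
```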
With GPU systems, the maxbytes and maxphysicalbytes settings currently also effectively define the memory limit for the GPU, since the off-heap memory is mapped (via NDArrays) to the GPU - read more about this in the GPU section below.
For many applications, you want less RAM to be used in JVM heap, and more RAM to be used in off-heap, since all NDArrays are stored there. If you allocate too much to the JVM heap, there will not be enough memory left for the off-heap memory.
If you get a "RuntimeException: Can't allocate [HOST] memory: xxx; threadId: yyy", you have run out of off-heap memory. In that case, you should usually use a WorkspaceConfiguration to handle your NDArray allocations, in particular in training or evaluation/inference loops. If you do not, the NDArrays and their off-heap (and GPU) resources are reclaimed via the JVM GC, which can introduce severe latency and possible out-of-memory situations.
If you don't specify a JVM heap limit, 1/4 of your total system RAM will be used as the limit, by default.
If you don't specify an off-heap memory limit, the JVM heap limit (Xmx) is used by default; i.e. -Xmx8G means that 8GB can be used by the JVM heap, and an additional 8GB can be used by ND4J off-heap.
In limited-memory environments, it's usually a bad idea to use a high -Xmx value together with the -Xms option, because doing so won't leave enough off-heap memory. Consider a 16GB system in which you set -Xms14G: 14GB of the 16GB would be allocated to the JVM, leaving only 2GB for off-heap memory, the OS, and all other programs.
ND4J supports the use of a memory-mapped file instead of RAM when using the nd4j-native backend. On one hand, it's slower than RAM; on the other hand, it allows you to allocate memory chunks in a manner impossible otherwise.
Here's sample code:
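A sketch (the workspace id "M2", the 1GB size, and the array shape are illustrative):

```java
import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.LocationPolicy;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

WorkspaceConfiguration mmap = WorkspaceConfiguration.builder()
        .initialSize(1000000000)               // ~1GB backing file
        .policyLocation(LocationPolicy.MMAP)   // back this workspace with a memory-mapped temp file
        .build();

try (MemoryWorkspace ws = Nd4j.getWorkspaceManager().getAndActivateWorkspace(mmap, "M2")) {
    INDArray x = Nd4j.create(10000);           // allocated in the mmap'ed file, not in RAM
}
```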
In this case, a 1GB temporary file will be created and mmap'ed, and the NDArray x will be created in that space. Obviously, this option is mostly viable for cases where you need NDArrays that can't fit into your RAM.
When using GPUs, your CPU RAM will often be greater than your GPU RAM. When GPU RAM is less than CPU RAM, you need to monitor how much RAM is being used off-heap. You can check this based on the JavaCPP options specified above.
We allocate memory on the GPU equivalent to the amount of off-heap memory you specify; we don't use any more of your GPU's memory than that. You can also specify an off-heap limit that is larger than your GPU's RAM (that's not encouraged, but it's possible); if you do, your GPU will run out of RAM when trying to run jobs.
We also allocate off-heap memory in CPU RAM. This is for efficient communication between the CPU and GPU, and so the CPU can access data from an NDArray without having to fetch it from the GPU each time it is needed.
If JavaCPP or your GPU throws an out-of-memory (OOM) error, or even if your computation slows down due to limited GPU memory, you may want to either decrease the batch size or increase the amount of off-heap memory that JavaCPP is allowed to allocate, if possible.
Try to run with an off-heap memory limit equal to your GPU's RAM. Also, always remember to set up a small JVM heap space using the Xmx option.
Note that if your GPU has less than 2GB of RAM, it's probably not usable for deep learning; consider using your CPU in that case. Typical deep-learning workloads should have at least 4GB of GPU RAM, and even that is small; 8GB of GPU RAM is recommended for deep learning workloads.
It is possible to use HOST-only memory with a CUDA backend. That can be done using workspaces.
Example:
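A sketch of such a workspace configuration (the other policy values shown here are just reasonable choices, not requirements; HOST_ONLY mirroring is the relevant option):

```java
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.AllocationPolicy;
import org.nd4j.linalg.api.memory.enums.LearningPolicy;
import org.nd4j.linalg.api.memory.enums.MirroringPolicy;
import org.nd4j.linalg.api.memory.enums.SpillPolicy;

WorkspaceConfiguration hostOnlyConfig = WorkspaceConfiguration.builder()
        .policyAllocation(AllocationPolicy.STRICT)
        .policyLearning(LearningPolicy.FIRST_LOOP)
        .policyMirroring(MirroringPolicy.HOST_ONLY)   // keep this workspace's memory on the host only
        .policySpill(SpillPolicy.EXTERNAL)
        .build();
```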
It's not recommended to use HOST-only arrays directly, since they will dramatically reduce performance. However, they might be useful as in-memory cache pairs, together with the INDArray.unsafeDuplication() method.