In light of the coming 1.0 release, the project has decided to cut a number of modules before the final release. These modules have had few users, have created confusion for people just trying to use a few simple APIs, and many of them have not been maintained.
There will likely be one or two more milestone releases before the final 1.0. These should be considered checkpoints.
These modules include:
Arbiter
Jumpy
DataVec modules for video, audio, and sound. The computer vision DataVec module will continue to be available.
Tokenizers: The tokenizers for Chinese, Japanese, and Korean were imported from other frameworks and were never really updated.
ScalNet, ND4S: We removed the Scala modules due to their small user base. We welcome third-party enhancements to the framework adding syntactic sugar in languages such as Kotlin and Scala. The framework's focus will be on providing the underlying technology rather than the de facto interfaces. If there is interest in something higher level, please discuss it on the community forums.
ARM support: We have included armcompute modules for core convolution routines. These routines can be found here
TVM: We now support running TVM modules. Docs coming soon.
We've updated our shaded modules to newer versions to mitigate security risks. These modules include jackson and guava.
CUDA 11: We've upgraded DL4J and associated modules to support CUDA 11 and 11.2.
A more modular model import framework supporting TensorFlow and ONNX:
1. Model mapping procedures loadable as protobuf
2. Defining custom rules for import to work around unsupported or custom layers/operations
3. Op descriptors for all operations in nd4j
This will enable users to override model import behavior to run their own custom models. This means, in most circumstances, there will be no need to modify model import core code anymore. Instead, users will be able to provide definitions and custom rules for their graphs.
Users will be expected to convert their models in an external process, i.e., by running standalone conversions of their models. This extends to Keras import as well; previously, some users converted their models in production directly from Keras.
The workflow going forward is to ensure that your model is converted ahead of time to avoid performance issues with converting large models.
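For illustration, a minimal sketch of this ahead-of-time conversion workflow for a frozen TensorFlow graph, using the existing SameDiff.importFrozenTF and SameDiff.save methods rather than the new importer entry points (the file names are hypothetical):

```java
import org.nd4j.autodiff.samediff.SameDiff;
import java.io.File;

public class ConvertAheadOfTime {
    public static void main(String[] args) throws Exception {
        // Import a frozen TensorFlow graph into a SameDiff instance (done once, offline)
        SameDiff sd = SameDiff.importFrozenTF(new File("frozen_graph.pb"));
        // Persist the converted model (FlatBuffers format) so production code only loads
        // the pre-converted artifact instead of converting a large model at runtime
        sd.save(new File("converted_model.fb"), true);
    }
}
```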
Removed ppc from nd4j-native-platform and nd4j-cuda-platform. If you need this architecture, please contact us or build from source.
Added more support for AVX/MKL-DNN/cuDNN-linked acceleration in our C++ library. We now have the ability to distribute more combinations of pre-compiled math kernels via different combinations of classifiers. See the ADR here for more details.
The class loader is now overridable. This is useful for OSGI and application server environments.
We've upgraded arrow to 4.0.0 enabling the associated nd4j-arrow and datavec-arrow modules to be used without netty clashes.
Improved Keras model import support for NHWC as well as NCHW input formats for both RNNs and CNNs
CTC Loss: We now have basic support for CTC loss in nd4j. This will enable the import of CTC loss based models for speech recognition as well as OCR.
Rewritten and more stable Python execution, allowing better support for multi-threaded environments.
Contributors: https://github.com/eclipse/deeplearning4j/issues?q=is%3Apr+author%3Amjlorenzo305
Adds proper support for java 9 modules: https://github.com/eclipse/deeplearning4j/pull/9631 https://github.com/eclipse/deeplearning4j/pull/9626
As part of the same work flatbuffers has been upgraded to 1.12.1. This affects the samediff file format and the user interfaces. Flatbuffers as a file format is forwards and backwards compatible but if you have any issues please do let us know. The relevant files have been updated using the flatc compiler.
Removed rl4j: in continuing to cut unmaintained modules, the 1.0 will focus the framework on a few key use cases. This invites other folks to build external modules for a tightly maintained core that focuses on deployment, framework interop and training models in java.
Added new model zoo module called omnihub for dl4j and new samediff models. These can be found here: https://github.com/KonduitAI/omnihub-zoo See more in the new omnihub section.
Migrated the snapshots to sonatype's new repository https://s01.oss.sonatype.org. More context can be found here: https://twitter.com/Brian_Fox/status/1357414532512104448 https://github.com/eclipse/deeplearning4j/pull/9618
Consolidated tests to platform-tests to allow for easy testing of behavior against different backends.
Adds proper support for jetson nano with curated binaries and an updated cuda 10.2
Adds Spark 3 support: https://github.com/eclipse/deeplearning4j/pull/9444
Reduce binary size using selective compilation: https://github.com/eclipse/deeplearning4j/pull/9443
Removed Scala 2.11 support; only Scala 2.12 is now supported: https://github.com/eclipse/deeplearning4j/pull/9451 https://github.com/eclipse/deeplearning4j/pull/9440
Extensive enhancements for samediff model training: https://github.com/eclipse/deeplearning4j/pull/9501
Add beginnings of graph optimization framework: https://github.com/eclipse/deeplearning4j/pull/9402
Many onnx model import improvements (add new ops): https://github.com/eclipse/deeplearning4j/pull/9411 https://github.com/eclipse/deeplearning4j/pull/9489 https://github.com/eclipse/deeplearning4j/pull/9475 https://github.com/eclipse/deeplearning4j/pull/9526 https://github.com/eclipse/deeplearning4j/pull/9502 https://github.com/eclipse/deeplearning4j/pull/9587 https://github.com/eclipse/deeplearning4j/pull/9599
Add new op subset frameworks: allows selective inclusion of operations to enable users to reduce binary size: https://github.com/eclipse/deeplearning4j/pull/9443 https://github.com/eclipse/deeplearning4j/pull/9451 https://github.com/eclipse/deeplearning4j/pull/9569
Add updated jetson nano support: https://github.com/eclipse/deeplearning4j/pull/9432
Enhance codegen exposing more functions for samediff: https://github.com/eclipse/deeplearning4j/pull/9478 https://github.com/eclipse/deeplearning4j/pull/9503 https://github.com/eclipse/deeplearning4j/pull/9500
Add new samediff eager mode (mainly used for model import use cases): https://github.com/eclipse/deeplearning4j/pull/9538 https://github.com/eclipse/deeplearning4j/pull/9535 https://github.com/eclipse/deeplearning4j/pull/9533
Add dimensions as input variables: https://github.com/eclipse/deeplearning4j/pull/9584
Update samediff api to allow dimensions as variables
Fix up conditions/matching: https://github.com/eclipse/deeplearning4j/pull/9551
ImageResize updates to improve compatibility with onnx: https://github.com/eclipse/deeplearning4j/pull/9495
Rewrite compat sparse to dense op: https://github.com/eclipse/deeplearning4j/pull/9566
Fix creation of string scalar ndarrays: https://github.com/eclipse/deeplearning4j/pull/9556
Fix serialization with conv/pooling3d: https://github.com/eclipse/deeplearning4j/pull/9648
Add Spark 3 support: https://github.com/eclipse/deeplearning4j/pull/9553
Added Deconvolution3D for keras import https://github.com/eclipse/deeplearning4j/pull/9399
Add full channels last support for 3d convolutions: https://github.com/eclipse/deeplearning4j/pull/9578
Fix confusion matrix count increments: https://github.com/eclipse/deeplearning4j/pull/9553
Fix Conv3D data format serialization: https://github.com/eclipse/deeplearning4j/pull/9648
Add LabelsSource to BagOfWordsVectorizer (thanks to XAI!): https://github.com/eclipse/deeplearning4j/pull/9624
Performance enhancement for mnist related datasetiterators: https://github.com/eclipse/deeplearning4j/pull/9612
Fix memory leak in datavec-arrow: https://github.com/eclipse/deeplearning4j/pull/9441
Launches new Omnihub module. Allows access to models from: https://github.com/KonduitAI/omnihub-zoo
The pretrained omnihub module will provide access to pretrained SameDiff and DL4J models. This will also supplant the old DL4J zoo.
Modules will be made available from a Pretrained class:https://github.com/eclipse/deeplearning4j/blob/feb8eee5eb07239c49a4d14786114dc0394aad4e/omnihub/src/main/java/org/eclipse/deeplearning4j/omnihub/models/Pretrained.java#L30
Clean up tests/consolidate tests to platform-tests
Added model server - remote inference of SameDiff and DL4J models using JSON or (optionally) binary serialization
Server: See
Client: See
Tests/examples: See and
Added Scala 2.12 support, dropped Scala 2.10 support. Modules with Scala dependencies are now released with Scala 2.11 and 2.12 versions
Apache Spark 1.x support dropped (now only Spark 2.x is supported). Note: Spark version suffix dropped: For upgrading: 1.0.0-beta4_spark2 -> 1.0.0-beta5
Added FastText support to deeplearning4j-nlp
CUDA support for all ND4J/SameDiff Operations
In 1.0.0-beta4, some operations were CPU only. Now, all operations have full CUDA support
Added support for new data types in ND4J (and DL4J/SameDiff): BFLOAT16, UINT16, UINT32, UINT64
ND4J: Implicit broadcasting support added to INDArray (already present in SameDiff) - for example, shape [3,1] + [3,2] = [3,2]
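A minimal sketch of the implicit broadcasting behaviour described above (array contents are arbitrary):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class BroadcastExample {
    public static void main(String[] args) {
        INDArray a = Nd4j.ones(3, 1);   // shape [3,1]
        INDArray b = Nd4j.ones(3, 2);   // shape [3,2]
        INDArray c = a.add(b);          // broadcast along dimension 1 -> shape [3,2]
        System.out.println(java.util.Arrays.toString(c.shape())); // [3, 2]
    }
}
```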
CUDA 9.2, 10.0 and 10.1-Update2 still supported
NOTE: For CUDA 10.1, CUDA 10.1 update 2 is recommended. CUDA 10.1 and 10.1 Update 1 will still run, but rare internal cuBLAS issues may be encountered in heavily multi-threaded code on some systems
Dependency upgrades: Jackson (2.5.1 to 2.9.9/2.9.9.3), Commons Compress (1.16.1 to 1.18), Play Framework (2.4.8 to 2.7.3), Guava: (20.0 to 28.0-jre, and shaded to avoid dependency clashes)
CUDA: now host (RAM) buffers are only allocated when required (previously: host buffers were always allocated), in addition to device (GPU) buffer
DL4J AsyncDataSetIterator and AsyncMultiDataSetIterator moved to ND4J; use org.nd4j.linalg.dataset.Async(Multi)DataSetIterator instead
Apache Spark 1.x support dropped (now only Spark 2.x is supported). Note: Spark version suffix dropped: For upgrading, change versions as follows: 1.0.0-beta4_spark2 -> 1.0.0-beta5
Scala 2.10 dropped, Scala 2.12 added (for modules with Scala dependencies)
Some layers (such as LSTM) may run slower on 1.0.0-beta5 than 1.0.0-beta4 on CUDA when not using cuDNN, due to added synchronization. This synchronization will be removed in the next release after 1.0.0-beta5
CUDA 10.1: Rare internal cuBLAS issues may be encountered in heavily multi-threaded code on some systems, when running CUDA 10.1 Update 1 (and maybe 10.1). CUDA 10.1 update 2 is recommended.
CUDA: now host (RAM) buffers are only allocated when required (previously: host buffers were always allocated), in addition to device (GPU) buffer
OldAddOp, OldSubOp, etc removed: Replace with AddOp, SubOp, etc
Nd4j.trueScalar and trueVector removed; use Nd4j.scalar and Nd4j.createFromArray methods
INDArray.javaTensorAlongDimension removed; use INDArray.tensorAlongDimension instead
INDArray.lengthLong() removed; use INDArray.length() instead
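A short migration sketch covering several of the removals listed above (values are illustrative):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class MigrationExample {
    public static void main(String[] args) {
        // Nd4j.trueScalar / trueVector removed:
        INDArray scalar = Nd4j.scalar(1.0);                    // instead of Nd4j.trueScalar(1.0)
        INDArray vector = Nd4j.createFromArray(1.0, 2.0, 3.0); // instead of Nd4j.trueVector(...)

        // INDArray.javaTensorAlongDimension removed:
        INDArray arr = Nd4j.rand(4, 6);
        INDArray tad = arr.tensorAlongDimension(0, 1);         // instead of javaTensorAlongDimension

        // INDArray.lengthLong() removed:
        long length = arr.length();                            // instead of lengthLong()
    }
}
```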
Added FastText - inference and training, including OOV (out of vocabulary) support ()
Scala 2.12 support added, Scala 2.10 support dropped ()
Added model server (DL4J and SameDiff models, JSON and binary communication) - , , ,
Added saved model format validation utilities - DL4JModelValidator, DL4JKerasModelValidator ()
Added LabelLastTimeStepPreProcessor ()
BertIterator: added option to prepend token to the output (such as [cls] expected by some models) ()
Added trace level logging to MultiLayerNetwork and ComputationGraph to assist with debugging certain issues ()
Upsampling3D: Added NDHWC support ()
MergeVertex now supports broadcasting ()
LSTM and Dropout will now fall back on built-in implementations if an exception is encountered from cuDNN (same as Subsampling/ConvolutionLayer) ()
Improved JavaDoc and cleaned up API for WordVectorSerializer (, )
Updated deeplearning4j-ui theme ()
Fixed an issue with MergeVertex and CNN3D activations ()
Fixed typo in Yolo2OutputLayer builder/configuration method name ()
Improved ComputationGraph builder InputType validation ()
Removed dl4j-spark-ml module until it can be properly maintained ()
Fixed an issue with BertWordPieceTokenizerFactory and bad character encoding ()
Fixed an issue with LearnedSelfAttentionLayer and variable minibatch size (, )
Fixed issue with SharedTrainingMaster controller address when set from environment variable ()
Fixed issue with SameDiffOutputLayer initialization under some circumstances ()
https is now used by default for data and zoo model downloads (, )
Fixed an issue where UI WebJars dependencies would check for updates on every single build (, )
Fixed issue where Upsampling layer memory report could produce an OOM exception ()
Improved UX/validation for RecordReaderDataSetIterator ()
Fixed an issue where EmbeddingSequenceLayer would not check mask array datatype ()
Improved validation when initializing networks with a non rank-2 (shape [1, numParams]) array ()
Fixed a DataType issue for BertIterator ()
Fixed Word2Vec model backward compatibility (beta3 and earlier models now loadable again)
Fixed issue where some Keras import models could fail with Could not read abnormally long HDF5 attribute ()
Added validation for RnnOutputLayer - feature/label array lengths ()
Fixed an issue where SameDiffOutputLayer would not support variable minibatch size ()
Fixed DL4J SameDiff layer mask support ()
DL4J UI: Fixed an issue where tab switching did not work when visualizing saved/stored data (, )
DL4J UI: Fixed a rare UI threading issue ()
Fixed a Keras import issue with JSON format change ()
Fixed a Keras import issue where updater learning rate schedule could be imported incorrectly ()
Fixed an issue with CnnSentenceDataSetIterator when using UnknownWordHandling.UseUnknownVector (, )
Fixes and optimizations to DL4J SameDiff layers ()
MultiLayerNetwork/ComputationGraph will now log the original exception if a second exception occurs during workspace closing, instead of swallowing it (inference/fit operation try/finally blocks) ()
Upgraded dependencies: Jackson (2.5.1 to 2.9.9/2.9.9.3), Commons Compress (1.16.1 to 1.18), Play Framework (2.4.8 to 2.7.3), Guava: (20.0 to 28.0-jre, shaded to avoid dependency clashes) ()
Logging framework can now be configured for DL4J UI (due to Play framework dependency upgrade) ()
Reduced amount of garbage produced by MnistDataFetcher (impacts MNIST and EMNIST DataSetIterators) ()
Activation function backpropagation has been optimized for many activation functions (, )
Saved models with custom layers from 1.0.0-alpha and before can no longer be loaded. Workaround: load in 1.0.0-beta4, and re-save the model (). Models without custom layers can still be loaded back to 0.5.0
dl4j-spark_2.11 and _2.12 dependencies incorrectly pull in datavec-spark_2.11/2.12 version 1.0.0-SNAPSHOT. Workaround: control the version using dependency management as per (, )
Added new data types: BFLOAT16, UINT16, UINT32, UINT64 ()
Added CUDA support for all operations that previously lacked CUDA implementations (, , , , )
Added model server (DL4J and SameDiff models, JSON and binary communication) - , , ,
Added support for empty arrays with zeros in shape, for compatibility with TensorFlow import ()
Improved SameDiff training API - added "in line" test set evaluation, returning History object with loss curve, etc ()
Added saved model format validation utilities - Nd4jValidator, Nd4jCommonValidator ()
Added SameDiff ScoreListener (equivalent to DL4J ScoreIterationListener/PerformanceListener) (, )
Added SameDiff.convertDataTypes method, for variable dtype conversion ()
Added crop and resize op ()
DL4J AsyncDataSetIterator and AsyncMultiDataSetIterator moved to ND4J
Added basic/MVP SameDiff UI listener ()
Added SameDiff CheckpointListener (, )
Added SameDiff name scopes ()
SameDiff: Updater state and training configuration is now written to FlatBuffers format ()
Added c++ benchmark suite callable from Java - call using Nd4j.getExecutioner().runLightBenchmarkSuit() and Nd4j.getExecutioner().runFullBenchmarkSuit() ()
Added SameDiff.save/load methods with InputStream/OutputStream arguments (, )
Added axis configuration for evaluation instances (Evaluation, RegressionEvaluation, ROC, etc - getAxis and setAxis methods) to allow different data formats (NCHW vs. NHWC for CNNs, for example) ()
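For example, a small sketch of switching an Evaluation instance to channels-last (NHWC) data using the setAxis method mentioned above; the package and the axis value assume the post-beta4 ND4J evaluation classes and 4d NHWC activations:

```java
import org.nd4j.evaluation.classification.Evaluation;

public class EvaluationAxisExample {
    public static void main(String[] args) {
        Evaluation eval = new Evaluation();
        // Default axis is 1 (NCHW, channels-first); for NHWC data the class/channel axis is the last dimension
        eval.setAxis(3);
        System.out.println(eval.getAxis());
    }
}
```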
SameDiff: Added support to convert constants to placeholders, via SDVariable.convertToConstant() method ()
SameDiff: Added GradCheckUtil.checkActivationGradients method to check activation gradients for SameDiff instance (not just parameter gradients as in existing gradient check methods) ()
Added CheckNumerics op ()
Added FakeQuantWithMinMaxArgs and FakeQuantWithMinMaxVars ops ()
Added INDArray reduction methods with "keep dimensions" option - for example, INDArray.mean(boolean, int... dimension) ()
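A small sketch of the "keep dimensions" reduction option (the shapes in the comments follow from the signature above):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class KeepDimsExample {
    public static void main(String[] args) {
        INDArray arr = Nd4j.rand(3, 4);
        INDArray meanKeep = arr.mean(true, 1);   // keepDims=true  -> shape [3, 1]
        INDArray meanDrop = arr.mean(false, 1);  // keepDims=false -> shape [3]
        System.out.println(java.util.Arrays.toString(meanKeep.shape()));
        System.out.println(java.util.Arrays.toString(meanDrop.shape()));
    }
}
```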
Added Nd4j SystemInfo class - SystemInfo.getSystemInfo, .writeSystemInfo(File) to aid with debugging issues (, )
Added INDArray.toString(NDArrayStrings options), toStringFull() and toString overloads for easier control of array printing ()
Added HashCode op, INDArray.hashCode() ()
SameDiff: added whileLoop, ifCond methods for loops/conditional ops ()
Cleaned up some infrequently used Nd4j methods (, , , )
Added bitwise integer operations: left/right bit shift, left/right cyclical bit shift, bitwise Hamming distance (, , , , )
deeplearning4j-nlp: renamed AggregatingSentencePreProcessor to sentencePreProcessor method ()
Upgraded (and shaded) Protobuf version - 3.5.1 to 3.8.0 ()
Switched to C-style error handling for libnd4j native operations ()
Renamed FlatBuffers enum org.nd4j.graph.DataType to org.nd4j.graph.DType to avoid users importing the incorrect type when using Nd4j methods (, )
Added SameDiff.bitwise namespace for bitwise ops (, )
Updated to JavaCPP/JavaCV 1.5.1-1 ()
SameDiff: Placeholders must now only be provided if required to calculate the requested variables ()
SameDiff: Fixed an issue with duplicate variable name validation ()
SameDiff: Fixed an issue with SDVariable.getArr for scalars ()
Added delayed mode to DeviceLocalNDArray (don't replicate to device until needed) ()
ND4J: Fixed an issue with writing 0d (scalar) NDArrays in numpy .npy format ()
Fixed an issue with Pad operation for some constant cases ()
Fixed some issues with strided_slice operation (, , )
SameDiff: Fixed issue with DataType inference for some ops using ND4J default datatype ()
INDArray.castTo(DataType) is now a no-op when array is already the correct type ()
SameDiff: Fixed an issue with training mixed precision networks ()
Fixed an issue where Evaluation class was incorrectly reporting macro-averaged precision for binary case ()
Removed trainableParams config/field from SameDiff TrainingConfig (no longer required) ()
Improvements and cleanup to ND4J Javadoc (, , , )
Fixed an issue with Cholesky Lapack op on CUDA (, )
Fixed an issue where [1,N] and [N,1] arrays were not considered a matrix (rank 2 array) according to INDArray.isMatrix() ()
Fixed RegressionEvaluation for 4D arrays (CNNs / segmentation) (, )
Fixed issue with INDArray.median(int... dimension) ()
Fixed NPE that could occur when executing gather operation backprop ()
Fixed issue with LogSumExp operation Java/C++ mapping ()
Added header validation when reading Numpy .npy files, to ensure file is valid ()
Fixed a possible issue with reading Numpy .npy files on CUDA ()
Fixed an issue when reading Numpy .npy boolean files ()
Various fixes for TensorFlow import ()
Fixed an issue with a small number of Nd4j.create methods not creating arrays corresponding to the java primitive ()
Improved shape validation for some Nd4j.create methods ()
Cleaned up unmaintained Nd4j.createSparse methods ()
Fixed a CUDA issue for CUDA GPUs with CC 3.0 ()
Fixed some possible integer overflows in c++ code ()
Removed deprecated methods: Nd4j.trueScalar and Nd4j.trueVector (, )
Fixed an issue where some JVMs could warn about "Illegal reflective access" due to a (now removed) SameDiff dependency ()
SDVariable now no longer extends DifferentialFunction ()
Moved numerous operation calculateOutputShape instances from Java to C++ ()
Fixed an issue where maxpool2d_bp could throw an exception when NaN values are present ()
Fixed an issue with concatenation of empty shapes (with zeros) ()
Removed INDArray.javaTensorAlongDimension ()
LayerNorm operation now properly supports axis arg, NCHW format data ()
libnd4j: cuBLAS hgemm (FP16 gemm) will only be called for devices with compute capability >= 5.3 due to cuBLAS limitations ()
Nd4j.readNumpy optimized ()
Added configurable alpha parameter to ELU and lrelu_bp operations in c++ ()
Cleaned up SameDiff SDCNN/SDRNN (SameDiff.cnn, .rnn) API/methods (, )
nd4j-native on some OSX systems can fail with Symbol not found: ___emutls_get_address - See
SBT 1.3.0 can fail with an Illegal character in path error; SBT 1.2.8 is OK. This is an SBT issue, not an ND4J issue. See for details
ImageRecordReader: Support for 16-bit TIFF added ()
Added SequenceTrimToLengthTransform ()
Fixed an issue with AnalyzeSpark and String columns ()
Fixed an issue with URL scheme detection in NumberedFileInputScheme ()
Fixed an issue with RandomPathFilter sampling being biased (, )
API cleanup and refactoring (, , , )
Fixed issue with compression for HistoryProcessor ()
Updated EvaluationScoreFunction to use ND4J Evaluation class metrics ()
Fixed incorrect search size in GridSearchCandidateGenerator ()
The Jackson version upgrade necessitated a change to how generic object serialization was performed; Arbiter JSON data stored in 1.0.0-beta4 or earlier format may not be readable in 1.0.0-beta5 ()
Added full data type support to ND4S as per ND4J ()
Added syntactic sugar for SameDiff (implicits, operator overloads) ()
ND4J/Deeplearning4j: Added support for CUDA 10.0. Dropped support for CUDA 8.0. (1.0.0-beta3 release has CUDA 9.0, 9.2 and 10.0 support)
SameDiff now supports training and evaluation from DataSetIterator and MultiDataSetIterator. Evaluation classes have been moved to ND4J.
DL4J Spark training (gradient sharing) is now fully fault tolerant, and has improvements for threshold adaption (potentially more robust convergence). Ports can now be easily configured independently on master/workers.
Added ComputationGraph/MultiLayerNetwork rnnTimeStep overload with user-specified workspace. Link
Added Cnn3DLossLayer Link
ParallelInference: Instances can now update the model in real-time (without re-init) Link
ParallelInference: Added ParallelInference INPLACE mode Link
Added validation for incompatible loss/activation function combinations (such as softmax+nOut=1, or sigmoid+mcxent). New validation can be disabled using outputValidation(false) Link
Spark training: overhauled gradient sharing threshold adaption algorithms; made it possible to customize threshold settings, plus made defaults more robust to initial threshold configuration improving convergence speed in some cases. Link
Spark training: implemented chunked messaging to reduce memory requirements (and insufficient buffer length issues) for large messages Link
Spark training: Added MeshBuildMode configuration for improved scalability for large clusters Link
Spark network data pipelines: added FileBatch, FileBatchRecordReader etc for "small files" (images etc) distributed training use cases Link
Added FailureTestingListener for fault tolerance/debugging purposes Link
Upgraded Apache Lucene/Solr to version 7.5.0 (from 7.4.0) Link
Made MultiLayerNetwork/ComputationGraph.clearLayerStates methods public (was protected) Link
AbstractLayer.layerConf() method is now public Link
ParallelWrapper module now no longer has a Scala version suffix for artifact id; new artifact id is deeplearning4j-parallel-wrapper Link
Improved validation and error messages for invalid inputs/labels in Yolo2OutputLayer Link
Spark training: added SharedTrainingMaster.Builder.workerTogglePeriodicGC and .workerPeriodicGCFrequency to easily configure the ND4J garbage collection configuration on workers. Set default GC to 5 seconds on workers Link
Spark training: added threshold encoding debug mode (logs current threshold and encoding statistics on each worker during training). Enable using SharedTrainingConfiguration.builder.encodingDebugMode(true). Note this operation has computational overhead. Link
Fixed an issue where L1/L2 and updaters (Adam, Nesterov, etc) were applied before dividing gradients by minibatch to obtain the average gradient. To maintain the old behaviour, use NeuralNetConfiguration.Builder.legacyBatchScaledL2(true) (see the sketch after these notes) Link.
Note that learning rates may need to be decreased for some updaters (such as Adam) to account for this change vs. earlier versions. Some other updaters (such as SGD, NoOp, etc) should be unaffected.
Note that deserialized (loaded) configurations/networks saved in 1.0.0-beta2 or earlier will default to old behaviour for backward compatibility. All new networks (created in 1.0.0-beta3) will default to the new behaviour.
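A minimal configuration sketch showing how to opt back into the old behaviour via the builder method named above; the layer setup is purely illustrative:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class LegacyL2Example {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .legacyBatchScaledL2(true)   // keep the old (pre-1.0.0-beta3) L1/L2 + updater ordering
                .l2(1e-4)
                .list()
                .layer(0, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(10).nOut(3).activation(Activation.SOFTMAX).build())
                .build();
        System.out.println(conf.toJson());
    }
}
```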
Fixed an issue where EarlyStoppingScoreCalculator would not correctly handle "maximize score" cases instead of minimizing Link
Fixed order (BGR vs. RGB) for VGG16ImagePreProcessor channel offset values Link
Fixed bug with variational autoencoders using weight noise Link
Fixed issue with BaseDataSetIterator not respecting the 'maximum examples' configuration Link
Optimization: A workspace is now used for ComputationGraph/MultiLayerNetwork evaluation methods (avoids allocating off-heap memory during evaluation that must be cleaned up by garbage collector) Link
Fixed an issue where shuffling combined with a subset for MnistDataSetIterator would not maintain the same subset between resets Link
Fixed issue with StackVertex.getOutputType Link
Fix issue with CNN to/from RNN preprocessors handling of mask arrays Link
Fixed issue with VGG16 non-pretrained configuration in model zoo Link
Fixed issue with TransferLearning nOutReplace where multiple layers in a row are modified Link
Fixed issue with CuDNN workspaces where backpropagation is performed outside of a standard fit call Link
Fixed an issue with dropout masks being cleared prematurely on output layers in ComputationGraph Link
RecordReaderMultiDataSetIterator now supports 5D arrays (for 3D CNNs) Link
Fixed bug in multi input/output ComputationGraphs with TBPTT combined with both masking and different number of input/output arrays Link
Improved input validation/exceptions for batch normalization layer Link
Fixed bug with TransferLearning GraphBuilder nOutReplace when combined with subsampling layers Link
SimpleRnnParamInitializer now properly respects bias initialization configuration Link
Fixed SqueezeNet zoo model non-pretrained configuration Link
Fixed Xception zoo model non-pretrained configuration Link
Fixed an issue with some evaluation signatures for multi-output ComputationGraphs Link
Improved MultiLayerNetwork/ComputationGraph summary method formatting for large nets Link
Fixed an issue where gradient normalization could result in NaNs if gradient is exactly 0.0 for all parameters in a layer Link
Fixed an issue where MultiLayerNetwork/ComputationGraph.setLearningRate could throw an exception for SGD and NoOp updaters Link
Fixed an issue with StackVertex plus masking in some rare cases Link
Fixed an issue with JSON deserialization of frozen layers in pre-1.0.0-alpha format Link
Fixed an issue where GraphBuilder.removeVertex can fail under some limited circumstances Link
Fixed a bug in CacheableExtractableDataSetFetcher Link
DL4J Spark training: Fixed issues with thread/device affinity for multi-GPU training + evaluation Link
DL4J Spark training: Made all Aeron threads daemon threads to prevent Aeron from stopping JVM shutdown when all other threads have completed Link
Added cudnnAllowFallback configuration for BatchNormalization layer (fallback to built-in implementation if CuDNN fails unexpectedly) Link
Fixed an issue with BatchNormalization layers that prevented the mean/variance estimates from being synced properly on each worker for GradientSharing training, causing convergence issues Link
Added a check to detect ZipSlip CVE attempts in ArchiveUtils Link
DL4J Spark training and evaluation: methods now use Hadoop Configuration from Spark context to ensure runtime-set configuration is available in Spark functions reading directly from remote storage (HDFS etc) Link
Added data validation for Nd4j.readTxt - now throws exception on invalid input instead of returning incorrect values Link
Fixed an issue with KNN implementation where a deadlock could occur if an invalid distance function (one returning "distances" less than 0) was utilized Link
Added synchronization to loading of Keras import models to avoid thread safety issues in the underlying HDFS library used for loading Link
Fixed rare issue for Async(Multi)DataSetIterator with large prefetch values Link
IEvaluation classes in DL4J have been deprecated and moved to ND4J so they are available for SameDiff training. Functionality and APIs are unchanged
MultiLayerConfiguration/ComputationGraphConfiguration pretrain(boolean) and backprop(boolean) have been deprecated and are no longer used. Use fit and pretrain/pretrainLayer methods instead (see the sketch below). Link
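A brief sketch of the replacement workflow; the network and iterator arguments are placeholders, and the pretrainLayer call only applies to networks with pretrainable layers:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class PretrainMigration {
    // Configuration no longer needs .pretrain(true)/.backprop(true)
    static void train(MultiLayerNetwork net, DataSetIterator iter) {
        net.pretrainLayer(0, iter); // explicit unsupervised pretraining of layer 0, if applicable
        net.fit(iter);              // supervised training replaces the old backprop(boolean) flag
    }
}
```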
ParallelWrapper module now no longer has a Scala version suffix for artifact id; new artifact id is deeplearning4j-parallel-wrapper which should be used instead Link
deeplearning4j-nlp-korean module now has a Scala version suffix due to Scala dependencies; new artifact IDs are deeplearning4j-nlp-korean_2.10 and deeplearning4j-nlp-korean_2.11 Link
Running multiple Spark training jobs simultaneously on the one physical node (i.e., multiple JVMs from one or more Spark jobs) may cause problems with network communication. A workaround is to manually set a unique stream ID in the VoidConfiguration. Use a unique (or random) integer value for different jobs Link
Fixed import issue due to Keras JSON format changes for Keras 2.2.3+ Link
Added Keras import for timeseries preprocessing Link
Elephas Link
Fixed issue with importing models with reshaping after an embedding layer Link
Added support for Keras masking layers Link
Fixed JSON deserialization issue with some layers/preprocessors, such as Permute Link
Fixed issue with Keras import of Nadam configuration Link
Added SameDiff training and evaluation: SameDiff instances can now be trained directly using DataSetIterator and MultiDataSetIterator, and evaluated using IEvaluation instances (that have been moved from ND4J to DL4J) Link
Added GraphServer implementation: c++ inference server for SameDiff (and Tensorflow, via TF import) with Java API Link
Added MKL-DNN support for some operations (Conv2d, etc) Link
Added Nd4j.where op method (same semantics as numpy.where) Link
Added Nd4j.stack op method (combine arrays + increase array rank by 1) Link
Libnd4j new ops:
Matrix band part Link
Scatter ND, ND-add, ND-sub and ND-update ops Link
Sparse softmax cross entropy loss with logits Link
Histogram fixed width op Link
broadcast_to op Link
deconv3d op added Link
Unsorted segment ops added Link
Segment_X backprop ops added Link
batchnorm_new op added that supports multiple axes for mean/variance Link
GRU cell backprop added Link
SameDiff loss functions: cleanup plus forward pass implementation Link
CudaGridExecutioner now warns that exception stack traces may be delayed to avoid confusion in debugging exceptions occurring during asynchronous execution of ops Link
JavaCPP and JavaCPP-presets have been upgraded to version 1.4.3 Link
Improved Javadoc on SDVariable class Link
Fixes for android: Remove use of RawIndexer Link
Libnd4j native op fixes:
Backprop op fix for the broadcast case for some pairwise transform custom op implementations Link
Fix for reverse custom op with rank 1 inputs Link
ATan2 op is now broadcastable Link
Boolean custom op broadcast fixes/additions Link
Scatter op edge case fixes Link
Unique op fix Link
Pad op fix Link
Fixed where op shape function Link
SVD rank 1 edge case fix Link
Range op Link
Split and space_to_batch fixes Link
Broadcast dynamic shape Link
embedding_lookup op now supports multiple input arrays Link
Matrix determinant op edge case (rank 0 result) shape fix Link
SameDiff: Improved error handling for multiple outputs case Link
Fixed issue where INDArray.permute would not correctly throw an exception for invalid length case Link
Minor change to DataSet.merge - signature now accepts any DataSet subtypes Link
INDArray.transposei operation was not in-place Link
Fixed issues with INDArray.mmul with MMulTranspose Link
Added additional order validation for ND4J creation methods (create, rand, etc) Link
Fix for ND4J binary deserialization (BinarySerde) when deserializing from heap byte buffers Link
Fixed issue with Nd4j-common ClassPathResource path resolution in some IDEs Link
Fixed issue where INDArray.get(interval) on rank 1 array would return rank 2 array Link
INDArray.assign(INDArray) no longer allows assigning different shape arrays (other than scalar/vector cases) Link
NDarrayStrings (and INDArray.toString()) now always uses US locale when formatting numbers Link
Fixed an issue with GaussianDistribution specific to V100 GPUs Link
Fixed an issue with bitmap compression/encoding specific to V100 GPUs Link
Transforms.softmax now throws an error on unsupported shapes instead of simply not applying operation Link
VersionCheck functionality: handle case where SimpleFileVisitor is not available on earlier versions of Android Link
SameDiff convolution layer configuration (Conv2dConfig/Conv3dConfig/Pooling3dConfig etc) have had parameter names aligned Link
CUDA 8.0 support has been removed. CUDA 9.0, 9.2 and 10.0 support is available in 1.0.0-beta3
nd4j-base64 module contents have been deprecated; use the equivalent classes in nd4j-api from now on Link
Some classes in the nd4j-jackson module have been deprecated; use the equivalent classes in nd4j-api from now on Link
Android users may need to manually exclude the (now deprecated) module nd4j-base64. This is due to the org.nd4j.serde.base64.Nd4jBase64 class being present in both the nd4j-api and nd4j-base64 modules. Both versions have identical content. Use exclude group: 'org.nd4j', module: 'nd4j-base64' to exclude.
Added NativeImageLoader method overloads for org.opencv.core.Mat and String as filename Link
Fix for JDBCRecordReader handling of null values Link
Improved errors/validation for ObjectDetectionRecordReader for invalid input (where image object centers are outside of image bounds) Link
Fixed issue where FileSplit using methods that are unavailable on earlier versions of Android Link
Fixed issue with JDBCRecordReader's handling of real-valued column result types Link
Added validation and useful exception for CSVRecordReader/LineRecordReader being used without initialization Link
Fixed some issues with dropout layers Link
Added conversion between org.nd4j.linalg.primitives.Pair/Triple and Scala Tuple Link
Thanks to feedback from the community following the M1 release, a number of bug fixes allowed us to quickly sort out a few issues. This is a minor bug-fix release addressing shortcomings found in M1. Most fixes relate to Keras import, the CNN/RNN helpers, and Python4J.
Snapshots will now also be published automatically every 2 days (https://github.com/eclipse/deeplearning4j/pull/9355) to get around Sonatype OSSRH's deletion of snapshots every 3 days. This should increase the robustness of the snapshots.
Worked around an issue where GitHub Actions preemptively upgraded Visual Studio, breaking the CUDA builds: https://github.com/eclipse/deeplearning4j/pull/9364
Added backwards compatibility for CentOS 6 via a new linux-x86_64-compat classifier, enabling use of older glibcs on CentOS 7:
https://github.com/eclipse/deeplearning4j/pull/9368 https://github.com/eclipse/deeplearning4j/pull/9373
A number of bugs were fixed with LSTM and CUDNN: https://github.com/eclipse/deeplearning4j/pull/9372
https://github.com/eclipse/deeplearning4j/issues/9142 - avoid shuffle operations on GPU; pre-save data on CPU in mini-batches. For more help, please post on the forums at https://community.konduit.ai/
Add batch normalization support for RNNs: https://github.com/eclipse/deeplearning4j/pull/9338
Disable old helpers by default https://github.com/eclipse/deeplearning4j/pull/9343
Minor unit test fixes: https://github.com/eclipse/deeplearning4j/pull/9346
Add keras support for cnn 1d NWHC: https://github.com/eclipse/deeplearning4j/pull/9353
Moved the version-check warning to trace-level logging so it no longer confuses users during normal usage: https://github.com/eclipse/deeplearning4j/pull/9356
Allow 1d convolutions to accept feed forward as input type: https://github.com/eclipse/deeplearning4j/pull/9365
Remove the old benchmark suite and migrate it to contrib: https://github.com/eclipse/deeplearning4j/pull/9374
Remove old MKLDNNLSTM helper (it never fully functioned anyways): https://github.com/eclipse/deeplearning4j/pull/9381
Fixed an issue with helper reflection ensuring the classes would be loaded properly https://github.com/eclipse/deeplearning4j/pull/9333 https://github.com/eclipse/deeplearning4j/pull/9350
Fix minor workspace activation bug: https://github.com/eclipse/deeplearning4j/pull/9341
Fixed a compilation error with NIO buffers when running on anything newer than JDK 8: https://github.com/eclipse/deeplearning4j/pull/9351
Move logback to be a test dependency for some modules: https://github.com/eclipse/deeplearning4j/pull/9362
Keras model import fixes for GlobalPooling: https://github.com/eclipse/deeplearning4j/pull/9378 https://github.com/eclipse/deeplearning4j/pull/9384
Made the Eigen op public, ensuring easier use when running eigenvalue decomposition https://github.com/eclipse/deeplearning4j/pull/9328
Fixes minor issue with choice(..) op https://github.com/eclipse/deeplearning4j/pull/9360 thanks to https://github.com/Romira915
Minor applyScalar typo fix: https://github.com/eclipse/deeplearning4j/pull/9385
Fixed serialization bug with StringToTimeTransform: https://github.com/eclipse/deeplearning4j/pull/9377 thanks to community member https://github.com/yumg
Made python4j's python path setting more robust by migrating from set path calls to add path calls: https://github.com/eclipse/deeplearning4j/pull/9386
Fixes bug with numpy import array jvm crashes: https://github.com/eclipse/deeplearning4j/pull/9348
Fixed inconsistent conventions between SameDiff variable getArr() and getArrForName(): https://github.com/eclipse/deeplearning4j/pull/9357
Deeplearning4J
Fixed issue with incorrect version dependencies in 0.9.0
Added EmnistDataSetIterator Link
Numerical stability improvements to LossMCXENT / LossNegativeLogLikelihood with softmax (should reduce NaNs with very large activations)
ND4J
Added runtime version checking for ND4J, DL4J, RL4J, Arbiter, DataVec Link
Known Issues
Deeplearning4j: Use of Evaluation class no-arg constructor (i.e., new Evaluation()) can result in accuracy/stats being reported as 0.0. Other Evaluation class constructors, and ComputationGraph/MultiLayerNetwork.evaluate(DataSetIterator) methods work as expected.
This also impacts Spark (distributed) evaluation: the workaround is to replace sparkNet.evaluate(testData); with sparkNet.doEvaluation(testData, 64, new Evaluation(10))[0];, where 10 is the number of classes and 64 is the evaluation minibatch size to use.
SequenceRecordReaderDataSetIterator applies preprocessors (such as normalization) twice to each DataSet (possible workaround: use RecordReaderMultiDataSetIterator + MultiDataSetWrapperIterator)
TransferLearning: ComputationGraph may incorrectly apply l1/l2 regularization (defined in FinetuneConfiguration) to frozen layers. Workaround: set 0.0 l1/l2 on FineTuneConfiguration, and required l1/l2 on new/non-frozen layers directly. Note that MultiLayerNetwork with TransferLearning appears to be unaffected.
ND4J/Deeplearning4j: Added support for CUDA 9.2. Dropped support for CUDA 9.1. (1.0.0-beta2 release has CUDA 8.0, 9.0 and 9.2 support)
Deeplearning4j resource (datasets, pretrained models) storage directory can now be configured via the DL4JResources.setBaseDirectory method or the org.deeplearning4j.resources.directory system property
ND4J: all indexing is now done with longs instead of ints to allow for arrays with dimensions and lengths greater than Integer.MAX_VALUE (approx. 2.1 billion)
ND4J: nd4j-native-platform will now use Intel MKL-DNN as the default/bundled BLAS implementation (replacing OpenBLAS as the previous default)
Deeplearning4j: Added Out-of-memory (OOM) crash dump reporting functionality. Provides a dump with memory use and configuration if training/inference OOMs (to assist with debugging and tuning memory configuration).
Added new SameDiff layers (automatic differentiation - only single class, forward pass definition required) to DL4J with full training support - SameDiffLayer, SameDiffVertex, SameDiffOutputLayer, SameDiffLambdaLayer, SameDiffLambdaVertex - note that these are CPU-only execution for now Link Link Link
Resource (datasets, pretrained models) storage directory can now be configured via the DL4JResources.setBaseDirectory method or the org.deeplearning4j.resources.directory system property. Note that it is also possible to set a different base location for downloads (for local mirrors of DL4J resources) Link
Added Out-of-memory (OOM) crash dump reporting functionality. Provides a dump with memory use and configuration if training/inference OOMs. Same information is available (without a crash) for MultiLayerNetwork/ComputationGraph.memoryInfo methods. Can be disabled (or output directory set) using system properties - Link
Added Composite[Multi]DataSetPreProcessor to enable multiple [Multi]DataSetPreProcessors to be applied in a single iterator Link
Added ComputationGraph evaluate methods for multi-output networks: evaluate(DataSetIterator, Map<Integer,IEvaluation[]>) and evaluate(MultiDataSetIterator, Map<Integer,IEvaluation[]>) Link
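For example, a small sketch of evaluating a two-output ComputationGraph with these overloads, assuming the post-beta4 ND4J evaluation packages; the graph and iterator are placeholders:

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.evaluation.IEvaluation;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.evaluation.regression.RegressionEvaluation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

import java.util.HashMap;
import java.util.Map;

public class MultiOutputEvaluation {
    static void evaluate(ComputationGraph graph, DataSetIterator iter) {
        Map<Integer, IEvaluation[]> evals = new HashMap<>();
        evals.put(0, new IEvaluation[]{new Evaluation()});           // classification output 0
        evals.put(1, new IEvaluation[]{new RegressionEvaluation()}); // regression output 1
        graph.evaluate(iter, evals);
        System.out.println(evals.get(0)[0].stats());
    }
}
```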
Added JointMultiDataSetIterator - utility iterator used to create MultiDataSetIterator from multiple DataSetIterators Link
GraphVertices may now have trainable parameters directly (not just enclose layers with trainable parameters) Link
Added MultiLayerNetwork/ComputationGraph getLearningRate methods Link
Added cyclical "1cycle" schedule for learning rate schedules etc - Link
RDD repartitioning for Spark training is more configurable (adds Repartitioner interface) Link
Added ComputationGraph.getIterationCount() and .getEpochCount() for consistency with MultiLayerNetwork Link
Spark evaluation: added evaluation method overloads that allow specifying the number of evaluation workers (less than number of Spark threads) Link
CnnSentenceDataSetIterator now has a Format argument, and supports outputting data for RNNs and 1D CNNs Link
Added ComputationGraph/MultiLayerNetwork.pretrain((Multi)DataSetIterator, int epochs) method overloads Link
EmbeddingSequenceLayer now supports [minibatch,1,seqLength] format sequence data in addition to [minibatch,seqLength] format data Link
CuDNN batch norm implementation will now be used for rank 2 input, not just rank 4 input Link
MultiLayerNetwork and ComputationGraph output/feedForward/fit methods are now thread-safe via synchronization. Note that concurrent use is not recommended due to performance (instead: use ParallelInference); however the now-synchronized methods should avoid obscure errors due to concurrent modifications Link
BarnesHutTSNE now throws a useful exception in the case where the distance metric is undefined (for example, all zeros plus cosine similarity) Link
BatchNormalization layer now correctly asserts that nOut is set if required (instead of unfriendly shape errors later) Link
Fixed issue where OutputLayer may not initialize parameter constraints correctly Link
Fixed performance issue with Nesterov updater using CPU-only op for CUDA execution Link
Removed TerminationCondition for DL4J optimizers - was not used in practice, and had minor overhead Link
Fixed issue where EvaluativeListener could hit a workspace validation exception when workspaces are enabled Link
Fixed issue where TrainingListener.onEpochStart/onEpochEnd were not being called correctly for ComputationGraph Link
Fixed workspace issue with TensorFlowCnnToFeedForwardPreProcessor Link
Performance optimization for BatchNormalization when using CuDNN Link
Performance optimization: Dropout will be applied in-place when safe to do so, avoiding a copy Link
Added CuDNN implementation of Dropout Link
Reduced memory use for CuDNN: CuDNN working memory is now shared and reused between layers within a network Link
CuDNN batch normalization implementation would fail with FP16 datatype Link
Fixed issue Bidirectional LSTM may incorrectly use workspaces causing an exception Link
Fixed issue with early stopping where scores to be maximized (accuracy, f1, etc) were not properly triggering termination conditions Link
Fixed issue where label mask counter could be incorrectly incremented in ComputationGraph.computeGradientAndScore() Link
ComputationGraph was not setting lastEtlTime field during training Link
Fixed issue with AutoEncoder layer when workspaces are enabled Link
Fixed issue with EmbeddingSequenceLayer use of mask arrays Link
Lombok is now provided scope everywhere, so it isn't on the user classpath when using DL4J Link
Fixed an issue with WordVectorSerializer.readParagraphVectors(File) initialization of the label source Link
Spark training (gradient sharing) now properly handles empty partition edge case when encountered during training Link
Errors are propagated better/more consistently for Spark gradient sharing training Link
Fixed issue with 1D CNN layers with mask arrays and stride > 1 (masks not being correctly downsized) Link
DL4J Batch norm implementation was not correctly adding epsilon value during inference, only during training (CuDNN unaffected) Link
CuDNN subsampling layers with max pooling and ConvolutionMode.SAME may have taken padding value (0) as the maximum for border values when all non-padding values are less than 0 Link
Spark training with gradient sharing now passes listeners to workers correctly Link
Fixed rare (and non-terminal) concurrent modification issue with UI and FileStatsStorage Link
CuDNN convolution layer now supports dilation > 2 (previously: used DL4J conv layer implementation as a fallback) Link
Yolo2OutputLayer now implements computeScoreForExamples() Link
SequenceRecordReaderDataSetIterator now handles the "no labels" case correctly Link
Fixed issue where BarnesHutTSNE could hit a workspace validation exception Link
EMNIST iterator could produce incorrect data in some cases after a reset Link
GravesLSTM has been deprecated in favor of LSTM due to its lack of CuDNN support but otherwise similar accuracy in practice. Use the LSTM class instead.
deeplearning4j-modelexport-solr: now uses Lucene/Solr version 7.4.0 (was 7.3.0) Link
Mask arrays for CNN2d layers must be in broadcastable 4d format: [minibatch, depth or 1, height or 1, width or 1] - previously they were 2d with shape [minibatch,height] or [minibatch,width]. This prevents ambiguity in later cases (pooling layers), and allows for more complex masking scenarios (such as masking for different image sizes in the same minibatch). Link
Some older/deprecated Model and Layer methods have been removed. (validateInput(), initParams()). Some custom layers may need to be updated as a result Link
Windows users are unable to load the HDF5 files used in SvhnLabelProvider (used in HouseNumberDetection example). Linux/Mac users are unaffected. A workaround for windows users is to add the sonatype snapshot dependency org.bytedeco.javacpp-presets:hdf5-platform:jar:1.10.2-1.4.3-SNAPSHOT Link
Keras model import now imports every Keras application
Supports GlobalPooling3D layer import
Supports RepeatVector layer import
Supports LocallyConnected1D and LocallyConnected2D layers
Keras Lambda layers can now be imported by registering custom SameDiff layers
All Keras optimizers are now supported
All advanced activation functions can now be imported.
Many minor bugs have been fixed, including proper weight setting for all configurations of BatchNormalization, improvements to Reshape and SeparableConvolution2D, and full support of Bidirectional layers.
ND4J: all indexing is now done with longs instead of ints to allow for arrays with dimensions and lengths greater than Integer.MAX_VALUE (approx. 2.1 billion)
Added the ability to write Numpy .npy format using Nd4j.writeAsNumpy(INDArray,File) and convert an INDArray to a Numpy array in-memory using Nd4j.convertToNumpy(INDArray) Link
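A minimal sketch of writing an INDArray in Numpy .npy format (the file name is illustrative):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

import java.io.File;
import java.io.IOException;

public class NumpyWriteExample {
    public static void main(String[] args) throws IOException {
        INDArray arr = Nd4j.rand(2, 3);
        // Write in Numpy .npy format, loadable from Python via numpy.load("arr.npy")
        Nd4j.writeAsNumpy(arr, new File("arr.npy"));
    }
}
```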
ND4j-common ClassPathResource: added ClassPathResource.copyDirectory(File) Link
SameDiff: A significant number of new ops, and backprop implementations for existing ops
Added Nd4j.randomBernoulli/Binomial/Exponential convenience methods Link
Added way to disable/suppress ND4J initialization logging via the org.nd4j.log.initialization system property Link
SameDiff class - most op/constructor methods now have complete/useful javadoc Link
Workspaces can now be disabled globally, ignoring workspace configuration. This is mainly used for debugging; use Nd4j.getWorkspaceManager().setDebugMode(DebugMode.DISABLED) or Nd4j.getWorkspaceManager().setDebugMode(DebugMode.SPILL_EVERYTHING); to enable this. Link [Link]
Added EnvironmentalAction API for environment variable processing Link
SameDiff: a significant number of bug fixes for execution and individual ops
Fixed an issue with INDArray.toDoubleArray() for true scalars (rank 0 arrays) Link
Fixed issue with DataSet.sample() not working for rank 3+ features Link
IActivation implementations now validate/enforce same shape for activations and gradients Link
Fixed issue with muliColumnVector where vector is 1d Link
ImagePreProcessingScaler now supports serialization via NormalizerSerializerStrategy and ModelSerializer Link
Performance optimization for threshold encoding used in DL4J's Spark gradient sharing distributed training implementation Link
SameDiff: Fixed issue where memory wasn't always released after execution Link
DataSet.save() and MultiDataSet.save() methods now save example metadata when present Link
Fixed issue with KFoldIterator when dataset does not divide equally into folds with no remainder Link
Fixed issue where version check functionality could fail to load resources if resources are on a path with spaces Link
CUDA 9.1 support has been removed. CUDA 8.0, 9.0 and 9.2 support is available
Due to long indexing changes, long/long[] should be used in place of int/int[] in some places (such as INDArray.size(int), INDArray.shape())
Simplified DataSetIterator API: totalExamples(), cursor() and numExamples() - these were unsupported on most DataSetIterator implementations, and not used in practice for training. Custom iterators should remove these methods also Link
Long-deprecated DataSet.getFeatureMatrix() has been removed. Use DataSet.getFeatures() instead. Link
Unused and not properly tested/maintained utility class BigDecimalMath has been removed. Users should find an alternative library for this functionality, if required.
Not properly maintained complex number support classes (IComplexNumber, IComplexNDArray) have been removed entirely Link
Added AnalyzeLocal class to mirror functionality of AnalyzeSpark (but without Spark dependency) Link
Added JacksonLineSequenceRecordReader: RecordReader used for multi-example JSON/XML where each line in a file is an independent example Link
Added RecordConverter.toRecord(Schema, List<Object>) Link
Added missing FloatColumnCondition Link
Added CSVLineSequenceRecordReader for "each line in CSV is a sequence, and sequence is single-valued/univariate" Link
Added CSVMultiSequenceRecordReader for "multiple multi-valued sequences in a single CSV" data Link
Fixed issue with NativeImageLoader on Android Link
Fixed issue with ExcelRecordReader Link
Fixed issue where bad args for CSVRecordReader.next(int) could cause an unnecessarily large list to be generated Link
Added DataSource interface. Unlike old DataProvider, this does not require JSON serializability (only a no-arg constructor) Link
DataProvider has been deprecated. Use DataSource instead.
Added support for CUDA 10.2. 1.0.0-beta6 released with CUDA 9.2, 10.0, 10.1 and 10.2 support
SameDiff optimizations - memory use for inference and training significantly reduced, with some performance improvements also
Deeplearning4j UI - Play framework replaced with Vertx; deeplearning4j-ui dependency now no longer has Scala dependency or Scala version suffix Link
Note: No API changes, only artifact ID change: replace deeplearning4j-ui_2.1x with deeplearning4j-ui
ND4J namespace operation methods: operations are available through the Nd4j.math, Nd4j.random, Nd4j.bitwise, and Nd4j.nn (neural network) namespaces - for example, Nd4j.math.abs(INDArray), Nd4j.random.logNormal, etc Link.
Note that additional ND4J namespaces API will have additions (new namespaces and methods), and may have some API changes, in the next release
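A tiny sketch of the namespace-style calls referenced above:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NamespaceExample {
    public static void main(String[] args) {
        INDArray x = Nd4j.createFromArray(-1.0, 2.0, -3.0);
        INDArray abs = Nd4j.math.abs(x);  // namespace-style call, equivalent to Transforms.abs(x)
        System.out.println(abs);          // [1.0, 2.0, 3.0]
    }
}
```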
OpenMP replaced with a thread-pool C++ parallelism framework; enables C++-level parallelism for operations on platforms without OpenMP support
DNNL (MKL-DNN) upgraded to version 1.1
Added causal convolution mode for Convolution1D layer (ConvolutionMode.Causal) and added causal conv1d support for Keras import Link
Keras import now supports scaled identity weight initialization Link
BertIterator now supports sentence pairs for supervised training Link
Added TimeDistributed wrapper layer Link
KDTree implementation optimized Link
Deeplearning4j zoo models and datasets hosting location updated Link
Fixed nIn validation for Deconv2D layer Link
Fixed an issue with incorrect Deconvolution2d results for Keras import models Link
Fixed various integer casts to avoid overflows for very large arrays (with dimensions or length > Integer.MAX_VALUE) Link
Fixed an issue with UNet non-pretrained model architecture (last layer kernel size) Link
Deeplearning4j SameDiff layers now use DL4J workspaces for better performance and reduced memory consumption Link
Updated broken links in a few error messages Link
Cleaned up a few unused dependencies in various modules Link
Cleaned up duplicate SamplingDataSetIterator class Link
Fixed an issue where ComputationGraph instances with a single input going into multiple embedding layers could throw a NPE Link
Fixed an issue where loss function weights were not automatically cast to network datatype, resulting in an exception if not already correct type Link
Shaded Jackson version upgraded from 2.9.9/2.9.9.3 to 2.10.1 Link
Fixed an issue with KNN where getMostPopulatedClusters actually returned the least populated clusters Link
Deeplearning4j UI artifact ID has changed: deeplearning4j-ui_2.1x (beta5 and earlier) is replaced with deeplearning4j-ui
Added support for CUDA 10.2 Link
DNNL (MKL-DNN) upgraded to version 1.1 Link
Added ND4j namespaces to match SameDiff: Nd4j.math, Nd4j.random, Nd4j.bitwise, Nd4j.nn (neural network) Link
Additional SameDiff single batch .output method overloads for DataSet/MultiDataSet added Link
PRelu op added Link
adjust_contrast, igamma and igammac ops added Link
ND4J/SameDiff: BitCast, CompareAndBitpack, DivideNoNan, DrawBoundingBoxes, FakeQuantWithMinMaxVarsPerChannel ops added Link
non_max_suppression_overlaps op added Link
ImagePreProcessingScaler now supports segmentation use cases Link
concat operation now supports the concatenation axis being specified via the last input array Link
Added Gamma and Poisson RNG distributions Link
SameDiff’s use of DeviceLocal for variables/constants etc is now configurable Link
Uniform distribution op now supports random integer generation, not just random floating point generation Link
SameDiff: Added simple OpBenchmarkListener for benchmarking purposes Link
Added the ability to disable platform helpers (DNNL/MKLDNN etc) via Nd4jCPU.Environment.getInstance().allowHelpers(false); and Nd4jCuda.Environment.getInstance().allowHelpers(false); Link
Added draw_bounding_boxes operation Link
Added resize_bicubic operation Link
Added causal padding mode to conv1d operation Link
DNNL (MKLDNN) is included and enabled by default for non-AVX builds Link
Added SameDiff ArraySavingListener for debugging purposes Link
OpenMP replaced with ThreadPool abstraction, enables parallelism for platforms without OpenMP support Link
Switched to Clang instead of gcc for OSX compilation to avoid compiler-related issues Link
Removed SameDiff.outputs() “best guess” output inference due to being unreliable, in favor of an explicit SameDiff.setOutputs(String...) call Link
Fixed an issue with Nd4j.hstack on 1D arrays Link
SameDiff no longer allows empty arrays for variables Link
Fixed an issue with Nadam updater LR schedules not being cloned Link
Cleaned up IActivation interface Link
Added new LSTM op implementation with DNNL/MKLDNN support (forward pass only so far) Link
SameDiff API cleaned up; deprecated methods removed Link
Switched SameDiff variable initialization to non-lazy, to avoid unexpected behaviour when mixing execution and ND4J RNG seed setting Link
SameDiff.zero and .one methods now create constants, not variables Link
Moved CUDA build version and device logging to Java logging, from c++ stdout to enable disabling logging (via ND4J config or slf4j config) Link
Added DNNL/MKLDNN support for batch normalization Link
SameDiff: Fixed an issue where listeners weren’t being called for gradient calculation Link
Added DNNL/MKLDNN support for deconv2d/3d operations Link
Fixed an issue with biasadd_bp operation and NHWC data format Link
INDArray.toString() now has correct brackets for rank 1+ scalars to avoid ambiguity Link
Fixed an issue where some ND4J methods could fail when the library is compiled on Java 9+ but run on Java 8 Link
Fixed empty input arrays for legacy ops (transform, scalar, pairwise, broadcast) Link
CUDA compute capability 3.0 is supported again Link
Improved performance for Scatter operations (1D case) + index validation Link
SameDiff execution will now throw an exception when assertion operations in the graph fail Link
PolyGamma function now returns NaNs when passed double for args requiring integer values Link
Fixed some issues for pad and mirror_pad ops to ensure they conform with Tensorflow for imported networks Link
Updated and fixed some issues for TensorFlow graph runner Link
Improved performance for Reverse operation Link
Removed/cleanup up unused ND4J list functionality Link
Fixed reduce bool operation results (such as any, all, IsInf, etc) for empty array inputs Link
SameDiff.outputs() now requires the user to call SameDiff.setOutputs(String...) first; the previous “best guess” output inference was unreliable Link
SameDiff.zero and .one methods now create constants, not variables Link
NativeImageLoader now checks for empty input streams and throws an exception instead of crashing Link
NDArrayScalarOpTransform now supports modulus operator Link
Added AsyncTrainingListener Link
Replaced multiple uses of java.util.Random with ND4J Random Link
Added Observable and LegacyMDPWrapper Link
Refactored RL4J video recording to separate VideoRecorder class Link
Refactoring for DQN and double DQN for improved maintainability Link
Internal refactoring and various bug fixes Link
PyDataVec TransformProcess now supports non-inplace operations Link
Custom layer support
Support for custom loss functions
Support for compressed INDArrays, for memory saving on huge data
Native support for BooleanIndexing where applicable
Initial support for combined operations on CUDA
Significant performance improvements on CPU & CUDA backends
Better support for Spark environments using CUDA & cuDNN with multi-gpu clusters
New UI tools: FlowIterationListener and ConvolutionIterationListener, for better insights of processes within NN.
Special IterationListener implementation for performance tracking: PerformanceListener
Inference implementation added for ParagraphVectors, together with option to use existing Word2Vec model
Significantly decreased file size of the deeplearning4j API
nd4j-cuda-8.0 backend is now available for CUDA 8 RC
Added multiple new built-in loss functions
Custom preprocessor support
Performance improvements to Spark training implementation
Improved network configuration validation using InputType functionality
RBM and AutoEncoder key fixes:
Ensured the visible bias is updated and applied during pretraining.
RBM HiddenUnit is the activation function for this layer; derivative calculations for backprop are now established according to the respective HiddenUnit.
RNG performance issues fixed for CUDA backend
OpenBLAS issues fixed for macOS, powerpc, linux.
DataVec is back to Java 7 now.
Multiple minor bugs fixed for ND4J/DL4J
Read the announcement for the highlights of this release.
Added Keras model import support for tf.keras models ,
Full inference and training support is available for ops/layers in the tf.keras namespace; inference only for general Tensorflow operations outside of the tf.keras namespace
Note also improvements to Keras import for reshape, permute, etc operations due to NHWC and NWC support in DL4J
DL4J now supports NHWC (channels last) data format for all CNN 2D layers, in addition to NCHW
DL4J now supports NWC (channels last - [minibatch, sequence_length, size]) for all RNN and CNN 1D layers, in addition to NCW
Added Deconvolution3D layer
Keras import: added ReLU, ELU and Softmax advanced activation layers and Swish activation function
Added DL4J SameDiffLoss class (for easily-defined DL4J ILossFunction's via SameDiff)
Useful exceptions are now thrown when attempting to perform unsupported operations on FastText
Added MultiLayerNetwork.evaluate(MultiDataSetIterator) and .evaluateRegression(MultiDataSetIterator) methods ,
Added new Image operations namespace operations:
Added new Random operations namespace operations:
Added new Math namespace operations:
Added new NN namespace operations:
Added new CNN namespace operations:
Added new linalg operations namespace
Added new RNN operation namespace operations:
Mapped operations for Tensorflow import:
Improved performance for bias_add operation
Performance and memory optimizations for DL4J
New or enhanced layers:
Added Cropping1D layer
Added Convolution3D, Cropping3D, UpSampling3D, ZeroPadding3D, Subsampling3D layers (all with Keras import support):
Added EmbeddingSequenceLayer (EmbeddingLayer for time series)
Added OCNNOutputLayer (one-class neural network) - implementation of -
Added FrozenLayerWithBackprop layer
Added DepthwiseConvolution2D layer
Added ComputationGraph.output(DataSetIterator) method
Added MultiLayerNetwork/ComputationGraph.layerInputSize methods
Added SparkComputationGraph.feedForwardWithKey overload with feature mask support
Added MultiLayerNetwork.calculateGradients method (for easily getting parameter and input gradients, for example for some model interpretability approaches)
Added support to get input/activation types for each layer from configuration: ComputationGraphConfiguration.getLayerActivationTypes(InputType...)
, ComputationGraphConfiguration.GraphBuilder.getLayerActivationTypes()
, NeuralNetConfiguration.ListBuilder.getLayerActivationTypes()
, MultiLayerConfiguration.getLayerActivationTypes(InputType)
methods
Evaluation.stats() now prints the confusion matrix in an easier-to-read matrix format, rather than list format
Added ModelSerializer.addObjectToFile, .getObjectFromFile and .listObjectsInFile for storing arbitrary Java objects in same file as saved network
Added SpatialDropout support (with Keras import support)
Added MultiLayerNetwork/ComputationGraph.fit((Multi)DataSetIterator, int numEpochs)
overloads
Added performance (hardware) listeners: SystemInfoPrintListener
and SystemInfoFilePrintListener
Fixes issues with custom and some Keras import layers on Android
Added new model zoo models:
(to do)
WorkspaceMode.SINGLE and SEPARATE have been deprecated; use WorkspaceMode.ENABLED instead
Internal layer API changes: custom layers will need to be updated to the new Layer API - see built-in layers or custom layer example
Custom layers etc in pre-1.0.0-beta JSON (ModelSerializer) format need to be registered before they can be deserialized due to JSON format change. Built-in layers and models saved in 1.0.0-beta or later do not require this. Use NeuralNetConfiguration.registerLegacyCustomClassesForJSON(Class)
for this purpose
ExistingDataSetIterator has been deprecated; use fit(DataSetIterator, int numEpochs)
method instead
ComputationGraph TrainingListener onEpochStart and onEpochEnd methods are not being called correctly
DL4J Zoo Model FaceNetNN4Small2 model configuration is incorrect, causing issues during forward pass
Early stopping score calculators with values that should be maximized (accuracy, f1 etc) are not working properly (values are minimized not maximized). Workaround: override ScoreCalculator.calculateScore(...) and return 1.0 - super.calculateScore(...).
Not all op gradients implemented for automatic differentiation
Vast majority of new operations added in 1.0.0-beta do NOT use GPU yet.
Added LayerSpace for OCNN (one-class neural network)
UI overhaul: new training UI has considerably more information, supports persistence (saving info and loading later), Japanese/Korean/Russian support. Replaced Dropwizard with Play framework.
Import of models configured and trained using
Imports both Keras model and
Supported models: models
Supported : Dense, Dropout, Activation, Convolution2D, MaxPooling2D, LSTM
Added ‘Same’ padding mode for CNNs (ConvolutionMode network configuration option)
Weighted loss functions: Loss functions now support a per-output weight array (row vector)
ROC and AUC added for binary classifiers
Improved error messages on invalid configuration or data; improved validation on both
Added metadata functionality: track source of data (file, line number, etc) from data import to evaluation. Loading a subset of examples/data from this metadata is now supported.
Removed Jackson as core dependency (shaded); users can now use any version of Jackson without issue
Added LossLayer: version of OutputLayer that only applies loss function (unlike OutputLayer: it has no weights/biases)
Functionality required to build triplet embedding model (L2 vertex, LossLayer, Stack/Unstack vertices etc)
Reduced DL4J and ND4J ‘cold start’ initialization/start-up time
Pretrain default changed to false and backprop default changed to true. No longer needed to set these when setting up a network configuration unless defaults need to be changed.
Added TrainingListener interface (extends IterationListener). Provides access to more information/state as network training occurs
Numerous bug fixes across DL4J and ND4J
Performance improvements for nd4j-native & nd4j-cuda backends
Standalone Word2Vec/ParagraphVectors overhaul:
Performance improvements
ParaVec inference available for both PV-DM & PV-DBOW
Parallel tokenization support was added, to address computation-heavy tokenizers.
Native RNG introduced for better reproducibility within multi-threaded execution environment.
Additional RNG calls added: Nd4j.choice(), and BernoulliDistribution op.
Off-gpu storage introduced, to keep large things, like Word2Vec model in host memory. Available via WordVectorSerializer.loadStaticModel()
Two new options for performance tuning on nd4j-native backend: setTADThreshold(int) & setElementThreshold(int)
Notable changes for upgrading codebases based on 0.6.0 to 0.7.0:
UI: new UI package name is deeplearning4j-ui_2.10 or deeplearning4j-ui_2.11 (previously: deeplearning4j-ui). Scala version suffix is necessary due to Play framework (written in Scala) being used now.
DataVec ImageRecordReader: labels are now sorted alphabetically by default before assigning an integer class index to each - previously (0.6.0 and earlier) they were according to file iteration order. Use .setLabels(List) to manually specify the order if required.
CNNs: configuration validation is now less strict. With new ConvolutionMode option, 0.6.0 was equivalent to ‘Strict’ mode, but new default is ‘Truncate’
Xavier weight initialization change for CNNs and LSTMs: Xavier now aligns better with original Glorot paper and other libraries. Xavier weight init. equivalent to 0.6.0 is available as XAVIER_LEGACY
DataVec: Custom RecordReader and SequenceRecordReader classes require additional methods, for the new metadata functionality. Refer to existing record reader implementations for how to implement these methods.
Word2Vec/ParagraphVectors:
Few new builder methods:
allowParallelTokenization(boolean)
useHierarchicSoftmax(boolean)
Behaviour change: batchSize is now ALSO used as the threshold for executing a number of computational batches for sg/cbow
Main highlight: full multi-datatype support for ND4J and DL4J. In past releases, all N-Dimensional arrays in ND4J were limited to a single datatype (float or double), set globally. Now, arrays of all datatypes may be used simultaneously. The following are supported:
DOUBLE: double precision floating point, 64-bit (8 byte)
FLOAT: single precision floating point, 32-bit (4 byte)
HALF: half precision floating point, 16-bit (2 byte), "FP16"
LONG: long signed integer, 64 bit (8 byte)
INT: signed integer, 32 bit (4 byte)
SHORT: signed short integer, 16 bit (2 byte)
UBYTE: unsigned byte, 8 bit (1 byte), 0 to 255
BYTE: signed byte, 8 bit (1 byte), -128 to 127
BOOL: boolean type (0/1, true/false). Uses ubyte storage for easier op parallelization
UTF8: String array type, UTF8 format
ND4J Behaviour changes of note:
When creating an INDArray from a Java primitive array, the INDArray datatype will be determined by the primitive array type (unless a datatype is specified)
For example: Nd4j.createFromArray(double[]) -> DOUBLE datatype INDArray
Similarly, Nd4j.scalar(1), Nd4j.scalar(1L), Nd4j.scalar(1.0) and Nd4j.scalar(1.0f) will produce INT, LONG, DOUBLE and FLOAT type scalar INDArrays respectively
Some operations require matched datatypes for operands
For example, if x and y are different datatypes, a cast may be required: x.add(y.castTo(x.dataType()))
Some operations have datatype restrictions: for example, sum on a UTF8 array is not supported, nor is variance on a BOOL array. For some operations on boolean arrays (such as sum), casting to an integer or floating point type first may make sense.
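As an illustration, a minimal sketch of a mixed-datatype operation using the standard ND4J factory methods (array values are hypothetical):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray x = Nd4j.createFromArray(1.0f, 2.0f, 3.0f);  // FLOAT, inferred from float varargs
INDArray y = Nd4j.createFromArray(1.0, 2.0, 3.0);     // DOUBLE, inferred from double varargs
INDArray sum = x.add(y.castTo(x.dataType()));         // cast DOUBLE -> FLOAT before the add
```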
DL4J Behaviour changes of note:
MultiLayerNetwork/ComputationGraph no longer depend in any way on ND4J global datatype.
The datatype of a network (DataType for its parameters and activations) can be set during construction using NeuralNetConfiguration.Builder().dataType(DataType)
Networks can be converted from one type to another (double to float, float to half etc) using the MultiLayerNetwork/ComputationGraph.convertDataType(DataType) method
Main new methods:
Nd4j.create(), zeros(), ones(), linspace(), etc methods with DataType argument
INDArray.castTo(DataType) method - to convert INDArrays from one datatype to another
New Nd4j.createFromArray(...) methods for creating INDArrays from Java primitive arrays
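For example, a sketch assuming the standard Nd4j factory API (shapes and datatypes are arbitrary):

```java
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray a = Nd4j.zeros(DataType.FLOAT, 2, 3);             // explicit datatype at creation
INDArray b = Nd4j.create(DataType.INT, 4);                 // INT array of shape [4]
INDArray c = Nd4j.createFromArray(new double[]{1, 2, 3});  // DOUBLE, inferred from the primitive array
INDArray d = c.castTo(DataType.HALF);                      // convert to FP16 after creation
```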
ND4J/DL4J: CUDA - 10.1 support added, CUDA 9.0 support dropped
CUDA versions supported in 1.0.0-beta4: CUDA 9.2, 10.0, 10.1.
ND4J: Mac/OSX CUDA support dropped
Mac (OSX) CUDA binaries are no longer provided. Linux (x86_64, ppc64le) and Windows (x86_64) CUDA support remains. OSX CPU support (x86_64) is still available.
DL4J/ND4J: MKL-DNN Support Added
DL4J (and ND4J conv2d etc ops) now support MKL-DNN by default when running on the CPU/native backend. MKL-DNN support is implemented for the following layer types:
ConvolutionLayer and Convolution1DLayer (and Conv2D/Conv2DDerivative ND4J ops)
SubsamplingLayer and Subsampling1DLayer (and MaxPooling2D/AvgPooling2D/Pooling2DDerivative ND4J ops)
BatchNormalization layer (and BatchNorm ND4J op)
LocalResponseNormalization layer (and LocalResponseNormalization ND4J op)
Convolution3D layer (and Conv3D/Conv3DDerivative ND4J ops)
MKL-DNN support for other layer types (such as LSTM) will be added in a future release.
MKL-DNN can be disabled globally (ND4J and DL4J) using Nd4jCpu.Environment.getInstance().setUseMKLDNN(false);
MKL-DNN can be disabled globally for specific ops by setting ND4J_MKL_FALLBACK
environment variable to the name of the operations to have MKL-DNN support disabled for. For example: ND4J_MKL_FALLBACK=conv2d,conv2d_bp
ND4J: Improved Performance due to Memory Management Changes
Prior releases of ND4J used periodic garbage collection (GC) to release memory that was not allocated in a memory workspace. (Note that DL4J uses workspaces for almost all operations by default hence periodic GC could frequently be disabled when training DL4J networks). However, the reliance on garbage collection resulted in a performance overhead that scaled with the number of objects in the JVM heap.
In 1.0.0-beta4, the periodic garbage collection is disabled by default; instead, GC will be called only when it is required to reclaim memory from arrays that are allocated outside of workspaces.
To re-enable periodic GC (as per the default in beta3) and set the GC frequency to every 5 seconds (5000ms) you can use:
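(A sketch based on the ND4J MemoryManager API; the 5000 ms window is just the example value from the text above.)

```java
import org.nd4j.linalg.factory.Nd4j;

Nd4j.getMemoryManager().togglePeriodicGc(true);  // re-enable periodic GC (disabled by default in beta4)
Nd4j.getMemoryManager().setAutoGcWindow(5000);   // invoke GC at most once every 5000 ms
```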
ND4J: Improved Rank 0/1 Array Support
In prior versions of ND4J, scalars and vectors would sometimes be rank 2 instead of rank 0/1 when getting rows/columns, getting sub-arrays using INDArray.get(NDArrayIndex...) or when creating arrays from Java arrays/scalars. Now, behaviour should be more consistent for these rank 0/1 cases. Note that to maintain the old behaviour for getRow and getColumn (i.e., return a rank 2 array with shape [1,x] and [x,1] respectively), the getRow(long,boolean) and getColumn(long,boolean) methods can be used.
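A small sketch of the difference (array values are hypothetical):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray arr = Nd4j.createFromArray(new float[][]{{1, 2, 3}, {4, 5, 6}});  // shape [2, 3]
INDArray r1 = arr.getRow(0);        // rank 1, shape [3] (new default behaviour)
INDArray r2 = arr.getRow(0, true);  // rank 2, shape [1, 3] (old, pre-beta4 style behaviour)
```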
DL4J: Attention layers added
Added a basic ("technology preview") SameDiff UI. It should be considered an early WIP with breaking API changes expected in future releases. Supports plotting of SameDiff graphs as well as various metrics (line charts, histograms, etc)
Currently embedded in the DL4J UI - call UIServer.getInstance() and then go to localhost:9000/samediff to access it.
ND4J/SameDiff - new operations added:
SameDiff TensorFlow Import
ND4J datatypes - significant changes, see highlights at top of this section
SameDiff: Numerous fixes and enhancements
Most CustomOperation operations (such as those used in SameDiff) are CPU only until next release. GPU support was not completed in time for 1.0.0-beta4 release.
Deeplearning4J
Workspaces feature added (faster training performance + less memory)
SharedTrainingMaster added for Spark network training (improved performance) ,
ParallelInference added - a wrapper that serves inference requests using internal batching and queues
ParallelWrapper now able to work with gradients sharing, in addition to existing parameters averaging mode
VPTree performance significantly improved
CacheMode network configuration option added - improved CNN and LSTM performance at the expense of additional memory use
LSTM layer added, with CuDNN support (Note that the existing GravesLSTM implementation does not support CuDNN)
New native model zoo with pretrained ImageNet, MNIST, and VGG-Face weights
Convolution performance improvements, including activation caching
Custom/user defined updaters are now supported
Evaluation improvements
EvaluationBinary, ROCBinary classes added: for evaluation of binary multi-class networks (sigmoid + xent output layers)
Evaluation and others now have G-Measure and Matthews Correlation Coefficient support; also macro + micro-averaging support for Evaluation class metrics
ComputationGraph and SparkComputationGraph evaluation convenience methods added (evaluateROC, etc)
ROC and ROCMultiClass support exact calculation (previous: thresholded calculation was used)
ROC classes now support area under precision-recall curve calculation; getting precision/recall/confusion matrix at specified thresholds (via PrecisionRecallCurve class)
RegressionEvaluation, ROCBinary etc now support per-output masking (in addition to per-example/per-time-step masking)
EvaluationCalibration added (residual plots, reliability diagrams, histogram of probabilities)
Evaluation and EvaluationBinary: now supports custom classification threshold or cost array
Optimizations: updaters, bias calculation
Network memory estimation functionality added. Memory requirements can be estimated from configuration without instantiating networks
New loss functions:
Mixture density loss function
F-Measure loss function
ND4J
Native parallel sort was added
New ops added: SELU/SELUDerivative, TAD-based comparisons, percentile/median, Reverse, Tan/TanDerivative, SinH, CosH, Entropy, ShannonEntropy, LogEntropy, AbsoluteMin/AbsoluteMax/AbsoluteSum, Atan2
New distance functions added: CosineDistance, HammingDistance, JaccardDistance
DataVec
TransformProcess and Transforms now support NDArrayWritables and NDArrayWritable columns
Multiple new Transform classes
Arbiter
UI now uses Play framework, integrates with DL4J UI (replaces Dropwizard backend). Dependency issues/clashing versions fixed.
Supports DL4J StatsStorage and StatsStorageRouter mechanisms (FileStatsStorage, Remote UI via RemoveUIStatsStorageRouter)
General UI improvements (additional information, formatting fixes)
Deeplearning4j
Updater configuration methods such as .momentum(double) and .epsilon(double) have been deprecated. Instead, use .updater(new Nesterovs(0.9)), .updater(Adam.builder().beta1(0.9).beta2(0.999).build()), etc. to configure updaters (see the sketch below)
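A minimal configuration sketch using the non-deprecated updater API (layer sizes and hyperparameters here are hypothetical):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .updater(new Nesterovs(0.1, 0.9))  // learning rate + momentum, replaces .momentum(double)
        .list()
        .layer(0, new DenseLayer.Builder().nIn(10).nOut(10).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .activation(Activation.IDENTITY).nIn(10).nOut(1).build())
        .build();
```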
DataVec
CsvRecordReader constructors: now use characters for delimiters, instead of Strings (i.e., ',' instead of ",")
Arbiter
Arbiter UI is now a separate module, with Scala version suffixes: arbiter-ui_2.10 and arbiter-ui_2.11
Added transfer learning API
Spark 2.0 support (DL4J and DataVec; see transition notes below)
New layers
Global pooling (aka "pooling over time"; usable with both RNNs and CNNs)
Center loss output layer
1D Convolution and subsampling layers
ZeroPaddingLayer
New ComputationGraph vertices
L2 distance vertex
L2 normalization vertex
Per-output masking is now supported for most loss functions (for per output masking, use a mask array equal in size/shape to the labels array; previous masking functionality was per-example for RNNs)
L1 and L2 regularization can now be configured for biases (via l1Bias and l2Bias configuration options)
Evaluation improvements:
DL4J now has an IEvaluation class (which Evaluation, RegressionEvaluation, etc. all implement); this also allows custom evaluation on Spark
Added multi-class (one vs. all) ROC: ROCMultiClass
For both MultiLayerNetwork and SparkDl4jMultiLayer: added evaluateRegression, evaluateROC, evaluateROCMultiClass convenience methods
HTML export functionality added for ROC charts
TSNE re-added to new UI
Training UI: now usable without an internet connection (no longer relies on externally hosted fonts)
UI: improvements to error handling for ‘no data’ condition
Epsilon configuration now used for Adam and RMSProp updaters
Fix for bidirectional LSTMs + variable-length time series (using masking)
Added CnnSentenceDataSetIterator (for use with ‘CNN for Sentence Classification’ architecture)
Spark + Kryo: now test serialization + throw exception if misconfigured (instead of logging an error that can be missed)
MultiLayerNetwork now adds default layer names if no name is specified
DataVec:
JSON/YAML support for DataAnalysis, custom Transforms etc
ImageRecordReader refactored to reduce garbage collection load (hence improve performance with large training sets)
Faster quality analysis.
Arbiter: added new layer types to match DL4J
Performance improvement for Word2Vec/ParagraphVectors tokenization & training.
Batched inference introduced for ParagraphVectors
Nd4j improvements
New native operations available for ND4j: firstIndex, lastIndex, remainder, fmod, or, and, xor.
OpProfiler NAN_PANIC & INF_PANIC now also check the results of BLAS calls.
Nd4j.getMemoryManager() now provides methods to tweak GC behavior.
An alpha version of the parameter server for Word2Vec/ParagraphVectors was introduced for Spark. Please note: it is not recommended for production use yet.
Performance improvements for CNN inference
Spark versioning schemes: with the addition of Spark 2 support, the versions for the Deeplearning4j and DataVec Spark modules have changed
For Spark 1: use <version>0.8.0_spark_1</version>
For Spark 2: use <version>0.8.0_spark_2</version>
Also note: Modules with Spark 2 support are released with Scala 2.11 support only. Spark 1 modules are released with both Scala 2.10 and 2.11 support
Keras 1D convolutional and pooling layers cannot be imported yet. Will be supported in forthcoming release.
Keras v2 model configurations cannot be imported yet. Will be supported in forthcoming release.
Initial multi-GPU support viable for standalone and Spark.
Refactored the Spark API significantly
Added CuDNN wrapper
Performance improvements for ND4J
Introducing DataVec: lots of new functionality for transforming, preprocessing, and cleaning data. (This replaces Canova)
New DataSetIterators for feeding neural nets with existing data: ExistingDataSetIterator, Floats(Double)DataSetIterator, IteratorDataSetIterator
New learning algorithms for word2vec and paravec: CBOW and PV-DM respectively
New native ops for better performance: DropOut, DropOutInverted, CompareAndSet, ReplaceNaNs
Shadow asynchronous datasets prefetch enabled by default for both MultiLayerNetwork and ComputationGraph
Better memory handling with JVM GC and CUDA backend, resulting in significantly lower memory footprint
FP16 support for CUDA
Better performance for multi-gpu
Including optional P2P memory access support
Normalization support for time series and images
Normalization support for labels
Removal of Canova and shift to DataVec: Javadoc,
Numerous bug fixes
Spark improvements
Added variational autoencoder
Activation function refactor
Activation functions are now an interface
Configuration now via enumeration, not via String (see examples - )
Custom activation functions now supported
New activation functions added: hard sigmoid, randomized leaky rectified linear units (RReLU)
Multiple fixes/improvements for Keras model import
Added P-norm pooling for CNNs (option as part of SubsamplingLayer configuration)
Iteration count persistence: stored/persisted properly in model configuration + fixes to learning rate schedules for Spark network training
LSTM: gate activation function can now be configured (previously: hard-coded to sigmoid)
UI:
Added Chinese translation
Fixes for UI + pretrain layers
Added Java 7 compatible stats collection
Improvements in front-end for handling NaNs
Added UIServer.stop() method
Fixed score vs. iteration moving average line (with subsampling)
Solved Jaxb/Jackson issue with Spring Boot based applications
RecordReaderDataSetIterator now supports NDArrayWritable for the labels (set regression == true; used for multi-label classification + images, etc)
Activation functions (built-in): now specified using Activation enumeration, not String (String-based configuration has been deprecated)
Updaters (Adam, AdaGrad, etc) optimized via C++ operations (significant training performance boost) for DL4J and SameDiff ,
Some packages relocated to avoid split packages (that can be a problem for OSGi and Java 9 modules)
Note: this is a breaking change for some class packages/imports. See for details on exact package changes
Deeplearning4j UI: Webjars versions locked down using dependency management to avoid check on each build
Added MKLDNN (DNNL/OneDNN) support for depthwise_conv2d operation for DL4J and SameDiff
Refactored/merged modules dl4j-perf and dl4j-util into deeplearning4j-core
Fixed an issue with BertWordPieceTokenizer - potential StackOverflowError with certain inputs
Fixed an issue with GlobalPooling layer with masks of different datatype to the activations datatype
Fixed an issue with DL4JModelValidator for ComputationGraph
Fixed an issue where SameDiff layers in DL4J could throw an exception when used with transfer learning
Weight initialization for EmbeddingLayer and EmbeddingSequenceLayer no longer depends on the vocabulary size (only on the vector size)
Fixed an issue with Keras import with bidirectional layers + preprocessors
DL4J UI: added redirect from /train to /train/overview
Fixed an issue where RecordReaderDataSetIterator builder collectMetaData configuration was not being applied
Fixed an issue where MultiLayerNetwork evaluation was not passing metadata to the IEvaluation instances during evaluation ,
Fixed an issue with Spark training SharedTrainingMaster when training with a ComputationGraph and MultiDataSets
Assorted fixes for edge cases for DL4J Keras import
deeplearning4j-nlp-korean will no longer be released for Scala 2.12 due to the required dependency only having a Scala 2.11 version available
Fix for ConvolutionalIterationListener for ComputationGraph
Fixed an issue where dataset and model zoo downloads could get stuck if the server fails to send any data (now: timeout + retry)
DL4J ModelSerializer no longer writes temporary files when restoring models from InputStream
Fixes issues with UIServer multi session mode, and potential shutdown race condition
Fixed an issue where TfidfVectorizer.vectorize() could throw a NPE when fit from LabelAwareIterator
SameDiff multi-threaded inference enhanced (and fixed) - a single SameDiff instance can now be used for inference safely and efficiently from multiple threads
cuDNN support added to SameDiff (automatically enabled for nd4j-cuda-10.x backend)
Added ND4J namespaces: Nd4j.cnn, Nd4j.rnn, Nd4j.image
rgbToHsv, hsvToRgb
rgbToYiq, yiqToRgb, rgbToYuv, yuvToRgb
imageResize
gamma, poisson, shuffle
clipByAvgNorm, embeddingLookup
mergeMaxIndex
cReLU
upsampling3d
triangular_solve
tri operation
triu operation
lstmLayer (note old lstmLayer method renamed to lstmBlock)
gru
Added new Loss operations namespace - Nd4j.loss
HSVToRGB, RGBToHSV, Igamma, Igammac, RandomGamma, RandomPoisson, RandomPoissonV2, RandomShuffle
Added SameDiff ProfilingListener - writes op performance profiles in Chrome profiler format (load in chrome://tracing/)
Added SameDiff ProfileAnalyzer tool to compare profiles output from ProfilingListener (or Tensorflow)
SameDiff listener API: added frame and iteration information for listener methods
Added (non-backend-specific) method of accessing the Nd4j environment: Nd4j.getEnvironment() method (environment info and low-level configuration options)
Improved memory limits/configuration support for libnd4j (c++)
Added pairwise (broadcastable) power backprop operation
Updated JavaCPP presets MKL version to 2020.0 from 2019.5
Added DynamicCustomOp dargs - datatype arguments
Output datatype configuration for Range op , SequenceOp , ConfusionMatrix
Added tensormmul_bp op
OpenBLAS version upgraded to 0.3.8
libnd4j (c++ codebase underlying DL4J, ND4J and SameDiff) refactored to be more easily embeddable in other C++ projects
ImagePreProcessingScaler now supports preprocessing of labels (for segmentation)
Additional datatypes now supported for nd4j-tensorflow TensorflowConversion
SameDiff operation namespaces (sd.math, sd.image, etc) are now code generated to ensure SameDiff and ND4J namespaces are identical (all operations included, same API)
Added ND4J ArchiveUtils.unzipFileTo(String, String, boolean logFiles) overload to enable/disable extracted file path logging
Added weight format configuration for following operations: conv1D, conv2D, conv3D, deconv2d, deconv3d, depthwiseConv2d, pointwiseConv2d, sconv2d
Added backprop operation implementations for mergemax, mergeadd, mergeavg operations
MKL version upgraded from 2020.0 to 2020.1; OpenCV upgraded from 4.2.0 to 4.3.0
SameDiff: DifferentialFunctionFactory class removed in favor of namespace methods (sd.math, sd.linalg, etc)
Added lstmLayer_bp operation
Added gru_bp operation
linspace operation can now use both targs and arrays for start/end/size arguments
Assorted dependency updates - OpenBLAS (0.3.9), OpenCV (4.3.0), Leptonica (1.79.0)
Upgraded assorted dependency versions: javax.activation:activation (1.1 -> 1.1.1), stream analytics (2.7.0->2.9.8), Apache Spark (2.4.3->2.4.5), Jackson databind (2.10.1 -> 2.10.3), Vertx (3.8.3 -> 3.9.0)
Added nd4j-common-tests ResourceUtils.listClassPathfiles method
Updaters (Adam, AdaGrad, etc) optimized via C++ operations (significant training performance boost) for DL4J and SameDiff ,
SameDiff - added CuDNN support
Some packages relocated to avoid split packages (that can be a problem for OSGi and Java 9 modules)
Note: this is a breaking change for some class packages/imports. See for details on exact package changes
Fixed some issues with Tensorflow import of FusedBatchNorm operation
Fixed an issue where the Roll operation did not match Tensorflow operation
Fixed an issue where ArchiveUtils could fail to create the top level destination directory when it does not exist
Fixed an issue where resize_bicubic operation did not match Tensorflow for some configuration values
Pad operation now supports long/int64 values for padding array
Fixed an issue where hashcode operation shape function wasn't always returning int64/long dtype
Fixed an issue with reshape operation on empty arrays with -1s
Improved performance for the concat operation on CPU and CUDA: on CPU for the NHWC case and generally, and on CUDA for the 2D case
Added MKLDNN (DNNL/OneDNN) support for depthwise_conv2d operation for DL4J and SameDiff
Fixed a small SameDiff execution issue for switch operation where the predicate is a constant
Fixed an issue with batchnorm operation when input arrays have unusual strides
Merged nd4j-buffer, nd4j-content modules into nd4j-api
Deleted deprecated nd4j-jackson module (remaining functionality available in nd4j-api)
Deleted unused/unmaintained nd4j-camel and nd4j-gson modules
Optimization for legacy random ops
Optimization for broadcast operations , , , ,
Performance optimization for multiple operations: softmax, squeeze, expand_dims, tanh
Optimization for transpose/permute operations
Performance enhancement: MKLDNN matmul used for some mmul operation cases
Optimization for gather operation on CPU
Optimization for stack/unstack operations on CPU
Optimization for split operation (CPU and CUDA)
ND4J initialization no longer logs number of OpenMP BLAS threads for CUDA
Optimization: Fixed issues with auto-vectorization on multiple CPU operations
Optimization for reshape operation ,
Fixed an issue where INDArray.hashCode() could cause an exception on some datatypes
Optimization for CPU: MKLDNN is now used for softmax, tanh, softmax_bp and tanh_bp operations , , ,
Fixed random_exponential operation
Improved performance on C++ SameDiff graph execution via reduced array zeroing where safe to do so
Improved C++ indexing implementation impacting CPU performance on some operations
Fixed an issue where Split operation could have incorrect output shapes for empty arrays
Fixed some issues with SameDiff.equals method
Fixed an issue with reshape operation output shape on empty arrays ,
Nd4j.gemm now uses Mmul operation internally to avoid potential threading issues with direct BLAS calls on CUDA
Fixed an edge case issue with percentile operation
Fixed an edge case issue for cusolver (CUDA) in libnd4j
Fixed an issue with error formatting for segment operations for incorrect lengths
Fixed an issue where ND4J workspaces were not guaranteed to be unique
Fixed some operation implementations when operating on views (Batch/Space to Space/Batch/Depth; batchnorm_bp)
Fixed an issue where exponential distribution random number generation operation could produce infinities extremely rarely (~1 in 10^9 values)
Fixed an issue with long file paths for memory mapped workspaces on Windows
Memory for memory mapped workspaces are now deallocated immediately when workspace is destroyed, instead of waiting for GC to free memory
Fall-back to other BLAS implementation for cases where MKLDNN GEMM implementation is slow
Set nd4j-native source/target to Java 7 ,
datavec-python: added zero-copy support for bytes/byte buffers
datavec-python: Python exceptions are now thrown as Java exceptions
datavec-python: Added support for additional NumPy datatypes
datavec-python: Python version upgraded from 3.7.6 to 3.7.7
Deleted not properly maintained modules: datavec-camel, datavec-perf
Fixed missing BOOL datatype support for arrow conversion functionality
Assorted fixes for datavec-python ,
Fixed an issue with LineRecordReader where initialization was performed unnecessarily (adding performance overhead)
Refactoring to decouple configuration and learning methods from their implementations
Added builder patterns for all configuration classes
Fixes an issue with GridSearchCandidateGenerator not working correctly for some cases ,
Performance and memory optimizations via optimizations of internal use of workspaces
Reflections library has entirely been removed from DL4J and is no longer required for custom layer serialization/deserialization ,
RecordReaderMultiDataSetIterator will no longer try to convert unused columns to numerical values
Fixes for Android compilation (removed duplicate classes, aligned versions, removed some dependencies)
Fix for RecordReaderMultiDataSetIterator where output could be incorrect for some constructors
Non-frozen layers before a frozen layer will no longer be skipped during backprop (useful for GANs and similar architectures)
Fixed issue where ComputationGraph topological sort may not be consistent on all platforms; could sometimes break ComputationGraphs (with multiple valid topological orderings) trained on PC and deployed on Android
Fixed issue with CuDNN batch norm using 1-decay instead of decay
deeplearning4j-cuda no longer throws exceptions if present on classpath with nd4j-native backend set to higher priority
Added RNG control for CifarDataSetIterator
WordVectorSerializer now deletes temp files immediately once done
IterationListener has been deprecated in favor of TrainingListener. For existing custom listeners, switch from implements TrainingListener to extends BaseTrainingListener (see the sketch below)
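A sketch of a migrated custom listener (the class name and logging are hypothetical; BaseTrainingListener provides no-op defaults for the other TrainingListener methods):

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.optimize.api.BaseTrainingListener;

public class ScoreLoggingListener extends BaseTrainingListener {
    @Override
    public void iterationDone(Model model, int iteration, int epoch) {
        // Called once per parameter update; log the current score
        System.out.println("epoch " + epoch + ", iteration " + iteration + ", score " + model.score());
    }
}
```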
ImageRecordReader now logs number of inferred label classes (to reduce risk of users missing a problem if something is misconfigured)
Added AnalyzeSpark.getUnique overload for multiple columns
Added performance/timing module
Reduced ImageRecordReader garbage generation via buffer reuse
Fixes for Android compilation (aligned versions, removed some dependencies)
Removed Reflections library use in DataVec
Fix for TransformProcessRecordReader batch support
Fix for TransformProcessRecordReader with filter operations
Fixed issue with ImageRecordReader/ParentPathLabelGenerator incorrectly filtering directories containing '.' character(s)
ShowImageTransform now initializes frame lazily to avoid blank windows
DataVec ClassPathResource has been deprecated; use nd4j-common version instead
Fixed timestamp issue that could cause incorrect rendering of first model's results in UI
Execution now waits for last model(s) to complete before returning when a termination condition is hit
As per DL4J etc: use of Reflections library has been removed entirely from Arbiter
Remove use of Eclipse Collections library due to issues with Android compilation
Improved cleanup of completed models to reduce maximum memory requirements for training
Histogram and Flow iteration listeners deprecated. They are still functional, but using new UI is recommended
See ConvolutionMode javadoc for more details:
Added MKL-DNN support for Conv/Pool/BatchNorm/LRN layers. MKL-DNN will be used automatically when using nd4j-native backend. (, )
L1/L2 regularization now made into a class; weight decay added, with better control as to when/how it is applied. See for more details on the difference between L2 and weight decay. In general, weight decay should be preferred to L2 regularization. (, )
Added dot product attention layers: , , and
The parameter/activation datatypes for new networks can be set during construction using the dataType(DataType) method on NeuralNetConfiguration.Builder ()
MultiLayerNetwork/ComputationGraph can be converted between (floating point) datatypes FP16/32/64 for the parameters and activations using the MultiLayerNetwork/ComputationGraph.convertDataType(DataType) methods (, )
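For example, a sketch assuming an existing FP32 network named net (the variable is hypothetical):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.buffer.DataType;

// Creates an FP16 copy of the network's parameters and activation configuration
MultiLayerNetwork fp16Net = net.convertDataType(DataType.HALF);
```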
EmbeddingLayer and EmbeddingSequenceLayer builders now have .weightInit(INDArray) and .weightInit(Word2Vec) methods for initializing parameters from pretrained word vectors ()
PerformanceListener can now be configured to report garbage collection information (number/duration)
Evaluation class will now check for NaNs in the predicted output and throw an exception instead of treating argMax(NaNs) as having value 0 ()
Added ModelAdapter for ParallelInference for convenience and for use cases such as YOLO (allows improved performance by avoiding detached (out-of-workspace) arrays) ()
Added GELU Activation function ()
Added BertIterator (a MultiDataSetIterator for BERT training - supervised and unsupervised)
Added validation to MultiLayerNetwork/ComputationGraph that throws an exception when attempting to perform Regression evaluation on a classifier, or vice-versa (, )
Added ComputationGraph.output(List<String> layers, boolean train, INDArray[] features, INDArray[] featureMasks)
method to get the activations for a specific set of layers/vertices only (without redundant calculations) ()
Weight initialization for networks is now implemented as classes (not just enumerations) and hence is now extensible via the IWeightInit interface (); i.e., custom weight initializations are now supported (, )
Added Capsule Network layers (no GPU acceleration until next release) - , and ()
Added Cifar10DataSetIterator to replace CifarDataSetIterator (, )
Keras import: Importing models from InputStream is now supported (, )
Layer/NeuralNetConfiguration builders now have getter/setter methods also, for better Kotlin support ()
Most JavaScript dependencies and fonts for UI have been migrated to WebJars ()
CheckpointListener now has static availableCheckpoints(File), loadCheckpointMLN(File, int) and loadLastCheckpointMLN(File) etc methods ()
MultiLayerNetwork/ComputationGraph now validate and throw an exception in certain incompatible RNN configurations, like truncated backpropagation through time combined with LastTimeStepLayer/Vertex ()
Added BERT WordPiece tokenizers ()
Deeplearning4j UI now has multi-user/multi-session support - use UIServer.getInstance(boolean multiSession, Function<String,StatsStorage>) to start the UI in multi-session mode ()
Layer/NeuralNetworkConfiguration builder method validation standardized and improved ()
WordVectorSerializer now supports reading and exporting text format vectors via WordVectorSerializer.writeLookupTable and readLookupTable ()
Updated to JavaCPP, JavaCPP presets, and JavaCV version 1.5 ()
Added EvaluationBinary false alarm rate calculation ()
ComputationGraph GraphBuilder now has an appendLayer method that can be used to add layers connected to the last added layer/vertex ()
Added Wasserstein loss function ()
Keras import: Improved errors/exceptions for lambda layer import ()
Apache Lucene/Solr upgraded from 7.5.0 to 7.7.1 ()
KMeans clustering strategy is now configurable ()
DL4J Spark training: fix for shared clusters (multiple simultaneous training jobs) - Aeron stream ID now generated randomly ()
cuDNN helpers will no longer attempt to fall back on built-in layer implementations if an out-of-memory exception is thrown ()
Batch normalization global variance reparameterized to avoid underflow and zero/negative variance in some cases during distributed training ()
Fixed a bug where dropout instances were incorrectly shared between layers when using transfer learning with dropout (, )
Fixed issue where tensorAlongDimension could result in an incorrect array order for edge cases and hence exceptions in LSTMs ()
Fixed an edge case issue with ComputationGraph.getParam(String) where the layer name contains underscores ()
Fixed an edge case with ParallelInference on CUDA where (very rarely) input array operations (such as normalization) may not be fully completed before transferring an array between threads (, )
Fixed an edge case with KFoldIterator when the total number of examples is not a multiple of the batch size (, )
Fixed an issue where DL4J UI could throw a NoClassDefFoundError
on Java 9/10/11 (, )
Keras import: added aliases for weight initialization ()
Fixed issue where dropout instances would not be correctly cloned when network configuration was cloned ()
Fixed workspace issue with ElementwiseVertex with single input ()
Fixed issue with UI where detaching StatsStorage could attempt to remove storage twice, resulting in an exception ()
Fixed issue where LossMultiLabel would generate NaNs when all labels in minibatch are the same class. Now 0 gradient is returned instead. (, )
Fixed an issue where DepthwiseConv2D weight could be wrong shape on restoring network from saved format ()
Fixed issue where BaseDatasetIterator.next() would not apply preprocessors, if one was set ()
Improved default configuration for CenterLossOutputLayer ()
Fixed an issue for UNet non-pretrained configuration ()
Fixed an issue where Word2Vec VocabConstructor could deadlock under some circumstances ()
SkipGram and CBOW (used in Word2Vec) were made native operations for better performance ()
Fixed an issue where references to detached StatsListener instances would be maintained, potentially leading to memory issues when using InMemoryStatsListener ()
Optimization: Workspaces were added to SequenceVectors and Word2Vec ()
Improved validation for RecordReaderDataSetIterator ()
Improved handling of unknown words in WordVectors implementation ()
Yolo2OutputLayer: Added validation for incorrect labels shape. ()
LastTimeStepLayer will now throw an exception when the input mask is all 0s (no data - no last time step) ()
Fixed an issue where MultiLayerNetwork/ComputationGraph.setLearningRate method could lead to invalid updater state in some rare cases ()
Fixed an issue where the Conv1D layer would calculate an incorrect output length in MultiLayerNetwork.summary() ()
Async iterators are now used in EarlyStoppingTrainer to improve data loading performance ()
EmbeddingLayer and EmbeddingSequenceLayer performance has been improved on CUDA ()
Removed outdated/legacy scala tools repository (, )
Fixed issues in L2NormalizeVertex equals/hashcode methods ()
Fixed Workspace issue in ConvolutionalListener ()
Fixed EvaluationBinary falsePositiveRate calculation ()
Added validation and useful exception for MultiLayerNetwork.output(DataSetIterator) methods ()
Fixed minor issue where ComputationGraph.summary() would throw a NullPointerException if init() had not already been called ()
Fixed a ComputationGraph issue where an input into a single layer/vertex repeated multiple times could fail during training ()
Improved performance for KMeans implementation ()
Fixed an issue with rnnGetPreviousState for RNNs in 'wrapper' layers such as FrozenLayer ()
Keras import: Fixed an issue with order of words when importing some Keras tokenizers ()
Keras import: fixed issue with possible UnsupportedOperationException in KerasTokenizer class ()
Keras import: fixed an import issue with models combining embeddings, reshape and convolution layers ()
Keras import: fixed an import issue with input type inference for some RNN models ()
Fixed some padding issues in LocallyConnected1D/2D layers ()
Removed reliance on periodic garbage collection calls for handling memory management of out-of-workspace (detached) INDArrays ()
Added INDArray.close() method to allow users to manually release off-heap memory immediately ()
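A minimal sketch of manual memory release (the array shape is hypothetical):

```java
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray scratch = Nd4j.create(DataType.FLOAT, 1024, 1024);
// ... use scratch ...
scratch.close();  // off-heap memory is released immediately; the array must not be used afterwards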
SameDiff: Added TensorFlowImportValidator tool to determine if a TensorFlow graph can likely be imported into SameDiff. Reports the operations used and whether they are supported in SameDiff ()
Added Nd4j.createFromNpzFile method to load Numpy npz files ()
Added support for importing BERT models into SameDiff (, )
Added SameDiff GraphTransformUtil for performing transfer learning and other graph modifications (, , )
Evaluation, RegressionEvaluation etc now support 4d (CNN segmentation) data formats; also added Evaluation.setAxis(int) method to support other data formats such as channels-last/NHWC for CNNs and NWC for CNN1D/RNNs. Defaults to axis 1 (which matches DL4J CNN and RNN data formats) (, )
For more details, see , ,
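For example, a sketch for channels-last (NHWC) segmentation output, assuming the axis convention described above:

```java
import org.nd4j.evaluation.classification.Evaluation;

Evaluation eval = new Evaluation();
eval.setAxis(3);  // treat dimension 3 as the class/channel axis (default is 1, i.e. NCHW/NCW)
```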
Added DotProductAttention and MultiHeadDotProductAttention operations ()
Added Nd4j.exec(Op) and Nd4j.exec(CustomOp) convenience methods ()
Import of TF Assertions added ()
Support/fixes for control dependencies ()
Support/fixes for TensorArray and related ops (, , )
nd4j-common - tar/tar.gz support added; Zip file listing and single file extraction added (, )
SameDiff: reductions operations now support "dynamic" (non-constant) inputs for axis argument ()
ROCBinary now has .getROC(int outputNum) method ()
SameDiff: L1/L2 regularization added (, )
SameDiff: Added SDVariable.convertToVariable() and convertToConstant() - to change SDVariable type ()
Added checks and useful exceptions for reductions on empty arrays ()
SameDiff "op creator" methods (SameDiff.tanh(), SameDiff.conv2d(...) etc) have been moved to subclasses - access creators via SameDiff.math()/random()/nn()/cnn()/rnn()/loss() methods or SameDiff.math/random/nn/cnn/rnn/loss fields ()
SameDiff TensorFlow import: import can now be overridden for cases such as user-defined functions (, )
Libnd4j (c++) benchmarking framework added ()
Added OpExecutioner.inspectArray(INDArray) method to get summary statistics for analysis/debugging purposes ()
Added INDArray.reshape(char order, boolean enforceView, long... newShape) to reshape an array whilst throwing an exception (instead of returning a copy) if the reshape cannot be performed (, )
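A sketch, assuming a contiguous source array whose buffer layout permits a view reshape:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray arr = Nd4j.linspace(1, 12, 12);       // contiguous array of length 12
INDArray view = arr.reshape('c', true, 3, 4);  // reshape as a view; throws instead of silently copying if not possible
```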
Added SDVariable method overloads (plus, minus, times, etc) for Kotlin ()
Added SDVariable convenience methods for dot, reshape, permute ()
Added SameDiff SDIndex.point(long, boolean keepDim) method (to keep point indices in output array as size 1 axis) ()
Added SameDiff ProtoBufToFlatBufConversion command line tool for doing TensorFlow frozen model (protobuf) to SameDiff FlatBuffers conversion ()
Improved DataType validation for SameDiff operations ()
nd4j-base64 module (deprecated in beta3) has been removed. Nd4jBase64 class has been moved to nd4j-api ()
When specifying arguments for op execution along dimension (for example, reductions) the reduction axis are now specified in the operation constructor - not separately in the OpExecutioner call. ()
Removed old Java loop-based BooleanIndexing methods. Equivalent native ops should be used instead. ()
Removed Nd4j.ENFORCE_NUMERICAL_STABILITY, Nd4j.copyOnOps, etc ()
SameDiff "op creator" methods (SameDiff.tanh(), SameDiff.conv2d(...) etc) have been moved to subclasses - access creators via SameDiff.math()/random()/nn()/cnn()/rnn()/loss() methods or SameDiff.math/random/nn/cnn/rnn/loss fields ()
Nd4j.emptyLike(INDArray) has been removed. Use Nd4j.like(INDArray) instead ()
org.nd4jutil.StringUtils removed; suggest using Apache commons lang3 StringUtils instead ()
ND4J Jackson RowVector(De)Serializer has been deprecated due to datatype changes; NDArrayText(De)Serializer should be used instead (, )
nd4j-instrumentation module has been removed due to lack of use/maintenance ()
Fixed bug with InvertMatrix.invert() with [1,1] shape matrices ()
Fixed edge case bug for Updater instances with length 1 state arrays ()
Fixed edge case with FileDocumentIterator with empty documents ()
Improved functionality for losses (, , , )
Improved errors for missing/misspelled placeholders ()
Fixed edge cases in loops (, )
Fixed issue with Nd4j.vstack on 1d arrays returning 1d output, not 2d stacked output ()
Conv2D op can infer kernel size from input arrays directly when required (, )
Fixed an issue with Numpy format export - Nd4j.toNpyByteArray(INDArray) ()
Fixes for SameDiff when it is used within an external workspace ()
Fixed an issue where empty NDArrays would be reported as having scalar shape information, length 1 ()
Optimization: libnd4j (c++) indexing for ops will use uint for faster offset calculations when required and possible ()
Optimization: libnd4j loops performance improved for faster execution of some operations (, , )
Local response normalization op optimized (, )
Fixed an issue with INDArray.repeat on some view arrays ()
Improved performance for execution of some operations on view arrays ()
Improved performance on broadcast operations (, , )
Improved performance for non-EWS reduction along dimension operations ()
Improved performance for IndexReduce operations () and small reductions ()
Improved performance of the one_hot operation () and the tanh operation ()
Improved performance for transform operations ()
Optimization: empty arrays are created only once and cached (as they are immutable) ()
Improved performance on operations using tensor along dimension for parallelization (, )
Improved performance on "reduce 3" reduction operations ()
Improved handling of CUDA contexts in heavily multi-threaded environments ()
Fixed an issue where Evaluation.reset() would incorrectly clear the String class labels ()
SameDiff: Improved gradient calculation performance/efficiency; "gradients" are now no longer defined for non-floating-point variables, and variables that aren't required to calculate loss or parameter gradients ()
Behaviour of IEvaluation instances now no longer depends on the global (default) datatype setting ()
INDArray.get(point(x), y) or .get(y, point(x)) now returns rank 1 arrays when performed on rank 2 arrays ()
Removed reliance on Guava for SameDiff, fixing potential issue for Java 11/12 and when earlier versions of Guava are on the classpath (, )
ND4J indexing (INDArray.get) implementation rewritten for better performance and reliability ()
Fixes for local response normalization backprop op ()
Some users with Intel Skylake CPUs have reported deadlocks on MKL-DNN convolution 2d backprop operations (DL4J ConvolutionLayer backprop, ND4J "conv2d_bp" operation) when OMP_NUM_THREADS is set to 8 or higher. Investigations suggest this is likely an issue with MKL-DNN, not DL4J/ND4J. See . Workaround: Disable MKL-DNN for conv2d_bp operation via ND4J_MKL_FALLBACK (see earlier) or disable MKL-DNN globally, for Skylake CPUs.
Added PythonTransform (arbitrary python code execution for pre processing) (, )
Added FirstDigit (Benford's law) transform (, )
StringToTimeTransform now supports setting Locale (, )
Added StreamInputSplit for creating local data pipelines where data is stored remotely on storage such as HDFS or S3 (, )
LineRecordReader (and subtypes) now have the option to define the character set ()
Added TokenizerBagOfWordsTermSequenceIndexTransform (TFIDF transform), GazeteerTransform (binary vector for word present) and MultiNlpTransform transforms; added BagOfWordsTransform interface ()
Fixed issue with ImageLoader.scalingIfNeeded ()
Arbiter now supports genetic algorithm search ()
Fixed an issue where early stopping used in Arbiter would result in a serialization exception ()
Workspaces feature added
MapFileRecordReader and MapFileSequenceRecordReader added
Spark: Utilities to save and load JavaRDD<List<Writable>> and JavaRDD<List<List<Writable>>> data to Hadoop MapFile and SequenceFile formats
Arbiter UI:
UI/CUDA/Linux issue:
Dirty shutdown on JVM exit is possible for CUDA backend sometimes:
Issues with RBM implementation
ND4J: Added SameDiff - Java automatic differentiation library (alpha release) with Tensorflow import (technology preview) and hundreds of new operations
ND4J: Added CUDA 9.0 and 9.1 support (with cuDNN), dropped support for CUDA 7.5, continued support for CUDA 8.0
ND4J: Native binaries (nd4j-native on Maven Central) now ship with AVX/AVX2/AVX-512 support (Windows/Linux)
DL4J: Large number of new layers and API improvements
DL4J: Keras 2.0 import support
Layers (new and enhanced)
Added Yolo2OutputLayer CNN layer for object detection (Link). See also DataVec's ObjectDetectionRecordReader
Adds support for 'no bias' layers via hasBias(boolean) config (DenseLayer, EmbeddingLayer, OutputLayer, RnnOutputLayer, CenterLossOutputLayer, ConvolutionLayer, Convolution1DLayer). EmbeddingLayer now defaults to no bias (Link)
Adds support for dilated convolutions (aka 'atrous' convolutions) - ConvolutionLayer, SubsamplingLayer, and 1D versions there-of. (Link)
ElementWiseVertex now (additionally) supports Average and Max modes, in addition to Add/Subtract/Product (Link)
Added SeparableConvolution2D layer (Link)
Added Deconvolution2D layer (aka transpose convolution, fractionally strided convolution layer) (Link)
Added ReverseTimeSeriesVertex (Link)
Added RnnLossLayer - no-parameter version of RnnOutputLayer, or RNN equivalent of LossLayer (Link)
Added CnnLossLayer - no-parameter CNN output layer for use cases such as segmentation, denoising, etc. (Link)
Added Bidirectional layer wrapper (converts any uni-directional RNN to a bidirectional RNN) (Link)
Added SimpleRnn layer (aka "vanilla" RNN layer) (Link)
Added LastTimeStep wrapper layer (wraps a RNN layer to get last time step, accounting for masking if present) (Link)
Added MaskLayer utility layer that simply zeros out activations on forward pass when a mask array is present (Link)
Added alpha-version (not yet stable) SameDiff layer support to DL4J (Note: forward pass, CPU only for now)(Link)
Added Cropping2D layer (Link)
Added parameter constraints API (LayerConstraint interface), and MaxNormConstraint, MinMaxNormConstraint, NonNegativeConstraint, UnitNormConstraint implementations (Link)
Significant refactoring of learning rate schedules (Link)
Added ISchedule interface; added Exponential, Inverse, Map, Poly, Sigmoid and Step schedule implementations (Link)
Added support for both iteration-based and epoch-based schedules via ISchedule. Also added support for custom (user defined) schedules
Learning rate schedules are configured on the updaters, via the .updater(IUpdater) method
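For illustration, a minimal sketch of attaching a schedule to an updater under the new API (the StepSchedule/ScheduleType names and the Adam(ISchedule) constructor are assumed from the descriptions above; exact packages may differ by version):

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.schedule.ISchedule;
import org.nd4j.linalg.schedule.ScheduleType;
import org.nd4j.linalg.schedule.StepSchedule;

public class ScheduleSketch {
    public static void main(String[] args) {
        // Step schedule: start at 0.1, multiply by 0.5 every 100 iterations
        ISchedule lrSchedule = new StepSchedule(ScheduleType.ITERATION, 0.1, 0.5, 100);

        // The schedule is attached to the updater itself, not set as a separate learning rate
        NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
                .updater(new Adam(lrSchedule));
    }
}
```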
Added dropout API (IDropout - previously dropout was available but not a class); added Dropout, AlphaDropout (for use with self-normalizing NNs), GaussianDropout (multiplicative), GaussianNoise (additive). Added support for custom dropout types (Link)
Added support for dropout schedules via ISchedule interface (Link)
Added weight/parameter noise API (IWeightNoise interface); added DropConnect and WeightNoise (additive/multiplicative Gaussian noise) implementations (Link); dropconnect and dropout can now be used simultaneously
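As a rough sketch of combining the new dropout and weight noise APIs on a single layer (the .dropOut(IDropout) and .weightNoise(IWeightNoise) builder methods and package locations are assumptions based on the items above):

```java
import org.deeplearning4j.nn.conf.dropout.Dropout;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.weightnoise.DropConnect;

public class DropoutNoiseSketch {
    public static void main(String[] args) {
        DenseLayer layer = new DenseLayer.Builder()
                .nIn(100).nOut(50)
                // IDropout instance instead of a plain double retain probability
                .dropOut(new Dropout(0.5))
                // DropConnect weight noise; can now be combined with dropout
                .weightNoise(new DropConnect(0.5))
                .build();
    }
}
```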
Adds layer configuration alias .units(int), equivalent to .nOut(int) (Link)
Adds ComputationGraphConfiguration GraphBuilder .layer(String, Layer, String...) alias for .addLayer(String, Layer, String...)
Layer index no longer required for MultiLayerConfiguration ListBuilder (i.e., .list().layer(<layer>) can now be used for configs) (Link)
Added MultiLayerNetwork.summary(InputType) and ComputationGraph.summary(InputType...) methods (shows layer and activation size information) (Link)
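A hedged sketch combining several of the conveniences above - the index-free .list().layer(...) builder, the .units(...) alias, and summary(InputType) - with layer sizes chosen purely for illustration:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class BuilderSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // No explicit layer index needed any more
                .layer(new DenseLayer.Builder().nIn(784).units(128)   // .units(...) == .nOut(...)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(128).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // Summary including layer and activation size information
        System.out.println(net.summary(InputType.feedForward(784)));
    }
}
```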
MultiLayerNetwork, ComputationGraph and layerwise trainable layers now track the number of epochs (Link)
Added deeplearning4j-ui-standalone module: uber-jar for easy launching of UI server (usage: java -jar deeplearning4j-ui-standalone-1.0.0-alpha.jar -p 9124 -r true -f c:/UIStorage.bin)
Weight initializations:
Added .weightInit(Distribution) convenience/overload (previously: required .weightInit(WeightInit.DISTRIBUTION).dist(Distribution)) (Link)
WeightInit.NORMAL (for self-normalizing neural networks) (Link)
Ones, Identity weight initialization (Link)
Added new distributions (LogNormalDistribution, TruncatedNormalDistribution, OrthogonalDistribution, ConstantDistribution) which can be used for weight initialization (Link)
RNNs: Added ability to specify weight initialization for recurrent weights separately to "input" weights (Link)
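For example, a small sketch of the .weightInit(Distribution) convenience overload (the DenseLayer builder usage and NormalDistribution(mean, std) arguments are illustrative assumptions):

```java
import org.deeplearning4j.nn.conf.distribution.NormalDistribution;
import org.deeplearning4j.nn.conf.layers.DenseLayer;

public class WeightInitSketch {
    public static void main(String[] args) {
        DenseLayer layer = new DenseLayer.Builder()
                .nIn(64).nOut(32)
                // Convenience overload: pass a Distribution directly, instead of
                // .weightInit(WeightInit.DISTRIBUTION).dist(new NormalDistribution(0, 1))
                .weightInit(new NormalDistribution(0, 1))
                .build();
    }
}
```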
Added layer alias: Convolution2D (ConvolutionLayer), Pooling1D (Subsampling1DLayer), Pooling2D (SubsamplingLayer) (Link)
Added Spark IteratorUtils - wraps a RecordReaderMultiDataSetIterator for use in Spark network training (Link)
CuDNN-supporting layers (ConvolutionLayer, etc) now warn the user if using CUDA without CuDNN (Link)
Binary cross entropy (LossBinaryXENT) now implements clipping (1e-5 to (1 - 1e-5) by default) to avoid numerical underflow/NaNs (Link)
SequenceRecordReaderDataSetIterator now supports multi-label regression (Link)
TransferLearning FineTuneConfiguration now has methods for setting training/inference workspace modes (Link)
IterationListener iterationDone method now reports both current iteration and epoch count; removed unnecessary invoke/invoked methods (Link)
Added MultiLayerNetwork.layerSize(int), ComputationGraph.layerSize(int)/layerSize(String) to easily determine size of layers (Link)
Added MultiLayerNetwork.toComputationGraph() method (Link)
Added NetworkUtils convenience methods to easily change the learning rate of an already initialized network (Link)
Added MultiLayerNetwork.save(File)/.load(File) and ComputationGraph.save(File)/.load(File) convenience methods (Link)
Added CheckpointListener to periodically save a copy of the model during training (every N iter/epochs, every T time units) (Link)
Added ComputationGraph output method overloads with mask arrays (Link)
New LossMultiLabel loss function for multi-label classification (Link)
New iterators and iterator improvements:
Added additional score functions for early stopping (ROC metrics, full set of Evaluation/Regression metrics, etc) (Link)
Added additional ROC and ROCMultiClass evaluation overloads for MultiLayerNetwork and ComputationGraph (Link)
Clarified Evaluation.stats() output to refer to "Predictions" instead of "Examples" (former is more correct for RNNs) (Link)
EarlyStoppingConfiguration now supports Supplier<ScoreCalculator> for use with non-serializable score calculators (Link)
Improved ModelSerializer exceptions when trying to load a model via wrong method (i.e., try to load ComputationGraph via restoreMultiLayerNetwork) (Link)
Added SparkDataValidation utility methods to validate saved DataSet and MultiDataSet on HDFS or local (Link)
ModelSerializer: added restoreMultiLayerNetworkAndNormalizer and restoreComputationGraphAndNormalizer methods (Link)
ParallelInference now has output overloads with support for input mask arrays (Link)
Lombok is no longer included as a transitive dependency (Link)
Performance improvement for J7FileStatsStorage with large amount of history (Link)
Fixed UI layer sizes for variational autoencoder layers (Link)
UI Play servers switch to production (PROD) mode (Link)
Related to the above: users can now set the play.crypto.secret system property to manually set the Play application secret; it is randomly generated by default (Link).
SequenceRecordReaderDataSetIterator would apply preprocessor twice (Link)
Evaluation no-arg constructor could cause NaN evaluation metrics when used on Spark
CollectScoresIterationListener could recurse endlessly (Link)
Async(Multi)DataSetIterator calling reset() on underlying iterator could cause issues in some situations (Link)
In some cases, L2 regularization could be (incorrectly) applied to frozen layers (Link)
Logging fixes for NearestNeighboursServer (Link)
Memory optimization for BaseStatsListener (Link)
ModelGuesser fix for loading Keras models from streams (previously would fail) (Link)
Fix for incorrect condition in DuplicateToTimeSeriesVertex (Link)
Fix for getMemoryReport exception on some valid ComputationGraph networks (Link)
RecordReaderDataSetIterator when used with preprocessors could cause an exception under some circumstances (Link)
CnnToFeedForwardPreProcessor could silently reshape invalid input, as long as the input array length matches the expected length (Link)
ModelSerializer temporary files would not be deleted if JVM crashes; now are deleted immediately when no longer required (Link)
RecordReaderMultiDataSetIterator may not add mask arrays under some circumstances, when set to ALIGN_END mode (Link)
ConvolutionIterationListener previously produced an IndexOutOfBoundsException when all convolution layers are frozen (Link)
PrecisionRecallCurve.getPointAtRecall could return a point with a correct but sub-optimal precision when multiple points had identical recall (Link)
Setting dropout(0) on transfer learning FineTuneConfiguration did not remove dropout if present on existing layer (Link)
Under some rare circumstances, Spark evaluation could lead to a NullPointerException (Link)
ComputationGraph: disconnected vertices were not always detected in configuration validation (Link)
Activation layers would not always inherit the global activation function configuration (Link)
PerformanceListener is now serializable (Link)
ScoreIterationListener and PerformanceListener now report model iteration, not "iterations since listener creation" (Link)
Precision/recall curves cached values in ROC class may not be updated after merging ROC instances (Link)
ROC merging after evaluating a large number of examples may produce IllegalStateException (Link)
Added checks for invalid input indices to EmbeddingLayer (Link)
Fixed possible NPE when loading legacy (pre-0.9.0) model configurations from JSON (Link)
Fixed issues with EvaluationCalibration HTML export chart rendering (Link)
Fixed possible incorrect rendering of UI/StatsStorage charts with J7FileStatsStorage when used with Spark training (Link)
MnistDataSetIterator would not always reliably detect and automatically fix/redownload on corrupted download data (Link)
Fixes to propagation of thread interruptions (Link)
Fixes for TSNE posting of data to UI for visualization (Link)
PerformanceListener now throws a useful exception (in constructor) on invalid frequency argument, instead of runtime ArithmeticException (Link)
RecordReader(Multi)DataSetIterator now throws more useful exceptions when Writable values are non-numerical (Link)
UI: Fixed possible character encoding issues for non-English languages when internationalization data .txt files are read from uber JARs (Link)
UI: Fixed UI incorrectly trying to parse non-DL4J UI resources when loading I18N data (Link)
Various threading fixes (Link)
Evaluation: no-arg methods (f1(), precision(), etc) now return single class value for binary case instead of macro-averaged value; clarify values in stats() method and javadoc (Link)
Early stopping training: TrainingListener onEpochStart/End (etc) methods were not being called correctly (Link)
Fixes issue where dropout was not always applied to input of RNN layers (Link)
ModelSerializer: improved validation/exceptions when reading from invalid/empty/closed streams (Link)
ParallelInference fixes:
fixes for variable size inputs (variable length time series, variable size CNN inputs) when using batch mode (Link)
fixes: underlying model exceptions during the output method are now properly propagated back to the user (Link)
fixes support for 'pre-batched' inputs (i.e., inputs where minibatch size is > 1) (Link)
Memory optimization for network weight initialization via in-place random ops (Link)
Fix for VariationalAutoencoder builder decoder layer size validation (Link)
Improved Kmeans throughput (Link)
Add RPForest to nearest neighbors (Link)
Default training workspace mode has been switched to SEPARATE from NONE for MultiLayerNetwork and ComputationGraph (Link)
Behaviour change: fit(DataSetIterator) and similar methods no longer perform layerwise pretraining followed by backprop - only backprop is performed in these methods. For pretraining, use pretrain(DataSetIterator) and pretrain(MultiDataSetIterator) methods (Link)
Previously deprecated updater configuration methods (.learningRate(double), .momentum(double) etc) all removed
To configure learning rate: use .updater(new Adam(lr)) instead of .updater(Updater.ADAM).learningRate(lr)
To configure bias learning rate: use .biasUpdater(IUpdater) method
To configure learning rate schedules: use .updater(new Adam(ISchedule)) and similar
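A brief before/after sketch of the updater migration described above (the removed 0.9.1-era calls appear only in comments; Sgd is used for the bias updater purely as an example):

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.config.Sgd;

public class UpdaterMigrationSketch {
    public static void main(String[] args) {
        // Old (removed): .updater(Updater.ADAM).learningRate(1e-3).biasLearningRate(1e-2)
        // New style: learning rates live on the IUpdater instances themselves
        NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
                .updater(new Adam(1e-3))
                .biasUpdater(new Sgd(1e-2));
    }
}
```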
Updater configuration via enumeration (i.e., .updater(Updater)) has been deprecated; use .updater(IUpdater)
.regularization(boolean) config removed; functionality is now always equivalent to .regularization(true)
.useDropConnect(boolean) removed; use .weightNoise(new DropConnect(double)) instead
.iterations(int) method has been removed (was rarely used and confusing to users)
Multiple utility classes (in org.deeplearning4j.util) have been deprecated and/or moved to nd4j-common. Use the same class names in nd4j-common's org.nd4j.util package instead.
DataSetIterators in DL4J have been moved from deeplearning4j-nn module to new deeplearning4j-datasets, deeplearning4j-datavec-iterators and deeplearning4j-utility-iterators modules. Packages/imports are unchanged; deeplearning4j-core pulls these in as transitive dependencies hence no user changes should be required in most cases (Link)
Previously deprecated .activation(String) has been removed; use .activation(Activation) or .activation(IActivation) instead
Layer API change: Custom layers may need to implement the applyConstraints(int iteration, int epoch) method
Parameter initializer API change: Custom parameter initializers may need to implement the isWeightParam(String) and isBiasParam(String) methods
RBM (Restricted Boltzmann Machine) layers have been removed entirely. Consider using VariationalAutoencoder layers as a replacement (Link)
GravesBidirectionalLSTM has been deprecated; use new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build()) instead
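A minimal sketch of the suggested replacement (nIn/nOut values are arbitrary, and package locations are assumed):

```java
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;

public class BidirectionalSketch {
    public static void main(String[] args) {
        // Replacement for the deprecated GravesBidirectionalLSTM:
        // wrap any unidirectional RNN layer in Bidirectional
        Bidirectional biLstm = new Bidirectional(Bidirectional.Mode.ADD,
                new GravesLSTM.Builder().nIn(100).nOut(64).build());
    }
}
```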
Previously deprecated WordVectorSerializer methods have now been removed (Link)
Removed deeplearning4j-ui-remote-iterationlisteners module and obsolete RemoteConvolutionalIterationListener (Link)
Performance on some network types may be reduced on CUDA compared to 0.9.1 (with workspaces configured). This will be addressed in the next release
Some issues have been noted with FP16 support on CUDA (Link)
Keras 2 support, keeping backward compatibility for keras 1
Keras 2 and 1 import use the exact same API; the Keras version is inferred by DL4J
Keras unit test coverage increased by 10x, many more real-world integration tests
Unit tests for importing and checking layer weights
Leaky ReLU, ELU, SELU support for model import
All Keras layers can be imported with optional bias terms
Old deeplearning4j-keras module removed, old "Model" API removed
All Keras initializations (Lecun normal, Lecun uniform, ones, zeros, Orthogonal, VarianceScaling, Constant) supported
1D convolution and pooling supported in DL4J and Keras model import
Atrous Convolution 1D and 2D layers supported in Keras model import
1D Zero padding layers supported
Keras constraints module fully supported in DL4J and model import
Upsampling 1D and 2D layers in DL4J and Keras model import (including GAN examples in tests)
Most merge modes supported in Keras model import, Keras 2 Merge layer API supported
Separable Convolution 2D layer supported in DL4J and Keras model import
Deconvolution 2D layer supported in DL4J and Keras model import
Full support of Keras noise layers on import (Alpha dropout, Gaussian dropout and noise)
Support for SimpleRNN layer in Keras model import
Support for Bidirectional layer wrapper Keras model import
Addition of LastTimestepVertex in DL4J to support return_sequences=False for Keras RNN layers.
DL4J support for recurrent weight initializations and Keras import integration.
SpaceToBatch and BatchToSpace layers in DL4J for better YOLO support, plus end-to-end YOLO Keras import test.
Cropping2D support in DL4J and Keras model import
The Model and ModelConfiguration classes, deprecated in 0.9.1, have been permanently removed. Use KerasModelImport instead, which is now the only entry point for Keras model import.
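A short sketch of the KerasModelImport entry point (the HDF5 file names are hypothetical, and exception handling is collapsed to throws Exception):

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportSketch {
    public static void main(String[] args) throws Exception {
        // Keras Sequential model -> DL4J MultiLayerNetwork
        MultiLayerNetwork seqModel =
                KerasModelImport.importKerasSequentialModelAndWeights("model_sequential.h5");

        // Keras functional-API model -> DL4J ComputationGraph
        ComputationGraph functionalModel =
                KerasModelImport.importKerasModelAndWeights("model_functional.h5");
    }
}
```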
Embedding layer: In DL4J the output of an embedding layer is 2D by default, unless preprocessors are specified. In Keras the output is always 3D, but depending on specified parameters can be interpreted as 2D. This often leads to difficulties when importing Embedding layers. Many cases have been covered and issues fixed, but inconsistencies remain.
Batchnormalization layer: DL4J's batch normalization layer is much more restrictive (in a good way) than Keras' version of it. For instance, DL4J only allows normalizing spatial dimensions for 4D convolutional inputs, while in Keras any axis can be used for normalization. Depending on the dimension ordering (NCHW vs. NHWC) and the specific configuration used by a Keras user, this can lead to expected (!) and unexpected import errors.
Support for importing a Keras model for training purposes in DL4J (enforceTrainingConfig == true) is still very limited and will be tackled properly for the next release.
Keras Merge layers: seem to work fine with the Keras functional API, but have issues when used in a Sequential model.
Reshape layers: can be somewhat unreliable on import. DL4J rarely has a need to explicitly reshape input beyond (inferred) standard input preprocessors. In Keras, Reshape layers are used quite often. Mapping the two paradigms can be difficult in edge cases.
Hundreds of new operations added
New DifferentialFunction API with automatic differentiation (see SameDiff section) Link
Technology preview of tensorflow import added (supports 1.4.0 and up)
Apache Arrow serialization added supporting new tensor API Link
Add support for AVX/AVX2 and AVX-512 instruction sets for Windows/Linux for nd4j-native backend Link
NVIDIA CUDA 8/9.0/9.1 now supported
Workspaces improvements were introduced to ensure safety: SCOPE_PANIC profiling mode is enabled by default
FlatBuffers support for INDArray serde
Support for auto-broadcastable operations was added
libnd4j, the underlying C++ library, got a functionality boost and now offers an NDArray class and a Graph class, and can be used as a standalone library or executable.
Convolution-related ops now support NHWC in addition to NCHW data format.
Accumulation ops now have option to keep reduced dimensions.
Not all op gradients implemented for automatic differentiation
Vast majority of new operations added in 1.0.0-alpha do NOT use GPU yet.
Initial tech preview Link
Control flow is supported with IF and WHILE primitives.
Alpha release of SameDiff auto-differentiation engine for ND4J.
Two execution modes available: Java-driven execution, and Native execution for serialized graphs.
SameDiff graphs can be serialized using FlatBuffers
Building and running computation graphs built from SameDiff operations.
Graphs can run forward pass on input data and compute gradients for the backward pass.
Already supports many high-level layers, like dense layers, convolutions (1D-3D), deconvolutions, separable convolutions, pooling and upsampling, batch normalization, local response normalization, LSTMs and GRUs.
In total there are about 350 SameDiff operations available, including many basic operations used in building complex graphs.
Supports rudimentary import of TensorFlow and ONNX graphs for inference.
TFOpTests is a dedicated project for creating test resources for TensorFlow import.
Vast majority of new operations added in 1.0.0-alpha do NOT use GPU yet.
While many of the widely used base operations and high-level layers used in practice are supported, op coverage is still limited. Goal is to achieve feature parity with TensorFlow and fully support import for TF graphs.
Some of the existing ops do not have a backward pass implemented (called doDiff in SameDiff).
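As a rough, non-authoritative sketch of building and executing a small graph with the alpha-era SameDiff API (execution method names such as exec() changed in later releases, so treat this as indicative only):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SameDiffSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();

        // Variables backed by concrete arrays
        SDVariable input = sd.var("input", Nd4j.rand(3, 4));
        SDVariable weights = sd.var("weights", Nd4j.rand(4, 2));

        // Build the graph: matrix multiply followed by a sigmoid activation
        SDVariable out = sd.sigmoid(sd.mmul(input, weights));

        // Java-driven forward pass (API naming for execution differs between releases)
        sd.exec();
        INDArray result = out.getArr();
        System.out.println(result);
    }
}
```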
Added LocalTransformExecutor for single machine execution (without Spark dependency) (Link)
Added ArrowRecordReader (for reading Apache Arrow format data) (Link)
Added RecordMapper class for conversion between RecordReader and RecordWriter (Link)
Added BoxImageTransform - an ImageTransform that either crops or pads without changing aspect ratio (Link)
Added CSVVariableSlidingWindowRecordReader (Link)
ImageRecordReader: supports regression use cases for labels (previously: only classification) (Link)
DataAnalysis/AnalyzeSpark now includes quantiles (via t-digest) (Link)
Added AndroidNativeImageLoader.asBitmap(), Java2DNativeImageLoader.asBufferedImage() (Link)
StringToTimeTransform will try to guess time format if format isn't provided (Link)
Improved performance for NativeImageLoader on Android (Link)
Added BytesWritable (Writable for byte[] data) (Link)
Added TransformProcess.inferCategories methods to auto-infer categories from a RecordReader (Link)
Lombok is no longer included as a transitive dependency (Link)
MapFileRecordReader and MapFileSequenceRecordReader can handle empty partitions/splits for multi-part map files (Link)
Writables: equality semantics have been changed: for example, now DoubleWritable(1.0) is equal to IntWritable(1) (Link)
NumberedFileInputSplit now supports leading zeros (Link)
CSVSparkTransformServer and ImageSparkTransformServer Play servers changed to production mode (Link)
Fix for JSON subtype info for FloatMetaData (Link)
Serialization fixes for JacksonRecordReader, RegexSequenceRecordReader (Link)
Added RecordReader.resetSupported() method (Link)
SVMLightRecordReader now implements nextRecord() method (Link)
Fix for custom reductions when using conditions (Link)
Remove use of backported java.util.functions; use ND4J functions API instead (Link)
Fix for transforms data quality analysis for time columns (Link)
Many of the util classes (in org.datavec.api.util mainly) have been deprecated or removed; use equivalently named util classes in nd4j-common module (Link)
RecordReader.next(int) method now returns List<List<Writable>> for batches, not List<Writable>. See also NDArrayRecordBatch
RecordWriter and SequenceRecordWriter APIs have been updated with multiple new methods
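A small sketch of the updated batch-read semantics (CSVRecordReader and the data.csv path are illustrative assumptions):

```java
import java.io.File;
import java.util.List;
import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.datavec.api.writable.Writable;

public class BatchReadSketch {
    public static void main(String[] args) throws Exception {
        RecordReader rr = new CSVRecordReader();
        rr.initialize(new FileSplit(new File("data.csv")));   // hypothetical file

        // next(int) now returns one List<Writable> per example in the batch
        List<List<Writable>> batch = rr.next(32);
        System.out.println("Examples in batch: " + batch.size());
        rr.close();
    }
}
```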
As per DL4J API changes: Updater configuration options (learning rate, momentum, epsilon, rho etc) have been moved to ParameterSpace instead. Updater spaces (AdamSpace, AdaGradSpace etc) introduced (Link)
As per DL4J API changes: Dropout configuration is now via ParameterSpace<IDropout>, DropoutSpace introduced (Link)
RBM layer spaces removed (Link)
ComputationGraphSpace: added layer/vertex methods with overloads for preprocessors (Link)
Added support to specify 'fixed' layers using DL4J layers directly (instead of using LayerSpaces, even for layers without hyperparameters) (Link)
Added LogUniformDistribution (Link)
Improvements to score functions; added ROC score function (Link)
Learning rate schedule support added (Link)
Add math ops for ParameterSpace<Double> and ParameterSpace<Integer> (Link)
Improved logging for failed task execution (Link)
Fix for UI JSON serialization (Link)
Rename saved model file to model.bin (Link)
Fix threading issues with non thread-safe candidates / parameter spaces (Link)
Lombok is no longer included as a transitive dependency (Link)
As per DL4J updater API changes: old updater configuration (learningRate, momentum, etc) methods have been removed. Use .updater(IUpdater) or .updater(ParameterSpace<IUpdater>) methods instead
Add support for LSTM layer to A3C
Fix A3C to make it actually work using new ActorCriticLoss and correct use of randomness
Fix cases when QLearning would fail (non-flat input, incomplete serialization, incorrect normalization)
Fix logic of HistoryProcessor with async algorithms and failures when preprocessing images
Tidy up and correct the output of statistics, also allowing the use of IterationListener
Fix issues preventing efficient execution with CUDA
Provide access to more of the internal structures with NeuralNet.getNeuralNetworks(), Policy.getNeuralNet(), and convenience constructors for Policy
Add MDPs for ALE (Arcade Learning Environment) and MALMO to support Atari games and Minecraft
Update MDP for Doom to allow using the latest version of VizDoom
First release of ScalNet Scala API, which closely resembles Keras' API.
Can be built with sbt and maven.
Supports both Keras-inspired Sequential models, corresponding to DL4J's MultiLayerNetwork, and Model, corresponding to ComputationGraph.
Project structure is closely aligned to both DL4J model-import module and Keras.
Supports the following layers: Convolution2D, Dense, EmbeddingLayer, AvgPooling2D, MaxPooling2D, GravesLSTM, LSTM, Bidirectional layer wrapper, Flatten, Reshape. Additionally, DL4J OutputLayers are supported.
Scala 2.12 support