
Release Notes

New changes in each release of Eclipse Deeplearning4j.

Version 1.0.0-beta7

Read the announcement at https://blog.konduit.ai/2020/05/14/deeplearning4j-1-0-0-beta7-released/ for the highlights of this release.

Deeplearning4j

Features and Enhancements

  • Added Keras model import support for tf.keras models Link, Link (see the sketch after this list)

    • Full inference and training support is available for ops/layers in the tf.keras namespace; inference only for general TensorFlow operations outside of the tf.keras namespace

    • Note also improvements to Keras import for reshape, permute, etc. operations due to NHWC and NWC support in DL4J

  • DL4J now supports the NHWC (channels last) data format for all CNN 2D layers, in addition to NCHW Link

  • DL4J now supports NWC (channels last - [minibatch, sequence_length, size]) for all RNN and CNN 1D layers, in addition to NCW Link

  • Added Deconvolution3D layer Link

  • Keras import: added ReLU, ELU and Softmax advanced activation layers Link and Swish activation function Link

  • Added DL4J SameDiffLoss class (for easily-defined DL4J ILossFunction implementations via SameDiff) Link

  • Useful exceptions are now thrown when attempting to perform unsupported operations on FastText Link

  • Added MultiLayerNetwork.evaluate(MultiDataSetIterator) and .evaluateRegression(MultiDataSetIterator) methods Link, Link
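For instance, importing a model saved with tf.keras goes through the existing Keras import entry points. A minimal sketch, assuming a functional-API model saved as an HDF5 file (the file name is a placeholder):

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;

public class TfKerasImportExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to a model saved via tf.keras (model.save("model.h5"))
        String modelPath = "model.h5";

        // Functional-API models import as a ComputationGraph; a Sequential model
        // would use KerasModelImport.importKerasSequentialModelAndWeights(...) instead
        ComputationGraph model = KerasModelImport.importKerasModelAndWeights(modelPath);
        System.out.println(model.summary());
    }
}
```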

Bug Fixes and Optimizations

  • Updaters (Adam, AdaGrad, etc.) optimized via C++ operations (significant training performance boost) for DL4J and SameDiff Link, Link

  • Some packages relocated to avoid split packages (which can be a problem for OSGi and Java 9 modules) Link

    • Note: this is a breaking change for some class packages/imports. See this link for details on the exact package changes

  • Deeplearning4j UI: Webjars versions locked down using dependency management to avoid a check on each build Link

  • Added MKLDNN (DNNL/OneDNN) support for the depthwise_conv2d operation for DL4J and SameDiff Link

  • Refactored/merged modules dl4j-perf and dl4j-util into deeplearning4j-core Link

  • Fixed an issue with BertWordPieceTokenizer - potential StackOverflowError with certain inputs Link

  • Fixed an issue with the GlobalPooling layer when masks have a different datatype to the activations Link

  • Fixed an issue with DL4JModelValidator for ComputationGraph Link

  • Fixed an issue where SameDiff layers in DL4J could throw an exception when used with transfer learning Link

  • Weight initialization for EmbeddingLayer and EmbeddingSequenceLayer now no longer depends on the vocabulary size (only the vector size) Link

  • Fixed an issue with Keras import of bidirectional layers + preprocessors Link

  • DL4J UI: added redirect from /train to /train/overview Link

  • Fixed an issue where the RecordReaderDataSetIterator builder's collectMetaData configuration was not being applied Link

  • Fixed an issue where MultiLayerNetwork evaluation was not passing metadata to the IEvaluation instances during evaluation Link, Link

  • Fixed an issue with Spark training SharedTrainingMaster when training with a ComputationGraph and MultiDataSets Link

  • Assorted fixes for edge cases in DL4J Keras import Link

  • deeplearning4j-nlp-korean will no longer be released for Scala 2.12, as a required dependency only has a Scala 2.11 version available Link

  • Fix for ConvolutionalIterationListener for ComputationGraph Link

  • Fixed an issue where dataset and model zoo downloads could get stuck if the server fails to send any data (now: timeout + retry) Link

  • DL4J ModelSerializer no longer writes temporary files when restoring models from an InputStream Link

  • Fixes issues with UIServer multi-session mode, and a potential shutdown race condition Link

  • Fixed an issue where TfidfVectorizer.vectorize() could throw a NPE when fit from a LabelAwareIterator Link

ND4J/SameDiff

Features and Enhancements

Bug Fixes and Optimizations

DataVec

Features and Enhancements

Bug Fixes and Optimizations

RL4J

Features and Enhancements

Arbiter

Bug Fixes and Optimizations

Version 1.0.0-beta6

Highlights - 1.0.0-beta6 Release

  • Added support for CUDA 10.2. 1.0.0-beta6 released with CUDA 9.2, 10.0, 10.1 and 10.2 support

  • SameDiff optimizations - memory use for inference and training significantly reduced, with some performance improvements also

  • Deeplearning4j UI - Play framework replaced with Vertx; the deeplearning4j-ui dependency no longer has a Scala dependency or Scala version suffix Link

    • Note: No API changes, only an artifact ID change: replace deeplearning4j-ui_2.1x with deeplearning4j-ui

  • ND4J namespace operation methods: operations are available through the Nd4j.math, Nd4j.random, Nd4j.bitwise and Nd4j.nn (neural network) namespaces, for example Nd4j.math.abs(INDArray), Nd4j.random.logNormal, etc. Link (see the sketch after this list)

    • Note that the ND4J namespace APIs will have additions (new namespaces and methods), and may have some API changes, in the next release

  • OpenMP replaced with a thread-pool C++ parallelism framework; enabled C++ parallelism for platforms without C++-level threading for operations
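A small sketch of the namespace-style calls listed above, following the Nd4j.math.abs access form given in the highlight (input values are arbitrary):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NamespaceOpsExample {
    public static void main(String[] args) {
        INDArray x = Nd4j.createFromArray(-1.5f, 2.0f, -3.0f);

        // Namespace-style invocation (Nd4j.math, Nd4j.random, Nd4j.bitwise, Nd4j.nn, ...)
        INDArray abs = Nd4j.math.abs(x);

        System.out.println(abs);  // [1.5, 2.0, 3.0]
    }
}
```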

Deeplearning4J

Deeplearning4J: Features and Enhancements

Deeplearning4J: Bug Fixes and Optimizations

  • KDTree implementation optimized Link

  • Deeplearning4j zoo models and datasets hosting location updated Link

  • Fixed nIn validation for Deconv2D layer Link

  • Fixed an issue with incorrect Deconvolution2d results for Keras import models Link

  • Added DNNL/MKLDNN support for batch normalization layer Link, Link

  • Fixed various integer casts to avoid overflows for very large arrays (with dimensions or length > Integer.MAX_VALUE) Link

  • Fixed an issue with UNet non-pretrained model architecture (last layer kernel size) Link

  • Deeplearning4j SameDiff layers now use DL4J workspaces for better performance and reduced memory consumption Link

  • Updated broken links in a few error messages Link

  • Cleaned up a few unused dependencies in various modules Link

  • Cleaned up duplicate SamplingDataSetIterator class Link

  • Fixed an issue where ComputationGraph instances with a single input going into multiple embedding layers could throw a NPE Link

  • Fixed an issue where loss function weights were not automatically cast to the network datatype, resulting in an exception if not already the correct type Link

  • Shaded Jackson version upgraded from 2.9.9/2.9.9.3 to 2.10.1 Link

  • Fixed an issue with KNN where getMostPopulatedClusters actually returned the least populated clusters Link

Deeplearning4j: Transition Guide, 1.0.0-beta5 to 1.0.0-beta6

  • Deeplearning4j UI artifact ID has changed: replace deeplearning4j-ui_2.1x (beta5 and earlier) with deeplearning4j-ui

ND4J and SameDiff

ND4J/SameDiff: Features and Enhancements

ND4J/SameDiff: Bug Fixes and Optimizations

ND4J: Transition Guide, 1.0.0-beta5 to 1.0.0-beta6

  • SameDiff.outputs() now requires the user to call SameDiff.setOutputs(String...) first; the previous “best guess” output inference was unreliable Link

  • SameDiff.zero and .one methods now create constants, not variables Link

DataVec

DataVec: Bug Fixes and Optimizations

  • NativeImageLoader now checks for empty input streams and throws an exception instead of crashing Link

  • NDArrayScalarOpTransform now supports modulus operator Link

RL4J

RL4J: Features and Enhancements

RL4J: Bug Fixes and Optimizations

PyDataVec

PyDataVec Features and Enhancements

PyDataVec Bug Fixes and Optimizations

  • Fixed various issues with PyDataVec Link

  • Fixed an issue with data locality that could cause incorrect results under some circumstances when running on CUDA Link

Version 1.0.0-beta5

Highlights - 1.0.0-beta5 Release

  • Added model server - remote inference of SameDiff and DL4J models using JSON or (optionally) binary serialization

  • Added Scala 2.12 support, dropped Scala 2.10 support. Modules with Scala dependencies are now released with Scala 2.11 and 2.12 versions

  • Apache Spark 1.x support dropped (now only Spark 2.x is supported). Note: the Spark version suffix has been dropped. For upgrading: 1.0.0-beta4_spark2 -> 1.0.0-beta5

  • Added FastText support to deeplearning4j-nlp

  • CUDA support for all ND4J/SameDiff Operations

    • In 1.0.0-beta4, some operations were CPU only. Now, all operations have full CUDA support

  • Added support for new data types in ND4J (and DL4J/SameDiff): BFLOAT16, UINT16, UINT32, UINT64

  • ND4J: Implicit broadcasting support added to INDArray (already present in SameDiff - for example shape [3,1]+[3,2]=[3,2])

  • CUDA 9.2, 10.0 and 10.1-Update2 still supported

    • NOTE: For CUDA 10.1, CUDA 10.1 update 2 is recommended. CUDA 10.1 and 10.1 Update 1 will still run, but rare internal cuBLAS issues may be encountered in heavily multi-threaded code on some systems

  • Dependency upgrades: Jackson (2.5.1 to 2.9.9/2.9.9.3), Commons Compress (1.16.1 to 1.18), Play Framework (2.4.8 to 2.7.3), Guava: (20.0 to 28.0-jre, and shaded to avoid dependency clashes)

  • CUDA: now host (RAM) buffers are only allocated when required (previously: host buffers were always allocated), in addition to device (GPU) buffer
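A small sketch of the implicit INDArray broadcasting highlighted above, following the [3,1] + [3,2] = [3,2] example:

```java
import java.util.Arrays;

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class BroadcastExample {
    public static void main(String[] args) {
        INDArray a = Nd4j.ones(3, 1);   // shape [3,1]
        INDArray b = Nd4j.ones(3, 2);   // shape [3,2]

        // The [3,1] operand is broadcast along the second dimension,
        // so the result has shape [3,2]
        INDArray sum = a.add(b);
        System.out.println(Arrays.toString(sum.shape()));  // [3, 2]
    }
}
```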

Deeplearning4J

Deeplearning4J: Features and Enhancements

Deeplearning4J: Bug Fixes and Optimizations

Deeplearning4j: Transition Guide, 1.0.0-beta4 to 1.0.0-beta5

  • DL4J AsyncDataSetIterator and AsyncMultiDataSetIterator moved to ND4J, use org.nd4j.linalg.dataset.Async(Multi)DataSetIterator instead

  • Saved models with custom layers from 1.0.0-alpha and before can no longer be loaded. Workaround: load in 1.0.0-beta4, and re-save the model (Link). Models without custom layers can still be loaded back to 0.5.0

  • Apache Spark 1.x support dropped (now only Spark 2.x is supported). Note: the Spark version suffix has been dropped. For upgrading, change versions as follows: 1.0.0-beta4_spark2 -> 1.0.0-beta5

  • Scala 2.10 dropped, Scala 2.12 added (for modules with Scala dependencies)

Deeplearning4j: 1.0.0-beta5 Known Issues

  • dl4j-spark_2.11 and _2.12 dependencies incorrectly pull in datavec-spark_2.11/2.12 version 1.0.0-SNAPSHOT. Workaround: control the version using dependency management as per here or here

  • Some layers (such as LSTM) may run slower on 1.0.0-beta5 than 1.0.0-beta4 on CUDA when not using cuDNN, due to added synchronization. This synchronization will be removed in the next release after 1.0.0-beta5

  • CUDA 10.1: Rare internal cuBLAS issues may be encountered in heavily multi-threaded code on some systems, when running CUDA 10.1 Update 1 (and maybe 10.1). CUDA 10.1 update 2 is recommended.

ND4J and SameDiff

ND4J/SameDiff: Features and Enhancements

ND4J/SameDiff: Bug Fixes and Optimizations

ND4J: Transition Guide, 1.0.0-beta4 to 1.0.0-beta5

  • OldAddOp, OldSubOp, etc removed: Replace with AddOp, SubOp, etc

  • Nd4j.trueScalar and trueVector removed; use Nd4j.scalar and Nd4j.createFromArray methods

  • INDArray.javaTensorAlongDimension removed; use INDArray.tensorAlongDimension instead

  • INDArray.lengthLong() removed; use INDArray.length() instead
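A short sketch of the replacement calls listed above (the values are arbitrary):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ScalarVectorExample {
    public static void main(String[] args) {
        // Formerly Nd4j.trueScalar(1.0): a rank 0 array via Nd4j.scalar
        INDArray scalar = Nd4j.scalar(1.0);

        // Formerly Nd4j.trueVector(...): a rank 1 array via Nd4j.createFromArray
        INDArray vector = Nd4j.createFromArray(1.0f, 2.0f, 3.0f);

        System.out.println(scalar.rank() + " " + vector.rank());  // 0 1
        System.out.println(vector.length());                      // 3 (length(), not the removed lengthLong())
    }
}
```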

ND4J: 1.0.0-beta5 Known Issues

  • nd4j-native on some OSX systems can fail with Symbol not found: ___emutls_get_address - see this link

  • SBT 1.3.0 can fail with an Illegal character in path error; SBT 1.2.8 is OK. This is an SBT issue, not an ND4J issue. See this link for details

DataVec

DataVec: Features and Enhancements

DataVec: Bug Fixes and Optimizations

RL4J

RL4J: Features and Enhancements

RL4J: Bug Fixes and Optimizations

Arbiter

Bug Fixes and Optimizations

Arbiter: Known Issues

  • The Jackson version upgrade necessitated a change to how generic object serialization was performed; Arbiter JSON data stored in 1.0.0-beta4 or earlier format may not be readable in 1.0.0-beta5 (Link)

ND4S

ND4S Features and Enhancements

Version 1.0.0-beta4

Highlights - 1.0.0-beta4 Release

Main highlight: full multi-datatype support for ND4J and DL4J. In past releases, all N-Dimensional arrays in ND4J were limited to a single datatype (float or double), set globally. Now, arrays of all datatypes may be used simultaneously. The following datatypes are supported:

  • DOUBLE: double precision floating point, 64-bit (8 byte)

  • FLOAT: single precision floating point, 32-bit (4 byte)

  • HALF: half precision floating point, 16-bit (2 byte), "FP16"

  • LONG: long signed integer, 64 bit (8 byte)

  • INT: signed integer, 32 bit (4 byte)

  • SHORT: signed short integer, 16 bit (2 byte)

  • UBYTE: unsigned byte, 8 bit (1 byte), 0 to 255

  • BYTE: signed byte, 8 bit (1 byte), -128 to 127

  • BOOL: boolean type, (0/1, true/false). Uses ubyte storage for easier op parallelization

  • UTF8: String array type, UTF8 format

ND4J Behaviour changes of note:

  • When creating an INDArray from a Java primitive array, the INDArray datatype will be determined by the primitive array type (unless a datatype is specified)

    • For example: Nd4j.createFromArray(double[]) -> DOUBLE datatype INDArray

    • Similarly, Nd4j.scalar(1), Nd4j.scalar(1L), Nd4j.scalar(1.0) and Nd4j.scalar(1.0f) will produce INT, LONG, DOUBLE and FLOAT type scalar INDArrays respectively

  • Some operations require matched datatypes for operands

    • For example, if x and y are different datatypes, a cast may be required: x.add(y.castTo(x.dataType()))

  • Some operations have datatype restrictions: for example, sum on a UTF8 array is not supported, nor is variance on a BOOL array. For some operations on boolean arrays (such as sum), casting to an integer or floating point type first may make sense.
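A minimal sketch of the datatype rules described above (array values are arbitrary):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class DataTypeRulesExample {
    public static void main(String[] args) {
        // Datatype is inferred from the Java primitive array type
        INDArray d = Nd4j.createFromArray(new double[]{1.0, 2.0, 3.0});    // DOUBLE
        INDArray f = Nd4j.createFromArray(new float[]{1.0f, 2.0f, 3.0f});  // FLOAT
        System.out.println(d.dataType() + " " + f.dataType());

        // Operations with mixed-datatype operands generally need an explicit cast
        INDArray sum = d.add(f.castTo(d.dataType()));
        System.out.println(sum.dataType());  // DOUBLE

        // Scalars follow the literal type: INT and FLOAT here
        System.out.println(Nd4j.scalar(1).dataType() + " " + Nd4j.scalar(1.0f).dataType());
    }
}
```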

DL4J Behaviour changes of note:

  • MultiLayerNetwork/ComputationGraph no longer depend in any way on the ND4J global datatype.

    • The datatype of a network (DataType for its parameters and activations) can be set during construction using NeuralNetConfiguration.Builder().dataType(DataType)

    • Networks can be converted from one type to another (double to float, float to half, etc.) using the MultiLayerNetwork/ComputationGraph.convertDataType(DataType) method
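A short sketch of setting and converting the network datatype, assuming the builder and conversion methods behave as described above (the single-output-layer configuration and sizes are arbitrary):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class NetworkDataTypeExample {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .dataType(DataType.FLOAT)   // parameters and activations in FP32
                .list()
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .activation(Activation.IDENTITY)
                        .nIn(10).nOut(1).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        // Convert an already-initialized network to another datatype (e.g. FP16);
        // assumed to return the converted network, per the note above
        MultiLayerNetwork halfNet = net.convertDataType(DataType.HALF);
        System.out.println(halfNet.params().dataType());  // HALF
    }
}
```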

Main new methods:

  • Nd4j.create(), zeros(), ones(), linspace(), etc methods with DataType argument

  • INDArray.castTo(DataType) method - to convert INDArrays from one datatype to another

  • New Nd4j.createFromArray(...) methods for creating INDArrays from Java primitive arrays

ND4J/DL4J: CUDA - 10.1 support added, CUDA 9.0 support dropped

CUDA versions supported in 1.0.0-beta4: CUDA 9.2, 10.0, 10.1.

ND4J: Mac/OSX CUDA support dropped

Mac (OSX) CUDA binaries are no longer provided. Linux (x86_64, ppc64le) and Windows (x86_64) CUDA support remains. OSX CPU support (x86_64) is still available.

DL4J/ND4J: MKL-DNN Support Added

DL4J (and ND4J conv2d etc. ops) now support MKL-DNN by default when running on the CPU/native backend. MKL-DNN support is implemented for the following layer types:

  • ConvolutionLayer and Convolution1DLayer (and Conv2D/Conv2DDerivative ND4J ops)

  • SubsamplingLayer and Subsampling1DLayer (and MaxPooling2D/AvgPooling2D/Pooling2DDerivative ND4J ops)

  • BatchNormalization layer (and BatchNorm ND4J op)

  • LocalResponseNormalization layer (and LocalResponseNormalization ND4J op)

  • Convolution3D layer (and Conv3D/Conv3DDerivative ND4J ops)

MKL-DNN support for other layer types (such as LSTM) will be added in a future release.

MKL-DNN can be disabled globally (ND4J and DL4J) using Nd4jCpu.Environment.getInstance().setUseMKLDNN(false);

MKL-DNN can also be disabled for specific ops by setting the ND4J_MKL_FALLBACK environment variable to the names of the operations for which MKL-DNN support should be disabled. For example: ND4J_MKL_FALLBACK=conv2d,conv2d_bp

ND4J: Improved Performance due to Memory Management Changes

Prior releases of ND4J used periodic garbage collection (GC) to release memory that was not allocated in a memory workspace. (Note that DL4J uses workspaces for almost all operations by default hence periodic GC could frequently be disabled when training DL4J networks). However, the reliance on garbage collection resulted in a performance overhead that scaled with the number of objects in the JVM heap.

In 1.0.0-beta4, the periodic garbage collection is disabled by default; instead, GC will be called only when it is required to reclaim memory from arrays that are allocated outside of workspaces.

To re-enable periodic GC (as per the default in beta3) and set the GC frequency to every 5 seconds (5000 ms), you can use:
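A minimal sketch of that call, assuming the MemoryManager accessors available in this release line:

```java
import org.nd4j.linalg.factory.Nd4j;

public class PeriodicGcExample {
    public static void main(String[] args) {
        // Re-enable periodic GC and allow it to run at most once every 5000 ms
        Nd4j.getMemoryManager().togglePeriodicGc(true);
        Nd4j.getMemoryManager().setAutoGcWindow(5000);
    }
}
```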

ND4J: Improved Rank 0/1 Array Support

In prior versions of ND4J, scalars and vectors would sometimes be rank 2 instead of rank 0/1 when getting rows/columns, getting sub-arrays using INDArray.get(NDArrayIndex...) or when creating arrays from Java arrays/scalars. Now, behaviour should be more consistent for these rank 0/1 cases. Note that to maintain the old behaviour for getRow and getColumn (i.e., returning rank 2 arrays with shape [1,x] and [x,1] respectively), the getRow(long,boolean) and getColumn(long,boolean) methods can be used.
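A small sketch of the rank-preserving accessors mentioned above:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class RowRankExample {
    public static void main(String[] args) {
        INDArray m = Nd4j.createFromArray(new float[][]{{1, 2, 3}, {4, 5, 6}});  // shape [2,3]

        INDArray row = m.getRow(0);          // rank 1, shape [3]
        INDArray row2d = m.getRow(0, true);  // rank 2, shape [1,3] (old behaviour)

        System.out.println(row.rank() + " " + row2d.rank());  // 1 2
    }
}
```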

DL4J: Attention layers added

Deeplearning4J

Deeplearning4J: Features and Enhancements

Deeplearning4J: Bug Fixes and Optimizations

  • DL4J Spark training: fix for shared clusters (multiple simultaneous training jobs) - Aeron stream ID now generated randomly (Linkarrow-up-right)

  • cuDNN helpers will no longer attempt to fall back on built-in layer implementations if an out-of-memory exception is thrown (Linkarrow-up-right)

  • Batch normalization global variance reparameterized to avoid underflow and zero/negative variance in some cases during distributed training (Linkarrow-up-right)

  • Fixed a bug where dropout instances were incorrectly shared between layers when using transfer learning with dropout (Linkarrow-up-right, Linkarrow-up-right)

  • Fixed issue where tensorAlongDimension could result in an incorrect array order for edge cases and hence exceptions in LSTMs (Linkarrow-up-right)

  • Fixed an edge case issue with ComputationGraph.getParam(String) where the layer name contains underscores (Linkarrow-up-right)

  • Fixed an edge case with ParallelInference on CUDA where (very rarely) input array operations (such as normalization) may not be fully completed before transferring an array between threads (Linkarrow-up-right, Linkarrow-up-right)

  • Fixed an edge case with KFoldIterator when the total number of examples is not a multiple of the batch size (Linkarrow-up-right, Linkarrow-up-right)

  • Fixed an issue where DL4J UI could throw a NoClassDefFoundError on Java 9/10/11 (Linkarrow-up-right, Linkarrow-up-right)

  • Keras import: added aliases for weight initialization (Linkarrow-up-right)

  • Fixed issue where dropout instances would not be correctly cloned when network configuration was cloned (Linkarrow-up-right)

  • Fixed workspace issue with ElementwiseVertex with single input (Linkarrow-up-right)

  • Fixed issue with UI where detaching StatsStorage could attempt to remove storage twice, resulting in an exception (Linkarrow-up-right)

  • Fixed issue where LossMultiLabel would generate NaNs when all labels in minibatch are the same class. Now 0 gradient is returned instead. (Linkarrow-up-right, Linkarrow-up-right)

  • Fixed an issue where DepthwiseConv2D weight could be wrong shape on restoring network from saved format (Linkarrow-up-right)

  • Fixed issue where BaseDatasetIterator.next() would not apply preprocessors, if one was set (Linkarrow-up-right)

  • Improved default configuration for CenterLossOutputLayer (Linkarrow-up-right)

  • Fixed an issue for UNet non-pretrained configuration (Linkarrow-up-right)

  • Fixed an issue where Word2Vec VocabConstructor could deadlock under some circumstances (Linkarrow-up-right)

  • SkipGram and CBOW (used in Word2Vec) were made native operations for better performance (Linkarrow-up-right)

  • Fixed an issue where references to detached StatsListener instances would be maintained, potentially leading to memory issues when using InMemoryStatsListener (Linkarrow-up-right)

  • Optimization: Workspaces were added to SequenceVectors and Word2Vec (Linkarrow-up-right)

  • Improved validation for RecordReaderDataSetIterator (Linkarrow-up-right)

  • Improved handling of unknown words in WordVectors implementation (Linkarrow-up-right)

  • Yolo2OutputLayer: Added validation for incorrect labels shape. (Linkarrow-up-right)

  • LastTimeStepLayer will now throw an exception when the input mask is all 0s (no data - no last time step) (Linkarrow-up-right)

  • Fixed an issue where MultiLayerNetwork/ComputationGraph.setLearningRate method could lead to invalid updater state in some rare cases (Linkarrow-up-right)

  • Fixed an issue where Conv1D layer would calculate output length incorrectly in MultiLayerNetwork.summary() (Link)

  • Async iterators are now used in EarlyStoppingTrainer to improve data loading performance (Link)

  • EmbeddingLayer and EmbeddingSequenceLayer performance has been improved on CUDA (Linkarrow-up-right)

  • Removed outdated/legacy scala tools repository (Linkarrow-up-right, Linkarrow-up-right)

  • Fixed issues in L2NormalizeVertex equals/hashcode methods (Linkarrow-up-right)

  • Fixed Workspace issue in ConvolutionalListener (Linkarrow-up-right)

  • Fixed EvaluationBinary falsePositiveRate calculation (Linkarrow-up-right)

  • Added validation and useful exception for MultiLayerNetwork.output(DataSetIterator) methods (Linkarrow-up-right)

  • Fixed minor issue where ComputationGraph.summary() would throw a NullPointerException if init() had not already been called (Linkarrow-up-right)

  • Fixed a ComputationGraph issue where an input into a single layer/vertex repeated multiple times could fail during training (Linkarrow-up-right)

  • Improved performance for KMeans implementation (Linkarrow-up-right)

  • Fixed an issue with rnnGetPreviousState for RNNs in 'wrapper' layers such as FrozenLayer (Linkarrow-up-right)

  • Keras import: Fixed an issue with order of words when importing some Keras tokenizers (Linkarrow-up-right)

  • Keras import: fixed issue with possible UnsupportedOperationException in KerasTokenizer class (Linkarrow-up-right)

  • Keras import: fixed an import issue with models combining embeddings, reshape and convolution layers (Linkarrow-up-right)

  • Keras import: fixed an import issue with input type inference for some RNN models (Linkarrow-up-right)

  • Fixed some padding issues in LocallyConnected1D/2D layers (Linkarrow-up-right)

ND4J and SameDiff

ND4J/SameDiff: Features and Enhancements

ND4J/SameDiff: API Changes (Transition Guide): 1.0.0-beta3 to 1.0.0-beta4

  • ND4J datatypes - significant changes, see highlights at top of this section

  • nd4j-base64 module (deprecated in beta3) has been removed. Nd4jBase64 class has been moved to nd4j-api (Link)

  • When specifying arguments for op execution along dimension (for example, reductions), the reduction axes are now specified in the operation constructor - not separately in the OpExecutioner call (Link)

  • Removed old Java loop-based BooleanIndexing methods. Equivalent native ops should be used instead. (Link)

  • Removed Nd4j.ENFORCE_NUMERICAL_STABILITY, Nd4j.copyOnOps, etc. (Link)

  • SameDiff "op creator" methods (SameDiff.tanh(), SameDiff.conv2d(...) etc.) have been moved to subclasses - access creators via SameDiff.math()/random()/nn()/cnn()/rnn()/loss() methods or SameDiff.math/random/nn/cnn/rnn/loss fields (Link)

  • Nd4j.emptyLike(INDArray) has been removed. Use Nd4j.like(INDArray) instead (Link)

  • org.nd4j.util.StringUtils removed; suggest using Apache commons lang3 StringUtils instead (Link)

  • ND4J Jackson RowVector(De)Serializer has been deprecated due to datatype changes; NDArrayText(De)Serializer should be used instead (Link, Link)

  • nd4j-instrumentation module has been removed due to lack of use/maintenance (Link)

ND4J/SameDiff: Bug Fixes and Optimizations

ND4J: Known Issues

  • Most CustomOperation operations (such as those used in SameDiff) are CPU only until next release. GPU support was not completed in time for 1.0.0-beta4 release.

  • Some users with Intel Skylake CPUs have reported deadlocks on MKL-DNN convolution 2d backprop operations (DL4J ConvolutionLayer backprop, ND4J "conv2d_bp" operation) when OMP_NUM_THREADS is set to 8 or higher. Investigations suggest this is likely an issue with MKL-DNN, not DL4J/ND4J. See Issue 7637. Workaround: disable MKL-DNN for the conv2d_bp operation via ND4J_MKL_FALLBACK (see earlier), or disable MKL-DNN globally, for Skylake CPUs.

DataVec

DataVec: Features and Enhancements

DataVec: Optimizations and Bug Fixes

Arbiter

Arbiter: Enhancements

Arbiter: Fixes

  • Fixed an issue where early stopping used in Arbiter would result in a serialization exception (Linkarrow-up-right)

Version 1.0.0-beta3

Highlights - 1.0.0-beta3 Release

  • ND4J/Deeplearning4j: Added support for CUDA 10.0. Dropped support for CUDA 8.0. (1.0.0-beta3 release has CUDA 9.0, 9.2 and 10.0 support)

  • SameDiff now supports training and evaluation from DataSetIterator and MultiDataSetIterator. Evaluation classes have been moved to ND4J.

  • DL4J Spark training (gradient sharing) is now fully fault tolerant, and has improvements for threshold adaption (potentially more robust convergence). Ports can now be easily configured independently on master/workers.

Deeplearning4J

Deeplearning4J: New Features

  • Added OutputAdapter interface and MultiLayerNetwork/ComputationGraph.output method overloads using OutputAdapter (avoids allocating off-heap memory that needs to be cleaned up by GC) Linkarrow-up-right, Linkarrow-up-right, Linkarrow-up-right

  • Added ComputationGraph/MultiLayerNetwork rnnTimeStep overload with user-specified workspace. Linkarrow-up-right

  • Added Cnn3DLossLayer Linkarrow-up-right

  • ParallelInference: Instances can now update the model in real-time (without re-init) Linkarrow-up-right

  • ParallelInference: Added ParallelInference INPLACE mode Link

  • Added validation for incompatible loss/activation function combinations (such as softmax+nOut=1, or sigmoid+mcxent). New validation can be disabled using outputValidation(false) Linkarrow-up-right

  • Spark training: Added full fault tolerance (robust failure recovery) for gradient sharing implementation Linkarrow-up-right Linkarrow-up-right

  • Spark training now supports configuring ports more flexibly (and differently for different workers) using PortSupplier Linkarrow-up-right Linkarrow-up-right

  • Spark training: overhauled gradient sharing threshold adaption algorithms; made it possible to customize threshold settings, plus made defaults more robust to initial threshold configuration improving convergence speed in some cases. Linkarrow-up-right

  • Spark training: implemented chunked messaging to reduce memory requirements (and insufficient buffer length issues) for large messages Linkarrow-up-right

  • Spark training: Added MeshBuildMode configuration for improved scalability for large clusters Linkarrow-up-right

  • Spark network data pipelines: added FileBatch, FileBatchRecordReader etc for "small files" (images etc) distributed training use cases Linkarrow-up-right

  • Added FailureTestingListener for fault tolerance/debugging purposes Linkarrow-up-right

  • Upgraded Apache Lucene/Solr to version 7.5.0 (from 7.4.0) Linkarrow-up-right

  • Added system properties (org.deeplearning4j.tempdir and org.nd4j.tempdir) to allow overriding of the temporary directories ND4J and DL4J use Linkarrow-up-right Linkarrow-up-right

  • Made MultiLayerNetwork/ComputationGraph.clearLayerStates methods public (was protected) Link

  • AbstractLayer.layerConf() method is now public Link

  • ParallelWrapper module now no longer has a Scala version suffix for artifact id; new artifact id is deeplearning4j-parallel-wrapper Linkarrow-up-right

  • Improved validation and error messages for invalid inputs/labels in Yolo2OutputLayer Link

  • Spark training: added SharedTrainingMaster.Builder.workerTogglePeriodicGC and .workerPeriodicGCFrequency to easily configure the ND4J garbage collection configuration on workers. Set default GC to 5 seconds on workers Linkarrow-up-right

  • Spark training: added threshold encoding debug mode (logs current threshold and encoding statistics on each worker during training). Enable using SharedTrainingConfiguration.builder.encodingDebugMode(true). Note this operation has computational overhead. Linkarrow-up-right

Deeplearning4J: Bug Fixes and Optimizations

  • Fixed an issue where L1/L2 and updaters (Adam, Nesterov, etc) were applied before dividing gradients by minibatch to obtain average gradient. To maintain old behaviour, use NeuralNetConfiguration.Builder.legacyBatchScaledL2(true) Linkarrow-up-right.

    • Note that learning rates may need to be decreased for some updaters (such as Adam) to account for this change vs. earlier versions. Some other updaters (such as SGD, NoOp, etc) should be unaffected.

    • Note that deserialized (loaded) configurations/networks saved in 1.0.0-beta2 or earlier will default to old behaviour for backward compatibility. All new networks (created in 1.0.0-beta3) will default to the new behaviour.

  • Fixed an issue where EarlyStoppingScoreCalculator would not correctly handle "maximize score" cases instead of minimizing Linkarrow-up-right

  • Fixed order (BGR vs. RGB) for VGG16ImagePreProcessor channel offset values Linkarrow-up-right

  • Fixed bug with variational autoencoders using weight noise Linkarrow-up-right

  • Fixed issue with BaseDataSetIterator not respecting the 'maximum examples' configuration Linkarrow-up-right

  • Optimization: A workspace is now used for ComputationGraph/MultiLayerNetwork evaluation methods (avoids allocating off-heap memory during evaluation that must be cleaned up by garbage collector) Linkarrow-up-right

  • Fixed an issue where shuffling combined with a subset for MnistDataSetIterator would not maintain the same subset between resets Linkarrow-up-right

  • Fixed issue with StackVertex.getOutputType Linkarrow-up-right

  • Fix issue with CNN to/from RNN preprocessors handling of mask arrays Linkarrow-up-right

  • Fixed issue with VGG16 non-pretrained configuration in model zoo Linkarrow-up-right

  • Fixed issue with TransferLearning nOutReplace where multiple layers in a row are modified Linkarrow-up-right

  • Fixed issue with CuDNN workspaces where backpropagation is performed outside of a standard fit call Linkarrow-up-right

  • Fixed an issue with dropout masks being cleared prematurely on output layers in ComputationGraph Linkarrow-up-right

  • RecordReaderMultiDataSetIterator now supports 5D arrays (for 3D CNNs) Linkarrow-up-right

  • Fixed bug in multi input/output ComputationGraphs with TBPTT combined with both masking and different number of input/output arrays Linkarrow-up-right

  • Improved input validation/exceptions for batch normalization layer Linkarrow-up-right

  • Fixed bug with TransferLearning GraphBuilder nOutReplace when combined with subsampling layers Linkarrow-up-right

  • SimpleRnnParamInitializer now properly respects bias initialization configuration Linkarrow-up-right

  • Fixed SqueezeNet zoo model non-pretrained configuration Linkarrow-up-right

  • Fixed Xception zoo model non-pretrained configuration Linkarrow-up-right

  • Fixed an issue with some evaluation signatures for multi-output ComputationGraphs Linkarrow-up-right

  • Improved MultiLayerNetwork/ComputationGraph summary method formatting for large nets Linkarrow-up-right

  • Fixed an issue where gradient normalization could result in NaNs if gradient is exactly 0.0 for all parameters in a layer Linkarrow-up-right

  • Fixed an issue where MultiLayerNetwork/ComputationGraph.setLearningRate could throw an exception for SGD and NoOp updaters Linkarrow-up-right

  • Fixed an issue with StackVertex plus masking in some rare cases Linkarrow-up-right

  • Fixed an issue with JSON deserialization of frozen layers in pre-1.0.0-alpha format Linkarrow-up-right

  • Fixed an issue where GraphBuilder.removeVertex can fail under some limited circumstances Linkarrow-up-right

  • Fixed a bug in CacheableExtractableDataSetFetcher Linkarrow-up-right

  • DL4J Spark training: Fixed issues with thread/device affinity for multi-GPU training + evaluation Linkarrow-up-right

  • DL4J Spark training: Made all Aeron threads daemon threads to prevent Aeron from stopping JVM shutdown when all other threads have completed Linkarrow-up-right

  • Added cudnnAllowFallback configuration for BatchNormalization layer (fallback to built-in implementation if CuDNN fails unexpectedly) Linkarrow-up-right

  • Fixed some rare concurrency issues with multi-worker (multi-GPU) nodes for Spark training Linkarrow-up-right Linkarrow-up-right

  • Fixed an issue with BatchNormalization layers that prevented the mean/variance estimates from being synced properly on each worker for GradientSharing training, causing convergence issues Linkarrow-up-right

  • Added a check to detect ZipSlip CVE attempts in ArchiveUtils Linkarrow-up-right

  • DL4J Spark training and evaluation: methods now use Hadoop Configuration from Spark context to ensure runtime-set configuration is available in Spark functions reading directly from remote storage (HDFS etc) Linkarrow-up-right

  • MultiLayerNetwork and ComputationGraph now properly support more than Integer.MAX_VALUE parameters Linkarrow-up-right Linkarrow-up-right

  • Added data validation for Nd4j.readTxt - now throws exception on invalid input instead of returning incorrect values Linkarrow-up-right

  • Fixed an issue with KNN implementation where a deadlock could occur if an invalid distance function (one returning "distances" less than 0) was utilized Linkarrow-up-right

  • Added synchronization to loading of Keras import models to avoid thread safety issues in the underlying HDFS library used for loading Linkarrow-up-right

  • Fixed rare issue for Async(Multi)DataSetIterator with large prefetch values Linkarrow-up-right

Deeplearning4J: API Changes (Transition Guide): 1.0.0-beta2 to 1.0.0-beta3

  • IEvaluation classes in DL4J have been deprecated and moved to ND4J so they are available for SameDiff training. Functionality and APIs are unchanged

  • MultiLayerConfiguration/ComputationGraphConfiguration pretrain(boolean) and backprop(boolean) have been deprecated and are no longer used. Use fit and pretrain/pretrainLayer methods instead. Linkarrow-up-right

  • ParallelWrapper module now no longer has a Scala version suffix for artifact id; new artifact id is deeplearning4j-parallel-wrapper which should be used instead Linkarrow-up-right

  • deeplearning4j-nlp-korean module now has a Scala version suffix due to Scala dependencies; the new artifact IDs are deeplearning4j-nlp-korean_2.10 and deeplearning4j-nlp-korean_2.11 Link

Deeplearning4J: Known issues: 1.0.0-beta3

  • Running multiple Spark training jobs simultaneously on one physical node (i.e., multiple JVMs from one or more Spark jobs) may cause problems with network communication. A workaround for this is to manually set a unique stream ID in the VoidConfiguration. Use a unique (or random) integer value for different jobs Link

Deeplearning4J: Keras Import

ND4J

ND4J: New Features

ND4J: Bug Fixes and Optimizations

ND4J: API Changes (Transition Guide): 1.0.0-beta2 to 1.0.0-beta3

  • CUDA 8.0 support has been removed. CUDA 9.0, 9.2 and 10.0 support is available in 1.0.0-beta3

  • nd4j-base64 module contents have been deprecated; use the equivalent classes in nd4j-api from now on Link

  • Some classes in the nd4j-jackson module have been deprecated; use the equivalent classes in nd4j-api from now on Link

ND4J: Known issues: 1.0.0-beta3

  • Android users may need to manually exclude the (now deprecated) module nd4j-base64. This is due to org.nd4j.serde.base64.Nd4jBase64 class being present in both nd4j-api and nd4j-base64 modules. Both versions have identical content. Use exclude group: 'org.nd4j', module: 'nd4j-base64' to exclude.

DataVec

DataVec: New Features

  • Added NativeImageLoader method overloads for org.opencv.core.Mat and String as filename Linkarrow-up-right

DataVec: Optimizations and Bug Fixes

  • Fix for JDBCRecordReader handling of null values Linkarrow-up-right

  • Improved errors/validation for ObjectDetectionRecordReader for invalid input (where image object centers are outside of image bounds) Linkarrow-up-right

  • Fixed issue where FileSplit was using methods that are unavailable on earlier versions of Android Link

  • Added SerializableHadoopConfiguration and BroadcastHadoopConfigHolder for cases where a Hadoop configuration is required in Spark functions Linkarrow-up-right Linkarrow-up-right

  • Fixed issue with JDBCRecordReader's handling of real-valued column result types Linkarrow-up-right

  • Added validation and useful exception for CSVRecordReader/LineRecordReader being used without initialization Linkarrow-up-right

Arbiter

Arbiter: Fixes

ND4S

  • Added conversion between org.nd4j.linalg.primitives.Pair/Triple and Scala Tuple Linkarrow-up-right

Version 1.0.0-beta2

Highlights - 1.0.0-beta2 Release

  • ND4J/Deeplearning4j: Added support for CUDA 9.2. Dropped support for CUDA 9.1. (1.0.0-beta2 release has CUDA 8.0, 9.0 and 9.2 support)

  • Deeplearning4j: New SameDiff layers with training support - Link Link

  • Deeplearning4j resource (datasets, pretrained models) storage directory can now be configured via the DL4JResources.setBaseDirectory method or the org.deeplearning4j.resources.directory system property (see the sketch after this list)

  • ND4J: all indexing is now done with longs instead of ints to allow for arrays with dimensions and lengths greater than Integer.MAX_VALUE (approx. 2.1 billion)

  • ND4J: nd4j-native-platform will now use Intel MKL-DNN as the default/bundled BLAS implementation (replacing OpenBLAS as the previous default)

  • Deeplearning4j: Added Out-of-memory (OOM) crash dump reporting functionality. Provides a dump with memory use and configuration if training/inference OOMs (to assist with debugging and tuning memory configuration).

  • Deeplearning4j - new layers: Locally connected 1d Link, Locally connected 2d Link
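A short sketch of pointing DL4J at a different resources directory using the two mechanisms named above; the directory path is a placeholder, and the import package and File-based setter signature are assumptions:

```java
import java.io.File;

import org.deeplearning4j.common.resources.DL4JResources;

public class ResourceDirectoryExample {
    public static void main(String[] args) {
        // Either set the system property before any DL4J resources are used...
        System.setProperty("org.deeplearning4j.resources.directory", "/data/dl4j-resources");

        // ...or call the setter directly (assumed File-based overload)
        DL4JResources.setBaseDirectory(new File("/data/dl4j-resources"));
    }
}
```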

Deeplearning4J

Deeplearning4J: New Features

  • Added new SameDiff layers (automatic differentiation - only single class, forward pass definition required) to DL4J with full training support - SameDiffLayer, SameDiffVertex, SameDiffOutputLayer, SameDiffLambdaLayer, SameDiffLambdaVertex - note that these are CPU-only execution for now Linkarrow-up-right Linkarrow-up-right Linkarrow-up-right

  • Resource (datasets, pretrained models) storage directory can now be configured via DL4JResources.setBaseDirectory method or org.deeplearning4j.resources.directory system property. Note that it is also possible to set a different base location for downloads (for local mirrors of DL4J resources) Linkarrow-up-right

  • Added Out-of-memory (OOM) crash dump reporting functionality. Provides a dump with memory use and configuration if training/inference OOMs. Same information is available (without a crash) for MultiLayerNetwork/ComputationGraph.memoryInfo methods. Can be disabled (or output directory set) using system propertiesarrow-up-right - Linkarrow-up-right

  • Added Composite[Multi]DataSetPreProcessor to enable multiple [Multi]DataSetPreProcessors to be applied in a single iterator Linkarrow-up-right

  • Added ComputationGraph evaluate methods for multi-output networks: evaluate(DataSetIterator, Map<Integer,IEvaluation[]>) and evaluate(MultiDataSetIterator, Map<Integer,IEvaluation[]>) Linkarrow-up-right

  • Added JointMultiDataSetIterator - utility iterator used to create MultiDataSetIterator from multiple DataSetIterators Linkarrow-up-right

  • GraphVertices may now have trainable parameters directly (not just enclose layers with trainable parameters) Linkarrow-up-right

  • Added MultiLayerNetwork/ComputationGraph getLearningRate methods Linkarrow-up-right

  • Added RandomDataSetIterator and RandomMultiDataSetIterator (mainly for testing/debugging) Linkarrow-up-right Linkarrow-up-right

  • Added cyclical "1cycle" schedule for learning rate schedules etc - Linkarrow-up-right

  • RDD repartitioning for Spark training is more configurable (adds Repartitioner interface) Linkarrow-up-right

  • Added ComputationGraph.getIterationCount() and .getEpochCount() for consistency with MultiLayerNetwork Linkarrow-up-right

  • Added locally connected 1d layer Linkarrow-up-right Linkarrow-up-right

  • Spark "data loader" API (mainly for Spark) Linkarrow-up-right Linkarrow-up-right Linkarrow-up-right

  • Spark evaluation: added evaluation method overloads that allow specifying the number of evaluation workers (less than number of Spark threads) Linkarrow-up-right

  • CnnSentenceDataSetIterator now has a Format argument, and supports outputting data for RNNs and 1D CNNs Linkarrow-up-right

  • Added ComputationGraph/MultiLayerNetwork.pretrain((Multi)DataSetIterator, int epochs) method overloads Linkarrow-up-right

  • MultiLayerNetwork and ComputationGraph now have output method overloads where the network output can be placed in the user-specified workspace, instead of being detached Linkarrow-up-right Linkarrow-up-right. This can be used to avoid creating INDArrays that need to be garbage collected before native memory can be freed.

  • EmbeddingSequenceLayer now supports [minibatch,1,seqLength] format sequence data in addition to [minibatch,seqLength] format data Linkarrow-up-right

  • CuDNN batch norm implementation will now be used for rank 2 input, not just rank 4 input Linkarrow-up-right

  • Environment variables and system properties for DL4J have been centralized into DL4JResources and DL4JEnvironmentVars classes, with proper descriptions Linkarrow-up-right Linkarrow-up-right

  • MultiLayerNetwork and ComputationGraph output/feedForward/fit methods are now thread-safe via synchronization. Note that concurrent use is not recommended due to performance (instead: use ParallelInference); however the now-synchronized methods should avoid obscure errors due to concurrent modifications Linkarrow-up-right

  • BarnesHutTSNE now throws a useful exception in the case where the distance metric is undefined (for example, all zeros plus cosine similarity) Linkarrow-up-right

Deeplearning4J: Bug Fixes and Optimizations

  • ComputationGraph.addListeners was not working correctly if listeners were already present Linkarrow-up-right, Linkarrow-up-right

  • TinyImageNetDataSetIterator did not validate/correctly use input shape configuration Linkarrow-up-right, Linkarrow-up-right

  • BatchNormalization layer now correctly asserts that nOut is set if required (instead of unfriendly shape errors later) Linkarrow-up-right

  • Fixed issue where OutputLayer may not initialize parameter constraints correctly Linkarrow-up-right

  • Fixed performance issue with Nesterov updater using CPU-only op for CUDA execution Linkarrow-up-right

  • Removed TerminationCondition for DL4J optimizers - was not used in practice, and had minor overhead Linkarrow-up-right

  • Fixed issue where EvaluativeListener could hit a workspace validation exception when workspaces are enabled Linkarrow-up-right

  • Fixed issue where TrainingListener.onEpochStart/onEpochEnd were not being called correctly for ComputationGraph Linkarrow-up-right

  • Fixed workspace issue with TensorFlowCnnToFeedForwardPreProcessor Linkarrow-up-right

  • Performance optimization for BatchNormalization when using CuDNN Linkarrow-up-right

  • Performance optimization: Dropout will be applied in-place when safe to do so, avoiding a copy Linkarrow-up-right

  • Added CuDNN implementation of Dropout Linkarrow-up-right

  • Reduced memory use for CuDNN: CuDNN working memory is now shared and reused between layers within a network Linkarrow-up-right

  • CuDNN batch normalization implementation would fail with FP16 datatype Linkarrow-up-right

  • Fixed issue where Bidirectional LSTM could incorrectly use workspaces, causing an exception Link

  • Fixed issue with early stopping where scores to be maximized (accuracy, f1, etc) were not properly triggering termination conditions Linkarrow-up-right

  • Fixed issue where label mask counter could be incorrectly incremented in ComputationGraph.computeGradientAndScore() Linkarrow-up-right

  • ComputationGraph was not setting lastEtlTime field during training Linkarrow-up-right

  • Fixed issue with AutoEncoder layer when workspaces are enabled Linkarrow-up-right

  • Fixed issue with EmbeddingSequenceLayer use of mask arrays Linkarrow-up-right

  • Lombok is now provided scope everywhere, so it isn't on the user classpath when using DL4J Link

  • Fixed an issue with WordVectorSerializer.readParagraphVectors(File) initialization of the label source Link

  • Spark training (gradient sharing) now properly handles empty partition edge case when encountered during training Linkarrow-up-right

  • Errors are propagated better/more consistently for Spark gradient sharing training Linkarrow-up-right

  • Fixed issue with 1D CNN layers with mask arrays and stride > 1 (masks not being correctly downsized) Linkarrow-up-right

  • DL4J Batch norm implementation was not correctly adding epsilon value during inference, only during training (CuDNN unaffected) Linkarrow-up-right

  • CuDNN subsampling layers with max pooling and ConvolutionMode.SAME may have taken padding value (0) as the maximum for border values when all non-padding values are less than 0 Linkarrow-up-right

  • Spark training with gradient sharing now passes listeners to workers correctly Linkarrow-up-right

  • Fixed rare (and non-terminal) concurrent modification issue with UI and FileStatsStorage Linkarrow-up-right

  • CuDNN convolution layer now supports dilation > 2 (previously: used DL4J conv layer implementation as a fallback) Linkarrow-up-right

  • Yolo2OutputLayer now implements computeScoreForExamples() Linkarrow-up-right

  • SequenceRecordReaderDataSetIterator now handles the "no labels" case correctly Link

  • Fixed issue where BarnesHutTSNE could hit a workspace validation exception Linkarrow-up-right

  • EMNIST iterator could produce incorrect data in some cases after a reset Linkarrow-up-right

Deeplearning4J: API Changes (Transition Guide): 1.0.0-beta to 1.0.0-beta2

  • GravesLSTM has been deprecated in favor of LSTM due to its lack of CuDNN support, but otherwise similar accuracy in practice. Use the LSTM class instead.

  • deeplearning4j-modelexport-solr: now uses Lucene/Solr version 7.4.0 (was 7.3.0) Link

  • Mask arrays for CNN2d layers must be in broadcastable 4d format: [minibatch, depth or 1, height or 1, width or 1] - previously they were 2d with shape [minibatch,height] or [minibatch,width]. This prevents ambiguity in later cases (pooling layers), and allows for more complex masking scenarios (such as masking for different image sizes in the same minibatch). Link

  • Some older/deprecated Model and Layer methods have been removed (validateInput(), initParams()). Some custom layers may need to be updated as a result Link

Deeplearning4J: 1.0.0-beta2 Known Issues

  • Windows users are unable to load the HDF5 files used in SvhnLabelProvider (used in the HouseNumberDetection example). Linux/Mac users are unaffected. A workaround for Windows users is to add the sonatype snapshot dependency org.bytedeco.javacpp-presets:hdf5-platform:jar:1.10.2-1.4.3-SNAPSHOT Link

Deeplearning4J: Keras Import

  • Keras model import now imports every Keras application

  • Supports GlobalPooling3D layer import

  • Supports RepeatVector layer import

  • Supports LocallyConnected1D and LocallyConnected2D layers

  • Keras Lambda layers can now be imported by registering custom SameDiff layers

  • All Keras optimizers are now supported

  • All advanced activation functions can now be imported.

  • Many minor bugs have been fixed, including proper weight setting for all configurations of BatchNormalization, improvements to Reshape and SeparableConvolution2D, and full support of Bidirectional layers.

ND4J

ND4J: New Features

  • ND4J: all indexing is now done with longs instead of ints to allow for arrays with dimensions and lengths greater than Integer.MAX_VALUE (approx. 2.1 billion)

  • Added the ability to write Numpy .npy format using Nd4j.writeAsNumpy(INDArray,File) and convert an INDArray to a numpy array in-memory using Nd4j.convertToNumpy(INDArray) Link

  • nd4j-common ClassPathResource: added ClassPathResource.copyDirectory(File) Link

  • SameDiff: A significant number of new ops, and backprop implementations for existing ops

  • Added Nd4j.randomBernoulli/Binomial/Exponential convenience methods Linkarrow-up-right

  • Added way to disable/suppress ND4J initialization logging via org.nd4j.log.initialization system property Linkarrow-up-right

  • SameDiff class - most op/constructor methods now have complete/useful javadoc Linkarrow-up-right

  • Workspaces can now be disabled globally, ignoring workspace configuration. This is mainly used for debugging; use Nd4j.getWorkspaceManager().setDebugMode(DebugMode.DISABLED) or Nd4j.getWorkspaceManager().setDebugMode(DebugMode.SPILL_EVERYTHING) to enable this. Link, Link

  • Added EnvironmentalAction API for environment variable processing Linkarrow-up-right

  • ND4J environment variables and system properties have been centralized in ND4jEnvironmentVars and ND4jSystemProperties classes Linkarrow-up-right and Linkarrow-up-right

ND4J: Bug Fixes and Optimizations

  • SameDiff: a significant number of bug fixes for execution and individual ops

  • Fixed issue with INDArray.toDoubleArray() with true scalars (rank 0 arrays) Link

  • Fixed issue with DataSet.sample() not working for rank 3+ features Linkarrow-up-right

  • IActivation implementations now validate/enforce same shape for activations and gradients Linkarrow-up-right

  • Fixed issue with muliColumnVector where vector is 1d Linkarrow-up-right

  • ImagePreProcessingScaler now supports serialization via NormalizerSerializerStrategy and ModelSerializer Linkarrow-up-right

  • Performance optimization for threshold encoding used in DL4J's Spark gradient sharing distributed training implementation Linkarrow-up-right

  • SameDiff: Fixed issue where memory wasn't always released after execution Linkarrow-up-right

  • DataSet.save() and MultiDataSet.save() methods now save example metadata when present Linkarrow-up-right

  • Fixed issue with KFoldIterator when dataset does not divide equally into folds with no remainder Linkarrow-up-right

  • Fixed issue where version check functionality could fail to load resources if resources are on a path with spaces Linkarrow-up-right

ND4J: Known Issues

ND4J: API Changes (Transition Guide): 1.0.0-beta to 1.0.0-beta2

  • CUDA 9.1 support has been removed. CUDA 8.0, 9.0 and 9.2 support is available

  • Due to long indexing changes, long/long[] should be used in place of int/int[] in some places (such as INDArray.size(int), INDArray.shape())

  • Simplified DataSetIterator API: totalExamples(), cursor() and numExamples() have been removed - these were unsupported on most DataSetIterator implementations, and not used in practice for training. Custom iterators should remove these methods also Link

  • Long-deprecated DataSet.getFeatureMatrix() has been removed. Use DataSet.getFeatures() instead (see the example after this list). Link

  • Unused and not properly tested/maintained utility class BigDecimalMath has been removed. Users should find an alternative library for this functionality, if required.

  • Not properly maintained complex number support classes (IComplexNumber, IComplexNDArray) have been removed entirely Link
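A small sketch of the long-based indexing and the DataSet accessor change described above:

```java
import java.util.Arrays;

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.factory.Nd4j;

public class LongIndexingExample {
    public static void main(String[] args) {
        INDArray features = Nd4j.zeros(4, 5);
        INDArray labels = Nd4j.zeros(4, 1);

        long rows = features.size(0);        // size(...) is now long-based
        long[] shape = features.shape();     // shape() now returns long[]
        System.out.println(rows + " x " + shape[1]);  // 4 x 5

        DataSet ds = new DataSet(features, labels);
        INDArray f = ds.getFeatures();       // replaces the removed getFeatureMatrix()
        System.out.println(Arrays.toString(f.shape()));
    }
}
```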

DataVec

DataVec: New Features

  • Added AnalyzeLocal class to mirror functionality of AnalyzeSpark (but without Spark dependency) Linkarrow-up-right

  • Added JacksonLineSequenceRecordReader: RecordReader used for multi-example JSON/XML where each line in a file is an independent example Linkarrow-up-right

  • Added RecordConverter.toRecord(Schema, List<Object>) Link

  • Added missing FloatColumnCondition Linkarrow-up-right

  • Added CSVLineSequenceRecordReader for "each line in CSV is a sequence, and sequence is single-valued/univariate" Linkarrow-up-right

  • Added CSVMultiSequenceRecordReader for "multiple multi-valued sequences in a single CSV" data Linkarrow-up-right

DataVec: Optimizations and Bug Fixes

DataVec: API Changes (Transition Guide): 1.0.0-beta to 1.0.0-beta2

Arbiter

Arbiter: New Features

  • Added DataSource interface. Unlike old DataProvider, this does not require JSON serializability (only a no-arg constructor) Linkarrow-up-right

  • Added numerous enhancements and missing configuration options (constraints, dilation, etc) Linkarrow-up-right Linkarrow-up-right

Arbiter: Fixes

  • DataProvider has been deprecated. Use DataSource instead.

RL4J

Version 1.0.0-beta

Highlights - 1.0.0-beta Release

  • Performance and memory optimizations for DL4J

Deeplearning4J

Deeplearning4J: New Features

  • New or enhanced layers:

  • Added ComputationGraph.output(DataSetIterator) method Linkarrow-up-right

  • Added MultiLayerNetwork/ComputationGraph.layerInputSize methods Linkarrow-up-right Linkarrow-up-right

  • Added SparkComputationGraph.feedForwardWithKey overload with feature mask support Linkarrow-up-right

  • Added MultiLayerNetwork.calculateGradients method (for easily getting parameter and input gradients, for example for some model interpretability approaches) Link Link

  • Added support to get input/activation types for each layer from configuration: ComputationGraphConfiguration.getLayerActivationTypes(InputType...), ComputationGraphConfiguration.GraphBuilder.getLayerActivationTypes(), NeuralNetConfiguration.ListBuilder.getLayerActivationTypes(), MultiLayerConfiguration.getLayerActivationTypes(InputType) methods Linkarrow-up-right

  • Evaluation.stats() now prints confusion matrix in easier to read matrix format, rather than list format Linkarrow-up-right

  • Added ModelSerializer.addObjectToFile, .getObjectFromFile and .listObjectsInFile for storing arbitrary Java objects in same file as saved network Linkarrow-up-right

  • Added SpatialDropout support (with Keras import support) Linkarrow-up-right

  • Added MultiLayerNetwork/ComputationGraph.fit((Multi)DataSetIterator, int numEpochs) overloads Linkarrow-up-right

  • Added performance (hardware) listeners: SystemInfoPrintListener and SystemInfoFilePrintListener Linkarrow-up-right

Deeplearning4J: Bug Fixes and Optimizations

  • Performance and memory optimizations via optimizations of internal use of workspaces Linkarrow-up-right

  • The Reflections library has been entirely removed from DL4J and is no longer required for custom layer serialization/deserialization Link, Link

    • Fixes issues with custom and some Keras import layers on Android

  • RecordReaderMultiDataSetIterator will no longer try to convert unused columns to numerical values Linkarrow-up-right

  • Added new model zoo models:

    • (to do)

  • Fixes for Android compilation (removed duplicate classes, aligned versions, removed some dependencies) Linkarrow-up-right Linkarrow-up-right Linkarrow-up-right

  • Fix for RecordReaderMultiDataSetIterator where output could be incorrect for some constructors Link

  • Non-frozen layers before a frozen layer will no longer be skipped during backprop (useful for GANs and similar architectures) Linkarrow-up-right Linkarrow-up-right

  • Fixed issue where ComputationGraph topological sort may not be consistent on all platforms; could sometimes break ComputationGraphs (with multiple valid topological orderings) trained on PC and deployed on Android Linkarrow-up-right

  • Fixed issue with CuDNN batch norm using 1-decay instead of decay Linkarrow-up-right

  • deeplearning4j-cuda no longer throws exceptions if present on classpath with nd4j-native backend set to higher priority Linkarrow-up-right

  • Added RNG control for CifarDataSetIterator Linkarrow-up-right

  • WordVectorSerializer now deletes temp files immediately once done Linkarrow-up-right

Deeplearning4J: API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

  • WorkspaceMode.SINGLE and SEPARATE have been deprecated; use WorkspaceMode.ENABLED instead

  • Internal layer API changes: custom layers will need to be updated to the new Layer API - see built-in layers or custom layer example

  • Custom layers etc in pre-1.0.0-beta JSON (ModelSerializer) format need to be registered before they can be deserialized due to JSON format change. Built-in layers and models saved in 1.0.0-beta or later do not require this. Use NeuralNetConfiguration.registerLegacyCustomClassesForJSON(Class) for this purpose

  • IterationListener has been deprecated in favor of TrainingListener. For existing custom listeners, switch from implements TrainingListener to extends BaseTrainingListener Linkarrow-up-right

  • ExistingDataSetIterator has been deprecated; use fit(DataSetIterator, int numEpochs) method instead
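
A rough sketch of the listener and workspace-mode changes above, assuming a custom listener that previously implemented IterationListener; everything beyond the class and method names quoted in these notes is illustrative.

    import org.deeplearning4j.nn.api.Model;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.WorkspaceMode;
    import org.deeplearning4j.optimize.api.BaseTrainingListener;

    public class BetaTransitionSketch {

        // Custom listeners: switch from "implements IterationListener" to "extends BaseTrainingListener"
        public static class LoggingListener extends BaseTrainingListener {
            @Override
            public void iterationDone(Model model, int iteration, int epoch) {
                System.out.println("iteration " + iteration + ", epoch " + epoch + ", score " + model.score());
            }
        }

        // WorkspaceMode.SINGLE and SEPARATE are deprecated; ENABLED replaces both
        public static NeuralNetConfiguration.Builder workspaceConfig() {
            return new NeuralNetConfiguration.Builder()
                    .trainingWorkspaceMode(WorkspaceMode.ENABLED)
                    .inferenceWorkspaceMode(WorkspaceMode.ENABLED);
        }
    }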

Deeplearning4J: 1.0.0-beta Known Issues

  • ComputationGraph TrainingListener onEpochStart and onEpochEnd methods are not being called correctly

  • DL4J Zoo Model FaceNetNN4Small2 model configuration is incorrect, causing issues during forward pass

  • Early stopping score calculators with values that should be maximized (accuracy, F1, etc.) are not working properly (values are minimized, not maximized). Workaround: override ScoreCalculator.calculateScore(...) and return 1.0 - super.calculateScore(...); see the sketch below.
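
One hedged way to apply the workaround described above is a small wrapper that delegates to an existing ScoreCalculator and inverts the result. The generic bound on ScoreCalculator is an assumption; if your version of the interface declares additional methods, subclass an existing calculator and override calculateScore instead.

    import org.deeplearning4j.earlystopping.scorecalc.ScoreCalculator;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

    // Workaround sketch: early stopping minimizes the score, so for metrics that should be
    // maximized (accuracy, F1, ...) return 1.0 minus the score of the wrapped calculator.
    public class InvertedScoreCalculator implements ScoreCalculator<MultiLayerNetwork> {
        private final ScoreCalculator<MultiLayerNetwork> delegate;

        public InvertedScoreCalculator(ScoreCalculator<MultiLayerNetwork> delegate) {
            this.delegate = delegate;
        }

        @Override
        public double calculateScore(MultiLayerNetwork network) {
            return 1.0 - delegate.calculateScore(network);
        }
    }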

Deeplearning4J: Keras Import

Deeplearning4J: Keras Import - API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

ND4J

ND4J: New Features

ND4J: Known Issues

  • Not all op gradients implemented for automatic differentiation

  • Vast majority of new operations added in 1.0.0-beta do NOT use GPU yet.

ND4J: API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

DataVec

DataVec: New Features

  • ImageRecordReader now logs number of inferred label classes (to reduce risk of users missing a problem if something is misconfigured) Linkarrow-up-right

  • Added AnalyzeSpark.getUnique overload for multiple columns Linkarrow-up-right

  • Added performance/timing module Linkarrow-up-right

DataVec: Optimizations and Bug Fixes

DataVec: API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

  • DataVec ClassPathResource has been deprecated; use nd4j-common version instead Linkarrow-up-right

Arbiter

Arbiter: New Features

  • Added LayerSpace for OCNN (one-class neural network)

Arbiter: Fixes

  • Fixed timestamp issue that could cause incorrect rendering of first model's results in UI Linkarrow-up-right

  • Execution now waits for last model(s) to complete before returning when a termination condition is hit Linkarrow-up-right

  • As per DL4J etc: use of Reflections library has been removed entirely from Arbiter Linkarrow-up-right

  • Removed use of Eclipse Collections library due to issues with Android compilation Linkarrow-up-right

  • Improved cleanup of completed models to reduce maximum memory requirements for training Linkarrow-up-right

Version 1.0.0-alpha

Highlights - 1.0.0-alpha Release

  • ND4J: Added SameDiff - Java automatic differentiation library (alpha release) with Tensorflow import (technology preview) and hundreds of new operations

  • ND4J: Added CUDA 9.0 and 9.1 support (with cuDNN), dropped support for CUDA 7.5, continued support for CUDA 8.0

  • ND4J: Native binaries (nd4j-native on Maven Central) now ship with AVX/AVX2/AVX-512 support (Windows/Linux)

  • DL4J: Large number of new layers and API improvements

  • DL4J: Keras 2.0 import support

Deeplearning4J

Deeplearning4J: New Features

  • Layers (new and enhanced)

  • Added parameter constraints API (LayerConstraint interface), and MaxNormConstraint, MinMaxNormConstraint, NonNegativeConstraint, UnitNormConstraint implementations (Linkarrow-up-right)

  • Significant refactoring of learning rate schedules (Linkarrow-up-right)

    • Added ISchedule interface; added Exponential, Inverse, Map, Poly, Sigmoid and Step schedule implementations (Linkarrow-up-right)

    • Added support for both iteration-based and epoch-based schedules via ISchedule. Also added support for custom (user defined) schedules

    • Learning rate schedules are configured on the updaters, via the .updater(IUpdater) method

  • Added dropout API (IDropout - previously dropout was available but not a class); added Dropout, AlphaDropout (for use with self-normalizing NNs), GaussianDropout (multiplicative), GaussianNoise (additive). Added support for custom dropout types (Linkarrow-up-right)

  • Added support for dropout schedules via ISchedule interface (Linkarrow-up-right)

  • Added weight/parameter noise API (IWeightNoise interface); added DropConnect and WeightNoise (additive/multiplicative Gaussian noise) implementations (Linkarrow-up-right); dropconnect and dropout can now be used simultaneously

  • Adds layer configuration alias .units(int) equivalent to .nOut(int) (Linkarrow-up-right)

  • Adds ComputationGraphConfiguration GraphBuilder .layer(String, Layer, String...) alias for .addLayer(String, Layer, String...)

  • Layer index no longer required for MultiLayerConfiguration ListBuilder (i.e., .list().layer(<layer>) can now be used for configs) (Linkarrow-up-right)

  • Added MultiLayerNetwork.summary(InputType) and ComputationGraph.summary(InputType...) methods (shows layer and activation size information) (Linkarrow-up-right)

  • MultiLayerNetwork, ComputationGraph and layerwise trainable layers now track the number of epochs (Linkarrow-up-right)

  • Added deeplearning4j-ui-standalone module: uber-jar for easy launching of UI server (usage: java -jar deeplearning4j-ui-standalone-1.0.0-alpha.jar -p 9124 -r true -f c:/UIStorage.bin)

  • Weight initializations:

    • Added .weightInit(Distribution) convenience/overload (previously: required .weightInit(WeightInit.DISTRIBUTION).dist(Distribution)) (Linkarrow-up-right)

    • WeightInit.NORMAL (for self-normalizing neural networks) (Linkarrow-up-right)

    • Ones, Identity weight initialization (Linkarrow-up-right)

    • Added new distributions (LogNormalDistribution, TruncatedNormalDistribution, OrthogonalDistribution, ConstantDistribution) which can be used for weight initialization (Linkarrow-up-right)

    • RNNs: Added ability to specify weight initialization for recurrent weights separately to "input" weights (Linkarrow-up-right)

  • Added layer alias: Convolution2D (ConvolutionLayer), Pooling1D (Subsampling1DLayer), Pooling2D (SubsamplingLayer) (Linkarrow-up-right)

  • Added Spark IteratorUtils - wraps a RecordReaderMultiDataSetIterator for use in Spark network training (Linkarrow-up-right)

  • CuDNN-supporting layers (ConvolutionLayer, etc) now warn the user if using CUDA without CuDNN (Linkarrow-up-right)

  • Binary cross entropy (LossBinaryXENT) now implements clipping (1e-5 to (1 - 1e-5) by default) to avoid numerical underflow/NaNs (Linkarrow-up-right)

  • SequenceRecordReaderDataSetIterator now supports multi-label regression (Linkarrow-up-right)

  • TransferLearning FineTuneConfiguration now has methods for setting training/inference workspace modes (Linkarrow-up-right)

  • IterationListener iterationDone method now reports both current iteration and epoch count; removed unnecessary invoke/invoked methods (Linkarrow-up-right)

  • Added MultiLayerNetwork.layerSize(int), ComputationGraph.layerSize(int)/layerSize(String) to easily determine size of layers (Linkarrow-up-right)

  • Added MultiLayerNetwork.toComputationGraph() method (Linkarrow-up-right)

  • Added NetworkUtils convenience methods to easily change the learning rate of an already initialized network (Linkarrow-up-right)

  • Added MultiLayerNetwork.save(File)/.load(File) and ComputationGraph.save(File)/.load(File) convenience methods (Linkarrow-up-right)

  • Added CheckpointListener to periodically save a copy of the model during training (every N iter/epochs, every T time units) (Linkarrow-up-right)

  • Added ComputationGraph output method overloads with mask arrays (Linkarrow-up-right)

  • New LossMultiLabel loss function for multi-label classification (Linkarrow-up-right)

  • Added new model zoo models:

  • New iterators, and iterator improvements:

  • Added additional score functions for early stopping (ROC metrics, full set of Evaluation/Regression metrics, etc) (Linkarrow-up-right)

  • Added additional ROC and ROCMultiClass evaluation overloads for MultiLayerNetwork and ComputationGraph (Linkarrow-up-right)

  • Clarified Evaluation.stats() output to refer to "Predictions" instead of "Examples" (former is more correct for RNNs) (Linkarrow-up-right)

  • EarlyStoppingConfiguration now supports Supplier<ScoreCalculator> for use with non-serializable score calculators (Linkarrow-up-right)

  • Improved ModelSerializer exceptions when trying to load a model via wrong method (i.e., try to load ComputationGraph via restoreMultiLayerNetwork) (Linkarrow-up-right)

  • Added SparkDataValidation utility methods to validate saved DataSet and MultiDataSet on HDFS or local (Linkarrow-up-right)

  • ModelSerializer: added restoreMultiLayerNetworkAndNormalizer and restoreComputationGraphAndNormalizer methods (Linkarrow-up-right)

  • ParallelInference now has output overloads with support for input mask arrays (Linkarrow-up-right)
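
A condensed configuration sketch touching several of the additions listed above: an ISchedule-based learning rate on the updater, the IDropout and IWeightNoise APIs, the index-free .layer(...) and .units(...) aliases, and summary(InputType). Constructor arguments (schedule values, layer sizes) are illustrative and should be checked against the 1.0.0-alpha javadoc.

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.dropout.Dropout;
    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.conf.weightnoise.DropConnect;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Adam;
    import org.nd4j.linalg.lossfunctions.LossFunctions;
    import org.nd4j.linalg.schedule.ScheduleType;
    import org.nd4j.linalg.schedule.StepSchedule;

    public class AlphaConfigSketch {
        public static MultiLayerNetwork build() {
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    // learning rate schedules are now configured on the updater via ISchedule
                    .updater(new Adam(new StepSchedule(ScheduleType.ITERATION, 1e-3, 0.5, 1000)))
                    // dropout and weight noise are now first-class configuration objects
                    .dropOut(new Dropout(0.8))
                    .weightNoise(new DropConnect(0.95))
                    .list()
                    // layer index is no longer required; .units(...) is the new alias for .nOut(...)
                    .layer(new DenseLayer.Builder().nIn(784).units(256).activation(Activation.RELU).build())
                    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                            .nIn(256).nOut(10).activation(Activation.SOFTMAX).build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            // new summary overload that also shows activation sizes for the given input type
            System.out.println(net.summary(InputType.feedForward(784)));
            return net;
        }
    }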

Deeplearning4J: Bug Fixes and Optimizations

  • Lombok is no longer included as a transitive dependency (Linkarrow-up-right)

  • ComputationGraph can now have a vertex as the output (not just layers) (Linkarrow-up-right, Linkarrow-up-right)

  • Performance improvement for J7FileStatsStorage with large amount of history (Linkarrow-up-right)

  • Fixed UI layer sizes for variational autoencoder layers (Linkarrow-up-right)

  • Fixes to avoid HDF5 library crashes (Linkarrow-up-right, Linkarrow-up-right)

  • UI Play servers switch to production (PROD) mode (Linkarrow-up-right)

  • Related to the above: users can now set play.crypto.secret system property to manually set the Play application secret; is randomly generated by default (Linkarrow-up-right).

  • SequenceRecordReaderDataSetIterator would apply preprocessor twice (Linkarrow-up-right)

  • Evaluation no-arg constructor could cause NaN evaluation metrics when used on Spark

  • CollectScoresIterationListener could recurse endlessly (Linkarrow-up-right)

  • Async(Multi)DataSetIterator calling reset() on underlying iterator could cause issues in some situations (Linkarrow-up-right)

  • In some cases, L2 regularization could be (incorrectly) applied to frozen layers (Linkarrow-up-right)

  • Logging fixes for NearestNeighboursServer (Linkarrow-up-right)

  • Memory optimization for BaseStatsListener (Linkarrow-up-right)

  • ModelGuesser fix for loading Keras models from streams (previously would fail) (Linkarrow-up-right)

  • Various fixes for workspaces in MultiLayerNetwork and ComputationGraph (Linkarrow-up-right, Linkarrow-up-right, Linkarrow-up-right, Linkarrow-up-right, Linkarrow-up-right, Linkarrow-up-right)

  • Fix for incorrect condition in DuplicateToTimeSeriesVertex (Linkarrow-up-right)

  • Fix for getMemoryReport exception on some valid ComputationGraph networks (Linkarrow-up-right)

  • RecordReaderDataSetIterator when used with preprocessors could cause an exception under some circumstances (Linkarrow-up-right)

  • CnnToFeedForwardPreProcessor could silently reshape invalid input, as long as the input array length matches the expected length (Linkarrow-up-right)

  • ModelSerializer temporary files would not be deleted if JVM crashes; now are deleted immediately when no longer required (Linkarrow-up-right)

  • RecordReaderMultiDataSetIterator may not add mask arrays under some circumstances, when set to ALIGN_END mode (Linkarrow-up-right)

  • ConvolutionIterationListener previously produced an IndexOutOfBoundsException when all convolution layers are frozen (Linkarrow-up-right)

  • PrecisionRecallCurve.getPointAtRecall could return a point with a correct but sub-optimal precision when multiple points had identical recall (Linkarrow-up-right)

  • Setting dropout(0) on transfer learning FineTuneConfiguration did not remove dropout if present on existing layer (Linkarrow-up-right)

  • Under some rare circumstances, Spark evaluation could lead to a NullPointerException (Linkarrow-up-right)

  • ComputationGraph: disconnected vertices were not always detected in configuration validation (Linkarrow-up-right)

  • Activation layers would not always inherit the global activation function configuration (Linkarrow-up-right)

  • RNN evaluation memory optimization: when TBPTT is configured for training, also use TBPTT-style splitting for evaluation (identical result, less memory) (Linkarrow-up-right, Linkarrow-up-right)

  • PerformanceListener is now serializable (Linkarrow-up-right)

  • ScoreIterationListener and PerformanceListener now report model iteration, not "iterations since listener creation" (Linkarrow-up-right)

  • Precision/recall curves cached values in ROC class may not be updated after merging ROC instances (Linkarrow-up-right)

  • ROC merging after evaluating a large number of examples may produce IllegalStateException (Linkarrow-up-right)

  • Added checks for invalid input indices to EmbeddingLayer (Linkarrow-up-right)

  • Fixed possible NPE when loading legacy (pre-0.9.0) model configurations from JSON (Linkarrow-up-right)

  • Fixed issues with EvaluationCalibration HTML export chart rendering (Linkarrow-up-right)

  • Fixed possible incorrect rendering of UI/StatsStorage charts with J7FileStatsStorage when used with Spark training Linkarrow-up-right

  • MnistDataSetIterator would not always reliably detect and automatically fix/re-download corrupted download data Linkarrow-up-right

  • MnistDataSetIterator / EmnistDataSetIterator: updated download location after hosting URL change (Linkarrow-up-right, Linkarrow-up-right)

  • Fixes to propagation of thread interruptions (Linkarrow-up-right)

  • MultiLayerNetwork/ComputationGraph will no longer throw an ND4JIllegalStateException during initialization if a network contains no parameters (Linkarrow-up-right, Linkarrow-up-right)

  • Fixes for TSNE posting of data to UI for visualization (Linkarrow-up-right)

  • PerformanceListener now throws a useful exception (in constructor) on invalid frequency argument, instead of runtime ArithmeticException (Linkarrow-up-right)

  • RecordReader(Multi)DataSetIterator now throws more useful exceptions when Writable values are non-numerical (Linkarrow-up-right)

  • UI: Fixed possible character encoding issues for non-English languages when internationalization data .txt files are read from uber JARs (Linkarrow-up-right)

  • UI: Fixed UI incorrectly trying to parse non-DL4J UI resources when loading I18N data (Linkarrow-up-right)

  • Various threading fixes (Linkarrow-up-right)

  • Evaluation: no-arg methods (f1(), precision(), etc) now return single class value for binary case instead of the macro-averaged value; clarified values in the stats() method and javadoc Linkarrow-up-right

  • Early stopping training: TrainingListener onEpochStart/onEpochEnd (etc) methods were not being called correctly Linkarrow-up-right

  • Fixes issue where dropout was not always applied to input of RNN layers (Linkarrow-up-right)

  • ModelSerializer: improved validation/exceptions when reading from invalid/empty/closed streams (Linkarrow-up-right)

  • ParallelInference fixes:

    • Fixes for variable size inputs (variable length time series, variable size CNN inputs) when using batch mode (Linkarrow-up-right)

    • Underlying model exceptions during the output method are now properly propagated back to the user (Linkarrow-up-right)

    • Fixed support for 'pre-batched' inputs (i.e., inputs where minibatch size is > 1) (Linkarrow-up-right)

  • Memory optimization for network weight initialization via in-place random ops (Linkarrow-up-right)

  • Fixes for CuDNN with SAME mode padding (Linkarrow-up-right, Linkarrow-up-right)

  • Fix for VariationalAutoencoder builder decoder layer size validation (Linkarrow-up-right)

  • Improved Kmeans throughput Linkarrow-up-right

  • Added RPForest to nearest neighbors Linkarrow-up-right

Deeplearning4J: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • Default training workspace mode has been switched to SEPARATE from NONE for MultiLayerNetwork and ComputationGraph (Link)

  • Behaviour change: fit(DataSetIterator) and similar methods no longer perform layerwise pretraining followed by backprop - only backprop is performed in these methods. For pretraining, use pretrain(DataSetIterator) and pretrain(MultiDataSetIterator) methods (Linkarrow-up-right)

  • Previously deprecated updater configuration methods (.learningRate(double), .momentum(double) etc) all removed

    • To configure learning rate: use .updater(new Adam(lr)) instead of .updater(Updater.ADAM).learningRate(lr)

    • To configure bias learning rate: use .biasUpdater(IUpdater) method

    • To configure learning rate schedules: use .updater(new Adam(ISchedule)) and similar

  • Updater configuration via enumeration (i.e., .updater(Updater)) has been deprecated; use .updater(IUpdater)

  • .regularization(boolean) config removed; functionality is now always equivalent to .regularization(true)

  • .useDropConnect(boolean) removed; use .weightNoise(new DropConnect(double)) instead

  • .iterations(int) method has been removed (was rarely used and confusing to users)

  • Multiple utility classes (in org.deeplearning4j.util) have been deprecated and/or moved to nd4j-common. Use the same class names in the nd4j-common module (package org.nd4j.util) instead.

  • DataSetIterators in DL4J have been moved from deeplearning4j-nn module to new deeplearning4j-datasets, deeplearning4j-datavec-iterators and deeplearning4j-utility-iterators modules. Packages/imports are unchanged; deeplearning4j-core pulls these in as transitive dependencies hence no user changes should be required in most cases (Linkarrow-up-right)

  • Previously deprecated .activation(String) has been removed; use .activation(Activation) or .activation(IActivation) instead

  • Layer API change: Custom layers may need to implement applyConstraints(int iteration, int epoch) method

  • Parameter initializer API change: Custom parameter initializers may need to implement isWeightParam(String) and isBiasParam(String) methods

  • RBM (Restricted Boltzmann Machine) layers have been removed entirely. Consider using VariationalAutoencoder layers as a replacement (Linkarrow-up-right)

  • GravesBidirectionalLSTM has been deprecated; use new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build()) instead

  • Previously deprecated WordVectorSerializer methods have now been removed (Linkarrow-up-right)

  • Removed deeplearning4j-ui-remote-iterationlisteners module and obsolete RemoteConvolutionalIterationListener (Linkarrow-up-right)
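
A short before/after sketch of the updater and activation changes above; the removed methods are shown only as comments since they no longer compile against 1.0.0-alpha, and the learning rate values are illustrative.

    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Adam;
    import org.nd4j.linalg.learning.config.Sgd;

    public class UpdaterMigrationSketch {
        public static NeuralNetConfiguration.Builder migrated() {
            // Removed: .updater(Updater.ADAM).learningRate(1e-3).momentum(0.9)
            // Removed: .activation("relu")
            return new NeuralNetConfiguration.Builder()
                    .updater(new Adam(1e-3))       // learning rate now lives on the IUpdater instance
                    .biasUpdater(new Sgd(2e-3))    // separate bias learning rate via .biasUpdater(IUpdater)
                    .activation(Activation.RELU);  // enum-based activation instead of a String
        }
    }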

Deeplearning4J: 1.0.0-alpha Known Issues

  • Performance on some networks types may be reduced on CUDA compared to 0.9.1 (with workspaces configured). This will be addressed in the next release

  • Some issues have been noted with FP16 support on CUDA (Linkarrow-up-right)

Deeplearing4J: Keras Import

  • Keras 2 support, keeping backward compatibility for Keras 1

  • Keras 2 and 1 imports use the exact same API; the Keras version is inferred by DL4J

  • Keras unit test coverage increased by 10x, many more real-world integration tests

  • Unit tests for importing and checking layer weights

  • Leaky ReLU, ELU, SELU support for model import

  • All Keras layers can be imported with optional bias terms

  • Old deeplearning4j-keras module removed, old "Model" API removed

  • All Keras initializations (Lecun normal, Lecun uniform, ones, zeros, Orthogonal, VarianceScaling, Constant) supported

  • 1D convolution and pooling supported in DL4J and Keras model import

  • Atrous Convolution 1D and 2D layers supported in Keras model import

  • 1D Zero padding layers supported

  • Keras constraints module fully supported in DL4J and model import

  • Upsampling 1D and 2D layers in DL4J and Keras model import (including GAN examples in tests)

  • Most merge modes supported in Keras model import, Keras 2 Merge layer API supported

  • Separable Convolution 2D layer supported in DL4J and Keras model import

  • Deconvolution 2D layer supported in DL4J and Keras model import

  • Full support of Keras noise layers on import (Alpha dropout, Gaussian dropout and noise)

  • Support for SimpleRNN layer in Keras model import

  • Support for Bidirectional layer wrapper Keras model import

  • Addition of LastTimestepVertex in DL4J to support return_sequences=False for Keras RNN layers.

  • DL4J support for recurrent weight initializations and Keras import integration.

  • SpaceToBatch and BatchToSpace layers in DL4J for better YOLO support, plus end-to-end YOLO Keras import test.

  • Cropping2D support in DL4J and Keras model import

Deeplearning4J: Keras Import - API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • The Model and ModelConfiguration classes, deprecated in 0.9.1, have been permanently removed. Use KerasModelImportarrow-up-right instead, which is now the only entry point for Keras model import.
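
For reference, a minimal import via the KerasModelImport entry point mentioned above; file names are placeholders, and whether to use the Sequential or functional-model method depends on how the Keras model was built.

    import org.deeplearning4j.nn.graph.ComputationGraph;
    import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

    public class KerasImportSketch {
        public static void main(String[] args) throws Exception {
            // Keras Sequential model -> DL4J MultiLayerNetwork
            MultiLayerNetwork mln =
                    KerasModelImport.importKerasSequentialModelAndWeights("my_sequential_model.h5");

            // Keras functional-API model -> DL4J ComputationGraph
            ComputationGraph cg =
                    KerasModelImport.importKerasModelAndWeights("my_functional_model.h5");
        }
    }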

Deeplearning4J: Keras Import - Known Issues

  • Embedding layer: In DL4J the output of an embedding layer is 2D by default, unless preprocessors are specified. In Keras the output is always 3D, but depending on specified parameters can be interpreted as 2D. This often leads to difficulties when importing Embedding layers. Many cases have been covered and issues fixed, but inconsistencies remain.

  • Batchnormalization layer: DL4J's batch normalization layer is much more restrictive (in a good way) than Keras' version of it. For instance, DL4J only allows normalizing spatial dimensions for 4D convolutional inputs, while in Keras any axis can be used for normalization. Depending on the dimension ordering (NCHW vs. NHWC) and the specific configuration used by a Keras user, this can lead to expected (!) and unexpected import errors.

  • Support for importing a Keras model for training purposes in DL4J (enforceTrainingConfig == true) is still very limited and will be tackled properly for the next release.

  • Keras Merge layers: seem to work fine with the Keras functional API, but have issues when used in a Sequential model.

  • Reshape layers: can be somewhat unreliable on import. DL4J rarely has a need to explicitly reshape input beyond (inferred) standard input preprocessors. In Keras, Reshape layers are used quite often. Mapping the two paradigms can be difficult in edge cases.

ND4J

ND4J: New Features

  • Hundreds of new operations added

  • New DifferentialFunction API with automatic differentiation (see SameDiff section) Linkarrow-up-right

  • Technology preview of TensorFlow import added (supports 1.4.0 and up)

  • Apache Arrow serialization added supporting new tensor API Linkarrow-up-right

  • Added support for AVX/AVX2 and AVX-512 instruction sets for Windows/Linux for the nd4j-native backend Linkarrow-up-right

  • NVIDIA CUDA 8/9.0/9.1 now supported

  • Workspaces improvements were introduced to ensure safety: SCOPE_PANIC profiling mode is enabled by default

  • FlatBuffers support for INDArray serde

  • Support for auto-broadcastable operations was added

  • libnd4j, the underlying C++ library, received a functionality boost and now offers NDArray and Graph classes; it can be used as a standalone library or executable.

  • Convolution-related ops now support NHWC in addition to NCHW data format.

  • Accumulation ops now have an option to keep reduced dimensions.

ND4J: Known Issues

  • Not all op gradients implemented for automatic differentiation

  • Vast majority of new operations added in 1.0.0-alpha do NOT use GPU yet.

ND4J: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

ND4J - SameDiff

  • Initial tech preview Linkarrow-up-right

  • Control flow is supported with IF and WHILE primitives.

Alpha release of SameDiffarrow-up-right auto-differentiation engine for ND4J.

Features

  • Two execution modes available: Java-driven execution, and Native execution for serialized graphs.

  • SameDiff graphs can be serialized using FlatBuffers

  • Building and running computation graphs built from SameDiff operations.

  • Graphs can run forward pass on input data and compute gradients for the backward pass.

  • Already supports many high-level layers, like dense layers, convolutions (1D-3D), deconvolutions, separable convolutions, pooling and upsampling, batch normalization, local response normalization, LSTMs and GRUs.

  • In total there are about 350 SameDiff operations available, including many basic operations used in building complex graphs.

  • Supports rudimentary import of TensorFlowarrow-up-right and ONNX graphs for inference.

  • TFOpTestsarrow-up-right is a dedicated project for creating test resources for TensorFlow import.
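
A very small Java-driven execution sketch, hedged: it follows the 1.0.0-alpha-era SameDiff API as best understood (var/mmul/sigmoid/execAndEndResult), and several of these method names changed in later releases.

    import org.nd4j.autodiff.samediff.SDVariable;
    import org.nd4j.autodiff.samediff.SameDiff;
    import org.nd4j.linalg.factory.Nd4j;

    public class SameDiffSketch {
        public static void main(String[] args) {
            SameDiff sd = SameDiff.create();

            // variables holding input data and weights
            SDVariable in = sd.var("in", Nd4j.rand(1, 3));
            SDVariable w  = sd.var("w",  Nd4j.rand(3, 2));

            // a tiny graph: sigmoid(in * w)
            SDVariable out = sd.sigmoid(sd.mmul(in, w));

            // forward pass, Java-driven execution
            System.out.println(sd.execAndEndResult());
        }
    }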

Known Issues and Limitations

  • Vast majority of new operations added in 1.0.0-alpha do NOT use GPU yet.

  • While many of the widely used base operations and high-level layers used in practice are supported, op coverage is still limited. Goal is to achieve feature parity with TensorFlow and fully support import for TF graphs.

  • Some of the existing ops do not have a backward pass implemented (called doDiff in SameDiff).

DataVec

DataVec: New Features

DataVec: Fixes

DataVec: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • Many of the util classes (in org.datavec.api.util mainly) have been deprecated or removed; use the equivalently named util classes in the nd4j-common module (Linkarrow-up-right)

  • RecordReader.next(int) method now returns List<List<Writable>> for batches, not List<Writable>. See also NDArrayRecordBatcharrow-up-right

  • RecordWriter and SequenceRecordWriter APIs have been updated with multiple new methods
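
A small sketch of the new batch signature mentioned above; the CSV reader setup and file name are illustrative.

    import java.io.File;
    import java.util.List;
    import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
    import org.datavec.api.split.FileSplit;
    import org.datavec.api.writable.Writable;

    public class BatchReadSketch {
        public static void main(String[] args) throws Exception {
            CSVRecordReader rr = new CSVRecordReader();
            rr.initialize(new FileSplit(new File("data.csv")));

            // next(int) now returns one List<Writable> per example in the batch
            List<List<Writable>> batch = rr.next(32);
            for (List<Writable> example : batch) {
                System.out.println(example);
            }
        }
    }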

Arbiter

Arbiter: New Features

Arbiter: Fixes

Arbiter: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • As per DL4J updater API changes: old updater configuration (learningRate, momentum, etc) methods have been removed. Use .updater(IUpdater) or .updater(ParameterSpace<IUpdater>) methods instead

RL4J

  • Add support for LSTM layer to A3C

  • Fix A3C to make it actually work using new ActorCriticLoss and correct use of randomness

  • Fix cases when QLearning would fail (non-flat input, incomplete serialization, incorrect normalization)

  • Fix logic of HistoryProcessor with async algorithms and failures when preprocessing images

  • Tidy up and correct the output of statistics, also allowing the use of IterationListener

  • Fix issues preventing efficient execution with CUDA

  • Provide access to more of the internal structures with NeuralNet.getNeuralNetworks(), Policy.getNeuralNet(), and convenience constructors for Policy

  • Add MDPs for ALE (Arcade Learning Environment) and MALMO to support Atari games and Minecraft

  • Update MDP for Doom to allow using the latest version of VizDoom

ScalNet

  • First release of ScalNet Scala APIarrow-up-right, which closely resembles Keras' API.

  • Can be built with sbt and maven.

  • Supports both Keras inspired Sequentialarrow-up-right models, corresponding to DL4J's MultiLayerNetwork, and Modelarrow-up-right, corresponding to ComputationGraph.

  • Project structure is closely aligned to both DL4J model-import module and Keras.

  • Supports the following layers: Convolution2D, Dense, EmbeddingLayer, AvgPooling2D, MaxPooling2D, GravesLSTM, LSTM, Bidirectional layer wrapper, Flatten, Reshape. Additionally, DL4J OutputLayers are supported.

ND4S

  • Scala 2.12 support

Version 0.9.1

Deeplearning4J

  • Fixed issue with incorrect version dependencies in 0.9.0

  • Added EmnistDataSetIterator Linkarrow-up-right

  • Numerical stability improvements to LossMCXENT / LossNegativeLogLikelihood with softmax (should reduce NaNs with very large activations)

ND4J

Known Issues

  • Deeplearning4j: Use of Evaluation class no-arg constructor (i.e., new Evaluation()) can result in accuracy/stats being reported as 0.0. Other Evaluation class constructors, and ComputationGraph/MultiLayerNetwork.evaluate(DataSetIterator) methods work as expected.

    • This also impacts Spark (distributed) evaluation: workaround is to replace sparkNet.evaluate(testData); with sparkNet.doEvaluation(testData, 64, new Evaluation(10))[0];, where 10 is the number of classes and 64 is the evaluation minibatch size to use.

  • SequenceRecordReaderDataSetIterator applies preprocessors (such as normalization) twice to each DataSet (possible workaround: use RecordReaderMultiDataSetIterator + MultiDataSetWrapperIterator)

  • TransferLearning: ComputationGraph may incorrectly apply l1/l2 regularization (defined in FinetuneConfiguration) to frozen layers. Workaround: set 0.0 l1/l2 on FineTuneConfiguration, and required l1/l2 on new/non-frozen layers directly. Note that MultiLayerNetwork with TransferLearning appears to be unaffected.

Version 0.9.0

Deeplearning4J

  • Workspaces feature added (faster training performance + less memory) Link

  • SharedTrainingMaster added for Spark network training (improved performance) Link 1arrow-up-right, Link 2arrow-up-right

  • ParallelInference added - a wrapper that serves inference requests using internal batching and queues Linkarrow-up-right

  • ParallelWrapper is now able to work with gradient sharing, in addition to the existing parameter averaging mode Linkarrow-up-right

  • VPTree performance significantly improved

  • CacheMode network configuration option added - improved CNN and LSTM performance at the expense of additional memory use Linkarrow-up-right

  • LSTM layer added, with CuDNN support Linkarrow-up-right (Note that the existing GravesLSTM implementation does not support CuDNN)

  • New native model zoo with pretrained ImageNet, MNIST, and VGG-Face weights Linkarrow-up-right

  • Convolution performance improvements, including activation caching

  • Custom/user defined updaters are now supported Linkarrow-up-right

  • Evaluation improvements

    • EvaluationBinary, ROCBinary classes added: for evaluation of binary multi-class networks (sigmoid + xent output layers) Linkarrow-up-right

    • Evaluation and others now have G-Measure and Matthews Correlation Coefficient support; also macro + micro-averaging support for Evaluation class metrics Linkarrow-up-right

    • ComputationGraph and SparkComputationGraph evaluation convenience methods added (evaluateROC, etc)

    • ROC and ROCMultiClass support exact calculation (previously: thresholded calculation was used) Linkarrow-up-right

    • ROC classes now support area under precision-recall curve calculation; getting precision/recall/confusion matrix at specified thresholds (via PrecisionRecallCurve class) Linkarrow-up-right

    • RegressionEvaluation, ROCBinary etc now support per-output masking (in addition to per-example/per-time-step masking)

    • EvaluationCalibration added (residual plots, reliability diagrams, histogram of probabilities) Link 1arrow-up-right Link 2arrow-up-right

    • Evaluation and EvaluationBinary: now supports custom classification threshold or cost array Linkarrow-up-right

  • Optimizations: updaters, bias calculation

  • Network memory estimation functionality added. Memory requirements can be estimated from configuration without instantiating networks Link 1arrow-up-right Link 2arrow-up-right

  • New loss functions:

ND4J

  • Workspaces feature added Linkarrow-up-right

  • Native parallel sort was added

  • New ops added: SELU/SELUDerivative, TAD-based comparisons, percentile/median, Reverse, Tan/TanDerivative, SinH, CosH, Entropy, ShannonEntropy, LogEntropy, AbsoluteMin/AbsoluteMax/AbsoluteSum, Atan2

  • New distance functions added: CosineDistance, HammingDistance, JaccardDistance

DataVec

  • MapFileRecordReader and MapFileSequenceRecordReader added Link 1arrow-up-right Link 2arrow-up-right

  • Spark: Utilities to save and load JavaRDD<List<Writable>> and JavaRDD<List<List<Writable>>> data to Hadoop MapFile and SequenceFile formats Linkarrow-up-right

  • TransformProcess and Transforms now support NDArrayWritables and NDArrayWritable columns

  • Multiple new Transform classes

Arbiter

  • Arbiter UI: Linkarrow-up-right

    • UI now uses Play framework, integrates with DL4J UI (replaces Dropwizard backend). Dependency issues/clashing versions fixed.

    • Supports DL4J StatsStorage and StatsStorageRouter mechanisms (FileStatsStorage, Remote UI via RemoteUIStatsStorageRouter)

    • General UI improvements (additional information, formatting fixes)

0.8.0 -> 0.9.0 Transition Notes

Deeplearning4j

  • Updater configuration methods such as .momentum(double) and .epsilon(double) have been deprecated. Instead: use .updater(new Nesterovs(0.9)) and .updater(Adam.builder().beta1(0.9).beta2(0.999).build()) etc to configure

DataVec

  • CsvRecordReader constructors: now uses characters for delimiters, instead of Strings (i.e., ',' instead of ",")

Arbiter

  • Arbiter UI is now a separate module, with Scala version suffixes: arbiter-ui_2.10 and arbiter-ui_2.11

Version 0.8.0

  • Added transfer learning API Linkarrow-up-right

  • Spark 2.0 support (DL4J and DataVec; see transition notes below)

  • New layers

  • New ComputationGraph vertices

    • L2 distance vertex

    • L2 normalization vertex

  • Per-output masking is now supported for most loss functions (for per output masking, use a mask array equal in size/shape to the labels array; previous masking functionality was per-example for RNNs)

  • L1 and L2 regularization can now be configured for biases (via l1Bias and l2Bias configuration options)

  • Evaluation improvements:

    • DL4J now has an IEvaluation class (that Evaluation, RegressionEvaluation, etc. all implement; also allows custom evaluation on Spark) Linkarrow-up-right

    • Added multi-class (one vs. all) ROC: ROCMultiClass Linkarrow-up-right

    • For both MultiLayerNetwork and SparkDl4jMultiLayer: added evaluateRegression, evaluateROC, evaluateROCMultiClass convenience methods

    • HTML export functionality added for ROC charts Linkarrow-up-right

    • TSNE re-added to new UI

    • Training UI: now usable without an internet connection (no longer relies on externally hosted fonts)

    • UI: improvements to error handling for ‘no data’ condition

  • Epsilon configuration now used for Adam and RMSProp updaters

  • Fix for bidirectional LSTMs + variable-length time series (using masking)

  • Added CnnSentenceDataSetIterator (for use with ‘CNN for Sentence Classification’ architecture) Linkarrow-up-right Link2arrow-up-right

  • Spark + Kryo: now test serialization + throw exception if misconfigured (instead of logging an error that can be missed)

  • MultiLayerNetwork now adds default layer names if no name is specified

  • DataVec:

    • JSON/YAML support for DataAnalysis, custom Transforms etc

    • ImageRecordReader refactored to reduce garbage collection load (hence improve performance with large training sets)

    • Faster quality analysis.

  • Arbiter: added new layer types to match DL4J

    • Performance improvement for Word2Vec/ParagraphVectors tokenization & training.

  • Batched inference introduced for ParagraphVectors

  • Nd4j improvements

    • New native operations available for ND4j: firstIndex, lastIndex, remainder, fmod, or, and, xor.

    • OpProfiler NAN_PANIC & INF_PANIC now also checks result of BLAS calls.

    • Nd4j.getMemoryManager() now provides methods to tweak GC behavior.

  • Alpha version of a parameter server for Word2Vec/ParagraphVectors was introduced for Spark. Please note: It’s not recommended for production use yet.

  • Performance improvements for CNN inference

0.7.2 -> 0.8.0 Transition Notes

  • Spark versioning schemes: with the addition of Spark 2 support, the versions for the Deeplearning4j and DataVec Spark modules have changed

    • For Spark 1: use <version>0.8.0_spark_1</version>

    • For Spark 2: use <version>0.8.0_spark_2</version>

    • Also note: Modules with Spark 2 support are released with Scala 2.11 support only. Spark 1 modules are released with both Scala 2.10 and 2.11 support

0.8.0 Known Issues (At Launch)

  • UI/CUDA/Linux issue: Linkarrow-up-right

  • Dirty shutdown on JVM exit is possible for CUDA backend sometimes: Linkarrow-up-right

  • Issues with RBM implementation Linkarrow-up-right

  • Keras 1D convolutional and pooling layers cannot be imported yet. Will be supported in forthcoming release.

  • Keras v2 model configurations cannot be imported yet. Will be supported in forthcoming release.

Version 0.7.2

  • Added variational autoencoder Linkarrow-up-right

  • Activation function refactor

    • Activation functions are now an interface Linkarrow-up-right

    • Configuration now via enumeration, not via String (see examples - Linkarrow-up-right)

    • Custom activation functions now supported Linkarrow-up-right

    • New activation functions added: hard sigmoid, randomized leaky rectified linear units (RReLU)

  • Multiple fixes/improvements for Keras model import

  • Added P-norm pooling for CNNs (option as part of SubsamplingLayer configuration)

  • Iteration count persistence: stored/persisted properly in model configuration + fixes to learning rate schedules for Spark network training

  • LSTM: gate activation function can now be configured (previously: hard-coded to sigmoid)

  • UI:

    • Added Chinese translation

    • Fixes for UI + pretrain layers

    • Added Java 7 compatible stats collection Linkarrow-up-right

    • Improvements in front-end for handling NaNs

    • Added UIServer.stop() method

    • Fixed score vs. iteration moving average line (with subsampling)

  • Solved Jaxb/Jackson issue with Spring Boot based applications

  • RecordReaderDataSetIterator now supports NDArrayWritable for the labels (set regression == true; used for multi-label classification + images, etc)

0.7.1 -> 0.7.2 Transition Notes

  • Activation functions (built-in): now specified using Activation enumeration, not String (String-based configuration has been deprecated)
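
For example (layer type and sizes are illustrative):

    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.nd4j.linalg.activations.Activation;

    public class ActivationEnumSketch {
        // Before 0.7.2: .activation("relu")  (String-based, now deprecated)
        // From 0.7.2:   .activation(Activation.RELU)
        public static DenseLayer exampleLayer() {
            return new DenseLayer.Builder()
                    .nIn(100).nOut(50)
                    .activation(Activation.RELU)
                    .build();
        }
    }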

Version 0.7.1

  • RBM and AutoEncoder key fixes:

    • Ensured visible bias is updated and applied during pretraining.

    • RBM HiddenUnit is the activation function for this layer; derivative calculations for backprop are now established according to the respective HiddenUnit.

  • RNG performance issues fixed for CUDA backend

  • OpenBLAS issues fixed for macOS, PowerPC, and Linux.

  • DataVec is back to Java 7 now.

  • Multiple minor bugs fixed for ND4J/DL4J

Version 0.7.0

  • UI overhaul: new training UI has considerably more information, supports persistence (saving info and loading later), Japanese/Korean/Russian support. Replaced Dropwizard with Play framework. Linkarrow-up-right

  • Import of models configured and trained using Kerasarrow-up-right

  • Added ‘Same’ padding mode for CNNs (ConvolutionMode network configuration option) Linkarrow-up-right

  • Weighted loss functions: Loss functions now support a per-output weight array (row vector)

  • ROC and AUC added for binary classifiers Linkarrow-up-right

  • Improved error messages on invalid configuration or data; improved validation on both

  • Added metadata functionality: track source of data (file, line number, etc) from data import to evaluation. Loading a subset of examples/data from this metadata is now supported. Linkarrow-up-right

  • Removed Jackson as core dependency (shaded); users can now use any version of Jackson without issue

  • Added LossLayer: version of OutputLayer that only applies loss function (unlike OutputLayer: it has no weights/biases)

  • Functionality required to build triplet embedding model (L2 vertex, LossLayer, Stack/Unstack vertices etc)

  • Reduced DL4J and ND4J ‘cold start’ initialization/start-up time

  • Pretrain default changed to false and backprop default changed to true. No longer needed to set these when setting up a network configuration unless defaults need to be changed.

  • Added TrainingListener interface (extends IterationListener). Provides access to more information/state as network training occurs Linkarrow-up-right

  • Numerous bug fixes across DL4J and ND4J

  • Performance improvements for nd4j-native & nd4j-cuda backends

  • Standalone Word2Vec/ParagraphVectors overhaul:

    • Performance improvements

    • ParaVec inference available for both PV-DM & PV-DBOW

    • Parallel tokenization support was added, to address computation-heavy tokenizers.

  • Native RNG introduced for better reproducibility within multi-threaded execution environment.

  • Additional RNG calls added: Nd4j.choice(), and BernoulliDistribution op.

  • Off-GPU storage introduced, to keep large objects such as Word2Vec models in host memory. Available via WordVectorSerializer.loadStaticModel()

  • Two new options for performance tuning on nd4j-native backend: setTADThreshold(int) & setElementThreshold(int)

0.6.0 -> 0.7.0 Transition Notes

Notable changes for upgrading codebases based on 0.6.0 to 0.7.0:

  • UI: new UI package name is deeplearning4j-ui_2.10 or deeplearning4j-ui_2.11 (previously: deeplearning4j-ui). Scala version suffix is necessary due to Play framework (written in Scala) being used now.

  • Histogram and Flow iteration listeners deprecated. They are still functional, but using new UI is recommended Linkarrow-up-right

  • DataVec ImageRecordReader: labels are now sorted alphabetically by default before assigning an integer class index to each - previously (0.6.0 and earlier) they were assigned according to file iteration order. Use .setLabels(List) to manually specify the order if required.

  • CNNs: configuration validation is now less strict. With new ConvolutionMode option, 0.6.0 was equivalent to ‘Strict’ mode, but new default is ‘Truncate’

  • Xavier weight initialization change for CNNs and LSTMs: Xavier now aligns better with original Glorot paper and other libraries. Xavier weight init. equivalent to 0.6.0 is available as XAVIER_LEGACY

  • DataVec: Custom RecordReader and SequenceRecordReader classes require additional methods, for the new metadata functionality. Refer to existing record reader implementations for how to implement these methods.

  • Word2Vec/ParagraphVectors:

    • Few new builder methods:

      • allowParallelTokenization(boolean)

      • useHierarchicSoftmax(boolean)

    • Behaviour change: batchSize is now ALSO used as the threshold to execute a number of computational batches for sg/cbow

Version 0.6.0

  • Custom layer support

  • Support for custom loss functions

  • Support for compressed INDArrays, for memory saving on huge data

  • Native support for BooleanIndexing where applicable

  • Initial support for combined operations on CUDA

  • Significant performance improvements on CPU & CUDA backends

  • Better support for Spark environments using CUDA & cuDNN with multi-gpu clusters

  • New UI tools: FlowIterationListener and ConvolutionIterationListener, for better insight into processes within the NN.

  • Special IterationListener implementation for performance tracking: PerformanceListener

  • Inference implementation added for ParagraphVectors, together with option to use existing Word2Vec model

  • Significantly decreased file size of the deeplearning4j API

  • nd4j-cuda-8.0 backend is now available for CUDA 8 RC

  • Added multiple new built-in loss functions

  • Custom preprocessor support

  • Performance improvements to Spark training implementation

  • Improved network configuration validation using InputType functionality

Version 0.5.0

  • FP16 support for CUDA

  • Better performance for multi-gpu

  • Including optional P2P memory access support

  • Normalization support for time series and images

  • Normalization support for labels

  • Removal of Canova and shift to DataVec: Javadoc, Github Repoarrow-up-right

  • Numerous bug fixes

  • Spark improvements

Version 0.4.0

  • Initial multi-GPU support viable for standalone and Spark.

  • Refactored the Spark API significantly

  • Added CuDNN wrapper

  • Performance improvements for ND4J

  • Introducing DataVecarrow-up-right: Lots of new functionality for transforming, preprocessing, cleaning data. (This replaces Canova)

  • New DataSetIterators for feeding neural nets with existing data: ExistingDataSetIterator, Floats(Double)DataSetIterator, IteratorDataSetIterator

  • New learning algorithms for word2vec and paravec: CBOW and PV-DM respectively

  • New native ops for better performance: DropOut, DropOutInverted, CompareAndSet, ReplaceNaNs

  • Shadow asynchronous datasets prefetch enabled by default for both MultiLayerNetwork and ComputationGraph

  • Better memory handling with JVM GC and CUDA backend, resulting in significantly lower memory footprint
