Version 1.0.0-beta7
Read the announcement at https://blog.konduit.ai/2020/05/14/deeplearning4j-1-0-0-beta7-released/ for the highlights of this release.
Deeplearning4j
Features and Enhancements
- Added Keras model import support for tf.keras models Link, Link: full inference and training support is available for ops/layers in the tf.keras namespace; inference only for general Tensorflow operations outside of the tf.keras namespace (see the import example after this list) 
- Note also improvements to Keras import for reshape, permute, etc. operations due to NHWC and NWC support in DL4J 
- DL4J now supports NHWC (channels last) data format for all CNN 2D layers, in addition to NCHW Link (see the configuration sketch after this list) 
- DL4J now supports NWC (channels last - [minibatch, sequence_length, size]) for all RNN and CNN 1D layers, in addition to NCW Link 
- Added Deconvolution3D layer Link 
- Added DL4J SameDiffLoss class (for easily-defined DL4J ILossFunctions via SameDiff) Link (a minimal example follows this list) 
- Useful exceptions are now thrown when attempting to perform unsupported operations on FastText Link 
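As a rough illustration of the tf.keras import support above, the sketch below loads a saved Keras HDF5 model into a DL4J ComputationGraph via KerasModelImport; the file name is a placeholder, and a functional-API (non-Sequential) model is assumed (Sequential models use importKerasSequentialModelAndWeights instead):

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;

public class TfKerasImportExample {
    public static void main(String[] args) throws Exception {
        // Placeholder path: a model saved from tf.keras via model.save("tf_keras_model.h5")
        ComputationGraph model = KerasModelImport.importKerasModelAndWeights("tf_keras_model.h5");
        System.out.println(model.summary());
    }
}
```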
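A minimal sketch of the new channels-last configuration, assuming the CNN2DFormat enum and the dataFormat(...) builder setter referenced in the linked changes; RNN and CNN 1D layers have an analogous NWC option:

```java
import org.deeplearning4j.nn.conf.CNN2DFormat;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class ChannelsLastExample {
    public static void main(String[] args) {
        // Channels-last (NHWC) convolution: activations are [minibatch, height, width, channels].
        // NCHW remains the default if dataFormat(...) is not set.
        ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)
                .nIn(3)
                .nOut(16)
                .dataFormat(CNN2DFormat.NHWC)
                .build();
        System.out.println(conv);
    }
}
```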
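For the new SameDiffLoss class, a hedged sketch of a custom mean-squared-error-style loss; the class package, the defineLoss(SameDiff, SDVariable, SDVariable) signature and the per-example return shape are assumptions based on the linked change:

```java
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLoss;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;

// Hypothetical custom loss: per-example mean squared error defined via SameDiff ops
public class MeanSquaredSameDiffLoss extends SameDiffLoss {
    @Override
    public SDVariable defineLoss(SameDiff sd, SDVariable layerInput, SDVariable labels) {
        SDVariable diff = labels.sub(layerInput);
        // Return one loss value per example: mean over the feature dimension
        return diff.mul(diff).mean(1);
    }
}
```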
Bug Fixes and Optimizations
- Deeplearning4j UI: Webjars versions locked down using dependency management to avoid a check on each build Link 
- Added MKLDNN (DNNL/OneDNN) support for depthwise_conv2d operation for DL4J and SameDiff Link 
- Refactored/merged modules dl4j-perf and dl4j-util into deeplearning4j-core Link 
- Fixed an issue with BertWordPieceTokenizer - potential StackOverflowError with certain inputs Link 
- Fixed an issue with GlobalPooling layer with masks of different datatype to the activations datatype Link 
- Fixed an issue with DL4JModelValidator for ComputationGraph Link 
- Fixed an issue where SameDiff layers in DL4J could throw an exception when used with transfer learning Link 
- Weight initialization for EmbeddingLayer and EmbeddingSequenceLayer no longer depends on the vocabulary size (only the vector size) Link 
- Fixed an issue with Keras import with bidirectional layers + preprocessors Link 
- DL4J UI: added redirect from /train to /train/overview Link 
- Fixed an issue where RecordReaderDataSetIterator builder collectMetaData configuration was not being applied Link 
- Fixed an issue with Spark training SharedTrainingMaster when training with a ComputationGraph and MultiDataSets Link 
- Assorted fixes for edge cases for DL4J Keras import Link 
- deeplearning4j-nlp-korean will no longer be released for Scala 2.12, as a required dependency is only available for Scala 2.11 Link 
- Fix for ConvolutionalIterationListener for ComputationGraph Link 
- Fixed an issue where dataset and model zoo downloads could get stuck if the server fails to send any data (now: timeout + retry) Link 
- DL4J ModelSerializer no longer writes temporary files when restoring models from InputStream Link 
- Fixed issues with UIServer multi-session mode, and a potential shutdown race condition Link 
- Fixed an issue where TfidfVectorizer.vectorize() could throw an NPE when fit from a LabelAwareIterator Link 
ND4J/SameDiff
Features and Enhancements
- cuDNN support added to SameDiff (automatically enabled for nd4j-cuda-10.x backend) Link 
- Added ND4J namespaces: Nd4j.cnn, Nd4j.rnn, Nd4j.image Link (see the namespace example after this list) 
- Added new Random operations namespace operations: gamma, poisson, shuffle Link 
- Added new NN namespace operations: cReLU Link 
- Added new CNN namespace operations: upsampling3d Link 
- Added new Loss operations namespace: Nd4j.loss Link 
- Mapped operations for Tensorflow import: HSVToRGB, RGBToHSV, Igamma, Igammac, RandomGamma, RandomPoisson, RandomPoissonV2, RandomShuffle Link 
- Improved memory limits/configuration support for libnd4j (c++) Link 
- Added pairwise (broadcastable) power backprop operation Link 
- Updated JavaCPP presets MKL version to 2020.0 from 2019.5 Link 
- Added tensormmul_bp op Link 
- OpenBLAS version upgraded to 0.3.8 Link 
- libnd4j (c++ codebase underlying DL4J, ND4J and SameDiff) refactored to be more easily embeddable in other C++ projects Link 
- ImagePreProcessingScaler now supports preprocessing of labels (for segmentation) Link (see the sketch after this list) 
- Additional datatypes now supported for nd4j-tensorflow TensorflowConversion Link 
- SameDiff operation namespaces (sd.math, sd.image, etc) are now code generated to ensure SameDiff and ND4J namespaces are identical (all operations included, same API) Link 
- Added ND4J ArchiveUtils.unzipFileTo(String, String, boolean logFiles) overload to enable/disable extracted file path logging Link
- Added weight format configuration for following operations: conv1D, conv2D, conv3D, deconv2d, deconv3d, depthwiseConv2d, pointwiseConv2d, sconv2d Link 
- Added backprop operation implementations for mergemax, mergeadd, mergeavg operations Link 
- MKL version upgraded from 2020.0 to 2020.1; OpenCV upgraded from 4.2.0 to 4.3.0 Link 
- SameDiff: DifferentialFunctionFactory class removed in favor of namespace methods (sd.math, sd.linalg, etc) Link 
- Added lstmLayer_bp operation Link 
- Added gru_bp operation Link 
- linspace operation can now use both targs and arrays for start/end/size arguments Link 
- Assorted dependency updates - OpenBLAS (0.3.9), OpenCV (4.3.0), Leptonica (1.79.0) Link 
- Upgraded assorted dependency versions: javax.activation:activation (1.1 -> 1.1.1), stream analytics (2.7.0->2.9.8), Apache Spark (2.4.3->2.4.5), Jackson databind (2.10.1 -> 2.10.3), Vertx (3.8.3 -> 3.9.0) Link 
- Added nd4j-common-tests ResourceUtils.listClassPathfiles method Link 
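To illustrate the namespace additions above, a small sketch using the generated ND4J and SameDiff namespaces; the specific operations shown are examples only, chosen because both APIs are generated from the same definitions:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NamespaceExample {
    public static void main(String[] args) {
        // ND4J namespaces are accessed via static accessors such as Nd4j.math(), Nd4j.nn(),
        // Nd4j.image(), Nd4j.cnn(), Nd4j.rnn(), Nd4j.random() and Nd4j.loss()
        INDArray x = Nd4j.rand(DataType.FLOAT, 3, 4);
        INDArray cosX = Nd4j.math().cos(x);

        // The SameDiff namespaces (sd.math(), sd.image(), ...) are code generated from the
        // same definitions, so the two APIs stay in sync
        SameDiff sd = SameDiff.create();
        SDVariable in = sd.placeHolder("in", DataType.FLOAT, 3, 4);
        SDVariable cosIn = sd.math().cos(in);

        System.out.println(cosX);
        System.out.println(cosIn);
    }
}
```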
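For the ImagePreProcessingScaler label-preprocessing support (useful when labels are segmentation masks), a minimal sketch assuming the standard fitLabel(true) toggle from the DataNormalization interface:

```java
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;

public class SegmentationScalingExample {
    // Scales image features to [0, 1]; with fitLabel(true) the label arrays
    // (e.g. per-pixel segmentation masks) are scaled the same way
    public static void applyScaler(DataSetIterator iter) {
        ImagePreProcessingScaler scaler = new ImagePreProcessingScaler(0, 1);
        scaler.fitLabel(true);
        iter.setPreProcessor(scaler);
    }
}
```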
Bug Fixes and Optimizations
- SameDiff - added CuDNN support Link 
- Fixed some issues with Tensorflow import of FusedBatchNorm operation Link 
- Fixed an issue where ArchiveUtils could fail to create the top level destination directory when it does not exist Link 
- Fixed an issue where hashcode operation shape function wasn't always returning int64/long dtype Link 
- Added MKLDNN (DNNL/OneDNN) support for depthwise_conv2d operation for DL4J and SameDiff Link 
- Fixed a small SameDiff execution issue for switch operation where the predicate is a constant Link 
- Fixed an issue with batchnorm operation when input arrays have unusual strides Link 
- Merged the nd4j-buffer and nd4j-context modules into nd4j-api Link 
- Deleted deprecated nd4j-jackson module (remaining functionality available in nd4j-api) Link 
- Deleted unused/unmaintained nd4j-camel and nd4j-gson modules Link 
- Optimization for legacy random ops Link 
- Performance optimization for multiple operations: softmax, squeeze, expand_dims, tanh Link 
- Optimization for transpose/permute operations Link 
- Performance enhancement: MKLDNN matmul used for some mmul operation cases Link 
- Optimization for gather operation on CPU Link 
- Optimization for stack/unstack operations on CPU Link 
- ND4J initialization no longer logs number of OpenMP BLAS threads for CUDA Link 
- Optimization: Fixed issues with auto-vectorization on multiple CPU operations Link 
- Fixed an issue where INDArray.hashCode() could cause an exception on some datatypes Link 
- Fixed random_exponential operation Link 
- Improved performance on C++ SameDiff graph execution via reduced array zeroing where safe to do so Link 
- Improved C++ indexing implementation impacting CPU performance on some operations Link 
- Fixed an issue where Split operation could have incorrect output shapes for empty arrays Link 
- Fixed some issues with SameDiff.equals method Link 
- Nd4j.gemm now uses Mmul operation internally to avoid potential threading issues with direct BLAS calls on CUDA Link 
- Fixed an edge case issue with percentile operation Link 
- Fixed an edge case issue for cuSolver (CUDA) in libnd4j Link 
- Fixed an issue with error formatting for segment operations for incorrect lengths Link 
- Fixed an issue where ND4J workspaces were not guaranteed to be unique Link 
- Fixed some operation implementations when operating on views (Batch/Space to Space/Batch/Depth; batchnorm_bp) Link 
- Fixed an issue where exponential distribution random number generation operation could produce infinities extremely rarely (~1 in 10^9 values) Link 
- Fixed an issue with long file paths for memory mapped workspaces on Windows Link 
- Memory for memory-mapped workspaces is now deallocated immediately when the workspace is destroyed, instead of waiting for the GC to free it Link 
- Fall-back to other BLAS implementation for cases where MKLDNN GEMM implementation is slow Link 
DataVec
Features and Enhancements
- datavec-python: added zero-copy support for bytes/byte buffers Link 
- datavec-python: Python exceptions are now thrown as Java exceptions Link 
- datavec-python: Added support for additional NumPy datatypes Link 
- datavec-python: Python version upgraded from 3.7.6 to 3.7.7 Link 
Bug Fixes and Optimizations
- Deleted modules that were not properly maintained: datavec-camel, datavec-perf Link 
- Fixed missing BOOL datatype support for arrow conversion functionality Link 
- Fixed an issue with LineRecordReader where initialization was performed unnecessarily (adding performance overhead) Link 
RL4J
Features and Enhancements
- Refactoring to decouple configuration and learning methods from their implementations Link 
- Added builder patterns for all configuration classes Link (a hedged configuration sketch follows this list) 
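As a purely illustrative sketch of the builder-style configuration introduced by the refactoring, assuming a QLearningConfiguration class with a Lombok-generated builder; the class name and property names below are assumptions, not confirmed API (consult the linked PR for the exact classes):

```java
import org.deeplearning4j.rl4j.learning.configuration.QLearningConfiguration;

public class Rl4jConfigExample {
    public static void main(String[] args) {
        // Hypothetical builder usage; property names are assumptions based on the refactoring
        QLearningConfiguration conf = QLearningConfiguration.builder()
                .seed(123L)
                .maxEpochStep(200)
                .maxStep(15000)
                .expRepMaxSize(150000)
                .batchSize(32)
                .targetDqnUpdateFreq(500)
                .gamma(0.99)
                .doubleDQN(true)
                .build();
        System.out.println(conf);
    }
}
```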
Arbiter
Bug Fixes and Optimizations