1.0.0-beta7
Read the announcement at https://blog.konduit.ai/2020/05/14/deeplearning4j-1-0-0-beta7-released/ for the highlights of this release.
- Full inference and training support is available for ops/layers in the tf.keras namespace; inference-only support for general TensorFlow operations outside the tf.keras namespace
- Note also improvements to Keras import for reshape, permute, etc. operations due to NHWC and NWC support in DL4J
- DL4J now supports NWC (channels last: [minibatch, sequence_length, size]) for all RNN and CNN 1D layers, in addition to NCW
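As a minimal configuration sketch of the channels-last option: the layer sizes below are illustrative, and the `dataFormat(RNNFormat.NWC)` builder setter is assumed to be available on both the recurrent layer and the RNN output layer in this release.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.RNNFormat;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class NwcConfigExample {
    public static void main(String[] args) {
        // Illustrative sizes: 8 input features, 16 LSTM units, 4 output classes
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new LSTM.Builder()
                        .nIn(8).nOut(16)
                        // Accept [minibatch, sequence_length, size] (channels-last) input
                        .dataFormat(RNNFormat.NWC)
                        .build())
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(16).nOut(4)
                        .dataFormat(RNNFormat.NWC)
                        .build())
                .build();
        System.out.println(conf.toJson());
    }
}
```

Without the `dataFormat` call, these layers default to the traditional NCW layout ([minibatch, size, sequence_length]).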
- Useful exceptions are now thrown when attempting to perform unsupported operations on FastText
- Deeplearning4j UI: Webjars versions locked down using dependency management to avoid a check on each build
- Fixed an issue with the GlobalPooling layer when the mask datatype differs from the activations datatype
- Fixed an issue where SameDiff layers in DL4J could throw an exception when used with transfer learning
- Weight initialization for EmbeddingLayer and EmbeddingSequenceLayer now depends only on the vector size, not the vocabulary size
- Fixed an issue where the RecordReaderDataSetIterator builder's collectMetaData configuration was not being applied
- Fixed an issue with Spark training with SharedTrainingMaster when training a ComputationGraph with MultiDataSets
- deeplearning4j-nlp-korean will no longer be released for Scala 2.12, as a required dependency is only available for Scala 2.11
- Fixed an issue where dataset and model zoo downloads could get stuck if the server fails to send any data (now: timeout + retry)
- Fixed an issue where TfidfVectorizer.vectorize() could throw an NPE when fit from a LabelAwareIterator
- Added new operations in the Random namespace:
- Added new operations in the NN namespace:
- Added new operations in the CNN namespace:
- Mapped operations for TensorFlow import: HSVToRGB, RGBToHSV, Igamma, Igammac, RandomGamma, RandomPoisson, RandomPoissonV2, RandomShuffle
- libnd4j (the C++ codebase underlying DL4J, ND4J and SameDiff) refactored to be more easily embeddable in other C++ projects
- SameDiff operation namespaces (sd.math, sd.image, etc.) are now code-generated to ensure the SameDiff and ND4J namespaces are identical (all operations included, same API)
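As a sketch of the namespace-based API (assuming the `sd.math()` accessor as the Java entry point for the `sd.math` namespace, and `SDVariable.eval()` for immediate evaluation):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NamespaceExample {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // A small constant input array
        SDVariable x = sd.constant("x", Nd4j.createFromArray(-1.0, 2.0, -3.0));
        // Namespace-based call; the same operation set is mirrored in ND4J's namespaces
        SDVariable y = sd.math().abs(x);
        INDArray out = y.eval();
        System.out.println(out); // element-wise absolute values of x
    }
}
```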
- Added ND4J ArchiveUtils.unzipFileTo(String, String, boolean logFiles) overload to enable/disable extracted file path logging
- Added weight format configuration for the following operations: conv1D, conv2D, conv3D, deconv2d, deconv3d, depthwiseConv2d, pointwiseConv2d, sconv2d
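A hedged sketch of the new ArchiveUtils overload; the import path shown is an assumption (the class has lived under different ND4J packages across releases), and the temp-file setup is only there to make the example self-contained.

```java
import org.nd4j.common.util.ArchiveUtils; // package path is an assumption; may be org.nd4j.util in some releases

import java.io.File;
import java.io.FileOutputStream;
import java.nio.file.Files;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class UnzipExample {
    public static void main(String[] args) throws Exception {
        // Setup only: create a small zip file to extract
        File zip = File.createTempFile("demo", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zip))) {
            zos.putNextEntry(new ZipEntry("hello.txt"));
            zos.write("hi".getBytes());
            zos.closeEntry();
        }
        File dest = Files.createTempDirectory("unzipped").toFile();

        // New overload: the boolean argument controls per-file extraction path logging
        ArchiveUtils.unzipFileTo(zip.getAbsolutePath(), dest.getAbsolutePath(), false);
        System.out.println(new File(dest, "hello.txt").exists()); // extraction succeeded
    }
}
```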
- SameDiff: DifferentialFunctionFactory class removed in favor of namespace methods (sd.math, sd.linalg, etc.)
- Upgraded assorted dependency versions: javax.activation:activation (1.1 -> 1.1.1), stream analytics (2.7.0 -> 2.9.8), Apache Spark (2.4.3 -> 2.4.5), Jackson databind (2.10.1 -> 2.10.3), Vert.x (3.8.3 -> 3.9.0)
- Fixed an issue where ArchiveUtils could fail to create the top-level destination directory when it does not exist
- Fixed an issue where the hashcode operation's shape function wasn't always returning an int64/long dtype
- Improved performance of C++ SameDiff graph execution via reduced array zeroing where it is safe to do so
- Nd4j.gemm now uses the Mmul operation internally to avoid potential threading issues with direct BLAS calls on CUDA
- Fixed some operation implementations when operating on views (Batch/Space to Space/Batch/Depth; batchnorm_bp)
- Fixed an issue where the exponential distribution random number generation operation could, extremely rarely (~1 in 10^9 values), produce infinities
- Memory for memory-mapped workspaces is now deallocated immediately when the workspace is destroyed, instead of waiting for the GC to free it
- Fixed an issue with LineRecordReader where initialization was performed unnecessarily (adding performance overhead)