0.9.0

Deeplearning4J

  • Workspaces feature added (faster training performance + less memory; a configuration sketch follows this list) Link

  • SharedTrainingMaster added for Spark network training (improved performance) Link 1, Link 2

  • ParallelInference added - wrapper that serves inference requests using internal batching and queues (see the usage sketch after this list) Link

  • ParallelWrapper can now use gradient sharing, in addition to the existing parameter averaging mode Link

  • VPTree performance significantly improved

  • CacheMode network configuration option added - improved CNN and LSTM performance at the expense of additional memory use Link

  • LSTM layer added, with CuDNN support Link (Note that the existing GravesLSTM implementation does not support CuDNN)

  • New native model zoo with pretrained ImageNet, MNIST, and VGG-Face weights Link

  • Convolution performance improvements, including activation caching

  • Custom/user defined updaters are now supported Link

  • Evaluation improvements

    • EvaluationBinary, ROCBinary classes added: for evaluation of networks with multiple independent binary outputs (sigmoid + xent output layers) Link

    • Evaluation and related classes now have G-Measure and Matthews Correlation Coefficient support; macro- and micro-averaging added for Evaluation class metrics Link

    • ComputationGraph and SparkComputationGraph evaluation convenience methods added (evaluateROC, etc)

    • ROC and ROCMultiClass now support exact calculation (previously, a thresholded calculation was used) Link

    • ROC classes now support area under the precision-recall curve calculation, as well as retrieval of precision/recall/confusion matrix values at specified thresholds (via the PrecisionRecallCurve class; see the evaluation sketch after this list) Link

    • RegressionEvaluation, ROCBinary, etc. now support per-output masking (in addition to per-example/per-time-step masking)

    • EvaluationCalibration added (residual plots, reliability diagrams, histogram of probabilities) Link 1 Link 2

    • Evaluation and EvaluationBinary: now support a custom classification threshold or cost array Link

  • Performance optimizations for updaters and bias calculation

  • Network memory estimation functionality added: memory requirements can be estimated from the network configuration, without instantiating the network Link 1 Link 2

  • New loss functions:

    • Mixture density loss function Link

    • F-Measure loss function Link
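
The workspace, CacheMode, and LSTM items above are all set on the network configuration. The following is a minimal sketch only, assuming the 0.9.0 builder methods trainingWorkspaceMode, inferenceWorkspaceMode, and cacheMode with the WorkspaceMode/CacheMode values shown; the class name and layer sizes are placeholders.

    import org.deeplearning4j.nn.conf.CacheMode;
    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.WorkspaceMode;
    import org.deeplearning4j.nn.conf.layers.LSTM;
    import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class WorkspaceConfigSketch {
        public static void main(String[] args) {
            int nIn = 50;   // placeholder input size
            int nOut = 10;  // placeholder number of classes

            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .trainingWorkspaceMode(WorkspaceMode.SEPARATE)  // workspaces during training
                    .inferenceWorkspaceMode(WorkspaceMode.SINGLE)   // workspaces during inference
                    .cacheMode(CacheMode.DEVICE)                    // trade extra memory for speed
                    .list()
                    .layer(0, new LSTM.Builder()                    // new LSTM layer (CuDNN-capable)
                            .nIn(nIn).nOut(128)
                            .activation(Activation.TANH).build())
                    .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                            .activation(Activation.SOFTMAX)
                            .nIn(128).nOut(nOut).build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            System.out.println("Parameter count: " + net.numParams());
        }
    }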
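
For the ParallelInference item, a minimal usage sketch, assuming the builder API from the parallel-wrapper module (ParallelInference.Builder, InferenceMode.BATCHED, batchLimit, workers); the wrapper class name is a placeholder. The ParallelInference instance is built once and shared, so concurrent callers are queued and batched internally.

    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.deeplearning4j.parallelism.ParallelInference;
    import org.deeplearning4j.parallelism.inference.InferenceMode;
    import org.nd4j.linalg.api.ndarray.INDArray;

    public class InferenceServiceSketch {
        private final ParallelInference inference;

        public InferenceServiceSketch(MultiLayerNetwork net) {
            // Build once and share across request threads
            this.inference = new ParallelInference.Builder(net)
                    .inferenceMode(InferenceMode.BATCHED)  // queue incoming requests and batch them
                    .batchLimit(32)                        // max examples per internal batch
                    .workers(2)                            // number of inference workers
                    .build();
        }

        public INDArray predict(INDArray features) {
            return inference.output(features);             // thread-safe inference call
        }
    }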
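
A small sketch of the exact ROC and precision-recall evaluation items, assuming the no-argument ROC constructor selects exact (non-thresholded) calculation and that the area-under-PR-curve and curve accessors are named calculateAUCPR and getPrecisionRecallCurve; the label/probability arrays are dummy placeholders.

    import org.deeplearning4j.eval.ROC;
    import org.deeplearning4j.eval.curves.PrecisionRecallCurve;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class RocSketch {
        public static void main(String[] args) {
            // Dummy binary labels and predicted probabilities, shape [miniBatch, 1]
            INDArray labels = Nd4j.create(new double[]{1, 0, 1, 1, 0}, new int[]{5, 1});
            INDArray probs  = Nd4j.create(new double[]{0.9, 0.2, 0.7, 0.6, 0.4}, new int[]{5, 1});

            ROC roc = new ROC();                  // no threshold steps -> exact calculation
            roc.eval(labels, probs);

            double aucRoc = roc.calculateAUC();   // area under the ROC curve
            double aucPr = roc.calculateAUCPR();  // area under the precision-recall curve
            // Precision/recall/confusion counts at specific thresholds are queried via this object
            PrecisionRecallCurve prc = roc.getPrecisionRecallCurve();

            System.out.println("AUC: " + aucRoc + ", AUPRC: " + aucPr);
        }
    }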

ND4J

  • Workspaces feature added (a short sketch follows this list) Link

  • Native parallel sort was added

  • New ops added: SELU/SELUDerivative, TAD-based comparisons, percentile/median, Reverse, Tan/TanDerivative, SinH, CosH, Entropy, ShannonEntropy, LogEntropy, AbsoluteMin/AbsoluteMax/AbsoluteSum, Atan2

  • New distance functions added: CosineDistance, HammingDistance, JaccardDistance
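
A minimal sketch of the ND4J workspaces and distance-function items above, assuming the getAndActivateWorkspace workspace-manager call and a Transforms.cosineDistance helper; the workspace id, class name, and array shapes are arbitrary.

    import org.nd4j.linalg.api.memory.MemoryWorkspace;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;
    import org.nd4j.linalg.ops.transforms.Transforms;

    public class WorkspaceLoopSketch {
        public static void main(String[] args) {
            for (int i = 0; i < 100; i++) {
                // Arrays created inside the try block are allocated in the "LOOP_WS" workspace;
                // the underlying buffer is reused on each iteration instead of being garbage collected.
                try (MemoryWorkspace ws = Nd4j.getWorkspaceManager().getAndActivateWorkspace("LOOP_WS")) {
                    INDArray a = Nd4j.rand(1, 100);
                    INDArray b = Nd4j.rand(1, 100);
                    double d = Transforms.cosineDistance(a, b);  // one of the new distance functions
                    System.out.println("Iteration " + i + ", cosine distance = " + d);
                }
                // Note: arrays must be detached/leveraged before being used outside the workspace
            }
        }
    }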

DataVec

  • MapFileRecordReader and MapFileSequenceRecordReader added Link 1 Link 2

  • Spark: Utilities to save and load JavaRDD<List<Writable>> and JavaRDD<List<List<Writable>>> data to and from Hadoop MapFile and SequenceFile formats Link

  • TransformProcess and Transforms now support NDArrayWritables and NDArrayWritable columns (see the TransformProcess sketch after this list)

  • Multiple new Transform classes
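
A small, generic TransformProcess sketch as context for the DataVec items above, assuming the standard Schema and TransformProcess builder methods shown (addColumnString, addColumnCategorical, addColumnDouble, removeColumns, categoricalToInteger); the new Transform classes and NDArrayWritable columns plug into this same builder pattern. The column names and values are placeholders.

    import org.datavec.api.transform.TransformProcess;
    import org.datavec.api.transform.schema.Schema;

    public class TransformSketch {
        public static void main(String[] args) {
            // Schema describing the raw input records
            Schema inputSchema = new Schema.Builder()
                    .addColumnString("id")
                    .addColumnCategorical("city", "NYC", "SF", "London")
                    .addColumnDouble("amount")
                    .build();

            // TransformProcess: drop the id column and encode the categorical column as an integer
            TransformProcess tp = new TransformProcess.Builder(inputSchema)
                    .removeColumns("id")
                    .categoricalToInteger("city")
                    .build();

            // Schema of the data after all transforms have been applied
            Schema outputSchema = tp.getFinalSchema();
            System.out.println(outputSchema);
        }
    }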

Arbiter

  • Arbiter UI: Link

    • UI now uses the Play framework and integrates with the DL4J UI (replacing the previous Dropwizard backend); dependency issues and clashing versions have been fixed.

    • Supports DL4J StatsStorage and StatsStorageRouter mechanisms (FileStatsStorage, remote UI via RemoteUIStatsStorageRouter)

    • General UI improvements (additional information, formatting fixes)