1.0.0-beta3
ND4J/Deeplearning4j: Added support for CUDA 10.0. Dropped support for CUDA 8.0. (1.0.0-beta3 release has CUDA 9.0, 9.2 and 10.0 support)
SameDiff now supports training and evaluation from DataSetIterator and MultiDataSetIterator. Evaluation classes have been moved to ND4J.
DL4J Spark training (gradient sharing) is now fully fault tolerant, and has improvements for threshold adaption (potentially more robust convergence). Ports can now be easily configured independently on master/workers.
Added OutputAdapter interface and MultiLayerNetwork/ComputationGraph.output method overloads using OutputAdapter (avoids allocating off-heap memory that needs to be cleaned up by GC)
Added ComputationGraph/MultiLayerNetwork rnnTimeStep overload with user-specified workspace.
Added Cnn3DLossLayer
ParallelInference: Instances can now update the model in real-time (without re-init)
ParallelInference: Added ParallelInference INPLACE mode
Added validation for incompatible loss/activation function combinations (such as softmax+nOut=1, or sigmoid+mcxent). New validation can be disabled using outputValidation(false)
Spark training: Added full fault tolerance (robust failure recovery) for gradient sharing implementation
Spark training now supports configuring ports more flexibly (and differently for different workers) using PortSupplier
Spark training: overhauled gradient sharing threshold adaption algorithms; made it possible to customize threshold settings, and made the defaults more robust to the initial threshold configuration, improving convergence speed in some cases
Spark training: implemented chunked messaging to reduce memory requirements (and insufficient buffer length issues) for large messages
Spark training: Added MeshBuildMode configuration for improved scalability for large clusters
Spark network data pipelines: added FileBatch, FileBatchRecordReader etc for "small files" (images etc) distributed training use cases
Added FailureTestingListener for fault tolerance/debugging purposes
Upgraded Apache Lucene/Solr to version 7.5.0 (from 7.4.0)
Added system properties (org.deeplearning4j.tempdir and org.nd4j.tempdir) to allow overriding of the temporary directories ND4J and DL4J use
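For example (a minimal sketch; the paths are placeholders), the directories can be overridden via JVM flags or programmatically before any ND4J/DL4J classes are initialized:

```java
public class TempDirConfig {
    public static void main(String[] args) {
        // Equivalent JVM flags: -Dorg.nd4j.tempdir=/data/tmp -Dorg.deeplearning4j.tempdir=/data/tmp
        System.setProperty("org.nd4j.tempdir", "/data/tmp");           // placeholder path
        System.setProperty("org.deeplearning4j.tempdir", "/data/tmp"); // placeholder path
        // ... initialize and use ND4J / DL4J as usual after this point
    }
}
```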
Made MultiLayerNetwork/ComputationGraph.clearLayerStates methods public (was protected)
AbstractLayer.layerConf() method is now public
ParallelWrapper module now no longer has a Scala version suffix for artifact id; new artifact id is deeplearning4j-parallel-wrapper
Improved validation and error messages for invalid inputs/labels in Yolo2OutputLayer
Spark training: added SharedTrainingMaster.Builder.workerTogglePeriodicGC and .workerPeriodicGCFrequency to easily configure the ND4J garbage collection settings on workers. The default GC frequency on workers is now 5 seconds
Spark training: added a threshold encoding debug mode (logs the current threshold and encoding statistics on each worker during training). Enable using SharedTrainingConfiguration.builder.encodingDebugMode(true). Note that this mode has computational overhead.
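As a rough sketch of the worker-side options from the two Spark training entries above (the SharedTrainingMaster.Builder constructor arguments and the surrounding Spark setup are assumptions and may differ for your pipeline; encodingDebugMode(true) is enabled in the same builder style on SharedTrainingConfiguration, per the note above):

```java
import org.deeplearning4j.spark.parameterserver.training.SharedTrainingMaster;
import org.nd4j.parameterserver.distributed.conf.VoidConfiguration;

public class SparkWorkerGcSketch {
    // Configure periodic garbage collection on Spark workers for gradient-sharing training.
    // The second Builder argument (an assumed examples-per-DataSet value) is a placeholder.
    public static SharedTrainingMaster buildTrainingMaster(VoidConfiguration voidConfig) {
        return new SharedTrainingMaster.Builder(voidConfig, 32)
                .workerTogglePeriodicGC(true)     // enable periodic GC calls on each worker
                .workerPeriodicGCFrequency(5000)  // GC frequency in milliseconds (default: 5 seconds)
                .build();
    }
}
```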
IEvaluation classes in DL4J have been deprecated and moved to ND4J so they are available for SameDiff training. Functionality and APIs are unchanged
CUDA 8.0 support has been removed. CUDA 9.0, 9.2 and 10.0 support is available in 1.0.0-beta3
Android users may need to manually exclude the (now deprecated) nd4j-base64 module. This is because the org.nd4j.serde.base64.Nd4jBase64 class is present in both the nd4j-api and nd4j-base64 modules (both versions have identical content). Use exclude group: 'org.nd4j', module: 'nd4j-base64' to exclude it.
Fixed an issue where L1/L2 regularization and updaters (Adam, Nesterov, etc.) were applied before dividing gradients by the minibatch size to obtain the average gradient. To maintain the old behaviour, use NeuralNetConfiguration.Builder.legacyBatchScaledL2(true).
Note that learning rates may need to be decreased for some updaters (such as Adam) to account for this change vs. earlier versions; some other updaters (such as SGD, NoOp, etc.) should be unaffected.
Note that configurations/networks saved in 1.0.0-beta2 or earlier will default to the old behaviour when deserialized (loaded), for backward compatibility. All new networks (created in 1.0.0-beta3) default to the new behaviour.
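A minimal configuration sketch showing where this flag sits (the updater, regularization values, and layer sizes are arbitrary placeholders):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class LegacyL2ConfigSketch {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .legacyBatchScaledL2(true)   // opt back in to the pre-1.0.0-beta3 scaling behaviour
                .updater(new Adam(1e-3))
                .l2(1e-4)
                .list()
                .layer(0, new DenseLayer.Builder().nIn(10).nOut(10).activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .activation(Activation.IDENTITY).nIn(10).nOut(1).build())
                .build();
    }
}
```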
Fixed an issue where EarlyStoppingScoreCalculator would not correctly handle "maximize score" cases (as opposed to score minimization)
Fixed order (BGR vs. RGB) for VGG16ImagePreProcessor channel offset values
Fixed bug with variational autoencoders using weight noise
Fixed issue with BaseDataSetIterator not respecting the 'maximum examples' configuration
Optimization: A workspace is now used for ComputationGraph/MultiLayerNetwork evaluation methods (avoids allocating off-heap memory during evaluation that must be cleaned up by garbage collector)
Fixed an issue where shuffling combined with a subset for MnistDataSetIterator would not maintain the same subset between resets
Fixed issue with StackVertex.getOutputType
Fixed an issue with CNN-to/from-RNN preprocessors' handling of mask arrays
Fixed issue with VGG16 non-pretrained configuration in model zoo
Fixed issue with TransferLearning nOutReplace where multiple layers in a row are modified
Fixed issue with CuDNN workspaces where backpropagation is performed outside of a standard fit call
Fixed an issue with dropout masks being cleared prematurely on output layers in ComputationGraph
RecordReaderMultiDataSetIterator now supports 5D arrays (for 3D CNNs)
Fixed bug in multi input/output ComputationGraphs with TBPTT combined with both masking and different number of input/output arrays
Improved input validation/exceptions for batch normalization layer
Fixed bug with TransferLearning GraphBuilder nOutReplace when combined with subsampling layers
SimpleRnnParamInitializer now properly respects bias initialization configuration
Fixed SqueezeNet zoo model non-pretrained configuration
Fixed Xception zoo model non-pretrained configuration
Fixed an issue with some evaluation signatures for multi-output ComputationGraphs
Improved MultiLayerNetwork/ComputationGraph summary method formatting for large nets
Fixed an issue where gradient normalization could result in NaNs if gradient is exactly 0.0 for all parameters in a layer
Fixed an issue where MultiLayerNetwork/ComputationGraph.setLearningRate could throw an exception for SGD and NoOp updaters
Fixed an issue with StackVertex plus masking in some rare cases
Fixed an issue with JSON deserialization of frozen layers in pre-1.0.0-alpha format
Fixed an issue where GraphBuilder.removeVertex can fail under some limited circumstances
Fixed a bug in CacheableExtractableDataSetFetcher
DL4J Spark training: Fixed issues with thread/device affinity for multi-GPU training + evaluation
DL4J Spark training: Made all Aeron threads daemon threads to prevent Aeron from stopping JVM shutdown when all other threads have completed
Added cudnnAllowFallback configuration for BatchNormalization layer (fallback to built-in implementation if CuDNN fails unexpectedly)
Fixed some rare concurrency issues with multi-worker (multi-GPU) nodes for Spark training
Fixed an issue with BatchNormalization layers that prevented the mean/variance estimates from being synced properly on each worker for GradientSharing training, causing convergence issues
Added a check to detect ZipSlip CVE attempts in ArchiveUtils
DL4J Spark training and evaluation: methods now use Hadoop Configuration from Spark context to ensure runtime-set configuration is available in Spark functions reading directly from remote storage (HDFS etc)
MultiLayerNetwork and ComputationGraph now properly support more than Integer.MAX_VALUE parameters
Added data validation for Nd4j.readTxt - now throws exception on invalid input instead of returning incorrect values
Fixed an issue with KNN implementation where a deadlock could occur if an invalid distance function (one returning "distances" less than 0) was utilized
Added synchronization to loading of Keras import models to avoid thread safety issues in the underlying HDFS library used for loading
Fixed rare issue for Async(Multi)DataSetIterator with large prefetch values
MultiLayerConfiguration/ComputationGraphConfiguration pretrain(boolean) and backprop(boolean) have been deprecated and are no longer used. Use the fit and pretrain/pretrainLayer methods instead.
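For example, with a MultiLayerNetwork the replacement calls look roughly like this (the network, iterator, epoch count, and layer index are placeholders):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class FitVsPretrainSketch {
    // Supervised training: call fit directly rather than configuring backprop(true)
    public static void train(MultiLayerNetwork net, DataSetIterator iter, int numEpochs) {
        for (int i = 0; i < numEpochs; i++) {
            net.fit(iter);
        }
    }

    // Unsupervised layerwise pretraining: pretrain individual layers explicitly
    // rather than configuring pretrain(true)
    public static void pretrainFirstLayer(MultiLayerNetwork net, DataSetIterator iter) {
        net.pretrainLayer(0, iter);
    }
}
```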
ParallelWrapper module no longer has a Scala version suffix for its artifact id; the new artifact id, deeplearning4j-parallel-wrapper, should be used instead
deeplearning4j-nlp-korean module now has a Scala version suffix due to its Scala dependencies; the new artifact IDs are deeplearning4j-nlp-korean_2.10 and deeplearning4j-nlp-korean_2.11
Running multiple Spark training jobs simultaneously on a single physical node (i.e., multiple JVMs from one or more Spark jobs) may cause problems with network communication. A workaround is to manually set a unique stream ID in the VoidConfiguration, using a unique (or random) integer value for each job.
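A minimal sketch, assuming the Lombok-style builder exposed by VoidConfiguration (the base value and job-numbering scheme are arbitrary):

```java
import org.nd4j.parameterserver.distributed.conf.VoidConfiguration;

public class UniqueStreamIdSketch {
    // Give each simultaneously-running training job on the same physical node
    // its own stream ID so their network traffic does not collide.
    public static VoidConfiguration forJob(int jobNumber) {
        return VoidConfiguration.builder()
                .streamId(100000 + jobNumber)   // any unique (or random) positive int per job
                .build();
    }
}
```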
Fixed import issue due to Keras JSON format changes for Keras 2.2.3+
Added Keras import for timeseries preprocessing
Elephas
Fixed issue with importing models with reshaping after an embedding layer
Added support for Keras masking layers
Fixed JSON deserialization issue with some layers/preprocessors, such as Permute
Fixed issue with Keras import of Nadam configuration
Added SameDiff training and evaluation: SameDiff instances can now be trained directly using DataSetIterator and MultiDataSetIterator, and evaluated using IEvaluation instances (which have been moved from DL4J to ND4J)
Added GraphServer implementation: a C++ inference server for SameDiff (and TensorFlow, via TF import) with a Java API
SameDiff instances can now be loaded from serialized FlatBuffers format (SameDiff.asFlatFile plus fromFlatFile)
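A small round-trip sketch using the two methods named above (the file path and error handling are up to the caller):

```java
import java.io.File;
import java.io.IOException;
import org.nd4j.autodiff.samediff.SameDiff;

public class SameDiffFlatFileSketch {
    // Round-trip a SameDiff graph through the serialized FlatBuffers file format
    public static SameDiff saveAndReload(SameDiff sd, File f) throws IOException {
        sd.asFlatFile(f);                 // serialize the graph to a FlatBuffers file
        return SameDiff.fromFlatFile(f);  // load it back into a new SameDiff instance
    }
}
```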
Added MKL-DNN support for some operations (Conv2d, etc)
Upgraded ND4J (and DataVec) to Arrow 0.11.0, which also fixes a related issue
Added Nd4j.where op method (same semantics as numpy.where)
Added Nd4j.stack op method (combine arrays + increase array rank by 1)
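A small sketch of the Nd4j.stack method (input values are arbitrary):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class StackSketch {
    public static void main(String[] args) {
        INDArray a = Nd4j.create(new double[]{1, 2, 3});
        INDArray b = Nd4j.create(new double[]{4, 5, 6});
        // Stacks the inputs along a new dimension 0, increasing the rank by one
        INDArray stacked = Nd4j.stack(0, a, b);
        System.out.println(stacked);
    }
}
```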
Libnd4j new ops:
Matrix band part
Scatter ND, ND-add, ND-sub and ND-update ops
Sparse softmax cross entropy loss with logits
Histogram fixed width op
broadcast_to op
deconv3d op added
Unsorted segment ops added
Segment_X backprop ops added
batchnorm_new op added that supports multiple axes for mean/variance
GRU cell backprop added
Nd4j Preconditions class now has methods for formatting INDArray arguments
SameDiff loss functions: cleanup plus forward pass implementation
CudaGridExecutioner now warns that exception stack traces may be delayed, to avoid confusion when debugging exceptions occurring during asynchronous execution of ops
JavaCPP and JavaCPP-presets have been upgraded to version 1.4.3
Improved Javadoc on SDVariable class
Fixes for Android: removed use of RawIndexer
Libnd4j custom ops: conv op weight layouts are no longer dependent on the input format (NCHW/NHWC) - they are now always [kH, kW, inChannels, outChannels] for 2D CNNs and [kH, kW, kD, inChannels, outChannels] for 3D CNNs.
Libnd4j native op fixes:
Dot operation backprop, determinant
Backprop op fix for the broadcast case for some pairwise transform custom op implementations
Fix for reverse custom op with rank 1 inputs
ATan2 op is now broadcastable
Boolean custom op broadcast fixes/additions
Scatter op edge case fixes
ArgMin shape function fix, negative axis fix
Unique op fix
Pad op fix
Fixed where op shape function
SVD rank 1 edge case fix
Range op
Split and space_to_batch fixes
Broadcast dynamic shape
embedding_lookup op now supports multiple input arrays
Matrix determinant op edge case (rank 0 result) shape fix
SameDiff TensorFlow import: fixes for multiple operations
SameDiff: Improved error handling for multiple outputs case
Fixed issue where INDArray.permute would not correctly throw an exception for invalid length case
Fixed issues with INDArray.get/put with SpecifiedIndex
Minor change to DataSet.merge - signature now accepts any DataSet subtypes
Fixed an issue where the INDArray.transposei operation was not applied in-place
Fixed issues with INDArray.mmul with MMulTranspose
Added additional order validation for ND4J creation methods (create, rand, etc)
Fix for ND4J binary deserialization (BinarySerde) when deserializing from heap byte buffers
Fixed issue with Nd4j-common ClassPathResource path resolution in some IDEs
Fixed issue where INDArray.get(interval) on rank 1 array would return rank 2 array
Fixed a validation issue with Nd4j.gemm/mmuli on views
INDArray.assign(INDArray) no longer allows assigning different shape arrays (other than scalar/vector cases)
NDArrayStrings (and INDArray.toString()) now always use the US locale when formatting numbers
Fixed an issue with GaussianDistribution specific to V100 GPUs
Fixed an issue with bitmap compression/encoding specific to V100 GPUs
Transforms.softmax now throws an error on unsupported shapes instead of simply not applying operation
VersionCheck functionality: handle case where SimpleFileVisitor is not available on earlier versions of Android
SameDiff convolution layer configurations (Conv2dConfig/Conv3dConfig/Pooling3dConfig etc) have had their parameter names aligned
nd4j-base64 module contents have been deprecated; use the equivalent classes in nd4j-api from now on
Some classes in the nd4j-jackson module have been deprecated; use the equivalent classes in nd4j-api from now on
Added NativeImageLoader method overloads for org.opencv.core.Mat and String as filename
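A short sketch of the String-filename overload (the target size, channel count, and path are placeholders):

```java
import java.io.IOException;
import org.datavec.image.loader.NativeImageLoader;
import org.nd4j.linalg.api.ndarray.INDArray;

public class ImageLoadSketch {
    public static INDArray load(String imagePath) throws IOException {
        // Load and resize to 224x224 with 3 channels, directly from a filename String
        NativeImageLoader loader = new NativeImageLoader(224, 224, 3);
        return loader.asMatrix(imagePath);
    }
}
```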
Fix for JDBCRecordReader handling of null values
Improved errors/validation for ObjectDetectionRecordReader for invalid input (where image object centers are outside of image bounds)
Fixed an issue where FileSplit used methods that are unavailable on earlier versions of Android
Added SerializableHadoopConfiguration and BroadcastHadoopConfigHolder for cases where a Hadoop configuration is required in Spark functions
Fixed issue with JDBCRecordReader's handling of real-valued column result types
Added validation and useful exception for CSVRecordReader/LineRecordReader being used without initialization
Fixed some issues with dropout layers
Added conversion between org.nd4j.linalg.primitives.Pair/Triple and Scala Tuple