1.0.0-beta
Performance and memory optimizations for DL4J
New or enhanced layers:
Added Cropping1D layer
Added Convolution3D, Cropping3D, UpSampling3D, ZeroPadding3D, Subsampling3D layers (all with Keras import support)
Added EmbeddingSequenceLayer (EmbeddingLayer for time series)
Added OCNNOutputLayer, an implementation of the one-class neural network approach
Added FrozenLayerWithBackprop layer
Added DepthwiseConvolution2D layer
Added ComputationGraph.output(DataSetIterator) method
Added MultiLayerNetwork/ComputationGraph.layerInputSize methods
Added SparkComputationGraph.feedForwardWithKey overload with feature mask support
Added MultiLayerNetwork.calculateGradients method (for easily getting parameter and input gradients, for example for some model interpretability approaches)
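A minimal sketch of how such gradients might be inspected for one batch. The exact calculateGradients signature and return shape shown here (features, labels, optional masks; a Pair of parameter gradient and input gradient) and the Pair package are assumptions for this version, not taken from the release notes.

```java
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.primitives.Pair;   // assumed package for Pair in this version

public class GradientInspection {

    /** Prints the shapes of the parameter and input gradients for one batch (sketch only). */
    public static void inspect(MultiLayerNetwork net, INDArray features, INDArray labels) {
        // Assumed signature: (features, labels, featureMask, labelMask); no masks in this sketch
        Pair<Gradient, INDArray> grads = net.calculateGradients(features, labels, null, null);

        INDArray parameterGradients = grads.getFirst().gradient(); // flattened parameter gradients
        INDArray inputGradients = grads.getSecond();               // gradient w.r.t. the network input

        System.out.println("Parameter gradient shape: " + java.util.Arrays.toString(parameterGradients.shape()));
        System.out.println("Input gradient shape:     " + java.util.Arrays.toString(inputGradients.shape()));
    }
}
```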
Added support to get input/activation types for each layer from configuration: ComputationGraphConfiguration.getLayerActivationTypes(InputType...), ComputationGraphConfiguration.GraphBuilder.getLayerActivationTypes(), NeuralNetConfiguration.ListBuilder.getLayerActivationTypes(), and MultiLayerConfiguration.getLayerActivationTypes(InputType) methods
Evaluation.stats() now prints the confusion matrix in an easier-to-read matrix format, rather than list format
Added ModelSerializer.addObjectToFile, .getObjectFromFile and .listObjectsInFile for storing arbitrary Java objects in the same file as a saved network
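A hedged sketch of the round trip, assuming the stored objects are Serializable; the file path, key name, and exact parameter order of the new methods are assumptions for illustration.

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

import java.io.File;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ModelFileObjects {

    /** Saves the network and attaches/reads an arbitrary Serializable object in the same file. */
    public static void attachMetadata(MultiLayerNetwork net, File modelFile) throws Exception {
        // Save the network itself (true = also save the updater state)
        ModelSerializer.writeModel(net, modelFile, true);

        // Attach an arbitrary (Serializable) Java object under a key ("metadata" is made up here)
        Map<String, String> metadata = new HashMap<>();
        metadata.put("datasetVersion", "2018-05");
        ModelSerializer.addObjectToFile(modelFile, "metadata", metadata);

        // Later: list and retrieve the stored objects from the same file
        List<String> keys = ModelSerializer.listObjectsInFile(modelFile);
        Map<String, String> restored = (Map<String, String>) ModelSerializer.getObjectFromFile(modelFile, "metadata");
        System.out.println(keys + " -> " + restored);
    }
}
```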
Added SpatialDropout support (with Keras import support)
Added MultiLayerNetwork/ComputationGraph.fit((Multi)DataSetIterator, int numEpochs) overloads
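A one-line sketch of the new overload; the iterator construction is omitted and the epoch count is a placeholder.

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class MultiEpochFit {
    /** Train for a fixed number of epochs with the new convenience overload. */
    public static void train(MultiLayerNetwork net, DataSetIterator trainIter) {
        net.fit(trainIter, 10);   // 10 epochs; replaces a manual for-loop over epochs
    }
}
```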
Added performance (hardware) listeners: SystemInfoPrintListener and SystemInfoFilePrintListener
Fixed issues with custom layers and some Keras import layers on Android
Added new model zoo models: (to do)
WorkspaceMode.SINGLE and SEPARATE have been deprecated; use WorkspaceMode.ENABLED instead
Internal layer API changes: custom layers will need to be updated to the new Layer API - see built-in layers or custom layer example
Custom layers etc. in pre-1.0.0-beta JSON (ModelSerializer) format need to be registered before they can be deserialized, due to a JSON format change. Built-in layers and models saved in 1.0.0-beta or later do not require this. Use NeuralNetConfiguration.registerLegacyCustomClassesForJSON(Class) for this purpose
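A minimal sketch of the registration step; MyCustomLayer is a hypothetical custom layer class, and registration must happen before the old-format model is loaded.

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

import java.io.File;

public class LegacyCustomLayerLoading {

    /** Loads a pre-1.0.0-beta model that contains a custom layer. */
    public static MultiLayerNetwork loadOldModel(File savedBeforeBeta) throws Exception {
        // Register the custom class(es) used by the old model before deserializing it.
        // MyCustomLayer is a hypothetical custom layer class name.
        NeuralNetConfiguration.registerLegacyCustomClassesForJSON(MyCustomLayer.class);

        return ModelSerializer.restoreMultiLayerNetwork(savedBeforeBeta);
    }
}
```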
ExistingDataSetIterator has been deprecated; use the fit(DataSetIterator, int numEpochs) method instead
ComputationGraph TrainingListener onEpochStart and onEpochEnd methods are not being called correctly
DL4J Zoo Model FaceNetNN4Small2 model configuration is incorrect, causing issues during forward pass
Early stopping score calculators with values that should be maximized (accuracy, F1 etc.) are not working properly (values are minimized, not maximized). Workaround: override ScoreCalculator.calculateScore(...) and return 1.0 - super.calculateScore(...).
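The workaround above describes overriding calculateScore on the calculator you already use; a delegating variant of the same idea is sketched below. The ScoreCalculator shape assumed here (a generic interface with a single calculateScore(T) method in this version) is an assumption, not confirmed by the release notes.

```java
import org.deeplearning4j.earlystopping.scorecalc.ScoreCalculator;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

/**
 * Wraps a maximization-based calculator (accuracy, F1, ...) so that the early stopping
 * machinery, which minimizes the score in this version, still selects the best model.
 */
public class InvertedScoreCalculator implements ScoreCalculator<MultiLayerNetwork> {

    private final ScoreCalculator<MultiLayerNetwork> delegate;

    public InvertedScoreCalculator(ScoreCalculator<MultiLayerNetwork> delegate) {
        this.delegate = delegate;
    }

    @Override
    public double calculateScore(MultiLayerNetwork network) {
        // 1.0 - score: maximizing the original metric is equivalent to minimizing the inverted one
        return 1.0 - delegate.calculateScore(network);
    }
}
```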
Not all op gradients implemented for automatic differentiation
Vast majority of new operations added in 1.0.0-beta do NOT use GPU yet.
Added LayerSpace for OCNN (one-class neural network)
Performance and memory optimizations via optimizations of internal use of workspaces
The Reflections library has been entirely removed from DL4J and is no longer required for custom layer serialization/deserialization
RecordReaderMultiDataSetIterator will no longer try to convert unused columns to numerical values
Fixes for Android compilation (removed duplicate classes, aligned versions, removed some dependencies)
Fix for RecordReaderMultiDataSetIterator where output could be incorrect for some constructors
Non-frozen layers before a frozen layer will no longer be skipped during backprop (useful for GANs and similar architectures)
Fixed issue where ComputationGraph topological sort may not be consistent on all platforms; could sometimes break ComputationGraphs (with multiple valid topological orderings) trained on PC and deployed on Android
Fixed issue with CuDNN batch norm using 1-decay instead of decay
deeplearning4j-cuda no longer throws exceptions if present on classpath with nd4j-native backend set to higher priority
Added RNG control for CifarDataSetIterator
WordVectorSerializer now deletes temp files immediately once done
IterationListener has been deprecated in favor of TrainingListener. For existing custom listeners, switch from implements TrainingListener to extends BaseTrainingListener
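A sketch of the migration: extend BaseTrainingListener (which provides no-op implementations of the TrainingListener callbacks) and override only what is needed. The exact iterationDone signature (model, iteration, epoch) is assumed here.

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.optimize.api.BaseTrainingListener;

/** Example migrated listener: logs the score every 'frequency' iterations. */
public class ScoreLoggingListener extends BaseTrainingListener {

    private final int frequency;

    public ScoreLoggingListener(int frequency) {
        this.frequency = frequency;
    }

    @Override
    public void iterationDone(Model model, int iteration, int epoch) {
        // Assumed callback signature for this version
        if (iteration % frequency == 0) {
            System.out.println("epoch " + epoch + ", iteration " + iteration + ", score " + model.score());
        }
    }
}
```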
ImageRecordReader now logs number of inferred label classes (to reduce risk of users missing a problem if something is misconfigured)
Added AnalyzeSpark.getUnique overload for multiple columns
Added performance/timing module
Reduced ImageRecordReader garbage generation via buffer reuse
Fixes for Android compilation (aligned versions, removed some dependencies)
Removed Reflections library use in DataVec
Fix for TransformProcessRecordReader batch support
Fix for TransformProcessRecordReader with filter operations
Fixed issue with ImageRecordReader/ParentPathLabelGenerator incorrectly filtering directories containing '.' character(s)
ShowImageTransform now initializes frame lazily to avoid blank windows
DataVec ClassPathResource has been deprecated; use nd4j-common version instead
Fixed timestamp issue that could cause incorrect rendering of first model's results in UI
Execution now waits for last model(s) to complete before returning when a termination condition is hit
As with DL4J: use of the Reflections library has been removed entirely from Arbiter
Removed use of the Eclipse Collections library due to issues with Android compilation
Improved cleanup of completed models to reduce maximum memory requirements for training