Performance and memory optimizations for DL4J
New or enhanced layers:
Added ComputationGraph.output(DataSetIterator) method Link
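A minimal usage sketch of the new overload; the trained ComputationGraph and the DataSetIterator are assumed to be set up elsewhere:

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class GraphOutputExample {
    // Sketch only: 'graph' is an already-trained ComputationGraph, 'testIter' the data to predict on
    public static INDArray[] predictAll(ComputationGraph graph, DataSetIterator testIter) {
        // New overload: iterates the full DataSetIterator and returns one INDArray per network output
        return graph.output(testIter);
    }
}
```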
Added SparkComputationGraph.feedForwardWithKey overload with feature mask support Link
Added support to get input/activation types for each layer from configuration: ComputationGraphConfiguration.getLayerActivationTypes(InputType...), ComputationGraphConfiguration.GraphBuilder.getLayerActivationTypes(), NeuralNetConfiguration.ListBuilder.getLayerActivationTypes(), and MultiLayerConfiguration.getLayerActivationTypes(InputType) methods Link
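A sketch of querying activation types from a graph builder before the network is built; the Map<String, InputType> return type shown here is an assumption:

```java
import java.util.Map;

import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;

public class ActivationTypesExample {
    public static void main(String[] args) {
        ComputationGraphConfiguration.GraphBuilder builder = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in")
                .setInputTypes(InputType.feedForward(784))
                .addLayer("dense", new DenseLayer.Builder().nOut(256).build(), "in")
                .addLayer("out", new OutputLayer.Builder().nOut(10).build(), "dense")
                .setOutputs("out");

        // Assumed return type: layer/vertex name -> activation (output) InputType
        Map<String, InputType> activationTypes = builder.getLayerActivationTypes();
        System.out.println(activationTypes);
    }
}
```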
Evaluation.stats() now prints the confusion matrix in an easier-to-read matrix format, rather than list format Link
Added ModelSerializer.addObjectToFile, .getObjectFromFile and .listObjectsInFile for storing arbitrary Java objects in same file as saved network Link
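A minimal sketch of storing and retrieving extra objects in a saved model file; the "myNetwork.zip" path and the stored map are illustrative assumptions:

```java
import java.io.File;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.deeplearning4j.util.ModelSerializer;

public class ExtraObjectsExample {
    public static void main(String[] args) throws Exception {
        // Assumed: myNetwork.zip was previously written via ModelSerializer.writeModel(...)
        File modelFile = new File("myNetwork.zip");

        // Store an arbitrary (serializable) Java object alongside the saved network
        Map<String, Double> trainingMeta = new HashMap<>();
        trainingMeta.put("lastScore", 0.123);
        ModelSerializer.addObjectToFile(modelFile, "trainingMeta", trainingMeta);

        // Later: list and retrieve the stored objects
        List<String> keys = ModelSerializer.listObjectsInFile(modelFile);
        Object restored = ModelSerializer.getObjectFromFile(modelFile, "trainingMeta");
        System.out.println("Stored keys: " + keys + ", restored object: " + restored);
    }
}
```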
Added SpatialDropout support (with Keras import support) Link
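A sketch of attaching the new spatial dropout to a convolutional layer; the constructor argument is assumed to follow DL4J's convention of being the activation retain probability:

```java
import org.deeplearning4j.nn.conf.dropout.SpatialDropout;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class SpatialDropoutExample {
    public static ConvolutionLayer buildConvWithSpatialDropout() {
        // Spatial dropout drops entire feature maps rather than individual activations
        return new ConvolutionLayer.Builder(3, 3)
                .nIn(1)
                .nOut(16)
                .dropOut(new SpatialDropout(0.9))   // assumed: 0.9 = retain probability
                .build();
    }
}
```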
Added MultiLayerNetwork/ComputationGraph.fit((Multi)DataSetIterator, int numEpochs) overloads Link
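A minimal sketch of the new multi-epoch fit overload; the network and iterator are assumed to be set up elsewhere:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class FitForEpochsExample {
    public static void train(MultiLayerNetwork net, DataSetIterator trainIter) {
        // Replaces the manual "for (int i = 0; i < 10; i++) net.fit(trainIter);" loop
        net.fit(trainIter, 10);
    }
}
```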
Added performance (hardware) listeners: SystemInfoPrintListener and SystemInfoFilePrintListener Link
Performance and memory optimizations via optimizations of internal use of workspaces Link
RecordReaderMultiDataSetIterator will no longer try to convert unused columns to numerical values Link
Added new model zoo models: (to do)
Fix for RecordReaderMultiDataSetIterator where output could be incorrect for some constructors Link
Fixed issue where ComputationGraph topological sort may not be consistent on all platforms; could sometimes break ComputationGraphs (with multiple valid topological orderings) trained on PC and deployed on Android Link
Fixed issue with CuDNN batch norm using 1 - decay instead of decay Link
deeplearning4j-cuda no longer throws exceptions if present on the classpath with the nd4j-native backend set to higher priority Link
Added RNG control for CifarDataSetIterator Link
WordVectorSerializer now deletes temp files immediately once done Link
WorkspaceMode.SINGLE and SEPARATE have been deprecated; use WorkspaceMode.ENABLED instead
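A minimal configuration sketch showing the replacement setting:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.WorkspaceMode;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class WorkspaceModeExample {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                // Previously WorkspaceMode.SINGLE / SEPARATE; use ENABLED instead
                .trainingWorkspaceMode(WorkspaceMode.ENABLED)
                .inferenceWorkspaceMode(WorkspaceMode.ENABLED)
                .list()
                .layer(0, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(10).nOut(2).build())
                .build();
    }
}
```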
Internal layer API changes: custom layers will need to be updated to the new Layer API; see built-in layers or the custom layer example
Custom layers etc. in pre-1.0.0-beta JSON (ModelSerializer) format need to be registered before they can be deserialized, due to a JSON format change. Built-in layers and models saved in 1.0.0-beta or later do not require this. Use NeuralNetConfiguration.registerLegacyCustomClassesForJSON(Class) for this purpose
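A minimal sketch of loading an old-format model that contains custom classes; the custom layer class passed in stands for whatever class your own model uses (hypothetical here):

```java
import java.io.File;

import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

public class LegacyCustomLayerExample {
    // 'customLayerClass' is your own (hypothetical here) custom layer/vertex class
    public static MultiLayerNetwork loadPreBetaModel(File savedModel, Class<?> customLayerClass) throws Exception {
        // Register once before deserializing pre-1.0.0-beta JSON configurations
        NeuralNetConfiguration.registerLegacyCustomClassesForJSON(customLayerClass);
        return ModelSerializer.restoreMultiLayerNetwork(savedModel);
    }
}
```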
IterationListener has been deprecated in favor of TrainingListener. For existing custom listeners, switch from implements TrainingListener to extends BaseTrainingListener Link
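A minimal listener sketch using the new base class; the three-argument iterationDone signature shown here is assumed from this release's TrainingListener API:

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.optimize.api.BaseTrainingListener;

// Extend BaseTrainingListener and override only the callbacks you need,
// instead of implementing every TrainingListener method.
public class ScoreLoggingListener extends BaseTrainingListener {
    @Override
    public void iterationDone(Model model, int iteration, int epoch) {
        if (iteration % 100 == 0) {
            System.out.println("epoch " + epoch + ", iteration " + iteration + ", score " + model.score());
        }
    }
}
```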
ExistingDataSetIterator has been deprecated; use the fit(DataSetIterator, int numEpochs) method instead
ComputationGraph TrainingListener onEpochStart and onEpochEnd methods are not being called correctly
DL4J Zoo Model FaceNetNN4Small2 model configuration is incorrect, causing issues during forward pass
Early stopping score calculators with values that should be maximized (accuracy, F1, etc.) are not working properly (values are minimized, not maximized). Workaround: override ScoreCalculator.calculateScore(...) and return 1.0 - super.calculateScore(...).
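A sketch of this workaround written as a delegating wrapper rather than a direct subclass; it assumes ScoreCalculator exposes only calculateScore(...) in this version, and the wrapper can then be passed wherever a ScoreCalculator is expected (e.g. an early stopping configuration):

```java
import org.deeplearning4j.earlystopping.scorecalc.ScoreCalculator;
import org.deeplearning4j.nn.api.Model;

// Wraps a maximization-based calculator (accuracy, F1, ...) and inverts its score,
// so that early stopping's minimization selects the best model as intended.
public class InvertedScoreCalculator<T extends Model> implements ScoreCalculator<T> {
    private final ScoreCalculator<T> delegate;

    public InvertedScoreCalculator(ScoreCalculator<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public double calculateScore(T network) {
        return 1.0 - delegate.calculateScore(network);
    }
}
```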
Not all op gradients implemented for automatic differentiation
The vast majority of new operations added in 1.0.0-beta do NOT use the GPU yet.
ImageRecordReader now logs the number of inferred label classes (to reduce the risk of users missing a problem if something is misconfigured) Link
Added AnalyzeSpark.getUnique overload for multiple columns Link
Added performance/timing module Link
Reduced ImageRecordReader garbage generation via buffer reuse Link
Removed Reflections library use in DataVec Link
Fix for TransformProcessRecordReader batch support Link
Fix for TransformProcessRecordReader with filter operations Link
Fixed issue with ImageRecordReader/ParentPathLabelGenerator incorrectly filtering directories containing '.' character(s) Link
ShowImageTransform now initializes frame lazily to avoid blank windows Link
DataVec ClassPathResource has been deprecated; use nd4j-common version instead Link
Added LayerSpace for OCNN (one-class neural network)
Fixed timestamp issue that could cause incorrect rendering of first model's results in UI Link
Execution now waits for the last model(s) to complete before returning when a termination condition is hit Link
As per DL4J and DataVec: use of the Reflections library has been removed entirely from Arbiter Link
Removed use of the Eclipse Collections library due to issues with Android compilation Link
Improved cleanup of completed models to reduce maximum memory requirements for training Link