In light of the upcoming 1.0 release, the project has decided to remove a number of modules before the final release. These modules have had few users, have created confusion for people who just wanted to use a few simple APIs, and many of them have not been maintained.
There will likely be one or two more milestone releases before the final 1.0. These should be considered checkpoints.
The removed modules include:
- DataVec modules for video, audio, and sound. The computer vision DataVec module will continue to be available.
- Tokenizers: the tokenizers for Chinese, Japanese, and Korean were imported from other frameworks and not really updated.
- ScalNet, ND4S: we removed the Scala modules due to the small user base. We welcome third-party enhancements to the framework for syntactic sugar such as Kotlin and Scala. The framework's focus will be on providing the underlying technology rather than the de facto interfaces. If there is interest in something higher level, please discuss it on the community forums.
TVM: We now support running TVM modules. Docs coming soon.
We've updated our shaded modules to newer versions to mitigate security risks. These modules include Jackson and Guava.
CUDA 11: we've upgraded DL4J and associated modules to support CUDA 11.0 and 11.2.
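For example, selecting a CUDA 11.2 backend in a Maven build might look like the following sketch. The artifact id and the `nd4j.version` property are illustrative assumptions based on the usual ND4J naming convention; check the published release artifacts for the exact coordinates:

```xml
<!-- Hypothetical example: CUDA 11.2 backend for ND4J.
     Verify the artifact id and version against the actual release. -->
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-cuda-11.2</artifactId>
  <version>${nd4j.version}</version>
</dependency>
```

Projects targeting CUDA 11.0 would swap in the corresponding artifact id.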
A more modular model import framework supporting TensorFlow and ONNX:
1. Model mapping procedures loadable as protobuf
2. Defining custom rules for import to work around unsupported or custom layers/operations
3. An op descriptor for all operations in ND4J
This enables users to override model import behavior to run their own custom models. In most circumstances there will no longer be a need to modify the model import core code; instead, users can provide definitions and custom rules for their graphs.
Users will be expected to convert their models in an external, standalone process rather than at runtime. This extends to Keras import as well: some users currently convert their models in production directly from Keras. The workflow going forward is to convert your model ahead of time, avoiding the performance cost of converting large models on the fly.
Removed PPC support from nd4j-native-platform and nd4j-cuda-platform. If you need this architecture, please contact us or build from source.
Added more support for AVX-, MKL-DNN-, and cuDNN-linked acceleration in our C++ library. We now have the ability to distribute more combinations of precompiled math kernels via different combinations of classifiers. See the ADR here for more details.
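As an illustration, picking one of these precompiled variants in Maven is done through a dependency classifier. The classifier string below is an assumed example of the platform-plus-acceleration naming scheme, not a confirmed artifact; consult the release's published classifiers for the real values:

```xml
<!-- Hypothetical example: native backend with an AVX2-tuned kernel build.
     The classifier value is illustrative; check the actual published classifiers. -->
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-native</artifactId>
  <version>${nd4j.version}</version>
  <classifier>linux-x86_64-avx2</classifier>
</dependency>
```

The classifier mechanism lets the build pull in only the kernel combination needed for the target machine instead of every variant.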
We've upgraded Arrow to 4.0.0, enabling the associated nd4j-arrow and datavec-arrow modules to be used without Netty clashes.
Rewritten and more stable Python execution, with better support for multithreaded environments.