Release

How to conduct a release to Maven Central


Last updated 3 years ago


A Deeplearning4j release has several steps. Below is a brief outline, with fuller descriptions following.

  1. Compile libnd4j for different cpu architectures

  2. Ensure the current javacpp dependencies such as python, mkldnn, cuda, .. are up to date

  3. Run all integration tests on core platforms (windows, mac, linux) with both cpu and gpu

  4. Create a staging repository for testing, using github actions workflows run manually on each platform

  5. Update the examples to be compatible with the latest release

  6. Run the deeplearning4j-examples as a litmus test on all platforms (including embedded) to sanity-check platform-specific numerical bugs using the staging repository

  7. Double check any user related bugs to see if they should block a release

  8. Hit release button

  9. Perform follow up release of -platform projects under same version

  10. Tag release

Compile libnd4j on different cpu architectures

Compiling libnd4j on different cpu architectures ensures there is platform-optimized math in C++ for each platform. The single code base is a self-contained cmake project that can be run on different platforms. In each github actions workflow there are steps for deploying for each platform.

At the core of compiling libnd4j from source is a maven pom.xml that is run as part of the overall build process and that invokes our build script with various parameters, which then get passed to our overall cmake structure for compilation. This script exists to formalize some of the required parameters for invoking cmake. Any developer is welcome to invoke cmake directly.

  • Platform compatibility

    We currently compile libnd4j on ubuntu 16.04. This means glibc 2.23.

    For our cuda builds, we use gcc7.

    Users of older glibc versions may need to compile from source. For our standard release, we try to keep the baseline reasonably old, but we do not support end-of-life linux distributions for public builds.

  • Platform specific helpers

Ensure the current javacpp dependencies such as python, mkldnn, cuda, .. are up to date

Of note here is that certain older versions of libraries can use older javacpp versions. It is recommended that the desired version be up to date if possible. Otherwise, if an older version of javacpp is the only version available, this is generally ok.

Run all integration tests on core platforms (windows, mac, linux) with both cpu and gpu

We run all of the major integration tests on the core major platforms where higher-end compute is accessible; this generally means a larger machine. Expect some builds to take up to 2 hours depending on the specs of the machine.

Update the examples to be compatible with the latest release

To ensure the examples stay compatible with the current release, we also tag the release version to be the latest version found on maven central. This step may also involve adding or removing examples for new or deprecated features respectively.

Ensure different classifiers work

  1. Different supported cuda versions with and without cudnn

  2. Onednn and associated classifiers per platform

Android

Ensure testing happens on the android emulator.

Run the deeplearning4j-examples as a litmus test on all platforms (including embedded)

Double check any user related bugs to see if they should block a release

Hit release button

Ensure a tag exists

After a release happens, a version update to the stable version plus a github tag needs to happen. This is achieved in the desktop app by:

  1. History

  2. Right click on the target commit you want to tag

  3. Click tag

  4. Push the revision

  5. Update the version back to snapshot after the tag

Each build of libnd4j links against an accelerated backend for blas and convolution operations, such as onednn, cudnn, or armcompute. The implementations for each platform can be found in the libnd4j code base.

This is a step that just ensures the dl4j release matches the current state of the dependencies provided by javacpp on maven central. This affects every module, including python4j, nd4j-native/cuda, and datavec-image, among others. The versions of everything can be found in the top-level deeplearning4j pom. The general convention is the library version followed by a - and the version of javacpp that the build uses.

This step may also involve invoking tests with specific tags if only running a subset of tests is desired. This can be achieved using the surefire plugin's -Dgroups flag.

The examples contain a set of tests that allow us to run maven clean test on a small number of examples. Instead of picking examples manually, we can run mvn clean test on any platform we need by specifying a version of dl4j to depend on and, usually, a staging repository.

Sometimes users will raise issues right before a release that can be critical. It is at the sole discretion of the maintainers to ask the user to use snapshots or to wait for a follow-on version. For certain fixes, we will publish quick bugfix releases. If your team has specific requirements on a release, please contact us on the community forums.

This means that after closing a staging repository, hitting the release button initiates a sync of the staging repository with the desired version to maven central. The sync usually takes 2 hours or less.
