Op Descriptor Format

Model import framework overview and examples

Last updated 3 years ago

An nd4j "op" is an individual math operator (think add, subtract). This is best described in nd4j's DynamicCustomOp. Given a set of input arguments as ndarrays, it outputs ndarrays that can be passed on to other ops for execution. Each op may also take other input parameters such as booleans, doubles, floats, ints, and longs.

For performance reasons, output arrays may be passed in if the memory has already been preallocated. Otherwise, each op can calculate the shape of its outputs and dynamically allocate the result ndarrays.

Every op has a calculateOutputShape method implemented in C++ that is used to dynamically create result arrays. More information can be found in the architecture decision record for implementing this feature.
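As a minimal sketch of these two calling conventions, the following illustrative (not actual nd4j API) class shows an elementwise op that either writes into a caller-supplied, preallocated output buffer or computes its output shape and allocates the result itself:

```java
import java.util.Arrays;

// Hypothetical sketch of an op with an optional preallocated output.
// Names and signatures are illustrative, not the real nd4j API.
public class AddOp {

    // For an elementwise op, the output shape matches the input shapes.
    static int[] calculateOutputShape(int[] shapeA, int[] shapeB) {
        if (!Arrays.equals(shapeA, shapeB))
            throw new IllegalArgumentException("shapes must match for elementwise add");
        return shapeA.clone();
    }

    // The caller may pass a preallocated output buffer to avoid allocation;
    // otherwise the op allocates one itself from the computed output shape.
    static double[] exec(double[] a, double[] b, double[] out) {
        if (out == null)
            out = new double[a.length]; // dynamically allocated result
        for (int i = 0; i < a.length; i++)
            out[i] = a[i] + b[i];
        return out;
    }
}
```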

The op descriptor format is a serialized description of nd4j's operations. The current set of operations is saved in protobuf format. You can find the latest op set here.

This op descriptor format is generated by a utility that parses the Java and C++ code bases to automatically derive the set of op descriptors from the code. Users can run this utility to generate their own op descriptors as well. If needed, the utility can be found here. It is a self-contained Maven project that you can import and run yourself. Note that at this time we do not publish the tool on Maven Central. It is not really meant for end users, but please feel free to ask about it on our forums. The core protobuf op descriptor files can be found here.
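To make the idea concrete, a single entry in such a descriptor file can be pictured as the op's name plus the names of its typed arguments. The record below is a simplified, hypothetical stand-in for one protobuf descriptor; the field and argument names are illustrative only:

```java
import java.util.List;

// Hypothetical, simplified stand-in for one entry in the protobuf
// op descriptor file. Real descriptors carry more metadata.
public record OpDescriptor(
        String name,
        List<String> inputArgs,   // input ndarray names
        List<String> outputArgs,  // output ndarray names
        List<String> boolArgs,    // boolean parameters
        List<String> intArgs) {   // integer parameters

    // Example descriptor for a 2-D convolution (argument names made up).
    static OpDescriptor conv2d() {
        return new OpDescriptor(
                "conv2d",
                List.of("input", "weights", "bias"),
                List.of("output"),
                List.of("isSameMode"),
                List.of("kH", "kW", "sH", "sW"));
    }
}
```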

This is equivalent to an onnx op or a tensorflow op. "Ops" in deep learning frameworks are essentially math operations, as simple as add or subtract or as complex as convolution, that operate on input and output ndarrays. They may also contain numerical or string attributes as parameters for the execution of the operation.

Neural network execution in deep learning frameworks happens as a directed acyclic graph. This allows a graph to be processed as a sequential list of operations.

For an end user, this means that every graph can be saved as a list of sequential steps for execution, eventually ending in one or more outputs.
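Flattening a directed acyclic graph into such a sequential list is a topological sort. The sketch below (op names are made up) shows the idea with Kahn's algorithm: an op becomes ready once all ops it depends on have been scheduled:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: turning a DAG of ops into a sequential execution order via
// Kahn's topological sort. The op names in tests are illustrative.
public class GraphOrder {

    // deps maps each op to the list of ops it depends on.
    static List<String> topologicalOrder(Map<String, List<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();
        deps.forEach((op, d) -> remaining.put(op, d.size()));

        Deque<String> ready = new ArrayDeque<>();
        remaining.forEach((op, n) -> { if (n == 0) ready.add(op); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String op = ready.poll();
            order.add(op);
            // Any op depending on the one just scheduled gets one step closer.
            for (var e : deps.entrySet())
                if (e.getValue().contains(op)
                        && remaining.merge(e.getKey(), -1, Integer::sum) == 0)
                    ready.add(e.getKey());
        }
        if (order.size() != deps.size())
            throw new IllegalStateException("graph contains a cycle");
        return order;
    }
}
```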

The main concepts are fairly straightforward. A Mapper maps input attributes and input/output ndarrays to their equivalents in nd4j.
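In spirit, attribute mapping amounts to renaming a source framework's attribute names to the names nd4j expects. The sketch below is hypothetical; the attribute names shown are illustrative and not the real mapping rules:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of what an import mapper does for attributes:
// translate a source framework's attribute names to nd4j equivalents.
// The name pairs below are illustrative only.
public class AttributeMapper {

    static final Map<String, String> SOURCE_TO_ND4J = Map.of(
            "strides", "s",
            "ksize", "k",
            "padding", "isSameMode");

    static Map<String, Object> map(Map<String, Object> sourceAttrs) {
        Map<String, Object> out = new HashMap<>();
        // Unknown attributes pass through under their original name.
        sourceAttrs.forEach((k, v) ->
                out.put(SOURCE_TO_ND4J.getOrDefault(k, k), v));
        return out;
    }
}
```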

Nd4j's graph format is a flatbuffers format. The method for export may be found here.
