Op Descriptor Format
Model import framework overview and examples
An op consumes input ndarrays and outputs ndarrays that may in turn be passed to other ops for execution. Each op may also take other input parameters such as booleans, doubles, floats, ints, and longs.
For performance reasons, output arrays may be passed in if the memory has already been pre-allocated. Otherwise, each op can calculate the shape of its outputs and dynamically allocate the result ndarrays.
Every op has a calculateOutputShape method implemented in C++ that is used to dynamically create result arrays. More information can be found in the architecture decision record for implementing this feature.
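To illustrate the idea, here is a minimal sketch in plain Java of an op that can report its output shape before any memory is allocated. The interface and class names are hypothetical, not the actual libnd4j API:

```java
import java.util.Arrays;

// Hypothetical sketch: an op that computes its output shape from its
// input shapes alone, mirroring the calculateOutputShape idea.
interface ShapeAwareOp {
    long[] calculateOutputShape(long[][] inputShapes);
}

// Example: 2-D matrix multiply. Output shape is [rows of A, cols of B].
class MatMulShape implements ShapeAwareOp {
    public long[] calculateOutputShape(long[][] inputShapes) {
        long[] a = inputShapes[0];
        long[] b = inputShapes[1];
        if (a[1] != b[0])
            throw new IllegalArgumentException("Inner dimensions must match");
        return new long[]{a[0], b[1]};
    }
}
```

Knowing the shape up front is what lets an executor either allocate the result itself or validate a pre-allocated output buffer.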
This op descriptor format is generated by a utility that parses the Java and C++ code bases and automatically produces the set of op descriptors from the code. Users can run this utility to generate their own op descriptors as well. If needed, the utility can be found here. It is a self-contained Maven project that you can import and run yourself. Note that at this time we do not publish the tool on Maven Central; it is not really meant for end users, but please feel free to ask about it on our forums. The core protobuf op descriptor files can be found here.
This is equivalent to an ONNX op or a TensorFlow op. "Ops" in deep learning frameworks are math operations, as simple as add or subtract or as complex as convolution, that operate on input and output ndarrays. They may also take numerical or string attributes as parameters for the execution of the operation.
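As a concrete sketch, the following hypothetical Java class shows the shape of such an op: ndarray in, ndarray out, with a numeric attribute controlling the execution. The class name and API are illustrative only:

```java
// Hypothetical sketch of an op with a numeric attribute, analogous to
// ops in ONNX or TensorFlow: input ndarray -> output ndarray.
class ScalarAddOp {
    private final double scalar; // numeric attribute parameter

    ScalarAddOp(double scalar) {
        this.scalar = scalar;
    }

    // Adds the scalar attribute to every element of the input.
    double[] exec(double[] input) {
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++)
            out[i] = input[i] + scalar;
        return out;
    }
}
```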
For an end user, this means that every graph can be saved as a list of sequential steps for execution, eventually ending in one or more outputs.
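A minimal sketch of that execution model, assuming a hypothetical graph class (not the actual framework API), where each saved step consumes the previous step's output:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch: a saved graph as an ordered list of steps,
// executed sequentially, each step feeding the next.
class SequentialGraph {
    private final List<UnaryOperator<double[]>> steps = new ArrayList<>();

    SequentialGraph add(UnaryOperator<double[]> step) {
        steps.add(step);
        return this;
    }

    // Runs every step in order; the final result is the graph output.
    double[] execute(double[] input) {
        double[] current = input;
        for (UnaryOperator<double[]> step : steps)
            current = step.apply(current);
        return current;
    }
}
```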