Deeplearning4j (EN 1.0.0-M2)
Nd4j > Reference
Syntax


Last updated 3 years ago


For the complete nd4j-api index, please consult the Javadoc.

There are three types of operations used in ND4J: scalars, transforms and accumulations. We’ll use the word op synonymously with operation.

Most of the ops just take enums, or a list of discrete values that you can autocomplete. Activation functions are the exception, because they take strings such as "relu" or "tanh".

Scalars, transforms and accumulations each have their own patterns. Transforms are the simplest, since they take a single argument and perform an operation on it. Absolute value is a transform that takes an argument x, as in abs(IComplexNDArray ndarray), and produces the absolute value of x. Similarly, you would apply the sigmoid transform sigmoid() to x to produce the "sigmoid of x".

Scalars just take two arguments: the input and the scalar to be applied to that input. For example, ScalarAdd() takes two arguments: the input INDArray x and the scalar Number num; i.e. ScalarAdd(INDArray x, Number num). The same format applies to every Scalar op.
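To make the scalar pattern concrete, here is a plain-Java sketch of what a scalar add does elementwise. This is illustrative only (no ND4J dependency; the class and method names are hypothetical, not part of the ND4J API):

```java
// Illustrative sketch of ScalarAdd semantics: z[i] = x[i] + num.
// Plain Java, no ND4J dependency; names are hypothetical.
public class ScalarOpSketch {
    static double[] scalarAdd(double[] x, double num) {
        double[] z = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            z[i] = x[i] + num;   // apply the scalar to every element
        }
        return z;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 2.0, 3.0};
        System.out.println(java.util.Arrays.toString(scalarAdd(x, 2.0)));
        // prints [3.0, 4.0, 5.0]
    }
}
```

Every Scalar op follows this shape: the array in the first position, the scalar in the second.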

Finally, we have accumulations, which are also known as reductions in GPU-land. Accumulations add arrays and vectors to one another and can reduce the dimensions of those arrays in the result by adding their elements in a rowwise op. For example, we might run an accumulation on the array

[1 2
3 4]

Which would give us the vector

[3
7]

Reducing the columns (i.e. dimensions) from two to one.

Accumulations can be either pairwise or scalar. In a pairwise reduction, we might be dealing with two arrays, x and y, which have the same shape. In that case, we could calculate the cosine similarity of x and y by taking their elements two by two.

    cosineSim(x[i], y[i])

Or take EuclideanDistance(arr, arr2), a reduction between one array arr and another arr2.
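As a plain-Java sketch of this pairwise reduction (illustrative only, not ND4J code), cosine similarity walks two same-shape arrays element by element and reduces them to a single number:

```java
// Illustrative sketch of a pairwise reduction: cosine similarity of two
// same-shape vectors. Plain Java, no ND4J dependency.
public class CosineSketch {
    static double cosineSim(double[] x, double[] y) {
        double dot = 0, normX = 0, normY = 0;
        for (int i = 0; i < x.length; i++) {   // take the elements two by two
            dot   += x[i] * y[i];
            normX += x[i] * x[i];
            normY += y[i] * y[i];
        }
        return dot / (Math.sqrt(normX) * Math.sqrt(normY));
    }

    public static void main(String[] args) {
        double[] a = {1, 0};
        double[] b = {0, 1};
        System.out.println(cosineSim(a, b));  // orthogonal vectors -> 0.0
    }
}
```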

Many ND4J ops are overloaded, meaning methods sharing a common name have different argument lists. Below we will explain only the simplest configurations.

As you can see, there are three possible argument types with ND4J ops: inputs, optional arguments and outputs. The outputs are specified in the op's constructor. The inputs are specified in the parentheses following the method name, always in the first position. The optional arguments (e.g. the scalar to add, or the coefficient to multiply by) transform the inputs and always come in the second position.

Transforms

ACos(INDArray x)
    Trigonometric inverse cosine, elementwise. The inverse of cos such that, if y = cos(x), then x = ACos(y).

ASin(INDArray x)
    Also known as arcsin. Inverse sine, elementwise.

ATan(INDArray x)
    Trigonometric inverse tangent, elementwise. The inverse of tan, such that, if y = tan(x), then x = ATan(y).

Transforms.tanh(myArray)
    Hyperbolic tangent, a sigmoidal function. This applies an elementwise tanh in place.

Nd4j.getExecutioner().exec(Nd4j.getOpFactory().createTransform("tanh", myArray))
    Equivalent to the above.

Here are two examples of performing z = tanh(x), in which the original array x is unmodified.

INDArray x = Nd4j.rand(3,2);    //input
INDArray z = Nd4j.create(3,2); //output
Nd4j.getExecutioner().exec(new Tanh(x,z));
Nd4j.getExecutioner().exec(Nd4j.getOpFactory().createTransform("tanh",x,z));

The two examples above use ND4J's basic convention for all ops, in which we have three NDArrays: x, y and z.

x is input, always required
y is (optional) input, only used in some ops (like CosineSimilarity, AddOp etc)
z is output

Frequently, z = x (this is the default if you use a constructor with only one argument). But there are exceptions for situations like x = x + y. Another possibility is z = x + y, etc.

Accumulations

Most accumulations are accessible directly via the INDArray interface.

For example, to add up all elements of an NDArray:

double sum = myArray.sumNumber().doubleValue();

Accumulation along a dimension, i.e. summing the values in each column:

INDArray tenBy3 = Nd4j.ones(10,3);     //10 rows, 3 columns
INDArray sumColumns = tenBy3.sum(0);   //sum along dimension 0, i.e. down each column
System.out.println(sumColumns);        //Output: [ 10.00, 10.00, 10.00]

Accumulations along dimensions generalize: you can sum along any dimension of an array with two or more dimensions.
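To show what summing along dimension 0 means, here is a plain-Java sketch (illustrative only, no ND4J dependency): the rows are added together, producing one value per column.

```java
// Illustrative sketch of sum along dimension 0 of a 2-D array:
// accumulates the rows, yielding one total per column.
public class SumAlongDim {
    static double[] sumDim0(double[][] m) {
        double[] out = new double[m[0].length];
        for (double[] row : m) {
            for (int j = 0; j < row.length; j++) {
                out[j] += row[j];             // accumulate column-wise
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] tenBy3 = new double[10][3];
        for (double[] row : tenBy3) java.util.Arrays.fill(row, 1.0);
        System.out.println(java.util.Arrays.toString(sumDim0(tenBy3)));
        // prints [10.0, 10.0, 10.0], matching tenBy3.sum(0) above
    }
}
```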

Subset Operations on Arrays

A simple example:

INDArray random = Nd4j.rand(3, 3);
System.out.println(random);
[[0.93,0.32,0.18]
[0.20,0.57,0.60]
[0.96,0.65,0.75]]

INDArray lastTwoRows = random.get(NDArrayIndex.interval(1,3),NDArrayIndex.all());

Interval is fromInclusive, toExclusive; note that you can equivalently use the inclusive form: NDArrayIndex.interval(1,2,true);

System.out.println(lastTwoRows);
[[0.20,0.57,0.60]
[0.96,0.65,0.75]]

INDArray twoValues = random.get(NDArrayIndex.point(1),NDArrayIndex.interval(0, 2));
System.out.println(twoValues);
[ 0.20, 0.57]

These are views of the underlying array, not copies, which provides greater flexibility and avoids the cost of copying.

twoValues.addi(5.0);
System.out.println(twoValues);
[ 5.20, 5.57]

System.out.println(random);
[[0.93,0.32,0.18]
[5.20,5.57,0.60]
[0.96,0.65,0.75]]

To avoid in-place behaviour, use random.get(…).dup() to make a copy.
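The view-versus-copy distinction can be sketched with plain Java arrays (illustrative only, not ND4J code): a row reference aliases the parent matrix, much like an ND4J view, while clone() detaches it, much like .dup().

```java
// Illustrative sketch of view vs copy semantics using plain Java arrays.
public class ViewSketch {
    public static void main(String[] args) {
        double[][] m = {{0.93, 0.32, 0.18}, {0.20, 0.57, 0.60}};

        double[] view = m[1];          // shares storage with m (like a view)
        view[0] += 5.0;                // the change is visible through m too
        System.out.println(m[1][0]);   // prints 5.2

        double[] copy = m[1].clone();  // detached copy (like .dup())
        copy[1] = 99.0;                // does not affect m
        System.out.println(m[1][1]);   // prints 0.57
    }
}
```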

Scalar

INDArray.add(number)
    Returns the result of adding number to each entry of the INDArray, as a new array; e.g. myArray.add(2.0).

INDArray.addi(number)
    Adds number to each entry of the INDArray in place.

ScalarAdd(INDArray x, Number num)
    Returns the result of adding num to each entry of INDArray x.

ScalarDivision(INDArray x, Number num)
    Returns the result of dividing each entry of INDArray x by num.

ScalarMax(INDArray x, Number num)
    Compares each entry of INDArray x to num and returns the higher quantity.

ScalarMultiplication(INDArray x, Number num)
    Returns the result of multiplying each entry of INDArray x by num.

ScalarReverseDivision(INDArray x, Number num)
    Returns the result of dividing num by each element of INDArray x.

ScalarReverseSubtraction(INDArray x, Number num)
    Returns the result of subtracting each entry of INDArray x from num.

ScalarSet(INDArray x, Number num)
    Sets the value of each entry of INDArray x to num.

ScalarSubtraction(INDArray x, Number num)
    Returns the result of subtracting num from each entry of INDArray x.
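The "reverse" ops swap the operand order: the scalar becomes the left-hand operand. A plain-Java sketch of reverse division (illustrative only, not ND4J code):

```java
// Illustrative sketch of ScalarReverseDivision semantics:
// z[i] = num / x[i], i.e. the scalar is the left-hand operand.
public class ReverseOpSketch {
    static double[] reverseDiv(double[] x, double num) {
        double[] z = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            z[i] = num / x[i];    // note: num / x, not x / num
        }
        return z;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
                reverseDiv(new double[]{1.0, 2.0, 4.0}, 4.0)));
        // prints [4.0, 2.0, 1.0]
    }
}
```

ScalarReverseSubtraction works the same way: num - x[i] rather than x[i] - num.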

For other transforms, please see this page.

If you do not understand the explanation of ND4J's syntax, cannot find a definition for a method, or would like to request that a function be added, please let us know on the community forums.
