Basics

Elementwise Operations And Basic Usage

The basic operations of linear algebra are matrix creation, addition and multiplication. This guide will show you how to perform those operations with ND4J, as well as various advanced transforms.

The Java code below will create a simple 2 x 2 matrix, populate it with integers, and place it in the nd-array variable nd:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray nd = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});

If you print out this array

System.out.println(nd);

you’ll see this

[[1.0 ,3.0]
[2.0 ,4.0]
]

This is a matrix with two rows and two columns; it orders its elements by column, and we’ll call it matrix nd.

A matrix that ordered its elements by row would look like this:

[[1.0 ,2.0]
[3.0 ,4.0]
]

Elementwise scalar operations

The simplest operations you can perform on a matrix are elementwise scalar operations; for example, adding the scalar 1 to each element of the matrix, or multiplying each element by the scalar 5. Let’s try it.

nd.add(1);

This line of code represents this operation:

[[1.0 + 1 ,3.0 + 1]
[2.0 + 1,4.0 + 1]
]

and here is the result

[[2.0 ,4.0]
[3.0 ,5.0]
]

There are two ways to perform any operation in ND4J: destructive and nondestructive – that is, operations that change the underlying data, and operations that simply work with a copy of the data. Destructive operations have an “i” at the end – addi, subi, muli, divi. The “i” means the operation is performed “in place,” directly on the data rather than on a copy, while nd.add() leaves the original untouched and returns the result as a new array.
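
For example, here is a minimal sketch contrasting the two variants (the variable names a and copy are just illustrative):

INDArray a = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});
INDArray copy = a.add(1); // nondestructive: returns a new array; a is unchanged
a.addi(1);                // destructive: modifies a in place
System.out.println(copy.equals(a)); // true – both now hold the incremented values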

Elementwise scalar multiplication looks like this:

nd.mul(5);

And produces this:

[[10.0 ,20.0]
[15.0 ,25.0]
]

Subtraction and division follow a similar pattern:

nd.subi(3);
nd.divi(2);

If you perform all these operations in place on your initial 2 x 2 matrix (or keep reassigning each copy), you should end up with this matrix:

[[3.5 ,8.5]
[6.0 ,11.0]
]

Elementwise vector operations

When performed with simple units like scalars, the operations of arithmetic are unambiguous. But with matrices, addition and multiplication can mean several things. With vector-on-matrix operations, you have to know what kind of addition or multiplication you’re performing in each case.

First, we’ll create a 2 x 2 matrix, a column vector and a row vector.

INDArray nd = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});
INDArray nd2 = Nd4j.create(new float[]{5,6},new int[]{2,1}); //vector as column
INDArray nd3 = Nd4j.create(new float[]{5,6},new int[]{2}); //vector as row

Notice that the shape of each vector is specified by its final parameter: {2,1} means the vector is vertical, with elements populating two rows and one column, while a simple {2} means the vector populates a single row spanning two columns – horizontal. Your first matrix will look like this:

[[1.00, 2.00],
 [3.00, 4.00]]

Here’s how you add a column vector to a matrix:

nd.addColumnVector(nd2);

And here’s the best way to visualize what’s happening. The top element of the column vector combines with the top elements of each column in the matrix, and so forth. The sum matrix represents the march of that column vector across the matrix from left to right, adding itself along the way.

[1.0 ,2.0]     [5.0]    [6.0 ,7.0]
[3.0 ,4.0]  +  [6.0] =  [9.0 ,10.0]

But let’s say you preserved the initial matrix and instead added a row vector.

nd.addRowVector(nd3);

Then your equation is best visualized like this:

[1.0 ,2.0]                   [6.0 ,8.0]
[3.0 ,4.0]  +  [5.0 ,6.0] =  [8.0 ,10.0]

In this case, the leftmost element of the row vector combines with the leftmost elements of each row in the matrix, and so forth. The sum matrix represents that row vector falling down the matrix from top to bottom, adding itself at each level.

So vector addition can lead to different results depending on the orientation of your vector. The same is true for multiplication, subtraction, division, and every other vector operation.

In ND4J, row vectors and column vectors look the same when you print them out with

System.out.println(nd);

They will appear like this.

[5.0 ,6.0]

Don’t be fooled: getting the parameters right at the beginning is crucial. addRowVector and addColumnVector will not produce different results when given the same initial vector, because neither changes a vector’s orientation as row or column.
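
If you need to check a vector’s orientation programmatically, a quick sketch (using the nd2 and nd3 vectors created above) is to inspect the shape rather than the printout:

System.out.println(java.util.Arrays.toString(nd2.shape())); // [2, 1] – column vector
System.out.println(java.util.Arrays.toString(nd3.shape())); // [2] – row vector
System.out.println(nd2.isColumnVector()); // true
System.out.println(nd3.isRowVector());    // true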

Elementwise matrix operations

To carry out scalar and vector elementwise operations, we basically pretend we have two matrices of equal shape. Elementwise scalar multiplication can be represented several ways.

    [1.0 ,3.0]   [c , c]   [1.0 ,3.0]   [1c ,3c]
c * [2.0 ,4.0] = [c , c] * [2.0 ,4.0] = [2c ,4c]

So you see, elementwise operations match the elements of one matrix with their precise counterparts in another matrix. The element in row 1, column 1 of matrix nd combines only with the element in row 1, column 1 of the other matrix.
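
As a minimal sketch, you can make that equal-shape matrix explicit with Nd4j.valueArrayOf, which fills a new array with a single value, and confirm both forms of scalar multiplication agree:

INDArray c = Nd4j.valueArrayOf(new long[]{2,2}, 5.0); // [[5,5],[5,5]]
System.out.println(nd.mul(5).equals(nd.mul(c))); // true – the same elementwise product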

This is clearer when we start elementwise vector operations. We imagine the vector, like the scalar, as populating a matrix of equal dimensions to matrix nd. Below, you can see why row and column vectors lead to different sums.

Column vector:

[1.0 ,3.0]     [5.0]   [1.0 ,3.0]   [5.0 ,5.0]   [6.0 ,8.0]
[2.0 ,4.0]  +  [6.0] = [2.0 ,4.0] + [6.0 ,6.0] = [8.0 ,10.0]

Row vector:

[1.0 ,3.0]                   [1.0 ,3.0]    [5.0 ,6.0]   [6.0 ,9.0]    
[2.0 ,4.0]  +  [5.0 ,6.0] =  [2.0 ,4.0] +  [5.0 ,6.0] = [7.0 ,10.0]

Now you can see why row vectors and column vectors produce different results. They are simply shorthand for different matrices.
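
To make that shorthand concrete, here is a minimal sketch (assuming the column-ordered layout shown above) that writes out the matrix the row vector stands for and checks that plain elementwise addition agrees with addRowVector:

INDArray rowAsMatrix = Nd4j.create(new float[]{5,5,6,6},new int[]{2,2}); // [[5,6],[5,6]]
System.out.println(nd.addRowVector(nd3).equals(nd.add(rowAsMatrix))); // true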

Given that we’ve already been doing elementwise matrix operations implicitly with scalars and vectors, it’s a short hop to do them with more varied matrices:

INDArray nd4 = Nd4j.create(new float[]{5,6,7,8},new int[]{2,2});

nd.add(nd4);

Here’s how you can visualize that command:

[1.0 ,3.0]   [5.0 ,7.0]   [6.0 ,10.0]
[2.0 ,4.0] + [6.0 ,8.0] = [8.0 ,12.0]

Multiplying the initial matrix nd with matrix nd4 works the same way:

nd.muli(nd4);

[1.0 ,3.0]   [5.0 ,7.0]   [5.0 ,21.0]
[2.0 ,4.0] * [6.0 ,8.0] = [12.0 ,32.0]

The term of art for this particular matrix manipulation is a Hadamard product.

These toy matrices are a useful heuristic to introduce the ND4J interface as well as basic ideas in linear algebra. This framework, however, is built to handle billions of parameters in n dimensions (and beyond…).
