A vector, that column of numbers we feed into neural nets, is simply a special case of a more general mathematical structure called a tensor. A tensor is a multidimensional array.
You are already familiar with a matrix composed of rows and columns: the rows extend horizontally along the x axis and the columns vertically along the y axis. Each axis is a dimension. Tensors can have additional dimensions.
Tensors also have a so-called rank: a scalar, or single number, is of rank 0; a vector is rank 1; a matrix is rank 2; and entities of rank 3 and above are all simply called tensors.
It may be helpful to think of a scalar as a point, a vector as a line, a matrix as a plane, and tensors as objects of three dimensions or more. A matrix has rows and columns, two dimensions, and therefore is of rank 2. A three-dimensional tensor, such as those we use to represent color images, has channels, rows and columns, and therefore counts as rank 3.
As mathematical objects with multiple dimensions, tensors have a shape, and we specify that shape by treating tensors as n-dimensional arrays.
With ND4J, we do that by creating a new NDArray and feeding it data, shape, and order as its parameters. In pseudo code, this would be
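    arr = ndarray(data, shape, order)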
In real code, a line like this
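    // A sketch using ND4J's Nd4j.create(data, shape, ordering) factory; the variable
    // name is illustrative. Assumes imports org.nd4j.linalg.factory.Nd4j and
    // org.nd4j.linalg.api.ndarray.INDArray.
    INDArray arr = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'c');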
creates an array with four elements, whose shape is 2 by 2, and whose order is “row major”, or rows first, which is the default in C. (In contrast, Fortran uses “column major” ordering, which could be specified by passing ‘f’ as the third parameter.) The distinction between the two orderings, for the array created above, is best illustrated with a table:

    Row-major (C)        Column-major (Fortran)
    [1,2]                [1,3]
    [3,4]                [2,4]
Once we create an n-dimensional array, we may want to work with slices of it. Rather than copying the data, which is expensive, we can simply “view” multi-dimensional slices. A slice of array “a” could be defined like this:
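    a[0:5, 3:5, 6:8]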
which would give you the first 5 channels, rows 3 to 4, and columns 6 to 7, and so forth for n dimensions, with each individual dimension’s slice written as start:end, where the index before the colon is included and the index after it is excluded.
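In ND4J itself, such a view can be taken with indexing objects rather than bracket syntax. The following is a minimal sketch, assuming the NDArrayIndex.interval(begin, end) helper (end exclusive) and a hypothetical rank-3 array named “a” built with Nd4j.zeros; it is not the library’s only slicing syntax.

    // A sketch; imports assumed: org.nd4j.linalg.factory.Nd4j,
    // org.nd4j.linalg.api.ndarray.INDArray, org.nd4j.linalg.indexing.NDArrayIndex.
    INDArray a = Nd4j.zeros(8, 8, 10);      // hypothetical array: channels, rows, columns
    INDArray view = a.get(
            NDArrayIndex.interval(0, 5),    // first 5 channels
            NDArrayIndex.interval(3, 5),    // rows 3 to 4
            NDArrayIndex.interval(6, 8));   // columns 6 to 7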
Now, while it is useful to imagine matrices as two-dimensional planes and 3-D tensors as cubic volumes, we store all tensors as a linear buffer. That is, they are all flattened to a row of numbers.
For that linear buffer, we specify something called stride. Stride tells the computation layer how to interpret the flattened representation. It is the number of elements you skip in the buffer to get to the next channel or row or column. There’s a stride for each dimension.
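For the 2 by 2 array above, moving to the next row means skipping 2 elements of the buffer in row-major order but only 1 in column-major order. As a rough sketch of how this looks in code, assuming the Nd4j.create factory shown earlier and the INDArray stride() accessor:

    // Same four numbers, two interpretations of the buffer.
    INDArray c = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'c');  // row-major
    INDArray f = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'f');  // column-major

    // Strides per dimension (rows, columns); expected under these assumptions:
    System.out.println(java.util.Arrays.toString(c.stride()));  // [2, 1]
    System.out.println(java.util.Arrays.toString(f.stride()));  // [1, 2]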
Here’s a brief video summarizing how tensors are converted into linear byte buffers for ND4J.
The word tensor derives from the Latin tendere, or “to stretch”; therefore, tensor relates to that which stretches, the stretcher. Tensor was introduced to English from the German in 1915, after being coined by Woldemar Voigt in 1898. The mathematical object is called a tensor because an early application of the idea was the study of materials stretching under tension.