Tensors

A vector, that column of numbers we feed into neural nets, is simply a special case of a more general mathematical structure called a tensor. A tensor is a multidimensional array.

You are already familiar with a matrix composed of rows and columns: the rows extend along the y axis and the columns along the x axis. Each axis is a dimension. Tensors have additional dimensions.

Tensors also have a so-called rank: a scalar, or single number, is of rank 0; a vector is rank 1; a matrix is rank 2; and entities of rank 3 and above are all simply called tensors.

It may be helpful to think of a scalar as a point, a vector as a line, a matrix as a plane, and tensors as objects of three dimensions or more. A matrix has rows and columns, two dimensions, and therefore is of rank 2. A three-dimensional tensor, such as those we use to represent color images, has channels, rows and columns, and therefore counts as rank 3.
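As a concrete illustration, here is a rank-3 tensor created with the ND4J API introduced below; the 3 x 28 x 28 shape is a hypothetical 3-channel, 28 x 28 pixel image:

```java
import java.util.Arrays;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// A rank-3 tensor shaped like a color image: channels, rows, columns (initialized to zeros)
INDArray image = Nd4j.create(new int[]{3, 28, 28});

System.out.println(image.rank());                    // 3
System.out.println(Arrays.toString(image.shape()));  // [3, 28, 28]
```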

As mathematical objects with multiple dimensions, tensors have a shape, and we specify that shape by treating tensors as n-dimensional arrays.

With ND4J, we do that by creating a new NDArray and feeding it data, shape, and order as its parameters. In pseudocode, this would be:

nd4j.createArray(data, shape, order)

In real code, this line

INDArray arr = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'c');

creates an array with four elements, whose shape is 2 by 2, and whose order is “row major”, or rows first, which is the default in C. (In contrast, Fortran uses “column major” ordering, which can be specified by passing ‘f’ as the third parameter.) The distinction between the two orderings, for the array created above, is best illustrated with a table:

Row-major (C)       Column-major (Fortran)
[1,2]               [1,3]
[3,4]               [2,4]
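
To see the difference, you can create the same data with both orderings and print the results. This is a short sketch; the exact print formatting may vary by ND4J version:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

float[] data = {1, 2, 3, 4};
int[] shape = {2, 2};

INDArray rowMajor = Nd4j.create(data, shape, 'c');  // fills rows first:    [[1, 2], [3, 4]]
INDArray colMajor = Nd4j.create(data, shape, 'f');  // fills columns first: [[1, 3], [2, 4]]

System.out.println(rowMajor);
System.out.println(colMajor);
```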

Once we create an n-dimensional array, we may want to work with slices of it. Rather than copying the data, which is expensive, we can simply “view” multi-dimensional slices. A slice of array “a” could be defined like this:

a[0:5,3:4,6:7]

which would give you the first 5 channels, rows 3 to 4, and columns 6 to 7, and so forth for n dimensions, with each individual dimension’s slice bounded by a start index before the colon and an end index after it.
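
In ND4J, a view like this can be taken with NDArrayIndex intervals. The sketch below mirrors the pseudocode above; note that interval upper bounds are exclusive, and the tensor’s sizes here are hypothetical:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

// A rank-3 tensor: 8 channels, 10 rows, 12 columns (hypothetical sizes)
INDArray a = Nd4j.rand(new int[]{8, 10, 12});

// View of channels 0 to 4, row 3, and column 6; the underlying data is not copied
INDArray view = a.get(
        NDArrayIndex.interval(0, 5),
        NDArrayIndex.interval(3, 4),
        NDArrayIndex.interval(6, 7));
```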

Linear Buffer

Now, while it is useful to imagine matrices as two-dimensional planes and 3-D tensors as cubic volumes, we store all tensors as a linear buffer. That is, they are all flattened to a row of numbers.

For that linear buffer, we specify something called stride. Stride tells the computation layer how to interpret the flattened representation. It is the number of elements you skip in the buffer to get to the next channel or row or column. There’s a stride for each dimension.
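
Continuing with the 2 x 2 array created earlier, the shape and strides can be inspected directly. This is a minimal sketch; the exact stride values depend on the ordering:

```java
import java.util.Arrays;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray arr = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'c');

// With 'c' ordering, skip 2 buffer elements to reach the next row and 1 to reach the next column
System.out.println(Arrays.toString(arr.shape()));   // [2, 2]
System.out.println(Arrays.toString(arr.stride()));  // [2, 1]
```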

Here’s a brief video summarizing how tensors are converted into linear byte buffers for ND4J.

Additional Resources and Definitions

The word tensor derives from the Latin tendere, or “to stretch”; therefore, tensor relates to that which stretches, the stretcher. Tensor was introduced to English from the German in 1915, after being coined by Woldemar Voigt in 1898. The mathematical object is called a tensor because an early application of the idea was the study of materials stretching under tension.

Tensors are generalizations of scalars (that have no indices), vectors (that have exactly one index), and matrices (that have exactly two indices) to an arbitrary number of indices. - Mathworld

tensor, n. a mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space.

  • rank
  • Multidimensional Arrays
  • Tensor on Wikipedia