Convolutional Layers
These layers are the building blocks of convolutional neural networks (CNNs).

Available layers

Convolution1D

[source]
1D convolution layer. Expects input activations of shape [minibatch, channels, sequenceLength].
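A rough, illustrative sketch using the Convolution1DLayer class (builder overloads can differ between DL4J versions, and the kernel size and channel counts below are arbitrary example values):

import org.deeplearning4j.nn.conf.layers.Convolution1DLayer;
import org.nd4j.linalg.activations.Activation;

// Convolves over sequences of shape [minibatch, channels, sequenceLength].
// All sizes below are illustrative, not required values.
Convolution1DLayer conv1d = new Convolution1DLayer.Builder()
        .kernelSize(5)                 // convolve 5 time steps at a time
        .stride(1)
        .nIn(32)                       // input channels
        .nOut(64)                      // output channels (number of filters)
        .activation(Activation.RELU)
        .build();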

Convolution2D

[source]
2D convolution layer
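A hedged example of wiring a 2D convolution layer into a network configuration; the layer sizes, the 28x28 single-channel input, and the output layer are illustrative assumptions, not requirements:

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Minimal configuration with a single 2D convolution layer; values are examples only.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new ConvolutionLayer.Builder(3, 3)        // 3x3 kernel
                .nIn(1)                                  // input channels (e.g. grayscale)
                .nOut(16)                                // number of filters
                .stride(1, 1)
                .padding(1, 1)
                .activation(Activation.RELU)
                .build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nOut(10)
                .activation(Activation.SOFTMAX)
                .build())
        .setInputType(InputType.convolutional(28, 28, 1))  // height, width, channels
        .build();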

Convolution3D

[source]
3D convolution layer configuration
hasBias
public boolean hasBias()
Returns true if the layer includes a bias parameter (bias is enabled by default).
kernelSize
public Builder kernelSize(int... kernelSize)
Set kernel size for 3D convolutions in (depth, height, width) order.
    param kernelSize kernel size
    return 3D convolution layer builder
stride
public Builder stride(int... stride)
Set stride size for 3D convolutions in (depth, height, width) order.
    param stride stride size
    return 3D convolution layer builder
padding
public Builder padding(int... padding)
Set padding size for 3D convolutions in (depth, height, width) order.
    param padding padding size
    return 3D convolution layer builder
dilation
public Builder dilation(int... dilation)
Set dilation size for 3D convolutions in (depth, height, width) order.
    param dilation dilation size
    return 3D convolution layer builder
dataFormat
public Builder dataFormat(DataFormat dataFormat)
The data format for input and output activations; defaults to NCDHW. For NCDHW (also known as 'channels first'), activations (in/out) have shape [minibatch, channels, depth, height, width]. For NDHWC ('channels last'), activations have shape [minibatch, depth, height, width, channels].
    param dataFormat Data format to use for activations
setKernelSize
public void setKernelSize(int... kernelSize)
Set kernel size for 3D convolutions in (depth, height, width) order
    param kernelSize kernel size
setStride
public void setStride(int... stride)
Set stride size for 3D convolutions in (depth, height, width) order.
    param stride stride size
setPadding
public void setPadding(int... padding)
Set padding size for 3D convolutions in (depth, height, width) order.
    param padding padding size
setDilation
public void setDilation(int... dilation)
Set dilation size for 3D convolutions in (depth, height, width) order.
    param dilation dilation size
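Putting the builder methods above together, a 3D convolution layer might be configured as follows (a sketch only: the sizes and channel counts are arbitrary example values, and exact method availability can vary by version):

import org.deeplearning4j.nn.conf.layers.Convolution3D;
import org.nd4j.linalg.activations.Activation;

// 3D convolution; kernel, stride, padding and dilation are in (depth, height, width) order.
Convolution3D conv3d = new Convolution3D.Builder()
        .kernelSize(3, 3, 3)
        .stride(1, 1, 1)
        .padding(1, 1, 1)
        .dilation(1, 1, 1)
        .dataFormat(Convolution3D.DataFormat.NCDHW)  // [minibatch, channels, depth, height, width]
        .nIn(1)                                      // example input channel count
        .nOut(8)                                     // example number of filters
        .activation(Activation.RELU)
        .build();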

Deconvolution2D

[source]
2D deconvolution layer configuration
Deconvolutions are also known as transpose convolutions or fractionally strided convolutions. In essence, a deconvolution swaps the forward and backward passes of a regular 2D convolution.
For an intuitive guide to convolution arithmetic and shapes, see: https://arxiv.org/abs/1603.07285v1
hasBias
public boolean hasBias()
Returns true if the layer includes a bias parameter. For this layer, nIn is the number of input channels and nOut is the number of filters (i.e. the output channels); the builder also specifies the kernel size, stride and padding.
convolutionMode
public Builder convolutionMode(ConvolutionMode convolutionMode)
Set the convolution mode for the convolution layer. See ConvolutionMode for more details.
    param convolutionMode Convolution mode for layer
kernelSize
public Builder kernelSize(int... kernelSize)
Size of the convolution rows/columns
    param kernelSize the height and width of the kernel
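An illustrative sketch of a transpose convolution used for 2x upsampling; the kernel size, stride, and channel counts are arbitrary example values:

import org.deeplearning4j.nn.conf.ConvolutionMode;
import org.deeplearning4j.nn.conf.layers.Deconvolution2D;
import org.nd4j.linalg.activations.Activation;

// Transpose ("de-") convolution with stride 2, roughly doubling the spatial dimensions.
Deconvolution2D deconv = new Deconvolution2D.Builder()
        .kernelSize(2, 2)
        .stride(2, 2)
        .convolutionMode(ConvolutionMode.Same)
        .nIn(64)     // input channels (example value)
        .nOut(32)    // number of filters = output channels (example value)
        .activation(Activation.RELU)
        .build();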

Cropping1D

[source]
Cropping layer for convolutional (1d) neural networks. Allows cropping to be done separately for top/bottom
getOutputType
public InputType getOutputType(int layerIndex, InputType inputType)
    param cropTopBottom Amount of cropping to apply to both the top and the bottom of the input activations
setCropping
public void setCropping(int... cropping)
Cropping amount for top/bottom (in that order). Must be length 1 or 2 array.
build
public Cropping1D build()
    param cropping Cropping amount for top/bottom (in that order). Must be length 1 or 2 array.
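A minimal illustrative sketch (assuming a Builder overload that takes the top and bottom cropping amounts directly; exact overloads may vary by version):

import org.deeplearning4j.nn.conf.layers.convolutional.Cropping1D;

// Remove 2 steps from the start ("top") and 3 steps from the end ("bottom") of each sequence.
Cropping1D crop1d = new Cropping1D.Builder(2, 3).build();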

Cropping2D

[source]
Cropping layer for convolutional (2d) neural networks. Allows cropping to be done separately for top/bottom/left/right
getOutputType
public InputType getOutputType(int layerIndex, InputType inputType)
    param cropTopBottom Amount of cropping to apply to both the top and the bottom of the input activations
    param cropLeftRight Amount of cropping to apply to both the left and the right of the input activations
setCropping
public void setCropping(int... cropping)
Cropping amount for top/bottom/left/right (in that order). A length 4 array.
build
public Cropping2D build()
    param cropping Cropping amount for top/bottom/left/right (in that order). Must be length 4 array.
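A minimal illustrative sketch (assuming a Builder overload that takes the four cropping amounts in top/bottom/left/right order; the values are arbitrary):

import org.deeplearning4j.nn.conf.layers.convolutional.Cropping2D;

// Crop 2 pixels from the top and bottom, and 4 pixels from the left and right.
Cropping2D crop2d = new Cropping2D.Builder(2, 2, 4, 4).build();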

Cropping3D

[source]
Cropping layer for convolutional (3d) neural networks. Allows cropping to be done separately for upper and lower bounds of depth, height and width dimensions.
getOutputType
public InputType getOutputType(int layerIndex, InputType inputType)
    param cropDepth Amount of cropping to apply to both depth boundaries of the input activations
    param cropHeight Amount of cropping to apply to both height boundaries of the input activations
    param cropWidth Amount of cropping to apply to both width boundaries of the input activations
setCropping
public void setCropping(int... cropping)
Cropping amount, a length 6 array, i.e. crop left depth, crop right depth, crop left height, crop right height, crop left width, crop right width
build
public Cropping3D build()
    param cropping Cropping amount, must be length 3 or 6 array, i.e. either crop depth, crop height, crop width or crop left depth, crop right depth, crop left height, crop right height, crop left width, crop right width
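A minimal illustrative sketch of symmetric 3D cropping (assuming a Builder overload that takes the three per-dimension cropping amounts in depth/height/width order; the values are arbitrary):

import org.deeplearning4j.nn.conf.layers.convolutional.Cropping3D;

// Crop 1 voxel from each depth boundary, 2 from each height boundary, 2 from each width boundary.
Cropping3D crop3d = new Cropping3D.Builder(1, 2, 2).build();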