Activations

Supported Keras activations.

We support all Keras activation functions, namely:

  • softmax

  • elu

  • selu

  • softplus

  • softsign

  • relu

  • tanh

  • sigmoid

  • hard_sigmoid

  • linear

The mapping of Keras to DL4J activation functions is defined in KerasActivationUtils.
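
As a concrete illustration, a Keras Dense layer with activation='relu' imports as a DL4J layer roughly equivalent to the following manual configuration (layer sizes here are arbitrary):

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.activations.Activation;

// Keras activation='relu' corresponds to DL4J's Activation.RELU.
DenseLayer dense = new DenseLayer.Builder()
        .nIn(784).nOut(256)          // arbitrary example sizes
        .activation(Activation.RELU) // Keras: activation='relu'
        .build();
```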

Regularizers

Supported Keras regularizers.

All Keras regularizers are supported by DL4J model import:

  • l1

  • l2

  • l1_l2

Mapping of regularizers can be found in KerasRegularizerUtils.
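
For instance, a layer with Keras kernel_regularizer=l1_l2(l1=0.01, l2=0.001) imports with both penalties set; a minimal sketch of the DL4J equivalent (coefficients and sizes are arbitrary):

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;

// Keras l1_l2 sets both coefficients on the DL4J layer builder.
DenseLayer dense = new DenseLayer.Builder()
        .nIn(100).nOut(10) // arbitrary example sizes
        .l1(0.01)          // Keras l1
        .l2(0.001)         // Keras l2
        .build();
```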

Initializers

Supported Keras weight initializers.

DL4J supports all available Keras initializers, namely:

  • Zeros

  • Ones

  • Constant

  • RandomNormal

  • RandomUniform

  • TruncatedNormal

  • VarianceScaling

  • Orthogonal

  • Identity

  • lecun_uniform

  • lecun_normal

  • glorot_normal

  • glorot_uniform

  • he_normal

  • he_uniform

The mapping of Keras to DL4J initializers can be found in KerasInitilizationUtils.
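
For example, Keras kernel_initializer='glorot_uniform' arrives as a Glorot-style uniform initialization, roughly equivalent to this manual DL4J configuration (sizes are arbitrary):

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.weights.WeightInit;

// Keras glorot_uniform maps to a Glorot/Xavier uniform initializer.
DenseLayer dense = new DenseLayer.Builder()
        .nIn(64).nOut(32) // arbitrary example sizes
        .weightInit(WeightInit.XAVIER_UNIFORM)
        .build();
```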

Constraints

Supported Keras constraints.

All Keras constraints are supported:

  • max_norm

  • non_neg

  • unit_norm

  • min_max_norm

Mapping Keras to DL4J constraints happens in KerasConstraintUtils.
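
As an example, Keras kernel_constraint=max_norm(2.0) imports as a DL4J weight constraint, roughly equivalent to this manual configuration (sizes are arbitrary):

```java
import org.deeplearning4j.nn.conf.constraint.MaxNormConstraint;
import org.deeplearning4j.nn.conf.layers.DenseLayer;

// Keras max_norm(2.0) becomes a MaxNormConstraint on the layer weights.
DenseLayer dense = new DenseLayer.Builder()
        .nIn(20).nOut(10) // arbitrary example sizes
        .constrainWeights(new MaxNormConstraint(2.0, 1))
        .build();
```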

Optimizers

Supported Keras optimizers.

All standard Keras optimizers are supported (a configuration sketch follows the list), but importing custom TensorFlow optimizers won't work:

  • SGD

  • RMSprop

  • Adagrad

  • Adadelta

  • Adam

  • Adamax

  • Nadam
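
For example, a model compiled in Keras with Adam(lr=1e-3) imports with the matching DL4J updater, roughly equivalent to:

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.nd4j.linalg.learning.config.Adam;

// Keras Adam(lr=1e-3) maps to DL4J's Adam updater.
NeuralNetConfiguration.Builder conf = new NeuralNetConfiguration.Builder()
        .updater(new Adam(1e-3));
```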

Losses

Supported Keras loss functions.

DL4J supports all available Keras losses (except for logcosh), namely:

  • mean_squared_error

  • mean_absolute_error

  • mean_absolute_percentage_error

  • mean_squared_logarithmic_error

  • squared_hinge

  • hinge

  • categorical_hinge

  • categorical_crossentropy

  • sparse_categorical_crossentropy

  • binary_crossentropy

  • kullback_leibler_divergence

  • poisson

  • cosine_proximity

The mapping of Keras loss functions can be found in KerasLossUtils.
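
To illustrate, a Keras softmax output trained with categorical_crossentropy imports as a DL4J output layer with multi-class cross entropy (MCXENT), roughly like this (sizes are arbitrary):

```java
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

// Keras categorical_crossentropy + softmax maps to MCXENT + SOFTMAX.
OutputLayer out = new OutputLayer.Builder(LossFunction.MCXENT)
        .nIn(128).nOut(10) // arbitrary example sizes
        .activation(Activation.SOFTMAX)
        .build();
```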

Supported Features

Supported Keras features.

While not every concept in DL4J has an equivalent in Keras and vice versa, many of the key concepts can be matched. Importing Keras models into DL4J is done in our deeplearning4j-modelimport module. Below is a comprehensive list of currently supported features; a minimal import sketch follows the list.

  • Layers

  • Losses

  • Activations

  • Initializers

  • Regularizers

  • Constraints

  • Metrics

  • Optimizers
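
A minimal import sketch using the module's two entry points ("model.h5" is a placeholder path for a model saved with Keras' model.save):

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportExample {
    public static void main(String[] args) throws Exception {
        // A Keras Sequential model imports as a MultiLayerNetwork.
        MultiLayerNetwork sequential =
                KerasModelImport.importKerasSequentialModelAndWeights("model.h5");

        // A functional-API model imports as a ComputationGraph.
        ComputationGraph functional =
                KerasModelImport.importKerasModelAndWeights("model.h5");
    }
}
```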

Layers

Mapping Keras to DL4J layers is done in the layers sub-module of model import. The structure of this project loosely reflects the structure of Keras.

Core Layers

  • ✅ Dense

  • ✅ Activation

  • ✅ Dropout

  • ✅ Flatten

  • ✅ Reshape

  • ✅ Merge

  • ✅ Permute

  • ✅ RepeatVector

  • ✅ Lambda (the layer's computation must be registered in DL4J; see the sketch after this list)

  • ❌ ActivityRegularization

  • ✅ Masking

  • ✅ SpatialDropout1D

  • ✅ SpatialDropout2D

  • ✅ SpatialDropout3D
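
Lambda layers contain arbitrary Python code that cannot be deserialized from a saved model, so their computation must be registered before import. A minimal sketch, assuming a hypothetical Keras Lambda layer named "times_two" that doubles its input:

```java
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaLayer;
import org.deeplearning4j.nn.modelimport.keras.KerasLayer;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;

// Register the DL4J-side computation for the hypothetical "times_two"
// Lambda layer before calling KerasModelImport.
KerasLayer.registerLambdaLayer("times_two", new SameDiffLambdaLayer() {
    @Override
    public SDVariable defineLayer(SameDiff sd, SDVariable input) {
        return input.mul(2.0); // mirrors the Keras lambda x: 2 * x
    }

    @Override
    public InputType getOutputType(int layerIndex, InputType inputType) {
        return inputType; // the operation is shape-preserving
    }
});
```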

Convolutional Layers

  • ✅ Conv1D

  • ✅ Conv2D

  • ✅ Conv3D

  • ✅ AtrousConvolution1D

  • ✅ AtrousConvolution2D

  • ❌ SeparableConv1D

  • ✅ SeparableConv2D

  • ✅ Conv2DTranspose

  • ❌ Conv3DTranspose

  • ✅ Cropping1D

  • ✅ Cropping2D

  • ✅ Cropping3D

  • ✅ UpSampling1D

  • ✅ UpSampling2D

  • ✅ UpSampling3D

  • ✅ ZeroPadding1D

  • ✅ ZeroPadding2D

  • ✅ ZeroPadding3D

Pooling Layers

  • ✅ MaxPooling1D

  • ✅ MaxPooling2D

  • ✅ MaxPooling3D

  • ✅ AveragePooling1D

  • ✅ AveragePooling2D

  • ✅ AveragePooling3D

  • ✅ GlobalMaxPooling1D

  • ✅ GlobalMaxPooling2D

  • ✅ GlobalMaxPooling3D

  • ✅ GlobalAveragePooling1D

  • ✅ GlobalAveragePooling2D

  • ✅ GlobalAveragePooling3D

Locally-connected Layers

  • ✅ LocallyConnected1D

  • ✅ LocallyConnected2D

Recurrent Layers

  • ✅ SimpleRNN

  • ❌ GRU

  • ✅ LSTM

  • ❌ ConvLSTM2D

Embedding Layers

  • ✅ Embedding

Merge Layers

  • ✅ Add / add

  • ✅ Multiply / multiply

  • ✅ Subtract / subtract

  • ✅ Average / average

  • ✅ Maximum / maximum

  • ✅ Concatenate / concatenate

  • ❌ Dot / dot

Advanced Activation Layers

  • ✅ LeakyReLU

  • ✅ PReLU

  • ✅ ELU

  • ✅ ThresholdedReLU

Normalization Layers

  • ✅ BatchNormalization

Noise Layers

  • ✅ GaussianNoise

  • ✅ GaussianDropout

  • ✅ AlphaDropout

Layer Wrappers

  • ❌ TimeDistributed

  • ✅ Bidirectional

Losses

  • ✅ mean_squared_error

  • ✅ mean_absolute_error

  • ✅ mean_absolute_percentage_error

  • ✅ mean_squared_logarithmic_error

  • ✅ squared_hinge

  • ✅ hinge

  • ✅ categorical_hinge

  • ❌ logcosh

  • ✅ categorical_crossentropy

  • ✅ sparse_categorical_crossentropy

  • ✅ binary_crossentropy

  • ✅ kullback_leibler_divergence

  • ✅ poisson

  • ✅ cosine_proximity

Activations

  • ✅ softmax

  • ✅ elu

  • ✅ selu

  • ✅ softplus

  • ✅ softsign

  • ✅ relu

  • ✅ tanh

  • ✅ sigmoid

  • ✅ hard_sigmoid

  • ✅ linear

Initializers

  • ✅ Zeros

  • ✅ Ones

  • ✅ Constant

  • ✅ RandomNormal

  • ✅ RandomUniform

  • ✅ TruncatedNormal

  • ✅ VarianceScaling

  • ✅ Orthogonal

  • ✅ Identity

  • ✅ lecun_uniform

  • ✅ lecun_normal

  • ✅ glorot_normal

  • ✅ glorot_uniform

  • ✅ he_normal

  • ✅ he_uniform

Regularizers

  • ✅ l1

  • ✅ l2

  • ✅ l1_l2

Constraints

  • ✅ max_norm

  • ✅ non_neg

  • ✅ unit_norm

  • ✅ min_max_norm

Optimizers

  • ✅ SGD

  • ✅ RMSprop

  • ✅ Adagrad

  • ✅ Adadelta

  • ✅ Adam

  • ✅ Adamax

  • ✅ Nadam

  • ❌ TFOptimizer