Custom Layers

How to implement custom Keras layers for import in Deeplearning4j.

Many advanced models contain custom layers, i.e. layers that aren't included in Keras itself.

You can import such models too, but you will have to provide an implementation of each custom layer yourself, as the exported model file only contains its name.

Usually, you find out that you need to implement a custom layer when you see an exception like one of the following:

org.deeplearning4j.nn.modelimport.keras.exceptions.UnsupportedKerasConfigurationException:
No SameDiff Lambda layer found for Lambda layer lambda_123. You can register a SameDiff Lambda layer using 
KerasLayer.registerLambdaLayer(lambdaLayerName, sameDiffLambdaLayer);

or

org.deeplearning4j.nn.modelimport.keras.exceptions.UnsupportedKerasConfigurationException: 
Unsupported keras layer type LayerName.

Implementing a custom layer for Keras import

There are two ways of implementing a custom layer for Keras import. Which one is right for you depends on the type of layer you need to implement.

  1. SameDiffLambdaLayer: Use this approach if your layer has no weights and just defines a computation. It is most useful when you have to define a custom layer because you are using a lambda in your model definition. This is the approach to use when you've gotten the exception about no lambda layer being found.

  2. KerasLayer: Use this approach if your layer needs its own weights. It is most useful when you have to define a complex layer that is more than just a simple computation. This is the approach to use when you've gotten the exception about an unsupported layer type.

SameDiffLambdaLayer

Using a SameDiffLambdaLayer is pretty easy. You create a new class that extends it, and override the defineLayer and getOutputType methods.

import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaLayer;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;

public class TimesThreeLambda extends SameDiffLambdaLayer {
    // Define the layer's computation as a SameDiff graph: output = input * 3
    @Override
    public SDVariable defineLayer(SameDiff sd, SDVariable x) {
        return x.mul(3);
    }

    // The output has the same shape and type as the input
    @Override
    public InputType getOutputType(int layerIndex, InputType inputType) {
        return inputType;
    }
}

This simple lambda layer just multiplies its input by 3.

defineLayer will be called only once, to create the SameDiff graph that serves as the definition of this layer. Do not rely on the input size or other non-static sizes, such as the batch size, when defining the layer, or it may fail later on. For example, baking a concrete batch size into a reshape would break as soon as the model is used with a different batch size.

After defining your layer, you have to register it to make it available on import.

KerasLayer.registerLambdaLayer("lambda_2", new TimesThreeLambda());

The correct name for your lambda layer depends on the model you are importing. Since you most likely learned about the missing lambda layer from an exception, that exception will already have given you the proper name.
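
After registration, the model can be imported as usual. A minimal sketch, assuming a functional-API model saved to a hypothetical model.h5 that contains the lambda_2 layer:

// Register the lambda layer first, then import the model.
// "model.h5" is a hypothetical file name; use the path to your own exported model.
KerasLayer.registerLambdaLayer("lambda_2", new TimesThreeLambda());
ComputationGraph model = KerasModelImport.importKerasModelAndWeights("model.h5");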

KerasLayer

Implementing a full layer with weights is more complex than defining a lambda layer. You will have to create a new class that extends KerasLayer, reads the configuration of the Keras layer, and sets up the equivalent Deeplearning4j layer accordingly.
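
What such a wrapper looks like depends entirely on the layer you are mapping. The following is only a rough sketch of the general shape; the class name, the config handling, and the DL4J layer it builds are hypothetical:

import java.util.Map;

import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.modelimport.keras.KerasLayer;
import org.deeplearning4j.nn.modelimport.keras.exceptions.InvalidKerasConfigurationException;
import org.deeplearning4j.nn.modelimport.keras.exceptions.UnsupportedKerasConfigurationException;

public class KerasMyCustomLayer extends KerasLayer {

    public KerasMyCustomLayer(Map<String, Object> layerConfig)
            throws InvalidKerasConfigurationException, UnsupportedKerasConfigurationException {
        super(layerConfig);
        // Read whatever settings your layer needs from layerConfig here and
        // build the matching DL4J layer, e.g.:
        // this.layer = new MyCustomDl4jLayer(...);   // hypothetical DL4J layer
    }

    @Override
    public InputType getOutputType(InputType... inputType) {
        // Report the output shape for the given input shape; many layers
        // simply pass the input type through unchanged.
        return inputType[0];
    }
}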

After you've defined your layer, you will have to register it to make it available on import:

KerasLayer.registerCustomLayer("PoolHelper", KerasPoolHelper.class);

Again, the appropriate name will be apparent from the exception that notified you about needing to implement the custom layer in the first place.

For examples of how this was done, take a look at KerasLRN and KerasPoolHelper, which are custom layers that were needed to import GoogLeNet.