bilstmLayer

Bidirectional long short-term memory (BiLSTM) layer

Description

A bidirectional LSTM (BiLSTM) layer learns bidirectional long-term dependencies between time steps of time series or sequence data. These dependencies can be useful when you want the network to learn from the complete time series at each time step.

Creation

Description

layer = bilstmLayer(numHiddenUnits) creates a bidirectional LSTM layer and sets the NumHiddenUnits property.

layer = bilstmLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learn Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.
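
For example, a minimal sketch of this syntax (the property values shown are illustrative) creates a layer with 150 hidden units that outputs only the last time step and has a custom name:

layer = bilstmLayer(150,'OutputMode','last','Name','bilstm_last');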

Properties

BiLSTM

Number of hidden units (also known as the hidden size), specified as a positive integer.

The number of hidden units corresponds to the amount of information remembered between time steps (the hidden state). The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data. This value can vary from a few dozen to a few thousand.

The hidden state does not limit the number of time steps that are processed in an iteration. To split your sequences into smaller sequences for training, use the 'SequenceLength' option in trainingOptions.

Example: 200
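
For example, a sketch of a trainingOptions call (the solver and values are illustrative) that splits the training sequences into pieces of at most 100 time steps:

options = trainingOptions('adam', ...
    'MaxEpochs',30, ...
    'SequenceLength',100);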

Format of output, specified as one of the following:

  • 'sequence' – Output the complete sequence.

  • 'last' – Output the last time step of the sequence.

Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.

Example: 100
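
For example, a sketch of a sequence-to-label classification architecture (the sizes and class count are illustrative) in which the BiLSTM layer outputs only the last time step. InputSize remains 'auto' and is assigned from the sequence input layer at training time:

numFeatures = 12;
numHiddenUnits = 100;
numClasses = 5;
layers = [ ...
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];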

Activations

Activation function to update the cell and hidden state, specified as one of the following:

  • 'tanh' – Use the hyperbolic tangent function (tanh).

  • 'softsign' – Use the softsign function softsign(x) = x / (1 + |x|).

The layer uses this option as the function σc in the calculations to update the cell and hidden state. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.

Activation function to apply to the gates, specified as one of the following:

  • 'sigmoid' – Use the sigmoid function σ(x) = (1 + e^(-x))^(-1).

  • 'hard-sigmoid' – Use the hard sigmoid function

    σ(x) = 0            if x < -2.5
           0.2x + 0.5   if -2.5 ≤ x ≤ 2.5
           1            if x > 2.5

The layer uses this option as the function σg in the calculations for the layer gates.
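
For example, a sketch (the number of hidden units is illustrative) that selects the softsign state activation and the hard sigmoid gate activation:

layer = bilstmLayer(100, ...
    'StateActivationFunction','softsign', ...
    'GateActivationFunction','hard-sigmoid');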

State

Initial value of the cell state, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the cell state at time step 0.

After setting this property, calls to the resetState function set the cell state to this value.

Initial value of the hidden state, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the hidden state at time step 0.

After setting this property, calls to the resetState function set the hidden state to this value.
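
For example, a sketch (the zero values are illustrative) that sets both initial states to vectors of the documented 2*NumHiddenUnits-by-1 size:

numHiddenUnits = 100;
layer = bilstmLayer(numHiddenUnits);
layer.CellState   = zeros(2*numHiddenUnits,1);   % initial cell state at time step 0
layer.HiddenState = zeros(2*numHiddenUnits,1);   % initial hidden state at time step 0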

Parameters and Initialization

Function to initialize the input weights, specified as one of the following:

  • 'glorot' – Initialize the input weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 8*NumHiddenUnits.

  • 'he' – Initialize the input weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.

  • 'orthogonal' – Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. [3]

  • 'narrow-normal' – Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • 'zeros' – Initialize the input weights with zeros.

  • 'ones' – Initialize the input weights with ones.

  • Function handle – Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.

The layer only initializes the input weights when the InputWeights property is empty.

Data Types: char | string | function_handle
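
For example, a sketch (the scaling factor is illustrative) that initializes the input weights with a custom function handle drawing from a scaled normal distribution:

layer = bilstmLayer(100, ...
    'InputWeightsInitializer',@(sz) 0.01*randn(sz));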

Function to initialize the recurrent weights, specified as one of the following:

  • 'orthogonal' – Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. [3]

  • 'glorot' – Initialize the recurrent weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 8*NumHiddenUnits.

  • 'he' – Initialize the recurrent weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.

  • 'narrow-normal' – Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • 'zeros' – Initialize the recurrent weights with zeros.

  • 'ones' – Initialize the recurrent weights with ones.

  • Function handle – Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.

The layer only initializes the recurrent weights when the RecurrentWeights property is empty.

Data Types: char | string | function_handle

Function to initialize the bias, specified as one of the following:

  • 'unit-forget-gate' – Initialize the forget gate bias with ones and the remaining biases with zeros.

  • 'narrow-normal' – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • 'ones' – Initialize the bias with ones.

  • Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

Data Types: char | string | function_handle
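
For example, a sketch (the particular choices are illustrative) that selects named initializers for all three sets of learnable parameters:

layer = bilstmLayer(100, ...
    'InputWeightsInitializer','he', ...
    'RecurrentWeightsInitializer','orthogonal', ...
    'BiasInitializer','unit-forget-gate');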

Input weights, specified as a matrix.

The input weight matrix is a concatenation of the eight input weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

The input weights are learnable parameters. When training a network, if InputWeights is nonempty, then trainNetwork uses the InputWeights property as the initial value. If InputWeights is empty, then trainNetwork uses the initializer specified by InputWeightsInitializer.

At training time, InputWeights is an 8*NumHiddenUnits-by-InputSize matrix.

Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the eight recurrent weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

The recurrent weights are learnable parameters. When training a network, if RecurrentWeights is nonempty, then trainNetwork uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then trainNetwork uses the initializer specified by RecurrentWeightsInitializer.

At training time, RecurrentWeights is an 8*NumHiddenUnits-by-NumHiddenUnits matrix.

Layer biases, specified as a numeric vector.

The bias vector is a concatenation of the eight bias vectors for the components (gates) in the bidirectional LSTM layer. The eight vectors are concatenated vertically in the following order:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

The layer biases are learnable parameters. When training a network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

At training time, Bias is an 8*NumHiddenUnits-by-1 numeric vector.
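
For example, a sketch (the random and zero values are illustrative) that supplies explicit initial values with the dimensions stated above:

inputSize = 12;
numHiddenUnits = 100;
layer = bilstmLayer(numHiddenUnits);
layer.InputWeights     = 0.01*randn(8*numHiddenUnits,inputSize);       % 8*NumHiddenUnits-by-InputSize
layer.RecurrentWeights = 0.01*randn(8*numHiddenUnits,numHiddenUnits);  % 8*NumHiddenUnits-by-NumHiddenUnits
layer.Bias             = zeros(8*numHiddenUnits,1);                    % 8*NumHiddenUnits-by-1

Because these properties are then nonempty, trainNetwork uses them as the initial values instead of the initializer functions.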

Learn Rate and Regularization

Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 0.1
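
For example, a sketch (the factor values are illustrative) that doubles the learning rate factor for all input weights, or sets a per-component factor in the order listed above:

layer = bilstmLayer(100,'InputWeightsLearnRateFactor',2);
layer.InputWeightsLearnRateFactor = [1 2 1 1 1 2 1 1];   % per-component factors, forward gates then backward gates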

Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 0.1

Example: [1 2 1 1 1 2 1 1]

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1 1 2 1 1]

L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 0.1

Example: [1 2 1 1 1 2 1 1]

L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 0.1

Example: [1 2 1 1 1 2 1 1]
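
For example, a sketch (the factor values are illustrative) that increases the L2 regularization factor for the forget-gate recurrent weights in both directions:

layer = bilstmLayer(100);
layer.RecurrentWeightsL2Factor = [1 2 1 1 1 2 1 1];   % per-component factors, forward gates then backward gates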

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2

Example: [1 2 1 1 1 2 1 1]

Layer

Layer name, specified as a character vector or a string scalar. If Name is set to '', then the software automatically assigns a name at training time.

Data Types: char | string

Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

Input names of the layer. This layer accepts a single input only.

Data Types: cell

Number of outputs of the layer. This layer has a single output only.

Data Types: double

Output names of the layer. This layer has a single output only.

Data Types: cell

Examples

Create a bidirectional LSTM layer with the name 'bilstm1' and 100 hidden units.

layer = bilstmLayer(100,'Name','bilstm1')
layer = 
  BiLSTMLayer with properties:

                       Name: 'bilstm1'

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []

Include a bidirectional LSTM layer in a Layer array.

inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    bilstmLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   BiLSTM                  BiLSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex

Compatibility Considerations

Behavior changed in R2019a

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256. 2010.

[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034. 2015.

[3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

Introduced in R2018a