Long short-term memory (LSTM) layer
An LSTM layer learns long-term dependencies between time steps in time series and sequence data.
The layer performs additive interactions, which can help improve gradient flow over long sequences during training.
layer = lstmLayer(numHiddenUnits) creates an LSTM layer and sets the NumHiddenUnits property.

layer = lstmLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learn Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.
NumHiddenUnits — Number of hidden units
positive integer

Number of hidden units (also known as the hidden size), specified as a positive integer.
The number of hidden units corresponds to the amount of information remembered between time steps (the hidden state). The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data. This value can vary from a few dozen to a few thousand.
The hidden state does not limit the number of time steps that are processed in an
iteration. To split your sequences into smaller sequences for training, use the
'SequenceLength'
option in trainingOptions
.
Example: 200
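To illustrate, here is a minimal sketch combining a 200-unit LSTM layer with the 'SequenceLength' training option; the solver and the segment length 100 are arbitrary choices for illustration, not recommendations.

layer = lstmLayer(200);              % 200 hidden units
options = trainingOptions('adam', ...
    'SequenceLength',100);           % split training sequences into segments of at most 100 time steps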
OutputMode — Format of output
'sequence' (default) | 'last'

Format of output, specified as one of the following:

'sequence' – Output the complete sequence.

'last' – Output the last time step of the sequence.
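For example, a minimal sketch contrasting the two modes (100 hidden units chosen arbitrarily):

lstmLayer(100,'OutputMode','sequence')   % output the full sequence (sequence-to-sequence tasks)
lstmLayer(100,'OutputMode','last')       % output only the final time step (sequence-to-label tasks)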
InputSize — Input size
'auto' (default) | positive integer

Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.
Example: 100
StateActivationFunction — Activation function to update the cell and hidden state
'tanh' (default) | 'softsign'

Activation function to update the cell and hidden state, specified as one of the following:

'tanh' – Use the hyperbolic tangent function (tanh).

'softsign' – Use the softsign function.

The layer uses this option as the function $\sigma_c$ in the calculations to update the cell and hidden state. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.
GateActivationFunction — Activation function to apply to the gates
'sigmoid' (default) | 'hard-sigmoid'

Activation function to apply to the gates, specified as one of the following:

'sigmoid' – Use the sigmoid function $\sigma(x) = (1 + e^{-x})^{-1}$.

'hard-sigmoid' – Use the hard sigmoid function $\sigma(x) = 0$ if $x < -2.5$, $0.2x + 0.5$ if $-2.5 \le x \le 2.5$, and $1$ if $x > 2.5$.

The layer uses this option as the function $\sigma_g$ in the calculations for the layer gates.
CellState — Initial value of cell state

Initial value of the cell state, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the cell state at time step 0.

After setting this property, calls to the resetState function set the cell state to this value.
HiddenState — Initial value of the hidden state

Initial value of the hidden state, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the hidden state at time step 0.

After setting this property, calls to the resetState function set the hidden state to this value.
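As a hedged sketch, assuming the state properties can be assigned by dot notation after creating the layer (the zero and constant values are arbitrary examples):

numHiddenUnits = 100;
layer = lstmLayer(numHiddenUnits);
layer.CellState = zeros(numHiddenUnits,1);        % cell state at time step 0
layer.HiddenState = 0.1*ones(numHiddenUnits,1);   % hidden state at time step 0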
InputWeightsInitializer — Function to initialize input weights
'glorot' (default) | 'he' | 'orthogonal' | 'narrow-normal' | 'zeros' | 'ones' | function handle

Function to initialize the input weights, specified as one of the following:

'glorot' – Initialize the input weights with the Glorot initializer [4] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 4*NumHiddenUnits.

'he' – Initialize the input weights with the He initializer [5]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.

'orthogonal' – Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [6].

'narrow-normal' – Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'zeros' – Initialize the input weights with zeros.

'ones' – Initialize the input weights with ones.

Function handle – Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.

The layer only initializes the input weights when the InputWeights property is empty.

Data Types: char | string | function_handle
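As an illustration of the function handle option, the following sketch defines a hypothetical custom initializer (the scaling rule is an arbitrary example, not a recommendation) and passes it to the layer:

% Custom initializer of the form weights = func(sz); sz(2) is InputSize here.
customInit = @(sz) (rand(sz) - 0.5) / sqrt(sz(2));
layer = lstmLayer(100,'InputWeightsInitializer',customInit);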
RecurrentWeightsInitializer — Function to initialize recurrent weights
'orthogonal' (default) | 'glorot' | 'he' | 'narrow-normal' | 'zeros' | 'ones' | function handle

Function to initialize the recurrent weights, specified as one of the following:

'orthogonal' – Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [6].

'glorot' – Initialize the recurrent weights with the Glorot initializer [4] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 4*NumHiddenUnits.

'he' – Initialize the recurrent weights with the He initializer [5]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.

'narrow-normal' – Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'zeros' – Initialize the recurrent weights with zeros.

'ones' – Initialize the recurrent weights with ones.

Function handle – Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.

The layer only initializes the recurrent weights when the RecurrentWeights property is empty.

Data Types: char | string | function_handle
BiasInitializer — Function to initialize bias
'unit-forget-gate' (default) | 'narrow-normal' | 'ones' | function handle

Function to initialize the bias, specified as one of the following:

'unit-forget-gate' – Initialize the forget gate bias with ones and the remaining biases with zeros.

'narrow-normal' – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'ones' – Initialize the bias with ones.

Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

Data Types: char | string | function_handle
InputWeights — Input weights
[] (default) | matrix

Input weights, specified as a matrix.

The input weight matrix is a concatenation of the four input weight matrices for the components (gates) in the LSTM layer. The four matrices are concatenated vertically in the following order:

Input gate
Forget gate
Cell candidate
Output gate

The input weights are learnable parameters. When training a network, if InputWeights is nonempty, then trainNetwork uses the InputWeights property as the initial value. If InputWeights is empty, then trainNetwork uses the initializer specified by InputWeightsInitializer.

At training time, InputWeights is a 4*NumHiddenUnits-by-InputSize matrix.
RecurrentWeights — Recurrent weights
[] (default) | matrix

Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the four recurrent weight matrices for the components (gates) in the LSTM layer. The four matrices are vertically concatenated in the following order:

Input gate
Forget gate
Cell candidate
Output gate

The recurrent weights are learnable parameters. When training a network, if RecurrentWeights is nonempty, then trainNetwork uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then trainNetwork uses the initializer specified by RecurrentWeightsInitializer.

At training time, RecurrentWeights is a 4*NumHiddenUnits-by-NumHiddenUnits matrix.
Bias — Layer biases
[] (default) | numeric vector

Layer biases for the LSTM layer, specified as a numeric vector.

The bias vector is a concatenation of the four bias vectors for the components (gates) in the LSTM layer. The four vectors are concatenated vertically in the following order:

Input gate
Forget gate
Cell candidate
Output gate

The layer biases are learnable parameters. When training a network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

At training time, Bias is a 4*NumHiddenUnits-by-1 numeric vector.
InputWeightsLearnRateFactor — Learning rate factor for input weights

Learning rate factor for the input weights, specified as a numeric scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsLearnRateFactor correspond to the learning rate factor of the following:

Input gate
Forget gate
Cell candidate
Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2
Example: [1 2 1 1]
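As a sketch (the factor values are arbitrary), one way to set a per-gate learning rate factor is to assign a 1-by-4 vector to the property after creating the layer:

layer = lstmLayer(100);
layer.InputWeightsLearnRateFactor = [1 2 1 1];   % double the learning rate only for the forget gate input weights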
RecurrentWeightsLearnRateFactor — Learning rate factor for recurrent weights

Learning rate factor for the recurrent weights, specified as a numeric scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsLearnRateFactor correspond to the learning rate factor of the following:

Input gate
Forget gate
Cell candidate
Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2
Example: [1 2 1 1]
BiasLearnRateFactor — Learning rate factor for biases

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

To control the value of the learning rate factor for the four individual vectors in Bias, specify a 1-by-4 vector. The entries of BiasLearnRateFactor correspond to the learning rate factor of the following:

Input gate
Forget gate
Cell candidate
Output gate

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: 2
Example: [1 2 1 1]
InputWeightsL2Factor — L2 regularization factor for input weights

L2 regularization factor for the input weights, specified as a numeric scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the global L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsL2Factor correspond to the L2 regularization factor of the following:

Input gate
Forget gate
Cell candidate
Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2
Example: [1 2 1 1]
RecurrentWeightsL2Factor — L2 regularization factor for recurrent weights

L2 regularization factor for the recurrent weights, specified as a numeric scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the global L2 regularization factor based on the settings specified with the trainingOptions function.

To control the value of the L2 regularization factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsL2Factor correspond to the L2 regularization factor of the following:

Input gate
Forget gate
Cell candidate
Output gate

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: 2
Example: [1 2 1 1]
BiasL2Factor — L2 regularization factor for biases

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

To control the value of the L2 regularization factor for the four individual vectors in Bias, specify a 1-by-4 vector. The entries of BiasL2Factor correspond to the L2 regularization factor of the following:

Input gate
Forget gate
Cell candidate
Output gate

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: 2
Example: [1 2 1 1]
Name — Layer name
'' (default) | character vector | string scalar

Layer name, specified as a character vector or a string scalar. If Name is set to '', then the software automatically assigns a name at training time.

Data Types: char | string
NumInputs — Number of inputs

Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

InputNames — Input names
{'in'} (default)

Input names of the layer. This layer accepts a single input only.

Data Types: cell

NumOutputs — Number of outputs

Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames — Output names
{'out'} (default)

Output names of the layer. This layer has a single output only.

Data Types: cell
Create an LSTM layer with the name 'lstm1'
and 100 hidden units.
layer = lstmLayer(100,'Name','lstm1')
layer = 
  LSTMLayer with properties:

                       Name: 'lstm1'

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []
Include an LSTM layer in a Layer
array.
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
sequenceInputLayer(inputSize)
lstmLayer(numHiddenUnits)
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer]
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   LSTM                    LSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
Train a deep learning LSTM network for sequence-to-label classification.
Load the Japanese Vowels data set as described in [1] and [2]. XTrain
is a cell array containing 270 sequences of varying length with 12 features corresponding to LPC cepstrum coefficients. Y
is a categorical vector of labels 1,2,...,9. The entries in XTrain
are matrices with 12 rows (one row for each feature) and a varying number of columns (one column for each time step).
[XTrain,YTrain] = japaneseVowelsTrainData;
Visualize the first time series in a plot. Each line corresponds to a feature.
figure
plot(XTrain{1}')
title("Training Observation 1")
numFeatures = size(XTrain{1},1);
legend("Feature " + string(1:numFeatures),'Location','northeastoutside')
Define the LSTM network architecture. Specify the input size as 12 (the number of features of the input data). Specify an LSTM layer to have 100 hidden units and to output the last element of the sequence. Finally, specify nine classes by including a fully connected layer of size 9, followed by a softmax layer and a classification layer.
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
layers = 
  5×1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   LSTM                    LSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
Specify the training options. Specify the solver as 'adam'
and 'GradientThreshold'
as 1. Set the mini-batch size to 27 and set the maximum number of epochs to 70.
Because the mini-batches are small with short sequences, the CPU is better suited for training. Set 'ExecutionEnvironment'
to 'cpu'
. To train on a GPU, if available, set 'ExecutionEnvironment'
to 'auto'
(the default value).
maxEpochs = 70;
miniBatchSize = 27;

options = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThreshold',1, ...
    'Verbose',false, ...
    'Plots','training-progress');
Train the LSTM network with the specified training options.
net = trainNetwork(XTrain,YTrain,layers,options);
Load the test set and classify the sequences into speakers.
[XTest,YTest] = japaneseVowelsTestData;
Classify the test data. Specify the same mini-batch size used for training.
YPred = classify(net,XTest,'MiniBatchSize',miniBatchSize);
Calculate the classification accuracy of the predictions.
acc = sum(YPred == YTest)./numel(YTest)
acc = 0.9514
To create an LSTM network for sequence-to-label classification, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, a softmax layer, and a classification output layer.
Set the size of the sequence input layer to the number of features of the input data. Set the size of the fully connected layer to the number of classes. You do not need to specify the sequence length.
For the LSTM layer, specify the number of hidden units and the output mode 'last'
.
numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
For an example showing how to train an LSTM network for sequence-to-label classification and classify new data, see Sequence Classification Using Deep Learning.
To create an LSTM network for sequence-to-sequence classification, use the same architecture as for sequence-to-label classification, but set the output mode of the LSTM layer to 'sequence'
.
numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
To create an LSTM network for sequence-to-one regression, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, and a regression output layer.
Set the size of the sequence input layer to the number of features of the input data. Set the size of the fully connected layer to the number of responses. You do not need to specify the sequence length.
For the LSTM layer, specify the number of hidden units and the output mode 'last'
.
numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numResponses)
    regressionLayer];
To create an LSTM network for sequence-to-sequence regression, use the same architecture as for sequence-to-one regression, but set the output mode of the LSTM layer to 'sequence'
.
numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(numResponses)
    regressionLayer];
For an example showing how to train an LSTM network for sequence-to-sequence regression and predict on new data, see Sequence-to-Sequence Regression Using Deep Learning.
You can make LSTM networks deeper by inserting extra LSTM layers with the output mode 'sequence'
before the LSTM layer. To prevent overfitting, you can insert dropout layers after the LSTM layers.
For sequence-to-label classification networks, the output mode of the last LSTM layer must be 'last'
.
numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    dropoutLayer(0.2)
    lstmLayer(numHiddenUnits2,'OutputMode','last')
    dropoutLayer(0.2)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
For sequence-to-sequence classification networks, the output mode of the last LSTM layer must be 'sequence'
.
numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,'OutputMode','sequence')
    dropoutLayer(0.2)
    lstmLayer(numHiddenUnits2,'OutputMode','sequence')
    dropoutLayer(0.2)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
An LSTM layer learns long-term dependencies between time steps in time series and sequence data.
The state of the layer consists of the hidden state (also known as the output state) and the cell state. The hidden state at time step t contains the output of the LSTM layer for this time step. The cell state contains information learned from the previous time steps. At each time step, the layer adds information to or removes information from the cell state. The layer controls these updates using gates.
The following components control the cell state and hidden state of the layer.
| Component | Purpose |
| --- | --- |
| Input gate (i) | Control level of cell state update |
| Forget gate (f) | Control level of cell state reset (forget) |
| Cell candidate (g) | Add information to cell state |
| Output gate (o) | Control level of cell state added to hidden state |
This diagram illustrates the flow of data at time step t. The diagram highlights how the gates forget, update, and output the cell and hidden states.
The learnable weights of an LSTM layer are the input weights W (InputWeights), the recurrent weights R (RecurrentWeights), and the bias b (Bias). The matrices W, R, and b are concatenations of the input weights, the recurrent weights, and the bias of each component, respectively. These matrices are concatenated as follows:

$$W = \begin{bmatrix} W_i \\ W_f \\ W_g \\ W_o \end{bmatrix}, \quad R = \begin{bmatrix} R_i \\ R_f \\ R_g \\ R_o \end{bmatrix}, \quad b = \begin{bmatrix} b_i \\ b_f \\ b_g \\ b_o \end{bmatrix},$$

where i, f, g, and o denote the input gate, forget gate, cell candidate, and output gate, respectively.
The cell state at time step t is given by

$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$

where $\odot$ denotes the Hadamard product (element-wise multiplication of vectors).

The hidden state at time step t is given by

$$h_t = o_t \odot \sigma_c(c_t),$$

where $\sigma_c$ denotes the state activation function. The lstmLayer function, by default, uses the hyperbolic tangent function (tanh) to compute the state activation function.
The following formulas describe the components at time step t.

| Component | Formula |
| --- | --- |
| Input gate | $i_t = \sigma_g(W_i x_t + R_i h_{t-1} + b_i)$ |
| Forget gate | $f_t = \sigma_g(W_f x_t + R_f h_{t-1} + b_f)$ |
| Cell candidate | $g_t = \sigma_c(W_g x_t + R_g h_{t-1} + b_g)$ |
| Output gate | $o_t = \sigma_g(W_o x_t + R_o h_{t-1} + b_o)$ |

In these calculations, $\sigma_g$ denotes the gate activation function. The lstmLayer function, by default, uses the sigmoid function given by $\sigma(x) = (1 + e^{-x})^{-1}$ to compute the gate activation function.
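As a concrete illustration of these formulas, the following sketch computes a single LSTM time step in MATLAB using the concatenated W, R, and b described above. The sizes and random values are placeholders for illustration; this is not how trainNetwork evaluates the layer internally.

numHiddenUnits = 5;                          % sizes chosen arbitrarily for illustration
inputSize = 3;
W = randn(4*numHiddenUnits,inputSize);       % input weights,     [Wi; Wf; Wg; Wo]
R = randn(4*numHiddenUnits,numHiddenUnits);  % recurrent weights, [Ri; Rf; Rg; Ro]
b = randn(4*numHiddenUnits,1);               % bias,              [bi; bf; bg; bo]
x = randn(inputSize,1);                      % input at time step t
hPrev = zeros(numHiddenUnits,1);             % hidden state at time step t-1
cPrev = zeros(numHiddenUnits,1);             % cell state at time step t-1

z = W*x + R*hPrev + b;                       % pre-activations for all four components at once
z = reshape(z,numHiddenUnits,4);             % columns: input gate, forget gate, cell candidate, output gate
sigmoid = @(v) 1./(1 + exp(-v));

iGate = sigmoid(z(:,1));                     % input gate
fGate = sigmoid(z(:,2));                     % forget gate
gCand = tanh(z(:,3));                        % cell candidate
oGate = sigmoid(z(:,4));                     % output gate

c = fGate.*cPrev + iGate.*gCand;             % cell state at time step t
h = oGate.*tanh(c);                          % hidden state at time step t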
Behavior changed in R2019a
Starting in R2019a, the software, by default, initializes the layer input weights of this layer using the Glorot initializer. This behavior helps stabilize training and usually reduces the training time of deep networks.
In previous releases, the software, by default, initializes the layer input weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the 'InputWeightsInitializer' option of the layer to 'narrow-normal'.
Behavior changed in R2019a
Starting in R2019a, the software, by default, initializes the layer recurrent weights of this layer with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. This behavior helps stabilize training and usually reduces the training time of deep networks.
In previous releases, the software, by default, initializes the layer recurrent weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the 'RecurrentWeightsInitializer' option of the layer to 'narrow-normal'.
[1] M. Kudo, J. Toyama, and M. Shimbo. "Multidimensional Curve Classification Using Passing-Through Regions." Pattern Recognition Letters. Vol. 20, No. 11–13, pages 1103–1111.
[2] UCI Machine Learning Repository: Japanese Vowels Dataset. https://archive.ics.uci.edu/ml/datasets/Japanese+Vowels
[3] Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (1997): 1735–1780.
[4] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256. 2010.
[5] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034. 2015.
[6] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).
Usage notes and limitations:
For code generation, the StateActivationFunction
property must be set to 'tanh'
.
For code generation, the GateActivationFunction
property must be set to 'sigmoid'
.
bilstmLayer | classifyAndUpdateState | Deep Network Designer | flattenLayer | gruLayer | predictAndUpdateState | resetState | sequenceFoldingLayer | sequenceInputLayer | sequenceUnfoldingLayer