Code Generation for LSTM Network on Raspberry Pi

This example shows how to generate code for a pretrained long short-term memory (LSTM) network that uses the ARM® Compute Library and deploy the code on a Raspberry Pi™ target. In this example, the LSTM network predicts the Remaining Useful Life (RUL) of a machine. The network takes as input time series data sets that represent various sensors in the engine. The network returns the Remaining Useful Life of an engine, measured in cycles, as its output.

This example uses the Turbofan Engine Degradation Simulation Data Set described in [1]. This data set contains 100 training observations and 100 test observations. The training data contains simulated time series data for 100 engines. Each sequence has 17 features, varies in length, and corresponds to a full run-to-failure (RTF) instance. The test data contains 100 partial sequences and the corresponding values of the Remaining Useful Life at the end of each sequence.

This example uses a pretrained LSTM network. For more information on how to train an LSTM network, see the example Sequence-to-Sequence Regression Using Deep Learning (Deep Learning Toolbox).

This example demonstrates two different approaches for performing prediction by using an LSTM network:

  • The first approach uses a standard LSTM network and runs inference on a set of time series data.

  • The second approach leverages the stateful behavior of the same LSTM network. In this method, you pass a single timestep of data at a time, and have the network update its state at each time step.

This example uses the processor-in-the-loop (PIL) workflow to generate a MEX function that, from MATLAB, calls the executable generated on the target hardware.

The code lines in this example are commented out. Uncomment them before you run the example.

This example is not supported in MATLAB Online.

Prerequisites

  • MATLAB® Coder™

  • Embedded Coder®

  • Deep Learning Toolbox™

  • MATLAB Coder Interface for Deep Learning Libraries. To install this support package, use the Add-On Explorer.

  • MATLAB Support Package for Raspberry Pi Hardware. To install this support package, use the Add-On Explorer.

  • Raspberry Pi hardware

  • ARM Compute Library (on the target ARM hardware)

  • Environment variables for the compilers and libraries. For information on the supported versions of the compilers and libraries, see Third-Party Hardware and Software. To set up the environment variables, see Environment Variables.

Set Up a Code Generation Configuration Object for a Static Library

To generate a PIL MEX function for a specified entry-point function, create a code configuration object for a static library and set the verification mode to 'PIL'. Set the target language to C++.

% cfg = coder.config('lib', 'ecoder', true);
% cfg.VerificationMode = 'PIL';
% cfg.TargetLang = 'C++';

Set Up a Configuration Object for Deep Learning Code Generation

Create a coder.ARMNEONConfig object. Specify the ARM Compute Library version. For this example, suppose that the version of the ARM Compute Library on the Raspberry Pi hardware is 19.05.

% dlcfg = coder.DeepLearningConfig('arm-compute');
% dlcfg.ArmComputeVersion = '19.05';

Set the DeepLearningConfig property of the code generation configuration object to the deep learning configuration object.

% cfg.DeepLearningConfig = dlcfg;

Create a Connection to the Raspberry Pi

Use the raspi function from the MATLAB Support Package for Raspberry Pi Hardware to create a connection to the Raspberry Pi. In the following code, replace:

  • raspiname with the name of your Raspberry Pi

  • username with your user name

  • password with your password

% r = raspi('raspiname','username','password');
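
Optionally, you can use this connection to confirm that the ARM Compute Library environment variable is visible on the target. The variable name ARM_COMPUTELIB and the login-shell command shown here are assumptions for illustration; adjust them to match how you set up the environment variables on your Raspberry Pi.

% Check the assumed ARM_COMPUTELIB environment variable on the target
% system(r, 'bash -lc "echo $ARM_COMPUTELIB"')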

Configure Code Generation Hardware Parameters for Raspberry Pi

Create a coder.Hardware object for Raspberry Pi and attach it to the code generation configuration object.

% hw = coder.hardware('Raspberry Pi');
% cfg.Hardware = hw;

First Approach: Generate PIL MEX Function for LSTM Network

In this approach, you generate code for the entry-point function rul_lstmnet_predict.

The rul_lstmnet_predict.m entry-point function takes an entire time series data set as an input and passes it to the network for prediction. Specifically, the function uses the LSTM network that is trained in the example Sequence-to-Sequence Regression Using Deep Learning (Deep Learning Toolbox). The function loads the network object from the rul_lstmnet.mat file into a persistent variable and reuses this persistent object in subsequent prediction calls. A sequence-to-sequence LSTM network enables you to make different predictions for each individual time step of a data sequence.

To display an interactive visualization of the network architecture and information about the network layers, use the analyzeNetwork (Deep Learning Toolbox) function.
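
For example, you can load the pretrained network at the MATLAB command line and open it in the analyzer. This snippet is a brief illustration and assumes that the rul_lstmnet.mat file is on the MATLAB path.

% net = coder.loadDeepLearningNetwork('rul_lstmnet.mat');
% analyzeNetwork(net)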

type('rul_lstmnet_predict.m')
function out =  rul_lstmnet_predict(in) %#codegen

% Copyright 2019 The MathWorks, Inc. 

persistent mynet;

if isempty(mynet)
    mynet = coder.loadDeepLearningNetwork('rul_lstmnet.mat');
end


out = mynet.predict(in); 

To generate code by using the codegen command, use the coder.typeof function to specify the type and size of the input argument to the entry-point function. In this example, the input is of double data type with a feature dimension value of 17 and a variable sequence length. Specify the sequence length as variable-size to perform prediction on an input sequence of any length.

% matrixInput = coder.typeof(double(0),[17 Inf],[false true]);

Run the codegen command to generate the PIL-based MEX function rul_lstmnet_predict_pil on the host platform.

% codegen -config cfg rul_lstmnet_predict -args {matrixInput} -report

Run Generated PIL MEX Function on Test Data

Load the MAT-file RULTestData. This MAT-file stores the variables XTest and YTest, which contain sample time series of sensor readings on which you can test the generated code. This test data comes from the example Sequence-to-Sequence Regression Using Deep Learning (Deep Learning Toolbox) after data preprocessing.

load RULTestData;

The XTest variable contains 100 input observations. Each observation has 17 features with varying sequence length.

XTest(1:5)
ans=5×1 cell array
    {17×31  double}
    {17×49  double}
    {17×126 double}
    {17×106 double}
    {17×98  double}
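
For example, you can list the number of time steps in these observations. Each cell is a 17-by-T matrix, where T is the sequence length, so the second dimension gives the number of time steps.

cellfun(@(x) size(x,2), XTest(1:5))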

The YTest variable contains 100 output observations that correspond to the XTest input variable. Each output observation contains the Remaining Useful Life (RUL) value, measured in cycles, for each time step in the entire sequence.

YTest(1:5)
ans=5×1 cell array
    {1×31  double}
    {1×49  double}
    {1×126 double}
    {1×106 double}
    {1×98  double}

Run the generated MEX function rul_lstmnet_predict_pil on a random test data set.

% idx = randperm(numel(XTest), 1);
% inputData = XTest{idx};

% YPred1 = rul_lstmnet_predict_pil(inputData);

Compare Predictions with Test Data

Use a plot to compare the MEX output data with the test data.

% figure('Name', 'Standard LSTM', 'NumberTitle', 'off');
%     
% plot(YTest{idx},'--')
% hold on
% plot(YPred1,'.-')
% hold off
% 
% ylim([0 175])
% title("Test Observation " + idx)
% xlabel("Time Step")
% ylabel("RUL measured in cycles")

Clear PIL

% clear rul_lstmnet_predict_pil;

Second Approach: Generate PIL MEX Function for Stateful LSTM Network

Instead of passing the entire time series to predict at once, you can run prediction by streaming the input data one segment at a time by using the predictAndUpdateState function.

The entry-point function rul_lstmnet_predict_and_update.m accepts a single-timestep input and processes it by using the predictAndUpdateState (Deep Learning Toolbox) function. predictAndUpdateState returns a prediction for the input timestep and updates the network so that subsequent parts of the input are treated as subsequent timesteps of the same sample.

type('rul_lstmnet_predict_and_update.m')
function out = rul_lstmnet_predict_and_update(in) %#codegen

% Copyright 2019 The MathWorks, Inc. 

persistent mynet;

if isempty(mynet)
    mynet = coder.loadDeepLearningNetwork('rul_lstmnet.mat');
end

[mynet, out] = predictAndUpdateState(mynet, in);

end

Create the input type for the codegen command. Because rul_lstmnet_predict_and_update accepts a single timestep of data in each call, specify the input type matrixInput to have a fixed sequence length of 1 instead of a variable sequence length.

% matrixInput = coder.typeof(double(0),[17 1]);

Run the codegen command to generate the PIL-based MEX function rul_lstmnet_predict_and_update_pil on the host platform.

% codegen -config cfg rul_lstmnet_predict_and_update -args {matrixInput} -report

Run Generated PIL MEX Function on Test Data

Run the generated MEX function rul_lstmnet_predict_and_update_pil for each time step of data in the inputData sequence.

% sequenceLength = size(inputData,2);
% YPred2 = zeros(1, sequenceLength);
% for i=1:sequenceLength
%     inTimeStep = inputData(:,i);
%     YPred2(:, i) = rul_lstmnet_predict_and_update_pil(inTimeStep);
% end

After you pass all timesteps, one at a time, to the rul_lstmnet_predict_and_update function, the resulting output is the same as that in the first approach in which you passed all inputs at once.
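
If you run both PIL MEX functions on the same observation, you can check this, for example, by comparing the two prediction vectors; any difference should be negligible.

% max(abs(YPred1 - YPred2))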

Compare Predictions with Test Data

Use a plot to compare the MEX output data with the test data.

% figure('Name', 'Stateful LSTM', 'NumberTitle', 'off');
% 
% 
% plot(YTest{idx},'--')
% hold on
% plot(YPred2,'.-')
% hold off
% 
% ylim([0 175])
% title("Test Observation " + idx)
% xlabel("Time Step")
% ylabel("RUL measured in cycles")

Clear PIL

% clear rul_lstmnet_predict_and_update_pil;

References

[1] Saxena, Abhinav, Kai Goebel, Don Simon, and Neil Eklund. "Damage Propagation Modeling for Aircraft Engine Run-to-Failure Simulation." In 2008 International Conference on Prognostics and Health Management (PHM 2008), pp. 1-9. IEEE, 2008.
