This example shows how to create, compile, and deploy a dlhdl.Workflow object that has a handwritten character detection series network as the network object by using the Deep Learning HDL Toolbox™ Support Package for Xilinx FPGA and SoC. Use MATLAB® to retrieve the prediction results from the target device.
This example requires the following hardware and products:
Xilinx ZCU102 SoC development kit
Deep Learning HDL Toolbox™
Deep Learning HDL Toolbox™ Support Package for Xilinx FPGA and SoC
Deep Learning Toolbox™
To load the pretrained series network, which has been trained on the Modified National Institute of Standards and Technology (MNIST) database, enter:
snet = getDigitsNetwork();
To view the layers of the pretrained series network, enter:
analyzeNetwork(snet)
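If you prefer a quick summary at the command line instead of the Deep Network Analyzer app, you can also list the layers by displaying the Layers property of the network. This alternative is not part of the original example:

snet.Layers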
Create a target object that has a custom name for your target device and an interface to connect your target device to the host computer. Interface options are JTAG and Ethernet.
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet')
hTarget = 
  Target with properties:

       Vendor: 'Xilinx'
    Interface: Ethernet
    IPAddress: '10.10.10.15'
     Username: 'root'
         Port: 22
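If your board is connected to the host computer through a JTAG cable rather than Ethernet, you can create the target object with the JTAG interface instead. This is an optional alternative based on the interface options listed above, not a step in this example:

hTarget = dlhdl.Target('Xilinx','Interface','JTAG')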
Create an object of the dlhdl.Workflow class. Specify the network and the bitstream name during the object creation. Specify the saved pretrained MNIST neural network, snet, as the network. Make sure that the bitstream name matches the data type and the FPGA board that you are targeting. In this example, the target FPGA board is the Xilinx ZCU102 SoC board and the bitstream uses a single data type.
hW = dlhdl.Workflow('network', snet, 'Bitstream', 'zcu102_single','Target',hTarget)
hW = 
  Workflow with properties:

            Network: [1×1 SeriesNetwork]
          Bitstream: 'zcu102_single'
    ProcessorConfig: []
             Target: [1×1 dlhdl.Target]
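If you later quantize the network to int8, the bitstream name must change to match the new data type. The following sketch is commented out because it assumes a quantized network, qnet, that is not created in this example, and an int8 ZCU102 bitstream shipped with your support package version:

% qnet = quantized version of snet (assumption; produced with dlquantizer)
% hW_int8 = dlhdl.Workflow('network', qnet, 'Bitstream', 'zcu102_int8', 'Target', hTarget)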
To compile the MNIST series network, run the compile function of the dlhdl.Workflow object.
dn = hW.compile;
### Optimizing series network: Fused 'nnet.cnn.layer.BatchNormalizationLayer' into 'nnet.cnn.layer.Convolution2DLayer'

          offset_name          offset_address    allocated_space 
    _______________________    ______________    ________________

    "InputDataOffset"           "0x00000000"     "4.0 MB"        
    "OutputResultOffset"        "0x00400000"     "4.0 MB"        
    "SystemBufferOffset"        "0x00800000"     "28.0 MB"       
    "InstructionDataOffset"     "0x02400000"     "4.0 MB"        
    "ConvWeightDataOffset"      "0x02800000"     "4.0 MB"        
    "FCWeightDataOffset"        "0x02c00000"     "4.0 MB"        
    "EndOffset"                 "0x03000000"     "Total: 48.0 MB"
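If you need to limit the activation memory reserved for input frames, recent releases of the compile function accept name-value options. This sketch assumes your release supports the InputFrameNumberLimit option and is not part of the original example:

% dn = hW.compile('InputFrameNumberLimit', 15);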
To deploy the network on the Xilinx ZCU102 SoC hardware, run the deploy function of the dlhdl.Workflow object. This function uses the output of the compile function to program the FPGA board by using the programming file. It also downloads the network weights and biases. The deploy function starts programming the FPGA device, displays progress messages, and reports the time it takes to deploy the network.
hW.deploy
### FPGA bitstream programming has been skipped as the same bitstream is already loaded on the target FPGA.
### Loading weights to FC Processor.
### FC Weights loaded. Current time is 28-Jun-2020 12:37:32
To run the prediction, load the example image, execute the predict function of the dlhdl.Workflow object, and then display the FPGA result. First, load and display the example image:
inputImg = imread('five_28x28.pgm');
imshow(inputImg);
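The network expects a single 28-by-28 grayscale frame, so a quick optional check of the image dimensions can catch sizing problems before you run the prediction (not part of the original example):

size(inputImg)   % expected: 28 28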
Run the prediction with the 'Profile' option set to 'on' to see the latency and throughput results.
[prediction, speed] = hW.predict(single(inputImg),'Profile','on');
### Finished writing input activations.
### Running single input activations.


              Deep Learning Processor Profiler Performance Results

                   LastLayerLatency(cycles)   LastLayerLatency(seconds)   FramesNum      Total Latency     Frames/s
                         -------------             -------------              ---------        ---------          ---------
Network                      73717                  0.00034                       1              73759             2982.7
    conv_module              27207                  0.00012 
        conv_1                6673                  0.00003 
        maxpool_1             4891                  0.00002 
        conv_2                4999                  0.00002 
        maxpool_2             3569                  0.00002 
        conv_3                7135                  0.00003 
    fc_module                46510                  0.00021 
        fc                   46510                  0.00021 
 * The clock frequency of the DL processor is: 220MHz
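To measure throughput over more than one frame, you can stack copies of the image along the fourth dimension and pass the batch to the predict function. This is a sketch that assumes your release accepts multi-frame input stacked along dimension 4; it is not part of the original example:

% inputBatch = repmat(single(inputImg), [1 1 1 10]);   % 10 identical frames
% [predictions, speedBatch] = hW.predict(inputBatch, 'Profile', 'on');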
[val, idx] = max(prediction);
fprintf('The prediction result is %d\n', idx-1);
The prediction result is 5
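The prediction output is a vector of scores, one per digit class (0 through 9). As an optional follow-on that is not part of the original example, you can plot the scores to see how confident the network is in its result:

figure
bar(0:9, prediction)   % one score per digit class
xlabel('Digit')
ylabel('Score')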