Deep Learning HDL Toolbox™ provides classes for deploying series deep learning networks to target FPGA and SoC boards. Before deploying a network, use the class methods to estimate the performance and resource utilization of your custom deep learning network on the target. After deployment, use MATLAB to retrieve the network prediction results from the target FPGA board.
dlhdl.Workflow | Configure deployment workflow for deep learning neural network |
dlhdl.Target | Configure interface to target board for workflow deployment |
activations | Retrieve intermediate layer results for deployed deep learning network |
validateConnection | Validate SSH connection and deployed bitstream |
release | Release the connection to the target device |
predict | Run inference on the deployed network and profile its speed on the specified target device |
estimate | Estimate performance of specified deep learning network and bitstream for target device board |
deploy | Deploy the specified neural network to the target FPGA board |
compile | Compile workflow object |
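A typical end-to-end flow combines these objects and methods as in the following sketch. The network, bitstream name, and input image are placeholders; substitute the values that match your board and model.

```matlab
% Load a pretrained series network (placeholder; use any supported network)
net = resnet18;

% Configure the interface to the target board over Ethernet
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');

% Create the deployment workflow for a fixed bitstream
hW = dlhdl.Workflow('Network', net, ...
    'Bitstream', 'zcu102_single', ...  % bitstream must match the target board
    'Target', hTarget);

% Compile the network into instructions and weights for the bitstream
hW.compile;

% Program the FPGA and load the compiled network
hW.deploy;

% Run inference and profile speed on the board (inputImage is a placeholder)
[prediction, speed] = hW.predict(inputImage, 'Profile', 'on');
```

After the run, `release(hTarget)` frees the connection to the board so another session can use it.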
Prototype Deep Learning Networks on FPGA and SoCs Workflow
Accelerate the prototyping, deployment, design verification, and iteration of your custom deep learning network running on a fixed bitstream by using the dlhdl.Workflow object.
LIBIIO/Ethernet Connection Based Deployment
Rapidly deploy deep learning networks to FPGA boards using MATLAB.
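For an Ethernet-based connection, the target object carries the board address and credentials. A minimal sketch, assuming the board is reachable at the IP address shown (all values are placeholders):

```matlab
% Configure an Ethernet connection to a Xilinx board at a known IP address
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet', ...
    'IPAddress', '192.168.1.101');

% Check the SSH connection and the bitstream running on the board
validateConnection(hTarget);
```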
Estimate Performance of Deep Learning Network Running with Bitstream
Estimate the throughput and initial latency for a given trained deep learning network running on a fixed bitstream.
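Estimation does not require connected hardware, so you can compare networks and bitstreams before deploying. A sketch, assuming a pretrained network `net` and a placeholder bitstream name:

```matlab
% Create a workflow object without a target connection
hW = dlhdl.Workflow('Network', net, 'Bitstream', 'zcu102_single');

% Estimate throughput and latency for the network on the fixed bitstream
results = hW.estimate('Performance');
```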
Obtain performance parameters of an inference run for a pretrained series network on a specified target FPGA board.
Improve the performance of your deployed deep learning network by using the multiple frame support feature.
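With multiple frame support, you pass several inputs in a single predict call, amortizing per-call overhead across frames. A sketch, assuming `hW` is an already deployed dlhdl.Workflow object and the images are placeholders sized for the network input:

```matlab
% Stack several input images along the fourth dimension
imgBatch = cat(4, img1, img2, img3);

% Run inference on all frames in one call and profile the speed
[predictions, speed] = hW.predict(imgBatch, 'Profile', 'on');
```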