The figure illustrates the MATLAB® solution for implementing deep learning on FPGA.
The FPGA deep learning solution provides an end-to-end workflow that lets you estimate, compile, profile, and debug your custom pretrained series network. You can also generate a custom deep learning processor IP. The estimator predicts the speed of your network on a given deep learning processor configuration before you deploy to hardware. The compiler converts the pretrained deep learning network into instructions and network parameters that can be deployed to the deep learning processor running on the intended target FPGA board.
To learn more about the deep learning processor IP, see Deep Learning Processor IP Core.
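As a minimal sketch of this end-to-end flow, the following example estimates, compiles, deploys, and runs a pretrained series network through the `dlhdl.Workflow` interface. The board interface, IP address, bitstream name, and input image are illustrative assumptions, not values from this page, and the code requires Deep Learning HDL Toolbox plus a supported board:

```matlab
% Load a pretrained series network (any supported series network works;
% googlenet here assumes its support package is installed).
net = googlenet;

% Describe the target board. Interface and IPAddress are placeholders.
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet', ...
    'IPAddress','192.168.1.101');

% Tie the network to a prebuilt bitstream for the target board.
hW = dlhdl.Workflow('Network',net,'Bitstream','zcu102_single', ...
    'Target',hTarget);

% Estimate speed before touching hardware.
estimate(hW,'network');

% Compile the network into instructions and parameters for the
% deep learning processor IP.
dn = compile(hW);

% Program the FPGA and load the compiled network.
deploy(hW);

% Run prediction on a sample input and profile layer-level performance.
inputImage = rand(224,224,3,'single');   % placeholder input
[prediction,speed] = predict(hW,inputImage,'Profile','on');
```

The same `dlhdl.Workflow` object carries you from estimation through deployment, so you can iterate on the bitstream or network choice without rewriting the surrounding script.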
FPGAs provide advantages such as:
High performance
Flexible interfacing
Data parallelism
Model parallelism
Pipeline parallelism
To perform common deep learning on FPGA tasks, see the workflows listed in this table.
| Task | Workflow |
| --- | --- |
| Run a pretrained series network on your target FPGA board. | Prototype Deep Learning Networks on FPGA and SoCs Workflow |
| Obtain the performance of your pretrained series network for a preconfigured deep learning processor. | Estimate Performance of Deep Learning Network Running with Bitstream |
| Customize the deep learning processor to meet your area or performance constraints. | Estimate Performance of Deep Learning Network by Using Custom Processor Configuration |
| Generate a custom deep learning processor for your FPGA. | Generate Custom Bitstream |
| Learn about the benefits of quantizing your pretrained series networks. | Quantization of Deep Neural Networks |
| Compare the accuracy of your quantized pretrained series network against that of your single-precision pretrained series network. | Validation |
| Run a quantized pretrained series network on your target FPGA board. | Code Generation and Deployment |
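The custom-processor and quantization tasks in the table can be sketched as follows, assuming the `dlhdl.ProcessorConfig` and `dlquantizer` interfaces; the target frequency and the calibration datastore name are illustrative assumptions:

```matlab
% Start from a pretrained series network.
net = googlenet;

% Customize the deep learning processor configuration and estimate
% how the network would perform on it, before generating hardware.
hPC = dlhdl.ProcessorConfig;
hPC.TargetFrequency = 220;          % MHz; placeholder value
estimatePerformance(hPC,net);       % prints latency/throughput estimates

% Generate a custom deep learning processor IP and bitstream
% from the customized configuration (requires HDL tool setup).
dlhdl.buildProcessor(hPC);

% Quantize the network for FPGA deployment: create a quantizer,
% then calibrate it with representative data. calData is a
% placeholder for your calibration datastore.
dlquantObj = dlquantizer(net,'ExecutionEnvironment','FPGA');
calibrate(dlquantObj,calData);
```

After calibration, you can validate the quantized network's accuracy against the single-precision network and deploy it with the same `dlhdl.Workflow` steps used for a single-precision bitstream.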