Supported Networks, Layers and Boards

Supported Pretrained Networks

Deep Learning HDL Toolbox™ supports code generation for series convolutional neural networks (CNNs or ConvNets). You can generate code for any trained convolutional neural network whose computational layers are supported for code generation. See Supported Layers. You can use one of the pretrained networks listed in the table and generate code for your target Intel® or Xilinx® FPGA boards.
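
As a sketch of the overall flow (the board choice, interface, pretrained network, and image file below are assumptions, not requirements), deploying one of these pretrained networks to a ZCU102 board with the shipping single-data-type bitstream typically looks like:

```matlab
% Hypothetical example: deploy a pretrained series network to a Xilinx ZCU102.
% Assumes the Deep Learning HDL Toolbox support package for Xilinx boards is
% installed and the board is reachable over Ethernet.
net = alexnet;                                 % any supported pretrained network

hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'zcu102_single', ...  % shipping bitstream
                    'Target', hTarget);

hW.compile;                                    % generate weights and instructions
hW.deploy;                                     % program the FPGA and load the network

img = single(imresize(imread('peppers.png'), [227 227]));  % placeholder image
prediction = hW.predict(img, 'Profile', 'on'); % run inference with profiling
[~, idx] = max(prediction);
```

The `Bitstream` name selects one of the shipping bitstreams listed in the table below (for example, `zc706_single` or `arria10soc_single` for the other boards).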

| Network | Description | Type | Single data type, shipping bitstreams (ZCU102 / ZC706 / Arria10 SoC) | INT8 data type, shipping bitstreams (ZCU102 / ZC706 / Arria10 SoC) | Application Area |
| --- | --- | --- | --- | --- | --- |
| AlexNet | AlexNet convolutional neural network. | Series Network | Yes / Yes / Yes | Yes / Yes / Yes | Classification |
| LogoNet | Logo recognition network (LogoNet) is a MATLAB®-developed logo identification network. For more information, see Logo Recognition Network. | Series Network | Yes / Yes / Yes | Yes / Yes / Yes | Classification |
| MNIST | MNIST digit classification network. | Series Network | Yes / Yes / Yes | Yes / Yes / Yes | Regression |
| Lane detection | LaneNet convolutional neural network. For more information, see Deploy Transfer Learning Network for Lane Detection. | Series Network | Yes / Yes / Yes | Yes / Yes / Yes | Classification |
| VGG-16 | VGG-16 convolutional neural network. For the pretrained VGG-16 model, see vgg16. | Series Network | No (network exceeds PL DDR memory size) / No (network exceeds FC module memory size) / Yes | Yes / No (network exceeds FC module memory size) / Yes | Classification |
| VGG-19 | VGG-19 convolutional neural network. For the pretrained VGG-19 model, see vgg19. | Series Network | No (network exceeds PL DDR memory size) / No (network exceeds FC module memory size) / Yes | Yes / No (network exceeds FC module memory size) / Yes | Classification |
| Darknet-19 | Darknet-19 convolutional neural network. For the pretrained Darknet-19 model, see darknet19. | Series Network | Yes / Yes / Yes | No / No / No (the network contains a globalAveragePooling layer, which is not supported for INT8 quantization) | Classification |
| Radar Classification | Convolutional neural network that uses micro-Doppler signatures to identify and classify objects. For more information, see Bicyclist and Pedestrian Classification by Using FPGA. | Series Network | Yes / Yes / Yes | No / No / No (the network contains an averagePooling layer, which is not supported for INT8 quantization) | Classification and Software Defined Radio (SDR) |
| Defect Detection snet_defnet | snet_defnet is a custom AlexNet network used to identify and classify defects. For more information, see Defect Detection. | Series Network | Yes / Yes / Yes | Yes / Yes / Yes | Classification |
| Defect Detection snet_blemdetnet | snet_blemdetnet is a custom convolutional neural network used to identify and classify defects. For more information, see Defect Detection. | Series Network | Yes / Yes / Yes | Yes / Yes / Yes | Classification |
| YOLO v2 Vehicle Detection | You only look once (YOLO) is an object detector that decodes the predictions from a convolutional neural network and generates bounding boxes around the objects. For more information, see Vehicle Detection Using YOLO v2 Deployed to FPGA. | Series Network based | Yes / Yes / Yes | Yes / Yes / Yes | Object detection |
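
Where the INT8 columns above show Yes, the network is quantized before deployment. A minimal sketch of that step (assuming a trained network `net` and a calibration image datastore `calData`, both placeholders):

```matlab
% Hypothetical INT8 flow: calibrate with dlquantizer, then deploy with one of
% the shipping *_int8 bitstreams.
dlQuantObj = dlquantizer(net, 'ExecutionEnvironment', 'FPGA');
calibrate(dlQuantObj, calData);               % collect dynamic ranges of weights/activations

hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');
hW = dlhdl.Workflow('Network', dlQuantObj, ...  % pass the quantizer object as the network
                    'Bitstream', 'zcu102_int8', ...
                    'Target', hTarget);
hW.compile;
hW.deploy;
```

Networks marked No in the INT8 columns (for example, Darknet-19) fail this step because they contain layers that are not supported for INT8 quantization.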

Supported Layers

The following layers are supported by Deep Learning HDL Toolbox. Each entry lists whether the layer runs in hardware (HW) or software (SW), its description and limitations, and whether it is INT8 compatible.

Input Layers

imageInputLayer (SW)

An image input layer inputs 2-D images to a network and applies data normalization.

INT8 compatible: Yes. Runs as single data type in SW.

Convolution and Fully Connected Layers

convolution2dLayer (HW)

A 2-D convolutional layer applies sliding convolutional filters to the input.

These limitations apply when generating code for a network that uses this layer:

  • Filter size must be square and in the range 1-12. For example, [1 1] or [12 12].

  • Stride size must be square and 1, 2, or 4.

  • Padding size must be in the range 0-8.

  • Dilation factor must be [1 1].

INT8 compatible: Yes

groupedConvolution2dLayer (HW)

A 2-D grouped convolutional layer separates the input channels into groups and applies sliding convolutional filters. Use grouped convolutional layers for channel-wise separable (also known as depth-wise separable) convolution.

These limitations apply when generating code for a network that uses this layer:

  • Filter size must be square and in the range 1-12. For example, [1 1] or [12 12].

  • Stride size must be square and 1, 2, or 4.

  • Padding size must be in the range 0-8.

  • Dilation factor must be [1 1].

  • Number of groups must be 1 or 2.

INT8 compatible: Yes

fullyConnectedLayer (HW)

A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.

INT8 compatible: Yes
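
As an illustration of the limits above, the layer definitions below stay within the supported ranges; the filter and channel counts are arbitrary:

```matlab
% Square 3x3 filter (range 1-12), square stride of 2 (must be 1, 2, or 4),
% padding of 1 (range 0-8), and the default dilation factor [1 1] -- all
% within the limits listed above.
conv = convolution2dLayer([3 3], 16, ...
    'Stride', [2 2], ...
    'Padding', [1 1 1 1]);

% Grouped variant: same filter/stride/padding limits, plus the number of
% groups must be 1 or 2.
gconv = groupedConvolution2dLayer([3 3], 8, 2, 'Stride', [2 2]);
```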

Activation Layers

reluLayer (HW)

A ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero.

A ReLU layer is supported only when it is preceded by a convolution layer.

INT8 compatible: Yes

leakyReluLayer (HW)

A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar.

A leaky ReLU layer is supported only when it is preceded by a convolution layer.

INT8 compatible: No

clippedReluLayer (HW)

A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling value.

A clipped ReLU layer is supported only when it is preceded by a convolution layer.

INT8 compatible: No
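
The preceded-by-a-convolution-layer requirement above means each activation layer must directly follow a convolution, as in this sketch (the input size and channel counts are arbitrary):

```matlab
% Each activation directly follows a convolution layer, as required for
% code generation.
layers = [
    imageInputLayer([32 32 3])
    convolution2dLayer(3, 8, 'Padding', 1)
    reluLayer                           % INT8 compatible
    convolution2dLayer(3, 16, 'Padding', 1)
    clippedReluLayer(6)                 % supported, but not INT8 compatible
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
```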

Normalization, Dropout, and Cropping Layers

batchNormalizationLayer (HW)

A batch normalization layer normalizes each input channel across a mini-batch.

A batch normalization layer is supported only when it is preceded by a convolution layer.

INT8 compatible: Yes

crossChannelNormalizationLayer (HW)

A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization.

The WindowChannelSize must be in the range 3-9 for code generation.

INT8 compatible: Yes. Runs as single data type in HW.

dropoutLayer (no-op on inference)

A dropout layer randomly sets input elements to zero with a given probability.

INT8 compatible: Yes

Pooling and Unpooling Layers

maxPooling2dLayer (HW)

A max pooling layer performs downsampling by dividing the input into rectangular pooling regions and computing the maximum of each region.

These limitations apply when generating code for a network that uses this layer:

  • Pool size must be square and in the range 1-12. For example, [1 1] or [12 12].

  • Stride size must be square and in the range 1-7.

  • Padding size must be in the range 0-2. Padding can be used only when the pool size is 3-by-3.

INT8 compatible: Yes

averagePooling2dLayer (HW)

An average pooling layer performs downsampling by dividing the input into rectangular pooling regions and computing the average value of each region.

These limitations apply when generating code for a network that uses this layer:

  • Pool size must be square and in the range 1-12. For example, [3 3].

  • Stride size must be square and in the range 1-7.

  • Padding size must be in the range 0-2. Padding can be used only when the pool size is 3-by-3.

INT8 compatible: No

globalAveragePooling2dLayer (HW)

A global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input.

These limitations apply when generating code for a network that uses this layer:

  • Pool size must be square and in the range 1-12. For example, [1 1] or [12 12].

  • Total activation pixel size must be smaller than the deep learning processor convolution module input memory size. For more information, see InputMemorySize.

INT8 compatible: No
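
As a sketch of pooling layers that satisfy the limits above (pool and stride values chosen for illustration):

```matlab
% 3x3 pool (square, range 1-12), stride of 2 (square, range 1-7), and
% padding of 1 -- padding is allowed only because the pool size is 3-by-3.
mpool = maxPooling2dLayer([3 3], 'Stride', [2 2], 'Padding', [1 1 1 1]);

% Average pooling has the same size/stride/padding limits, but is not
% INT8 compatible.
apool = averagePooling2dLayer([3 3], 'Stride', [2 2]);
```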

Output Layer

softmaxLayer (SW)

A softmax layer applies a softmax function to the input.

INT8 compatible: Yes. Runs as single data type in SW.

classificationLayer (SW)

A classification layer computes the cross-entropy loss for multiclass classification problems with mutually exclusive classes.

INT8 compatible: Yes

regressionLayer (SW)

A regression layer computes the half-mean-squared-error loss for regression problems.

INT8 compatible: Yes

Keras and ONNX Layers

nnet.keras.layer.FlattenCStyleLayer (HW)

Flattens activations into 1-D, assuming C-style (row-major) order.

A nnet.keras.layer.FlattenCStyleLayer is supported only when it is followed by a fully connected layer.

INT8 compatible: Yes

nnet.keras.layer.ZeroPadding2dLayer (HW)

Zero-padding layer for 2-D input.

A nnet.keras.layer.ZeroPadding2dLayer is supported only when it is followed by a convolution layer or a maxpool layer.

INT8 compatible: Yes
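
These layer classes typically appear after importing a Keras model. A sketch (the file name model.h5 is a placeholder, and the Keras importer support package is assumed to be installed):

```matlab
% Hypothetical import: Flatten and ZeroPadding2D layers in the Keras model
% map to the nnet.keras.layer.* classes described above.
net = importKerasNetwork('model.h5');
analyzeNetwork(net);   % inspect which imported layer classes the network contains
```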

Supported Boards

These boards are supported by Deep Learning HDL Toolbox:

  • Xilinx Zynq®-7000 ZC706.

  • Intel Arria® 10 SoC.

  • Xilinx Zynq UltraScale+™ MPSoC ZCU102.

Related Topics