MATLAB® Coder™ supports code generation for series and directed acyclic graph (DAG) convolutional neural networks (CNNs or ConvNets). You can generate code for any trained convolutional neural network whose layers are supported for code generation. See Supported Layers.
The following pretrained networks, available in Deep Learning Toolbox™, are supported for code generation.
Network Name | Description | ARM® Compute Library | Intel® MKL-DNN |
---|---|---|---|
AlexNet | AlexNet convolutional neural network. For the pretrained AlexNet model, see alexnet. | Yes | Yes |
DarkNet | DarkNet-19 and DarkNet-53 convolutional neural networks. For the pretrained DarkNet models, see darknet19 and darknet53. | Yes | Yes |
DenseNet-201 | DenseNet-201 convolutional neural network. For the pretrained DenseNet-201 model, see densenet201. | Yes | Yes |
GoogLeNet | GoogLeNet convolutional neural network. For the pretrained GoogLeNet model, see googlenet. | Yes | Yes |
Inception-ResNet-v2 | Inception-ResNet-v2 convolutional neural network. For the pretrained Inception-ResNet-v2 model, see inceptionresnetv2. | Yes | Yes |
Inception-v3 | Inception-v3 convolutional neural network. For the pretrained Inception-v3 model, see inceptionv3. | Yes | Yes |
MobileNet-v2 | MobileNet-v2 convolutional neural network. For the pretrained MobileNet-v2 model, see mobilenetv2. | Yes | Yes |
NASNet-Large | NASNet-Large convolutional neural network. For the pretrained NASNet-Large model, see nasnetlarge. | Yes | Yes |
NASNet-Mobile | NASNet-Mobile convolutional neural network. For the pretrained NASNet-Mobile model, see nasnetmobile. | Yes | Yes |
ResNet | ResNet-18, ResNet-50, and ResNet-101 convolutional neural networks. For the pretrained ResNet models, see resnet18, resnet50, and resnet101. | Yes | Yes |
SegNet | Multiclass pixelwise segmentation network. For more information, see segnet. | No | Yes |
SqueezeNet | Small deep neural network. For the pretrained SqueezeNet model, see squeezenet. | Yes | Yes |
VGG-16 | VGG-16 convolutional neural network. For the pretrained VGG-16 model, see vgg16. | Yes | Yes |
VGG-19 | VGG-19 convolutional neural network. For the pretrained VGG-19 model, see vgg19. | Yes | Yes |
Xception | Xception convolutional neural network. For the pretrained Xception model, see xception. | Yes | Yes |
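For example, you can generate code for one of these networks by loading it inside an entry-point function with coder.loadDeepLearningNetwork and then running codegen. The following is a minimal sketch, assuming the MATLAB Coder Interface for Deep Learning Libraries support package is installed; the entry-point name googlenet_predict is hypothetical:

function out = googlenet_predict(in) %#codegen
% Load the pretrained network into a persistent variable once,
% so the generated code reuses it across calls.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('googlenet');
end
out = predict(net, in);
end

% Generate C++ MEX code that calls the Intel MKL-DNN library,
% for a single 224-by-224 RGB input image.
cfg = coder.config('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');
codegen -config cfg googlenet_predict -args {ones(224,224,3,'single')}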
The following layers are supported for code generation by MATLAB Coder for the target deep learning libraries specified in the table. After you install the MATLAB Coder Interface for Deep Learning Libraries support package, you can use coder.getDeepLearningLayers to see a list of the layers supported for a specific deep learning library. For example:
coder.getDeepLearningLayers('mkldnn')
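To list the layers supported for the ARM Compute Library instead, pass 'arm-compute':
coder.getDeepLearningLayers('arm-compute')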
Layer Name | Description | ARM Compute Library | Intel MKL-DNN |
---|---|---|---|
additionLayer | Addition layer | Yes | Yes |
anchorBoxLayer | Anchor box layer | Yes | Yes |
averagePooling2dLayer | Average pooling layer | Yes | Yes |
batchNormalizationLayer | Batch normalization layer | Yes | Yes |
bilstmLayer | Bidirectional LSTM layer | Yes | No |
classificationLayer | Create classification output layer | Yes | Yes |
clippedReluLayer | Clipped Rectified Linear Unit (ReLU) layer | Yes | Yes |
concatenationLayer | Concatenation layer | Yes | Yes |
convolution2dLayer | 2-D convolution layer | Yes | Yes |
crop2dLayer | Layer that applies 2-D cropping to the input | Yes | Yes |
crossChannelNormalizationLayer | Channel-wise local response normalization layer | Yes | Yes |
Custom output layers | All output layers, including custom classification or regression output layers created by using nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer. For an example showing how to define a custom classification output layer and specify a loss function, see Define Custom Classification Output Layer (Deep Learning Toolbox). For an example showing how to define a custom regression output layer and specify a loss function, see Define Custom Regression Output Layer (Deep Learning Toolbox). | Yes | Yes |
depthConcatenationLayer | Depth concatenation layer | Yes | Yes |
dropoutLayer | Dropout layer | Yes | Yes |
eluLayer | Exponential linear unit (ELU) layer | Yes | Yes |
fullyConnectedLayer | Fully connected layer | Yes | Yes |
globalAveragePooling2dLayer | Global average pooling layer for spatial data | Yes | Yes |
globalMaxPooling2dLayer | 2-D global max pooling layer | Yes | Yes |
groupedConvolution2dLayer | 2-D grouped convolutional layer | Yes | Yes |
imageInputLayer | Image input layer | Yes | Yes |
leakyReluLayer | Leaky Rectified Linear Unit (ReLU) layer | Yes | Yes |
lstmLayer | Long short-term memory (LSTM) layer | Yes | No |
maxPooling2dLayer | Max pooling layer | Yes | Yes |
maxUnpooling2dLayer | Max unpooling layer | No | Yes |
pixelClassificationLayer | Create pixel classification layer for semantic segmentation | Yes | Yes |
regressionLayer | Create a regression output layer | Yes | Yes |
reluLayer | Rectified Linear Unit (ReLU) layer | Yes | Yes |
sequenceInputLayer | Sequence input layer | Yes | No |
softmaxLayer | Softmax layer | Yes | Yes |
ssdMergeLayer | SSD merge layer for object detection | Yes | Yes |
nnet.keras.layer.FlattenCStyleLayer | Flattens activations into 1-D assuming C-style (row-major) order | Yes | Yes |
nnet.keras.layer.GlobalAveragePooling2dLayer | Global average pooling layer for spatial data | Yes | Yes |
nnet.keras.layer.SigmoidLayer | Sigmoid activation layer | Yes | Yes |
nnet.keras.layer.TanhLayer | Hyperbolic tangent activation layer | Yes | Yes |
nnet.keras.layer.ZeroPadding2dLayer | Zero padding layer for 2-D input | Yes | Yes |
nnet.onnx.layer.ElementwiseAffineLayer | Layer that performs element-wise scaling of the input followed by an addition | Yes | Yes |
nnet.onnx.layer.FlattenLayer | Flatten layer for ONNX™ network | Yes | Yes |
nnet.onnx.layer.IdentityLayer | Layer that implements ONNX identity operator | Yes | Yes |
tanhLayer | Hyperbolic tangent (tanh) layer | Yes | Yes |
transposedConv2dLayer | Transposed 2-D convolution layer. Code generation does not support asymmetric cropping of the input. For example, specifying a vector [t b l r] for the 'Cropping' parameter to crop the top, bottom, left, and right of the input is not supported. | Yes | Yes |
wordEmbeddingLayer | A word embedding layer maps word indices to vectors | Yes | No |
yolov2OutputLayer | Output layer for YOLO v2 object detection network | Yes | Yes |
yolov2ReorgLayer | Reorganization layer for YOLO v2 object detection network | Yes | Yes |
yolov2TransformLayer | Transform layer for YOLO v2 object detection network | Yes | Yes |
Class | Description | ARM Compute Library | Intel MKL-DNN |
---|---|---|---|
yolov2ObjectDetector | Detect objects using the YOLO v2 object detector | Yes | Yes |
ssdObjectDetector | Detect objects using the SSD-based detector | Yes | Yes |
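These detector objects follow the same entry-point pattern as the networks above. The following is a minimal sketch for a YOLO v2 detector, assuming a detector previously saved to a MAT-file; the file name yolov2Detector.mat is hypothetical:

function outImg = yolov2_detect(in) %#codegen
% Load the saved yolov2ObjectDetector once; coder.loadDeepLearningNetwork
% also accepts a MAT-file that contains a single detector object.
persistent detector;
if isempty(detector)
    detector = coder.loadDeepLearningNetwork('yolov2Detector.mat');
end
% Run detection and annotate the detected boxes with their scores.
[bboxes, scores] = detect(detector, in);
outImg = insertObjectAnnotation(in, 'rectangle', bboxes, scores);
end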