Deep learning array for custom training loops
A deep learning array stores data with optional data format labels for custom training loops, and enables functions to compute and use derivatives through automatic differentiation.
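For example, a minimal sketch of creating a formatted dlarray (the array sizes here are illustrative assumptions, not values from this page):

```matlab
% Create a formatted dlarray for a hypothetical mini-batch of
% 16 RGB images of size 28-by-28 (sizes chosen for illustration).
X = rand(28,28,3,16);
dlX = dlarray(X,'SSCB');   % 'S' spatial, 'C' channel, 'B' batch
```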
Tip
For most deep learning tasks, you can use a pretrained network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train networks from scratch using layerGraph objects with the trainNetwork and trainingOptions functions.

If the trainingOptions function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Deep Learning Network for Custom Training Loops.
dlarray labels enable the functions in this table to execute with the assurance that the data has the appropriate format. A short example follows the table.
Function | Operation | Validates Input Dimension | Affects Size of Input Dimension |
---|---|---|---|
avgpool | Compute the average of the input data over moving rectangular (or cuboidal) spatial ('S') regions defined by a pool size parameter. | 'S' | 'S' |
batchnorm | Normalize the values contained in each channel ('C') of the input data. | 'C' | |
crossentropy | Compute the cross-entropy between estimates and target values, averaged by the size of the batch ('B') dimension. | 'S', 'C', 'B', 'T', 'U' (Estimates and target arrays must have the same sizes.) | 'S', 'C', 'B', 'T', 'U' (The output is an unlabeled scalar.) |
dlconv | Compute the deep learning convolution of the input data using an array of filters, matching the number of spatial ('S') and (a function of the) channel ('C') dimensions of the input, and adding a constant bias. | 'S', 'C' | 'S', 'C' |
dltranspconv | Compute the deep learning transposed convolution of the input data using an array of filters, matching the number of spatial ('S') and (a function of the) channel ('C') dimensions of the input, and adding a constant bias. | 'S', 'C' | 'S', 'C' |
fullyconnect | Compute a weighted sum of the input data and apply a bias for each batch ('B') and time ('T') dimension. | 'S', 'C', 'U' | 'S', 'C', 'B', 'T', 'U' (The output always has labels 'CB', 'CT', or 'CTB'.) |
gru | Apply a gated recurrent unit calculation to the input data. | 'S', 'C', 'T' | 'C' |
lstm | Apply a long short-term memory calculation to the input data. | 'S', 'C', 'T' | 'C' |
maxpool | Compute the maximum of the input data over moving rectangular spatial ('S') regions defined by a pool size parameter. | 'S' | 'S' |
maxunpool | Compute the unpooling operation over the spatial ('S') dimensions. | 'S' | 'S' |
mse | Compute the half mean squared error between estimates and target values, averaged by the size of the batch ('B') dimension. | 'S', 'C', 'B', 'T', 'U' (Estimates and target arrays must have the same sizes.) | 'S', 'C', 'B', 'T', 'U' (The output is an unlabeled scalar.) |
softmax | Apply the softmax activation to each channel ('C') of the input data. | 'C' | |
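As a short sketch of how these labels are used (the input sizes and pooling parameters below are illustrative assumptions), a formatted dlarray lets avgpool identify and pool over the spatial ('S') dimensions while the other dimensions pass through unchanged:

```matlab
% avgpool operates on the 'S' dimensions of a formatted dlarray.
dlX = dlarray(rand(28,28,3,16),'SSCB');   % spatial, spatial, channel, batch
dlY = avgpool(dlX,2,'Stride',2);          % pool size 2, stride 2 over the 'S' dims
size(dlY)   % 14 14 3 16 -- only the 'S' dimensions change size
dims(dlY)   % 'SSCB'     -- labels are preserved
```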
These functions require each dimension to have a label, specified either as the labels of their first dlarray input or as the 'DataFormat' name-value pair argument containing the dimension labels.
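For example, a minimal sketch (illustrative sizes) of passing an unformatted dlarray together with the 'DataFormat' argument:

```matlab
% The input has no labels, so supply them with 'DataFormat'.
X = dlarray(rand(28,28,3,16));            % unformatted dlarray
Y = avgpool(X,2,'DataFormat','SSCB');     % dimension labels given by name-value argument
```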
dlarray enforces the order of labels 'SCBTU'. This enforcement eliminates ambiguous semantics in operations, which implicitly match labels between inputs. dlarray also enforces that the labels 'C', 'B', and 'T' each appear at most once. The functions that use these labels accept at most one dimension for each label.
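A short sketch of the 'SCBTU' ordering (data values and sizes are illustrative): constructing a dlarray with labels given in a different order permutes the underlying data into 'SCBTU' order.

```matlab
% Labels given in 'BCSS' order are reordered to 'SSCB'.
X = rand(16,3,28,28);          % batch, channel, spatial, spatial
dlX = dlarray(X,'BCSS');
dims(dlX)                      % 'SSCB'
size(dlX)                      % 28 28 3 16 -- data permuted to match 'SCBTU' order
```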
dlarray provides functions for removing labels (stripdims), obtaining the dimensions associated with labels (finddim), and listing the labels associated with a dlarray (dims).

For more information on how a dlarray behaves with labels, see Notable dlarray Behaviors.
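A minimal sketch using these utilities (the array sizes are illustrative):

```matlab
dlX = dlarray(rand(28,28,3,16),'SSCB');
labels = dims(dlX);            % 'SSCB'
cdim = finddim(dlX,'C');       % 3, the dimension labeled 'C'
X = stripdims(dlX);            % dlarray with the labels removed
```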
Function | Description |
---|---|
avgpool | Pool data to average values over spatial dimensions |
batchnorm | Normalize each channel of mini-batch |
crossentropy | Cross-entropy loss for classification tasks |
dims | Dimension labels of dlarray |
dlconv | Deep learning convolution |
dlgradient | Compute gradients for custom training loops using automatic differentiation |
dltranspconv | Deep learning transposed convolution |
extractdata | Extract data from dlarray |
finddim | Find dimensions with specified label |
fullyconnect | Sum all weighted input data and apply a bias |
gru | Gated recurrent unit |
leakyrelu | Apply leaky rectified linear unit activation |
lstm | Long short-term memory |
maxpool | Pool data to maximum value |
maxunpool | Unpool the output of a maximum pooling operation |
mse | Half mean squared error |
relu | Apply rectified linear unit activation |
sigmoid | Apply sigmoid activation |
softmax | Apply softmax activation to channel dimension |
stripdims | Remove dlarray labels |
A dlarray also supports functions for numeric, matrix, and other operations. For the full list, see List of Functions with dlarray Support.
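For instance, a sketch of ordinary numeric operations on a dlarray (the specific operations shown are illustrative; consult the linked list for exactly which functions are supported):

```matlab
dlX = dlarray(rand(3,4));
dlY = dlX.^2 + 1;           % element-wise arithmetic returns a dlarray
m = mean(dlY,2);            % many numeric reductions accept dlarray inputs
Y = extractdata(dlY);       % recover the underlying numeric array
```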
A dlgradient call must be inside a function. To obtain a numeric value of a gradient, you must evaluate the function using dlfeval, and the argument to the function must be a dlarray. See Use Automatic Differentiation In Deep Learning Toolbox.
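A minimal sketch of this pattern (the function name modelGradients and the objective sum(x.^2,'all') are illustrative assumptions, not from this page): the gradient is computed inside a function that dlfeval evaluates with a dlarray argument.

```matlab
% Evaluate the function with dlfeval so that dlgradient can trace the computation.
x0 = dlarray([1 2; 3 4]);
[y,grad] = dlfeval(@modelGradients,x0);

function [y,grad] = modelGradients(x)
    % y must be a scalar computed from supported dlarray operations.
    y = sum(x.^2,'all');
    grad = dlgradient(y,x);   % derivative of y with respect to x
end
```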
To enable the correct evaluation of gradients, dlfeval must call functions that use only supported functions for dlarray. See List of Functions with dlarray Support.