Options for training deep learning neural network
options = trainingOptions(solverName) returns training options for the optimizer specified by solverName. To train a network, use the training options as an input argument to the trainNetwork function.

options = trainingOptions(solverName,Name,Value) returns training options with additional options specified by one or more name-value pair arguments.
Create a set of options for training a network using stochastic gradient descent with momentum. Reduce the learning rate by a factor of 0.2 every 5 epochs. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Turn on the training progress plot.
options = trainingOptions('sgdm', ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.2, ...
    'LearnRateDropPeriod',5, ...
    'MaxEpochs',20, ...
    'MiniBatchSize',64, ...
    'Plots','training-progress')
options = 
  TrainingOptionsSGDM with properties:

                    Momentum: 0.9000
            InitialLearnRate: 0.0100
           LearnRateSchedule: 'piecewise'
         LearnRateDropFactor: 0.2000
         LearnRateDropPeriod: 5
            L2Regularization: 1.0000e-04
     GradientThresholdMethod: 'l2norm'
           GradientThreshold: Inf
                   MaxEpochs: 20
               MiniBatchSize: 64
                     Verbose: 1
            VerboseFrequency: 50
              ValidationData: []
         ValidationFrequency: 50
          ValidationPatience: Inf
                     Shuffle: 'once'
              CheckpointPath: ''
        ExecutionEnvironment: 'auto'
                  WorkerLoad: []
                   OutputFcn: []
                       Plots: 'training-progress'
              SequenceLength: 'longest'
        SequencePaddingValue: 0
    SequencePaddingDirection: 'right'
        DispatchInBackground: 0
     ResetInputNormalization: 1
When you train networks for deep learning, it is often useful to monitor the training progress. By plotting various metrics during training, you can learn how the training is progressing. For example, you can determine if and how quickly the network accuracy is improving, and whether the network is starting to overfit the training data.
When you specify 'training-progress' as the 'Plots' value in trainingOptions and start network training, trainNetwork creates a figure and displays training metrics at every iteration. Each iteration is an estimation of the gradient and an update of the network parameters. If you specify validation data in trainingOptions, then the figure shows validation metrics each time trainNetwork validates the network. The figure plots the following:
Training accuracy — Classification accuracy on each individual mini-batch.
Smoothed training accuracy — Smoothed training accuracy, obtained by applying a smoothing algorithm to the training accuracy. It is less noisy than the unsmoothed accuracy, making it easier to spot trends.
Validation accuracy — Classification accuracy on the entire validation set (specified using trainingOptions).
Training loss, smoothed training loss, and validation loss — The loss on each mini-batch, its smoothed version, and the loss on the validation set, respectively. If the final layer of your network is a classificationLayer, then the loss function is the cross entropy loss. For more information about loss functions for classification and regression problems, see Output Layers.
For regression networks, the figure plots the root mean square error (RMSE) instead of the accuracy.
The figure marks each training epoch using a shaded background. An epoch is a full pass through the entire data set.
During training, you can stop training and return the current state of the network by clicking the stop button in the top-right corner. For example, you might want to stop training when the accuracy of the network reaches a plateau and it is clear that the accuracy is no longer improving. After you click the stop button, it can take a while for the training to complete. Once training is complete, trainNetwork returns the trained network.
When training finishes, view the Results showing the final validation accuracy and the reason that training finished. The final validation metrics are labeled Final in the plots. If your network contains batch normalization layers, then the final validation metrics are often different from the validation metrics evaluated during training. This is because batch normalization layers in the final network perform different operations than during training.
On the right, view information about the training time and settings. To learn more about training options, see Set Up Parameters and Train Convolutional Neural Network.
Plot Training Progress During Training
Train a network and plot the training progress during training.
Load the training data, which contains 5000 images of digits. Set aside 1000 of the images for network validation.
[XTrain,YTrain] = digitTrain4DArrayData;

idx = randperm(size(XTrain,4),1000);
XValidation = XTrain(:,:,:,idx);
XTrain(:,:,:,idx) = [];
YValidation = YTrain(idx);
YTrain(idx) = [];
Construct a network to classify the digit image data.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,8,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
Specify options for network training. To validate the network at regular intervals during training, specify validation data. Choose the 'ValidationFrequency' value so that the network is validated about once per epoch. To plot training progress during training, specify 'training-progress' as the 'Plots' value.
options = trainingOptions('sgdm', ...
    'MaxEpochs',8, ...
    'ValidationData',{XValidation,YValidation}, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');
Train the network.
net = trainNetwork(XTrain,YTrain,layers,options);
solverName — Solver for training network
'sgdm' | 'rmsprop' | 'adam'

Solver for training network, specified as one of the following:

'sgdm' — Use the stochastic gradient descent with momentum (SGDM) optimizer. You can specify the momentum value using the 'Momentum' name-value pair argument.

'rmsprop' — Use the RMSProp optimizer. You can specify the decay rate of the squared gradient moving average using the 'SquaredGradientDecayFactor' name-value pair argument.

'adam' — Use the Adam optimizer. You can specify the decay rates of the gradient and squared gradient moving averages using the 'GradientDecayFactor' and 'SquaredGradientDecayFactor' name-value pair arguments, respectively.

For more information about the different solvers, see Stochastic Gradient Descent. For a brief sketch of selecting each solver, see the example below.
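For reference, a minimal sketch showing how each solver is selected together with its solver-specific option; the numeric values are illustrative, not recommendations:

options = trainingOptions('sgdm','Momentum',0.9);                          % SGDM with explicit momentum

options = trainingOptions('rmsprop','SquaredGradientDecayFactor',0.99);    % RMSProp with squared gradient decay

options = trainingOptions('adam', ...                                      % Adam with both decay factors
    'GradientDecayFactor',0.9, ...
    'SquaredGradientDecayFactor',0.999);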
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: 'InitialLearnRate',0.03,'L2Regularization',0.0005,'LearnRateSchedule','piecewise' specifies the initial learning rate as 0.03 and the L2 regularization factor as 0.0005, and instructs the software to drop the learning rate every given number of epochs by multiplying with a certain factor.

'Plots' — Plots to display during network training
'none' (default) | 'training-progress'
Plots to display during network training, specified as the comma-separated pair consisting of 'Plots' and one of the following:

'none' — Do not display plots during training.

'training-progress' — Plot training progress. The plot shows mini-batch loss and accuracy, validation loss and accuracy, and additional information on the training progress. The plot has a stop button in the top-right corner. Click the button to stop training and return the current state of the network. For more information on the training progress plot, see Monitor Deep Learning Training Progress.
Example: 'Plots','training-progress'
'Verbose' — Indicator to display training progress information
1 (true) (default) | 0 (false)

Indicator to display training progress information in the command window, specified as the comma-separated pair consisting of 'Verbose' and either 1 (true) or 0 (false).
The verbose output displays the following information:
Classification Networks

Field | Description |
---|---|
Epoch | Epoch number. An epoch corresponds to a full pass of the data. |
Iteration | Iteration number. An iteration corresponds to a mini-batch. |
Time Elapsed | Time elapsed in hours, minutes, and seconds. |
Mini-batch Accuracy | Classification accuracy on the mini-batch. |
Validation Accuracy | Classification accuracy on the validation data. If you do not specify validation data, then the function does not display this field. |
Mini-batch Loss | Loss on the mini-batch. If the output layer is a ClassificationOutputLayer object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. |
Validation Loss | Loss on the validation data. If the output layer is a ClassificationOutputLayer object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. If you do not specify validation data, then the function does not display this field. |
Base Learning Rate | Base learning rate. The software multiplies the learn rate factors of the layers by this value. |
Regression Networks

Field | Description |
---|---|
Epoch | Epoch number. An epoch corresponds to a full pass of the data. |
Iteration | Iteration number. An iteration corresponds to a mini-batch. |
Time Elapsed | Time elapsed in hours, minutes, and seconds. |
Mini-batch RMSE | Root-mean-squared-error (RMSE) on the mini-batch. |
Validation RMSE | RMSE on the validation data. If you do not specify validation data, then the software does not display this field. |
Mini-batch Loss | Loss on the mini-batch. If the output layer is a RegressionOutputLayer object, then the loss is the half-mean-squared-error. |
Validation Loss | Loss on the validation data. If the output layer is a RegressionOutputLayer object, then the loss is the half-mean-squared-error. If you do not specify validation data, then the software does not display this field. |
Base Learning Rate | Base learning rate. The software multiplies the learn rate factors of the layers by this value. |
To specify validation data, use the 'ValidationData' name-value pair argument.
Example: 'Verbose',false
'VerboseFrequency' — Frequency of verbose printing

Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as the comma-separated pair consisting of 'VerboseFrequency' and a positive integer. This option only has an effect when the 'Verbose' value equals true.

If you validate the network during training, then trainNetwork also prints to the command window every time validation occurs.
Example: 'VerboseFrequency',100
'MaxEpochs' — Maximum number of epochs

Maximum number of epochs to use for training, specified as the comma-separated pair consisting of 'MaxEpochs' and a positive integer.

An iteration is one step taken in the gradient descent algorithm towards minimizing the loss function using a mini-batch. An epoch is the full pass of the training algorithm over the entire training set.
Example: 'MaxEpochs',20
'MiniBatchSize' — Size of mini-batch

Size of the mini-batch to use for each training iteration, specified as the comma-separated pair consisting of 'MiniBatchSize' and a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights. See Stochastic Gradient Descent.
Example: 'MiniBatchSize',256
'Shuffle' — Option for data shuffling
'once' (default) | 'never' | 'every-epoch'

Option for data shuffling, specified as the comma-separated pair consisting of 'Shuffle' and one of the following:

'once' — Shuffle the training and validation data once before training.

'never' — Do not shuffle the data.

'every-epoch' — Shuffle the training data before each training epoch, and shuffle the validation data before each network validation. If the mini-batch size does not evenly divide the number of training samples, then trainNetwork discards the training data that does not fit into the final complete mini-batch of each epoch. To avoid discarding the same data every epoch, set the 'Shuffle' value to 'every-epoch'.
Example: 'Shuffle','every-epoch'
'ValidationData' — Data to use for validation during training

Data to use for validation during training, specified as an image datastore, a datastore, a table, or a cell array. The format of the validation data depends on the type of task and must correspond to valid inputs to the trainNetwork function.

Specify validation data as one of the following:

Input | trainNetwork Argument |
---|---|
Image datastore | imds |
Datastore | ds |
Table | tbl |
Cell array {X,Y} | X and Y |
Cell array {sequences,Y} | sequences and Y |
During training, trainNetwork calculates the validation accuracy and validation loss on the validation data. To specify the validation frequency, use the 'ValidationFrequency' name-value pair argument. You can also use the validation data to stop training automatically when the validation loss stops decreasing. To turn on automatic validation stopping, use the 'ValidationPatience' name-value pair argument.

If your network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training (mini-batch) accuracy.

The validation data is shuffled according to the 'Shuffle' value. If the 'Shuffle' value equals 'every-epoch', then the validation data is shuffled before each network validation. For a sketch showing how to pass validation data to trainingOptions, see the example below.
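As an illustration, the following sketch passes held-out image data as a cell array and enables both periodic validation and automatic stopping. The variable names XValidation and YValidation are placeholders for your own held-out data, and the numeric values are illustrative:

options = trainingOptions('sgdm', ...
    'ValidationData',{XValidation,YValidation}, ...  % cell array {X,Y} format
    'ValidationFrequency',50, ...                    % validate every 50 iterations
    'ValidationPatience',5);                         % stop after 5 non-improving validations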
'ValidationFrequency' — Frequency of network validation

Frequency of network validation in number of iterations, specified as the comma-separated pair consisting of 'ValidationFrequency' and a positive integer.

The 'ValidationFrequency' value is the number of iterations between evaluations of validation metrics. To specify validation data, use the 'ValidationData' name-value pair argument.
Example: 'ValidationFrequency',20
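A common choice is to validate roughly once per epoch. A minimal sketch, assuming YTrain holds the training labels and XValidation and YValidation are placeholders for your held-out data:

miniBatchSize = 128;                                        % must match the 'MiniBatchSize' option
validationFrequency = floor(numel(YTrain)/miniBatchSize);   % approximate iterations per epoch

options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'ValidationData',{XValidation,YValidation}, ...
    'ValidationFrequency',validationFrequency);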
'ValidationPatience' — Patience of validation stopping
Inf (default) | positive integer

Patience of validation stopping of network training, specified as the comma-separated pair consisting of 'ValidationPatience' and a positive integer or Inf.

The 'ValidationPatience' value is the number of times that the loss on the validation set can be larger than or equal to the previously smallest loss before network training stops. To turn on automatic validation stopping, specify a positive integer as the 'ValidationPatience' value. If you use the default value of Inf, then the training stops after the maximum number of epochs. To specify validation data, use the 'ValidationData' name-value pair argument.
Example: 'ValidationPatience',5
'InitialLearnRate' — Initial learning rate

Initial learning rate used for training, specified as the comma-separated pair consisting of 'InitialLearnRate' and a positive scalar. The default value is 0.01 for the 'sgdm' solver and 0.001 for the 'rmsprop' and 'adam' solvers. If the learning rate is too low, then training takes a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.
Example: 'InitialLearnRate',0.03
Data Types: single | double
'LearnRateSchedule' — Option for dropping learning rate during training
'none' (default) | 'piecewise'

Option for dropping the learning rate during training, specified as the comma-separated pair consisting of 'LearnRateSchedule' and one of the following:

'none' — The learning rate remains constant throughout training.

'piecewise' — The software updates the learning rate every certain number of epochs by multiplying with a certain factor. Use the LearnRateDropFactor name-value pair argument to specify the value of this factor. Use the LearnRateDropPeriod name-value pair argument to specify the number of epochs between multiplications.
Example: 'LearnRateSchedule','piecewise'
'LearnRateDropPeriod' — Number of epochs for dropping the learning rate

Number of epochs for dropping the learning rate, specified as the comma-separated pair consisting of 'LearnRateDropPeriod' and a positive integer. This option is valid only when the value of LearnRateSchedule is 'piecewise'.

The software multiplies the global learning rate with the drop factor every time the specified number of epochs passes. Specify the drop factor using the LearnRateDropFactor name-value pair argument.
Example: 'LearnRateDropPeriod',3
'LearnRateDropFactor' — Factor for dropping the learning rate

Factor for dropping the learning rate, specified as the comma-separated pair consisting of 'LearnRateDropFactor' and a scalar from 0 to 1. This option is valid only when the value of LearnRateSchedule is 'piecewise'.

LearnRateDropFactor is a multiplicative factor to apply to the learning rate every time a certain number of epochs passes. Specify the number of epochs using the LearnRateDropPeriod name-value pair argument. For a sketch of how these two options interact, see the example below.
Example: 'LearnRateDropFactor',0.1
Data Types: single | double
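To see how 'LearnRateDropPeriod' and 'LearnRateDropFactor' interact, the following sketch computes the learning rate used in each epoch under the 'piecewise' schedule; the specific values are illustrative:

initialLearnRate = 0.01;   % 'InitialLearnRate'
dropFactor = 0.1;          % 'LearnRateDropFactor'
dropPeriod = 3;            % 'LearnRateDropPeriod'
maxEpochs = 12;            % 'MaxEpochs'

epochs = 1:maxEpochs;
learnRate = initialLearnRate * dropFactor.^floor((epochs-1)/dropPeriod);
% learnRate is 0.01 for epochs 1-3, 0.001 for epochs 4-6, and so on.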
'L2Regularization' — Factor for L2 regularization

Factor for L2 regularization (weight decay), specified as the comma-separated pair consisting of 'L2Regularization' and a nonnegative scalar. For more information, see L2 Regularization.
You can specify a multiplier for the L2 regularization for network layers with learnable parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
Example: 'L2Regularization',0.0005
Data Types: single | double
'Momentum' — Contribution of previous step

Contribution of the parameter update step of the previous iteration to the current iteration of stochastic gradient descent with momentum, specified as the comma-separated pair consisting of 'Momentum' and a scalar from 0 to 1. A value of 0 means no contribution from the previous step, whereas a value of 1 means maximal contribution from the previous step.

To specify the 'Momentum' value, you must set solverName to be 'sgdm'. The default value works well for most problems. For more information about the different solvers, see Stochastic Gradient Descent.
Example: 'Momentum',0.95
Data Types: single | double
'GradientDecayFactor' — Decay rate of gradient moving average

Decay rate of gradient moving average for the Adam solver, specified as the comma-separated pair consisting of 'GradientDecayFactor' and a nonnegative scalar less than 1. The gradient decay rate is denoted by β1 in [4].

To specify the 'GradientDecayFactor' value, you must set solverName to be 'adam'. The default value works well for most problems. For more information about the different solvers, see Stochastic Gradient Descent.
Example: 'GradientDecayFactor',0.95
Data Types: single | double
'SquaredGradientDecayFactor' — Decay rate of squared gradient moving average

Decay rate of squared gradient moving average for the Adam and RMSProp solvers, specified as the comma-separated pair consisting of 'SquaredGradientDecayFactor' and a nonnegative scalar less than 1. The squared gradient decay rate is denoted by β2 in [4].

To specify the 'SquaredGradientDecayFactor' value, you must set solverName to be 'adam' or 'rmsprop'. Typical values of the decay rate are 0.9, 0.99, and 0.999, corresponding to averaging lengths of 10, 100, and 1000 parameter updates, respectively. The default value is 0.999 for the Adam solver. The default value is 0.9 for the RMSProp solver. For more information about the different solvers, see Stochastic Gradient Descent.
Example: 'SquaredGradientDecayFactor',0.99
Data Types: single | double
'Epsilon' — Denominator offset

Denominator offset for Adam and RMSProp solvers, specified as the comma-separated pair consisting of 'Epsilon' and a positive scalar. The solver adds the offset to the denominator in the network parameter updates to avoid division by zero.

To specify the 'Epsilon' value, you must set solverName to be 'adam' or 'rmsprop'. The default value works well for most problems. For more information about the different solvers, see Stochastic Gradient Descent. For a sketch that combines these solver options, see the example below.
Example: 'Epsilon',1e-6
Data Types: single | double
'ResetInputNormalization' — Option to reset input layer normalization
true (default) | false

Option to reset input layer normalization, specified as one of the following:

true — Reset the input layer normalization statistics and recalculate them at training time.

false — Calculate normalization statistics at training time when they are empty.
'GradientThreshold' — Gradient threshold
Inf (default) | positive scalar

Gradient threshold, specified as the comma-separated pair consisting of 'GradientThreshold' and Inf or a positive scalar. If the gradient exceeds the value of GradientThreshold, then the gradient is clipped according to GradientThresholdMethod.
Example: 'GradientThreshold',6
'GradientThresholdMethod' — Gradient threshold method
'l2norm' (default) | 'global-l2norm' | 'absolute-value'

Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as the comma-separated pair consisting of 'GradientThresholdMethod' and one of the following:

'l2norm' — If the L2 norm of the gradient of a learnable parameter is larger than GradientThreshold, then scale the gradient so that the L2 norm equals GradientThreshold.

'global-l2norm' — If the global L2 norm, L, is larger than GradientThreshold, then scale all gradients by a factor of GradientThreshold/L. The global L2 norm considers all learnable parameters.

'absolute-value' — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than GradientThreshold, then scale the partial derivative to have magnitude equal to GradientThreshold and retain the sign of the partial derivative.
For more information, see Gradient Clipping.
Example: 'GradientThresholdMethod','global-l2norm'
'SequenceLength' — Option to pad, truncate, or split input sequences
'longest' (default) | 'shortest' | positive integer

Option to pad, truncate, or split input sequences, specified as one of the following:

'longest' — Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the network.

'shortest' — Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.

Positive integer — For each mini-batch, pad the sequences to the nearest multiple of the specified length that is greater than the longest sequence length in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the 'MiniBatchSize' option to a lower value.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
Example: 'SequenceLength','shortest'
'SequencePaddingDirection' — Direction of padding or truncation
'right' (default) | 'left'

Direction of padding or truncation, specified as one of the following:

'right' — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.

'left' — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.

Because LSTM layers process sequence data one time step at a time, when the layer OutputMode property is 'last', any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the 'SequencePaddingDirection' option to 'left'.

For sequence-to-sequence networks (when the OutputMode property is 'sequence' for each LSTM layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the 'SequencePaddingDirection' option to 'right'.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
'SequencePaddingValue' — Value to pad input sequences

Value by which to pad input sequences, specified as a scalar. The option is valid only when SequenceLength is 'longest' or a positive integer. Do not pad sequences with NaN, because doing so can propagate errors throughout the network. For a sketch that combines the sequence options, see the example below.
Example: 'SequencePaddingValue',-1
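A sketch that combines the three sequence-related options in a left-padded, zero-padded configuration, which can be useful for sequence-to-label networks as described above; the solver choice is illustrative:

options = trainingOptions('adam', ...
    'SequenceLength','longest', ...         % pad to the longest sequence in each mini-batch
    'SequencePaddingDirection','left', ...  % pad at the start so sequences end together
    'SequencePaddingValue',0);              % pad with zeros, never NaN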
'ExecutionEnvironment' — Hardware resource for training network
'auto' (default) | 'cpu' | 'gpu' | 'multi-gpu' | 'parallel'

Hardware resource for training network, specified as one of the following:

'auto' — Use a GPU if one is available. Otherwise, use the CPU.

'cpu' — Use the CPU.

'gpu' — Use the GPU.

'multi-gpu' — Use multiple GPUs on one machine, using a local parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts a parallel pool with pool size equal to the number of available GPUs.

'parallel' — Use a local or remote parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts one using the default cluster profile. If the pool has access to GPUs, then only workers with a unique GPU perform training computation. If the pool does not have GPUs, then training takes place on all available CPU workers instead.
For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel and in the Cloud.
GPU, multi-GPU, and parallel options require Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher. If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
To see an improvement in performance when training in parallel, try scaling up the MiniBatchSize and InitialLearnRate training options by the number of GPUs, as shown in the sketch below.
Training long short-term memory networks supports single CPU or single GPU training only.
Datastores used for multi-GPU training or parallel training must be partitionable. For more information, see Use Datastore for Parallel Training and Background Dispatching.
If you use the 'multi-gpu' option with a partitionable input datastore and the 'DispatchInBackground' option, then the software starts a parallel pool with size equal to the default pool size. Workers with unique GPUs perform training computation. The remaining workers are used for background dispatch.
Example: 'ExecutionEnvironment','cpu'
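A sketch of the scaling rule mentioned above for multi-GPU training; numGPUs and the base values are placeholders you would adjust for your own setup:

numGPUs = 4;                % placeholder: number of available GPUs
baseMiniBatchSize = 128;
baseLearnRate = 0.01;

options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','multi-gpu', ...
    'MiniBatchSize',baseMiniBatchSize*numGPUs, ...   % scale the mini-batch size by GPU count
    'InitialLearnRate',baseLearnRate*numGPUs);       % scale the learning rate to match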
'WorkerLoad' — Parallel worker load division

Parallel worker load division between GPUs or CPUs, specified as the comma-separated pair consisting of 'WorkerLoad' and one of the following:

Scalar from 0 to 1 — Fraction of workers on each machine to use for network training computation. If you train the network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.

Positive integer — Number of workers on each machine to use for network training computation. If you train the network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.

Numeric vector — Network training load for each worker in the parallel pool. For a vector W, worker i gets a fraction W(i)/sum(W) of the work (number of examples per mini-batch). If you train a network using data in a mini-batch datastore with background dispatch enabled, then you can assign a worker load of 0 to use that worker for fetching data in the background. The specified vector must contain one value per worker in the parallel pool.
If the parallel pool has access to GPUs, then workers without a unique GPU are never used for training computation. The default for pools with GPUs is to use all workers with a unique GPU for training computation, and the remaining workers for background dispatch. If the pool does not have access to GPUs and CPUs are used for training, then the default is to use one worker per machine for background data dispatch.
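For example, the following sketch assigns a load of 0 to the last worker of a hypothetical four-worker pool so that it only fetches data in the background; this assumes background dispatch is enabled and the datastore is partitionable:

options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','parallel', ...
    'DispatchInBackground',true, ...
    'WorkerLoad',[1 1 1 0]);   % three workers train, the fourth fetches data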
'DispatchInBackground' — Use background dispatch
false (default) | true

Use background dispatch (asynchronous prefetch queuing) to read training data from datastores, specified as false or true. Background dispatch requires Parallel Computing Toolbox.

DispatchInBackground is only supported for datastores that are partitionable. For more information, see Use Datastore for Parallel Training and Background Dispatching.
'CheckpointPath' — Path for saving checkpoint networks
'' (default) | character vector

Path for saving the checkpoint networks, specified as the comma-separated pair consisting of 'CheckpointPath' and a character vector.

If you do not specify a path (that is, you use the default ''), then the software does not save any checkpoint networks.

If you specify a path, then trainNetwork saves checkpoint networks to this path after every epoch and assigns a unique name to each network. You can then load any checkpoint network and resume training from that network, as shown in the sketch below.

If the folder does not exist, then you must first create it before specifying the path for saving the checkpoint networks. If the path you specify does not exist, then trainingOptions returns an error.

For more information about saving network checkpoints, see Save Checkpoint Networks and Resume Training.
Example: 'CheckpointPath','C:\Temp\checkpoint'
Data Types: char
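A sketch of saving checkpoints and resuming training from one of them. The checkpoint file name shown is hypothetical, and the variable stored in the checkpoint file is assumed to be named net; XTrain and YTrain are placeholders for your training data:

% Save a checkpoint network after every epoch to an existing folder
options = trainingOptions('sgdm','CheckpointPath','C:\Temp\checkpoint');
net = trainNetwork(XTrain,YTrain,layers,options);

% Later, resume training from a saved checkpoint (hypothetical file name)
load('C:\Temp\checkpoint\net_checkpoint__100__2019_01_01__00_00_00.mat','net');
options2 = trainingOptions('sgdm','InitialLearnRate',0.001);
net2 = trainNetwork(XTrain,YTrain,net.Layers,options2);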
'OutputFcn' — Output functions

Output functions to call during training, specified as the comma-separated pair consisting of 'OutputFcn' and a function handle or cell array of function handles. trainNetwork calls the specified functions once before the start of training, after each iteration, and once after training has finished. trainNetwork passes a structure containing information in the following fields:
Field | Description |
---|---|
Epoch | Current epoch number |
Iteration | Current iteration number |
TimeSinceStart | Time in seconds since the start of training |
TrainingLoss | Current mini-batch loss |
ValidationLoss | Loss on the validation data |
BaseLearnRate | Current base learning rate |
TrainingAccuracy | Accuracy on the current mini-batch (classification networks) |
TrainingRMSE | RMSE on the current mini-batch (regression networks) |
ValidationAccuracy | Accuracy on the validation data (classification networks) |
ValidationRMSE | RMSE on the validation data (regression networks) |
State | Current training state, with a possible value of "start", "iteration", or "done" |
If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.
You can use output functions to display or plot progress information, or to stop training. To stop training early, make your output function return true. If any output function returns true, then training finishes and trainNetwork returns the latest network. For an example showing how to use output functions, see Customize Output During Deep Learning Network Training. A minimal sketch of an output function appears below.
Data Types: function_handle | cell
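A minimal sketch of an output function that uses the State and ValidationAccuracy fields from the table above to stop training once a target validation accuracy is reached. The function name and the target value are hypothetical; save the function in its own file (stopAtTargetAccuracy.m):

function stop = stopAtTargetAccuracy(info)
% Stop training when the validation accuracy reaches 95%.
% Fields that are not calculated for a given call contain an empty array.
stop = false;
if info.State == "iteration" && ~isempty(info.ValidationAccuracy) ...
        && info.ValidationAccuracy >= 95
    stop = true;
end
end

Pass the function handle in the training options, for example 'OutputFcn',@stopAtTargetAccuracy.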
options — Training options
TrainingOptionsSGDM | TrainingOptionsRMSProp | TrainingOptionsADAM

Training options, returned as a TrainingOptionsSGDM, TrainingOptionsRMSProp, or TrainingOptionsADAM object. To train a neural network, use the training options as an input argument to the trainNetwork function.

If solverName equals 'sgdm', 'rmsprop', or 'adam', then the training options are returned as a TrainingOptionsSGDM, TrainingOptionsRMSProp, or TrainingOptionsADAM object, respectively.

You can edit training option properties of TrainingOptionsSGDM, TrainingOptionsADAM, and TrainingOptionsRMSProp objects directly. For example, to change the mini-batch size after using the trainingOptions function, you can edit the MiniBatchSize property directly:

options = trainingOptions('sgdm');
options.MiniBatchSize = 64;
For most deep learning tasks, you can use a pretrained network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train networks from scratch using layerGraph objects with the trainNetwork and trainingOptions functions.
If the trainingOptions function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Deep Learning Network for Custom Training Loops.
For convolutional and fully connected layers, the initialization for the weights and biases is given by the WeightsInitializer and BiasInitializer properties of the layers, respectively. For examples showing how to change the initialization for the weights and biases, see Specify Initial Weights and Biases in Convolutional Layer and Specify Initial Weights and Biases in Fully Connected Layer.
The standard gradient descent algorithm updates the network parameters (weights and biases) to minimize the loss function by taking small steps at each iteration in the direction of the negative gradient of the loss,

$$\theta_{\ell+1} = \theta_{\ell} - \alpha \nabla E(\theta_{\ell}),$$

where $\ell$ is the iteration number, $\alpha > 0$ is the learning rate, $\theta$ is the parameter vector, and $E(\theta)$ is the loss function. In the standard gradient descent algorithm, the gradient of the loss function, $\nabla E(\theta)$, is evaluated using the entire training set, and the standard gradient descent algorithm uses the entire data set at once.
By contrast, at each iteration the stochastic gradient descent algorithm evaluates the gradient and updates the parameters using a subset of the training data. A different subset, called a mini-batch, is used at each iteration. The full pass of the training algorithm over the entire training set using mini-batches is one epoch. Stochastic gradient descent is stochastic because the parameter update computed using a mini-batch is a noisy estimate of the parameter update that would result from using the full data set. You can specify the mini-batch size and the maximum number of epochs by using the 'MiniBatchSize' and 'MaxEpochs' name-value pair arguments, respectively.
The stochastic gradient descent algorithm can oscillate along the path of steepest descent towards the optimum. Adding a momentum term to the parameter update is one way to reduce this oscillation [2]. The stochastic gradient descent with momentum (SGDM) update is

$$\theta_{\ell+1} = \theta_{\ell} - \alpha \nabla E(\theta_{\ell}) + \gamma (\theta_{\ell} - \theta_{\ell-1}),$$

where $\gamma$ determines the contribution of the previous gradient step to the current iteration. You can specify this value using the 'Momentum' name-value pair argument. To train a neural network using the stochastic gradient descent with momentum algorithm, specify solverName as 'sgdm'. To specify the initial value of the learning rate $\alpha$, use the 'InitialLearnRate' name-value pair argument. You can also specify different learning rates for different layers and parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
Stochastic gradient descent with momentum uses a single learning rate for all the parameters. Other optimization algorithms seek to improve network training by using learning rates that differ by parameter and can automatically adapt to the loss function being optimized. RMSProp (root mean square propagation) is one such algorithm. It keeps a moving average of the element-wise squares of the parameter gradients,

$$v_{\ell} = \beta_2 v_{\ell-1} + (1 - \beta_2) \left[\nabla E(\theta_{\ell})\right]^2,$$

where $\beta_2$ is the decay rate of the moving average. Common values of the decay rate are 0.9, 0.99, and 0.999. The corresponding averaging lengths of the squared gradients equal $1/(1-\beta_2)$, that is, 10, 100, and 1000 parameter updates, respectively. You can specify $\beta_2$ by using the 'SquaredGradientDecayFactor' name-value pair argument. The RMSProp algorithm uses this moving average to normalize the updates of each parameter individually,

$$\theta_{\ell+1} = \theta_{\ell} - \frac{\alpha \nabla E(\theta_{\ell})}{\sqrt{v_{\ell}} + \epsilon},$$

where the division is performed element-wise. Using RMSProp effectively decreases the learning rates of parameters with large gradients and increases the learning rates of parameters with small gradients. $\epsilon$ is a small constant added to avoid division by zero. You can specify $\epsilon$ by using the 'Epsilon' name-value pair argument, but the default value usually works well. To use RMSProp to train a neural network, specify solverName as 'rmsprop'.
Adam (derived from adaptive moment estimation) [4] uses a parameter update that is similar to RMSProp, but with an added momentum term. It keeps an element-wise moving average of both the parameter gradients and their squared values,

$$m_{\ell} = \beta_1 m_{\ell-1} + (1 - \beta_1) \nabla E(\theta_{\ell}),$$
$$v_{\ell} = \beta_2 v_{\ell-1} + (1 - \beta_2) \left[\nabla E(\theta_{\ell})\right]^2.$$

You can specify the $\beta_1$ and $\beta_2$ decay rates using the 'GradientDecayFactor' and 'SquaredGradientDecayFactor' name-value pair arguments, respectively. Adam uses the moving averages to update the network parameters as

$$\theta_{\ell+1} = \theta_{\ell} - \frac{\alpha m_{\ell}}{\sqrt{v_{\ell}} + \epsilon}.$$

If gradients over many iterations are similar, then using a moving average of the gradient enables the parameter updates to pick up momentum in a certain direction. If the gradients contain mostly noise, then the moving average of the gradient becomes smaller, and so the parameter updates become smaller too. You can specify $\epsilon$ by using the 'Epsilon' name-value pair argument. The default value usually works well, but for certain problems a value as large as 1 works better. To use Adam to train a neural network, specify solverName as 'adam'. The full Adam update also includes a mechanism to correct a bias that appears in the beginning of training. For more information, see [4].
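A self-contained sketch of the Adam moving averages and parameter update above, without the bias-correction mechanism just mentioned; the quadratic loss and all variable names are hypothetical stand-ins:

% Minimize E(theta) = 0.5*||theta||^2 with Adam-style updates (no bias correction).
gradFcn = @(theta) theta;     % gradient of the quadratic loss (stand-in for a mini-batch gradient)
theta = [1; -2];              % initial parameters
alpha = 0.001;                % 'InitialLearnRate'
beta1 = 0.9;                  % 'GradientDecayFactor'
beta2 = 0.999;                % 'SquaredGradientDecayFactor'
epsilon = 1e-8;               % 'Epsilon'
m = zeros(size(theta));       % moving average of gradients
v = zeros(size(theta));       % moving average of squared gradients

for iteration = 1:100
    grad = gradFcn(theta);
    m = beta1*m + (1-beta1)*grad;
    v = beta2*v + (1-beta2)*grad.^2;
    theta = theta - alpha*m./(sqrt(v) + epsilon);   % element-wise parameter update
end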
Specify the learning rate $\alpha$ for all optimization algorithms using the 'InitialLearnRate' name-value pair argument. The effect of the learning rate is different for the different optimization algorithms, so the optimal learning rates are also different in general. You can also specify learning rates that differ by layers and by parameter. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
If the gradients increase in magnitude exponentially, then the training is unstable and can diverge within a few iterations. This "gradient explosion" is indicated by a training loss that goes to NaN or Inf. Gradient clipping helps prevent gradient explosion by stabilizing the training at higher learning rates and in the presence of outliers [3]. Gradient clipping enables networks to be trained faster, and does not usually impact the accuracy of the learned task.
There are two types of gradient clipping.

Norm-based gradient clipping rescales the gradient based on a threshold, and does not change the direction of the gradient. The 'l2norm' and 'global-l2norm' values of GradientThresholdMethod are norm-based gradient clipping methods.

Value-based gradient clipping clips any partial derivative greater than the threshold, which can result in the gradient arbitrarily changing direction. Value-based gradient clipping can have unpredictable behavior, but sufficiently small changes do not cause the network to diverge. The 'absolute-value' value of GradientThresholdMethod is a value-based gradient clipping method.
For examples, see Time Series Forecasting Using Deep Learning and Sequence-to-Sequence Classification Using Deep Learning.
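For illustration, a sketch of the 'l2norm' and 'absolute-value' clipping rules applied to a single gradient array; grad and gradientThreshold are hypothetical example values:

grad = [3 -4; 0.5 2];      % example gradient of one learnable parameter
gradientThreshold = 2;     % 'GradientThreshold'

% Norm-based clipping ('l2norm'): rescale so the L2 norm equals the threshold
gradClippedNorm = grad;
gradNorm = norm(grad(:));
if gradNorm > gradientThreshold
    gradClippedNorm = grad*(gradientThreshold/gradNorm);   % direction is preserved
end

% Value-based clipping ('absolute-value'): clip each partial derivative, keep its sign
gradClippedValue = max(min(grad,gradientThreshold),-gradientThreshold);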
Adding a regularization term for the weights to the loss function is one way to reduce overfitting [1], [2]. The regularization term is also called weight decay. The loss function with the regularization term takes the form

$$E_{R}(\theta) = E(\theta) + \lambda \Omega(w),$$

where $w$ is the weight vector, $\lambda$ is the regularization factor (coefficient), and the regularization function $\Omega(w)$ is

$$\Omega(w) = \frac{1}{2} w^{\mathsf{T}} w.$$

Note that the biases are not regularized [2]. You can specify the regularization factor $\lambda$ by using the 'L2Regularization' name-value pair argument. You can also specify different regularization factors for different layers and parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
The loss function that the software uses for network training includes the regularization term. However, the loss value displayed in the command window and training progress plot during training is the loss on the data only and does not include the regularization term.
'ValidationPatience' training option default is Inf

Behavior changed in R2018b

Starting in R2018b, the default value of the 'ValidationPatience' training option is Inf, which means that automatic stopping via validation is turned off. This behavior prevents the training from stopping before sufficiently learning from the data.
In previous versions, the default value was 5. To reproduce this behavior, set the 'ValidationPatience' option to 5.
Behavior changed in R2018b
Starting in R2018b, when saving checkpoint networks, the software assigns file names beginning with net_checkpoint_. In previous versions, the software assigned file names beginning with convnet_checkpoint_.
If you have code that saves and loads checkpoint networks, then update your code to load files with the new name.
[1] Bishop, C. M. Pattern Recognition and Machine Learning. Springer, New York, NY, 2006.
[2] Murphy, K. P. Machine Learning: A Probabilistic Perspective. The MIT Press, Cambridge, Massachusetts, 2012.
[3] Pascanu, R., T. Mikolov, and Y. Bengio. "On the difficulty of training recurrent neural networks". Proceedings of the 30th International Conference on Machine Learning. Vol. 28(3), 2013, pp. 1310–1318.
[4] Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
analyzeNetwork | Deep Network Designer | trainNetwork