Training options for RMSProp optimizer
Training options for RMSProp (root mean square propagation) optimizer, including learning rate information, L2 regularization factor, and mini-batch size.
Create a TrainingOptionsRMSProp object using trainingOptions and specifying 'rmsprop' as the solverName input argument.
Plots — Plots to display during network training
'none' | 'training-progress'
Plots to display during network training, specified as one of the following:
'none' — Do not display plots during training.
'training-progress' — Plot training progress. The plot shows mini-batch loss and accuracy, validation loss and accuracy, and additional information on the training progress. The plot has a stop button in the top-right corner. Click the button to stop training and return the current state of the network.
Verbose — Indicator to display training progress information
1 | 0
Indicator to display training progress information in the command window, specified as 1 (true) or 0 (false).
The displayed information includes the epoch number, iteration number, time elapsed, mini-batch loss, mini-batch accuracy, and base learning rate. When you train a regression network, root mean square error (RMSE) is shown instead of accuracy. If you validate the network during training, then the displayed information also includes the validation loss and validation accuracy (or RMSE).
Data Types: logical
VerboseFrequency — Frequency of verbose printing
Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer. This property has an effect only when the Verbose value equals true.
If you validate the network during training, then trainNetwork prints to the command window every time validation occurs.
MaxEpochs — Maximum number of epochs
Maximum number of epochs to use for training, specified as a positive integer.
An iteration is one step taken in the gradient descent algorithm towards minimizing the loss function using a mini-batch. An epoch is the full pass of the training algorithm over the entire training set.
MiniBatchSize — Size of mini-batch
Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights.
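For example, a quick sketch of how epochs, iterations, and the mini-batch size relate (the training set size and option values here are hypothetical):

% Hypothetical example: 50,000 training observations, mini-batch size 128, 20 epochs.
numObservations = 50000;
miniBatchSize = 128;
maxEpochs = 20;
iterationsPerEpoch = floor(numObservations/miniBatchSize)   % 390 iterations per epoch
totalIterations = iterationsPerEpoch*maxEpochs              % 7800 iterations in total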
Shuffle — Option for data shuffling
'once' | 'never' | 'every-epoch'
Option for data shuffling, specified as one of the following:
'once' — Shuffle the training and validation data once before training.
'never' — Do not shuffle the data.
'every-epoch' — Shuffle the training data before each training epoch, and shuffle the validation data before each network validation. If the mini-batch size does not evenly divide the number of training samples, then trainNetwork discards the training data that does not fit into the final complete mini-batch of each epoch. Set the Shuffle value to 'every-epoch' to avoid discarding the same data every epoch.
ValidationData — Data to use for validation during training
Data to use for validation during training, specified as an image datastore, a datastore, a table, or a cell array. The format of the validation data depends on the type of task and must correspond to valid inputs to the trainNetwork function.
Specify validation data as one of the following:
Input | trainNetwork Argument |
---|---|
Image datastore | imds |
Datastore | ds |
Table | tbl |
Cell array {X,Y} | X, Y |
Cell array {sequences,Y} | sequences, Y |
During training, trainNetwork calculates the validation accuracy and validation loss on the validation data. To specify the validation frequency, use the 'ValidationFrequency' name-value pair argument. You can also use the validation data to stop training automatically when the validation loss stops decreasing. To turn on automatic validation stopping, use the 'ValidationPatience' name-value pair argument.
If your network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training (mini-batch) accuracy.
The validation data is shuffled according to the 'Shuffle' value. If the 'Shuffle' value equals 'every-epoch', then the validation data is shuffled before each network validation.
ValidationFrequency — Frequency of network validation
Frequency of network validation in number of iterations, specified as a positive integer.
The ValidationFrequency value is the number of iterations between evaluations of validation metrics.
ValidationPatience — Patience of validation stopping
Inf | positive integer
Patience of validation stopping of network training, specified as a positive integer or Inf.
The 'ValidationPatience' value is the number of times that the loss on the validation set can be larger than or equal to the previously smallest loss before network training stops.
InitialLearnRate — Initial learning rate
Initial learning rate used for training, specified as a positive scalar. If the learning rate is too low, then training takes a long time. If the learning rate is too high, then training can reach a suboptimal result.
LearnRateScheduleSettings — Settings for learning rate schedule
Settings for the learning rate schedule, specified as a structure. LearnRateScheduleSettings has the field Method, which specifies the type of method for adjusting the learning rate. The possible methods are:
'none' — The learning rate is constant throughout training.
'piecewise' — The learning rate drops periodically during training.
If Method is 'piecewise', then LearnRateScheduleSettings contains two more fields:
DropRateFactor — The multiplicative factor by which the learning rate drops during training
DropPeriod — The number of epochs that pass between adjustments to the learning rate during training
Specify the settings for the learning rate schedule using trainingOptions.
Data Types: struct
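You do not set LearnRateScheduleSettings directly. A sketch of requesting a piecewise schedule through the corresponding trainingOptions name-value pairs:

% Drop the learning rate by a factor of 0.2 every 5 epochs.
options = trainingOptions('rmsprop', ...
    'InitialLearnRate',1e-3, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.2, ...
    'LearnRateDropPeriod',5);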
L2Regularization — Factor for L2 regularizer
Factor for L2 regularizer (weight decay), specified as a nonnegative scalar.
You can specify a multiplier for the L2 regularizer for network layers with learnable parameters.
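For example, a sketch of setting the global factor and then scaling it for one layer's weights with setL2Factor (the layer and the scaling factor are illustrative):

% Global L2 regularization factor applied to all learnable parameters.
options = trainingOptions('rmsprop','L2Regularization',1e-4);

% Scale the factor for the weights of one layer; its effective factor becomes 2*1e-4.
layer = convolution2dLayer(3,16);
layer = setL2Factor(layer,'Weights',2);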
SquaredGradientDecayFactor — Decay rate of squared gradient moving average
Decay rate of squared gradient moving average, specified as a scalar from 0 to 1. For more information about the different solvers, see Stochastic Gradient Descent.
Epsilon — Denominator offset
Denominator offset, specified as a positive scalar. The solver adds the offset to the denominator in the network parameter updates to avoid division by zero.
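A minimal sketch of how these two options enter a single RMSProp parameter update (illustrative only, not the toolbox implementation):

% w is a parameter vector, g its gradient, and v the moving average of squared gradients.
% squaredGradDecay corresponds to SquaredGradientDecayFactor, epsilon to Epsilon.
v = squaredGradDecay*v + (1 - squaredGradDecay)*g.^2;
w = w - learnRate*g./(sqrt(v) + epsilon);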
ResetInputNormalization — Option to reset input layer normalization
true (default) | false
Option to reset input layer normalization, specified as one of the following:
true — Reset the input layer normalization statistics and recalculate them at training time.
false — Calculate normalization statistics at training time when they are empty.
GradientThreshold — Gradient threshold
Inf | positive scalar
Positive threshold for the gradient, specified as a positive scalar or Inf. If the gradient exceeds the value of GradientThreshold, then the gradient is clipped according to GradientThresholdMethod.
GradientThresholdMethod — Gradient threshold method
'l2norm' | 'global-l2norm' | 'absolute-value'
Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following:
'l2norm' — If the L2 norm of the gradient of a learnable parameter is larger than GradientThreshold, then scale the gradient so that the L2 norm equals GradientThreshold.
'global-l2norm' — If the global L2 norm, L, is larger than GradientThreshold, then scale all gradients by a factor of GradientThreshold/L. The global L2 norm considers all learnable parameters.
'absolute-value' — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than GradientThreshold, then scale the partial derivative to have magnitude equal to GradientThreshold and retain the sign of the partial derivative.
For more information, see Gradient Clipping.
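A minimal sketch of the 'l2norm' rule for a single learnable parameter (g is an assumed gradient array; illustrative only):

gradientThreshold = 1;                    % corresponds to GradientThreshold
gradNorm = norm(g(:));                    % L2 norm of the gradient
if gradNorm > gradientThreshold
    g = g*(gradientThreshold/gradNorm);   % rescale so the L2 norm equals the threshold
end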
SequenceLength — Option to pad or truncate sequences
'longest' | 'shortest' | positive integer
Option to pad, truncate, or split input sequences, specified as one of the following:
'longest' — Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the network.
'shortest' — Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.
Positive integer — For each mini-batch, pad the sequences to the nearest multiple of the specified length that is greater than the longest sequence length in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the 'MiniBatchSize' option to a lower value.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
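For example, if the longest sequence in a mini-batch has 350 time steps and you specify a sequence length of 200, the software pads the sequences to 400 time steps (the nearest multiple of 200 greater than 350) and splits each into two sequences of 200 time steps. A sketch of specifying this option (the option values are illustrative):

% Pad each mini-batch to a multiple of 200 time steps and split into length-200 chunks.
options = trainingOptions('rmsprop', ...
    'MiniBatchSize',32, ...
    'SequenceLength',200);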
SequencePaddingDirection — Direction of padding or truncation
'right' (default) | 'left'
Direction of padding or truncation, specified as one of the following:
'right' — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.
'left' — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.
Because LSTM layers process sequence data one time step at a time, when the layer OutputMode property is 'last', any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the 'SequencePaddingDirection' option to 'left'.
For sequence-to-sequence networks (when the OutputMode property is 'sequence' for each LSTM layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the 'SequencePaddingDirection' option to 'right'.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
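For example, a sketch for a sequence-to-one network (an LSTM layer with OutputMode 'last'), padding on the left so that the final time steps are unaffected:

options = trainingOptions('rmsprop', ...
    'SequenceLength','longest', ...
    'SequencePaddingDirection','left');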
SequencePaddingValue — Value to pad sequences
Value by which to pad input sequences, specified as a scalar. The option is valid only when SequenceLength is 'longest' or a positive integer. Do not pad sequences with NaN, because doing so can propagate errors throughout the network.
ExecutionEnvironment — Hardware resource for training network
'auto' | 'cpu' | 'gpu' | 'multi-gpu' | 'parallel'
Hardware resource for training network, specified as one of the following:
'auto' — Use a GPU if one is available. Otherwise, use the CPU.
'cpu' — Use the CPU.
'gpu' — Use the GPU.
'multi-gpu' — Use multiple GPUs on one machine, using a local parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts a parallel pool with pool size equal to the number of available GPUs.
'parallel' — Use a local or remote parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts one using the default cluster profile. If the pool has access to GPUs, then only workers with a unique GPU perform training computation. If the pool does not have GPUs, then training takes place on all available CPU workers instead.
For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel and in the Cloud.
GPU, multi-GPU, and parallel options require Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher. If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
To see an improvement in performance when training in parallel, try scaling up the MiniBatchSize and InitialLearnRate training options by the number of GPUs.
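A sketch of that scaling, assuming Parallel Computing Toolbox is installed so that gpuDeviceCount is available (the base option values are illustrative):

numGPUs = gpuDeviceCount;               % number of GPUs on this machine
options = trainingOptions('rmsprop', ...
    'ExecutionEnvironment','multi-gpu', ...
    'MiniBatchSize',64*numGPUs, ...     % scale the mini-batch size by the GPU count
    'InitialLearnRate',1e-3*numGPUs);   % scale the learning rate to match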
Training long short-term memory networks supports single CPU or single GPU training only.
Specify the execution environment using trainingOptions.
Data Types: char | string
WorkerLoad — Parallel worker load division
Worker load division for GPUs or CPUs, specified as a scalar from 0 to 1, a positive integer, or a numeric vector. This property has an effect only when the ExecutionEnvironment value equals 'multi-gpu' or 'parallel'.
CheckpointPath — Path for saving checkpoint networks
Path where checkpoint networks are saved, specified as a character vector.
Data Types: char
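For example, a sketch that saves checkpoint networks to a subfolder of the temporary directory (the folder name is illustrative; the folder must exist before training starts):

checkpointDir = fullfile(tempdir,'net_checkpoints');   % hypothetical checkpoint folder
if ~exist(checkpointDir,'dir')
    mkdir(checkpointDir);                              % create the folder if it is missing
end
options = trainingOptions('rmsprop','CheckpointPath',checkpointDir);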
OutputFcn — Output functions
Output functions to call during training, specified as a function handle or cell array of function handles. trainNetwork calls the specified functions once before the start of training, after each iteration, and once after training has finished. trainNetwork passes a structure containing information in the following fields:
Field | Description |
---|---|
Epoch | Current epoch number |
Iteration | Current iteration number |
TimeSinceStart | Time in seconds since the start of training |
TrainingLoss | Current mini-batch loss |
ValidationLoss | Loss on the validation data |
BaseLearnRate | Current base learning rate |
TrainingAccuracy | Accuracy on the current mini-batch (classification networks) |
TrainingRMSE | RMSE on the current mini-batch (regression networks) |
ValidationAccuracy | Accuracy on the validation data (classification networks) |
ValidationRMSE | RMSE on the validation data (regression networks) |
State | Current training state, with a possible value of "start", "iteration", or "done" |
If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.
You can use output functions to display or plot progress information, or to stop training. To stop training early, make your output function return true. If any output function returns true, then training finishes and trainNetwork returns the latest network. For an example showing how to use output functions, see Customize Output During Deep Learning Network Training.
Data Types: function_handle | cell
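A minimal sketch of a custom output function that stops training once the mini-batch loss falls below an assumed threshold, and of passing it to trainingOptions (the function name and threshold are hypothetical):

% Pass the output function when creating the training options.
options = trainingOptions('rmsprop','OutputFcn',@stopAtLossThreshold);

function stop = stopAtLossThreshold(info)
% Hypothetical output function: stop training when the mini-batch loss drops below 0.1.
stop = false;
if strcmp(info.State,"iteration") && ~isempty(info.TrainingLoss) ...
        && info.TrainingLoss < 0.1
    stop = true;
end
end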
Create a set of options for training a neural network using the RMSProp optimizer. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Specify the learning rate and the decay rate of the moving average of the squared gradient. Turn on the training progress plot.
options = trainingOptions('rmsprop', ...
    'InitialLearnRate',3e-4, ...
    'SquaredGradientDecayFactor',0.99, ...
    'MaxEpochs',20, ...
    'MiniBatchSize',64, ...
    'Plots','training-progress')
options = 
  TrainingOptionsRMSProp with properties:

    SquaredGradientDecayFactor: 0.9900
                       Epsilon: 1.0000e-08
              InitialLearnRate: 3.0000e-04
             LearnRateSchedule: 'none'
           LearnRateDropFactor: 0.1000
           LearnRateDropPeriod: 10
              L2Regularization: 1.0000e-04
       GradientThresholdMethod: 'l2norm'
             GradientThreshold: Inf
                     MaxEpochs: 20
                 MiniBatchSize: 64
                       Verbose: 1
              VerboseFrequency: 50
                ValidationData: []
           ValidationFrequency: 50
            ValidationPatience: Inf
                       Shuffle: 'once'
                CheckpointPath: ''
          ExecutionEnvironment: 'auto'
                    WorkerLoad: []
                     OutputFcn: []
                         Plots: 'training-progress'
                SequenceLength: 'longest'
          SequencePaddingValue: 0
      SequencePaddingDirection: 'right'
          DispatchInBackground: 0
       ResetInputNormalization: 1
trainingOptions | trainNetwork