Conjugate gradient backpropagation with Powell-Beale restarts
net.trainFcn = 'traincgb'
[net,tr] = train(net,...)
traincgb is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Powell-Beale restarts.

net.trainFcn = 'traincgb' sets the network trainFcn property.

[net,tr] = train(net,...) trains the network with traincgb.
Training occurs according to traincgb training parameters, shown here with their default values:
net.trainParam.epochs | 1000 | Maximum number of epochs to train
net.trainParam.show | 25 | Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine | false | Generate command-line output
net.trainParam.showWindow | true | Show training GUI
net.trainParam.goal | 0 | Performance goal
net.trainParam.time | inf | Maximum time to train in seconds
net.trainParam.min_grad | 1e-10 | Minimum performance gradient
net.trainParam.max_fail | 6 | Maximum validation failures
net.trainParam.searchFcn | 'srchcha' | Name of line search routine to use
Parameters related to line search methods (not all used for all methods):
net.trainParam.scal_tol | 20 | Divide into delta to determine tolerance for linear search
net.trainParam.alpha | 0.001 | Scale factor that determines sufficient reduction in perf
net.trainParam.beta | 0.1 | Scale factor that determines sufficiently large step size
net.trainParam.delta | 0.01 | Initial step size in interval location step
net.trainParam.gama | 0.1 | Parameter to avoid small reductions in performance, usually set to 0.1
net.trainParam.low_lim | 0.1 | Lower limit on change in step size
net.trainParam.up_lim | 0.5 | Upper limit on change in step size
net.trainParam.maxstep | 100 | Maximum step length
net.trainParam.minstep | 1.0e-6 | Minimum step length
net.trainParam.bmax | 26 | Maximum step size
You can create a standard network that uses traincgb with feedforwardnet or cascadeforwardnet.
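For example, the following trains a small feedforward network on one of the toolbox's sample fitting problems (simplefit_dataset):

[x,t] = simplefit_dataset;           % sample inputs and targets shipped with the toolbox
net = feedforwardnet(10,'traincgb'); % one hidden layer of 10 neurons, trained with traincgb
net = train(net,x,t);                % opens the training GUI unless showWindow is false
y = net(x);                          % evaluate the trained network
perf = perform(net,t,y)              % performance (mean squared error by default)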
To prepare a custom network to be trained with traincgb,

1. Set net.trainFcn to 'traincgb'. This sets net.trainParam to traincgb's default parameters.

2. Set net.trainParam properties to desired values.
In either case, calling train with the resulting network trains the network with traincgb.
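A sketch of this workflow, assuming the simplefit_dataset sample data and an arbitrary fitnet layer size:

[x,t] = simplefit_dataset;     % sample data, as above
net = fitnet(10);              % example network; any trainable network works
net.trainFcn = 'traincgb';     % step 1: resets net.trainParam to traincgb defaults
net.trainParam.epochs = 500;   % step 2: override any defaults you need
net.trainParam.show = 10;
[net,tr] = train(net,x,t);     % trains with traincgb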
traincgb can train any network as long as its weight, net input, and transfer functions have derivative functions.

Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:
X = X + a*dX;
where dX
is the search direction. The parameter a
is
selected to minimize the performance along the search direction. The line search function
searchFcn
is used to locate the minimum point. The first search direction is
the negative of the gradient of performance. In succeeding iterations the search direction is
computed from the new gradient and the previous search direction according to the formula
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter Z can be computed in several different ways. The Powell-Beale variation of conjugate gradient is distinguished by two features. First, the algorithm uses a test to determine when to reset the search direction to the negative of the gradient. Second, the search direction is computed from the negative gradient, the previous search direction, and the last search direction before the previous reset. See Powell, Mathematical Programming, Vol. 12, 1977, pp. 241–254, for a more detailed discussion of the algorithm.
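In outline, the restart test might look like the following sketch (not the toolbox source). Here gX_old names the previous gradient (an assumption; only gX and dX_old appear above), and the 0.2 threshold is Powell's suggested value:

if abs(gX_old'*gX) >= 0.2*(gX'*gX)  % Powell's test: successive gradients no longer nearly orthogonal
    dX = -gX;                       % restart: reset search direction to steepest descent
else
    dX = -gX + dX_old*Z;            % usual conjugate gradient update, Z as described above
end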
Training stops when any of these conditions occurs:

- The maximum number of epochs (repetitions) is reached.
- The maximum amount of time is exceeded.
- Performance is minimized to the goal.
- The performance gradient falls below min_grad.
- Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
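After training, the returned training record tr indicates which condition fired; in recent toolbox versions the record includes a stop field (an assumption here):

% after [net,tr] = train(net,x,t):
tr.stop        % reason training stopped, e.g. 'Maximum epoch reached.' (field assumed present)
tr.num_epochs  % number of epochs actually run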
Powell, M.J.D., “Restart procedures for the conjugate gradient method,” Mathematical Programming, Vol. 12, 1977, pp. 241–254.
trainbfg | traincgf | traincgp | traingda | traingdm | traingdx | trainlm | trainoss | trainscg