trainscg

Scaled conjugate gradient backpropagation

Syntax

net.trainFcn = 'trainscg'
[net,tr] = train(net,...)

Description

trainscg is a network training function that updates weight and bias values according to the scaled conjugate gradient method.

net.trainFcn = 'trainscg' sets the network trainFcn property.

[net,tr] = train(net,...) trains the network with trainscg.

Training occurs according to trainscg training parameters, shown here with their default values:

net.trainParam.epochs            1000     Maximum number of epochs to train
net.trainParam.show              25       Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine   false    Generate command-line output
net.trainParam.showWindow        true     Show training GUI
net.trainParam.goal              0        Performance goal
net.trainParam.time              inf      Maximum time to train in seconds
net.trainParam.min_grad          1e-6     Minimum performance gradient
net.trainParam.max_fail          6        Maximum validation failures
net.trainParam.sigma             5.0e-5   Determine change in weight for second derivative approximation
net.trainParam.lambda            5.0e-7   Parameter for regulating the indefiniteness of the Hessian

Network Use

You can create a standard network that uses trainscg with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with trainscg:

  1. Set net.trainFcn to 'trainscg'. This sets net.trainParam to trainscg’s default parameters.

  2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with trainscg.
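
For example, this minimal sketch prepares a network for trainscg and overrides two of the defaults listed above (the parameter values here are illustrative, not recommendations):

p = [0 1 2 3 4 5];                  % inputs
t = [0 0 0 1 1 1];                  % targets
net = feedforwardnet(2);
net.trainFcn = 'trainscg';          % step 1: also resets net.trainParam to trainscg defaults
net.trainParam.epochs = 500;        % step 2: override desired defaults
net.trainParam.showWindow = false;  % e.g., suppress the training GUI
[net,tr] = train(net,p,t);          % trains with trainscg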

Examples

Here is a problem consisting of inputs p and targets t to be solved with a network.

p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];

A two-layer feed-forward network with two hidden neurons and this training function is created.

net = feedforwardnet(2,'trainscg');

Here the network is trained, and its response to the inputs is computed.

net = train(net,p,t);
a = net(p)

See help feedforwardnet and help cascadeforwardnet for other examples.

Algorithms

trainscg can train any network as long as its weight, net input, and transfer functions have derivative functions. Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X.

The scaled conjugate gradient algorithm is based on conjugate directions, as in traincgp, traincgf, and traincgb, but unlike those algorithms it does not perform a line search at each iteration. Instead, it sets the step size from a finite-difference approximation of the second derivative along the search direction (controlled by sigma), with lambda regulating the step when that curvature estimate is not positive. See Moller (Neural Networks, Vol. 6, 1993, pp. 525–533) for a more detailed discussion of the scaled conjugate gradient algorithm.
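
To make this concrete, here is a heavily simplified sketch of the scaled conjugate gradient iteration, applied to a toy quadratic rather than a network. It illustrates the finite-difference curvature estimate (sigma) and the lambda regularization, but omits Moller's adaptive adjustment of lambda, the handling of negative curvature, and periodic restarts; it is not the toolbox implementation.

A = [3 1; 1 2]; b = [1; 1];               % toy quadratic: f(w) = 0.5*w'*A*w - b'*w
gradf = @(w) A*w - b;                     % its gradient
w = [0; 0];
sigma = 5.0e-5; lambda = 5.0e-7;          % the trainscg defaults
r = -gradf(w); p = r;                     % start along steepest descent
for k = 1:20
    h = sigma/norm(p);
    s = (gradf(w + h*p) - gradf(w))/h;    % finite-difference Hessian-vector product
    delta = p'*s + lambda*(p'*p);         % curvature along p, regulated by lambda
    alpha = (p'*r)/delta;                 % step size; no line search required
    w = w + alpha*p;
    rnew = -gradf(w);
    beta = (rnew'*rnew - rnew'*r)/(p'*r); % Moller's conjugate direction update
    p = rnew + beta*p;
    r = rnew;
    if norm(r) < 1e-8, break, end
end
disp(w)                                   % approaches the minimizer A\b = [0.2; 0.4]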

Training stops when any of these conditions occurs:

  • The maximum number of epochs (repetitions) is reached.

  • The maximum amount of time is exceeded.

  • Performance is minimized to the goal.

  • The performance gradient falls below min_grad.

  • Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
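
After training, the training record tr reports which of these conditions stopped training. For example, assuming the training record fields tr.stop and tr.num_epochs (an assumption worth verifying in your toolbox release):

[net,tr] = train(net,p,t);
tr.stop          % reason training stopped, e.g. 'Performance goal met.'
tr.num_epochs    % number of epochs actually run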

References

Moller, M. F. "A scaled conjugate gradient algorithm for fast supervised learning." Neural Networks, Vol. 6, 1993, pp. 525–533.

Introduced before R2006a