Nguyen-Widrow layer initialization function
net = initnw(net,i)
initnw is a layer initialization function that initializes a layer's weights and biases according to the Nguyen-Widrow initialization algorithm. This algorithm chooses values in order to distribute the active region of each neuron in the layer approximately evenly across the layer's input space. The values contain a degree of randomness, so they are not the same each time this function is called.
initnw requires that the layer it initializes have a transfer function with a finite active input range. This includes transfer functions such as tansig and satlin, but not purelin, whose active input range is the infinite interval [-inf, inf]. Transfer functions such as tansig return their active input range as follows:

activeInputRange = tansig('active')
activeInputRange =
    -2     2
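If the same query convention holds for other transfer functions (an assumption consistent with the tansig example above), you can compare ranges directly:

satlin('active')   % [0 1], finite, so initnw applies
purelin('active')  % [-Inf Inf], infinite, so initnw does not apply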
net = initnw(net,i) takes two arguments,

net - Neural network
i - Index of a layer

and returns the network with layer i's weights and biases updated.
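As an illustrative sketch of calling initnw directly (the data sizes and layer index here are assumptions, not from the original text), note that the network must first be configured so its weight dimensions are known:

net = feedforwardnet(5);                       % layer size 5 is arbitrary
net = configure(net, rand(3,20), rand(1,20));  % set input and output sizes
net = initnw(net, 1);                          % reinitialize layer 1 only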
There is a random element to Nguyen-Widrow initialization. Unless the default random generator is set to the same seed before each call to initnw, it will generate different weight and bias values each time.
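To make the values repeatable, seed the default generator before each initialization. A minimal sketch, assuming net is an already configured network (rng is standard MATLAB):

rng(0)             % seed the default random number generator
net1 = init(net);  % Nguyen-Widrow values for this seed
rng(0)
net2 = init(net);  % identical values, because the seed was reset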
You can create a standard network that uses initnw by calling feedforwardnet or cascadeforwardnet.
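For example, you can confirm the default settings on such a network (the hidden layer size 10 here is arbitrary):

net = feedforwardnet(10);
net.initFcn            % 'initlay'
net.layers{1}.initFcn  % 'initnw'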
To prepare a custom network to be initialized with initnw:

1. Set net.initFcn to 'initlay'. This sets net.initParam to the empty matrix [], because initlay has no initialization parameters.
2. Set net.layers{i}.initFcn to 'initnw'.

To initialize the network, call init.
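A minimal sketch of these steps, assuming net is an existing custom network and i = 1:

net.initFcn = 'initlay';           % delegate initialization to each layer
net.layers{1}.initFcn = 'initnw';  % use Nguyen-Widrow for layer 1
net = init(net);                   % recompute all weights and biases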
The Nguyen-Widrow method generates initial weight and bias values for a layer so that the active regions of the layer’s neurons are distributed approximately evenly over the input space.
Advantages over purely random weights and biases are:

- Few neurons are wasted (because all the neurons are in the input space).
- Training works faster (because each area of the input space has neurons).

The Nguyen-Widrow method can only be applied to layers

- with a bias
- with weights whose weightFcn is dotprod
- with netInputFcn set to netsum
- with transferFcn whose active region is finite
If these conditions are not met, then initnw uses rands to initialize the layer's weights and biases.
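A sketch of checking these conditions on a layer, assuming layer 1 receives an input weight from input 1 (adjust the indices for your network):

net.biasConnect(1)               % 1 if layer 1 has a bias
net.inputWeights{1,1}.weightFcn  % should be 'dotprod'
net.layers{1}.netInputFcn        % should be 'netsum'
net.layers{1}.transferFcn        % should have a finite active region, e.g., 'tansig'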
cascadeforwardnet | feedforwardnet | init | initlay | initwb