Conscience bias learning function
[dB,LS] = learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learncon('code')
learncon is the conscience bias learning function used to increase the net input to neurons that have the lowest average output until each neuron responds approximately an equal percentage of the time.
[dB,LS] = learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

B | S-by-1 bias vector |
P | 1-by-Q ones vector |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |

and returns

dB | S-by-1 bias change vector |
LS | New learning state |
Learning occurs according to learncon's learning parameter, shown here with its default value.

LP.lr - 0.001 | Learning rate |
info = learncon('code') returns useful information for each supported code character vector:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
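For instance, the supported codes can be queried directly; a brief usage sketch (the variable names here are illustrative only):

names    = learncon('pnames')     % names of learncon's learning parameters
defaults = learncon('pdefaults')  % default learning parameters, with lr = 0.001
needg    = learncon('needg')      % 1 only if the function uses gW or gA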
Deep Learning Toolbox™ 2.0 compatibility: The LP.lr
described above equals 1 minus the
bias time constant used by trainc
in the Deep Learning Toolbox 2.0 software.
Here you define a random output A and bias vector B for a layer with three neurons. You also define the learning rate LR.
a = rand(3,1); b = rand(3,1); lp.lr = 0.5;
Because learncon
only needs these values to calculate a bias change (see
“Algorithm” below), use them to do so.
dB = learncon(b,[],[],[],a,[],[],[],[],[],lp,[])
To prepare the bias of layer i of a custom network to learn with learncon,

1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
3. Set net.inputWeights{i}.learnFcn to 'learncon'.
4. Set each net.layerWeights{i,j}.learnFcn to 'learncon'. (Each weight learning parameter property is automatically set to learncon's default parameters.)
To train the network (or enable it to adapt),

1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
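For example, a minimal configuration sketch along these lines might look as follows; the three-neuron competlayer, the layer index 1, and the random training data are assumptions added for illustration, and the bias learning function is set explicitly here since learncon is a bias learning function.

x = rand(2,100);                      % assumed data: 100 random two-element input vectors
net = competlayer(3);                 % assumed network: a three-neuron competitive layer
net.trainFcn = 'trainr';              % net.trainParam becomes trainr's default parameters
net.adaptFcn = 'trains';              % net.adaptParam becomes trains's default parameters
net.biases{1}.learnFcn = 'learncon';  % conscience learning for the layer-1 bias
net.trainParam.epochs = 50;           % training property set as desired
net = train(net,x);                   % conscience bias learning runs during training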
learncon calculates the bias change db for a given neuron by first updating each neuron's conscience, i.e., the running average of its output:

c = (1-lr)*c + lr*a
The conscience is then used to compute a new bias for the neuron that is largest for the smallest conscience values, and the change from the current bias is returned:

db = exp(1-log(c)) - b
(learncon recovers c from the bias values each time it is called.)
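As a minimal numerical sketch of these two steps (the learning rate, output, and starting biases below are illustrative values, not part of this reference):

lr = 0.5;                      % learning rate (LP.lr)
a  = [1; 0; 0];                % example output: neuron 1 just won the competition
b  = exp(1 - log([1;1;1]/3));  % starting biases whose recovered consciences are all 1/3
c  = exp(1 - log(b));          % recover each neuron's conscience from its bias
c  = (1-lr)*c + lr*a;          % update the running average of each neuron's output
db = exp(1 - log(c)) - b       % bias change: negative for the frequent winner, positive for the rest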