Kohonen weight learning function
[dW,LS] = learnk(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnk('code')
learnk is the Kohonen weight learning function.
[dW,LS] = learnk(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W  | S-by-R weight matrix (or S-by-1 bias vector)
P  | R-by-Q input vectors (or ones(1,Q))
Z  | S-by-Q weighted input vectors
N  | S-by-Q net input vectors
A  | S-by-Q output vectors
T  | S-by-Q layer target vectors
E  | S-by-Q layer error vectors
gW | S-by-R gradient with respect to performance
gA | S-by-Q output gradient with respect to performance
D  | S-by-S neuron distances
LP | Learning parameters, none, LP = []
LS | Learning state, initially should be = []
and returns
dW | S-by-R weight (or bias) change matrix
LS | New learning state
Learning occurs according to learnk's learning parameter, shown here with its default value.
LP.lr = 0.01 | Learning rate
info = learnk('code') returns useful information for each code character vector:

'pnames'    | Names of learning parameters
'pdefaults' | Default learning parameters
'needg'     | Returns 1 if this function uses gW or gA
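For example, the code strings can be used to query learnk's metadata. This is a hedged sketch; it assumes learnk from the toolbox is on the path, and the exact return types may vary by release.

```matlab
% Query learnk's metadata by code string.
names    = learnk('pnames')      % names of learning parameters
defaults = learnk('pdefaults')   % default parameter values, e.g. the learning rate
```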
Here you define a random input P, output A, and weight matrix W for a layer with a two-element input and three neurons. Also define the learning rate LR.
p = rand(2,1); a = rand(3,1); w = rand(3,2); lp.lr = 0.5;
Because learnk only needs these values to calculate a weight change (see "Algorithm" below), use them to do so.
dW = learnk(w,p,[],[],a,[],[],[],[],[],lp,[])
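The result can be checked against the Kohonen rule given under "Algorithm" below. The following is a hedged sketch (it assumes the toolbox's learnk is on the path); the manual computation here is an illustration of the rule, not toolbox code.

```matlab
% Hedged check: recompute the Kohonen rule by hand and compare with learnk.
p = rand(2,1); a = rand(3,1); w = rand(3,2); lp.lr = 0.5;
dW = learnk(w,p,[],[],a,[],[],[],[],[],lp,[]);

dW_manual = lp.lr * (repmat(p',3,1) - w);  % move each weight row toward p'
dW_manual(a == 0, :) = 0;                  % inactive neurons (a == 0) do not move
% max(abs(dW(:) - dW_manual(:))) should be zero up to round-off
```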
To prepare the weights of layer i of a custom network to learn with learnk,

1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnk'.
4. Set each net.layerWeights{i,j}.learnFcn to 'learnk'. (Each weight learning parameter property is automatically set to learnk's default parameters.)
To train the network (or enable it to adapt),

1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
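The steps above can be sketched end to end on a minimal custom network. This is a hedged illustration, not a recommended recipe: it assumes the toolbox's network object API, and the competitive transfer and initialization functions ('compet', 'negdist', 'initlay', 'initwb', 'rands') are choices made here so that some outputs are zero, as the Kohonen rule expects.

```matlab
% Build a minimal custom network: one input, one competitive layer.
net = network(1,1);
net.inputs{1}.size = 2;                    % two-element input
net.layers{1}.size = 3;                    % three neurons
net.inputConnect(1,1) = 1;
net.outputConnect(1) = 1;
net.layers{1}.transferFcn = 'compet';      % winner-take-all output (zeros for losers)
net.inputWeights{1,1}.weightFcn = 'negdist';

% Initialization choices (assumptions for this sketch).
net.initFcn = 'initlay';
net.layers{1}.initFcn = 'initwb';
net.inputWeights{1,1}.initFcn = 'rands';

% The steps from this section:
net.trainFcn = 'trainr';                   % step 1
net.adaptFcn = 'trains';                   % step 2
net.inputWeights{1,1}.learnFcn = 'learnk'; % step 3

% Train (or adapt).
net.trainParam.epochs = 20;                % set properties as desired
net = init(net);
P = rand(2,10);                            % random training inputs
net = train(net, P);                       % or: net = adapt(net, P)
```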
learnk calculates the weight change dW for a given neuron from the neuron's input P, output A, and learning rate LR according to the Kohonen learning rule:

dw = lr*(p'-w), if a ~= 0
   = 0,         otherwise
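As a quick numeric illustration of the rule (hand arithmetic only; the values here are made up for the example):

```matlab
% One active neuron (a ~= 0), learning rate 0.5:
p  = [1; 0];          % input vector
w  = [0.2 0.8];       % the neuron's weight row
lr = 0.5;
dw = lr * (p' - w)    % 0.5 * ([1 0] - [0.2 0.8]) = [0.4 -0.4]
```

The weight row moves a fraction lr of the way toward the input, which is what makes repeated updates cluster weight vectors around input patterns.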
Kohonen, T., Self-Organizing and Associative Memory, New York: Springer-Verlag, 1984.