Batch self-organizing map weight learning function
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsomb('code')
learnsomb is the batch self-organizing map weight learning function.
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs:
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns the following:
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learnsomb’s learning parameters, shown here with their default values:
LP.init_neighborhood | 3 | Initial neighborhood size |
LP.steps | 100 | Ordering phase steps |
info = learnsomb('code') returns useful information for each code character vector:
'pnames' | Returns names of learning parameters. |
'pdefaults' | Returns default learning parameters. |
'needg' | Returns 1 if this function uses gW or gA. |
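For example, the parameter names and defaults can be queried directly (a short sketch, assuming the toolbox is installed):

```
names = learnsomb('pnames')    % names of the learning parameters
lp = learnsomb('pdefaults')    % struct of defaults, e.g. lp.steps
```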
This example defines a random input P, output A, and weight matrix W for a layer with a 2-element input and 6 neurons. This example also calculates the positions and distances for the neurons, which appear in a 2-by-3 hexagonal pattern.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp = learnsomb('pdefaults');
Because learnsomb only needs these values to calculate a weight change (see Algorithm), use them to do so:
ls = [];
[dW,ls] = learnsomb(w,p,[],[],a,[],[],[],[],d,lp,ls)
You can create a standard network that uses learnsomb with selforgmap. To prepare the weights of layer i of a custom network to learn with learnsomb:
1. Set NET.trainFcn to 'trainr'. (NET.trainParam automatically becomes trainr’s default parameters.)
2. Set NET.adaptFcn to 'trains'. (NET.adaptParam automatically becomes trains’s default parameters.)
3. Set each NET.inputWeights{i,j}.learnFcn to 'learnsomb'.
4. Set each NET.layerWeights{i,j}.learnFcn to 'learnsomb'. (Each weight learning parameter property is automatically set to learnsomb’s default parameters.)
To train the network (or enable it to adapt):
1. Set NET.trainParam (or NET.adaptParam) properties as desired.
2. Call train (or adapt).
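The standard route above can be sketched as follows; the input data x is made up here purely for illustration:

```
x = rand(2,100);          % 100 random 2-element input vectors (illustrative)
net = selforgmap([2 3]);  % 2-by-3 map; uses learnsomb by default
net = train(net,x);       % batch training via trainr
```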
learnsomb calculates the weight changes so that each neuron’s new weight vector is the weighted average of the input vectors that the neuron and the neurons in its neighborhood responded to with an output of 1.
The ordering phase lasts as many steps as LP.steps. During this phase, the neighborhood is gradually reduced from a maximum size of LP.init_neighborhood down to 1, where it remains from then on.
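Continuing the earlier example, the batch update can be sketched as below. This is an illustrative reimplementation, not the toolbox source; the current neighborhood size nd is assumed here to be 3, and the outputs in a are treated as 0/1 responses:

```
nd = 3;                          % assumed current neighborhood size
neighborhood = (d <= nd);        % S-by-S mask of neurons within the neighborhood
a2 = neighborhood * a;           % responses credited to each neuron's neighborhood
sumA = sum(a2,2);                % how many inputs each neuron responded to
newW = (a2 * p') ./ max(sumA,1); % weighted average of those input vectors
dW = newW - w;                   % weight change
```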
adapt | selforgmap | train