Design radial basis network
net = newrb(P,T,goal,spread,MN,DF)
Radial basis networks can be used to approximate functions. newrb adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.
net = newrb(P,T,goal,spread,MN,DF) takes two or more of these arguments,

P      | R-by-Q matrix of Q input vectors |
T      | S-by-Q matrix of Q target vectors |
goal   | Mean squared error goal (default = 0.0) |
spread | Spread of radial basis functions (default = 1.0) |
MN     | Maximum number of neurons (default is Q) |
DF     | Number of neurons to add between displays (default = 25) |

and returns a new radial basis network.
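For example, given input and target data P and T, a call that sets every optional argument explicitly might look like the following (the values 0.02, 2.0, 40, and 5 are illustrative, not recommendations):

net = newrb(P,T,0.02,2.0,40,5);   % goal, spread, MN, DF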
The larger spread is, the smoother the function approximation. Too large a spread means a lot of neurons are required to fit a fast-changing function. Too small a spread means many neurons are required to fit a smooth function, and the network might not generalize well. Call newrb with different spreads to find the best value for a given problem.
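A minimal sketch of such a search, assuming training data P and T and a held-out test set Ptest and Ttest (hypothetical names, not part of this reference page):

spreads = [0.5 1.0 2.0 4.0];               % candidate spread values to try
for s = spreads
    net = newrb(P,T,0.0,s);                % design a network with this spread
    err = mean((Ttest - sim(net,Ptest)).^2);  % mean squared error on held-out data
    fprintf('spread = %.1f, test MSE = %.4f\n', s, err);
end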
Here you design a radial basis network, given inputs P and targets T.
P = [1 2 3];          % input vectors
T = [2.0 4.1 5.9];    % target outputs
net = newrb(P,T);
The network is simulated for a new input.
P = 1.5;
Y = sim(net,P)
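To see how well the network interpolates between the design points, you might also simulate it over a fine grid and plot the response against the original targets (a quick check, not part of the design procedure):

X = 0:0.1:4;       % a fine grid spanning the design inputs
Y = sim(net,X);    % network response over the grid
plot(X,Y,'-',[1 2 3],[2.0 4.1 5.9],'o')   % response vs. design points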
newrb creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net input with netsum. Both layers have biases.
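You can confirm this structure on a designed network by inspecting the network object's function properties (shown here as an informal check; the expected values are given in the comments):

net.layers{1}.transferFcn        % 'radbas'
net.inputWeights{1,1}.weightFcn  % 'dist'
net.layers{1}.netInputFcn        % 'netprod'
net.layers{2}.transferFcn        % 'purelin'
net.layerWeights{2,1}.weightFcn  % 'dotprod'
net.layers{2}.netInputFcn        % 'netsum'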
Initially the radbas layer has no neurons. The following steps are repeated until the network's mean squared error falls below goal.

1. The network is simulated.
2. The input vector with the greatest error is found.
3. A radbas neuron is added with weights equal to that vector.
4. The purelin layer weights are redesigned to minimize error.
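For intuition, here is a minimal sketch of that loop written in plain MATLAB for scalar inputs and targets. The names centers, b1, A, W2, and E are local to this sketch, and newrb itself handles matrix data; this only mirrors the steps above, not the actual implementation.

P = [1 2 3]; T = [2.0 4.1 5.9];
goal = 0.01; spread = 1.0;
b1 = sqrt(-log(0.5))/spread;   % first-layer bias: output 0.5 at distance spread
centers = [];                  % the radbas layer starts with no neurons
while true
    if isempty(centers)
        A = ones(1,numel(P));  % bias row only: no hidden neurons yet
    else
        D = abs(centers(:) - P);                  % dist: |center - input|
        A = [exp(-(b1*D).^2); ones(1,numel(P))];  % radbas of netprod(b1,D)
    end
    W2 = T/A;                  % redesign purelin weights by least squares
    E = T - W2*A;              % simulate the network and compute errors
    if mean(E.^2) <= goal || numel(centers) == numel(P), break, end
    [~,i] = max(abs(E));       % input vector with the greatest error
    centers(end+1) = P(i);     % add a radbas neuron centered on that input
end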