Elliot symmetric sigmoid transfer function
A = elliotsig(N)
Transfer functions convert a neural network layer’s net input into its net output.
A = elliotsig(N) takes an S-by-Q matrix N of Q net input column vectors (each with S elements) and returns an S-by-Q matrix A of output vectors, where each element of N is squashed from the interval [-inf inf] to the interval [-1 1] with an "S-shaped" function.
The advantage of this transfer function over other sigmoids is that it is fast to calculate on simple computing hardware as it does not require any exponential or trigonometric functions. Its disadvantage is that it only flattens out for large inputs, so its effect is not as local as other sigmoid functions. This might result in more training iterations, or require more neurons to achieve the same accuracy.
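The underlying formula is commonly given as a = n/(1 + |n|), which needs only an absolute value, an addition, and a division per element. The following is a minimal sketch of that standard definition (an illustration, not the toolbox implementation itself):

n = [-10 -1 -0.5 0 0.5 1 10];
a = n ./ (1 + abs(n));        % each element squashed into (-1, 1)
da = 1 ./ (1 + abs(n)).^2;    % derivative of the same formula, useful for backpropagation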
Calculate a layer output from a single net input vector:
n = [0; 1; -0.5; 0.5];
a = elliotsig(n);
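The same call accepts a whole batch of net input column vectors at once, as described above. A small sketch with hypothetical values (S = 3 elements per vector, Q = 4 vectors):

N = [1   2  -1   0.5;
     0  -3   4  -0.5;
     2   1  -2   0  ];
A = elliotsig(N);   % A is also 3-by-4, with every element in (-1, 1)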
Plot the transfer function:
n = -5:0.01:5;
plot(n, elliotsig(n))
set(gca, 'dataaspectratio', [1 1 1], 'xgrid', 'on', 'ygrid', 'on')
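To see the slower saturation mentioned above, you can plot the Elliot sigmoid against tansig (one of the related functions listed at the end of this page); the range used here is only illustrative:

n = -5:0.01:5;
plot(n, elliotsig(n), n, tansig(n))
legend('elliotsig', 'tansig', 'Location', 'southeast')
grid on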
For a network you have already defined, change the transfer function for layer i:
net.layers{i}.transferFcn = 'elliotsig';
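As a fuller sketch, assuming the toolbox functions feedforwardnet and train and the simplefit_dataset demo data are available, you might assign the Elliot sigmoid to the hidden layer of a small network and retrain it:

net = feedforwardnet(10);                  % one hidden layer with 10 neurons
net.layers{1}.transferFcn = 'elliotsig';   % replace the default tansig
[x, t] = simplefit_dataset;                % demo dataset shipped with the toolbox
net = train(net, x, t);
y = net(x);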
See also: elliot2sig | logsig | tansig