Fully connected layer
A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.
layer = fullyConnectedLayer(outputSize) returns a fully connected layer and specifies the OutputSize property.

layer = fullyConnectedLayer(outputSize,Name,Value) sets the optional Parameters and Initialization, Learn Rate and Regularization, and Name properties using name-value pairs. For example, fullyConnectedLayer(10,'Name','fc1') creates a fully connected layer with an output size of 10 and the name 'fc1'. You can specify multiple name-value pairs. Enclose each property name in single quotes.
OutputSize — Output size
positive integer

Output size for the fully connected layer, specified as a positive integer.

Example: 10
InputSize — Input size
'auto' (default) | positive integer

Input size for the fully connected layer, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically determines the input size during training.
WeightsInitializer — Function to initialize weights
'glorot' (default) | 'he' | 'orthogonal' | 'narrow-normal' | 'zeros' | 'ones' | function handle

Function to initialize the weights, specified as one of the following:

'glorot' – Initialize the weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + OutputSize).

'he' – Initialize the weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.

'orthogonal' – Initialize the weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

'narrow-normal' – Initialize the weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'zeros' – Initialize the weights with zeros.

'ones' – Initialize the weights with ones.

Function handle – Initialize the weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the weights. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the weights when the Weights property is empty.

Data Types: char | string | function_handle
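You can also write an initializer by hand. The following is a minimal sketch (the helper name heInit is illustrative, not part of the toolbox) that reproduces He-style initialization with a function handle; for this layer, sz is [OutputSize InputSize], so sz(2) is the input size.

% He-style initialization via a custom function handle (illustrative).
heInit = @(sz) randn(sz) * sqrt(2/sz(2));  % zero mean, variance 2/InputSize
layer = fullyConnectedLayer(10,'WeightsInitializer',heInit);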
BiasInitializer — Function to initialize bias
'zeros' (default) | 'narrow-normal' | 'ones' | function handle

Function to initialize the bias, specified as one of the following:

'zeros' – Initialize the bias with zeros.

'ones' – Initialize the bias with ones.

'narrow-normal' – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

Data Types: char | string | function_handle
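As a minimal sketch (the constant 0.1 is an illustrative choice), a function handle can set every bias element to a small constant:

% Initialize the bias to a constant 0.1 via a custom function handle.
layer = fullyConnectedLayer(10,'BiasInitializer',@(sz) 0.1*ones(sz));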
Weights — Layer weights
[] (default) | matrix

Layer weights, specified as a matrix.

The layer weights are learnable parameters. You can specify the initial value for the weights directly using the Weights property of the layer. When training a network, if the Weights property of the layer is nonempty, then trainNetwork uses the Weights property as the initial value. If the Weights property is empty, then trainNetwork uses the initializer specified by the WeightsInitializer property of the layer.

At training time, Weights is an OutputSize-by-InputSize matrix.

Data Types: single | double
Bias — Layer biases
[] (default) | matrix

Layer biases, specified as a matrix.

The layer biases are learnable parameters. When training a network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

At training time, Bias is an OutputSize-by-1 matrix.

Data Types: single | double
WeightLearnRateFactor — Learning rate factor for weights
nonnegative scalar

Learning rate factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Example: 2
BiasLearnRateFactor — Learning rate factor for biases
nonnegative scalar

Learning rate factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Example: 2
WeightL2Factor — L2 regularization factor for weights
nonnegative scalar

L2 regularization factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Example: 2
BiasL2Factor — L2 regularization factor for biases
nonnegative scalar

L2 regularization factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Example: 2
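As a combined sketch (illustrative values), you can set several of these factors in one call; a learning rate factor of 0 freezes the corresponding parameters at their initial values:

% Double the weight learning rate and L2 regularization relative to the
% global settings, and freeze the biases by setting their factor to 0.
layer = fullyConnectedLayer(10, ...
    'WeightLearnRateFactor',2, ...
    'WeightL2Factor',2, ...
    'BiasLearnRateFactor',0);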
Name — Layer name
'' (default) | character vector | string scalar

Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty unique layer name. If you train a series network with the layer and Name is set to '', then the software automatically assigns a name to the layer at training time.

Data Types: char | string
NumInputs — Number of inputs
1 (default)

Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

InputNames — Input names
{'in'} (default)

Input names of the layer. This layer accepts a single input only.

Data Types: cell

NumOutputs — Number of outputs
1 (default)

Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames — Output names
{'out'} (default)

Output names of the layer. This layer has a single output only.

Data Types: cell
Create Fully Connected Layer

Create a fully connected layer with an output size of 10 and the name 'fc1'.
layer = fullyConnectedLayer(10,'Name','fc1')
layer = 
  FullyConnectedLayer with properties:

          Name: 'fc1'

   Hyperparameters
     InputSize: 'auto'
    OutputSize: 10

   Learnable Parameters
       Weights: []
          Bias: []

  Show all properties
Include a fully connected layer in a Layer
array.
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]

layers = 
  7x1 Layer array with layers:

     1   ''   Image Input             28x28x1 images with 'zerocenter' normalization
     2   ''   Convolution             20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   ReLU                    ReLU
     4   ''   Max Pooling             2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     5   ''   Fully Connected         10 fully connected layer
     6   ''   Softmax                 softmax
     7   ''   Classification Output   crossentropyex
To specify the weights and bias initializer functions, use the WeightsInitializer and BiasInitializer properties, respectively. To specify the weights and biases directly, use the Weights and Bias properties, respectively.
Specify Initialization Function
Create a fully connected layer with an output size of 10 and specify the weights initializer to be the He initializer.
outputSize = 10;
layer = fullyConnectedLayer(outputSize,'WeightsInitializer','he')
layer = 
  FullyConnectedLayer with properties:

          Name: ''

   Hyperparameters
     InputSize: 'auto'
    OutputSize: 10

   Learnable Parameters
       Weights: []
          Bias: []

  Show all properties
Note that the Weights
and Bias
properties are empty. At training time, the software initializes these properties using the specified initialization functions.
Specify Custom Initialization Function
To specify your own initialization function for the weights and biases, set the WeightsInitializer
and BiasInitializer
properties to a function handle. For these properties, specify function handles that take the size of the weights and biases as input and output the initialized value.
Create a fully connected layer with output size 10 and specify initializers that sample the weights and biases from a Gaussian distribution with a standard deviation of 0.0001.
outputSize = 10;
weightsInitializationFcn = @(sz) randn(sz) * 0.0001;
biasInitializationFcn = @(sz) randn(sz) * 0.0001;

layer = fullyConnectedLayer(outputSize, ...
    'WeightsInitializer',weightsInitializationFcn, ...
    'BiasInitializer',biasInitializationFcn)
layer = 
  FullyConnectedLayer with properties:

          Name: ''

   Hyperparameters
     InputSize: 'auto'
    OutputSize: 10

   Learnable Parameters
       Weights: []
          Bias: []

  Show all properties
Again, the Weights
and Bias
properties are empty. At training time, the software initializes these properties using the specified initialization functions.
Specify Weights and Bias Directly
Create a fully connected layer with an output size of 10 and set the weights and bias to W and b in the MAT file FCWeights.mat, respectively.
outputSize = 10;
load FCWeights

layer = fullyConnectedLayer(outputSize, ...
    'Weights',W, ...
    'Bias',b)
layer = 
  FullyConnectedLayer with properties:

          Name: ''

   Hyperparameters
     InputSize: 720
    OutputSize: 10

   Learnable Parameters
       Weights: [10x720 double]
          Bias: [10x1 double]

  Show all properties
Here, the Weights and Bias properties contain the specified values. At training time, if these properties are nonempty, then the software uses the specified values as the initial weights and biases. In this case, the software does not use the initializer functions.
A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.
The convolutional (and down-sampling) layers are followed by one or more fully connected layers.
As the name suggests, all neurons in a fully connected layer connect to all the neurons in the previous layer. This layer combines all of the features (local information) learned by the previous layers across the image to identify the larger patterns. For classification problems, the last fully connected layer combines the features to classify the images. This is the reason that the outputSize
argument of the last fully connected layer of the network is equal to the number of classes of the data set. For regression problems, the output size must be equal to the number of response variables.
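As a brief sketch (illustrative, assuming a 5-class classification problem), the final layers of such a network might look like this:

% The last fully connected layer produces one output per class.
numClasses = 5;
finalLayers = [ ...
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];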
You can also adjust the learning rate and the regularization parameters for this layer using
the related name-value pair arguments when creating the fully connected layer. If you choose
not to adjust them, then trainNetwork
uses the global training
parameters defined by the trainingOptions
function. For details on
global and layer training options, see Set Up Parameters and Train Convolutional Neural Network.
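For instance, in the following sketch (illustrative values), the effective learning rate for the weights in this layer is 2 x 0.01 = 0.02:

% The global learning rate comes from trainingOptions; the layer's
% WeightLearnRateFactor scales it for this layer only.
options = trainingOptions('sgdm','InitialLearnRate',0.01);
layer = fullyConnectedLayer(10,'WeightLearnRateFactor',2);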
A fully connected layer multiplies the input by a weight matrix W and then adds a bias vector b.
If the input to the layer is a sequence (for example, in an LSTM network), then the fully connected layer acts independently on each time step. For example, if the layer before the fully connected layer outputs an array X of size D-by-N-by-S, then the fully connected layer outputs an array Z of size outputSize-by-N-by-S. At time step t, the corresponding entry of Z is WX_t + b, where X_t denotes time step t of X.
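The following sketch (illustrative sizes, using plain matrix arithmetic rather than a trained network) emulates this per-time-step behavior:

% Apply the same weights W and bias b independently to each time step of a
% D-by-N-by-S array X, producing an outputSize-by-N-by-S array Z.
D = 4; N = 2; S = 3; outputSize = 5;
W = randn(outputSize,D);   % OutputSize-by-InputSize
b = randn(outputSize,1);   % OutputSize-by-1
X = randn(D,N,S);
Z = zeros(outputSize,N,S);
for t = 1:S
    Z(:,:,t) = W*X(:,:,t) + b;   % Z_t = W*X_t + b at each time step
end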
Behavior changed in R2019a
Starting in R2019a, the software, by default, initializes the layer weights of this layer using the Glorot initializer. This behavior helps stabilize training and usually reduces the training time of deep networks.
In previous releases, the software, by default, initializes the layer weights by sampling from a normal distribution with zero mean and standard deviation 0.01. To reproduce this behavior, set the 'WeightsInitializer' option of the layer to 'narrow-normal'.
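For example:

% Reproduce the pre-R2019a default weight initialization.
layer = fullyConnectedLayer(10,'WeightsInitializer','narrow-normal');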
[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256. 2010.
[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification." In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034. 2015.
[3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).
See Also

Deep Network Designer | batchNormalizationLayer | convolution2dLayer | reluLayer | trainNetwork