Global average pooling layer
A global average pooling layer performs downsampling by computing the mean over the height and width dimensions of the input.
In an image classification network, you can use a globalAveragePooling2dLayer before the final fully connected layer to reduce the size of the activations without sacrificing performance. The reduced size of the activations means that the downstream fully connected layers will have fewer weights, reducing the size of your network.
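For example, here is a minimal sketch of such a network, assuming a 28-by-28 grayscale input and 10 classes (both illustrative choices, not taken from this page):

layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    globalAveragePooling2dLayer   % reduces each 14-by-14 feature map to a single value
    fullyConnectedLayer(10)       % connects to 32 values instead of 14-by-14-by-32
    softmaxLayer
    classificationLayer];

Without the global average pooling layer, the fully connected layer would connect to a 14-by-14-by-32 volume and need 14*14*32*10 = 62,720 weights; with it, the layer connects to a 1-by-1-by-32 volume and needs only 320.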
You can use a globalAveragePooling2dLayer toward the end of a classification network instead of a fullyConnectedLayer. Because global pooling layers have no learnable parameters, they can be less prone to overfitting and can reduce the size of the network. These networks can also be more robust to spatial translations of the input data. You can also replace the fully connected layer with a globalMaxPooling2dLayer. Whether a globalMaxPooling2dLayer or a globalAveragePooling2dLayer is more appropriate depends on your data set.
To use a global average pooling layer instead of a fully connected layer, the number of channels in the input to globalAveragePooling2dLayer must match the number of classes in the classification problem.
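For example, here is a sketch of a network that omits the fully connected layer entirely, assuming 10 classes and using a 1-by-1 convolution (an illustrative choice, not from this page) to produce one feature map per class:

numClasses = 10;   % assumed number of classes, for illustration only
layers = [
    imageInputLayer([32 32 3])
    convolution2dLayer(3,32,'Padding','same')
    reluLayer
    convolution2dLayer(1,numClasses)    % one feature map per class
    globalAveragePooling2dLayer         % one averaged score per class
    softmaxLayer
    classificationLayer];

Because the global average pooling layer receives a 32-by-32-by-10 input, its 1-by-1-by-10 output feeds the softmax layer directly, with no learnable parameters in place of the fully connected layer.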
See Also
averagePooling2dLayer | convolution2dLayer | globalAveragePooling3dLayer | globalMaxPooling2dLayer | maxPooling2dLayer