3-D convolutional layer
A 3-D convolutional layer applies sliding cuboidal convolution filters to three-dimensional input. The layer convolves the input by moving the filters along the input vertically, horizontally, and along the depth, computing the dot product of the weights and the input, and then adding a bias term.
layer = convolution3dLayer(filterSize,numFilters) creates a 3-D convolutional layer and sets the FilterSize and NumFilters properties.
layer = convolution3dLayer(filterSize,numFilters,Name,Value) sets the optional Stride, DilationFactor, NumChannels, Parameters and Initialization, Learn Rate and Regularization, and Name properties using name-value pairs. To specify input padding, use the 'Padding' name-value pair argument. For example, convolution3dLayer(11,96,'Stride',4,'Padding',1) creates a 3-D convolutional layer with 96 filters of size [11 11 11], a stride of [4 4 4], and zero padding of size 1 along all edges of the layer input. You can specify multiple name-value pairs. Enclose each property name in single quotes.
Use comma-separated name-value pair arguments to specify the size of the zero padding to add along the edges of the layer input or to set the Stride, DilationFactor, NumChannels, Parameters and Initialization, Learn Rate and Regularization, and Name properties. Enclose names in single quotes.

Example: convolution3dLayer(3,16,'Padding','same') creates a 3-D convolutional layer with 16 filters of size [3 3 3] and 'same' padding. At training time, the software calculates and sets the size of the zero padding so that the layer output has the same size as the input.

'Padding' — Input edge padding
0 (default) | array of nonnegative integers | 'same'

Input edge padding, specified as the comma-separated pair consisting of 'Padding' and one of these values:
'same' — Add padding of size calculated by the software at training or prediction time so that the output has the same size as the input when the stride equals 1. If the stride is larger than 1, then the output size is ceil(inputSize/stride), where inputSize is the height, width, or depth of the input and stride is the stride in the corresponding dimension. The software adds the same amount of padding to the top and bottom, to the left and right, and to the front and back, if possible. If the padding in a given dimension has an odd value, then the software adds the extra padding to the input as postpadding. In other words, the software adds extra vertical padding to the bottom, extra horizontal padding to the right, and extra depth padding to the back of the input.
Nonnegative integer p — Add padding of size p to all the edges of the input.
Three-element vector [a b c] of nonnegative integers — Add padding of size a to the top and bottom, padding of size b to the left and right, and padding of size c to the front and back of the input.
2-by-3 matrix [t l f;b r k] of nonnegative integers — Add padding of size t to the top, b to the bottom, l to the left, r to the right, f to the front, and k to the back of the input. In other words, the top row specifies the prepadding and the second row defines the postpadding in the three dimensions.
Example: 'Padding',1 adds one row of padding to the top and bottom, one column of padding to the left and right, and one plane of padding to the front and back of the input.

Example: 'Padding','same' adds padding so that the output has the same size as the input (if the stride equals 1).
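When the stride is larger than 1 and you use 'Padding','same', you can check the resulting output size with the ceil(inputSize/stride) rule. The following sketch uses illustrative input and stride values (not taken from any particular network):

% Illustrative check of the 'same' padding output-size rule.
% For stride > 1, the output size in each dimension is ceil(inputSize/stride).
inputSize  = [28 28 28];                 % height, width, and depth of the input
stride     = [4 4 4];
outputSize = ceil(inputSize ./ stride)   % returns [7 7 7]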
FilterSize — Height, width, and depth of filters

Height, width, and depth of the filters, specified as a vector [h w d] of three positive integers, where h is the height, w is the width, and d is the depth. FilterSize defines the size of the local regions to which the neurons connect in the input.

When creating the layer, you can specify FilterSize as a scalar to use the same value for the height, width, and depth.

Example: [5 5 5] specifies filters with a height, width, and depth of 5.
NumFilters — Number of filters

Number of filters, specified as a positive integer. This number corresponds to the number of neurons in the convolutional layer that connect to the same region in the input. This parameter determines the number of channels (feature maps) in the output of the convolutional layer.

Example: 96
Stride — Step size for traversing input
[1 1 1] (default) | vector of three positive integers

Step size for traversing the input in three dimensions, specified as a vector [a b c] of three positive integers, where a is the vertical step size, b is the horizontal step size, and c is the step size along the depth. When creating the layer, you can specify Stride as a scalar to use the same value for step sizes in all three directions.

Example: [2 3 1] specifies a vertical step size of 2, a horizontal step size of 3, and a step size along the depth of 1.
DilationFactor — Factor for dilated convolution
[1 1 1] (default) | vector of three positive integers

Factor for dilated convolution (also known as atrous convolution), specified as a vector [h w d] of three positive integers, where h is the vertical dilation, w is the horizontal dilation, and d is the dilation along the depth. When creating the layer, you can specify DilationFactor as a scalar to use the same value for dilation in all three directions.

Use dilated convolutions to increase the receptive field of the layer (the area of the input that the layer can see) without increasing the number of parameters or the amount of computation.

The layer expands the filters by inserting zeros between each filter element. The dilation factor determines the step size for sampling the input or, equivalently, the upsampling factor of the filter. It corresponds to an effective filter size of (Filter Size – 1) .* Dilation Factor + 1. For example, a 3-by-3-by-3 filter with the dilation factor [2 2 2] is equivalent to a 5-by-5-by-5 filter with zeros between the elements.

Example: [2 3 1] dilates the filter vertically by a factor of 2, horizontally by a factor of 3, and along the depth by a factor of 1.
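As a quick check of the effective filter size formula, the following sketch (with illustrative values) reproduces the 3-by-3-by-3 filter example above:

% Effective filter size under dilation (illustrative values).
filterSize     = [3 3 3];
dilationFactor = [2 2 2];
effectiveSize  = (filterSize - 1) .* dilationFactor + 1   % returns [5 5 5]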
PaddingSize — Size of padding
[0 0 0;0 0 0] (default) | 2-by-3 matrix of nonnegative integers

Size of padding to apply to input borders, specified as a 2-by-3 matrix [t l f;b r k] of nonnegative integers, where t and b are the padding applied to the top and bottom in the vertical direction, l and r are the padding applied to the left and right in the horizontal direction, and f and k are the padding applied to the front and back along the depth. In other words, the top row specifies the prepadding and the second row defines the postpadding in the three dimensions.

When you create a layer, use the 'Padding' name-value pair argument to specify the padding size.

Example: [1 2 4;1 2 4] adds one row of padding to the top and bottom, two columns of padding to the left and right, and four planes of padding to the front and back of the input.
PaddingMode — Method to determine padding size
'manual' (default) | 'same'

Method to determine padding size, specified as 'manual' or 'same'.

The software automatically sets the value of PaddingMode based on the 'Padding' value you specify when creating a layer.

If you set the 'Padding' option to a scalar or a vector of nonnegative integers, then the software automatically sets PaddingMode to 'manual'.

If you set the 'Padding' option to 'same', then the software automatically sets PaddingMode to 'same' and calculates the size of the padding at training time so that the output has the same size as the input when the stride equals 1. If the stride is larger than 1, then the output size is ceil(inputSize/stride), where inputSize is the height, width, or depth of the input and stride is the stride in the corresponding dimension. The software adds the same amount of padding to the top and bottom, to the left and right, and to the front and back, if possible. If the padding in a given dimension has an odd value, then the software adds the extra padding to the input as postpadding. In other words, the software adds extra vertical padding to the bottom, extra horizontal padding to the right, and extra depth padding to the back of the input.
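As a sketch of this behavior, the two calls below create layers whose PaddingMode differs only because of the 'Padding' value passed at construction (the filter size and filter count are illustrative):

% 'Padding' given as a number sets PaddingMode to 'manual'.
layerA = convolution3dLayer(3,16,'Padding',1);
layerA.PaddingMode        % 'manual'

% 'Padding' given as 'same' sets PaddingMode to 'same'.
layerB = convolution3dLayer(3,16,'Padding','same');
layerB.PaddingMode        % 'same'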
NumChannels — Number of channels for each filter
'auto' (default) | positive integer

Number of channels for each filter, specified as 'auto' or a positive integer.

This parameter is always equal to the number of channels of the input to the convolutional layer. For example, if the input is a color image, then the number of channels for the input is 3. If the number of filters for the convolutional layer prior to the current layer is 16, then the number of channels for the current layer is 16.

If NumChannels is 'auto', then the software determines the number of channels at training time.

Example: 256
WeightsInitializer — Function to initialize weights
'glorot' (default) | 'he' | 'narrow-normal' | 'zeros' | 'ones' | function handle

Function to initialize the weights, specified as one of the following:

'glorot' – Initialize the weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = FilterSize(1)*FilterSize(2)*FilterSize(3)*NumChannels and numOut = FilterSize(1)*FilterSize(2)*FilterSize(3)*NumFilters.

'he' – Initialize the weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/numIn, where numIn = FilterSize(1)*FilterSize(2)*FilterSize(3)*NumChannels.

'narrow-normal' – Initialize the weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

'zeros' – Initialize the weights with zeros.

'ones' – Initialize the weights with ones.

Function handle – Initialize the weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the weights. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the weights when the Weights property is empty.

Data Types: char | string | function_handle
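For a custom initializer, the function handle only needs to return an array of the requested size. The following sketch (illustrative, not the example from Specify Custom Weight Initialization Function) samples from a zero-mean normal distribution with standard deviation 0.01:

% Sketch of a custom initializer of the form weights = func(sz).
customInit = @(sz) 0.01 * randn(sz);
layer = convolution3dLayer(5,32,'WeightsInitializer',customInit);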
BiasInitializer — Function to initialize bias
'zeros' (default) | 'narrow-normal' | 'ones' | function handle

Function to initialize the bias, specified as one of the following:

'zeros' – Initialize the bias with zeros.

'ones' – Initialize the bias with ones.

'narrow-normal' – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

Data Types: char | string | function_handle
Weights — Layer weights
[] (default) | numeric array

Layer weights for the convolutional layer, specified as a numeric array.

The layer weights are learnable parameters. You can specify the initial value for the weights directly using the Weights property of the layer. When training a network, if the Weights property of the layer is nonempty, then trainNetwork uses the Weights property as the initial value. If the Weights property is empty, then trainNetwork uses the initializer specified by the WeightsInitializer property of the layer.

At training time, Weights is a FilterSize(1)-by-FilterSize(2)-by-FilterSize(3)-by-NumChannels-by-NumFilters array.

Data Types: single | double
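For instance, the sketch below uses an illustrative filter size, channel count, and filter count and sets Weights to an array with the expected dimensions:

% Weights for FilterSize [5 5 5], NumChannels 3, and NumFilters 16
% must be a 5-by-5-by-5-by-3-by-16 array.
W = rand(5,5,5,3,16);
layer = convolution3dLayer(5,16,'NumChannels',3,'Weights',W);
size(layer.Weights)     % returns 5 5 5 3 16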
Bias — Layer biases
[] (default) | numeric array

Layer biases for the convolutional layer, specified as a numeric array.

The layer biases are learnable parameters. When training a network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.

At training time, Bias is a 1-by-1-by-1-by-NumFilters array.

Data Types: single | double
WeightLearnRateFactor — Learning rate factor for weights

Learning rate factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Example: 2
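As a sketch with illustrative values, a global initial learning rate of 0.001 set in trainingOptions combined with a WeightLearnRateFactor of 2 gives an effective learning rate of 0.002 for this layer's weights:

% The global learning rate comes from trainingOptions; the factor scales it per layer.
options = trainingOptions('sgdm','InitialLearnRate',0.001);
layer = convolution3dLayer(3,16,'WeightLearnRateFactor',2);
% Effective learning rate for this layer's weights: 2 * 0.001 = 0.002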
BiasLearnRateFactor — Learning rate factor for biases

Learning rate factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Example: 2
WeightL2Factor — L2 regularization factor for weights

L2 regularization factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Example: 2
BiasL2Factor — L2 regularization factor for biases

L2 regularization factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Example: 2
Name — Layer name
'' (default) | character vector | string scalar

Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty unique layer name. If you train a series network with the layer and Name is set to '', then the software automatically assigns a name to the layer at training time.

Data Types: char | string
NumInputs — Number of inputs

Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

InputNames — Input names
{'in'} (default)

Input names of the layer. This layer accepts a single input only.

Data Types: cell

NumOutputs — Number of outputs

Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames — Output names
{'out'} (default)

Output names of the layer. This layer has a single output only.

Data Types: cell
Create a 3-D convolution layer with 16 filters, each with a height, width, and depth of 5. Use a stride (step size) of 4 in all three directions.
layer = convolution3dLayer(5,16,'Stride',4)
layer = 
  Convolution3DLayer with properties:

              Name: ''

   Hyperparameters
        FilterSize: [5 5 5]
       NumChannels: 'auto'
        NumFilters: 16
            Stride: [4 4 4]
    DilationFactor: [1 1 1]
       PaddingMode: 'manual'
       PaddingSize: [2x3 double]

   Learnable Parameters
           Weights: []
              Bias: []

  Show all properties
Include a 3-D convolution layer in a Layer array.
layers = [ ...
    image3dInputLayer([28 28 28 3])
    convolution3dLayer(5,16,'Stride',4)
    reluLayer
    maxPooling3dLayer(2,'Stride',4)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
layers = 
  7x1 Layer array with layers:

     1   ''   3-D Image Input         28x28x28x3 images with 'zerocenter' normalization
     2   ''   Convolution             16 5x5x5 convolutions with stride [4 4 4] and padding [0 0 0; 0 0 0]
     3   ''   ReLU                    ReLU
     4   ''   3-D Max Pooling         2x2x2 max pooling with stride [4 4 4] and padding [0 0 0; 0 0 0]
     5   ''   Fully Connected         10 fully connected layer
     6   ''   Softmax                 softmax
     7   ''   Classification Output   crossentropyex
To specify the weights and bias initializer functions, use the WeightsInitializer and BiasInitializer properties respectively. To specify the weights and biases directly, use the Weights and Bias properties respectively.
Specify Initialization Functions
Create a 3-D convolutional layer with 32 filters, each with a height, width, and depth of 5. Specify the weights initializer to be the He initializer.
filterSize = 5;
numFilters = 32;
layer = convolution3dLayer(filterSize,numFilters, ...
    'WeightsInitializer','he')
layer = 
  Convolution3DLayer with properties:

              Name: ''

   Hyperparameters
        FilterSize: [5 5 5]
       NumChannels: 'auto'
        NumFilters: 32
            Stride: [1 1 1]
    DilationFactor: [1 1 1]
       PaddingMode: 'manual'
       PaddingSize: [2x3 double]

   Learnable Parameters
           Weights: []
              Bias: []

  Show all properties
Note that the Weights and Bias properties are empty. At training time, the software initializes these properties using the specified initialization functions.
Specify Custom Initialization Functions
To specify your own initialization function for the weights and biases, set the WeightsInitializer and BiasInitializer properties to a function handle. For these properties, specify function handles that take the size of the weights and biases as input and output the initialized value.
Create a convolutional layer with 32 filters, each with a height, width, and depth of 5. Specify initializers that sample the weights and biases from a Gaussian distribution with a standard deviation of 0.0001.
filterSize = 5;
numFilters = 32;
layer = convolution3dLayer(filterSize,numFilters, ...
    'WeightsInitializer', @(sz) rand(sz) * 0.0001, ...
    'BiasInitializer', @(sz) rand(sz) * 0.0001)
layer = 
  Convolution3DLayer with properties:

              Name: ''

   Hyperparameters
        FilterSize: [5 5 5]
       NumChannels: 'auto'
        NumFilters: 32
            Stride: [1 1 1]
    DilationFactor: [1 1 1]
       PaddingMode: 'manual'
       PaddingSize: [2x3 double]

   Learnable Parameters
           Weights: []
              Bias: []

  Show all properties
Again, the Weights and Bias properties are empty. At training time, the software initializes these properties using the specified initialization functions.
Specify Weights and Bias Directly
Create a 3-D convolutional layer compatible with color images. Set the weights and bias to W and b in the MAT file Conv3dWeights.mat respectively.

filterSize = 5;
numFilters = 32;
load Conv3dWeights

layer = convolution3dLayer(filterSize,numFilters, ...
    'Weights',W, ...
    'Bias',b)
layer = 
  Convolution3DLayer with properties:

              Name: ''

   Hyperparameters
        FilterSize: [5 5 5]
       NumChannels: 3
        NumFilters: 32
            Stride: [1 1 1]
    DilationFactor: [1 1 1]
       PaddingMode: 'manual'
       PaddingSize: [2x3 double]

   Learnable Parameters
           Weights: [5-D double]
              Bias: [1x1x1x32 double]

  Show all properties
Here, the Weights and Bias properties contain the specified values. At training time, if these properties are nonempty, then the software uses the specified values as the initial weights and biases. In this case, the software does not use the initializer functions.
Suppose the size of the input is 28-by-28-by-28-by-1. Create a 3-D convolutional layer with 16 filters, each with a height of 6, a width of 4, and a depth of 5. Set the stride in all dimensions to 4.
Make sure the convolution covers the input completely. For the convolution to fully cover the input, the output dimensions must be integer numbers. When there is no dilation, the i-th output dimension is calculated as (imageSize(i) - filterSize(i) + padding(i)) / stride(i) + 1.
For the vertical output dimension to be an integer, two rows of zero padding are required: (28 – 6 + 2)/4 + 1 = 7. Distribute the padding symmetrically by adding one row of padding at the top and bottom of the image.

For the horizontal output dimension to be an integer, no zero padding is required: (28 – 4 + 0)/4 + 1 = 7.
For the depth output dimension to be an integer, one plane of zero padding is required: (28 – 5 + 1)/4 + 1 = 7. You must distribute the padding asymmetrically across the front and back of the image. This example adds one plane of zero padding to the back of the image.
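The following sketch checks this arithmetic for all three dimensions at once (a quick verification, separate from the layer construction below):

% Output size per dimension: (imageSize - filterSize + padding)/stride + 1
imageSize  = [28 28 28];
filterSize = [6 4 5];
padding    = [2 0 1];     % total padding per dimension (prepadding + postpadding)
stride     = [4 4 4];
outputSize = (imageSize - filterSize + padding) ./ stride + 1   % returns [7 7 7]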
Construct the convolutional layer. Specify 'Padding' as a 2-by-3 matrix. The first row specifies prepadding and the second row specifies postpadding in the three dimensions.
layer = convolution3dLayer([6 4 5],16,'Stride',4,'Padding',[1 0 0;1 0 1])
layer = 
  Convolution3DLayer with properties:

              Name: ''

   Hyperparameters
        FilterSize: [6 4 5]
       NumChannels: 'auto'
        NumFilters: 16
            Stride: [4 4 4]
    DilationFactor: [1 1 1]
       PaddingMode: 'manual'
       PaddingSize: [2x3 double]

   Learnable Parameters
           Weights: []
              Bias: []

  Show all properties
A convolutional layer applies sliding convolutional filters to the input. A 3-D convolutional layer extends the functionality of a 2-D convolutional layer to a third dimension, depth. The layer convolves the input by moving the filters along the input vertically, horizontally, and along the depth, computing the dot product of the weights and the input, and then adding a bias term. To learn more, see the definition of convolutional layer on the convolution2dLayer reference page.
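The sliding dot-product operation can be sketched directly for the simplest case: one filter, a single-channel input, stride 1, no dilation, and no padding. The code below is an illustration of this definition, not the layer's actual implementation, and the input and filter sizes are arbitrary example values.

X = rand(8,8,8);    % example single-channel input volume
W = rand(3,3,3);    % example filter weights
b = 0.1;            % example bias term

[h,w,d]  = size(W);
[H,Wd,D] = size(X);
Y = zeros(H-h+1, Wd-w+1, D-d+1);   % output size with stride 1 and no padding

for i = 1:size(Y,1)
    for j = 1:size(Y,2)
        for k = 1:size(Y,3)
            patch = X(i:i+h-1, j:j+w-1, k:k+d-1);     % local input region
            Y(i,j,k) = sum(patch(:) .* W(:)) + b;     % dot product plus bias
        end
    end
end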
[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256. 2010.
[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification." In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034. 2015.