Create 3-D U-Net layers for semantic segmentation of volumetric images
lgraph = unet3dLayers(inputSize,numClasses) returns a 3-D U-Net network. unet3dLayers includes a pixel classification layer in the network to predict the categorical label for each pixel in an input volumetric image.
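A minimal sketch of this syntax follows; the input size and number of classes are illustrative values chosen here, not values taken from this page.

% Create a 3-D U-Net layer graph for two-class segmentation of
% single-channel volumes. [32 32 32 1] is height-by-width-by-depth-by-channels.
inputSize = [32 32 32 1];
numClasses = 2;
lgraph = unet3dLayers(inputSize,numClasses);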
Use unet3dLayers to create the network architecture for 3-D U-Net. Train the network using the Deep Learning Toolbox™ function trainNetwork (Deep Learning Toolbox).
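The following training sketch assumes dsTrain is a datastore that returns paired volumetric image and pixel label patches (for example, a randomPatchExtractionDatastore); the solver and option values are illustrative, not recommendations from this page.

% Sketch: train the 3-D U-Net returned by unet3dLayers.
% dsTrain is an assumed datastore of volume/label patch pairs.
options = trainingOptions("adam", ...
    "InitialLearnRate",1e-3, ...
    "MaxEpochs",10, ...
    "MiniBatchSize",2);
net = trainNetwork(dsTrain,lgraph,options);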
[lgraph,outputSize] = unet3dLayers(inputSize,numClasses) also returns the size of an output volumetric image from the 3-D U-Net network.
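For example, the output size can be queried when the network is created; the input size below is an illustrative choice, and the comment about matching sizes relies on the default 'same' convolution padding described in the tips.

% Return both the layer graph and the size of the output volume.
% With 'same' convolution padding the output spatial size matches the input.
[lgraph,outputSize] = unet3dLayers([64 64 64 1],3);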
[___] = unet3dLayers(inputSize,numClasses,Name,Value) specifies options using one or more name-value pair arguments, in addition to the input arguments in the previous syntax.
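As a sketch of the name-value syntax, the call below assumes 'EncoderDepth' and 'ConvolutionPadding' are among the accepted name-value arguments; verify the argument names and defaults against the argument descriptions for this function before using them.

% Sketch: request a shallower encoder and explicit 'same' convolution padding.
% 'EncoderDepth' and 'ConvolutionPadding' are assumed argument names.
lgraph = unet3dLayers([64 64 64 1],2, ...
    "EncoderDepth",2, ...
    "ConvolutionPadding","same");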
Use 'same' padding in convolution layers to maintain the same data size from input to output and enable the use of a broad set of input image sizes.
Use patch-based approaches for seamless segmentation of large images. You can extract image patches by using the randomPatchExtractionDatastore function in Image Processing Toolbox™, as in the sketch after these tips.
Use 'valid' padding in convolution layers to prevent border artifacts while you use patch-based approaches for segmentation.
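A sketch of that patch-based workflow, assuming volds is a datastore of training volumes and pxds is the matching pixelLabelDatastore (both names are assumptions for illustration):

% Extract random 64-by-64-by-64 patches from paired volume and label datastores.
% The patch size and PatchesPerImage value are illustrative.
patchSize = [64 64 64];
dsTrain = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
    "PatchesPerImage",16);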
dicePixelClassificationLayer | pixelClassificationLayer | DAGNetwork (Deep Learning Toolbox) | layerGraph (Deep Learning Toolbox) | deeplabv3plusLayers | evaluateSemanticSegmentation | fcnLayers | segnetLayers | semanticseg | unetLayers | trainNetwork (Deep Learning Toolbox)