This example shows how to train a SqueezeSegV2 semantic segmentation network on 3-D organized lidar point cloud data.
SqueezeSegV2 [1] is a convolutional neural network (CNN) for performing end-to-end semantic segmentation of road objects based on an organized lidar point cloud. The training procedure shown in this example requires 2-D spherical projected images as inputs to the deep learning network.
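The spherical projection maps each 3-D point to an image pixel using its azimuth and elevation angles relative to the sensor. As a minimal illustration of the idea (not the exact mapping used by the supporting functions in this example), the following code computes the projection angles for a single point.

% Simplified illustration of spherical projection for one lidar point.
% The exact row and column mapping used by this example's supporting
% functions may differ.
x = 10; y = 5; z = -1;         % 3-D coordinates of a point, in meters
r = sqrt(x^2 + y^2 + z^2);     % range from the sensor
azimuth = atan2(y, x);         % horizontal angle, maps to an image column
elevation = asin(z/r);         % vertical angle, maps to an image row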
This example uses a highway scene data set collected using an Ouster OS1 sensor. It contains organized lidar point cloud scans of highway scenes and corresponding ground truth labels for car and truck objects. The size of the data file is approximately 760 MB.
Execute this code to download the highway scene data set. The data set contains 1617 point clouds stored as pointCloud objects in a cell array. Corresponding ground truth data, which is attached to the example, contains bounding box information of cars and trucks in each point cloud.
url = 'https://www.mathworks.com/supportfiles/lidar/data/WPI_LidarData.tar.gz';
outputFolder = fullfile(tempdir,'WPI');
lidarDataTarFile = fullfile(outputFolder,'WPI_LidarData.tar.gz');

if ~exist(lidarDataTarFile, 'file')
    mkdir(outputFolder);
    disp('Downloading WPI Lidar driving data (760 MB)...');
    websave(lidarDataTarFile, url);
    untar(lidarDataTarFile,outputFolder);
end

% Check if tar.gz file is downloaded, but not uncompressed.
if ~exist(fullfile(outputFolder, 'WPI_LidarData.mat'), 'file')
    untar(lidarDataTarFile,outputFolder);
end

lidarData = load(fullfile(outputFolder, 'WPI_LidarData.mat'));
groundTruthData = load('WPI_LidarGroundTruth.mat');
Note: Depending on your Internet connection, the download process can take some time. The code suspends MATLAB® execution until the download process is complete. Alternatively, you can download the data set to your local disk using your web browser, and then extract WPI_LidarData. To use the file you downloaded from the web, change the outputFolder variable in the code to the location of the downloaded file.
Download the pretrained network to avoid having to wait for training to complete. If you want to train the network, set the doTraining variable to true.
doTraining = false;
if ~doTraining && ~exist('trainedSqueezeSegV2Net.mat','file')
    disp('Downloading pretrained network (2 MB)...');
    pretrainedURL = 'https://www.mathworks.com/supportfiles/lidar/data/trainedSqueezeSegV2Net.mat';
    websave('trainedSqueezeSegV2Net.mat', pretrainedURL);
end
Use the helperGenerateLidarData supporting function, attached to this example, to generate training data from the lidar point clouds. The function uses point cloud and bounding box data to create five-channel input images and pixel label images. To create the pixel label images, the function selects points inside the bounding box and labels them with the bounding box class ID. Each training image is specified as a 64-by-1024-by-5 array:
The height of each image is 64 pixels.
The width of each image is 1024 pixels.
Each image has five channels, which specify the 3-D coordinates of each point, its intensity, and its range.
A visual representation of the training data follows.
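As a rough sketch of how an organized point cloud maps to such a five-channel image (the attached helperGenerateLidarData and helperPointCloudToImage functions implement the actual conversion, so treat this only as an approximation of their behavior):

% Sketch: assemble a 64-by-1024-by-5 image from an organized point cloud.
% Assumes ptCloud is one of the organized point clouds from lidarData,
% with ptCloud.Location of size 64-by-1024-by-3 and ptCloud.Intensity
% of size 64-by-1024.
loc = ptCloud.Location;
rangeChannel = sqrt(sum(loc.^2, 3));    % range of each point from the sensor
im = zeros(size(loc,1), size(loc,2), 5, 'single');
im(:,:,1:3) = loc;                      % x-, y-, and z-coordinates
im(:,:,4) = ptCloud.Intensity;          % intensity
im(:,:,5) = rangeChannel;               % range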
Generate the five-channel training images and pixel label images.
imagesFolder = fullfile(outputFolder, 'images');
labelsFolder = fullfile(outputFolder, 'labels');
helperGenerateLidarData(lidarData, groundTruthData, imagesFolder, labelsFolder);
Preprocessing data 100% complete
The five-channel images are saved as MAT files. Labels are saved as PNG files.
Note: Processing can take some time. The code suspends MATLAB® execution until processing is complete.
Create ImageDatastore and PixelLabelDatastore
Use the imageDatastore object to extract and store the five channels of the 2-D spherical images using the helperImageMatReader supporting function, a custom MAT-file reader. This function is attached to this example as a supporting file.
imds = imageDatastore(imagesFolder, ...
    'FileExtensions', '.mat', ...
    'ReadFcn', @helperImageMatReader);
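For reference, a reader for this kind of datastore simply loads the MAT file and returns the stored array. The sketch below is a hypothetical reader, assuming each MAT file stores the five-channel image in a single variable; the variable name used by the actual helperImageMatReader supporting file may differ.

function data = exampleImageMatReader(filename)
% exampleImageMatReader Hypothetical MAT-file reader for imageDatastore.
% Assumes each MAT file contains a single variable holding the
% 64-by-1024-by-5 image.
s = load(filename);
fields = fieldnames(s);
data = s.(fields{1});   % return the stored five-channel image
end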
Use the pixelLabelDatastore (Computer Vision Toolbox) object to store pixel-wise labels from the pixel label images. The object maps each pixel label to a class name. In this example, cars and trucks are the only objects of interest; all other pixels are the background. Specify these classes and assign a unique label ID to each class.
classNames = [
    "background"
    "car"
    "truck"
];
numClasses = numel(classNames);

% Specify label IDs from 1 to the number of classes.
labelIDs = 1 : numClasses;

pxds = pixelLabelDatastore(labelsFolder, classNames, labelIDs);
Load and display one of the labeled images by overlaying it on the corresponding intensity image using the helperDisplayLidarOverlayImage function, defined in the Supporting Functions section of this example.
imageNumber = 228;

% Point cloud (channels 1, 2, and 3 are for location, and channel 4 is for intensity).
I = readimage(imds, imageNumber);
labelMap = readimage(pxds, imageNumber);

figure;
helperDisplayLidarOverlayImage(I, labelMap, classNames);
title('Ground Truth');
Use the helperPartitionLidarData supporting function, attached to this example, to split the data into training, validation, and test sets that contain 970, 216, and 431 images, respectively.
[imdsTrain, imdsVal, imdsTest, pxdsTrain, pxdsVal, pxdsTest] = ...
helperPartitionLidarData(imds, pxds);
Use the combine function to combine the pixel and image datastores for the training and validation data sets.
trainingData = combine(imdsTrain, pxdsTrain);
validationData = combine(imdsVal, pxdsVal);
Data augmentation is used to improve network accuracy by randomly transforming the original data during training. By using data augmentation, you can add more variety to the training data without actually having to increase the number of labeled training samples.
Augment the training data by using the transform function with custom preprocessing operations specified by the augmentData function, defined in the Supporting Functions section of this example. This function randomly flips the multichannel 2-D image and associated labels in the horizontal direction. Apply data augmentation to only the training data set.
augmentedTrainingData = transform(trainingData, @(x) augmentData(x));
Create a standard SqueezeSegV2 [1] network by using the squeezesegv2Layers (Lidar Toolbox) function. In the SqueezeSegV2 network, the encoder subnetwork consists of fire modules interspersed with max-pooling layers. This arrangement successively decreases the resolution of the input image. In addition, the SqueezeSegV2 network uses the focal loss function to mitigate the effect of an imbalanced class distribution on network accuracy. For more details on how to use the focal loss function in semantic segmentation, see focalLossLayer (Computer Vision Toolbox).
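As a quick illustration of why focal loss helps with class imbalance, compare it to cross-entropy for a confidently classified pixel. The alpha and gamma values below are the common defaults from the focal loss literature, not necessarily the values used internally by squeezesegv2Layers.

% Focal loss: FL(p) = -alpha * (1 - p)^gamma * log(p)
% For a well-classified pixel (p close to 1), the (1 - p)^gamma factor
% shrinks the loss, so the many easy background pixels dominate training less.
p = 0.9; alpha = 0.25; gamma = 2;
crossEntropyLoss = -log(p)                     % approximately 0.1054
focalLoss = -alpha * (1 - p)^gamma * log(p)    % approximately 2.6e-4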
Execute this code to create a layer graph that can be used to train the network.
inputSize = [64 1024 5];
lgraph = squeezesegv2Layers(inputSize, numClasses);
Use the analyzeNetwork function to display an interactive visualization of the network architecture.
analyzeNetwork(lgraph)
Use the rmsprop optimization algorithm to train the network. Use the trainingOptions function to specify the hyperparameters for rmsprop.
maxEpochs = 30;
initialLearningRate = 5e-4;
miniBatchSize = 8;
l2reg = 2e-4;

options = trainingOptions('rmsprop', ...
    'InitialLearnRate', initialLearningRate, ...
    'L2Regularization', l2reg, ...
    'MaxEpochs', maxEpochs, ...
    'MiniBatchSize', miniBatchSize, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.1, ...
    'LearnRateDropPeriod', 10, ...
    'ValidationData', validationData, ...
    'Plots', 'training-progress', ...
    'VerboseFrequency', 20);
Note: Reduce miniBatchSize to control memory usage when training.
The doTraining variable is set to false (the default) to load the pretrained network. You can train the network yourself by setting doTraining to true. If you train the network, you can use a CPU or a GPU. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher.
if doTraining
    [net, info] = trainNetwork(trainingData, lgraph, options);
else
    pretrainedNetwork = 'trainedSqueezeSegV2Net.mat';
    load(pretrainedNetwork,'net');
end
Use the trained network to predict results on a test point cloud and display the segmentation result. First, read a PCD file and convert the point cloud to a five-channel input image. Predict the labels using the trained network, then display the segmentation result as an overlay on the intensity image.
ptCloud = pcread('ousterLidarDrivingData.pcd');
I = helperPointCloudToImage(ptCloud);
predictedResult = semanticseg(I, net);

figure;
helperDisplayLidarOverlayImage(I, predictedResult, classNames);
title('Semantic Segmentation Result');
Use the helperDisplayLidarOverlayPointCloud function, defined in the Supporting Functions section of this example, to display the segmentation result on the ptCloud object.
figure;
helperDisplayLidarOverlayPointCloud(ptCloud, predictedResult, numClasses);
view([95.71 24.14])
title('Semantic Segmentation Result on Point Cloud');
Run the semanticseg function on the entire test set to measure the accuracy of the network. Set MiniBatchSize to 8 to reduce memory usage when segmenting images. You can increase or decrease this value depending on the amount of GPU memory you have on your system.
outputLocation = fullfile(tempdir, 'output');
if ~exist(outputLocation,'dir')
    mkdir(outputLocation);
end

pxdsResults = semanticseg(imdsTest, net, ...
    'MiniBatchSize', 8, ...
    'WriteLocation', outputLocation, ...
    'Verbose', false);
The semanticseg function returns the segmentation results on the test data set as a PixelLabelDatastore object. The function writes the predicted pixel label data for each test image in the imdsTest object to the disk in the location specified by the 'WriteLocation' argument.
Use the evaluateSemanticSegmentation (Computer Vision Toolbox) function to compute the semantic segmentation metrics from the test set results.
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTest, 'Verbose', false);
You can measure the amount of overlap per class using the intersection-over-union (IoU) metric.
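For a given class, IoU is the number of pixels labeled with that class in both the prediction and the ground truth, divided by the number of pixels labeled with that class in either one. As a sketch, you can compute per-class IoU for a single test image with the jaccard (Image Processing Toolbox) function:

% Compute per-class IoU for one test image.
predLabels = readimage(pxdsResults, 1);         % predicted labels (categorical)
trueLabels = readimage(pxdsTest, 1);            % ground truth labels (categorical)
iouPerClass = jaccard(predLabels, trueLabels)   % one IoU value per class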
The evaluateSemanticSegmentation function returns metrics for the entire data set, for individual classes, and for each test image. To see the metrics at the data set level, inspect the metrics.DataSetMetrics property.
metrics.DataSetMetrics
ans=1×5 table
GlobalAccuracy MeanAccuracy MeanIoU WeightedIoU MeanBFScore
______________ ____________ _______ ___________ ___________
0.99376 0.71176 0.63813 0.98912 0.90017
The data set metrics provide a high-level overview of network performance. To see the impact each class has on the overall performance, inspect the metrics for each class using the metrics.ClassMetrics property.
metrics.ClassMetrics
ans=3×3 table
Accuracy IoU MeanBFScore
________ _______ ___________
background 0.99756 0.99463 0.98424
car 0.71507 0.52999 0.77356
truck 0.42264 0.38977 0.75395
Although the overall network performance is good, the class metrics indicate that you can improve the network performance by training the network on more labeled data containing the car and truck classes.
The augmentData function randomly flips the spherical image and associated labels in the horizontal direction.
function out = augmentData(inp)
% Apply random horizontal flipping.
out = cell(size(inp));

% Randomly flip the five-channel image and pixel labels horizontally.
I = inp{1};
sz = size(I);
tform = randomAffine2d('XReflection',true);
rout = affineOutputView(sz,tform,'BoundsStyle','centerOutput');
out{1} = imwarp(I,tform,'OutputView',rout);
out{2} = imwarp(inp{2},tform,'OutputView',rout);
end
The helperDisplayLidarOverlayImage function overlays the semantic segmentation map over the intensity channel of the 2-D spherical image. The function also resizes the overlaid image for better visualization.
function helperDisplayLidarOverlayImage(lidarImage, labelMap, classNames)
%helperDisplayLidarOverlayImage Overlay labels over the intensity image.
%
% helperDisplayLidarOverlayImage(lidarImage, labelMap, classNames)
% displays the overlaid image. lidarImage is a five-channel lidar input.
% labelMap contains pixel labels and classNames is an array of label
% names.

% Read the intensity channel from the lidar image.
intensityChannel = uint8(lidarImage(:,:,4));

% Load the lidar color map.
cmap = helperLidarColorMap();

% Overlay the labels over the intensity image.
B = labeloverlay(intensityChannel,labelMap,'Colormap',cmap,'Transparency',0.4);

% Resize for better visualization.
B = imresize(B, 'Scale', [3 1], 'method', 'nearest');
imshow(B);
helperPixelLabelColorbar(cmap, classNames);
end
The helperDisplayLidarOverlayPointCloud function overlays the segmentation result over a 3-D organized point cloud.
function helperDisplayLidarOverlayPointCloud(ptCloud, labelMap, numClasses)
%helperDisplayLidarOverlayPointCloud Overlay labels over point cloud object.
%
% helperDisplayLidarOverlayPointCloud(ptCloud, labelMap, numClasses)
% displays the overlaid pointCloud object. ptCloud is the organized
% 3-D point cloud input. labelMap contains pixel labels and numClasses
% is the number of predicted classes.

sz = size(labelMap);

% Apply the color red to cars.
carClassColor = zeros(sz(1), sz(2), numClasses, 'uint8');
carClassColor(:,:,1) = 255*ones(sz(1), sz(2), 'uint8');

% Apply the color blue to trucks.
truckClassColor = zeros(sz(1), sz(2), numClasses, 'uint8');
truckClassColor(:,:,3) = 255*ones(sz(1), sz(2), 'uint8');

% Apply the color gray to the background.
backgroundClassColor = 153*ones(sz(1), sz(2), numClasses, 'uint8');

% Extract indices from the labels.
carIndices = labelMap == 'car';
truckIndices = labelMap == 'truck';
backgroundIndices = labelMap == 'background';

% Extract a point cloud of different classes.
carPointCloud = select(ptCloud, carIndices, 'OutputSize','full');
truckPointCloud = select(ptCloud, truckIndices, 'OutputSize','full');
backgroundPointCloud = select(ptCloud, backgroundIndices, 'OutputSize','full');

% Apply colors to different classes.
carPointCloud.Color = carClassColor;
truckPointCloud.Color = truckClassColor;
backgroundPointCloud.Color = backgroundClassColor;

% Merge and add all the processed point clouds with class information.
coloredCloud = pcmerge(carPointCloud, truckPointCloud, 0.01);
coloredCloud = pcmerge(coloredCloud, backgroundPointCloud, 0.01);

% Plot the colored point cloud. Set an ROI for better visualization.
ax = pcshow(coloredCloud);
set(ax,'XLim',[-35.0 35.0],'YLim',[-32.0 32.0],'ZLim',[-3.0 8.0], ...
    'XColor','none','YColor','none','ZColor','none');
set(get(ax,'parent'), 'units','normalized');
end
The helperLidarColorMap function defines the colormap used by the lidar data set.
function cmap = helperLidarColorMap()

cmap = [
    0.00  0.00  0.00  % background
    0.98  0.00  0.00  % car
    0.00  0.00  0.98  % truck
];
end
The helperPixelLabelColorbar function adds a colorbar to the current axes. The colorbar is formatted to display the class names with their corresponding colors.
function helperPixelLabelColorbar(cmap, classNames)
colormap(gca, cmap);

% Add a colorbar to the current figure.
c = colorbar('peer', gca);

% Use class names for tick marks.
c.TickLabels = classNames;
numClasses = size(classNames, 1);

% Center tick labels.
c.Ticks = 1/(numClasses * 2) : 1/numClasses : 1;

% Remove tick marks.
c.TickLength = 0;
end
[1] Wu, Bichen, Xuanyu Zhou, Sicheng Zhao, Xiangyu Yue, and Kurt Keutzer. “SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud.” In 2019 International Conference on Robotics and Automation (ICRA), 4376–82. Montreal, QC, Canada: IEEE, 2019. https://doi.org/10.1109/ICRA.2019.8793495.