Lidar Toolbox™ provides algorithms, functions, and apps for designing, analyzing, and testing lidar processing systems. You can perform object detection and tracking, semantic segmentation, shape fitting, lidar registration, and obstacle detection. Lidar Toolbox supports lidar-camera cross calibration for workflows that combine computer vision and lidar processing.
You can train custom detection and semantic segmentation models using deep learning networks such as PointSeg, PointPillars, and SqueezeSegV2. The Lidar Labeler app supports manual and semi-automated labeling of lidar point clouds for training deep learning and machine learning models. The toolbox lets you stream data from Velodyne® lidars and read data recorded by Velodyne and Ibeo lidar sensors.
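For example, a recorded Velodyne packet capture can be read frame by frame with velodyneFileReader; the file name and device model below are placeholders.

    % Read a recorded Velodyne packet capture (file name and device model are placeholders).
    veloReader = velodyneFileReader("lidarData.pcap","HDL32E");
    ptCloud = readFrame(veloReader,1);   % organized point cloud for the first frame
    pcshow(ptCloud)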
Lidar Toolbox provides reference examples illustrating the use of lidar processing for perception and navigation workflows. Most toolbox algorithms support C/C++ code generation for integrating with existing code, desktop prototyping, and deployment.
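As a rough sketch, a function built on code-generation-enabled toolbox calls can be compiled with MATLAB Coder; the function name detectObstacles and its input size are placeholders.

    % Generate a C library from a user-written function (name and input size are placeholders).
    cfg = coder.config("lib");
    codegen -config cfg detectObstacles -args {zeros(32,1024,3,"single")}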
High-level overview of lidar applications.
Interactively label a point cloud or point cloud sequence.
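The app can also be opened from the MATLAB command line before loading a point cloud or sequence from its toolstrip.

    % Open the Lidar Labeler app; data is then loaded interactively.
    lidarLabeler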
Integrate lidar and camera data.
Overview of coordinate systems in Lidar Toolbox.
This example shows how to train a PointSeg semantic segmentation network on 3-D organized lidar point cloud data.
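As a rough sketch of the inference step only, an organized point cloud can be packed into a multichannel image and passed to a trained network; the network variable net and the five-channel packing are assumptions based on this workflow, not built-in toolbox functions.

    % Assumes a trained segmentation network "net" and an organized point cloud "ptCloud".
    loc = single(ptCloud.Location);            % M-by-N-by-3 x, y, z coordinates
    intensity = single(ptCloud.Intensity);     % M-by-N intensity values
    rangeImg = sqrt(sum(loc.^2,3));            % per-point range channel
    inputImg = cat(3,loc,intensity,rangeImg);  % five-channel network input (assumed layout)
    labels = semanticseg(inputImg,net);        % per-point class labels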
This example shows how to detect, classify, and track vehicles by using point cloud data captured by a lidar sensor mounted on an ego vehicle.
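A simplified version of the detection stage might remove ground points, cluster what remains, and fit a cuboid to each cluster; the 0.5 m clustering threshold is an assumed value and ptCloud is an organized point cloud from the sensor.

    % Remove ground points, cluster the remaining points, and fit bounding cuboids.
    groundIdx = segmentGroundFromLidarData(ptCloud);   % logical mask of ground points
    nonGround = select(ptCloud,find(~groundIdx));      % keep non-ground points
    [labels,numClusters] = pcsegdist(nonGround,0.5);   % Euclidean clustering, 0.5 m (assumed)
    models = cell(numClusters,1);
    for k = 1:numClusters
        clusterPts = select(nonGround,find(labels == k));
        models{k} = pcfitcuboid(clusterPts);           % bounding cuboid for each cluster
    end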
This example shows how to estimate the rigid transformation between a 3-D lidar and a camera.
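Once the transformation is estimated, a quick visual check is to project the lidar points onto the camera image; here intrinsics, tform, and the image I are assumed to come from the calibration step and a matching camera frame.

    % Project lidar points into the camera image using the estimated extrinsics.
    imPts = projectLidarPointsOnImage(ptCloud,intrinsics,tform);
    imshow(I)
    hold on
    plot(imPts(:,1),imPts(:,2),".r","MarkerSize",4)
    hold off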
This example demonstrates how to process 3-D lidar data from a sensor mounted on a vehicle to progressively build a map.
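The core of that mapping loop is registering each new scan and merging it into the accumulated map; this sketch uses ICP registration and a 10 cm merge grid, with scans assumed to be a cell array of pointCloud objects.

    % Accumulate a map by registering each incoming scan to the map built so far.
    map = scans{1};
    for k = 2:numel(scans)
        tform = pcregistericp(scans{k},map);   % align the new scan to the current map
        aligned = pctransform(scans{k},tform);
        map = pcmerge(map,aligned,0.1);        % merge with a 10 cm grid step (assumed)
    end
    pcshow(map)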