Camera sensor model with lens in 3D simulation environment
Automated Driving Toolbox / Simulation 3D
The Simulation 3D Camera block provides an interface to a camera with a lens in a 3D simulation environment. This environment is rendered using the Unreal Engine® from Epic Games®. The sensor is based on the ideal pinhole camera model, with a lens added to represent a full camera model, including lens distortion. For more details, see Algorithms.
If you set Sample time to -1, the block uses the sample time specified in the Simulation 3D Scene Configuration block. To use this sensor, you must include a Simulation 3D Scene Configuration block in your model.
The block outputs images captured by the camera during simulation. You can use these images to visualize and verify your driving algorithms. In addition, on the Ground Truth tab, you can select options to output the ground truth data for developing depth estimation and semantic segmentation algorithms. You can also output the location and orientation of the camera in the world coordinate system of the scene. The image shows the block with all ports enabled.
The table summarizes the ports and how to enable them.
| Port | Description | Parameter for Enabling Port | Sample Visualization |
| --- | --- | --- | --- |
| Image | Outputs an RGB image captured by the camera | n/a | (image) |
| Depth | Outputs a depth map with values from 0 m to 1000 m | Output depth | (image) |
| Labels | Outputs a semantic segmentation map of label IDs that correspond to objects in the scene | Output semantic segmentation | (image) |
| Location | Outputs the location of the camera in the world coordinate system | Output location (m) and orientation (rad) | n/a |
| Orientation | Outputs the orientation of the camera in the world coordinate system | Output location (m) and orientation (rad) | n/a |
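As a minimal illustration of working with the Depth port's output (assuming, per the table above, that depth values are linear distances in meters from 0 m to 1000 m), a depth map can be normalized to the [0, 1] range for grayscale display. The function name and range parameter below are illustrative, not part of the block's API:

```python
import numpy as np

def depth_to_grayscale(depth_m, max_depth=1000.0):
    """Normalize a depth map in meters to [0, 1] for grayscale display.

    max_depth mirrors the 0-1000 m output range of the Depth port.
    Values beyond the range are clipped before scaling.
    """
    d = np.clip(np.asarray(depth_m, dtype=float), 0.0, max_depth)
    return d / max_depth
```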
Note
The Simulation 3D Scene Configuration block must execute before the Simulation 3D Camera block. That way, the Unreal Engine 3D visualization environment prepares the data before the Simulation 3D Camera block receives it. To check the block execution order, right-click the blocks and select Properties. On the General tab, confirm these Priority settings:
Simulation 3D Scene Configuration — 0
Simulation 3D Camera — 1
For more information about execution order, see How Unreal Engine Simulation for Automated Driving Works.
To visualize the camera images that are output by the Image port, use a Video Viewer (Computer Vision Toolbox) or To Video Display (Computer Vision Toolbox) block.
To learn how to visualize the depth and semantic segmentation maps that are output by the Depth and Labels ports, see the Depth and Semantic Segmentation Visualization Using Unreal Engine Simulation example.
Because the Unreal Engine can take a long time to start between simulations, consider logging the signals that the sensors output. You can then use this data to develop perception algorithms in MATLAB®. See Configure a Signal for Logging (Simulink).
You can also save image data as a video by using a To Multimedia File (Computer Vision Toolbox) block. For an example of this setup, see Design Lane Marker Detector Using Unreal Engine Simulation Environment.
The block uses the camera model proposed by Jean-Yves Bouguet [1]. The model includes the ideal pinhole camera model and lens distortion.
The pinhole camera model does not account for lens distortion because an ideal pinhole camera does not have a lens. To accurately represent a real camera, the full camera model used by the block includes radial and tangential lens distortion.
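The full camera model can be sketched as follows: a 3-D point in the camera frame is first projected to normalized image coordinates, radial (k1, k2, k3) and tangential (p1, p2) distortion coefficients are applied, and the focal lengths and principal point then map the result to pixels. This Python sketch follows the standard Bouguet/Brown distortion equations; the function name and signature are illustrative, not the block's internal API:

```python
def project_point(X, fx, fy, cx, cy, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
    """Project a 3-D point (camera frame) to pixel coordinates using the
    pinhole model plus radial (k1, k2, k3) and tangential (p1, p2)
    lens distortion."""
    # Pinhole projection to normalized image coordinates.
    x, y = X[0] / X[2], X[1] / X[2]
    r2 = x * x + y * y
    k1, k2, k3 = k
    p1, p2 = p
    # Radial distortion factor: 1 + k1*r^2 + k2*r^4 + k3*r^6.
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Tangential distortion terms.
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # Apply intrinsics (focal lengths and principal point) to get pixels.
    return fx * x_d + cx, fy * y_d + cy
```

With all distortion coefficients set to zero, this reduces to the ideal pinhole projection.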
For more details, see What Is Camera Calibration? (Computer Vision Toolbox).
[1] Bouguet, J.-Y. Camera Calibration Toolbox for Matlab. http://www.vision.caltech.edu/bouguetj/calib_doc
[2] Zhang, Z. "A Flexible New Technique for Camera Calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 22, No. 11, 2000, pp. 1330–1334.
[3] Heikkila, J., and O. Silven. “A Four-step Camera Calibration Procedure with Implicit Image Correction.” IEEE International Conference on Computer Vision and Pattern Recognition. 1997.
cameraIntrinsics (Computer Vision Toolbox)