cameraParameters

Object for storing camera parameters

Description

The cameraParameters object stores the intrinsic, extrinsic, and lens distortion parameters of a camera.

Creation

You can create a cameraParameters object using the cameraParameters function described here. You can also create a cameraParameters object by using the estimateCameraParameters function with an M-by-2-by-numImages array of input image points, where M is the number of key point coordinates in each pattern.

Description

cameraParams = cameraParameters creates a cameraParameters object that contains the intrinsic, extrinsic, and lens distortion parameters of a camera.


cameraParams = cameraParameters(Name,Value) sets properties of the cameraParameters object by using one or more Name,Value pair arguments. Unspecified properties use default values.

cameraParams = cameraParameters(paramStruct) creates a cameraParameters object from paramStruct, a struct containing the parameters of an existing cameraParameters object.
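The following sketch shows the three creation patterns side by side. The name-value pairs and the round trip through toStruct are taken from this page; the specific matrix and coefficient values are placeholders for illustration.

% Default object: identity intrinsic matrix, zero distortion.
defaultParams = cameraParameters;

% Name-value creation (placeholder intrinsics and distortion).
K = [800 0 0; 0 800 0; 320 240 1];  % [fx 0 0; s fy 0; cx cy 1]
params = cameraParameters('IntrinsicMatrix',K,'RadialDistortion',[-0.3 0.1]);

% Round trip through a parameter struct.
paramStruct = toStruct(params);
paramsCopy  = cameraParameters(paramStruct);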

Input Arguments


Camera parameters, specified as a camera parameters struct. To get a paramStruct from an existing cameraParameters object, use the toStruct function.

Properties


Intrinsic Camera Parameters:

Projection matrix, specified as a 3-by-3 matrix. The default is the 3-by-3 identity matrix. The object uses the following format for the matrix:

$$\begin{bmatrix} f_x & 0 & 0 \\ s & f_y & 0 \\ c_x & c_y & 1 \end{bmatrix}$$

The coordinates [cx cy] represent the optical center (the principal point), in pixels. When the x- and y-axes are exactly perpendicular, the skew parameter, s, equals 0.

fx = F*sx
fy = F*sy

  • F — Focal length in world units, typically expressed in millimeters.
  • sx, sy — Number of pixels per world unit in the x and y directions, respectively.
  • fx, fy — Focal length expressed in pixels.
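As a short worked example of these relations (the focal length and pixel pitch below are assumed, not values from this page): a 4 mm lens with 2 µm square pixels gives 500 pixels per millimeter, so fx = fy = 2000 pixels.

% Assumed values for illustration only.
F  = 4;        % focal length, mm
sx = 500;      % pixels per mm in x (2 micron pixel pitch)
sy = 500;      % pixels per mm in y
fx = F*sx      % 2000 pixels
fy = F*sy      % 2000 pixels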

This property is read-only.

Camera intrinsics object, specified as a cameraIntrinsics object. The object contains information about the camera intrinsic calibration parameters, including lens distortion.

Dependency

You must provide an image size (using the ImageSize property) for the Intrinsics property to be non-empty. The intrinsics of the camera parameters depend on the image size.

Image size, specified as a two-element vector [mrows,ncols].
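A minimal sketch of this dependency, assuming a release in which cameraParameters accepts the ImageSize name-value pair; the intrinsic matrix is a placeholder:

K = [800 0 0; 0 800 0; 320 240 1];
withSize    = cameraParameters('IntrinsicMatrix',K,'ImageSize',[480 640]);
withoutSize = cameraParameters('IntrinsicMatrix',K);
withSize.Intrinsics     % cameraIntrinsics object
withoutSize.Intrinsics  % empty, because ImageSize was not set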

Camera Lens Distortion:

Radial distortion coefficients, specified as a two- or three-element vector. When you specify a two-element vector, the object sets the third element to 0. Radial distortion occurs when light rays bend more near the edges of a lens than they do at its optical center. The smaller the lens, the greater the distortion. The camera parameters object calculates the radially distorted location of a point. You can denote the distorted points (xdistorted, ydistorted) as follows:

xdistorted = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)

ydistorted = y * (1 + k1*r^2 + k2*r^4 + k3*r^6)

  • x, y — Undistorted pixel locations, in normalized image coordinates with the origin at the optical center. Because the coordinates are normalized, they are dimensionless.

  • k1, k2, and k3 — Radial distortion coefficients of the lens.

  • r^2 = x^2 + y^2

Typically, two coefficients are sufficient to model radial distortion. For severe distortion, you can include k3.
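A minimal sketch of the radial model applied to one normalized point, using the equations above; the coefficient and point values are placeholders:

% Placeholder radial coefficients and point, for illustration.
k1 = -0.3; k2 = 0.1; k3 = 0;
x = 0.2; y = -0.1;               % normalized image coordinates (dimensionless)
r2 = x^2 + y^2;
scale = 1 + k1*r2 + k2*r2^2 + k3*r2^3;
xd = x*scale                     % radially distorted x
yd = y*scale                     % radially distorted y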

Tangential distortion coefficients, specified as a two-element vector. Tangential distortion occurs when the lens and the image plane are not parallel; the coefficients model this type of distortion. The camera parameters object calculates the tangentially distorted location of a point. The distorted points are denoted as (xdistorted, ydistorted):

xdistorted = x + [2 * p1 * x * y + p2 * (r^2 + 2 * x^2)]

ydistorted = y + [p1 * (r^2 + 2 * y^2) + 2 * p2 * x * y]

  • x, y — Undistorted pixel locations. x and y are in normalized image coordinates. Normalized image coordinates are calculated from pixel coordinates by translating to the optical center and dividing by the focal length in pixels. Thus, x and y are dimensionless.

  • p1 and p2 — Tangential distortion coefficients of the lens.

  • r^2 = x^2 + y^2
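A minimal sketch of the tangential model applied to one normalized point, using the equations above; the coefficient and point values are placeholders:

% Placeholder tangential coefficients and point, for illustration.
p1 = 0.001; p2 = -0.0005;
x = 0.2; y = -0.1;               % normalized image coordinates (dimensionless)
r2 = x^2 + y^2;
xd = x + (2*p1*x*y + p2*(r2 + 2*x^2))   % tangentially distorted x
yd = y + (p1*(r2 + 2*y^2) + 2*p2*x*y)   % tangentially distorted y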

Extrinsic Camera Parameters:

3-D rotation matrices, specified as a 3-by-3-by-P array, where P is the number of pattern images. Each 3-by-3 matrix represents the same 3-D rotation as the corresponding vector in the RotationVectors property.

The following equation provides the transformation that relates a world coordinate in the checkerboard frame [X Y Z] and the corresponding image point [x y]:

$$s \begin{bmatrix} x & y & 1 \end{bmatrix} = \begin{bmatrix} X & Y & Z & 1 \end{bmatrix} \begin{bmatrix} R \\ t \end{bmatrix} K$$

  • R — 3-D rotation matrix.
  • t — Translation vector.
  • K — IntrinsicMatrix.
  • s — Scalar.

This equation does not take distortion into consideration. The undistortImage function removes distortion.
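A minimal sketch of this projection under the row-vector convention in the equation above; the intrinsic matrix, rotation, and translation are placeholders, not values from this page:

% Placeholder intrinsics and pose, for illustration.
K = [800 0 0; 0 800 0; 320 240 1];   % IntrinsicMatrix, [fx 0 0; s fy 0; cx cy 1]
R = eye(3);                          % 3-D rotation matrix
t = [0 0 500];                       % translation, world units
XYZ = [25 40 0];                     % world point on the pattern (Z = 0 plane)

p  = [XYZ 1] * [R; t] * K;           % s*[x y 1]
xy = p(1:2) / p(3)                   % image point, in pixels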

3-D rotation vectors, specified as a P-by-3 matrix containing P rotation vectors. Each vector describes the 3-D rotation of the camera image plane relative to the corresponding calibration pattern. The vector specifies the 3-D axis about which the camera is rotated, where the magnitude is the rotation angle in radians. The RotationMatrices property provides the corresponding 3-D rotation matrices.
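To move between the two rotation representations, the Computer Vision Toolbox provides rotationVectorToMatrix and rotationMatrixToVector (availability depends on your release); a minimal sketch with a placeholder vector:

rvec = [0 0 pi/2];                       % placeholder: 90-degree rotation about z
R = rotationVectorToMatrix(rvec);        % corresponding 3-by-3 rotation matrix
rvecBack = rotationMatrixToVector(R);    % recovers the original vector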

Camera translations, specified as a P-by-3 matrix containing translation vectors for the P images used to estimate the calibration parameters. Each row of the matrix contains a vector that describes the translation of the camera relative to the corresponding pattern, expressed in world units.

The following equation provides the transformation that relates a world coordinate in the checkerboard frame [X Y Z] and the corresponding image point [x y]:

$$s \begin{bmatrix} x & y & 1 \end{bmatrix} = \begin{bmatrix} X & Y & Z & 1 \end{bmatrix} \begin{bmatrix} R \\ t \end{bmatrix} K$$

  • R — 3-D rotation matrix.
  • t — Translation vector.
  • K — IntrinsicMatrix.
  • s — Scalar.

This equation does not take distortion into consideration. The undistortImage function removes distortion.

To ensure that the number of rotation vectors equals the number of translation vectors, set the RotationVectors and TranslationVectors properties in the constructor. Setting only one property but not the other results in an error.
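A minimal sketch of setting the two properties together; all values are placeholders, and depending on your release, related properties (for example, WorldPoints) may also be required:

K = [800 0 0; 0 800 0; 320 240 1];
rvecs = [0 0 0; 0 0.05 0];           % P-by-3 rotation vectors, here P = 2
tvecs = [0 0 500; 10 0 500];         % P-by-3 translation vectors, same P
params = cameraParameters('IntrinsicMatrix',K, ...
    'RotationVectors',rvecs,'TranslationVectors',tvecs);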

Estimated Camera Parameter Accuracy:

Average Euclidean distance between reprojected and detected points, specified as a numeric value in pixels.

Estimated camera parameters accuracy, specified as an M-by-2-by-P array of [x y] coordinates. The [x y] coordinates represent the translation in x and y between the reprojected pattern key points and the detected pattern key points. The values of this property represent the accuracy of the estimated camera parameters. P is the number of pattern images used to estimate the camera parameters, and M is the number of key points in each image.

World points reprojected onto calibration images, specified as an M-by-2-by-P array of [x y] coordinates. P is the number of pattern images and M is the number of keypoints in each image.
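A sketch of how the per-point errors relate to the mean reprojection error, assuming cameraParams was produced by estimateCameraParameters so that these properties are populated:

errs  = cameraParams.ReprojectionErrors;   % M-by-2-by-P array of [x y] errors
dists = squeeze(sqrt(sum(errs.^2, 2)));    % M-by-P Euclidean distances, in pixels
meanErr = mean(dists(:))                   % matches cameraParams.MeanReprojectionError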

Settings for Camera Parameter Estimation:

Number of calibration patterns used to estimate camera extrinsics, specified as an integer. The number of calibration patterns must equal the number of translation and rotation vectors.

World coordinates of key points on calibration pattern, specified as an M-by-2 array. M represents the number of key points in the pattern.

World point units, specified as a character vector or string scalar. The value describes the units of measure of the world points.

Estimate skew flag, specified as a logical scalar. When you set this property to true, the object estimates the image axes skew. When you set it to false, the image axes are exactly perpendicular.

Number of radial distortion coefficients, specified as 2 or 3.

Estimate tangential distortion flag, specified as the logical scalar true or false. When you set this property to true, the object estimates the tangential distortion. When you set it to false, the tangential distortion is negligible.
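In practice, you choose these estimation settings when calibrating rather than setting them on the object directly. A hedged sketch using the name-value options of estimateCameraParameters; the image folder and square size are placeholders:

% Detect checkerboard key points in the calibration images.
imds = imageDatastore(fullfile(toolboxdir('vision'),'visiondata','calibration','mono'));
[imagePoints, boardSize] = detectCheckerboardPoints(imds.Files);

% Generate the corresponding world coordinates (placeholder square size).
squareSize  = 29;  % millimeters
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Calibrate with explicit estimation settings.
cameraParams = estimateCameraParameters(imagePoints, worldPoints, ...
    'EstimateSkew', false, ...
    'NumRadialDistortionCoefficients', 3, ...
    'EstimateTangentialDistortion', true, ...
    'WorldUnits', 'millimeters');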

Examples


Use the camera calibration functions to remove distortion from an image. This example creates a cameraParameters object manually, but in practice, you would use the estimateCameraParameters function or the Camera Calibrator app to derive the object.

Create a cameraParameters object manually.

% Intrinsic matrix in the convention [fx 0 0; s fy 0; cx cy 1].
IntrinsicMatrix = [715.2699 0 0; 0 711.5281 0; 565.6995 355.3466 1];
% Two-term radial distortion; the object sets the third coefficient to 0.
radialDistortion = [-0.3361 0.0921];
cameraParams = cameraParameters('IntrinsicMatrix',IntrinsicMatrix,'RadialDistortion',radialDistortion);

Remove distortion from the images.

I = imread(fullfile(matlabroot,'toolbox','vision','visiondata','calibration','mono','image01.jpg'));
J = undistortImage(I,cameraParams);

Display the original and the undistorted images.

figure; imshowpair(imresize(I,0.5),imresize(J,0.5),'montage');
title('Original Image (left) vs. Corrected Image (right)');



Introduced in R2014a