Before you can acquire data from your device, you must tell the engine the format of the data it can expect to receive from your device. Without this information, the engine does not know how to interpret the data. For example, the engine needs to know the number of bytes used to store each pixel of image data, the length of each line and the total number of lines in each image frame, and the number of planes, or bands, in each image frame (for example, RGB data has three bands). The following figure illustrates this information.
In some cases, this format information is determined by external standards, such as the RS-170/NTSC standard. In other cases, device vendors define many different formats, described in the documentation that comes with the device. Adaptor writers decide which of these supported formats they want to make available to users of their adaptor in their getAvailHW() function, described in Storing Format Information.
This section describes how you specify format information in your adaptor using the adaptor class virtual functions.
You specify the dimensions of the image data a device outputs using the following virtual functions.
getMaxHeight() — Returns an integer that specifies the maximum height of the image data.

getMaxWidth() — Returns an integer that specifies the maximum width of the image data.

getNumberOfBands() — Returns an integer that specifies the number of bands in the data. For example, RGB formats use three bands.
The engine calls these functions in your adaptor to get the resolution information that it displays in the VideoResolution property of the video input object.

vid = videoinput('mydeviceimaq');
get(vid,'VideoResolution')

ans =

   640   480
Your adaptor also calls these functions when it creates the IAdaptorFrame object to receive image data. See Implementing the Acquisition Thread Function for more information.
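The following fragment sketches how those calls typically fit together inside the acquisition thread, as described in Implementing the Acquisition Thread Function. It is only a sketch to show why these dimension functions must report correct values; imageBuffer is a hypothetical pointer to one frame of data already read from the device, and the surrounding thread logic is omitted.

// Sketch of the acquisition loop (see Implementing the Acquisition
// Thread Function). imageBuffer is a hypothetical pointer to one frame
// of data already read from the device.
imaqkit::IAdaptorFrame* frame =
    getEngine()->makeFrame(getFrameType(),   // byte layout of the data
                           getMaxWidth(),    // line length, in pixels
                           getMaxHeight());  // number of lines per frame

// Copy the device data into the frame and pass it to the engine.
frame->setImage(imageBuffer, getMaxWidth(), getMaxHeight(), 0, 0);
getEngine()->receiveFrame(frame);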
The getMaxHeight(), getMaxWidth(), and getNumberOfBands() functions in an adaptor typically perform the following processing:

Determine the format specified by the user when they created the video input object. The engine passes this information as an argument to your adaptor's createInstance() function.

Based on the format chosen, return the appropriate values of the height, width, or number of bands. Your adaptor can accomplish this in many ways. One way, illustrated by the demo adaptor, is to determine these values in your getAvailHW() function and store the information in application data in the IDeviceFormat object — see Defining Classes to Hold Device-Specific Information. Then, the getMaxHeight(), getMaxWidth(), and getNumberOfBands() functions can retrieve this application data and read these values.
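A minimal sketch of that application-data approach follows. It assumes the demo-adaptor pattern described in Defining Classes to Hold Device-Specific Information: a small helper class derived from imaqkit::IMAQInterface is stored on each IDeviceFormat object with setAdaptorData(). The class name MyDeviceFormatInfo and the _formatInfo member are hypothetical; your adaptor's names and caching strategy may differ.

#include "mwadaptorimaq.h"  // main adaptor kit header

// Hypothetical helper class that holds the dimensions for one format.
// Deriving from imaqkit::IMAQInterface lets it be stored as application
// data in an IDeviceFormat object.
class MyDeviceFormatInfo : public imaqkit::IMAQInterface {
public:
    MyDeviceFormatInfo(int width, int height, int bands)
        : _width(width), _height(height), _bands(bands) {}
    int getWidth() const  { return _width; }
    int getHeight() const { return _height; }
    int getBands() const  { return _bands; }
private:
    int _width;
    int _height;
    int _bands;
};

// In getAvailHW(), store the dimensions with each format you create:
//     deviceFormat->setAdaptorData(new MyDeviceFormatInfo(640, 480, 1));
//
// Then the dimension functions simply read the stored values back.
// _formatInfo is a hypothetical member that caches the application data
// for the format the user selected when the adaptor instance is created.
int MyDeviceAdaptor::getMaxHeight() const { return _formatInfo->getHeight(); }
int MyDeviceAdaptor::getMaxWidth() const { return _formatInfo->getWidth(); }
int MyDeviceAdaptor::getNumberOfBands() const { return _formatInfo->getBands(); }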
The following implementations of the getMaxHeight() and getMaxWidth() functions determine the value based on the format specified by the user. The number of bands depends on whether the format is color or monochrome. For color formats, such as RGB and YUV, the number of bands is always 3. For monochrome (black and white) formats, the number of bands is always 1. The Image Acquisition Toolbox™ software only supports image data with 1 or 3 bands.

Replace the stub implementations in the example adaptor C++ file, mydevice.cpp, created in Chapter 3, with the following code. The values are appropriate for the format names specified in the example in Specifying Device and Format Information.
// Requires <cstring> for strcmp.
int MyDeviceAdaptor::getMaxHeight() const {
    if (strcmp(_formatName, "RS170") == 0) {
        return 480;
    } else {
        return 576;
    }
}

int MyDeviceAdaptor::getMaxWidth() const {
    if (strcmp(_formatName, "RS170") == 0) {
        return 640;
    } else {
        return 768;
    }
}

int MyDeviceAdaptor::getNumberOfBands() const {
    return 1;
}
In addition to the image frame dimensions, you must provide the engine with information about the byte layout of the image data. Byte layout includes the number of bits used to represent pixel values, whether the data is signed or unsigned, the endianness of the data, and whether the device sends the bottom row first.
To specify this information, you must select one of the FRAMETYPE enumerations defined by the adaptor kit. The adaptor kit defines enumerations for many different frame types to represent the wide variety of formats supported by devices. For example, if your device is a monochrome (black and white) device that returns 8-bit data, you might choose the MONO8 frame type. If your device is a color device that returns 24-bit data, you might choose the RGB24 frame type. The following table summarizes the frame types that are available. To choose a specific format, view the list in the Image Acquisition Toolbox Adaptor Kit API Reference documentation or open the AdaptorFrameTypes.h file.
Format | Frame Types
---|---
Monochrome | 8-, 10-, 12-, and 16-bit formats; both little-endian and big-endian; in regular and flip formats. (In flip formats, the device delivers the bottom line first.)
 | Signed 16- and 32-bit formats; both little-endian and big-endian; in regular and flip formats.
 | Floating-point and double formats; both little-endian and big-endian; in regular and flip formats.
Color | 8-, 24-, 32-, and 48-bit RGB formats; both little-endian and big-endian; regular and flip; packed and planar (see Understanding Packed and Planar Formats).
 | Frame types that specify the order of the bytes of color data (RGB or GBR) and specify where the blank byte is located (XRGB or XGBR).
 | Formats that represent colors in 4 bits (4444), 5 bits (555), 5 or 6 bits (565), or 10 bits (101010).
 | Formats that use the YUV color space.
Your adaptor's getFrameType() function must return the appropriate frame type that describes the data returned by your device for the specified format.

If your device supports multiple color formats, you do not need to expose all the formats to toolbox users. You can simply provide one color format and handle the low-level details in your adaptor by choosing the appropriate FRAMETYPE.

The following example shows a skeletal implementation of the getFrameType() function. An actual implementation might select the frame type based on the format the user selected.
virtual imaqkit::frametypes::FRAMETYPE getFrameType() const {
    return imaqkit::frametypes::MONO8;
}
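As a sketch of the format-based selection mentioned above, the version below returns MONO8 for the RS170 monochrome format used earlier in this chapter and a packed 24-bit RGB frame type for a hypothetical color format named "RGB24". The color format name and the RGB24_PACKED enumerator are assumptions for illustration; confirm the exact enumeration names in AdaptorFrameTypes.h before using them.

// Requires <cstring> for strcmp.
virtual imaqkit::frametypes::FRAMETYPE getFrameType() const {
    // The RS170 example format delivers 8-bit monochrome data.
    if (strcmp(_formatName, "RS170") == 0) {
        return imaqkit::frametypes::MONO8;
    }
    // Hypothetical branch for a device that also offers a 24-bit color
    // format named "RGB24". Verify the exact enumerator name in
    // AdaptorFrameTypes.h.
    if (strcmp(_formatName, "RGB24") == 0) {
        return imaqkit::frametypes::RGB24_PACKED;
    }
    return imaqkit::frametypes::MONO8;
}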
The adaptor kit IAdaptorFrame class defines many FRAMETYPE enumerations that cover the many possible types of image data devices can return. For example, some devices can return color images in packed or nonpacked (planar) formats. These formats describe how the bytes of red, green, and blue data are arranged in memory. In packed formats, the red, green, and blue triplets are grouped together. In nonpacked formats, all the red data is stored together, followed by all the green data, followed by all the blue data. The following figure illustrates this distinction.
Packed and Planar Formats
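As a concrete illustration of the two layouts, the self-contained sketch below converts an 8-bit planar RGB buffer into packed order. It is not part of the adaptor kit; the function name and buffer layout are hypothetical and shown only to make the byte ordering explicit.

#include <cstddef>
#include <vector>

// Convert an 8-bit planar RGB buffer (RRR...GGG...BBB) into packed order
// (RGBRGBRGB...). Hypothetical helper, independent of the adaptor kit.
std::vector<unsigned char> planarToPackedRGB24(const unsigned char* planar,
                                               std::size_t width,
                                               std::size_t height) {
    const std::size_t numPixels = width * height;
    std::vector<unsigned char> packed(numPixels * 3);
    const unsigned char* red   = planar;                  // plane of red bytes
    const unsigned char* green = planar + numPixels;      // plane of green bytes
    const unsigned char* blue  = planar + 2 * numPixels;  // plane of blue bytes
    for (std::size_t i = 0; i < numPixels; ++i) {
        packed[3 * i]     = red[i];    // R
        packed[3 * i + 1] = green[i];  // G
        packed[3 * i + 2] = blue[i];   // B
    }
    return packed;
}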
To get more information about video formats, go to the fourcc.org Web site.