Hyperspectral imaging measures the spatial and spectral characteristics of an object by imaging it at many different wavelengths. The wavelength range extends beyond the visible spectrum, from the ultraviolet (UV) to the long-wave infrared (LWIR). The most commonly used ranges are the visible, near-infrared (NIR), and mid-infrared bands. A hyperspectral imaging sensor acquires many images at narrow, contiguous wavelengths within a specified spectral range, and each of these images captures subtle, detailed information about the scene.
Hyperspectral image processing involves representing, analyzing, and interpreting information contained in the hyperspectral images.
The values measured by a hyperspectral imaging sensor are stored in a binary data file by using the band sequential (BSQ), band-interleaved-by-pixel (BIP), or band-interleaved-by-line (BIL) encoding format. The data file is associated with a header file that contains ancillary information (metadata), such as sensor parameters, acquisition settings, spatial dimensions, spectral wavelengths, and the encoding format, which is required to correctly interpret the values in the data file.
For hyperspectral image processing, the values read from the data file are arranged into a three-dimensional (3-D) array of size M-by-N-by-C, where M and N are the spatial dimensions of the acquired data and C is the spectral dimension specifying the number of spectral wavelengths used during acquisition. Thus, you can consider the 3-D array as a set of two-dimensional (2-D) monochromatic images captured at different wavelengths. This set is known as the hyperspectral data cube, or simply the data cube.
The hypercube function constructs the data cube by reading the data file and the metadata in the associated header file. The hypercube function creates a hypercube object and stores the data cube, the spectral wavelengths, and the metadata in its properties. You can use the hypercube object as input to all other functions in the Image Processing Toolbox™ Hyperspectral Imaging Library.
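For example, this minimal sketch reads a data cube and inspects its properties. The file name indian_pines.dat is assumed here as a sample ENVI-format file; substitute your own data file and its header.

% Read the data file and its associated header into a hypercube object.
% Replace 'indian_pines.dat' with the name of your own ENVI-format file.
hcube = hypercube('indian_pines.dat');

% Inspect the data cube, the spectral wavelengths, and the metadata.
size(hcube.DataCube)
hcube.Wavelength
hcube.Metadata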
Color Representation of Data Cube: To visualize and understand the object being imaged, it is useful to represent the data cube as a 2-D image by using color schemes. The color representation of the data cube allows you to visually inspect the data and supports decision making. You can use the colorize function to compute the red-green-blue (RGB), false-color, and color-infrared (CIR) representations of the data cube.
The RGB color scheme uses the red, green, and blue spectral band responses to generate a 2-D image of the hyperspectral data cube. The RGB color scheme gives the data a natural appearance, but it results in a significant loss of subtle information.
The false-color scheme uses a combination of any bands other than the visible red, green, and blue spectral bands. Use the false-color representation to visualize the spectral responses of bands outside the visible spectrum. The false-color scheme can capture distinct information from across all spectral bands of the hyperspectral data.
The CIR color scheme uses spectral bands in the NIR range. The CIR representation of a hyperspectral data cube is particularly useful for displaying and analyzing the vegetation areas of the data cube.
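As a minimal sketch, assuming hcube is the hypercube object created earlier, you can compute these color representations with the colorize function. The 'Method' values shown ('rgb', 'falsecolored', and 'cir') follow common usage of colorize but should be verified against the reference page.

% Compute 2-D color representations of the data cube.
rgbImg = colorize(hcube,'Method','rgb');            % natural-color composite
falseImg = colorize(hcube,'Method','falsecolored'); % uses bands outside the visible range
cirImg = colorize(hcube,'Method','cir');            % NIR-based composite, highlights vegetation

% Display the three representations side by side.
montage({rgbImg,falseImg,cirImg})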
The spatial and spectral characteristics of the hyperspectral data are characterized by its pixels. Each pixel is a vector of values that specifies the intensities at a spatial location (x,y) across the C spectral bands. This vector is known as the pixel spectrum, and it defines the spectral signature of the pixel located at (x,y). The pixel spectra are important features in hyperspectral data analysis.
The pixel values can be uncalibrated digital numbers (DNs) or calibrated radiance or reflectance values. In remote sensing applications, an important preprocessing step is to calibrate the DNs by using radiometric and atmospheric correction methods. This process improves the interpretation of the pixel spectra and provides better results when you analyze multiple data sets, as in a classification problem.
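As a hedged sketch of this calibration step, the library provides conversion functions such as dn2radiance and radiance2Reflectance; the calls below assume that the header file supplies the gain, offset, and solar metadata these conversions require, so treat them as illustrative rather than a complete correction workflow.

% Convert uncalibrated digital numbers (DNs) to radiance values,
% assuming the required gain and offset metadata are available in the header.
radianceCube = dn2radiance(hcube);

% Convert radiance values to reflectance values,
% assuming the required solar metadata are available.
reflectanceCube = radiance2Reflectance(radianceCube);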
Another preprocessing step that is important in all hyperspectral imaging applications is dimensionality reduction. The large number of bands in the hyperspectral data increases the computational complexity of processing the data cube. Because the bands are contiguous, neighboring band images are highly correlated, and the data exhibits redundant information across bands. You can remove the redundant bands by decorrelating the band images. Popular approaches for reducing the spectral dimensionality of a data cube include band selection and orthogonal transforms.
The band selection approach uses orthogonal space projections to find the spectrally distinct and most informative bands in the data cube. Use the selectBands and removeBands functions to find the most informative bands and to remove specified bands, respectively.
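A minimal sketch of the band selection approach is shown below. The selectBands call assumes a set of endmember signatures as input (here computed with the fippi function described later in this section), and the 'BandNumber' name-value argument to removeBands and the band numbers themselves are illustrative assumptions.

% Estimate the number of endmembers and extract their signatures
% (the fippi function is described later in this section).
numEndmembers = countEndmembersHFC(hcube);
endmembers = fippi(hcube,numEndmembers);

% Select the most informative bands based on the endmember signatures.
newhcube = selectBands(hcube,endmembers);

% Alternatively, remove specific bands, for example noisy water-absorption bands.
cleanedhcube = removeBands(hcube,'BandNumber',[104:108 150:163 220]);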
Orthogonal transforms such as principal component analysis (PCA) and the maximum noise fraction (MNF) transform decorrelate the band information and find the principal component bands. PCA transforms the data to a lower-dimensional space and finds principal components whose directions lie along the maximum variances of the input bands. The principal components are ordered in decreasing order of the amount of total variance explained. The MNF transform, on the other hand, computes principal components that maximize the signal-to-noise ratio rather than the variance, and it is particularly efficient at deriving principal components from noisy band images. The principal component bands are spectrally distinct bands with low interband correlation.
The hyperpca and hypermnf functions reduce the spectral dimensionality of the data cube by using the PCA and MNF transforms, respectively. The pixel spectra derived from the reduced data cube are then used for hyperspectral data analysis.
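For example, this hedged sketch reduces the data cube to 10 spectral bands; the choice of 10 components is arbitrary and should be tuned to how much variance (or signal) you need to retain.

% Reduce the spectral dimension of the data cube to 10 bands using PCA.
pcaCube = hyperpca(hcube,10);

% Alternatively, apply the MNF transform, which handles noisy bands well.
mnfCube = hypermnf(hcube,10);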
In a hyperspectral image, the intensity values recorded at each pixel specify the spectral characteristics of the region that the pixel belongs to. The region can be a homogeneous surface or a heterogeneous surface. The pixels that belong to a homogeneous surface are known as pure pixels. These pure pixels constitute the endmembers of the hyperspectral data.
Heterogeneous surfaces are a combination of two or more distinct homogeneous surfaces. The pixels belonging to heterogeneous surfaces are known as mixed pixels. The spectral signature of a mixed pixel is a combination of two or more endmember signatures. This spatial heterogeneity is mainly due to the low spatial resolution of hyperspectral sensors.
Spectral unmixing is the process of decomposing the spectral signature of each mixed pixel into its constituent endmembers. The spectral unmixing process involves two steps:
Endmember extraction: The spectra of the endmembers are prominent features that can be used for efficient spectral unmixing, segmentation, and classification of hyperspectral images. Convex geometry based approaches, such as the pixel purity index (PPI), fast iterative pixel purity index (FIPPI), and N-FINDR methods, are efficient approaches for endmember extraction; a combined usage sketch of the corresponding functions follows their descriptions below.
Use the ppi function to estimate the endmembers by using the PPI approach. The ppi method projects the pixel spectra onto an orthogonal space and identifies the extrema pixels in the projected space as endmembers. This is a non-iterative approach, and the results depend on the random unit vectors used for the orthogonal projection. For better results, the method requires a large number of random unit vectors for the projection and is therefore computationally expensive.
Use the fippi function to estimate the endmembers by using the FIPPI approach. The fippi method is an iterative approach that uses an automatic target generation process to estimate the initial set of unit vectors for the orthogonal projection. The algorithm converges faster than PPI and identifies unique endmembers.
Use the nfindr function to estimate the endmembers by using the N-FINDR method. N-FINDR is an iterative approach that constructs a simplex from the pixel spectra. The method assumes that the volume of the simplex formed by the endmembers is larger than the volume defined by any other combination of pixels. Hence, the set of pixel signatures that maximizes the simplex volume is chosen as the endmembers.
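The following hedged sketch applies all three extraction functions to the same data cube; using countEndmembersHFC to estimate the number of endmembers is one common choice, not a requirement.

% Estimate the number of endmembers present in the data cube.
numEndmembers = countEndmembersHFC(hcube);

% Extract endmember signatures with each of the three approaches.
endmembersPPI = ppi(hcube,numEndmembers);
endmembersFIPPI = fippi(hcube,numEndmembers);
endmembers = nfindr(hcube,numEndmembers);

% Each output is a matrix of endmember signatures, one column per endmember.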
Abundance map estimation: Given the endmember signatures, it is useful to estimate the fractional amount of each endmember present in each pixel. An abundance map is generated for each endmember, and it represents the distribution of that endmember's spectrum across the image. You can label each pixel as belonging to an endmember spectrum by comparing all of the abundance map values obtained for that pixel.
Use the estimateAbundanceLS function to estimate the abundance maps for each endmember spectrum.
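A hedged sketch of this step, reusing the hcube object and the endmembers matrix from the previous sketches:

% Estimate per-pixel abundance maps by least-squares unmixing.
abundanceMaps = estimateAbundanceLS(hcube,endmembers);

% Display the abundance map of the first endmember.
imagesc(abundanceMaps(:,:,1))
colorbar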
Spectral matching is an important step in interpreting the pixel spectra. Spectral matching identifies the class of an endmember material by comparing its spectrum with one or more reference spectra. The reference data consist of pure spectral signatures of materials, which are available as spectral libraries.
Use the readEcostressSig function to read reference spectra files from the ECOSTRESS spectral library. Then, you can compute the similarity between the ECOSTRESS library spectra and an endmember spectrum by using the spectralMatch function. You can also use the sam and sid functions to match two spectral signatures of the same length.
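The sketch below matches one extracted endmember spectrum against library signatures. The ECOSTRESS file name is a placeholder, and the spectralMatch argument order shown (library data, spectrum, wavelengths) is an assumption to verify against the reference page.

% Read one or more reference signatures from downloaded ECOSTRESS files.
% The file name below is a placeholder; point it at a real ECOSTRESS file.
lib = readEcostressSig('myEcostressSignature.spectrum.txt');

% Score the similarity between the library spectra and one endmember spectrum.
scores = spectralMatch(lib,endmembers(:,1),hcube.Wavelength);

% Or compare two spectral signatures of the same length directly.
angleScore = sam(endmembers(:,1),endmembers(:,2));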
Classification and target detection are among the important applications of hyperspectral image processing. You can segment and classify each pixel in a hyperspectral image through unmixing and spectral matching. For an example, see Hyperspectral Image Analysis Using Maximum Abundance Classification and Classify Hyperspectral Image Using Library Signatures and SAM.
Similarly, you can perform target detection by matching the spectral signatures. For an example, see Target Detection Using Spectral Signature Matching.
In addition, hyperspectral image processing is widely used for anomaly detection and vegetation analysis.
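As a brief hedged sketch of these two tasks, assuming hcube holds calibrated reflectance data with wavelength metadata:

% Compute the Reed-Xiaoli (RX) anomaly score for each pixel.
rxScore = anomalyRX(hcube);

% Compute the normalized difference vegetation index (NDVI) map,
% which uses the NIR and red bands identified from the wavelength metadata.
ndviMap = ndvi(hcube);

% Visualize both results.
figure, imagesc(rxScore), colorbar, title('RX anomaly score')
figure, imagesc(ndviMap), colorbar, title('NDVI')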
anomalyRX | estimateAbundanceLS | hypercube | ndvi | ppi | spectralMatch