imageLIME

Explain image classification result using LIME

    Description


    scoreMap = imageLIME(net,X,label) uses the locally interpretable model-agnostic explanations (LIME) technique to compute a map of the importance of the features in the input image X when the network net evaluates the class score for the class given by label. Use this function to explain classification decisions and check that your network is focusing on the appropriate features of the image.

    The LIME technique approximates the classification behavior of the network net using a simpler, more interpretable model. By generating synthetic data from input X, classifying the synthetic data using net, and then using the results to fit a simple regression model, the imageLIME function determines the importance of each feature of X to the network's classification score for the class given by label.

    This function requires Statistics and Machine Learning Toolbox™.


    [scoreMap,featureMap,featureImportance] = imageLIME(net,X,label) also returns a map of the features used to compute the LIME results and the calculated importance of each feature.


    ___ = imageLIME(___,Name,Value) specifies options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, 'NumFeatures',100 sets the target number of features to 100.

    Examples


    Use imageLIME to visualize which parts of an image are important to a network for a classification decision.

    Import the pretrained network SqueezeNet.

    net = squeezenet;

    Import the image and resize it to match the input size for the network.

    X = imread("laika_grass.jpg");
    inputSize = net.Layers(1).InputSize(1:2);
    X = imresize(X,inputSize);

    Display the image. The image is of a dog named Laika.

    imshow(X)

    Classify the image to get the class label.

    label = classify(net,X)
    label = categorical
         toy poodle

    Use imageLIME to determine which parts of the image are important to the classification result.

    scoreMap = imageLIME(net,X,label);

    Plot the result over the original image with transparency to see which areas of the image affect the classification score.

    figure
    imshow(X)
    hold on
    imagesc(scoreMap,'AlphaData',0.5)
    colormap jet

    The network focuses predominantly on Laika's head and back to make the classification decision. Laika's eye and ear are also important to the classification result.

    Use imageLIME to determine the most important features in an image and isolate them from the unimportant features.

    Import the pretrained network SqueezeNet.

    net = squeezenet;

    Import the image and resize it to match the input size for the network.

    X = imread("sherlock.jpg");
    inputSize = net.Layers(1).InputSize(1:2);
    X = imresize(X,inputSize);

    Classify the image to get the class label.

    label = classify(net,X)
    label = categorical
         golden retriever

    Compute the map of feature importance, and also obtain the map of the features and the importance of each feature. Set the image segmentation method to 'grid', the number of features to 64, and the number of synthetic images to 3072.

    [scoreMap,featureMap,featureImportance] = imageLIME(net,X,label,'Segmentation','grid','NumFeatures',64,'NumSamples',3072);

    Plot the result over the original image with transparency to see which areas of the image affect the classification score.

    figure
    imshow(X)
    hold on
    imagesc(scoreMap,'AlphaData',0.5)
    colormap jet
    colorbar

    Use the feature importance to find the indices of the five most important features.

    numTopFeatures = 5;
    [~,idx] = maxk(featureImportance,numTopFeatures);

    Use the map of the features to mask out the image so that only the five most important features are visible. Display the masked image.

    mask = ismember(featureMap,idx);
    maskedImg = uint8(mask).*X;
    figure
    imshow(maskedImg);

    Input Arguments


    net — Image classification network

    Image classification network, specified as a SeriesNetwork object or a DAGNetwork object. You can get a trained network by importing a pretrained network or by training your own network using the trainNetwork function. For more information about pretrained networks, see Pretrained Deep Neural Networks.

    net must contain a single input layer and a single output layer. The input layer must be an imageInputLayer. The output layer must be a classificationLayer.

    X — Input image

    Input image, specified as a numeric array.

    The image must be the same size as the image input size of the network net. The input size is specified by the InputSize property of the network's imageInputLayer.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    label — Class label

    Class label used to calculate the feature importance map, specified as a categorical array, a character vector, a string scalar, or a vector of these values.

    If you specify label as a vector, the software calculates the feature importance for each class label independently. In that case, scoreMap(:,:,k) and featureImportance(idx,k) correspond to the map of feature importance and the importance of feature idx for the kth element in label, respectively.

    Example: ["cat" "dog"]

    Data Types: char | string | categorical
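    For example, this sketch (assuming the net and X variables from the earlier SqueezeNet examples) computes maps for two classes in one call and indexes the per-label results.

    labels = ["golden retriever" "toy poodle"];
    [scoreMap,~,featureImportance] = imageLIME(net,X,labels);
    size(scoreMap)            % H-by-W-by-2, one map per element of labels
    featureImportance(:,2)    % importance of each feature for labels(2)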

    Name-Value Pair Arguments

    Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

    Example: 'NumFeatures',100,'Segmentation','grid','OutputUpsampling','bicubic','ExecutionEnvironment','gpu' segments the input image into a grid of approximately 100 features, executes the calculation on the GPU, and upsamples the resulting map to the same size as the input image using bicubic interpolation.

    NumFeatures — Target number of features

    Target number of features to divide the input image into, specified as the comma-separated pair consisting of 'NumFeatures' and a positive integer.

    A larger value of 'NumFeatures' divides the input image into more, smaller features. To get the best results when using a larger number of features, also increase the number of synthetic images using the 'NumSamples' name-value pair.

    The exact number of features depends on the input image and the segmentation method specified using the 'Segmentation' name-value pair, and can differ from the target number of features.

    When you specify 'Segmentation','superpixels', the actual number of features can be greater or less than the number specified using 'NumFeatures'.

    When you specify 'Segmentation','grid', the actual number of features can be less than the number specified using 'NumFeatures'. If your input image is square, specify 'NumFeatures' as a square number.

    Example: 'NumFeatures',100

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
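    For example, this sketch (assuming net, X, and label from the earlier examples) compares a coarse and a fine segmentation. Following the advice above, the larger 'NumFeatures' value is paired with a larger 'NumSamples' value.

    scoreMapCoarse = imageLIME(net,X,label,'NumFeatures',16);                   % few, large features
    scoreMapFine = imageLIME(net,X,label,'NumFeatures',256,'NumSamples',6144);  % many small features, more samples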

    NumSamples — Number of synthetic images

    Number of synthetic images to generate, specified as the comma-separated pair consisting of 'NumSamples' and a positive integer.

    A larger number of synthetic images gives better results but takes more time to compute.

    Example: 'NumSamples',1024

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Segmentation — Segmentation method

    Segmentation method to use to divide the input image into features, specified as the comma-separated pair consisting of 'Segmentation' and 'superpixels' or 'grid'.

    The imageLIME function segments the input image into features in the following ways depending on the segmentation method.

    • 'superpixels' — Input image is divided into superpixel features, using the superpixels (Image Processing Toolbox) function. Features are irregularly shaped, based on the value of the pixels. This option requires Image Processing Toolbox™.

    • 'grid' — Input image is divided into a regular grid of features. Features are approximately square, based on the aspect ratio of the input image and the specified value of 'NumFeatures'. The number of grid cells can be smaller than the specified value of 'NumFeatures'. If the input image is square, specify 'NumFeatures' as a square number.

    For photographic image data, the 'superpixels' option usually gives better results. In this case, features are based on the contents of the image, by segmenting the image into regions of similar pixel value. For other types of images, such as spectrograms, the more regular 'grid' option can provide more useful results.

    Example: 'Segmentation','grid'

    Data Types: char | string
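    For example, this sketch (assuming net, X, and label from the earlier examples) computes a map with each segmentation method so you can compare them side by side. The 'superpixels' call requires Image Processing Toolbox.

    mapSuperpixels = imageLIME(net,X,label,'Segmentation','superpixels');
    mapGrid = imageLIME(net,X,label,'Segmentation','grid','NumFeatures',64);
    figure
    subplot(1,2,1)
    imshow(X), hold on
    imagesc(mapSuperpixels,'AlphaData',0.5), title('superpixels')
    subplot(1,2,2)
    imshow(X), hold on
    imagesc(mapGrid,'AlphaData',0.5), title('grid')
    colormap jet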

    Model — Type of simple model

    Type of simple model to fit, specified as the comma-separated pair consisting of 'Model' and 'tree' or 'linear'.

    The imageLIME function classifies the synthetic images using the network net and then uses the results to fit a simple, interpretable model. The methods used to fit the results and determine the importance of each feature depend on the type of simple model used.

    • 'tree' — Fit a regression tree using fitrtree (Statistics and Machine Learning Toolbox), then compute the importance of each feature using predictorImportance (Statistics and Machine Learning Toolbox).

    • 'linear' — Fit a linear model with lasso regression using fitrlinear (Statistics and Machine Learning Toolbox) then compute the importance of each feature using the weights of the linear model.

    Example: 'Model','linear'

    Data Types: char | string
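    For example, this sketch (assuming net, X, and label from the earlier examples) computes the feature importance with each model type so you can compare the results.

    [~,~,impTree] = imageLIME(net,X,label,'Model','tree');      % regression tree importance
    [~,~,impLinear] = imageLIME(net,X,label,'Model','linear');  % importance from lasso weights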

    OutputUpsampling — Output upsampling method

    Output upsampling method to use when the segmentation method is 'grid', specified as the comma-separated pair consisting of 'OutputUpsampling' and one of the following.

    • 'nearest' — Use nearest-neighbor interpolation to expand the map to the same size as the input data. The map indicates the size of each feature with respect to the size of the input data.

    • 'bicubic' — Use bicubic interpolation to produce a smooth map the same size as the input data.

    • 'none' — Use no upsampling. The map can be smaller than the input data.

    If 'OutputUpsampling' is 'nearest' or 'bicubic', the computed map is upsampled to the size of the input data using the imresize function.

    Example: 'OutputUpsampling','bicubic'
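    For example, with 'Segmentation','grid' and 'OutputUpsampling','none', the returned map has roughly one element per grid feature. This sketch (assuming net, X, label, and inputSize from the earlier examples) shows the size difference and a manual upsample using imresize, which requires Image Processing Toolbox.

    mapSmall = imageLIME(net,X,label,'Segmentation','grid','NumFeatures',64,'OutputUpsampling','none');
    size(mapSmall)                                     % smaller than the input image
    mapFull = imresize(mapSmall,inputSize,'bicubic');  % upsample to the input image size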

    MiniBatchSize — Size of mini-batch

    Size of the mini-batch to use to compute the map of feature importance, specified as the comma-separated pair consisting of 'MiniBatchSize' and a positive integer.

    A mini-batch is a subset of the set of synthetic images. The mini-batch size specifies the number of synthetic images that are passed to the network at once. Larger mini-batch sizes lead to faster computation, at the cost of more memory.

    Example: 'MiniBatchSize',256
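    For example, if a large number of synthetic images exhausts memory, reduce the mini-batch size (sketch assuming net, X, and label from the earlier examples).

    scoreMap = imageLIME(net,X,label,'NumSamples',8192,'MiniBatchSize',64);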

    ExecutionEnvironment — Hardware resource

    Hardware resource for computing the map, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and one of the following.

    • 'auto' — Use a GPU if one is available. Otherwise, use the CPU.

    • 'cpu' — Use the CPU.

    • 'gpu' — Use the GPU.

    The GPU option requires Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a CUDA®-enabled NVIDIA® GPU with compute capability 3.0 or higher. If you choose the 'ExecutionEnvironment','gpu' option and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

    Example: 'ExecutionEnvironment','gpu'

    Output Arguments


    scoreMap — Map of feature importance

    Map of feature importance, returned as a numeric matrix or a numeric array. Areas in the map with higher positive values correspond to regions of input data that contribute positively to the specified classification label.

    The value of scoreMap(i,j) denotes the importance of the image pixel (i,j) to the simple model, except when you use the options 'Segmentation','grid' and 'OutputUpsampling','none'. In that case, the scoreMap is smaller than the input image, and the value of scoreMap(i,j) denotes the importance of the feature at position (i,j) in the grid of features.

    If label is specified as a vector, the feature importance for each class label is calculated independently. In that case, scoreMap(:,:,k) corresponds to the map of feature importance for the kth element in label.

    featureMap — Map of features

    Map of features, returned as a numeric matrix.

    For each pixel (i,j) in the input image, idx = featureMap(i,j) is an integer corresponding to the index of the feature containing that pixel.

    featureImportance — Feature importance

    Feature importance, returned as a numeric vector or a numeric matrix.

    The value of featureImportance(idx) is the calculated importance of the feature specified by idx. If you specify label as a vector of categorical values, char vectors, or string scalars, then featureImportance(idx,k) corresponds to the importance of feature idx for label(k).

    More About


    LIME

    The locally interpretable model-agnostic explanations (LIME) technique is an explainability technique for interpreting the classification decisions made by a deep neural network.

    Given the classification decision of a deep network for a piece of input data, the LIME technique calculates the importance of each feature of the input data to the classification result.

    The LIME technique approximates the behavior of a deep neural network using a simpler, more interpretable model, such as a regression tree. To map the importance of different parts of the input image, the imageLIME function performs the following steps, sketched in code after this list.

    • Segment the image into features.

    • Generate synthetic image data by randomly including or excluding features. Each pixel in an excluded feature is replaced with the value of the average image pixel.

    • Classify the synthetic images using the deep network.

    • Fit a regression model using the presence or absence of image features for each synthetic image as binary regression predictors for the scores of the target class.

    • Compute the importance of each feature using the regression model.

    The resulting map can be used to determine which features were most important to a particular classification decision. This can be especially useful for making sure your network is focusing on the appropriate features when classifying.
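    The following sketch illustrates these steps using grid-style segmentation and a regression tree. It is a simplified, illustrative implementation, not the exact computation that imageLIME performs; it assumes the net, X, and label variables from the earlier examples and uses the imresize function from Image Processing Toolbox.

    gridSize = [8 8];                                  % 64 grid features
    numSamples = 1024;
    sz = size(X);
    featureMap = imresize(reshape(1:prod(gridSize),gridSize), ...
        sz(1:2),'nearest');                            % 1. Segment image into features

    avgPixel = uint8(mean(X,[1 2]));                   % average image pixel
    Z = rand(numSamples,prod(gridSize)) > 0.5;         % 2. Random on/off feature masks
    scores = zeros(numSamples,1);
    for i = 1:numSamples
        mask = uint8(ismember(featureMap,find(Z(i,:))));
        synth = X.*mask + avgPixel.*(1-mask);          % replace excluded features
        pred = predict(net,synth);                     % 3. Classify the synthetic image
        scores(i) = pred(net.Layers(end).Classes == label);
    end

    mdl = fitrtree(double(Z),scores);                  % 4. Fit simple regression model
    imp = predictorImportance(mdl);                    % 5. Importance of each feature
    scoreMapCoarse = reshape(imp,gridSize);            % coarse map of feature importance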

    Introduced in R2020b