predict

Predict labels using naive Bayes classification model

Description

label = predict(Mdl,X) returns a vector of predicted class labels for the predictor data in the table or matrix X, based on the trained, full or compact naive Bayes classifier Mdl.

[label,Posterior,Cost] = predict(Mdl,X) also returns:

  • A matrix of posterior probabilities (Posterior) indicating the likelihood that a label comes from a particular class.

  • A matrix of expected misclassification costs (Cost). For each observation in X, the predicted class label is the class with the minimum expected classification cost among all classes.

Input Arguments

Mdl — Naive Bayes classifier, specified as a ClassificationNaiveBayes model or CompactClassificationNaiveBayes model returned by fitcnb or compact, respectively.

X — Predictor data to be classified, specified as a numeric matrix or table.

Each row of X corresponds to one observation, and each column corresponds to one variable.

  • For a numeric matrix:

    • The variables making up the columns of X must have the same order as the predictor variables that trained Mdl.

    • If you trained Mdl using a table (for example, Tbl), then X can be a numeric matrix if Tbl contains all numeric predictor variables. To treat numeric predictors in Tbl as categorical during training, identify categorical predictors using the CategoricalPredictors name-value pair argument of fitcnb. If Tbl contains heterogeneous predictor variables (for example, numeric and categorical data types) and X is a numeric matrix, then predict throws an error.

  • For a table:

    • predict does not support multi-column variables and cell arrays other than cell arrays of character vectors.

    • If you trained Mdl using a table (for example, Tbl), then all predictor variables in X must have the same variable names and data types as those that trained Mdl (stored in Mdl.PredictorNames). However, the column order of X does not need to correspond to the column order of Tbl. Tbl and X can contain additional variables (response variables, observation weights, etc.), but predict ignores them.

    • If you trained Mdl using a numeric matrix, then the predictor names in Mdl.PredictorNames and corresponding predictor variable names in X must be the same. To specify predictor names during training, see the PredictorNames name-value pair argument of fitcnb. All predictor variables in X must be numeric vectors. X can contain additional variables (response variables, observation weights, etc.), but predict ignores them.

Data Types: table | double | single
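
For example, here is a minimal sketch of these rules (the table and variable names SL, SW, PL, and PW are hypothetical; the fisheriris data set ships with Statistics and Machine Learning Toolbox). Because every predictor in the table is numeric, predict accepts either a table with matching predictor names or a numeric matrix whose columns follow the training order.

load fisheriris
Tbl = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});
Tbl.Species = species;

MdlTbl = fitcnb(Tbl,'Species');        % Train on an all-numeric table

% X can be a table with the same predictor names (extra variables,
% such as Species here, are ignored) ...
labelT = predict(MdlTbl,Tbl(1:5,:));
% ... or a numeric matrix in the training column order.
labelM = predict(MdlTbl,meas(1:5,:));
isequal(labelT,labelM)                 % true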

Notes:

  • If Mdl.DistributionNames is 'mn', then the software returns NaNs corresponding to rows of X containing at least one NaN.

  • If Mdl.DistributionNames is not 'mn', then the software ignores NaN values when estimating misclassification costs and posterior probabilities. Specifically, the software computes the conditional density of the predictors given the class by leaving out the factors corresponding to missing predictor values.

  • For predictor distribution specified as 'mvmn', if X contains levels that are not represented in the training data (i.e., not in Mdl.CategoricalLevels for that predictor), then the conditional density of the predictors given the class is 0. For those observations, the software returns the corresponding value of Posterior as a NaN. The software determines the class label for such observations using the class prior probability, stored in Mdl.Prior.
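
As a minimal sketch of the second note, train a classifier with the default normal distributions and predict an observation containing a NaN; predict drops the factor for the missing predictor instead of returning NaN (again using the fisheriris data set).

load fisheriris
Mdl = fitcnb(meas,species);   % DistributionNames defaults to 'normal'

xNew = meas(1,:);
xNew(3) = NaN;                % One missing predictor value

% Because the distribution is not 'mn', predict ignores the NaN factor
% when computing the conditional densities, and still returns a label
% and posterior probabilities.
[label,Posterior] = predict(Mdl,xNew)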

Output Arguments

label — Predicted class labels, returned as a categorical vector, character array, logical or numeric vector, or cell array of character vectors.

label:

  • Is the same data type as the observed class labels (Mdl.Y) that trained Mdl

  • Has length equal to the number of rows of X

  • Is the class yielding the lowest expected misclassification cost (Cost)

Posterior — Class posterior probabilities, returned as a numeric matrix. Posterior has rows equal to the number of rows of X and columns equal to the number of distinct classes in the training data (size(Mdl.ClassNames,1)).

Posterior(j,k) is the predicted posterior probability of class k (i.e., the class Mdl.ClassNames(k)) given the observation in row j of X.

Data Types: double

Cost — Expected misclassification costs, returned as a numeric matrix. Cost has rows equal to the number of rows of X and columns equal to the number of distinct classes in the training data (size(Mdl.ClassNames,1)).

Cost(j,k) is the expected cost of classifying the observation in row j of X into class k (i.e., the class Mdl.ClassNames(k)).
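
As a sketch of how the outputs relate (assuming the default zero-one misclassification cost matrix stored in Mdl.Cost), the expected cost weights the rows of the cost matrix by the posterior probabilities, and label is the class whose column of Cost is smallest:

load fisheriris
Mdl = fitcnb(meas,species);
[label,Posterior,Cost] = predict(Mdl,meas(1:5,:));

% Expected cost of assigning class k: posterior-weighted cost
ExpectedCost = Posterior*Mdl.Cost;
max(abs(Cost(:) - ExpectedCost(:)))   % effectively zero

% label minimizes the expected cost in each row
[~,k] = min(Cost,[],2);
isequal(label,Mdl.ClassNames(k))      % true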

Examples

Label Test Sample Observations of a Naive Bayes Classifier

Load Fisher's iris data set.

load fisheriris
X = meas;    % Predictors
Y = species; % Response
rng(1);

Train a naive Bayes classifier and specify to hold out 30% of the data for a test sample. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.

CVMdl = fitcnb(X,Y,'Holdout',0.30,...
    'ClassNames',{'setosa','versicolor','virginica'});
CMdl = CVMdl.Trained{1};          % Extract trained, compact classifier
testIdx = test(CVMdl.Partition); % Extract the test indices
XTest = X(testIdx,:);
YTest = Y(testIdx);

CVMdl is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationNaiveBayes classifier that the software trained using the training set.

Label the test sample observations. Display the results for a random set of 10 observations in the test sample.

idx = randsample(sum(testIdx),10);
label = predict(CMdl,XTest);
table(YTest(idx),label(idx),'VariableNames',...
    {'TrueLabel','PredictedLabel'})
ans=10×2 table
      TrueLabel       PredictedLabel
    ______________    ______________

    {'setosa'    }    {'setosa'    }
    {'versicolor'}    {'versicolor'}
    {'setosa'    }    {'setosa'    }
    {'virginica' }    {'virginica' }
    {'versicolor'}    {'versicolor'}
    {'setosa'    }    {'setosa'    }
    {'virginica' }    {'virginica' }
    {'virginica' }    {'virginica' }
    {'setosa'    }    {'setosa'    }
    {'setosa'    }    {'setosa'    }

Estimate Posterior Probabilities and Misclassification Costs

A goal of classification is to estimate posterior probabilities of new observations using a trained algorithm. Many applications train algorithms on large data sets, which can use resources that are better spent elsewhere. This example shows how to efficiently estimate posterior probabilities of new observations using a naive Bayes classifier.

Load Fisher's iris data set.

load fisheriris
X = meas;    % Predictors
Y = species; % Response
rng(1);

Partition the data set into two sets: one is the training set, and the other is new unobserved data. Reserve 10 observations for the new data set.

n = size(X,1);
newInds = randsample(n,10);
inds = ~ismember(1:n,newInds);
XNew = X(newInds,:);
YNew = Y(newInds);

Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label. Conserve memory by reducing the size of the trained naive Bayes classifier.

Mdl = fitcnb(X(inds,:),Y(inds),...
    'ClassNames',{'setosa','versicolor','virginica'});
CMdl = compact(Mdl);
whos('Mdl','CMdl')
  Name      Size            Bytes  Class                                                        Attributes

  CMdl      1x1              5238  classreg.learning.classif.CompactClassificationNaiveBayes              
  Mdl       1x1             12539  ClassificationNaiveBayes                                               

The CompactClassificationNaiveBayes classifier (CMdl) uses less space than the ClassificationNaiveBayes classifier (Mdl) because the latter stores the data.

Predict the labels, posterior probabilities, and expected class misclassification costs. Since true labels are available, compare them with the predicted labels.

CMdl.ClassNames
ans = 3x1 cell
    {'setosa'    }
    {'versicolor'}
    {'virginica' }

[labels,PostProbs,MisClassCost] = predict(CMdl,XNew);
table(YNew,labels,PostProbs,'VariableNames',...
    {'TrueLabels','PredictedLabels',...
    'PosteriorProbabilities'})
ans=10×3 table
      TrueLabels      PredictedLabels             PosteriorProbabilities          
    ______________    _______________    _________________________________________

    {'setosa'    }    {'setosa'    }               1     4.1259e-16     1.1846e-23
    {'versicolor'}    {'versicolor'}      1.0373e-60        0.99999     5.8053e-06
    {'virginica' }    {'virginica' }     4.8708e-211     0.00085645        0.99914
    {'setosa'    }    {'setosa'    }               1     1.4053e-19     2.2672e-26
    {'versicolor'}    {'versicolor'}      2.9308e-75        0.99987     0.00012869
    {'setosa'    }    {'setosa'    }               1      2.629e-18     4.4297e-25
    {'versicolor'}    {'versicolor'}      1.4238e-67        0.99999      9.733e-06
    {'versicolor'}    {'versicolor'}     2.0667e-110        0.94237       0.057625
    {'setosa'    }    {'setosa'    }               1     4.3779e-19     3.5139e-26
    {'setosa'    }    {'setosa'    }               1     1.1792e-17     2.2912e-24

MisClassCost
MisClassCost = 10×3

    0.0000    1.0000    1.0000
    1.0000    0.0000    1.0000
    1.0000    0.9991    0.0009
    0.0000    1.0000    1.0000
    1.0000    0.0001    0.9999
    0.0000    1.0000    1.0000
    1.0000    0.0000    1.0000
    1.0000    0.0576    0.9424
    0.0000    1.0000    1.0000
    0.0000    1.0000    1.0000

PostProbs and MisClassCost are 10-by-3 numeric matrices, where each row corresponds to a new observation and each column corresponds to a class. The order of the columns corresponds to the order of CMdl.ClassNames.

Plot Posterior Probability Regions for a Naive Bayes Classifier

Load Fisher's iris data set. Train the classifier using the petal lengths and widths.

load fisheriris
X = meas(:,3:4);
Y = species;

Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.

Mdl = fitcnb(X,Y,...
    'ClassNames',{'setosa','versicolor','virginica'});

Mdl is a ClassificationNaiveBayes model. You can access its properties using dot notation.
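
For instance, inspect the fitted distribution information (these are documented properties of ClassificationNaiveBayes models):

Mdl.DistributionNames            % Distribution used for each predictor
Mdl.DistributionParameters{1,1}  % [mean; std] of predictor 1 for class 1
Mdl.Prior                        % Class prior probabilities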

Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid.

xMax = max(X);
xMin = min(X);
h = 0.01;
[x1Grid,x2Grid] = meshgrid(xMin(1):h:xMax(1),xMin(2):h:xMax(2));

[~,PosteriorRegion] = predict(Mdl,[x1Grid(:),x2Grid(:)]);

Plot the posterior probability regions and the training data.

figure;
% Plot posterior regions 
scatter(x1Grid(:),x2Grid(:),1,PosteriorRegion);
% Adjust color bar options
h = colorbar;
h.Ticks = [0 0.5 1];
h.TickLabels = {'setosa','versicolor','virginica'};
h.YLabel.String = 'Posterior';
h.YLabel.Position = [-0.5 0.5 0];
% Adjust color map options
d = 1e-2;
cmap = zeros(201,3);
cmap(1:101,1) = 1:-d:0;
cmap(1:201,2) = [0:d:1 1-d:-d:0];
cmap(101:201,3) = 0:d:1;
colormap(cmap);
% Plot data
hold on
gh = gscatter(X(:,1),X(:,2),Y,'k','dx*');
title 'Iris Petal Measurements and Posterior Probabilities';
xlabel 'Petal length (cm)';
ylabel 'Petal width (cm)';
axis tight
legend(gh,'Location','Best')
hold off
