Predict labels using naive Bayes classification model
[label,Posterior,Cost] = predict(Mdl,X) also returns:
- A matrix of posterior probabilities (Posterior) indicating the likelihood that a label comes from a particular class.
- A matrix of misclassification costs (Cost). For each observation in X, the predicted class label corresponds to the minimum expected classification cost among all classes.
Mdl — Naive Bayes classifier
ClassificationNaiveBayes model | CompactClassificationNaiveBayes model

Naive Bayes classifier, specified as a ClassificationNaiveBayes model or a CompactClassificationNaiveBayes model returned by fitcnb or compact, respectively.
X — Predictor data to be classified

Predictor data to be classified, specified as a numeric matrix or table. Each row of X corresponds to one observation, and each column corresponds to one variable.
For a numeric matrix:

- The variables making up the columns of X must have the same order as the predictor variables that trained Mdl.
- If you trained Mdl using a table (for example, Tbl), then X can be a numeric matrix if Tbl contains all numeric predictor variables. To treat numeric predictors in Tbl as categorical during training, identify categorical predictors using the CategoricalPredictors name-value pair argument of fitcnb. If Tbl contains heterogeneous predictor variables (for example, numeric and categorical data types) and X is a numeric matrix, then predict throws an error.
For a table:

- predict does not support multicolumn variables and cell arrays other than cell arrays of character vectors.
- If you trained Mdl using a table (for example, Tbl), then all predictor variables in X must have the same variable names and data types as those that trained Mdl (stored in Mdl.PredictorNames). However, the column order of X does not need to correspond to the column order of Tbl. Tbl and X can contain additional variables (response variables, observation weights, and so on), but predict ignores them.
- If you trained Mdl using a numeric matrix, then the predictor names in Mdl.PredictorNames and the corresponding predictor variable names in X must be the same. To specify predictor names during training, see the PredictorNames name-value pair argument of fitcnb. All predictor variables in X must be numeric vectors. X can contain additional variables (response variables, observation weights, and so on), but predict ignores them.
Data Types: table | double | single
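For example, this sketch (using the fisheriris data and hypothetical predictor names) trains on a table and then predicts using either a table or a numeric matrix, which is allowed here because all predictors in the table are numeric:

load fisheriris
Tbl = array2table(meas, ...
    'VariableNames',{'SL','SW','PL','PW'});  % hypothetical names
Tbl.Species = species;
Mdl = fitcnb(Tbl,'Species');             % train using a table
labelsTbl = predict(Mdl,Tbl(1:5,1:4));   % predict using a table
labelsMat = predict(Mdl,meas(1:5,:));    % numeric matrix also works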
If Mdl.DistributionNames is 'mn', then the software returns NaNs corresponding to rows of X containing at least one NaN.

If Mdl.DistributionNames is not 'mn', then the software ignores NaN values when estimating misclassification costs and posterior probabilities. Specifically, the software computes the conditional density of the predictors given the class by leaving out the factors corresponding to missing predictor values.
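As a sketch of the behavior when Mdl.DistributionNames is not 'mn' (here, the default 'normal' distributions fit to the fisheriris data), predict still returns a label and posterior probabilities for an observation with a missing value:

load fisheriris
Mdl = fitcnb(meas,species);       % default 'normal' distributions
xNew = meas(1,:);
xNew(2) = NaN;                    % one missing predictor value
[label,post] = predict(Mdl,xNew)  % computed from the remaining predictors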
For a predictor distribution specified as 'mvmn', if X contains levels that are not represented in the training data (that is, levels not in Mdl.CategoricalLevels for that predictor), then the conditional density of the predictors given the class is 0. For those observations, the software returns the corresponding value of Posterior as NaN. The software determines the class label for such observations using the class prior probability, stored in Mdl.Prior.
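A minimal sketch of this behavior, using a hypothetical single multivariate multinomial predictor whose training levels are 1, 2, and 3:

rng(0)
X = randi(3,60,1);                             % levels 1, 2, and 3
Y = [repmat({'a'},30,1); repmat({'b'},30,1)];  % two classes
Mdl = fitcnb(X,Y,'DistributionNames','mvmn');
[label,post] = predict(Mdl,4)  % level 4 is unseen: post is NaN, label comes from Mdl.Prior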
label — Predicted class labels

Predicted class labels, returned as a categorical vector, character array, logical or numeric vector, or cell array of character vectors. label:

- Is the same data type as the observed class labels that trained Mdl.
- Has length equal to the number of rows of X.
- Is, for each observation, the class yielding the minimum expected misclassification cost.
Posterior — Class posterior probabilities

Class posterior probabilities, returned as a numeric matrix. Posterior has rows equal to the number of rows of X and columns equal to the number of distinct classes in the training data (size(Mdl.ClassNames,1)).

Posterior(j,k) is the predicted posterior probability of class k (that is, class Mdl.ClassNames(k)) given the observation in row j of X.
Data Types: double
Cost — Expected misclassification costs

Expected misclassification costs, returned as a numeric matrix. Cost has rows equal to the number of rows of X and columns equal to the number of distinct classes in the training data (size(Mdl.ClassNames,1)).

Cost(j,k) is the expected misclassification cost of the observation in row j of X being predicted into class k (that is, class Mdl.ClassNames(k)).
Load Fisher's iris data set.
load fisheriris
X = meas;    % Predictors
Y = species; % Response
rng(1);      % For reproducibility
Train a naive Bayes classifier, holding out 30% of the data for a test sample. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.
CVMdl = fitcnb(X,Y,'Holdout',0.30,...
    'ClassNames',{'setosa','versicolor','virginica'});
CMdl = CVMdl.Trained{1};          % Extract the trained, compact classifier
testIdx = test(CVMdl.Partition);  % Extract the test indices
XTest = X(testIdx,:);
YTest = Y(testIdx);
CVMdl is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationNaiveBayes classifier that the software trained using the training set.
Label the test sample observations. Display the results for a random set of 10 observations in the test sample.
idx = randsample(sum(testIdx),10);
label = predict(CMdl,XTest);
table(YTest(idx),label(idx),'VariableNames',...
    {'TrueLabel','PredictedLabel'})
ans=10×2 table
TrueLabel PredictedLabel
______________ ______________
{'setosa' } {'setosa' }
{'versicolor'} {'versicolor'}
{'setosa' } {'setosa' }
{'virginica' } {'virginica' }
{'versicolor'} {'versicolor'}
{'setosa' } {'setosa' }
{'virginica' } {'virginica' }
{'virginica' } {'virginica' }
{'setosa' } {'setosa' }
{'setosa' } {'setosa' }
A goal of classification is to estimate posterior probabilities of new observations using a trained algorithm. Many applications train algorithms on large data sets, which can use resources better spent elsewhere. This example shows how to efficiently estimate posterior probabilities of new observations using a naive Bayes classifier.
Load Fisher's iris data set.
load fisheriris
X = meas;    % Predictors
Y = species; % Response
rng(1);      % For reproducibility
Partition the data set into two sets: one for training, and the other for new, unobserved data. Reserve 10 observations for the new data set.
n = size(X,1);
newInds = randsample(n,10);
inds = ~ismember(1:n,newInds);
XNew = X(newInds,:);
YNew = Y(newInds);
Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label. Conserve memory by reducing the size of the trained naive Bayes classifier.
Mdl = fitcnb(X(inds,:),Y(inds),...
    'ClassNames',{'setosa','versicolor','virginica'});
CMdl = compact(Mdl);  % Discard the training data
whos('Mdl','CMdl')
  Name      Size            Bytes  Class                                                       Attributes

  CMdl      1x1              5238  classreg.learning.classif.CompactClassificationNaiveBayes            
  Mdl       1x1             12539  ClassificationNaiveBayes                                              
The CompactClassificationNaiveBayes classifier (CMdl) uses less space than the ClassificationNaiveBayes classifier (Mdl) because the latter stores the data.
Predict the labels, posterior probabilities, and expected class misclassification costs. Since true labels are available, compare them with the predicted labels.
CMdl.ClassNames
ans = 3x1 cell
{'setosa' }
{'versicolor'}
{'virginica' }
[labels,PostProbs,MisClassCost] = predict(CMdl,XNew);
table(YNew,labels,PostProbs,'VariableNames',...
    {'TrueLabels','PredictedLabels',...
    'PosteriorProbabilities'})
ans=10×3 table
TrueLabels PredictedLabels PosteriorProbabilities
______________ _______________ _________________________________________
{'setosa' } {'setosa' } 1 4.1259e-16 1.1846e-23
{'versicolor'} {'versicolor'} 1.0373e-60 0.99999 5.8053e-06
{'virginica' } {'virginica' } 4.8708e-211 0.00085645 0.99914
{'setosa' } {'setosa' } 1 1.4053e-19 2.2672e-26
{'versicolor'} {'versicolor'} 2.9308e-75 0.99987 0.00012869
{'setosa' } {'setosa' } 1 2.629e-18 4.4297e-25
{'versicolor'} {'versicolor'} 1.4238e-67 0.99999 9.733e-06
{'versicolor'} {'versicolor'} 2.0667e-110 0.94237 0.057625
{'setosa' } {'setosa' } 1 4.3779e-19 3.5139e-26
{'setosa' } {'setosa' } 1 1.1792e-17 2.2912e-24
MisClassCost
MisClassCost = 10×3
0.0000 1.0000 1.0000
1.0000 0.0000 1.0000
1.0000 0.9991 0.0009
0.0000 1.0000 1.0000
1.0000 0.0001 0.9999
0.0000 1.0000 1.0000
1.0000 0.0000 1.0000
1.0000 0.0576 0.9424
0.0000 1.0000 1.0000
0.0000 1.0000 1.0000
PostProbs and MisClassCost are 10-by-3 numeric matrices, where each row corresponds to a new observation and each column corresponds to a class. The order of the columns corresponds to the order of CMdl.ClassNames.
Load Fisher's iris data set. Train the classifier using the petal lengths and widths.
load fisheriris
X = meas(:,3:4);
Y = species;
Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.
Mdl = fitcnb(X,Y,...
    'ClassNames',{'setosa','versicolor','virginica'});
Mdl is a ClassificationNaiveBayes model. You can access its properties using dot notation.
Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid.
xMax = max(X);
xMin = min(X);
h = 0.01;
[x1Grid,x2Grid] = meshgrid(xMin(1):h:xMax(1),xMin(2):h:xMax(2));
[~,PosteriorRegion] = predict(Mdl,[x1Grid(:),x2Grid(:)]);
Plot the posterior probability regions and the training data.
figure;
% Plot posterior regions
scatter(x1Grid(:),x2Grid(:),1,PosteriorRegion);
% Adjust color bar options
h = colorbar;
h.Ticks = [0 0.5 1];
h.TickLabels = {'setosa','versicolor','virginica'};
h.YLabel.String = 'Posterior';
h.YLabel.Position = [-0.5 0.5 0];
% Adjust color map options
d = 1e-2;
cmap = zeros(201,3);
cmap(1:101,1) = 1:-d:0;
cmap(1:201,2) = [0:d:1 1-d:-d:0];
cmap(101:201,3) = 0:d:1;
colormap(cmap);
% Plot data
hold on
gh = gscatter(X(:,1),X(:,2),Y,'k','dx*');
title 'Iris Petal Measurements and Posterior Probabilities';
xlabel 'Petal length (cm)';
ylabel 'Petal width (cm)';
axis tight
legend(gh,'Location','Best')
hold off
A misclassification cost is the relative severity of a classifier labeling an observation into the wrong class.
There are two types of misclassification costs: true and expected. Let K be the number of classes.
True misclassification cost — A K-by-K matrix, where element (i,j) indicates the misclassification cost of predicting an observation into class j if its true class is i. The software stores the misclassification cost in the property Mdl.Cost and uses it in computations. By default, Mdl.Cost(i,j) = 1 if i ≠ j, and Mdl.Cost(i,j) = 0 if i = j. In other words, the cost is 0 for correct classification and 1 for any incorrect classification.
Expected misclassification cost — A K-dimensional vector, where element k is the weighted average cost of classifying an observation into class k, weighted by the class posterior probabilities. In other words,

$$c_k = \sum_{j=1}^{K} \hat{P}(Y = j \mid x_1,\ldots,x_P)\,\mathrm{Cost}(j,k),$$

where Cost(j,k) is the cost of predicting class k when the true class is j. The software classifies each observation into the class corresponding to the lowest expected misclassification cost.
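To make the formula concrete, this sketch (assuming a model trained on the fisheriris data, as in the examples above) recomputes the expected misclassification cost from the posterior probabilities and Mdl.Cost, and compares it with the Cost output of predict:

load fisheriris
Mdl = fitcnb(meas,species);
[~,post,cost] = predict(Mdl,meas(1:5,:));
expCost = post*Mdl.Cost;            % c_k = sum_j P(Y=j|x)*Cost(j,k)
max(abs(cost - expCost),[],'all')   % approximately 0

With the default 0-1 cost matrix, this expected cost reduces to 1 minus the corresponding posterior probability.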
The posterior probability is the probability that an observation belongs in a particular class, given the data.

For naive Bayes, the posterior probability that a classification is k for a given observation (x1,...,xP) is

$$\hat{P}(Y = k \mid x_1,\ldots,x_P) = \frac{P(X_1,\ldots,X_P \mid y = k)\,\pi(Y = k)}{P(X_1,\ldots,X_P)},$$

where:

- $P(X_1,\ldots,X_P \mid y = k)$ is the conditional joint density of the predictors given they are in class k. Mdl.DistributionNames stores the distribution names of the predictors.
- $\pi(Y = k)$ is the class prior probability distribution. Mdl.Prior stores the prior distribution.
- $P(X_1,\ldots,X_P)$ is the joint density of the predictors. The classes are discrete, so $P(X_1,\ldots,X_P) = \sum_{k=1}^{K} P(X_1,\ldots,X_P \mid y = k)\,\pi(Y = k)$.
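As a sketch, you can reproduce Posterior by hand for the default Gaussian ('normal') conditional distributions, using the per-class means and standard deviations stored in Mdl.DistributionParameters (the data and variable names here follow the fisheriris examples above):

load fisheriris
Mdl = fitcnb(meas,species);          % per-predictor Gaussian densities
x = meas(60,:);                      % one observation
K = numel(Mdl.ClassNames);
condDensity = zeros(K,1);
for k = 1:K
    mu    = cellfun(@(p) p(1), Mdl.DistributionParameters(k,:));
    sigma = cellfun(@(p) p(2), Mdl.DistributionParameters(k,:));
    condDensity(k) = prod(normpdf(x,mu,sigma));  % P(x1,...,xP | y = k)
end
unnorm = condDensity.*Mdl.Prior(:);  % numerator of Bayes' rule
postManual = (unnorm/sum(unnorm))'   % normalize by the joint density
[~,postPredict] = predict(Mdl,x)     % matches postManual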
The prior probability of a class is the believed relative frequency with which observations from that class occur in a population.
This function fully supports tall arrays. You can use models trained on either in-memory or tall data with this function.
For more information, see Tall Arrays (MATLAB).
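A minimal sketch, assuming a naive Bayes model trained in memory on the fisheriris data as in the examples above:

load fisheriris
Mdl = fitcnb(meas,species);
tX = tall(meas);            % tall version of the predictor data
tLabels = predict(Mdl,tX);  % evaluation is deferred
labels = gather(tLabels);   % evaluate and bring the labels into memory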
Usage notes and limitations:
Use saveLearnerForCoder, loadLearnerForCoder, and codegen to generate code for the predict function. Save a trained model by using saveLearnerForCoder. Define an entry-point function that loads the saved model by using loadLearnerForCoder and calls the predict function. Then use codegen to generate code for the entry-point function.
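A minimal sketch of this workflow; the saved file name ('myNBModel') and the entry-point function name (predictLabels) are hypothetical:

% At the MATLAB command line: train and save the model
load fisheriris
Mdl = fitcnb(meas,species);
saveLearnerForCoder(Mdl,'myNBModel');

% In predictLabels.m: entry-point function that loads the model and predicts
function label = predictLabels(X) %#codegen
Mdl = loadLearnerForCoder('myNBModel');
label = predict(Mdl,X);
end

% At the MATLAB command line: generate code for the entry-point function
% codegen predictLabels -args {coder.typeof(0,[Inf 4],[1 0])}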
This table contains notes about the arguments of predict. Arguments not included in this table are fully supported.
| Argument | Notes and Limitations |
|---|---|
| Mdl | For the usage notes and limitations of the model object, see Code Generation of the CompactClassificationNaiveBayes object. |
| X | Must be a single-precision or double-precision matrix. |
For more information, see Introduction to Code Generation.