Classify observations in support vector machine (SVM) classifier
If you are using a linear SVM model for classification and the model has many support vectors, then prediction using resubPredict can be slow. To efficiently classify observations based on a linear SVM model, remove the support vectors from the model object by using discardSupportVectors.
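A minimal sketch of this workflow, assuming Fisher's iris data (shipped with Statistics and Machine Learning Toolbox) and keeping two classes for binary classification:

    % Train a linear SVM on two of the iris classes, then discard the
    % stored support vectors; prediction then uses the compact linear
    % form Beta'*x + Bias instead of the dual representation.
    load fisheriris
    inds = ~strcmp(species,'setosa');            % versicolor vs. virginica
    X = meas(inds,:);
    y = species(inds);

    SVMModel = fitcsvm(X,y,'KernelFunction','linear');
    SVMModel = discardSupportVectors(SVMModel);  % SupportVectors and Alpha are now empty
    label = resubPredict(SVMModel);              % prediction uses Beta and Bias only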
By default and irrespective of the model kernel function, MATLAB® uses the dual representation of the score function to classify observations based on trained SVM models, specifically

f(x) = \sum_{j} \alpha_j y_j G(x_j, x) + b,

where the sum runs over the support vectors x_j, G(x_j, x) is the kernel function, \alpha_j are the estimated Lagrange multipliers, y_j \in \{-1, 1\} are the class labels, and b is the estimated bias.
This prediction method requires the trained support vectors and α coefficients (see the SupportVectors and Alpha properties of the SVM model).
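For a linear kernel with the default KernelScale of 1, the dual-form score can be recomputed directly from these properties. A hedged sketch follows; the comparison assumes the positive-class scores sit in the second column, matching the ClassNames order for two-class SVM models:

    % Recompute f(x) = sum_j alpha_j*y_j*G(x_j,x) + b for a linear kernel
    % and compare against the scores returned by resubPredict.
    load fisheriris
    inds = ~strcmp(species,'setosa');
    X = meas(inds,:);
    y = species(inds);

    SVMModel = fitcsvm(X,y,'KernelFunction','linear');
    G = X*SVMModel.SupportVectors';                   % linear kernel values G(x_j,x)
    f = G*(SVMModel.Alpha.*SVMModel.SupportVectorLabels) + SVMModel.Bias;
    [~,score] = resubPredict(SVMModel);
    max(abs(f - score(:,2)))                          % near zero; column 2 assumed to be the positive class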
By default, the software computes optimal posterior probabilities using Platt’s method [1]:

1. Perform 10-fold cross-validation.
2. Fit the sigmoid function parameters to the scores returned from the cross-validation.
3. Estimate the posterior probabilities by entering the cross-validation scores into the fitted sigmoid function.
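For example, a minimal sketch of fitting and using the posterior mapping, again using a two-class iris subset:

    % Fit the sigmoid score-to-posterior transformation, then obtain
    % posterior probabilities instead of raw scores.
    load fisheriris
    inds = ~strcmp(species,'setosa');
    X = meas(inds,:);
    y = species(inds);

    SVMModel = fitcsvm(X,y);
    ScoreSVMModel = fitPosterior(SVMModel);           % cross-validates and fits the sigmoid
    [label,posterior] = resubPredict(ScoreSVMModel);  % posterior columns follow ClassNames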
The software incorporates prior probabilities in the SVM objective function during training.
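For example, priors can be supplied to fitcsvm through the 'Prior' name-value argument; a sketch with illustrative values:

    % Specify unequal class priors at training time; the software
    % normalizes the vector to sum to 1.
    load fisheriris
    inds = ~strcmp(species,'setosa');
    X = meas(inds,:);
    y = species(inds);

    SVMModel = fitcsvm(X,y,'Prior',[0.3 0.7]);  % order follows SVMModel.ClassNames
    SVMModel.Prior                               % stored prior vector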
For SVM, predict and resubPredict classify observations into the class yielding the largest score (the largest posterior probability). The software accounts for misclassification costs by applying the average-cost correction before training the classifier. That is, given the class prior vector P, the misclassification cost matrix C, and the observation weight vector w, the software defines a new vector of observation weights (W) such that

W_j = w_j P_{y_j} \sum_{k=1}^{K} C_{y_j, k},

where y_j is the class label of observation j and K is the number of classes.
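A sketch showing an asymmetric cost matrix supplied at training time (the values are illustrative):

    % Cost(i,j) is the cost of classifying an observation into class j
    % when its true class is i; rows and columns follow ClassNames order.
    load fisheriris
    inds = ~strcmp(species,'setosa');
    X = meas(inds,:);
    y = species(inds);

    C = [0 1; 10 0];                    % penalize misclassifying class 1 heavily
    SVMModel = fitcsvm(X,y,'Cost',C);   % average-cost correction applied to the weights
    label = resubPredict(SVMModel);     % classify into the class with the largest score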
[1] Platt, J. “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods.” Advances in Large Margin Classifiers. MIT Press, 1999, pp. 61–74.
ClassificationSVM | CompactClassificationSVM | fitcsvm | fitPosterior | fitSVMPosterior | predict