Classification loss for observations not used in training
L = kfoldLoss(CVMdl) returns the cross-validated classification losses obtained by the cross-validated, binary, linear classification model CVMdl. That is, for every fold, kfoldLoss estimates the classification loss for observations that it holds out when it trains using all other observations.

L contains a classification loss for each regularization strength in the linear classification models that compose CVMdl.
L = kfoldLoss(CVMdl,Name,Value) uses additional options specified by one or more Name,Value pair arguments. For example, indicate which folds to use for the loss calculation or specify the classification-loss function.
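For instance, assuming you have already cross-validated a linear model CVMdl with fitclinear (as in the examples below), the two call forms look like this; the variable names are placeholders:

L = kfoldLoss(CVMdl);                      % average loss over all folds
Lfold = kfoldLoss(CVMdl,'Mode','individual', ...
    'LossFun','hinge');                    % per-fold hinge loss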
CVMdl — Cross-validated, binary, linear classification model
ClassificationPartitionedLinear model object

Cross-validated, binary, linear classification model, specified as a ClassificationPartitionedLinear model object. You can create a ClassificationPartitionedLinear model using fitclinear and specifying any one of the cross-validation name-value pair arguments, for example, CrossVal.
To obtain estimates, kfoldLoss applies the same data used to cross-validate the linear classification model (X and Y).
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'Folds' — Fold indices to use for classification-score prediction
1:CVMdl.KFold (default) | numeric vector of positive integers

Fold indices to use for classification-score prediction, specified as the comma-separated pair consisting of 'Folds' and a numeric vector of positive integers. The elements of Folds must range from 1 through CVMdl.KFold.

Example: 'Folds',[1 4 10]
Data Types: single | double
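For example, the following sketch computes the loss using only a subset of folds (assuming a 10-fold CVMdl like the one in the examples below):

% Average the classification loss over folds 1, 4, and 10 only (assumed 10-fold model).
ceSubset = kfoldLoss(CVMdl,'Folds',[1 4 10]);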
'LossFun' — Loss function
'classiferror' (default) | 'binodeviance' | 'exponential' | 'hinge' | 'logit' | 'mincost' | 'quadratic' | function handle

Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss-function name or a function handle.

The following table lists the available loss functions. Specify one using its corresponding character vector or string scalar.
Value | Description |
---|---|
'binodeviance' | Binomial deviance |
'classiferror' | Classification error |
'exponential' | Exponential |
'hinge' | Hinge |
'logit' | Logistic |
'mincost' | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
'quadratic' | Quadratic |
'mincost' is appropriate for classification scores that are posterior probabilities. For linear classification models, logistic regression learners return posterior probabilities as classification scores by default, but SVM learners do not (see predict).
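As an illustration, if CVMdl was cross-validated with 'Learner','logistic' (an assumption, not shown here), its scores are posterior probabilities and 'mincost' is a valid choice:

% Valid only if CVMdl uses logistic regression learners, whose scores are
% posterior probabilities by default.
ceMin = kfoldLoss(CVMdl,'LossFun','mincost');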
Specify your own function using function handle notation.

Let n be the number of observations in X and K be the number of distinct classes (numel(Mdl.ClassNames), where Mdl is the input model). Your function must have this signature:

lossvalue = lossfun(C,S,W,Cost)

The output argument lossvalue is a scalar.

You choose the function name (lossfun).
C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in Mdl.ClassNames. Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.
S is an n-by-K numeric matrix of classification scores, similar to the output of predict. The column order corresponds to the class order in Mdl.ClassNames.
W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.
Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.
Specify your function using 'LossFun',@lossfun.

Data Types: char | string | function_handle
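As a concrete sketch, the following hypothetical function reimplements the weighted classification error using the C, S, W, and Cost arguments described above. The function name is illustrative; only the four-input signature is required by kfoldLoss. Because the cost matrix is not used, the fourth input is ignored with ~.

function lossvalue = myClassifError(C,S,W,~)   % the Cost argument is ignored
    % C: n-by-K logical membership matrix, S: n-by-K score matrix,
    % W: n-by-1 observation weights (normalized by the software to sum to 1).
    [~,predClass] = max(S,[],2);               % predicted class = column with the largest score
    [~,trueClass] = max(double(C),[],2);       % true class = flagged column of C
    lossvalue = sum(W.*(predClass ~= trueClass))/sum(W);  % weighted misclassification rate
end

You would then pass the handle, for example, ce = kfoldLoss(CVMdl,'LossFun',@myClassifError);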
'Mode' — Loss aggregation level
'average' (default) | 'individual'

Loss aggregation level, specified as the comma-separated pair consisting of 'Mode' and 'average' or 'individual'.
Value | Description |
---|---|
'average' | Returns losses averaged over all folds |
'individual' | Returns losses for each fold |
Example: 'Mode','individual'
L — Cross-validated classification losses

Cross-validated classification losses, returned as a numeric scalar, vector, or matrix. The interpretation of L depends on LossFun.

Let R be the number of regularization strengths in the cross-validated models (stored in numel(CVMdl.Trained{1}.Lambda)) and F be the number of folds (stored in CVMdl.KFold).
If Mode is 'average', then L is a 1-by-R vector. L(j) is the average classification loss over all folds of the cross-validated model that uses regularization strength j.

Otherwise, L is an F-by-R matrix. L(i,j) is the classification loss for fold i of the cross-validated model that uses regularization strength j.
To estimate L, kfoldLoss uses the data that created CVMdl (see X and Y).
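A quick sanity check of these shapes, under the assumption that CVMdl was cross-validated over several regularization strengths (a hypothetical setup):

% Inspect the shape of L for both aggregation modes.
R = numel(CVMdl.Trained{1}.Lambda);          % number of regularization strengths
F = CVMdl.KFold;                             % number of folds
Lavg = kfoldLoss(CVMdl);                     % 1-by-R
Lind = kfoldLoss(CVMdl,'Mode','individual'); % F-by-R
disp([size(Lavg); size(Lind)])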
Load the NLP data set.
load nlpdata
X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. There are more than two classes in the data.
The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to the Statistics and Machine Learning Toolbox™ documentation web pages.
Ystats = Y == 'stats';
Cross-validate a binary, linear classification model that can identify whether the word counts in a documentation web page are from the Statistics and Machine Learning Toolbox™ documentation.
rng(1); % For reproducibility
CVMdl = fitclinear(X,Ystats,'CrossVal','on');
CVMdl is a ClassificationPartitionedLinear model. By default, the software implements 10-fold cross-validation. You can alter the number of folds using the 'KFold' name-value pair argument.
Estimate the average of the out-of-fold, classification error rates.
ce = kfoldLoss(CVMdl)
ce = 7.6017e-04
Alternatively, you can obtain the per-fold classification error rates by specifying the name-value pair 'Mode','individual' in kfoldLoss.
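For instance, that alternative call looks like this (a sketch; the output variable name is arbitrary):

ceFold = kfoldLoss(CVMdl,'Mode','individual');  % 10-by-1 vector, one error rate per fold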
Load the NLP data set. Preprocess the data as in Estimate k-Fold Cross-Validation Classification Error, and transpose the predictor data.
load nlpdata
Ystats = Y == 'stats';
X = X';
Cross-validate a binary, linear classification model using 5-fold cross-validation. Optimize the objective function using SpaRSA. Specify that the predictor observations correspond to columns.
rng(1); % For reproducibility
CVMdl = fitclinear(X,Ystats,'Solver','sparsa','KFold',5,...
    'ObservationsIn','columns');
CMdl = CVMdl.Trained{1};
CVMdl is a ClassificationPartitionedLinear model. It contains the property Trained, which is a 5-by-1 cell array holding the ClassificationLinear models that the software trained using the training set of each fold.
Create an anonymous function that measures linear loss, that is,

L = ∑_j(−w_j y_j f_j) / ∑_j w_j,

where w_j is the weight for observation j, y_j is response j (−1 for the negative class, and 1 otherwise), and f_j is the raw classification score of observation j. Custom loss functions must be written in a particular form. For rules on writing a custom loss function, see the LossFun name-value pair argument. Because the function does not use classification cost, use ~ to have kfoldLoss ignore its position.
linearloss = @(C,S,W,~)sum(-W.*sum(S.*C,2))/sum(W);
Estimate the average cross-validated classification loss using the linear loss function. Also, obtain the loss for each fold.
ce = kfoldLoss(CVMdl,'LossFun',linearloss)
ce = -8.0982
ceFold = kfoldLoss(CVMdl,'LossFun',linearloss,'Mode','individual')
ceFold = 5×1
-8.3165
-8.7633
-7.4342
-8.0423
-7.9347
To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare test-sample classification error rates.
Load the NLP data set. Preprocess the data as in Specify Custom Classification Loss.
load nlpdata
Ystats = Y == 'stats';
X = X';
Create a set of 11 logarithmically spaced regularization strengths from 10^-6 through 10^-0.5.
Lambda = logspace(-6,-0.5,11);
Using 5-fold cross-validation, cross-validate binary, linear classification models that use each of the regularization strengths. Optimize the objective function using SpaRSA. Lower the tolerance on the gradient of the objective function to 1e-8.
rng(10); % For reproducibility
CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'KFold',5,'Learner','logistic','Solver','sparsa',...
    'Regularization','lasso','Lambda',Lambda,'GradientTolerance',1e-8)
CVMdl = 
  ClassificationPartitionedLinear
    CrossValidatedModel: 'Linear'
           ResponseName: 'Y'
        NumObservations: 31572
                  KFold: 5
              Partition: [1×1 cvpartition]
             ClassNames: [0 1]
         ScoreTransform: 'none'

  Properties, Methods
Extract a trained linear classification model.
Mdl1 = CVMdl.Trained{1}
Mdl1 = 
  ClassificationLinear
      ResponseName: 'Y'
        ClassNames: [0 1]
    ScoreTransform: 'logit'
              Beta: [34023×11 double]
              Bias: [-13.2559 -13.2559 -13.2559 -13.2559 -9.1017 -7.1128 -5.4113 -4.4974 -3.6007 -3.1606 -2.9794]
            Lambda: [1.0000e-06 3.5481e-06 1.2589e-05 4.4668e-05 1.5849e-04 5.6234e-04 0.0020 0.0071 0.0251 0.0891 0.3162]
           Learner: 'logistic'

  Properties, Methods
Mdl1 is a ClassificationLinear model object. Because Lambda is a sequence of regularization strengths, you can think of Mdl1 as 11 models, one for each regularization strength in Lambda.
Estimate the cross-validated classification error.
ce = kfoldLoss(CVMdl);
Because there are 11 regularization strengths, ce is a 1-by-11 vector of classification error rates.
Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a classifier. For each regularization strength, train a linear classification model using the entire data set and the same options as when you cross-validated the models. Determine the number of nonzero coefficients per model.
Mdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',Lambda,'GradientTolerance',1e-8);
numNZCoeff = sum(Mdl.Beta~=0);
In the same figure, plot the cross-validated classification error rates and the frequency of nonzero coefficients for each regularization strength. Plot all variables on the log scale.
figure;
[h,hL1,hL2] = plotyy(log10(Lambda),log10(ce),...
    log10(Lambda),log10(numNZCoeff));
hL1.Marker = 'o';
hL2.Marker = 'o';
ylabel(h(1),'log_{10} classification error')
ylabel(h(2),'log_{10} nonzero-coefficient frequency')
xlabel('log_{10} Lambda')
title('Test-Sample Statistics')
hold off
Choose the index of the regularization strength that balances predictor variable sparsity and low classification error. In this case, a value between 10^-4 and 10^-1 should suffice.
idxFinal = 7;
Select the model from Mdl
with the chosen regularization strength.
MdlFinal = selectModels(Mdl,idxFinal);
MdlFinal is a ClassificationLinear model containing one regularization strength. To estimate labels for new observations, pass MdlFinal and the new data to predict.
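For example (a hypothetical sketch; Xnew stands in for new predictor data with the same 34023 predictors and observations in columns, matching how the model was trained):

labels = predict(MdlFinal,Xnew,'ObservationsIn','columns');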
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
L is the weighted average classification loss.
n is the sample size.
For binary classification:
yj is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class, respectively.
f(Xj) is the raw classification score for observation (row) j of the predictor data X.
mj = yjf(Xj) is the classification score for classifying observation j into the class corresponding to yj. Positive values of mj indicate correct classification and do not contribute much to the average loss. Negative values of mj indicate incorrect classification and contribute significantly to the average loss.
For algorithms that support multiclass classification (that is, K ≥ 3):
yj* is a vector of K – 1 zeros, with 1 in the position corresponding to the true, observed class yj. For example, if the true class of the second observation is the third class and K = 4, then y2* = [0 0 1 0]′. The order of the classes corresponds to the order in the ClassNames property of the input model.

f(Xj) is the length K vector of class scores for observation j of the predictor data X. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.
mj = yj*′f(Xj). Therefore, mj is the scalar classification score that the model predicts for the true, observed class.
The weight for observation j is wj. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so they sum to 1. Therefore, ∑_{j=1}^{n} w_j = 1.
Given this scenario, the following table describes the supported loss functions that you can specify by using the 'LossFun' name-value pair argument.
Loss Function | Value of LossFun | Equation |
---|---|---|
Binomial deviance | 'binodeviance' | L = ∑_{j=1}^{n} w_j log{1 + exp[−2m_j]} |
Exponential loss | 'exponential' | L = ∑_{j=1}^{n} w_j exp(−m_j) |
Classification error | 'classiferror' | L = ∑_{j=1}^{n} w_j I{ŷ_j ≠ y_j}. The classification error is the weighted fraction of misclassified observations, where ŷ_j is the class label corresponding to the class with the maximal posterior probability and I{x} is the indicator function. |
Hinge loss | 'hinge' | L = ∑_{j=1}^{n} w_j max{0, 1 − m_j} |
Logit loss | 'logit' | L = ∑_{j=1}^{n} w_j log{1 + exp(−m_j)} |
Minimal cost | 'mincost' | The software computes the weighted minimal cost using this procedure for observations j = 1,...,n: estimate the 1-by-K vector of expected classification costs for observation j from the posterior probabilities and the cost matrix, predict the class label corresponding to the minimal expected classification cost, and identify the cost c_j incurred for making that prediction. The weighted, average, minimum cost loss is L = ∑_{j=1}^{n} w_j c_j. |
Quadratic loss | 'quadratic' | L = ∑_{j=1}^{n} w_j (1 − m_j)² |
This figure compares the loss functions (except 'mincost') for one observation over m. Some functions are normalized to pass through the point (0,1).
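The figure itself is not reproduced here; the following sketch plots a comparable view of the built-in losses for a single observation as a function of the margin m. The normalizations are assumptions chosen so that the smooth curves pass through (0,1).

% Per-observation loss curves versus the classification margin m.
m = linspace(-3,3,601);
binoDev   = log(1 + exp(-2*m))/log(2);   % binomial deviance, normalized to pass through (0,1)
expLoss   = exp(-m);                     % exponential loss
hingeLoss = max(0, 1 - m);               % hinge loss
logitLoss = log(1 + exp(-m))/log(2);     % logit loss, normalized to pass through (0,1)
quadLoss  = (1 - m).^2;                  % quadratic loss
classErr  = double(m < 0);               % classification (0-1) error
figure
plot(m,[binoDev; expLoss; hingeLoss; logitLoss; quadLoss; classErr])
xlabel('Margin m')
ylabel('Loss')
legend('binodeviance','exponential','hinge','logit','quadratic','classiferror')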
ClassificationLinear | ClassificationPartitionedLinear | kfoldPredict | loss