Classification loss for cross-validated kernel classification model
loss = kfoldLoss(CVMdl) returns the classification loss obtained by the cross-validated, binary kernel model (ClassificationPartitionedKernel) CVMdl. For every fold, kfoldLoss computes the classification loss for validation-fold observations using a model trained on training-fold observations. By default, kfoldLoss returns the classification error.
loss = kfoldLoss(CVMdl,Name,Value) returns the classification loss with additional options specified by one or more name-value pair arguments. For example, specify the classification loss function, number of folds, or aggregation level.
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, which are labeled either bad ('b') or good ('g').
load ionosphere
Cross-validate a binary kernel classification model using the data.
CVMdl = fitckernel(X,Y,'Crossval','on')
CVMdl = 
  ClassificationPartitionedKernel
    CrossValidatedModel: 'Kernel'
           ResponseName: 'Y'
        NumObservations: 351
                  KFold: 10
              Partition: [1x1 cvpartition]
             ClassNames: {'b'  'g'}
         ScoreTransform: 'none'

  Properties, Methods
CVMdl is a ClassificationPartitionedKernel model. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'Crossval'.
Estimate the cross-validated classification loss. By default, the software computes the classification error.
loss = kfoldLoss(CVMdl)
loss = 0.0940
Alternatively, you can obtain the per-fold classification errors by specifying the name-value pair 'Mode','individual' in kfoldLoss.
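For instance, a minimal sketch reusing the CVMdl trained above:

lossPerFold = kfoldLoss(CVMdl,'Mode','individual')

Here lossPerFold is a 10-by-1 vector containing one classification error per fold.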
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, which are labeled either bad ('b') or good ('g').
load ionosphere
Cross-validate a binary kernel classification model using the data.
CVMdl = fitckernel(X,Y,'Crossval','on')
CVMdl = 
  ClassificationPartitionedKernel
    CrossValidatedModel: 'Kernel'
           ResponseName: 'Y'
        NumObservations: 351
                  KFold: 10
              Partition: [1x1 cvpartition]
             ClassNames: {'b'  'g'}
         ScoreTransform: 'none'

  Properties, Methods
CVMdl is a ClassificationPartitionedKernel model. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'Crossval'.
Create an anonymous function that measures linear loss, that is, $L = \frac{\sum_j -w_j y_j f_j}{\sum_j w_j}$, where $w_j$ is the weight for observation j, $y_j$ is the response for observation j (–1 for the negative class and 1 otherwise), and $f_j$ is the raw classification score of observation j.
linearloss = @(C,S,W,Cost)sum(-W.*sum(S.*C,2))/sum(W);
Custom loss functions must be written in a particular form. For rules on writing a custom loss function, see the 'LossFun'
name-value pair argument.
Estimate the cross-validated classification loss using the linear loss function.
loss = kfoldLoss(CVMdl,'LossFun',linearloss)
loss = -0.7792
CVMdl — Cross-validated, binary kernel classification model
ClassificationPartitionedKernel model object

Cross-validated, binary kernel classification model, specified as a ClassificationPartitionedKernel model object. You can create a ClassificationPartitionedKernel model by using fitckernel and specifying any one of the cross-validation name-value pair arguments.
To obtain estimates, kfoldLoss applies the same data used to cross-validate the kernel classification model (X and Y).
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: kfoldLoss(CVMdl,'Folds',[1 3 5]) specifies to use only the first, third, and fifth folds to calculate the classification loss.

'Folds' — Fold indices for prediction
1:CVMdl.KFold (default) | numeric vector of positive integers

Fold indices for prediction, specified as the comma-separated pair consisting of 'Folds' and a numeric vector of positive integers. The elements of Folds must be within the range from 1 to CVMdl.KFold.

The software uses only the folds specified in Folds for prediction.
Example: 'Folds',[1 4 10]
Data Types: single | double
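For instance, a quick sketch (reusing the CVMdl from the examples above) that evaluates the loss over the odd-numbered folds only:

lossOdd = kfoldLoss(CVMdl,'Folds',1:2:CVMdl.KFold)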
'LossFun' — Loss function
'classiferror' (default) | 'binodeviance' | 'exponential' | 'hinge' | 'logit' | 'mincost' | 'quadratic' | function handle

Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss function name or a function handle.
This table lists the available loss functions. Specify one using its corresponding value.
Value | Description |
---|---|
'binodeviance' | Binomial deviance |
'classiferror' | Classification error |
'exponential' | Exponential |
'hinge' | Hinge |
'logit' | Logistic |
'mincost' | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
'quadratic' | Quadratic |
'mincost' is appropriate for classification scores that are posterior probabilities. For kernel classification models, logistic regression learners return posterior probabilities as classification scores by default, but SVM learners do not (see kfoldPredict).
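For example, a hedged sketch (reusing X and Y from the examples above; the model name CVMdlLogistic is chosen here for illustration): train the cross-validated model with logistic regression learners so that the scores are posterior probabilities, and then request the minimal expected misclassification cost.

CVMdlLogistic = fitckernel(X,Y,'Learner','logistic','Crossval','on');
loss = kfoldLoss(CVMdlLogistic,'LossFun','mincost')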
Specify your own function by using function handle notation.
Assume that n is the number of observations in X, and K is the number of distinct classes (numel(CVMdl.ClassNames), where CVMdl is the input model). Your function must have this signature:
lossvalue = lossfun(C,S,W,Cost)

The output argument lossvalue is a scalar.

You specify the function name (lossfun).
C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in CVMdl.ClassNames. Construct C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.
S is an n-by-K numeric matrix of classification scores. The column order corresponds to the class order in CVMdl.ClassNames. S is a matrix of classification scores, similar to the output of kfoldPredict.
W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.
Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification, and 1 for misclassification.
Example: 'LossFun',@lossfun

Data Types: char | string | function_handle
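As an illustration, here is a minimal sketch of a custom loss that reproduces the weighted classification error using only the C, S, W, and Cost inputs described above. The handle name classiferrorLoss is hypothetical.

% Weighted classification error as a custom loss (sketch). A row counts as
% misclassified when the true class (C) does not achieve the maximal score (S);
% ties are treated as correct here.
classiferrorLoss = @(C,S,W,Cost) sum(W .* ~any(C & (S == max(S,[],2)),2)) / sum(W);
loss = kfoldLoss(CVMdl,'LossFun',classiferrorLoss)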
'Mode' — Aggregation level for output
'average' (default) | 'individual'

Aggregation level for the output, specified as the comma-separated pair consisting of 'Mode' and 'average' or 'individual'.
This table describes the values.
Value | Description |
---|---|
'average' | The output is a scalar average over all folds. |
'individual' | The output is a vector of length k containing one value per fold, where k is the number of folds. |
Example: 'Mode','individual'
loss — Classification loss

Classification loss, returned as a numeric scalar or numeric column vector.

If Mode is 'average', then loss is the average classification loss over all folds. Otherwise, loss is a k-by-1 numeric column vector containing the classification loss for each fold, where k is the number of folds.
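As a rough sketch of how the two modes relate (assuming uniform observation weights and near-equal fold sizes, the scalar average is close to the mean of the per-fold values):

perFold = kfoldLoss(CVMdl,'Mode','individual');   % k-by-1, one loss per fold
avgLoss = kfoldLoss(CVMdl);                       % scalar; approximately mean(perFold)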
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Suppose the following:

L is the weighted average classification loss.

n is the sample size.

$y_j$ is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class, respectively.

$f(X_j)$ is the raw classification score for the transformed observation (row) j of the predictor data X using feature expansion.

$m_j = y_j f(X_j)$ is the classification score for classifying observation j into the class corresponding to $y_j$. Positive values of $m_j$ indicate correct classification and do not contribute much to the average loss. Negative values of $m_j$ indicate incorrect classification and contribute to the average loss.

The weight for observation j is $w_j$. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so that they sum to 1. Therefore, $\sum_{j=1}^{n} w_j = 1$.
This table describes the supported loss functions that you can specify by using the 'LossFun' name-value pair argument.
Loss Function | Value of LossFun | Equation |
---|---|---|
Binomial deviance | 'binodeviance' | $L = \sum_{j=1}^{n} w_j \log\{1 + \exp[-2 m_j]\}$ |
Exponential loss | 'exponential' | $L = \sum_{j=1}^{n} w_j \exp(-m_j)$ |
Classification error | 'classiferror' | $L = \sum_{j=1}^{n} w_j I\{\hat{y}_j \ne y_j\}$, the weighted fraction of misclassified observations, where $\hat{y}_j$ is the class label corresponding to the class with the maximal posterior probability and $I\{x\}$ is the indicator function. |
Hinge loss | 'hinge' | $L = \sum_{j=1}^{n} w_j \max\{0, 1 - m_j\}$ |
Logit loss | 'logit' | $L = \sum_{j=1}^{n} w_j \log(1 + \exp(-m_j))$ |
Minimal cost | 'mincost' | The software computes the weighted minimal cost using this procedure for observations j = 1,...,n: (1) estimate the expected classification costs for observation j, $\gamma_j = f(X_j)^\prime C$, where $f(X_j)$ is the column vector of class posterior probabilities and C is the cost matrix stored by the model; (2) predict the class label corresponding to the minimum expected classification cost, $\hat{y}_j = \arg\min_{k=1,\dots,K} \gamma_{jk}$; (3) using C, identify the cost incurred ($c_j$) for making the prediction. The weighted, average, minimum cost loss is $L = \sum_{j=1}^{n} w_j c_j$. |
Quadratic loss | 'quadratic' | $L = \sum_{j=1}^{n} w_j (1 - m_j)^2$ |
This figure compares the loss functions (except minimal cost) for one observation over m. Some functions are normalized to pass through the point (0, 1).
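As an illustrative sketch (not the documentation's original figure), you can reproduce such a comparison; the normalizing constants here are assumptions chosen so that the binomial deviance and logit curves pass through (0, 1):

m = linspace(-2,2,200);                       % range of margins
plot(m, log(1 + exp(-2*m))/log(2), ...        % binomial deviance (normalized)
     m, exp(-m), ...                          % exponential loss
     m, double(m < 0), ...                    % classification error
     m, max(0,1 - m), ...                     % hinge loss
     m, log(1 + exp(-m))/log(2), ...          % logit loss (normalized)
     m, (1 - m).^2)                           % quadratic loss
legend('Binomial deviance','Exponential','Classification error', ...
    'Hinge','Logit','Quadratic','Location','northeast')
xlabel('Margin m'); ylabel('Loss')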