loss
Class: ClassificationLinear

Classification loss for linear classification models
L = loss(Mdl,X,Y) returns the classification losses for the binary, linear classification model Mdl using predictor data in X and corresponding class labels in Y.

L = loss(Mdl,Tbl,ResponseVarName) returns the classification losses for the predictor data in Tbl and the true class labels in Tbl.ResponseVarName.

L = loss(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can specify that columns in the predictor data correspond to observations or specify the classification loss function.
Mdl — Binary, linear classification model
ClassificationLinear model object

Binary, linear classification model, specified as a ClassificationLinear model object. You can create a ClassificationLinear model object using fitclinear.
X — Predictor data
full matrix | sparse matrix

Predictor data, specified as an n-by-p full or sparse matrix. This orientation of X indicates that rows correspond to individual observations, and columns correspond to individual predictor variables.

Note: If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time.

The length of Y and the number of observations in X must be equal.

Data Types: single | double
Y — Class labels
categorical array | character array | string array | logical vector | numeric vector | cell array of character vectors

Class labels, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors.

The data type of Y must be the same as the data type of Mdl.ClassNames. (The software treats string arrays as cell arrays of character vectors.)

The distinct classes in Y must be a subset of Mdl.ClassNames.

If Y is a character array, then each element must correspond to one row of the array.

The length of Y must be equal to the number of observations in X or Tbl.

Data Types: categorical | char | string | logical | single | double | cell
Tbl — Sample data
table

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.

If you train Mdl using sample data contained in a table, then the input data for loss must also be in a table.
ResponseVarName — Response variable name
name of variable in Tbl

Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.

If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.

The response variable must be a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: char | string
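For instance, this minimal sketch shows the table syntax. The table Tbl and the response variable name 'Y' are illustrative, not part of the loss interface.

% Hypothetical example: Tbl holds the predictors and the response Tbl.Y
Mdl = fitclinear(Tbl,'Y');    % train a linear model on table data
L = loss(Mdl,Tbl,'Y')         % input data for loss must then also be a table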
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'LossFun' — Loss function
'classiferror' (default) | 'binodeviance' | 'exponential' | 'hinge' | 'logit' | 'mincost' | 'quadratic' | function handle

Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss function name or a function handle.

The following table lists the available loss functions. Specify one using its corresponding character vector or string scalar.

| Value | Description |
|---|---|
| 'binodeviance' | Binomial deviance |
| 'classiferror' | Classification error |
| 'exponential' | Exponential |
| 'hinge' | Hinge |
| 'logit' | Logistic |
| 'mincost' | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
| 'quadratic' | Quadratic |

'mincost' is appropriate for classification scores that are posterior probabilities. For linear classification models, logistic regression learners return posterior probabilities as classification scores by default, but SVM learners do not (see predict).
Specify your own function by using function handle notation. Let n be the number of observations in X and K be the number of distinct classes (numel(Mdl.ClassNames), where Mdl is the input model). Your function must have this signature:

lossvalue = lossfun(C,S,W,Cost)

The output argument lossvalue is a scalar.

You choose the function name (lossfun).

C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in Mdl.ClassNames. Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.

S is an n-by-K numeric matrix of classification scores, similar to the output of predict. The column order corresponds to the class order in Mdl.ClassNames.

W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.

Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.

Example: 'LossFun',@lossfun

Data Types: char | string | function_handle
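For illustration, here is a minimal sketch of a custom loss function with the required signature. The function name and the loss it computes (a weighted classification error) are illustrative.

function lossvalue = lossfun(C,S,W,~)
% Weighted classification error as a custom loss (illustrative sketch)
% C - n-by-K logical matrix of true class memberships
% S - n-by-K matrix of classification scores
% W - n-by-1 vector of observation weights, normalized to sum to 1
[~,trueClass] = max(C,[],2);   % column index of the true class per row
[~,predClass] = max(S,[],2);   % column index of the highest-scoring class
lossvalue = sum(W.*(predClass ~= trueClass));
end

Save the function on the MATLAB path, and then pass it to loss as 'LossFun',@lossfun.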
'ObservationsIn' — Predictor data observation dimension
'rows' (default) | 'columns'

Predictor data observation dimension, specified as the comma-separated pair consisting of 'ObservationsIn' and 'columns' or 'rows'.

Note: If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in optimization execution time. You cannot specify 'ObservationsIn','columns' for predictor data in a table.
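For example, this sketch (assuming a trained model Mdl and a predictor matrix X with observations in rows) passes the transposed data in the column orientation:

Xt = X';                                        % observations now in columns
L = loss(Mdl,Xt,Y,'ObservationsIn','columns');  % same loss, often faster for sparse X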
'Weights' — Observation weights
ones(size(X,1),1) (default) | numeric vector | name of variable in Tbl

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector or the name of a variable in Tbl.

If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of observations in X or Tbl.

If you specify Weights as the name of a variable in Tbl, then the name must be a character vector or string scalar. For example, if the weights are stored as Tbl.W, then specify Weights as 'W'. Otherwise, the software treats all columns of Tbl, including Tbl.W, as predictors.

If you supply weights, then for each regularization strength, loss computes the weighted classification loss and normalizes the weights to sum to the value of the prior probability in the respective class.

Data Types: double | single
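For example, this sketch supplies a numeric weight vector; the weighting scheme is illustrative:

w = linspace(0.5,1.5,size(X,1))';   % illustrative per-observation weights
L = loss(Mdl,X,Y,'Weights',w);      % weighted classification loss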
L — Classification losses
numeric scalar | numeric row vector

Classification losses, returned as a numeric scalar or row vector. L contains a classification loss for each regularization strength in Mdl.

Estimate Test-Sample Classification Loss

Load the NLP data set.
load nlpdata
X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. There are more than two classes in the data.
The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to the Statistics and Machine Learning Toolbox™ documentation web pages.
Ystats = Y == 'stats';
Train a binary, linear classification model that can identify whether the word counts in a documentation web page are from the Statistics and Machine Learning Toolbox™ documentation. Specify to hold out 30% of the observations. Optimize the objective function using SpaRSA.
rng(1); % For reproducibility
CVMdl = fitclinear(X,Ystats,'Solver','sparsa','Holdout',0.30);
CMdl = CVMdl.Trained{1};
CVMdl is a ClassificationPartitionedLinear model. It contains the property Trained, which is a 1-by-1 cell array holding a ClassificationLinear model that the software trained using the training set.
Extract the training and test data from the partition definition.
trainIdx = training(CVMdl.Partition);
testIdx = test(CVMdl.Partition);
Estimate the training- and test-sample classification error.
ceTrain = loss(CMdl,X(trainIdx,:),Ystats(trainIdx))
ceTrain = 1.3572e-04
ceTest = loss(CMdl,X(testIdx,:),Ystats(testIdx))
ceTest = 5.2804e-04
Because there is one regularization strength in CMdl, ceTrain and ceTest are numeric scalars.
Specify Custom Classification Loss

Load the NLP data set. Preprocess the data as in Estimate Test-Sample Classification Loss, and transpose the predictor data.
load nlpdata
Ystats = Y == 'stats';
X = X';
Train a binary, linear classification model. Specify to hold out 30% of the observations. Optimize the objective function using SpaRSA. Specify that the predictor observations correspond to columns.
rng(1); % For reproducibility
CVMdl = fitclinear(X,Ystats,'Solver','sparsa','Holdout',0.30,...
    'ObservationsIn','columns');
CMdl = CVMdl.Trained{1};
CVMdl is a ClassificationPartitionedLinear model. It contains the property Trained, which is a 1-by-1 cell array holding a ClassificationLinear model that the software trained using the training set.
Extract the training and test data from the partition definition.
trainIdx = training(CVMdl.Partition);
testIdx = test(CVMdl.Partition);
Create an anonymous function that measures linear loss, that is,

$$L = \frac{\sum_{j} -w_{j} y_{j} f_{j}}{\sum_{j} w_{j}},$$

where $w_{j}$ is the weight for observation j, $y_{j}$ is response j (-1 for the negative class, and 1 otherwise), and $f_{j}$ is the raw classification score of observation j. Custom loss functions must be written in a particular form. For rules on writing a custom loss function, see the LossFun name-value pair argument.
linearloss = @(C,S,W,Cost)sum(-W.*sum(S.*C,2))/sum(W);
Estimate the training- and test-sample classification loss using the linear loss function.
ceTrain = loss(CMdl,X(:,trainIdx),Ystats(trainIdx),'LossFun',linearloss,...
    'ObservationsIn','columns')
ceTrain = -7.8330
ceTest = loss(CMdl,X(:,testIdx),Ystats(testIdx),'LossFun',linearloss,...
    'ObservationsIn','columns')
ceTest = -7.7383
Find Good Lasso Penalty Using Classification Loss

To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare test-sample classification error rates.
Load the NLP data set. Preprocess the data as in Specify Custom Classification Loss.
load nlpdata
Ystats = Y == 'stats';
X = X';
rng(10); % For reproducibility
Partition = cvpartition(Ystats,'Holdout',0.30);
testIdx = test(Partition);
XTest = X(:,testIdx);
YTest = Ystats(testIdx);
Create a set of 11 logarithmically spaced regularization strengths from $10^{-6}$ through $10^{-0.5}$.
Lambda = logspace(-6,-0.5,11);
Train binary, linear classification models that use each of the regularization strengths. Optimize the objective function using SpaRSA. Lower the tolerance on the gradient of the objective function to 1e-8.
CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'CVPartition',Partition,'Learner','logistic','Solver','sparsa',...
    'Regularization','lasso','Lambda',Lambda,'GradientTolerance',1e-8)
CVMdl = 
  ClassificationPartitionedLinear
    CrossValidatedModel: 'Linear'
           ResponseName: 'Y'
        NumObservations: 31572
                  KFold: 1
              Partition: [1x1 cvpartition]
             ClassNames: [0 1]
         ScoreTransform: 'none'

  Properties, Methods
Extract the trained linear classification model.
Mdl = CVMdl.Trained{1}
Mdl = 
  ClassificationLinear
      ResponseName: 'Y'
        ClassNames: [0 1]
    ScoreTransform: 'logit'
              Beta: [34023x11 double]
              Bias: [1x11 double]
            Lambda: [1x11 double]
           Learner: 'logistic'

  Properties, Methods
Mdl is a ClassificationLinear model object. Because Lambda is a sequence of regularization strengths, you can think of Mdl as 11 models, one for each regularization strength in Lambda.
Estimate the test-sample classification error.
ce = loss(Mdl,X(:,testIdx),Ystats(testIdx),'ObservationsIn','columns');
Because there are 11 regularization strengths, ce is a 1-by-11 vector of classification error rates.
Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a classifier. For each regularization strength, train a linear classification model using the entire data set and the same options as when you cross-validated the models. Determine the number of nonzero coefficients per model.
Mdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',Lambda,'GradientTolerance',1e-8);
numNZCoeff = sum(Mdl.Beta~=0);
In the same figure, plot the test-sample error rates and frequency of nonzero coefficients for each regularization strength. Plot all variables on the log scale.
figure;
[h,hL1,hL2] = plotyy(log10(Lambda),log10(ce),...
    log10(Lambda),log10(numNZCoeff + 1));
hL1.Marker = 'o';
hL2.Marker = 'o';
ylabel(h(1),'log_{10} classification error')
ylabel(h(2),'log_{10} nonzero-coefficient frequency')
xlabel('log_{10} Lambda')
title('Test-Sample Statistics')
hold off
Choose the index of the regularization strength that balances predictor variable sparsity and low classification error. In this case, a value between $10^{-4}$ and $10^{-1}$ should suffice.
idxFinal = 7;
Select the model from Mdl with the chosen regularization strength.
MdlFinal = selectModels(Mdl,idxFinal);
MdlFinal is a ClassificationLinear model containing one regularization strength. To estimate labels for new observations, pass MdlFinal and the new data to predict.
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
L is the weighted average classification loss.
n is the sample size.
For binary classification:
yj is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class, respectively.
f(Xj) is the raw classification score for observation (row) j of the predictor data X.
mj = yjf(Xj) is the classification score for classifying observation j into the class corresponding to yj. Positive values of mj indicate correct classification and do not contribute much to the average loss. Negative values of mj indicate incorrect classification and contribute significantly to the average loss.
For algorithms that support multiclass classification (that is, K ≥ 3):
yj* is a vector of K – 1 zeros, with 1 in the position corresponding to the true, observed class yj. For example, if the true class of the second observation is the third class and K = 4, then y2* = [0 0 1 0]′. The order of the classes corresponds to the order in the ClassNames property of the input model.
f(Xj) is the length-K vector of class scores for observation j of the predictor data X. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.
mj = yj*′f(Xj). Therefore, mj is the scalar classification score that the model predicts for the true, observed class.
The weight for observation j is wj. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so they sum to 1. Therefore,

$$\sum_{j=1}^{n} w_{j} = 1.$$
Given this scenario, the following table describes the supported loss functions that you can specify by using the 'LossFun' name-value pair argument.

| Loss Function | Value of LossFun | Equation |
|---|---|---|
| Binomial deviance | 'binodeviance' | $L = \sum_{j=1}^{n} w_{j} \log\{1 + \exp[-2m_{j}]\}$ |
| Exponential loss | 'exponential' | $L = \sum_{j=1}^{n} w_{j} \exp(-m_{j})$ |
| Classification error | 'classiferror' | $L = \sum_{j=1}^{n} w_{j} I\{\hat{y}_{j} \ne y_{j}\}$, the weighted fraction of misclassified observations, where $\hat{y}_{j}$ is the class label corresponding to the class with the maximal posterior probability and $I\{x\}$ is the indicator function. |
| Hinge loss | 'hinge' | $L = \sum_{j=1}^{n} w_{j} \max\{0, 1 - m_{j}\}$ |
| Logit loss | 'logit' | $L = \sum_{j=1}^{n} w_{j} \log(1 + \exp(-m_{j}))$ |
| Minimal cost | 'mincost' | The software computes the weighted minimal cost using this procedure for observations j = 1,...,n: (1) estimate the 1-by-K vector of expected classification costs for observation j, $\gamma_{j} = f(X_{j})'C$, where $f(X_{j})$ is the column vector of class posterior probabilities and C is the cost matrix stored in the Cost property of the input model; (2) predict the class label corresponding to the minimum expected classification cost, $\hat{y}_{j} = \operatorname{argmin}_{k} \gamma_{jk}$; (3) using C, identify the cost incurred, $c_{j}$, for making the prediction. The weighted, average, minimum cost loss is $L = \sum_{j=1}^{n} w_{j} c_{j}$. |
| Quadratic loss | 'quadratic' | $L = \sum_{j=1}^{n} w_{j} (1 - m_{j})^{2}$ |
This figure compares the loss functions (except 'mincost') for one observation over m. Some functions are normalized to pass through the point (0,1).
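As a sanity check on these definitions, the following sketch computes the logit loss manually from the margins and compares it with the built-in 'logit' option. It assumes a binary ClassificationLinear model Mdl that returns raw classification scores (for example, an SVM learner), logical labels Y, and default observation weights with empirical priors, so that every normalized weight is 1/n.

[~,score] = predict(Mdl,X);   % column order follows Mdl.ClassNames
f = score(:,2);               % score for the positive (second) class
m = (2*Y - 1).*f;             % margins, with labels coded as -1 or 1
Lmanual = mean(log(1 + exp(-m)))            % logit loss from the formula
Lbuiltin = loss(Mdl,X,Y,'LossFun','logit')  % built-in logit loss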
By default, observation weights are prior class probabilities. If you supply weights using Weights, then the software normalizes them to sum to the prior probabilities in the respective classes. The software uses the renormalized weights to estimate the weighted classification loss.
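A sketch of this renormalization, assuming numeric or logical labels Y, a user-supplied weight vector w, and the priors stored in Mdl.Prior:

wNorm = w;
for k = 1:numel(Mdl.ClassNames)
    idx = (Y == Mdl.ClassNames(k));                 % observations in class k
    wNorm(idx) = Mdl.Prior(k)*w(idx)/sum(w(idx));   % class weights sum to the prior
end
% sum(wNorm) is 1 because the normalized priors sum to 1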
Usage notes and limitations: loss does not support tall table data. For more information, see Tall Arrays.