fit

Train incremental learning model

Description

The fit function fits a configured incremental learning model for linear regression (incrementalRegressionLinear object) or linear binary classification (incrementalClassificationLinear object) to streaming data. To additionally track performance metrics using the data as it arrives, use updateMetricsAndFit instead.

To fit or cross-validate a regression or classification model to an entire batch of data at once, see the other machine learning models in Regression or Classification.

Mdl = fit(Mdl,X,Y) returns an incremental learning model Mdl, which represents the input incremental learning model Mdl trained using the predictor and response data, X and Y respectively. Specifically, fit implements the following procedure:

  1. Initialize the solver with the configurations and linear model coefficient and bias estimates of the input incremental learning model Mdl.

  2. Fit the model to the data, and store the updated coefficient estimates and configurations in the output model Mdl.

The input and output models are the same data type.

Mdl = fit(Mdl,X,Y,Name,Value) uses additional options specified by one or more name-value pair arguments. For example, you can specify observation weights or that the columns of the predictor data matrix correspond to observations.
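
The following minimal sketch illustrates both syntaxes on one chunk of simulated streaming data. The variable names Xchunk, Ychunk, and Wchunk are hypothetical and used only for illustration.

Mdl = incrementalClassificationLinear;   % configured incremental model
Xchunk = randn(50,4);                    % chunk of 50 observations and 4 predictors
Ychunk = randn(50,1) > 0;                % chunk of binary labels
Wchunk = ones(50,1);                     % observation weights

% Basic syntax: fit the model to the chunk.
Mdl = fit(Mdl,Xchunk,Ychunk);

% Name-value syntax: observations in columns, with observation weights.
Mdl = fit(Mdl,Xchunk',Ychunk,'ObservationsIn','columns','Weights',Wchunk);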

Examples

Create an incremental linear SVM model for binary classification. Specify an estimation period of 5,000 observations and the stochastic gradient descent (SGD) solver.

Mdl = incrementalClassificationLinear('EstimationPeriod',5000,'Solver','sgd')
Mdl = 
  incrementalClassificationLinear

            IsWarm: 0
           Metrics: [1×2 table]
        ClassNames: [1×0 double]
    ScoreTransform: 'none'
              Beta: [0×1 double]
              Bias: 0
           Learner: 'svm'


  Properties, Methods

Mdl is an incrementalClassificationLinear model. All its properties are read-only.

Mdl must be fit to data before you can perform any other operations using it.

Load the human activity data set. Randomly shuffle the data.

load humanactivity
n = numel(actid);
rng(1) % For reproducibility
idx = randsample(n,n);
X = feat(idx,:);
Y = actid(idx);

For details on the data set, enter Description at the command line.

Responses can be one of five classes. Dichotomize the response by identifying whether the subject is moving (actid > 2).

Y = Y > 2;

Use fit to fit the incremental model to the training data, in chunks of 50 observations at a time, to simulate a data stream. At each iteration:

  • Process 50 observations.

  • Overwrite the previous incremental model with a new one fitted to the incoming observations.

  • Store β1, the number of training observations, and the prior probability of whether the subject moved (Y = true) to monitor their evolution during incremental training.

% Preallocation
numObsPerChunk = 50;
nchunk = floor(n/numObsPerChunk);
beta1 = zeros(nchunk,1);    
numtrainobs = zeros(nchunk,1);
priormoved = zeros(nchunk,1);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1);
    iend   = min(n,numObsPerChunk*j);
    idx = ibegin:iend;    
    Mdl = fit(Mdl,X(idx,:),Y(idx));
    beta1(j) = Mdl.Beta(1);
    numtrainobs(j) = Mdl.NumTrainingObservations; 
    priormoved(j) = Mdl.Prior(Mdl.ClassNames == true);
end

Mdl is an incrementalClassificationLinear model object that has experienced all the data in the stream.

To see how the parameters evolved during incremental learning, plot them on separate subplots.

figure;
subplot(2,2,1)
plot(beta1)
ylabel('\beta_1')
xline(Mdl.EstimationPeriod/numObsPerChunk,'r-.');
xlabel('Iteration')
axis tight
subplot(2,2,2)
plot(numtrainobs);
ylabel('Number of Training Observations')
xline(Mdl.EstimationPeriod/numObsPerChunk,'r-.');
xlabel('Iteration')
axis tight
subplot(2,2,3)
plot(priormoved);
ylabel('Prior P(Subject Moved)')
xline(Mdl.EstimationPeriod/numObsPerChunk,'r-.');
xlabel('Iteration')
axis tight

The plot suggests that fit does not fit the model to the data or update the parameters until after the estimation period.
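
As a quick check, you can compare the model's record-keeping properties with the length of the stream. This sketch assumes the variables from the loop above are still in the workspace; because fit does not train during the estimation period, expect NumTrainingObservations to be smaller than the number of observations passed to fit.

% Sketch: compare observations passed to fit with observations used for training.
obsPassed = nchunk*numObsPerChunk;
fprintf('Observations passed to fit:     %d\n',obsPassed)
fprintf('Estimation period:              %d\n',Mdl.EstimationPeriod)
fprintf('Observations used for training: %d\n',Mdl.NumTrainingObservations)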

Train a linear model for binary classification by using fitclinear, convert it to an incremental learner, and then track its performance and fit it to streaming chunks of data. Orient the observations in columns, and specify observation weights.

Load and Preprocess Data

Load the human activity data set. Randomly shuffle the data. Orient the observations of the predictor data in columns.

load humanactivity
rng(1); % For reproducibility
n = numel(actid);
idx = randsample(n,n);
X = feat(idx,:)';
Y = actid(idx);

For details on the data set, enter Description at the command line.

Responses can be one of five classes. Dichotomize the response by identifying whether the subject is moving (actid > 2).

Y = Y > 2;

Suppose that the data collected when the subject was not moving (Y = false) has double the quality of the data collected when the subject was moving. Create a weight variable that assigns a weight of 2 to observations from a still subject and a weight of 1 to observations from a moving subject.

W = ones(n,1) + ~Y;

Train Linear Model for Binary Classification

Fit a linear model for binary classification to a random sample of half the data.

idxtt = randsample([true false],n,true);
TTMdl = fitclinear(X(:,idxtt),Y(idxtt),'ObservationsIn','columns',...
    'Weights',W(idxtt))
TTMdl = 
  ClassificationLinear
      ResponseName: 'Y'
        ClassNames: [0 1]
    ScoreTransform: 'none'
              Beta: [60×1 double]
              Bias: -0.1107
            Lambda: 8.2967e-05
           Learner: 'svm'


  Properties, Methods

TTMdl is a ClassificationLinear model object representing a traditionally trained linear model for binary classification.

Convert Trained Model

Convert the traditionally trained classification model to a binary classification linear model for incremental learning.

IncrementalMdl = incrementalLearner(TTMdl)
IncrementalMdl = 
  incrementalClassificationLinear

            IsWarm: 1
           Metrics: [1×2 table]
        ClassNames: [0 1]
    ScoreTransform: 'none'
              Beta: [60×1 double]
              Bias: -0.1107
           Learner: 'svm'


  Properties, Methods

Separately Track Performance Metrics and Fit Model

Use the updateMetrics and fit functions to perform incremental learning on the rest of the data. Simulate a data stream by processing 50 observations at a time. At each iteration:

  1. Call updateMetrics to update the cumulative and window classification error of the model given the incoming chunk of observations. Overwrite the previous incremental model to update the losses in the Metrics property. Note that the function does not fit the model to the chunk of data — the chunk is "new" data for the model. Specify that the observations are oriented in columns, and specify the observation weights.

  2. Call fit to fit the model to the incoming chunk of observations. Overwrite the previous incremental model to update the model parameters. Specify that the observations are oriented in columns, and specify the observation weights.

  3. Store the classification error and first estimated coefficient β1.

% Preallocation
idxil = ~idxtt;
nil = sum(idxil);
numObsPerChunk = 50;
nchunk = floor(nil/numObsPerChunk);
ce = array2table(zeros(nchunk,2),'VariableNames',["Cumulative" "Window"]);
beta1 = [IncrementalMdl.Beta(1); zeros(nchunk,1)];
Xil = X(:,idxil);
Yil = Y(idxil);
Wil = W(idxil);

% Incremental fitting
for j = 1:nchunk
    ibegin = min(nil,numObsPerChunk*(j-1) + 1);
    iend   = min(nil,numObsPerChunk*j);
    idx = ibegin:iend;
    IncrementalMdl = updateMetrics(IncrementalMdl,Xil(:,idx),Yil(idx),...
        'ObservationsIn','columns','Weights',Wil(idx));
    ce{j,:} = IncrementalMdl.Metrics{"ClassificationError",:};
    IncrementalMdl = fit(IncrementalMdl,Xil(:,idx),Yil(idx),'ObservationsIn','columns',...
        'Weights',Wil(idx));
    beta1(j + 1) = IncrementalMdl.Beta(1);
end

IncrementalMdl is an incrementalClassificationLinear model object that has experienced all the data in the stream.

Alternatively, you can use updateMetricsAndFit to update performance metrics of the model given a new chunk of data, and then fit the model to the data.
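
For example, the two calls in the loop body above could be collapsed into one. This sketch assumes the same chunk variables (Xil, Yil, Wil, and idx) as in the loop; because updateMetricsAndFit measures performance on the chunk before training on it, the chunk still serves as a test set for the metrics.

% Sketch: combined metrics update and fit in a single call.
IncrementalMdl = updateMetricsAndFit(IncrementalMdl,Xil(:,idx),Yil(idx),...
    'ObservationsIn','columns','Weights',Wil(idx));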

Plot trace plots of the performance metrics and the estimated coefficient β1.

figure;
subplot(2,1,1)
h = plot(ce.Variables);
xlim([0 nchunk]);
ylabel('Classification Error')
legend(h,ce.Properties.VariableNames)
subplot(2,1,2)
plot(beta1)
ylabel('\beta_1')
xlim([0 nchunk]);
xlabel('Iteration')

The cumulative loss is stable and gradually decreases, whereas the window loss jumps.

β1 changes gradually, then levels off, as fit processes more chunks.

Incrementally train a linear regression model only when its performance degrades.

Load and shuffle the 2015 NYC housing data set. For more details on the data, see NYC Open Data.

load NYCHousing2015

rng(1) % For reproducibility
n = size(NYCHousing2015,1);
shuffidx = randsample(n,n);
NYCHousing2015 = NYCHousing2015(shuffidx,:);

Extract the response variable SALEPRICE from the table. For numerical stability, scale SALEPRICE by 1e6.

Y = NYCHousing2015.SALEPRICE/1e6;
NYCHousing2015.SALEPRICE = [];

Create dummy variable matrices from the categorical predictors.

catvars = ["BOROUGH" "BUILDINGCLASSCATEGORY" "NEIGHBORHOOD"];
dumvarstbl = varfun(@(x)dummyvar(categorical(x)),NYCHousing2015,...
    'InputVariables',catvars);
dumvarmat = table2array(dumvarstbl);
NYCHousing2015(:,catvars) = [];

Treat all other numeric variables in the table as linear predictors of sales price. Concatenate the matrix of dummy variables to the rest of the predictor data.

idxnum = varfun(@isnumeric,NYCHousing2015,'OutputFormat','uniform');
X = [dumvarmat NYCHousing2015{:,idxnum}];

Configure a linear regression model for incremental learning so that it does not have an estimation or metrics warm-up period, and the metrics window size is 1000. Fit the configured model to the first 100 observations.

Mdl = incrementalRegressionLinear('EstimationPeriod',0,'MetricsWarmupPeriod',0,'MetricsWindowSize',1000);
numObsPerChunk = 100;
Mdl = fit(Mdl,X(1:numObsPerChunk,:),Y(1:numObsPerChunk));

Mdl is an incrementalRegressionLinear model object.

Perform incremental learning, with conditional fitting, by following this procedure for each iteration.

  • Simulate a data stream by processing a chunk of 100 observations at a time.

  • Update the model performance by computing the epsilon insensitive loss within the configured metrics window.

  • Fit the model to the chunk of data only when the loss more than doubles from the minimum loss experienced.

  • When tracking performance and fitting, overwrite the previous incremental model.

  • Store the epsilon insensitive loss and β313 to see the evolution of the loss and coefficient.

  • Track when fit trains the model.

% Preallocation
n = numel(Y) - numObsPerChunk;
nchunk = floor(n/numObsPerChunk);
beta313 = zeros(nchunk,1);
ei = array2table(nan(nchunk,2),'VariableNames',["Cumulative" "Window"]);
trained = false(nchunk,1);

% Incremental fitting
for j = 2:nchunk
    ibegin = min(n,numObsPerChunk*(j-1) + 1);
    iend   = min(n,numObsPerChunk*j);
    idx = ibegin:iend;
    Mdl = updateMetrics(Mdl,X(idx,:),Y(idx));
    ei{j,:} = Mdl.Metrics{"EpsilonInsensitiveLoss",:};
    minei = min(ei{:,2});
    pdiffloss = (ei{j,2} - minei)/minei*100;
    if pdiffloss > 100
        Mdl = fit(Mdl,X(idx,:),Y(idx));
        trained(j) = true;
    end    
    beta313(j) = Mdl.Beta(end);
end

Mdl is an incrementalRegressionLinear model object that has experienced all the data in the stream.

To see how the model performance and β313 evolved during training, plot them on separate subplots.

subplot(2,1,1)
plot(beta313)
hold on
plot(find(trained),beta313(trained),'r.')
ylabel('\beta_{313}')
xline(Mdl.EstimationPeriod/numObsPerChunk,'r-.');
legend('\beta_{313}','Training occurs','Location','southeast')
hold off
subplot(2,1,2)
plot(ei.Variables)
ylabel('Epsilon Insensitive Loss')
xline(Mdl.EstimationPeriod/numObsPerChunk,'r-.');
xlabel('Iteration')
legend(ei.Properties.VariableNames)

The trace plot of β313 shows periods of constant values, during which the loss did not double from the minimum experienced.
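
Because the loop records when training occurs, you can also summarize how often the condition triggered a fit. This sketch uses the trained flag from the loop above.

% Sketch: count the chunks on which the conditional fit was triggered.
fprintf('fit was called on %d of %d chunks.\n',nnz(trained),nchunk)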

Input Arguments

Incremental learning model to fit to streaming data, specified as an incrementalClassificationLinear or incrementalRegressionLinear model object. You can create Mdl directly or by converting a supported, traditionally trained machine learning model using the incrementalLearner function. For more details, see the corresponding reference page.

Chunk of predictor data to which the model is fit, specified as a floating-point matrix of n observations and Mdl.NumPredictors predictor variables. The value of the 'ObservationsIn' name-value pair argument determines the orientation of the variables and observations.

The length of the observation labels Y and the number of observations in X must be equal; Y(j) is the label of observation j (row or column) in X.

Note

  • If Mdl.NumPredictors = 0, fit infers the number of predictors from X, and sets the corresponding property of the output model. Otherwise, if the number of predictor variables in the streaming data changes from Mdl.NumPredictors, fit issues an error.

  • fit supports only floating-point input predictor data. If the input model Mdl represents a converted, traditionally trained model fit to categorical data, use dummyvar to convert each categorical variable to a numeric matrix of dummy variables, and concatenate all dummy variable matrices and any other numeric predictors. For more details, see Dummy Variables and the sketch after this argument description.

Data Types: single | double
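
The following is a minimal sketch of that conversion. The toy data and variable names are hypothetical; they are not the variables used in the examples on this page.

% Sketch: expand a categorical predictor into dummy variables before calling fit.
color = categorical(["red";"blue";"red";"green"]);   % categorical predictor
x1 = [1.2; 3.4; 0.5; 2.2];                           % numeric predictor
D = dummyvar(color);                                 % one dummy column per category
Xchunk = [D x1];                                     % floating-point matrix suitable for fit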

Chunk of labels to which the model is fit, specified as a categorical, character, or string array, logical or floating-point vector, or cell array of character vectors for classification problems; or a floating-point vector for regression problems.

The length of the observation labels Y and the number of observations in X must be equal; Y(j) is the label of observation j (row or column) in X.

For classification problems:

  • fit supports binary classification only.

  • When the ClassNames property of the input model Mdl is nonempty, the following conditions apply:

    • If Y contains a label that is not a member of Mdl.ClassNames, fit issues an error.

    • The data type of Y and Mdl.ClassNames must be the same.

Data Types: char | string | cell | categorical | logical | single | double

Note

  • If an observation (predictor or label) or weight contains at least one missing (NaN) value, fit ignores the observation. Consequently, fit uses fewer than n observations to train the model.

  • The chunk size n and the stochastic gradient descent (SGD) hyperparameter batch size (Mdl.BatchSize) can be different values. If n < Mdl.BatchSize, fit uses the n available observations when it applies SGD.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'ObservationsIn','columns','Weights',W specifies that the columns of the predictor matrix correspond to observations, and the vector W contains observation weights to apply during incremental learning.

Predictor data observation dimension, specified as the comma-separated pair consisting of 'ObservationsIn' and 'columns' or 'rows'.

Chunk of observation weights, specified as the comma-separated pair consisting of 'Weights' and a floating-point vector of positive values. fit weighs the observations in X with the corresponding values in Weights. The size of Weights must equal n, which is the number of observations in X.

By default, Weights is ones(n,1).

For more details, including normalization schemes, see Observation Weights.

Data Types: double | single

Output Arguments

Updated incremental learning model, returned as an incremental learning model object of the same data type as the input model Mdl, either incrementalClassificationLinear or incrementalRegressionLinear.

If Mdl.EstimationPeriod > 0, the incremental fitting functions updateMetricsAndFit and fit estimate hyperparameters using the first Mdl.EstimationPeriod observations passed to them; they do not train the input model on that data. However, if an incoming chunk of n observations is greater than or equal to the number of observations m remaining in the estimation period, fit estimates hyperparameters using the first m observations of the chunk, and fits the input model to the remaining n – m observations. Consequently, the software updates the Beta and Bias properties, hyperparameter properties, and record-keeping properties such as NumTrainingObservations.
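
As a concrete illustration of this bookkeeping, consider hypothetical numbers: a 2000-observation estimation period, 500 observations already processed, and an incoming chunk of 5000 observations.

% Sketch of the estimation-period split described above (hypothetical numbers).
estimationPeriod = 2000;                 % Mdl.EstimationPeriod
seen = 500;                              % observations already consumed for estimation
m = estimationPeriod - seen;             % observations left in the estimation period (1500)
n = 5000;                                % size of the incoming chunk
usedForEstimation = min(n,m)             % first 1500 observations complete estimation
usedForFitting = n - usedForEstimation   % remaining 3500 observations train the model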

For classification problems, if the ClassNames property of the input model Mdl is an empty array, fit sets the ClassNames property of the output model Mdl to unique(Y).

Tips

  • Unlike traditional training, a separate test (hold out) set might not exist for incremental learning. Therefore, to treat each incoming chunk of data as a test set, pass the incremental model and each incoming chunk to updateMetrics before training the model on the same data.

Algorithms

Observation Weights

For classification problems, if the prior class probability distribution is known (Mdl.Prior is not composed of NaN values), fit normalizes observation weights to sum to the prior class probabilities in the respective classes. This action implies that observation weights are the respective prior class probabilities by default.

For regression problems or if the prior class probability distribution is unknown, the software normalizes the specified observation weights to sum to 1 each time you call fit.
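
The following sketch illustrates the normalization described above with hypothetical weights, labels, and priors. It shows only the arithmetic of the scheme, not a call into the toolbox.

% Sketch: classification weight normalization with known priors (hypothetical values).
w = [2 2 1 1 1];                          % specified observation weights
y = logical([0 0 1 1 1]);                 % binary class labels
prior = [0.4 0.6];                        % known prior class probabilities (Mdl.Prior)
wn = zeros(size(w));
wn(~y) = w(~y)/sum(w(~y))*prior(1);       % class-0 weights sum to prior(1)
wn(y)  = w(y)/sum(w(y))*prior(2);         % class-1 weights sum to prior(2)

% For regression problems or unknown priors, the weights are normalized to sum to 1.
wn_reg = w/sum(w);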

Introduced in R2020b