Model Building and Assessment

Feature selection, model selection, hyperparameter optimization, cross-validation, predictive performance evaluation, and classification accuracy comparison tests

When you build a high-quality predictive classification model, it is important to select the right features (or predictors) and to tune hyperparameters (model parameters whose values are set before training rather than estimated from the data).
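
For example, you can rank candidate predictors before training a model. The following is a minimal sketch, assuming the bundled fisheriris data set and the MRMR ranking algorithm (any of the feature-ranking functions listed below could be substituted):

    load fisheriris                        % meas: 150-by-4 predictors, species: labels
    [idx,scores] = fscmrmr(meas,species);  % rank predictors by MRMR score
    topFeatures = idx(1:2);                % keep the two highest-ranked predictors
    mdl = fitctree(meas(:,topFeatures),species);  % train on the selected features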

To tune hyperparameters of a specific model, select the hyperparameter values and cross-validate the model using those values. For example, to tune an SVM model, choose a set of box constraints and kernel scales, and then cross-validate a model for each pair of values. Certain Statistics and Machine Learning Toolbox™ classification functions offer automatic hyperparameter tuning through Bayesian optimization, grid search, or random search. However, the main function used to implement Bayesian optimization, bayesopt, is flexible enough for use in other applications. See Bayesian Optimization Workflow.
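
The following is a minimal sketch of such a grid search, assuming the bundled ionosphere data set (fitcsvm requires a binary problem) and an illustrative logarithmic grid:

    load ionosphere                         % X: 351-by-34 predictors, Y: 'b'/'g' labels
    boxes  = 10.^(-2:2);                    % candidate box constraints
    scales = 10.^(-2:2);                    % candidate kernel scales
    cvloss = zeros(numel(boxes),numel(scales));
    for i = 1:numel(boxes)
        for j = 1:numel(scales)
            cvmdl = fitcsvm(X,Y,'KernelFunction','rbf', ...
                'BoxConstraint',boxes(i),'KernelScale',scales(j),'KFold',5);
            cvloss(i,j) = kfoldLoss(cvmdl); % 5-fold misclassification rate
        end
    end
    [~,k] = min(cvloss(:));                 % locate the best-performing pair
    [iBest,jBest] = ind2sub(size(cvloss),k);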

Feature selection and hyperparameter tuning can yield multiple models. You can compare the k-fold misclassification rates, receiver operating characteristic (ROC) curves, or confusion matrices among the models. Or, conduct a statistical test to detect whether a classification model significantly outperforms another.
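
For example, testckfold can test whether two learners differ significantly in predictive accuracy. A minimal sketch, assuming the ionosphere data set and two illustrative learner templates:

    load ionosphere
    rng(1)                            % make the repeated partitions reproducible
    t1 = templateTree;                % classification tree learner
    t2 = templateKNN;                 % k-nearest-neighbor learner
    [h,p] = testckfold(t1,t2,X,X,Y)   % h = 1 rejects equal predictive accuracies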

To automatically select a model with tuned hyperparameters, use fitcauto. This function tries a selection of classification model types with different hyperparameter values and returns a final model that is expected to perform well on new data. Use fitcauto when you are uncertain which classifier types best suit your data.
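
A minimal sketch, assuming the ionosphere data set; the optimization options shown are illustrative:

    load ionosphere
    rng('default')                    % make the search reproducible
    Mdl = fitcauto(X,Y, ...           % tries several learner types and hyperparameters
        'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',30));
    label = predict(Mdl,X(1,:))       % use the returned model like any other classifier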

To build and assess classification models interactively, use the Classification Learner app.

To interpret a classification model, you can use lime or plotPartialDependence.
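
For example, the following is a minimal LIME sketch, assuming a tree model trained on the fisheriris data; the query point and the number of important predictors are illustrative:

    load fisheriris
    mdl = fitctree(meas,species);             % any classification model works here
    explainer = lime(mdl);                    % explainer built from the model's training data
    queryPoint = meas(10,:);                  % the observation to explain
    explainer = fit(explainer,queryPoint,2);  % fit a simple local model with 2 predictors
    plot(explainer)                           % bar chart of the local coefficients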

Apps

Classification Learner - Train models to classify data using supervised machine learning

Functions

Feature Selection

fscchi2 - Univariate feature ranking for classification using chi-square tests
fscmrmr - Rank features for classification using minimum redundancy maximum relevance (MRMR) algorithm
fscnca - Feature selection using neighborhood component analysis for classification
oobPermutedPredictorImportance - Predictor importance estimates by permutation of out-of-bag predictor observations for random forest of classification trees
predictorImportance - Estimates of predictor importance for classification tree
predictorImportance - Estimates of predictor importance for classification ensemble of decision trees
sequentialfs - Sequential feature selection using custom criterion
relieff - Rank importance of predictors using ReliefF or RReliefF algorithm

Automated Model Selection

fitcauto - Automatically select classification model with optimized hyperparameters

Hyperparameter Optimization

bayesopt - Select optimal machine learning hyperparameters using Bayesian optimization
hyperparameters - Variable descriptions for optimizing a fit function
optimizableVariable - Variable description for bayesopt or other optimizers

Cross-Validation

crossval - Estimate loss using cross-validation
cvpartition - Partition data for cross-validation
repartition - Repartition data for cross-validation
test - Test indices for cross-validation
training - Training indices for cross-validation
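
As a brief illustration of the cross-validation functions above, the following sketch estimates a misclassification rate with a stratified partition; the tree learner, fold count, and ionosphere data set are illustrative choices:

    load ionosphere                            % X: predictors, Y: 'b'/'g' labels
    rng(0)                                     % reproducible fold assignments
    c = cvpartition(Y,'KFold',10);             % stratified 10-fold partition
    predfun = @(Xtr,ytr,Xte) predict(fitctree(Xtr,ytr),Xte);
    mcr = crossval('mcr',X,Y,'Predfun',predfun,'Partition',c)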

Local Interpretable Model-Agnostic Explanations (LIME)

lime - Local interpretable model-agnostic explanations (LIME)
fit - Fit simple model of local interpretable model-agnostic explanations (LIME)
plot - Plot results of local interpretable model-agnostic explanations (LIME)

Partial Dependence

partialDependence - Compute partial dependence
plotPartialDependence - Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots

Performance Evaluation

confusionchart - Create confusion matrix chart for classification problem
confusionmat - Compute confusion matrix for classification problem
perfcurve - Receiver operating characteristic (ROC) curve or other performance curve for classifier output
testcholdout - Compare predictive accuracies of two classification models
testckfold - Compare accuracies of two classification models by repeated cross-validation
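
As a brief illustration of these evaluation functions, the following sketch scores a cross-validated SVM; the model choice and the ionosphere data set are illustrative:

    load ionosphere
    cvmdl = fitcsvm(X,Y,'KFold',5);        % 5-fold cross-validated SVM
    [label,score] = kfoldPredict(cvmdl);   % out-of-fold labels and scores
    confusionchart(Y,label)                % confusion matrix chart
    % score column 2 corresponds to 'g' because ClassNames sort to {'b','g'}
    [fpr,tpr,~,auc] = perfcurve(Y,score(:,2),'g');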

Objects

FeatureSelectionNCAClassification - Feature selection for classification using neighborhood component analysis (NCA)
BayesianOptimization - Bayesian optimization results

Topics

Classification Learner App

Train Classification Models in Classification Learner App

Workflow for training, comparing, and improving classification models, including automated, manual, and parallel training.

Assess Classifier Performance in Classification Learner

Compare model accuracy scores, visualize results by plotting class predictions, and check performance per class in the Confusion Matrix.

Feature Selection and Feature Transformation Using Classification Learner App

Identify useful predictors using plots, manually select features to include, and transform features using PCA in Classification Learner.

Feature Selection

Introduction to Feature Selection

Learn about feature selection algorithms and explore the functions available for feature selection.

Sequential Feature Selection

This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.

Neighborhood Component Analysis (NCA) Feature Selection

Neighborhood component analysis (NCA) is a non-parametric method for selecting features with the goal of maximizing prediction accuracy of regression and classification algorithms.

Tune Regularization Parameter to Detect Features Using NCA for Classification

This example shows how to tune the regularization parameter in fscnca using cross-validation.

Regularize Discriminant Analysis Classifier

Make a simpler, more robust model by removing predictors without compromising its predictive power.

Selecting Features for Classifying High-Dimensional Data

This example shows how to select features for classifying high-dimensional data.

Automated Model Selection

Automated Classifier Selection with Bayesian Optimization

Use fitcauto to automatically try a selection of classification model types with different hyperparameter values, given training predictor and response data.

Hyperparameter Optimization

Bayesian Optimization Workflow

Perform Bayesian optimization using a fit function or by calling bayesopt directly.

Variables for a Bayesian Optimization

Create variables for Bayesian optimization.

Bayesian Optimization Objective Functions

Create the objective function for Bayesian optimization.

Constraints in Bayesian Optimization

Set different types of constraints for Bayesian optimization.

Optimize a Cross-Validated SVM Classifier Using bayesopt

Minimize cross-validation loss using Bayesian optimization.

Optimize an SVM Classifier Fit Using Bayesian Optimization

Minimize cross-validation loss using the OptimizeHyperparameters name-value pair in a fitting function.

Bayesian Optimization Plot Functions

Visually monitor a Bayesian optimization.

Bayesian Optimization Output Functions

Monitor a Bayesian optimization.

Bayesian Optimization Algorithm

Understand the underlying algorithms for Bayesian optimization.

Parallel Bayesian Optimization

How Bayesian optimization works in parallel.

Cross-Validation

Implement Cross-Validation Using Parallel Computing

Speed up cross-validation using parallel computing.

Classification Performance Evaluation

Performance Curves

Examine the performance of a classification algorithm on a specific test data set using a receiver operating characteristic curve.