When building a high-quality, predictive classification model, it is important to select the right features (or predictors) and to tune hyperparameters (model parameters that are set before training rather than estimated from the data).
To tune hyperparameters of a specific model, select the hyperparameter values and cross-validate the model using those values. For example, to tune an SVM model, choose a set of box constraints and kernel scales, and then cross-validate a model for each pair of values. Certain Statistics and Machine Learning Toolbox™ classification functions offer automatic hyperparameter tuning through Bayesian optimization, grid search, or random search. However, the main function used to implement Bayesian optimization, bayesopt, is flexible enough for use in other applications. See Bayesian Optimization Workflow.
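For example, a minimal sketch of such a grid search, assuming the fisheriris sample data and an RBF kernel (the grids of box constraints and kernel scales here are illustrative placeholders):

    % Sketch: cross-validate an SVM-based classifier over a small grid of
    % box constraints and kernel scales, and record the k-fold loss.
    load fisheriris
    X = meas;
    y = species;

    boxVals   = [0.1 1 10];      % candidate box constraints (illustrative)
    scaleVals = [0.5 1 2];       % candidate kernel scales (illustrative)
    cvLoss = zeros(numel(boxVals),numel(scaleVals));

    for i = 1:numel(boxVals)
        for j = 1:numel(scaleVals)
            t = templateSVM('KernelFunction','rbf', ...
                'BoxConstraint',boxVals(i),'KernelScale',scaleVals(j));
            cvMdl = fitcecoc(X,y,'Learners',t,'CrossVal','on');
            cvLoss(i,j) = kfoldLoss(cvMdl);   % k-fold misclassification rate
        end
    end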
Feature selection and hyperparameter tuning can yield multiple models. You can compare the k-fold misclassification rates, receiver operating characteristic (ROC) curves, or confusion matrices among the models. Or, conduct a statistical test to detect whether a classification model significantly outperforms another.
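As a sketch of that comparison, assuming two trained classifiers mdl1 and mdl2 (hypothetical names) and a held-out test set XTest, yTest:

    % Sketch: compare two trained classifiers, mdl1 and mdl2 (hypothetical),
    % on cross-validation loss and on a held-out test set.
    loss1 = kfoldLoss(crossval(mdl1));     % k-fold misclassification rate of model 1
    loss2 = kfoldLoss(crossval(mdl2));     % k-fold misclassification rate of model 2

    confusionchart(yTest,predict(mdl1,XTest));   % per-class errors for model 1

    % Statistical test: is mdl2 significantly more accurate than mdl1?
    h = compareHoldout(mdl2,mdl1,XTest,XTest,yTest,'Alternative','greater');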
To automatically select a model with tuned hyperparameters, use fitcauto. This function tries a selection of classification model types with different hyperparameter values and returns a final model that is expected to perform well on new data. Use fitcauto when you are uncertain which classifier types best suit your data.
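A minimal sketch, assuming predictor data X, labels y, and a held-out test set XTest, yTest are already in the workspace:

    % Sketch: automated model selection and hyperparameter tuning.
    Mdl = fitcauto(X,y);                 % tries several model types and tunes them
    testLoss = loss(Mdl,XTest,yTest);    % check the returned model on new data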
To build and assess classification models interactively, use the Classification Learner app.
To interpret a classification model, you can use lime or plotPartialDependence.
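For instance, a sketch that explains one prediction of a hypothetical trained model Mdl with lime and then plots a partial dependence curve for its first predictor:

    % Sketch: interpret a trained classifier Mdl (hypothetical) locally and globally.
    explainer = lime(Mdl);                       % local interpretable surrogate explainer
    explainer = fit(explainer,X(1,:),3);         % explain one query point using 3 predictors
    figure
    plot(explainer)                              % local predictor importance

    figure
    plotPartialDependence(Mdl,1,Mdl.ClassNames(1))   % dependence of the first class score on predictor 1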
Classification Learner: Train models to classify data using supervised machine learning.
Train Classification Models in Classification Learner App
Workflow for training, comparing, and improving classification models, including automated, manual, and parallel training.
Assess Classifier Performance in Classification Learner
Compare model accuracy scores, visualize results by plotting class predictions, and check performance per class in the Confusion Matrix.
Feature Selection and Feature Transformation Using Classification Learner App
Identify useful predictors using plots, manually select features to include, and transform features using PCA in Classification Learner.
Introduction to Feature Selection
Learn about feature selection algorithms and explore the functions available for feature selection.
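For example, a filter-type ranking sketch using fscmrmr, assuming predictor matrix X and class labels y (the choice of five predictors is an illustrative cutoff):

    % Sketch: rank predictors with the MRMR algorithm before training a model.
    [idx,scores] = fscmrmr(X,y);    % predictor indices ranked by importance, plus scores
    bar(scores(idx))                % visualize the scores in ranked order
    topPredictors = idx(1:5);       % keep, for example, the five highest-ranked predictors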
Sequential Feature Selection
This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.
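A minimal sketch of that pattern, assuming class labels stored as a cell array of character vectors and a discriminant classifier as the (hypothetical) underlying model:

    % Sketch: sequential forward selection with a custom misclassification criterion.
    % The criterion returns the number of misclassified test observations per fold.
    critfun = @(XT,yT,Xt,yt) sum(~strcmp(yt,predict(fitcdiscr(XT,yT),Xt)));
    opts = statset('Display','iter');
    [tokeep,history] = sequentialfs(critfun,X,y,'cv',10,'options',opts);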
Neighborhood Component Analysis (NCA) Feature Selection
Neighborhood component analysis (NCA) is a non-parametric method for selecting features with the goal of maximizing prediction accuracy of regression and classification algorithms.
Tune Regularization Parameter to Detect Features Using NCA for Classification
This example shows how to tune the regularization parameter in fscnca using cross-validation.
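In outline, the tuning loop looks like the following sketch, assuming predictor matrix X and class labels y (the lambda grid is an illustrative placeholder):

    % Sketch: tune the NCA regularization parameter lambda by cross-validation.
    cvp = cvpartition(y,'KFold',5);
    lambdaVals = linspace(0,2,10)/size(X,1);        % illustrative lambda grid
    cvLoss = zeros(numel(lambdaVals),1);

    for k = 1:numel(lambdaVals)
        foldLoss = zeros(cvp.NumTestSets,1);
        for f = 1:cvp.NumTestSets
            nca = fscnca(X(cvp.training(f),:),y(cvp.training(f)), ...
                'Lambda',lambdaVals(k),'Solver','sgd');
            foldLoss(f) = loss(nca,X(cvp.test(f),:),y(cvp.test(f)), ...
                'LossFunction','classiferror');
        end
        cvLoss(k) = mean(foldLoss);                 % average misclassification over folds
    end

    [~,best] = min(cvLoss);
    bestLambda = lambdaVals(best);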
Regularize Discriminant Analysis Classifier
Make a more robust and simpler model by removing predictors without compromising the predictive power of the model.
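A brief sketch of the idea, assuming predictor matrix X and class labels y (the grid sizes are illustrative):

    % Sketch: cross-validate a discriminant classifier over a grid of
    % regularization parameters (gamma, delta) and inspect the trade-off.
    Mdl = fitcdiscr(X,y);
    [err,gamma,delta,numpred] = cvshrink(Mdl,'NumGamma',9,'NumDelta',9);
    plot(err,numpred,'k.')              % error versus number of predictors retained
    xlabel('Cross-validated error')
    ylabel('Number of predictors')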
Selecting Features for Classifying High-dimensional Data
This example shows how to select features for classifying high-dimensional data.
Automated Classifier Selection with Bayesian Optimization
Use fitcauto to automatically try a selection of classification model types with different hyperparameter values, given training predictor and response data.
Bayesian Optimization Workflow
Perform Bayesian optimization using a fit function or by calling bayesopt directly.
Variables for a Bayesian Optimization
Create variables for Bayesian optimization.
Bayesian Optimization Objective Functions
Create the objective function for Bayesian optimization.
Constraints in Bayesian Optimization
Set different types of constraints for Bayesian optimization.
Optimize a Cross-Validated SVM Classifier Using bayesopt
Minimize cross-validation loss using Bayesian optimization.
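In outline, the optimization looks like the following sketch, assuming binary class labels y and predictor matrix X (variable names and search ranges are illustrative):

    % Sketch: minimize cross-validated SVM loss by calling bayesopt directly.
    box   = optimizableVariable('box',[1e-3,1e3],'Transform','log');
    scale = optimizableVariable('scale',[1e-3,1e3],'Transform','log');

    minfn = @(z) kfoldLoss(fitcsvm(X,y,'CrossVal','on','KernelFunction','rbf', ...
        'BoxConstraint',z.box,'KernelScale',z.scale));

    results = bayesopt(minfn,[box,scale], ...
        'IsObjectiveDeterministic',false, ...        % cross-validation loss is noisy
        'AcquisitionFunctionName','expected-improvement-plus');
    bestParams = bestPoint(results);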
Optimize an SVM Classifier Fit Using Bayesian Optimization
Minimize cross-validation loss using the OptimizeHyperparameters name-value pair in a fitting function.
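A minimal sketch of the in-function approach, assuming predictor matrix X and binary class labels y:

    % Sketch: let the fitting function run Bayesian optimization internally.
    Mdl = fitcsvm(X,y,'KernelFunction','rbf', ...
        'OptimizeHyperparameters','auto', ...
        'HyperparameterOptimizationOptions', ...
        struct('AcquisitionFunctionName','expected-improvement-plus'));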
Bayesian Optimization Plot Functions
Visually monitor a Bayesian optimization.
Bayesian Optimization Output Functions
Monitor a Bayesian optimization.
Bayesian Optimization Algorithm
Understand the underlying algorithms for Bayesian optimization.
Parallel Bayesian Optimization
How Bayesian optimization works in parallel.
Implement Cross-Validation Using Parallel Computing
Speed up cross-validation using parallel computing.
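A short sketch, assuming Parallel Computing Toolbox is available and X and y are in the workspace (the discriminant learner is an illustrative choice):

    % Sketch: estimate the cross-validated misclassification rate in parallel.
    parpool;                                   % open a parallel pool if one is not running
    opts = statset('UseParallel',true);
    predfun = @(XT,yT,Xt) predict(fitcdiscr(XT,yT),Xt);   % illustrative learner
    cvMCR = crossval('mcr',X,y,'Predfun',predfun,'Options',opts);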
Examine the performance of a classification algorithm on a specific test data set using a receiver operating characteristic curve.
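A sketch of that workflow, assuming a trained model Mdl, test data XTest and yTest, and a hypothetical positive class label 'goodClass':

    % Sketch: ROC curve for one class of a trained classifier on a test set.
    % Assumes class labels stored as character vectors.
    [~,scores] = predict(Mdl,XTest);                     % class scores for the test data
    posColumn = strcmp(Mdl.ClassNames,'goodClass');      % score column of the positive class
    [fpr,tpr,~,auc] = perfcurve(yTest,scores(:,posColumn),'goodClass');

    plot(fpr,tpr)
    xlabel('False positive rate')
    ylabel('True positive rate')
    title(['ROC curve (AUC = ' num2str(auc) ')'])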