To view or edit the test sections, select a test file, suite, or case in the Test Browser pane.
You can select the MATLAB® releases installed on your system in which to create and run tests. Use this preference to specify the MATLAB installations that you want to make available for testing with Test Manager. You can use releases from R2011b forward. The releases you add become available in the Select releases for simulation list when you design the test.
You can add releases to the list and delete them. You cannot delete the release you started MATLAB in.
To add a release, click Add, navigate to the location of the MATLAB installation you want to add, and click OK.
For more information, see Run Tests in Multiple Releases of MATLAB.
To simplify the Test Manager layout, you can select the sections of the test case, test suite, or test file that appear in the Test Manager. Test case sections that were modified appear in the Test Manager, regardless of the preference setting.
In the toolstrip, click Preferences.
Select the Test File, Test Suite, or Test Case tab.
Select sections to show, or clear sections to hide. To show only sections where settings are set, clear all selections in the Preferences dialog box.
Click OK.
Also see sltest.testmanager.getpref and sltest.testmanager.setpref.
Select the releases that you want available for running test cases. Build the list of releases using the Release pane in the Test Manager Preferences dialog box. For more information, see Run Tests in Multiple Releases of MATLAB.
Tag your tests with useful categorizations, such as safety, logged-data, or burn-in. Filter tests using these tags when executing tests or viewing results. See Filter and Reorder Test Execution and Results.
In this section, add descriptive text to your test case, test suite, or test file.
If you have a Simulink® Requirements™ license, you can establish traceability by linking your test cases to requirements. For more information, see Link to Test Cases from Requirements (Simulink Requirements).
To link a test case, test suite, or test file to a requirement:
Open the Requirements Editor. In the Simulink Toolstrip, on the Apps tab, under Model Verification, Validation, and Test, click Requirements Manager.
Highlight a requirement.
In the Test Manager, in the Requirements section, click the arrow next to the Add button and select Link to Selected Requirement.
The requirement link appears in the Requirements list.
Displays the contents of the MATLAB file that defines the MATLAB-based Simulink test.
Specify the model you want to test in the System Under Test section. To use an open model in the currently active Simulink window, click the Use current model button.
Note
The model must be available on the path to run the test case. You can add the model's containing folder to the path using the preload callback. See Callbacks.
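For example, a Pre-Load callback can add the folder that contains the model to the path. A minimal sketch, assuming a hypothetical folder named modelsrc under the current folder:

% Pre-Load callback sketch: make the model reachable before it loads.
% The folder name 'modelsrc' is hypothetical.
addpath(fullfile(pwd,'modelsrc'));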
Specifying a new model in the System Under Test section can cause the model information to be out of date. To update the model test harnesses, Signal Builder groups, and available configuration sets, click the Refresh button.
If you have a test harness in your system under test, then you can select the test
harness to use for the test case. If you have added or removed test harnesses in the
model, click the Refresh button to view the updated test harness list.
For more information about using test harnesses, see Refine, Test, and Debug a Subsystem.
You can override the System Under Test simulation settings such as the simulation mode, start time, stop time, and initial state.
The System Under Test cannot be in fast restart or external mode.
To stop a test running in Rapid Accelerator mode, press Ctrl+C at the MATLAB command prompt.
When running parallel execution in Rapid Accelerator mode, streamed signals do not appear in the Test Manager.
The System Under Test cannot be a protected model.
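If you manage tests programmatically, the system under test and, if needed, simulation-setting overrides can be set through the sltest.testmanager API. A minimal sketch, with hypothetical file, suite, and model names; the 'SimulationMode' property name is an assumption, so check the setProperty documentation for your release:

% Create a test file, get its default suite, and add a baseline test case.
tf = sltest.testmanager.TestFile('overrideDemo.mldatx');
ts = getTestSuites(tf);
tc = createTestCase(ts(1),'baseline','Stop time override');
% Set the system under test.
setProperty(tc,'Model','sldemo_absbrake');
% Simulation-setting overrides use the same method; property name assumed.
setProperty(tc,'SimulationMode','Normal');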
In this section, you can specify parameter values in the test case to override the parameter values in the model workspace, data dictionary, base workspace, or in a Model reference hierarchy. Parameters are grouped into sets. You can turn parameter sets and individual parameter overrides on or off by using the check box next to the set or parameter.
To add a parameter override:
Click Add.
A dialog box opens with a list of parameters. If the list of parameters is
not current, click the Refresh button
in the dialog box.
Select the parameter you want to override.
To add the parameter to the parameter set, click OK.
Enter the override value in the parameter Override Value column.
To restore the default value of a parameter, clear the value in the Override Value column and press Enter.
You can also add a set of parameter overrides from a MAT-file, including MAT-files generated by Simulink Design Verifier™. Click the Add arrow and select Add File to create a parameter set from a MAT-file.
For an example that uses parameter overrides, see Override Model Parameters in a Test Case.
The Test Manager displays only top-level system parameters from the system under test.
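Programmatically, a parameter set and override can be added to a test case. A minimal sketch, assuming the test case object tc from the earlier sketch and a hypothetical parameter name and value; the addParameterSet and addParameterOverride methods are assumed to follow this pattern:

% Group overrides into a named parameter set, then add an override to it.
ps = addParameterSet(tc,'Name','Parameter Set 1');
po = addParameterOverride(ps,'g',9.81);   % parameter name and value are hypothetical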
Two callback scripts are available in each test file that execute at different times during a test:
Setup runs before the test file executes.
Cleanup runs after the test file executes.
Two callback scripts are available in each test suite that execute at different times during a test:
Setup runs before the test suite executes.
Cleanup runs after the test suite executes.
Three callback scripts are available in each test case that execute at different times during a test:
Pre-load runs before the model loads and before the model callbacks.
Post-load runs after the model loads and the PostLoadFcn model callback executes.
Cleanup runs after simulations and model callbacks.
If you are running multiple test cases, the order in which the callbacks execute is:
Preload test case 1
Load model 1
Preload test case 2
Load model 2
Post-load test case 1
Simulate model 1
Clean up test case 1
Post-load test case 2
Simulate model 2
Clean up test case 2
To run a single callback script, click the Run button
above the corresponding script.
You can use predefined variables in the test case callbacks:
sltest_bdroot (available in Post-Load): The model simulated by the test case. The model can be a harness model.
sltest_sut (available in Post-Load): The system under test. For a harness, it is the component under test.
sltest_isharness (available in Post-Load): Returns true if sltest_bdroot is a harness model.
sltest_simout (available in Cleanup): Simulation output produced by simulation.
sltest_iterationName (available in Pre-Load, Post-Load, and Cleanup): Name of the currently executing test iteration.
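For example, a Post-Load callback can use these variables to inspect or adjust the loaded model before simulation. A minimal sketch; the stop-time override is illustrative only:

% Post-Load callback sketch using the predefined variables.
disp(['Running iteration: ' sltest_iterationName]);
if sltest_isharness
    disp([sltest_bdroot ' is a test harness']);
end
set_param(sltest_bdroot,'StopTime','10');   % illustrative override of the loaded model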
The test case callback scripts are not stored with the model and do not override Simulink model callbacks. Consider the following when using callbacks:
To stop execution of an infinite loop from a callback script, press Ctrl+C at the MATLAB command prompt.
sltest.testmanager functions are not supported.
You can enter a callback to define variables and conditions used only in the Logical and Temporal Assessments pane using the Assessment Callback section. See Assessment Callback under Logical and Temporal Assessments for more information.
A test case can use input data from:
A Signal Editor or Signal Builder block in the system under test. Select Signal Editor scenario or Signal Builder group, and select the scenario or signal group. The system under test can have only one Signal Builder or Signal Editor block at the top level.
An external data file. In the External Inputs table, click Add. Select a MAT-file or Microsoft® Excel® file.
For more information on using external files as inputs, see Run Tests Using External Data. For information about the file format for Microsoft Excel files in Test Manager, see Format Test Case Data in Excel.
An input file template that you create and populate with data. See Test Case Input Data Files.
To include the input data in your test results set, select Include input data in test result.
If the time interval of your input data is shorter than the model simulation time, you can limit the simulation to the time specified by your input data by selecting Stop simulation at last time point.
For more information on test inputs, see the Test Authoring: Inputs page.
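Programmatically, an external input file can be attached to a test case with the addInput method. A minimal sketch, assuming the test case object tc from the earlier sketch and a hypothetical MAT-file name:

% Attach an external input file to the test case.
input = addInput(tc,'brakeInputs.mat');   % file name is hypothetical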
From the Test Manager, you can edit your input data files.
To edit a file, select the file and click Edit. You can then edit the data in the signal editor for MAT-files or Microsoft Excel for Excel files.
To learn about the syntax for Excel files, see Format Test Case Data in Excel.
Use the Simulation Outputs section to add signal outputs to your test results. Signals logged in your model or test harness can appear in the results after you add them as simulation outputs. You can then plot them. Add individual signals to log and plot or add a signal set.
Under Simulation Outputs, click Add, then follow the prompts in the user interface.
Use the options in the Other Outputs subsection to add states, final states, model output values, data store variables, and signal logging values to your test results. To enable selecting one or more of these options, click Override model settings.
States — Include state values between blocks during simulation. You must have a Sequence Viewer block in your model to include state values.
Final states — Include final state values. You must have a Sequence Viewer block in your model to include final state values.
Output — Include model output values.
Data stores — Include logged data store variables in Data Store Memory blocks in the model. This option is selected by default.
Signal logging — Include logged signals specified in the model. This option is selected by default.
For more information, see Capture Simulation Data in a Test Case.
In the test case, you can specify configuration settings that differ from the settings in the model. Setting the configuration settings in the test case enables you to try different configurations without modifying your model.
These sections appear in equivalence test cases. Use them to specify the details about the simulations that you want to compare. Enter the system under test, the test harness if applicable, and simulation setting overrides under Simulation 1. You can then click Copy settings from Simulation 1 under Simulation 2 to use a starting point for your second set of simulation settings.
For the test to pass, Simulation 1 and Simulation 2 must log the same signals.
Use these sections with the Equivalence Criteria section to define the premise of your test case. For an example of an equivalence test, see Test Two Simulations for Equivalence.
This section appears in equivalence test cases. The equivalence criteria is a set of signal data to compare in Simulation 1 and Simulation 2. Specify tolerances to regulate pass-fail criteria of the test. You can specify absolute, relative, leading, and lagging tolerances for the signals.
To specify tolerances, first click Capture to run the system under test in Simulation 1 and add signals marked for logging to the table. Specify the tolerances in the table.
After you capture the signals, you can select signals from the table to narrow your results. If you do not select signals under Equivalence Criteria, running the test case compares all the logged signals in Simulation 1 and Simulation 2.
For an example of an equivalence test case, see Test Two Simulations for Equivalence.
The Baseline Criteria section appears in baseline test cases. When a baseline test case executes, Test Manager captures signal data from signals in the model marked for logging and compares them to the baseline data.
To capture logged signal data from the system under test to use as the baseline criteria, click Capture. Then follow the prompts in the Capture Baseline dialog box. Capturing the data compiles and simulates the system under test and stores the output from the logged signals to the baseline. For a baseline test example, see Compare Model Output To Baseline Data.
You can save the signal data to a MAT-file or a Microsoft Excel file. To understand the format of the Excel file, see Format Test Case Data in Excel.
You can capture the baseline criteria using the current release for simulation or another release installed on your system. Add the releases you want to use in the Test Manager preferences. Then, select the releases you want available in your test case using the Select releases for simulation option in the test case. When you run the test, you can compare the baseline against the release you created the baseline in or against another release. For more information, see Run Tests in Multiple Releases of MATLAB.
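Programmatically, the same capture can be performed with the captureBaselineCriteria method. A minimal sketch, assuming the test case object tc from the earlier sketch and a hypothetical baseline file name:

% Simulate the system under test and store the logged signals as the baseline.
% The third argument updates the test case to use the captured baseline.
baseline = captureBaselineCriteria(tc,'brakeBaseline.mat',true);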
When you select Excel as the output format, you can specify the sheet name to save the data to. If you use the same Excel file for input and output data, by default both sets of data appear in the same sheet.
If you are capturing the data to a file that already contains outputs, specify the sheet name to overwrite the output data only in that sheet of the file.
To save a baseline for each test case iteration in a separate sheet in the same file, select Capture a baseline for each iteration. This check box appears only if your test case already contains iterations. For more information about iterations, see Test Iterations.
You can specify tolerances to determine the pass-fail criteria of the test case. You can specify absolute, relative, leading, and lagging tolerances for individual signals or the entire baseline criteria set.
After you capture the baseline, the baseline file and its signals appear in the table. In the table, you can set the tolerances for the signals. To see tolerances used in an example for baseline testing, see Compare Model Output To Baseline Data.
By clicking Add, you can select an existing file as a baseline. You can add MAT-files and Microsoft Excel files as the baseline. Format Microsoft Excel files as described in Format Test Case Data in Excel.
You can edit the signal data in your baseline, for example, if your model changed and you expect different values. To open the signal editor or the Microsoft Excel file for editing, select the baseline file from the list and click Edit. See Manually Update Signal Data in a Baseline.
You can also update your baseline when you examine test failures in the data inspector view. See Examine Test Failures and Modify Baselines.
Use iterations to repeat a test with different parameter values, configuration sets, or input data.
You can run multiple simulations with the same inputs, outputs, and criteria by sweeping through different parameter values in a test case.
Models and external data files can contain multiple test input scenarios, such as signal groups. To simplify your test file architecture, you can run different input scenarios as iterations rather than as different test cases. You can apply different baseline data to each iteration, or capture new baseline data from an iteration set.
You can iterate over different configuration sets, for example to compare results between solvers or data types.
To create iterations from defined parameter sets, signal groups, external data files, or configuration sets, use table iterations. To create a custom set of iterations from the available test case elements, write a MATLAB iteration script in the test case, as in the sketch below. For more information about test iterations, see Test Iterations.
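As a sketch of a scripted iteration, the following loops over the Signal Builder groups available to the test case. The sltest_signalBuilderGroups and sltest_testCase variables and the sltestiteration, setTestParam, and addIteration functions follow the pattern described in Test Iterations; treat the exact names as assumptions if your release differs:

% Iterate over all Signal Builder groups in the system under test.
for k = 1:length(sltest_signalBuilderGroups)
    testItr = sltestiteration;                % create an iteration object
    setTestParam(testItr,'SignalBuilderGroup',sltest_signalBuilderGroups{k});
    addIteration(sltest_testCase,testItr);    % register the iteration with the test case
end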
Create temporal assessments using the form-based editor that prompts you for conditions, events, signal values, delays, and responses. When you collapse the individual elements, the editor displays a readable statement summarizing the assessment. See Assess Temporal Logic by Using Temporal Assessments and Logical and Temporal Assessment Syntax for more information.
You can define variables and use them in logical and temporal assessment conditions and expressions in the Assessment Callback section.
Define variables by writing a script in the Assessment Callback section. You can map these variables to symbols in the Symbols pane by right-clicking the symbol, selecting Map to expression, and entering the variable name in the Expression field. For information on how to map variables to symbols, see Map to expression under Resolve Assessment Parameter Symbols.
The Assessment Callback section has access to the predefined variables that contain test, simulation, and model data. You can define a variable as a function of this data. For more information, see Define Variables in the Assessment Callback Section.
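For example, the Assessment Callback can define the constants and derived values that assessment conditions use. A minimal sketch; the variable names and values are hypothetical, and predefined variables such as sltest_simout can be used in the same way:

% Assessment Callback sketch: define variables to map to symbols.
speedLimit = 70;        % upper bound used in an assessment condition
responseDelay = 0.02;   % delay, in seconds, used in a response clause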
t (time): The symbol t is automatically bound to simulation time and can be used in logical and temporal assessment conditions. This symbol does not need to be mapped to a variable and is not visible in the Symbols pane.
For example, to limit an assessment to times between 5 and 7 seconds, create a Trigger-response assessment and, in the trigger condition, enter t > 5 & t < 7. To avoid unexpected behavior, do not define a new symbol t in the Symbols pane.
This section includes an embedded MATLAB editor to define custom pass/fail criteria for your test. Select function customCriteria(test) to enable the criteria script in the editor. Custom criteria operate outside of model run time; the script evaluates after model simulation.
Common uses of custom criteria include verifying signal characteristics or verifying
test conditions. MATLAB Unit Test qualifications provide a framework for verification criteria.
For example, this custom criteria script gets the last value of the signal PhiRef and verifies that it equals 0:

% Get the last value of PhiRef from the dataset Signals_Req1_3
lastValue = test.sltest_simout.get('Signals_Req1_3').get('PhiRef').Values.Data(end);

% Verify that the last value equals 0
test.verifyEqual(lastValue,0);
See Process Test Results with Custom Scripts. For a list of MATLAB Unit Test qualifications, see Table of Verifications, Assertions, and Other Qualifications.
You can also define plots in the Custom Criteria section. See Create, Store, and Open MATLAB Figures.
Use this test section to configure coverage collection for a test file. (The settings
propagate down to its test suites and test cases.) Coverage filter files specified here
override filter files specified in the model configuration settings. For more
information, see Collect Coverage in Tests. For information on the
coverage metrics option, see the parameter info for CovMetricSettings in Internal Programmatic Model Settings.
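Programmatically, coverage collection for a test file can be configured through its coverage settings object. A minimal sketch, assuming a hypothetical test file name; the getCoverageSettings method and the RecordCoverage and MetricSettings property names are assumptions, so check the documentation for your release:

% Turn on coverage collection for a test file.
tf = sltest.testmanager.TestFile('myTests.mldatx');   % file name is hypothetical
cov = getCoverageSettings(tf);
cov.RecordCoverage = true;      % collect coverage for the system under test
cov.MetricSettings = 'dcm';     % decision, condition, and MCDC coverage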
When your tests generate figures, select this option to clear the working environment of figures after the test execution completes.
Select this option to store figures generated during the test with the test file. You can enter MATLAB code that creates figures and plots as a callback or in the test case Custom Criteria section. See Create, Store, and Open MATLAB Figures.
Select Generate report after execution to create a report after the test executes. Selecting this option displays report options that you can set. The settings are saved with the test file.
Note
To enable the options to specify the number of plots per page, select Plots for simulation output and baseline.
For detailed reporting information, see Export Test Results and Customize Test Results Reports.