The Model Testing Dashboard collects metric data from the model design and testing artifacts in a project. Use the metric data to assess the status and quality of your model testing.
The dashboard analyzes the artifacts in a project, such as requirements, models, and test results. Each metric in the dashboard measures a different aspect of the quality of the testing of your model and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178C.
This example shows how to assess the testing status of a model by using the Model Testing Dashboard. If the requirements, models, or tests in your project change, use the dashboard to assess the impact on testing and update the artifacts to achieve your testing goals.
Open the project that contains the models and testing artifacts. For this example, at the command line, type dashboardCCProjectStart.
To open the Model Testing Dashboard, use one of these approaches:
On the Project tab, click Model Testing Dashboard.
At the command line, enter modelTestingDashboard, as shown in the snippet after this list.
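For convenience, the two commands used in this example can be entered together at the MATLAB command line:

    dashboardCCProjectStart    % open the example project
    modelTestingDashboard      % open the Model Testing Dashboard for that project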
When you first open the dashboard for a project, the dashboard must identify the artifacts in the project and trace them to the models. To run the analysis and collect metric results, click Trace and Collect All.
The dashboard displays metric results for the model selected in the Artifacts panel. To collect metric data for the model, click Collect Results. If you want to collect metrics for all models in the project, click Collect Results > Collect All Results. If metric data was previously collected for a model, the dashboard populates from the existing data. Collecting data for a metric requires a license for the product that supports the underlying artifacts, such as Simulink Requirements, Simulink Test, or Simulink Coverage. However, a license is not required to view existing metric data.
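If you prefer to work programmatically, the sketch below collects the same kind of metric data with the Simulink Check metric API. It assumes that Simulink Check is installed and the project is open, and the metric ID passed to getMetrics is only an example.

    % Minimal sketch: collect model testing metrics programmatically.
    metricEngine = metric.Engine();

    % Collect results for the available metrics in the project.
    execute(metricEngine);

    % List the metric IDs that the engine can compute (IDs vary by release).
    availableIds = getAvailableMetricIds(metricEngine);

    % Retrieve results for one metric. 'TestCaseStatus' is an example ID;
    % substitute an ID returned by getAvailableMetricIds.
    results = getMetrics(metricEngine, 'TestCaseStatus');

    % Generate an HTML report of the collected results.
    generateReport(metricEngine, 'Type', 'html-file', ...
        'Location', fullfile(pwd, 'MetricResultsReport.html'));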
View Traceability of Design and Testing Artifacts
The Artifacts panel organizes the artifacts in the project under the models that they trace to. If artifact files in the project change, the dashboard indicates that you must refresh the data by clicking Trace Artifacts. For this example, in the Artifacts panel, expand the folder for the model db_DriverSwRequest. For a model in the project, the traced artifacts include:
Functional Requirements -- Requirements that are linked to the model with a link where the Type is set to Implements, or that are indirectly linked to the model through other requirements. Create or import these requirements in a requirements file (.slreqx) by using Simulink Requirements.
Design -- The model file that contains the component that you test.
Test Cases -- Test cases that run the model or library. Create these test cases in a test suite file by using Simulink Test.
Test Results -- Results of the test cases for the model. To use the results in the dashboard, run the unit tests, export the results, and save them as a results file. The dashboard shows the latest saved results from the test cases.
An artifact appears in the folder Untraced if the dashboard has not traced the artifact to a component model. The folder includes artifacts that are missing traceability and artifacts that the dashboard is unable to trace. If an artifact generates an error during traceability analysis, it appears under the Errors folder. For more information about untraced artifacts and errors, see Trace Artifacts to Components for Model Testing Analysis.
Navigate to the requirement artifact db_DriverSwRequest > Functional Requirements > db_req_funct_spec.slreqx > Cancel Switch Detection and click the requirement. The Details pane displays the name of the artifact and the folder path in the project to the file that contains the artifact.
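You can also inspect the same requirement programmatically. This is a minimal sketch that assumes Simulink Requirements is installed, the project folder is on the path, and the requirement summary matches the name shown in the Artifacts panel.

    % Load the requirement set and look up the requirement selected above.
    reqSet = slreq.load('db_req_funct_spec.slreqx');
    req = find(reqSet, 'Type', 'Requirement', ...
        'Summary', 'Cancel Switch Detection');
    disp(req.Summary)    % display the requirement summary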
View Metric Results for a Component
You can collect and view metric results for each model in the Artifacts panel. To view the results for the model db_DriverSwRequest, in the Artifacts panel, click db_DriverSwRequest. The top of the dashboard shows the name of the model, the data collection timestamp, and the name of the user who collected the data. If artifacts in the project change after the results are collected, the Stale Metrics icon indicates that some of the dashboard widgets might show stale data, which does not reflect the changes. The affected widgets appear highlighted in grey. Re-collect the metric data to update the stale widgets with data from the current artifacts. For this example, the data in the dashboard is not stale.
The dashboard widgets summarize the metric data results and show testing issues to address, such as:
Missing traceability between requirements and tests
Tests or requirements with a disproportionate number of links
Tests of certain types that you must review
Failed or disabled tests
Missing coverage
To explore the data in more detail, click an individual metric widget. For the selected metric, a table displays the metric value for each artifact. The table provides hyperlinks to open the artifacts so that you can get detailed results and fix the artifacts that have issues. When exploring these tables, note that:
You can filter results by the value returned for each artifact. To filter results, click the filter icon in the table header.
Some widgets filter the table by default to show only the results that the widget displays. For example, for the Requirements Linked to Tests section, the table for the Unlinked widget is filtered to show only requirements that are missing test cases.
To sort the results by artifact, source file, or value, click the arrow in the corresponding column header.
A standard measure of testing quality is the traceability between individual requirements and the test cases that verify them. To assess the traceability of your tests and requirements, use the metric data in the Test Case Analysis section of the dashboard. You can quickly find issues in the requirements and tests by using the summary data in the widgets. Click a widget to view a table with detailed results and links to open the artifacts.
Requirements Missing Tests
In the Requirements Linked to Tests section, the Unlinked widget indicates how many requirements are missing tests. Add tests and links to these requirements. The Requirements with Tests dial widget shows the linking progress as the percentage of requirements that have tests.
Click any widget in the section to see the detailed results in the Requirement linked to test cases table. For each requirement, the table shows the source file that contains the requirement and whether the requirement is linked to at least one test case. When you click the Unlinked widget, the table is filtered to show only requirements that are missing links to test cases.
Requirements with Disproportionate Numbers of Tests
The Tests per Requirement section summarizes the distribution of the number of tests linked to each requirement. For each value, a colored bin indicates the number of requirements that are linked to that number of tests. Darker colors indicate more requirements. If a requirement has too many tests, it might be too broad, and you might want to break it down into multiple, more granular requirements and link them to the respective tests. If a requirement has too few tests, consider adding more tests and linking them to the requirement.
To see the requirements that have a certain number of test cases, click the corresponding bin to open the Test cases per requirement table. For each requirement, the table shows the source file that contains the requirement and the number of linked test cases. To see results for all of the requirements, in the Linked Test Cases column, click the filter icon, then select Clear Filters.
Tests Missing Requirements
In the Tests Linked to Requirements section, the Unlinked widget indicates how many tests are not linked to requirements. Add links from these tests to the requirements that they verify. The Tests with Requirements dial widget shows the linking progress as the percentage of tests that link to requirements.
Click any widget in the section to see detailed results in the Test linked to requirements table. For each test case, the table shows the source file that contains the test and whether the test case is linked to at least one requirement. When you click the Unlinked widget, the table is filtered to show only test cases that are missing links to requirements.
Tests with Disproportionate Numbers of Requirements
The Requirements per Test widget summarizes the distribution of the number of requirements linked to each test. For each value, a colored bin indicates the number of tests that are linked to that number of requirements. Darker colors indicate more tests. If a test has too many or too few requirements, it might be more difficult to investigate failures for that test, and you may want to change the test or requirements so that they are easier to track. For example, if a test verifies many more requirements than the other tests, consider breaking it down into multiple smaller tests and linking them to the requirements.
To see the test cases that have a certain number of requirements, click the corresponding bin to open the Requirements per test case table. For each test case, the table shows the source file that contains the test and the number of linked requirements. To see the results for all of the test cases, in the Linked Requirements column, click the filter icon, then select Clear Filters.
Disproportionate Number of Tests of One Type
The Tests by Type and Tests with Tag widgets show how many tests the component has of each type and with each custom tag. In industry standards, tests are often categorized as normal tests or robustness tests. You can tag test cases with Normal or Robustness and see the total count for each tag by using the Tests with Tag widget. Use the breakdown to decide if you want to add tests of a certain type or with a certain tag.
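As a sketch of how a tag might be applied programmatically, the lines below assume Simulink Test, a hypothetical test file name, and that the Tags property of a test case is writable in your release.

    tf = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');  % hypothetical file name
    ts = getTestSuites(tf);
    tc = getTestCases(ts(1));
    tc(1).Tags = 'Robustness';   % assumption: Tags is a writable test case property
    saveToFile(tf);              % save the test file so the dashboard sees the change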
To see the test cases of one type, click the corresponding row in the Tests by Type table to open the Test case type table. For each test case, the table shows the source file that contains the test and the test type. To see results for all of the test cases, in the Type column, click the filter icon, then select Clear Filters.
To see the test cases that have a tag, click the corresponding row in the Tests with Tag table to open the Test case tags table. For each test case, the table shows the source file that contains the test and the tags on the test case. To see results for all of the test cases, in the Tags column, click the filter icon, then select Clear Filters.
To see a summary of the test results and coverage measurements, use the widgets in the Test Result Analysis section of the dashboard. Find issues in the tests and in the model by using the test result metrics. Find coverage gaps by using the coverage metrics and add tests to address missing coverage. When you run the tests for a model, export the results and save the file in the project. Then collect the dashboard metrics and check the results for these testing issues.
Tests That Have Not Passed
In the Model Test Status section, the Untested and Disabled widgets indicate how many tests for the component have not been run. Run the tests by using the Simulink Test Manager and export the new results.
The Failed widget indicates how many tests failed. Open each failed test in the Test Manager and investigate the artifacts that caused the failure. Fix the artifacts, re-run the tests, and export the results.
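A minimal sketch of the run-and-export step, assuming Simulink Test and a hypothetical results file name:

    % Re-run the tests that are loaded in the Test Manager and export the
    % results so that the dashboard can analyze them.
    resultSet = sltest.testmanager.run;
    sltest.testmanager.exportResults(resultSet, ...
        fullfile(pwd, 'db_DriverSwRequest_results.mldatx'));   % hypothetical file name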
Click any widget in the section to open the Test case status table. For each test case, the table shows the source file that contains the test and the status of the test result. When you click the Failed, Untested, or Disabled widget, the table is filtered to show only the tests with the corresponding status. The dashboard analyzes only the latest test result that it traces to each test case.
Missing Coverage
The Model Coverage widget shows whether any model elements are not covered by the tests. If one of the coverage types shows less than 100% coverage, investigate the coverage gaps. Add tests to cover the gaps, or justify points that do not need to be covered. Then run the tests again and export the results.
To see the detailed results for one type of coverage, click the corresponding bar. For the model and test cases, the table shows the source file and the achieved and justified coverage.
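To make sure that coverage is recorded the next time the tests run, you can enable coverage collection on the test file. This sketch assumes Simulink Test and Simulink Coverage, a hypothetical test file name, and that these coverage settings are available in your release.

    tf = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');  % hypothetical file name
    cov = getCoverageSettings(tf);
    cov.RecordCoverage = true;    % record model coverage when the tests run
    cov.MetricSettings = 'dcm';   % assumption: decision, condition, and MCDC coverage
    saveToFile(tf);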