This example shows how to create a performance test and a regression test for the fprintf function.

Consider the following unit (regression) test. You can run this test as a performance test by using runperf('fprintfTest') instead of runtests('fprintfTest').
classdef fprintfTest < matlab.unittest.TestCase
    properties
        file
        fid
    end

    methods(TestMethodSetup)
        function openFile(testCase)
            testCase.file = tempname;
            testCase.fid = fopen(testCase.file,'w');
            testCase.assertNotEqual(testCase.fid,-1,'IO Problem')

            testCase.addTeardown(@delete,testCase.file);
            testCase.addTeardown(@fclose,testCase.fid);
        end
    end

    methods(Test)
        function testPrintingToFile(testCase)
            textToWrite = repmat('abcdef',1,5000000);
            fprintf(testCase.fid,'%s',textToWrite);
            testCase.verifyEqual(fileread(testCase.file),textToWrite)
        end

        function testBytesToFile(testCase)
            textToWrite = repmat('tests_',1,5000000);
            nbytes = fprintf(testCase.fid,'%s',textToWrite);
            testCase.verifyEqual(nbytes,length(textToWrite))
        end
    end
end
The measured time does not include the time to open and close the file or to perform the assertion, because these activities take place inside a TestMethodSetup block and not inside a Test block. However, the measured time does include the time to perform the verifications. A best practice is to define a tighter measurement boundary around the code whose performance you want to test.
Create a performance test in a file named fprintfTest.m in your current working folder. This test is similar to the regression test, with the following modifications:

- The test inherits from matlab.perftest.TestCase instead of matlab.unittest.TestCase.
- The test calls the startMeasuring and stopMeasuring methods to create a boundary around the fprintf function call.
classdef fprintfTest < matlab.perftest.TestCase
    properties
        file
        fid
    end

    methods(TestMethodSetup)
        function openFile(testCase)
            testCase.file = tempname;
            testCase.fid = fopen(testCase.file,'w');
            testCase.assertNotEqual(testCase.fid,-1,'IO Problem')

            testCase.addTeardown(@delete,testCase.file);
            testCase.addTeardown(@fclose,testCase.fid);
        end
    end

    methods(Test)
        function testPrintingToFile(testCase)
            textToWrite = repmat('abcdef',1,5000000);
            testCase.startMeasuring();
            fprintf(testCase.fid,'%s',textToWrite);
            testCase.stopMeasuring();
            testCase.verifyEqual(fileread(testCase.file),textToWrite)
        end

        function testBytesToFile(testCase)
            textToWrite = repmat('tests_',1,5000000);
            testCase.startMeasuring();
            nbytes = fprintf(testCase.fid,'%s',textToWrite);
            testCase.stopMeasuring();
            testCase.verifyEqual(nbytes,length(textToWrite))
        end
    end
end
The measured time for this performance test includes only the call to the fprintf function, and the testing framework still evaluates the qualifications.
Run the performance test. Depending on your system, you might see warnings that the performance testing framework ran the test the maximum number of times but did not achieve a 0.05 relative margin of error with a 0.95 confidence level.
results = runperf('fprintfTest')
Running fprintfTest
.......... .......... .
Done fprintfTest
__________

results = 

  1×2 TimeResult array with properties:

    Name
    Valid
    Samples
    TestActivity

Totals:
   2 Valid, 0 Invalid.
   4.1417 seconds testing time.
The results variable is a 1-by-2 TimeResult array. Each element in the array corresponds to one of the tests defined in the test file.
Display the measurement results for the first test. Your results might vary.
results(1)
ans = 
  TimeResult with properties:

            Name: 'fprintfTest/testPrintingToFile'
           Valid: 1
         Samples: [4×7 table]
    TestActivity: [8×12 table]

Totals:
   1 Valid, 0 Invalid.
   2.7124 seconds testing time.
As indicated by the size of the TestActivity property, the performance testing framework collected 8 measurements. This number includes 4 measurements to warm up the code. The Samples property excludes warm-up measurements.
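If you want to examine the warm-up measurements themselves, one option is to filter the TestActivity table on its Objective column, which labels each row as a warm-up or a sample measurement. A minimal sketch:

activity   = results(1).TestActivity;
warmupRows = activity(activity.Objective == 'warmup',:);   % warm-up measurements only
sampleRows = activity(activity.Objective == 'sample',:);   % rows that also appear in Samples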
Display the sample measurements for the first test.
results(1).Samples
ans =

  4×7 table

                 Name                  MeasuredTime         Timestamp              Host        Platform                    Version                                    RunIdentifier            
    ______________________________    ____________    ____________________    ___________    ________    __________________________________________    ____________________________________

    fprintfTest/testPrintingToFile      0.067729      24-Jun-2019 16:22:09    MY-HOSTNAME     win64      9.7.0.1141441 (R2019b) Prerelease Update 2    62991eef-5570-47b0-ade5-b8a805245e8f
    fprintfTest/testPrintingToFile      0.067513      24-Jun-2019 16:22:09    MY-HOSTNAME     win64      9.7.0.1141441 (R2019b) Prerelease Update 2    62991eef-5570-47b0-ade5-b8a805245e8f
    fprintfTest/testPrintingToFile      0.068737      24-Jun-2019 16:22:09    MY-HOSTNAME     win64      9.7.0.1141441 (R2019b) Prerelease Update 2    62991eef-5570-47b0-ade5-b8a805245e8f
    fprintfTest/testPrintingToFile      0.068576      24-Jun-2019 16:22:10    MY-HOSTNAME     win64      9.7.0.1141441 (R2019b) Prerelease Update 2    62991eef-5570-47b0-ade5-b8a805245e8f
Display the mean measured time for the first test. To exclude data collected in the warm-up runs, use the values in the Samples property.
sampleTimes = results(1).Samples.MeasuredTime;
meanTest = mean(sampleTimes)
meanTest = 0.0681
Determine the average time for all the test elements. The fprintfTest test includes two different methods. Compare the time for each method (test element).

Because the performance testing framework returns a Samples table for each test element, concatenate all of these tables into one table. Then group the rows by test element Name, and compute the mean MeasuredTime for each group.
fullTable = vertcat(results.Samples);
summaryStats = varfun(@mean,fullTable,...
    'InputVariables','MeasuredTime','GroupingVariables','Name')
summaryStats =

  2×3 table

                 Name                  GroupCount    mean_MeasuredTime
    ______________________________    __________    _________________

    fprintfTest/testPrintingToFile        4              0.068139    
    fprintfTest/testBytesToFile           9              0.071595    
Both test methods write the same amount of data to a file. Therefore, some of the difference between the mean values is attributable to calling the fprintf function with an output argument.
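To probe that difference outside the testing framework, you can time both call forms with the timeit function, whose second argument specifies how many outputs to request. This is a rough, standalone sketch rather than part of the example's test class; the file name and text size are illustrative.

% Compare fprintf with and without requesting its nbytes output.
fname = tempname;
fid = fopen(fname,'w');
txt = repmat('abcdef',1,500000);                     % smaller than the test, illustrative
tNoOutput   = timeit(@() fprintf(fid,'%s',txt));     % zero outputs requested
tWithOutput = timeit(@() fprintf(fid,'%s',txt), 1);  % one output (nbytes) requested
fclose(fid);
delete(fname);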
Change the statistical objectives defined by the runperf function by constructing and running a time experiment. Construct a time experiment with measurements that reach a sample mean with a 3% relative margin of error within a 97% confidence level. Collect 4 warm-up measurements and up to 16 sample measurements.
Construct an explicit test suite.
suite = testsuite('fprintfTest');
Construct a time experiment with a variable number of sample measurements, and run the tests.
import matlab.perftest.TimeExperiment
experiment = TimeExperiment.limitingSamplingError('NumWarmups',4,...
    'MaxSamples',16,'RelativeMarginOfError',0.03,'ConfidenceLevel',0.97);
resultsTE = run(experiment,suite);
Running fprintfTest
.......... ..........
Warning: Target Relative Margin of Error not met after running the MaxSamples for fprintfTest/testPrintingToFile.
........
Done fprintfTest
__________
In this example output, the performance testing framework is not able to meet the stricter statistical objectives with the specified number of maximum samples. Your results might vary.
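To see which test element hit the sample cap, you can count the sample measurements that each result holds. A small sketch:

% An element whose count equals MaxSamples (16 here) did not meet the objective.
numSamples = arrayfun(@(r) height(r.Samples), resultsTE)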
Compute the statistics for all the test elements.
fullTableTE = vertcat(resultsTE.Samples);
summaryStatsTE = varfun(@mean,fullTableTE,...
    'InputVariables','MeasuredTime','GroupingVariables','Name')
summaryStatsTE =

  2×3 table

                 Name                  GroupCount    mean_MeasuredTime
    ______________________________    __________    _________________

    fprintfTest/testPrintingToFile        16             0.069482    
    fprintfTest/testBytesToFile            4             0.067902    
Increase the maximum number of samples to 32 and rerun the time experiment.
experiment = TimeExperiment.limitingSamplingError('NumWarmups',4,...
    'MaxSamples',32,'RelativeMarginOfError',0.03,'ConfidenceLevel',0.97);
resultsTE = run(experiment,suite);
Running fprintfTest
.......... ......
Done fprintfTest
__________
Compute the statistics for all the test elements.
fullTableTE = vertcat(resultsTE.Samples);
summaryStatsTE = varfun(@mean,fullTableTE,...
    'InputVariables','MeasuredTime','GroupingVariables','Name')
summaryStatsTE =

  2×3 table

                 Name                  GroupCount    mean_MeasuredTime
    ______________________________    __________    _________________

    fprintfTest/testPrintingToFile        4              0.067228    
    fprintfTest/testBytesToFile           4              0.067766    
The testing framework achieves the statistical objectives for both tests with 4 samples.
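If you want to check this by hand, you can compute the achieved relative margin of error from the samples. The following sketch uses a standard two-sided one-sample t-interval and assumes the Statistics and Machine Learning Toolbox for tinv; the framework's internal definition may differ in detail.

% Achieved relative margin of error per test element at a 97% confidence level.
relMoE = @(x) tinv(0.985,numel(x)-1)*std(x)/(sqrt(numel(x))*mean(x));
moeStats = varfun(relMoE,fullTableTE, ...
    'InputVariables','MeasuredTime','GroupingVariables','Name')
% varfun names the computed column Fun_MeasuredTime; values at or below
% 0.03 indicate that the experiment met its objective.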
Start a new MATLAB® session. A new session ensures that MATLAB has not run the code contained in your tests.
Measure the first-time cost of your code by creating and running a fixed time experiment with zero warm-up measurements and one sample measurement.
Construct an explicit test suite. Since you are measuring the first-time cost of the function, run a single test. To run multiple tests, save the results and start a new MATLAB session between tests.
suite = testsuite('fprintfTest/testPrintingToFile');
Construct and run the time experiment.
import matlab.perftest.TimeExperiment
experiment = TimeExperiment.withFixedSampleSize(1);
results = run(experiment,suite);
Running fprintfTest
.
Done fprintfTest
__________
Display the results. Observe the TestActivity table to confirm that there are no warm-up samples.
fullTable = results.TestActivity
fullTable =

  1×12 table

                 Name                  Passed    Failed    Incomplete    MeasuredTime    Objective         Timestamp              Host        Platform                    Version                                 TestResult                          RunIdentifier            
    ______________________________    ______    ______    __________    ____________    _________    ____________________    ___________    ________    __________________________________________    ________________________________    ____________________________________

    fprintfTest/testPrintingToFile    true      false       false         0.071754       sample      24-Jun-2019 16:31:27    MY-HOSTNAME     win64      9.7.0.1141441 (R2019b) Prerelease Update 2    [1×1 matlab.unittest.TestResult]    045394eb-e722-4241-8da2-1d17a97ac90a
The performance testing framework collects one sample for each test.
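If you measure the first-time cost of additional tests, recall the earlier advice to save the results and start a new MATLAB session between tests. A minimal sketch of that bookkeeping, with an illustrative file name:

% Save this session's result before restarting MATLAB for the next test.
save('firstTimeCost_testPrintingToFile.mat','results')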
See Also

matlab.perftest.TestCase | matlab.perftest.TimeExperiment | matlab.perftest.TimeResult | matlab.unittest.measurement.DefaultMeasurementResult | runperf | testsuite