Use code generation options and optimizations to improve the execution speed
of the generated code. You can modify or disable dynamic memory allocation,
which can affect execution speed. You can also generate parallelized code by
using parfor-loops.
When available, take advantage of preexisting optimized C code and specialized
libraries to speed up execution.
For more information about how to optimize your code for specific conditions, see Optimization Strategies.
parfor | Parallel for-loop
coder.varsize | Declare variable-size data
coder.const | Fold expressions into constants in generated code
coder.inline | Control inlining in generated code
coder.unroll | Unroll for-loop by making a copy of the loop body for each loop iteration
coder.ceval | Call external C/C++ function
coder.LAPACKCallback | Abstract class for specifying the LAPACK library and LAPACKE header file for LAPACK calls in generated code
coder.BLASCallback | Abstract class for specifying the BLAS library and CBLAS header and data type information for BLAS calls in generated code
coder.fftw.StandaloneFFTW3Interface | Abstract class for specifying an FFTW library for FFTW calls in generated code
Minimize Dynamic Memory Allocation
Improve execution time by minimizing dynamic memory allocation.
Provide Maximum Size for Variable-Size Arrays
Use techniques to help the code generator determine the upper bound for a variable-size array.
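For example, a minimal sketch (the function name sumSquares and the bound of 1000 elements are illustrative assumptions) that bounds a growing array with coder.varsize so that the code generator can allocate it statically:

    function total = sumSquares(n) %#codegen
    % Declare x as variable-size with an explicit upper bound so that the
    % code generator does not need dynamic memory allocation for it.
    x = zeros(1, 0);
    coder.varsize('x', [1 1000]);   % upper bound: 1-by-1000
    for k = 1:min(n, 1000)
        x(end + 1) = k^2; %#ok<AGROW>
    end
    total = sum(x);
    end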
Disable Dynamic Memory Allocation During Code Generation
Disable dynamic memory allocation in the app or at the command line.
Set Dynamic Memory Allocation Threshold
Disable dynamic memory allocation for arrays smaller than a specified size threshold.
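As a hedged sketch of the command-line workflow (myFunction is a placeholder, and the configuration property names shown here vary by release):

    % Restrict dynamic memory allocation when generating standalone code.
    cfg = coder.config('lib');
    cfg.DynamicMemoryAllocation = 'Threshold';       % or 'Off' to disable it entirely
    cfg.DynamicMemoryAllocationThreshold = 65536;    % arrays below this size use static allocation
    codegen myFunction -config cfg -args {zeros(1, 100)} -report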
Generate Code with Parallel for-Loops (parfor)
Generate a loop that runs in parallel on shared-memory multicore platforms.
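For example, a minimal sketch (elementwiseSquare is an illustrative name) of a loop whose independent iterations can run in parallel in the generated code:

    function y = elementwiseSquare(x) %#codegen
    % The iterations are independent, so the generated C code can execute
    % them on multiple threads by using OpenMP.
    y = zeros(size(x));
    parfor i = 1:numel(x)
        y(i) = x(i)^2;
    end
    end

Generating a MEX function with, for example, codegen elementwiseSquare -args {zeros(1, 1e6)} produces a multithreaded loop.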
Specify Maximum Number of Threads in parfor-Loops
Generate a MEX function that executes loop iterations in parallel on a specified number of cores.
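For example, in this sketch the second parfor argument caps the number of threads at 2 (the function name and thread count are illustrative):

    function s = scaledSum(x) %#codegen
    % Run the loop on at most 2 threads; s is a reduction variable.
    s = 0;
    parfor (i = 1:numel(x), 2)
        s = s + 2*x(i);
    end
    end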
Control Compilation of parfor-Loops
Treat parfor-loops as for-loops that run on a single thread.
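As a hedged example, one way to do this at the command line is to disable OpenMP with the optimization option (myFunction is a placeholder):

    % Compile parfor-loops as single-threaded loops by disabling OpenMP.
    codegen myFunction -O disable:openmp -args {zeros(1, 1e6)}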
Install OpenMP Library on macOS Platform
Install the OpenMP library to generate parallel for-loops on the macOS platform.
Minimize Redundant Operations in Loops
Move operations outside of loop when possible.
Control loop unrolling.
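For example, a minimal sketch (scaleRows is an illustrative name, and A is assumed to be 4-by-N) that hoists a loop-invariant computation out of the loop and unrolls a short fixed-trip-count loop with coder.unroll:

    function y = scaleRows(A, theta) %#codegen
    % Compute the loop-invariant value once instead of on every iteration.
    s = sin(theta);
    y = zeros(size(A));
    % Unroll the loop; the trip count must be known at compile time.
    for k = coder.unroll(1:4)
        y(k, :) = s * A(k, :);
    end
    end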
Avoid Data Copies of Function Inputs in Generated Code
Generate code that passes input arguments by reference.
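For example, in this minimal sketch (addOffset is an illustrative name), using the same variable as an input and an output lets the generated C code pass the array by reference instead of copying it:

    function A = addOffset(A, offset) %#codegen
    % A is both an input and an output, so the generated code can operate
    % on it in place.
    A = A + offset;
    end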
Control Inlining to Fine-Tune Performance and Readability of Generated Code
Inlining eliminates the overhead of function calls but can produce larger C/C++ code and reduce code readability.
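For example, a minimal sketch (fastPath and smallHelper are illustrative names) that forces inlining of a helper function:

    function y = fastPath(u) %#codegen
    y = smallHelper(u) + 1;
    end

    function y = smallHelper(u)
    % Inline this helper to remove call overhead; use coder.inline('never')
    % instead to keep the call and improve readability.
    coder.inline('always');
    y = 2*u;
    end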
Fold Function Calls into Constants
Reduce execution time by replacing an expression with a constant in the generated code.
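For example, a minimal sketch (applyGain and buildTable are illustrative names) in which the folded call executes during code generation rather than at run time:

    function y = applyGain(u) %#codegen
    % buildTable runs at compile time; the generated C code contains only
    % the resulting constant value.
    g = coder.const(sum(buildTable(8)));
    y = g * u;
    end

    function t = buildTable(n)
    t = 2.^(0:n-1);
    end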
Disable Support for Integer Overflow or Nonfinites
Improve performance by suppressing generation of supporting code to handle integer overflow or nonfinites.
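For example (myFunction is a placeholder for your entry-point function):

    % Suppress supporting code for integer overflow saturation and for
    % Inf/NaN handling when the algorithm does not rely on them.
    cfg = coder.config('lib');
    cfg.SaturateOnIntegerOverflow = false;
    cfg.SupportNonFinite = false;
    codegen myFunction -config cfg -args {int32(0)} -report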
Integrate External/Custom Code
Improve performance by integrating your own optimized code.
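For example, a minimal sketch that calls a hand-optimized C function with coder.ceval (my_abs and my_abs.h are placeholders for your own function and header; add the C source through the custom code settings):

    function y = callCustomAbs(u) %#codegen
    if coder.target('MATLAB')
        % Run MATLAB code when executing outside of code generation.
        y = abs(u);
    else
        y = 0.0;                        % preallocate the return value
        coder.cinclude('my_abs.h');     % header that declares my_abs
        y = coder.ceval('my_abs', u);   % call the external C function
    end
    end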
Speed Up Linear Algebra in Generated Standalone Code by Using LAPACK Calls
Generate LAPACK calls for certain linear algebra functions. Specify LAPACK library to use.
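As a hedged sketch of a LAPACK callback class (the header name mylapacke.h, the library name mylapack, and the include and library paths are placeholders for your vendor's LAPACKE header and LAPACK library; the exact build-information calls can differ by setup):

    classdef useMyLAPACK < coder.LAPACKCallback
        methods (Static)
            function hn = getHeaderFilename()
                % LAPACKE header file for the LAPACK library
                hn = 'mylapacke.h';
            end
            function updateBuildInfo(buildInfo, buildctx)
                % Tell the build process where to find and how to link the library.
                buildInfo.addIncludePaths(fullfile(pwd, 'include'));
                [~, linkLibExt] = buildctx.getStdLibInfo();
                buildInfo.addLinkObjects(['mylapack' linkLibExt], ...
                    fullfile(pwd, 'lib'), '', true, true);
            end
        end
    end

To use the class, set the CustomLAPACKCallback property of the code configuration object to 'useMyLAPACK'.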
Speed Up Matrix Operations in Generated Standalone Code by Using BLAS Calls
Generate BLAS calls for certain low-level matrix operations. Specify BLAS library to use.
Speed Up Fast Fourier Transforms in Generated Standalone Code by Using FFTW Library Calls
Generate FFTW library calls for fast Fourier transforms. Specify the FFTW library.
Synchronize Multithreaded Access to FFTW Planning in Generated Standalone Code
Implement FFT library callback class methods and provide supporting C code to prevent concurrent access to FFTW planning.
Optimize the execution speed or memory usage of generated code.
Dynamic Memory Allocation and Performance
Dynamic memory allocation can slow down execution.
Algorithm Acceleration Using Parallel for-Loops (parfor)
Generate MEX functions for parfor-loops.
Classification of Variables in parfor-Loops
Variables inside parfor-loops are classified as loop, sliced, broadcast, reduction, or temporary.
Reduction Assignments in parfor-Loops
A reduction variable accumulates a value that depends on all the loop iterations together.
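For example, a minimal sketch (totalEnergy is an illustrative name) in which s is a reduction variable:

    function s = totalEnergy(x) %#codegen
    % Each iteration updates s with the same operation (+), so the partial
    % sums can be combined safely across threads.
    s = 0;
    parfor i = 1:numel(x)
        s = s + x(i)^2;
    end
    end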
MATLAB Coder Optimizations in Generated Code
To improve the performance of generated code, the code generator uses optimizations.
For example, the code generator optimizes generated code by using memcpy and memset.
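In this minimal sketch (fillAndCopy is an illustrative name, and V is assumed to have at least 500 elements), the contiguous copy and the constant assignment are candidates for these optimizations when they meet the default 64-byte threshold:

    function X = fillAndCopy(V) %#codegen
    X = coder.nullcopy(zeros(1, 1000));   % declare X without initializing it
    X(1:500) = V(1:500);                  % candidate for the memcpy optimization
    X(501:1000) = 0;                      % candidate for the memset optimization
    end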
LAPACK Calls in Generated Code
LAPACK function calls improve the execution speed of code generated for certain linear algebra functions.
BLAS Calls in Generated Code
BLAS function calls improve the execution speed of code generated for certain low-level vector and matrix operations.
Generate Code That Uses Row-Major Array Layout
Generate C/C++ code with row elements stored contiguously in memory.
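For example, a minimal sketch (sumRows is an illustrative name) that requests row-major layout from within the function; alternatively, pass the -rowmajor option to the codegen command:

    function y = sumRows(A) %#codegen
    coder.rowMajor;                  % use row-major array layout for this function
    y = zeros(size(A, 1), 1);
    for r = 1:size(A, 1)
        y(r) = sum(A(r, :));         % row elements are contiguous in memory
    end
    end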
Diagnose errors for code generation of parfor-loops.
MEX Generated on macOS Platform Stays Loaded in Memory
Troubleshoot issues that occur when the source MATLAB® code contains global or persistent variables that are reachable from the body of a parfor-loop.