All Algorithms |
Algorithm | Choose between 'trust-region-dogleg' (default), 'trust-region',
and 'levenberg-marquardt'. The Algorithm option
specifies a preference for which algorithm to use. It is only a preference
because, for the trust-region algorithm, the nonlinear system of equations
cannot be underdetermined; that is, the number of equations (the number
of elements of F returned by fun)
must be at least as many as the length of x. Similarly,
for the trust-region-dogleg algorithm, the number of equations must
be the same as the length of x. fsolve uses
the Levenberg-Marquardt algorithm when the selected algorithm is unavailable.
For more information on choosing the algorithm, see Choosing the Algorithm.
To set some algorithm options using optimset instead of optimoptions:
Algorithm — Set the algorithm
to 'trust-region-reflective' instead of 'trust-region'.
InitDamping — Set the
initial Levenberg-Marquardt parameter λ by
setting Algorithm to a cell array such as {'levenberg-marquardt',.005}.
|
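As an illustration, the algorithm preference can be set through optimoptions before calling fsolve. The two-equation system and starting point below are invented for the example:

```matlab
% Hypothetical system of two nonlinear equations in two unknowns
fun = @(x) [2*x(1) - x(2) - exp(-x(1));
            -x(1) + 2*x(2) - exp(-x(2))];
x0 = [-5; -5];                       % example starting point

% Prefer the Levenberg-Marquardt algorithm instead of the default
options = optimoptions('fsolve','Algorithm','levenberg-marquardt');
x = fsolve(fun,x0,options);
```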
CheckGradients | Compare user-supplied derivatives
(gradients of objective or constraints) to finite-differencing derivatives.
The choices are true or the default false.
For optimset, the name is
DerivativeCheck and the values
are 'on' or 'off'.
See Current and Legacy Option Names. |
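A minimal sketch of enabling the derivative check; it assumes fun returns the Jacobian as a second output, which also requires SpecifyObjectiveGradient:

```matlab
% Have fsolve compare the user-supplied Jacobian from fun
% against a finite-difference estimate
options = optimoptions('fsolve', ...
    'SpecifyObjectiveGradient',true, ...   % fun returns [F,J]
    'CheckGradients',true);
```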
Diagnostics | Display diagnostic information
about the function to be minimized or solved. The choices are 'on' or
the default 'off'. |
DiffMaxChange | Maximum change in variables for
finite-difference gradients (a positive scalar). The default is Inf. |
DiffMinChange | Minimum change in variables for
finite-difference gradients (a positive scalar). The default is 0. |
Display | Level of display (see Iterative Display):
'off' or 'none' displays
no output.
'iter' displays output at each
iteration, and gives the default exit message.
'iter-detailed' displays output
at each iteration, and gives the technical exit message.
'final' (default) displays just
the final output, and gives the default exit message.
'final-detailed' displays just
the final output, and gives the technical exit message.
|
FiniteDifferenceStepSize | Scalar or vector step size factor for finite differences. When
you set FiniteDifferenceStepSize to a vector v, the
forward finite differences delta are
delta = v.*sign′(x).*max(abs(x),TypicalX);
where sign′(x) = sign(x) except sign′(0) = 1.
Central finite differences are
delta = v.*max(abs(x),TypicalX);
A scalar FiniteDifferenceStepSize expands to a vector. The default
is sqrt(eps) for forward finite differences, and eps^(1/3)
for central finite differences.
For optimset, the name is
FinDiffRelStep. See Current and Legacy Option Names. |
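The forward-difference formula above can be reproduced directly; the vectors v, x, and TypicalX below are arbitrary illustrations:

```matlab
v = sqrt(eps)*ones(2,1);        % the default scalar step, expanded
x = [0; -3];                    % arbitrary example point
TypicalX = ones(2,1);           % the default TypicalX

signp = sign(x);
signp(signp == 0) = 1;          % sign′(x): sign(x), except sign′(0) = 1
delta = v.*signp.*max(abs(x),TypicalX);   % forward-difference steps
```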
FiniteDifferenceType | Finite differences, used to estimate gradients,
are either 'forward' (default) or 'central' (centered). 'central' takes
twice as many function evaluations but should be more accurate. The
algorithm is careful to obey bounds when estimating both types of
finite differences. So, for example, it could take a backward, rather
than a forward, difference to avoid evaluating at a point outside
bounds.
For optimset, the name is
FinDiffType. See Current and Legacy Option Names. |
FunctionTolerance | Termination tolerance on the function
value, a positive scalar. The default is 1e-6.
See Tolerances and Stopping Criteria.
For optimset, the name is
TolFun. See Current and Legacy Option Names. |
FunValCheck | Check whether objective function
values are valid. 'on' displays an error when the
objective function returns a value that is complex, Inf,
or NaN. The default, 'off',
displays no error. |
MaxFunctionEvaluations | Maximum number of function evaluations
allowed, a positive integer. The default is 100*numberOfVariables.
See Tolerances and Stopping Criteria and Iterations and Function Counts.
For optimset, the name is
MaxFunEvals. See Current and Legacy Option Names. |
MaxIterations | Maximum number of iterations allowed,
a positive integer. The default is 400. See Tolerances and Stopping Criteria and Iterations and Function Counts.
For optimset, the name is
MaxIter. See Current and Legacy Option Names. |
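For example, these stopping-criterion options can be combined in one optimoptions call (the values here are arbitrary):

```matlab
% Tighten tolerances and raise the iteration/evaluation budgets
options = optimoptions('fsolve', ...
    'FunctionTolerance',1e-10, ...
    'StepTolerance',1e-10, ...
    'MaxIterations',1000, ...
    'MaxFunctionEvaluations',5000);
```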
OptimalityTolerance | Termination tolerance on the first-order optimality (a positive
scalar). The default is 1e-6. See First-Order Optimality Measure. Internally,
the 'levenberg-marquardt' algorithm uses an optimality
tolerance (stopping criterion) of 1e-4 times FunctionTolerance and
does not use OptimalityTolerance. |
OutputFcn | Specify one or more user-defined functions that an optimization
function calls at each iteration. Pass a function handle
or a cell array of function handles. The default is none
([]). See Output Function and Plot Function Syntax. |
PlotFcn | Plots various measures of progress while the algorithm executes;
select from predefined plots or write your own. Pass a
built-in plot function name, a function handle, or a
cell array of built-in plot function names or function
handles. For custom plot functions, pass function
handles. The default is none ([]):
'optimplotx' plots the current point.
'optimplotfunccount' plots the function count.
'optimplotfval' plots the function value.
'optimplotstepsize' plots the step size.
'optimplotfirstorderopt' plots the first-order optimality measure.
Custom plot functions use the same syntax
as output functions. See Output Functions for Optimization Toolbox™ and Output Function and Plot Function Syntax.
For optimset, the name is
PlotFcns. See Current and Legacy Option Names. |
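For instance, several of the built-in plot functions listed above can be requested at once:

```matlab
% Plot the function value and first-order optimality at each iteration
options = optimoptions('fsolve','PlotFcn', ...
    {'optimplotfval','optimplotfirstorderopt'});
```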
SpecifyObjectiveGradient | If true, fsolve uses
a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobianMultiplyFcn),
for the objective function. If false (default), fsolve approximates
the Jacobian using finite differences.
For optimset, the name is
Jacobian and the values are
'on' or 'off'.
See Current and Legacy Option Names. |
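A sketch of a function file that supplies the Jacobian as a second output; the system itself is invented for illustration:

```matlab
function [F,J] = examplefun(x)   % hypothetical objective for fsolve
F = [x(1)^2 + x(2) - 1;
     x(1) - x(2)^2];
if nargout > 1                   % Jacobian of F with respect to x
    J = [2*x(1),  1;
         1,      -2*x(2)];
end
end
```

With options = optimoptions('fsolve','SpecifyObjectiveGradient',true), fsolve then uses this J instead of a finite-difference approximation.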
StepTolerance | Termination tolerance on x,
a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.
For optimset, the name is
TolX. See Current and Legacy Option Names. |
TypicalX | Typical x values.
The number of elements in TypicalX is equal to
the number of elements in x0, the starting point.
The default value is ones(numberofvariables,1). fsolve uses TypicalX for
scaling finite differences for gradient estimation. The trust-region-dogleg algorithm
uses TypicalX as the diagonal terms of a scaling
matrix. |
UseParallel | When true, fsolve estimates
gradients in parallel. Disable by setting to the default, false.
See Parallel Computing. |
trust-region Algorithm |
JacobianMultiplyFcn | Jacobian multiply function, specified as a function handle. For
large-scale structured problems, this function computes
the Jacobian matrix product J*Y,
J'*Y, or
J'*(J*Y) without actually forming
J. The function is of the form
W = jmfun(Jinfo,Y,flag)
where
Jinfo contains a matrix used to
compute J*Y (or
J'*Y, or
J'*(J*Y)). The first argument
Jinfo must be the same as the
second argument returned by the objective function
fun, for example, in
[F,Jinfo] = fun(x)
Y is a matrix that has the same number of rows as there
are dimensions in the problem. flag
determines which product to compute:
If flag > 0, W = J*Y.
If flag < 0, W = J'*Y.
If flag == 0, W = J'*(J*Y).
In each case, J is
not formed explicitly. fsolve uses
Jinfo to compute the
preconditioner. See Passing Extra Parameters for information on
how to supply values for any additional parameters
jmfun needs. Note: 'SpecifyObjectiveGradient' must
be set to true for
fsolve to pass
Jinfo from
fun to
jmfun.
See Minimization with Dense Structured Hessian, Linear Equalities for a similar example. For
optimset, the name is
JacobMult. See Current and Legacy Option Names. |
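A sketch of a Jacobian multiply function, under the simplifying assumption that Jinfo holds the Jacobian matrix itself (in practice Jinfo would typically store a compact representation of J):

```matlab
function W = jmfun(Jinfo,Y,flag)   % sketch; assumes Jinfo holds J directly
J = Jinfo;
if flag > 0
    W = J*Y;          % J*Y
elseif flag < 0
    W = J'*Y;         % J'*Y
else
    W = J'*(J*Y);     % J'*(J*Y)
end
end
```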
JacobPattern | Sparsity pattern of the Jacobian
for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends
on x(j). Otherwise, set JacobPattern(i,j)
= 0. In other words, JacobPattern(i,j) = 1 when
you can have ∂fun(i)/∂x(j) ≠ 0. Use JacobPattern when
it is inconvenient to compute the Jacobian matrix J in fun,
though you can determine (say, by inspection) when fun(i) depends
on x(j). fsolve can approximate J via
sparse finite differences when you give JacobPattern.
If the structure is unknown, do not set JacobPattern;
the default behavior is then as if JacobPattern is a
dense matrix of ones, and fsolve computes a
full finite-difference approximation in each iteration. This can be
very expensive for large problems, so it is usually better to determine
the sparsity structure. |
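For example, a banded dependence can be declared with a sparse pattern; the 3-by-3 system here is hypothetical:

```matlab
% Made-up example: fun(i) depends only on x(i) and x(i+1)
JacobPattern = sparse([1 1 0;
                       0 1 1;
                       0 0 1]);
options = optimoptions('fsolve','JacobPattern',JacobPattern, ...
                       'Algorithm','trust-region');
```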
MaxPCGIter | Maximum number of PCG (preconditioned
conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)).
For more information, see Equation Solving Algorithms. |
PrecondBandWidth | Upper bandwidth of the preconditioner
for PCG, a nonnegative integer. The default PrecondBandWidth is Inf,
which means a direct factorization (Cholesky) is used rather than
conjugate gradients (CG). The direct factorization is computationally
more expensive than CG, but produces a better quality step towards
the solution. Set PrecondBandWidth to 0 for
diagonal preconditioning (upper bandwidth of 0). For some problems,
an intermediate bandwidth reduces the number of PCG iterations. |
SubproblemAlgorithm | Determines how the iteration step
is calculated. The default, 'factorization', takes
a slower but more accurate step than 'cg'. See Trust-Region Algorithm. |
TolPCG | Termination tolerance on the PCG
iteration, a positive scalar. The default is 0.1. |
Levenberg-Marquardt Algorithm |
InitDamping | Initial value of the Levenberg-Marquardt parameter,
a positive scalar. The default is 1e-2. For details,
see Levenberg-Marquardt Method. |
ScaleProblem | 'jacobian' can sometimes improve the
convergence of a poorly scaled problem. The default is 'none'.
|