Nonlinear regression prediction confidence intervals
[Ypred,delta] = nlpredci(modelfun,X,beta,R,'Covar',CovB) returns predictions, Ypred, and 95% confidence interval half-widths, delta, for the nonlinear regression model modelfun at input values X. Before calling nlpredci, use nlinfit to fit modelfun and get the estimated coefficients, beta, residuals, R, and variance-covariance matrix, CovB.
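A minimal end-to-end sketch of this syntax, using an assumed exponential model, simulated data, and illustrative starting values (none of which come from this reference page):

modelfun = @(b,x) b(1)*exp(-b(2)*x);    % assumed model for illustration
rng(0);                                 % reproducible simulated data
x = (1:10)';
y = modelfun([5; 0.4], x) + 0.1*randn(size(x));

beta0 = [1; 1];                         % illustrative starting values
[beta,R,~,CovB] = nlinfit(x, y, modelfun, beta0);   % coefficients, residuals, covariance

% Predictions and 95% confidence half-widths at new points, using CovB
xnew = (0:0.5:10)';
[Ypred,delta] = nlpredci(modelfun, xnew, beta, R, 'Covar', CovB);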
[Ypred,delta] = nlpredci(modelfun,X,beta,R,'Jacobian',J) returns predictions, Ypred, and 95% confidence interval half-widths, delta, for the nonlinear regression model modelfun at input values X. Before calling nlpredci, use nlinfit to fit modelfun and get the estimated coefficients, beta, residuals, R, and Jacobian, J.
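The same workflow for the Jacobian syntax, again with an assumed model and simulated data:

modelfun = @(b,x) b(1)*exp(-b(2)*x);    % assumed model for illustration
rng(0);
x = (1:10)';
y = modelfun([5; 0.4], x) + 0.1*randn(size(x));

% Keep the Jacobian J returned by nlinfit
[beta,R,J] = nlinfit(x, y, modelfun, [1; 1]);

xnew = (0:0.5:10)';
[Ypred,delta] = nlpredci(modelfun, xnew, beta, R, 'Jacobian', J);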
If you use a robust option with nlinfit, then you should use the Covar syntax rather than the Jacobian syntax. The variance-covariance matrix, CovB, is required to properly take the robust fitting into account.
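For example, with a robust fit requested through statset, keep CovB from nlinfit and pass it to nlpredci (the model and data here are again illustrative assumptions):

modelfun = @(b,x) b(1)*exp(-b(2)*x);    % assumed model for illustration
rng(0);
x = (1:10)';
y = modelfun([5; 0.4], x) + 0.1*randn(size(x));

opts = statset('nlinfit');
opts.RobustWgtFun = 'bisquare';         % request robust fitting
[beta,R,~,CovB] = nlinfit(x, y, modelfun, [1; 1], opts);

% Use the Covar syntax so the robust fit is taken into account
[Ypred,delta] = nlpredci(modelfun, x, beta, R, 'Covar', CovB);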
To compute confidence intervals for complex parameters or data, you need to split the problem into its real and imaginary parts. When calling nlinfit:

- Define your parameter vector beta as the concatenation of the real and imaginary parts of the original parameter vector.
- Concatenate the real and imaginary parts of the response vector Y as a single vector.
- Modify your model function modelfun to accept X and the purely real parameter vector, and return a concatenation of the real and imaginary parts of the fitted values.

With the problem formulated this way, nlinfit computes real estimates, and confidence intervals are feasible, as in the sketch below.
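A sketch of that reformulation for the assumed complex-valued model y = c*exp(-x), where c is a complex coefficient (the model, data, and the choice to stack the predictor alongside the stacked response are illustrative assumptions):

c_true = 2 + 3i;                        % assumed complex coefficient
rng(0);
x = (0.1:0.1:2)';
y = c_true*exp(-x) + 0.05*(randn(size(x)) + 1i*randn(size(x)));

% Real reformulation: beta = [real(c); imag(c)],
% response and predictor stacked so their lengths match
xx = [x; x];
yy = [real(y); imag(y)];
modelfun = @(b,x) [b(1)*exp(-x(1:end/2)); ...       % real part of fitted values
                   b(2)*exp(-x(end/2+1:end))];      % imaginary part of fitted values

[beta,R,J] = nlinfit(xx, yy, modelfun, [1; 1]);

% Confidence half-widths for the stacked (real; imaginary) predictions
[Ypred,delta] = nlpredci(modelfun, xx, beta, R, 'Jacobian', J);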
nlpredci treats NaN values in the residuals, R, or the Jacobian, J, as missing values, and ignores the corresponding observations.
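For instance, marking one observation's residual and Jacobian row as NaN after the fit drops that observation from the interval computation (model and data assumed as in the sketches above):

modelfun = @(b,x) b(1)*exp(-b(2)*x);
rng(0);
x = (1:10)';
y = modelfun([5; 0.4], x) + 0.1*randn(size(x));
[beta,R,J] = nlinfit(x, y, modelfun, [1; 1]);

R(3)   = NaN;                           % treat the 3rd observation as missing
J(3,:) = NaN;
[Ypred,delta] = nlpredci(modelfun, x, beta, R, 'Jacobian', J);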
If the Jacobian, J, does not have full column rank, then some of the model parameters might be nonidentifiable. In this case, nlpredci tries to construct confidence intervals for estimable predictions, and returns NaN for those that are not.