This example shows how to optimize a function that returns an error whenever the evaluation point has a norm larger than 2. The error model for the objective function learns this behavior.
Create variables named x1 and x2 that range from -5 to 5.
var1 = optimizableVariable('x1',[-5,5]);
var2 = optimizableVariable('x2',[-5,5]);
vars = [var1,var2];
The following objective function returns a complex (non-real) value when the norm of x = [x1,x2] exceeds 2, which bayesopt treats as an error:
function f = makeanerror(x)
f = x.x1 - x.x2 - sqrt(4 - x.x1^2 - x.x2^2);
end

fun = @makeanerror;
Plot the error model and minimum objective as the optimization proceeds. Optimize for 60 iterations so the error model becomes well-trained. For reproducibility, set the random seed and use the 'expected-improvement-plus' acquisition function.
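One way to set up this call is sketched below. The name-value arguments are documented bayesopt options; the particular choice of plot functions and seed command is an assumption for this sketch.

```matlab
rng default  % assumed seed choice, for reproducibility
results = bayesopt(fun,vars, ...
    'AcquisitionFunctionName','expected-improvement-plus', ...
    'MaxObjectiveEvaluations',60, ...
    'PlotFcn',{@plotMinObjective,@plotConstraintModels});
```

The @plotConstraintModels plot function displays the error model along with any coupled constraint models as the optimization proceeds.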
Predict the error at points on the line x1 = x2. If the error model were perfect, it would have value -1 at every point where the norm of x is no more than 2, and value 1 at all other points.
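A sketch of this prediction, assuming results is the BayesianOptimization object returned by bayesopt; the choice of test points and plot style is illustrative:

```matlab
xtest = linspace(-2,2)';  % points along the line x1 = x2
XTable = table(xtest,xtest,'VariableNames',{'x1','x2'});
[error,sigma] = predictError(results,XTable);
plot(xtest,error)         % posterior mean of the error model
```

Points with norm(x) <= 2 correspond to xtest values with |xtest| <= sqrt(2), so a well-trained model should predict values near -1 in that interval and near 1 outside it.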
XTable — Prediction points
table with D columns

Prediction points, specified as a table with D columns, where D is the number of variables in the problem. The function performs its predictions on these points.
error — Mean of error coupled constraint
N-by-1 vector

Mean of error coupled constraint, returned as an N-by-1 vector, where N is the number of rows of XTable. The mean is the posterior mean of the error coupled constraint at the points in XTable.

bayesopt deems your objective function to return an error if it returns anything other than a finite real scalar. See Objective Function Errors.
sigma — Standard deviation of error coupled constraint
N-by-1 vector

Standard deviation of error coupled constraint, returned as an N-by-1 vector, where N is the number of rows of XTable.