Hello,
I have a problem with optimization using fminunc with GradObj set to 'on'. I'm trying to find the minimum of the White & Holst function, which has only one minimum, at [1.0, 1.0]. Without GradObj, fminunc is able to find the optimum. With GradObj, it stops at [0.8043, 0.5152] with this message:
-----
                                                        First-order
 Iteration  Func-count       f(x)        Step-size       optimality
     0           1          749.038                      2.36e+003
     1           2           61.637     0.00042391            157
     2           4          1.60037             10           13.5
     3           5           1.1263              1              2
     4           6          1.12404              1              2
     5           7          1.11603              1           1.99
     6           8          1.09697              1            2.5
     7           9          1.04559              1           4.69
     8          10         0.949647              1           8.73
     9          11         0.878371              1           10.3
    10          12         0.637169              1           8.25
    11          13         0.395188              1           2.76
    12          15         0.289895       0.414369           2.66
    13          17         0.288287       0.190601           4.45
    14          18         0.260038              1           5.08
    15          19            0.172              1           3.41
    16          20         0.107135              1          0.765
    17          22         0.100943       0.260267           2.38
    18          23        0.0843457              1           3.47
    19          24        0.0516633              1            0.9
    20          25        0.0460001              1           1.11
    21          26        0.0431929              1          0.123
    22          27        0.0413137              1        0.00751
    23          28        0.0409519              1        0.00349
    24          29        0.0409373              1       7.4e-006
Local minimum found.
Optimization completed because the size of the gradient is less than the default value of the function tolerance.
<stopping criteria details>
X =
    0.8043    0.5152

FVAL =
    0.0409

EXITFLAG =
     1

OUTPUT =
       iterations: 24
        funcCount: 29
         stepsize: 1
    firstorderopt: 7.4034e-006
        algorithm: 'medium-scale: Quasi-Newton line search'
          message: [1x438 char]
-----
Here is the code I'm running:
function [f,G] = func2(x)
f = 100*(x(2) - x(1)*x(1)*x(1))*(x(2) - x(1)*x(1)*x(1)) + (1 - x(1))*(1 - x(1));
% Gradient of the objective function
if nargout > 1
    G = [-600*x(2)*x(1)*x(1) + 600*x(1)*x(1)*x(1)*x(1)*x(1) - 2
         200*x(2) - 200*x(1)*x(1)*x(1) + 2*x(2)];
end
end
…
options = optimset('LargeScale', 'off', 'InitialHessType', 'Scaled-Identity', ...
                   'GradObj', 'on', 'Display', 'iter');
% where startX = -1.2 and startY = 1.0
[X,FVAL,EXITFLAG,OUTPUT] = fminunc(@func2,[startX, startY], options)
Please help me solve this problem.
Best Answer
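The analytic gradient in func2 does not match the objective, which is why fminunc converges to the right point only when it approximates the gradient itself. Differentiating f = 100*(x(2) - x(1)^3)^2 + (1 - x(1))^2 gives df/dx1 = -600*x(1)^2*(x(2) - x(1)^3) - 2*(1 - x(1)) and df/dx2 = 200*(x(2) - x(1)^3). The posted G is missing the +2*x(1) term in the first component and has a spurious +2*x(2) term in the second. A corrected sketch of the function (my reconstruction, not the original poster's code):

```matlab
function [f,G] = func2(x)
% White & Holst function: f = 100*(x2 - x1^3)^2 + (1 - x1)^2
f = 100*(x(2) - x(1)^3)^2 + (1 - x(1))^2;

% Gradient of the objective function
if nargout > 1
    % df/dx1 = -600*x1^2*(x2 - x1^3) - 2*(1 - x1)
    % df/dx2 =  200*(x2 - x1^3)
    G = [-600*x(1)^2*x(2) + 600*x(1)^5 - 2 + 2*x(1)
          200*x(2) - 200*x(1)^3];
end
end
```

To catch mistakes like this early, you can ask the solver to compare your gradient against finite differences at the starting point, e.g. optimset(..., 'DerivativeCheck', 'on'); it will report a discrepancy when the supplied G disagrees with the numerical gradient.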