MATLAB: Is fmincon finding the maxima instead of minima

fmincon, maxima, minima, non-linear constraint

I am trying to use fmincon to solve a simple problem: minimize f(x)=x(1)+x(2) under the constraint x(1)^2 +x(2)^2 = 1. I used the following code for this:
f = @(x) x(1) + x(2);
x0 = [0; 0];
lb = [-10; -10];
ub = [10; 10];
[x, retn] = fmincon(f, x0, [], [], [], [], lb, ub, @nonlincon)
function [cineq, ceq] = nonlincon(x)
cineq = [];
ceq = x(1)^2 + x(2)^2 - 1;
end
This outputs the value of x as [0.7071; 0.7071] and the value of retn as 1.4142. These values are clearly not the local or global minimum. The minimum occurs at [-0.7071; -0.7071]. In fact, MATLAB found the maximum.
If we change the initial value to [0; 1], the correct minimum is found. The correct minimum is also found when we change the algorithm to sqp. Both interior-point and active-set give the same (wrong) answer.
It also works correctly if we change the constraint to two inequality constraints: (x(1)^2 + x(2)^2 - 1.001 <= 0) and (0.999 - (x(1)^2 + x(2)^2) <= 0).
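For reference, that relaxed version of the constraint can be written as a single nonlinear constraint function (the 0.001 slack is the one from the question; any small tolerance works):

```matlab
% Relaxed version: replace the equality x(1)^2 + x(2)^2 = 1 with a thin
% annulus 0.999 <= x(1)^2 + x(2)^2 <= 1.001, expressed as two inequalities.
function [cineq, ceq] = nonlincon_band(x)
cineq = [x(1)^2 + x(2)^2 - 1.001;     % outer bound: r^2 <= 1.001
         0.999 - (x(1)^2 + x(2)^2)];  % inner bound: r^2 >= 0.999
ceq = [];
end
```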
Please explain if I'm using fmincon in a way it isn't supposed to be used. I'm using MATLAB R2017a with a license for academic use.

Best Answer

Never start an optimization at the point [0,0]. While it may succeed, it is dangerous, because some tools scale their behavior proportionally to the start point. When you start at 0, well, you get the idea. Better to be safe than sorry. A good choice is often a unit start point, perhaps ones(2,1). A random start point is even safer.
x0 = rand(2,1);
As it turns out, I'd bet that the unit start of ones(2,1) also drives fmincon into the same rat-infested hole of death. The random start will work absolutely fine though. As I said, by far the safest place to start. Unless of course, you like rat-infested holes of death. To each their own...
So, what happened here? I would suggest that fmincon got stuck, got hung up in a local place that is stable, and could not decide that anyplace is better than that. This is a rare occurrence, but it can happen.
Nonlinear equality constraints can be a bit difficult to work with. You started the optimizer at [0;0], which is not a feasible point. That is, it does not satisfy the constraint.
The optimizer tries to find its way onto the constraint set, which in this case is a circle of radius 1. Once it gets there, it must wander around while staying feasible, which means staying on that surface. But only we know the feasible set is a circle; fmincon must discover it the hard way, and it got confused. It first moved from [0;0] to the point [sqrt(2)/2; sqrt(2)/2]. That happens to be a stationary point for fmincon: the gradient of the objective is parallel to the constraint normal there, so the projected gradient along the circle is zero. Having arrived at what it understands to be a stationary point, it just gave up.
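You can verify by hand that [sqrt(2)/2; sqrt(2)/2] satisfies the first-order (KKT) conditions, which is why fmincon declares success there: the gradient of the objective is a multiple of the gradient of the constraint, so to first order there is no feasible descent direction. A quick check (the variable names here are mine, just for illustration):

```matlab
% First-order check at the point where fmincon stopped.
x = [sqrt(2)/2; sqrt(2)/2];
grad_f = [1; 1];            % gradient of f(x) = x(1) + x(2)
grad_c = [2*x(1); 2*x(2)];  % gradient of c(x) = x(1)^2 + x(2)^2 - 1
% grad_f = lambda * grad_c with the same lambda in both components,
% so x is a KKT point -- a constrained stationary point, here the maximum.
lambda = grad_f ./ grad_c   % both entries equal 1/sqrt(2)
```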
By the way, it is usually a good idea to take the THIRD output from fmincon too. That will sometimes explain what fmincon did, and why it thinks it is happy with the result.
[x, retn, exitflag] = fmincon(f, x0, [], [], [], [], lb, ub, @nonlincon)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
<stopping criteria details>
x =
0.70711
0.70711
retn =
1.4142
exitflag =
1
Ok, so fmincon thinks the point is a stationary point. It got hung up due to a bad choice of starting values.
As I said, rand(2,1) would have worked well.
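If you want to be systematic about the random start, run fmincon from several random starting points and keep the best feasible result. A minimal multistart sketch (the loop and variable names are mine, not part of the original answer; nonlincon is the constraint function from the question):

```matlab
% Multistart: try several random starting points and keep the best result.
f = @(x) x(1) + x(2);
lb = [-10; -10];
ub = [10; 10];
best_fval = inf;
for k = 1:10
    x0 = rand(2,1);
    [x, fval, exitflag] = fmincon(f, x0, [], [], [], [], lb, ub, @nonlincon);
    if exitflag > 0 && fval < best_fval   % keep only converged, improved runs
        best_fval = fval;
        best_x = x;
    end
end
best_x      % should be close to [-0.7071; -0.7071]
best_fval   % should be close to -1.4142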
Better, of course, is to change the problem completely!
You want points that live on the circumference of a circle! So write your code to live on a circle.
radius = 1;
f = @(theta) radius*cos(theta) + radius*sin(theta);
Now use any optimizer you darn well want to, because there are really no constraints at all! Don't forget that sin and cos work in radians. So I suppose we can be nice and bound things to one revolution of the circle, thus [0,2*pi].
[x, retn, exitflag] = fmincon(f,rand(1),[],[],[],[],0,2*pi)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
<stopping criteria details>
x =
3.927
retn =
-1.4142
exitflag =
1
In degrees, that is...
rad2deg(x)
ans =
225
So it found the point we want.
But this is a 1-dimensional problem. While we could just as easily have used fminsearch, fminbnd is the best tool here: it is efficient and converges quickly to many digits of precision.
[x,fval,exitflag] = fminbnd(f,0,2*pi)
x =
3.927
fval =
-1.4142
exitflag =
1
rad2deg(x)
ans =
225
I think the point is, IF you can avoid nonlinear equality constraints, you will often be in a better, happier place. When there is no good reason to use them, don't. After all, rat-infested holes of death tend to be less than desirable places of habitation. ;-)