Why are you doing this? Really, why are you trying to solve it using that kluge of approaches? Go simple. Don't over-complicate things.
n = 4;
AA = [12, -1, 0, 0; ...
      0, 0, 111, -1];   % equality constraint matrix
bb = [0; 0];             % constraint right-hand side
x = [1; 10; 1; 61];      % point to project onto the feasible set
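Worth checking first: x itself does not satisfy the constraints, so there really is something to solve.
AA*x - bb
% expect [2; 50] -- x violates both constraints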
Let's see. Minimize norm(x-y), subject to AA*y = bb. That is trivial, using the proper tool. In the Optimization Toolbox, the proper tool is lsqlin. As I show below, my own LSE (found on the File Exchange) also gives it directly.
Ylsqlin = lsqlin(eye(4),x,[],[],AA,bb)
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
<stopping criteria details>
Ylsqlin =
0.834482758578957
10.0137931029475
0.549586106124114
61.0040577797767
Verification:
AA*Ylsqlin - bb
ans =
-1.77635683940025e-15
0
The sum of squares of the difference was:
norm(x - Ylsqlin)^2
ans =
0.230475348269706
Using my own LSE (from the File Exchange) for comparison:
lse(eye(4),x,AA,bb)
ans =
0.83448275862069
10.0137931034483
0.549586106151599
61.0040577828275
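For a problem this small, you can even bypass the solvers entirely. A minimal sketch (using only the variables already defined above): the KKT conditions for minimizing norm(x-y) subject to AA*y = bb give a closed-form projection.
% Closed-form projection of x onto the affine set {y : AA*y = bb}
Yexact = x - AA'*((AA*AA')\(AA*x - bb))
% should reproduce the lsqlin/lse results above to machine precision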
Now, how would it be solved using quadprog? The sum of squares of differences is:
(x-y)'*eye(4)*(x-y)
Expanding, we get
x'*x + y'*y - 2*x'*y
x'*x is a constant, so irrelevant to the optimization, BUT it explains why your objectives are so different.
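If you doubt that expansion, here is a one-line numerical check at a random point (ytest is just a throwaway variable):
ytest = randn(4,1);
(x-ytest)'*(x-ytest) - (x'*x + ytest'*ytest - 2*x'*ytest)
% should be zero, up to roundoff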
To convert this objective into a form quadprog can use, we want it as:
FVAL = 0.5*X'*H*X + f'*X
Matching terms, y'*y - 2*x'*y = 0.5*y'*(2*eye(n))*y + (-2*x)'*y. So
H = 2*eye(n);
F = -2*x;
Now solve.
Yqp = quadprog(H,F,[],[],AA,bb)
Minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
<stopping criteria details>
Yqp =
0.834482758599826
10.0137931031979
0.549586106137858
61.0040577813022
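As a check, add the constant x'*x back onto the quadprog objective; that should recover the same squared distance lsqlin reported.
0.5*Yqp'*H*Yqp + F'*Yqp + x'*x
% should be approximately 0.230475, i.e., norm(x - Yqp)^2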
It should be no surprise that this is the same result lsqlin gave, but it took more thought to write. Use the proper tools. And now, suppose you want to use fmincon.
fun = @(y) norm(x-y);   % objective: distance from x
x0 = rand(4,1);         % arbitrary starting point
Yfmincon = fmincon(fun,x0,[],[],AA,bb)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
<stopping criteria details>
Yfmincon =
0.834482752432331
10.013793029188
0.549586102080044
61.0040573308849
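To see just how close they all are, compare the solutions directly; the differences should be down at the solver tolerances.
[norm(Ylsqlin - Yqp), norm(Ylsqlin - Yfmincon)]
% both should be tiny, roughly 1e-6 or smaller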
So, essentially identical. And very easy to write in all cases, though lsqlin was by far the simplest.
Anyway, there was absolutely NO need to provide a gradient function to fmincon. (In fact, there is almost never a need to provide a gradient.) It was certainly a waste of time in this case. Nor was there any need for an m-file cost function. The other differences in your results I would put down to simple algebra mistakes that are hardly worth debugging, because you massively over-complicated the problem.