Simple implementation of interior point method

Tags: convex-optimization, MATLAB, numerical-optimization, optimization, quadratic-programming

I am trying to build a simple implementation of the interior-point optimization method in MATLAB. I want to solve a very simple (two-variable) constrained quadratic minimization problem:

$$
\begin{alignat}{2}
&\!\min_{x} &\qquad& J=\frac{1}{2}x^TGx+f^Tx\\
&\text{subject to} & & Ax \leq b
\end{alignat}
$$

Now, I have introduced log barriers into the function to be minimized, so the problem becomes unconstrained:

$$
\begin{alignat}{2}
&\!\min_{x} &\qquad& \bar{J}=J+k\Psi=\frac{1}{2}x^TGx+f^Tx+k\sum_i{-\log{(b_i-a_ix)}}\\
\end{alignat}
$$

Here $a_i$ denotes the $i$-th row of $A$ and $b_i$ the corresponding entry of the $b$ vector.
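For concreteness, here is a minimal sketch of how this barrier-augmented objective could be evaluated in MATLAB (the helper name barrierObjective and its layout are my own, not part of the original code); note that the barrier is only defined for strictly feasible $x$:

function Jbar = barrierObjective(G,f,A,b,k,x)
    % Slacks of the inequality constraints; the log barrier is only
    % defined while all of them are strictly positive.
    s = b - A*x;
    assert(all(s > 0), 'x must be strictly feasible');

    Jbar = 0.5*(x'*G*x) + f'*x - k*sum(log(s));
end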

The basic algorithm of the interior point method is the following:

G = [2,-1; -1,1];             % quadratic term of the objective
f = [-1; 0];                  % linear term of the objective
A = [1,0; -1,0; 0,1; 0,-1];   % inequality constraints A*x <= b
b = [0.5; 1; 0.5; 1];
x0 = [-0.5; -0.5];            % strictly feasible starting point
xprev = x0;

k = 1;                        % initial barrier weight
v = 20;                       % factor by which k shrinks each outer iteration

nIter = 20;

for i = 1:nIter
    % Minimize the barrier-augmented function for the current k
    x = newtonMethod(G,f,xprev,A,b,k);

    if norm(x-xprev) < 0.0001
        break;
    end

    % Shrink the barrier weight
    k = k/v;

    xprev = x;
end
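A quick way to see whether the outer iterates actually stay feasible is to print the constraint residual after each Newton solve (a debugging sketch of my own; the fprintf line goes inside the loop above, right after x is computed):

% A positive value of max(A*x - b) means x has left the feasible region
fprintf('iter %d: k = %g, max(A*x-b) = %g\n', i, k, max(A*x - b));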

In order to minimize the modified function I am using Newton's method, because the first and second derivatives are available in closed form:

$$
\nabla \bar{J}=\nabla(J+k\Psi)=Gx+f+k\sum_i{\frac{a_i^T}{b_i-a_ix}}
$$

$$
\nabla^2 \bar{J}=\nabla^2(J+k\Psi)=G+k\sum_i{\frac{a_i^Ta_i}{(b_i-a_ix)^2}}
$$
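Since these formulas are easy to get wrong, one way to validate them is a central finite-difference check of the closed-form gradient (a quick sketch, assuming the variables G, f, A, b, k from the script above are in scope; Jbar, grad, g_fd and the step h are my own names):

% Objective and closed-form gradient of the barrier-augmented function
Jbar = @(x) 0.5*(x'*G*x) + f'*x - k*sum(log(b - A*x));
grad = @(x) G*x + f + k*(A'*(1./(b - A*x)));

x = [-0.25; -0.25];          % any strictly feasible test point
h = 1e-6;
g_fd = zeros(2,1);
for j = 1:2
    e = zeros(2,1); e(j) = h;
    g_fd(j) = (Jbar(x + e) - Jbar(x - e))/(2*h);
end

disp([g_fd, grad(x)])        % the two columns should agree closely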

To find the optimal $x$, I iterate the standard Newton update (written with iteration index $n$, since $k$ already denotes the barrier weight):

$$
x_{n+1}=x_{n}-\nabla^2 \bar{J}(x_n)^{-1} \, \nabla \bar{J}(x_n)
$$

The algorithm I am using for this is contained in the 'newtonMethod' function:

function x_opt = newtonMethod(G,f,x0,A,b,k)

    xprev = x0;

    [n,m] = size(A);          % n constraints, m variables

    log_jacobian = zeros(m,1);
    log_hessian = zeros(m);

    % Gradient and Hessian of the barrier term, accumulated row by row.
    % Note that they are evaluated once, at the starting point x0.
    for i = 1:n
        a_i = A(i,:);
        b_i = b(i);

        log_jacobian = log_jacobian + a_i'/(b_i - a_i*xprev);
        log_hessian = log_hessian + (a_i'*a_i)/(b_i - a_i*xprev)^2;
    end

    for i = 1:200
        % Gradient and Hessian of the full barrier-augmented objective
        J_hessian = G + k*log_hessian;
        J_jacobian = G*xprev + f + k*log_jacobian;

        % Full Newton step
        x = xprev - J_hessian\J_jacobian;

        if norm(x-xprev) < 0.0001
            break;
        end

        xprev = x;
    end

    x_opt = x;

end

The constraints ($Ax \leq b$) define the region $-1 \leq x_1 \leq 0.5$, $-1 \leq x_2 \leq 0.5$, with $x=[x_1, x_2]^T$, so I expect the minimum to be at $x^*=[0.5, 0.5]^T$. However, my algorithm is completely ignoring the constraints: I keep getting $x^*=[1, 1]^T$, which is the solution of the same problem without the constraints (first image, outside the feasible region).

I'm guessing my Newton's method implementation is wrong, because when I replace the line "x = newtonMethod(G,f,xprev,A,b,k);" with a built-in MATLAB minimization function, it works as expected (second image).

[First image: incorrect solution, outside the feasible region]
[Second image: correct solution]
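The question does not say which built-in function was used; for reference, a QP this small could be checked directly against quadprog from the Optimization Toolbox, which should return the constrained minimizer, approximately $[0.5, 0.5]^T$:

% Reference solution for the same QP (requires the Optimization Toolbox)
x_ref = quadprog(G, f, A, b);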

Best Answer

You need to add a line search step. Newton's method only gives you a search direction, and if you take the full Newton step you can violate the constraints. Finding the maximum step length such that you do not violate any of the constraints is trivial; you then shrink that step length by a small percentage, because the barrier is $\infty$ at the boundary. If you enter "fraction to the boundary rule" in your favorite search engine, you will find a few implementations.
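For illustration, here is a minimal sketch of that rule applied inside the asker's newtonMethod, replacing the full step x = xprev - J_hessian\J_jacobian (the names dx, s, r, alpha_max, alpha and the value tau = 0.99 are my own choices, not a definitive implementation):

% Newton direction (same linear solve as before)
dx = -J_hessian\J_jacobian;

% Largest step keeping every slack b_i - a_i*x strictly positive:
% only constraints whose slack shrinks along dx (A*dx > 0) can be hit.
s = A*dx;
r = b - A*xprev;
if any(s > 0)
    alpha_max = min(r(s > 0) ./ s(s > 0));
else
    alpha_max = Inf;            % no constraint is approached along dx
end

% Fraction-to-the-boundary rule: go at most a fraction tau of the way
% to the nearest constraint, and never more than the full Newton step.
tau = 0.99;
alpha = min(1, tau*alpha_max);

x = xprev + alpha*dx;

Values of tau between 0.9 and 0.995 are typical; the closer tau is to 1, the closer the iterates are allowed to get to the boundary.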