[Math] A constrained gradient descent algorithm

gradient descent, numerical optimization, optimization

I am looking for a way to solve the following constrained minimization problem using the gradient descent algorithm:

$$\min_T \; J(T), \quad T = [T_1, T_2, \dots, T_m]$$
$$\text{s.t.} \quad T_i > 0 \ \text{ for } i = 1, \dots, m, \qquad \sum_{i=1}^m T_i = \text{constant}$$

How can I handle the constraints? They are very simple constraints, but I don't know how to deal with them.

Best Answer

You can't apply gradient descent directly. Here are a few alternatives:

  • If $J(T)$ is linear, this is a very simple problem: solve it with the simplex method or any other linear programming solver of your choice.

  • However, I assume $J(T)$ is not linear. If $J(T)$ is quadratic, you can use an active-set QP solver to find the solution, which again is quite mature technology.

  • If $J(T)$ is not quadratic but still convex, you can use tools like CVX to solve your problem. Again, these tools are quite mature.

  • If $J(T)$ is not even convex, you can use interior-point methods or penalty-based methods to solve the problem. There are many software packages available.
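To make the last option concrete, here is a minimal sketch of a penalty/barrier approach combined with plain gradient descent. The objective $J(T) = \|T - a\|^2$, the target vector `a`, and all penalty parameters are made-up assumptions for illustration; they are not from the question. The equality constraint is handled by a quadratic penalty and the positivity constraints by a log-barrier:

```python
import numpy as np

# Hypothetical objective: J(T) = ||T - a||^2 for a fixed target a.
# Penalty/barrier terms encode the constraints:
#   sum(T) = c  ->  rho * (sum(T) - c)^2        (quadratic penalty)
#   T_i > 0     ->  -mu * sum(log(T_i))          (log-barrier, keeps T_i > 0)
def penalized_grad(T, a, c, rho, mu):
    g = 2.0 * (T - a)                    # gradient of J
    g += 2.0 * rho * (T.sum() - c)       # gradient of the equality penalty
    g -= mu / T                          # gradient of the log-barrier
    return g

def solve(a, c, rho=100.0, mu=1e-3, step=1e-3, iters=20000):
    T = np.full_like(a, c / len(a))      # feasible, strictly positive start
    for _ in range(iters):
        T = T - step * penalized_grad(T, a, c, rho, mu)
        T = np.maximum(T, 1e-12)         # guard against overshooting past 0
    return T

a = np.array([0.5, 2.0, 1.0])            # made-up data for the sketch
T = solve(a, c=3.0)
print(T, T.sum())
```

With a finite penalty weight `rho`, the equality constraint is only satisfied approximately; a practical implementation would increase `rho` (and shrink `mu`) over a sequence of such solves, which is exactly the idea behind interior-point and penalty methods.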

If you give us more details about what $J(T)$ is, we might be able to give you a more appropriate solution.

Also, be careful when using strict inequalities in optimization. A continuous objective is only guaranteed to attain its minimum on a compact set (in $\Re^N$: closed and bounded), and a strict inequality makes the feasible set non-closed. To see why this matters, try $\min_x x$ such that $x\in(0,1)$.
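Spelling out that toy example as a short derivation:

```latex
\inf_{x \in (0,1)} x = 0, \quad \text{yet } 0 \notin (0,1).
% No minimizer exists: for any candidate x^* \in (0,1),
% the point x^*/2 is also feasible and strictly smaller,
% so the infimum is approached but never attained.
```

This is why, in practice, solvers replace $T_i > 0$ with $T_i \ge 0$ or $T_i \ge \epsilon$ for some small $\epsilon > 0$.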
