To your first question: It seems there are some implicit restrictions that should be made explicit. Are there nonnegativity constraints on the variables $y_{l,c}$? In order to guarantee a solution, you must have positive lower bounds on $p_{l,c}$, as otherwise the logarithm blows up. (Constraints of the form $p_{l,c} > 0$ are frowned upon in optimization; $p_{l,c} \ge \epsilon$ is usually preferred.)
Under the reasonable assumptions outlined above ($I_{l,c} > 0$), your problem is a convex optimization problem. (The objective function is concave, but the corresponding minimization problem $\min (-f)$ --- see your second question --- is convex.)
I am not sure why you want to use a particular algorithm. GRG is probably not too bad, but there are other algorithms out there. Are you planning to implement this yourself? (That would be an entirely different question.) The best approach is likely to depend on $L$ and $C$. To start, I would recommend either a simple route such as the built-in Excel solver (or the Frontline Solver or OpenSolver add-ins for Excel), or an open-source package such as Ipopt.
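To make the "minimize $-f$ with an $\epsilon$ lower bound" recipe concrete, here is a toy sketch in Python/SciPy. The objective $\sum_i \log p_i$ with $\sum_i p_i = 1$ is a purely illustrative stand-in (your actual $I_{l,c}$-dependent objective and constraints would replace it); the point is only the mechanics of handing a concave maximization to a general NLP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in problem: maximize sum(log(p)) subject to
# sum(p) = 1 and p >= eps (the epsilon lower bound discussed above).
# The concave maximization is passed to the solver as the convex min(-f).
eps = 1e-6
m = 4

neg_f = lambda p: -np.sum(np.log(p))
res = minimize(
    neg_f,
    x0=np.array([0.1, 0.2, 0.3, 0.4]),   # any feasible starting point
    method="SLSQP",
    bounds=[(eps, None)] * m,            # p >= eps rather than p > 0
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
p_opt = res.x                            # by symmetry, the uniform vector 1/m
```

By symmetry the optimum of this stand-in is the uniform distribution $p_i = 1/m$, which is a quick way to check the solver setup before plugging in the real objective.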
Dual Problem Solution
The Lagrangian is given by:
$$ L \left( x, \lambda, \nu \right) = {x}^{T} x + {\lambda}^{T} \left( x - a \right) + \nu \left( \boldsymbol{1}^{T} x - b \right) = {x}^{T} x + \left(
\lambda + \nu \boldsymbol{1} \right)^{T} x -{\lambda}^{T} a - \nu b $$
The Dual Function is given by:
$$ g \left( \lambda, \nu \right) = \inf_{x} L \left( x, \lambda, \nu \right) $$
Looking at the term related to $ x $:
$$ \inf_{x} {x}^{T} x + \left( \lambda + \nu \boldsymbol{1} \right)^{T} x $$
This is a quadratic function of $ x $, with its minimizer given by:
$$ {x}^{\ast} = -\frac{1}{2} \left(
\lambda + \nu \boldsymbol{1} \right) $$
Its minimum is given by:
$$ {{x}^{\ast}}^{T} {x}^{\ast} + \left(
\lambda + \nu \boldsymbol{1} \right)^{T} {x}^{\ast} = -\frac{1}{4} \left(
\lambda + \nu \boldsymbol{1} \right)^{T} \left( \lambda + \nu \boldsymbol{1} \right) $$
Hence the Dual Problem is given by:
\begin{align*}
\text{maximize} & \quad & -\frac{1}{4} \left(
\lambda + \nu \boldsymbol{1} \right)^{T} \left( \lambda + \nu \boldsymbol{1} \right) - {\lambda}^{T} a - \nu b \\
\text{subject to} & \quad & \lambda \succeq 0
\end{align*}
The objective is concave in $ \left( \lambda, \nu \right) $ and is maximized over a convex set, hence the dual is a convex optimization problem.
It can be solved as a Quadratic Programming problem by writing:
$$ \left(
\lambda + \nu \boldsymbol{1} \right)^{T} \left( \lambda + \nu \boldsymbol{1} \right) = {\left\| E v \right\|}_{2}^{2} = {v}^{T} {E}^{T} E v = {v}^{T} H v $$
Where $ v = {\left[ \lambda, \nu \right]}^{T}, \; E = \left[ I, \boldsymbol{1} \right], \; H = {E}^{T} E $. Then the problem becomes:
$$
\begin{align*}
\text{minimize} & \quad & \frac{1}{4} {v}^{T} H v + {v}^{T} f \\
\text{subject to} & \quad & A v \preceq 0
\end{align*}
$$
Where $ A = - \left[ I, \boldsymbol{0} \right], \; f = {\left[ a, b \right]}^{T} $.
The above can be solved directly by MATLAB's quadprog(). Then $ {x}^{\ast} = -0.5 E {v}^{\ast} $.
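As a numerical sanity check, here is a Python/NumPy sketch of the same pipeline (the helper name and test data are mine, and a simple projected gradient loop stands in for quadprog, clamping $ \lambda \succeq 0 $ after each step):

```python
import numpy as np

def solve_via_dual(a, b, iters=20000):
    """Minimize x^T x s.t. x <= a, 1^T x = b, via the dual QP above."""
    n = len(a)
    E = np.hstack([np.eye(n), np.ones((n, 1))])   # E = [I, 1]
    H = E.T @ E                                   # H = E^T E
    f = np.concatenate([a, [b]])                  # f = [a; b]
    v = np.zeros(n + 1)                           # v = [lambda; nu]
    step = 2.0 / (n + 1)                          # 1 / Lipschitz const of 0.5 H v + f
    for _ in range(iters):
        v -= step * (0.5 * H @ v + f)             # gradient step on (1/4) v'Hv + f'v
        v[:n] = np.maximum(v[:n], 0.0)            # project: lambda >= 0, nu free
    return -0.5 * E @ v                           # x* = -(1/2)(lambda + nu * 1)
```

For example, with $ a = \left( 0.1, 1, 1 \right) $ and $ b = 1 $ the bound on the first coordinate is active, and the recovered primal point is $ {x}^{\ast} = \left( 0.1, 0.45, 0.45 \right) $ (the unconstrained answer $ \left( 1/3, 1/3, 1/3 \right) $ violates $ {x}_{1} \leq 0.1 $).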
Best Answer
This is a Community Wiki Solution.
Feel free to edit and add.
I will upvote and mark as the solution any other answer posted by the community.
KKT
The Lagrangian is given by:
$$ L \left( y, \lambda \right) = \frac{1}{2} {\left\| y - x \right\|}^{2} + \mu \left( {e}^{T} y - k \right) - {\lambda}_{1}^{T} y + {\lambda}_{2}^{T} \left( y - e \right) $$
The KKT Constraints:
\begin{align} \left( 1 \right) \; & {\nabla}_{y} L \left( y, \lambda \right) = y - x + \mu e - {\lambda}_{1} + {\lambda}_{2} & = 0 \\ \left( 2 \right) \; & {\nabla}_{\mu} L \left( y, \lambda \right) = {e}^{T} y - k & = 0 \\ \left( 3 \right) \; & -{\lambda}_{1}^{T} y & = 0 \\ \left( 4 \right) \; & {\lambda}_{2}^{T} \left( y - e \right) & = 0 \\ \left( 5 \right) \; & {\lambda}_{1} & \geq 0 \\ \left( 6 \right) \; & {\lambda}_{2} & \geq 0 \end{align}
Multiplying $ \left( 1 \right) $ by $ {e}^{T} $ and using $ \left( 2 \right) $ yields:
$$ {e}^{T} y - {e}^{T} x + \mu {e}^{T} e - {e}^{T} {\lambda}_{1} + {e}^{T} {\lambda}_{2} = 0 \Rightarrow \mu = \frac{ {e}^{T} x - k + {e}^{T} {\lambda}_{1} - {e}^{T} {\lambda}_{2} }{ n } $$
Plugging the result into $ \left( 1 \right) $ yields $ y = x - \mu e + {\lambda}_{1} - {\lambda}_{2} $, but the complementarity conditions $ \left( 3 \right) $ and $ \left( 4 \right) $ still couple $ y $, $ {\lambda}_{1} $, and $ {\lambda}_{2} $, so an analytic solution seems hard to obtain.
Any other way to solve this system of equations?
Work in progress...
Feel free to continue.
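One possible way out (a sketch, not necessarily the intended route): condition $ \left( 1 \right) $ together with $ \left( 3 \right) $--$ \left( 6 \right) $ forces $ {y}_{i} = \min \left\{ \max \left\{ {x}_{i} - \mu, 0 \right\}, 1 \right\} $ componentwise, and $ {e}^{T} y \left( \mu \right) $ is nonincreasing in $ \mu $, so the remaining equation $ \left( 2 \right) $ reduces to a scalar root-finding problem. A Python/NumPy sketch (helper name is mine; assumes $ 0 \le k \le n $ so the projection is feasible):

```python
import numpy as np

def project_box_simplex(x, k, tol=1e-12):
    """Solve min (1/2)||y - x||^2  s.t.  e^T y = k, 0 <= y <= 1.

    The KKT conditions give y_i = clip(x_i - mu, 0, 1); bisect on mu
    until e^T y = k (the sum is nonincreasing in mu).
    """
    y_of = lambda mu: np.clip(x - mu, 0.0, 1.0)
    lo, hi = x.min() - 1.0, x.max()    # sum(y(lo)) = n >= k, sum(y(hi)) = 0 <= k
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if y_of(mid).sum() > k:
            lo = mid                   # sum too large -> increase mu
        else:
            hi = mid
    return y_of(0.5 * (lo + hi))
```

Since each bisection step halves the bracket, about 50 iterations give machine-precision accuracy in $ \mu $, and each step is $ O \left( n \right) $.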
Projected Subgradient
Dual Projected Subgradient
Given a problem of the form:
\begin{align*} \arg \min_{x} & \quad f \left( x \right) \\ s.t. & \quad {g}_{i} \left( x \right) \leq 0 , \; i = 1, 2, \cdots, m \\ & \quad x \in \mathcal{S} \end{align*}
Where $ f $ and the $ {g}_{i} $ are convex and $ \mathcal{S} $ is a closed convex set over which minimization is easy.
Then, following Amir Beck's lecture notes, the dual projected subgradient method iterates
$$ {x}_{k} \in \arg \min_{x \in \mathcal{S}} \left\{ f \left( x \right) + \sum_{i = 1}^{m} {\lambda}_{i}^{k} {g}_{i} \left( x \right) \right\}, \qquad {\lambda}_{i}^{k + 1} = \max \left\{ {\lambda}_{i}^{k} + {t}_{k} {g}_{i} \left( {x}_{k} \right), 0 \right\} $$
with positive step sizes $ {t}_{k} $.
In this problem $ f \left( y \right) = \frac{1}{2} { \left\| y - x \right\| }^{2} $; $ {g}_{i} \left( y \right) = -{y}_{i} $ for $ i = 1, 2, \ldots, n $; $ {g}_{i} \left( y \right) = {y}_{i - n} - 1 $ for $ i = n + 1, n + 2, \ldots, 2 n $; and $ \mathcal{S} = \left\{ y \mid {e}^{T} y = k \right\} $.
The subproblem $ {y}_{k} = \arg \min_{y \in \mathcal{S}} \left\{ f \left( y \right) + \sum_{i = 1}^{m} {\lambda}_{i}^{k} {g}_{i} \left( y \right) \right\} $ can in general be solved by projected subgradient iterations; here, since the objective is quadratic, its minimizer is available in closed form as a projection onto $ \mathcal{S} $.
In this case the projection operator is given by $ {\mathcal{P}}_{{e}^{T} y = k} \left( y \right) = y - e {\left( {e}^{T} e \right)}^{-1} \left( {e}^{T} y - k \right) $.
The gradient of $ L \left( y, \lambda \right) = \frac{1}{2} { \left\| y - x \right\| }^{2} + \sum_{i = 1}^{m} {\lambda}_{i} {g}_{i} \left( y \right) $ is given by $ {\nabla}_{y} L \left( y, \lambda \right) = y - x + {\left[ \left( {\lambda}_{n + 1} - {\lambda}_{1} \right), \left( {\lambda}_{n + 2} - {\lambda}_{2} \right), \cdots, \left( {\lambda}_{2n} - {\lambda}_{n} \right) \right]}^{T} $.
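Putting the pieces together, here is a rough Python/NumPy sketch of the dual loop (function name and step rule are mine; for fixed $ \lambda $ the subproblem is quadratic over the hyperplane, so it is solved by the projection formula above rather than by inner subgradient steps):

```python
import numpy as np

def dual_projected_subgradient(x, k, iters=2000, step=0.5):
    """min (1/2)||y - x||^2  s.t.  e^T y = k, 0 <= y <= 1, via the dual."""
    n = len(x)
    e = np.ones(n)
    proj = lambda z: z - e * (e @ z - k) / n     # projection onto e^T y = k
    lam = np.zeros(2 * n)                        # multipliers for -y_i <= 0 and y_i - 1 <= 0
    for j in range(1, iters + 1):
        d = lam[n:] - lam[:n]                    # the vector (lam_{n+i} - lam_i) from the gradient
        y = proj(x - d)                          # closed-form argmin_y over S of L(y, lam)
        g = np.concatenate([-y, y - 1.0])        # subgradient of the dual: g_i(y)
        lam = np.maximum(lam + (step / np.sqrt(j)) * g, 0.0)  # projected ascent step
    return y
```

When the box constraints are inactive at the optimum the multipliers stay at zero and the very first iterate is already the exact answer; when they are active, convergence follows the usual (slow) subgradient rate, so in practice the bisection approach above is preferable for this particular projection.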
The following MATLAB code implements the method: