Linear objective function with determinant as a constraint

Tags: constraints, convex optimization, nonlinear optimization, optimization

So I have the following optimization problem
$$
\begin{aligned}
\max_{x:=(x_1,\dots,x_n)^T\in\mathbb{R}^n} \sum_{i=1}^n x_i \\
\text{s.t.}\quad \det G(x) = 0
\end{aligned}
$$

where $G(x) = I+\sum_{i=1}^n G_ix_i$ and the $G_i\in\mathbb{R}^{m\times m}$ are positive definite matrices.
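For concreteness, this is how one might pose the problem numerically. A minimal SciPy sketch with made-up positive definite $G_i$ (SLSQP only returns a local, feasible candidate, not a certified global maximum):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data (not from the question): n = 2 variables, m x m positive definite G_i
m_dim, n = 3, 2
rng = np.random.default_rng(0)
Gs = []
for _ in range(n):
    A = rng.standard_normal((m_dim, m_dim))
    Gs.append(A @ A.T + m_dim * np.eye(m_dim))  # symmetric positive definite

def G(x):
    """G(x) = I + sum_i x_i * G_i."""
    return np.eye(m_dim) + sum(x[i] * Gs[i] for i in range(n))

# maximize sum(x)  <=>  minimize -sum(x), subject to det G(x) = 0
res = minimize(
    lambda x: -np.sum(x),
    x0=-np.ones(n),  # any feasible point has some x_i < 0, since G(x) is positive definite on the nonnegative orthant
    constraints=[{"type": "eq", "fun": lambda x: np.linalg.det(G(x))}],
    method="SLSQP",
)
print(res.x, -res.fun, np.linalg.det(G(res.x)))
```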

This looks similar to some problems from "Convex Optimization" and "Linear Matrix Inequalities in System and Control Theory" by Stephen Boyd, such as the maxdet problem. However, I have not been able to work out the solution to such a problem, even though I don't even have inequality constraints. I built the Lagrangian as
$$
\mathcal{L} = \sum_{i=1}^n x_i + \lambda \det G(x)
$$

But when I take the gradient of $\mathcal{L}$ using Jacobi's formula (https://en.wikipedia.org/wiki/Jacobi%27s_formula), it seems that I require $G(x)$ to be invertible, yet the constraint $\det G(x)=0$ forbids this. I know that even if $G(x)^{-1}$ does not exist, its adjugate does. However, I don't see how this simplifies the problem, given the expression for $G(x)$. I expected to end up with some LMI condition or something like that.
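In adjugate form, Jacobi's formula does hold even when $G(x)$ is singular; writing $\operatorname{adj}$ for the adjugate,
$$
\frac{\partial}{\partial x_i}\det G(x) = \text{tr}\big(\operatorname{adj}(G(x))\,G_i\big),
$$
so the derivative itself is well defined at points where $\det G(x)=0$; my difficulty is what to do with it next.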

Can you suggest any other strategy to tackle this problem?

EDIT. Working a bit more on this, computing $\frac{\partial}{\partial x_i}\mathcal{L} = 0$ gives:
$$
\text{tr}(G(x)^*G_i) = -1/\lambda
$$

where $G(x)^*$ is the adjugate of $G(x)$. Or equivalently
$$
\text{tr}(G(x)^*G_i) = \text{tr}(G(x)^*G_j), \ \ \forall i,j \in \{1,\dots,n\}
$$

So, I end up with $n-1$ equations of the form $\text{tr}(G(x)^*(G_i-G_j))=0$, and the last equation is just $\det G(x)=0$. Is that it? $n$ equations in $n$ variables, and I just plug that into the computer? This is somewhat unsatisfying. I am trying to find an expression for the solutions that might give more insight into what they look like.

Do you see any improvement in this formulation?
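As a sanity check on the "plug it into the computer" route, here is a minimal sketch with made-up positive definite $G_i$ (the adjugate is computed from cofactors, so singularity of $G(x)$ is not a problem; a root of this system is only a stationary-point candidate, not necessarily a maximizer):

```python
import numpy as np
from scipy.optimize import root

# Made-up data (not from the question): n = 2, m = 3, random positive definite G_i
m_dim, n = 3, 2
rng = np.random.default_rng(0)
Gs = []
for _ in range(n):
    A = rng.standard_normal((m_dim, m_dim))
    Gs.append(A @ A.T + m_dim * np.eye(m_dim))

def G(x):
    return np.eye(m_dim) + sum(x[i] * Gs[i] for i in range(n))

def adjugate(A):
    """Adjugate via cofactors; well defined even when A is singular."""
    m = A.shape[0]
    C = np.empty_like(A)
    for i in range(m):
        for j in range(m):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adjugate is the transpose of the cofactor matrix

def F(x):
    """n-1 equations tr(adj(G)(G_1 - G_j)) = 0, plus det G(x) = 0."""
    adjG = adjugate(G(x))
    eqs = [np.trace(adjG @ (Gs[0] - Gs[j])) for j in range(1, n)]
    eqs.append(np.linalg.det(G(x)))
    return np.array(eqs)

sol = root(F, x0=-np.ones(n))
print(sol.x, F(sol.x))
```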

EDIT 2. Someone suggested to replace the original problem with
$$
\begin{aligned}
\max_{x:=(x_1,\dots,x_n)^T\in\mathbb{R}^n} \sum_{i=1}^n x_i \\
\text{s.t.}\ \ G(x) \succeq 0
\end{aligned}
$$

Apparently this is not necessarily equivalent to the original one, but it is stated as a classical LMI and is more suitable for me. Numerically, I find that solutions to this new problem are also solutions to the original one, but I guess some solutions might be missing. Would this be true in general? What do we lose by studying this problem instead?
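Just to show how this EDIT 2 problem can be typed up, here is a minimal CVXPY sketch with made-up $G_i$; the solver status tells you whether the SDP is even bounded for a particular choice of $G_i$, and when it is, one can check how small the least eigenvalue of $G(x^\star)$ is, i.e., whether the optimum sits on the $\det G(x)=0$ surface:

```python
import numpy as np
import cvxpy as cp

# Made-up data (not from the question): n = 2, m = 3, symmetric positive definite G_i
m_dim, n = 3, 2
rng = np.random.default_rng(0)
Gs = []
for _ in range(n):
    A = rng.standard_normal((m_dim, m_dim))
    Gs.append(A @ A.T + m_dim * np.eye(m_dim))

x = cp.Variable(n)
G = np.eye(m_dim) + sum(x[i] * Gs[i] for i in range(n))   # affine in x

prob = cp.Problem(cp.Maximize(cp.sum(x)), [G >> 0])       # classical LMI problem
prob.solve()
print("status:", prob.status, "value:", prob.value)       # 'optimal' or 'unbounded', depending on the data

if x.value is not None:
    G_opt = np.eye(m_dim) + sum(x.value[i] * Gs[i] for i in range(n))
    print("min eigenvalue of G(x*):", np.linalg.eigvalsh(G_opt).min())
```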

Best Answer

This can be formulated as an optimization problem with a linear objective and bilinear equality constraints, which can be solved to global optimality using Gurobi 9.x or a general-purpose branch-and-bound global optimizer.

The constraint $\det G(x) = 0$ is equivalent to $G(x)$ being less than full rank, i.e., one of its columns (the $m$th, to make it concrete) is a linear combination of all the other columns. Let $\alpha_i$, $i=1,\dots,m-1$, be additional optimization variables, which serve as the multipliers of the respective columns.

The $m$ by $1$ vector constraint is

$$\sum_{i=1}^{m-1}\alpha_i\,(\text{$i$th column of } G(x)) = m\text{th column of } G(x)$$ This involves product terms $\alpha_i x_j$, hence it is an $m$ by $1$ vector of bilinear equality constraints.

This might not be easy (fast) to solve, but it is easy to formulate.
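A minimal sketch of that formulation (made-up data, gurobipy syntax; the last column is taken as the dependent one, and the box bounds on the variables are purely illustrative, none of this comes from the question):

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Made-up data (not from the question): n = 2, m = 3, random positive definite G_i
m_dim, n = 3, 2
rng = np.random.default_rng(0)
Gs = []
for _ in range(n):
    A = rng.standard_normal((m_dim, m_dim))
    Gs.append(A @ A.T + m_dim * np.eye(m_dim))

model = gp.Model("linear_obj_singular_G")
x = model.addVars(n, lb=-10.0, ub=10.0, name="x")              # illustrative box bounds
alpha = model.addVars(m_dim - 1, lb=-10.0, ub=10.0, name="a")  # column multipliers

model.setObjective(gp.quicksum(x[i] for i in range(n)), GRB.MAXIMIZE)

def col_entry(k, c):
    """Entry (k, c) of G(x) = I + sum_j x_j G_j, linear in x."""
    return (1.0 if k == c else 0.0) + gp.quicksum(Gs[j][k, c] * x[j] for j in range(n))

# m scalar bilinear equalities: sum_i alpha_i * (i-th column of G(x)) = last column of G(x)
for k in range(m_dim):
    lhs = gp.quicksum(alpha[i] * col_entry(k, i) for i in range(m_dim - 1))
    model.addQConstr(lhs == col_entry(k, m_dim - 1))

model.Params.NonConvex = 2   # Gurobi 9.x: allow nonconvex (bilinear) quadratic constraints
model.optimize()
print([x[i].X for i in range(n)])
```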
