[Math] Jacobian or Hessian to optimize equation

hessian-matrix, jacobian, maxima-minima, newton-raphson, optimization

My objective is to maximize the following function in $[0,1]^n$
$$F(x_1,…,x_n)=\frac{1}{\bigg(1-\sum\limits_{i=1}^nx_i\bigg)^{1-\sum\limits_{i=1}^nx_i}\prod\limits_{i=1}^nx_i^{x_i}}$$
where each $x_i\in [0,1]$.

For instance, if $n=3$, we have
$$F(x_1,x_2,x_3)=\frac{1}{(1-x_1-x_2-x_3)^{1-x_1-x_2-x_3}x_1^{x_1}x_2^{x_2}x_3^{x_3}}$$
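For concreteness, here is a minimal numerical sketch (mine, not part of the question) that evaluates $F$ for the $n=3$ case. It uses $F=\exp\big(-(1-s)\ln(1-s)-\sum_i x_i\ln x_i\big)$ with $s=\sum_i x_i$, which is just an algebraic restatement of the definition:

```python
import numpy as np

def F(x):
    """Evaluate F at a point with every x_i in (0, 1) and sum(x) < 1.

    Uses F = exp(-(1 - s) ln(1 - s) - sum_i x_i ln x_i), s = sum(x),
    an algebraic rewriting of the definition above (my own choice).
    """
    x = np.asarray(x, dtype=float)
    s = x.sum()
    return np.exp(-(1.0 - s) * np.log(1.0 - s) - np.sum(x * np.log(x)))

print(F([0.25, 0.25, 0.25]))   # prints approximately 4.0 for the n = 3 example
```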
I am not sure if I should use

$1)$ Newton's method for optimization (the inverse-Hessian update) to maximize this function, or

$2)$ Jacobian matrices to solve the system of equations obtained by setting all partial derivatives to $0$, plus a constraint equation describing the interval.

For the Hessian approach, I can use the formula
$$x_{k+1}=x_k-[\mathcal{H}F(x_k)]^{-1}\nabla F(x_k)$$
However, I am unsure how to apply the constraint in this case.
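Ignoring the constraint for a moment, here is a rough sketch of option $1)$ applied to $\ln F$ instead of $F$ (equivalent for maximization, and the derivatives come out much cleaner). The gradient and Hessian formulas in the comments are my own hand derivation, so treat them as assumptions to verify:

```python
import numpy as np

def newton_max_logF(x0, tol=1e-10, max_iter=50):
    """Newton iteration for maximizing ln F.

    Hand-derived formulas (not from the question itself):
        d(ln F)/dx_i        = ln((1 - s) / x_i),          s = sum(x)
        d^2(ln F)/dx_i dx_j = -1/(1 - s) - delta_ij / x_i
    Assumes the iterates stay strictly inside {x_i > 0, sum(x) < 1};
    no line search or other safeguard is included.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = x.sum()
        grad = np.log((1.0 - s) / x)                   # gradient of ln F
        hess = -1.0 / (1.0 - s) - np.diag(1.0 / x)     # Hessian of ln F
        step = np.linalg.solve(hess, grad)             # H^{-1} grad
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# For n = 3 this converges to x_i = 1/(n + 1) = 0.25, the interior stationary point
print(newton_max_logF([0.1, 0.2, 0.3]))
```

Since $\ln F$ is strictly concave on the open set $\{x_i>0,\ \sum_i x_i<1\}$ (the Hessian above is negative definite there), the interior stationary point $x_i=\tfrac{1}{n+1}$ is the unique unconstrained maximizer.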

For the Jacobian approach, I have the following system of equations to solve using Jacobian matrices:
$$\frac{\partial F}{\partial x_1}=0$$
$$\vdots$$
$$\frac{\partial F}{\partial x_n}=0$$
$$\sum\limits_{i=1}^{n}x_i=1$$
Note that when we take the partial derivative with respect to $x_i$, we obtain an expression of the form
$$\frac{e(X)\ln (L(X))}{g(X)}$$
where $e(X)$ is some exponential component, $L(X)$ is some function inside a logarithm, and $g(X)$ is another exponential component. Setting $L(X)=1$ satisfies $\frac{\partial F}{\partial x_i}=0$. There are many solutions to this system if we exclude the constraint $\sum\limits_{i=1}^{n}x_i=1$. I am guessing that there is only one unique solution once we add the constraint, although I am not sure how to prove this.
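To make this structure concrete, here is a small symbolic check (my own sketch using SymPy) for the $n=3$ case. By hand, $\frac{\partial F}{\partial x_i}=F(x)\,\ln\!\frac{1-\sum_j x_j}{x_i}$, which matches the form above with $L(X)=\frac{1-\sum_j x_j}{x_i}$, so $L(X)=1$ means $x_i=1-\sum_j x_j$:

```python
import sympy as sp

# Symbolic check of the derivative structure for n = 3 (illustrative sketch)
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
s = x1 + x2 + x3
F = 1 / ((1 - s)**(1 - s) * x1**x1 * x2**x2 * x3**x3)

# Factor out F and simplify what remains of dF/dx1
ratio = sp.simplify(sp.diff(F, x1) / F)
print(ratio)   # expected: log(1 - x1 - x2 - x3) - log(x1), i.e. ln L with L = (1 - s)/x1
```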

Best Answer

You might consider an interior point Newton's method with a log-barrier of the form:

$$\sum_{i=1}^{n}\big(\log(-4x_i(x_i-1))\big)^k.$$

This function has an easily computable (diagonal) closed-form Hessian, and for a reasonably large odd value of $k$ it stays very close to zero on $[\varepsilon,1-\varepsilon]^n$ while diverging rapidly to negative infinity as any $x_i$ approaches $0$ or $1$.

If you add this log-barrier to your objective function, it will de facto bias your optimal step direction away from the edges of the interval $[0,1]^n$, while introducing only a minimal amount of unwanted step-direction bias in the interior of $[0,1]^n$.
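As a concrete sketch of that barrier (my own reading of the answer; the exponent $k=9$ and the derivative formulas in the comments are assumptions of this sketch, not part of the answer), its value, gradient, and diagonal Hessian can be evaluated like this:

```python
import numpy as np

def log_barrier(x, k=9):
    """Barrier b(x) = sum_i (log(4 x_i (1 - x_i)))^k for an odd exponent k.

    Returns (value, gradient, Hessian diagonal); valid only for x strictly
    inside (0, 1)^n. The off-diagonal Hessian entries are zero.
    """
    x = np.asarray(x, dtype=float)
    u = 4.0 * x * (1.0 - x)            # in (0, 1], so log(u) <= 0
    L = np.log(u)
    dL = 1.0 / x - 1.0 / (1.0 - x)     # d(log u)/dx_i
    val = np.sum(L**k)
    grad = k * L**(k - 1) * dL
    hdiag = (k * (k - 1) * L**(k - 2) * dL**2
             + k * L**(k - 1) * (-1.0 / x**2 - 1.0 / (1.0 - x)**2))
    return val, grad, hdiag

x = np.array([0.3, 0.5, 0.7])
print(log_barrier(x))   # value is essentially 0 this far from the edges of (0, 1)^3
```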

On the other hand, you could also try Lagrange multipliers.