Linear Algebra – Solving Optimization Problems

linear algebra, optimization

Let $a_1, a_2, \ldots, a_n$ be real numbers such that $a_1 + \cdots + a_n = 0$ and $a_1^2 + \cdots + a_n^2 = 1$. What is the maximum value of $a_1a_2 + a_2a_3 + \cdots + a_{n-1}a_n + a_na_1$?

I'd like to emphasize that this was found on a random linear algebra exercise sheet, so one might expect that there exists a clever solution based on matrix manipulation…

Personally I tried to approach the problem geometrically and analytically. Notably, $n=1$ is meaningless, $n=2$ gives $-1$, and $n=3$ gives $-1/2$. This did not reveal much about the general case, except that the Lagrangian approach (the system of partial derivatives and all that jazz) seems to imply that ANY combination satisfying the constraints gives the same value (the value I'm trying to maximize), but this needs further checking.
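For what it's worth, here is a crude Monte-Carlo sanity check of the small cases (a sketch assuming NumPy; `cyclic_sum` and `random_feasible` are just illustrative names I made up):

```python
import numpy as np

def cyclic_sum(a):
    # a_1 a_2 + a_2 a_3 + ... + a_{n-1} a_n + a_n a_1
    return float(np.dot(a, np.roll(a, -1)))

def random_feasible(n, rng):
    # random vector satisfying a_1 + ... + a_n = 0 and |a| = 1
    a = rng.standard_normal(n)
    a -= a.mean()
    return a / np.linalg.norm(a)

rng = np.random.default_rng(0)
results = {n: max(cyclic_sum(random_feasible(n, rng)) for _ in range(20000))
           for n in (2, 3, 4, 5)}
print(results)
```

For $n=2$ and $n=3$ every feasible point gives exactly $-1$ resp. $-1/2$ (consistent with the "any combination gives the same value" observation), while for $n\ge4$ the objective genuinely varies over the feasible set.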

Back to the linear algebra: I see traces of matrices, but I would need to express $A$ and $B$ (see below) reasonably simply in terms of one another before anything useful can be done…

$$A = \operatorname{diag}(a_1,a_2,\ldots,a_{n-1},a_n);$$
$$B = \operatorname{diag}(a_2,a_3,\ldots,a_{n-1},a_n,a_1);$$

P.S. The problem was originally taken from this very forum, but it's quite an old post and I don't seem to be able to leave a comment there.

Best Answer

EDIT: So here we go with a complete rewrite in the language of linear algebra.

In $V=\mathbb C^n$ with standard scalar product $\langle e_j,e_k\rangle=\delta_{jk}$ consider the linear map $$\begin{matrix}\phi\colon &V&\longrightarrow &V\\ &(x_1,x_2,\ldots,x_{n-1},x_n)&\longmapsto&(x_2,x_3,\ldots,x_n,x_1), \end{matrix}$$ that is $e_1\mapsto e_n$ and $e_k\mapsto e_{k-1}$ for $1<k\le n$. The problem statement asks us to maximize $\langle \phi a,a\rangle$ subject to the conditions

$$\begin{align}\tag{c1}\langle a,v_n\rangle &= 0,\\ \tag{c2}\langle a,a\rangle&=1,&\text{and}\\ \tag{c3}a&\in \mathbb R^n.\end{align}$$
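As a quick numerical illustration (a sketch assuming NumPy), $\phi$ is just the cyclic-shift permutation matrix, and for real $a$ the quantity $\langle \phi a,a\rangle$ is exactly the cyclic sum from the problem:

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
a = rng.standard_normal(n)

# Matrix of phi: (phi a)_i = a_{i+1 (mod n)}, i.e. a cyclic left shift
P = np.roll(np.eye(n), -1, axis=0)

lhs = float((P @ a) @ a)                              # <phi a, a> for real a
rhs = sum(a[i] * a[(i + 1) % n] for i in range(n))    # a_1 a_2 + ... + a_n a_1
print(np.isclose(lhs, rhs))
```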

Let $\zeta\in\mathbb C$ be a primitive $n$th root of unity, for example $\zeta=e^{\frac{2\pi i}n}=\cos\frac{2\pi}{n}+i\sin\frac{2\pi}{n}$. For $1\le k\le n$ let $$v_k=\sum_{\nu=1}^n \zeta^{\nu k}e_\nu=(\zeta^k,\zeta^{2k},\ldots ,\zeta^{nk});$$ in particular $v_n=(1,1,\ldots,1)$, so condition (c$1$) is just $a_1+\cdots+a_n=0$. Then $$\langle v_k,v_k\rangle = n,\qquad\langle v_k,v_j\rangle=0\quad\text{for }j\ne k,\qquad \phi(v_k)=\zeta^kv_k,$$i.e. the $v_k$ form an orthogonal eigenvector basis of $V$. Thus if $$\tag1a=\sum_{k=1}^n c_kv_k$$ with $c_k\in\mathbb C$, then $$\langle a,a\rangle = n\sum_{k=1}^n|c_k|^2\qquad\text{and}\qquad\langle \phi a,a\rangle = n\sum_{k=1}^n \zeta^k|c_k|^2.$$
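These identities are easy to check numerically (a sketch assuming NumPy; `np.vdot` computes the Hermitian product $\langle x,y\rangle=\sum_\nu \overline{x_\nu}y_\nu$):

```python
import numpy as np

n = 5
zeta = np.exp(2j * np.pi / n)
P = np.roll(np.eye(n), -1, axis=0)          # matrix of phi (cyclic left shift)

# Rows of V are v_1, ..., v_n with (v_k)_nu = zeta^(nu k); v_n = (1, ..., 1)
V = np.array([[zeta ** (nu * k) for nu in range(1, n + 1)]
              for k in range(1, n + 1)])

for k in range(1, n + 1):
    v = V[k - 1]
    assert np.allclose(P @ v, zeta ** k * v)        # phi(v_k) = zeta^k v_k
    assert np.isclose(np.vdot(v, v), n)             # <v_k, v_k> = n
    for j in range(1, k):
        assert np.isclose(np.vdot(V[j - 1], v), 0)  # <v_j, v_k> = 0 for j != k
print("eigenbasis identities verified for n =", n)
```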

Condition (c$3$) implies in particular that $\langle \phi a,a\rangle \in\mathbb R$, and condition (c$1$) simply says that $c_n=0$ in $(1)$. From this we obtain the bound $$\begin{align}\langle \phi a,a\rangle& =n\sum_{k=1}^{n} \zeta^k|c_k|^2\\& =n\sum_{k=1}^{n-1} \zeta^k|c_k|^2\\&=n\Re\left(\sum_{k=1}^{n-1} \zeta^k|c_k|^2\right)\\&=n\sum_{k=1}^{n-1} \Re(\zeta^k)|c_k|^2\\\tag2&\le \max\bigl\{\Re(\xi)\mid \xi^n=1,\xi\ne1\bigr\}\cdot n\sum_{k=1}^{n-1}|c_k|^2\\&=\cos\frac{2\pi}{n}\cdot\langle a,a\rangle.\end{align}$$ Note that the special choice $a=\frac 1{\sqrt {2n}} (v_1+v_{n-1})=\frac 1{\sqrt {2n}} (v_1+\overline{v_1})$ (for $n\ge3$; for $n=2$ one has $v_1=v_{n-1}=(-1,1)$ and takes $a=\frac 1{\sqrt 2}v_1$) yields $a\in \mathbb R^n$, $\langle a,a\rangle=1$ and equality in $(2)$. Therefore $$\max\bigl\{\langle \phi a,a\rangle\mid a\in\mathbb R^n,\langle a,a\rangle=1,\langle a,v_n\rangle=0\bigr\}= \cos\frac{2\pi}{n} .$$ Note that for $n=2$ and $n=3$, we recover the results $\cos\pi=-1$ and $\cos\frac{2\pi}3=-\frac12$, confirming what you found.
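One can confirm the extremal vector numerically (a sketch assuming NumPy; it builds $a$ proportional to $v_1+\overline{v_1}$, normalizes, and evaluates the cyclic sum):

```python
import numpy as np

for n in range(2, 10):
    zeta = np.exp(2j * np.pi / n)
    v1 = np.array([zeta ** nu for nu in range(1, n + 1)])
    a = (v1 + v1.conj()).real                 # proportional to v_1 + conj(v_1)
    a /= np.linalg.norm(a)                    # normalize so <a, a> = 1
    assert abs(a.sum()) < 1e-9                # (c1): a_1 + ... + a_n = 0
    val = float(np.dot(a, np.roll(a, -1)))    # <phi a, a> = cyclic sum
    assert np.isclose(val, np.cos(2 * np.pi / n))
print("maximum cos(2*pi/n) attained for n = 2..9")
```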
