EDIT: So here we go with a complete rewrite in the language of linear algebra.
In $V=\mathbb C^n$ with standard scalar product $\langle e_j,e_k\rangle=\delta_{jk}$ consider the linear map
$$\begin{matrix}\phi\colon &V&\longrightarrow &V\\
&(x_1,x_2,\ldots,x_{n-1},x_n)&\longmapsto&(x_2,x_3,\ldots,x_n,x_1),
\end{matrix}$$
that is $e_1\mapsto e_n$ and $e_k\mapsto e_{k-1}$ for $1<k\le n$.
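In matrix form, $\phi$ is the cyclic permutation matrix; for instance, for $n=3$,
$$\phi=\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix},\qquad \phi\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}x_2\\x_3\\x_1\end{pmatrix}.$$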
The problem statement asks us to maximize $\langle \phi a,a\rangle$ subject to the conditions
$$\begin{align}\tag{c1}\langle a,v_n\rangle &= 0,\\
\tag{c2}\langle a,a\rangle&=1,&\text{and}\\
\tag{c3}a&\in \mathbb R^n.\end{align}$$
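Unwinding the definitions: since $v_n=(1,1,\ldots,1)$ (introduced below), condition (c$1$) says $\sum_\nu a_\nu=0$, so in coordinates the problem is to maximize the cyclic sum $\sum_{\nu=1}^n a_\nu a_{\nu+1}$ (indices mod $n$) over real vectors of unit length whose coordinates sum to zero.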
Let $\zeta\in\mathbb C$ be a primitive $n$th root of unity, for example $\zeta=e^{\frac{2\pi i}n}=\cos\frac{2\pi}{n}+i\sin\frac{2\pi}{n}$.
For $1\le k\le n$ let $$v_k=\sum_{\nu=1}^n \zeta^{\nu k}e_\nu=(\zeta^k,\zeta^{2k},\ldots ,\zeta^{nk});$$ in particular $v_n=(1,1,\ldots,1)$.
Then $$\langle v_k,v_k\rangle = n,\qquad\langle v_k,v_j\rangle=0\quad\text{for }j\ne k,\qquad \phi(v_k)=\zeta^kv_k,$$i.e. the $v_k$ are an orthogonal eigenvector basis of $V$.
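Indeed, for $1\le\nu<n$ the $\nu$th component of $\phi(v_k)$ is $\zeta^{(\nu+1)k}=\zeta^k\cdot\zeta^{\nu k}$, and the $n$th component is $\zeta^k=\zeta^k\cdot\zeta^{nk}$ because $\zeta^{nk}=1$; orthogonality follows from the geometric sum $\langle v_k,v_j\rangle=\sum_{\nu=1}^n\zeta^{\nu(k-j)}=0$ for $k\ne j$.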
Thus if $$\tag{1}a=\sum_{k=1}^n c_kv_k$$ with $c_k\in\mathbb C$, then $$\langle a,a\rangle = n\sum_{k=1}^n|c_k|^2\qquad\text{and}\qquad\langle \phi a,a\rangle = n\sum_{k=1}^n \zeta^k|c_k|^2.$$
Condition (c$3$) implies in particular that $\langle \phi a,a\rangle \in\mathbb R$, and condition (c$1$) says simply that $c_n=0$ in $(1)$.
From this we obtain the bound
$$\begin{align}\langle \phi a,a\rangle& =n\sum_{k=1}^{n} \zeta^k|c_k|^2\\& =n\sum_{k=1}^{n-1} \zeta^k|c_k|^2\\&=n\,\Re\left(\sum_{k=1}^{n-1} \zeta^k|c_k|^2\right)\\&=n\sum_{k=1}^{n-1} \Re(\zeta^k)\,|c_k|^2\\\tag{2}&\le \max\bigl\{\Re(\xi)\mid \xi^n=1,\ \xi\ne1\bigr\}\cdot n\sum_{k=1}^{n-1}|c_k|^2\\&=\cos\frac{2\pi}{n}\cdot\langle a,a\rangle.\end{align}$$
Note that for $n\ge 3$ the special choice $a=\frac 1{\sqrt {2n}} (v_1+v_{n-1})=\frac 1{\sqrt {2n}} (v_1+\overline{v_1})$ yields $a\in \mathbb R^n$, $\langle a,a\rangle=1$ and equality in $(2)$; for $n=2$, where $v_1=v_{n-1}$, take $a=\frac1{\sqrt2}v_1=\frac1{\sqrt2}(-1,1)$ instead.
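Explicitly, this $a$ has coordinates $a_\nu=\frac1{\sqrt{2n}}(\zeta^\nu+\zeta^{-\nu})=\sqrt{\tfrac 2n}\cos\tfrac{2\pi\nu}{n}$, and only the coefficients $c_1=c_{n-1}=\frac1{\sqrt{2n}}$ are nonzero, i.e. exactly those belonging to the eigenvalues $\zeta^{\pm1}$ of maximal real part, which is the equality condition in $(2)$.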
Therefore
$$\max\bigl\{\langle \phi a,a\rangle\mid a\in\mathbb R^n,\langle a,a\rangle=1,\langle a,v_n\rangle=0\bigr\}= \cos\frac{2\pi}{n} .$$
Note that for $n=2$ and $n=3$, we recover the results $\cos\pi=-1$ and $\cos\frac{2\pi}3=-\frac12$, confirming what you found.
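As a quick numerical sanity check (my own sketch, not part of the argument; it assumes numpy): for real $a$ we have $\langle\phi a,a\rangle=a^TPa$ with $P$ the permutation matrix of $\phi$, and the constrained maximum is the top eigenvalue of the symmetric part of $P$ restricted to the hyperplane $\sum_\nu a_\nu=0$:

```python
import numpy as np

def max_value(n):
    # Matrix of phi: maps (x_1,...,x_n) to (x_2,...,x_n,x_1),
    # i.e. P[j, (j+1) % n] = 1 in 0-based indexing.
    P = np.roll(np.eye(n), -1, axis=0)

    # For real a, <phi a, a> = a^T P a = a^T S a with S the symmetric part.
    S = (P + P.T) / 2

    # Orthonormal basis of the constraint hyperplane sum(a) = 0
    # (the orthogonal complement of v_n = (1,...,1)).
    Q = np.linalg.qr(np.ones((n, 1)), mode='complete')[0][:, 1:]

    # Maximum of the Rayleigh quotient on the hyperplane
    # = top eigenvalue of S restricted to it.
    return np.linalg.eigvalsh(Q.T @ S @ Q).max()

for n in range(2, 9):
    print(n, max_value(n), np.cos(2 * np.pi / n))  # the two values agree
```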
Another way would be to prove that every elementary matrix $E$ is connected in $GL_n(\mathbb{R})$ to either $I_n$ or $\begin{pmatrix} -1 & 0 \\ 0 & I_{n-1} \end{pmatrix}$. Using Gaussian elimination, every $n \times n$ real matrix $A$ can be written in the form $A = E_1 E_2 \cdots E_m R$, where $R$ is the row-reduced echelon form of $A$ and the $E_i$ are elementary matrices. If $A$ is invertible, then $R = I_n$, so this just says that every $A \in GL_n(\mathbb{R})$ can be expressed as a product of elementary matrices:
$$A = E_1 E_2 \cdots E_m.$$
Continuously deforming each $E_i$ into either $I_n$ or $\begin{pmatrix} -1 & 0 \\ 0 & I_{n-1} \end{pmatrix}$ as appropriate shows that $A$ can also be continuously deformed to either $I_n$ or $\begin{pmatrix} -1 & 0 \\ 0 & I_{n-1} \end{pmatrix}$, depending on whether an even or odd number of the latter matrix occurs in the product when the deformation is completed. This implies $GL_n(\mathbb{R})$ has at most two path-components, and you have already observed there are at least two, using the determinant.
First, argue that every elementary matrix $E$ can be connected either to $I_n$ or to $I_n$ with a single diagonal entry replaced by $-1$. There are three types of elementary matrix, hence three cases to consider.
- $E$ is $I_n$ plus a single off-diagonal entry $\lambda$. In this case, $E$ is connected to $I_n$ by a path of elementary matrices of the same type. Just continuously decay $\lambda$ to zero.
- $E$ is $I_n$ with a single diagonal entry replaced by a nonzero scalar $\lambda$.
- If $\lambda > 0$, then $E$ is connected to $I_n$ by a path of elementary matrices of the same type. Just continuously change $\lambda$ to $1$ without crossing $0$.
- If $\lambda < 0$, then $E$ is connected to $I_n$ with a single diagonal entry replaced by $-1$ by a path of elementary matrices of the same type. Just continuously change $\lambda$ to $-1$ without crossing $0$.
- $E$ is a transposition. For example, $E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0\\ \end{pmatrix}$. Note $E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1\\ \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \\ \end{pmatrix}$ and the matrix on the right is connected to $I_n$ via $\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(t) & -\sin(t) \\ 0 & \sin(t) & \cos(t) \\ \end{pmatrix}$ so that $E$ is connected to $\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1\\ \end{pmatrix}$. In general, when $E$ is any transposition, it is connected to $I_n$ with a single diagonal entry replaced by $-1$.
The argument will be complete provided that any matrix $B$ which is $I_n$ with a single diagonal entry replaced by $-1$ is connected to the particular example $\begin{pmatrix} -1 & 0 \\ 0 & I_{n-1} \end{pmatrix}$. Indeed, if one has, say, $B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$ then one can use the path $\begin{pmatrix} \cos(t) & 0 & -\sin(t) \\ 0 & 1 & 0 \\ \sin(t) & 0 & \cos(t)\end{pmatrix} B \begin{pmatrix} \cos(t) & 0 & -\sin(t) \\ 0 & 1 & 0 \\ \sin(t) & 0 & \cos(t)\end{pmatrix}^{-1}$ to continuously shift the $-1$ to the upper-left corner.
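As a small numerical illustration of this last step (my own sketch, assuming numpy), one can check that the conjugation path stays inside $GL_3(\mathbb{R})$ and ends at $\operatorname{diag}(-1,1,1)$:

```python
import numpy as np

def rot(t):
    # Rotation in the 1-3 plane used to shift the -1 along the diagonal.
    return np.array([[np.cos(t), 0, -np.sin(t)],
                     [0,         1,  0        ],
                     [np.sin(t), 0,  np.cos(t)]])

B = np.diag([1.0, 1.0, -1.0])
ts = np.linspace(0, np.pi / 2, 101)
path = [rot(t) @ B @ np.linalg.inv(rot(t)) for t in ts]

# The determinant is constant (= det B = -1) along the path, so it
# never leaves GL_3.
assert all(abs(np.linalg.det(M)) > 1e-9 for M in path)
print(np.round(path[-1], 6))  # ends at diag(-1, 1, 1)
```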
The answer is $n=1,2,4,8$. The existence of $A_1,\dots,A_n$ is equivalent to the existence of a division composition algebra structure on $\mathbb R^n$. Indeed, we may assume $A_1=I$ (replace each $A_i$ by $A_iA_1^{-1}$). Now fix a vector $v\neq 0$ and let $(v_1,v_2,\dots,v_n)$ denote $(A_1v,\dots,A_nv)$, which is a basis of $\mathbb R^n$. Define $v_iw=A_iw$ for every $i$ and $w$, and extend by the distributive law to define $(a_1v_1+\dots+a_nv_n)w$ for all $(a_1,\dots,a_n)\in\mathbb R^n$. One can easily verify that this makes $\mathbb R^n$ a division composition algebra. By Hurwitz's theorem, $1,2,4,8$ are the only possible values of $n$.
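As an illustration for $n=4$ (my own sketch; the choice of quaternion matrices is mine, not from the original), one can take $A_1,\dots,A_4$ to be the matrices of left multiplication by $1,i,j,k$ on $\mathbb H\cong\mathbb R^4$: then $\sum_i x_iA_i$ is left multiplication by the quaternion $x$ and has determinant $|x|^4$, hence is invertible for every $x\ne0$.

```python
import numpy as np

# Left-multiplication matrices of 1, i, j, k in the basis (1, i, j, k).
I4 = np.eye(4)
Li = np.array([[0,-1, 0, 0], [1, 0, 0, 0], [0, 0, 0,-1], [0, 0, 1, 0]])
Lj = np.array([[0, 0,-1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0,-1, 0, 0]])
Lk = np.array([[0, 0, 0,-1], [0, 0,-1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
M = x[0] * I4 + x[1] * Li + x[2] * Lj + x[3] * Lk

# M is left multiplication by the quaternion x, so det M = |x|^4 > 0
# for x != 0; in particular (A_1 v, ..., A_4 v) is a basis for v != 0.
print(np.linalg.det(M), np.sum(x**2) ** 2)  # the two values agree
```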