Let the eigenvalues of $A$ be $a_1\ge a_2\ge \dots \ge a_n$, and the eigenvalues of $B$ be $b_1\ge b_2\ge \dots \ge b_n$. According to the discussion in the comments, we are looking for bounds on the singular values of $AB$, denoted $s_1\ge s_2\ge \dots \ge s_n$. The min-max principle is the natural tool to use. Actually, the Wikipedia article does not state the form I'd like to use here: the $k$th largest singular value of $M$ is $\inf\{\|M-T\| : \mathrm{rank}\, T\le k-1\}$. This formula appears in another article with attribution to Allakhverdiev (and no reference). I think Allakhverdiev's result was the extension to compact operators on Hilbert spaces, but I don't know for sure. In general, Wikipedia articles on singular values are a toxic mess.
Claim 1: $s_k\le \min\{ a_i b_{j} : i+j=k+1\}$.
Proof: Let $T$ be an operator of rank $\le i-1$ such that $a_i=\|A-T\|$. Similarly, let $S$ be an operator of rank $\le j-1$ such that $b_{j}=\|B-S\|$. Since $s_k\le\|AB-R\|$ for every $R$ of rank $\le k-1$, and $\|(A-T)(B-S)\|\le a_i b_{j}$, it remains to estimate the rank of $AB-(A-T)(B-S)$. Writing $AB-(A-T)(B-S)=T(B-S)+AS$, we see that the rank is at most $(i-1)+(j-1)=k-1$, as desired.
Claim 2: $s_k\ge \max\{ a_i b_{j} : i+j=k+n\}$.
If $A$ and $B$ are invertible, Claim 2 follows by applying Claim 1 to $(AB)^{-1}$. Indeed, the $k$th largest singular value of $AB$ is the reciprocal of the $(n+1-k)$th largest singular value of $(AB)^{-1}$. The $(n+1-k)$th largest singular value of $(AB)^{-1}$ does not exceed $\min \{a_{n+1-i}^{-1}b_{n+1-j}^{-1} : i+j=n+2-k\}$. Decoding these sums of indices, we get Claim 2. The general case follows by considering $A+\epsilon I$ and $B+\epsilon I$, and letting $\epsilon\to 0$.
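For what it's worth, here is a minimal numerical sanity check of Claims 1 and 2 with numpy, assuming real symmetric positive semidefinite $A$ and $B$ (the random-matrix setup and tolerances are my own, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_psd(n):
    # Random real symmetric positive semidefinite matrix.
    M = rng.standard_normal((n, n))
    return M @ M.T

A, B = random_psd(n), random_psd(n)
a = np.sort(np.linalg.eigvalsh(A))[::-1]    # a_1 >= ... >= a_n
b = np.sort(np.linalg.eigvalsh(B))[::-1]    # b_1 >= ... >= b_n
s = np.linalg.svd(A @ B, compute_uv=False)  # s_1 >= ... >= s_n

for k in range(1, n + 1):
    # Claim 1: s_k <= min{ a_i * b_j : i + j = k + 1 }
    upper = min(a[i - 1] * b[k - i] for i in range(1, k + 1))
    # Claim 2: s_k >= max{ a_i * b_j : i + j = k + n }
    lower = max(a[i - 1] * b[k + n - i - 1] for i in range(k, n + 1))
    assert lower - 1e-9 <= s[k - 1] <= upper + 1e-9
```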
As noticed in the comments, your optimization problem can be formulated as
$$\max_{\mathbf{x},\mathbf{y}}\ \mathbf{x}^T S(\mathbf{y})\, \mathbf{x} \quad \text{s.t.}\quad \|\mathbf{x}\|_{2} = 1,\ \|\mathbf{y}\|_{2} = 1,$$
where $$ S(\mathbf{y})= S = \left [ \begin{array}{ccc}
\mathbf{y}^{H}S_{11}\mathbf{y} & \ldots & \mathbf{y}^{H}S_{1n}\mathbf{y}\\
\vdots & \ddots & \vdots\\
\mathbf{y}^{H}S_{n1}\mathbf{y} & \ldots & \mathbf{y}^{H}S_{nn}\mathbf{y}\\
\end{array} \right].$$
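As an illustration only (the block storage layout below is my assumption, not stated in the question), $S(\mathbf{y})$ can be assembled from the blocks $S_{ij}$ like this, with the blocks kept in a numpy array of shape $(n, n, m, m)$ and $\mathbf{y}\in\mathbb{C}^m$:

```python
import numpy as np

def reduced_matrix(S_blocks, y):
    # S(y)[i, j] = y^H S_ij y, assuming S_blocks has shape (n, n, m, m).
    return np.einsum('p,ijpq,q->ij', y.conj(), S_blocks, y)
```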
If we decouple the optimization over $\mathbf{x}$ and $\mathbf{y}$, it is easy to derive an iterative algorithm. Let $\mathcal{P}\{M\}$ denote a unit-norm eigenvector associated with the largest eigenvalue $\lambda_{\rm max}\{M\}$ of $M$.
- It is easy to see that, for fixed $\mathbf{y}$, the solution to the optimization problem is given by $\mathbf{x}=\mathcal{P}\{ S(\mathbf{y})\}$.
- Also, for fixed $\mathbf{x}$, the solution to the optimization problem is given by $\mathbf{y}=\mathcal{P}\{ S(\mathbf{x})\}$, where $S(\mathbf{x})$ is defined as $$ S(\mathbf{x}) = \sum_{i=1}^n \sum_{j=1}^n x_i x_j S_{ij}, $$ with $x_i$ denoting the $i$th entry of $\mathbf{x}$.
Then, an iterative algorithm would be
- Create random $\mathbf{x}_{0}$, with $\lVert \mathbf{x}_{0} \rVert = 1$.
- For iteration index $k=1,2,\dots$ do
- Compute $S(\mathbf{x}_{k-1})$ and $\mathbf{y}_k=\mathcal{P}\{ S(\mathbf{x}_{k-1})\}$.
- Compute $S(\mathbf{y}_{k})$ and $\mathbf{x}_k=\mathcal{P}\{ S(\mathbf{y}_{k})\}$.
This algorithm has the useful property that the value $\mathbf{x}_k^T S(\mathbf{y}_k)\, \mathbf{x}_k$ is nondecreasing, i.e., it increases or at least stays the same in every iteration.
*Proof:*
From the definition, it is clear that $ \mathbf{x}^T S(\mathbf{y}) \mathbf{x} = \mathbf{y}^T S(\mathbf{x}) \mathbf{y}$.
Let us now consider $\mathbf{x}_{k-1}^T S(\mathbf{y}_{k-1}) \mathbf{x}_{k-1} = \mathbf{y}_{k-1}^T S(\mathbf{x}_{k-1}) \mathbf{y}_{k-1}$. We choose $\mathbf{y}_{k} = \mathcal{P}\{ S(\mathbf{x}_{k-1}) \}$ (which has $\lVert\mathbf{y}_{k} \rVert =1$), so $\mathbf{y}_{k}^T S(\mathbf{x}_{k-1}) \mathbf{y}_{k} = \lambda_{\rm max}\{S(\mathbf{x}_{k-1}) \}$ and, consequently, $\mathbf{y}_{k}^T S(\mathbf{x}_{k-1}) \mathbf{y}_{k} \geq \mathbf{y}^T S(\mathbf{x}_{k-1}) \mathbf{y}$ for all $\mathbf{y}$ with unit norm. In particular, we have
$\mathbf{x}_{k-1}^T S(\mathbf{y}_{k}) \mathbf{x}_{k-1} = \mathbf{y}_{k}^T S(\mathbf{x}_{k-1}) \mathbf{y}_{k} \geq \mathbf{y}_{k-1}^T S(\mathbf{x}_{k-1}) \mathbf{y}_{k-1} = \mathbf{x}_{k-1}^T S(\mathbf{y}_{k-1}) \mathbf{x}_{k-1}. $
Analogously, we can show that $\mathbf{x}_{k-1}^T S(\mathbf{y}_{k}) \mathbf{x}_{k-1} \leq \mathbf{x}_{k}^T S(\mathbf{y}_{k}) \mathbf{x}_{k}$ holds, since $\mathbf{x}_{k} = \mathcal{P}\{ S(\mathbf{y}_{k}) \}$.
Finally, we conclude $\mathbf{x}_{k-1}^T S(\mathbf{y}_{k-1}) \mathbf{x}_{k-1} \leq \mathbf{x}_{k-1}^T S(\mathbf{y}_{k}) \mathbf{x}_{k-1} \leq \mathbf{x}_{k}^T S(\mathbf{y}_{k}) \mathbf{x}_{k}$.
Even though this algorithm computes a sequence of vectors along which the cost function of the optimization problem is monotonically nondecreasing, it is unlikely that the algorithm finds the global optimum.
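A minimal numpy sketch of this alternating scheme, assuming for simplicity that all blocks $S_{ij}$ are real and that the big matrix formed by them is symmetric (so that $S(\mathbf{x})$ and $S(\mathbf{y})$ are symmetric); the block storage format and helper names are my own:

```python
import numpy as np

def top_eigvec(M):
    # P{M}: unit-norm eigenvector for the largest eigenvalue of a symmetric M.
    _, V = np.linalg.eigh(M)
    return V[:, -1]

def alternating_maximization(S_blocks, num_iter=100, seed=0):
    # S_blocks[i, j] is the block S_ij; shape (n, n, m, m).
    n, m = S_blocks.shape[0], S_blocks.shape[2]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)                 # x_0 with unit norm
    for _ in range(num_iter):
        # S(x) = sum_{i,j} x_i x_j S_ij, then y <- P{S(x)}
        S_x = np.einsum('i,ijpq,j->pq', x, S_blocks, x)
        y = top_eigvec(S_x)
        # S(y)_{ij} = y^T S_ij y, then x <- P{S(y)}
        S_y = np.einsum('p,ijpq,q->ij', y, S_blocks, y)
        x = top_eigvec(S_y)
    return x, y, x @ S_y @ x               # last objective value x^T S(y) x
```

By the monotonicity argument above, the returned objective value never decreases from one iteration to the next, but, as noted, it may only correspond to a local optimum.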
Best Answer
Notice that $\max\{\lambda_1(A),\lambda_1(B)\}\leq \lambda_1(A+B)$, since $A,B$ are positive semidefinite Hermitian matrices: if $\mathbf{x}$ is a unit eigenvector of $A$ for $\lambda_1(A)$, then $\lambda_1(A+B)\geq \mathbf{x}^H(A+B)\mathbf{x}\geq \mathbf{x}^HA\mathbf{x}=\lambda_1(A)$, and likewise for $B$. Thus, $\lambda_1(A)+\lambda_1(B)\leq 2\max\{\lambda_1(A),\lambda_1(B)\}\leq 2 \lambda_1(A+B)$.
Edit: Moreover, this inequality is sharp.
Let $A=\left(\begin {array}{cc} 1&0\\ 0&0 \end{array} \right)$ and $B=\left(\begin {array}{cc} 0&0\\ 0&1 \end{array} \right)$. Thus, $A+B=\left(\begin {array}{cc} 1&0\\ 0&1 \end{array} \right)$.
Notice that $\lambda_1(A)+\lambda_1(B)=2=2\lambda_1(A+B)$.
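For completeness, a quick numpy check of the inequality on this example (just a verification, not part of the answer):

```python
import numpy as np

lam1 = lambda M: np.linalg.eigvalsh(M)[-1]  # largest eigenvalue of a Hermitian matrix

A = np.diag([1.0, 0.0])
B = np.diag([0.0, 1.0])

print(lam1(A) + lam1(B), 2 * lam1(A + B))   # 2.0 2.0 -> equality, the bound is sharp
```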