You know that $M=U D U^T$, with $D$ diagonal and $U$ unitary. Suppose $D$ is
ordered so that $D= \operatorname{diag}(\lambda_1,\lambda_1,\lambda_3,...,\lambda_p)$.
Then $(M-\lambda_1 I)x = 0$ iff $(D-\lambda_1 I) U^T x = 0$.
Note that $D-\lambda_1 I = \operatorname{diag}(0,0,\lambda_3-\lambda_1,\ldots,\lambda_p-\lambda_1)$, and let $y_1 = U e_1$, $y_2 = U e_2$; then
$y_1,y_2$ are linearly independent and $(D-\lambda_1 I) U^T y_k = (D-\lambda_1 I) e_k = 0$ (for $k=1,2$).
Hence $\operatorname{sp} \{ y_1,y_2\} \subset \ker (M-\lambda_1 I)$.
Since $\operatorname{sp} \{ y_1,y_2\} = \operatorname{sp} \{ y_1-y_2, y_1+y_2 \}$, we see that the eigenvectors are not 'unique'.
As an aside, eigenvectors are never unique. Even if normalized, if $u$ is an eigenvector, so is $-u$.
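A quick NumPy sanity check of the argument above (the matrix and seed are illustrative choices, not from the answer): build $M = U D U^T$ with $\lambda_1$ repeated, and verify that $y_1$, $y_2$ and their combinations all lie in $\ker(M - \lambda_1 I)$.

```python
import numpy as np

# M = U D U^T with D = diag(2, 2, 5), so lambda_1 = 2 has multiplicity two.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal U
D = np.diag([2.0, 2.0, 5.0])
M = U @ D @ U.T

y1, y2 = U[:, 0], U[:, 1]  # y_k = U e_k

# y1, y2, and any linear combination of them are eigenvectors for lambda = 2:
for v in (y1, y2, y1 + y2, y1 - y2):
    assert np.allclose(M @ v, 2.0 * v)
print("span{y1, y2} lies in ker(M - 2I)")
```

In particular $\{y_1 - y_2,\, y_1 + y_2\}$ passes the same check, which is exactly the non-uniqueness noted above.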
You do have, in general, inequalities for the singular values of the product of two matrices $M$, $N$, due to Weyl or Schur (I forget which; Horn and Johnson should have the reference). If $\alpha_i$, $\beta_i$, $\gamma_i$ are the singular values of $M$, $N$ and $MN$, in decreasing order, then
$$\alpha_1 \beta_1 \ge \gamma_1 \\
\alpha_1 \alpha_2 \cdot \beta_1 \beta_2 \ge \gamma_1 \gamma_2 \\
\vdots \\
\alpha_1 \cdots \alpha_n \cdot \beta_1 \cdots \beta_n = \gamma_1 \cdots \gamma_n$$
The first inequality is not hard, since it is just submultiplicativity of the operator ($l^2$) norm. The others are obtained by considering the associated exterior-power operators $\wedge^k M$, $\wedge^k N$, $\wedge^k (MN)$.
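These product inequalities are easy to check numerically. A minimal sketch (random matrices, illustrative dimensions): compute the singular values of $M$, $N$, $MN$ and compare the partial products, with exact equality at $k = n$ since $|\det(MN)| = |\det M|\,|\det N|$.

```python
import numpy as np

# Numeric sanity check of the singular-value product inequalities.
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
N = rng.standard_normal((n, n))

a = np.linalg.svd(M, compute_uv=False)       # alpha_1 >= ... >= alpha_n
b = np.linalg.svd(N, compute_uv=False)       # beta_i
g = np.linalg.svd(M @ N, compute_uv=False)   # gamma_i

for k in range(1, n + 1):
    lhs = np.prod(a[:k]) * np.prod(b[:k])    # alpha_1..alpha_k * beta_1..beta_k
    rhs = np.prod(g[:k])                     # gamma_1..gamma_k
    assert lhs >= rhs - 1e-9

# Equality at k = n: both sides equal |det(MN)|.
assert np.isclose(np.prod(a) * np.prod(b), np.prod(g))
print("all product inequalities hold; equality at k = n")
```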
So, fixing the eigenvalues $\lambda_i$ of a Hermitian $A$ and the diagonal entries $d_i$ of $D$, there definitely are inequalities for the eigenvalues of $DAD$. It is hard to tell what the complete set of defining inequalities is.
Best Answer
I'm not sure what general proof you have in mind, but if we choose any orthonormal basis $v_1, \ldots, v_n$ for $\mathbb{R}^n$ (with respect to the standard inner product $\left< \cdot, \cdot \right>$) then
$$ \mathrm{tr}(A) = \sum_{i=1}^n \left< Av_i, v_i \right>. $$
If $A$ is symmetric, then by choosing $v_1, \ldots, v_n$ to be an orthonormal basis of eigenvectors of $A$ (with $Av_i = \lambda_i v_i$), you immediately get
$$ \mathrm{tr}(A) = \sum_{i=1}^n \left< Av_i, v_i \right> = \sum_{i=1}^n \left< \lambda_i v_i, v_i \right> = \sum_{i=1}^n \lambda_i. $$
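Both identities are easy to verify numerically. A small sketch (random symmetric $A$ and a random orthonormal basis, both illustrative): the basis sum $\sum_i \left< A v_i, v_i \right>$ matches $\mathrm{tr}(A)$, which in turn matches the sum of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                    # make A symmetric

# Columns of V form a random orthonormal basis of R^n.
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

# tr(A) = sum_i <A v_i, v_i> for ANY orthonormal basis...
trace_via_basis = sum(V[:, i] @ (A @ V[:, i]) for i in range(n))
assert np.isclose(trace_via_basis, np.trace(A))

# ...and for symmetric A this equals the sum of the eigenvalues.
eigvals = np.linalg.eigvalsh(A)
assert np.isclose(np.trace(A), eigvals.sum())
print("tr(A) =", np.trace(A))
```

The basis identity is just $\sum_i v_i^T A v_i = \mathrm{tr}(V^T A V) = \mathrm{tr}(A V V^T) = \mathrm{tr}(A)$, using cyclicity of the trace and $V V^T = I$.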