Let $\mu_i$ be the eigenvalues of $L S^{-1}$. Then $(\lambda_i, \mu_i, s_i)$ obey the multiplicative version of Horn's inequalities. The most basic of these, assuming the orderings $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, $s_1^{-1} \geq s_2^{-1} \geq \cdots \geq s_n^{-1}$, and $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_n$, are
$$\mu_{i+j-1} \leq \lambda_i s_j^{-1} \quad (i+j-1 \leq n) \qquad \text{and} \qquad \mu_{i+j-n} \geq \lambda_i s_j^{-1} \quad (i+j-n \geq 1).$$
Proof: Let $X=\sqrt{L}$ and $T=\sqrt{S^{-1}}$. Since $X$ and $T$ are symmetric positive definite, their singular values are their eigenvalues, namely $\sqrt{\lambda_i}$ and $\sqrt{s_i^{-1}}$. Moreover, $(XT)(XT)^\top = \sqrt{L}\, S^{-1} \sqrt{L}$ has the same eigenvalues as $L S^{-1}$, so the singular values of $XT$ are $\sqrt{\mu_i}$. By a result of Klyachko ("Random walks on symmetric spaces and inequalities for matrix spectra", Linear Algebra and its Applications 319 (2000), 37–59), the singular values of a product obey the exponentiated version of Horn's inequalities, which gives the claim.
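For what it's worth, here is a quick numerical sanity check of the two inequalities (a sketch only; the random positive definite `L` and `S` below are my own stand-ins for the matrices in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_spd(n):
    # Random symmetric positive definite matrix.
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

L = random_spd(n)
S = random_spd(n)

# Sort every spectrum in decreasing order, matching the statement above.
lam = np.sort(np.linalg.eigvalsh(L))[::-1]        # eigenvalues of L
s_inv = np.sort(1 / np.linalg.eigvalsh(S))[::-1]  # eigenvalues of S^{-1}
mu = np.sort(np.linalg.eigvals(L @ np.linalg.inv(S)).real)[::-1]

for i in range(1, n + 1):
    for j in range(1, n + 1):
        if i + j - 1 <= n:
            assert mu[i + j - 2] <= lam[i - 1] * s_inv[j - 1] * (1 + 1e-9)
        if i + j - n >= 1:
            assert mu[i + j - n - 1] >= lam[i - 1] * s_inv[j - 1] * (1 - 1e-9)
print("both Weyl-type inequalities hold")
```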
This answer addresses the first question.
A real symmetric matrix is positive semidefinite if and only if all its eigenvalues are nonnegative; $0$ counts, no matter how many times it occurs.
Also, positive semidefiniteness of the Hessian everywhere is equivalent to convexity; see this Math.SE question. Hessian matrices of real-valued $C^2$ functions are symmetric, so $0$ eigenvalues count in this equivalence too; if your computation were correct, the first function would be convex.
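Here is a minimal sketch of checking this criterion numerically (the `is_psd` helper and its tolerance are my own illustration, not from the question):

```python
import numpy as np

def is_psd(H, tol=1e-10):
    # A real symmetric matrix is PSD iff its smallest eigenvalue is >= 0;
    # eigvalsh is the appropriate routine for symmetric matrices.
    return np.linalg.eigvalsh(H).min() >= -tol

print(is_psd(np.array([[2.0, 0.0], [0.0, 0.0]])))  # True: eigenvalue 0 counts
print(is_psd(np.array([[0.0, 1.0], [1.0, 0.0]])))  # False: has eigenvalue -1
```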
However, your first eigenvalue computation is incorrect. Indeed, we see that
$$H \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} = (-1) \begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
so there is an eigenvalue of $-1$. Also, there is an eigenvalue of $1$, since
$$H \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Since a $2 \times 2$ matrix has at most two eigenvalues, the eigenvalues of $H$ are exactly $\pm 1$.
[How did I magically produce these eigenvectors? Well, I saw that $H^2 = I$, so I figured the constants were nice, and since the matrix looks like a reflection across the line $x_1 = x_2$ (i.e., it swaps the roles of $x_1$ and $x_2$), I figured the lines $x_2 = x_1$ and $x_2 = -x_1$ would be relevant.]
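If you want to double-check this numerically, here is a sketch; the excerpt doesn't display $H$ explicitly, so $H = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is assumed here, as it is the matrix consistent with the action just described:

```python
import numpy as np

# H inferred from the text: it swaps x1 and x2, and H^2 = I.
H = np.array([[0.0, 1.0], [1.0, 0.0]])

v_minus = np.array([1.0, -1.0])
v_plus = np.array([1.0, 1.0])

assert np.allclose(H @ v_minus, -v_minus)  # eigenvalue -1
assert np.allclose(H @ v_plus, v_plus)     # eigenvalue +1
assert np.allclose(H @ H, np.eye(2))       # H^2 = I, so eigenvalues are ±1
print(np.linalg.eigvalsh(H))               # [-1.  1.]
```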
As for the "usual" method of finding the eigenvalues via the characteristic polynomial, the equation $\det (\lambda I - H ) = 0$ works out to
$$\lambda^2 - 1 = 0.$$
Did you make a computational error here?
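For reference, a symbolic check of the same computation (again assuming the $H$ inferred above):

```python
import sympy as sp

lam = sp.symbols('lambda')
H = sp.Matrix([[0, 1], [1, 0]])  # H as inferred from the text

char_poly = sp.expand((lam * sp.eye(2) - H).det())
print(char_poly)                 # lambda**2 - 1
print(sp.solve(char_poly, lam))  # [-1, 1]
```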
In any event, with the correct computation, the first function is not convex. [Geometrically, if you are familiar with convexity of one-variable functions: restricting to the line $x_2 = -x_1$ gives $g(x_1) := f(x_1, -x_1) = -x_1^2$, which is concave down.]
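To see this concretely, here is a sketch; since the excerpt does not display the first function, $f(x_1, x_2) = x_1 x_2$ is my assumption (it is the function whose Hessian matches the $H$ above):

```python
import numpy as np

# Assumed form of the first function; its Hessian is [[0, 1], [1, 0]].
def f(x1, x2):
    return x1 * x2

t = np.linspace(-1.0, 1.0, 5)
print(f(t, -t))  # [-1.   -0.25  0.   -0.25 -1.  ]: g(t) = -t^2, concave down

# Midpoint convexity fails along this line: f at the midpoint of
# (1, -1) and (-1, 1) exceeds the average of the endpoint values.
mid = f(0.0, 0.0)                           # 0.0
avg = 0.5 * (f(1.0, -1.0) + f(-1.0, 1.0))   # -1.0
print(mid > avg)  # True, so f is not convex
```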
P.S. The determinant of a square matrix is the product of its eigenvalues (counted with multiplicity); since the determinant of your second matrix is $0$ (I think), you should have at least one zero eigenvalue. You should double-check that the other eigenvalue is nonnegative.
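Since your second matrix isn't shown in this excerpt, here is a generic singular symmetric example illustrating the determinant/eigenvalue bookkeeping (my own choice of matrix, purely for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])  # symmetric, det = 0

eigs = np.linalg.eigvalsh(A)
print(eigs)                                          # [0. 2.]
print(np.isclose(np.linalg.det(A), np.prod(eigs)))   # True: det = product
print((eigs >= -1e-10).all())                        # True: the other eigenvalue is >= 0, so PSD
```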
Best Answer
$M\ge0$ only if $P$ is an orthogonal projection.

Suppose $M\ge0$. Note that $M = P+P^\top-P^\top P$ is symmetric, so $M^{1/2}$ is well defined. Then for every $x\in\ker(P)$, we have
$$ \|M^{1/2}x\|^2=x^\top Mx=x^\top(Px)+(x^\top P^\top)x-(x^\top P^\top)(Px)=0. $$
Therefore $M^{1/2}x=0$. In turn,
$$ 0=Mx=(P+P^\top-P^\top P)x=P^\top x. $$
Thus $\ker(P)\subseteq\ker(P^\top)$. By interchanging the roles of $P$ and $P^\top$ in the previous argument, the reverse inclusion is also true (for $x\in\ker(P^\top)$, $0\le x^\top Mx=-\|Px\|^2$ forces $Px=0$). Hence $\ker(P)=\ker(P^\top)$. But then $\operatorname{ran}(P)=\ker(P^\top)^\perp=\ker(P)^\perp$, so that $P$ is an orthogonal projection.
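A quick numerical illustration of both directions (a sketch; it assumes, as in this setting, that $P$ satisfies $P^2=P$, and the example projections below are my own):

```python
import numpy as np

def M_of(P):
    # M = P + P^T - P^T P, as in the statement above.
    return P + P.T - P.T @ P

def min_eig(A):
    return np.linalg.eigvalsh(A).min()

# Orthogonal projection onto span{(1, 0)}: here M = P, which is PSD.
P_orth = np.array([[1.0, 0.0], [0.0, 0.0]])
print(min_eig(M_of(P_orth)))  # 0.0 -> M >= 0

# Oblique projection (P^2 = P but P != P^T): M picks up a negative eigenvalue.
P_obl = np.array([[1.0, 1.0], [0.0, 0.0]])
assert np.allclose(P_obl @ P_obl, P_obl)  # still a projection
print(min_eig(M_of(P_obl)))  # -1.0 -> M is not PSD
```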