The answer is: It depends!
There are two common definitions of positive semidefiniteness.
Definition 1: $\newcommand{\m}{\mathbf}\newcommand{\A}{\m A}\newcommand{\B}{\m B}\newcommand{\x}{\m x} \A \in M_n(\mathbb R)$ is positive semidefinite if and only if for all $\x \in \mathbb R^n$ we have $\x^T \A \x \geq 0$.
Definition 2: Same as Definition 1, with the additional requirement that $\A$ be symmetric.
Let's start with the second definition, which is perhaps the more commonly assumed one. In this case, the stated result is true and follows easily from the definitions. In particular,
$$
\x^T(\A + \B)\x = \x^T \A \x + \x^T \B \x \geq 0 \>,
$$
since both terms in the middle are nonnegative. Furthermore, if $\A$ and $\B$ are both symmetric, then so is $\A + \B$. Hence,
$$
\x^T (\A + \B)^2 \x = \x^T (\A+\B)^T (\A+\B) \x = \|(\A+\B) \x\|_2^2 \geq 0 \>.
$$
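As a numerical sanity check of the symmetric case, here is a sketch using NumPy; the matrices are arbitrary randomly generated symmetric PSD examples, not anything from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# M^T M is always symmetric positive semidefinite, so this builds
# two random symmetric PSD matrices A and B.
M1, M2 = rng.standard_normal((2, 3, 3))
A = M1.T @ M1
B = M2.T @ M2
S = A + B

# x^T (A + B) x >= 0 and x^T (A + B)^2 x = ||(A + B) x||^2 >= 0
# for every test vector (up to floating-point rounding).
for _ in range(1000):
    x = rng.standard_normal(3)
    assert x @ S @ x >= -1e-9
    assert x @ (S @ S) @ x >= -1e-9
```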
Now, let's look at the first definition. If we drop the assumption that $\A$ and $\B$ are symmetric, then the stated result in the question is false.
It is, of course, still true that $\A+\B$ is positive semidefinite (the same proof above applies), but $(\A+\B)^2$ may not be.
Counterexample: It suffices to consider a single matrix $\A$, with $\B = 0$. Take
$$\A = \left(\begin{array}{rr} 1 & -2 \\ 0 & 1\end{array}\right) \>.$$
Then, if $\x = (x_i)$,
$$
\x^T \A \x = x_1^2 - 2 x_1 x_2 + x_2^2 = (x_1 - x_2)^2 \geq 0 \>.
$$
So, $\A$ is positive semidefinite. However,
$$
\A^2 = \left(\begin{array}{rr} 1 & -4 \\ 0 & 1\end{array}\right) \>,
$$
which is clearly not positive semidefinite; to see this, just take $\x$ to be a vector of ones.
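The counterexample is easy to verify numerically as well (a NumPy sketch; the random test vectors are merely an illustration of Definition 1):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [0.0,  1.0]])

# A satisfies Definition 1: x^T A x = (x1 - x2)^2 >= 0 for all x
# (the tiny tolerance allows for floating-point cancellation).
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.standard_normal(2)
    assert x @ A @ x >= -1e-12

# But A^2 fails the same test at x = (1, 1).
A2 = A @ A                 # [[1, -4], [0, 1]]
ones = np.ones(2)
print(ones @ A2 @ ones)    # -2.0
```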
Let the eigenvalues of $A$ be $a_1\ge a_2\ge \dots \ge a_n$, and the eigenvalues of $B$ be $b_1\ge b_2\ge \dots \ge b_n$. According to the discussion in comments, we are looking for bounds on the singular values of $AB$, denoted $s_1\ge s_2\ge \dots \ge s_n$. The min-max principle is the natural tool to use. Actually, the Wikipedia article does not state the form I'd like to use here: the $k$th largest singular value of $M$ is $\inf\{\|M-T\|:\mathrm{rank}\, T\le k-1\}$. This formula appears in another article with attribution to Allakhverdiev (and no reference). I think the result of Allakhverdiev was the extension to compact operators in Hilbert spaces, but don't know for sure. In general, Wikipedia articles on singular values are a toxic mess.
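That characterization of $s_k$ as an approximation number can be illustrated numerically: by the Eckart–Young theorem, the infimum is attained by the truncated SVD. A sketch with a random matrix (nothing here is from the original discussion):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
U, s, Vt = np.linalg.svd(M)   # s is sorted in decreasing order

# For each k, the truncated SVD gives a rank-(k - 1) matrix T with
# ||M - T|| = s_k in the operator (spectral) norm, matching
# s_k = inf{ ||M - T|| : rank T <= k - 1 }.
for k in range(1, 6):
    T = (U[:, :k - 1] * s[:k - 1]) @ Vt[:k - 1]
    assert np.isclose(np.linalg.norm(M - T, ord=2), s[k - 1])
```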
Claim 1: $s_k\le \min\{ a_i b_{j} : i+j=k+1\}$.
Proof: By the formula above, it suffices to exhibit an operator of rank $\le k-1$ whose distance to $AB$ in the operator norm is at most $a_i b_j$. Let $T$ be an operator of rank $\le i-1$ such that $a_i=\|A-T\|$, and similarly let $S$ be an operator of rank $\le j-1$ such that $b_{j}=\|B-S\|$. Since $\|(A-T)(B-S)\|\le a_i b_{j}$, it remains to estimate the rank of $(A-T)(B-S)-AB$. Writing $(A-T)(B-S)-AB=-T(B-S)-AS$, we see that the rank is at most $(j-1)+(i-1)=k-1$, as desired.
Claim 2: $s_k\ge \max\{ a_i b_{j} : i+j=k+n\}$.
If $A$ and $B$ are invertible, Claim 2 follows by applying Claim 1 to $(AB)^{-1}$. Indeed, the $k$th largest singular value of $AB$ is the reciprocal of the $(n+1-k)$th largest singular value of $(AB)^{-1}$. The $(n+1-k)$th largest singular value of $(AB)^{-1}$ does not exceed $\min \{a_{n+1-i}^{-1}b_{n+1-j}^{-1} : i+j=n+2-k\}$. Decoding these sums of indices, we get Claim 2. The general case follows by considering $A+\epsilon I$ and $B+\epsilon I$, and letting $\epsilon\to 0$.
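Both claims can be spot-checked numerically. The sketch below assumes $A$ and $B$ are symmetric positive semidefinite (so that their eigenvalues coincide with their singular values), with random matrices standing in for the ones under discussion:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Random symmetric PSD matrices.
M1, M2 = rng.standard_normal((2, n, n))
A, B = M1.T @ M1, M2.T @ M2

a = np.linalg.eigvalsh(A)[::-1]             # a_1 >= ... >= a_n
b = np.linalg.eigvalsh(B)[::-1]             # b_1 >= ... >= b_n
s = np.linalg.svd(A @ B, compute_uv=False)  # s_1 >= ... >= s_n

for k in range(1, n + 1):
    # Claim 1: s_k <= min{ a_i * b_j : i + j = k + 1 }, i.e. j = k + 1 - i.
    upper = min(a[i - 1] * b[k - i] for i in range(1, k + 1))
    # Claim 2: s_k >= max{ a_i * b_j : i + j = k + n }, i.e. j = k + n - i.
    lower = max(a[i - 1] * b[k + n - i - 1] for i in range(k, n + 1))
    assert lower - 1e-9 <= s[k - 1] <= upper + 1e-9
```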
If $ABx = \lambda x$, $x\neq 0$, then $\langle ABx,Bx\rangle = \lambda \langle x,Bx\rangle$. Now, both $\langle ABx,Bx\rangle = \langle A(Bx),Bx\rangle$ and $\langle x,Bx\rangle$ are nonnegative, since $A$ and $B$ are positive semidefinite. Hence, if $\langle x,Bx\rangle > 0$, it follows that $\lambda\ge 0$. If $\langle x,Bx\rangle = 0$, then $Bx = 0$, because $\langle x,Bx\rangle = \|B^{1/2}x\|^2$ for the positive semidefinite square root $B^{1/2}$; thus $\lambda x = ABx = 0$ and $\lambda = 0$.
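A quick numerical illustration of this fact (a sketch; random symmetric PSD matrices are assumed, since that is the setting of the argument):

```python
import numpy as np

rng = np.random.default_rng(4)
M1, M2 = rng.standard_normal((2, 4, 4))
A, B = M1.T @ M1, M2.T @ M2   # symmetric PSD

# AB is generally not symmetric, yet all of its eigenvalues
# are real and nonnegative.
eigs = np.linalg.eigvals(A @ B)
assert np.allclose(eigs.imag, 0)
assert (eigs.real >= -1e-9).all()
```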