You cannot prove that $TU$ is self-adjoint, because this is not true in general. Instead, try another way to show that all eigenvalues of $TU$ are real. For example, let us abuse notation so that $T$ and $U$ also denote their respective matrix representations with respect to the canonical basis.
- As $U$ is positive definite, it can be orthogonally diagonalized as $U=QDQ^T$ for some positive diagonal matrix $D$ and some real orthogonal matrix $Q$.
- Hence you may take a self-adjoint square root of $U$. That is, set $U^{1/2}=QD^{1/2}Q^T$, where $D^{1/2}$ is the entrywise square root of $D$.
- Show that $TU$ is similar to $U^{1/2}TU^{1/2}$ and argue that the latter has real eigenvalues.
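The steps above can be checked numerically. The sketch below uses NumPy with randomly generated matrices (the particular matrices and seed are illustrative assumptions, not from the original): it builds a positive definite $U$, a self-adjoint $T$, forms $U^{1/2}=QD^{1/2}Q^T$, and verifies that $TU$ and $U^{1/2}TU^{1/2}$ have the same, real, spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical example data: U positive definite, T self-adjoint (real symmetric).
A = rng.standard_normal((n, n))
U = A @ A.T + n * np.eye(n)              # positive definite
B = rng.standard_normal((n, n))
T = (B + B.T) / 2                        # self-adjoint

# Self-adjoint square root U^{1/2} = Q D^{1/2} Q^T
d, Q = np.linalg.eigh(U)
U_half = Q @ np.diag(np.sqrt(d)) @ Q.T

# TU = U^{-1/2} (U^{1/2} T U^{1/2}) U^{1/2}, so TU is similar to the
# symmetric matrix M, which has real eigenvalues.
M = U_half @ T @ U_half
eig_TU = np.sort(np.linalg.eigvals(T @ U).real)
eig_M = np.sort(np.linalg.eigvalsh(M))

print(np.allclose(np.linalg.eigvals(T @ U).imag, 0, atol=1e-8))  # True
print(np.allclose(eig_TU, eig_M))                                # True
```

The similarity $TU = U^{-1/2}\,(U^{1/2}TU^{1/2})\,U^{1/2}$ is what makes the comparison of spectra legitimate.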
Note: over a finite-dimensional (nonzero) complex vector space, every linear transformation has at least one eigenvalue, since the characteristic polynomial splits. It follows by induction that it is upper triangularizable. But not every linear transformation, even over $\mathbb{C}$, is diagonalizable. For instance, consider the Jordan block
$$
T=\begin{pmatrix}0&1\\0&0\end{pmatrix}.
$$
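As a quick numerical illustration of why this Jordan block fails to be diagonalizable (a sketch, not part of the original argument):

```python
import numpy as np

T = np.array([[0.0, 1.0], [0.0, 0.0]])   # the Jordan block above

print(np.linalg.eigvals(T))              # both eigenvalues are 0
# dim ker T = 2 - rank T = 1, so the eigenspace for 0 is one-dimensional:
# there is no basis of eigenvectors, hence T is not diagonalizable.
print(np.linalg.matrix_rank(T))          # 1
```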
Key point: for a self-adjoint transformation, every eigenvalue is actually real. Indeed, if $T^*=T$ and if $Tx=\lambda x$ with $x\neq 0$, then
$$
\lambda (x,x)=(x,\lambda x)=(x,Tx)=(Tx,x)=(\lambda x,x)=\overline{\lambda}(x,x)\quad\Rightarrow\quad \lambda=\overline{\lambda}
$$
where the inner product is taken to be antilinear in the first variable.
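This key point can also be observed numerically. The sketch below (with an illustrative random Hermitian matrix, not from the original) feeds a complex Hermitian matrix to the generic eigensolver and checks that every eigenvalue comes out real:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Illustrative Hermitian example: A = (B + B^*)/2 satisfies A^* = A.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

ev = np.linalg.eigvals(A)                    # generic (non-Hermitian) eigensolver
print(np.allclose(ev.imag, 0, atol=1e-10))   # True: every eigenvalue is real
```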
Conclusion: it suffices to prove the result for matrices, by taking the matrix $A$ of $T$ in an orthonormal basis. Since the characteristic polynomial of $A$ splits over $\mathbb{C}$ and every root is real, it splits over $\mathbb{R}$. Moreover, the characteristic polynomial of $A$ is the same whether you view $A$ as an element of $M_n(\mathbb{R})$ or of $M_n(\mathbb{C})$, so the characteristic polynomial of $T$ splits over $\mathbb{R}$. In general, for a linear transformation $T$ on a finite-dimensional $K$-vector space, a scalar $\lambda\in K$ is a root of the characteristic polynomial in $K[X]$ if and only if it is an eigenvalue of $T$: both conditions are equivalent to $T-\lambda\,\mathrm{Id}$ not being invertible (via the determinant on the root side, and via the rank-nullity theorem on the eigenvalue side).
Another way to reach the conclusion: again take $A$ the matrix of $T$ in an orthonormal basis. Then $T$ is self-adjoint if and only if $A$ is Hermitian, i.e. $A^*=A$ (equal to its conjugate transpose). Over $\mathbb{R}$, this is just $A^T=A$ (i.e. $A$ is symmetric, equal to its transpose). But we can view $A$, even if it lies in $M_n(\mathbb{R})$, as an element of $M_n(\mathbb{C})$. So it has at least one eigenpair $(\lambda,x)$ in $\mathbb{C}\times \mathbb{C}^n$. As we have just shown, $\lambda$ must be real. Therefore, writing $x=y+iz$ where $y,z\in\mathbb{R}^n$ are the real and imaginary parts of $x$, we get
$$
Ay+iAz=A(y+iz)=Ax=\lambda x=\lambda (y+iz)=\lambda y+i\lambda z\quad \Rightarrow \quad Ay=\lambda y\quad Az=\lambda z.
$$
If $x\neq 0$, at least one of $y,z$ is nonzero, giving a real eigenpair for $A$, and hence for $T$, going back from the matrix to the operator.
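This splitting of a complex eigenpair into real ones can be checked directly. The sketch below uses an illustrative random symmetric matrix (an assumption for the demo, not from the original): it takes a complex eigenpair of a real symmetric $A$, splits $x=y+iz$, and confirms $Ay=\lambda y$ and $Az=\lambda z$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                        # real symmetric matrix

# Take an eigenpair from the complex eigensolver...
w, V = np.linalg.eig(A.astype(complex))
lam, x = w[0].real, V[:, 0]              # lambda is real, as shown above
y, z = x.real, x.imag                    # x = y + iz

# ...then Ay = lam*y and Az = lam*z, and at least one of y, z is nonzero.
print(np.allclose(A @ y, lam * y))       # True
print(np.allclose(A @ z, lam * z))       # True
print(max(np.linalg.norm(y), np.linalg.norm(z)) > 0)  # True
```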
Best Answer
The proposed solution is not well written, but it appears that $\lambda$ is an arbitrary eigenvalue of $T^*T$ and $x$ is a normalized eigenvector associated with $\lambda$. In that case, $$\lambda = \lambda \langle x,x\rangle = \langle \lambda x,x\rangle = \langle T^*T(x),x\rangle$$ The point of the argument was to demonstrate that all eigenvalues of $T^*T$ are nonnegative.
I would, however, suggest a more direct approach. Note that $T^*T$ is self-adjoint, and for any $x\in V$, $\langle T^*Tx,x\rangle = \langle Tx,Tx\rangle \ge 0$. Hence, $T^*T$ is positive semidefinite. By a similar argument, $TT^*$ is positive semidefinite.
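The direct approach above can be sketched numerically as well, again with an illustrative random matrix (an assumption for the demo): $T^*T$ is symmetric, its eigenvalues are nonnegative, and the quadratic form $\langle T^*Tx,x\rangle$ equals $\lVert Tx\rVert^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 4))          # any real matrix (illustrative example)

G = T.T @ T                              # T^*T as a matrix in an orthonormal basis
print(np.allclose(G, G.T))               # True: self-adjoint
print(np.linalg.eigvalsh(G).min() >= -1e-12)   # True: eigenvalues nonnegative

# <T^*T x, x> = <Tx, Tx> = ||Tx||^2 >= 0 for any x:
x = rng.standard_normal(4)
print(np.isclose(x @ G @ x, np.linalg.norm(T @ x) ** 2))  # True
```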