The diagonalization theorem, here for example, states that you can take
$$ A = \left[\begin{matrix} -1 & 0 & 1\\3 & 0 & -3\\1 & 0 & -1\end{matrix}\right]$$
and turn it into a diagonal matrix
$$ V = \left[\begin{matrix} 0 & 0 & 0\\0 & 0 & 0\\0 & 0 & -2\end{matrix}\right] $$
where the diagonal elements of $V$ are the eigenvalues $(0,0,-2)$ of $A$ using
$$V = P^{-1} A P$$
where $P = (v_1 \quad v_2 \quad v_3)$ is invertible. This is possible only if $A$ has $n = 3$ linearly independent eigenvectors $v_1, v_2, v_3.$ In this case, although $\lambda_1 = \lambda_2 = 0$, you have a non-singular
$$
P =
\left[\begin{matrix} 1 & 0 & -1\\0 & 1 & 3\\1 & 0 & 1\end{matrix}\right]
$$
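The computation above can be checked numerically; a minimal sketch with NumPy, using the $A$ and $P$ given above:

```python
import numpy as np

# The matrix A from the example above.
A = np.array([[-1.0, 0.0,  1.0],
              [ 3.0, 0.0, -3.0],
              [ 1.0, 0.0, -1.0]])

# Columns of P are the eigenvectors v1, v2 (eigenvalue 0) and v3
# (eigenvalue -2), in the order matching diag(0, 0, -2).
P = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  3.0],
              [1.0, 0.0,  1.0]])

# V = P^{-1} A P should be the diagonal matrix of eigenvalues.
V = np.linalg.inv(P) @ A @ P
assert np.allclose(V, np.diag([0.0, 0.0, -2.0]))
```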
To decide whether this is possible, first find all eigenvectors, form $P$, and check whether $P$ is non-singular (equivalently, whether $v_1, v_2, v_3$ are linearly independent). For the first matrix, however,
$$P
=
\left[\begin{matrix} 1/4 & 1 & 0\\1/2 & 1 & 0\\1 & 1 & 0\end{matrix}\right]
$$
which is singular (its third column is zero), so that matrix is not diagonalizable.
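The singularity check can also be done numerically; a sketch with NumPy, using the $P$ just shown:

```python
import numpy as np

# The candidate eigenvector matrix from the non-diagonalizable case above.
P = np.array([[0.25, 1.0, 0.0],
              [0.50, 1.0, 0.0],
              [1.00, 1.0, 0.0]])

# rank(P) < 3 means the columns (the candidate eigenvectors) are
# linearly dependent, i.e. P is singular and the matrix is not
# diagonalizable.
print(np.linalg.matrix_rank(P))  # 2
```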
The proof of this property is not as easy as those of the basic properties of eigenvalues and eigenvectors. It can be shown by induction, or by explicit construction (see, e.g., here).
I like to visualize the property in this way:
We know that a Hermitian matrix with $n$ distinct eigenvalues has $n$ eigenvectors that are not only linearly independent (as for general matrices) but, more than that, orthogonal. We also know that such a matrix is diagonalizable with a unitary $U$ (both properties are easy to prove).
Now, if our Hermitian matrix happens to have repeated (degenerate) eigenvalues, we can regard it as a perturbation of some other Hermitian matrix with distinct eigenvalues.
By a continuity argument, the perturbation that transforms distinct (but perhaps close) eigenvalues into coincident ones cannot make the orthogonal eigenvectors linearly dependent.
Put another way: a Hermitian matrix $A$ with repeated eigenvalues can be expressed as the limit of a sequence of Hermitian matrices with distinct eigenvalues. Because every member of the sequence has $n$ orthogonal eigenvectors, by a continuity argument the limit cannot have linearly dependent eigenvectors.
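The claim can be illustrated numerically; a sketch with NumPy, using an arbitrary real symmetric (hence Hermitian) matrix chosen here for illustration, with eigenvalues $(1, 1, 4)$:

```python
import numpy as np

# A real symmetric matrix with a repeated (degenerate) eigenvalue.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

w, U = np.linalg.eigh(A)  # eigenvalues in ascending order: [1, 1, 4]

# Despite the degeneracy, the eigenvectors are still orthonormal...
assert np.allclose(U.T @ U, np.eye(3))
# ...and A is diagonalized by the unitary (here orthogonal) U.
assert np.allclose(U.T @ A @ U, np.diag(w))
```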
This approach leads to a nice intuition, in my opinion, and it can be formalized; but for a formal proof, the other methods are to be preferred.
Best Answer
Guide:
Try to compute $(P D^{1/2} P^{-1})^2$.
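The hint can be checked numerically; a sketch with NumPy, using an assumed symmetric positive-definite matrix for illustration (the matrix from the original question is not shown here):

```python
import numpy as np

# An example positive-definite symmetric matrix (assumed for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Diagonalize: A = P D P^{-1}, with D = diag(eigenvalues).
w, P = np.linalg.eigh(A)
D_half = np.diag(np.sqrt(w))

# B = P D^{1/2} P^{-1}, so B^2 = P D^{1/2} (P^{-1} P) D^{1/2} P^{-1}
#                              = P D P^{-1} = A: B is a square root of A.
B = P @ D_half @ np.linalg.inv(P)
assert np.allclose(B @ B, A)
```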