It doesn't make much sense to compute the eigenvalues without their multiplicities and then say "since the eigenvalues are real, the matrix is diagonalizable", because that implication doesn't hold in general. It does hold for symmetric matrices, but symmetric matrices always have real eigenvalues and are always diagonalizable, so there is no need to calculate anything to deduce that your matrix is diagonalizable.
You can say that $A$ is diagonalizable because it is real and symmetric, without calculating anything. Alternatively, you can compute the eigenvalues (they will be real), then compute the geometric multiplicity of each eigenvalue, and conclude from those that $A$ is diagonalizable. It seems that whoever wrote the solution mixed both approaches.
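For instance, here is a quick numerical sanity check of the second approach (a minimal numpy sketch; the symmetric matrix below is just an arbitrary example):

```python
import numpy as np

# An arbitrary real symmetric matrix (any symmetric A works here).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized to symmetric matrices: the eigenvalues come out real
# and the eigenvector matrix Q is orthogonal, so A = Q diag(w) Q^T.
w, Q = np.linalg.eigh(A)

assert np.all(np.isreal(w))                  # real eigenvalues
assert np.allclose(Q.T @ Q, np.eye(3))       # orthonormal eigenvector basis
assert np.allclose(Q @ np.diag(w) @ Q.T, A)  # A is diagonalized by Q
```

Here `np.linalg.eigh` exploits symmetry: it always returns real eigenvalues and an orthogonal eigenvector matrix, which is the spectral theorem in numerical form.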
As proven in this post, the idea goes as follows: take $W$ a $B$-invariant subspace. Since $B$ is diagonalizable with distinct eigenvalues $\mu_1, \dots, \mu_k$,
$$
\mathbb{k}^n = E_{\mu_1} \oplus \cdots \oplus E_{\mu_k}
$$
It suffices to see that $W = (W\cap E_{\mu_1}) \oplus \cdots \oplus ( W\cap E_{\mu_k})$, in which case one can form a basis of $W$ by joining bases of the subspaces $W \cap E_{\mu_i}$; such a basis consists of eigenvectors of $B$, because each $W \cap E_{\mu_i}$ is contained in $E_{\mu_i}$. Indeed, let's check both inclusions: the immediate one is $(W\cap E_{\mu_1}) \oplus \cdots \oplus ( W\cap E_{\mu_k})\subseteq W$, since each $W \cap E_{\mu_i}$ is contained in $W$ and $W$ is a subspace.
As for the other, since $W = W \cap \mathbb{k}^n = W \cap \bigoplus_{i=1}^k E_{\mu_i}$, any element $w$ of $W$ is a sum of eigenvectors,
$$w = e_1 + \dots + e_l$$
with $e_i$ an eigenvector of eigenvalue $\mu_{j_i}$, the $\mu_{j_i}$ pairwise distinct. Therefore, it is sufficient to show that if $w = e_1 + \dots + e_l \in W$, then $e_1, \dots, e_l \in W$. We proceed by induction on $l$. If $l = 1$, then $e_1 = w \in W$. If $l > 1$, note that $Bw \in W$ since $W$ is $B$-invariant, hence
$$
Bw - \mu_{j_1}w = (\mu_{j_2} - \mu_{j_1})e_2 + \dots + (\mu_{j_l} - \mu_{j_1})e_l \in W
$$
and $\mu_{j_i} - \mu_{j_1} \neq 0$ for $i > 1$, the right-hand side is a sum of the $l - 1$ eigenvectors $(\mu_{j_i} - \mu_{j_1})e_i$, $i = 2, \dots, l$, lying in $W$. By the inductive hypothesis each $(\mu_{j_i} - \mu_{j_1})e_i \in W$, and dividing by the nonzero scalars gives $e_i \in W$ for $i > 1$. Finally, $e_1 = w - e_2 - \dots - e_l \in W$, completing the proof.
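The induction step can be seen concretely in a small numerical sketch (the diagonal matrix and eigenvectors below are purely illustrative choices):

```python
import numpy as np

# B diagonal with distinct eigenvalues; e1, e2 are eigenvectors
# (standard basis vectors here) and w = e1 + e2 is their sum.
B = np.diag([1.0, 2.0, 3.0])
e1 = np.array([1.0, 0.0, 0.0])   # eigenvector for mu_1 = 1
e2 = np.array([0.0, 1.0, 0.0])   # eigenvector for mu_2 = 2
w = e1 + e2

# B w - mu_1 w kills the e1 component and rescales the rest by (mu_i - mu_1),
# so only a nonzero multiple of e2 survives.
v = B @ w - 1.0 * w
assert np.allclose(v, (2.0 - 1.0) * e2)
```

If $W$ is $B$-invariant and contains $w$, it also contains $v$, and since the surviving coefficient $\mu_2 - \mu_1$ is nonzero, $e_2 \in W$ and then $e_1 = w - e_2 \in W$, exactly as in the proof.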
Best Answer
A matrix $A \in M_n(\mathbb{R})$ is diagonalizable if and only if there is a basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$. This implies that the geometric multiplicity of each eigenvalue $\lambda$ of $A$ (the number of linearly independent eigenvectors associated with $\lambda$) equals its algebraic multiplicity (the power to which $(x - \lambda)$ appears in the factorization of the characteristic polynomial $p_A(x)$ of $A$).
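A standard example where the two multiplicities differ is a Jordan block; the following numpy sketch (the specific $2 \times 2$ matrix is just an illustration) exhibits a geometric multiplicity strictly smaller than the algebraic one:

```python
import numpy as np

# A Jordan block: the eigenvalue 1 has algebraic multiplicity 2, but the
# eigenspace ker(A - I) is one-dimensional, so A is not diagonalizable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
# Both eigenvalues equal 1 (algebraic multiplicity 2) ...
assert np.allclose(eigenvalues, [1.0, 1.0])

# ... but rank(A - I) = 1, so dim ker(A - I) = 2 - 1 = 1
# (geometric multiplicity 1).
geometric_multiplicity = 2 - np.linalg.matrix_rank(A - np.eye(2))
assert geometric_multiplicity == 1
```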
A matrix $A \in M_n(\mathbb{R})$ is orthogonally diagonalizable if and only if there is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$. This is a stronger condition, in the sense that any orthogonally diagonalizable matrix is clearly diagonalizable, but the converse does not hold.
In particular, both for diagonalization and for orthogonal diagonalization, the geometric multiplicity of each eigenvalue must equal the algebraic multiplicity, but in the case of orthogonal diagonalization this is not enough. The difference is that for an orthogonally diagonalizable matrix the eigenspaces associated with distinct eigenvalues are orthogonal to each other, while if $A$ is merely diagonalizable, this need not be the case.
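A small numpy sketch of this difference (the particular matrix is just an illustrative choice): a triangular, non-symmetric matrix with distinct eigenvalues is diagonalizable, yet its eigenvectors fail to be orthogonal.

```python
import numpy as np

# A has distinct eigenvalues 1 and 2, hence is diagonalizable,
# but it is not symmetric and its eigenvectors are not orthogonal.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
w, V = np.linalg.eig(A)

# Diagonalizable: V is invertible and V^{-1} A V is diagonal.
assert np.allclose(np.linalg.inv(V) @ A @ V, np.diag(w))

# Not orthogonally diagonalizable: the two eigenvectors are not orthogonal.
v1, v2 = V[:, 0], V[:, 1]
assert not np.isclose(v1 @ v2, 0.0)
```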