Yes, everything is well and good. You followed the Gram-Schmidt process and got the right result. Well done!
Edit: that's for the orthogonalization part. It remains to clear the square roots from the entries (multiply each vector by the pertinent factor to get integer coefficients) and then to re-normalise to get norm-one vectors.
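If it helps, here's a minimal numpy sketch of that clean-up step on a hypothetical Gram-Schmidt output (the vector below is illustrative, not taken from your work):

```python
import numpy as np

# A hypothetical Gram-Schmidt output with a square root in the entries:
u = np.array([-1., -1., 2.]) / np.sqrt(6)

u_int = u * np.sqrt(6)                # clear the square root: (-1, -1, 2)
e = u_int / np.linalg.norm(u_int)     # re-normalise to a norm-one vector
print(u_int, e)
```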
Basically, you’re going to perform a partial diagonalization of $M$.
Let $\{v_2,\dots,v_n\}$ be a basis for the orthogonal complement of $v_1$ and assemble $v_1$ and the other basis vectors into the matrix $B$, with $v_1$ as the first column. Since $M$ is symmetric, $v_1^\perp$ is invariant under $M$, so $$B^{-1}MB = \begin{bmatrix}\lambda_1 & \mathbf 0^T \\ \mathbf 0 & M'\end{bmatrix}.$$ The submatrix $M'$ is the “reduced” matrix that you’re looking for. Its eigenvalues are the remaining eigenvalues of $M$, but keep in mind that you’re no longer working in the standard basis, so once you’ve found the coordinates of its eigenvectors, you’ll need to convert back into the original basis.
As for what to choose as the basis for $v_1^\perp$, I’m not sure that an orthonormal basis is the best practical choice. Because of all the normalization, the entries of $M'$ are unlikely to be “nice,” which will just make it more likely that you’ll make errors if you’re doing this by hand. An orthogonal basis might be convenient because it makes the inversion for the change of basis easier: $B^{-1} = (B^TB)^{-1}B^T$, and $B^TB$ is diagonal, so inverting it amounts to dividing by the squared norms of the basis vectors. But you have to trade that off against the work required to produce such a basis. A basis that’s very simple to generate is $\{(2,-1,0,0,\dots)^T, (3,0,-1,0,\dots)^T, (4,0,0,-1,\dots)^T, \dots\}$, and inverting the resulting matrix doesn’t seem like it would be too bad. You could, for instance, perform Gaussian elimination on $[B\mid MB]$ to compute $B^{-1}MB$.
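If you want to check the arithmetic by machine, here's a minimal numpy sketch of the whole procedure. The matrix `M` below is a randomly generated stand-in (your actual matrix isn't reproduced here), constructed so that $(1,2,3,4)^T$ is an eigenvector with eigenvalue $60$, to parallel the computation that follows:

```python
import numpy as np

v1 = np.array([1., 2., 3., 4.])
n = len(v1)

# Hypothetical stand-in for M: combine the known eigendirection with an
# arbitrary symmetric action on v1-perp, so v1 is an eigenvector for 60.
rng = np.random.default_rng(0)
S = rng.integers(-5, 6, size=(n, n)).astype(float)
S = (S + S.T) / 2
P = np.eye(n) - np.outer(v1, v1) / (v1 @ v1)      # projector onto v1-perp
M = 60 * np.outer(v1, v1) / (v1 @ v1) + P @ S @ P

# The simple basis suggested above; each vector is orthogonal to (1,2,3,4).
B = np.column_stack([v1,
                     [2., -1., 0., 0.],
                     [3., 0., -1., 0.],
                     [4., 0., 0., -1.]])

# Compute B^{-1} M B by solving B X = M B, which is exactly Gaussian
# elimination on the augmented matrix [B | MB].
X = np.linalg.solve(B, M @ B)
print(np.round(X, 10))   # first row/column are (60, 0, 0, 0); M' = X[1:, 1:]
```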
Using your basis, we compute $$B^{-1}MB = \begin{bmatrix}60&0&0&0\\0&-16&28&-32\\0&-84&-168&102\\0&92&4&-86\end{bmatrix}.$$ The upper-left element is indeed the eigenvalue associated with $(1,2,3,4)^T$ and the rest of that row and column consists of zeros, as expected.
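Continuing the sketch above, once you have the reduced matrix $M'$, each of its eigenvectors $w$ is carried back to the standard basis by prepending a zero coordinate and multiplying by $B$:

```python
# Continuing the sketch above: eigenvectors of M' converted back to the
# standard basis are eigenvectors of M for the same eigenvalues.
M_prime = X[1:, 1:]
evals, W = np.linalg.eig(M_prime)
for lam, w in zip(evals, W.T):
    x = B @ np.concatenate([[0.], w])          # prepend 0, change basis
    print(np.round(lam, 6), np.allclose(M @ x, lam * x))   # all True
```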
Best Answer
Let me take a methodical approach. From your computations above, we can see the following:
For $\lambda = 0$, these eigenvectors $\{v_1,v_2\}$ span the eigenspace $E_{\lambda=0}$: \begin{Bmatrix} \begin{bmatrix} -1\\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1\\ 0 \\ 1 \end{bmatrix} \end{Bmatrix}
and for $\lambda = 3$, this eigenvector $\{v_3\}$ spans the eigenspace $E_{\lambda=3}$: \begin{Bmatrix} \begin{bmatrix} 1\\ 1 \\ 1 \end{bmatrix} \end{Bmatrix}
These eigenvectors are linearly independent. However, the eigenvalue $\lambda = 0$ is repeated, so we check $\langle v_1, v_2 \rangle$. As you have shown, $\langle v_1, v_2 \rangle = 1 \neq 0$, so these two eigenvectors are not orthogonal.
Hence, we can use Gram-Schmidt orthogonalization to obtain our orthonormal set.
For $E_{\lambda=0}$, we find our orthonormal vectors $\{e_1, e_2\}$:
\begin{align} u_1 & = v_1 \\ e_1 & = \frac{u_1}{||u_1||} = \frac{1}{\sqrt2} \begin{bmatrix} -1\\ 1 \\ 0 \end{bmatrix} \\ u_2 & = v_2 - \operatorname{proj}_{u_1} (v_2) \\ & = v_2 - \langle v_2, e_1\rangle e_1\\ & = \begin{bmatrix} -1\\ 0 \\ 1 \end{bmatrix} - \left\langle \begin{bmatrix} -1\\ 0 \\ 1 \end{bmatrix}, \frac{1}{\sqrt2} \begin{bmatrix} -1\\ 1 \\ 0 \end{bmatrix} \right\rangle \frac{1}{\sqrt2} \begin{bmatrix} -1\\ 1 \\ 0 \end{bmatrix} \\ & = \begin{bmatrix} -1/2\\ -1/2 \\ 1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -1\\ -1 \\ 2 \end{bmatrix} \\ e_2 & = \frac{u_2}{||u_2||} = \frac{1}{\sqrt6} \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix} \end{align}
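As a quick numerical check of this step, here is a minimal numpy sketch of the same Gram-Schmidt computation:

```python
import numpy as np

# The lambda = 0 eigenvectors from above.
v1 = np.array([-1., 1., 0.])
v2 = np.array([-1., 0., 1.])

e1 = v1 / np.linalg.norm(v1)
u2 = v2 - (v2 @ e1) * e1              # subtract the projection onto e1
e2 = u2 / np.linalg.norm(u2)

print(e1)        # (-1, 1, 0)/sqrt(2)
print(e2)        # (-1, -1, 2)/sqrt(6)
print(e1 @ e2)   # 0 up to rounding: the pair is orthonormal
```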
For $E_{\lambda=3}$:
\begin{align} u_3 & = v_3 \\ e_3 & = \frac{u_3}{||u_3||} = \frac{1}{\sqrt 3} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \\ \end{align}
Hence, $$ \begin{align} Q & = [e_1 \; e_2 \; e_3] = \begin{bmatrix} -1/ \sqrt2 & -1/ \sqrt6 & 1/ \sqrt3\\ 1/ \sqrt2 & -1/ \sqrt6 & 1/ \sqrt3\\ 0 & 2/ \sqrt6 & 1/ \sqrt3\\ \end{bmatrix}, \\ D & = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 3 \\ \end{bmatrix}, \end{align} $$ with the columns of $Q$ ordered to match the eigenvalues on the diagonal of $D$, so that the original matrix factors as $QDQ^T$.
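As a sanity check, $Q$ should be orthogonal and $QDQ^T$ should reconstruct the original matrix. That matrix isn't restated in this answer; the eigenpairs above are consistent with the $3\times3$ all-ones matrix, which is what the sketch below assumes:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
Q = np.array([[-1/s2, -1/s6, 1/s3],
              [ 1/s2, -1/s6, 1/s3],
              [ 0.,    2/s6, 1/s3]])
D = np.diag([0., 0., 3.])

print(np.round(Q.T @ Q, 10))      # identity, so Q is orthogonal
print(np.round(Q @ D @ Q.T, 10))  # the 3x3 all-ones matrix (assumed original)
```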