Let $n>0$ and let $v_1,\dots,v_n$ be non-zero generalized eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda_1,\dots,\lambda_n$.
Then the vectors $v_1,\dots,v_n$ are linearly independent.
First I prove this lemma:
Lemma: If $\lambda_1\ne\lambda_2$ then $\ker (\lambda_1 I - A)^{k_1}\cap \ker(\lambda_2 I - A)^{k_2}=\{0\}$ for all $k_1\ge1$ and $k_2\ge1$.
Proof:
Let $k_1\ge 1$, $k_2=1$, and $v\in \ker (\lambda_1 I - A)^{k_1}\cap\ker (\lambda_2 I - A)$. Then $Av = \lambda_2v$, so $0=(\lambda_1 I - A)^{k_1}v = (\lambda_1 - \lambda_2)^{k_1}v$; since $\lambda_1\ne\lambda_2$, this implies $v=0$.
Now let $k_1\ge1$, $k_2>1$, and let $v\in \ker (\lambda_1 I - A)^{k_1}\cap \ker(\lambda_2 I - A)^{k_2}$. Since the operators $(\lambda_1 I - A)$ and $(\lambda_2 I - A)$ commute, $(\lambda_2 I - A)^{k_2-1}v\in \ker (\lambda_1 I - A)^{k_1}\cap \ker (\lambda_2 I - A)$, so the case $k_2=1$ gives
$(\lambda_2 I - A)^{k_2-1}v=0$, hence $v\in \ker (\lambda_1 I - A)^{k_1}\cap \ker(\lambda_2 I - A)^{k_2-1}$. By induction on $k_2$ it follows that $v=0$.
[End of Proof]
Proof of the claim above: By induction with respect to $n$. The claim is obviously true for $n=1$.
Assume that the claim holds for some $n\ge1$, and let $n+1$ non-zero generalized eigenvectors $v_1,\dots,v_{n+1}$ to distinct eigenvalues $\lambda_1,\dots,\lambda_{n+1}$ be given. Let $k_1,\dots,k_{n+1}$ be positive integers such that
$$
(\lambda_i I - A)^{k_i}v_i=0 \quad i=1\dots n+1.
$$
Let $a_1\dots a_{n+1}$ be scalars such that
$$
\sum_{i=1}^{n+1} a_i v_i =0.
$$
Applying $(\lambda_{n+1}I-A)^{k_{n+1}}$ to this equation annihilates the last term and yields
$$
\sum_{i=1}^{n} a_i (\lambda_{n+1}I-A)^{k_{n+1}}v_i =0.
$$
By the Lemma above, $(\lambda_{n+1}I-A)^{k_{n+1}}v_i \ne0$ for $i=1,\dots,n$ (otherwise $v_i$ would lie in $\ker(\lambda_i I-A)^{k_i}\cap\ker(\lambda_{n+1}I-A)^{k_{n+1}}=\{0\}$). Since the operators commute, $(\lambda_{n+1}I-A)^{k_{n+1}}v_i \in \ker(\lambda_iI-A)^{k_i}$, so these are non-zero generalized eigenvectors to the distinct eigenvalues $\lambda_1,\dots,\lambda_n$, and by the induction assumption the vectors $(\lambda_{n+1}I-A)^{k_{n+1}}v_1,\dots,(\lambda_{n+1}I-A)^{k_{n+1}}v_n$ are linearly independent.
Hence $a_1=\dots=a_n=0$, and the original relation reduces to $a_{n+1}v_{n+1}=0$; since $v_{n+1}\ne0$, also $a_{n+1}=0$. Hence the claim is proven.
[End of Proof]
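As a quick sanity check (not part of the proof), the lemma and the independence claim can be tested numerically on a small made-up matrix; the $3\times3$ example below, with a Jordan block for $\lambda=1$ and a simple eigenvalue $2$, is only an illustration and not the matrix $A$ discussed later.

```python
import sympy as sp

# Made-up example: one Jordan block for lambda = 1 and a simple eigenvalue 2.
B = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [0, 0, 2]])
I3 = sp.eye(3)

# Lemma: ker(1*I - B)^2 and ker(2*I - B) intersect only in {0}.
K1 = ((1 * I3 - B) ** 2).nullspace()   # generalized eigenspace for 1
K2 = (2 * I3 - B).nullspace()          # eigenspace for 2
basis = sp.Matrix.hstack(*K1, *K2)     # put a basis of each space side by side
assert basis.rank() == len(K1) + len(K2)   # ranks add up: trivial intersection

# Claim: a non-zero generalized eigenvector for 1 and a non-zero
# eigenvector for 2 are linearly independent.
v1 = sp.Matrix([0, 1, 0])   # (1*I - B)^2 v1 = 0 and v1 != 0
v2 = sp.Matrix([0, 0, 1])   # (2*I - B) v2 = 0
assert sp.Matrix.hstack(v1, v2).rank() == 2
```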
The eigenspace for the eigenvalue $\lambda=0$ is found by solving:
$$ A\mathbf x=
\begin{bmatrix}0&1&-1&-1\\
0&0&0&0\\
0&-1&2&2\\
0&1&-2&-2
\end{bmatrix}
\begin{bmatrix}
x\\y\\z\\t
\end{bmatrix}=
\begin{bmatrix}
0\\0\\0\\0
\end{bmatrix}
$$
that gives:
$$
\begin{bmatrix}
x\\y\\z\\t
\end{bmatrix}=
\begin{bmatrix}
x\\0\\-t\\t
\end{bmatrix}
$$
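For anyone who wants to double-check this kernel computation, here is a short sympy sketch; with sympy's choice of pivots, the basis it returns consists of exactly the two vectors chosen below.

```python
import sympy as sp

# the kernel of A, computed with exact arithmetic
A = sp.Matrix([[0, 1, -1, -1],
               [0, 0,  0,  0],
               [0,-1,  2,  2],
               [0, 1, -2, -2]])
for b in A.nullspace():
    print(b.T)
# the two basis vectors are (1, 0, 0, 0) and (0, 0, -1, 1),
# i.e. the eigenvectors v2 and v1 chosen below
```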
so we can choose two linearly independent eigenvectors as:
$$\mathbf v_1=
\begin{bmatrix}
0\\0\\-1\\1
\end{bmatrix}\qquad \mathbf v_2=
\begin{bmatrix}
1\\0\\0\\0
\end{bmatrix}
$$
Now, using $\mathbf v_1$, we can find a generalized eigenvector by searching for a solution of:
$$
\begin{bmatrix}0&1&-1&-1\\
0&0&0&0\\
0&-1&2&2\\
0&1&-2&-2
\end{bmatrix}
\begin{bmatrix}
x\\y\\z\\t
\end{bmatrix}=\begin{bmatrix}
0\\0\\-1\\1
\end{bmatrix}
$$
that gives a vector of the form
$$
\begin{bmatrix}
x\\y\\z\\t
\end{bmatrix}=
\begin{bmatrix}
x\\-1\\-1-t\\t
\end{bmatrix}
$$ and, for $x=t=0$, we can choose the vector $\mathbf w_1=[0,-1,-1,0]^T$.
In the same way we can find the generalized eigenvector $\mathbf w_2=[0,2,1,0]^T$ as a solution of $A\mathbf x=\mathbf v_2$.
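The two systems $A\mathbf x=\mathbf v_1$ and $A\mathbf x=\mathbf v_2$ can also be solved with sympy; this sketch reproduces the parametric solutions above and checks the particular choices $\mathbf w_1$ and $\mathbf w_2$.

```python
import sympy as sp

A = sp.Matrix([[0, 1, -1, -1],
               [0, 0,  0,  0],
               [0,-1,  2,  2],
               [0, 1, -2, -2]])
v1 = sp.Matrix([0, 0, -1, 1])
v2 = sp.Matrix([1, 0, 0, 0])
x, y, z, t = sp.symbols('x y z t')

# parametric solutions of A*u = v1 and A*u = v2 (x and t stay free)
print(sp.linsolve((A, v1), x, y, z, t))   # {(x, -1, -t - 1, t)}
print(sp.linsolve((A, v2), x, y, z, t))   # {(x, 2, 1 - t, t)}

# the particular choices made above (x = t = 0)
w1 = sp.Matrix([0, -1, -1, 0])
w2 = sp.Matrix([0,  2,  1, 0])
assert A * w1 == v1 and A * w2 == v2
```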
Now we have the matrix
$$
M=[\mathbf v_1,\mathbf w_1,\mathbf v_2, \mathbf w_2]=
\begin{bmatrix}
0&0&1&0\\
0&-1&0&2\\
-1&-1&0&1\\
1&0&0&0
\end{bmatrix}
$$
with the inverse:
$$
M^{-1}=\begin{bmatrix}
0&0&0&1\\
0&1&-2&-2\\
1&0&0&0\\
0&1&-1&-1
\end{bmatrix}
$$
and a Jordan decomposition of the matrix $A$ is:
$$A=
\begin{bmatrix}0&1&-1&-1\\
0&0&0&0\\
0&-1&2&2\\
0&1&-2&-2
\end{bmatrix}=
\begin{bmatrix}
0&0&1&0\\
0&-1&0&2\\
-1&-1&0&1\\
1&0&0&0
\end{bmatrix}
\begin{bmatrix}
0&1&0&0\\
0&0&0&0\\
0&0&0&1\\
0&0&0&0
\end{bmatrix}
\begin{bmatrix}
0&0&0&1\\
0&1&-2&-2\\
1&0&0&0\\
0&1&-1&-1
\end{bmatrix}=
MJM^{-1}
$$
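A short sympy check of $M^{-1}$ and of the factorization $A=MJM^{-1}$ (exact arithmetic, same $M$ and $J$ as above):

```python
import sympy as sp

A = sp.Matrix([[0, 1, -1, -1],
               [0, 0,  0,  0],
               [0,-1,  2,  2],
               [0, 1, -2, -2]])
# M = [v1, w1, v2, w2] as above
M = sp.Matrix([[ 0,  0, 1, 0],
               [ 0, -1, 0, 2],
               [-1, -1, 0, 1],
               [ 1,  0, 0, 0]])
J = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 0, 0]])

assert M.inv() == sp.Matrix([[0, 0,  0,  1],
                             [0, 1, -2, -2],
                             [1, 0,  0,  0],
                             [0, 1, -1, -1]])
assert M * J * M.inv() == A
```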
This decomposition is not unique, in the sense that the matrix $M$ (and $M^{-1}$) can be different, because we can choose different eigenvectors and generalized eigenvectors.
If, as in the OP, we choose:
$$\mathbf v'_1=
\begin{bmatrix}
1\\0\\0\\0
\end{bmatrix}\qquad \mathbf v'_2=
\begin{bmatrix}
1\\0\\1\\-1
\end{bmatrix}
$$
then the generalized eigenvectors that satisfy the equations:
$$
A\mathbf w'_1=\mathbf v'_1 \qquad A\mathbf w'_2=\mathbf v'_2
$$
become:
$$
\mathbf w'_1=
\begin{bmatrix}
1\\2\\0\\1
\end{bmatrix} \qquad \mathbf w'_2=
\begin{bmatrix}
1\\3\\1\\1
\end{bmatrix}
$$
(this seems to be the mistake in the OP) and we have the matrix
$$
S=
\begin{bmatrix}
1&1&1&1\\
0&2&0&3\\
0&0&1&1\\
0&1&-1&1
\end{bmatrix}
$$
and a Jordan decomposition:
$$
SJS^{-1}=\begin{bmatrix}
1&1&1&1\\
0&2&0&3\\
0&0&1&1\\
0&1&-1&1
\end{bmatrix}
\begin{bmatrix}
0&1&0&0\\
0&0&0&0\\
0&0&0&1\\
0&0&0&0
\end{bmatrix}
\begin{bmatrix}
1&-2&2&3\\
0&2&-3&-3\\
0&1&-1&-2\\
0&-1&2&2
\end{bmatrix}=
\begin{bmatrix}0&1&-1&-1\\
0&0&0&0\\
0&-1&2&2\\
0&1&-2&-2
\end{bmatrix}=A
$$
Finally, note that the eigenspace of the eigenvalue $\lambda=0$ is the kernel of $A$. Since $A^2=0$, the vectors $\mathbf u_1=A\mathbf e_2$ and $\mathbf u_2=A\mathbf e_3$ also lie in the kernel, so they are eigenvectors of $A$, and $\mathbf e_2$ and $\mathbf e_3$ are corresponding generalized eigenvectors. Hence another matrix that gives a Jordan decomposition is $N=[\mathbf u_1,\mathbf e_2,\mathbf u_2,\mathbf e_3]$.
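Both this construction and the decomposition with $S$ above can be verified with a few lines of sympy (same $A$ and $J$ as before):

```python
import sympy as sp

A = sp.Matrix([[0, 1, -1, -1],
               [0, 0,  0,  0],
               [0,-1,  2,  2],
               [0, 1, -2, -2]])
J = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 0, 0]])

# the decomposition with S from above
S = sp.Matrix([[1, 1, 1, 1],
               [0, 2, 0, 3],
               [0, 0, 1, 1],
               [0, 1,-1, 1]])
assert S * J * S.inv() == A

# the decomposition built from N = [u1, e2, u2, e3]
e2 = sp.Matrix([0, 1, 0, 0])
e3 = sp.Matrix([0, 0, 1, 0])
u1 = A * e2                      # column 2 of A, lies in ker(A) since A^2 = 0
u2 = A * e3                      # column 3 of A, lies in ker(A)
N = sp.Matrix.hstack(u1, e2, u2, e3)
assert N.inv() * A * N == J
```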
Best Answer
You know that if you have a vector $$u = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$$ then you have $$(A- 2I)u = \begin{pmatrix} b \\ 0 \\ -b \end{pmatrix} = v$$
You also know that $v$ is an eigenvector of $A$, with eigenvalue 2, since it is in your calculated eigenspace (it can be written as $v = bv_1 - bw_1$).
Hence $u$ will be a generalized eigenvector as long as $b \neq 0$.
If you want to find a transformation matrix that takes your matrix to its Jordan normal form, you can first pick a generalized eigenvector, say $$u = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$ and then pick $$u_1 = (A-2I)u = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$$ as your first eigenvector, and then another vector in the eigenspace, linearly independent of $u_1$, e.g. $$u_2 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$$ and then you can form the matrix $$T = \begin{pmatrix} | & | & | \\ u & u_1 & u_2 \\ | & | & | \end{pmatrix}$$ so that $T^{-1}AT = J$ where $J$ is the Jordan normal form of $A$.
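For completeness, here is a small sympy sketch of this construction. The $3\times3$ matrix below is an assumption: it is reconstructed from the rule $(A-2I)(a,b,c)^T=(b,0,-b)^T$, which determines $A-2I$ uniquely. The columns of $T$ are ordered (eigenvector, generalized eigenvector, eigenvector) so that the $1$ lands on the superdiagonal, matching the Jordan matrices used earlier in the thread; the order $(u,u_1,u_2)$ used in the text instead places the $1$ below the diagonal.

```python
import sympy as sp

# Assumed reconstruction: (A - 2I)(a, b, c)^T = (b, 0, -b)^T forces A - 2I,
# hence A, to be the matrix below.
A = sp.Matrix([[2,  1, 0],
               [0,  2, 0],
               [0, -1, 2]])

u  = sp.Matrix([0, 1, 0])             # chosen generalized eigenvector (b = 1 != 0)
u1 = (A - 2 * sp.eye(3)) * u          # = (1, 0, -1), an eigenvector for 2
u2 = sp.Matrix([1, 0, 0])             # a second eigenvector, independent of u1

# eigenvector first, then its generalized eigenvector, then the second eigenvector
T = sp.Matrix.hstack(u1, u, u2)
assert T.inv() * A * T == sp.Matrix([[2, 1, 0],
                                     [0, 2, 0],
                                     [0, 0, 2]])
```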