Yes. In particular, there are exactly as many generalized eigenvector cycles as there are linearly independent eigenvectors. To see this, note that the last element of any chain is an ordinary eigenvector; conversely, every eigenvector begins a cycle of length at least $1$.
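As a quick numerical illustration of this count, here is a sketch with a hypothetical $3\times 3$ matrix already in Jordan form (one $2\times 2$ block and one $1\times 1$ block for $\lambda = 1$, so two chains and two independent eigenvectors are visible by eye):

```python
import numpy as np

# Hypothetical example: Jordan form with a 2x2 block and a 1x1 block
# for eigenvalue 1, so there are exactly two chains.
J = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

lam = 1.0
n = J.shape[0]

# Number of chains = geometric multiplicity
#                  = dim null(J - lam*I) = n - rank(J - lam*I).
num_chains = n - np.linalg.matrix_rank(J - lam * np.eye(n))
print(num_chains)  # 2
```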
The $1$-eigenspace is the kernel of the map $I-A$, i.e., the null-space of the matrix
$$\begin{bmatrix}
0 & 0 & 0 \\
-1 & -2 & -1 \\
2 & 4 & 2
\end{bmatrix}.$$
If $(I-A)x=0$, then
$$\begin{bmatrix}
0 & 0 & 0 \\
-1 & -2 & -1 \\
2 & 4 & 2
\end{bmatrix}\begin{bmatrix}
x_1\\ x_2 \\ x_3
\end{bmatrix}=\begin{bmatrix}
0\\ -x_1-2x_2-x_3\\
2x_1+4x_2+2x_3
\end{bmatrix}=\begin{bmatrix}
0\\0\\0
\end{bmatrix}.$$
The only constraints are then $-x_1-2x_2-x_3=0$ and $2x_1+4x_2+2x_3=0$, which both reduce to the single equation $x_1+2x_2+x_3=0$. It helps to introduce parameters $x_2=s$ and $x_3=t$ (the free variables). Then the null-space consists of all $(x_1,x_2,x_3)$ satisfying
\begin{align}
x_1&=-2s-t\\
x_2&=s\\
x_3&=t
\end{align}
The null-space is thus
$$\operatorname{null}(I-A)=\operatorname{span}\left\{\begin{bmatrix}-2\\ 1\\ 0 \end{bmatrix},\begin{bmatrix}-1\\ 0\\ 1 \end{bmatrix} \right\}$$
giving the two eigenvectors, which are also verified by computer.
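That "verified by computer" check is easy to reproduce; a minimal NumPy sketch using the matrix $I-A$ from above:

```python
import numpy as np

# The matrix I - A written out above.
M = np.array([[ 0.,  0.,  0.],
              [-1., -2., -1.],
              [ 2.,  4.,  2.]])

# The two basis vectors of null(I - A) just found.
v1 = np.array([-2., 1., 0.])
v2 = np.array([-1., 0., 1.])

print(M @ v1)  # [0. 0. 0.]
print(M @ v2)  # [0. 0. 0.]
```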
To find generalized eigenvectors, we need to find the null-space of $(I-A)^2$, but it turns out $(I-A)^2=0$. Thus
$$\operatorname{null}\left((I-A)^2\right)=\operatorname{span}\left\{\begin{bmatrix}1\\ 0\\ 0 \end{bmatrix},\begin{bmatrix}0\\ 1\\ 0 \end{bmatrix},\begin{bmatrix}0\\ 0\\ 1 \end{bmatrix} \right\}.$$
Conveniently, any vector not in the span of the two ordinary eigenvectors will thus work. Since those two lie in the plane $x_1+2x_2+x_3=0$, we can take the normal to this plane, i.e.,
$$\begin{bmatrix}
1\\2\\1
\end{bmatrix}.$$
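A short NumPy check ties this computation together: $(I-A)^2=0$, and applying $I-A$ to the normal vector $(1,2,1)$ produces a nonzero ordinary eigenvector, confirming it is a genuine degree-$2$ generalized eigenvector:

```python
import numpy as np

M = np.array([[ 0.,  0.,  0.],
              [-1., -2., -1.],
              [ 2.,  4.,  2.]])  # I - A

# (I - A)^2 = 0, so every vector is killed after at most two applications.
print(M @ M)  # zero matrix

# The normal to the eigenplane is a degree-2 generalized eigenvector:
# one application of I - A gives a nonzero ordinary eigenvector.
u = np.array([1., 2., 1.])
w = M @ u
print(w)      # [ 0. -6. 12.]  -- nonzero
print(M @ w)  # [0. 0. 0.]     -- so w is an eigenvector
```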
Best Answer
If you're comfortable with generalized eigenvectors as a concept, skip this paragraph. Recall that for an eigenvector, $Av=\lambda v$, so $(A-\lambda I)v=0$. A generalized eigenvector instead satisfies $(A-\lambda I)^kv=0$. So a chain can be extended by solving $(A-\lambda I)u=v$, where $v$ is a generalized eigenvector of degree $k$ and $u$ then has degree $k+1$: indeed, $(A-\lambda I)^{k+1}u=(A-\lambda I)^k(A-\lambda I)u=(A-\lambda I)^kv=0$.
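This chain-extension step can be sketched numerically. The helper below is illustrative and not part of the original answer: it uses NumPy's minimum-norm least-squares solve and then checks that the candidate $u$ really satisfies $(A-\lambda I)u=v$. The matrix $A$ is reconstructed from the $A-2I$ used in the computation below (an assumption).

```python
import numpy as np

def extend_chain(A, lam, v, tol=1e-10):
    """Try to solve (A - lam*I) u = v; return u, or None if no solution
    exists. Illustrative helper, not from the original answer."""
    B = A - lam * np.eye(A.shape[0])
    u, *_ = np.linalg.lstsq(B, v, rcond=None)  # minimum-norm least squares
    return u if np.allclose(B @ u, v, atol=tol) else None

# A reconstructed from the A - 2I shown below (an assumption).
A = np.array([[2., -2., 1., 1.],
              [0.,  1., 1., 1.],
              [0.,  0., 2., 1.],
              [0.,  0., 0., 2.]])
v = np.array([1., 0., 0., 0.])  # the eigenvector found below
u = extend_chain(A, 2.0, v)     # a degree-2 generalized eigenvector
```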
On to the computation. Did you remember to subtract $2I$ from the original matrix?
$$(A-2I)v= \begin{bmatrix} 0 & -2 & 1 & 1 \\ 0 & -1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}v=0$$
gives us our first eigenvector $$ \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}$$ by inspection: the all-zero first column makes this immediate.
Then we want to solve $(A-\lambda I)u=v$ or: $$\begin{bmatrix} 0 & -2 & 1 & 1 \\ 0 & -1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}$$
Note that to get a zero in the third entry, we need $\begin{bmatrix} 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix}=\begin{bmatrix} 0 \\ \end{bmatrix}$, so it must be that $d=0$. Then the second row must satisfy $\begin{bmatrix} 0 & -1 & 1 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ 0 \\ \end{bmatrix}=\begin{bmatrix} 0 \\ \end{bmatrix}$, giving $-b+c=0$, so $b=c$. Then from the first row, $\begin{bmatrix} 0 & -2 & 1 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ b \\ 0 \\ \end{bmatrix}=\begin{bmatrix} 1 \\ \end{bmatrix}$, so $-2b+b=-b=1$ gives $b=c=-1$. The variable $a$ is free; taking $a=0$ yields $\begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix}=\begin{bmatrix} 0 \\ -1 \\ -1 \\ 0 \\ \end{bmatrix}$
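A quick sanity check of this step, assuming the matrix $A-2I$ as given:

```python
import numpy as np

B = np.array([[0., -2., 1., 1.],
              [0., -1., 1., 1.],
              [0.,  0., 0., 1.],
              [0.,  0., 0., 0.]])  # A - 2I

v = np.array([1., 0., 0., 0.])    # the eigenvector
u = np.array([0., -1., -1., 0.])  # the degree-2 generalized eigenvector

print(B @ u)  # [1. 0. 0. 0.]  -- equals v
print(B @ v)  # [0. 0. 0. 0.]
```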
One more time:
$$\begin{bmatrix} 0 & -2 & 1 & 1 \\ 0 & -1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} e \\ f \\ g \\ h \\ \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \\ -1 \\ 0 \\ \end{bmatrix}$$
Start with the third row: $\begin{bmatrix} 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} e \\ f \\ g \\ h \\ \end{bmatrix}=\begin{bmatrix} -1 \\ \end{bmatrix}$, so $h=-1$. Then from the second row, $\begin{bmatrix} 0 & -1 & 1 & 1 \\ \end{bmatrix}\begin{bmatrix} e \\ f \\ g \\ -1 \\ \end{bmatrix}=\begin{bmatrix} -1 \\ \end{bmatrix}$, so $-f+g+h=-f+g-1=-1$, giving $-f+g=0$, i.e. $f=g$.
By the first row, this implies that $\begin{bmatrix} 0 & -2 & 1 & 1 \\ \end{bmatrix}\begin{bmatrix} e \\ f \\ f \\ -1 \\ \end{bmatrix}=\begin{bmatrix} 0 \\ \end{bmatrix}$ so $-2f+f+h=-f-1=0$, or $f=g=-1$.
So, with $e$ free and taking $e=0$, we get $\begin{bmatrix} e \\ f \\ g \\ h \\ \end{bmatrix}=\begin{bmatrix} 0 \\ -1 \\ -1 \\ -1 \\ \end{bmatrix}$.
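Finally, the whole chain can be verified at once: each application of $A-2I$ steps one link down the chain, $w \mapsto u \mapsto v \mapsto 0$. A short NumPy check:

```python
import numpy as np

B = np.array([[0., -2., 1., 1.],
              [0., -1., 1., 1.],
              [0.,  0., 0., 1.],
              [0.,  0., 0., 0.]])  # A - 2I

v = np.array([1., 0., 0., 0.])    # degree 1 (ordinary eigenvector)
u = np.array([0., -1., -1., 0.])  # degree 2
w = np.array([0., -1., -1., -1.]) # degree 3

# Each application of A - 2I steps down the chain: w -> u -> v -> 0.
print(B @ w)  # [ 0. -1. -1.  0.]  (= u)
print(B @ u)  # [1. 0. 0. 0.]      (= v)
print(np.linalg.matrix_power(B, 3) @ w)  # [0. 0. 0. 0.]
```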