To start, a brief digression from the specific case at hand: suppose $B$ is any $2 \times 2$ matrix with a single, repeated eigenvalue $\lambda$; then we know there exists at least one vector $v_1 \ne 0$ such that $Bv_1 = \lambda v_1$. If in addition there existed $v_2 \ne 0$, linearly independent of $v_1$, with $Bv_2 = \lambda v_2$, then for any vector $v = av_1 + bv_2$ we would have $Bv = aBv_1 + bBv_2 = a\lambda v_1 + b\lambda v_2 = \lambda v$, which shows that $B = \lambda I$, where $I$ is the $2 \times 2$ identity matrix. We thus conclude that if $B$ is not of this form, its eigenvectors fill out exactly the one-dimensional subspace of multiples $\alpha v_1$.

Furthermore, since the characteristic polynomial of $B$ is $(x - \lambda)^2$, the Cayley–Hamilton theorem gives $(B - \lambda I)^2 = 0$, so that for any vector $v$, $(B - \lambda I)(B - \lambda I)v = (B - \lambda I)^2 v = 0$. If we choose $v_2$ linearly independent of $v_1$, then by what we have seen $(B - \lambda I)v_2 \ne 0$, but $(B - \lambda I)(B - \lambda I)v_2 = 0$; thus $(B - \lambda I)v_2$ is an eigenvector of $B$ with eigenvalue $\lambda$, which forces $(B - \lambda I)v_2 = \alpha v_1$ for some $\alpha \ne 0$. Replacing $v_2$ by $v_2 / \alpha$, we may in fact take $(B - \lambda I)v_2 = v_1$. Such a $v_2$ is called a generalized eigenvector corresponding to the eigenvalue $\lambda$; note that $Bv_2 = \lambda v_2 + v_1$.
Now in such a situation if we form the matrix $E$ such that
$E = \begin{bmatrix} v_1 & v_2 \end{bmatrix}, \tag{1}$
i.e., the columns of $E$ are $v_1$ and $v_2$, then it is clear that
$BE = \begin{bmatrix} Bv_1 & Bv_2 \end{bmatrix} = \begin{bmatrix} \lambda v_1 & \lambda v_2 + v_1 \end{bmatrix}. \tag{2}$
Now $E^{-1}$ exists by the linear independence of $v_1, v_2$, hence we have
$\begin{bmatrix} E^{-1}v_1 & E^{-1}v_2 \end{bmatrix} = E^{-1} \begin{bmatrix} v_1 & v_2 \end{bmatrix} = E^{-1} E = I, \tag{3}$
which shows that
$E^{-1}v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \tag{4}$
and
$E^{-1}v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}; \tag{5}$
therefore
$E^{-1}BE = \begin{bmatrix} \lambda E^{-1} v_1 & \lambda E^{-1}v_2 + E^{-1} v_1 \end{bmatrix} = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}, \tag{6}$
which is the Jordan canonical form of $B$.
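As a sanity check, the construction above can be verified numerically. Here is a sketch in NumPy using an illustrative matrix $B$ with repeated eigenvalue $\lambda = 2$; the matrix and the vectors $v_1$, $v_2$ are made-up choices for demonstration, not taken from the question.

```python
import numpy as np

# Illustrative matrix (not from the question): trace 4, determinant 4, so the
# characteristic polynomial is (x - 2)^2 and lam = 2 is a repeated eigenvalue;
# B is not a multiple of I, so its eigenspace is one-dimensional.
B = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
lam = 2.0
N = B - lam * np.eye(2)

assert np.allclose(N @ N, 0.0)   # (B - lam I)^2 = 0

v1 = np.array([1.0, -1.0])       # eigenvector:             N v1 = 0
v2 = np.array([1.0, 0.0])        # generalized eigenvector: N v2 = v1
assert np.allclose(N @ v1, 0.0)
assert np.allclose(N @ v2, v1)

# E = [v1  v2] as columns, as in (1); then E^{-1} B E is the Jordan form (6).
E = np.column_stack([v1, v2])
J = np.linalg.inv(E) @ B @ E
assert np.allclose(J, [[2.0, 1.0],
                       [0.0, 2.0]])
```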
We can use the conclusions reached in the preceding discussion to show how to correctly find the Jordan canonical form of the matrix
$A = \begin{bmatrix} 15 & -4 \\ 49 & -13 \end{bmatrix}, \tag{7}$
which as we know has a single eigenvalue $\lambda = 1$ of multiplicity $2$. We observe that
$A - \lambda I = A - I = \begin{bmatrix} 14 & -4 \\ 49 & -14 \end{bmatrix} \ne 0, \tag{8}$
which, according to the above, implies that $A$ has a one-dimensional eigenspace for its single eigenvalue $1$. As has been shown, we can take a non-zero vector in this eigenspace to be $v_1 = (2, 7)^T$:
$\begin{bmatrix} 15 & -4 \\ 49 & -13 \end{bmatrix} \begin{pmatrix} 2 \\ 7 \end{pmatrix} = \begin{pmatrix} 2 \\ 7 \end{pmatrix}. \tag{9}$
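These claims are easy to confirm numerically; a minimal NumPy check:

```python
import numpy as np

A = np.array([[15.0, -4.0],
              [49.0, -13.0]])

# tr(A) = 2 and det(A) = 1, so the characteristic polynomial is
# x^2 - 2x + 1 = (x - 1)^2: a single eigenvalue lam = 1 of multiplicity 2.
assert np.isclose(np.trace(A), 2.0)
assert np.isclose(np.linalg.det(A), 1.0)

v1 = np.array([2.0, 7.0])
assert np.allclose(A @ v1, v1)   # equation (9): A v1 = 1 * v1
```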
At this point, instead of using $(A - I)^2 = 0$ and choosing $v_2 \in \ker (A - I)^2$
arbitrarily, we need to solve
$(A - I)v_2 = v_1 \tag{10}$
or
$\begin{bmatrix} 14 & -4 \\ 49 & -14 \end{bmatrix} v_2 = \begin{pmatrix} 2 \\ 7 \end{pmatrix}; \tag{11}$
a solution is
$v_2 = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \tag{12}$
but it is worth noting that $v_2 + \alpha v_1$ is also a solution for any $\alpha$,
since $v_1 \in \ker (A - I)$; this fact explains the apparent discrepancy between the_candyman's answer, which effectively gives
$\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \frac{2k + 1}{7} \\ k \end{pmatrix} \tag{13}$
for the possible generalized eigenvectors, whereas the present analysis yields
$v_2 + \alpha v_1 = \begin{pmatrix} 2 \alpha + 1 \\ 7 \alpha + 3 \end{pmatrix}; \tag{14}$
taking $\alpha = \frac{1}{7} (k -3)$ shows these two sets are the same. The vector
$(1, 0)^T$ is not of this form: matching the first component forces $\alpha = 0$, but then the second component is $3 \ne 0$. In any event, we may take for our matrix $E$
$E = \begin{bmatrix} 2 & 2 \alpha + 1 \\ 7 & 7\alpha + 3 \end{bmatrix}, \tag{15}$
and we easily see that $\det (E) = -1$, in accord with the_candyman's result. The columns of $E$ are therefore linearly independent for every $\alpha$, though this was already apparent from the independence of $v_1$ and $v_2$; being non-singular, $E$ has the inverse
$E^{-1} = -\begin{bmatrix} 7 \alpha + 3 & -2 \alpha - 1 \\ -7 & 2 \end{bmatrix}; \tag{16}$
taking $E^{-1}AE$ will then yield
$E^{-1}AE = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}, \tag{17}$
in accord with equation (6).
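The entire family (14) can be tested at once; a short NumPy sketch (the sampled values of $\alpha$ are arbitrary):

```python
import numpy as np

A = np.array([[15.0, -4.0],
              [49.0, -13.0]])
v1 = np.array([2.0, 7.0])
J_expected = np.array([[1.0, 1.0],
                       [0.0, 1.0]])

# For every alpha, E = [v1  v2 + alpha v1] conjugates A to the Jordan block.
for alpha in (-2.0, 0.0, 1.0 / 7.0, 3.0):
    v2 = np.array([2.0 * alpha + 1.0, 7.0 * alpha + 3.0])
    assert np.allclose((A - np.eye(2)) @ v2, v1)   # (10): (A - I) v2 = v1
    E = np.column_stack([v1, v2])                  # (15)
    assert np.isclose(np.linalg.det(E), -1.0)      # det(E) = -1 for all alpha
    assert np.allclose(np.linalg.inv(E) @ A @ E, J_expected)   # (17)
```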
The key thing in the above is that we need to find the generalized eigenvector corresponding to $\lambda$ in the event that the matrix in question is not a scalar multiple of the identity matrix $I$.
First, the minimal polynomial is not $\lambda^2$, and it seems to me that Ben's row reduction is wrong.
We have that the characteristic polynomial is $p(\lambda)=\lambda^4$, so we already know that the Jordan matrix will have only the value $0$ on the main diagonal.
Now we want to find the kernel of $A - 0I = A$; row-reducing $A$, we get
$$\pmatrix{
3&-1&1&7\\
9&-3&-7&-1\\
0&0&4&-8\\
0&0&2&-4} \leadsto
\pmatrix{
3&-1&0&0\\
0&0&1&0\\
0&0&0&1\\
0&0&0&0}
$$
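This reduction can be double-checked with SymPy; note that SymPy normalizes each pivot to $1$, so its reduced matrix differs from the one above only by a row scaling:

```python
from sympy import Matrix, Rational

A = Matrix([[3, -1,  1,  7],
            [9, -3, -7, -1],
            [0,  0,  4, -8],
            [0,  0,  2, -4]])

# SymPy's RREF has first row (1, -1/3, 0, 0) instead of (3, -1, 0, 0);
# the two reduced matrices have the same row space and the same kernel.
R, pivots = A.rref()
assert pivots == (0, 2, 3)
assert R == Matrix([[1, Rational(-1, 3), 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
```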
Solving the system is not difficult:
$$
\pmatrix{
3&-1&0&0\\
0&0&1&0\\
0&0&0&1\\
0&0&0&0}
\cdot
\pmatrix{
x_1\\
x_2\\
x_3\\
x_4}
=
\pmatrix{
0\\
0\\
0\\
0}
$$
we get $x_3=x_4=0$ and $3x_1=x_2$; if $x_1=1$, then $x_2=3$, so a basis of the kernel is given by the vector $(1,3,0,0)$. Since $\dim \ker A = 1$ equals the number of Jordan blocks for the eigenvalue $0$, the Jordan form has only one block, so it is of the form
$$\left(\begin{matrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{matrix}\right)$$
Since the exponent of the minimal polynomial is given by the size of the largest block, we have that the minimal polynomial is $p_m(\lambda)=\lambda^4$.
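Both conclusions (a one-dimensional kernel and minimal polynomial $\lambda^4$) can be confirmed with SymPy:

```python
from sympy import Matrix, zeros

A = Matrix([[3, -1,  1,  7],
            [9, -3, -7, -1],
            [0,  0,  4, -8],
            [0,  0,  2, -4]])

# dim ker A = 1, so there is a single Jordan block for the eigenvalue 0,
# and (1, 3, 0, 0) spans the kernel.
assert len(A.nullspace()) == 1
assert A * Matrix([1, 3, 0, 0]) == zeros(4, 1)

# A^3 != 0 but A^4 = 0: the block has size 4, so the minimal polynomial
# is indeed lambda^4.
assert A**3 != zeros(4, 4)
assert A**4 == zeros(4, 4)
```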
You can go on and find a Jordan basis, as follows:
You want to find a vector $v$ such that $A^3v, A^2v, Av, v$ are all non-null. Computing the powers, we obtain,
$$A^2=\pmatrix{
0&0&28&-14\\
0&0&0&126\\
0&0&0&0\\
0&0&0&0}
\qquad
A^3=\pmatrix{
0&0&84&-168\\
0&0&252&-504\\
0&0&0&0\\
0&0&0&0}
$$
Note that $e_3$ and $e_4$ are suitable choices for $v$ (the third and fourth columns of $A^3$ do not vanish, and $A^4$ is the null matrix); let us choose $e_4$. So,
$$A^3e_4=(-168,-504,0,0)\qquad A^2e_4=(-14,126,0,0)\qquad Ae_4=(7,-1,-8,-4)$$
In this way a Jordan basis is
$$\{(0,0,0,1),(7,-1,-8,-4),(-14,126,0,0),(-168,-504,0,0)\}$$
it is not a pretty basis, but this is a quick way to solve the exercise. We thus have
$$P=
\pmatrix{
0&7&-14&-168\\
0&-1&126&-504\\
0&-8&0&0\\
1&-4&0&0}
$$
And you can see that this solves the problem, $J=P^{-1}AP$.
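The final claim $J=P^{-1}AP$ can be verified numerically; here is a NumPy sketch that rebuilds $P$ from the chain $v, Av, A^2v, A^3v$ with $v = e_4$:

```python
import numpy as np

A = np.array([[3, -1,  1,  7],
              [9, -3, -7, -1],
              [0,  0,  4, -8],
              [0,  0,  2, -4]], dtype=float)

# Columns of P are v, Av, A^2 v, A^3 v with v = e_4, as above.
e4 = np.array([0.0, 0.0, 0.0, 1.0])
P = np.column_stack([e4,
                     A @ e4,
                     np.linalg.matrix_power(A, 2) @ e4,
                     np.linalg.matrix_power(A, 3) @ e4])

# With the basis ordered this way, the single Jordan block carries its 1s on
# the subdiagonal, matching the displayed form of J.
J = np.linalg.inv(P) @ A @ P
J_expected = np.zeros((4, 4))
J_expected[1, 0] = J_expected[2, 1] = J_expected[3, 2] = 1.0
assert np.allclose(J, J_expected)
```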
Best Answer
$$\lambda I-A=\begin{pmatrix}\lambda-1&-1&0\\ 0&\lambda-1&-1\\ 0&0&\lambda-2\end{pmatrix}$$
so
$$\lambda=1:\;\;\begin{cases}-y=0\\ -z=0\\-z=0\end{cases}\implies \begin{pmatrix}x\\0\\0\end{pmatrix}\;,\;\;x\neq 0\,,\;\;\;\text{is an eigenvector for}\;\;\lambda =1$$
$$\lambda=2:\;\;\begin{cases}x-y=0\\ y-z=0\end{cases}\implies \begin{pmatrix}x\\x\\x\end{pmatrix}\;,\;x\neq0\,,\,\,\;\text{is an eigenvector for}\;\;\lambda =2$$
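A quick numerical check of both eigenvectors (here $A$ is read off from the display of $\lambda I - A$ above):

```python
import numpy as np

# A recovered from the display of lambda*I - A at the start of this answer.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])

# lam = 1: any (x, 0, 0) with x != 0 is an eigenvector.
assert np.allclose(A @ np.array([1.0, 0.0, 0.0]),
                   1.0 * np.array([1.0, 0.0, 0.0]))

# lam = 2: any (x, x, x) with x != 0 is an eigenvector.
assert np.allclose(A @ np.array([1.0, 1.0, 1.0]),
                   2.0 * np.array([1.0, 1.0, 1.0]))
```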
Take it from here (you only need one more generalized eigenvector...)