As rschwieb pointed out in his answer, which has mysteriously disappeared since my original posting of this answer, the method proposed by the OP in the text of the question is erroneous; indeed, the first answer proposed by Brad S. is also problematic in terms of finding the characteristic polynomial of $A$, though his calculations are apparently correct for the characteristic polynomial of $A'$, which is the issue he is evidently addressing.
To explicitly use the fact that $4$ is an eigenvalue of $A$, observe that the $4$ in the lower right-hand corner of $A$ corresponds to the eigenvector $\mathbf e_4 = (0, 0, 0, 1)^T$, and that the block structure of $A$, hence of $A - \lambda I$, leaves the subspace normal to $\mathbf e_4$ invariant. Thus the eigenvalues of $A$ restricted to that invariant subspace will simply be those of $A$ exclusive of the eigenvalue $4$ associated with $\mathbf e_4$; the characteristic polynomial of this restriction of $A$ is in fact
$\det(\begin{bmatrix} 4 - \lambda &0&1\\1&1 - \lambda &1\\0&1&1 - \lambda \end{bmatrix}) = (4 - \lambda)(\lambda^2 - 2\lambda) + 1 = -\lambda^3 + 6\lambda^2 - 8\lambda + 1, \tag{0}$
as is shown in detail below. Multiplying (0) by the factor $4 - \lambda$ yields $p_A(\lambda) = (4 - \lambda)(-\lambda^3 + 6\lambda^2 - 8\lambda + 1)$ for the characteristic polynomial of $A$.
It appears to me that the error in the OP's method lies in computing the eigenvalues of $A - 4I$ on the subspace spanned by $\mathbf e_1 = (1, 0, 0)^T$, $\mathbf e_2 = (0, 1, 0)^T$, $\mathbf e_3 = (0, 0, 1)^T$, instead of those of the restriction of $A$ itself. Indeed, an inspection of rschwieb's tables, when they existed, showed that the eigenvalues of $A'$ may be had by subtracting $4$ from those of $A$, exclusive of the eigenvalue $\lambda = 4$ of $A$. This agrees with the basic fact that $B\mathbf v = \mu \mathbf v \Leftrightarrow (B + \alpha I)\mathbf v = (\mu + \alpha)\mathbf v$, i.e., eigenvalues simply shift by $\alpha$ if $\alpha I$ is added to a matrix $B$.
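The shift fact is easy to confirm numerically. Here is a minimal sketch (assuming NumPy is available), using the upper-left $3 \times 3$ block of the matrix in question as $B$ and $\alpha = -4$:

```python
import numpy as np

# B is the upper-left 3x3 block of A; adding alpha*I to B shifts each
# eigenvalue by alpha, since B v = mu v implies (B + alpha I) v = (mu + alpha) v.
B = np.array([[4., 0., 1.],
              [1., 1., 1.],
              [0., 1., 1.]])
alpha = -4.0
mu = np.sort(np.linalg.eigvals(B))
mu_shifted = np.sort(np.linalg.eigvals(B + alpha * np.eye(3)))
print(np.allclose(mu_shifted, mu + alpha))  # True
```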
Having said these things, here's how I would handle this one:
the given matrix is
$A=\begin{bmatrix} 4&0&1&0\\1&1&1&0\\0&1&1&0 \\0&0&0&4 \end{bmatrix}, \tag{1}$
so
$A - \lambda I = \begin{bmatrix} 4 - \lambda &0&1&0\\1&1 - \lambda &1&0\\0&1&1 - \lambda&0 \\0&0&0&4 - \lambda \end{bmatrix}, \tag{2}$
so the characteristic polynomial $p_A(\lambda)$ is
$p_A(\lambda) = \det (A - \lambda I) = \det (\begin{bmatrix} 4 - \lambda &0&1&0\\1&1 - \lambda &1&0\\0&1&1 - \lambda&0 \\0&0&0&4 - \lambda \end{bmatrix}); \tag{3}$
since the last row and last column are zero, save for the $(4, 4)$ entry, expansion in minors along the last row or column yields
$p_A(\lambda) = (4 - \lambda) \det(\begin{bmatrix} 4 - \lambda &0&1\\1&1 - \lambda &1\\0&1&1 - \lambda \end{bmatrix}) \tag{4}$
and we have, by any of a number of standard methods for the computation of $3 \times 3$ determinants,
$\det(\begin{bmatrix} 4 - \lambda &0&1\\1&1 - \lambda &1\\0&1&1 - \lambda \end{bmatrix}) = (4 - \lambda)(1 - \lambda)^2 + 1 - (4 - \lambda)$
$= (4 - \lambda)((1 - \lambda)^2 - 1) + 1 = (4 - \lambda)(\lambda^2 - 2\lambda) + 1$
$= -\lambda^3 + 6\lambda^2 - 8\lambda + 1, \tag{5}$
so that
$p_A(\lambda) = (4 -\lambda)(-\lambda^3 + 6\lambda^2 - 8\lambda + 1) = \lambda^4 - 10\lambda^3 + 32\lambda^2 - 33\lambda + 4. \tag{6}$
This method tacitly exploits the fact that $4$ is an eigenvalue of $A$ with eigenvector $\mathbf e_4$ when it invokes the expansion by minors; the fact that $A \mathbf e_4 = 4 \mathbf e_4$ is related to the zeroes of the last row and column of $A$; but about this I will say no more at present. My day job beckons.
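For a quick machine check of the hand computation, here is a sketch assuming NumPy is available; `np.poly` applied to a square matrix returns the coefficients of $\det(\lambda I - A)$, which for $n = 4$ coincides with $\det(A - \lambda I)$ since $(-1)^4 = 1$:

```python
import numpy as np

# The matrix A from (1); np.poly(A) returns the coefficients of
# det(lambda I - A), highest degree first.
A = np.array([[4., 0., 1., 0.],
              [1., 1., 1., 0.],
              [0., 1., 1., 0.],
              [0., 0., 0., 4.]])
coeffs = np.poly(A)
print(np.round(coeffs).astype(int))  # coefficients 1, -10, 32, -33, 4
```

The rounding is only there to suppress floating-point noise from the eigenvalue computation underlying `np.poly`.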
Hope this helps. Cheerio,
and as always,
Fiat Lux!!!
Permuting rows two and three, as well as columns two and three, yields the matrix
$$
A_2:=
\begin{pmatrix}
1275 & 0 & -169 & -208 \\
0 & 1275 &-208 & -256 \\
-169 & -208 & 1531& -208 \\
-208 & -256& -208 & 1444\\
\end{pmatrix}.
$$
Denoting $B:=\pmatrix{-169 & -208\\-208 & -256}$, the matrix can be written in block form
$$
A_2 = \pmatrix{ 1275\ I_2 & B \\ B & B + 1700\ I_2} = 1275\ I_4 + \pmatrix{ 0 & B \\ B & B + 425 I_2}.
$$
The matrix $B=\pmatrix{-13\cdot 13 & -13\cdot 16\\-13\cdot 16 & -16\cdot 16}$,
has eigenvalues $0$ and $-425=-256-169$ with eigenvectors $v_0=\pmatrix{16\\-13}$ and $v_{-425}=\pmatrix{13\\16}$, respectively.
These vectors combined yield the eigenvectors of $A_2$ (pairs: eigenvector $\to$ eigenvalue)
$$
\pmatrix{v_0\\0} \to 1275, \
\pmatrix{0\\v_0} \to 1700, \
\pmatrix{v_{-425}\\v_{-425}} \to 850, \
\pmatrix{v_{-425}\\-v_{-425}} \to 1700.
$$
Thus, the characteristic polynomial is
$$
p(\lambda) = (\lambda-1700)^2 (\lambda-1275) (\lambda-850).
$$
Expanding the characteristic polynomial the traditional way is no fun; the determinant of the matrix alone is $\approx 3\cdot 10^{12}$.
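A quick numerical confirmation of this spectrum (a sketch assuming NumPy; `eigvalsh` applies because $A_2$ is symmetric and returns the eigenvalues in ascending order):

```python
import numpy as np

# The symmetric matrix A_2 from above.
A2 = np.array([[1275,    0, -169, -208],
               [   0, 1275, -208, -256],
               [-169, -208, 1531, -208],
               [-208, -256, -208, 1444]], dtype=float)
vals = np.linalg.eigvalsh(A2)
print(np.round(vals).astype(int))  # eigenvalues 850, 1275, 1700, 1700
print(np.prod(vals))               # the determinant, roughly 3.13e12
```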
Best Answer
Once upon a less enlightened time, when people were less knowledgeable in the intricacies of algorithmically computing eigenvalues, methods for generating the coefficients of a matrix's characteristic polynomial were quite widespread. One of the more prominent methods for computing the coefficients was ascribed both to the Frenchman Leverrier and to the Russian Faddeev (who was a co-author of one of the oldest references on the practice of numerical linear algebra).
The (Faddeev-)Leverrier method requires a number of matrix multiplications to generate the coefficients of the characteristic polynomial. Letting the $n\times n$ matrix $\mathbf A$ have the monic characteristic polynomial $(-1)^n \det(\mathbf A-\lambda\mathbf I)=\lambda^n+c_{n-1}\lambda^{n-1}+\cdots+c_0$, the algorithm proceeds like so:
$\mathbf C=\mathbf A;$
$\text{for }k=1,\dots,n$
$\qquad\text{if }k>1$
$\qquad\qquad \mathbf C=\mathbf A\cdot(\mathbf C+c_{n-k+1}\mathbf I);$
$\qquad c_{n-k}=-\dfrac{\mathrm{tr}(\mathbf C)}{k};$
$\text{end for}$
If your computing environment can multiply matrices and take their trace (the sum of the diagonal elements, $\mathrm{tr}(\cdot)$), then you can easily program (Faddeev-)Leverrier. The method works nicely in exact arithmetic or in hand calculation (assuming you have the stamina to repeatedly multiply matrices), but is piss-poor in inexact arithmetic, as it tends to greatly magnify rounding errors in the matrix, yielding coefficients that become increasingly inaccurate as the iteration proceeds. But for the simple $3\times 3$ case envisioned by the OP, this should work nicely.
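The pseudocode above translates almost line for line; here is a minimal sketch in Python (assuming NumPy), exercised on a small integer test matrix where the recurrence is exact:

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of the monic characteristic
    polynomial lambda^n + c_{n-1} lambda^{n-1} + ... + c_0, computed by
    the (Faddeev-)Leverrier recurrence given above."""
    n = A.shape[0]
    c = np.zeros(n + 1)
    c[n] = 1.0                      # monic leading coefficient
    C = A.copy()
    for k in range(1, n + 1):
        if k > 1:
            C = A @ (C + c[n - k + 1] * np.eye(n))
        c[n - k] = -np.trace(C) / k
    return c[::-1]                  # highest degree first

# A 4x4 test matrix whose characteristic polynomial is
# lambda^4 - 10 lambda^3 + 32 lambda^2 - 33 lambda + 4.
A = np.array([[4., 0., 1., 0.],
              [1., 1., 1., 0.],
              [0., 1., 1., 0.],
              [0., 0., 0., 4.]])
print(faddeev_leverrier(A))  # coefficients 1, -10, 32, -33, 4
```

Since each step is a matrix product and a trace, the iteration stays exact for integer matrices; the rounding trouble mentioned above appears only in floating-point arithmetic on ill-conditioned inputs.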
People interested in this old, retired method might want to see this paper.