As rschwieb pointed out in his answer, which has mysteriously disappeared since my original posting of this answer, the method proposed by the OP in the text of the question is erroneous; indeed, the first answer proposed by Brad S. is also problematic as a computation of the characteristic polynomial of $A$, though his calculations are apparently correct for the characteristic polynomial of $A'$, which is the issue he is evidently addressing.
To explicitly use the fact that $4$ is an eigenvalue of $A$, observe that the $4$ in the lower right-hand corner of $A$ corresponds to the eigenvector $\mathbf e_4 = (0, 0, 0, 1)^T$, and that the upper-left $3 \times 3$ block of $A$, hence of $A - \lambda I$, leaves the subspace normal to $\mathbf e_4$ invariant. Thus the eigenvalues of $A$ restricted to that invariant subspace are simply those of $A$ exclusive of the eigenvalue $4$ associated with $\mathbf e_4$; the characteristic polynomial of this restriction of $A$ is in fact
$\det(\begin{bmatrix} 4 - \lambda &0&1\\1&1 - \lambda &1\\0&1&1 - \lambda \end{bmatrix}) = (4 - \lambda)(\lambda^2 - 2\lambda) + 1 = -\lambda^3 + 6\lambda^2 - 8\lambda + 1, \tag{0}$
as is shown in detail below. Multiplying (0) by the factor $4 - \lambda$ yields $p_A(\lambda) = (4 - \lambda)(-\lambda^3 + 6\lambda^2 - 8\lambda + 1)$ for the characteristic polynomial of $A$.
It appears to me that the error in the OP's method lies in computing the eigenvalues of $A - 4I$ on the subspace spanned by $\mathbf e_1 = (1, 0, 0, 0)^T$, $\mathbf e_2 = (0, 1, 0, 0)^T$, $\mathbf e_3 = (0, 0, 1, 0)^T$, instead of those of the restriction of $A$ itself. Indeed, an inspection of rschwieb's tables, when they existed, showed that the eigenvalues of $A'$ may be had by subtracting $4$ from those of $A$, exclusive of the eigenvalue $\lambda = 4$ of $A$. This agrees with the basic fact that $B\mathbf v = \mu \mathbf v \Leftrightarrow (B + \alpha I)\mathbf v = (\mu + \alpha)\mathbf v$, i.e., eigenvalues simply shift by $\alpha$ if $\alpha I$ is added to a matrix $B$.
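The eigenvalue-shift fact is easy to check numerically; here is a short NumPy sketch (my own verification, not part of the argument) using the upper-left $3 \times 3$ block of $A$ and the shift $\alpha = -4$, i.e. the OP's $A' = A - 4I$:

```python
import numpy as np

# Check that adding alpha*I to a matrix shifts every eigenvalue by alpha:
# B v = mu v  <=>  (B + alpha I) v = (mu + alpha) v.
B = np.array([[4.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])   # upper-left 3x3 block of A
alpha = -4.0                      # subtracting 4I, as in A' = A - 4I

mu = np.sort(np.linalg.eigvals(B).real)
mu_shifted = np.sort(np.linalg.eigvals(B + alpha * np.eye(3)).real)

# the shifted spectrum is the original spectrum moved by alpha
assert np.allclose(mu + alpha, mu_shifted)
print(mu, mu_shifted)
```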
Having said these things, here's how I would handle this one:
the given matrix is
$A=\begin{bmatrix} 4&0&1&0\\1&1&1&0\\0&1&1&0 \\0&0&0&4 \end{bmatrix}, \tag{1}$
so
$A - \lambda I = \begin{bmatrix} 4 - \lambda &0&1&0\\1&1 - \lambda &1&0\\0&1&1 - \lambda&0 \\0&0&0&4 - \lambda \end{bmatrix}, \tag{2}$
so the characteristic polynomial $p_A(\lambda)$ is
$p_A(\lambda) = \det (A - \lambda I) = \det (\begin{bmatrix} 4 - \lambda &0&1&0\\1&1 - \lambda &1&0\\0&1&1 - \lambda&0 \\0&0&0&4 - \lambda \end{bmatrix}); \tag{3}$
since the last row and last column vanish, save for the $(4, 4)$ entry, expansion in minors along the last row or column yields
$p_A(\lambda) = (4 - \lambda) \det(\begin{bmatrix} 4 - \lambda &0&1\\1&1 - \lambda &1\\0&1&1 - \lambda \end{bmatrix}) \tag{4}$
and we have, by any of a number of standard methods/formulas for the computation of $3 \times 3$ determinants
$\det(\begin{bmatrix} 4 - \lambda &0&1\\1&1 - \lambda &1\\0&1&1 - \lambda \end{bmatrix}) = (4 - \lambda)(1 - \lambda)^2 + 1 - (4 - \lambda)$
$= (4 - \lambda)((1 - \lambda)^2 - 1) + 1 = (4 - \lambda)(\lambda^2 - 2\lambda) + 1$
$= -\lambda^3 + 6\lambda^2 - 8\lambda + 1, \tag{5}$
so that
$p_A(\lambda) = (4 -\lambda)(-\lambda^3 + 6\lambda^2 - 8\lambda + 1) = \lambda^4 - 10\lambda^3 + 32\lambda^2 - 33\lambda + 4. \tag{6}$
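Those willing to let a computer algebra system do the bookkeeping can confirm (4), (5), and (6) symbolically; the following SymPy sketch (my own double-check, not part of the derivation) does exactly that:

```python
import sympy as sp

# Verify (4)-(6) symbolically: the 4x4 determinant factors as (4 - lambda)
# times the 3x3 determinant, and expands to the quartic in (6).
lam = sp.symbols('lamda')
M4 = sp.Matrix([[4 - lam, 0, 1, 0],
                [1, 1 - lam, 1, 0],
                [0, 1, 1 - lam, 0],
                [0, 0, 0, 4 - lam]])
M3 = M4[:3, :3]  # upper-left 3x3 block

assert sp.expand(M4.det() - (4 - lam) * M3.det()) == 0                      # eq. (4)
assert sp.expand(M3.det()) == -lam**3 + 6*lam**2 - 8*lam + 1                # eq. (5)
assert sp.expand(M4.det()) == lam**4 - 10*lam**3 + 32*lam**2 - 33*lam + 4   # eq. (6)
```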
This method tacitly exploits the fact that $4$ is an eigenvalue of $A$ with eigenvector $\mathbf e_4$ when it invokes the expansion by minors; the fact that $A \mathbf e_4 = 4 \mathbf e_4$ is related to the zeroes of the last row and column of $A$; but about this I will say no more at present. My day job beckons.
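Both facts just mentioned, that $A \mathbf e_4 = 4 \mathbf e_4$ and that the coefficients in (6) are correct, can also be confirmed numerically with NumPy (`np.poly` returns the coefficients of the monic characteristic polynomial computed from a matrix's eigenvalues; this is only a sanity check):

```python
import numpy as np

# Sanity check: e4 is an eigenvector of A with eigenvalue 4, and the
# characteristic polynomial has the coefficients found in (6).
A = np.array([[4.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 4.0]])
e4 = np.array([0.0, 0.0, 0.0, 1.0])

assert np.allclose(A @ e4, 4.0 * e4)                  # A e4 = 4 e4
assert np.allclose(np.poly(A), [1, -10, 32, -33, 4])  # lambda^4 - 10 lambda^3 + ...
```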
Hope this helps. Cheerio,
and as always,
Fiat Lux!!!
Best Answer
Let $A$ be the adjacency matrix for $G$. Without loss of generality, we can permute the vertices so that $v_i$ corresponds to the first row/column, and $v_j$ to the second. Since $v_i$ has degree $1$ and is adjacent only to $v_j$, we have $$xI-A = \begin{pmatrix}x & -1 & \mathbf{0}^\mathrm{T} \\ -1 & x & Y^\mathrm{T} \\ \mathbf{0} & Y & Z\end{pmatrix},$$ where $Y$ and $Z$ are matrices of the appropriate sizes. Note that $Z$ is just $xI-A_{ij}$, where $A_{ij}$ is the adjacency matrix of $G\backslash\{v_i,v_j\}$. Taking a cofactor expansion along the first row, we have $$\det(xI-A) = x\begin{vmatrix}x & Y^\mathrm{T} \\ Y & Z\end{vmatrix} + \begin{vmatrix} -1 & Y^\mathrm{T} \\ \mathbf{0} & Z\end{vmatrix}.$$ The first determinant is just $\det(xI-A_i)$, where $A_i$ is the adjacency matrix of $G\backslash\{v_i\}$. The second determinant is block triangular, so it evaluates to $-\det(Z) = -\det(xI-A_{ij})$. Therefore we have $$\det(xI-A) = x\det(xI-A_i) - \det(xI-A_{ij}),$$ as required.
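As a concrete illustration (my own example, not from the question), the recurrence can be checked with SymPy on the path $P_4$ on vertices $1$-$2$-$3$-$4$, taking $v_i$ to be the pendant vertex $1$ and $v_j$ its unique neighbour $2$:

```python
import sympy as sp

# Check det(xI - A) = x*det(xI - A_i) - det(xI - A_ij) on the path P4,
# where v_i = vertex 1 (degree 1) and v_j = vertex 2.
x = sp.symbols('x')

A = sp.Matrix([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]])   # adjacency matrix of P4
A_i = A[1:, 1:]                 # delete v_i: adjacency matrix of P3
A_ij = A[2:, 2:]                # delete v_i and v_j: adjacency matrix of P2

lhs = (x * sp.eye(4) - A).det()
rhs = x * (x * sp.eye(3) - A_i).det() - (x * sp.eye(2) - A_ij).det()
assert sp.expand(lhs - rhs) == 0
print(sp.expand(lhs))  # characteristic polynomial of P4
```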