Use the fact that $\begin{vmatrix} a & b+e \\
c & d+f \end{vmatrix} =
\begin{vmatrix} a & b \\
c & d \end{vmatrix} +
\begin{vmatrix} a & e \\
c & f \end{vmatrix}
$
We can use this fact to separate out the powers of $\lambda$. Here is an example for a $2 \times 2$ matrix.
$$
\begin{vmatrix} a-\lambda & b \\
c & d-\lambda \end{vmatrix} =
\begin{vmatrix} a & b \\
c & d-\lambda \end{vmatrix} +
\begin{vmatrix} -\lambda & b \\
0 & d-\lambda \end{vmatrix} =
\begin{vmatrix} a & b \\
c & d \end{vmatrix} +
\begin{vmatrix} a & 0 \\
c & -\lambda \end{vmatrix} +
\begin{vmatrix} -\lambda & b \\
0 & d \end{vmatrix} +
\begin{vmatrix} -\lambda & 0 \\
0 & -\lambda \end{vmatrix}
$$
This decomposes the determinant into a sum of terms, one for each power of $\lambda$.
Now try it with a $3 \times 3$ matrix and then generalize it.
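The four-term decomposition above can be checked symbolically. Here is a sketch using sympy (the symbol names are my own choice):

```python
# Verify that det(A - lambda*I) splits into the four determinants above,
# using the column-linearity of the determinant.
import sympy as sp

a, b, c, d, lam = sp.symbols('a b c d lambda')

lhs = sp.Matrix([[a - lam, b], [c, d - lam]]).det()
rhs = (sp.Matrix([[a, b], [c, d]]).det()
       + sp.Matrix([[a, 0], [c, -lam]]).det()
       + sp.Matrix([[-lam, b], [0, d]]).det()
       + sp.Matrix([[-lam, 0], [0, -lam]]).det())

print(sp.simplify(lhs - rhs))  # 0: the four terms sum to det(A - lambda*I)
```

The constant term is $\det(A)$ and the $\lambda^2$ term is $(-\lambda)^2$, matching the decomposition in the display above.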
I get the impression that the intended answer to this question is simpler: as you noted, $a_0 = f(0) = \det(A - 0I) = \det(A)$. Since a square matrix $A$ is invertible if and only if $\det(A) \neq 0$, we have that $A$ is invertible if and only if $a_0 \neq 0$.
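A quick numeric sanity check of the claim $a_0 = \det(A)$, sketched with sympy (the matrix below is my own example, not from the question):

```python
# The constant term a_0 = f(0) of the characteristic polynomial
# f(t) = det(A - t*I) equals det(A).
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1, 0],
               [0, 3, 1],
               [1, 0, 1]])

f = (A - t * sp.eye(3)).det()  # characteristic polynomial in t
a0 = f.subs(t, 0)

print(a0, A.det())  # 7 7: a_0 = det(A) is nonzero, so A is invertible
```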
This is basically no different from your argument, except that you justify the non-existence of $A^{-1}$ using the formula $A^{-1} = \frac{1}{\det(A)}\operatorname{adj}(A)$, which only makes sense if $\det(A) \neq 0$. There's a subtle issue with this logic: $A$ being invertible, and its inverse, are not defined in terms of the above formula for $A^{-1}$.
This formula is a theorem proven about inverses, and as you've pointed out, it only really makes sense when $\det(A) \neq 0$. However, there's nothing to say, prima facie, that this formula for inverses doesn't just fail whenever $\det(A) = 0$. That is, it might be the case that this formula provides the inverse of $A$ whenever $\det(A) \neq 0$, but some other method needs to be employed in the case where $\det(A) = 0$.
To take an analogous example from power series, given a power series $\sum_{n=0}^\infty a_n x^n$, we can compute the radius of convergence to be
$$\lim_{n \to \infty} \left| \frac{a_n}{a_{n+1}} \right|,$$
wherever the above limit exists and is finite. If the limit doesn't exist, then there is still a radius of convergence, but it cannot be found using this formula. It would therefore be incorrect to assume, based on this formula, that the limit must exist whenever the radius of convergence exists.
All that said, it is, of course, true that $\det(A) \neq 0$ whenever $A$ is invertible; it's just a separate result.
Best Answer
The answer is no.
You are given a degree $n$ polynomial $p(x)$ that an $n\times n$ matrix $A$ satisfies. This is not enough information to find the characteristic polynomial $c_A(x)$, although you will be able to narrow it down to finitely many possibilities.
Let's look at an example to see why. Suppose your friend picks a matrix $A$, doesn't tell you what it is, but tells you $p(x)$ and asks you to guess $c_A(x)$. Say your friend picked
$$A=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$
and told you that $p(x)=x^3-6x^2+11x-6$.
You could then reason that $p(x)=(x-1)(x-2)(x-3)$, which would mean that the minimal polynomial $m_A(x)$ must be one of the following: $(x-1)$, $(x-2)$, $(x-3)$, $(x-1)(x-2)$, $(x-1)(x-3)$, $(x-2)(x-3)$, $(x-1)(x-2)(x-3)$.
Hence the characteristic polynomial $c_A(x)$ must be one of the following: $(x-1)^3$, $(x-2)^3$, $(x-3)^3$, $(x-1)^2(x-2)$, $(x-1)(x-2)^2$, $(x-1)^2(x-3)$, $(x-1)(x-3)^2$, $(x-2)^2(x-3)$, $(x-2)(x-3)^2$, $(x-1)(x-2)(x-3)$.
Since your friend picked
$$A=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$
the characteristic polynomial is $c_A(x)=(x-1)^2(x-2)$, but you can't prove that, because for all you know he may have picked
$$B=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$
which also satisfies $p(x)=x^3-6x^2+11x-6=(x-1)(x-2)(x-3)$.
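The example can be checked with sympy. Both matrices are annihilated by $p(x)$, yet their characteristic polynomials differ (a sketch):

```python
# A and B both satisfy p(x) = x^3 - 6x^2 + 11x - 6, but have
# different characteristic polynomials.
import sympy as sp

x = sp.symbols('x')
A = sp.diag(1, 1, 2)
B = sp.diag(1, 2, 3)

def p_of(M):
    # evaluate p at the matrix M
    return M**3 - 6*M**2 + 11*M - 6*sp.eye(3)

print(p_of(A) == sp.zeros(3, 3), p_of(B) == sp.zeros(3, 3))  # True True
print(sp.factor(A.charpoly(x).as_expr()))  # (x-1)^2 (x-2), up to ordering
print(sp.factor(B.charpoly(x).as_expr()))  # (x-1)(x-2)(x-3), up to ordering
```

So knowing only $p(x)$, you cannot distinguish $A$ from $B$, and hence cannot pin down $c_A(x)$.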