Here is, I think, a possible answer.
From Jacobi's formula, writing $C_A(\lambda) := \det(\lambda I - A)$, it follows that
$$ \frac{d}{d \lambda} C_A(\lambda) = \frac{d}{d \lambda} \det ( \lambda I - A) = \text{tr} \left( \text{adj} ( \lambda I - A ) \frac{d}{d \lambda} ( \lambda I - A) \right) = \text{tr} ( \text{adj} ( \lambda I - A ) ) $$
Therefore, evaluating at $\lambda = 0$ and using $\text{adj}(-A) = (-1)^{n-1} \text{adj}(A)$,
$$ \frac{d}{d \lambda} C_A(\lambda) \Bigg|_{\lambda=0} = \text{tr} ( \text{adj} ( -A ) ) = (-1)^{n+1} \, \text{tr} ( \text{adj} ( A ) )$$
Observe that, with $\Gamma_A(\lambda) := \frac{C_A(\lambda) - C_A(0)}{\lambda}$ (a polynomial, since $C_A(0)$ is the constant term of $C_A$),
$$ \frac{d}{d \lambda} C_A(\lambda) \Bigg|_{\lambda=0} = \lim_{\lambda \rightarrow 0} \frac{C_A(\lambda) - C_A(0)}{\lambda} = \lim_{\lambda \rightarrow 0} \Gamma_A(\lambda) = \Gamma_A(0)$$
Therefore,
$$ (-1)^{n+1} \Gamma_A(0) = \text{tr} ( \text{adj} ( A ) ) $$
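This identity is easy to sanity-check numerically. A minimal sketch, assuming the convention $C_A(\lambda) = \det(\lambda I - A)$ and using `numpy` (`np.poly` returns the characteristic-polynomial coefficients, and $\text{adj}(A) = \det(A) A^{-1}$ for invertible $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))  # a random (almost surely invertible) matrix

# Coefficients of C_A(lambda) = det(lambda*I - A), highest degree first.
coeffs = np.poly(A)
c1 = coeffs[-2]  # coefficient of lambda^1, i.e. C_A'(0) = Gamma_A(0)

adj_A = np.linalg.det(A) * np.linalg.inv(A)  # adj(A) for invertible A

# (-1)^(n+1) * Gamma_A(0) should equal tr(adj(A)).
assert np.isclose((-1) ** (n + 1) * c1, np.trace(adj_A))
```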
First let me simplify the clutter of notation a bit; set $S:=V\otimes_RR[t]$ and $E:=\operatorname{End}_{R[t]}(S)$.
Throughout this answer I will view elements of $R[f,t]$ as $R[t]$-linear endomorphisms of $S$, i.e. as elements of $E$. As for exterior powers: only the case $k=1$ is relevant for the proof, so I won't bother with them at all.
The idea of the proof is to show that the endomorphism $\chi_f\in E$ vanishes on the quotient $S/(f-t)S$, and then to show that $S/(f-t)S\cong V$ as $R$-modules. The main ingredient is showing that $f-t\in E$ commutes with its adjugate. This relies on the fact that $\chi_f$ is not a zero divisor in $R[t]$.
The proof is a lot of commutative algebra; I have assumed everything in Atiyah–Macdonald. If any part is unclear, let me know.
Step 1: The characteristic polynomial is not a zero divisor in $R[t]$.
The characteristic polynomial $\chi_f$ of $f-t\in R[f,t]$ is the determinant of the $R[t]$-linear map $f-t\in E$. Note that $\chi_f\in R[t]$ is not a zero divisor: its leading coefficient as a polynomial in $t$ is $(-1)^n$, a unit, and a polynomial whose leading coefficient is a unit is not a zero divisor.
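This can be made concrete for a generic $2\times 2$ matrix; a minimal `sympy` sketch (the symbols and the matrix are illustrative choices) checking that the leading coefficient of $\chi_f$ in $t$ is a unit:

```python
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')
f = sp.Matrix([[a, b], [c, d]])  # a generic 2x2 matrix

# chi_f = det(f - t*I), an element of R[t].
chi = (f - t * sp.eye(2)).det()

# Its leading coefficient in t is (-1)^n = 1 here, a unit,
# so chi_f is not a zero divisor in R[t].
lead = sp.Poly(chi, t).LC()
assert lead == 1
```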
Step 2: The endomorphism $f-t\in E$ commutes with its adjugate w.r.t. the given pairing.
The adjugate of $f-t\in E$ with respect to the given perfect pairing is the unique $F\in E$ such that
$$F\cdot(f-t)=\chi_f\cdot1_S.\tag{1}$$
Because $\chi_f\in R[t]$ is not a zero divisor, localizing at $\chi_f$ yields an injection $R[t]\ \longrightarrow\ R[t]_{\chi_f}$. Because $V$ is a finitely generated free $R$-module, this in turn yields injections
$$S\ \longrightarrow\ S_{\chi_f}
\qquad\text{ and }\qquad
E\ \longrightarrow\ E_{\chi_f}.$$
By construction $\chi_f$ is a unit in $E_{\chi_f}$ and hence $(1)$ shows that also $f-t$ is a unit in $E_{\chi_f}$, so
$$F=\chi_f\cdot(f-t)^{-1},$$
in $E_{\chi_f}$. This shows that $F$ and $f-t$ commute in $E_{\chi_f}$: indeed $f-t$ commutes with its own inverse, and the scalar $\chi_f\in R[t]$ acts centrally. Because $E_{\chi_f}$ contains $E$ as a subring, they also commute in $E$.
Step 3: On the quotient module $S/(f-t)S$ we have $\chi_f(f)=0$.
Because $F$ and $f-t$ commute, for all $(f-t)s\in(f-t)S$ we have
$$F((f-t)s)=(f-t)F(s)\in(f-t)S,$$
so $F$ maps the $R[t]$-submodule $(f-t)S\subset S$ into itself. This means $F$ descends to an $R[t]$-linear map
$$S/(f-t)S\ \longrightarrow\ S/(f-t)S.$$
In this quotient $f-t$ is identically zero, so identity $(1)$ shows that on the quotient
$$F\cdot0=\chi_f\cdot1_{S/(f-t)S},$$
and so $\chi_f$ acts as zero on $S/(f-t)S$. Since $t$ and $f$ act identically on this quotient, this says precisely that $\chi_f(f)=0$ on $S/(f-t)S$.
Step 4: Also $\chi_f(f)=0$ on $V$.
Because $\chi_f(f)=0$ on $S/(f-t)S$ and the composition
$$V\ \longrightarrow\ S\ \longrightarrow\ S/(f-t)S$$
is an isomorphism of $R[f]$-modules, it follows that $\chi_f(f)=0$ on $V$ as well.
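The conclusion $\chi_f(f) = 0$ is the Cayley–Hamilton theorem, and for a generic $2\times 2$ matrix it can be verified symbolically. A minimal sketch using `sympy` (the symbols are illustrative):

```python
import sympy as sp

lam, a, b, c, d = sp.symbols('lam a b c d')
f = sp.Matrix([[a, b], [c, d]])

# chi_f(lam) = det(lam*I - f); all_coeffs() lists coefficients, highest first.
coeffs = f.charpoly(lam).all_coeffs()

# Evaluate chi_f at the matrix f itself: sum_k c_k * f^k.
deg = len(coeffs) - 1
chi_at_f = sp.zeros(2, 2)
for k, ck in enumerate(coeffs):
    chi_at_f += ck * f ** (deg - k)

# Expanding each entry gives the zero matrix.
assert chi_at_f.applyfunc(sp.expand) == sp.zeros(2, 2)
```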
Best Answer
The reason we need the lemma is that from $P(t)=B(t)(A-tI)$ one cannot directly conclude that $P(A)=B(A)(A-AI)$.
If $R$ is a commutative ring, then there is a natural map $R[t]\to R^R$ which is a ring homomorphism (we endow $R^R$ with the pointwise ring structure: $(f+g)(r) = f(r)+g(r)$, and $fg(r) = f(r)g(r)$ for every $r\in R$). If $p(t)=q(t)s(t)$, then for every $r\in R$ you have that $p(r)=q(r)s(r)$.
But this doesn't work if $R$ is not commutative. For example, taking $p(t) = at$, $q(t) = t$ and $s(t)=a$, you have $p(t)=q(t)s(t)$ in $R[t]$ (since $t$ is central in $R[t]$ even when $R$ is not commutative), but $p(r) = ar$ while $q(r)s(r) = ra$. So you get $p(r)=q(r)s(r)$ if and only if $a$ and $r$ commute. Thus, while you can certainly define a map $\psi\colon R[t]\to R^R$ by $$\psi(a_0+a_1t+\cdots+a_nt^n)(r) = a_0 + a_1r + \cdots + a_nr^n,$$ this map is not a ring homomorphism when the ring is not commutative. This is the situation we have here, where the ring $R$ is the ring of $n\times n$ matrices over $\mathbb{K}$, which is not commutative when $n\gt 1$. In particular, from $P(t) = B(t)(A-tI)$ one cannot simply conclude that $P(A)=B(A)(A-AI)$. This implicitly assumes that your map $M_n(\mathbb{K})[t]\to M_n(\mathbb{K})^{M_n(\mathbb{K})}$ is multiplicative, which it is not in this case.
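This failure is easy to exhibit with actual matrices. A minimal `numpy` sketch (the matrices `a` and `r` are arbitrary non-commuting choices), showing that evaluating $p(t) = q(t)s(t)$ termwise at a matrix breaks down:

```python
import numpy as np

a = np.array([[0, 1], [0, 0]])
r = np.array([[1, 0], [0, 2]])

# In R[t]: p(t) = a*t equals q(t)*s(t) with q(t) = t, s(t) = a.
p_at_r = a @ r   # evaluating p at r gives a*r
qs_at_r = r @ a  # but q(r)*s(r) = r*a

# a and r do not commute, so the two evaluations differ.
assert not np.array_equal(p_at_r, qs_at_r)
```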
If your $A$ happens to be central in $M_n(\mathbb{K})$, then it is true that the induced map $M_n(\mathbb{K})[t]\to M_n(\mathbb{K})$ is a homomorphism. But then you would be assuming that your $A$ is a scalar multiple of the identity. It would also be true if the coefficients of the polynomial $B(t)$ centralize $A$, but you are not assuming that. So you do need to prove that in this case you have $P(A)=B(A)(A-AI)$, since it does not follow from the general set-up (the way it would in a commutative setting).
P.S. In fact, this is the subtle point where the proof that a polynomial of degree $n$ over a field has at most $n$ roots breaks down for skew fields/division rings. If $K$ is a division ring, then the division algorithm holds for polynomials with coefficients in $K$, so one can show that for every $p(t)\in K[t]$ and $a(t)\in K[t]$, $a(t)\neq 0$, there exist unique $q(t)$ and $r(t)$ such that $p(t)=q(t)a(t) + r(t)$ and $r(t)=0$ or $\deg(r)\lt \deg(a)$. From this, we can deduce that for every polynomial $p(t)$ and for every $a\in K$, we can write $p(t) = q(t)(t-a) + r$, where $r\in K$. But the proof of the Remainder and Factor Theorems no longer goes through, because we cannot go from $p(t)=q(t)(t-a)+r$ to $p(a)=q(a)(a-a)+r$; and you cannot get the recursion argument to work, because from $p(t)=q(t)(t-a)$, and $p(b)=0$ with $b\neq a$, you cannot deduce that $q(b)=0$. For instance, over the real quaternions, we have $p(t)=t^2+1=(t+i)(t-i)$, but $p(j)=j^2+1\neq 2k = ij-ji = (j+i)(j-i)$. I remember when I first learned the corresponding theorems for polynomial rings, the professor challenged us to identify all the field axioms used in the proofs of the Remainder and Factor Theorems; none of us spotted the use of commutativity in the evaluation map.
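The quaternion computation can be replayed with the standard embedding of the quaternions into $2\times 2$ complex matrices; a minimal `numpy` sketch:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])               # quaternion i
j = np.array([[0, 1], [-1, 0]], dtype=complex)  # quaternion j
k = i @ j                                       # quaternion k = ij

# p(t) = t^2 + 1 factors as (t + i)(t - i), yet p(j) = j^2 + 1 = 0 ...
assert np.allclose(j @ j + I2, 0)
# ... while evaluating the factors at j gives (j + i)(j - i) = 2k != 0.
assert np.allclose((j + i) @ (j - i), 2 * k)
```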