The formula that you give for the absolute value of the error is exact, sort of. By that I mean that for every $x$, there is a $\xi(x)$ in our interval such that the error when you use the interpolating quadratic at $x$ is exactly the one obtained by putting $\xi=\xi(x)$ in the error formula.
In most cases, this kind of Mean Value Theorem information leads to estimates, but not to exact values, since we don't know $\xi(x)$.
However, here we are evaluating the third derivative of our cubic at $\xi(x)$. The third derivative is constant! So it doesn't matter what $\xi(x)$ is, that part of the error estimate does not change. The rest is an explicit polynomial in $x$. It follows that the error estimate and the actual error are equal, as you noticed computationally. Both are polynomials in $x$. Given any two polynomials, equality at all values of $x$ (or even at infinitely many values of $x$, or, for cubics, at $4$ values of $x$) means that the coefficients match.
Precisely the same thing happens when you use the same process to approximate a polynomial of degree $n+1$ by a Newton interpolating polynomial of degree $n$. If you look up the error estimate,
you will see that it is a certain polynomial in $x$ times a term $f^{(n+1)}(\xi(x))$. (I like that version better than the absolute value version, since we also get information about the sign of the error.)
For a polynomial $f$ of degree $n+1$, the $(n+1)$-th derivative is constant, so
the term $f^{(n+1)}(\xi(x))$ is a constant independent of $x$. Thus the error estimate and the error are identically equal. Since they are both polynomials, that means their coefficients are equal.
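This is easy to check numerically. The sketch below (NumPy; the choice of $f(x)=x^4$ and equispaced nodes is arbitrary) interpolates a degree-$4$ polynomial at $4$ nodes with a cubic and compares the actual error with the formula $\frac{f^{(4)}(\xi)}{4!}\prod_i(x-x_i)$, which here is independent of $\xi$ because $f^{(4)}=24$ is constant:

```python
import numpy as np
from math import factorial

# f(x) = x^4, a polynomial of degree n+1 with n = 3; f^{(4)} is the constant 24
f = np.poly1d([1, 0, 0, 0, 0])
nodes = np.array([0.0, 1.0, 2.0, 3.0])   # n+1 = 4 interpolation nodes
n = len(nodes) - 1

# degree-n interpolating polynomial through the nodes
p = np.poly1d(np.polyfit(nodes, f(nodes), n))

xs = np.linspace(-1.0, 4.0, 9)           # test points, including points outside the nodes
actual_error = f(xs) - p(xs)

# error formula: f^{(n+1)}(xi)/(n+1)! * prod_i (x - x_i), with f^{(4)} = 24 regardless of xi
formula = np.prod([xs - xi for xi in nodes], axis=0) * 24 / factorial(n + 1)

print(np.max(np.abs(actual_error - formula)))  # essentially zero, up to rounding
```

The maximum discrepancy is at the level of floating-point rounding, confirming that the error "estimate" is the error exactly.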
We assume the Stirling approximation (first order) $$\ln(n!)\approx n \ln(n) - n = g(n) \tag{1}$$
And the Taylor expansion:
$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\
=x_0\ln(x_0)-x_0 + \ln(x_0)\,\delta+\frac{\delta^2}{2 x_0} \tag{2}$$
where $\delta = x-x_0$ (increment).
Then the change in $\ln\Omega_{\{n\}}$ due to the box whose count decreases by $\delta$ (box 1) is
$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\ =\ln(n_0)\,\delta-\frac{\delta^2}{2 n_0} \tag{3}$$
The change due to the other box, whose count increases by $\delta$ (box 2), is
$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\ln(n_0)\,\delta-\frac{\delta^2}{2 n_0} \tag{4}$$
The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$
(That is why we need a second-order Taylor expansion: the linear terms cancel, not because "we are at a local maximum". In fact the result is valid for any $n_0$, not only for $n_0=N/M$.)
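The result $(5)$ can be checked against exact factorials. The sketch below (Python; the values $n_0=10000$, $\delta=30$ are chosen arbitrarily) computes the exact changes in eqs. $(3)$ and $(4)$ via the log-gamma function and compares their sum with $-\delta^2/n_0$:

```python
from math import lgamma

def ln_fact(n):
    """Exact ln(n!) via the log-gamma function: ln(n!) = lgamma(n + 1)."""
    return lgamma(n + 1)

n0, delta = 10_000, 30   # occupancy of each box before the transfer, and the transferred amount

# eq. (3): box 1 goes from n0 to n0 - delta
d1 = ln_fact(n0) - ln_fact(n0 - delta)
# eq. (4): box 2 goes from n0 to n0 + delta
d2 = ln_fact(n0) - ln_fact(n0 + delta)

total = d1 + d2          # eq. (5) predicts -delta**2 / n0
print(total, -delta**2 / n0)  # both approximately -0.09
```

The exact total and the prediction $-\delta^2/n_0=-0.09$ agree to several decimal places; the residual is the higher-order terms dropped in the expansion.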
The original statement
$$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$
seems to have two errors. First, it mixes the value of the statistical weight with its change: if we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong (sanity check: if $\delta n=0$ the change should be zero). Second, the result seems to be missing a factor of two (indeed, the copied derivation emphasizes that the first variation is due to one box, but then it seems to forget about the other one).
Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq. $(3)$). If it's understood (as it seems to be) as the total change, then it's wrong; the net change is given by eq. $(5)$.
Best Answer
The quality of your data. If your data points are highly accurate, then it makes sense to respect them as much as possible, and some form of interpolation would be appropriate. On the other hand, if you expect some uniform random noise to be associated with the data, then it makes sense to pay more attention to local averages, to "smooth" it out. In this case an approximation scheme is more appropriate.
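To illustrate the trade-off, here is a sketch (NumPy; the function, noise level, and degrees are all made up): noisy samples of a smooth curve are fitted both by a full-degree interpolant, which respects every data point, and by a low-degree least-squares fit, which pays attention to local averages.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 11)
y_true = np.sin(2 * np.pi * x)
y = y_true + rng.normal(0.0, 0.05, size=x.size)   # data with random noise

# interpolation: degree-10 polynomial through all 11 points (matches the data exactly)
interp = np.poly1d(np.polyfit(x, y, x.size - 1))

# approximation: degree-3 least-squares fit (smooths the noise out)
approx = np.poly1d(np.polyfit(x, y, 3))

xs = np.linspace(0.0, 1.0, 201)
err_interp = np.max(np.abs(interp(xs) - np.sin(2 * np.pi * xs)))
err_approx = np.max(np.abs(approx(xs) - np.sin(2 * np.pi * xs)))
print(err_interp, err_approx)   # worst-case deviation of each fit from the true curve
```

With noisy data like this, the low-degree fit typically stays much closer to the underlying curve, since the interpolant is forced to chase the noise (and amplifies it between the nodes); with exact data the comparison reverses.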