I've found a partial solution, namely for the case $a = b = 0$. The key observation is that $D$ and $E$ commute, so they share a common set of eigenvectors. We can then change to this eigenbasis, $(x,y) \to (X,Y)$, so that the difference equation becomes, in this basis,
$$
\begin{pmatrix}
r E_1-\lambda+r^{-1}D_1 & 0 \\
0 & r E_2-\lambda+r^{-1}D_2
\end{pmatrix}
\begin{pmatrix}
X \\ Y
\end{pmatrix}
=
\begin{pmatrix}
0 \\0
\end{pmatrix}
$$
where $E_i, D_i$ are the eigenvalues of $E, D$ with common eigenvector $(x_i, y_i)$. Our difference equation then "uncouples" into two regular scalar difference equations. The boundary conditions remain $X_0 = Y_0 = X_{N+1} = Y_{N+1} = 0$, so we really have two copies of the regular tridiagonal Toeplitz matrix problem. If $a \neq 0$ or $b \neq 0$, we can't apply the same trick, because $C$ doesn't commute with $D$ or $E$.
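The question's $D$ and $E$ are not reproduced here, so the following sketch uses stand-in commuting symmetric matrices (two polynomials of the same symmetric matrix $S$, which always commute) just to illustrate the simultaneous-diagonalization step numerically:

```python
import numpy as np

# Illustrative stand-ins for D and E: two polynomials of the same
# symmetric matrix S.  Any such pair commutes, and commuting symmetric
# matrices are diagonalized by a common orthonormal eigenbasis.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))
S = G + G.T                           # symmetric "seed" matrix
D = 2 * S + 3 * np.eye(3)             # stand-in for D
E = S @ S - np.eye(3)                 # stand-in for E; D @ E == E @ D

assert np.allclose(D @ E, E @ D)

# Columns of P (eigenvectors of S) diagonalize both matrices at once --
# this is exactly the change of basis that uncouples the system.
_, P = np.linalg.eigh(S)
for X in (D, E):
    Xd = P.T @ X @ P
    assert np.allclose(Xd, np.diag(np.diag(Xd)))
```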
After some extended searching, I found the article "Numerically Stable Algorithms for Inversion of Block Tridiagonal and Banded Matrices", which gives a direct algorithm for an even more general problem:
$$
M = \begin{bmatrix}
A_1 & -B_1 & 0 & 0 & \cdots & 0\\
-B_1^T & A_2 & -B_2 & 0 & \cdots & 0\\
0 & -B_2^T & A_3 & -B_3 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -B_{N-1}^T & A_N
\end{bmatrix}
$$
where all $A_i$ are symmetric and all $B_i$ are non-singular.
The symmetry of $M$ as a whole is then exploited to obtain the decomposition $(M^{-1})_{ij}=U_iV_j^T$ for $j \ge i$.
Note that when the inverse of a symmetric matrix exists (and we assume it does), it is symmetric as well.
The $U_i$ and $V_i$ are given recursively:
$$
U_1 = I, \quad U_2=B_1^{-1}A_1\\
U_{i+1}=B_i^{-1}(A_iU_i-B^T_{i-1}U_{i-1}),\quad i=2,...,N-1\\
V_N^T=(A_NU_N-B_{N-1}^TU_{N-1})^{-1}, \quad V_{N-1}^T=V_N^TA_NB_{N-1}^{-1}\\
V_i^T=(V_{i+1}^TA_{i+1}-V_{i+2}^TB_{i+1}^T)B_i^{-1}, \quad i=N-2,...,1
$$
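Here is a sketch of the $U$/$V$ recursion above (my reading of it, not the paper's exact pseudocode), checked against `numpy.linalg.inv` on a random symmetric block tridiagonal matrix; the sizes `N`, `m` and the random blocks are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 5, 3

# Random symmetric diagonal blocks A_i and non-singular off-diagonal B_i.
A = []
for _ in range(N):
    G = rng.standard_normal((m, m))
    A.append(G + G.T + 10 * np.eye(m))
B = [rng.standard_normal((m, m)) + 3 * np.eye(m) for _ in range(N - 1)]

# Assemble M with A_i on the diagonal, -B_i above it and -B_i^T below it.
M = np.zeros((N * m, N * m))
for i in range(N):
    M[i*m:(i+1)*m, i*m:(i+1)*m] = A[i]
for i in range(N - 1):
    M[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = -B[i]
    M[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = -B[i].T

# Forward recursion U_1, ..., U_N (0-based: U[0] is U_1).
U = [np.eye(m), np.linalg.solve(B[0], A[0])]
for i in range(1, N - 1):
    U.append(np.linalg.solve(B[i], A[i] @ U[i] - B[i - 1].T @ U[i - 1]))

# Backward recursion V_N^T, ..., V_1^T (Vt[j] holds V_{j+1}^T).
Vt = [None] * N
Vt[N - 1] = np.linalg.inv(A[N - 1] @ U[N - 1] - B[N - 2].T @ U[N - 2])
Vt[N - 2] = Vt[N - 1] @ A[N - 1] @ np.linalg.inv(B[N - 2])
for i in range(N - 3, -1, -1):
    Vt[i] = (Vt[i + 1] @ A[i + 1] - Vt[i + 2] @ B[i + 1].T) @ np.linalg.inv(B[i])

# Verify (M^{-1})_{ij} = U_i V_j^T for j >= i.
Minv = np.linalg.inv(M)
assert all(np.allclose(Minv[i*m:(i+1)*m, j*m:(j+1)*m], U[i] @ Vt[j])
           for i in range(N) for j in range(i, N))
```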
Under the constraints that all $A_i$ are equal, all $B_i$ are equal, and $B=B^T$ (which I missed when posting the original question!), the equations become even simpler:
$$
U_1 = I, \quad U_2 = B^{-1}A \\
U_{i+1} = B^{-1}AU_i - U_{i-1},\quad i=2,...,N-1\\
V_N^T=(AU_N-BU_{N-1})^{-1}, \quad V_{N-1}^T=V_N^TAB^{-1}\\
V_i^T=V_{i+1}^TAB^{-1}-V_{i+2}^T, \quad i=N-2,...,1
$$
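For the equal-blocks case the same numerical check becomes shorter; again a hedged sketch with illustrative sizes, using a Kronecker product to assemble the block Toeplitz $M$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 6, 2
G = rng.standard_normal((m, m))
A = G + G.T + 16 * np.eye(m)          # symmetric diagonal block
H = rng.standard_normal((m, m))
B = H + H.T + 3 * np.eye(m)           # symmetric, non-singular off-diagonal block
Binv = np.linalg.inv(B)

# Block Toeplitz M: A on the diagonal, -B on both off-diagonals.
M = np.kron(np.eye(N), A) - np.kron(np.eye(N, k=1) + np.eye(N, k=-1), B)

# Simplified forward recursion.
U = [np.eye(m), Binv @ A]
for i in range(1, N - 1):
    U.append(Binv @ A @ U[i] - U[i - 1])

# Simplified backward recursion.
Vt = [None] * N
Vt[N - 1] = np.linalg.inv(A @ U[N - 1] - B @ U[N - 2])
Vt[N - 2] = Vt[N - 1] @ A @ Binv
for i in range(N - 3, -1, -1):
    Vt[i] = Vt[i + 1] @ A @ Binv - Vt[i + 2]

Minv = np.linalg.inv(M)
assert all(np.allclose(Minv[i*m:(i+1)*m, j*m:(j+1)*m], U[i] @ Vt[j])
           for i in range(N) for j in range(i, N))
```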
Best Answer
You can adapt the argument behind the Gershgorin circle theorem to prove the strict inequality.
Let $M_n$ denote the original matrix. Suppose there exists an eigenvalue $\lambda$ of $M_n$ with $|\lambda| \geq 1$. Let $v = (v_1, \dots, v_n)^T$ be an eigenvector for $\lambda$.
Since at least one of $v_i$ is nonzero, we may divide $v$ by the maximum of $|v_i|$ and assume that the maximum of $|v_i|$ is equal to $1$. Let $k \in \{1, \dots, n\}$ be the smallest index such that $|v_k| = 1$.
If $k = 1$, then from $M_nv = \lambda v$ we get $\frac 12 v_2 = \lambda v_1$. Taking absolute values and using $|v_1| = 1$ and $|\lambda| \geq 1$, we have $|v_2| = 2|\lambda||v_1| \geq 2$, which contradicts our assumption that all $|v_i| \leq 1$. The same argument shows that $k = n$ leads to a contradiction.
Now assume that $1 < k < n$. Again from $M_nv = \lambda v$ we get $\frac 1 2(v_{k - 1} + v_{k + 1}) = \lambda v_k$, and taking absolute values gives $|v_{k - 1} + v_{k + 1}| = 2|\lambda||v_k| \geq 2$.
However, both $v_{k - 1}$ and $v_{k + 1}$ have absolute value at most $1$, so the above inequality forces $|v_{k - 1}| = |v_{k + 1}| = 1$. In particular $|v_{k - 1}| = 1$, which contradicts the minimality of $k$.
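A quick numerical sanity check of the conclusion, assuming $M_n$ is the $n \times n$ tridiagonal matrix with zeros on the diagonal and $\frac12$ on the sub- and superdiagonals (the matrix the proof manipulates):

```python
import numpy as np

# Every eigenvalue of M_n lies strictly inside (-1, 1).
for n in range(2, 30):
    Mn = 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
    eigs = np.linalg.eigvalsh(Mn)
    assert np.max(np.abs(eigs)) < 1
    # In fact the eigenvalues are cos(k*pi/(n+1)), k = 1, ..., n.
    assert np.allclose(np.sort(eigs),
                       np.cos(np.pi * np.arange(n, 0, -1) / (n + 1)))
```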
The same technique can be applied to more general matrices.
By the way, if we denote by $P_n(t)$ the characteristic polynomial of the matrix $M_n$, then the polynomials $P_n(t)$ satisfy the recurrence relation $P_{n + 2} - tP_{n + 1} + \frac14P_n = 0$, together with initial conditions $P_0 = 1$ and $P_1 = t$.
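The recurrence is easy to confirm numerically (same $M_n$ as above: zero diagonal, $\frac12$ off-diagonals):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Check P_{n+2} = t P_{n+1} - (1/4) P_n for P_n(t) = det(tI - M_n).
def charpoly(n):
    Mn = 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
    return Polynomial(np.poly(Mn)[::-1])   # np.poly lists highest degree first

t = Polynomial([0, 1])
P = {0: Polynomial([1]), 1: t}             # initial conditions P_0 = 1, P_1 = t
for n in range(2, 8):
    P[n] = charpoly(n)
for n in range(6):
    rhs = t * P[n + 1] - 0.25 * P[n]
    assert np.allclose(P[n + 2].coef, rhs.coef)
```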