It's certainly not necessary for $A$ and $D$ to be invertible, e.g. with $B = \pmatrix{1 & 0\cr 0 & 1\cr}$ you could have $A = D = \pmatrix{0 & 0\cr 0 & 0\cr}$, or with $B = \pmatrix{1 & 0\cr 0 & 0\cr}$ you could have $A = \pmatrix{0 & 0\cr 0 & 1\cr}$ and $D = \pmatrix{1 & 1\cr 1 & 1\cr}$.
Of course with $B = \pmatrix{0 & 0\cr 0 & 0\cr}$ the eigenvalues of $M$ are the union of the eigenvalues of $A$ and of $D$.
In all cases $\text{Tr}(M) = \text{Tr}(A) + \text{Tr}(D)$, so the sum of the eigenvalues of $M$ is the sum for $A$ plus the sum for $D$.
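Both observations are easy to sanity-check numerically; the sketch below (numpy, with hypothetical random blocks) uses nothing about $M$ beyond the fact that $A$ and $D$ are its diagonal blocks and that the off-diagonal blocks vanish together with $B$.

```python
import numpy as np

# Minimal sanity check: only the facts that A and D sit on the diagonal of M
# and that the off-diagonal blocks vanish when B = 0 are used here.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 2))
B = np.zeros((2, 2))

M = np.block([[A, B], [B, D]])   # with B = 0 this is block diagonal

# The trace is additive over the diagonal blocks (for any B) ...
assert np.isclose(np.trace(M), np.trace(A) + np.trace(D))

# ... and with B = 0 the spectrum of M is the union of the spectra of A and D.
eig_M = np.sort_complex(np.linalg.eigvals(M))
eig_AD = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                         np.linalg.eigvals(D)]))
assert np.allclose(eig_M, eig_AD)
```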
In the case $B = \pmatrix{1 & 0\cr 0 & 1\cr}$, the coefficient of $\lambda^2$ in the characteristic polynomial of $M$ (which is $\sum_{i < j} \lambda_i \lambda_j$ where $\lambda_i$ are the eigenvalues of $M$) is $a_1 a_2 + a_1 d_1 + a_1 d_2 + a_2 d_1 + a_2 d_2 + d_1 d_2 + 2$, where $a_i$ and $d_i$ are the eigenvalues of $A$ and $D$ respectively.
In the case $B = \pmatrix{1 & 0\cr 0 & 0\cr}$, that coefficient would be $a_1 a_2 + a_1 d_1 + a_1 d_2 + a_2 d_1 + a_2 d_2 + d_1 d_2 + 1$.
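Both coefficients can be reproduced symbolically. The sketch below assumes the block structure $M = \pmatrix{A & B\cr -B^{T} & D\cr}$ (the form that yields the $+2$ and $+1$ constants above) and takes $A$ and $D$ diagonal, which loses nothing when only their eigenvalues matter.

```python
import sympy as sp

# Symbolic check of the two lambda^2 coefficients quoted above.  The block
# structure M = [[A, B], [-B^T, D]] is an assumption (it is the form that
# reproduces the "+2" and "+1" constants); A and D are taken diagonal.
lam, a1, a2, d1, d2 = sp.symbols('lambda a1 a2 d1 d2')
A, D = sp.diag(a1, a2), sp.diag(d1, d2)

for B in (sp.eye(2), sp.diag(1, 0)):
    M = sp.Matrix.vstack(sp.Matrix.hstack(A, B),
                         sp.Matrix.hstack(-B.T, D))
    p = M.charpoly(lam).as_expr()
    print(sp.expand(p.coeff(lam, 2)))
# prints the two expressions above, ending in "+ 2" and "+ 1" respectively
```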
In the case $B = \pmatrix{1 & 0\cr 0 & 0\cr}$, I think these are the only equations linking the eigenvalues of $M$ with those of $A$ and $D$: you can choose $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ arbitrarily, subject to the two constraints on $\sum_i \lambda_i$ and $\sum_{i<j} \lambda_i \lambda_j$, and then find suitable $A$ and $D$ with eigenvalues $a_i$ and $d_j$ that realize them.
In the case $B = \pmatrix{1 & 0\cr 0 & 1\cr}$, it looks to me like there is an
additional constraint: you can only choose one eigenvalue (say $\lambda_1$) arbitrarily, and then the
other three will satisfy
$$\eqalign{
\lambda^3 &+ \bigl(\lambda_1 - a_1 - a_2 - d_1 - d_2\bigr)\,\lambda^2\cr
&+ \bigl(a_1 a_2 + a_1 d_1 + a_1 d_2 + a_2 d_1 + a_2 d_2 + d_1 d_2 - (a_1+a_2+d_1+d_2)\,\lambda_1 + \lambda_1^2 + 2\bigr)\,\lambda\cr
&+ \lambda_1^3 - (a_1+a_2+d_1+d_2)\,\lambda_1^2 + \bigl(a_1 a_2 + a_1 d_1 + a_1 d_2 + a_2 d_1 + a_2 d_2 + d_1 d_2 + 2\bigr)\,\lambda_1\cr
&\qquad - \bigl(a_1 a_2 d_1 + a_1 a_2 d_2 + a_1 d_1 d_2 + a_2 d_1 d_2\bigr) - \bigl(a_1 + a_2 + d_1 + d_2\bigr) = 0\cr
}$$
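A numerical spot-check of this cubic, under the same assumed block structure with $B$ the identity: pick eigenvalues for $A$ and $D$, let $\lambda_1$ be any eigenvalue of $M$, and the remaining three eigenvalues of $M$ should be roots of the cubic.

```python
import numpy as np

# Spot-check of the cubic above, under the assumed structure
# M = [[A, B], [-B^T, D]] with B = I.  Any eigenvalue of M can play the
# role of lambda_1; the other three should then be roots of the cubic.
a1, a2, d1, d2 = 0.3, -1.1, 2.0, 0.7      # arbitrary eigenvalues for A and D
A, D, B = np.diag([a1, a2]), np.diag([d1, d2]), np.eye(2)
M = np.block([[A, B], [-B.T, D]])

eig = np.linalg.eigvals(M)
l1, rest = eig[0], eig[1:]

e1 = a1 + a2 + d1 + d2                                   # sum of a_i and d_i
e2 = a1*a2 + a1*d1 + a1*d2 + a2*d1 + a2*d2 + d1*d2       # pairwise products
e3 = a1*a2*d1 + a1*a2*d2 + a1*d1*d2 + a2*d1*d2           # triple products

def cubic(x):
    return (x**3 + (l1 - e1)*x**2
            + (e2 - e1*l1 + l1**2 + 2)*x
            + l1**3 - e1*l1**2 + (e2 + 2)*l1 - e3 - e1)

assert np.allclose([cubic(x) for x in rest], 0)
```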
Your proof seems fine to me.
As for a counterexample, consider
$$A = \begin{bmatrix}
2 & 0 & 1 & 0 \\
0 & 1 & 2 & 1 \\
2 & 1 & 1 & 1 \\
1 & 1 & 1 & 2
\end{bmatrix}.$$
On the one hand, $\det A = -4$, and on the other hand, $\det (A_{11} A_{22} - A_{21}A_{12}) = 0$, where $A_{ij}$ are the $2\times 2$ blocks of $A$.
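Both determinants are quick to verify with numpy:

```python
import numpy as np

# Check both determinants for the counterexample above.
A = np.array([[2, 0, 1, 0],
              [0, 1, 2, 1],
              [2, 1, 1, 1],
              [1, 1, 1, 2]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

print(np.linalg.det(A))                       # -4.0 (up to rounding)
print(np.linalg.det(A11 @ A22 - A21 @ A12))   # 0.0
```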
Best Answer
The claim is false; take $m=1$, $n=2$ and $B=\left(\begin{smallmatrix}0 & 1\\ 1 & 0\end{smallmatrix}\right)$ to get the matrix $$\begin{pmatrix} 0&1&1\\ 1&0&0\\ 1&0&0 \end{pmatrix},$$ which is clearly not invertible. In the same way, for any value of $m$ and $n$ with $m<n$, the matrix $B$ with $1$'s on the antidiagonal and $0$'s elsewhere yields a singular matrix.
On the other hand, if $m=n$ then it is easily verified that $$ \begin{pmatrix} B&I_n\\I_n&O \end{pmatrix} \begin{pmatrix} O&I_n\\I_n&-B \end{pmatrix} =I_{2n},$$ and so the matrix is indeed invertible. Here $I_k$ denotes the $k\times k$ identity matrix.
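This identity is also easy to confirm numerically for a randomly chosen $B$ (a throwaway sketch; any $n$ and seed will do):

```python
import numpy as np

# Sanity check of the displayed inverse for a random n x n block B.
n = 4
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))
I, O = np.eye(n), np.zeros((n, n))

M = np.block([[B, I], [I, O]])
Minv = np.block([[O, I], [I, -B]])
assert np.allclose(M @ Minv, np.eye(2 * n))
```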