I claim that any algebraic integer $\lambda$ is an eigenvalue of a nonnegative matrix with integer entries. This answer relies on Doug Lind's answer here. Let $\lambda_1$, $\lambda_2$, ..., $\lambda_r$ be the Galois conjugates of $\lambda$. Let $\ell = \max_i |\lambda_i|$. Choose an integer $L$ large enough that $L^n \cdot (L-2)/(L-1) > \ell^n \cdot \ell/(\ell-1)$ for all positive integers $n$. I claim there is a nonnegative integer matrix with eigenvalues $(\lambda_1, \lambda_2, \ldots, \lambda_r, L, L, \ldots, L)$, where there are $r$ copies of $L$.
Let $\mu_1$, $\mu_2$, ..., $\mu_{2r}$ be $(\lambda_1, \lambda_2, \ldots, \lambda_r, L,L,\ldots, L)$. According to Lind's answer, all we have to check is that, for all $n>0$,
$$\frac{1}{n} \sum_k \sum_{d|n} \mu(n/d) \mu_k^d$$
is a nonnegative integer. (Here $\mu(n/d)$ denotes the Möbius function; the notational clash with the eigenvalue names $\mu_k$ is unfortunate but harmless.)
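To make this concrete, here is a quick computational check of the criterion for an illustrative example of my own choosing (not from the answer itself): $\lambda$ the golden ratio, whose Galois conjugates are $\varphi$ and $1-\varphi$, with $L = 10$. The power sums of $(\varphi, 1-\varphi, 10, 10)$ are exact integers, computable from the characteristic polynomial via Newton's identities, so the whole check runs in integer arithmetic.

```python
def mobius(n):
    """Mobius function mu(n) via trial division."""
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            result = -result
        d += 1
    if n > 1:                     # one leftover prime factor
        result = -result
    return result

# (x^2 - x - 1)(x - 10)^2 = x^4 - 21x^3 + 119x^2 - 80x - 100
a = [-21, 119, -80, -100]         # coefficients a_1..a_4 of the monic quartic

# Power sums p[k] = sum of k-th powers of the roots, via Newton's identities.
p = {}
for k in range(1, 31):
    s = -k * a[k - 1] if k <= 4 else 0
    for i in range(1, min(k - 1, 4) + 1):
        s -= a[i - 1] * p[k - i]
    p[k] = s

# (1/n) * sum_{d|n} mu(n/d) * p_d must be a nonnegative integer for all n.
net_traces = []
for n in range(1, 31):
    total = sum(mobius(n // d) * p[d] for d in range(1, n + 1) if n % d == 0)
    q, r = divmod(total, n)
    assert r == 0 and q >= 0      # integer and nonnegative, as claimed
    net_traces.append(q)
```

For this example the first few values come out to $21, 91, 661, 4951, \ldots$, all nonnegative integers as the argument predicts.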
It's an integer: Let $g(x) = \prod_{i=1}^{2r} (x-\mu_i)$. Since $\lambda$ is an algebraic integer, $g$ is a monic polynomial with integer coefficients. So there is a $(2r) \times (2r)$ integer matrix $B$ with characteristic polynomial $g$ -- the companion matrix.
As explained in the second paper Lind links, if $g$ is the characteristic polynomial of an integer matrix, then there is a graph-theoretic interpretation of the above quantity which is manifestly integral. (I gave a slightly incorrect description of this interpretation before; I'll stick to just citing for now.)
It's nonnegative: We have
$$\sum_{d|n} \mu(n/d) L^d \geq L^n - \sum_{-\infty < d<n} L^d = L^n - \frac{L^{n-1}}{1-1/L}=L^n \frac{L-2}{L-1}.$$
and
$$\left| \sum_{d|n} \mu(n/d) \lambda_i^d \right| \leq \sum_{- \infty < d \leq n} |\lambda_i|^d = \frac{|\lambda_i|^n}{1-1/|\lambda_i|} = |\lambda_i|^n \frac{|\lambda_i|}{|\lambda_i|-1}.$$
So, by our choice of $L$, for every $i$ the quantity $\sum_{d|n} \mu(n/d) L^d$ is a positive integer which is greater than the absolute value of $\sum_{d|n} \mu(n/d) \lambda_i^d$.
So the real part of
$$\sum_{i=1}^r \left( \sum_{d|n} \mu(n/d) L^d + \sum_{d|n} \mu(n/d) \lambda_i^d \right)$$
is nonnegative. By Galois symmetry, the sum is real, so it is a nonnegative real, as desired.
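As a sanity check, here is a quick numerical verification of the defining inequality for $L$, again using the illustrative values $\ell = \varphi$ (the golden ratio) and $L = 10$; these specific values are my own choices, not from the answer above.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2   # ell: largest absolute value among the conjugates
L = 10                    # candidate value of L

# L^n (L-2)/(L-1) must dominate ell^n * ell/(ell-1) for all n >= 1;
# since L > ell, the left side grows strictly faster, so checking a
# range of small n is the only place the inequality could be tight.
for n in range(1, 60):
    assert L**n * (L - 2) / (L - 1) > phi**n * phi / (phi - 1)
```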
Disclaimer: I haven't read the paper Lind cites.
In response to Turbo's question: Any eigenvalue of a nonnegative integer matrix is an eigenvalue of a $(0,1)$ matrix. Choose $N$ larger than every matrix entry and replace each matrix entry $a_{ij}$ with an $N \times N$ block of the form
$$\begin{pmatrix}
1 & \cdots & 1 & 0 & \cdots & 0 \\
1 & \cdots & 1 & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots \\
1 & \cdots & 1 & 0 & \cdots & 0
\end{pmatrix}$$
where there are $a_{ij}$ columns of $1$'s.
If $(v_1, v_2, \dots, v_n)^T$ is an eigenvector of the old matrix, then $(v_1, v_1, \dots, v_1, v_2, v_2, \dots, v_2, \dots, v_n, v_n, \dots, v_n)^T$ is an eigenvector of the new matrix with the same eigenvalue.
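A minimal sketch of this blow-up construction, using a hypothetical example matrix of my own choosing ($M$ below has eigenvalue $3$ with eigenvector $(1,1)^T$):

```python
def blow_up(M, N):
    """Replace each entry a_ij of M with an N x N block whose first
    a_ij columns are all 1's and whose remaining columns are 0."""
    n = len(M)
    big = [[0] * (n * N) for _ in range(n * N)]
    for i in range(n):
        for j in range(n):
            for r in range(N):
                for c in range(M[i][j]):
                    big[i * N + r][j * N + c] = 1
    return big

def matvec(A, v):
    return [sum(x * y for x, y in zip(row, v)) for row in A]

M = [[1, 2], [2, 1]]   # example: eigenvalue 3, eigenvector (1, 1)
N = 3                  # any N larger than every entry of M
B = blow_up(M, N)      # 6 x 6 matrix of 0's and 1's
w = [x for x in [1, 1] for _ in range(N)]   # each v_i repeated N times
assert matvec(B, w) == [3 * x for x in w]   # same eigenvalue survives
```

Each row of the $(i,j)$ block sums $a_{ij}$ copies of $v_j$, which is exactly why the repeated vector is again an eigenvector.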
If the matrices have entries from a (unital) ring $R$, then the set of such matrices is isomorphic to $R[[x]]$, the ring of formal power series over $R$. To see this, observe that the map sending the infinite matrix with $a_0 = 0$, $a_1 = 1$, and $a_k = 0$ for $k \ge 2$ to $x$ extends to a ring isomorphism.
This also answers the second question: if $R$ is an integral domain, then the set of such matrices embeds canonically in the field of fractions of $R[[x]]$, which is the smallest field containing $R[[x]]$. In particular, if $R$ is a field, then this field of fractions is the field of formal Laurent series $\{ \sum_{k=-m}^\infty a_k x^k : a_k \in R, m \in \mathbb{N}_0 \}$.
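One can see the isomorphism concretely on finite truncations: $n \times n$ upper-triangular Toeplitz matrices multiply exactly like power series truncated at $x^n$. A minimal sketch (the helper names and example coefficients are mine, not from the question):

```python
def toeplitz(a, n):
    """n x n upper-triangular Toeplitz matrix with a[k] on the
    k-th superdiagonal (0 beyond the listed coefficients)."""
    return [[a[j - i] if 0 <= j - i < len(a) else 0 for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def series_mul(a, b, n):
    """First n coefficients of the product of the power series a and b."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b)) for k in range(n)]

n = 5
a = [1, 2, 0, 3, 1]   # 1 + 2x + 3x^3 + x^4
b = [4, 0, 1, 0, 2]   # 4 + x^2 + 2x^4
# matrix product of the truncations = truncation of the series product
assert matmul(toeplitz(a, n), toeplitz(b, n)) == toeplitz(series_mul(a, b, n), n)
```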
I'm uncertain how $\mathrm{reg}$ is (well-)defined, but certainly one can take $R$ to be the polynomial ring $\mathbb{C}[z]$, and then something like $\sum_{k=0}^\infty B_k(z) x^k$ is a well-defined element of $R[[x]] = \mathbb{C}[z][[x]]$. If, as in the correction, one wants Bernoulli numbers rather than Bernoulli polynomials, just specialize to $\mathbb{C}[[x]]$ by evaluating at $z=0$.
To answer a previous question: call such a tridiagonal matrix $T$. The matrices $T$ and $-T$ (of dimension $n$) are unitarily similar, so they have the same spectrum; that is, there is an 'alternating' $\pm 1$ diagonal matrix $U$ such that $U^*TU=-T$.
As you noticed, when you square $T$ and rearrange it by a permutation $P$, the diagonal blocks have the same nonzero eigenvalues. This is true because, for $P_i$ the permutation matrix exchanging $2i+1\leftrightarrow i+1$, setting $P=\prod_{i=1}^{\lfloor \frac{n-1}{2}\rfloor}P_i=P_{\lfloor \frac{n-1}{2}\rfloor}\cdots P_1$, we get
$$PTP^*=\begin{pmatrix}0_{\lceil\frac{n}{2}\rceil}&A\\B&0_{\lfloor \frac{n}{2}\rfloor}\end{pmatrix},$$
where $0_m$ denotes the $m \times m$ zero block. Squaring this identity gives your matrix
$$\begin{pmatrix}C_1&0\\0&C_2\end{pmatrix}, \qquad C_1 = AB,\quad C_2 = BA.$$
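Both facts are easy to verify by direct computation. A minimal sketch in pure Python (the off-diagonal entries below are arbitrary choices of mine; the permutation groups indices of the same parity together, which is equivalent to the product of transpositions above):

```python
def permute(T, perm):
    """Conjugate T by the permutation that lists rows/columns perm[0], perm[1], ..."""
    n = len(perm)
    return [[T[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

n = 7
T = [[0] * n for _ in range(n)]   # zero diagonal, arbitrary off-diagonals
for i in range(n - 1):
    T[i][i + 1] = i + 1
    T[i + 1][i] = 2 * i + 3

# U* T U = -T for the alternating sign matrix U = diag(1, -1, 1, -1, ...)
u = [(-1) ** i for i in range(n)]
assert all(u[i] * T[i][j] * u[j] == -T[i][j]
           for i in range(n) for j in range(n))

# Group even 0-based indices first, then odd: the result is block anti-diagonal,
# with zero blocks of sizes ceil(n/2) and floor(n/2) on the diagonal.
perm = list(range(0, n, 2)) + list(range(1, n, 2))
S = permute(T, perm)
m = (n + 1) // 2                   # ceil(n/2)
assert all(S[i][j] == 0 for i in range(m) for j in range(m))
assert all(S[i][j] == 0 for i in range(m, n) for j in range(m, n))
```

The first assertion works because every nonzero entry of $T$ sits at $|i-j|=1$, where $u_i u_j = -1$; the second because a tridiagonal matrix with zero diagonal only connects indices of opposite parity.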