I'll consider the special case of symmetric tridiagonal matrices with zero diagonal for this answer.
I prefer to call the even-order ones Golub-Kahan matrices. These matrices turn up when deriving the variant of the QR algorithm used for computing the singular value decomposition (SVD). More precisely, given an $n\times n$ upper bidiagonal matrix such as (here $n=4$)
$$\mathbf B=\begin{pmatrix}d_1&e_1&&\\&d_2&e_2&\\&&d_3&e_3\\&&&d_4\end{pmatrix}$$
the $2n\times 2n$ block matrix $\mathbf K=\left(\begin{array}{c|c}\mathbf 0&\mathbf B^\top \\\hline \mathbf B&\mathbf 0\end{array}\right)$ is similar to the Golub-Kahan tridiagonal
$$\mathbf P\mathbf K\mathbf P^\top=\begin{pmatrix}& d_1 & & & & & & \\d_1 & & e_1 & & & & & \\& e_1 & & d_2 & & & & \\& & d_2 & & e_2 & & & \\& & & e_2 & & d_3 & & \\& & & & d_3 & & e_3 & \\& & & & & e_3 & & d_4 \\& & & & & & d_4 & \end{pmatrix}$$
where $\mathbf P$ is a permutation matrix. This similarity transformation is referred to as the "perfect shuffle".
The point of this construction is that the eigenvalues of a Golub-Kahan matrix always come in $\pm$ pairs; more precisely, if $\mathbf B$ has singular values $\sigma_1,\sigma_2,\dots,\sigma_n$, then the eigenvalues of the Golub-Kahan tridiagonal are $\pm\sigma_1,\pm\sigma_2,\dots,\pm\sigma_n$.
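If it helps to see this concretely, here is a minimal numerical sketch (assuming NumPy; the entries of $\mathbf B$ are purely illustrative) that builds $\mathbf K$, applies the perfect shuffle, and checks the $\pm\sigma_i$ pairing:

```python
import numpy as np

n = 4
d = np.array([1.0, 2.0, 3.0, 4.0])   # diagonal of B (illustrative values)
e = np.array([0.5, 0.6, 0.7])        # superdiagonal of B (illustrative values)
B = np.diag(d) + np.diag(e, k=1)     # upper bidiagonal

# K = [[0, B^T], [B, 0]]
K = np.block([[np.zeros((n, n)), B.T],
              [B,                np.zeros((n, n))]])

# Perfect shuffle: reorder the indices as 0, n, 1, n+1, ..., n-1, 2n-1
perm = np.arange(2 * n).reshape(2, n).T.ravel()
P = np.eye(2 * n)[perm]              # permutation matrix
T = P @ K @ P.T                      # the Golub-Kahan tridiagonal shown above

sigma = np.linalg.svd(B, compute_uv=False)
eigs = np.sort(np.linalg.eigvalsh(T))
expected = np.sort(np.concatenate([sigma, -sigma]))
print(np.allclose(eigs, expected))   # True: eigenvalues come in +/- sigma pairs
```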
Odd-order zero-diagonal tridiagonals can be treated similarly: besides the $\pm$ pairs of eigenvalues, they have a zero eigenvalue. The treatment given above for Golub-Kahan tridiagonals becomes applicable after deflating out this zero eigenvalue; one way to do so is to compute the QR decomposition $\mathbf T=\mathbf Q\mathbf R$ of the odd-order tridiagonal $\mathbf T$, form the product $\mathbf R\mathbf Q$, and delete its last row and column, which leaves an even-order Golub-Kahan tridiagonal.
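A small sketch of that deflation step (again assuming NumPy, with illustrative off-diagonal entries): the eigenvalues of the deflated matrix should be the nonzero $\pm$ pairs of the original odd-order tridiagonal.

```python
import numpy as np

off = np.array([1.0, 2.0, 3.0, 4.0])           # off-diagonal entries, n = 5 (odd)
T = np.diag(off, k=1) + np.diag(off, k=-1)     # zero-diagonal symmetric tridiagonal

Q, R = np.linalg.qr(T)
RQ = R @ Q                                     # similar to T (RQ = Q^T T Q)
GK = RQ[:-1, :-1]                              # drop last row and column

eigs_T = np.sort(np.linalg.eigvalsh(T))        # 0 together with the +/- pairs
eigs_GK = np.sort(np.linalg.eigvalsh(GK))      # only the +/- pairs survive
nonzero = eigs_T[np.abs(eigs_T) > 1e-10]       # discard the (numerically tiny) zero
print(np.allclose(eigs_GK, nonzero))           # True
```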
See Ward and Gray's paper (along with the associated FORTRAN code) and this beautiful survey by David Watkins.
This is more of a long comment than an answer, but maybe it can help, at least in certain cases.
Assume $B$ is invertible, and so is $A+\delta B$.
Notice that (writing $1$ for the identity matrix)
$$A+\delta B = \left(\frac{1}{\delta}AB^{-1}+1\right)\delta B.$$
Therefore
$$(A+\delta B)^{-1} = \frac{1}{\delta}B^{-1}\left(\frac{1}{\delta}AB^{-1}+1\right)^{-1}$$
Now assume that the spectral radius satisfies $\rho\left(\frac{1}{\delta}AB^{-1}\right)<1$; this holds, for instance, whenever $\left\|\frac{1}{\delta}AB^{-1}\right\|<1$ for some submultiplicative matrix norm. Then the Neumann series converges and we have
$$\left(\frac{1}{\delta}AB^{-1}+1\right)^{-1} = \sum_{n\ge0}(-\delta)^{-n}(AB^{-1})^n.$$
In conclusion, we have
$$(A+\delta B)^{-1} = -\sum_{n\ge0}(-\delta)^{-(n+1)}B^{-1}(AB^{-1})^n.$$
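A quick numerical check of this last formula (assuming NumPy; the matrices and $\delta$ below are illustrative, with $\delta$ chosen large enough that the spectral radius condition holds):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))               # invertible with probability 1
Binv = np.linalg.inv(B)
delta = 10.0 * np.linalg.norm(A @ Binv, 2)    # makes rho((1/delta) A B^{-1}) <= 0.1

exact = np.linalg.inv(A + delta * B)

# Partial sums of -sum_{n>=0} (-delta)^{-(n+1)} B^{-1} (A B^{-1})^n
approx = np.zeros_like(A)
term = Binv.copy()                            # B^{-1} (A B^{-1})^0
for k in range(50):
    approx += -((-delta) ** -(k + 1)) * term
    term = term @ (A @ Binv)                  # advance to B^{-1} (A B^{-1})^{k+1}

print(np.allclose(approx, exact))             # True once the series has converged
```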
Best Answer
In general, given matrices $A,B$ appropriately sized so that $AB$ is defined, we also know that $B^\dagger A^\dagger$ is defined, and in particular that $B^\dagger A^\dagger=(AB)^\dagger.$ (By $\dagger$ I denote transpose.)
Now, for any square matrix $A$ and any integer $n$ for which $A^n$ is defined (negative $n$ make sense if and only if $A$ is invertible, while nonnegative $n$ always make sense), it follows that $\left(A^\dagger\right)^n$ is defined, and that $\left(A^\dagger\right)^n=(A^n)^\dagger.$ (Why?)
From there, we can readily see that (defined) even powers of antisymmetric matrices are symmetric, as are all (defined) integer powers of symmetric matrices.
Since a sum of symmetric matrices of the same size is again symmetric (why?), it follows that $Q^{2012}+D^{2013}$ is symmetric. (Why?)
For the second part, keep in mind that for any matrix $A$ and any constant $c,$ we have $\left(cA\right)^\dagger=cA^\dagger.$ This, together with the above observations, will allow us to conclude (after some manipulation) that $(P+Q)(P-Q)$ is symmetric.
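For a quick numerical sanity check of both conclusions (assuming NumPy, and assuming, as the hints above suggest, that $D$ and $P$ are symmetric while $Q$ is antisymmetric; the random matrices are rescaled only so that the huge powers stay finite):

```python
import numpy as np

rng = np.random.default_rng(1)
M1, M2, M3 = (rng.standard_normal((5, 5)) for _ in range(3))

D = M1 + M1.T                       # symmetric
P = M2 + M2.T                       # symmetric
Q = M3 - M3.T                       # antisymmetric
D /= np.linalg.norm(D, 2)           # rescale so D**2013 and Q**2012 stay finite
Q /= np.linalg.norm(Q, 2)

def is_symmetric(X):
    return np.allclose(X, X.T)

first = np.linalg.matrix_power(Q, 2012) + np.linalg.matrix_power(D, 2013)
second = (P + Q) @ (P - Q)

print(is_symmetric(first))   # True: even power of antisymmetric + integer power of symmetric
print(is_symmetric(second))  # True: ((P+Q)(P-Q))^T = (P-Q)^T (P+Q)^T = (P+Q)(P-Q)
```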