I'll consider the special case of symmetric tridiagonal matrices with zero diagonal for this answer.
I prefer calling the even-order zero-diagonal tridiagonals Golub-Kahan matrices. These matrices turn up in deriving the modification of the QR algorithm for computing the singular value decomposition (SVD). More precisely, given an $n\times n$ upper bidiagonal matrix such as (here $n=4$)
$$\mathbf B=\begin{pmatrix}d_1&e_1&&\\&d_2&e_2&\\&&d_3&e_3\\&&&d_4\end{pmatrix}$$
the $2n\times 2n$ block matrix $\mathbf K=\left(\begin{array}{c|c}\mathbf 0&\mathbf B^\top \\\hline \mathbf B&\mathbf 0\end{array}\right)$ is similar to the Golub-Kahan tridiagonal
$$\mathbf P\mathbf K\mathbf P^\top=\begin{pmatrix}& d_1 & & & & & & \\d_1 & & e_1 & & & & & \\& e_1 & & d_2 & & & & \\& & d_2 & & e_2 & & & \\& & & e_2 & & d_3 & & \\& & & & d_3 & & e_3 & \\& & & & & e_3 & & d_4 \\& & & & & & d_4 & \end{pmatrix}$$
where $\mathbf P$ is a permutation matrix. This similarity transformation is referred to as the "perfect shuffle".
The importance of this is that the eigenvalues of the Golub-Kahan matrices always come in $\pm$ pairs; more precisely, if $\mathbf B$ has the singular values $\sigma_1,\sigma_2,\dots,\sigma_n$, then the eigenvalues of the Golub-Kahan tridiagonal are $\pm\sigma_1,\pm\sigma_2,\dots,\pm\sigma_n$.
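The perfect shuffle and the resulting $\pm\sigma$ eigenvalue pairing can be checked numerically. Below is a small sketch with hypothetical values for the $d_i$ and $e_i$; the permutation interleaves the first $n$ indices of $\mathbf K$ with the last $n$:

```python
import numpy as np

# Hypothetical bidiagonal B (n = 4); the d_i, e_i values are made up.
n = 4
d = np.array([1.0, 2.0, 3.0, 4.0])   # diagonal entries d_1..d_4
e = np.array([0.5, 0.6, 0.7])        # superdiagonal entries e_1..e_3
B = np.diag(d) + np.diag(e, 1)

# Block matrix K = [[0, B^T], [B, 0]].
K = np.block([[np.zeros((n, n)), B.T], [B, np.zeros((n, n))]])

# Perfect shuffle: interleave indices 0..n-1 with indices n..2n-1.
perm = np.empty(2 * n, dtype=int)
perm[0::2] = np.arange(n)
perm[1::2] = np.arange(n, 2 * n)
P = np.eye(2 * n)[perm]
T = P @ K @ P.T                      # Golub-Kahan tridiagonal

# Eigenvalues of T are +/- the singular values of B.
sigma = np.linalg.svd(B, compute_uv=False)
eigs = np.sort(np.linalg.eigvalsh(T))
expected = np.sort(np.concatenate([sigma, -sigma]))
assert np.allclose(eigs, expected)
```

Printing `T` reproduces the tridiagonal displayed above, with off-diagonal sequence $d_1, e_1, d_2, e_2, d_3, e_3, d_4$.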
Odd-order zero-diagonal tridiagonals can be treated similarly: in addition to the $\pm$ pairs of eigenvalues, they have a zero eigenvalue. After deflating out the zero eigenvalue, the treatment above for Golub-Kahan tridiagonals applies. To deflate, compute the QR decomposition $\mathbf T=\mathbf Q\mathbf R$, form the product $\mathbf R\mathbf Q$, and delete its last row and column; the result is a Golub-Kahan tridiagonal.
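As a sketch of this deflation step (with made-up off-diagonal entries): since an unreduced singular tridiagonal has $r_{nn}=0$ in its QR factorization, the last row and column of $\mathbf R\mathbf Q$ vanish, and deleting them removes the zero eigenvalue while keeping the $\pm$ pairs.

```python
import numpy as np

# Hypothetical odd-order (n = 5) symmetric tridiagonal T with zero diagonal.
off = np.array([1.0, 2.0, 3.0, 4.0])          # off-diagonal entries (all nonzero)
T = np.diag(off, 1) + np.diag(off, -1)

# T is unreduced and singular, so in T = QR the last diagonal entry of R
# vanishes; RQ = Q^T T Q is similar to T with (numerically) zero last
# row and column.
Q, R = np.linalg.qr(T)
RQ = R @ Q
assert abs(R[-1, -1]) < 1e-8

# Deleting the last row and column deflates the zero eigenvalue.
T_deflated = RQ[:-1, :-1]

# The deflated matrix carries the +/- paired nonzero eigenvalues of T.
eigs_T = np.sort(np.linalg.eigvalsh(T))
nonzero = eigs_T[np.abs(eigs_T) > 1e-8]
eigs_d = np.sort(np.linalg.eigvals(T_deflated).real)
assert np.allclose(np.sort(nonzero), eigs_d)
```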
See Ward and Gray's paper (along with the associated FORTRAN code) and this beautiful survey by David Watkins.
Let $S$ be your symmetric matrix.
Add a sufficiently large positive multiple $cI$ of the identity, e.g. with $c$ greater than the largest absolute row sum of $S$.
This ensures that $S+cI$ is strictly diagonally dominant with positive diagonal entries, and thus symmetric positive definite.
See http://mathworld.wolfram.com/DiagonallyDominantMatrix.html
Now, you clearly have $S= (S+cI)-cI$, and $cI$ is certainly positive definite as well, so $S$ is a difference of two positive definite matrices.
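The splitting can be checked numerically; this sketch uses a random symmetric $S$ and the Gershgorin-style choice of $c$ described above:

```python
import numpy as np

# Random symmetric S (made-up test data).
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
S = (S + S.T) / 2

# c larger than the largest absolute row sum of S makes S + cI
# strictly diagonally dominant with positive diagonal.
c = np.abs(S).sum(axis=1).max() + 1.0
A = S + c * np.eye(5)

# Strict diagonal dominance with positive diagonal => positive definite.
assert np.all(np.diag(A) > np.abs(A).sum(axis=1) - np.abs(np.diag(A)))
assert np.all(np.linalg.eigvalsh(A) > 0)

# S is the difference of the two positive definite matrices A and cI.
assert np.allclose(S, A - c * np.eye(5))
```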
Since $B$ is symmetric and positive definite, it is unitarily diagonalizable (indeed, by a real orthogonal matrix): there exists $U$ with $U^{*}U = I$ such that $U^{*}BU = D = \operatorname{diag}(d_1,\dots,d_n)$ with $d_i > 0$ for all $i = 1,\dots,n$.
Write $U^{*}AU = W = (w_{ij})_{n \times n}$. Then
$$AB^m = B^mA \implies U^{*}AU(U^{*}BU)^m = (U^{*}BU)^mU^{*}AU \implies WD^m = D^mW,$$
so $d_i^m w_{ij} = w_{ij} d_j^m$ for all $i,j$.
Whenever $w_{ij} \neq 0$, this gives $d_i^m = d_j^m$, and since $d_i, d_j > 0$ this forces $d_i = d_j$; hence $d_i w_{ij} = w_{ij} d_j$ for all $i,j$.
That is, $WD = DW$, which in turn implies $AB = BA$.
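A small numerical illustration (with $m=2$ and made-up matrices) of the conclusion, together with a counterexample showing why the positivity of $B$'s eigenvalues is essential:

```python
import numpy as np

rng = np.random.default_rng(1)

# B symmetric positive definite with distinct eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
B = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T

# An A commuting with B^2: diagonal in B's eigenbasis. The theorem
# predicts it also commutes with B, since d_i^2 = d_j^2 with
# d_i, d_j > 0 forces d_i = d_j.
A = Q @ np.diag(rng.standard_normal(4)) @ Q.T
assert np.allclose(A @ B @ B, B @ B @ A)
assert np.allclose(A @ B, B @ A)

# Positivity is essential: for indefinite B the implication fails.
B_ind = np.diag([1.0, -1.0])            # B_ind^2 = I commutes with everything
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(A2 @ B_ind @ B_ind, B_ind @ B_ind @ A2)
assert not np.allclose(A2 @ B_ind, B_ind @ A2)
```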