I'll consider the special case of symmetric tridiagonal matrices with zero diagonal for this answer.
I prefer calling the even-order tridiagonal ones Golub-Kahan matrices. These matrices turn up in deriving the modification of the QR algorithm for computing the singular value decomposition (SVD). More precisely, given an $n\times n$ bidiagonal matrix such as (shown here for $n=4$)
$$\mathbf B=\begin{pmatrix}d_1&e_1&&\\&d_2&e_2&\\&&d_3&e_3\\&&&d_4\end{pmatrix}$$
the $2n\times 2n$ block matrix $\mathbf K=\left(\begin{array}{c|c}\mathbf 0&\mathbf B^\top \\\hline \mathbf B&\mathbf 0\end{array}\right)$ is similar to the Golub-Kahan tridiagonal
$$\mathbf P\mathbf K\mathbf P^\top=\begin{pmatrix}& d_1 & & & & & & \\d_1 & & e_1 & & & & & \\& e_1 & & d_2 & & & & \\& & d_2 & & e_2 & & & \\& & & e_2 & & d_3 & & \\& & & & d_3 & & e_3 & \\& & & & & e_3 & & d_4 \\& & & & & & d_4 & \end{pmatrix}$$
where $\mathbf P$ is a permutation matrix. This similarity transformation is referred to as the "perfect shuffle".
The importance of this is that the eigenvalues of the Golub-Kahan matrices always come in $\pm$ pairs; more precisely, if $\mathbf B$ has the singular values $\sigma_1,\sigma_2,\dots,\sigma_n$, then the eigenvalues of the Golub-Kahan tridiagonal are $\pm\sigma_1,\pm\sigma_2,\dots,\pm\sigma_n$.
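A small numerical sketch of the perfect shuffle (the entries of $\mathbf B$ are made up for illustration): we build $\mathbf K$, apply the interleaving permutation, and check that the result is the Golub-Kahan tridiagonal whose eigenvalues are the $\pm$ singular values of $\mathbf B$.

```python
import numpy as np

n = 4
d = np.array([1.0, 2.0, 3.0, 4.0])   # diagonal of B (made-up values)
e = np.array([0.5, 0.6, 0.7])        # superdiagonal of B (made-up values)

B = np.diag(d) + np.diag(e, 1)
K = np.block([[np.zeros((n, n)), B.T],
              [B, np.zeros((n, n))]])

# Perfect shuffle: interleave the indices of the two blocks,
# 0, n, 1, n+1, ..., n-1, 2n-1.
perm = np.empty(2 * n, dtype=int)
perm[0::2] = np.arange(n)
perm[1::2] = np.arange(n, 2 * n)
P = np.eye(2 * n)[perm]              # permutation matrix

T = P @ K @ P.T                      # Golub-Kahan tridiagonal

# Off-diagonal is d1, e1, d2, e2, d3, e3, d4 and the
# eigenvalues are +/- the singular values of B.
s = np.linalg.svd(B, compute_uv=False)
assert np.allclose(np.diag(T, 1), [1.0, 0.5, 2.0, 0.6, 3.0, 0.7, 4.0])
assert np.allclose(np.sort(np.linalg.eigvalsh(T)),
                   np.sort(np.concatenate([-s, s])))
```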
Odd-order zero-diagonal tridiagonals $\mathbf T$ can be treated similarly: in addition to the $\pm$ pairs of eigenvalues, they have a zero eigenvalue. After deflating out this zero eigenvalue, the treatment given above for Golub-Kahan tridiagonals becomes applicable. One way to deflate is to compute the QR decomposition $\mathbf T=\mathbf Q\mathbf R$, form the product $\mathbf R\mathbf Q$, and delete its last row and column, which leaves a Golub-Kahan tridiagonal.
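The deflation step can be checked numerically; this is a sketch with made-up off-diagonal entries for a $5\times 5$ zero-diagonal tridiagonal $\mathbf T$. One unshifted QR step zeroes out the last row and column of $\mathbf R\mathbf Q$ (which is similar to $\mathbf T$, since $\mathbf R\mathbf Q=\mathbf Q^\top\mathbf T\mathbf Q$), and the leading block is again a zero-diagonal, i.e. Golub-Kahan, tridiagonal.

```python
import numpy as np

off = np.array([1.0, 2.0, 0.5, 1.5])       # made-up off-diagonal entries
T = np.diag(off, 1) + np.diag(off, -1)     # zero diagonal, odd order (5x5)

Q, R = np.linalg.qr(T)
RQ = R @ Q                                 # similar to T: RQ = Q^T T Q

# The zero eigenvalue has been deflated to the trailing position,
# and the diagonal of RQ is still (numerically) zero.
assert np.allclose(RQ[-1, :], 0, atol=1e-10)
assert np.allclose(RQ[:, -1], 0, atol=1e-10)
assert np.allclose(np.diag(RQ), 0, atol=1e-10)

GK = RQ[:-1, :-1]                          # even-order Golub-Kahan tridiagonal
```

The eigenvalues of `GK` are exactly the nonzero eigenvalues of `T`, i.e. the $\pm$ pairs.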
See Ward and Gray's paper (along with the associated FORTRAN code) and this beautiful survey by David Watkins.
There is such a thing, at least over the reals.
Suppose $m>n$.
Then an $m\times n$ matrix has full rank if and only if it contains an $n\times n$ submatrix of full rank.
Let $A$ be an $m\times n$ matrix and let $A_1,\dots,A_N$ be its $n\times n$ submatrices.
(The exact value of the number $N=\binom{m}{n}$ is irrelevant here; it only depends on $m$ and $n$.)
Now let $D(A)=\sum_{k=1}^N\det(A_k)^2$.
Clearly $D(A)$ is polynomial in each element since the determinant is, and $D(A)=0$ if and only if none of the $n\times n$ submatrices of $A$ has full rank, that is, if and only if $A$ itself does not have full rank.
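A quick numerical check of this construction (the function name and test matrices are mine): $D(A)$ sums the squared determinants of all $n\times n$ submatrices, so it is positive exactly when $A$ has full rank. By the Cauchy-Binet formula this sum also equals $\det(A^\top A)$, which the sketch verifies as a sanity check.

```python
import numpy as np
from itertools import combinations

def D(A):
    """Sum of squared determinants of all n x n submatrices of A (m >= n)."""
    m, n = A.shape
    return sum(np.linalg.det(A[list(rows), :]) ** 2
               for rows in combinations(range(m), n))

full_rank = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])
rank_deficient = np.array([[1.0, 2.0],
                           [2.0, 4.0],
                           [3.0, 6.0]])   # second column = 2 x first

assert D(full_rank) > 0
assert abs(D(rank_deficient)) < 1e-12
# Cauchy-Binet: D(A) = det(A^T A)
assert np.isclose(D(full_rank), np.linalg.det(full_rank.T @ full_rank))
```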
I don't know if such things have been studied or given a name.
Not sure if there is a closed-form answer, but $$ AX-XA = C $$ is a linear system of $n^2$ equations in $n^2$ unknowns.
One approach is using the vectorization operator $$\operatorname{vec} X = \begin{pmatrix} x_{11}\\ x_{21}\\ \vdots\\ x_{n1}\\ x_{12}\\ \vdots\\ x_{nn} \end{pmatrix}.$$ Using the Kronecker product identity $$ \operatorname{vec} (AXB) = (B^\top \otimes A) \operatorname{vec} X, $$ the system becomes $$ (I \otimes A - A^\top \otimes I)\operatorname{vec} X= \operatorname{vec} C, $$ which would suggest $\operatorname{vec} X = (I \otimes A - A^\top \otimes I)^{-1}\operatorname{vec} C$, but that inverse does not exist (see the edit below).
Edit. As kindly pointed out by @loup blanc, the matrix $I \otimes A - A^\top \otimes I$ is always singular, so the equation has either no solution or infinitely many solutions.
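A numerical sketch of the vectorized formulation (random data, my choice). Since the coefficient matrix is always singular, the sketch takes $C$ to be an actual commutator, which guarantees consistency, and uses a least-squares solve instead of the inverse; `vec` is realized as column-major flattening, matching the column-stacking definition above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
X0 = rng.standard_normal((n, n))
C = A @ X0 - X0 @ A                      # consistent right-hand side

I = np.eye(n)
M = np.kron(I, A) - np.kron(A.T, I)      # acts on vec X (column-major)

# M is singular: vec(I) lies in its null space, since [A, I] = 0.
assert np.linalg.matrix_rank(M) < n * n

# Minimum-norm solution of the (consistent) singular system.
vecX, *_ = np.linalg.lstsq(M, C.flatten(order="F"), rcond=None)
X = vecX.reshape((n, n), order="F")
assert np.allclose(A @ X - X @ A, C)
```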
Another approach is the following: let $A = \Omega \Lambda \Omega^{-1}$ be an eigendecomposition of $A$. Then $$ [A,X] = \Omega \Lambda \Omega^{-1} X - X \Omega \Lambda \Omega^{-1} = \Omega [\Lambda, \Omega^{-1}X\Omega] \Omega^{-1} = C\\ [\Lambda, \Omega^{-1}X\Omega] = \Omega^{-1}C\Omega $$ Denoting $R = \Omega^{-1}C\Omega$ and $Y = \Omega^{-1}X\Omega$, the equation becomes $$ [\Lambda, Y] = R $$ or, in index form, $$ \lambda_i Y_{ij} - \lambda_j Y_{ij} = R_{ij}\\ Y_{ij} = \begin{cases} \dfrac{1}{\lambda_i - \lambda_j}R_{ij}, &\lambda_i \neq \lambda_j\\ \text{any}, &\lambda_{i} = \lambda_j \end{cases} $$ provided that $R_{ij} = 0$ whenever $\lambda_i = \lambda_j$ (in particular, $\operatorname{diag}(R) = 0$).