I'll consider the special case of symmetric tridiagonal matrices with zero diagonal for this answer.
I prefer calling the even-order tridiagonal ones Golub-Kahan matrices. These matrices turn up in deriving the modification of the QR algorithm for computing the singular value decomposition (SVD). More precisely, given an $n\times n$ upper bidiagonal matrix such as (here $n=4$)
$$\mathbf B=\begin{pmatrix}d_1&e_1&&\\&d_2&e_2&\\&&d_3&e_3\\&&&d_4\end{pmatrix}$$
the $2n\times 2n$ block matrix $\mathbf K=\left(\begin{array}{c|c}\mathbf 0&\mathbf B^\top \\\hline \mathbf B&\mathbf 0\end{array}\right)$ is similar to the Golub-Kahan tridiagonal
$$\mathbf P\mathbf K\mathbf P^\top=\begin{pmatrix}& d_1 & & & & & & \\d_1 & & e_1 & & & & & \\& e_1 & & d_2 & & & & \\& & d_2 & & e_2 & & & \\& & & e_2 & & d_3 & & \\& & & & d_3 & & e_3 & \\& & & & & e_3 & & d_4 \\& & & & & & d_4 & \end{pmatrix}$$
where $\mathbf P$ is a permutation matrix. This similarity transformation is referred to as the "perfect shuffle".
The importance of this is that the eigenvalues of the Golub-Kahan matrices always come in $\pm$ pairs; more precisely, if $\mathbf B$ has the singular values $\sigma_1,\sigma_2,\dots,\sigma_n$, then the eigenvalues of the Golub-Kahan tridiagonal are $\pm\sigma_1,\pm\sigma_2,\dots,\pm\sigma_n$.
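Here is a small NumPy check of the perfect shuffle and the $\pm$ pairing; the bidiagonal entries are arbitrary illustrative values, and the interleaving permutation is written out explicitly rather than as a matrix $\mathbf P$:

```python
import numpy as np

# Illustrative bidiagonal data for n = 4 (values chosen arbitrarily)
d = np.array([1.0, 2.0, 3.0, 4.0])   # diagonal of B
e = np.array([0.5, 0.7, 0.9])        # superdiagonal of B
n = len(d)
B = np.diag(d) + np.diag(e, k=1)     # upper bidiagonal

# Block matrix K = [[0, B^T], [B, 0]]
K = np.block([[np.zeros((n, n)), B.T],
              [B,                np.zeros((n, n))]])

# Perfect shuffle: interleave indices 0..n-1 with n..2n-1
p = np.empty(2 * n, dtype=int)
p[0::2] = np.arange(n)
p[1::2] = np.arange(n, 2 * n)
T = K[np.ix_(p, p)]                  # T = P K P^T

# The subdiagonal of T interleaves d and e: d1, e1, d2, e2, d3, e3, d4
print(np.diag(T, k=-1))

# Eigenvalues of T are the singular values of B in +/- pairs
eig = np.sort(np.linalg.eigvalsh(T))
sv = np.linalg.svd(B, compute_uv=False)
print(np.allclose(eig, np.sort(np.concatenate([-sv, sv]))))
```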
Odd-order zero-diagonal tridiagonals can be treated similarly, since they have a zero eigenvalue in addition to the $\pm$ pairs. The treatment given above for Golub-Kahan tridiagonals becomes applicable after deflating out the zero eigenvalue; one way to do this is to compute the QR decomposition $\mathbf T=\mathbf Q\mathbf R$, form the product $\mathbf R\mathbf Q$, and delete its last row and last column, which yields a Golub-Kahan tridiagonal of even order.
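A quick numerical illustration of this deflation, with arbitrary off-diagonal entries and a single zero-shift QR step:

```python
import numpy as np

# Odd-order (5x5) zero-diagonal symmetric tridiagonal, arbitrary off-diagonal entries
off = np.array([1.0, 2.0, 3.0, 4.0])
T = np.diag(off, k=1) + np.diag(off, k=-1)

# One zero-shift QR step: T = QR, then RQ = Q^T T Q is orthogonally similar to T
Q, R = np.linalg.qr(T)
RQ = R @ Q

# T is singular, so the last row and column of RQ are (numerically) zero;
# dropping them deflates the zero eigenvalue and leaves an even-order
# zero-diagonal tridiagonal, i.e. a Golub-Kahan matrix
T_deflated = RQ[:-1, :-1]

print(np.round(T_deflated, 12))
print(np.sort(np.linalg.eigvalsh(T)))           # 0 together with the +/- pairs
print(np.sort(np.linalg.eigvalsh(T_deflated)))  # the same +/- pairs, zero removed
```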
See Ward and Gray's paper (along with the associated FORTRAN code) and this beautiful survey by David Watkins.
If $\mathbf{A}$ is positive semidefinite, the sequence $\{\mathbf{A}_k\} = \{\mathbf{A}+\frac{1}{k}\mathbf{I}\}$ consists of positive definite matrices.¹ Moreover, $\mathbf{A}_k \to \mathbf{A}$ in operator norm, and writing the Cholesky factorizations $\mathbf{A}_k = \mathbf{L}_k\mathbf{L}_k^T$, the factors $\mathbf{L}_k$ converge to a limit we name $\mathbf{L}$. Finally, $\mathbf{L}$ has the desired property, that is, $\mathbf{A} = \mathbf{L}\mathbf{L}^T$.
Therefore, an efficient approach in practice is to add a small multiple of the identity, $\varepsilon \mathbf{I}$, to $\mathbf{A}$ so that $\mathbf{A} + \varepsilon \mathbf{I}$ is numerically positive definite, and then perform the Cholesky decomposition. We finally remark that computing the eigenvalues of $\mathbf{A}$ instead would require more floating point operations.
¹ Proof sketch on Wikipedia.
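As a sketch of the suggestion above (the helper name and the choice of $\varepsilon$ are illustrative, not canonical):

```python
import numpy as np

def psd_cholesky(A, eps=1e-10):
    """Hypothetical helper: Cholesky factor of A + eps*I, so that
    L @ L.T reproduces the positive semidefinite A up to O(eps)."""
    return np.linalg.cholesky(A + eps * np.eye(A.shape[0]))

# Rank-deficient positive semidefinite example: A = G G^T with rank 3
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 3))
A = G @ G.T

L = psd_cholesky(A)
print(np.linalg.norm(A - L @ L.T))   # small, on the order of eps
```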
Inverting a nonsymmetric matrix $A$ via $A^{-1}=A^T(AA^T)^{-1}$, using the Cholesky factorization of $AA^T$, is a pretty bad idea in terms of both complexity and numerical stability.
There is absolutely nothing saved in terms of the number of operations. Forming the product $AA^T$ prior to its factorization takes about $n^3$ flops (yes, yes, Strassen gives a better exponent). Adding the $n^3/3$ flops of the Cholesky factorization gives $4n^3/3$ flops in total. This is twice as expensive as LU and exactly the same as a Householder QR factorization of $A$ computed directly.
Not only is this approach to inversion not faster in any way, it is also inferior to almost any other method one might think of. The errors committed when solving systems with $AA^T$ are proportional to the square of the condition number of $A$, which roughly halves the number of correct digits in the solution compared to more stable methods based on LU or QR factorizations of $A$.
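To see the effect numerically, here is a small comparison on a synthetic matrix with prescribed singular values; all choices (size, conditioning, random seed) are illustrative:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Synthetic matrix with cond(A) ~ 1e6 built from random orthogonal factors
n = 50
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ V.T

x_true = np.ones(n)
b = A @ x_true

# Route 1: x = A^T (A A^T)^{-1} b via Cholesky of A A^T, cond(A A^T) ~ cond(A)^2
c, low = cho_factor(A @ A.T)
x_gram = A.T @ cho_solve((c, low), b)

# Route 2: ordinary LU-based solve
x_lu = np.linalg.solve(A, b)

print(f"cond(A)     = {np.linalg.cond(A):.1e}")
print(f"cond(A A^T) = {np.linalg.cond(A @ A.T):.1e}")
print("rel. error via A A^T + Cholesky:",
      np.linalg.norm(x_gram - x_true) / np.linalg.norm(x_true))
print("rel. error via LU:              ",
      np.linalg.norm(x_lu - x_true) / np.linalg.norm(x_true))
```

The Gram-matrix route typically loses roughly twice as many digits as the LU solve, consistent with the condition-number-squaring argument above.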