The $LDL^T$ decomposition of a singular SPSD matrix is not unique. If
$$
A=LDL^T
$$
with
$$\tag{1}
D=\begin{bmatrix}D_{11}&0\\0&0\end{bmatrix}, \quad
L=\begin{bmatrix}L_{11}&0\\L_{21}&L_{22}\end{bmatrix},
$$
where $D_{11}$ is nonsingular (with positive diagonal entries), then it is easy to see that the sub-matrix $L_{22}$ can in fact be chosen arbitrarily. In general, you would need to consider a pivoted factorisation leading to
$$\tag{2}
\Pi^TA\Pi=LDL^T
$$
with $L$ and $D$ of the form (1) and some permutation matrix $\Pi$, because accepting a zero pivot would make the remainder of the factorisation algorithm undefined.
Assume that you have a factorisation (1) obtained (by luck) without pivoting (or consider $\Pi^TA\Pi$ instead of $A$) and define $A^+=L^{-T}D^+L^{-1}$. It is easy to verify that
$$
A^+=\begin{bmatrix}L_{11}^{-T}D_{11}^{-1}L_{11}^{-1}&0\\0&0\end{bmatrix},
$$
so $A^+$ is unique as it does not depend on the "non-unique block" $L_{22}$. So in fact
$$
A^{+}=\begin{bmatrix}A_{11}^{-1}&0\\0&0\end{bmatrix},
$$
where $A_{11}$ is the leading principal sub-matrix of $A$ (of the dimension equal to the rank of $A$ consistent with the partitioning of the factors in (1)).
You might want to note that $A^{+}$ defined this way is not the Moore-Penrose pseudo-inverse, since in general $AA^{+}$ and $A^{+}A$ are not symmetric. On the other hand, the matrix $A^{+}$ as you defined it is a so-called generalised reflexive inverse, or a (1,2)-generalised inverse (since it satisfies the first two of the four conditions defining the unique Moore-Penrose pseudo-inverse).
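The claims above can be checked numerically. Below is a minimal NumPy sketch; the factors $L$ and $D$ are an illustrative rank-2 example chosen so that an unpivoted factorisation exists "by luck", not matrices from the original question.

```python
import numpy as np

# Illustrative factors (assumed example): unit lower triangular L,
# diagonal D with a trailing zero block, so A = L D L^T is SPSD of rank 2.
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [1.0, 2.0, 1.0]])
D = np.diag([4.0, 1.0, 0.0])
A = L @ D @ L.T

Dplus = np.diag([0.25, 1.0, 0.0])   # pseudo-inverse of the diagonal D
Linv = np.linalg.inv(L)
Aplus = Linv.T @ Dplus @ Linv       # A^+ = L^{-T} D^+ L^{-1}

# A^+ equals [[A_11^{-1}, 0], [0, 0]] with A_11 = A[:2, :2]
print(np.allclose(Aplus[:2, :2], np.linalg.inv(A[:2, :2])))  # True

# The (1,2)-inverse conditions hold ...
print(np.allclose(A @ Aplus @ A, A))          # True
print(np.allclose(Aplus @ A @ Aplus, Aplus))  # True
# ... but A A^+ is not symmetric, so A^+ is not the Moore-Penrose inverse.
print(np.allclose(A @ Aplus, (A @ Aplus).T))  # False
```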
If you insist on computing the Moore-Penrose pseudo-inverse from the $LDL^T$ factorisation, assume (as before, with luck or pivoting) that you have (1) and write $A$ as
$$
A=\tilde{L}D_{11}\tilde{L}^{T},
$$
where $\tilde{L}^T=[L_{11}^T,L_{21}^T]$.
Since $D_{11}$ and $\tilde{L}$ have full rank we can write
$$
A^{\dagger}=(\tilde{L}D_{11}\tilde{L}^T)^{\dagger}=(\tilde{L}^{\dagger})^TD_{11}^{-1}\tilde{L}^{\dagger},
$$
where
$$
\tilde{L}^{\dagger}=(\tilde{L}^T\tilde{L})^{-1}\tilde{L}^T
=(L_{11}^TL_{11}+L_{21}^TL_{21})^{-1}[L_{11}^T,L_{21}^T].
$$
Hence we obtain quite an awful expression
$$
A^{\dagger}=\tilde{L}\tilde{D}_{11}^{-1}\tilde{L}^{T},
\quad
\tilde{D}_{11}=\tilde{L}^T\tilde{L}D_{11}\tilde{L}^T\tilde{L}=(L_{11}^TL_{11}+L_{21}^TL_{21})D_{11}(L_{11}^TL_{11}+L_{21}^TL_{21}).
$$
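Awful or not, the expression is easy to verify numerically. Here is a small NumPy sketch with an illustrative $\tilde{L}$ and $D_{11}$ (assumed example data, not from the original post), compared against `np.linalg.pinv`:

```python
import numpy as np

# Ltilde stacks [L11; L21] (full column rank); D11 is the nonsingular diagonal block.
Ltilde = np.array([[1.0, 0.0],
                   [0.5, 1.0],
                   [1.0, 2.0]])
D11 = np.diag([4.0, 1.0])
A = Ltilde @ D11 @ Ltilde.T          # rank-2 SPSD matrix

G = Ltilde.T @ Ltilde                # L11^T L11 + L21^T L21
Dtilde = G @ D11 @ G                 # the "awful" middle matrix
Adag = Ltilde @ np.linalg.inv(Dtilde) @ Ltilde.T

# Matches the Moore-Penrose pseudo-inverse
print(np.allclose(Adag, np.linalg.pinv(A)))  # True
```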
The only way for $LDL^T$ to be positive definite for any $L$ is if $D$ itself is positive definite. This is because $\left \langle LDL^Tv,v \right \rangle=\left \langle D(L^Tv),(L^Tv) \right \rangle$, and the value of $L^Tv$ is unconstrained, since $v$ and $L$ are arbitrary.
If $L$ is assumed to arise from a Cholesky-like decomposition, then it is no longer an arbitrary matrix, but the conclusion about $D$ is still true because $\{L^Tv: L \text{ lower triangular}, v\in\mathbb{R}^n\}=\mathbb{R}^n$.
Since $D$ is block diagonal, it is positive definite iff each block is positive definite. Therefore the condition you are after is that each block be positive definite. Note that this is strictly stronger than having a positive determinant.
In general, a symmetric matrix is positive definite iff all of its leading principal minors (the determinants of its leading principal submatrices) are positive; this is called Sylvester's Criterion.
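To make the point about determinants concrete, here is a minimal sketch of Sylvester's Criterion in NumPy (the function name and the example matrices are my own, chosen for illustration); $-I$ shows that a positive determinant alone does not imply positive definiteness:

```python
import numpy as np

def is_positive_definite_sylvester(A):
    """Sylvester's Criterion: a symmetric matrix is positive definite
    iff every leading principal minor det(A[:k, :k]) is positive."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

# -I has a positive determinant but is negative definite:
B = -np.eye(2)
print(np.linalg.det(B) > 0)               # True
print(is_positive_definite_sylvester(B))  # False

# A genuinely positive definite matrix passes:
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(is_positive_definite_sylvester(C))  # True
```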
Best Answer
I will show how an $LDL^T$ factorization can be computed inductively. You will still need to explain why you never divide by zero, but this is the essence of my original hint.
Write $$A = \begin{bmatrix} A_{11} & A_{21}^T \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} 1 & \\ v & L \end{bmatrix} \begin{bmatrix} \alpha & \\ & D \end{bmatrix} \begin{bmatrix} 1 & v^T \\ & L^T \end{bmatrix} = \begin{bmatrix} \alpha & \alpha v^T \\ \alpha v & LDL^T + \alpha vv^T \end{bmatrix}$$ Here $A_{11}$ has dimension $1$, $A_{21}$ and $v$ are column vectors of dimension $n-1$, $A_{22}$, $D$ and $L$ have dimension $n-1$, and $\alpha$ is simply a scalar. The matrix $D$ is diagonal, $L$ is lower unit triangular. Evidently, $$\alpha = a_{11}, \quad v = \frac{1}{\alpha}A_{21},$$ and $L$ and $D$ can be computed as the $LDL^T$ factorization of the matrix $$A' = A_{22} - \alpha vv^T,$$ which is symmetric and has dimension $n-1$.
The fact that your matrix has a particular sparsity pattern, i.e., is tridiagonal, has not been used at all. It will allow you to greatly reduce the cost of computing $v$ and of applying the symmetric update.
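The inductive step above can be sketched as an iterative routine (each recursion level peels off one row/column, so a loop over Schur complements suffices). This is a minimal, dense, unpivoted version assuming every pivot $\alpha$ is nonzero, as in the text; the function name and test matrix are my own:

```python
import numpy as np

def ldl_unpivoted(A):
    """Unpivoted LDL^T via the inductive step from the text:
    at each step take alpha = a_11, v = A21 / alpha, then recurse on
    the Schur complement A' = A22 - alpha v v^T.
    Assumes every pivot alpha is nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    S = np.array(A, dtype=float)      # current Schur complement
    for k in range(n):
        alpha = S[0, 0]
        d[k] = alpha
        v = S[1:, 0] / alpha          # v = A21 / alpha
        L[k + 1:, k] = v
        S = S[1:, 1:] - alpha * np.outer(v, v)   # A' = A22 - alpha v v^T
    return L, d

# Symmetric tridiagonal example, matching the sparsity mentioned above
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
L, d = ldl_unpivoted(A)
print(np.allclose(L @ np.diag(d) @ L.T, A))  # True
```

For a tridiagonal matrix, $A_{21}$ has a single nonzero entry, so $v$ and the rank-one update touch only $O(1)$ entries per step, which is the cost reduction the text alludes to.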