Nope. Just playing around with my computer, I found the matrix
$$\begin{pmatrix}\frac{11}{10}&\frac{1}{100}&\frac{99}{100}\\\frac{1}{100}&\frac{11}{10}&\frac{99}{100}\\\frac{99}{100}&\frac{99}{100}&\frac{11}{10}\end{pmatrix}$$
with determinant $\frac{-25179}{31250}$.
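For anyone who wants to double-check, the determinant is easy to verify numerically; here is a quick NumPy sketch (the variable names are mine):

```python
import numpy as np

# The matrix above: diagonal entries 11/10, off-diagonals 1/100 and 99/100
A = np.array([
    [1.10, 0.01, 0.99],
    [0.01, 1.10, 0.99],
    [0.99, 0.99, 1.10],
])

det = np.linalg.det(A)
print(det)  # ≈ -0.805728, i.e. -25179/31250
```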
Is this perhaps a misremembering of the definition of a diagonally dominant matrix?
I'll consider the special case of symmetric tridiagonal matrices with zero diagonal for this answer.
I prefer calling the even-order tridiagonal ones Golub-Kahan matrices. These matrices turn up in deriving the modification of the QR algorithm for computing the singular value decomposition (SVD). More precisely, given an $n\times n$ upper bidiagonal matrix like the following (shown for $n=4$)
$$\mathbf B=\begin{pmatrix}d_1&e_1&&\\&d_2&e_2&\\&&d_3&e_3\\&&&d_4\end{pmatrix}$$
the $2n\times 2n$ block matrix $\mathbf K=\left(\begin{array}{c|c}\mathbf 0&\mathbf B^\top \\\hline \mathbf B&\mathbf 0\end{array}\right)$ is similar to the Golub-Kahan tridiagonal
$$\mathbf P\mathbf K\mathbf P^\top=\begin{pmatrix}& d_1 & & & & & & \\d_1 & & e_1 & & & & & \\& e_1 & & d_2 & & & & \\& & d_2 & & e_2 & & & \\& & & e_2 & & d_3 & & \\& & & & d_3 & & e_3 & \\& & & & & e_3 & & d_4 \\& & & & & & d_4 & \end{pmatrix}$$
where $\mathbf P$ is a permutation matrix. This similarity transformation is referred to as the "perfect shuffle".
The importance of this is that the eigenvalues of the Golub-Kahan matrices always come in $\pm$ pairs; more precisely, if $\mathbf B$ has the singular values $\sigma_1,\sigma_2,\dots,\sigma_n$, then the eigenvalues of the Golub-Kahan tridiagonal are $\pm\sigma_1,\pm\sigma_2,\dots,\pm\sigma_n$.
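This pairing is easy to confirm numerically. The sketch below (my own illustration, using NumPy) builds $\mathbf K$ from a random bidiagonal $\mathbf B$, applies the perfect shuffle, and compares the eigenvalues of the resulting tridiagonal with the singular values of $\mathbf B$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
d = rng.uniform(1.0, 2.0, n)       # diagonal of B
e = rng.uniform(1.0, 2.0, n - 1)   # superdiagonal of B

B = np.diag(d) + np.diag(e, 1)     # upper bidiagonal
K = np.block([[np.zeros((n, n)), B.T],
              [B, np.zeros((n, n))]])

# Perfect shuffle permutation: 0, n, 1, n+1, 2, n+2, ...
perm = np.arange(2 * n).reshape(2, n).T.ravel()
T = K[np.ix_(perm, perm)]          # P K P^T: symmetric tridiagonal, zero diagonal

sigma = np.linalg.svd(B, compute_uv=False)
eigs = np.sort(np.linalg.eigvalsh(T))
expected = np.sort(np.concatenate([sigma, -sigma]))
print(np.allclose(eigs, expected))  # True
```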
Odd-order zero-diagonal tridiagonals can be treated similarly; they have a zero eigenvalue in addition to the $\pm$ pairs of eigenvalues. The treatment above becomes applicable after deflating out the zero eigenvalue: compute the QR decomposition $\mathbf T=\mathbf Q\mathbf R$ of the odd-order tridiagonal $\mathbf T$, form the product $\mathbf R\mathbf Q$, and delete its last row and column, which yields a Golub-Kahan tridiagonal.
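As a sanity check of the deflation step (again my own NumPy sketch, not code from the referenced papers): one unshifted QR step on an odd-order zero-diagonal tridiagonal zeroes out the last row and column, and dropping them leaves a Golub-Kahan tridiagonal whose eigenvalues are the nonzero eigenvalues of the original matrix:

```python
import numpy as np

off = np.array([1.0, 2.0, 3.0, 4.0])       # off-diagonals; order n = 5 (odd)
T = np.diag(off, 1) + np.diag(off, -1)     # zero-diagonal symmetric tridiagonal

Q, R = np.linalg.qr(T)                      # unshifted QR step
RQ = R @ Q                                  # = Q^T T Q, similar to T
T1 = RQ[:-1, :-1]                           # deflate the zero eigenvalue

print(np.allclose(np.diag(T1), 0))          # zero diagonal survives: True
eigs_T = np.sort(np.linalg.eigvalsh(T))     # [-b, -a, 0, a, b]
eigs_T1 = np.sort(np.linalg.eigvalsh((T1 + T1.T) / 2))
print(np.allclose(np.delete(eigs_T, len(eigs_T) // 2), eigs_T1))  # True
```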
See Ward and Gray's paper (along with the associated FORTRAN code) and this beautiful survey by David Watkins.
Best Answer
Since $A$ is SPSD, $I+A$ is SPD, because $x^T(I+A)x=x^Tx+x^TAx>0$ for all nonzero $x$. The inverse of an SPD matrix $B$ is SPD as well, because $$ x^TB^{-1}x=(B^{-1}x)^TB(B^{-1}x)=y^TBy>0 $$ for all nonzero $x$, where $y=B^{-1}x$ (since $B$ is SPD and hence nonsingular, $y\neq 0$ if and only if $x\neq 0$). Hence $(I+A)^{-1}$ is SPD too.
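A quick numerical illustration of this chain (my own NumPy sketch; generating $A$ as $MM^T$ is just a standard way to produce an SPSD test matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3))
A = M @ M.T                          # SPSD (rank <= 3, so only semidefinite)

C = np.linalg.inv(np.eye(5) + A)     # (I + A)^{-1}

np.linalg.cholesky((C + C.T) / 2)    # Cholesky succeeds only for an SPD matrix
print((np.diag(C) > 0).all())        # positive diagonal entries: True
```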
An SPD matrix $B$ always has positive diagonal entries, since $0<e_i^TBe_i=b_{ii}$, where $e_i$ is the $i$th vector of the canonical basis and $b_{ii}$ is the $i$th diagonal entry of $B$.
P.S.: The assumption on the diagonal and off-diagonal entries of $A$ is not needed here; all that is required is that $A$ is SPSD. Note, however, that an SPSD matrix with zero diagonal entries is necessarily the zero matrix. To see this, note that the trace of $A$ (the sum of its diagonal entries) equals the sum of its eigenvalues, since the trace is invariant under a change of basis. The eigenvalues of $A$ are nonnegative because $A$ is SPSD, and since they sum to zero, they must all be zero; a symmetric matrix with all eigenvalues zero is the zero matrix.
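The trace-versus-eigenvalues identity used here can also be seen numerically (my own sketch; the rank-1 matrix is just an arbitrary SPSD example):

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])
A = np.outer(v, v)                   # rank-1 SPSD, eigenvalues {0, 0, |v|^2}

eigs = np.linalg.eigvalsh(A)
print(np.isclose(np.trace(A), eigs.sum()))  # trace equals eigenvalue sum: True
print((eigs >= -1e-12).all())               # eigenvalues nonnegative (up to rounding)
```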