In the definition of the eigendecomposition of $S$, you say that $Q$ comprises only the eigenvectors for the nonsingular part of $S$. Typically, one writes a complete eigendecomposition in which $Q$ is a square matrix of size $m\times m$. As you hint in your final sentences, the eigenvectors corresponding to the zero eigenvalues should span the nullspace of $S$. They cannot be arbitrary vectors chosen so that $Q$ spans $\mathbb{C}^m$; they must also be orthogonal to the eigenvectors corresponding to the nonzero eigenvalues. In the full eigendecomposition, $Q$ is square and, in this case, unitary (orthogonal if $S$ is real), so $QQ^* = Q^*Q = I$.
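To make this concrete, here is a small NumPy sketch (the matrix sizes and rank are illustrative, not from the question): a rank-deficient symmetric $S$ has a full eigendecomposition whose zero-eigenvalue eigenvectors span the nullspace and whose $Q$ is orthogonal.

```python
import numpy as np

# Illustrative example: S is a 4x4 real symmetric PSD matrix of rank 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
S = A @ A.T  # symmetric, rank 2

# Full eigendecomposition: Q is square (4x4) and orthogonal.
w, Q = np.linalg.eigh(S)

# The eigenvectors for the (numerically) zero eigenvalues span null(S);
# eigh returns them orthogonal to the nonzero-eigenvalue eigenvectors.
null_vecs = Q[:, np.abs(w) < 1e-10]
print(null_vecs.shape)            # two nullspace vectors for a rank-2 S

# Q is orthogonal, so Q Q^T = Q^T Q = I.
print(np.allclose(Q @ Q.T, np.eye(4)))
```

Had we instead padded $Q$ with arbitrary vectors merely spanning the complement, $Q Q^T = I$ would fail and the factorization $S = Q \Lambda Q^T$ would no longer hold.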
The general strategy is to compute an expression for the inverse using the Woodbury identity and then to compute the diagonal entries of that inverse.
The computation can be done in three steps. First, apply the Sherman-Morrison-Woodbury update (http://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula, http://en.wikipedia.org/wiki/Woodbury_matrix_identity) to the matrix $M$. I will assume that $M = D + UU^T$. Applying the identity,
$ M^{-1} = D^{-1} - D^{-1}U (I + U^TD^{-1}U)^{-1}U^TD^{-1} $
Second, compute the Cholesky decomposition $ F = (I + U^TD^{-1}U)^{-1} = RR^T $, where $R$ is a lower triangular matrix. Then define $V = D^{-1}UR$, so that
$M^{-1} = D^{-1} - VV^{T}$
Finally, $\operatorname{diag}(M^{-1})$ is the diagonal of the inverse of a diagonal matrix minus the diagonal of the low-rank matrix $VV^T$. Using MATLAB-like notation,
$[M^{-1}]_{ii} = [D^{-1}]_{ii} - V(i,:)V(i,:)^T$
The resulting computation is ${\cal O}(k^2n + k^3)$, where $k$ is the rank of the low-rank part and $n$ the size of the matrix.
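The three steps above can be sketched in NumPy as follows (the sizes and the test matrix are illustrative; $D$ is assumed diagonal with positive entries, as required for the Cholesky factorization of $F$ to exist):

```python
import numpy as np

# Sketch of the three-step diag(M^{-1}) computation for M = D + U U^T,
# with D = diag(d) positive and U of size n x k.
rng = np.random.default_rng(1)
n, k = 6, 2
d = rng.uniform(1.0, 2.0, size=n)            # diagonal of D
U = rng.standard_normal((n, k))

# Step 1 (Woodbury): M^{-1} = D^{-1} - D^{-1} U F U^T D^{-1},
# where F = (I + U^T D^{-1} U)^{-1} is only k x k.
Dinv_U = U / d[:, None]                      # D^{-1} U, O(nk)
F = np.linalg.inv(np.eye(k) + U.T @ Dinv_U)  # O(k^2 n + k^3)

# Step 2: Cholesky F = R R^T (R lower triangular), then V = D^{-1} U R,
# so that M^{-1} = D^{-1} - V V^T.
R = np.linalg.cholesky(F)
V = Dinv_U @ R

# Step 3: [M^{-1}]_ii = [D^{-1}]_ii - V(i,:) V(i,:)^T, i.e. row norms of V.
diag_Minv = 1.0 / d - np.sum(V * V, axis=1)

# Check against the dense inverse (affordable only at this toy size).
M = np.diag(d) + U @ U.T
print(np.allclose(diag_Minv, np.diag(np.linalg.inv(M))))
```

Note that the full $n\times n$ inverse is never formed; only the $k\times k$ capacitance matrix is inverted, which is where the ${\cal O}(k^2n + k^3)$ cost comes from.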
I have assumed here that all the inverses exist. However, this method might not be the most numerically stable. For stability issues with the update, see
E. L. Yip, "A note on the stability of solving a rank-p modification of a linear system by the Sherman-Morrison-Woodbury formula," SIAM Journal on Scientific and Statistical Computing, 1986.
Best Answer
I would recommend reading http://www.unige.ch/~gander/consulting/X/EigenUpdate.ps.gz and having a look at the cited work of Golub and Van Loan. They show how to update matrices with rank-one changes. You can understand your update matrix $D \in\mathbb{R}^{n\times n}$ as a sum of $n$ rank-one updates. Good luck!
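As a minimal illustration of that last point (not taken from the cited references): a diagonal update $D = \sum_i d_i e_i e_i^T$ can be absorbed one rank-one term at a time, refreshing the inverse with the Sherman-Morrison formula at each step. The base matrix below is an arbitrary well-conditioned example.

```python
import numpy as np

# Treat D = sum_i d_i e_i e_i^T as n successive rank-one updates and
# maintain the inverse via Sherman-Morrison:
# (B + d_i e_i e_i^T)^{-1} = B^{-1} - d_i (B^{-1} e_i)(e_i^T B^{-1}) / (1 + d_i [B^{-1}]_ii)
rng = np.random.default_rng(2)
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)              # well-conditioned SPD base matrix
d = rng.uniform(0.1, 1.0, size=n)        # diagonal of the update D

Ainv = np.linalg.inv(A)
for i in range(n):
    # B^{-1} e_i is column i of Ainv; e_i^T B^{-1} is row i.
    Ainv = Ainv - d[i] * np.outer(Ainv[:, i], Ainv[i, :]) / (1.0 + d[i] * Ainv[i, i])

print(np.allclose(Ainv, np.linalg.inv(A + np.diag(d))))
```

Each update costs ${\cal O}(n^2)$, so absorbing all $n$ terms this way costs ${\cal O}(n^3)$; it pays off when only a few diagonal entries actually change.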