First, does high-dimensional mean a large matrix (but still two-dimensional), i.e. an m-by-m matrix rather than an m-by-m-by-m tensor? There are three approaches I can think of off the top of my head.
For tridiagonal matrices, the Thomas algorithm works well. But in this case we have a perturbed tridiagonal matrix, so we can combine the Thomas algorithm with the Sherman-Morrison formula.
For sparse matrices, Givens Rotation kicks ass.
For positive definite, Cholesky Decomposition is the algorithm of choice.
These can be used (like Gaussian elimination) to factor the matrix in question into triangular factors, which are then easy to invert. No doubt there are other methods, or even hybrids, not listed here. I am sure MATLAB has some criteria (for example, the sparsity percentage) to decide which algorithm will run fastest and then applies it.
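As a minimal sketch of the Thomas-plus-Sherman-Morrison idea (the function names and the rank-1 form $T + uv^\top$ are my own illustration, not from the question): solve two tridiagonal systems with the Thomas algorithm, then combine them with the Sherman-Morrison correction.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve T x = d for tridiagonal T with sub-diagonal a (a[0] unused),
    diagonal b, and super-diagonal c (c[-1] unused), via the Thomas algorithm."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_perturbed(a, b, c, u, v, d):
    """Solve (T + u v^T) x = d using two Thomas solves and Sherman-Morrison."""
    y = thomas_solve(a, b, c, d)               # T y = d
    z = thomas_solve(a, b, c, u)               # T z = u
    return y - z * (v @ y) / (1.0 + v @ z)     # Sherman-Morrison correction
```

Each Thomas solve is O(n), so the whole perturbed solve stays linear in the matrix size.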
The general approach is to derive an expression for the inverse using the Woodbury identity and then compute the diagonal entries of that inverse.
The computation can be done in three steps. First, apply the Sherman-Morrison-Woodbury update (http://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula, http://en.wikipedia.org/wiki/Woodbury_matrix_identity) to the matrix $M$. I will assume that $M = D + UU^T$. Applying the identity,
$ M^{-1} = D^{-1} - D^{-1}U (I + U^TD^{-1}U)^{-1}U^TD^{-1} $
Second, compute the Cholesky decomposition $ F = (I + U^TD^{-1}U)^{-1} = RR^T $, where $R$ is a lower triangular matrix. Then define $V = D^{-1}UR$, so that
$M^{-1} = D^{-1} - VV^{T}$
Finally, $\mathrm{diag}(M^{-1})$ is obtained as the difference between the diagonal of the inverse of a diagonal matrix and the diagonal of a low-rank matrix. Using MATLAB-like notation,
$[M^{-1}]_{ii} = [D^{-1}]_{ii} - V(i,:)V(i,:)^T$
The resulting computation is ${\cal O}(k^2n + k^3)$, where $k$ is the rank of the low-rank part and $n$ the size of the matrix.
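The three steps above can be sketched in NumPy (the sizes and the random test matrix are my own illustration; the algorithm is the one described above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2                      # matrix size and rank of the update
d = 1.0 + rng.random(n)          # diagonal of D (assumed invertible)
U = rng.random((n, k))
M = np.diag(d) + U @ U.T         # M = D + U U^T

# Step 1: Woodbury -- M^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1}
Dinv_U = U / d[:, None]                       # D^{-1} U, O(nk) since D is diagonal
F = np.linalg.inv(np.eye(k) + U.T @ Dinv_U)   # only a k-by-k inverse, O(k^3)

# Step 2: Cholesky F = R R^T and V = D^{-1} U R
R = np.linalg.cholesky(F)                     # lower triangular
V = Dinv_U @ R

# Step 3: diag(M^{-1}) = diag(D^{-1}) - row norms of V, O(nk) here
diag_Minv = 1.0 / d - np.sum(V * V, axis=1)
```

Note that $M$ itself is never formed or inverted at full size; only the $k \times k$ capacitance matrix is inverted, which is where the ${\cal O}(k^2 n + k^3)$ cost comes from.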
I have assumed here that all the inverses exist. However, this method might not be the most numerically stable. For stability issues with the update, see
E. L. Yip, "A note on the stability of solving a rank-p modification of a linear system by the Sherman-Morrison-Woodbury formula", SIAM Journal on Scientific and Statistical Computing, 1986.
Best Answer
Rephrasing a bit,
$${\bf B} := \sum_{i=1}^d \mbox{diag} ({\bf v}_i) \, {\bf A} \,\mbox{diag} ({\bf v}_i) = \sum_{i=1}^d {\bf v}_i {\bf v}_i^\top \circ {\bf A} = \left( \sum_{i=1}^d {\bf v}_i {\bf v}_i^\top \right) \circ {\bf A} = \left( \, {\bf V} {\bf V}^\top \right) \circ {\bf A} $$
where $\circ$ denotes the Hadamard product and ${\bf V}$ is the $n \times d$ matrix whose $i$-th column is ${\bf v}_i$. The inverse of ${\bf B}$ can be computed using standard methods.
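The identity can be checked numerically (the random symmetric $A$ and the sizes are my own illustration): summing $\mathrm{diag}({\bf v}_i)\,{\bf A}\,\mathrm{diag}({\bf v}_i)$ gives the same matrix as a single Hadamard product with ${\bf V}{\bf V}^\top$, which is then inverted with a standard routine.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 3
A = rng.random((n, n))
A = A + A.T + n * np.eye(n)      # symmetric and well conditioned (illustrative choice)
V = rng.random((n, d))           # columns are the vectors v_i

# Direct evaluation: sum_i diag(v_i) A diag(v_i)
B_direct = sum(np.diag(V[:, i]) @ A @ np.diag(V[:, i]) for i in range(d))

# Same matrix as a single Hadamard product (V V^T) o A
B = (V @ V.T) * A

Binv = np.linalg.inv(B)          # standard dense inverse (or a solve, in practice)
```

Forming $({\bf V}{\bf V}^\top) \circ {\bf A}$ costs $O(n^2 d)$ and avoids building $d$ separate $n \times n$ products.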