I should preface this with a disclaimer: this may not be the kind of insight you are looking for, and it is certainly not as insightful as the previous answers, but it is a very direct derivation requiring only the most basic knowledge of linear algebra.
Suppose you wish to solve the following for $\mathbf{x}$:
$$\left(A+\mathbf{uv}^T\right)\mathbf{x}=\mathbf{y}$$
$$A\mathbf{x}=\mathbf{y}-\mathbf{uv}^T\mathbf{x}$$
$$\mathbf{x}=A^{-1}\mathbf{y}-A^{-1}\mathbf{uv}^T\mathbf{x}$$
Notice that $\mathbf{v}^T\mathbf{x}$ is a scalar; let us call it $s$.
So the solution for $\mathbf{x}$ in terms of $s$ is:
$$\mathbf{x}=A^{-1}\mathbf{y}-A^{-1}\mathbf{u}s$$
And solving for $s$:
$$s=\mathbf{v}^T\mathbf{x}=\mathbf{v}^TA^{-1}\mathbf{y}-\mathbf{v}^TA^{-1}\mathbf{u}s$$
$$\left(1+\mathbf{v}^TA^{-1}\mathbf{u}\right)s=\mathbf{v}^TA^{-1}\mathbf{y}$$
$$s=\frac{\mathbf{v}^TA^{-1}\mathbf{y}}{1+\mathbf{v}^TA^{-1}\mathbf{u}}$$
Substituting for $s$ in the solution for $\mathbf{x}$:
$$\mathbf{x}=A^{-1}\mathbf{y}-\frac{A^{-1}\mathbf{uv}^TA^{-1}\mathbf{y}}{1+\mathbf{v}^TA^{-1}\mathbf{u}}$$
Or:
$$\mathbf{x}=\left(A^{-1}-\frac{A^{-1}\mathbf{uv}^TA^{-1}}{1+\mathbf{v}^TA^{-1}\mathbf{u}}\right)\mathbf{y}$$
So:
$$\left(A+\mathbf{uv}^T\right)^{-1}=A^{-1}-\frac{A^{-1}\mathbf{uv}^TA^{-1}}{1+\mathbf{v}^TA^{-1}\mathbf{u}}$$
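As a sanity check, this identity (it is the Sherman–Morrison formula) is easy to verify numerically, e.g. in R; the matrix and vectors below are arbitrary examples of my own.

    set.seed(1)
    n <- 5
    A <- crossprod(matrix(rnorm(n * n), n))   # a random symmetric positive definite (hence invertible) A
    u <- rnorm(n)
    v <- rnorm(n)

    lhs  <- solve(A + u %*% t(v))             # direct inverse of the rank-one update
    Ainv <- solve(A)
    rhs  <- Ainv - (Ainv %*% u %*% t(v) %*% Ainv) / drop(1 + t(v) %*% Ainv %*% u)

    max(abs(lhs - rhs))                       # tiny, i.e. zero up to rounding error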
You don't really fix this; you should simply avoid taking inverses of nearly singular matrices like the one you have defined. Even though $B$ in this case has an exact inverse, in the numerically approximate arithmetic that R performs it matters a great deal whether the matrix is close to singular. R's function rcond will give you the reciprocal condition number of $B$, or of $C^{-1}+VA^{-1}U$; if the answer is a very small number (e.g., $10^{-9}$ or less), then the computation is poorly conditioned and any answer will be rather inaccurate.
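For illustration, with a made-up nearly singular matrix (not the one from the question):

    B <- matrix(c(1, 1, 1, 1 + 1e-12), 2, 2)  # nearly singular: the two columns are almost equal
    rcond(B)                                  # on the order of 1e-13, i.e. badly conditioned
    rcond(diag(2))                            # 1 for the identity matrix, the best possible value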
To be honest, I don't understand why you are doing this at all. Finding an eigen-decomposition of the (almost singular) matrix $B$ is hard, and you gain nothing by doing it only to insert the result into the Woodbury formula. If all you know about $B$ is that it is symmetric, just calculate $A+B$ and invert that. In general, it makes very little sense to invert an arbitrary matrix by computing its eigen-decomposition; even if a matrix is symmetric (which guarantees that it can be diagonalized), why not just invert it directly? (R will do it for you easily.) Not to mention that while $B$ is almost singular, so that the inverse of its diagonal form needed by the formula is problematic, the matrix $A+B$ is not nearly as singular and can be inverted easily.
You are not saving any work by using the Woodbury identity on a generic matrix like $A+B$. It usually makes much more sense to use it when the matrix $BCD$ (in $A+BCD$) has some very special form that makes the inverses in the Woodbury identity easy to calculate: often $C$ is $1$, and $B^T$ and $D$ are vectors, or $2\times n$ matrices; then the only hard inverse in the problem is $A^{-1}$, and, for example, computing $DA^{-1}B$ amounts to solving a single system of linear equations when $B$ is a vector. With the matrix $B$ that you wrote down this does not hold: $B$ is just a generic matrix, so the Woodbury identity offers no help.
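For instance, in the rank-one case $A+\mathbf{uv}^T$ you never need $A^{-1}$ explicitly: the derivation at the top of this page turns into two linear solves with $A$. A sketch in R, with arbitrary example data:

    set.seed(2)
    n <- 10
    A <- crossprod(matrix(rnorm(n * n), n))   # an example invertible matrix
    u <- rnorm(n); v <- rnorm(n); y <- rnorm(n)

    # Solve (A + u v^T) x = y via Sherman-Morrison, using only solves with A:
    Ay <- solve(A, y)                         # A^{-1} y
    Au <- solve(A, u)                         # A^{-1} u
    x  <- Ay - Au * drop(crossprod(v, Ay)) / (1 + drop(crossprod(v, Au)))

    max(abs((A + u %*% t(v)) %*% x - y))      # small residual, up to rounding error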
Finally, it is in general not advisable to calculate the inverses of matrices explicitly. That is, the usual practice is not to compute $A^{-1}$ (or any other inverse), but instead, whenever you need $A^{-1}X$, to solve the set of systems of linear equations $AY=X$. So whenever your code contains

    solve(A) %*% X

it should really be replaced with

    solve(A, X)
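The difference is not merely stylistic: solve(A, X) does a single factorisation and tends to be both faster and more accurate than forming the explicit inverse. A small illustration with example data of my own:

    set.seed(3)
    n <- 200
    A <- crossprod(matrix(rnorm(n * n), n))   # an example matrix, typically somewhat ill-conditioned
    X <- matrix(rnorm(n * 2), n, 2)

    Y1 <- solve(A) %*% X    # explicit inverse, then a multiplication
    Y2 <- solve(A, X)       # solves A Y = X directly

    max(abs(A %*% Y1 - X))  # residual of the explicit-inverse route
    max(abs(A %*% Y2 - X))  # usually at least as small, often smaller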
Are you familiar with the standard techniques for solving systems of linear equations, such as Gaussian elimination and LU factorization, and with how they are used to compute matrix inverses? Your question can probably be better answered by the chapter on linear equations in an introductory numerical analysis textbook.
Best Answer
The arrow-shaped $n\times n$ symmetric (positive definite) matrix $A=(a_{ij})$ can be written in the form
$$
A=D+ce_1^T+e_1c^T=D+[c,e_1][e_1,c]^T=D+C\Pi C^T,
$$
where $D=\mathrm{diag}(a_{11},\ldots,a_{nn})$, $c=[0,a_{21},\ldots,a_{n1}]^T$, $e_1=[1,0,\ldots,0]^T$, $C=[c,e_1]$, and $\Pi=\begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$ (note that $\Pi^{-1}=\Pi$). Using the Woodbury formula, we have
$$
A^{-1}=(D+C\Pi C^T)^{-1}=D^{-1}-D^{-1}C(\Pi^{-1}+C^TD^{-1}C)^{-1}C^TD^{-1}.
$$
Applying the inverse of $A$ hence involves:

1. inverting the diagonal matrix $D$, which costs $O(n)$ operations;
2. forming and inverting the $2\times 2$ matrix $\Pi^{-1}+C^TD^{-1}C$, which costs $O(n)$ operations since $C$ is $n\times 2$;
3. a handful of products with $C$, $C^T$, and $D^{-1}$, each costing $O(n)$ operations.

For solving $Ax=b$, you do:

1. $z=D^{-1}b$;
2. $w=(\Pi^{-1}+C^TD^{-1}C)^{-1}C^Tz$;
3. $x=z-D^{-1}Cw$,

for a total cost of $O(n)$.
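A sketch of this solve in R; the function and variable names are mine, and the arrow is assumed to sit in the first row and column as above:

    # Solve A x = b in O(n) for the arrowhead A = D + c e1^T + e1 c^T,
    # given d = diag(A) and cvec = c(0, A[2:n, 1]).
    arrow_solve <- function(d, cvec, b) {
      n  <- length(d)
      C  <- cbind(cvec, c(1, rep(0, n - 1)))  # C = [c, e1], an n x 2 matrix
      Pi <- matrix(c(0, 1, 1, 0), 2, 2)       # the 2 x 2 swap; note Pi^{-1} = Pi
      z  <- b / d                             # z = D^{-1} b
      S  <- Pi + crossprod(C, C / d)          # S = Pi^{-1} + C^T D^{-1} C, only 2 x 2
      w  <- solve(S, crossprod(C, z))         # w = S^{-1} C^T z
      drop(z - (C %*% w) / d)                 # x = z - D^{-1} C w
    }

    # Check against a dense solve on a small example:
    set.seed(4)
    n <- 6
    d <- runif(n) + n; cvec <- c(0, rnorm(n - 1))
    A <- diag(d); A[1, ] <- A[1, ] + cvec; A[, 1] <- A[, 1] + cvec
    b <- rnorm(n)
    max(abs(arrow_solve(d, cvec, b) - solve(A, b)))  # agrees up to rounding error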
If this kind of arrow-shaped matrix is factored using the Cholesky factorisation without permutations, the triangular factors are full: in the first step, eliminating the first column introduces nonzeros in essentially every position of the matrix. Hence in this case the complexity of the elimination is $O(n^3)$.
To avoid this, it is sufficient to permute $A$ so that the arrow is directed to the south-east corner instead of the north-west one, that is, to swap the first and last rows (and, symmetrically, the first and last columns). Then you only need to eliminate the last row, which can be done in $O(n)$ operations.
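One can see the fill-in directly in R (example matrix of my own), by counting the nonzeros in the Cholesky factor before and after the permutation:

    set.seed(5)
    n <- 8
    A <- diag(runif(n) + 2 * n)               # strong diagonal, so A is positive definite
    A[1, 2:n] <- rnorm(n - 1); A[2:n, 1] <- A[1, 2:n]   # arrow in the north-west corner

    p  <- c(n, 2:(n - 1), 1)                  # swap the first and last row/column
    Ap <- A[p, p]                             # arrow now in the south-east corner

    sum(abs(chol(A))  > 1e-12)                # essentially a full upper triangle: fill-in
    sum(abs(chol(Ap)) > 1e-12)                # only about 2n nonzeros: no fill-in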