linear-algebra – Matrix Inversion Lemma with Pseudoinverses

linear-algebra, matrices, random-matrices

The utility of the Matrix Inversion Lemma has been well exploited in several questions on MO. Thus, with some hope, I'd like to field a question of my own.

Suppose we pick $n$ values $x_1,\ldots,x_n$, independently sampled from $N(0,1)$ (the mean-0, unit-variance Gaussian). Then we form the positive semidefinite matrix (of rank at most 3)
$$A = \alpha ee^T + [\cos(x_i-x_j)],$$
where $[\cos(x_i-x_j)]$ denotes the $n \times n$ matrix with $(i,j)$ entry $\cos(x_i-x_j)$, $e$ is the vector of all ones, and $\alpha > 0$ is a fixed scalar.

For $n \ge 3$, simple experiments lead one to conjecture that:
$$e^TA^\dagger e = \alpha^{-1},$$
where $A^\dagger$ is the Moore-Penrose pseudoinverse of $A$ (obtained in Matlab using the 'pinv' function).
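For reference, here is a minimal NumPy sketch of the same experiment outside Matlab; the particular values of $n$ and $\alpha$ are arbitrary choices, not part of the original setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, alpha = 8, 2.5                      # arbitrary choices for the experiment
x = rng.standard_normal(n)             # x_1, ..., x_n ~ N(0, 1)
e = np.ones(n)

# A = alpha * e e^T + [cos(x_i - x_j)]
A = alpha * np.outer(e, e) + np.cos(x[:, None] - x[None, :])

val = e @ np.linalg.pinv(A) @ e        # e^T A^+ e
print(val, 1 / alpha)                  # the two numbers agree to machine precision
```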

This should be fairly easy to prove with the right tools, such as a version of the matrix inversion lemma that accommodates rank-deficient matrices or pseudoinverses. So my question is:

How to prove the above conjecture (without too much labor, if possible)?

Best Answer

In fact, more generally, for any positive semidefinite matrix $A = \sum_{i=1}^k e_i e_i^T$ with the $e_i$'s linearly independent, we have $e_i^T B e_i = 1$ for every $i$, where $B$ is the Moore-Penrose pseudoinverse of $A$. This applies here since almost surely your matrix $A$ is of this form with $k=3$ and $e_1 = \sqrt{\alpha}\, e$: by the identity $\cos(x_i-x_j) = \cos x_i \cos x_j + \sin x_i \sin x_j$, one can take $e_2 = (\cos x_i)_i$ and $e_3 = (\sin x_i)_i$. Then $e_1^T B e_1 = 1$ reads $\alpha\, e^T A^\dagger e = 1$, which is your conjecture.
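To see this reduction concretely, here is a small NumPy check (variable names are my own, chosen for illustration) that the question's matrix equals $e_1e_1^T + e_2e_2^T + e_3e_3^T$ with the vectors above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 6, 0.7
x = rng.standard_normal(n)

# The matrix from the question: alpha * e e^T + [cos(x_i - x_j)]
A = alpha * np.ones((n, n)) + np.cos(x[:, None] - x[None, :])

# Candidate rank-one factors: e_1 = sqrt(alpha) * e, e_2 = (cos x_i), e_3 = (sin x_i)
e1 = np.sqrt(alpha) * np.ones(n)
e2 = np.cos(x)
e3 = np.sin(x)

A_decomposed = np.outer(e1, e1) + np.outer(e2, e2) + np.outer(e3, e3)
print(np.allclose(A, A_decomposed))    # True: the two constructions coincide
```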

Proof: Let $E$ be the linear span of the $e_i$'s. If I understand the notion of the Moore-Penrose pseudoinverse correctly, $B$ is described in the following way: as a linear map, $B$ is zero on the orthogonal complement of $E$ (which, $A$ being symmetric, is the kernel of $A$), and on $E$ it is the inverse of the restriction of $A$ to $E$. Let the $\beta_{i,j}$ be defined by $B e_i = \sum_j \beta_{i,j} e_j$, so that $e_i^T B e_i = \sum_j \beta_{i,j}\, e_i^T e_j$. Expressing that $A B e_i = e_i$ and expanding $A B e_i = \sum_l \big(\sum_j \beta_{i,j}\, e_l^T e_j\big) e_l$, the linear independence of the $e_l$'s lets us compare coefficients; the coefficient of $e_i$ gives $\sum_j \beta_{i,j}\, e_i^T e_j = 1$, QED.
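As a numerical sanity check on the general statement (not part of the original answer), one can draw a few random vectors, which are almost surely linearly independent, and confirm that every $e_i^T B e_i$ equals 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10, 4                            # dimension and number of rank-one terms
E = rng.standard_normal((k, n))         # rows e_1, ..., e_k, a.s. linearly independent

A = E.T @ E                             # A = sum_i e_i e_i^T
B = np.linalg.pinv(A)                   # Moore-Penrose pseudoinverse

print(np.diag(E @ B @ E.T))             # each diagonal entry e_i^T B e_i is 1 (up to rounding)
```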