Solved – Inverse covariance matrix, off-diagonal entries

covariance-matrix, matrix

Let $\Sigma$ be a covariance matrix. According to the material in this link,

If the elements of $\Sigma$ are all positive, most of the off-diagonal elements in $\Sigma^{-1}$ will be negative.

This is actually written about the correlation matrix, but the principle should be the same.

What does "most" here mean? Is there a common condition that would make all the off-diagonal elements negative?

Best Answer

The underlying intuition is quite general: because multiplying a matrix by its inverse has to produce a matrix with a lot of zeros, if the original matrix contains only positive values then obviously the inverse has to contain some negative values in order to produce those zeros. But the intuition goes wrong in making the leap from "some" to "most." The problem is that only one negative coefficient is needed in each row to make this happen.
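To see this concretely, here is a quick numerical check (a numpy sketch of my own, not part of the original answer): a $3\times 3$ covariance matrix with all positive entries whose inverse has a negative entry in every row, yet two of its six off-diagonal entries are exactly zero rather than negative.

```python
import numpy as np

# Sigma is a valid covariance matrix (symmetric, positive definite)
# whose entries are all strictly positive.
Sigma = np.array([[3., 2., 1.],
                  [2., 4., 2.],
                  [1., 2., 3.]]) / 4.

P = np.linalg.inv(Sigma)  # the precision matrix Sigma^{-1}

# Each row of P needs at least one negative entry to produce the
# zeros in Sigma @ P = I, but only 4 of its 6 off-diagonal entries
# are negative -- the other two are exactly zero.
expected = np.array([[ 2., -1.,  0.],
                     [-1.,  2., -1.],
                     [ 0., -1.,  2.]])
assert np.allclose(P, expected)
```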

As a counterexample, consider the family of $n\times n$ matrices $X_{n,\epsilon} = A_{n-1} + \epsilon 1_{n}^\prime 1_{n}$ for $\epsilon \gt 0$ and positive integers $n$ where

$$A_{n-1} = \pmatrix{ 2 & -1 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ -1 & 2 & -1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & -1 & 2 & -1 & 0 & 0 & 0 & \cdots & 0 \\ &&&&\ddots&&&&\\ 0 & \cdots & 0 & 0 & 0 & -1 & 2 & -1 & 0 \\ 0 & \cdots & 0 & 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & \cdots & 0 & 0 & 0 & 0 & 0 & -1 & 2} $$

and

$$1_{n} = (1,1,\ldots, 1)$$

has $n$ coefficients. Notice that when $0\lt\epsilon\lt 1,$ $X_{n,\epsilon}$ has only $2(n-1)$ negative coefficients (namely, $-1+\epsilon$) and the remaining $n^2 - 2n + 2 = (n-1)^2 + 1$ of them (namely, $2+\epsilon$ and $\epsilon$) are strictly positive.
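A short sketch (numpy; the function name `X` is mine) that builds $X_{n,\epsilon}$ directly from the definitions above and confirms these counts:

```python
import numpy as np

def X(n, eps):
    """The counterexample matrix X_{n,eps} = A_{n-1} + eps * (all-ones)."""
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal A_{n-1}
    return A + eps * np.ones((n, n))

n, eps = 10, 0.5
M = X(n, eps)
# 2(n-1) entries equal -1 + eps < 0; the remaining entries
# (2 + eps on the diagonal, eps elsewhere) are all positive.
assert np.sum(M < 0) == 2 * (n - 1)
assert np.sum(M > 0) == (n - 1) ** 2 + 1
```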

I chose these matrices $A_{n-1}$ because (1) they are (obviously) symmetric; (2) they are positive-definite (this is not so obvious, but it's an easy consequence of the theory of Lie Algebras in which they naturally arise); and (3) they have simple inverses with positive coefficients,

$$A_{n-1}^{-1} = \left(b_{ij}\right);\quad b_{ij} = \frac{\min(n+1-i,n+1-j)\min(i,j)}{n+1}.$$

For instance,

$$A_{3-1}^{-1} = \frac{1}{4}\pmatrix{3&2&1 \\ 2 & 4&2\\1&2&3}.$$

This is easy to prove simply by multiplying the two matrices and checking that the result is the $n\times n$ identity matrix.
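Here is that multiplication carried out numerically (a numpy sketch of mine; function names are my own): build $A_{n-1}$ and the claimed inverse from the closed form, then check that their product is the identity for a range of sizes.

```python
import numpy as np

def A(n):
    """Tridiagonal n x n matrix A_{n-1}: 2 on the diagonal, -1 beside it."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def A_inv(n):
    """Closed-form inverse: b_ij = min(n+1-i, n+1-j) * min(i, j) / (n+1)."""
    i, j = np.indices((n, n)) + 1  # 1-based indices as in the formula
    return np.minimum(n + 1 - i, n + 1 - j) * np.minimum(i, j) / (n + 1)

for n in range(1, 20):
    assert np.allclose(A(n) @ A_inv(n), np.eye(n))
```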

The Sherman-Morrison formula asserts

$$X_{n,\epsilon}^{-1} = A_{n-1}^{-1} - \color{gray}{\frac{\epsilon}{1 + \epsilon\, 1_{n} A_{n-1}^{-1} 1_{n}^\prime} \left(A_{n-1}^{-1} 1_{n}^\prime 1_{n} A_{n-1}^{-1}\right)} = A_{n-1}^{-1} + \color{gray}{O(\epsilon)}.\tag{*}$$

Because the smallest entry in $A_{n-1}^{-1}$ is $1/(n+1),$ we can easily find values $0\lt \epsilon \lt 1$ small enough to make every entry in the subtracted (gray) part of $(*)$ less than $1/(n+1),$ which leaves all the entries of $X_{n,\epsilon}^{-1}$ positive. (For instance, $0 \lt \epsilon\lt 1/(2n^3)$ will serve.)
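The following numpy sketch (mine, not the original answer's) applies the Sherman-Morrison update with $\epsilon$ strictly below $1/(2n^3)$ and confirms that it matches direct inversion, that every entry of $X_{n,\epsilon}^{-1}$ is positive, and that the eigenvalues remain positive:

```python
import numpy as np

n = 8
eps = 1 / (4 * n ** 3)  # strictly below the 1/(2 n^3) bound
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
i, j = np.indices((n, n)) + 1
B = np.minimum(n + 1 - i, n + 1 - j) * np.minimum(i, j) / (n + 1)  # A^{-1}
ones = np.ones(n)

# Sherman-Morrison: (A + eps*1'1)^{-1} = B - eps/(1 + eps*1 B 1') (B1')(1B)
X_inv = B - (eps / (1 + eps * ones @ B @ ones)) * np.outer(B @ ones, B @ ones)

assert np.allclose(X_inv, np.linalg.inv(A + eps * np.ones((n, n))))
assert np.all(X_inv > 0)                      # every entry strictly positive
assert np.all(np.linalg.eigvalsh(X_inv) > 0)  # still positive definite
```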

Obviously $X_{n,\epsilon}^{-1}$ is symmetric. For sufficiently small positive $\epsilon$ its eigenvalues must be close to those of $A_{n-1}^{-1},$ all of which are positive (because $A_{n-1}$ itself is positive definite), which makes all such $X_{n,\epsilon}^{-1}$ legitimate covariance matrices.

We may conclude

For all $n\ge 1$ and (for each $n$) sufficiently small $\epsilon\gt 0,$ the matrix $X_{n,\epsilon}^{-1}$ is a covariance matrix with strictly positive entries and its inverse $X_{n,\epsilon}$ has $(n-1)^2 + 1$ strictly positive entries, too.

Thus, as $n$ grows large, the proportion of positive entries in $X_{n,\epsilon}$ becomes arbitrarily close to $1,$ because

$$\frac{(n-1)^2 + 1}{n^2} \gt \left(1-\frac{1}{n}\right)^2 \to 1.$$
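A few values of $n$ (my own, computed directly from the count above) make the convergence tangible:

```python
# Proportion of strictly positive entries in X_{n,eps}: ((n-1)^2 + 1) / n^2.
for n in (10, 100, 1000):
    print(n, ((n - 1) ** 2 + 1) / n ** 2)
# 10 0.82
# 100 0.9802
# 1000 0.998002
```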
