Making any matrix $A$ invertible by changing exactly $\text{size}(A)$ entries

determinant, linear algebra

Can a nonsingular square matrix be made singular by changing exactly one element or vice versa?

After verifying my solution to the problem linked above, I am wondering whether the following statement, which is similar to that problem, is true.

Given any non-invertible $n \times n$ matrix $A$, is it possible to make $A$ invertible by changing exactly

  1. $n-1$ entries?
  2. $n$ entries?

The answer to $(1)$ is no. Take the all-zero matrix: changing only $n-1$ entries leaves at least one row untouched, so that row still consists entirely of $0$'s and the matrix remains singular.

But what about $(2)$?

Best Answer

NB: I'm assuming that this was a question about matrices with entries that are real (or maybe complex) numbers.

If you have an all-zero matrix, you need to alter at least $n$ entries to get a non-zero determinant, because the determinant is the sum of all products of exactly $n$ elements taken one-from-each-row-and-column (albeit a sum with signs). If only $n-1$ entries are nonzero, then each summand in the determinant is zero, so the det is zero, and the matrix is not invertible.
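As a quick sanity check, here is a minimal pure-Python sketch of this argument (the particular $3 \times 3$ matrix and changed entries are just an illustration): with only $n - 1 = 2$ nonzero entries, every one-from-each-row-and-column product in the determinant hits a zero entry.

```python
from itertools import permutations

# The 3x3 all-zero matrix with only n - 1 = 2 entries changed.
A = [[0, 5, 0], [0, 0, 0], [-7, 0, 0]]

# Each determinant term is a product of 3 entries, one from each row and
# column. Every such product must include at least one untouched zero
# entry (here, anything from the middle row), so each term vanishes.
terms = []
for p in permutations(range(3)):
    prod = 1
    for i in range(3):
        prod *= A[i][p[i]]
    terms.append(prod)

print(terms)  # all 6 unsigned products are 0, so the determinant is 0
assert all(t == 0 for t in terms)
```

Since every term is zero even before signs are attached, no choice of the two altered values can produce a nonzero determinant.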

If you're allowed to change $n$ entries, then the answer is yes. What you do is pick a really large number $K$ (large relative to the other entries in your matrix --- details below) and alter all the diagonal elements to be $K$. For large enough $K$, this will have nonzero determinant.

I think this is clear for $n = 1$, so I'll consider only $n > 1$ from now on.

The "How large?" question

The determinant of an $n \times n$ matrix $A$ can be computed by taking a sum of terms, where each term is produced by picking $n$ entries of $A$, one from each row and column, and computing their product, $p$, and then multiplying by $\pm 1$. How many such terms are there? It turns out that there are $n!$ of them. For a $2 \times 2$ matrix, for instance, the terms are $a_{11}a_{22}$ and $-a_{21}a_{12}$, and there are exactly $2! = 2$ of these; if you write out the $3 \times 3$ formula, you'll find you have $6$ terms, and the pattern will become obvious. I'm being casual about the plusses and minuses because they're not going to matter much in what follows.
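The sum described above (the Leibniz formula) is easy to write down directly; here is a small pure-Python sketch, where the helper names `sign` and `det_leibniz` are my own:

```python
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation: +1 or -1 depending on its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    """Determinant as the signed sum of all n! one-per-row-and-column products."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

A = [[1, 2], [3, 4]]
print(det_leibniz(A))  # -2, i.e. a11*a22 - a21*a12 = 4 - 6
assert len(list(permutations(range(3)))) == factorial(3) == 6
```

For the $2 \times 2$ case this reproduces exactly the two terms $a_{11}a_{22}$ and $-a_{21}a_{12}$ from the text.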

Let's say that $A'$ is the matrix $A$ with the diagonals all replaced by the number $K$, for some $K$. I'm going to divide the terms in the determinant of $A'$ into two piles:

  1. The single term $a_{11} \cdot a_{22} \cdots a_{nn} = K \cdot K \cdots K = K^n$

and

  2. All other terms, which have at most $n-1$ $K$s in their products.

I want to estimate how large the sum of terms of type $2$ can be.

First, there are $n! - 1$ of these terms.

Second, the sum of these terms is no larger than the sum of the absolute values of the terms. (I'm using the triangle inequality, $|a + b| \le |a| + |b|$, over and over again.) So letting $b_{ij} = |a_{ij}|$, each term's absolute value is a product of some number of $b_{ij}$s and some number of $K$s, with at most $n-1$ of the $K$s. Let's let $$ M = \max_{i, j = 1, \ldots, n} b_{ij} $$ be the largest of the numbers $b_{ij}$, i.e., the largest absolute value of any entry of $A$. And let's agree to choose $K$ larger than $M$, although exactly how large is still to be decided.

Then each of our $n! - 1$ terms has no more than $n-1$ copies of $K$: if it contains $j \le n-1$ of them, its absolute value is at most $K^j M^{n-j}$, and since $K > M$ we may bound all but one of the $M$ factors by $K$. Hence each term is no more than $$ T = K^{n-1} M. $$
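Each type-2 term with $j \le n-1$ factors of $K$ has absolute value at most $K^j M^{n-j}$, which is bounded by $K^{n-1} M$ whenever $K > M$. A quick numerical spot-check of that bound (with arbitrary illustrative values of $n$, $K$, $M$):

```python
# Spot-check: for K > M >= 0 and j <= n - 1, we have K**j * M**(n - j) <= K**(n - 1) * M.
n, K, M = 4, 10.0, 3.0
for j in range(n):  # j = number of K factors in a type-2 term, 0 <= j <= n - 1
    term_bound = K ** j * M ** (n - j)
    assert term_bound <= K ** (n - 1) * M
```

Equality occurs at $j = n-1$, so the bound $T = K^{n-1}M$ is tight.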

That means that the sum of all type-2 terms is at most $$ (n! - 1)T = (n! - 1) K^{n-1} M. $$ The remaining term is $K^n$ (from the product of the altered diagonal entries).

Now I'm going to tell you how to pick $K$: $$ K = 2 (n! - 1) M. $$ I chose that carefully so that the type-1 term is at least twice as big as the biggest possible sum of all the other terms. (One edge case: if $M = 0$, then $A$ is the all-zero matrix and this formula gives $K = 0$; take $K = 1$ instead, which turns $A'$ into the identity matrix.)

Now earlier, I promised I was going to pick $K > M$. Does this formula do that? Well, remember that I said I was giving a proof only for $n > 1$, so $n! \ge 2$, so $n! - 1 \ge 1$, hence $2(n! - 1) \ge 2$. Hence my number $K$ is at least $2M$, hence greater than $M$, so I'm OK.

Now let's look at the determinant. It's the sum of all the terms. The (single) type-1 term is $$ K^n $$ which I'm going to say is $$ K^{n-1} \cdot K = K^{n-1} \cdot (2 (n! - 1) M). \tag{1} $$ Hold that thought.

The sum of all the type-2 terms is no larger than $$ K^{n-1} (n! - 1)M $$ as I showed earlier. Letting $U = K^{n-1}(n! - 1)M$, the type-1 term is $2U$ and the sum of all the other terms is no larger than $U$ in absolute value. So even if all the type-2 terms ended up negative, we'd still have at least $2U - U = U > 0$, i.e., a nonzero determinant.
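The whole construction can be checked numerically; here is a self-contained sketch (the singular $3 \times 3$ matrix below is just an example, and `det` is my own Leibniz-formula helper):

```python
from itertools import permutations
from math import factorial

def det(A):
    """Determinant via the Leibniz expansion: signed sum of n! products."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

# A singular 3x3 matrix (row 3 = row 1 + row 2), so det(A) = 0.
A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
n = 3
assert det(A) == 0

# Choose K = 2(n! - 1)M and overwrite the n diagonal entries.
M = max(abs(A[i][j]) for i in range(n) for j in range(n))
K = 2 * (factorial(n) - 1) * M  # = 2 * 5 * 9 = 90
Ap = [row[:] for row in A]
for i in range(n):
    Ap[i][i] = K

# The type-1 term K^n dominates: det(A') >= U = K^(n-1) * (n! - 1) * M > 0.
U = K ** (n - 1) * (factorial(n) - 1) * M
assert det(Ap) >= U > 0
print(det(Ap), U)
```

So after changing exactly $n = 3$ entries, the matrix really is invertible, and its determinant exceeds the guaranteed lower bound $U$.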

All of this is essentially a proof of @Damien's remark about diagonally dominant matrices, which might not be familiar to someone who'd ask this question.