[Math] If the determinant of a matrix goes to infinity, does it mean it has no inverse

determinant, linear algebra, matrices, numerical linear algebra

Context

I have a linear time-invariant (single-input, single-output) system in state space representation (https://en.wikipedia.org/wiki/State-space_representation#Linear_systems):

$$ \mathbf{x'}(t) = \mathbf{A}\,\mathbf{x}(t) + \mathbf{B}\,u(t)$$

In which:

  • $\mathbf{x}(t)$ is an $n$-dimensional vector of the system's state variables;

  • the output $y(t)$ is the last component of $\mathbf{x}(t)$, i.e., $x_{n} = y$;

  • $\mathbf{x'}(t)$ is the $n$-dimensional vector of its first-order derivatives;

  • the $n \times n$ square matrix $\mathbf{A}$ is sparse: its nonzero elements lie in the first column, the first row, and the main diagonal;

$$ \mathbf{A} =
\begin{bmatrix}
{a}_{11} & {a}_{12} & {a}_{13} & \cdots & {a}_{1n} \\
{a}_{21} & {a}_{22} & 0 & \cdots & 0 \\
{a}_{31} & 0 & {a}_{33} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
{a}_{n1} & 0 & 0 & \cdots & {a}_{nn} \\
\end{bmatrix}
$$

  • the input $u(t)$ is a scalar;

  • $\mathbf{B}$ is a sparse $n$-dimensional vector whose only nonzero element is the first.
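As an aside, this sparsity pattern (nonzeros confined to the first row, first column, and main diagonal) is usually called an arrowhead matrix. A minimal sketch of the structure, with made-up values standing in for the actual coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # hypothetical order; the actual n is not given in the question

# Arrowhead pattern: main diagonal, first row, first column.
A = np.diag(rng.uniform(-2.0, -0.5, n))    # diagonal entries a_ii
A[0, 1:] = rng.uniform(-1.0, 1.0, n - 1)   # first row a_1j
A[1:, 0] = rng.uniform(-1.0, 1.0, n - 1)   # first column a_i1

# B has a single nonzero entry, the first.
B = np.zeros(n)
B[0] = 1.0

print(A.round(2))
```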

To solve the state-space equation, I thought of integrating it row by row:
$$\mathbf{x}(t) = {e}^{\mathbf{A}t}\mathbf{x}(0) + \int_{0}^{t}{e}^{\mathbf{A}(t-\tau)}\mathbf{B}u(\tau)d\tau $$
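For a concrete sense of this formula, here is a sketch with a hypothetical 2×2 system and constant input $u(t) = 1$ (all numbers made up): for constant $u$ and invertible $\mathbf{A}$, the convolution integral collapses to $\mathbf{A}^{-1}({e}^{\mathbf{A}t} - I)\mathbf{B}$, which can be cross-checked against a direct trapezoidal discretization of the integral.

```python
import numpy as np

# Hypothetical 2x2 system (not the asker's actual A and B).
A = np.array([[-1.0, 0.5],
              [0.3, -2.0]])
B = np.array([1.0, 0.0])
x0 = np.array([1.0, -0.5])
t = 1.5

# e^{As} via the eigendecomposition A = V diag(w) V^{-1};
# this A happens to have distinct real eigenvalues, so this works.
w, V = np.linalg.eig(A)
V_inv = np.linalg.inv(V)
def expA(s):
    return (V * np.exp(w * s)) @ V_inv

# Constant u = 1: the integral reduces to A^{-1} (e^{At} - I) B.
x_exact = expA(t) @ x0 + np.linalg.inv(A) @ ((expA(t) - np.eye(2)) @ B)

# Cross-check with a fine trapezoidal rule on the convolution integral.
taus = np.linspace(0.0, t, 4001)
vals = np.stack([expA(t - tau) @ B for tau in taus])
h = taus[1] - taus[0]
integral = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
x_quad = expA(t) @ x0 + integral

print(np.allclose(x_exact, x_quad, atol=1e-6))
```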

That would be very simple if ${\mathbf{A}}$ were a diagonal matrix: in that case ${e}^{\mathbf{A}t}$ is also diagonal, and each of its diagonal entries is ${e}^{{a}_{ii}t}$, where ${a}_{ii}$ are the diagonal entries of ${\mathbf{A}}$.
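The diagonal case is easy to verify numerically: the elementwise formula for ${e}^{\mathbf{A}t}$ agrees with the truncated power series $\sum_k (\mathbf{A}t)^k / k!$. A sketch with toy diagonal entries:

```python
import numpy as np

a = np.array([-1.0, 0.5, -2.0])  # hypothetical diagonal entries a_ii
t = 0.7
A = np.diag(a)

# Elementwise formula for a diagonal matrix.
expAt_diag = np.diag(np.exp(a * t))

# Truncated power series sum_k (A t)^k / k! for comparison.
S = np.eye(3)
term = np.eye(3)
for k in range(1, 30):
    term = term @ (A * t) / k
    S = S + term

print(np.allclose(expAt_diag, S))
```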


The question arises

My question arose when I tried to diagonalize ${\mathbf{A}}$…

I assumed that for ${\mathbf{A}}$ to be diagonalizable, it must be invertible, i.e., $$\det({\mathbf{A}}) \neq 0$$

I went to check the determinant of ${\mathbf{A}}$, and I got:
$$\det(\mathbf{A}) = 1.96 \times {10}^{16}$$

So, I am wondering: is ${\mathbf{A}}$ invertible? Can I diagonalize it?
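For what it is worth, invertibility and diagonalizability can be checked separately, and neither implies the other: $\operatorname{diag}(0, 1)$ is singular yet already diagonal. A sketch of both checks, plus `slogdet`, which handles a determinant on the order of $1.96 \times 10^{16}$ without overflow:

```python
import numpy as np

# A singular matrix that is nevertheless diagonalizable.
M = np.diag([0.0, 1.0])
print(np.linalg.det(M))              # 0 -> not invertible

# Diagonalizable iff the eigenvector matrix has full rank.
w, V = np.linalg.eig(M)
print(np.linalg.matrix_rank(V))      # 2 -> diagonalizable anyway

# slogdet returns the sign and log|det|, avoiding overflow for huge
# determinants; cond tells you how trustworthy the inverse is.
A = np.diag(np.full(60, 2.0))        # det = 2^60, yet harmless
sign, logdet = np.linalg.slogdet(A)
print(sign, logdet)                  # 1.0 and 60*log(2)
print(np.linalg.cond(A))             # ~1: perfectly conditioned
```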


Why do I ask?

Suppose that the determinant of ${\mathbf{A}}$ goes to infinity. Since $\det({\mathbf{A}^{-1}}) = 1/\det({\mathbf{A}})$, the determinant of its inverse will go to zero.

As if:
$$\det({\mathbf{A}}) = \infty $$
then
$$ \det({\mathbf{A}^{-1}}) = 0 $$

Therefore, the inverse of ${\mathbf{A}}$ would itself not be invertible (this apparent contradiction is what is bugging me).

A note:

Because the determinant is so large, I expect the inverse to have near-zero elements, which could cause numerical errors.
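This worry can be separated from invertibility: the size of the determinant by itself says little about numerical safety. As an illustration (a made-up matrix, not the actual $\mathbf{A}$), $2I$ in dimension 54 has a determinant of the same order as $1.96 \times 10^{16}$, yet its inverse is simply $0.5\,I$, and the condition number, which is the quantity that actually governs numerical error, equals 1:

```python
import numpy as np

# 2*I in dimension 54: det = 2^54 ~ 1.8e16, the same order of
# magnitude as the determinant reported above.
A = 2.0 * np.eye(54)
print(np.linalg.det(A))

Ainv = np.linalg.inv(A)
print(Ainv[0, 0])                     # 0.5 -- nowhere near zero

# cond(A) = 1 means inversion is numerically as safe as it gets.
print(np.linalg.cond(A))

# det(A) * det(A^{-1}) = 1 for any invertible A: a huge det(A)
# only makes det(A^{-1}) small, not A^{-1} itself degenerate.
print(np.linalg.det(A) * np.linalg.det(Ainv))
```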

Best Answer

I am interpreting the matrix $A$ as a sequence of matrices $A_j$ whose limit is taken entrywise, and likewise its inverse, call it $B_j = A_j^{-1}$.

Consider $A_j=\left(\begin{smallmatrix}j&0\\0&1\end{smallmatrix}\right)$. We have $\det(A_j)=j$, which approaches infinity as $j\to\infty$. However, $A_j^{-1}=\left(\begin{smallmatrix}\frac{1}{j}&0\\0&1\end{smallmatrix}\right)$, which approaches $\left(\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\right)$. This limit exists, and indeed has determinant $0$.

This is not a contradiction; each individual matrix in the sequence has an inverse. The limiting matrix of the $A_j$ sequence doesn't exist, as its entries would no longer be real numbers. Hence the fact that the limiting $B_j$ matrix isn't invertible doesn't matter.
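A quick numerical illustration of this answer, using the same $A_j$ as above: the determinant grows without bound while the inverse converges entrywise to $\operatorname{diag}(0, 1)$, whose determinant is $0$.

```python
import numpy as np

inv_top_left = []
for j in [1e1, 1e4, 1e8]:
    Aj = np.diag([j, 1.0])
    print(j, np.linalg.det(Aj))       # det(A_j) = j, unbounded
    inv_top_left.append(np.linalg.inv(Aj)[0, 0])

print(inv_top_left)                   # 1/j -> 0

# The entrywise limit of the inverses is singular.
limit = np.diag([0.0, 1.0])
print(np.linalg.det(limit))
```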
