Proof by induction with an $n \times n$ matrix

induction, linear algebra, matrices

The matrix $A \in \mathbb{R}^{n\times n}$ is of the form
\begin{equation*}
A = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
& & \ddots & \ddots & 0 \\
\vdots & & & \ddots & 1 \\
0 & \cdots & & \cdots & 0 \\
\end{pmatrix}
\end{equation*}

Now I want to compute $e^{tA} = \sum_{k=0}^{\infty} \frac{1}{k!} (tA)^{k}$.

I observed that $A^{2}$ is equal to the matrix $A$ with the "diagonal" of ones moved up by one:

\begin{equation*}
A^{2} = \begin{pmatrix}
0 & 0 & 1 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
& & \ddots & \ddots & 1 \\
\vdots & & & \ddots & 0 \\
0 & \cdots & & \cdots & 0 \\
\end{pmatrix}
\end{equation*}

And $A^{3}$ is equal to the matrix $A$ with the "diagonal" of ones moved up by two.
So to compute $e^{tA}$ I want to prove that $A^{n}$ is equal to the matrix $A$ with the "diagonal" of ones moved up $n-1$ times, which gives $A^{n} = 0$. I have tried to prove it by induction.

So the claim is: if $A \in \mathbb{R}^{n\times n}$ is of the above form, then $A^{n} = 0$.

For $n = 2$, the $2 \times 2$ matrix is equal to:
\begin{equation*}
A =
\begin{pmatrix}
0 & 1 \\
0 & 0
\end{pmatrix}
\end{equation*}

And $A^{2} = 0$, so the claim holds for $n = 2$.
But I don't know how to do the induction step in this case.

Other ways to compute $e^{tA}$ with this matrix A are also welcome.

Best Answer

If you just want to prove that $A^2$ is the matrix $A$ with the diagonal "moved up", you actually don't need induction; you can calculate it directly. What can be proven by induction, however, is the following stronger statement, which describes exactly where the diagonal of ones lands in every power $A^p$. Formally this can be written as:

Lemma: $$\forall p \in \{1, 2, \ldots\},\ \forall (i,j) \in \{1,\ldots,n\}^2, \quad (A^p)_{i,j} = \delta_{i,j+p},$$ where $A$ is assumed to be the $n\times n$ matrix defined by $A_{i,j} = \delta_{i,j+1}$ and $\delta$ is the Kronecker delta. (Note that this $A$ has its ones one step below the main diagonal, i.e. it is the transpose of the matrix displayed in the question; since $(A^\top)^p = (A^p)^\top$, the conclusion $A^n = 0$ carries over unchanged.)

(Check that this is consistent with what you are trying to prove: for $p = n$ there is no pair $(i,j)$ with $1 \le i,j \le n$ and $i = j + n$, so every entry $\delta_{i,j+n}$ vanishes and $A^{n} = 0$.)

Proof (by induction):

  • Base case: The statement is true for $p=1$ by definition of $A$, so the base case is done. Let's prove the case $p=2$ anyway, for reference: given that $A_{i,j} = \delta_{i,j+1}$, we show that $(A^2)_{i,j} = \delta_{i,j+2}$.

$$\forall (i,j) \in \{1,\ldots,n\}^2, \quad (A^2)_{i,j} = (A \times A)_{i,j} := \sum_{k=1}^n A_{i,k}A_{k,j} $$

Substituting the expression for $A$, this gives

$$(A^2)_{i,j} = \sum_{k=1}^n \delta_{i,k+1} \delta_{k,j+1}$$

The only term in this sum that can be non-zero is the one whose index $k$ satisfies $i=k+1$ and $k = j+1$ simultaneously, i.e. $k = i-1 = j+1$. So the sum has exactly one non-zero term precisely when $i = j+2$, and that term equals $1$ by definition of the Kronecker delta. Therefore

$$ (A^2)_{i,j} = \delta_{i,j+2} $$

  • Induction step: Suppose that for some integer $p\ge 1$ we have $(A^p)_{i,j} = \delta_{i,j+p}$ for all $(i,j)$ (the base case shows that such a $p$ exists); we want to prove that $(A^{p+1})_{i,j} = \delta_{i,j+p+1}$. Just as before:

$$\forall (i,j) \in \{1,\ldots,n\}^2, \quad (A^{p+1})_{i,j} = (A^p \times A)_{i,j} := \sum_{k=1}^n (A^p)_{i,k}A_{k,j} $$

Using the induction hypothesis, we get

$$(A^{p+1})_{i,j} = \sum_{k=1}^n \delta_{i,k+p} \delta_{k,j+1}$$

Reasoning as before, at most one term of this sum is non-zero, namely the one with index $k = i-p = j+1$, which exists exactly when $i = j + p + 1$. This gives

$$(A^{p+1})_{i,j} = \delta_{i,j+p+1},$$

which is what we wanted to prove and completes the induction.
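As for the original goal: the lemma gives $A^n = 0$, so the exponential series from the question terminates, $e^{tA} = \sum_{k=0}^{n-1} \frac{t^k}{k!} A^k$, and the lemma pins down every entry. For the superdiagonal matrix of the question, $(A^k)_{i,j} = 1$ exactly when $j = i + k$, so $(e^{tA})_{i,j} = t^{j-i}/(j-i)!$ for $j \ge i$ and $0$ otherwise. A small sketch checking both claims numerically (illustrative code, not part of the original answer; `exp_tA` is a made-up helper name):

```python
import math
import numpy as np

def exp_tA(t, n):
    """e^{tA} for the n x n shift matrix A via the finite series,
    valid because A^n = 0 (the lemma above)."""
    A = np.eye(n, k=1)          # ones on the first superdiagonal
    result = np.zeros((n, n))
    term = np.eye(n)            # (tA)^0 / 0!
    for k in range(n):          # series terminates: A^k = 0 for k >= n
        result += term
        term = term @ (t * A) / (k + 1)
    return result

# Entry-wise closed form: (e^{tA})_{i,j} = t^{j-i} / (j-i)! for j >= i,
# and 0 below the diagonal (0-based indices here).
t, n = 2.0, 4
E = exp_tA(t, n)
closed = np.array([[t**(j - i) / math.factorial(j - i) if j >= i else 0.0
                    for j in range(n)] for i in range(n)])
assert np.allclose(E, closed)
```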