Linear Algebra – Orthogonal Matrices and Matrix Exponential Convergence

exponential function, exponentiation, linear algebra, matrices, matrix equations

Part (a)

For which 2×2 orthogonal matrices A does
$\large e^A=I+\frac{A}{1!}+\frac{A^2}{2!}+\cdots$
converge?

Part (b)

For what A does the series converge to an orthogonal matrix?

My work:

Let A be 2×2 and orthogonal. Then $A^TA = AA^T = I$, which implies that A is normal. Over the ground field $\mathbb{C}$, A is therefore unitarily diagonalizable.

We can write $A=QDQ^*$, where Q is unitary and D is diagonal with the eigenvalues of A on the diagonal. Moreover, since A is orthogonal, each eigenvalue has modulus 1.

Now $$e^A=I+\frac{A}{1!}+\frac{A^2}{2!}+\cdots$$

$$ e^A=I+\frac{QDQ^*}{1!}+\frac{QD^2Q^*}{2!}+\cdots$$

$$ e^A= Q\left(I+\frac{D}{1!}+\frac{D^2}{2!}+\cdots\right)Q^*$$

$$ e^A= Qe^DQ^*,$$

where $e^D$ is again diagonal.
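As a quick numerical sanity check of this factorization (only a sketch, assuming numpy and scipy are available; the rotation matrix below is just one convenient choice of a 2×2 orthogonal A):

```python
import numpy as np
from scipy.linalg import expm   # reference implementation of e^A

# A concrete 2x2 orthogonal matrix: rotation by an angle theta.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# A is normal, so it is unitarily diagonalizable over C: A = Q D Q*.
eigvals, Q = np.linalg.eig(A)
print(np.abs(eigvals))                          # both eigenvalues have modulus 1
print(np.allclose(Q.conj().T @ Q, np.eye(2)))   # Q is unitary (up to rounding)

# e^A = Q e^D Q*, with e^D diagonal.
rhs = Q @ np.diag(np.exp(eigvals)) @ Q.conj().T
print(np.allclose(expm(A), rhs))                # True
```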

What can I say from here? I know that online sources such as Wikipedia and Wolfram MathWorld state, without proof or extended discussion, that the matrix exponential is well defined and converges for every square matrix. If that is simply a fact, it seems a little strange that my problem statement asks "for which orthogonal 2×2 matrices A does $e^A$ converge". Is there an important point that I am overlooking? Or can I really just state that the matrix exponential converges for every square matrix A, hence it is well defined and converges for every 2×2 orthogonal matrix A?
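For what it's worth, a numerical sanity check of that convergence claim is easy to run; it is not a proof, just an illustration (assuming numpy/scipy, with `scipy.linalg.expm` used only as a reference value):

```python
import numpy as np
from scipy.linalg import expm

def exp_partial_sum(A, num_terms):
    """Partial sum I + A/1! + ... + A^num_terms / num_terms! of the series."""
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, num_terms + 1):
        term = term @ A / k          # now holds A^k / k!
        S = S + term
    return S

# A 2x2 orthogonal matrix (rotation by 90 degrees).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

for n in (5, 10, 20):
    print(n, np.linalg.norm(exp_partial_sum(A, n) - expm(A)))
# the error drops rapidly toward 0 as more terms are included
```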

Any suggestions and hints for how to finish part (a) and how to start on part (b) are welcome.

Thanks,

Best Answer

I know that online sources such as Wikipedia and Wolfram just state without any proof or extended discussions that the matrix exponential is well-defined and converges for any square matrix.

Every matrix has an element of maximal size (obviously, if anything can cause divergence, it's that one); let its absolute value be $M$. So let us construct an $n\times n$ matrix $S$ whose every single element is $M$. Then $S^k=\Big(n^{k-1}M^k\Big)_{n\times n}$, and each element of $A^k$ lies between $\pm\,n^{k-1}M^k$. But $e^S\approx\Big(\dfrac{e^{nM}}{n}\Big)_{n\times n}$, so every element of $e^A$ is certainly bounded. However, divergence could still theoretically happen if at least one such element (not necessarily always the same one) were to oscillate freely inside a given range, without ever converging to any particular value within that interval. But this is not possible: the entries of the $k$-th term of the series are trapped between $\pm\,\dfrac{n^{k-1}M^k}{k!}$, and these bounds shrink so fast that $\sum_{k\ge1}\dfrac{n^{k-1}M^k}{k!}=\dfrac{e^{nM}-1}{n}$ is finite, so every entry of the series converges absolutely.
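
Purely as an illustration of that bound (not part of the argument), here is a small numpy sketch: for a sample 2×2 orthogonal A, the entries of the $k$-th term $A^k/k!$ stay within $\pm\,n^{k-1}M^k/k!$, which shrinks rapidly with $k$.

```python
import numpy as np
from math import factorial

# A sample 2x2 orthogonal matrix; M is its element of maximal absolute value.
A = np.array([[0.6, -0.8],
              [0.8,  0.6]])
n = A.shape[0]
M = np.abs(A).max()

term = np.eye(n)
for k in range(1, 16):
    term = term @ A / k                       # entries of A^k / k!
    bound = n ** (k - 1) * M ** k / factorial(k)
    assert np.abs(term).max() <= bound + 1e-12
    print(k, np.abs(term).max(), bound)       # the bound shrinks rapidly with k
```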