[Math] Computing the state transition matrix

control theory, linear algebra, linear-control, matrix-calculus, ordinary differential equations

Let $$\pmatrix{\dot x_1\\ \dot x_2\\ \dot x_3} = \underbrace{\pmatrix{0 & 3 & 2\cos(7t)\\-3 & 0 & -2\sin(7t)\\-2\cos(7t) & 2\sin(7t) & 0}}_{A(t)}\underbrace{\pmatrix{x_1(t)\\x_2(t)\\x_3(t)}}_{X(t)}.$$ Find the state transition matrix $\phi(t,0)$ for the system.

I know how to handle this if the matrix $A$ is time-invariant. It is as simple as computing $\phi(t,0) = e^{At}$, which can be evaluated via the eigenvalues of $A$ or by putting $A$ into real Jordan form.
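For reference, here is a minimal sketch of that time-invariant computation (my own illustration, assuming NumPy/SciPy are available; the frozen matrix $A(0)$ is used purely as an example constant matrix):

```python
# Time-invariant case: for constant A, Phi(t, 0) = expm(A * t).
import numpy as np
from scipy.linalg import expm

A0 = np.array([[ 0.0, 3.0, 2.0],
               [-3.0, 0.0, 0.0],
               [-2.0, 0.0, 0.0]])   # A(t) from the question, evaluated at t = 0

t = 1.5
Phi = expm(A0 * t)                  # state transition matrix e^{A t} for a constant A
print(Phi)
```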

However, here we have a time-varying matrix $A(t)$. Initially I thought I could simply reuse the $e^{At}$ formula, but my first hint that this would not work was that MATLAB and Mathematica spit out a gigantic mess.

Can anyone provide a hint on what needs to be done to compute the state transition matrix for a time-varying system such as this? More specifically, is there a more intuitive method for finding the state transition matrix of a system like this than the Peano-Baker series?

Best Answer

If $A(t_1)$ and $A(t_2)$ commute for all $t_1$ and $t_2$, i.e. $A(t_1)\,A(t_2) = A(t_2)\,A(t_1)$, then you can use

$$ \Phi(t,0) = e^{\int_0^t A(\tau)\,d\tau} = e^{B(t)} $$

which indeed reduces to $e^{A\,t}$ in the time-invariant case. To see why this works, one can use the Taylor expansion of the matrix exponential

$$ e^{B(t)} = I + B(t) + \frac{1}{2!} B(t)^2 + \frac{1}{3!} B(t)^3 + \cdots $$

For $e^{B(t)}$ to be the state transition matrix, its time derivative should equal $A(t)\,\Phi(t,0)$. Taking the time derivative of the Taylor expansion gives

$$ \frac{d}{dt}e^{B(t)} = \dot{B}(t) + \frac{1}{2!} \left(\dot{B}(t)\,B(t) + B(t)\,\dot{B}(t)\right) + \frac{1}{3!} \left(\dot{B}(t)\,B(t)^2 + B(t)\,\dot{B}(t)\,B(t) + B(t)^2\,\dot{B}(t)\right) + \cdots $$

Using that $\dot{B}(t) = A(t)$ and that $B(t)$ can be seen as a linear combination of $A(\tau)$ for all $\tau\in[0,t]$, it follows that if $A(t_1)$ and $A(t_2)$ commute for all $t_1$ and $t_2$, then $A(t)$ and $B(t)$ commute as well, since $A(t)\,B(t) = \int_0^t A(t)\,A(\tau)\,d\tau = \int_0^t A(\tau)\,A(t)\,d\tau = B(t)\,A(t)$. This allows $\dot{B}(t) = A(t)$ to be factored out of every term of the derivative of the Taylor series

$$ \frac{d}{dt}e^{B(t)} = A(t)\left(I + \frac{1}{2!} 2\,B(t) + \frac{1}{3!} 3\,B(t)^2 + \cdots\right) = A(t)\left(I + B(t) + \frac{1}{2!} B(t)^2 + \cdots\right) $$

which is just equal to $A(t)\,e^{B(t)}$.

However, your $A(t)$ does not satisfy this condition, so this expression for the state transition matrix cannot be used. I am not aware of any other way of finding an analytical solution for the state transition matrix besides the Peano-Baker series. In this document it is mentioned on page 40 that the state transition matrix is almost always obtained via numerical integration. That document is from 1991, so if numerical integration was preferred over analytical solutions even back then, it should probably still be the preferred method nowadays, with much more computational power at our disposal.
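To illustrate the numerical route, here is a minimal sketch (not from the answer; it assumes NumPy/SciPy, and the helper names `A` and `phi_numeric` are mine). It first checks that $A(t_1)$ and $A(t_2)$ do not commute at two sample times, consistent with the statement above, and then obtains $\Phi(t,0)$ by integrating $\dot\Phi = A(t)\,\Phi$ with $\Phi(0) = I$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    """The time-varying system matrix from the question."""
    return np.array([[ 0.0,                    3.0,                   2.0 * np.cos(7.0 * t)],
                     [-3.0,                    0.0,                  -2.0 * np.sin(7.0 * t)],
                     [-2.0 * np.cos(7.0 * t),  2.0 * np.sin(7.0 * t), 0.0]])

# Quick check that A(t1) and A(t2) do not commute, so e^{int A} cannot be used here.
t1, t2 = 0.1, 0.7
print(np.allclose(A(t1) @ A(t2), A(t2) @ A(t1)))   # should print False

def phi_numeric(t, t0=0.0):
    """Integrate dPhi/dt = A(t) Phi with Phi(t0) = I, working on the flattened matrix."""
    def rhs(tau, phi_flat):
        Phi = phi_flat.reshape(3, 3)
        return (A(tau) @ Phi).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(3).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)

print(phi_numeric(1.0))   # Phi(1, 0), obtained by numerical integration
```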

However, it can be noted that once you have obtained $\Phi(t,0)$ for all $t\in[0,T]$, with $T$ the period of $A(t)$, all further solutions can be derived from it by applying the following expression recursively

$$ \Phi(t+T,0) = \Phi(t,0)\,\Phi(T,0). $$

From this you can also say something about stability, namely

$$ \Phi(n\,T,0) = \Phi(T,0)^n\quad \forall\ n\in\mathbb{Z} $$

so if $\Phi(T,0)$ is a Schur matrix (all eigenvalues strictly inside the unit circle), then the system is asymptotically stable.
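As a rough check of this periodicity and stability argument, here is a small self-contained sketch (same assumptions as before: NumPy/SciPy, helper names are mine). It computes the monodromy matrix $\Phi(T,0)$ over one period $T = 2\pi/7$ of $A(t)$, verifies numerically that $\Phi(2T,0) = \Phi(T,0)^2$, and prints the eigenvalue magnitudes used in the stability check:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[ 0.0,                    3.0,                   2.0 * np.cos(7.0 * t)],
                     [-3.0,                    0.0,                  -2.0 * np.sin(7.0 * t)],
                     [-2.0 * np.cos(7.0 * t),  2.0 * np.sin(7.0 * t), 0.0]])

def phi(t, t0=0.0):
    """State transition matrix Phi(t, t0) by integrating dPhi/dt = A(t) Phi."""
    rhs = lambda tau, p: (A(tau) @ p.reshape(3, 3)).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(3).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)

T = 2.0 * np.pi / 7.0        # period of A(t): cos(7t) and sin(7t) repeat every 2*pi/7
M = phi(T)                   # monodromy matrix Phi(T, 0)

# Periodicity property: Phi(2T, 0) should equal Phi(T, 0)^2.
print(np.allclose(phi(2.0 * T), M @ M))

# Stability check: all eigenvalue magnitudes strictly below 1 would indicate
# asymptotic stability; magnitudes of 1 only marginal stability.
print(np.abs(np.linalg.eigvals(M)))
```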