How does an invariant subspace create a partition of the space?

control theory, differential-geometry, dynamical systems, invariant-theory

I am studying invariant subspaces (in the linear context), and I am having some trouble understanding them.

I have studied that if I consider a system of the type:

$\dot{x}= Ax$

i.e. an autonomous system, together with a subspace $V$ that is invariant under $A$, meaning:

$AV\subset V$

then I can apply a coordinate transformation:

$TAT^{-1}=\begin{pmatrix}
A_{11} &A_{12} \\
0 & A_{22}
\end{pmatrix}$

and so I obtain a system of the form:

$\dot{z_1}=A_{11}z_1 + A_{12}z_2$

$\dot{z_2}=A_{22}z_2$
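For concreteness, here is a small numerical sketch I tried (a 3-dimensional example of my own, not from my notes), just to see the block structure appear:

```python
import numpy as np

# Build A so that the plane V = span{v1, v2} is A-invariant.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
r1 = np.array([0.0, 0.0, 1.0])            # any third vector completing a basis

M  = np.column_stack([v1, v2, r1])        # new basis as columns, V-directions first
Az = np.array([[-1.0,  2.0,  1.0],        # the block-triangular "target" form
               [ 0.0, -2.0,  3.0],        #   [[A11, A12],
               [ 0.0,  0.0, -0.5]])       #    [ 0 , A22]]
A  = M @ Az @ np.linalg.inv(M)            # the same dynamics in the original x-coordinates

T = np.linalg.inv(M)                      # change of coordinates z = T x
print(np.round(T @ A @ np.linalg.inv(T), 6))     # recovers the block-triangular form
print(np.round(A @ v1, 6), np.round(A @ v2, 6))  # both images stay in span{v1, v2}
```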

Now, the part I don't understand is the following:

In my notes it is written that this coordinate transformation makes it evident that the system can be decomposed into two subsystems, and that it also brings out another important property.

The important property is that associated to $V$ there is a partition of the space, so something like:

[Figure: the state space sliced into a family of parallel planes.]

although, to make the idea easier to grasp, my professor used straight lines instead of planes; I guess the concept is the same.

Also, from the figure, what I have understood is that the evolution of the system, starting from an initial condition, moves from one element of the partition to another along the $z_2$ direction, and moves along the invariant subspace in the $z_1$ direction (this is not really clear to me either).

I have been stuck on this for a while, searching for an explanation, but I cannot understand: how is this partition constructed?

Can somebody please clarify this concept to me?

Best Answer

Are you following Wonham [1]? This is my source. The topic starts at pg. 12. I will denote linear (sub)spaces by script letters and linear maps by roman letters. Vectors are lower case roman letters.

Let $x\in\mathscr{X}$ (start by treating this as $\mathbb{R}^n$) and let $A: \mathscr{X} \to \mathscr{X}$ be the linear map associated to the dynamics. Let $\mathscr{V}$ be any subspace that is $A$-invariant. I will try to address the following questions

  • What is it, and
  • What is its significance?

Pick any subspace $\mathscr{R}$ so that $\mathscr{V} \oplus \mathscr{R} = \mathscr{X}.$ The direct sum here means that $\mathscr{V} \cap \mathscr{R} = 0$ and $\mathscr{V} + \mathscr{R} = \mathscr{X}.$ Really, we are just picking a complementary space to $\mathscr{V}$, so that the two subspaces together cover the entire state space.

Pick any basis $\{v_i\}$ and $\{r_i\}$ for $\mathscr{V}$ and $\mathscr{R}$ respectively, and write $k = \dim\mathscr{V}.$ You now have a basis for $\mathscr{X}$ where a subset of this basis spans $\mathscr{V}.$ Corresponding to this basis is a coordinate transformation $P: \mathscr{X} \to \mathscr{X}$ that is completely characterized by the equations $$\begin{aligned} P v_1 &= z_1\\ &\;\vdots\\ P v_{k} &= z_{k}\\ P r_1 &= z_{k+1}\\ &\;\vdots\\ P r_{n-k} &= z_{n} \end{aligned}$$ where the $\{z_i\}$ will be the basis used for your coordinates in the variable $z$ of your question (as long as we set the ordering of the original basis to be $\{v_1,\dots,v_k, r_1,\dots, r_{n-k}\}$, i.e. the $\mathscr{V}$-directions first, exactly as in your block form). In these coordinates $PAP^{-1}$ takes the block-triangular shape you wrote down, with $z_1,\dots,z_k$ running along $\mathscr{V}$ and $z_{k+1},\dots,z_n$ transverse to it; this $P$ plays the role of your $T$ and is precisely the coordinate transformation you are looking for. Judging by other answers and your question, I will assume you understand how to verify this.
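Here is a minimal numpy sketch of this construction; the particular matrices, and the choice of the orthogonal complement for $\mathscr{R}$, are just illustrative assumptions:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, k = 4, 2

# Suppose we are handed A together with a basis for an A-invariant subspace V.
# (Cooked up here so the example is self-contained: A maps V into V via W,
#  and acts arbitrarily on some other directions R0.)
V  = rng.standard_normal((n, k))                 # columns v_1, ..., v_k span V
W  = rng.standard_normal((k, k))
R0 = rng.standard_normal((n, n - k))
basis0 = np.column_stack([V, R0])
A  = np.column_stack([V @ W, rng.standard_normal((n, n - k))]) @ np.linalg.inv(basis0)

# Pick ANY complement R of V -- here the orthogonal complement, for convenience.
R = null_space(V.T)                              # columns r_1, ..., r_{n-k}

# New basis: the V-vectors first (as in the question), then the complement.
Pinv = np.column_stack([V, R])                   # the z-basis written in x-coordinates
P = np.linalg.inv(Pinv)                          # the coordinate change z = P x

print(np.round(P @ A @ Pinv, 3))                 # lower-left (n-k) x k block is ~0
```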

So we have the change of coordinates. What does this change of coordinates represent? What it tells us is that the original system can actually be viewed as a cascade system. I disagree with the other answer that claims that there is only one system except in some special case. From a control perspective, there are two cascaded subsystems, where we view the state of one as an input to the other. In particular, if we define $u = (z_{k+1}, \ldots, z_n)^\top$ (your $z_2$, the part transverse to $\mathscr{V}$) and $v = (z_1, \ldots, z_k)^\top$ (your $z_1$, the part along $\mathscr{V}$), then $u$ evolves on its own and enters the $v$-subsystem exactly the way an exogenous input would, and we can draw the signal diagram,

[Signal diagram for the cascade system: the $u$-subsystem feeding the $v$-subsystem.]
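A quick way to see the cascade in code: the $u$-block (your $z_2$) can be propagated entirely on its own, and the full flow agrees with that stand-alone solution. The numbers below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

# z-coordinates: zdot = Az z with Az = [[A11, A12], [0, A22]].
A11 = np.array([[-1.0, 2.0], [0.0, -2.0]])
A12 = np.array([[1.0], [3.0]])
A22 = np.array([[-0.5]])
Az  = np.block([[A11, A12], [np.zeros((1, 2)), A22]])

z0, t = np.array([1.0, -1.0, 2.0]), 1.7
z_t = expm(Az * t) @ z0                    # flow of the full system
u_t = expm(A22 * t) @ z0[2:]               # u = z2 solved as its own system
print(np.allclose(z_t[2:], u_t))           # True: u never sees z1

# z1 is then recovered by variation of constants, with A12 u(t) acting as a forcing term.
```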


Aside: Why do we care about this decomposition?

You can see that the states $z_{k+1}, \ldots, z_n$ (that is, $u$) evolve independently of the earlier states $z_1, \ldots, z_k.$ This is important because, as you start talking about control, you start caring about which parts of the system need to be controllable. You can decompose the space $\mathscr{X}$ into different invariant subspaces with respect to $A.$

For one, imagine $\dot{x} = A x + B s$ where $s$ is the input. Here $B:\mathscr{U}\to \mathscr{X}$ where $\mathscr{U}$ is the space of controls. Suppose $B \mathscr{U} = \mathscr{V}.$ That is, suppose that all our control power can only be put in the direction of $\mathscr{V}.$ Notice that the decomposition informs us that we can steer $v$, the component along $\mathscr{V}$, wherever we want, while the input never enters the $u$ equation at all. This, in a sense, implies that the dynamics of $A_{22}$ can be set aside ($u(t) = e^{A_{22}t}u(0)$ is just a known exogenous signal we cannot touch), and what really matters for the control problem is the pair $(A_{11}, A_{12})$: the $v$-subsystem, driven by the input and by $u$.
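A small 3-dimensional sketch of this situation, with illustrative matrices and $B$ chosen so that $\operatorname{im}B = \mathscr{V}$ by construction; since $\operatorname{im}B$ is already contained in the reachable set, rank $2$ means the reachable subspace is exactly $\mathscr{V}$:

```python
import numpy as np

v1, v2, r1 = np.array([1., 0., 1.]), np.array([0., 1., 1.]), np.array([0., 0., 1.])
M  = np.column_stack([v1, v2, r1])
Az = np.array([[-1., 2., 1.], [0., -2., 3.], [0., 0., -0.5]])
A  = M @ Az @ np.linalg.inv(M)              # V = span{v1, v2} is A-invariant

B = np.column_stack([v1, v2])               # im B = V: the input acts only along V

ctrb = np.hstack([B, A @ B, A @ A @ B])     # reachability matrix [B, AB, A^2 B]
print(np.linalg.matrix_rank(ctrb))          # 2 = dim V: the reachable subspace is exactly V
print(np.round(np.linalg.inv(M) @ B, 6))    # last row ~ 0: the input never enters the u equation
```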

This is a restrictive case, however. Generalizing this notion is where the book's money goes. (EDIT: I shouldn't sell this reference short; it does far more than this. There are also other good references on this topic that pursue a similar goal in similar ways.)


So now onto this business of the so-called partition. The partition is, as described by others, given by the affine translates of the subspace $\mathscr{V}.$ Given the map $P$ above, we can define it explicitly. The number of indices needed to characterize a member of the partition is the codimension of $\mathscr{V}$, namely $n-k$: the number of independent directions in which you can "leave" a particular translated copy of $\mathscr{V}.$ So the family of sets in the partition is, for all $\sigma_1,\ldots, \sigma_{n-k}\in\mathbb{R},$ $$ F_{\sigma_1, \ldots, \sigma_{n-k}} = P^{-1} \left\{ z \in\mathscr{X} : z_{k+1} = \sigma_1, \ldots, z_{n} = \sigma_{n-k}\right\} $$ where I hope you'll excuse my abuse of notation. The last $n-k$ coordinates, i.e. $u$, are the ones whose values take you off one affine translate of $\mathscr{V}$ and onto another; as I've said, they count the directions in which you can leave a given affine translate of $\mathscr{V}.$
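If it helps, the leaf a given point $x$ lies on can be computed directly: it is just the transverse block of $Px$. The helper name `leaf_label` and the numbers are hypothetical, for illustration only:

```python
import numpy as np

v1, v2, r1 = np.array([1., 0., 1.]), np.array([0., 1., 1.]), np.array([0., 0., 1.])
P = np.linalg.inv(np.column_stack([v1, v2, r1]))   # z = P x, V-coordinates first
k = 2

def leaf_label(x):
    """sigma(x): the transverse coordinates of x, labelling its translate of V."""
    return (P @ x)[k:]

x = np.array([0.3, -0.7, 2.0])
print(leaf_label(x), leaf_label(x + 1.5 * v1 - 2.0 * v2))  # equal: same translate of V
print(leaf_label(x + r1))                                  # different: another leaf
```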

Having said all of this, you are reading too much into that picture. There is no guarantee that the solution will tend toward other translates of $\mathscr{V}$, or even leave the translate of $\mathscr{V}$ it started on. This all depends on what the initial condition is and on the autonomous subsystem $\dot{u} = A_{22} u.$ In fact, I'd consider that figure misleading, since it doesn't depict the really important fact:

Which member $F_{\sigma}$ of the partition you occupy at any time is independent of your position along $\mathscr{V}$: it is determined entirely by the state $u$, which obeys its own autonomous dynamics $\dot{u} = A_{22}u.$ Equivalently, the flow carries each affine translate of $\mathscr{V}$ onto another affine translate of $\mathscr{V}$.
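A minimal numerical check of this fact, using the same illustrative matrices as above: two initial conditions on the same translate of $\mathscr{V}$ remain on a common translate at every time, even though their components along $\mathscr{V}$ differ:

```python
import numpy as np
from scipy.linalg import expm

v1, v2, r1 = np.array([1., 0., 1.]), np.array([0., 1., 1.]), np.array([0., 0., 1.])
M  = np.column_stack([v1, v2, r1])
Az = np.array([[-1., 2., 1.], [0., -2., 3.], [0., 0., -0.5]])
A, P, k = M @ Az @ np.linalg.inv(M), np.linalg.inv(M), 2

def leaf_label(x):
    return (P @ x)[k:]                      # transverse coordinates = which F_sigma you are on

x0 = np.array([0.3, -0.7, 2.0])
y0 = x0 + 1.5 * v1 - 2.0 * v2               # same leaf, different position along V
for t in (0.0, 0.5, 2.0):
    xt, yt = expm(A * t) @ x0, expm(A * t) @ y0
    print(t, np.allclose(leaf_label(xt), leaf_label(yt)))   # True for every t
```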

[1]: W.M. Wonham. Linear Multivariable Control - A Geometric Approach. New York, NY: Springer-Verlag Inc., 1985.
