Finite-Time Stability Concept

Tags: control theory, linear control, optimal control, stability in ODEs, stability theory

I am trying to understand the concept of finite-time stability. I found some articles that cover controller design guaranteeing finite-time stability of a system, but they were quite complex.

My thought is this: since controllability is a finite-time concept (i.e. any state can be reached in finite time), isn't it true that a system that is both asymptotically stable and controllable is finite-time stable?

Since controllability says nothing about which controller is chosen, I am not sure whether it can imply finite-time stability.

Best Answer

No, this is not true.

Take the system

$$ \dot{x} = u $$

which is controllable because its controllability matrix is $\begin{bmatrix} 1 \end{bmatrix}$, which has full rank. Now take $u = -x$: the closed loop becomes $\dot{x} = -x$, which is asymptotically stable.
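The full-rank claim can be checked with the standard controllability test. A minimal sketch using NumPy (the matrices $A = 0$, $B = 1$ for $\dot{x} = u$ come from the scalar system above; the generic loop is there only to show the construction):

```python
import numpy as np

# Scalar system x' = A x + B u with A = 0, B = 1, i.e. x' = u.
A = np.array([[0.0]])
B = np.array([[1.0]])

# Controllability matrix for an n-state system: [B, AB, ..., A^(n-1) B].
n = A.shape[0]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

print(C)                               # [[1.]]
print(np.linalg.matrix_rank(C) == n)   # True: full rank, so controllable
```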

But this doesn't mean we reach the equilibrium in finite time. The solution of the closed-loop system is:

$$ x(t) = x(0)e^{-t} $$

So if $x(0) \neq 0$ and you try to solve for the time $t$ at which $x(t) = 0$:

$$ 0 = x(0)e^{-t} \implies 0 = e^{-t} $$

However, the exponential function is never zero for any finite $t$; it only tends to zero in the limit $t \rightarrow \infty$. So although the system is controllable and the chosen controller makes it asymptotically stable, it still takes infinite time to actually reach the equilibrium, so the system is not finite-time stable.
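A quick numerical check of this: simulating $\dot{x} = -x$ for a long horizon, the state becomes tiny but never actually hits zero. (The forward-Euler integration, step size, and initial condition below are my own choices for illustration.)

```python
# Closed-loop system x' = -x with x(0) = 1; exact solution x(t) = e^(-t).
# Integrate with forward Euler out to t = 100 and check the sign of x.
x, dt = 1.0, 1e-3
for _ in range(int(100 / dt)):
    x += dt * (-x)   # u = -x  =>  x' = -x

print(x > 0.0)   # True: x is extremely small but still strictly positive
```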

Controllability only says that some input can steer the state to any target in finite time; it does not say that the state can then stay there.

For example, if $x(0) > 0$, you can reach $0$ in finite time with the constant input $u = -1$. Then the solution becomes

$$ x(t) = x(0) - t $$

So with this controller you reach $0$ at the finite time $t = x(0)$, but you won't stay there: for $t > x(0)$ the state keeps decreasing past the origin.
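This behavior is easy to see directly from the line $x(t) = x(0) - t$. A tiny sketch (the initial value $x(0) = 2$ is an arbitrary choice for illustration):

```python
x0 = 2.0  # any x(0) > 0

def x(t: float) -> float:
    """Solution of x' = u with constant u = -1: x(t) = x0 - t."""
    return x0 - t

print(x(x0))         # 0.0: the origin is reached at the finite time t = x0
print(x(x0 + 1.0))   # -1.0: one time unit later the state has overshot
```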