[Math] Difference between equilibrium point and Unique Equilibrium Point

dynamical-systems, terminology

Is there any difference between a unique equilibrium point and an equilibrium point? If yes, please tell me what it is and how it is used to solve a dynamical system.

You can consider a dynamical system of the form: $x′=f(x)$. The system has an equilibrium point at $x = 0$.

Another example would be that of a simple pendulum, which has equilibrium points at its extremes and at its stationary point. Can someone tell me which one exactly is a unique equilibrium point?

Best Answer

Note: this discussion is applicable to both continuous and discrete time dynamical systems, but I will limit it to continuous time as they are analogous.

Given the dynamical system $x' = f(x)$, the vector $x$ is the state of the dynamical system, and the function $f$ tells us how the system moves. In special cases, the system does not move and we call these states 'fixed or equilibrium points' of the system.

For example, let's look at:

$$x' = x^3 - 8$$

What happens when $x = 2$? The derivative $x'$ vanishes at this point: the state is neither increasing nor decreasing; in other words, it is stuck at $\tilde x = 2$. If you want a fixed point at zero, just use $x' = x, x^2, ~\text{or}~ x^3 \ldots$

Do you notice any other values that would give us this behavior in this dynamical system? No, there is a single, unique fixed point.
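As a quick numerical sanity check (a minimal stdlib-only sketch; the helper names here are my own), we can locate that fixed point by finding the root of $f(x) = x^3 - 8$ by bisection:

```python
def f(x):
    # Right-hand side of the dynamical system x' = x^3 - 8
    return x**3 - 8

def bisect(f, lo, hi, tol=1e-10):
    """Find a root of f in [lo, hi] by bisection (assumes a sign change)."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(f, 0.0, 5.0)
print(root)  # ≈ 2.0, the unique fixed point
```

Since $f$ is strictly increasing, any bracketing interval with a sign change contains this one root, which is another way of seeing that the fixed point is unique.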

Let's look at a direction field plot for this system and see if we can describe this further. The direction field plot is:

(direction field plot of $x' = x^3 - 8$, showing the single fixed point at $x = 2$)

What do you notice across the entire range when $x = 2$? If you perturb just slightly away from $2$, look what happens to the solution depending on which side of $2$ you end up on.

Let's change the example to $x' = (x-1)(x-3)^2$. How many fixed points do we have in this case? Two of them, namely $x = 1$ and $x = 3$.

A direction field plot of this case should show us fixed points at these locations, so this looks like:

(direction field plot of $x' = (x-1)(x-3)^2$, showing the fixed points at $x = 1$ and $x = 3$)

Do you notice what happens to the derivative at those two fixed points?
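We can check that derivative numerically (a sketch with illustrative helper names): the linearization $f'(x^*)$ classifies $x = 1$, while $f'(3) = 0$ makes the linearization inconclusive there, so we look at the sign of $f$ on each side of $3$ instead.

```python
def f(x):
    # Right-hand side of x' = (x - 1)*(x - 3)**2
    return (x - 1) * (x - 3) ** 2

def fprime(x, h=1e-6):
    # Central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

print(fprime(1.0))  # ≈ 4 > 0  -> x = 1 is unstable
print(fprime(3.0))  # ≈ 0      -> linearization inconclusive at x = 3

# f > 0 just below and just above x = 3, so trajectories approach 3
# from below but leave it from above: x = 3 is semi-stable.
print(f(2.9) > 0, f(3.1) > 0)
```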

It is also important to note that a system may have no equilibrium points, for example $x' = e^x$. So, we can have none, one (unique) or many equilibrium (fixed) points for a system.

This notion can easily be extended to higher-dimensional systems too. For example, let's take the system:

$$x' = x^2 + y^2 -25 \\ y' = x + y + 1$$

In this case, to find the fixed points, we want to know where $x'$ and $y'$ are simultaneously zero. We find two points for this at: $(-4, 3)$ and $(3,-4)$. A phase portrait of this system should show us these two fixed points, so we have:

(phase portrait of the system, showing the fixed points at $(-4, 3)$ and $(3, -4)$)

Do you notice those two points on the phase portrait?
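To verify those two points, substitute $y = -1 - x$ from the second equation into the first, which gives $x^2 + x - 12 = 0$. A quick stdlib-only check:

```python
import math

# Solve x^2 + (-1 - x)^2 - 25 = 0, i.e. x^2 + x - 12 = 0,
# obtained by substituting y = -1 - x into x' = 0.
a, b, c = 1.0, 1.0, -12.0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
fixed_points = [(x, -1.0 - x) for x in roots]
print(fixed_points)  # [(3.0, -4.0), (-4.0, 3.0)]

# Confirm both right-hand sides vanish at each point
for x, y in fixed_points:
    assert abs(x**2 + y**2 - 25) < 1e-9
    assert abs(x + y + 1) < 1e-9
```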

You also asked about the pendulum. The orbits of the pendulum away from the equilibrium points $(n \pi, 0)$, where $n \in \mathbb{Z}$, are given by the solutions of the scalar equation:

$$\dfrac{dx_2}{dx_1} = \dfrac{-(g/l) \sin x_1}{x_2}$$
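This scalar equation is separable: integrating $x_2 \, dx_2 = -(g/l) \sin x_1 \, dx_1$ gives

$$\frac{x_2^2}{2} - \frac{g}{l}\cos x_1 = c,$$

so the orbits are the level sets (contours) of the energy function $E(x_1, x_2) = \frac{1}{2}x_2^2 - (g/l)\cos x_1$.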

If we plot the contours of this, we have:

(contour plot of the pendulum orbits)

Do you see where we are getting zero? These are the points where the state is not changing.

We can write the pendulum as a system as:

$$x' = y \\ y' = -\sin x$$
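We can confirm the fixed points $(n\pi, 0)$ directly (a small sketch; the function name is illustrative, and the system is in the rescaled form above with $g/l = 1$):

```python
import math

def rhs(x, y):
    # Right-hand side of the pendulum system x' = y, y' = -sin(x)
    return (y, -math.sin(x))

# Both components vanish (up to floating-point error) at (n*pi, 0)
for n in range(-2, 3):
    dx, dy = rhs(n * math.pi, 0.0)
    print(n, dx == 0.0, abs(dy) < 1e-12)
```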

A phase portrait shows:

(phase portrait of the pendulum system, showing centers and saddles)

The center corresponds to a state of neutrally stable equilibrium, with the pendulum at rest and hanging straight down. The small orbits surrounding the center represent small oscillations about the equilibrium. The critical trajectories are the heteroclinic orbits joining the saddles, which represent the inverted pendulum at rest.

This was a handwaving argument, and there are formal mathematical definitions for these equilibrium points. We could get into formal definitions, discussions about cardinality, and more, but there is not enough room in the margins to include those here.

These fixed points allow us to do a lot of qualitative analysis without actually solving the system, and this is extremely helpful. Once we figure out the fixed points, we want to classify them by type, which tells us about stability. We can have stable and unstable points, and they are further broken down into nodes, saddles, $\ldots$.

Stability

There are several notions of stability. We say that a fixed point $x^*$ is attracting when trajectories that start near $x^*$ approach it as $t \rightarrow \infty$. If $x(t) \rightarrow x^*$ as $t \rightarrow \infty$ for all initial conditions, we call it globally attracting.

The other notion typically used is Liapunov stability. In this approach, we say a fixed point $x^*$ is Liapunov stable if all trajectories that start sufficiently close to $x^*$ remain close to it for all time.

In practice, there are really two types of stability:

  • If a fixed point is both Liapunov stable and attracting, it is called stable, or sometimes asymptotically stable.
  • $x^*$ is unstable when it is neither attracting nor Liapunov stable.

It is amazing that, using those two notions, we can tell all of this from the eigenvalues of a system: their signs alone determine the stability type. For example, if both eigenvalues are positive, the fixed point is unstable. Why? Because any trajectory, regardless of where we start, will move away from the fixed point. When both are negative, we have the opposite case: trajectories are drawn in and the point is stable. When the eigenvalues have different signs, we have a saddle, with a tug-of-war between the attracting and repelling directions, and that is never good. Lastly, we can have neutral stability with a center when the eigenvalues are purely imaginary. These notions tell us a lot about the long-term behavior of our system. We want to know that over time the building, bridge, or whatever will be stable for all time, and if we can describe a system using dynamics, we can easily make these determinations.
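That eigenvalue recipe for a planar linear system $x' = Ax$ can be sketched as follows (a minimal illustration; the classification helper is my own, computing the eigenvalues of the $2 \times 2$ matrix from its trace and determinant):

```python
import cmath

def classify(a, b, c, d):
    """Classify the fixed point of x' = A x, A = [[a, b], [c, d]],
    from its eigenvalues (computed via trace and determinant)."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-12:               # complex-conjugate pair
        if abs(tr) < 1e-12:
            return "center (neutrally stable)"
        return "stable spiral" if tr < 0 else "unstable spiral"
    r1, r2 = l1.real, l2.real
    if r1 * r2 < 0:                        # opposite signs
        return "saddle (unstable)"
    return "stable node" if r1 < 0 and r2 < 0 else "unstable node"

print(classify(-2, 0, 0, -3))  # both negative      -> stable node
print(classify(1, 0, 0, -1))   # opposite signs     -> saddle (unstable)
print(classify(0, 1, -1, 0))   # purely imaginary   -> center (neutrally stable)
```

For a nonlinear system, the same test is applied to the Jacobian of $f$ evaluated at each fixed point.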

Again, this is a bit of hand waving from the mathematical definitions, but is about as easy an understanding as you can muster.
