[Physics] Calculating Lyapunov exponents from a multi-dimensional experimental time series

chaos-theory, complex-systems, data-analysis, experimental-physics, non-linear-systems

Wolf's paper Determining Lyapunov Exponents from a Time Series states that:

Experimental data typically consist of discrete measurements of a
single observable. The well-known technique of phase space
reconstruction with delay coordinates [2, 33, 34] makes it possible to
obtain from such a time series an attractor whose Lyapunov spectrum is
identical to that of the original attractor.

One of the cited papers, Geometry from a Time Series, elaborates:

The heuristic idea behind the reconstruction method is that to specify
the state of a three-dimensional system at any given time, the
measurement of any three independent quantities should be sufficient
[…]. The three quantities typically used are the values of each
state-space coordinate, $x(t)$, $y(t)$, and $z(t)$. We have found
that […] one can obtain a variety of three independent quantities
which appear to yield a faithful phase-space representation of the
dynamics of the original $x$, $y$, $z$ space. One possible set of
three such quantities is the value of the coordinate with its values
at two previous times, e.g. $x(t)$, $x(t - \tau)$, and $x(t - 2\tau)$.

Finally, Rosenstein's paper A practical method for calculating largest Lyapunov exponents from small data sets states that:

The first step of our approach involves reconstructing the attractor
dynamics from a single time series. We use the method of delays [27,
37] since one goal of our work is to develop a fast and easily
implemented algorithm. The reconstructed trajectory, X, can be
expressed as a matrix where each row is a phase-space vector. That is,
$$ X = [X_1\; X_2\; \dots\; X_M]^T $$
where $X_i$ is the state of the system at discrete time $i$.

All three papers seem to implicitly assume that the system under study has a multi-dimensional phase space, but that only one dimension can be measured experimentally, and therefore that the full phase space data must be reconstructed from a one-dimensional time series.

However, what if the time series is multi-dimensional, indeed of the same dimension as the phase space, to begin with? For instance, consider the problem of showing experimentally that a simple pendulum is not chaotic. The phase space is 4-dimensional ($r$, $\dot r$, $\phi$, $\dot \phi$) and it is straightforward to design an experiment which generates a 4-dimensional time series of the values of these variables at each time step.

Is it possible to skip the reconstruction in this case, and use $X = [r\; \dot r\; \phi\; \dot \phi]^T$ in place of the reconstructed trajectory in Rosenstein's paper, with no additional modifications? Is there a simpler way to calculate Lyapunov exponents when the full phase-space state of the system is known?

Best Answer

However, what if the time series is multi-dimensional, indeed of the same dimension as the phase space, to begin with?

Well, how would you know that your time series is of the same dimension as the phase space? Usually, because you already know the dynamical equations for your system (as for your pendulum). If you observe a real-life complex system, however, you might be able to obtain a multivariate time series, but there is no way to tell whether its dimension corresponds to the actual dimension of the phase space, since you cannot know the latter. Therefore I am addressing two cases separately:

  1. You know the dynamical equations for your system. Be very careful about assuming this unless your system is simulated.
  2. You have obtained a multivariate time series from an unknown system.

1. You can simulate the system

Roughly speaking, you determine the largest Lyapunov exponent (and also the others) by looking at how quickly two trajectories diverge after passing through two points that are close in phase space. If you only have a phase space reconstructed from a time series, the only way to obtain two such nearby trajectories is to look for two points that happen to be close to each other in the reconstructed phase space. However, if you can simulate your system, you can generate such points yourself simply by applying a slight perturbation to the state of your simulated system. Apart from this, the method is basically the same (it is described, for example, in Section 3 of the paper by Wolf et al.).
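Here is a minimal sketch of this perturb, evolve, and renormalise loop, using the Lorenz system purely as a stand-in for whatever system you can simulate; the perturbation size `d0`, the renormalisation interval `dt`, and the step counts are illustrative choices, not prescriptions.

```python
# Two-trajectory (Benettin-style) estimate of the largest Lyapunov exponent
# for a system you can simulate. The Lorenz system is only a placeholder.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def largest_lyapunov(f, x0, d0=1e-6, dt=0.5, n_steps=400, transient=50):
    """Evolve a reference state and a slightly perturbed copy, measure how
    fast they separate, and renormalise the perturbation after each step."""
    rng = np.random.default_rng(0)
    # discard a transient so we start on (or near) the attractor
    x = solve_ivp(f, (0, transient), np.asarray(x0, float),
                  rtol=1e-9, atol=1e-12).y[:, -1]
    # perturbed companion state at distance d0
    v = rng.normal(size=x.size)
    xp = x + d0 * v / np.linalg.norm(v)
    log_sum = 0.0
    for _ in range(n_steps):
        x = solve_ivp(f, (0, dt), x, rtol=1e-9, atol=1e-12).y[:, -1]
        xp = solve_ivp(f, (0, dt), xp, rtol=1e-9, atol=1e-12).y[:, -1]
        d = np.linalg.norm(xp - x)
        log_sum += np.log(d / d0)
        # renormalise: pull the perturbed state back to distance d0
        xp = x + (xp - x) * (d0 / d)
    return log_sum / (n_steps * dt)

print(largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))  # roughly 0.9 for Lorenz
```

Dividing the accumulated logarithms by the total integration time converts the result into an exponent in units of inverse time; for the standard Lorenz parameters it should come out around 0.9.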

Also, there are some cases where you can determine the Lyapunov exponents analytically.
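As a simple illustration (not taken from any of the papers above): for a one-dimensional map $x_{n+1} = f(x_n)$ the Lyapunov exponent is

$$ \lambda = \lim_{N\to\infty} \frac{1}{N} \sum_{n=0}^{N-1} \ln\,\lvert f'(x_n)\rvert, $$

so for the tent map, whose slope has magnitude $2$ everywhere, you get $\lambda = \ln 2$ without any numerics.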

2. You have a multivariate time series

Estimating the Lyapunov exponents from a time series happens roughly in two steps:

  1. Reconstructing the phase space from the time series.
  2. Estimating the Lyapunov exponent from this reconstructed phase space.

Step 2 does not care about how you reconstructed the phase space, provided that you do it properly and that the attractor is maximally unfolded. And in step 1, having more than one observable from your system is usually a huge benefit. A simple approach would be to start with your multivariate time series and add delay embeddings of your component time series (as described, for example, in your quote from Packard et al.) until you are confident that you have unfolded the attractor; a rough sketch of this follows below. Keep in mind, however, that some of your observables might not be independent, or might at least be strongly correlated. Unsurprisingly, there are more sophisticated methods for this (as a start, a quick search yielded this paper).
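The sketch below assumes the measurements are stored as an array of shape (N samples × k observables). The helper names, the delay `tau`, the number of extra delayed copies, and the Theiler window are illustrative assumptions, not part of any of the cited methods; the divergence step follows the spirit of Rosenstein's nearest-neighbour approach. Setting `extra_delays=0` amounts to feeding the raw multivariate state straight into the divergence estimate, which is essentially what the question proposes for the pendulum.

```python
import numpy as np

def embed_multivariate(series, tau=10, extra_delays=1):
    """Stack the k observed components with `extra_delays` delayed copies of
    each component, so that every row is one reconstructed phase-space vector."""
    series = np.asarray(series, dtype=float)
    series = series.reshape(series.shape[0], -1)   # shape (N, k)
    n = series.shape[0] - extra_delays * tau
    cols = [series[d * tau : d * tau + n] for d in range(extra_delays + 1)]
    return np.hstack(cols)                         # shape (n, k * (extra_delays + 1))

def divergence_curve(X, n_steps=50, theiler=20):
    """Mean log-distance between initially nearest neighbours as a function of
    time; the initial slope approximates the largest Lyapunov exponent.
    Uses an O(n^2) distance matrix, so it is only meant for small data sets."""
    n = X.shape[0] - n_steps
    dists = np.linalg.norm(X[:n, None, :] - X[None, :n, :], axis=-1)
    # exclude self-matches and temporally close points (Theiler window)
    idx = np.arange(n)
    dists[np.abs(idx[:, None] - idx[None, :]) <= theiler] = np.inf
    nn = np.argmin(dists, axis=1)                  # nearest neighbour of each point
    curve = np.empty(n_steps)
    for j in range(n_steps):
        d = np.linalg.norm(X[idx + j] - X[nn + j], axis=1)
        curve[j] = np.mean(np.log(d[d > 0]))
    return curve
```

Fitting a straight line to the initial, roughly linear rise of the returned curve gives the Rosenstein-style estimate of the largest exponent; since the curve is indexed in samples, the slope has to be divided by the sampling interval to obtain an exponent in units of inverse time.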
