Solved – Are time series methods only good for forecasting?

forecasting, multilevel-analysis, panel data, time series

Many time series methods are oriented solely toward forecasting (e.g., ARIMA). However, it seems like a growth curve modeling framework (i.e., random coefficient modeling) can do virtually everything else. For instance, instead of performing interrupted time series analysis via segmented regression, one can just use a discontinuous growth model. To account for autocorrelation, AR terms can be included within the level-1 growth curve model as well. Even seasonal models (i.e., harmonic models and those with indicator variables) can just as easily be expressed within the growth curve modeling approach at level 1. It also handles multiple “subjects” much better; rather than having to perform a multivariate time series analysis, the growth curve model naturally accommodates additional sources of observations (e.g., people, economic indices).
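To make the point above concrete, a discontinuous growth model with a level shift at an interruption and harmonic seasonal terms at level 1 can be sketched as a mixed model. Everything below (the simulated data, variable names such as `post`, `sin12`, and the effect sizes) is a hypothetical illustration, not something taken from the question:

```python
# A sketch of a discontinuous growth model: linear trend, level shift after an
# interruption, and a harmonic seasonal term at level 1, with a random
# intercept per subject at level 2. All names and parameters are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_subjects, n_times = 30, 24

rows = []
for s in range(n_subjects):
    u = rng.normal(0, 1.0)                          # random intercept per subject
    for t in range(n_times):
        post = 1.0 if t >= 12 else 0.0              # interruption at t = 12
        season = 0.8 * np.sin(2 * np.pi * t / 12)   # annual harmonic
        y = 2.0 + u + 0.3 * t + 2.0 * post + season + rng.normal(0, 0.5)
        rows.append(dict(subject=s, time=t, post=post,
                         sin12=np.sin(2 * np.pi * t / 12),
                         cos12=np.cos(2 * np.pi * t / 12), y=y))
df = pd.DataFrame(rows)

# Fixed effects: trend, level shift, harmonic pair; random intercept by subject.
model = smf.mixedlm("y ~ time + post + sin12 + cos12", df, groups=df["subject"])
result = model.fit()
print(result.params[["time", "post", "sin12"]])
```

Here `post` captures the level shift of the interrupted series, and the sine/cosine pair plays the role of the harmonic seasonal component; AR terms could be added analogously as lagged predictors at level 1.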

So my question is:
Am I missing something, or is time series analysis (for all intents and purposes) just a collection of forecasting methods, with any explanatory modelling handled at least as well by a regular multilevel growth curve model?

It seems to me that this might be the case, as there is no essential difference between time series data and longitudinal data, other than that time series are longer and generally pertain to an in-depth analysis of one “subject” (e.g., GDP; see Difference between longitudinal design and time series).

Best Answer

Usually you want to forecast the behaviour of a system to improve your reaction to and interaction with that system. But at the end of the day, forecasting is only a last resort if it is not feasible to control the system. Moreover, forecasting with nothing but time series is something you usually only do if your system is so complex that you cannot incorporate any other useful knowledge about it into your forecasts.

Time-series analysis gives you a lot of methods to understand the inner workings of a system, which in turn may be the first step to controlling it. For example, it may yield the following information:

  • What are the internal rhythms of the system and what is their relevance to my observable?
  • To what extent is my system noise-dominated, and what does that noise look like?
  • Is the system stationary or not – the latter being an indicator of long-term changes in external conditions influencing the system.
  • If I regard my system as a dynamical system, what are the features of the underlying dynamics: Is it chaotic or regular? How does it react to perturbations? What does its phase space look like?
  • For a system with multiple components: Which components interact with each other?
  • How do I model my system if I want the model to do more than reproduce certain features of observed time series, e.g., to yield an understanding of the system, or to properly describe situations that are not comparable to anything observed in the past, such as when I actively manipulate the system or an extreme event happens (as in a disaster simulation)? All of the above points can feed into this. Moreover, time-series analysis can be used to validate a model by comparing the time series of the original system and of the model.

Some practical examples:

  • Climate research employs a lot of time-series analysis; it is not only used for forecasting the climate but also tries to answer the important question of how we influence our climate.
  • If you have a time series related to the illness of an individual female patient and you find a strong frequency component of roughly one month, this is a strong hint that the menstrual cycle is somehow involved. Even if you fail to understand this relation and can only treat the symptoms by giving some medication at the right time, you can benefit from taking the actual menstrual cycle of that patient into account (this is an example where you forecast with more than just time series).
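A minimal sketch of how such a frequency component might be detected with a periodogram, assuming simulated daily data with a hypothetical 28-day cycle (the series, cycle length, and noise level are all invented for illustration):

```python
# Sketch: detecting a roughly monthly rhythm in a daily symptom series
# with a periodogram. The data are simulated; the 28-day cycle and the
# noise level are illustrative assumptions.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)
days = np.arange(730)  # two years of daily observations

# Symptom severity: a 28-day cycle buried in noise
y = 1.0 * np.sin(2 * np.pi * days / 28) + rng.normal(0, 0.5, size=days.size)

freqs, power = periodogram(y, fs=1.0)                # cycles per day
peak_period = 1.0 / freqs[np.argmax(power[1:]) + 1]  # skip the zero frequency
print(round(peak_period, 1))  # expect a peak period close to 28 days
```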