I am confused about when I should use a static model (like a cross-sectional regression) versus some form of time-series model. I have seen examples where time-ordered data are analyzed with a cross-sectional regression model; for instance, modelling GDP using other economic factors without any time-series analysis. Is such an analysis a correct way to proceed? Do we still need to consider additional lags if the regression model already fulfils all of its assumptions? And do we need to add lagged terms of the dependent variable, given that GDP indeed shows serial correlation?
Solved – Linear regression vs Time series analysis
regression, time series
Related Solutions
The major difference between time-series data and cross-sectional data is that the former follows results for the same unit (often a single area) over an extended period of time, while the latter collects information — such as survey responses or opinions — at a particular point in time across various locations, depending on the information sought. Moreover, GDP in one period is likely to depend on GDP in the previous period, and so on; from a cross-sectional point of view, you ignore this correlation. For your problem, I guess you are trying to see how GDP is affected by employment over time, so that you can also estimate future scenarios.
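To make this concrete, here is a small sketch with simulated data (not real GDP figures): a persistent, GDP-like series has a strong lag-1 autocorrelation, which disappears the moment you shuffle away the time order — which is effectively what a purely cross-sectional view does.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical GDP-like series: each period carries over most of the
# previous period's value (an AR(1) process with coefficient 0.7).
n = 160
g = np.empty(n)
g[0] = 0.0
for t in range(1, n):
    g[t] = 0.7 * g[t - 1] + rng.normal(scale=0.5)

def lag1_corr(z):
    """Correlation between z_t and z_{t-1}."""
    return np.corrcoef(z[1:], z[:-1])[0, 1]

r_ordered = lag1_corr(g)       # strongly positive: the series "remembers"
rng.shuffle(g)                 # destroying time order ~ cross-sectional view
r_shuffled = lag1_corr(g)      # near zero: the dependence is gone

print(f"lag-1 autocorrelation, ordered : {r_ordered:.2f}")
print(f"lag-1 autocorrelation, shuffled: {r_shuffled:.2f}")
```

The ordered series shows autocorrelation close to the true persistence parameter, while the shuffled one shows essentially none — the information a cross-sectional regression silently discards.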
Usually you want to forecast the behaviour of a system to improve your reaction to and interaction with that system. But at the end of the day, forecasting is only a last resort, if it is not feasible to control the system. Moreover, forecasting with nothing but time series is something you usually only do if your system is so complex that you cannot incorporate any useful other knowledge about it into your forecasts.
Time-series analysis gives you a lot of methods to understand the inner workings of a system, which in turn may be the first step to controlling it. For example, it may yield the following information:
- What are the internal rhythms of the system and what is their relevance to my observable?
- To what extent is my system noise-dominated, and what does that noise look like?
- Is the system stationary or not – the latter being an indicator of long-term changes in the external conditions influencing the system.
- If I regard my system as a dynamical system, what are the features of the underlying dynamics: Is it chaotic or regular? How does it react to perturbations? What does its phase space look like?
- For a system with multiple components: Which components interact with each other?
- How do I model my system if I want my model to do more than just reproduce certain features of the observed time series? For example, I may want the model to yield an understanding of the system, or to properly describe situations that are not comparable to anything observed in the past, e.g., when I actively manipulate the system or an extreme event happens (as in a disaster simulation). All of the above points can feed into this; moreover, time-series analysis can be used to verify a model by comparing the time series of the original system and of the model.
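As a small illustration of the first point (finding the internal rhythms of a system), here is a sketch on an invented daily observable with a roughly 30-day cycle buried in noise; a periodogram recovers the rhythm even when it is hard to see by eye:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily observable: a ~30-day rhythm plus substantial noise.
n = 360
t = np.arange(n)
series = np.sin(2 * np.pi * t / 30) + rng.normal(scale=0.8, size=n)

# Periodogram: power at each frequency; the dominant nonzero frequency
# reveals the internal rhythm of the system.
spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)          # cycles per day
peak = freqs[1:][np.argmax(spec[1:])]      # skip the zero (mean) frequency
period = 1.0 / peak

print(f"dominant period: {period:.0f} days")
```

In the patient example further below, finding such a ~30-day peak would be the "strong frequency component of roughly one month" hinting at the menstrual cycle.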
Some practical examples:
- Climate research employs a lot of time-series analysis, not only to forecast the climate but also to address the important question of how we influence it.
- If you have a time series related to the illness of an individual female patient and you find a strong frequency component of roughly one month, this is a strong hint that the menstrual cycle is somehow involved. Even if you fail to understand this relation and can only treat the symptoms by giving some medication at the right time, you can benefit from taking the patient's actual menstrual cycle into account (this is an example where you forecast with more than just the time series).
Best Answer
What you typically want from a model is that the parameter estimators be consistent and have a known distribution (typically normal or asymptotically normal). Given these properties, you can rely on them without further adjustments and use the estimated model for inference, policy implications etc. Consistency and distributional properties can be achieved if the model satisfies the statistical assumptions underlying the estimation method. These assumptions are testable, so it makes sense to test them after you have estimated the model.
If these assumptions are not satisfied, your parameter estimators may be inconsistent or have nonstandard distributions. Then you cannot readily use them, and you have to be more careful about inference.
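This directly bears on the original question about lags. Here is a sketch of the workflow with simulated data (not a real macro series): fit a static regression, check its residuals for first-order autocorrelation with the Durbin–Watson statistic (values near 2 indicate no autocorrelation), and compare with a dynamic specification that adds lagged terms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Hypothetical data: y depends on x, but the errors follow an AR(1)
# process, as residuals of static regressions on macro series often do.
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

def ols_residuals(X, y):
    """Least-squares fit; returns the residual vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def durbin_watson(resid):
    """DW statistic: ~2 means no first-order residual autocorrelation."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Static (cross-sectional-style) regression of y_t on x_t only.
X_static = np.column_stack([np.ones(n), x])
dw_static = durbin_watson(ols_residuals(X_static, y))

# Dynamic regression: add x_{t-1} and the lagged dependent variable y_{t-1}.
X_dyn = np.column_stack([np.ones(n - 1), x[1:], x[:-1], y[:-1]])
dw_dyn = durbin_watson(ols_residuals(X_dyn, y[1:]))

print(f"DW, static model   : {dw_static:.2f}")  # well below 2: violated
print(f"DW, with lag terms : {dw_dyn:.2f}")     # near 2: assumption OK
```

The static model fails the residual-autocorrelation check even though it uses the "right" regressor, while the dynamic specification with lagged terms passes — which is exactly why, with serially correlated data like GDP, passing the assumption tests typically requires including lags.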
Some more background:
There are different traditions of modelling and thus different modelling approaches. A review article by Gilbert (1986) throws some light on that.
Some would start from a theoretical model which is supposed to be "true" (i.e. correctly specified) in the subject-matter sense, estimate it, find out it does not withstand the tests on statistical assumptions underlying the estimation method, and then start tackling the problem of how to estimate the parameters of the supposedly "true" model in a good way.
Others would say that not passing the test means the "true" model is misspecified (here we see an implication from the statistical domain carrying over to the subject-matter domain). They would rather not postulate a "true" model at the start but would build a reasonable specification so that the statistical assumptions are satisfied.
This is a crude take-away, at least as I understood it. Almost 30 years have passed since the publication of the paper, and perhaps one of these approaches dominates by now, but I do not have data on that, so I cannot claim anything with certainty.