Let's assume we have four time series a, b, c and d, each with 10 measurements.
a(1), ..., a(10)
b(1), ..., b(10)
c(1), ..., c(10)
d(1), ..., d(10)
a, b and c are assumed to share the same trend and periodicity.
The question is how I can compare d to a combination of a, b and c in order to test whether d deviates from that shared trend and periodicity.
The problem is that a, b and c have different ranges, so an average
X(i) := ( a(i) + b(i) + c(i) ) / 3
is not useful.
My question is: what would be a good way to arrive at a meaningful combination?
Would it make sense to normalize the mean of all series a, b, c, d to 1 and then compare d to the average of a, b and c? Or would I also have to normalize the standard deviation of all four series to 1 first?
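A small sketch in Python/NumPy of the difference between the two normalizations (the series here are invented for illustration; one common variant of the second option is the z-score, i.e. mean 0 and standard deviation 1 rather than mean 1):

```python
import numpy as np

t = np.arange(10)
rng = np.random.default_rng(1)
# Invented series sharing one shape but with different levels and amplitudes.
shape = np.sin(2 * np.pi * t / 10)
a = 1.0 * shape + 0.0 + rng.normal(scale=0.05, size=10)
b = 5.0 * shape + 20.0 + rng.normal(scale=0.25, size=10)
c = 0.5 * shape - 3.0 + rng.normal(scale=0.03, size=10)

def standardize(x):
    # Z-score: subtracting the mean removes the level difference; dividing by
    # the standard deviation additionally removes the amplitude difference.
    return (x - x.mean()) / x.std(ddof=1)

X_raw = (a + b + c) / 3                                        # b dominates
X_z = (standardize(a) + standardize(b) + standardize(c)) / 3   # equal weights
print(X_raw.round(2))
print(X_z.round(2))
```

Only the z-scored average weights the three series equally; matching means alone would still let the series with the largest amplitude dominate the combination.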
Best Answer
I am not particularly familiar with them myself, but this seems like a reasonable case for a VAR (vector autoregression) model.
In particular, a VAR with a linear time trend or period-specific deterministic component would give you a nice summary statistic for the overall trend in a given period. See for example equation (11.4) here:
http://faculty.washington.edu/ezivot/econ584/notes/varModels.pdf
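For orientation, the general shape of such a model (my paraphrase of the standard form, not a quote of equation (11.4); see the linked notes for the exact statement) is:

```latex
Y_t = \Phi D_t + \Pi_1 Y_{t-1} + \cdots + \Pi_p Y_{t-p} + G X_t + \varepsilon_t
```

with Y_t the vector of endogenous series, D_t the deterministic terms (constant, linear trend, seasonal dummies) and X_t an exogenous regressor.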
You could then treat your series d as analogous to the exogenous variable in impulse response modeling (or the X in the equation listed above) and test its joint effect on the vector of Y's (your series a-c). The F-test seems to be the standard way to do this.
Example
Here's an example that (I think) shows what you are looking to test:
The Granger null hypothesis (that lags of d do not help predict the other series) is rejected at p < .05. So a shock in d is useful in predicting future values of (a, b, c).