Time Series Analysis – Implications of Fitting ARIMA Model with Constant Variance to Nonconstant Variance Process

Tags: heteroscedasticity, stationarity, time-series

In Time Series Analysis (p. 657), Hamilton warns that a variance that changes over time has implications for the validity and efficiency of statistical inference about the parameters in an AR(p) model (and, by extension, in an ARMA or ARIMA model).

[Image: excerpt from Hamilton, Time Series Analysis, p. 657]

Here are three articles that compare ARCH models with ARIMA models. As far as I can tell, each of them fits an ARIMA model (with constant variance) to series from processes whose variance is clearly non-constant and exhibits volatility clusters. Each article also fits an ARCH/GARCH-type model, but I question whether fitting an ARIMA-type model (with constant variance) makes any sense in these cases. Curiously, article 3 even concludes that the ARMA model (with constant variance) predicts better than the GARCH model.

What are the implications, for the validity and efficiency of statistical inference about its parameters, of estimating a process with non-constant variance using an ARIMA model that assumes constant variance? Do those implications differ when the variance changes linearly versus when it exhibits volatility clusters? (See the plot below.)

Routinely you see people conclude that rejecting the null hypothesis in a unit root test (e.g. ADF) amounts to concluding that a series is stationary (rather than merely I(0)), once again glossing over the possibility of non-constant variance and its implications. Neither of the series pictured below has a unit root, yet both have clearly non-constant variance, as did the series in each of the three cited articles.

[Plot: two simulated series without unit roots but with non-constant variance, generated by the code below]

par(mfrow = c(2, 3))
########################################################################
# Series 1: standard deviation grows linearly with time
set.seed(400)
y <- rep(NA, 100)
for (i in 1:100) {
  y[i] <- rnorm(1, mean = 0, sd = i)
}
plot(y, type = "p", main = "1: variance increases linearly", col = "red", lwd = 2); abline(h = 0)

# ADF test with neither drift nor trend; inspect residual autocorrelation
u <- urca::ur.df(y = y, type = "none", lags = 12)
summary(u)
forecast::Acf(u@res, lag.max = 70, type = "correlation", main = "ACF", xlab = "")
forecast::Acf(u@res, lag.max = 70, type = "partial", main = "PACF", xlab = "")
########################################################################
# Series 2: standard deviation first decays, then grows nonlinearly
y <- rep(NA, 100)
for (i in 1:100) {
  if (i < 50) y[i] <- rnorm(1, mean = 0, sd = 2 / i^1.08)
  else y[i] <- rnorm(1, mean = 0, sd = i^2 / 30000)
}
plot(y, type = "p", main = "2: variance does not increase linearly", col = "blue", lwd = 2); abline(h = 0)

u <- urca::ur.df(y = y, type = "none", lags = 12)
summary(u)
forecast::Acf(u@res, lag.max = 70, type = "correlation", main = "ACF", xlab = "")
forecast::Acf(u@res, lag.max = 70, type = "partial", main = "PACF", xlab = "")
########################################################################
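One diagnostic the cited articles could have applied after their constant-variance ARMA fits is a Ljung-Box test on the squared residuals (the McLeod-Li check for remaining ARCH-type effects). A minimal sketch, assuming the `forecast` package is installed; the simulated series mimics series 1 above:

```r
# Sketch: fit a constant-variance ARMA to a heteroscedastic series and
# test the squared residuals for remaining ARCH-type effects
# (Ljung-Box test applied to e^2, i.e. the McLeod-Li check).
set.seed(400)
y <- rnorm(100, mean = 0, sd = 1:100)        # sd grows linearly, as in series 1
fit <- forecast::auto.arima(y, d = 0)        # ARMA fit assuming constant variance
e <- residuals(fit)
Box.test(e^2, lag = 12, type = "Ljung-Box")  # small p-value: squared residuals
                                             # autocorrelated, variance not constant
```

A small p-value here says the constant-variance assumption behind the reported standard errors is untenable, even when the ARMA point forecasts look reasonable.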

In this post, Richard Hardy gives some potentially valuable clues. If I follow him correctly: as long as the true model contains no MA terms, fitting an ARMA model still yields consistent estimators even if the variance has ARCH-type clusters, although the standard errors would be off. If the true model does contain MA terms, the estimators are not even consistent, in addition to the standard errors being off. Perhaps this takes us closer to an answer.

Best Answer

As long as the true model does not contain any MA terms, fitting an ARMA model will still yield consistent estimators even if the variance exhibits ARCH-type clusters; the standard errors, however, will be off. If the true model does contain MA terms, the estimators will not even be consistent, in addition to the standard errors being off.
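The first claim can be illustrated with a small Monte Carlo sketch (my own illustration, not from the answer): simulate an AR(1) with ARCH(1) errors, fit a constant-variance AR(1) by maximum likelihood, and compare the spread of the point estimates to the standard errors the model reports. The parameter values below are arbitrary choices for the illustration.

```r
# Sketch: AR(1) with ARCH(1) errors. The AR coefficient estimate stays
# near the truth (consistency), but the conventional standard errors,
# which assume constant variance, understate the actual sampling spread.
set.seed(1)
phi <- 0.5; n <- 1000; reps <- 100
est <- se <- numeric(reps)
for (r in 1:reps) {
  e <- numeric(n); y <- numeric(n)
  for (t in 2:n) {
    h <- 0.2 + 0.5 * e[t - 1]^2          # ARCH(1) conditional variance
    e[t] <- sqrt(h) * rnorm(1)
    y[t] <- phi * y[t - 1] + e[t]
  }
  fit <- arima(y, order = c(1, 0, 0), include.mean = FALSE)
  est[r] <- coef(fit)["ar1"]
  se[r]  <- sqrt(fit$var.coef["ar1", "ar1"])
}
mean(est)           # typically close to 0.5: the estimator is consistent
sd(est) / mean(se)  # typically above 1: reported SEs are too small
```

The ratio in the last line is the Monte Carlo standard deviation of the estimates over the average model-reported standard error; values above 1 are the "standard errors would be off" part of the answer made concrete.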