Solved – Are log difference time series models better than growth rates

Tags: data-transformation, econometrics, forecasting, logarithm, time-series

Often I see authors estimate a "log difference" model, e.g.

$\log (y_t)-\log(y_{t-1}) = \log(y_t/y_{t-1}) = \alpha + \beta x_t$

I agree this is appropriate for relating $x_t$ to a percentage change in $y_t$ when $\log(y_t)$ is $I(1)$.

But the log difference is an approximation, and it seems one could just as well estimate a model without the log transformation, e.g.

$y_t/y_{t-1} - 1 = (y_t - y_{t-1})/y_{t-1} = \alpha + \beta x_t$

Moreover, the growth rate describes the percent change exactly, whereas the log difference only approximates it.
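To illustrate the size of that approximation error, here is a minimal sketch with made-up numbers; the gap between the two measures grows with the size of the change:

```python
import numpy as np

# Hypothetical consecutive observations of the series.
y_prev, y_curr = 100.0, 105.0

growth_rate = y_curr / y_prev - 1            # exact percent change: 0.05000
log_diff = np.log(y_curr) - np.log(y_prev)   # approximation: ~0.04879

print(f"growth rate:    {growth_rate:.5f}")
print(f"log difference: {log_diff:.5f}")
# The two agree closely for small changes because log(1 + g) ≈ g,
# but diverge noticeably for larger moves (try y_curr = 150).
```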

However, I've found that the log-difference approach is used much more often. Using the growth rate seems just as appropriate for addressing stationarity as taking the first difference of the logs. Moreover, I have found that forecasts become biased (this is sometimes called the retransformation problem in the literature) when the log variable is transformed back to levels.
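A rough sketch of that retransformation bias, assuming (purely for illustration) that the forecast errors on the log scale are normal: simply exponentiating the point forecast of $\log(y_t)$ omits the $\sigma^2/2$ term in the lognormal mean and so underestimates the level on average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed forecast mean and error standard deviation on the log scale.
mu, sigma = np.log(100.0), 0.2
log_forecasts = rng.normal(mu, sigma, size=100_000)

naive_level = np.exp(mu)                        # exp of the point forecast: 100.00
mean_level = np.exp(log_forecasts).mean()       # ≈ exp(mu + sigma**2 / 2) ≈ 102.02

print(f"naive back-transform: {naive_level:.2f}")
print(f"simulated mean level: {mean_level:.2f}")
```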

What are the benefits of using the log difference compared to the growth rate? Are there any inherent problems with the growth-rate transformation? I'm guessing I'm missing something; otherwise it would seem obvious to use the growth rate more often.

Best Answer

One major advantage of log differences is symmetry: if you have a log difference of $0.1$ today and one of $-0.1$ tomorrow, you are back where you started. In contrast, 10% growth today followed by a 10% decline tomorrow will not bring you back to the initial value.
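A quick numeric check of that symmetry, with illustrative numbers only:

```python
import numpy as np

y0 = 100.0

# Log differences of +0.1 and then -0.1 return exactly to the start.
y_log = y0 * np.exp(0.1) * np.exp(-0.1)   # 100.0

# Growth of +10% followed by a decline of 10% does not.
y_pct = y0 * 1.10 * 0.90                  # 99.0

print(y_log, y_pct)
```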