time-series – Should Data Be Differenced Before Running an ADF Test?

augmented-dickey-fuller, differencing, stationarity, time-series, trend

I plot my data as shown in the following screenshot:

[plot of the original series]

Clearly the series contains a trend. A first-order difference of my data eliminates the trend, as the following plot shows:

[plot of the first-differenced series]

Now I would like to run an Augmented Dickey-Fuller (ADF) test, using the function adf.test from the R package aTSA. The output of this function reports results for three model types: 'no drift no trend', 'with drift no trend' and 'with drift and trend'. I am unsure whether I should test the differenced series and rely on the type 'with drift no trend', or test the original series and rely on the type 'with drift and trend'. Can anyone help?

Best Answer

No, you should not difference your series before running an ADF test. You would difference a series that contains a unit root. To find that out, you can run an ADF or some other unit-root test. The test result will give you an indication of whether you need to difference your time series. So you do the ADF first rather than last.
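As a minimal sketch of that workflow with aTSA (the simulated series x below is only a stand-in for your data, and I am assuming adf.test returns its three tables as list elements type1, type2, type3):

```r
library(aTSA)

# Illustrative stand-in for the plotted series: a linear trend plus
# stationary AR(1) noise.
set.seed(1)
x <- 0.05 * (1:200) + arima.sim(model = list(ar = 0.5), n = 200)

# Run the ADF test on the original (undifferenced) series.
# aTSA::adf.test() reports three tables: Type 1 (no drift, no trend),
# Type 2 (with drift, no trend), Type 3 (with drift and trend).
res <- adf.test(x)

# For a series with an apparent linear trend, the Type 3
# ("with drift and trend") rows are the ones to read.
res$type3
```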

Your particular series appears to have a deterministic linear trend. A sound way to deal with that is either (1) to fit a linear trend and work with the residuals or (2) to include a linear trend in a model for the series. Differencing is not warranted, because a linear trend does not imply a unit root.
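A rough sketch of both options, continuing with the simulated x and the aTSA package loaded above (names like t_index are just illustrative):

```r
# Option (1): remove the deterministic linear trend by regressing on a
# time index, then work with the residuals.
t_index <- seq_along(x)
trend_fit <- lm(x ~ t_index)
detrended <- residuals(trend_fit)

# The detrended series can then be tested or modelled as a series
# without a deterministic trend.
adf.test(detrended)

# Option (2): keep the original series and put the trend in the model,
# e.g. an AR(1) with a linear time trend as an external regressor.
fit <- arima(x, order = c(1, 0, 0), xreg = t_index)
fit
```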

(You may look up the term overdifferencing to see what can go wrong when differencing a time series that does not have a unit root.)