I'm trying out the functions that test for cointegration of a matrix, using the Phillips & Ouliaris cointegration test.
The function in the tseries package is po.test; in the urca package it is ca.po.
The results with urca are:
> ca.po(prices, demean='none')
########################################
# Phillips and Ouliaris Unit Root Test #
########################################
Test of type Pu
detrending of series none
Call:
lm(formula = z[, 1] ~ z[, -1] - 1)
Residuals:
Min 1Q Median 3Q Max
-7.4960 -0.2912 0.7116 1.4530 3.3962
Coefficients:
Estimate Std. Error t value Pr(>|t|)
z[, -1] 0.559705 0.004678 119.6 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.73 on 749 degrees of freedom
Multiple R-squared: 0.9503, Adjusted R-squared: 0.9502
F-statistic: 1.431e+04 on 1 and 749 DF, p-value: < 2.2e-16
Value of test-statistic is: 12.9648
Critical values of Pu are:
10pct 5pct 1pct
critical values 20.3933 25.9711 38.3413
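To make the decision rule explicit: for the Pu statistic, the null of no cointegration is rejected when the test statistic exceeds the critical value. A quick check using the numbers from the output above (nothing here is new data, just the values copied into R):

```r
# Pu decision rule: reject the null of no cointegration when the test
# statistic exceeds the critical value (numbers taken from the output above).
stat <- 12.9648
crit <- c(`10pct` = 20.3933, `5pct` = 25.9711, `1pct` = 38.3413)
stat > crit   # FALSE at every level: fail to reject -> no evidence of cointegration
```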
The results with tseries are:
> po.test(prices, demean=FALSE)
Phillips-Ouliaris Cointegration Test
data: prices
Phillips-Ouliaris standard = -25.6421, Truncation lag parameter = 7,
p-value = 0.01
Warning message:
In po.test(prices, demean = FALSE) : p-value smaller than printed p-value
As you can see, I'm testing the same matrix (prices).
How is it possible that urca says there is NO cointegration while tseries says YES?
The prices matrix is a simple matrix with two columns (stock1 – stock2); here is an extract of it:
1 3.065448 5.244870
2 3.094924 5.806821
3 2.873858 5.647601
4 3.205457 6.190820
5 3.315990 6.453064
6 3.168612 6.865161
7 3.271777 7.230428
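For reference, here is a self-contained way to reproduce the comparison. The series below are simulated stand-ins (the coefficient 0.56 and sample size 750 are only illustrative, chosen to resemble the regression output above), and ca.po is assumed to default to the Pu statistic, as in the output shown:

```r
library(tseries)  # provides po.test
library(urca)     # provides ca.po

# Simulated stand-in for the prices matrix: two cointegrated random walks.
set.seed(1)
n <- 750
stock2 <- cumsum(rnorm(n))            # random walk
stock1 <- 0.56 * stock2 + rnorm(n)    # cointegrated with stock2
prices <- cbind(stock1, stock2)

po.test(prices, demean = FALSE)          # tseries version
summary(ca.po(prices, demean = "none"))  # urca version (type "Pu" by default)
```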
Thank you
Best Answer
The devil is in the details. If you read the help page of po.test and the help page of ca.po, you will find that the number of lags is chosen differently by the two functions. The source code of the two functions confirms this hypothesis. Hence the statistics are actually different, and so are the results.
This is not an uncommon situation when testing for unit roots and cointegration. If different statistics give different results, it usually means that something is missing. Also note that in general these statistics do not deal well with structural breaks, so if there are events which might have introduced structural breaks, it would be prudent to take them into account.
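One practical way to see how sensitive the conclusion is to the lag choice is to run ca.po under both truncation-lag settings it offers ("short" and "long"). A sketch, assuming a simulated cointegrated pair in place of the real prices matrix, and using the @teststat and @cval slots of urca's S4 test objects:

```r
library(urca)

# Illustrative stand-in for the prices matrix (any cointegrated pair will do).
set.seed(1)
x <- cumsum(rnorm(750))
prices <- cbind(stock1 = 0.56 * x + rnorm(750), stock2 = x)

# If the conclusion flips between the two lag settings, the evidence
# for (or against) cointegration is fragile.
for (l in c("short", "long")) {
  fit <- ca.po(prices, demean = "none", lag = l, type = "Pu")
  cat(sprintf("lag = %-5s  Pu = %8.4f  10pct critical value = %8.4f\n",
              l, fit@teststat, fit@cval[1, "10pct"]))
}
```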