Situation:
I generated a series of 1,000 i.i.d. random errors and ran it through several unit root tests, but the tests gave different results. The test statistics I obtained are listed below:
In R:
adf.test @ tseries ~ -10.2214 (lag = 9)
ur.df @ urca ~ -21.8978
ur.sp @ urca ~ -27.68
pp.test @ tseries ~ -972.3343 (truncation lag = 7)
ur.pp @ urca ~ -973.2409
ur.kpss @ urca ~ 0.1867
kpss.test @ tseries ~ 0.1867 (truncation lag = 7)
In MATLAB:
adftest ~ -0.43979
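
For reference, here is a minimal R sketch of the setup described above. The seed is arbitrary (my original draw is not reproducible from the post), so the statistics will not match the values quoted:

    # 1000 i.i.d. random errors run through the same unit root tests
    library(tseries)
    library(urca)

    set.seed(123)        # arbitrary seed; the original draw is unknown
    x <- rnorm(1000)     # 1000 i.i.d. standard normal errors

    adf.test(x)          # tseries: augmented Dickey-Fuller
    summary(ur.df(x))    # urca: augmented Dickey-Fuller
    summary(ur.sp(x))    # urca: Schmidt-Phillips
    pp.test(x)           # tseries: Phillips-Perron
    summary(ur.pp(x))    # urca: Phillips-Perron
    summary(ur.kpss(x))  # urca: KPSS
    kpss.test(x)         # tseries: KPSS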
Questions:
1. Tests with the same name, e.g. the Phillips-Perron test (pp.test and ur.pp), give different test statistics. Why?
2. Isn't the Phillips-Perron test based on the Dickey-Fuller distribution table? How can the statistic be so negative (-9xx)?
3. What is the truncation lag? Is it the same as the number of lag terms?