Different results from unit root tests. Why?

Situation:
I generated 1,000 observations of i.i.d. random error and ran the series through several unit root tests, but the tests give different results. The test statistics I obtained are listed below (a rough R sketch of the setup follows the results):

For R project:
adf.test @ tseries ~ -10.2214 (lag = 9)
ur.df @ urca ~ -21.8978
ur.sp @ urca ~ -27.68
pp.test @ tseries ~ -972.3343 (truncation lag = 7)
ur.pp @ urca ~ -973.2409
ur.kpss @ urca ~ 0.1867
kpss.test @ tseries ~ 0.1867 (truncation lag = 7)

For MATLAB:
(adf test) ~ -0.43979
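
For reference, here is a minimal R sketch of the setup (the seed and the urca option choices are illustrative rather than the exact calls I used; defaults otherwise):

library(tseries)
library(urca)

set.seed(1)                          # illustrative; any i.i.d. series behaves similarly
x <- rnorm(1000)                     # 1000 i.i.d. random errors

adf.test(x)                          # tseries ADF; default lag k = trunc((n-1)^(1/3)) = 9 here
pp.test(x)                           # tseries Phillips-Perron; default short truncation lag (7 for n = 1000)
kpss.test(x)                         # tseries KPSS; same default truncation lag

summary(ur.df(x, type = "none"))     # urca ADF; summary() prints the tau statistic
summary(ur.pp(x, type = "Z-alpha"))  # urca Phillips-Perron, Z-alpha statistic
summary(ur.kpss(x))                  # urca KPSS
summary(ur.sp(x))                    # urca Schmidt-Phillips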

Questions:
1. Tests with the same name, e.g. the Phillips-Perron test (pp.test & ur.pp), give different test statistics. Why?
2. Isn't the Phillips-Perron test based on the Dickey-Fuller distribution table? How can the statistic be so negative (-9xx)?
3. What is the truncation lag? Is it the same as the number of lag terms? (See the sketch after this list for where each function takes its lag argument.)
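
To make question 3 concrete, these are the arguments that control the lag/truncation choice in each function (argument names as in the tseries and urca documentation; the values here are just examples, not the ones behind the results above):

library(tseries)
library(urca)

x <- rnorm(1000)

adf.test(x, k = 9)               # k: number of lagged differences in the ADF regression
summary(ur.df(x, lags = 9))      # lags: same role as k above
pp.test(x, lshort = TRUE)        # lshort: short vs. long truncation lag for the long-run variance
summary(ur.pp(x, use.lag = 7))   # use.lag: set the truncation lag explicitly
kpss.test(x, lshort = TRUE)
summary(ur.kpss(x, use.lag = 7))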


Accidental post?

This is SO not related to anything we normally discuss here that I can only wonder if you accidentally posted to the wrong forum.

Since I can't interpret it as any topic I'm familiar with, I can't even suggest where you should go look instead.

Safe to say though, it is not here at Lambda the Ultimate.

Anthropologist

Or perhaps he's an Anthropologist Errant, trying to see how various communities react to various kinds of off-topic posts. :-)