Solved – Interpreting p-values in Fisher vs Neyman-Pearson frameworks

hypothesis-testing · p-value · statistical-significance

I am a little confused about what p-values mean under Fisher's significance testing and Neyman-Pearson's hypothesis testing.

Fisher uses p-values as a continuous measure of evidence against a null hypothesis? So a p-value of 0.06 would indicate that there is no difference and the null hypothesis is true?

However, does a p-value mean the same thing under Neyman-Pearson? I know that you have to pre-set an alpha level for type I errors, but does this affect p?
Does a p-value greater than alpha indicate that there is >5% chance of a type one error occurring?

Best Answer

Fisher uses p-values as a continuous measure of evidence against a null hypothesis?

Perhaps. What convinces you of this?

So a p-value of 0.06 would indicate that there is no difference and the null hypothesis is true?

Not at all. How did you go from 'continuous measure of evidence against' to 'there is no difference'?

In particular, Fisher would not make the mistake of thinking that failure to reject makes $H_0$ actually true.

Does a p-value greater than alpha indicate that there is >5% chance of a type one error occurring?

No, for two reasons.

(i) If $p>\alpha$, you won't reject, so you can't commit a type I error at all.

(ii) You don't even have an $\alpha$ probability of making a type I error, because the type I error rate is a probability *conditional on $H_0$ being exactly true*, and in real situations the joint probability is close to zero: point null hypotheses are almost never exactly true, and you can only commit a type I error when they are.
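To make point (ii) concrete, here is a small simulation sketch (not from the answer; the sample size, alpha, and number of trials are arbitrary choices). It generates data for which the point null $\mu = 0$ really is exactly true, so every rejection is a genuine type I error, and the long-run rejection rate comes out close to $\alpha$. The p-value uses a normal approximation to the t statistic via `math.erf`, so the observed rate runs slightly above $\alpha$ for this modest $n$.

```python
import math

import numpy as np

rng = np.random.default_rng(0)
n, trials, alpha = 30, 10_000, 0.05

rejections = 0
for _ in range(trials):
    # Here H0: mu = 0 is exactly true by construction.
    x = rng.normal(0.0, 1.0, n)
    # Standardized test statistic for the sample mean.
    z = x.mean() / (x.std(ddof=1) / math.sqrt(n))
    # Two-sided p-value from the normal approximation:
    # p = 2 * (1 - Phi(|z|)), with Phi built from erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p < alpha:
        rejections += 1  # a genuine type I error, since H0 is true

rate = rejections / trials
print(rate)
```

If instead you drew `x` from a distribution with a nonzero mean, every rejection would be a correct decision and no type I error would be possible, which is exactly the answer's point.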

[ ... I suppose that I'm arguably acting more as a Bayesian there]