ISTR there is a form of hypothesis testing where the null hypothesis is the thing you want to assert to be true. IIRC this is based on statistical power, which is the probability [in a frequentist sense] that the null hypothesis will be rejected when it is false. So if the p-value is above the significance level but the test has high statistical power, then we would expect the null to be rejected if it were false; the fact that it isn't rejected suggests it isn't false. Simple! ;o)
I'll see if I can remember what it is called and look it up, until then caveat lector!
Update: I think what I had in mind is "accept support" hypothesis testing, rather than "reject support" testing, see e.g. here.
Another (hopefully) illustrative update:
Climate skeptics often claim that there has been no global warming since 1998, often citing a BBC interview with Prof. Phil Jones of the Climatic Research Unit at UEA (where I also work). Prof. Jones was asked:
Q: Do you agree that from 1995 to the present there has been no statistically-significant global warming?
and answered:
A: Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.
The test Jones is using here is the standard reject-support type of hypothesis test, where the null hypothesis is the opposite of what he would assert to be true:
H0: The rate of warming over the period in question (1995–2009) is zero.
H1: The rate of warming over the period in question (1995–2009) is greater than zero.
Over the period concerned, the p-value for the observed trend under the null hypothesis is greater than 0.05, which is why Prof. Jones correctly said that the warming over 1995–2009 was not statistically significant.
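As a sketch of what such a trend test looks like, here is a Monte Carlo version with entirely made-up numbers (not the HadCRUT data Prof. Jones actually used): a hypothetical trend estimate of 0.012 C/yr over 15 annual values, with an assumed interannual noise of 0.13 C, compared against the distribution of slopes fitted to pure noise.

```python
# Illustrative sketch with made-up numbers (NOT Prof. Jones's data):
# compare a hypothetical observed trend against a Monte Carlo null
# distribution of slopes fitted to no-trend noise series.
import numpy as np

rng = np.random.default_rng(0)
n = 15                               # 15 annual observations, 1995-2009
x = np.arange(n, dtype=float)
obs_trend = 0.012                    # hypothetical estimated slope, C/yr
noise_sd = 0.13                      # assumed interannual variability, C

def ols_slope(y):
    """Least-squares slope of y against x."""
    xc = x - x.mean()
    return (xc * (y - y.mean())).sum() / (xc ** 2).sum()

# slopes fitted to pure noise: the sampling distribution under H0
null_slopes = np.array([ols_slope(rng.normal(0.0, noise_sd, n))
                        for _ in range(10000)])

# one-sided Monte Carlo p-value: how often does noise alone give a
# trend at least as large as the one "observed"?
p = (null_slopes >= obs_trend).mean()
print(f"one-sided p = {p:.3f}")
```

With these assumed numbers the p-value lands a little above 0.05: the trend is positive but just misses significance, which is the situation described in the quote.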
However, for a skeptic to use this test to support their view that there has been no global warming would not be a good idea, as they are arguing FOR the null hypothesis, and reject-support hypothesis testing is biased in favour of the null hypothesis. We start off by assuming that H0 is true and only proceed with H1 if H0 is inconsistent with the observations.
What a climate skeptic should do is to perform an accept-support test, so we fix a significance level and then check whether we have enough observations for the test to have sufficient power to be confident of rejecting the null hypothesis if it were actually false. Sadly computing statistical power is rather tricky (which is presumably why reject-support testing is less popular). It turns out that in this case the test doesn't have sufficient statistical power. Combining the two hypothesis tests, we find that the observations don't rule out the possibility that it hasn't warmed, nor do they rule out the possibility that it has continued to warm at the original rate (which is easily seen by looking at the confidence interval for the trend, without all this hassle).
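To make the power side of this concrete, here is a simulation sketch with made-up numbers (assumed for illustration, not Jones's actual data): if the true trend really were 0.012 C/yr, how often would a one-sided t-test on the OLS slope reject H0 at the 5% level with only 15 annual observations?

```python
# Power-by-simulation sketch with made-up numbers (NOT Prof. Jones's
# actual data): estimate how often a one-sided trend test rejects H0
# when the true trend is genuinely positive.
import numpy as np

rng = np.random.default_rng(0)
n = 15                                # annual observations, 1995-2009
x = np.arange(n, dtype=float)
true_trend = 0.012                    # C/yr (0.12 C/decade), assumed
noise_sd = 0.13                       # assumed interannual variability, C
t_crit = 1.771                        # one-sided 5% point of t, n-2 = 13 df

def slope_t(y):
    """t statistic for the OLS slope of y against x."""
    xc = x - x.mean()
    sxx = (xc ** 2).sum()
    b = (xc * (y - y.mean())).sum() / sxx
    resid = y - y.mean() - b * xc
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    return b / se

n_sim = 2000
rejections = sum(slope_t(true_trend * x + rng.normal(0.0, noise_sd, n)) > t_crit
                 for _ in range(n_sim))
power = rejections / n_sim
print(f"estimated power ~ {power:.2f}")
```

With these assumed numbers the power comes out well below the conventional 0.8 benchmark, so a failure to reject H0 over such a short window says very little about whether warming has stopped.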
Note that Prof. Jones suggests that the likelihood of being able to find statistically significant warming depends on the length of the timescale on which you look, which suggests that he does understand the idea of the power of a test.
Hopefully this example illustrates that you can take H0 to be the thing that you want to be true, but it is so much more complicated that it is worth avoiding if you can. It is also a nice example of how the general public doesn't really understand statistical significance.
I think you are almost constructing p-values in the question.
You are correct: you can just set a threshold using $t=T(X)$, but as you point out, you'd like to calculate the error rate associated with that t-value. In order to do so you need to know the null distribution of your test statistic, which is $F$. So in order to find the t-value that has a 5% false positive rate under the null, you need to find $t$ such that $F(t)=0.05$, i.e. $t=F^{-1}(0.05)$.
In many (most?) cases inverting the cdf is harder than evaluating it, hence it is much easier to calculate $p=F(T)$ and then check if $p<0.05$ than to check if $t<F^{-1}(0.05)$.
You are correct. A fuller quotation is
At best that is a silly slip: the "e-25" is a crucial detail. To state that "the P-value is 4.44" is a fundamental misunderstanding, since P-values lie between 0 and 1.
Further, it is confused and confusing to equate P = 0.05 and a 95% confidence interval. No confidence interval is being used here at all.