First question. You are right about being able to use software instead of tables
of the chi-squared distribution. For example, if df = 9 and the
chi-squared statistic is 20.16, you could look at a chi-squared
table to see that $20.16 > 19.02,$ where 19.02 cuts area 0.025
from the upper tail of $\mathsf{Chisq}(df = 9)$. You would then reject at the 2.5% level.
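The critical value 19.02 can itself be found with software rather than a table; in R, qchisq is the quantile function (inverse CDF) of the chi-squared distribution:

```r
qchisq(0.975, 9)   # value cutting 2.5% from the upper tail of Chisq(df = 9); about 19.02
```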
If you wanted a P-value, you could use software
to find the probability of the chi-squared statistic being
greater than 20.16. In R software this is computed as follows, where pchisq stands for the CDF of a chi-squared distribution:
1 - pchisq(20.16, 9)
## [1] 0.01695026
Thus the P-value (probability of a value more extreme than 20.16)
is about 0.017. Some software will give you the P-value automatically.
Second question. As far as binning is concerned, you are right that in some instances there are alternative ways to bin. You do not want so many bins that the expected count in any bin falls below about 5; otherwise, the approximation of the chi-squared statistic by the chi-squared distribution is not good. Given that restriction, it is usually better to use more bins rather than fewer.
Also notice
that the df of the chi-squared distribution depends directly on
the number of $bins$ used, not on the overall number of $events$ counted.
(I do not understand what you say about 'approximately Gaussian'
in this context.)
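As a sketch of how binning works in practice (the seed and sample here are illustrative, not from your data), one can choose bin edges that have equal probability under the null hypothesis, so that every expected count is well above 5, and then pass the null bin probabilities to chisq.test. With 4 bins, df = 4 - 1 = 3, regardless of the sample size:

```r
set.seed(1234)                       # arbitrary seed, for reproducibility
x = rexp(100, rate = 1)              # simulated sample; null: standard exponential
brk = qexp(c(0, .25, .5, .75, 1))    # bin edges with null probability 1/4 each
obs = table(cut(x, brk))             # observed counts in the 4 bins
chisq.test(obs, p = rep(1/4, 4))     # expected count 25 per bin; df = 4 - 1 = 3
```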
Examples: Here is an example in which we simulate 60 rolls of a fair die, so that we expect 10 instances of each face. The observed numbers
of each face are tabulated. Finally, a chi-squared test that the
die is fair has a chi-squared goodness-of-fit statistic of 3.0,
and a P-value of 70% (consistent with a fair die).
face = sample(1:6, 60, replace=TRUE)  # simulate 60 rolls of fair die
table(face)
## face
## 1 2 3 4 5 6
## 9 6 12 10 10 13
chisq.test(table(face))
## Chi-squared test for given probabilities # default is equal probabilities
## data: table(face)
## X-squared = 3, df = 5, p-value = 0.7
In the test, the default is that faces have equal probabilities
unless some other probability vector is specified. The test procedure chisq.test finds the P-value as follows (and rounds):
1 - pchisq(3, 5)
## [1] 0.6999858
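The statistic itself can be verified directly from the tabulated counts:

```r
obs = c(9, 6, 12, 10, 10, 13)   # observed counts from the table above
E = rep(10, 6)                  # expected counts for a fair die
sum((obs - E)^2 / E)            # goodness-of-fit statistic
## [1] 3
```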
In our second example, we simulate 600 rolls of a die that
is heavily biased in favor of faces 4, 5, and 6 (see prob
vector). Here
the null hypothesis that the die is fair is soundly rejected
with an extremely small P-value.
face = sample(1:6, 600, replace=TRUE, prob=c(1,1,1,2,2,2)/9)
table(face)
## face
## 1 2 3 4 5 6
## 59 67 80 123 135 136
chisq.test(table(face))
## Chi-squared test for given probabilities # default is test for 'fair' die
## data: table(face)
## X-squared = 62.2, df = 5, p-value = 4.263e-12
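As before, the reported P-value agrees with the upper-tail probability from the chi-squared CDF:

```r
1 - pchisq(62.2, 5)   # about 4.26e-12, matching the p-value above
```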
Best Answer
Chi-squared goodness-of-fit (GOF) tests are widely used and often misinterpreted. Here are two examples that involve testing whether a die is fair.
Example 1: Suppose we roll a die 60 times, and get the following summary table of results.
If the die is fair, then we say we would 'expect' each face to occur 10 times. Of course, that would be an 'average' result. In view of random variation, it would be a very rare outcome to see a frequency of exactly 10 for each of the six faces.
The question is how much different from the 'expected' results $E_i = 10$ can the actual results $X_i$ be before we reject the null hypothesis that each face has probability $p_i = 1/6?$
The usual way to measure departure from the idealized outcome is to compute the GOF statistic
$$Q = \sum_{i=1}^6 \frac{(X_i - E_i)^2}{E_i}.$$
For the data shown above, we have $Q = 5.4.$ Notice that if all six observed frequencies were 10's, we would have $Q = 0,$ so large values of $Q$ correspond to poor fit to the null hypothesis that the die is fair.
If the null hypothesis is true, $Q \stackrel{aprx}{\sim} \mathsf{Chisq}(\nu = 5),$ the chi-squared distribution with $\nu = 6 - 1 = 5$ degrees of freedom. This is an approximation, but with all expected values $E_i > 5,$ some theory and some simulation studies show that the approximation is good enough to use in testing the null hypothesis.
If we are testing the null hypothesis at the 5% level of significance, the 'critical value' above which we reject the null hypothesis is $c = 11.0705.$ Because $Q < c$ we do not reject the null hypothesis. We say that the data are consistent with behavior of a fair die. The value $c$ cuts 5% of the area from the upper tail of $\mathsf{Chisq}(5).$
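The critical value quoted here can likewise be obtained from the chi-squared quantile function:

```r
qchisq(0.95, 5)   # cuts 5% from the upper tail of Chisq(5); about 11.0705
```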
In R statistical software, the test procedure looks like this, where face is the vector of the 60 outcomes tabled above. [Unless a vector of probabilities other than $p = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6)$ is specified, the program assumes the 'given probabilities' are equally likely.] The P-value is the probability that a fair die would give a $Q$-value greater than our result $Q = 5.4.$ [Another way to test at the 5% level is to reject the null hypothesis if the P-value is smaller than 5%.]
The figure below shows the density curve of $\mathsf{Chisq}(5).$ The vertical dotted red line is at the critical value $c = 11.0705,$ the vertical solid black line is at the observed value $Q = 5.4,$ and the area beneath the curve to the right of the black line is the P-value.
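A plot along the lines of that figure can be sketched in base R (the cosmetic choices here are mine, not necessarily those of the original figure):

```r
curve(dchisq(x, 5), 0, 20, lwd = 2, ylab = "Density",
      main = "Density of CHISQ(5)")
abline(v = 11.0705, col = "red", lty = "dotted")  # critical value c
abline(v = 5.4, lwd = 2)                          # observed Q = 5.4
abline(h = 0, col = "green4")                     # baseline
```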
Example 2: By placing a lead weight beneath the corner of a die where faces 4, 5, and 6 meet it would be possible to make an unfair die with probabilities $$p = (7/36, 7/36, 7/36, 5/36, 5/36, 5/36).$$ With $n = 60$ rolls of such an altered die, the expected counts would be $$E = \left(11\frac23, 11\frac23, 11\frac23, 8\frac13, 8\frac13, 8\frac13 \right).$$
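The expected counts for the weighted die follow directly from $E_i = np_i$:

```r
p = c(7, 7, 7, 5, 5, 5) / 36   # probabilities for the hypothetical weighted die
60 * p                         # expected counts: 11.67 for faces 1-3, 8.33 for faces 4-6
```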
Now we ask whether our data are also consistent with 60 rolls of such an unfair die. Again the 'null distribution' of $Q$ is $\mathsf{Chisq}(5)$ and the critical value is $c=11.0705.$ However, we must use the new expected values $E_i$ in the formula for the GOF statistic, so that $Q = 7.2 < c$ and the null hypothesis is (once again) not rejected.
So we cannot say in Example 1 that we have "proved" the die is fair. The data are also consistent with a die that is biased as described in the current example. With only $n = 60$ rolls of the die, we do not have enough information to distinguish between a fair die and a somewhat biased one.
If the die were truly biased as described and the number of rolls had been greater (perhaps 600 instead of 60), then we would very likely get data that are clearly not consistent with a fair die.
Note: The data for these examples resulted from 60 rolls of a die that I suppose is fair. (Transparent plastic and no signs of tampering.)