You can use Spiegelhalter's test (1983, not the 'omnibus test' from 1977):
function pval = spiegel_test(x)
% Computes the p-value under the null that x is normally distributed;
% x should be a vector.
% D. J. Spiegelhalter, 'Diagnostic tests of distributional shape,'
% Biometrika, 1983.
xm = mean(x);
xs = std(x);
xz = (x - xm) ./ xs;    % standardize the sample
xz2 = xz.^2;
N = sum(xz2 .* log(xz2));
n = numel(x);
ts = (N - 0.73 * n) / (0.8969 * sqrt(n));  % under the null, ts ~ N(0,1)
pval = 1 - abs(erf(ts / sqrt(2)));  % 2-sided test; equals 2*(1 - normcdf(abs(ts))), but erf avoids the Statistics Toolbox
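In symbols: with $z_i$ the standardized sample values, the code computes
$$T = \frac{\sum_{i=1}^n z_i^2 \log z_i^2 - 0.73\,n}{0.8969\,\sqrt{n}},$$
which is treated as approximately $\mathcal{N}(0,1)$ under the null.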
I include code to check the test's size under the null and its power under a few alternatives:
% under H0:
pvals = nan(10000,1);
for tt=1:numel(pvals)
    pvals(tt) = spiegel_test(randn(300,1));
end
mean(pvals < 0.05)  % empirical rejection rate at the 5% level
I get something like:
ans =
0.0512
Under some alternatives:
% under Ha (using a Tukey g-distribution, a skewed alternative)
g = 0.4;
pvals = nan(10000,1);
for tt=1:numel(pvals)
    pvals(tt) = spiegel_test((exp(g * randn(300,1)) - 1) / g);
end
mean(pvals < 0.05)

% under Ha (using a Tukey h-distribution, a symmetric heavy-tailed alternative)
h = 0.1;
pvals = nan(10000,1);
for tt=1:numel(pvals)
    x = randn(300,1);
    pvals(tt) = spiegel_test(x .* exp(0.5 * h * x.^2));
end
mean(pvals < 0.05)
I get:
ans =
0.8494
ans =
0.8959
This test discards the knowledge that the mean must equal zero, so it is perhaps less powerful than tests that use that information. Spiegelhalter notes that the test performs reasonably well for sample sizes greater than about 25, and that it is designed to test against symmetric alternatives (e.g. the Tukey h-distribution); it is less powerful against asymmetric alternatives.
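If you want to experiment in R, here is a minimal sketch of the same statistic (my own port, not from the original post); it makes it easy to probe the power claims against other alternatives, e.g. a skewed one such as a centered exponential:

# minimal R port of spiegel_test; assumes x is a numeric vector
spiegel_test <- function(x) {
  z2 <- as.vector(scale(x))^2   # squared standardized values
  n  <- length(x)
  ts <- (sum(z2 * log(z2)) - 0.73 * n) / (0.8969 * sqrt(n))
  2 * pnorm(-abs(ts))           # two-sided p-value
}
# empirical power at the 5% level against a skewed alternative:
mean(replicate(10000, spiegel_test(rexp(300) - 1)) < 0.05)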
Here is some R code to simulate the null distribution of the KS p-value: generate data from a normal with the same mean and sd as the sample, then do the KS test using the sample (not the generating) statistics:
# abc is the observed data vector being tested
out <- replicate(100000, {
  x <- rnorm(length(abc), mean(abc), sd(abc))
  ks.test(x, pnorm, mean(x), sd(x))$p.value
})
hist(out)
mean(out <= ks.test(abc, pnorm, mean(abc), sd(abc))$p.value)
My estimated p-value from the simulation is 0.021 (you can get more accuracy/precision by running more simulations), which is closer to the Minitab/SYSTAT values, but not identical. So this suggests that the other programs may be adjusting in some way for the estimated parameter values, but there is still enough of a difference that I expect their adjustment differs from this simulation procedure.
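One standard adjustment of this kind is the Lilliefors correction to the KS test for the case where the mean and sd are estimated from the data; in R it is available in the nortest package (this is my suggestion, not part of the original simulation):

# Lilliefors-corrected KS test for normality with estimated parameters;
# assumes the nortest package is installed and abc is the data vector
library(nortest)
lillie.test(abc)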
Best Answer

ks.test in R allows one to adjust the mean and sd of the distribution to be tested against, e.g.:
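Here is a minimal example (the data are illustrative); note that plugging the sample mean and sd back into ks.test this way makes the nominal p-value conservative, which is exactly what the simulation above addresses:

x <- rnorm(100, mean = 2, sd = 3)    # example data
ks.test(x, "pnorm", mean(x), sd(x))  # KS test against N(mean(x), sd(x))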