Some possibilities (though I still am not sure of the actual details):
If the response is a frequency, then for counts you generally want a binomial or Poisson model for the outcome. But your sample data are not integers; are they counts divided by time, or something similar? If so, you can still use a Poisson regression model (a generalized linear model, glm) to analyze the data, with the observation time entering as an offset. If there is concern about dependence in the data, then generalized estimating equations (gee) or generalized linear mixed effects models (glmm) extend Poisson regression in that direction.
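For concreteness, here is a minimal sketch of such a model in R with made-up data; the names count, time and group are placeholders rather than anything from your study:

```r
# Hypothetical data: event counts, observation times, and one predictor
set.seed(1)
dat <- data.frame(
  count = rpois(40, lambda = 5),           # made-up event counts
  time  = runif(40, 1, 3),                 # made-up observation times
  group = gl(2, 20, labels = c("A", "B"))  # made-up grouping variable
)

# Poisson GLM for the rate (count / time) via a log-time offset
fit <- glm(count ~ group + offset(log(time)), family = poisson, data = dat)
summary(fit)

# If observations are correlated (e.g. repeated measures), geepack::geeglm
# or lme4::glmer extend this model in the GEE / GLMM directions.
```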
A flexible non-parametric method is permutation testing (many of the other non-parametric methods are special cases of permutation testing).
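As an illustration, here is a sketch of a simple two-sample permutation test in R on simulated data, using the difference in means as the test statistic; any statistic suited to your design could be permuted the same way:

```r
# Two-sample permutation test on made-up data (x and y are placeholders)
set.seed(1)
x <- rnorm(15, mean = 0.5)
y <- rnorm(15, mean = 0.0)

obs    <- mean(x) - mean(y)   # observed test statistic
pooled <- c(x, y)

# Recompute the statistic under random reassignments of group labels
perm <- replicate(10000, {
  idx <- sample(length(pooled), length(x))
  mean(pooled[idx]) - mean(pooled[-idx])
})

# Two-sided permutation p-value
mean(abs(perm) >= abs(obs))
```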
I don't remember enough about SPSS to give more specific suggestions.
I have a set of measurements coming from a manufacturing process. I want to test whether the measurements come from a normal distribution. If I understand correctly, it's wrong to use the K-S test or the A-D test with the mean and variance estimated from the sample.
You're correct -- at least with the usual tables.
BTW, that's what everybody else in my team is doing. Of course, that doesn't make it right! It's just that I cannot rely on my coworkers for guidance on this topic.
Indeed.
However, intuitively I suppose that, having fitted the parameters, now the test has much less power to reject the null.
Correct.
Now, even with estimated mean and variance, I usually get absurdly low p-values (stuff like 0.0001 or less!). Thus, I think I'm safe to say that my data are definitely not normal. Is that correct?
In effect, yes (either that or the null is true but an event with very low probability occurred).
However, you don't need the test to know that your data aren't drawn from a normal distribution. [I bet I could prove to you that none of the data you're testing can actually have come from a normal population, but I'd need to know what you're measuring to give you the right argument.]
In essence then, one could only fail to reject by gathering too small a sample. (So the wonder is then why you'd bother to test it. It's a question you already know the answer to, and in any case the answer is of no value to you. It doesn't matter if the distribution is truly normal or not. An answer to a slightly different question is much more useful.)
I know that the right way to approach this issue would be to follow the procedure in
Testing whether data follows T-Distribution
But that's a bit complicated for me, and there are quite a few steps I don't understand. For example, what's the R function random?
No wonder -- that's not R code!!!
While I could easily explain how to automate an Anderson-Darling test in R (or, even easier, point you to the package that already does it for you*), I see no reason why any of this would answer a question you should care about.
The critical question here is: Why are you testing normality?
* If you must test normality, for all that it makes no sense to do so that I can see, the package nortest implements unspecified-parameter (i.e. composite) normality tests, including one based on the Anderson-Darling statistic ... but why on earth would you not use Shapiro-Wilk? It's in vanilla R, and it's nearly always more powerful even than the Anderson-Darling for the alternatives people tend to care about.
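To make that concrete, here is a minimal sketch of both tests on some deliberately non-normal, made-up data; shapiro.test is in base R, and ad.test is the composite Anderson-Darling test from nortest:

```r
# install.packages("nortest")   # if not already installed
library(nortest)

set.seed(1)
x <- rgamma(200, shape = 2)  # made-up, deliberately non-normal data

shapiro.test(x)  # Shapiro-Wilk (base R)
ad.test(x)       # Anderson-Darling with mean and sd estimated from the sample
```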
Best Answer
You can use the Wilcoxon–Mann–Whitney rank sum test.
All running times of the healthy mice are censored (since you stopped them before they were done running). Even though we know that their mean (or median) running time is at least 10 minutes, we cannot estimate it exactly, and so we cannot know how much longer healthy mice can run than unhealthy mice.
However, we know that (during the experiment) all healthy mice ran longer than all unhealthy mice. That is, we know the ranking of the running times exactly. The Wilcoxon–Mann–Whitney test statistic is based on ranks, so it can be computed correctly even with the censoring.
Fun fact: In your data there is no overlap between the running times (and ranks) of healthy mice and unhealthy mice: the two groups are completely separated. In this special case the one-sided p-value is $n_h!n_u!/(n_h + n_u)!$ where $n_h$ and $n_u$ are the number of healthy and unhealthy mice respectively and $n!$ is the factorial of $n$ [1].
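As a sketch in R (the group sizes and times below are invented for illustration): wilcox.test falls back to a normal approximation here because the censored healthy times are all tied at 10, while the closed-form expression gives the exact one-sided p-value under complete separation.

```r
# Hypothetical running times in minutes; healthy mice were stopped at 10,
# so their times are recorded as 10 (right-censored).
healthy   <- rep(10, 8)                  # 8 healthy mice, all censored at 10
unhealthy <- c(2.1, 3.4, 4.0, 5.2, 6.8)  # 5 unhealthy mice

# One-sided Wilcoxon-Mann-Whitney test: only the ranks matter, and the
# censoring does not change them because every healthy time exceeds every
# unhealthy time. Ties within the healthy group force a normal approximation.
wilcox.test(healthy, unhealthy, alternative = "greater")

# Exact one-sided p-value under complete separation: n_h! * n_u! / (n_h + n_u)!
factorial(8) * factorial(5) / factorial(13)
```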
[1] Biostatistics for Biomedical Research course notes. Available online. See Section 7.3 about the Wilcoxon–Mann–Whitney two-sample test.