Correct me if I'm wrong, but I think this can be done in R using this command:
chisq.test(c(15, 13, 10, 17))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
X-squared = 1.9455, df = 3, p-value = 0.5838
This assumes equal proportions of 1/4 each. You can modify the expected proportions via the argument p. For example, suppose you think people may prefer (for whatever reason) one color over the others:
chisq.test(c(15, 13, 10, 17), p = c(0.5, 0.3, 0.1, 0.1))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
X-squared = 34.1515, df = 3, p-value = 1.841e-07
Your question is still somewhat ambiguous: is it “A differs from B or C, etc.” (suggesting you want to test many comparisons, at least informally) or “A differs from B or C” (one specific test, which does, however, beg the question of why you had 5 conditions in the first place if only this comparison is important)?
In any case, you can always consider some subset of conditions or collapse two conditions together and still analyze the resulting contingency table with a $\chi^2$ test of independence. There are systematic ways to define these subtests (look up “partitioning chi squared”), which might in fact be implemented in the software you are using, saving you the trouble of defining new contingency tables.
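As a sketch of the sub-table idea (the counts and the 5x2 layout here are made up for illustration), you can slice or collapse rows of the full contingency table and rerun chisq.test() on the result:

```r
## Hypothetical 5x2 table: 5 conditions (rows) x 2 outcomes (columns)
tab <- matrix(c(10, 20,
                12, 18,
                15, 15,
                22,  8,
                25,  5),
              nrow = 5, byrow = TRUE,
              dimnames = list(condition = LETTERS[1:5],
                              outcome   = c("yes", "no")))

## "Does A differ from B?": test of independence on the 2x2 sub-table
chisq.test(tab[c("A", "B"), ])

## "Does A differ from the rest?": collapse B-E into one row, then test
collapsed <- rbind(A = tab["A", ], others = colSums(tab[-1, ]))
chisq.test(collapsed)
```

Formal partitioning schemes impose constraints on which sub-tables you form so that the component chi-squared statistics add up to the overall one; the above is just the informal version.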
If you are doing many such tests and the comparisons were not defined beforehand, you might want to first do an overall test of independence on the 5x2 contingency table and only proceed further if the overall null hypothesis is rejected. Alternatively, you might consider some adjustment for multiple comparisons.
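A rough sketch of that gatekeeping strategy, with a hypothetical table (substitute your own counts): run the omnibus test first, and only compute the pairwise sub-tests, with a multiplicity adjustment, if it rejects:

```r
## Hypothetical 5x2 table of counts (conditions x yes/no)
tab <- matrix(c(10, 20, 12, 18, 15, 15, 22, 8, 25, 5),
              nrow = 5, byrow = TRUE,
              dimnames = list(condition = LETTERS[1:5],
                              outcome   = c("yes", "no")))

overall <- chisq.test(tab)  # omnibus test of independence, df = 4
if (overall$p.value < 0.05) {
  ## all 10 pairwise 2x2 sub-tests, Holm-adjusted for multiple comparisons
  pairs <- combn(rownames(tab), 2)
  p_raw <- apply(pairs, 2, function(rows) chisq.test(tab[rows, ])$p.value)
  p_adj <- p.adjust(p_raw, method = "holm")
  names(p_adj) <- apply(pairs, 2, paste, collapse = " vs ")
}
```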
Yet another way to follow up a $\chi^2$ test of independence, especially when you don't have specific comparisons in mind, is to look at the standardized Pearson residuals.
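In R, chisq.test() already returns these in its result object; with a hypothetical table, you could inspect them like so (cells with an absolute residual much above 2 stand out):

```r
## Hypothetical 5x2 table of counts (conditions x yes/no)
tab <- matrix(c(10, 20, 12, 18, 15, 15, 22, 8, 25, 5),
              nrow = 5, byrow = TRUE,
              dimnames = list(condition = LETTERS[1:5],
                              outcome   = c("yes", "no")))

res <- chisq.test(tab)
round(res$stdres, 2)  # standardized Pearson residuals, ~ N(0,1) under H0
```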
Partitioning and looking at residuals are somewhat analogous to post-hoc tests for the $\chi^2$ test. Both of these are discussed in Alan Agresti's book Categorical Data Analysis.
PS: Re-reading your edit, I see that the different conditions are different doses of the same drug. The techniques discussed above do not take the nature of the manipulation into account; it might be worthwhile to model it directly, perhaps with a logistic regression.
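A sketch of that modelling route, with entirely made-up dose levels and response counts: a binomial GLM treats dose as a quantitative predictor rather than 5 unordered conditions:

```r
## Hypothetical: responders out of 20 subjects at each of 5 doses
dose       <- c(0, 1, 2, 4, 8)
responders <- c(2, 5, 9, 14, 18)
n          <- rep(20, 5)

## Logistic regression: log-odds of responding as a linear function of dose
fit <- glm(cbind(responders, n - responders) ~ dose, family = binomial)
summary(fit)$coefficients  # the dose slope tests for a trend
```

A single slope parameter also buys you power relative to the 4-df omnibus test if the dose-response relationship really is monotone.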
Lastly, if this is all too much or does not exactly apply to your situation, you could also consider a “minimalist” approach:
- $\chi^2$ test of independence on the whole contingency table, as a rough check that you have something other than noise/sampling variation
- Plot/direct interpretation of the pattern of proportions, perhaps with a confidence interval around each proportion.
This is not perfect and might not fly in some “test-centered” disciplines, but it seems perfectly reasonable to me.
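The second bullet could look like this in R (counts hypothetical); prop.test() gives a confidence interval for each condition's proportion, which you can then plot or read off directly:

```r
## Hypothetical: successes out of 30 trials in each of 5 conditions
x <- c(10, 12, 15, 22, 25)
n <- rep(30, 5)

## One interval per condition (prop.test's default Wilson-type CI)
ci <- t(sapply(seq_along(x), function(i) prop.test(x[i], n[i])$conf.int))
data.frame(condition = LETTERS[1:5], prop = x / n,
           lower = ci[, 1], upper = ci[, 2])
```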
Best Answer
The R function pairwise.prop.test() (in the built-in stats package) does the post-hoc job. One can also choose among several methods of adjusting the p-values. However, these are only post-hoc tests, not planned comparisons.
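For example, with hypothetical success counts per condition, it tests all pairs of proportions and reports adjusted p-values in one call:

```r
## Hypothetical: successes and trials in each of 5 conditions
successes <- c(10, 12, 15, 22, 25)
trials    <- rep(30, 5)

## All pairwise two-sample tests of proportions, Holm-adjusted p-values
pairwise.prop.test(successes, trials, p.adjust.method = "holm")
```

("holm" is also the default adjustment; other options include "bonferroni" and "none".)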