I did a little Google research because I found the question quite interesting; these tests came up:
- Nemenyi-Damico-Wolfe-Dunn test (link; there is an R package for doing the test)
- Dwass-Steel-Critchlow-Fligner test (link; see Conover WJ, Practical Nonparametric Statistics, 3rd edition, Wiley, 1999)
- Conover-Iman test (link; same reference as above)
I didn't know any of these, and I don't know whether any of them is available in JMP. If not: some people run a standard ANOVA but simply replace the dependent values by their ranks. Then you can use Tukey's HSD again.
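That rank-transform idea can be sketched outside JMP as well. Below is a minimal Python illustration (not JMP, and the data are made up): pool the observations, replace them by their ranks, and run Tukey's HSD on the ranks. It assumes SciPy ≥ 1.8, which provides `scipy.stats.tukey_hsd`.

```python
from scipy import stats

# Made-up measurements for three groups (five observations each).
a = [12.1, 14.3, 11.8, 13.0, 12.7]
b = [15.2, 16.1, 14.8, 15.9, 16.4]
c = [13.5, 12.9, 14.1, 13.8, 13.2]

# Rank-transform the pooled data (mid-ranks handle ties), then split back.
pooled = a + b + c
ranks = stats.rankdata(pooled)
ra, rb, rc = ranks[:5], ranks[5:10], ranks[10:]

# Tukey's HSD on the ranks instead of the raw values.
res = stats.tukey_hsd(ra, rb, rc)
print(res.pvalue)  # pairwise p-value matrix
```

With these made-up numbers, group b clearly dominates, so its pairwise comparisons come out significant on the ranks.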
What you should always keep in mind is that The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant -- there is a nice paper with this title by Gelman and Stern, which I link to here, but the idea is very simple. Here is how they start explaining it:
> Consider two independent studies with effect estimates and standard errors of 25±10 and 10±10. The first study is statistically significant at the 1% level, and the second is not at all statistically significant, being only one standard error away from 0. Thus, it would be tempting to conclude that there is a large difference between the two studies. In fact, however, the difference is not even close to being statistically significant: the estimated difference is 15, with a standard error of $\sqrt{10^2+10^2}=14$.
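Working through their arithmetic explicitly (a quick check in Python, using the normal approximation):

```python
import math

# Two estimates: 25 +/- 10 (significant) and 10 +/- 10 (not significant).
diff = 25 - 10                       # estimated difference between studies: 15
se = math.sqrt(10**2 + 10**2)        # standard error of the difference, ~14.14
z = diff / se                        # only ~1.06 standard errors from zero

# Two-sided p-value via the normal CDF, Phi(z) = (1 + erf(z / sqrt(2))) / 2.
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(round(se, 2), round(z, 2), round(p, 2))
```

So the difference between a "significant" and a "non-significant" result is itself nowhere near significant here (p ≈ 0.29).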
In your case, when you conduct three separate Wilcoxon tests, you might get p-values of, e.g., $0.045$, $0.045$, and $0.055$. The first two are "significant" according to the common $p<0.05$ criterion and the third one is not. However, the differences between these p-values are tiny, so it is entirely possible that if you compare the three groups with each other, you will fail to find any significant difference. This seems to be exactly your case.
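To make this concrete: once you account for multiplicity, all three of those borderline p-values can easily end up above 0.05. A hand-rolled sketch of Holm's step-down adjustment (just one common choice, applied to the hypothetical p-values above):

```python
def holm_adjust(pvals):
    """Holm step-down adjustment; returns adjusted p-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)   # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

print(holm_adjust([0.045, 0.045, 0.055]))
```

All three adjusted p-values come out at 0.135, i.e. none of the "significant" results survives the adjustment.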
In addition: running Kruskal-Wallis on the pre and post measures separately is probably not the best approach. You can subtract pre from post and run one Kruskal-Wallis test on these differences. It is of course still possible (as I explained above) that you will not get a significant difference, but this is a more appropriate approach.
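A sketch of that approach in Python (the pre/post numbers below are made up; `scipy.stats.kruskal` is assumed available):

```python
from scipy import stats

# Hypothetical pre/post measurements for three drugs (made-up numbers).
pre  = {"A": [10, 12, 11, 13], "B": [11, 10, 12, 11], "C": [12, 11, 10, 13]}
post = {"A": [14, 15, 13, 16], "B": [12, 11, 13, 12], "C": [13, 12, 11, 14]}

# One Kruskal-Wallis test on the per-subject post - pre differences,
# instead of separate tests on the pre and post measures.
diffs = {g: [b - a for a, b in zip(pre[g], post[g])] for g in pre}
stat, p = stats.kruskal(diffs["A"], diffs["B"], diffs["C"])
print(stat, p)
```

With these made-up numbers drug A improves by 2-4 points while B and C improve by 1, so the single test on the differences does detect a group effect.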
Just to stress it again: if one drug comes out with a significant pre-post difference and another with an insignificant one, that is by itself no reason whatsoever to believe that one drug is better than the other. Unfortunately, this is a very widespread mistake.
Best Answer
No, it is not a valid nonparametric alternative.
For why the rank sum test (either the original Wilcoxon flavor or the Mann-Whitney $U$ varieties) is not appropriate here, see, for example, Kruskal-Wallis Test and Mann-Whitney U Test. (Also, pairwise.wilcox.test seems not to have the ties adjustments that these tests do.)
The nonparametric pairwise multiple comparisons tests you are likely looking for are Dunn's test, the Conover-Iman test, or the Dwass-Steel-Critchlow-Fligner test. I have made freely available packages that perform Dunn's test (with options for controlling the FWER or the FDR) for Stata and for R, and have likewise implemented the Conover-Iman test for Stata and for R.
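If neither package is an option, Dunn's z-statistic is also straightforward to compute by hand from the pooled Kruskal-Wallis ranks. A minimal Python sketch (my own implementation following Dunn 1964, with the usual tie correction; p-values are unadjusted two-sided, so you would still apply an FWER or FDR correction afterwards):

```python
import itertools
import math
from collections import Counter

from scipy import stats

def dunn_test(*groups):
    """Pairwise Dunn z-tests on pooled ranks; returns {(i, j): (z, p)}."""
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    ranks = stats.rankdata(pooled)       # mid-ranks for ties
    # Mean rank and size per group.
    means, sizes, start = [], [], 0
    for g in groups:
        means.append(sum(ranks[start:start + len(g)]) / len(g))
        sizes.append(len(g))
        start += len(g)
    # Variance of the pooled ranks, with the tie correction term.
    ties = sum(t**3 - t for t in Counter(pooled).values())
    var = n * (n + 1) / 12 - ties / (12 * (n - 1))
    out = {}
    for i, j in itertools.combinations(range(len(groups)), 2):
        se = math.sqrt(var * (1 / sizes[i] + 1 / sizes[j]))
        z = (means[i] - means[j]) / se
        p = 2 * stats.norm.sf(abs(z))    # unadjusted two-sided p-value
        out[(i, j)] = (z, p)
    return out

print(dunn_test([1, 3, 5, 7], [2, 4, 6, 8], [9, 10, 11, 12]))
```

In this toy example the first two groups interleave (no significant difference), while the third group holds all the highest ranks and separates from both.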
References
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Critchlow, D. E. and Fligner, M. A. (1991). On distribution-free multiple comparisons in the one-way analysis of variance. Communications in Statistics—Theory and Methods, 20(1):127.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.