Both are testing for displacement of the x variable with respect to the y variable, but the two tests attach opposite meanings to the term "greater" (and therefore also to "less").
In ks.test, "greater" means that the CDF of 'x' is higher than the CDF of 'y', which means that quantities like the mean and the median will be smaller in 'x' than in 'y' when the CDF of 'x' is "greater" than the CDF of 'y'. In wilcox.test and t.test, by contrast, the mean, median, etc. will be greater in 'x' than in 'y' if the alternative of "greater" is true.
An example from R:
> x <- rnorm(25)
> y <- rnorm(25, 1)
>
> ks.test(x,y, alt='greater')
Two-sample Kolmogorov-Smirnov test
data: x and y
D = 0.6, p-value = 0.0001625
alternative hypothesis: two-sided
> wilcox.test( x, y, alt='greater' )
Wilcoxon rank sum test
data: x and y
W = 127, p-value = 0.9999
alternative hypothesis: true location shift is greater than 0
> wilcox.test( x, y, alt='less' )
Wilcoxon rank sum test
data: x and y
W = 127, p-value = 0.000101
alternative hypothesis: true location shift is less than 0
Here I generated two samples from a normal distribution, both with sample size 25 and standard deviation 1. The x variable comes from a distribution with mean 0 and the y variable from a distribution with mean 1. You can see that ks.test gives a very significant result testing in the "greater" direction even though x has the smaller mean; this is because the CDF of x is above that of y. The wilcox.test function shows a lack of significance in the "greater" direction, but a similar level of significance in the "less" direction.
Both tests are different approaches to testing the same idea, but what "greater" and "less" mean to the two tests is different (and conceptually opposite).
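The reversal is easy to check against the theoretical CDFs. A minimal sketch, assuming the same N(0, 1) and N(1, 1) populations as in the example above:

```r
# For x ~ N(0, 1) and y ~ N(1, 1), the CDF of x lies above that of y at
# every point, even though x has the smaller mean; this is the sense in
# which ks.test's alternative of "greater" holds here.
t0 <- seq(-3, 3, by = 0.5)                         # grid of evaluation points
all(pnorm(t0, mean = 0) >= pnorm(t0, mean = 1))    # TRUE everywhere on the grid
```

Shifting a distribution to the right pushes its CDF down, so the smaller mean goes with the "greater" CDF; that is the whole source of the opposite conventions.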
What does having infinity as the upper bound of a confidence interval mean? Is this because I'm using the one-tailed version of the test?
Yes, it's because you're doing a one-tailed version of the test; no matter how far the sample location is in the 'wrong' direction (i.e. the direction inconsistent with the alternative), it's still consistent with the null - so you're only considering one-sided bounds.
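A sketch of this with simulated data (the asker's actual x[,5] and x[,6] columns aren't available here, so these vectors are made up):

```r
set.seed(1)
x <- rnorm(25)       # stand-in for the first sample
y <- rnorm(25, 1)    # stand-in for the second sample

# One-sided alternative: the confidence interval is bounded on one side only.
res <- wilcox.test(x, y, alternative = "greater", conf.int = TRUE)
res$conf.int
# The interval has the form [lower, Inf): no shift that is far enough in the
# direction of the alternative can ever be ruled out by a one-sided test.
```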
would that mean I would be justified in saying "with a 95% confidence x[,5]'s mean will be within -72 of x[,6]'s?"
No, it wouldn't justify that statement. For starters, you're not testing means at all unless you make some additional assumptions under which the difference in means coincides with the population equivalent of the location-shift estimate for the test.
In the second place, the location-difference could be in the 'wrong' direction, so 'within' doesn't quite work either.
In the third place, two locations aren't normally considered to be 'within' a negative distance of each other.
You could say something like "the estimated improvement from the first to the second algorithm was 21" (and then give the units!). Note that I said 21 and not 72. If you explain to the reader what the pseudo-median of the differences is, you can give more detail about what this difference is measuring.
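To make "pseudo-median of the differences" concrete: it is the Hodges-Lehmann estimate, the median of all Walsh averages of the paired differences. A sketch with hypothetical differences (the real x[,5] - x[,6] values aren't shown in the question):

```r
d <- c(-3, -1, 2, -4, -2, -5, 1, -6)   # hypothetical paired differences x - y

s <- outer(d, d, "+") / 2              # all pairwise averages (d_i + d_j) / 2
walsh <- s[lower.tri(s, diag = TRUE)]  # keep each pair once (i <= j): 36 values
hl <- median(walsh)                    # pseudo-median: -2.25 for these values

# Should match (up to numerical details) the estimate reported by the
# signed-rank test on the differences:
wilcox.test(d, conf.int = TRUE)$estimate
```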
What does the V value mean with regard to my data?
It's the value of the Signed Rank statistic. Check the references mentioned below for how it's calculated (particularly Hollander & Wolfe if you can find it, since that's the reference given in the R help, so the statistic is sure to correspond).
Specifically, the two main definitions that I've seen are either that all signed ranks are added (this is the version on the Wikipedia page), OR that only the positive-signed ranks are added. It looks like R uses the second one. That is, if $x$ and $y$ are the two paired samples, so the differences $x-y$ are tested, then
sum(rank(abs(x-y))[x>y])
should give the same statistic as R. Like so:
> sum(rank(abs(x[,5]-x[,6]))[x[,5]>x[,6]])
[1] 22
From what I can see it is the difference between median(x[,5]) and median(x[,6]).
It isn't. Well, they might coincide occasionally (as with your sample) but that's not what is going on. You should probably start by reading up about how the statistic works. I'd suggest something like Conover's Practical Nonparametric Statistics. Or, ideally, you could check the Signed Rank Test reference in the R help on wilcox.test (Hollander & Wolfe).
The actual value of the statistic isn't usually of interest. The estimate of the size of the location-shift would be relevant (and doesn't depend on which definition of the statistic is used). That is, the fact that 0 is inside the interval matters a lot, the "-21" matters somewhat, the "-72" might matter, the "22" probably doesn't (though there's little harm in quoting it if the definition of the statistic is clear to the reader).
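Putting that together, a sketch of pulling those pieces out of a paired test result (made-up before/after data, since the original columns aren't available):

```r
set.seed(2)
before <- rnorm(20, mean = 10)              # e.g. runtimes of algorithm 1
after  <- before + rnorm(20, mean = -0.5)   # paired runs of algorithm 2

res <- wilcox.test(before, after, paired = TRUE, conf.int = TRUE)

res$statistic              # V: the signed-rank statistic (rarely of interest)
res$estimate               # pseudo-median of the differences (the useful summary)
ci <- res$conf.int
ci[1] <= 0 && 0 <= ci[2]   # the question that matters most: is 0 inside?
```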
Best Answer
The two-sample t-test is appropriate here, because you want to compare the two groups directly.
Two groups can differ significantly, and yet the CIs can still overlap. However, if the CIs do not overlap, then the groups must differ significantly. (This is of course assuming that the significance test and the CIs are calculated using the same assumptions about the data.) This is commonly misunderstood. Reference: http://blog.minitab.com/blog/real-world-quality-improvement/common-statistical-mistakes-you-should-avoid
How can the means of two groups differ significantly and yet have overlapping CIs? Loosely speaking, I think of it this way. There is a 95% likelihood that the true mean for each group lies within the CI for that group. But for the two groups to actually share the same mean, one group's true mean would have to lie at the extreme of its CI and the other's at the opposite extreme of its own CI, which is an unlikely scenario.
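The arithmetic behind that intuition can be sketched directly (simulated groups with made-up means and sizes):

```r
set.seed(42)
a <- rnorm(30, mean = 0)
b <- rnorm(30, mean = 0.6)

se_a <- sd(a) / sqrt(length(a))   # standard error of mean(a)
se_b <- sd(b) / sqrt(length(b))   # standard error of mean(b)

# The t-test compares the mean difference against roughly
# sqrt(se_a^2 + se_b^2) times a t quantile; the two CIs stop overlapping only
# when it exceeds roughly (se_a + se_b) times that quantile. The test's
# hurdle is always the smaller one, so "significant but overlapping" happens.
sqrt(se_a^2 + se_b^2) < se_a + se_b   # TRUE for any positive standard errors
```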