You should use the signed rank test when the data are paired.
You'll find many definitions of pairing, but at heart the criterion is that something makes the two values in a pair at least somewhat positively dependent, while unpaired values are not dependent. Often the dependence arises because the pair are observations on the same unit (repeated measures), but the values don't have to come from the same unit: as long as they measure the same kind of thing and tend in some way to be associated, they can be considered 'paired'.
You should use the rank-sum test when the data are not paired.
That's basically all there is to it.
Note that having the same $n$ doesn't mean the data are paired, and having different $n$ doesn't mean that there isn't pairing (it may be that a few pairs lost an observation for some reason). Pairing comes from consideration of what was sampled.
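To make the choice concrete, here is a minimal sketch using scipy, with made-up data (the group sizes and distributions are assumptions for illustration): the signed-rank test goes with paired samples, the rank-sum (Mann-Whitney U) test with independent ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical paired data: the same 12 units measured twice (repeated measures).
before = rng.normal(10, 2, size=12)
after = before + rng.normal(0.5, 1, size=12)  # positively dependent with 'before'

# Paired -> Wilcoxon signed-rank test on the within-pair differences.
stat_paired, p_paired = stats.wilcoxon(before, after)

# Hypothetical unpaired data: two independent groups, unequal n is fine here.
group_a = rng.normal(10, 2, size=12)
group_b = rng.normal(10.5, 2, size=15)

# Unpaired -> Wilcoxon rank-sum / Mann-Whitney U test.
stat_unpaired, p_unpaired = stats.mannwhitneyu(group_a, group_b)
```

Note that `wilcoxon` insists on equal-length arguments (it works on pair differences), while `mannwhitneyu` does not, which mirrors the point about sample sizes above.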
The effect of using a paired test when the data are paired is that it generally gives more power to detect the changes you're interested in. If the association leads to strong dependence*, then the gain in power may be substantial.
* Specifically, though speaking somewhat loosely: if the effect size is large compared to the typical size of the pair differences, but small compared to the typical size of the unpaired differences, you may pick up the difference with a paired test at quite a small sample size, but with an unpaired test only at a much larger one.
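The footnote's scenario is easy to simulate. In this sketch (all parameters are assumptions chosen to create strong within-pair dependence: large between-unit variation, small within-pair noise), the paired test detects a shift the unpaired test mostly misses at the same $n$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, shift, n_sims = 12, 0.5, 500

reject_paired = 0
reject_unpaired = 0
for _ in range(n_sims):
    unit = rng.normal(0, 3, size=n)                 # large between-unit variation
    x = unit + rng.normal(0, 0.5, size=n)           # pair members share the unit
    y = unit + shift + rng.normal(0, 0.5, size=n)   # effect, so differences are precise
    if stats.wilcoxon(x, y).pvalue < 0.05:
        reject_paired += 1
    if stats.mannwhitneyu(x, y).pvalue < 0.05:      # ignoring the pairing
        reject_unpaired += 1

power_paired = reject_paired / n_sims
power_unpaired = reject_unpaired / n_sims
```

Here the shift is large relative to the pair differences (sd about 0.7) but small relative to the unpaired differences (sd above 4), so the signed-rank test's estimated power comes out well above the rank-sum test's.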
However, when the data are not paired, it may be (at least slightly) counterproductive to treat them as paired. That said, the cost in lost power may in many circumstances be quite small: a power study I did in response to this question suggests that, on average, the power loss in typical small-sample situations (say $n$ of the order of 10 to 30 in each sample, after adjusting for differences in significance level) may be surprisingly small.
[If you're genuinely uncertain whether the data are paired or not, the loss from treating unpaired data as paired is usually relatively minor, while the gains may be substantial if they really are paired. So if you don't know, and you have a way of working out what would be paired with what if they were paired (such as the values sitting in the same row of a table), it may in practice make sense to act as if the data were paired, to be safe -- though some people may get quite exercised over your doing that.]
If your data are normally distributed -- which you can assess in a number of ways, including a Q-Q plot -- then it is fine to run a t-test. But to make the fewest assumptions about the data, it is best to use the non-parametric Wilcoxon signed-rank test.
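One way to sketch that decision in code, assuming paired data (the sample here is hypothetical, and the Shapiro-Wilk test stands in for the Q-Q plot as a formal normality check on the differences):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired sample of size 24, matching the n mentioned below.
before = rng.normal(50, 5, size=24)
after = before + rng.normal(1, 2, size=24)
diffs = after - before

# Formal normality check on the pair differences; stats.probplot(diffs)
# would give the points for the visual Q-Q plot alternative.
shapiro_stat, shapiro_p = stats.shapiro(diffs)

if shapiro_p > 0.05:
    # No strong evidence against normality -> paired t-test is reasonable.
    stat, p = stats.ttest_rel(before, after)
else:
    # Fewer assumptions -> Wilcoxon signed-rank test.
    stat, p = stats.wilcoxon(before, after)
```

The normality assumption relevant to the paired t-test is on the differences, not on the raw values, which is why the check above is applied to `diffs`.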
Since you have very few observations (24), I would advise going the Wilcoxon signed-rank route. I would also study this question thoroughly, because it appears to answer a lot of the necessary questions.
Be sure to understand exactly how the type I error and the power behave in your test.
Common practice is to compare the p-value against three levels: 0.05, 0.01 and 0.001. Since your p-value is less than all of them, report against the smallest, so you should conclude that the differences are significant with p < 0.001. Roughly speaking: the smaller the p-value, the stronger the evidence of a difference.
Since we do not know the distribution of your data, we also do not know which test you should use. But you have quite a large sample, so there is a high chance that a parametric test (the t-test for paired data) will be appropriate.
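A short sketch of that large-sample case, with hypothetical data (the sample size, skewed distribution and shift are all assumptions): even with clearly non-normal values, the paired t-test works on the mean of the pair differences, which the central limit theorem makes approximately normal at this $n$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical large paired sample: skewed raw values, small per-pair shift.
before = rng.exponential(2.0, size=120)
after = before + rng.normal(0.3, 1.0, size=120)

# Paired t-test = one-sample t-test on the differences.
t_stat, p_value = stats.ttest_rel(before, after)

# Compare against the conventional levels discussed above.
for level in (0.05, 0.01, 0.001):
    print(f"p < {level}: {p_value < level}")
```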